Article

Information-Theoretic Reliability Analysis of Linear Consecutive r-out-of-n:F Systems and Uniformity Testing

by Ghadah Alomani 1,*, Faten Alrewely 2 and Mohamed Kayid 3
1 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Mathematics, College of Science, Jouf University, P.O. Box 2014, Sakaka 72388, Saudi Arabia
3 Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
* Author to whom correspondence should be addressed.
Entropy 2025, 27(6), 590; https://doi.org/10.3390/e27060590
Submission received: 24 April 2025 / Revised: 21 May 2025 / Accepted: 27 May 2025 / Published: 31 May 2025
(This article belongs to the Special Issue Information-Theoretic Methods in Data Analytics, 2nd Edition)

Abstract:
This paper explores the reliability of linear consecutive r-out-of-n:F systems from an information-theoretic perspective, with a particular focus on testing for uniformity. At the heart of the study is extropy, a complementary measure to entropy that we use to gain deeper insights into the uncertainty associated with system lifetimes. We begin by deriving general expressions for extropy in these systems and examine how it behaves under different component lifetime distributions, particularly highlighting the role of heterogeneity. Theoretical bounds are developed, along with new characterization results, shedding light on the unique properties of the uniform distribution within this framework. To bridge theory and application, we propose a nonparametric estimator for extropy and build a new test statistic to assess uniformity. The effectiveness of this test is evaluated through comprehensive simulation studies, where we compare its power against several well-known alternatives across a range of scenarios. Overall, our findings offer both theoretical contributions to the understanding of information measures in reliability analysis and practical tools for statistical testing in applied settings.

1. Introduction

The quantification of uncertainty in probability distributions is a fundamental aspect of information theory, which provides essential tools for measuring and analyzing randomness. Among its key metrics, the concept of Shannon differential entropy holds particular significance. In his seminal work, Shannon [1] introduced entropy as a measure of uncertainty. For a non-negative random variable X with probability density function (pdf) h(x), the Shannon differential entropy is defined as H(X) = E(-\log h(X)), where E(\cdot) denotes the expectation operator and \log represents the natural logarithm, provided the expectation is well-defined. Recently, Lad et al. [2] introduced extropy, a novel measure of uncertainty that serves as a dual counterpart to entropy. For a non-negative random variable X with pdf h(x) and cumulative distribution function (cdf) H(x) = P(X \le x), the extropy is defined as follows:
J(X) = -\frac{1}{2}\int_0^{\infty} h^2(x)\,dx = -\frac{1}{2}E\left[h\left(H^{-1}(U)\right)\right],
where U is a uniform random variable on the interval [0,1] and H^{-1}(u) = \inf\{x : H(x) \ge u\}, u \in [0,1], signifies the quantile function of H.
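As a quick numerical illustration of the two equivalent forms in the definition above, the following sketch evaluates the first integral for an exponential law (a hypothetical choice added here for illustration; its closed-form extropy is J(X) = -\lambda/4):

```python
import math

def extropy(h, upper, m=200000):
    # J(X) = -(1/2) * integral of h(x)^2 over (0, upper), midpoint rule
    dx = upper / m
    return -0.5 * sum(h((k + 0.5) * dx) ** 2 for k in range(m)) * dx

# Exponential pdf with rate lam; the closed form gives J(X) = -lam/4
lam = 2.0
J = extropy(lambda x: lam * math.exp(-lam * x), upper=20.0)
print(round(J, 4))  # -0.5
```

The same routine applies to any density with an effectively bounded support, which is how the later examples are checked.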
These measures have proven highly effective in quantifying uncertainty, making them valuable across various disciplines. One significant statistical application of extropy involves evaluating forecasting distributions using the total log scoring rule. For a deeper exploration of its theoretical and applied aspects, notable references include Agro et al. [3], Capotorti et al. [4], and Gneiting and Raftery [5].
For an absolutely continuous non-negative random variable X with cdf H(x) and reversed hazard rate function \tau(x) = h(x)/H(x), an alternative representation of extropy can be expressed as follows:
J(X) = -\frac{1}{4}E_{22}\left[\tau(X)\right],
where E_{22} denotes the expectation with respect to the pdf (Toomaj et al. [6]):
h_{22}(x) = 2h(x)H(x), \quad x > 0.
The density function in (3) represents the probability density of the maximum of two independent and identically distributed (i.i.d.) random variables. Several properties and statistical applications of the extropy measure in (1) have been extensively explored by Lad et al. [2] and Yang et al. [7].
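A minimal numerical check of this dual representation, again using an exponential distribution as a hypothetical example: integrating \tau(x) against the density h_{22}(x) = 2h(x)H(x) of the maximum of two i.i.d. copies reproduces the value -\lambda/4 obtained from Equation (1):

```python
import math

lam = 2.0
h = lambda x: lam * math.exp(-lam * x)   # pdf of Exp(lam)
H = lambda x: 1 - math.exp(-lam * x)     # cdf
tau = lambda x: h(x) / H(x)              # reversed hazard rate

# J(X) = -(1/4) E_22[tau(X)], expectation taken w.r.t. h_22(x) = 2 h(x) H(x)
m, upper = 200000, 20.0
dx = upper / m
J = -0.25 * sum(tau((k + 0.5) * dx) * 2 * h((k + 0.5) * dx) * H((k + 0.5) * dx)
                for k in range(m)) * dx
print(round(J, 4))  # -0.5, matching -lam/4 from Equation (1)
```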
Research on consecutive r-out-of-n systems has gained significant attention in reliability analysis, owing to their broad applicability in engineering and infrastructure systems. These systems are typically classified as either failure (F) or good (G) structures, depending on their operational principles. Of these, linear consecutive r-out-of-n:F systems have been studied most extensively due to their practical relevance. Such a system consists of n identical, independent components arranged in a linear sequence, and it fails if and only if at least r consecutive components fail. Conversely, a linear consecutive r-out-of-n:G system remains functional only if at least r consecutive components remain operational.
For the remainder of this discussion, we focus on the linear consecutive r -out-of- n :F system, given its widespread implementation in telecommunication networks, street parking management, microwave relay stations, vacuum systems, and oil pipeline infrastructure. A representative example is an n-station oil pipeline system, where each station is spaced 100 km apart and has the capacity to pump oil over 400 km. This system exhibits a 4-out-of-n:F reliability configuration, implying that the failure of any four consecutive stations leads to complete system failure. Notably, special cases of consecutive r -out-of- n :F systems correspond to well-known reliability structures: the 1-out-of-n:F system is equivalent to a series system, whereas the n -out-of- n :F system corresponds to a parallel system. The reliability characteristics of these systems have been rigorously analyzed under various modeling assumptions (see, e.g., Jung and Kim [8], Shen and Zuo [9], Kuo and Zuo [10], Chang et al. [11], Boland and Samaniego [12], and Eryılmaz [13,14]).
This paper examines the extropy properties of consecutive r -out-of- n :F systems, assuming that component lifetimes are independent and identically distributed (i.i.d.). The study of information measures related to ordered data has been a key area of research in reliability and information theory, as reflected in extensive studies, such as those in references [15,16,17,18,19].
In recent years, extropy has gained prominence as a measure of uncertainty, with several researchers contributing to its development. Notable studies include those by Lad et al. [2], Qiu [20], and Qiu and Jia [21,22]. Qiu [20] conducted a comprehensive analysis of extropy in the context of order statistics and record values, shedding light on its uniqueness and monotonic properties. These aspects were further explored in later studies by Qiu and Jia [22]. Additional contributions in this area have examined extropy within different statistical and reliability frameworks. Research by Kayid and Alshehri [23] investigated its properties in relation to system lifetimes, while Shrahili and Kayid [24] explored the residual extropy of order statistics. Related studies by Alshehri et al. [25] expanded on these findings, offering deeper insights into its applications. Further work has also examined the extropy of past lifetimes in coherent systems, particularly in scenarios where components remain inactive for a specified period [26]. More recently, Alrewely and Kayid [27] explored the extropy of consecutive r-out-of-n:G systems, providing a framework for theoretical and practical applications. They derived expressions for system lifetime extropy and evaluated it across various distributions. They also proposed a novel test statistic for exponentiality, with performance assessments highlighting its effectiveness in specific contexts. Building on these foundational studies, this paper aims to extend the understanding of extropy, focusing on its behavior within linear consecutive r-out-of-n:F systems.
The remainder of this paper is organized as follows. Section 2 derives the extropy of a consecutive r-out-of-n:F system with lifetime T_{r|n:F} for an arbitrary lifetime distribution. This is achieved by establishing a relationship with the extropy of a comparable system in which component lifetimes follow a uniform distribution. This connection provides a broader perspective on extropy measures across different distributional settings. Section 3 addresses the challenge of obtaining explicit expressions for the extropy of order statistics, which is often difficult in various statistical models. To mitigate this, we derive bounds for the extropy of consecutive r-out-of-n:F systems. Section 4 presents a characterization of extropy in these systems, with particular emphasis on the uniform distribution. Section 5 includes computational results to validate the theoretical findings. Specifically, we introduce a nonparametric estimator for the extropy of consecutive systems and apply it to a hypothesis test for the standard uniform distribution. Finally, Section 6 summarizes the key findings and outlines potential directions for future research.

2. Extropy of the Consecutive r-out-of-n:F System

In this section, we examine the extropy properties of consecutive r-out-of-n:F systems, where the system fails as soon as at least r consecutive components fail. To analyze this, let X_1, X_2, \ldots, X_n denote the lifetimes of the system's components, each following a common cumulative distribution function (cdf) H(x) and probability density function (pdf) h(x). The overall system lifetime is denoted by T_{r|n:F}. When 2r \ge n, Navarro and Eryılmaz [28] established that the distribution function of the system lifetime can be expressed as follows:
H_{r|n:F}(t) = (n-r+1)H^{r}(t) - (n-r)H^{r+1}(t), \quad t > 0.
This formulation provides a fundamental relationship for evaluating the reliability of consecutive r -out-of- n :F systems under this condition. It follows that
h_{r|n:F}(t) = r(n-r+1)H^{r-1}(t)h(t) - (r+1)(n-r)H^{r}(t)h(t) = (n-r+1)h_{r:r}(t) - (n-r)h_{r+1:r+1}(t), \quad t > 0,
where h_{j:j}(t) = j\,h(t)H^{j-1}(t) is the pdf of a parallel system with lifetime X_{j:j} = \max(X_1, \ldots, X_j). We will use the probability integral transform V_{r|n:F} = H(T_{r|n:F}) to find a formula for the extropy of a consecutive r-out-of-n:F system with lifetime T_{r|n:F}. This transform changes the original lifetimes X_i into new lifetimes V_i = H(X_i) that are uniformly distributed on [0,1]. When 2r \ge n, the pdf of V_{r|n:F} is
g_{r|n:F}(v) = r(n-r+1)v^{r-1} - (r+1)(n-r)v^{r}, \quad \text{for all } 0 < v < 1.
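Since g_{r|n:F} is the pdf of the transformed lifetime V_{r|n:F}, it must integrate to one over (0,1); the short sketch below (an illustrative check, not part of the original derivation) verifies this numerically for a few admissible pairs with 2r \ge n:

```python
def g_rnF(v, r, n):
    # Transformed system density from (5), valid for 2r >= n
    return r * (n - r + 1) * v ** (r - 1) - (r + 1) * (n - r) * v ** r

# midpoint-rule check that each density integrates to one
for r, n in [(2, 3), (3, 5), (4, 8), (8, 8)]:
    m = 100000
    total = sum(g_rnF((k + 0.5) / m, r, n) for k in range(m)) / m
    print((r, n), round(total, 6))  # 1.0 for every pair
```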
The following theorem provides an explicit expression for the extropy of a consecutive r-out-of-n:F system with lifetime T_{r|n:F}.
Theorem 1.
For 2r \ge n, the extropy of T_{r|n:F} can be expressed as follows:
J(T_{r|n:F}) = -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,h\left(H^{-1}(v)\right)dv,
where g_{r|n:F}(v) is given by (5).
Proof. 
By applying the transformation v = H(x) and referencing Equations (1) and (4), we can derive the following expressions:
J(T_{r|n:F}) = -\frac{1}{2}\int_0^{\infty} h_{r|n:F}^2(t)\,dt = -\frac{1}{2}\int_0^{\infty} h^2(t)\left[r(n-r+1)H^{r-1}(t) - (r+1)(n-r)H^{r}(t)\right]^2 dt = -\frac{1}{2}\int_0^1 h\left(H^{-1}(v)\right)\left[r(n-r+1)v^{r-1} - (r+1)(n-r)v^{r}\right]^2 dv = -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,h\left(H^{-1}(v)\right)dv,
which completes the proof. □
An alternative representation derived from Equation (4) is given below:
J(T_{r|n:F}) = -\frac{1}{2}\int_0^{\infty} h_{r|n:F}^2(t)\,dt = -\frac{1}{2}\int_0^{\infty}\left[(n-r+1)h_{r:r}(t) - (n-r)h_{r+1:r+1}(t)\right]^2 dt = (n-r+1)^2 J(X_{r:r}) + (n-r)^2 J(X_{r+1:r+1}) - 2(n-r+1)(n-r)\,J\left(X_{r:r}, X_{r+1:r+1}\right),
where
J\left(X_{r:r}, X_{r+1:r+1}\right) = -\frac{1}{2}\int_0^{\infty} h_{r:r}(t)\,h_{r+1:r+1}(t)\,dt,
represents the inaccuracy measure of h_{r:r}(x) with respect to h_{r+1:r+1}(x), or vice versa.
As an application of representation (6), we now present the following example.
Example 1.
Consider a linear consecutive r -out-of- n :F system with component lifetimes X 1 , X 2 , , X n . The system fails if and only if at least r  consecutive components fail. The system’s lifetime is given by
T_{r|n:F} = \min\left(T_{[1:r]}, T_{[2:r+1]}, \ldots, T_{[n-r+1:n]}\right),
where T_{[j:m]} = \max(X_j, \ldots, X_m) for 1 \le j < m \le n. Assuming that the lifetimes of the components follow a Weibull distribution characterized by a shape parameter \alpha and a scale parameter of one, the cdf is given by
H(x) = 1 - e^{-x^{\alpha}}, \quad x > 0, \ \text{for } \alpha > 0.
It can be shown that
h\left(H^{-1}(v)\right) = \alpha(1-v)\left(-\log(1-v)\right)^{\frac{\alpha-1}{\alpha}}, \quad 0 < v < 1.
Recalling Equation (6), for all 2r \ge n, the extropy of the system's lifetime can be expressed as follows:
J(T_{r|n:F}) = -\frac{\alpha}{2}\int_0^1 g_{r|n:F}^2(v)(1-v)\left(-\log(1-v)\right)^{\frac{\alpha-1}{\alpha}} dv.
Since obtaining an explicit expression for this integral is challenging, numerical methods were used to analyze the relationship between J(T_{r|n:F}) and the shape parameter \alpha. This study focused on the consecutive r-out-of-8:F system, with r ranging from 4 to 8.
Figure 1 illustrates that the system’s extropy initially increases with increasing α before eventually decreasing again. This clearly demonstrates the influence of the shape parameter on the system’s extropy. Interestingly, no direct correlation was observed between the monotonic behavior of extropy and the number of functioning components.
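The numerical experiment behind Figure 1 can be sketched as follows; the quadrature routine is a minimal stand-in for whatever integrator the authors used, and the closed-form check for \alpha = 1, r = n = 8 (where the integral reduces to -2/15) is an assumption added here for validation:

```python
import math

def g(v, r, n):
    # g_{r|n:F}(v) = r(n-r+1)v^(r-1) - (r+1)(n-r)v^r, valid for 2r >= n
    return r * (n - r + 1) * v ** (r - 1) - (r + 1) * (n - r) * v ** r

def J_weibull_system(r, n, alpha, m=50000):
    # Extropy of T_{r|n:F} with Weibull(alpha, 1) components, by midpoint rule
    s = 0.0
    for k in range(m):
        v = (k + 0.5) / m
        s += g(v, r, n) ** 2 * (1 - v) * (-math.log(1 - v)) ** ((alpha - 1) / alpha)
    return -0.5 * alpha * s / m

# alpha = 1 gives exponential components; for the parallel case r = n = 8
# the integral can be done by hand and equals -2/15.
print(round(J_weibull_system(8, 8, 1.0), 4))  # -0.1333
```

Sweeping alpha over a grid for r = 4, ..., 8 reproduces curves of the kind shown in Figure 1.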
Consider two random variables X and Y with pdfs h_X(x) and h_Y(x), cdfs H_X(x) and H_Y(x), and survival functions S_X(x) and S_Y(x), respectively. Recall that X \le_{disp} Y, i.e., X is smaller than Y in the dispersive order, if and only if
h_X\left(H_X^{-1}(v)\right) \ge h_Y\left(H_Y^{-1}(v)\right), \quad \text{for all } 0 < v < 1.
Additionally, X \le_{hr} Y, in the sense that X is smaller than Y in the hazard rate order, if the ratio of survival functions S_Y(x)/S_X(x) is increasing in x > 0. Moreover, X exhibits the decreasing failure rate (DFR) property if the ratio of the pdf to the survival function, h_X(x)/S_X(x), is decreasing in x > 0. For a deeper understanding and applications of these concepts, refer to [29]. Using (6), the following theorem becomes apparent.
Theorem 2.
Consider two consecutive r-out-of-n:F systems with lifetimes T_{r|n:F}^X and T_{r|n:F}^Y composed of n i.i.d. components with cdfs H_X and H_Y, and pdfs h_X and h_Y, respectively. Then, J(T_{r|n:F}^X) \le J(T_{r|n:F}^Y) for all 2r \ge n, provided that X \le_{disp} Y.
Proof. 
By using Equation (6), we have
J(T_{r|n:F}^X) = -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,h_X\left(H_X^{-1}(v)\right)dv \le -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,h_Y\left(H_Y^{-1}(v)\right)dv = J(T_{r|n:F}^Y),
where the inequality is obtained from (8), which completes the proof. □
As shown by Bagai and Kochar [30], if X \le_{hr} Y and either X or Y is DFR, then X \le_{disp} Y. Considering this result and Theorem 2, the following corollary is straightforward to prove.
Corollary 1.
Under the assumptions of Theorem 2, if X \le_{hr} Y and either X or Y is DFR, then J(T_{r|n:F}^X) \le J(T_{r|n:F}^Y) for all 2r \ge n.
A notable application of (6) is comparing the extropy of consecutive  r -out-of- n :F systems with independent but different component lifetime distributions, as stated in the following theorem.
Theorem 3.
Under the conditions of Theorem 2, if J(X) \le J(Y) and \inf_{A_1} g_{r|n:F}(v) \ge \sup_{A_2} g_{r|n:F}(v), where
A_1 = \{v \in [0,1] : h_X(H_X^{-1}(v)) > h_Y(H_Y^{-1}(v))\} and A_2 = \{v \in [0,1] : h_X(H_X^{-1}(v)) \le h_Y(H_Y^{-1}(v))\}, then J(T_{r|n:F}^X) \le J(T_{r|n:F}^Y) for all 2r \ge n.
Proof. 
Since J(X) \le J(Y), from (1), we have
J(Y) - J(X) = -\frac{1}{2}\int_0^1 \zeta(v)\,dv \ge 0,
where \zeta(v) = h_Y(H_Y^{-1}(v)) - h_X(H_X^{-1}(v)), for 0 < v < 1. From (6), we have
J(T_{r|n:F}^Y) - J(T_{r|n:F}^X) = -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,\zeta(v)\,dv = -\frac{1}{2}\left[\int_{A_1} g_{r|n:F}^2(v)\,\zeta(v)\,dv + \int_{A_2} g_{r|n:F}^2(v)\,\zeta(v)\,dv\right] \quad (\text{since } A_1 \cup A_2 = [0,1])
\ge -\frac{1}{2}\left[\Big(\inf_{v \in A_1} g_{r|n:F}(v)\Big)^2 \int_{A_1}\zeta(v)\,dv + \Big(\sup_{v \in A_2} g_{r|n:F}(v)\Big)^2 \int_{A_2}\zeta(v)\,dv\right] \quad (\text{since } \zeta(v) < 0 \text{ on } A_1 \text{ and } \zeta(v) \ge 0 \text{ on } A_2)
\ge -\frac{1}{2}\Big(\sup_{v \in A_2} g_{r|n:F}(v)\Big)^2 \int_0^1 \zeta(v)\,dv \quad (\text{by the assumption}) \ge 0 \quad (\text{by } (9)).
So, we have J(T_{r|n:F}^X) \le J(T_{r|n:F}^Y) for 2r \ge n, completing the proof. □
The subsequent example serves to illustrate the preceding theorem.
Example 2.
Assume coherent systems with lifetimes T_{2|3:F}^X = \min(\max(X_1, X_2), \max(X_2, X_3)) and T_{2|3:F}^Y = \min(\max(Y_1, Y_2), \max(Y_2, Y_3)), where X_1, X_2, X_3 are i.i.d. component lifetimes with the common cdf H_X(t) = 1 - e^{-t/2}, t > 0, and Y_1, Y_2, Y_3 are i.i.d. component lifetimes with the common cdf H_Y(t) = 1 - e^{-t/6}, t > 0. We can easily confirm that J(X) = -0.125 and J(Y) = -0.0416, so J(X) \le J(Y). Additionally, A_1 = [0,1) and A_2 = \{1\}; since A_2 has Lebesgue measure zero, the condition of Theorem 3 holds trivially. Thus, Theorem 3 implies that J(T_{2|3:F}^X) \le J(T_{2|3:F}^Y).
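This example can be checked numerically. Here we take exponential components with rates \lambda = 1/2 and 1/6, the values consistent with J(X) = -0.125 and J(Y) = -0.0416 (since J = -\lambda/4 for an exponential), for which h(H^{-1}(v)) = \lambda(1-v):

```python
def g23(v):
    # g_{2|3:F}(v) = 4v - 3v^2
    return 4 * v - 3 * v ** 2

def J_T23(lam, m=100000):
    # Theorem 1 with exponential components: h(H^{-1}(v)) = lam * (1 - v)
    return -0.5 * sum(g23((k + 0.5) / m) ** 2 * lam * (1 - (k + 0.5) / m)
                      for k in range(m)) / m

print(round(-0.5 / 4, 4), round(-(1 / 6) / 4, 4))  # J(X), J(Y): -0.125 -0.0417
print(J_T23(1 / 2) <= J_T23(1 / 6))                # True, as Theorem 3 predicts
```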

3. Bounds of Extropy of Consecutive Systems

In many cases, deriving closed-form expressions for the extropy of consecutive systems is computationally infeasible, especially when the system consists of a large number of components or when the component lifetime distributions exhibit considerable complexity. In such scenarios, bounding techniques provide a practical and effective alternative. Motivated by this challenge, we investigate the use of bounds to characterize the extropy of consecutive r-out-of-n:F systems. To this end, we establish the following theorem, which provides bounds for the extropy of such systems.
Theorem 4.
Let us consider a consecutive r-out-of-n:F system with lifetime T_{r|n:F} having n i.i.d. component lifetimes with pdf h.
(i)
If M = h(m) < \infty, where m = \sup\{x : h(x) = M\} designates the mode of the pdf h, then for 2r \ge n, we have J(T_{r|n:F}) \ge M\,J(U_{r|n:F}), where U_{r|n:F} denotes the lifetime of the corresponding system whose components are uniformly distributed on (0,1).
(ii)
For 2r \ge n, we have
B^2 J(X) \ge J(T_{r|n:F}) \ge D^2 J(X),
where B = \inf_{v \in (0,1)} g_{r|n:F}(v) and D = \sup_{v \in (0,1)} g_{r|n:F}(v).
Proof. 
The proof of Part (i) is straightforward; therefore, we focus on proving Part (ii). Since g_{r|n:F}(v) \ge \inf_{v \in (0,1)} g_{r|n:F}(v), by recalling Equation (6), the upper bound is given by
J(T_{r|n:F}) = -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,h\left(H^{-1}(v)\right)dv \le -\frac{1}{2}\Big(\inf_{v \in (0,1)} g_{r|n:F}(v)\Big)^2 \int_0^1 h\left(H^{-1}(v)\right)dv = B^2 J(X).
The lower bound can be derived in a similar manner. □
As established in Part (i) of Theorem 4, we derived a lower bound for the extropy of T r n : F . This bound was obtained by considering the extropy of consecutive r -out-of- n :F systems with a uniform distribution, along with the mode M of the original distribution. In Part (ii) of Theorem 4, the extropy J T r n : F is shown to be bounded between the extropy of the individual components under certain sufficient conditions. To illustrate these lower bounds, we examined consecutive r -out-of- n :F systems consisting of mixtures of two Pareto-distributed components.
Example 3.
Consider a linear consecutive 3-out-of-5:F system with lifetime
T_{3|5:F} = \min\left(\max(X_1, X_2, X_3), \max(X_2, X_3, X_4), \max(X_3, X_4, X_5)\right).
We assume that the component lifetimes are i.i.d. following a mixture of two Pareto distributions with parameters 2 and 1. The pdf of this mixture distribution is given by
h_m(x) = \frac{2b}{(1+x)^3} + \frac{1-b}{(1+x)^2}, \quad x \ge 0, \ \text{for } 0 < b < 1.
It is easy to see that the mode of this distribution is at x = 0, and therefore M = h_m(0) = b + 1. Furthermore, we can calculate B = \inf_{v \in (0,1)} g_{3|5:F}(v) = 0 and D = \sup_{v \in (0,1)} g_{3|5:F}(v) = 1.6875.
Furthermore, it is apparent that J(U_{3|5:F}) = -0.6714. Moreover, one can write J(X) = -(2b^2 + 5b + 5)/30. Consequently, we can establish lower bounds for J(T_{3|5:F}) based on Theorem 4. Specifically, the lower bounds based on Parts (i) and (ii) are given by
J(T_{3|5:F}) \ge -0.6714(b+1) \quad \text{and} \quad J(T_{3|5:F}) \ge -0.095\left(2b^2 + 5b + 5\right),
respectively. As illustrated in Figure 2, the lower bound from Theorem 4, Part (i), offers a better approximation for this specific distribution compared to the lower bound from Part (ii).
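The constants quoted in this example are easy to reproduce numerically; the sketch below evaluates B, D, and J(U_{3|5:F}) on a fine grid (the grid-based extrema are an approximation added here, not the authors' computation):

```python
def g35(v):
    # g_{3|5:F}(v) = 3*3*v^2 - 4*2*v^3 = 9v^2 - 8v^3
    return 9 * v ** 2 - 8 * v ** 3

m = 200000
vals = [g35((k + 0.5) / m) for k in range(m)]
B, D = min(vals), max(vals)                    # inf and sup of g on (0,1)
J_U = -0.5 * sum(x * x for x in vals) / m      # J(U_{3|5:F}) = -1/2 * integral of g^2
print(round(B, 4), round(D, 4), round(J_U, 4))  # 0.0 1.6875 -0.6714
```

The supremum D = 27/16 is attained at v = 3/4, and J(U_{3|5:F}) = -47/70 in closed form, consistent with the printed values.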
In many practical scenarios, the only available prior information may be that the component lifetimes exhibit a decreasing reversed failure rate (DRFR) property. A random variable X is said to satisfy the DRFR property if its reversed hazard rate function, τ ( x ) , is decreasing in x > 0 .
Theorem 5.
Let X_i, i = 1, 2, \ldots, n, be the i.i.d. lifetimes of the components of a consecutive r-out-of-n:F system with lifetime T_{r|n:F}, having the common reversed hazard rate function \tau(x). If X is DRFR, then
-\frac{r}{4}\,E\left[\tau\left(T_{r|n:F}^{22}\right)\right] \le J(T_{r|n:F}) \le -\frac{2r-n}{4}\,E\left[\tau\left(T_{r|n:F}^{22}\right)\right],
where T_{r|n:F}^{22} has the pdf h_{r|n:F}^{22}(x) = 2\,h_{r|n:F}(x)H_{r|n:F}(x), for all x > 0.
Proof. 
It is easy to see that the reversed hazard rate function of T_{r|n:F} can be expressed as
\tau_{r|n:F}(x) = \psi_{r,n}\left(H(x)\right)\tau(x),
where
\psi_{r,n}(z) = \frac{r(n-r+1) - (r+1)(n-r)z}{(n-r+1) - (n-r)z}, \quad 0 < z < 1.
Because \psi'_{r,n}(z) < 0 for 2r \ge n and 0 < z < 1, it follows that \psi_{r,n}(z) is a monotonically decreasing function of z. Given that \psi_{r,n}(0) = r and \psi_{r,n}(1) = 2r - n, we have 2r - n \le \psi_{r,n}(H(x)) \le r for 0 < H(x) < 1. This implies that (2r-n)\tau(x) \le \tau_{r|n:F}(x) \le r\,\tau(x), for x > 0. Combining this with (2) completes the proof. □
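The monotonicity and the range [2r-n, r] of \psi_{r,n} claimed in the proof can be checked numerically; a small sketch for the (hypothetical) choice r = 6, n = 8:

```python
def psi(z, r, n):
    # psi_{r,n}(z) = [r(n-r+1) - (r+1)(n-r)z] / [(n-r+1) - (n-r)z]
    return (r * (n - r + 1) - (r + 1) * (n - r) * z) / ((n - r + 1) - (n - r) * z)

r, n = 6, 8
vals = [psi(k / 1000, r, n) for k in range(1001)]
print(vals[0], vals[-1])  # 6.0 4.0, i.e. psi(0) = r and psi(1) = 2r - n
print(all(vals[k] >= vals[k + 1] for k in range(1000)))  # True: decreasing
```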
The following theorem applies under the assumption that the expectation of the squared reversed hazard rate function of X is finite.
Theorem 6.
Under the conditions of Theorem 5, if E[\tau^2(X)] < \infty, then for 2r \ge n it holds that
J(T_{r|n:F}) \ge -\frac{1}{2}\sqrt{\Omega_{r,n}\,E\left[\tau^2(X)\right]}, \quad \text{where } \Omega_{r,n} = \int_0^1 v^2 g_{r|n:F}^4(v)\,dv.
Proof. 
Note that the pdf of T_{r|n:F} can be rewritten as h_{r|n:F}(x) = h(x)\,g_{r|n:F}(H(x)), its cumulative distribution function as H_{r|n:F}(x) = G_{r|n:F}(H(x)), and its reversed hazard rate function as
\tau_{r|n:F}(x) = \tau(x)\,\frac{H(x)\,g_{r|n:F}(H(x))}{G_{r|n:F}(H(x))}, \quad \text{for } x > 0.
Consequently, based on (2) and the Cauchy–Schwarz inequality, we obtain
0 \le \int_0^{\infty} \tau_{r|n:F}(x)\,h_{r|n:F}(x)\,H_{r|n:F}(x)\,dx = \int_0^{\infty} \tau(x)\sqrt{h(x)}\cdot\sqrt{h(x)}\,H(x)\,g_{r|n:F}^2(H(x))\,dx \le \left[\int_0^{\infty} \tau^2(x)h(x)\,dx\right]^{1/2}\left[\int_0^{\infty}\left(H(x)\,g_{r|n:F}^2(H(x))\right)^2 h(x)\,dx\right]^{1/2} = \left[E\left(\tau^2(X)\right)\right]^{1/2}\left[\int_0^1 v^2 g_{r|n:F}^4(v)\,dv\right]^{1/2}.
The last equality follows from the change of variable v = H(x); since, by (2), J(T_{r|n:F}) = -\frac{1}{2}\int_0^{\infty} \tau_{r|n:F}(x)\,h_{r|n:F}(x)\,H_{r|n:F}(x)\,dx, this completes the proof. □
The following example demonstrates how the previously stated theorem can be applied to establish bounds for the extropy of a specific consecutive r -out-of- n :F system.
Example 4.
Let us consider a system that fails if 6 consecutive components out of 8 fail, so that its lifetime is T_{6|8:F}. We further assume that each component's lifetime follows a Fréchet distribution, with the cdf given by
H(x) = e^{-x^{-\alpha}}, \quad x > 0, \ \text{for } \alpha > 0.
The reversed hazard rate function is given by \tau(x) = \alpha x^{-(\alpha+1)}, x > 0, and its second moment is E\left[\tau^2(X)\right] = \alpha^2\Gamma\left(\frac{2}{\alpha} + 3\right). Given that \Omega_{6,8} = 21.1795, we obtain the following lower bound:
J(T_{6|8:F}) \ge -2.3\,\alpha\sqrt{\Gamma\left(\frac{2}{\alpha} + 3\right)}.
Figure 3 illustrates the exact value and this lower bound.
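The constant \Omega_{6,8} and the resulting coefficient in the bound can be reproduced by direct quadrature (a simple midpoint rule is used here as an illustrative stand-in):

```python
def g68(v):
    # g_{6|8:F}(v) = 6*3*v^5 - 7*2*v^6 = 18v^5 - 14v^6
    return 18 * v ** 5 - 14 * v ** 6

m = 200000
# Omega_{6,8} = integral over (0,1) of v^2 * g_{6|8:F}(v)^4
Omega = sum(((k + 0.5) / m) ** 2 * g68((k + 0.5) / m) ** 4 for k in range(m)) / m
print(round(Omega, 4))               # 21.1795
print(round(0.5 * Omega ** 0.5, 4))  # 2.3011, the coefficient in the bound
```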
Complementing Theorem 6, the following result provides an additional lower bound for the extropy of consecutive r-out-of-n:F systems in terms of parallel-system extropies. The proof is omitted as it follows directly from (7).
Theorem 7.
For the consecutive r-out-of-n:F system with lifetime T_{r|n:F}, we have
J(T_{r|n:F}) \ge (n-r+1)^2 J(X_{r:r}) + (n-r)^2 J(X_{r+1:r+1}), \quad \text{for } 2r \ge n.

4. Characterization Results

This section examines the extropy properties of consecutive r -out-of-n:F systems. Recent studies by Husseiny et al. [31] and Gupta and Chaudhary [32] have explored the characterization of symmetric continuous distributions using extropy and related measures, such as cumulative residual extropy and cumulative past extropy. Their findings indicate that for symmetric distributions, these measures yield identical values for both upper and lower order statistics. Building on these insights, we demonstrate that the extropy of the lifetime of a consecutive r -out-of-n:F system uniquely determines the underlying component distribution.
Theorem 8.
Under the conditions of Theorem 2, H_X and H_Y belong to the same family of distributions, up to a change in location, if and only if X \le_{disp} Y,
J(T_{r|n:F}^X) = J(T_{r|n:F}^Y), \quad \text{for all } r \text{ and } n \text{ such that } 2r \ge n,
and g_{r|n:F}(v) > 0 for all 0 < v < 1.
Proof. 
As the necessity condition is straightforward, we proceed to demonstrate sufficiency. Note that the extropy of T_{r|n:F}^X, from (6), can be expressed as follows:
J(T_{r|n:F}^X) = -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,h_X\left(H_X^{-1}(v)\right)dv.
In a similar way, J(T_{r|n:F}^Y) can be obtained. Assuming (10) holds for all 2r \ge n, we have
\int_0^1 g_{r|n:F}^2(v)\left[h_X\left(H_X^{-1}(v)\right) - h_Y\left(H_Y^{-1}(v)\right)\right]dv = 0.
For 2r \ge n, it holds that g_{r|n:F}^2(v) > 0 for all 0 < v < 1. Additionally, X \le_{disp} Y implies h_X(H_X^{-1}(v)) \ge h_Y(H_Y^{-1}(v)) for all 0 < v < 1 by (8). Thus, the integrand in (11) is nonnegative, implying
h_X\left(H_X^{-1}(v)\right) = h_Y\left(H_Y^{-1}(v)\right), \quad \text{a.e. } v \in (0,1).
Thus, H_Y^{-1}(v) = H_X^{-1}(v) + d for some constant d, which completes the proof. □
Since a consecutive n -out-of- n :F system functions as a parallel system, the following corollary follows directly from the preceding theorem.
Corollary 2.
For two parallel systems with lifetimes T_{n|n:F}^X and T_{n|n:F}^Y, H_X and H_Y belong to the same family of distributions, up to a change in location, if and only if X \le_{disp} Y and
J(T_{n|n:F}^X) = J(T_{n|n:F}^Y), \quad \text{for all } n \ge 1.
The next theorem provides an analogous characterization involving both location and scale.
Theorem 9.
Under the conditions of Theorem 8, H_X and H_Y belong to the same family of distributions, up to a change in location and scale, if and only if X \le_{disp} Y and
\frac{J(T_{r|n:F}^X)}{J(X)} = \frac{J(T_{r|n:F}^Y)}{J(Y)}, \quad \text{for all } r \text{ and } n \text{ such that } 2r \ge n.
Proof. 
As the necessity condition is straightforward, we proceed to demonstrate sufficiency. From (6), we have
\frac{J(T_{r|n:F}^X)}{J(X)} = -\frac{1}{2}\int_0^1 g_{r|n:F}^2(v)\,\frac{h_X\left(H_X^{-1}(v)\right)}{J(X)}\,dv.
In a similar way, J(T_{r|n:F}^Y)/J(Y) can be obtained. From (12) and (13), for 2r \ge n we can write
\int_0^1 g_{r|n:F}^2(v)\,\frac{h_X\left(H_X^{-1}(v)\right)}{J(X)}\,dv = \int_0^1 g_{r|n:F}^2(v)\,\frac{h_Y\left(H_Y^{-1}(v)\right)}{J(Y)}\,dv.
Let us set c = J(Y)/J(X), where J(X) and J(Y) denote the extropies of X and Y, respectively. The assumption X \le_{disp} Y results in J(X) \le J(Y) < 0, which means that 0 < c \le 1. Additionally, relation (14) can be written as
\int_0^1 g_{r|n:F}^2(v)\left[c\,h_X\left(H_X^{-1}(v)\right) - h_Y\left(H_Y^{-1}(v)\right)\right]dv = 0.
The assumption X \le_{disp} Y implies that h_X(H_X^{-1}(v)) \ge h_Y(H_Y^{-1}(v)) for all 0 < v < 1, so the integrand of (15) keeps a constant sign; since its integral vanishes, c\,h_X(H_X^{-1}(v)) = h_Y(H_Y^{-1}(v)) for almost all v \in (0,1). Therefore, we have H_Y^{-1}(v) = c\,H_X^{-1}(v) + d for some constant d, which completes the proof. □
As a direct consequence of Theorem 9, the following corollary holds.
Corollary 3.
Under the conditions of Corollary 2, H_X and H_Y belong to the same family of distributions, up to a change in scale, if and only if X \le_{disp} Y and
\frac{J(T_{n|n:F}^X)}{J(X)} = \frac{J(T_{n|n:F}^Y)}{J(Y)}, \quad \text{for all } n \ge 1.
We now present a novel characterization of consecutive systems using extropy. To this end, we examine a linear consecutive (n-i)-out-of-n:F system under the condition n \ge 2i, where i = 0, 1, \ldots, \lfloor n/2 \rfloor. As a foundational step, we revisit a lemma derived from the Müntz–Szász theorem, as referenced in Kamps [33], which plays a crucial role in moment-based characterization theorems.
Lemma 1.
For an integrable function \psi(x) on the finite interval (a,b), if \int_a^b x^{n_j}\psi(x)\,dx = 0 for all j \ge 1, then \psi(x) = 0 for almost all x \in (a,b), where \{n_j, j \ge 1\} is a strictly increasing sequence of positive integers satisfying \sum_{j=1}^{\infty} 1/n_j = \infty.
It is important to note that Lemma 1 is a well-established result in functional analysis, stating that the set \{x^{n_1}, x^{n_2}, \ldots\}, 1 \le n_1 < n_2 < \cdots, constitutes a complete sequence. Notably, Hwang and Lin [34] expanded the scope of the Müntz–Szász theorem to the functions \{\phi^{n_j}(x)\}, n_j \ge 1, where \phi(x) is both absolutely continuous and monotonic over the interval (a,b).
Theorem 10.
Consider two consecutive (n-i)-out-of-n:F systems with lifetimes T_{n-i|n:F}^X and T_{n-i|n:F}^Y composed of n i.i.d. components with cdfs H_X and H_Y, and pdfs h_X and h_Y, respectively. Then, H_X and H_Y belong to the same family of distributions, up to a change in location, if and only if for a fixed i \ge 0,
J(T_{n-i|n:F}^X) = J(T_{n-i|n:F}^Y), \quad \text{for all } n \ge 2i.
Proof. 
For the necessity part, since H_X and H_Y belong to the same family of distributions but differ in location, we have H_Y(y) = H_X(y - a) for all y \ge a and some a \in \mathbb{R}. Then, it is clear that
J(T_{n-i|n:F}^Y) = -\frac{1}{2}\int_a^{\infty} h_{Y,n-i|n:F}^2(y)\,dy = -\frac{1}{2}\int_a^{\infty} h_{X,n-i|n:F}^2(y-a)\,dy = -\frac{1}{2}\int_0^{\infty} h_{X,n-i|n:F}^2(x)\,dx = J(T_{n-i|n:F}^X) \quad (\text{taking } x = y - a).
To establish the sufficiency part, we first note that for a consecutive (n-i)-out-of-n:F system, the following equation holds:
g_{n-i|n:F}(v) = (n-i)(i+1)v^{n-i-1} - i(n-i+1)v^{n-i}, \quad 0 < v < 1,
where n \ge 2i. Given the assumption that J(T_{n-i|n:F}^X) = J(T_{n-i|n:F}^Y), we can write
\int_0^1 g_{n-i|n:F}^2(v)\left[h_X\left(H_X^{-1}(v)\right) - h_Y\left(H_Y^{-1}(v)\right)\right]dv = 0,
or equivalently,
\int_0^1 v^{n-2i}\,\phi_{i,n}(v)\left[h_X\left(H_X^{-1}(v)\right) - h_Y\left(H_Y^{-1}(v)\right)\right]dv = 0,
where
\phi_{i,n}(v) = v^{n-2}\left[(n-i)(i+1) - i(n-i+1)v\right]^2, \quad \text{for } 0 < v < 1.
By applying Lemma 1 with the function
\psi(v) = \phi_{i,n}(v)\left[h_X\left(H_X^{-1}(v)\right) - h_Y\left(H_Y^{-1}(v)\right)\right],
and considering the complete sequence \{v^{n-2i}, n \ge 2i\}, one can conclude that
h_X\left(H_X^{-1}(v)\right) = h_Y\left(H_Y^{-1}(v)\right), \quad \text{a.e. } v \in (0,1).
This implies that H_Y^{-1}(x) = H_X^{-1}(x) + d for a constant d, completing the proof. □
In the special case where i = 0 , which corresponds to an n -out-of- n :F or parallel system, the following corollary holds.
Corollary 4.
Under the conditions of Corollary 2, H_X and H_Y belong to the same family of distributions, up to a change in location, if and only if
J(T_{n|n:F}^X) = J(T_{n|n:F}^Y), \quad \text{for all } n \ge 1.
Another helpful characterization is provided in the following theorem.
Theorem 11.
Under the conditions of Theorem 10, H_X and H_Y belong to the same family of distributions, up to a change in location and scale, if and only if for a fixed i \ge 0,
\frac{J(T_{n-i|n:F}^X)}{J(X)} = \frac{J(T_{n-i|n:F}^Y)}{J(Y)}, \quad \text{for all } n \ge 2i.
Proof. 
As the necessity is straightforward, we need to establish the sufficiency part. Leveraging Equations (1) and (17), we can derive
\frac{J(T_{n-i|n:F}^X)}{J(X)} = -\frac{1}{2}\int_0^1 g_{n-i|n:F}^2(v)\,\frac{h_X\left(H_X^{-1}(v)\right)}{J(X)}\,dv.
An analogous expression holds for J(T_{n-i|n:F}^Y)/J(Y). If relation (17) holds for two cdfs H_X and H_Y, then we can infer from Equation (18) that
\int_0^1 g_{n-i|n:F}^2(v)\,\frac{h_X\left(H_X^{-1}(v)\right)}{J(X)}\,dv = \int_0^1 g_{n-i|n:F}^2(v)\,\frac{h_Y\left(H_Y^{-1}(v)\right)}{J(Y)}\,dv.
Let us set
c = \frac{J(Y)}{J(X)} = \frac{\int_0^1 h_Y\left(H_Y^{-1}(z)\right)dz}{\int_0^1 h_X\left(H_X^{-1}(z)\right)dz}.
Using similar arguments as in the proof of Theorem 10, we can write
\int_0^1 v^{n-2i}\,\phi_{i,n}(v)\left[c\,h_X\left(H_X^{-1}(v)\right) - h_Y\left(H_Y^{-1}(v)\right)\right]dv = 0.
The proof is then completed by using arguments similar to those in the proofs of Theorems 9 and 10. □
By applying Theorem 11, we derive the following corollary.
Corollary 5.
Suppose the assumptions of Corollary 2 hold. Then H_X and H_Y belong to the same family of distributions, up to a change in location and scale, if and only if
\frac{J(T_{n|n:F}^X)}{J(X)} = \frac{J(T_{n|n:F}^Y)}{J(Y)}, \quad \text{for all } n \ge 1.
The following theorem provides a characterization of the uniform distribution based on the extropy of consecutive $(n-i)$-out-of-$n$:F systems.
Theorem 12.
Let us assume that $T_{n-i|n:F}$ is the lifetime of a consecutive $(n-i)$-out-of-$n$:F system, where the $n$ i.i.d. component lifetimes have a pdf $h$ concentrated on $(0,1)$. Then, $X$ has a uniform distribution on $(0,1)$ if and only if, for a fixed $i \ge 0$,
$J\!\left(T^X_{n-i|n:F}\right) = -2\,J(X)\,J\!\left(U_{n-i|n:F}\right), \quad \text{for all } n \ge 2i.$
Proof. 
Assuming $X$ has a uniform distribution on $(0,1)$, we have $h(x) = 1$ for $0 < x < 1$, and therefore $h\!\left(H^{-1}(v)\right) = 1$ for all $0 < v < 1$. Consequently, Theorem 1 implies that $J\!\left(T^X_{n-i|n:F}\right) = J\!\left(U_{n-i|n:F}\right)$ for all $n \ge 2i$. Moreover, it is easy to see that $J(X) = -1/2$; therefore, the necessity is obtained. To prove the sufficiency part, let us assume that for a fixed $i \ge 0$, $J\!\left(T_{n-i|n:F}\right) = -2\,J(X)\,J\!\left(U_{n-i|n:F}\right)$ for all $n \ge 2i$, or equivalently,
$\int_0^1 g^2_{n-i|n:F}(v)\left[h\!\left(H^{-1}(v)\right) + 2J(X)\right]dv = 0, \quad n \ge 2i,$
where $g_{n-i|n:F}(v)$ is defined in (16). Employing this, relation (19) can be rewritten equivalently as
$\int_0^1 v^{\,n-2i}\,\phi_{i,n}(v)\left[h\!\left(H^{-1}(v)\right) + 2J(X)\right]dv = 0, \quad \text{for } n \ge 2i.$
By applying Lemma 1 with the function
$\psi(v) = \phi_{i,n}(v)\left[h\!\left(H^{-1}(v)\right) + 2J(X)\right],$
and considering the complete sequence $\{v^{\,n-2i}\}$, $n \ge 2i$, one can conclude that
$h\!\left(H^{-1}(v)\right) = -2J(X), \quad \text{a.e. } v \in (0,1).$
This implies that
$\frac{dH^{-1}(v)}{dv} = \frac{1}{h\!\left(H^{-1}(v)\right)} = -\frac{1}{2J(X)}.$
Integrating both sides of the above equation from 0 to $x$, we find that $H^{-1}(x) = -\frac{x}{2J(X)} + d$ for some constant $d$, where $x \in (0,1)$. Since $\lim_{x \to 0} H^{-1}(x) = 0$, it follows that $d = 0$, leading to $H^{-1}(x) = -\frac{x}{2J(X)}$ for $x \in (0,1)$. Given that $H^{-1}(1) = 1$, we must have $-2J(X) = 1$, i.e., $J(X) = -1/2$. Therefore, we conclude that $H(x) = x$ for $0 < x < 1$, indicating that $X$ has a uniform distribution on $(0,1)$, thus completing the proof. □

5. Nonparametric Estimation

In this section, we introduce a nonparametric estimation technique for the extropy of a consecutive $(n-i)$-out-of-$n$:F system. We consider i.i.d. absolutely continuous, non-negative random variables $X_1, X_2, \dots, X_N$, and their corresponding order statistics $X_{1:N} \le X_{2:N} \le \dots \le X_{N:N}$. Leveraging the identity $dH^{-1}(v)/dv = 1/h\!\left(H^{-1}(v)\right)$, $0 < v < 1$, from Equation (6), the extropy of $T_{n-i|n:F}$ can be formulated as follows:
$J\!\left(T_{n-i|n:F}\right) = -\frac{1}{2}\int_0^1 g^2_{n-i|n:F}(v)\,h\!\left(H^{-1}(v)\right)dv = -\frac{1}{2}\int_0^1 \frac{g^2_{n-i|n:F}(v)}{dH^{-1}(v)/dv}\,dv, \quad \text{for } n \ge 2i.$
To estimate the extropy $J\!\left(T_{n-i|n:F}\right)$ of the consecutive $(n-i)$-out-of-$n$:F system, we utilize the difference-operator-based estimator proposed by Vasicek [35] to approximate $dH^{-1}(v)/dv$. This estimator employs the empirical distribution function to estimate the distribution function of the random variable in (20). As a result, the following estimator of $J\!\left(T_{n-i|n:F}\right)$ is obtained:
$\hat{J}\!\left(T_{n-i|n:F}\right) = -\frac{1}{2N}\sum_{l=1}^{N} g^2_{n-i|n:F}\!\left(\frac{l}{N+1}\right)\frac{2m}{N\left(X_{l+m:N}-X_{l-m:N}\right)} = -\frac{1}{2N}\sum_{l=1}^{N}\left[r(n-r+1)\left(\frac{l}{N+1}\right)^{r-1} - (n-r)(r+1)\left(\frac{l}{N+1}\right)^{r}\right]^{2}\frac{2m}{N\left(X_{l+m:N}-X_{l-m:N}\right)},$
where $r = n-i$ and $m$ is a positive integer less than $N/2$, called the window size. For $l - m \le 1$, $X_{l-m:N}$ is set to $X_{1:N}$, and for $l + m \ge N$, $X_{l+m:N}$ is set to $X_{N:N}$. To assess the performance of the estimator, we first consider a simple example: assume the component lifetimes follow a standard exponential distribution with pdf $h(x) = e^{-x}$ for $x > 0$. Under this assumption, $J\!\left(T_{n-i|n:F}\right)$ can be computed as follows:
$J\!\left(T_{n-i|n:F}\right) = -\frac{1}{2}\int_0^1 g^2_{n-i|n:F}(v)\,(1-v)\,dv = -\frac{1}{4}\left[\frac{(n-i)(i+1)^{2}}{2(n-i)-1} - \frac{i(i+2)(n-i+1)}{2(n-i)+1}\right], \quad \text{for } n \ge 2i.$
The second case involves a uniform distribution with a pdf defined as h ( x ) = 1 for 0 < x < 1 . For this distribution, J T n i n : F can be determined as follows:
$J\!\left(T_{n-i|n:F}\right) = J\!\left(U_{n-i|n:F}\right) = -\frac{1}{2}\int_0^1 g^2_{n-i|n:F}(v)\,dv = -\frac{1}{2}\left[\frac{\left((n-i)(i+1)\right)^{2}}{2(n-i)-1} + \frac{\left((n-i+1)\,i\right)^{2}}{2(n-i)+1} - i(i+1)(n-i+1)\right], \quad \text{for } n \ge 2i.$
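The two closed-form expressions above can be cross-checked against direct numerical integration of $-\frac{1}{2}\int_0^1 g^2(v)\,h(H^{-1}(v))\,dv$. The sketch below is our illustration, not the authors' code; the exponential-case formula inside it is our algebraically equivalent simplification of the text's expression. In particular, it recovers $J(U_{2|4:F}) = -0.6$, the value used later for the test statistic.

```python
def extropy_exact(n, i, uniform=True):
    # Closed-form J(T_{n-i|n:F}) for standard uniform components, or (our
    # equivalent simplification) for standard exponential components.
    A = (n - i) * (i + 1)
    B = (n - i + 1) * i
    if uniform:
        return -0.5 * (A ** 2 / (2 * (n - i) - 1) + B ** 2 / (2 * (n - i) + 1)
                       - i * (i + 1) * (n - i + 1))
    return -0.25 * ((n - i) * (i + 1) ** 2 / (2 * (n - i) - 1)
                    - i * (i + 2) * (n - i + 1) / (2 * (n - i) + 1))

def extropy_numeric(n, i, uniform=True, steps=20000):
    # Midpoint-rule evaluation of -(1/2) * integral of g^2(v) * h(H^{-1}(v)) dv,
    # with h(H^{-1}(v)) = 1 (uniform) or 1 - v (standard exponential).
    r = n - i
    s = 0.0
    for k in range(steps):
        v = (k + 0.5) / steps
        g = r * (n - r + 1) * v ** (r - 1) - (n - r) * (r + 1) * v ** r
        s += g * g * (1.0 if uniform else 1.0 - v) / steps
    return -0.5 * s
```

For $(n,i) = (4,2)$ the uniform value is exactly $-0.6$ and the exponential value is exactly $-0.3$, in agreement with the integral forms.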
The average bias and root mean square error (RMSE) of the estimators for these distributions are evaluated using Equation (21). To this end, we computed the bias and RMSE for various sample sizes, N = 20, 30, 40, 50, 100 and different values of i and n . For simplicity, we used the heuristic formula from [36] to determine the parameter m , given by
$m = \left[\sqrt{N} + 0.5\right].$
Based on 5000 repetitions, the results are summarized in Table 1 and Table 2. As the sample size increased, the RMSE of the extropy estimators for the consecutive $(n-i)$-out-of-$n$:F system decreased, and the bias generally shrank in magnitude, indicating that larger sample sizes yield more accurate and more precise estimates.
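The estimator in Equation (21) and a scaled-down version of this simulation can be sketched together as follows. This is our illustration (the function names are ours): we use 300 repetitions instead of 5000, the window rule $m = [\sqrt{N} + 0.5]$, and, as the reference value, our simplified closed form for exponential components derived above.

```python
import math
import random

def g_consecutive(v, n, i):
    # pdf of the consecutive (n-i)-out-of-n:F system lifetime with uniform components
    r = n - i
    return r * (n - r + 1) * v ** (r - 1) - (n - r) * (r + 1) * v ** r

def extropy_estimate(x, n, i, m):
    # Vasicek-type estimate of J(T_{n-i|n:F}) from Equation (21); m < N/2
    xs = sorted(x)
    N = len(xs)
    total = 0.0
    for l in range(1, N + 1):
        lo = xs[max(l - m, 1) - 1]   # X_{l-m:N}, truncated at X_{1:N}
        hi = xs[min(l + m, N) - 1]   # X_{l+m:N}, truncated at X_{N:N}
        total += g_consecutive(l / (N + 1), n, i) ** 2 * 2 * m / (N * (hi - lo))
    return -total / (2 * N)

def bias_rmse(n, i, N, reps=300, seed=1):
    # Monte Carlo bias and RMSE for standard exponential component lifetimes
    rng = random.Random(seed)
    m = int(math.sqrt(N) + 0.5)
    # closed-form target value (our simplification of the text's expression)
    exact = -0.25 * ((n - i) * (i + 1) ** 2 / (2 * (n - i) - 1)
                     - i * (i + 2) * (n - i + 1) / (2 * (n - i) + 1))
    errs = [extropy_estimate([rng.expovariate(1.0) for _ in range(N)], n, i, m) - exact
            for _ in range(reps)]
    bias = sum(errs) / reps
    rmse = math.sqrt(sum(e * e for e in errs) / reps)
    return bias, rmse
```

On an evenly spaced "sample" $l/(N+1)$ the implied density estimate is close to one, so the estimate lies near the uniform value; the simulation reproduces the qualitative pattern of Tables 1 and 2, with negative bias that shrinks in magnitude as $N$ grows.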

5.1. Test of Uniformity

Testing for uniformity is a fundamental statistical task with applications across various fields. Consider a random sample $X_1, X_2, \dots, X_N$ drawn from an absolutely continuous distribution $H$, with order statistics satisfying $X_{1:N} \le X_{2:N} \le \dots \le X_{N:N}$. Suppose that $H_0$ denotes the cumulative distribution function (cdf) of the standard uniform distribution $U(0,1)$, given by $H_0(x) = x$ for $0 \le x \le 1$.
Our objective was to test the following hypothesis:
$H_0\colon H(x) = H_0(x) \text{ for all } x, \quad \text{vs.} \quad H_1\colon H(x) \ne H_0(x) \text{ for some } x.$
As established in Theorem 12, the uniform distribution is uniquely characterized by the extropy of consecutive $(n-i)$-out-of-$n$:F systems. Building on this result, we introduce a new test statistic for uniformity, denoted by $TC_{n,i}$, which is defined as follows:
$TC_{n,i} = J\!\left(T^X_{n-i|n:F}\right) + 2\,J(X)\,J\!\left(U_{n-i|n:F}\right), \quad \text{for } i = 0, 1, \dots, \lfloor n/2 \rfloor.$
Therefore, Theorem 12 directly implies that $TC_{n,i} = 0$ if and only if $X$ is uniformly distributed. Hence, $TC_{n,i}$ can serve as a basic measure of uniformity and be used as a test statistic. Given an estimator $\widehat{TC}_{n,i}$ of $TC_{n,i}$ based on a random sample $X_1, X_2, \dots, X_N$, significant deviations of $\widehat{TC}_{n,i}$ from its expected value under the null hypothesis suggest nonuniformity, leading to the rejection of $H_0$. For illustration purposes, we focus on the special case of $n = 4$ and $i = 2$, using $\widehat{TC}_{4,2}$ as the test statistic. To derive an expression for $\widehat{TC}_{4,2}$, we recall that Qiu and Jia's [21] estimator of $J(X)$, denoted $JQ_{2mn}$, is defined by
$JQ_{2mn} = -\frac{1}{2N}\sum_{l=1}^{N}\frac{c_l\,m}{N\left(X_{l+m:N} - X_{l-m:N}\right)},$
where c l depends on the window size m and the sample size N and is defined as
$c_l = \begin{cases} 1 + \dfrac{l-1}{m}, & 1 \le l \le m,\\[4pt] 2, & m+1 \le l \le N-m,\\[4pt] 1 + \dfrac{N-l}{m}, & N-m+1 \le l \le N. \end{cases}$
As $J\!\left(U_{2|4:F}\right) = -0.6$, a reasonable estimator $\widehat{TC}_{4,2}$ can be derived using Equation (21) and the $JQ_{2mn}$ estimator, as follows:
$\widehat{TC}_{4,2} = \hat{J}\!\left(T^X_{2|4:F}\right) + 2\,JQ_{2mn}\,J\!\left(U_{2|4:F}\right) = \hat{J}\!\left(T^X_{2|4:F}\right) - 1.2\,JQ_{2mn} = -\frac{1}{2N}\sum_{l=1}^{N}\left[72\left(\frac{l}{N+1}\left(1-\frac{l}{N+1}\right)\right)^{2} - 1.2\,c_l\right]\frac{m}{N\left(X_{l+m:N} - X_{l-m:N}\right)},$
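The statistic in Equation (23) can be sketched in a few lines (our illustration; the function name and test data are ours). For the special case $n = 4$, $i = 2$, the system pdf under uniformity is $g_{2|4:F}(v) = 6v(1-v)$, so $g^2$ contributes the factor $36(v(1-v))^2$, i.e., the $72$ in the combined sum:

```python
def tc_hat_42(x, m):
    # Sketch of the test statistic TC^_{4,2} in Equation (23);
    # the weights c_l follow Equation (22).
    xs = sorted(x)
    N = len(xs)
    total = 0.0
    for l in range(1, N + 1):
        if l <= m:
            c = 1 + (l - 1) / m
        elif l <= N - m:
            c = 2.0
        else:
            c = 1 + (N - l) / m
        v = l / (N + 1)
        lo = xs[max(l - m, 1) - 1]   # X_{l-m:N}, truncated at X_{1:N}
        hi = xs[min(l + m, N) - 1]   # X_{l+m:N}, truncated at X_{N:N}
        total += (72 * (v * (1 - v)) ** 2 - 1.2 * c) * m / (N * (hi - lo))
    return -total / (2 * N)
```

On data resembling a uniform sample the statistic is close to zero, and multiplying the data by $a > 0$ divides the statistic by $a$, in line with the scale property established in Theorem 14 below.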
Ensuring the consistency of an estimator is essential, particularly when evaluating estimators of parametric functions. The following theorem establishes the consistency of the estimator presented in Equation (23). Its proof follows a methodology similar to that of Theorem 1 in Vasicek's paper [35]. Notably, Park [37] and Xiong et al. [38] adopted Vasicek's approach to demonstrate the consistency of their respective test statistics.
Theorem 13.
Assume that $X_1, X_2, \dots, X_N$ is a random sample of size $N$ taken from a population with pdf $h$ and cdf $H$, and let the variance of the random variable be finite. Then, $\widehat{TC}_{4,2} \stackrel{p}{\longrightarrow} TC_{4,2}$ as $N \to +\infty$, $m \to +\infty$, and $m/N \to 0$, where $\stackrel{p}{\longrightarrow}$ stands for convergence in probability.
Proof. 
Part (2) of Theorem 2.1 in Qiu and Jia [21] establishes that $JQ_{2mn} \stackrel{p}{\longrightarrow} J(X)$ as $N \to +\infty$, $m \to +\infty$, and $m/N \to 0$. Furthermore, by adapting the approach of Theorem 1 in Vasicek [35], it can be shown that $\hat{J}\!\left(T^X_{2|4:F}\right) \stackrel{p}{\longrightarrow} J\!\left(T^X_{2|4:F}\right)$ under the same asymptotic conditions. Consequently, leveraging the properties of convergence in probability, we obtain $\widehat{TC}_{4,2} \stackrel{p}{\longrightarrow} TC_{4,2}$ as $N \to +\infty$, $m \to +\infty$, and $m/N \to 0$, completing the proof. □
The following theorem demonstrates that shifting the random variable X does not affect the RMSE of T C ^ 4 , 2 when estimating T C 4 , 2 . However, this invariance does not extend to scale transformations. The proof of these results follows directly from the arguments presented by Ebrahimi et al. [39].
Theorem 14.
Assume that $X_1, X_2, \dots, X_N$ is a random sample of size $N$ taken from a population with pdf $h$ and cdf $H$, and let $Y_j = aX_j + b$, $a > 0$, $b \in \mathbb{R}$. Denote the estimators of $TC_{4,2}$ based on $X_j$ and $Y_j$ by $\widehat{TC}_{4,2}(X)$ and $\widehat{TC}_{4,2}(Y)$, respectively. Then, the following properties apply:
(i) $E\!\left[\widehat{TC}_{4,2}(Y)\right] = E\!\left[\widehat{TC}_{4,2}(X)\right]/a$,
(ii) $\mathrm{Var}\!\left[\widehat{TC}_{4,2}(Y)\right] = \mathrm{Var}\!\left[\widehat{TC}_{4,2}(X)\right]/a^{2}$,
(iii) $\mathrm{RMSE}\!\left[\widehat{TC}_{4,2}(Y)\right] = \mathrm{RMSE}\!\left[\widehat{TC}_{4,2}(X)\right]/a$.
Proof. 
It is not hard to see from (23) that
$\widehat{TC}_{4,2}(Y) = -\frac{1}{2N}\sum_{l=1}^{N}\left[72\left(\frac{l}{N+1}\left(1-\frac{l}{N+1}\right)\right)^{2} - 1.2\,c_l\right]\frac{m}{N\left(Y_{l+m:N} - Y_{l-m:N}\right)} = -\frac{1}{2N}\sum_{l=1}^{N}\left[72\left(\frac{l}{N+1}\left(1-\frac{l}{N+1}\right)\right)^{2} - 1.2\,c_l\right]\frac{m}{N\,a\left(X_{l+m:N} - X_{l-m:N}\right)} = \frac{\widehat{TC}_{4,2}(X)}{a}.$
The proof is then completed by leveraging the properties of the mean, variance, and RMSE of T C ^ 4 , 2 Y = T C ^ 4 , 2 X / a . □
The absolute value of $\widehat{TC}_{4,2}$ converges to zero as the sample size $N$ approaches infinity under the null hypothesis $H_0$. Conversely, under an alternative distribution on $[0,1]$ with an absolutely continuous cdf $H$, the absolute value of $\widehat{TC}_{4,2}$ converges to a value greater than zero with probability one as $N \to +\infty$. Based on these properties, for any significance level $\alpha$ and a finite sample size $N$, we reject the null hypothesis if the test statistic $|\widehat{TC}_{4,2}|$ exceeds the critical value $\widehat{TC}_{4,2}(1-\alpha)$. Since the values of $\widehat{TC}_{4,2}$ are influenced by the sample size and the window parameter $m$, its asymptotic distribution is complex and not easily analyzable theoretically. To address this, we employed the Monte Carlo method, generating 10,000 samples of sizes $N = 5, 10, 20, 30, 40, 50, 100$ from the null distribution, and determined the $(1-\alpha)$-th quantile to serve as the critical value for the significance level $\alpha$. We selected $m$ using the heuristic formula $m = [N/2] - 1$.
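The Monte Carlo determination of critical values can be sketched as follows. This is our illustration: we use far fewer replications than the paper's 10,000, the statistic is repeated here so the sketch is self-contained, and the window-size rule is our assumed reading of the heuristic, so the returned quantiles are indicative only.

```python
import random

def tc_hat_42(x, m):
    # test statistic of Equation (23), repeated for self-containment
    xs = sorted(x)
    N = len(xs)
    total = 0.0
    for l in range(1, N + 1):
        c = 1 + (l - 1) / m if l <= m else (2.0 if l <= N - m else 1 + (N - l) / m)
        v = l / (N + 1)
        lo = xs[max(l - m, 1) - 1]
        hi = xs[min(l + m, N) - 1]
        total += (72 * (v * (1 - v)) ** 2 - 1.2 * c) * m / (N * (hi - lo))
    return -total / (2 * N)

def critical_value(N, alpha=0.05, reps=2000, seed=7):
    # (1 - alpha) quantile of |TC^_{4,2}| under the uniform null
    rng = random.Random(seed)
    m = max(1, N // 2 - 1)   # assumed window-size heuristic
    stats = sorted(abs(tc_hat_42([rng.random() for _ in range(N)], m))
                   for _ in range(reps))
    return stats[min(int((1 - alpha) * reps), reps - 1)]
```

A table analogous to Tables 3 and 4 is obtained by looping over the sample sizes and significance levels of interest.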
Table 3 and Table 4 present the critical values corresponding to different sample sizes at significance levels α = 0.1 , 0.05 , 0.01 .

5.2. Power Comparisons

To compute the power of the tests, random samples taking values in the interval $(0,1)$ were generated from non-uniform distributions, namely beta, Kumaraswamy-type, and piecewise alternatives (see Cordeiro and De Castro [40]), whose supports lie between 0 and 1. The Monte Carlo study of the proposed test $\widehat{TC}_{4,2}$ was performed under nine alternative distributions. For each sample size $N$, 10,000 samples of size $N$ were generated from each alternative distribution, and the statistic $\widehat{TC}_{4,2}$ was then calculated. For level $\alpha$, the power of $\widehat{TC}_{4,2}$ was estimated by the proportion of the 10,000 samples that fell within the critical region. The distribution functions of the alternatives considered were as follows:
  • $A_k$: $H(x) = I_x(k,k)$, $0 \le x \le 1$,
  • $B_k$: $H(x) = 1 - (1-x)^{k}$, $0 \le x \le 1$,
  • $C_k$: $H(x) = \begin{cases} 2^{k-1}x^{k}, & 0 \le x \le 0.5,\\ 1 - 2^{k-1}(1-x)^{k}, & 0.5 < x \le 1, \end{cases}$
    for $k = 1.5, 2, 3$, where
    $I_x(a,b) = \frac{B(x;a,b)}{B(a,b)}, \qquad B(x;a,b) = \int_0^x t^{a-1}(1-t)^{b-1}\,dt$
denotes the regularized incomplete beta function, and $B(a,b)$ is the complete beta function. Figure 4 illustrates the pdfs of the alternative hypotheses $A_k$, $B_k$, and $C_k$. As evident from the figure, alternatives $A$ and $C$ are both more likely than the uniform distribution to produce values close to 0.5, with alternative $C$ exhibiting the stronger concentration around 0.5, suggesting a higher probability of observing values very near the midpoint. Alternative $B$, on the other hand, tends to produce values closer to 0 than expected under the null hypothesis. To evaluate the performance of our proposed test statistic, we compare its power to that of several widely used statistics under the same alternative hypotheses. These include the following:
  • The Kolmogorov–Smirnov statistic (Kolmogorov [41] and Smirnov [42]):
    $KS = \max\left\{\max_{1 \le l \le N}\left(\frac{l}{N} - X_{l:N}\right),\ \max_{1 \le l \le N}\left(X_{l:N} - \frac{l-1}{N}\right)\right\},$
  • The Anderson–Darling statistic (Anderson and Darling [43]):
    $AD = -\sum_{l=1}^{N}\frac{2l-1}{N}\left[\log X_{l:N} + \log\left(1 - X_{N-l+1:N}\right)\right] - N,$
  • The Cramér–von Mises statistic (Cramér [44]; von Mises [45]):
    $CM = \sum_{l=1}^{N}\left(X_{l:N} - \frac{2l-1}{2N}\right)^{2} + \frac{1}{12N},$
  • The Dudewicz–van der Meulen entropy statistic, based on Vasicek's [35] entropy estimator:
    $ENT = \frac{1}{N}\sum_{l=1}^{N}\log\left[\frac{N}{2m}\left(X_{l+m:N} - X_{l-m:N}\right)\right],$
  • The Kuiper statistic:
    $V = \max_{1 \le l \le N}\left(\frac{l}{N} - X_{l:N}\right) + \max_{1 \le l \le N}\left(X_{l:N} - \frac{l-1}{N}\right),$
  • The Qiu and Jia statistic (Qiu and Jia [21]):
    $TU = JQ_{2mn} = -\frac{1}{2N}\sum_{l=1}^{N}\frac{c_l\,m}{N\left(X_{l+m:N} - X_{l-m:N}\right)},$
where c l is defined in (22).
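The distribution-free competitors $KS$, $CM$, and $V$ admit direct implementations. The following sketch (ours, not the authors' code) mirrors the formulas above; the entropy- and extropy-based statistics additionally require the spacing-based density estimates already sketched earlier.

```python
def ks_stat(x):
    # Kolmogorov-Smirnov distance between the empirical cdf and the U(0,1) cdf
    xs = sorted(x)
    N = len(xs)
    d_plus = max(l / N - xs[l - 1] for l in range(1, N + 1))
    d_minus = max(xs[l - 1] - (l - 1) / N for l in range(1, N + 1))
    return max(d_plus, d_minus)

def kuiper_stat(x):
    # Kuiper statistic V = D+ + D-
    xs = sorted(x)
    N = len(xs)
    return (max(l / N - xs[l - 1] for l in range(1, N + 1))
            + max(xs[l - 1] - (l - 1) / N for l in range(1, N + 1)))

def cvm_stat(x):
    # Cramer-von Mises statistic
    xs = sorted(x)
    N = len(xs)
    return sum((xs[l - 1] - (2 * l - 1) / (2 * N)) ** 2
               for l in range(1, N + 1)) + 1 / (12 * N)
```

For the three-point sample $\{0.1, 0.4, 0.7\}$ these evaluate to $KS = 0.3$, $V = 0.4$, and $CM = 0.06$, which is easy to verify by hand.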
The power of the proposed test is influenced by both the window size $m$ and the specific alternative distribution, making it challenging to determine the optimal value of $m$ that maximizes power across all alternatives. Consequently, we adopted the heuristic formula $m = [N/2] - 1$ to select $m$, aiming for good (though not necessarily optimal) power across all alternative distributions. Figure 5, Figure 6 and Figure 7 present the results of the power comparisons. As shown in these figures, our $\widehat{TC}_{4,2}$ test exhibited strong performance against alternative $A$, especially as the sample size $N$ increased. However, it performed less favorably against alternative $B$. For alternative $C$, our $\widehat{TC}_{4,2}$ test also performed well, with performance comparable to that of the Dudewicz–van der Meulen ENT test for $C_3$.
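Reproducing such a power study requires draws from the alternatives. $B_k$ and $C_k$ have closed-form inverse cdfs, and $A_k$ is the $\mathrm{Beta}(k,k)$ distribution; a minimal sampling sketch (ours) is:

```python
import random

def sample_Ak(k, rng):
    # A_k: H(x) = I_x(k, k), i.e., the Beta(k, k) distribution
    return rng.betavariate(k, k)

def sample_Bk(k, rng):
    # B_k: H(x) = 1 - (1 - x)^k, inverted as H^{-1}(u) = 1 - (1 - u)^{1/k}
    return 1 - (1 - rng.random()) ** (1 / k)

def sample_Ck(k, rng):
    # C_k: invert each branch of the piecewise cdf
    u = rng.random()
    if u <= 0.5:
        return (u / 2 ** (k - 1)) ** (1 / k)
    return 1 - ((1 - u) / 2 ** (k - 1)) ** (1 / k)
```

The empirical power at level $\alpha$ is then the fraction of simulated samples whose statistic falls in the critical region determined under the null.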
Example 5.
(Real data application). As discussed in Illowsky and Dean [46] on page 317, Table 5.1, we consider a dataset containing the smiling times, in seconds, of 55 babies. These smiling times are assumed to follow a uniform distribution between 0 and 23 s, inclusive.
Dataset: 10.4, 19.6, 18.8, 13.9, 17.8, 16.8, 21.6, 17.9, 12.5, 11.1, 4.9, 12.8, 14.8, 22.8, 20.0, 15.9, 16.3, 13.4, 17.1, 14.5, 19.0, 22.8, 1.3, 0.7, 8.9, 11.9, 10.9, 7.3, 5.9, 3.7, 17.9, 19.2, 9.8, 5.8, 6.9, 2.6, 5.8, 21.7, 11.8, 3.4, 2.1, 4.5, 6.3, 10.7, 8.9, 9.4, 9.4, 7.6, 10.0, 3.3, 6.7, 7.8, 11.6, 13.8, 18.6.
This implies that any smiling time between 0 and 23 s, inclusive, is equally likely. By applying the transformation $x_l/23$ to the given data $x_l$, $l = 1, 2, \dots, 55$, we standardized the data to the interval $(0,1)$. For these transformed data, the calculated value of the test statistic was $\widehat{TC}_{4,2} = 0.1522$, while the critical value at the 0.05 significance level was $\widehat{TC}_{4,2}(0.95) = 0.1582$. Since the test statistic fell within the acceptance region, we fail to reject the null hypothesis and conclude that the data are consistent with a uniform distribution on $(0,1)$.
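As a quick, self-contained companion check (ours; the paper's decision is based on $\widehat{TC}_{4,2}$, not on this statistic), one can rescale the data and compute the Kolmogorov–Smirnov distance to the uniform cdf, which is likewise small for this dataset:

```python
def ks_stat(x):
    # Kolmogorov-Smirnov distance between the empirical cdf and the U(0,1) cdf
    xs = sorted(x)
    N = len(xs)
    return max(max(l / N - xs[l - 1] for l in range(1, N + 1)),
               max(xs[l - 1] - (l - 1) / N for l in range(1, N + 1)))

# smiling times (seconds) from Illowsky and Dean [46], as listed above
smile = [10.4, 19.6, 18.8, 13.9, 17.8, 16.8, 21.6, 17.9, 12.5, 11.1, 4.9, 12.8,
         14.8, 22.8, 20.0, 15.9, 16.3, 13.4, 17.1, 14.5, 19.0, 22.8, 1.3, 0.7,
         8.9, 11.9, 10.9, 7.3, 5.9, 3.7, 17.9, 19.2, 9.8, 5.8, 6.9, 2.6, 5.8,
         21.7, 11.8, 3.4, 2.1, 4.5, 6.3, 10.7, 8.9, 9.4, 9.4, 7.6, 10.0, 3.3,
         6.7, 7.8, 11.6, 13.8, 18.6]
scaled = [v / 23 for v in smile]   # the x_l / 23 transformation from the text
```

The rescaled values all lie strictly inside $(0,1)$, and the empirical cdf tracks the uniform cdf closely, consistent with the non-rejection reported above.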

6. Conclusions

Extropy and its various generalizations have emerged as powerful tools with widespread applications across diverse scientific and engineering fields, including information theory, economics, communication theory, and physics. For instance, negative cumulative extropy has been utilized in the analysis of stock markets in OECD countries by Tahmasebi and Toomaj [47], while Tsallis extropy has found application in pattern recognition by Balakrishnan et al. [48]. Furthermore, fractional Deng extropy has been applied to classification problems by Kazemi et al. [49], and various extropy measures have been employed in compressive sensing as shown by Tahmasebi et al. [50]. The objective of this study was to extend the understanding and application of extropy to the context of consecutive r-out-of-n:F systems. These systems are fundamental in reliability engineering and represent a critical class of systems, where failure occurs if r consecutive components fail. Understanding the information content and uncertainty within such systems, as quantified by extropy, is crucial for assessing their performance, reliability, and overall robustness. Our investigation successfully established a significant relationship between the extropy of consecutive r-out-of-n:F systems derived from continuous distributions and those obtained from uniform distributions, providing a foundational insight into their behavior. Recognizing the inherent challenges in deriving closed-form expressions for extropy, especially in scenarios involving a large number of system components or complex component distributions, we introduced a comprehensive range of useful bounds. These bounds offer practical tools for effectively estimating the extropy of consecutive r-out-of-n:F systems, even when exact calculations are intractable. Furthermore, we proposed a novel extropy estimator specifically tailored for consecutive r-out-of-n:F systems, designed for direct application in practical settings. 
As a compelling example of its practical utility, we showcased the application of the proposed extropy estimator in a goodness-of-fit test for the standard uniform distribution. While many existing test statistics for assessing uniformity are built upon other uncertainty measures, such as those discussed by Blinov and Lemeshko [51], by Mohamed et al. [52,53] using fractional entropy and cumulative residual Tsallis entropy, and by Noughabi [54] applying cumulative residual entropy, the use of extropy offers distinct advantages. In the context of consecutive systems, an extropy-based test can be particularly sensitive to deviations from uniformity that manifest as specific patterns, with the number of components n and the number r of consecutive failed components serving as flexibility parameters. This provides a complementary and potentially more nuanced perspective on assessing uniformity, especially when the underlying data exhibit characteristics relevant to system reliability or sequence-dependent events. The extropy-based test thus provides an alternative and robust method for evaluating the fit of data to a uniform distribution, leveraging the unique properties of extropy in capturing information content.

Author Contributions

G.A.: visualization, investigation, validation, resources, investigation, and conceptualization; F.A.: writing—review and editing, writing—original draft, visualization, validation, resources, investigation, and conceptualization; M.K.: writing—review and editing, writing—original draft, visualization, validation, software, resources, funding acquisition, data curation, and conceptualization. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

This study did not involve human participants or animals.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Acknowledgments

The authors would like to sincerely thank the two anonymous reviewers for their valuable comments and constructive suggestions, which greatly contributed to improving the quality of this paper.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar]
  2. Lad, F.; Sanfilippo, G.; Agro, G. Extropy: Complementary dual of entropy. Stat. Sci. 2015, 30, 40–58. [Google Scholar] [CrossRef]
  3. Agro, G.; Lad, F.; Sanfilippo, G. Sequentially forecasting economic indices using mixture linear combinations of EP distributions. J. Data Sci. 2010, 8, 101–126. [Google Scholar] [CrossRef]
  4. Capotorti, A.; Regoli, G.; Vattari, F. Correction of incoherent conditional probability assessments. Int. J. Approx. Reason. 2010, 51, 718–727. [Google Scholar] [CrossRef]
  5. Gneiting, T.; Raftery, A.E. Strictly proper scoring rules, prediction, and estimation. J. Am. Stat. Assoc. 2007, 102, 359–378. [Google Scholar] [CrossRef]
  6. Toomaj, A.; Hashempour, M.; Balakrishnan, N. Extropy: Characterizations and dynamic versions. J. Appl. Probab. 2023, 60, 1333–1351. [Google Scholar] [CrossRef]
  7. Yang, J.; Xia, W.; Hu, T. Bounds on extropy with variational distance constraint. Probab. Eng. Informational Sci. 2019, 33, 186–204. [Google Scholar] [CrossRef]
  8. Jung, K.-H.; Kim, H. Linear consecutive-k-out-of-n: F system reliability with common-mode forced outages. Reliab. Eng. Syst. Saf. 1993, 41, 49–55. [Google Scholar] [CrossRef]
  9. Shen, J.; Zuo, M.J. Optimal design of series consecutive-k-out-of-n: G systems. Reliab. Eng. Syst. Saf. 1994, 45, 277–283. [Google Scholar] [CrossRef]
  10. Kuo, W.; Zuo, M.J. Optimal Reliability Modeling: Principles and Applications; John Wiley and Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  11. Chang, G.J.; Cui, L.; Hwang, F.K. Reliabilities of Consecutive-k Systems; Springer Nature: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  12. Boland, P.J.; Samaniego, F.J. Stochastic ordering results for consecutive k-out-of-n: F systems. IEEE Trans. Reliab. 2004, 53, 7–10. [Google Scholar] [CrossRef]
  13. Eryılmaz, S. Mixture representations for the reliability of consecutive-k systems. Math. Comput. Model. 2010, 51, 405–412. [Google Scholar] [CrossRef]
  14. Eryilmaz, S. Conditional lifetimes of consecutive k-out-of-n systems. IEEE Trans. Reliab. 2010, 59, 178–182. [Google Scholar] [CrossRef]
  15. Wong, K.M.; Chen, S. The entropy of ordered sequences and order statistics. IEEE Trans. Inf. Theory 1990, 36, 276–284. [Google Scholar] [CrossRef]
  16. Park, S. The entropy of consecutive order statistics. IEEE Trans. Inf. Theory 1995, 41, 2003–2007. [Google Scholar] [CrossRef]
  17. Ebrahimi, N.; Soofi, E.S.; Soyer, R. Information measures in perspective. Int. Stat. Rev. 2010, 78, 383–412. [Google Scholar] [CrossRef]
  18. Zarezadeh, S.; Asadi, M. Results on residual Rényi entropy of order statistics and record values. Inf. Sci. 2010, 180, 4195–4206. [Google Scholar] [CrossRef]
  19. Baratpour, S.; Ahmadi, J.; Arghami, N.R. Characterizations based on Rényi entropy of order statistics and record values. J. Stat. Plan. Inference 2008, 138, 2544–2551. [Google Scholar] [CrossRef]
  20. Qiu, G. The extropy of order statistics and record values. Stat. Probab. Lett. 2017, 120, 52–60. [Google Scholar] [CrossRef]
  21. Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametric Stat. 2018, 30, 182–196. [Google Scholar] [CrossRef]
  22. Qiu, G.; Jia, K. The residual extropy of order statistics. Stat. Probab. Lett. 2018, 133, 15–22. [Google Scholar] [CrossRef]
  23. Kayid, M.; Alshehri, M.A. System level extropy of the past life of a coherent system. J. Math. 2023, 2023, 9912509. [Google Scholar] [CrossRef]
  24. Shrahili, M.; Kayid, M. Excess lifetime extropy of order statistics. Axioms 2023, 12, 1024. [Google Scholar] [CrossRef]
  25. Shrahili, M.; Kayid, M.; Mesfioui, M. Stochastic inequalities involving past extropy of order statistics and past extropy of record values. AIMS Math. 2024, 9, 5827–5849. [Google Scholar] [CrossRef]
  26. Kayid, M.; Alshehri, M.A. Excess lifetime extropy for a mixed system at the system level. AIMS Math 2023, 8, 16137–16150. [Google Scholar] [CrossRef]
  27. Alrewely, F.; Kayid, M. Extropy analysis in consecutive r-out-of-n: G systems with applications in reliability and exponentiality testing. AIMS Math. 2025, 10, 6040–6068. [Google Scholar] [CrossRef]
  28. Navarro, J.; Eryilmaz, S. Mean residual lifetimes of consecutive-k-out-of-n systems. J. Appl. Probab. 2007, 44, 82–98. [Google Scholar] [CrossRef]
  29. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  30. Bagai, I.; Kochar, S.C. On tail-ordering and comparison of failure rates. Commun. Stat. Theory Methods 1986, 15, 1377–1388. [Google Scholar] [CrossRef]
  31. Husseiny, I.A.; Barakat, H.M.; Nagy, M.; Mansi, A.H. Analyzing symmetric distributions by utilizing extropy measures based on order statistics. J. Radiat. Res. Appl. Sci. 2024, 17, 101100. [Google Scholar] [CrossRef]
  32. Gupta, N.; Chaudhary, S.K. Some characterizations of continuous symmetric distributions based on extropy of record values. Stat. Pap. 2024, 65, 291–308. [Google Scholar] [CrossRef]
  33. Kamps, U. 10 characterizations of distributions by recurrence relations and identities for moments of order statistics. In Handbook of Statistics; Confederation of Indian Industry: New Delhi, India, 1998; Volume 16, pp. 291–311. [Google Scholar]
  34. Hwang, J.S.; Lin, G.D. On a generalized moment problem. II. Proc. Am. Math. Soc. 1984, 91, 577–580. [Google Scholar] [CrossRef]
  35. Vasicek, O. A test for normality based on sample entropy. J. R. Stat. Soc. Ser. B: Stat. Methodol. 1976, 38, 54–59. [Google Scholar] [CrossRef]
  36. Grzegorzewski, P.; Wieczorkowski, R. Entropy-based goodness-of-fit test for exponentiality. Commun. Stat. Theory Methods 1999, 28, 1183–1202. [Google Scholar] [CrossRef]
  37. Park, S. A goodness-of-fit test for normality based on the sample entropy of order statistics. Stat. Probab. Lett. 1999, 44, 359–363. [Google Scholar] [CrossRef]
  38. Xiong, P.; Zhuang, W.; Qiu, G. Testing exponentiality based on the extropy of record values. J. Appl. Stat. 2022, 49, 782–802. [Google Scholar] [CrossRef]
  39. Ebrahimi, N.; Pflughoeft, K.; Soofi, E.S. Two measures of sample entropy. Stat. Probab. Lett. 1994, 20, 225–234. [Google Scholar] [CrossRef]
  40. Cordeiro, G.M.; De Castro, M. A new family of generalized distributions. J. Stat. Comput. Simul. 2011, 81, 883–898. [Google Scholar] [CrossRef]
  41. Kolmogorov, A.N. Sulla determinazione empirica di una legge di distribuzione. Giorn. Dell’Inst. Ital. Degli Atti 1933, 4, 89–91. [Google Scholar]
  42. Smirnov, N.V. Estimate of deviation between empirical distribution functions in two independent samples. Bull. Mosc. Univ. 1939, 2, 3–16. [Google Scholar]
  43. Anderson, T.W.; Darling, D.A. A test of goodness of fit. J. Am. Stat. Assoc. 1954, 49, 765–769. [Google Scholar]
  44. Cramér, H. On the Composition of Elementary Errors: Statistical Applications; Almqvist and Wiksell: Stockholm, Sweden, 1928. [Google Scholar]
  45. von Mises, R. Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statistik und theoretischen Physik; F. Deuticke: Vienna, Austria, 1931. [Google Scholar]
  46. Illowsky, B.; Dean, S. Introductory Statistics; OpenStax: Houston, TX, USA, 2018. [Google Scholar]
  47. Tahmasebi, S.; Toomaj, A. On negative cumulative extropy with applications. Commun. Stat. Theory Methods 2022, 51, 5025–5047. [Google Scholar] [CrossRef]
  48. Balakrishnan, N.; Buono, F.; Longobardi, M. On Tsallis extropy with an application to pattern recognition. Stat. Probab. Lett. 2022, 180, 109241. [Google Scholar] [CrossRef]
  49. Kazemi, M.R.; Tahmasebi, S.; Buono, F.; Longobardi, M. Fractional Deng entropy and extropy and some applications. Entropy 2021, 23, 623. [Google Scholar] [CrossRef] [PubMed]
  50. Tahmasebi, S.; Kazemi, M.R.; Keshavarz, A.; Jafari, A.A.; Buono, F. Compressive sensing using extropy measures of ranked set sampling. Math. Slovaca 2023, 73, 245–262. [Google Scholar]
  51. Blinov, P.Y.; Lemeshko, B.Y. A review of the properties of tests for uniformity. In Proceedings of the 2014 12th International Conference on Actual Problems of Electronic Instrument Engineering, Novosibirsk, Russia, 2–4 October 2014. [Google Scholar]
  52. Mohamed, M.S.; Barakat, H.M.; Alyami, S.A.; Abd Elgawad, M.A. Fractional entropy-based test of uniformity with power comparisons. J. Math. 2021, 2021, 5331260. [Google Scholar] [CrossRef]
  53. Mohamed, M.S.; Barakat, H.M.; Alyami, S.A.; Abd Elgawad, M.A. Cumulative residual Tsallis entropy-based test of uniformity and some new findings. Mathematics 2022, 10, 771. [Google Scholar] [CrossRef]
  54. Noughabi, H.A. Cumulative residual entropy applied to testing uniformity. Commun. Stat. Theory Methods 2022, 51, 4151–4161. [Google Scholar] [CrossRef]
Figure 1. The precise values of J T r n : F as a function of α for various values of r and n = 8 , as demonstrated in Example 1.
Figure 2. The exact values of J T r n : F (black color) and lower bounds from Theorem 4 (Part (i), red color; Part (ii), green color) for a mixture of two Pareto distributions, as demonstrated in Example 3.
Figure 3. The exact values of J T 6 8 : F (solid line) and lower bounds (dashed line) from Theorem 6 for Fréchet distribution.
Figure 4. The probability density functions of A k , B k , and C k distributions.
Figure 5. Power comparisons of the test statistics for A 1.5 (left), A 2 (middle), and A 3 (right) at significance level α = 0.05 .
Figure 6. Power comparisons of the test statistics for B 1.5 (left), B 2 (middle), and B 3 (right) at significance level α = 0.05 .
Figure 7. Power comparisons of the test statistics for C 1.5 (left), C 2 (middle), and C 3 (right) at significance level α = 0.05 .
Table 1. The bias and RMSE values of the estimate of $J(T_{n-i|n:F})$, with standard exponential distribution component lifetimes for different choices of $i$ and $n$.

n  i     N = 20                N = 30                N = 40                N = 50                N = 100
         Bias       RMSE       Bias       RMSE       Bias       RMSE       Bias       RMSE       Bias       RMSE
5  0  −0.145895  0.050749   −0.089774  0.018277   −0.059770  0.009551   −0.044811  0.005837   −0.011864  0.001097
5  1  −0.078625  0.018531   −0.041885  0.006271   −0.026529  0.003121   −0.017465  0.001909   −0.003345  0.000571
5  2  −0.036039  0.006134   −0.018140  0.003219   −0.012438  0.002164   −0.009271  0.001516   −0.004116  0.000757
6  0  −0.177926  0.071332   −0.111349  0.028405   −0.083625  0.015152   −0.064254  0.009374   −0.020052  0.001779
6  1  −0.118696  0.031228   −0.066450  0.011488   −0.041432  0.005272   −0.029434  0.003241   −0.006391  0.000766
6  2  −0.055588  0.011645   −0.026799  0.003965   −0.015054  0.002124   −0.009608  0.001451   −0.000843  0.000593
6  3  −0.028573  0.007610   −0.019522  0.004667   −0.014865  0.003251   −0.009383  0.002604   −0.007302  0.001241
7  0  −0.215926  0.105451   −0.143378  0.041398   −0.105595  0.022722   −0.083527  0.015081   −0.027646  0.002861
7  1  −0.151213  0.053749   −0.089206  0.020752   −0.061046  0.010124   −0.044897  0.006181   −0.011651  0.001151
7  2  −0.087616  0.021155   −0.045373  0.007131   −0.026959  0.003476   −0.017064  0.002072   −0.001945  0.000608
7  3  −0.037821  0.008069   −0.016963  0.003372   −0.008987  0.002098   −0.003187  0.001562   −0.000514  0.000770
8  0  −0.245875  0.133512   −0.169920  0.058762   −0.127069  0.031831   −0.101061  0.021011   −0.038070  0.004022
8  1  −0.188737  0.081671   −0.116732  0.032998   −0.084663  0.016489   −0.063760  0.010117   −0.018165  0.001815
8  2  −0.124965  0.036731   −0.068022  0.013715   −0.042473  0.006675   −0.031082  0.003656   −0.004798  0.000838
8  3  −0.065324  0.015026   −0.028932  0.005053   −0.014906  0.002663   −0.007793  0.001549    0.001065  0.000666
8  4  −0.024499  0.008091   −0.011365  0.003940   −0.005889  0.002853   −0.001748  0.002046   −0.002914  0.001145
Table 2. The bias and RMSE values of the estimate of J(T_{n−i|n:F}), with standard uniform distribution component lifetimes, for different choices of i and n.
| n | i | Bias (N = 20) | RMSE (N = 20) | Bias (N = 30) | RMSE (N = 30) | Bias (N = 40) | RMSE (N = 40) | Bias (N = 50) | RMSE (N = 50) | Bias (N = 100) | RMSE (N = 100) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 0 | −0.936094 | 1.444092 | −0.793851 | 1.144772 | −0.713842 | 1.004388 | −0.664756 | 0.905735 | −0.512761 | 0.628195 |
| 5 | 1 | −0.590450 | 0.871130 | −0.454252 | 0.658060 | −0.386377 | 0.536204 | −0.344834 | 0.463333 | −0.238968 | 0.311527 |
| 5 | 2 | −0.260561 | 0.347903 | −0.188247 | 0.236120 | −0.147586 | 0.187416 | −0.124121 | 0.159152 | −0.075071 | 0.097070 |
| 6 | 0 | −1.115787 | 1.785087 | −0.966685 | 1.404569 | −0.898500 | 1.260244 | −0.849516 | 1.141714 | −0.655093 | 0.859432 |
| 6 | 1 | −0.789329 | 1.243988 | −0.640423 | 0.918773 | −0.574205 | 0.796915 | −0.527410 | 0.702302 | −0.369448 | 0.471387 |
| 6 | 2 | −0.461131 | 0.652054 | −0.335695 | 0.465028 | −0.276477 | 0.379773 | −0.235081 | 0.320505 | −0.152461 | 0.199439 |
| 6 | 3 | −0.195613 | 0.259936 | −0.131979 | 0.179150 | −0.101657 | 0.142935 | −0.082456 | 0.119049 | −0.048043 | 0.078041 |
| 7 | 0 | −1.300057 | 2.517281 | −1.340618 | 2.220741 | −1.390172 | 2.041344 | −1.351307 | 1.832077 | −1.174914 | 1.460273 |
| 7 | 1 | −0.974527 | 1.521404 | −0.835748 | 1.250773 | −0.752920 | 1.064358 | −0.703541 | 0.932074 | −0.515872 | 0.670624 |
| 7 | 2 | −0.663265 | 0.980888 | −0.519456 | 0.705243 | −0.435916 | 0.605943 | −0.388588 | 0.509926 | −0.257060 | 0.332286 |
| 7 | 3 | −0.347348 | 0.489964 | −0.246278 | 0.340897 | −0.191480 | 0.257628 | −0.155908 | 0.216469 | −0.094337 | 0.130185 |
| 8 | 0 | −1.280009 | 2.387094 | −1.292371 | 1.970096 | −1.220243 | 1.730366 | −1.173613 | 1.614975 | −0.998048 | 1.256077 |
| 8 | 1 | −1.149068 | 1.823176 | −1.009816 | 1.512126 | −0.934285 | 1.290326 | −0.866499 | 1.153410 | −0.657747 | 0.844607 |
| 8 | 2 | −0.864234 | 1.323735 | −0.723328 | 0.999044 | −0.611858 | 0.838130 | −0.560017 | 0.738741 | −0.393868 | 0.510655 |
| 8 | 3 | −0.583059 | 0.817869 | −0.411258 | 0.576096 | −0.329835 | 0.453799 | −0.285657 | 0.389149 | −0.176501 | 0.234242 |
| 8 | 4 | −0.283538 | 0.391618 | −0.192602 | 0.260096 | −0.139642 | 0.198825 | −0.111210 | 0.170093 | −0.061858 | 0.107112 |
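Tables 1 and 2 report Monte Carlo bias and RMSE, i.e., the average deviation of the estimate from the true value and the root of its mean squared deviation over repeated samples. A hedged sketch of that computation follows; the sample mean of Exp(1) data is used as a generic stand-in estimator with a known target, not the paper's extropy estimator.

```python
import numpy as np

def mc_bias_rmse(estimator, sampler, true_value, N, n_rep=1000, seed=0):
    """Monte Carlo bias and RMSE of `estimator` over n_rep samples of size N.

    estimator: callable mapping a 1-D sample to a scalar estimate
    sampler:   callable mapping (rng, N) to a sample of size N
    """
    rng = np.random.default_rng(seed)
    est = np.array([estimator(sampler(rng, N)) for _ in range(n_rep)])
    bias = float(est.mean() - true_value)
    rmse = float(np.sqrt(np.mean((est - true_value) ** 2)))
    return bias, rmse

# Illustration with a known target: the sample mean of Exp(1) data
# estimates the true mean 1 (a stand-in for the extropy estimator).
bias, rmse = mc_bias_rmse(
    np.mean, lambda rng, N: rng.exponential(1.0, N), true_value=1.0, N=20
)
```

Plugging in the extropy estimator and the exact value of J(·) for a chosen lifetime distribution would reproduce one cell of Tables 1 and 2.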
Table 3. Critical values of the TĈ_{4,2} statistic at significance level α = 0.05.
| m | N = 5 | N = 10 | N = 20 | N = 30 | N = 40 | N = 50 | N = 100 |
|---|---|---|---|---|---|---|---|
| 2 | 1.055246 | 0.378625 | 0.334115 | 0.292918 | 0.235903 | 0.214288 | 0.145114 |
| 3 |  | 0.291554 | 0.230632 | 0.203458 | 0.181164 | 0.164139 | 0.111786 |
| 4 |  | 0.333465 | 0.182454 | 0.170887 | 0.158504 | 0.142908 | 0.100424 |
| 5 |  |  | 0.162901 | 0.151537 | 0.138488 | 0.128358 | 0.092997 |
| 6 |  |  | 0.161813 | 0.136521 | 0.127321 | 0.119330 | 0.091328 |
| 7 |  |  | 0.165962 | 0.127548 | 0.117327 | 0.114573 | 0.086915 |
| 8 |  |  | 0.183542 | 0.122604 | 0.110561 | 0.106489 | 0.084083 |
| 9 |  |  | 0.207359 | 0.123269 | 0.105909 | 0.098579 | 0.081979 |
| 10 |  |  |  | 0.127054 | 0.105783 | 0.098879 | 0.081248 |
| 11 |  |  |  | 0.136003 | 0.103538 | 0.094049 | 0.077391 |
| 12 |  |  |  | 0.146328 | 0.105827 | 0.093091 | 0.076731 |
| 13 |  |  |  | 0.159688 | 0.108399 | 0.093009 | 0.074217 |
| 14 |  |  |  | 0.177465 | 0.113563 | 0.094735 | 0.073368 |
| 15 |  |  |  |  | 0.121417 | 0.095208 | 0.071351 |
| 16 |  |  |  |  | 0.128991 | 0.097564 | 0.069159 |
| 17 |  |  |  |  | 0.138696 | 0.102574 | 0.069402 |
| 18 |  |  |  |  | 0.150457 | 0.106580 | 0.068427 |
| 19 |  |  |  |  | 0.163477 | 0.112738 | 0.067897 |
| 20 |  |  |  |  |  | 0.119261 | 0.066284 |
| 21 |  |  |  |  |  | 0.126934 | 0.065713 |
| 22 |  |  |  |  |  | 0.135583 | 0.065328 |
| 23 |  |  |  |  |  | 0.145161 | 0.065742 |
| 24 |  |  |  |  |  | 0.155907 | 0.067071 |
| 25 |  |  |  |  |  |  | 0.066916 |
| 26 |  |  |  |  |  |  | 0.068518 |
| 27 |  |  |  |  |  |  | 0.068396 |
| 28 |  |  |  |  |  |  | 0.069368 |
| 29 |  |  |  |  |  |  | 0.070967 |
| 30 |  |  |  |  |  |  | 0.072728 |
Table 4. Critical values of the TĈ_{4,2} statistic at significance level α = 0.01.
| m | N = 5 | N = 10 | N = 20 | N = 30 | N = 40 | N = 50 | N = 100 |
|---|---|---|---|---|---|---|---|
| 2 | 1.743562 | 0.677543 | 0.658744 | 0.482410 | 0.434795 | 0.368653 | 0.229218 |
| 3 |  | 0.386052 | 0.353718 | 0.328039 | 0.282377 | 0.248620 | 0.158054 |
| 4 |  | 0.405918 | 0.256050 | 0.255527 | 0.227769 | 0.203122 | 0.139741 |
| 5 |  |  | 0.217423 | 0.208839 | 0.192097 | 0.180964 | 0.135573 |
| 6 |  |  | 0.194913 | 0.179679 | 0.178653 | 0.166450 | 0.125934 |
| 7 |  |  | 0.197127 | 0.167431 | 0.162625 | 0.155882 | 0.121372 |
| 8 |  |  | 0.207123 | 0.156101 | 0.153641 | 0.145689 | 0.114386 |
| 9 |  |  | 0.229525 | 0.151309 | 0.140698 | 0.130960 | 0.110536 |
| 10 |  |  |  | 0.148928 | 0.132649 | 0.128669 | 0.110754 |
| 11 |  |  |  | 0.152661 | 0.127555 | 0.124188 | 0.106882 |
| 12 |  |  |  | 0.161572 | 0.124451 | 0.117477 | 0.102301 |
| 13 |  |  |  | 0.175248 | 0.127549 | 0.116759 | 0.097866 |
| 14 |  |  |  | 0.189690 | 0.131132 | 0.113071 | 0.096778 |
| 15 |  |  |  |  | 0.135145 | 0.114522 | 0.094484 |
| 16 |  |  |  |  | 0.141079 | 0.113447 | 0.094325 |
| 17 |  |  |  |  | 0.149820 | 0.115286 | 0.092486 |
| 18 |  |  |  |  | 0.160194 | 0.119796 | 0.090843 |
| 19 |  |  |  |  | 0.173629 | 0.124119 | 0.089592 |
| 20 |  |  |  |  |  | 0.128747 | 0.087058 |
| 21 |  |  |  |  |  | 0.135821 | 0.085291 |
| 22 |  |  |  |  |  | 0.144409 | 0.084226 |
| 23 |  |  |  |  |  | 0.153875 | 0.083061 |
| 24 |  |  |  |  |  | 0.163402 | 0.083231 |
| 25 |  |  |  |  |  |  | 0.083082 |
| 26 |  |  |  |  |  |  | 0.083062 |
| 27 |  |  |  |  |  |  | 0.083960 |
| 28 |  |  |  |  |  |  | 0.084174 |
| 29 |  |  |  |  |  |  | 0.085061 |
| 30 |  |  |  |  |  |  | 0.084448 |
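Critical values like those in Tables 3 and 4 are typically obtained by simulating the test statistic under the uniform null and taking the empirical (1 − α) quantile. A minimal sketch of that procedure follows; the max-spacing statistic is a hypothetical placeholder, not the TĈ_{4,2} statistic itself.

```python
import numpy as np

def critical_value(statistic, N, alpha=0.05, n_rep=5000, seed=1):
    """Upper-tail critical value of `statistic` under the uniform null,
    approximated by the empirical (1 - alpha) quantile of the statistic
    over n_rep simulated U(0, 1) samples of size N."""
    rng = np.random.default_rng(seed)
    null = np.array(
        [statistic(np.sort(rng.uniform(size=N))) for _ in range(n_rep)]
    )
    return float(np.quantile(null, 1.0 - alpha))

def max_spacing(u):
    # Largest gap between consecutive ordered values (with 0 and 1
    # appended); a hypothetical stand-in for the tabulated statistic.
    return np.diff(np.concatenate(([0.0], u, [1.0]))).max()

cv = critical_value(max_spacing, N=20, alpha=0.05)
```

Repeating this over a grid of (m, N) pairs, with the actual statistic plugged in, is the standard way tables of this kind are generated.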

Share and Cite

Alomani, G.; Alrewely, F.; Kayid, M. Information-Theoretic Reliability Analysis of Linear Consecutive r-out-of-n:F Systems and Uniformity Testing. Entropy 2025, 27, 590. https://doi.org/10.3390/e27060590
