Article

A Laplace Transform-Based Test for Exponentiality Against the EBUCL Class with Applications to Censored and Uncensored Data

by Walid B. H. Etman 1, Mahmoud E. Bakr 2,*, Arwa M. Alshangiti 2, Oluwafemi Samson Balogun 3 and Rashad M. EL-Sagheer 4,5

1 Faculty of Computer and Artificial Intelligence, Modern University for Technology and Information, Cairo 11585, Egypt
2 Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
3 Department of Computing, University of Eastern Finland, FI-70211, Finland
4 Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Cairo, Egypt
5 High Institute of Computers and Management Information Systems, First Statement, New Cairo 11865, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3379; https://doi.org/10.3390/math13213379
Submission received: 25 July 2025 / Revised: 17 September 2025 / Accepted: 1 October 2025 / Published: 23 October 2025

Abstract

This paper proposes a novel statistical test for evaluating exponentiality against the recently introduced EBUCL (Exponential Better than Used in Convex Laplace transform order) class of life distributions. The EBUCL class generalizes classical aging concepts and provides a flexible framework for modeling various non-exponential aging behaviors. The test is constructed using Laplace transform ordering and is shown to be effective in distinguishing exponential distributions from EBUCL alternatives. We derive the test statistic, establish its asymptotic properties, and assess its performance using Pitman’s asymptotic efficiency under standard alternatives, including Weibull, Makeham, and linear failure rate distributions. Critical values are obtained through extensive Monte Carlo simulations, and the power of the proposed test is evaluated and compared with existing methods. Furthermore, the test is extended to handle right-censored data, demonstrating its robustness and practical applicability. The effectiveness of the procedure is illustrated through several real-world datasets involving both censored and uncensored observations. The results confirm that the proposed test is a powerful and versatile tool for reliability and survival analysis.

1. Introduction

In reliability theory and related disciplines, numerous classes of life distributions have been introduced to model the aging behavior and failure patterns of systems and components. These classes have broad applications in fields such as biological sciences, biometrics, engineering, maintenance, and social sciences. Aging criteria derived from these distributions are instrumental for maintenance engineers and system designers in formulating optimal maintenance strategies. Additionally, stochastic comparisons among probability distributions play a foundational role in probability, statistics, and their applications in reliability, survival analysis, economics, and actuarial science. As a result, statisticians and reliability analysts have increasingly focused on modeling survival data using classifications of life distributions that capture various aging characteristics.
In this work, we focus on testing exponentiality against classes of life distributions that extend classical notions of aging. The exponential distribution, defined by the survival function F̄(x) = e^{-λx}, x ≥ 0, λ > 0, serves as the baseline model due to its memoryless property. However, many practical datasets deviate from this assumption, motivating the development of broader classes. Among these, the NBU (New Better than Used) and NBUE (New Better than Used in Expectation) classes capture systems with aging properties stronger than exponential. Their extensions, such as HNBUE (Harmonic New Better than Used in Expectation) and HNWUE (Harmonic New Worse than Used in Expectation), provide refined criteria for reliability modeling. More recent classes have been defined using Laplace or moment generating function orderings, including NBRUL (New Better than Renewal Used in Laplace), NBRUmgf (New Better than Renewal Used in Moment Generating Function), NBRULC (New Better than Renewal Used in Laplace Convex), and EBUmgf (Exponential Better Than Used in Moment Generating Function), each characterizing non-exponential behaviors via convexity or stochastic ordering principles. Other related classes, such as UBAC(2) (Used Better Than Aged in Increasing Concave Ordering), further enrich the framework of stochastic comparison. A clear understanding of these classes is central to our study, as the proposed EBUCL class builds upon and generalizes them using Laplace transform ordering. Introducing these classes at the outset highlights the motivation for our test and places it in the broader context of exponentiality testing.
The study of aging properties and life distribution classes has a long-standing history in reliability theory. Early foundational work by Bryson and Siddiqui [1] introduced formal criteria for aging, laying the groundwork for subsequent classification and comparison of life distributions. Barlow and Proschan [2] further advanced this field by establishing a comprehensive statistical theory of reliability and life testing, including essential aging concepts and stochastic ordering. Klefsjo [3] contributed to this area by defining the HNBUE and HNWUE classes, which capture nuanced forms of aging beyond traditional hazard rate analysis. Building on this line of research, Kumazawa [4] developed a class of test statistics specifically designed to assess whether a system is new or better than used, providing tools for empirical evaluation of aging behavior. Similarly, Cuparić and Milošević [5] discussed new characterization-based exponential tests for randomly censored data.
In recent years, more research has focused on developing reliable statistical tools to tell apart exponential distributions from other types of life distributions, especially in reliability analysis. One key area uses Laplace transform methods to identify and test for different aging properties in lifetime distributions. Hassan et al. [6] suggested a Laplace transform method for testing whether data follow an exponential pattern or come from the NBRU m g f class, showing that this approach can handle more complex failure patterns that do not fit the basic exponential model.
Etman et al. [7] introduced a novel test for exponentiality against the EBUCL class, an emerging life distribution model where members are ordered by the convexity of their Laplace transforms (a mathematical way to compare aging properties). Their work emphasized the utility of this class in modeling aging patterns that are asymmetric (aging differently over time) and non-monotonic (not consistently increasing or decreasing), particularly in the context of sustainability and reliability data. Further expansion of Laplace-based testing was provided by Etman et al. [8], who formalized the NBRULC class, another statistical framework for reliability data, and constructed goodness-of-fit procedures suitable for both censored (incomplete) and uncensored (complete) data. Their findings reinforced the adaptability of Laplace transform-based methods for handling incomplete or irregular data often seen in reliability experiments.
Parallel to these developments, many authors have made substantial contributions through a series of works focusing on nonparametric hypothesis testing for life distribution analysis. For example, Mohamed and Mansour [9] proposed a test against the NBUC (New Better than Used in Convex) class with practical implications for the health and engineering sectors. Building on this, El-Arishy et al. [10] offered characterizations of (DLTTF) (Decreasing The Laplace Transform of The Time to Failure) classes, reinforcing the applicability of Laplace-based methods for detecting different aging behaviors. Similarly, Gadallah [11] adopted a comparable approach by proposing testing for the EBU m g f class.
Several works have introduced new distributional classes and testing procedures using the Laplace transform as a central analytical tool. Abu-Youssef and El-Toony [12] developed a novel class of life distributions grounded in Laplace transforms, offering practical modeling strategies. In a related effort, Bakr and Al-Babtain [13] focused on nonparametric tests for unknown age classes using medical datasets, illustrating the adaptability of these tests in real-life settings. Atallah et al. [14] and Al-Gashgari et al. [15] presented new tests for exponentiality against the NBU m g f (New Better Than Used in Moment Generating Function) and NBUCA (New Better than Used in the Increasing Convex Average) classes, respectively, while Abu-Youssef et al. [16] proposed a nonparametric test for the UBAC(2) class, each leveraging Laplace transform techniques to capture nuanced aging characteristics.
Further emphasizing the practical relevance of these methods, Mansour [17,18] applied nonparametric exponentiality tests to evaluate treatment effects and clinical data, revealing the critical role of these tools in medical statistics. Finally, Abu-Youssef and Bakr [19] examined the Laplace transform order for unknown age classes, enhancing the methodological toolkit available for reliability analysts and statisticians working with complex or censored data.
The aim of this paper is to develop a new nonparametric test for assessing exponentiality against the EBUCL class of life distributions by utilizing Laplace transform ordering. The study seeks to establish the theoretical foundation of the proposed test, including its asymptotic distribution and Pitman’s asymptotic efficiency under standard alternatives such as the Weibull, Makeham, and linear failure rate distributions. To evaluate its practical effectiveness, the test’s performance is examined through extensive Monte Carlo simulations, with critical values and power estimates compared against existing methods. Additionally, the test is extended to handle right-censored data, broadening its applicability to real-world scenarios in reliability and survival analysis. The final objective is to demonstrate the utility of the proposed approach through empirical applications involving both censored and uncensored datasets.
The motivation for this study arises from the limitations of the exponential distribution in modeling real-world lifetime data, particularly in reliability and survival analysis, where systems often exhibit aging behaviors not captured by the memoryless property of the exponential model. While various alternative aging classes have been proposed, the recently introduced EBUCL class provides a more flexible and robust framework for describing non-monotonic and asymmetric aging patterns. However, existing tests for exponentiality often lack the sensitivity or generality needed to detect such departures effectively. This paper addresses this gap by developing a new nonparametric test based on Laplace transform ordering, aiming to enhance the ability to distinguish exponential distributions from those in the EBUCL class. The test is designed to be theoretically sound, computationally efficient, and applicable to both complete and right-censored data, thereby extending its relevance to practical scenarios encountered in engineering, medicine, and actuarial science.
The organization of this paper is as follows: Section 2 introduces the methodological background of the study. Section 3 provides essential definitions along with the theoretical framework. In Section 4, we develop a test for exponentiality against the EBUCL class using the Laplace transform method. Section 5 derives the Pitman asymptotic efficiency under various standard alternatives. Section 6 focuses on simulating the critical values of the test statistic and evaluating its power through Monte Carlo methods. Section 7 extends the proposed test to handle right-censored data. Lastly, Section 8 illustrates practical applications that highlight the effectiveness of the proposed statistical procedure.

2. Methodology of the Study

To evaluate the performance and practical applicability of the proposed test for exponentiality against the EBUCL class, a structured methodological approach is adopted. This methodology combines theoretical development, simulation-based validation, and real-data applications. The following steps outline the key components of the study’s methodology:
  • Definition of the EBUCL class: The study begins by formally defining the EBUCL class of life distributions, establishing its relationship with existing aging classes using Laplace transform ordering.
  • Construction of the test statistic: A new test statistic is proposed to test the null hypothesis that a given distribution is exponential against the alternative that it belongs to the EBUCL class. The test is derived using the properties of the Laplace transform of the survival function.
  • Asymptotic properties and Pitman efficiency: The asymptotic distribution of the test statistic is derived under the null hypothesis, and Pitman’s asymptotic efficiency is computed under standard alternative distributions, including Weibull, Makeham, and linear failure rate models.
  • Monte Carlo simulation of critical values: Critical values of the test statistic are obtained through Monte Carlo simulations using 10,000 replications across varying sample sizes. This provides reference thresholds for significance testing at common confidence levels of 90 % , 95 % , and 99 % .
  • Power analysis via simulation: The power of the proposed test is evaluated and compared against existing methods using simulated data from known non-exponential distributions to assess its ability to detect departures from exponentiality.
  • Extension to right-censored data: The test is adapted to handle right-censored data by redefining the test statistic based on the Kaplan–Meier estimator. This ensures applicability in survival analysis and reliability studies where censoring is common.
  • Application to real datasets: The test is applied to several real-life datasets, including complete and censored observations, to demonstrate its practical utility and effectiveness in identifying EBUCL behavior in empirical settings.

3. Concepts and Theoretical Foundations

In this section, we provide a rigorous mathematical formulation and explore the essential characteristics of the EBUCL class. Specifically, we establish its relationship with well-known aging classes by utilizing principles based on the Laplace transform. This theoretical development lays the groundwork for demonstrating the EBUCL class’s suitability in modeling lifetime data, particularly in contexts involving reliability and system durability.
Definition 1. 
A random variable X ≥ 0 (with probability 1) with survival function F̄(x) is characterized as
(i) 
Exponentially better than used, represented as X ∈ EBU, if
F̄(x + t) ≤ e^{-x/μ} F̄(t), x, t > 0,
where μ = E[X] = ∫₀^∞ x dF(x).
(ii) 
Exponentially better than used in increasing convex order, represented as X ∈ EBUC, if
∫_x^∞ F̄(y + t) dy ≤ μ e^{-x/μ} F̄(t), x, t > 0,
or
∫_{x+t}^∞ F̄(y) dy ≤ μ e^{-x/μ} F̄(t),
and this leads to
μ W̄_F(x + t) ≤ μ e^{-x/μ} F̄(t),
where W̄_F(x + t) = (1/μ) ∫_{x+t}^∞ F̄(y) dy, so that
W̄_F(x + t) ≤ e^{-x/μ} F̄(t).
(iii) 
Exponentially better (worse) than used in increasing convex Laplace transform order, depicted as X ∈ EBUCL (X ∈ EWUCL), if
∫₀^∞ e^{-βx} W̄_F(x + t) dx ≤ (≥) F̄(t) ∫₀^∞ e^{-βx} e^{-x/μ} dx, x, t, β > 0,
or, equivalently,
∫₀^∞ e^{-βx} W̄_F(x + t) dx ≤ (≥) [μ/(μβ + 1)] F̄(t), x, t, β > 0.
(iv) 
For a nonnegative lifetime X, we define its Laplace transform (the moment generating function of X evaluated at −β) by
ϕ(β) = E[e^{-βX}] = ∫₀^∞ e^{-βx} dF(x), β ≥ 0.
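A quick numerical illustration of (iv): for an Exp(λ) lifetime the transform has the closed form ϕ(β) = λ/(λ + β), which is also the null value used later. A minimal Python sketch (the function names are ours) checks the closed form against a Monte Carlo average:

```python
import math
import random

def laplace_exp_mc(lam: float, beta: float, n: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo estimate of phi(beta) = E[e^{-beta X}] for X ~ Exp(lam)."""
    rng = random.Random(seed)
    return sum(math.exp(-beta * rng.expovariate(lam)) for _ in range(n)) / n

def laplace_exp_exact(lam: float, beta: float) -> float:
    """Closed form obtained by integrating e^{-beta x} * lam * e^{-lam x} over [0, inf)."""
    return lam / (lam + beta)
```

For λ = 2 and β = 1 the two routines agree to roughly three decimal places.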
Remark 1. 
It is clear that, by integration over x,
NBU ⟹ NBUE ⟹ HNBUE,
EBU ⟹ EBUC ⟹ EBUCL.
All random variables considered herein satisfy X 0 almost surely. This restriction is intrinsic to reliability/lifetime modeling and is needed for the Laplace and residual-life arguments underlying the aging classes in Definition 1. Allowing negative support can invalidate these properties and lead to misleading class membership.

4. Testing Under EBUCL Alternative Hypotheses

In this section, we introduce a test statistic grounded in the Laplace transform order, constructed from a random sample X₁, X₂, …, Xₙ (i.i.d.) drawn from a distribution F. The objective is to test the null hypothesis H₀: F is exponential (F(x) = 1 − e^{-λx}, x ≥ 0, λ > 0) against the alternative hypothesis H₁: F belongs to the EBUCL class but is not exponential. The development and analysis of this test statistic are guided by the following concepts and methodological framework.
Theorem 1. 
Let X ∈ EBUCL be a random variable with distribution function F. Then, using the Laplace transform approach,
(β/λ) μ² ϕ(λ) − μ² ϕ(λ) − (μ/β) ϕ(β) + (β/λ²) μ ϕ(λ) − (1/β²) ϕ(β) + (1/λ²) ϕ(λ) − (β/λ²) μ + (1/λ) μ − 1/λ² + 1/β² ≥ 0,
where
μ = E[X] = ∫₀^∞ x dF(x) = ∫₀^∞ F̄(x) dx.
Proof. 
Since F is EBUCL, then
∫₀^∞ e^{-βx} W̄_F(x + t) dx ≤ [μ/(μβ + 1)] F̄(t), x, t ≥ 0.
Now, multiplying both sides by e^{-λt} and integrating over [0, ∞) with respect to t, we get
∫₀^∞ ∫₀^∞ e^{-λt} e^{-βx} W̄_F(x + t) dx dt ≤ [μ/(μβ + 1)] ∫₀^∞ e^{-λt} F̄(t) dt.  (2)
The right-hand side of (2) can be written as
I₁ = [μ/(μβ + 1)] ∫₀^∞ e^{-λt} F̄(t) dt = [μ/(μβ + 1)] E[∫₀^∞ e^{-λt} I(T > t) dt],
where
I(T > t) = 1 if t < T, and 0 otherwise,
and
E[∫₀^∞ e^{-λt} I(T > t) dt] = E[∫₀^T e^{-λt} dt] = E[(1/λ) − (1/λ) e^{-λT}] = (1/λ)(1 − ϕ(λ)),
so the right-hand side of (2) is given by
I₁ = [μ/(μβ + 1)] (1/λ)(1 − ϕ(λ)).  (3)
Similarly, the left-hand side of Equation (2) can be expressed in the following manner:
I₂ = ∫₀^∞ ∫₀^∞ e^{-λt} e^{-βx} W̄_F(x + t) dx dt.
Hence, substituting u = x + t and v = t,
I₂ = ∫₀^∞ ∫_v^∞ e^{-λv} e^{-β(u−v)} W̄_F(u) du dv = ∫₀^∞ e^{-βu} W̄_F(u) [∫₀^u e^{-(λ−β)v} dv] du = [1/(λ − β)] [∫₀^∞ e^{-βv} W̄_F(v) dv − ∫₀^∞ e^{-λv} W̄_F(v) dv], β ≠ λ.
Note that
∫₀^∞ e^{-βv} W̄_F(v) dv = (1/μ) ∫₀^∞ e^{-βv} ∫_v^∞ F̄(z) dz dv = (1/μ) ∫₀^∞ F̄(z) ∫₀^z e^{-βv} dv dz = 1/β − 1/(μβ²) + (1/(μβ²)) ϕ(β),
therefore,
I₂ = [1/(λ − β)] [1/β − 1/(μβ²) + (1/(μβ²)) ϕ(β) − 1/λ + 1/(μλ²) − (1/(μλ²)) ϕ(λ)].  (4)
Substituting (3) and (4) into (2) and rearranging, we get
(β/λ) μ² ϕ(λ) − μ² ϕ(λ) − (μ/β) ϕ(β) + (β/λ²) μ ϕ(λ) − (1/β²) ϕ(β) + (1/λ²) ϕ(λ) − (β/λ²) μ + (1/λ) μ − 1/λ² + 1/β² ≥ 0.
This completes the proof. □
Let us propose the measure of departure from exponentiality as follows:
δ(β, λ) = (β/λ) μ² ϕ(λ) − μ² ϕ(λ) − (μ/β) ϕ(β) + (β/λ²) μ ϕ(λ) − (1/β²) ϕ(β) + (1/λ²) ϕ(λ) − (β/λ²) μ + (1/λ) μ − 1/λ² + 1/β².  (5)
Under the null hypothesis H₀: X ∼ Exp(λ) with density f₀(x) = λ e^{-λx}, x ≥ 0, we have the closed form
ϕ₀(β) = E₀[e^{-βX}] = ∫₀^∞ e^{-βx} λ e^{-λx} dx = λ/(λ + β) = 1/(1 + μβ),
where μ = 1/λ is the mean parameterization. Throughout this section, we use the tuning pair (β, λ) with β > 0 and the null rate λ > 0. We define the basic contrast driving our test as
δ(β, λ) = ϕ₀(β) − ϕ(β) = 1/(1 + μβ) − E[e^{-βX}].  (6)
By construction, δ(β, λ) = 0 exactly when ϕ(β) = ϕ₀(β).
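The contrast (6) is easy to estimate from data: plug the sample mean into 1/(1 + μβ) and estimate E[e^{-βX}] by its sample average. A small sketch (the Weibull shape-2 sample is our illustrative choice of a non-exponential, increasing-failure-rate alternative; nothing here depends on the paper's datasets):

```python
import math
import random

def delta_contrast(xs, beta):
    """Empirical analogue of delta(beta, lambda) = phi_0(beta) - phi(beta):
    1/(1 + mean(xs)*beta) minus the sample average of e^{-beta x}."""
    n = len(xs)
    mu_hat = sum(xs) / n
    phi_hat = sum(math.exp(-beta * x) for x in xs) / n
    return 1.0 / (1.0 + mu_hat * beta) - phi_hat

rng = random.Random(7)
exp_sample = [rng.expovariate(1.0) for _ in range(100_000)]
# Weibull with shape 2 via inverse transform: an IFR alternative to exponentiality
weib_sample = [(-math.log(1.0 - rng.random())) ** 0.5 for _ in range(100_000)]

d_exp = delta_contrast(exp_sample, beta=1.0)    # near 0 under exponentiality
d_weib = delta_contrast(weib_sample, beta=1.0)  # positive for this alternative
```

On the exponential sample the estimate sits near zero, while the Weibull sample produces a clearly positive contrast.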
Lemma 1 
(Null value). If H₀ holds, then for every β > 0,
δ(β, λ) = 0.
Proof. 
Under H₀, ϕ(β) = ϕ₀(β) = λ/(λ + β). Plugging this identity into (6) gives δ(β, λ) = 0. □
Remark 2. 
Some sources write the exponential law with mean μ > 0: F(x) = 1 − e^{-x/μ}. In that parameterization, ϕ₀(β) = 1/(1 + μβ). The identity δ(β, λ) = 0 persists after the change λ = 1/μ.
Based on the above, and in light of the null hypothesis H₀ and Equation (5), we will have
δ(β, λ) = (β/λ)·1/(1 + λ) − 1/(1 + λ) − 1/(β(1 + β)) + β/(λ²(1 + λ)) − 1/(β²(1 + β)) + 1/(λ²(1 + λ)) − β/λ² + 1/λ − 1/λ² + 1/β² = 0,
where
μ = 1, ϕ(λ) = 1/(1 + λ), and ϕ(β) = 1/(1 + β),
whereas under the alternative hypothesis H₁, δ(β, λ) > 0. Suppose X₁, X₂, …, Xₙ is a random sample drawn from a distribution F. Then, an empirical estimator δ̂(β, λ) of δ(β, λ) can be constructed as follows:
δ̂(β, λ) = (1/n³) Σᵢ Σⱼ Σₖ [ (β/λ) Xᵢ Xⱼ e^{-λXₖ} − Xᵢ Xⱼ e^{-λXₖ} − (1/β) Xᵢ e^{-βXⱼ} + (β/λ²) Xᵢ e^{-λXⱼ} − (1/β²) e^{-βXᵢ} + (1/λ²) e^{-λXᵢ} − (β/λ²) Xᵢ + (1/λ) Xᵢ − 1/λ² + 1/β² ].
Based on the Maclaurin series e^{-X} = 1 − X + X²/2! − X³/3! + ⋯ + (−1)ⁿ Xⁿ/n! + ⋯, truncated at first order (e^{-sX} ≈ 1 − sX), the summand satisfies
(β/λ) X₁X₂ e^{-λX₃} − X₁X₂ e^{-λX₃} − (1/β) X₁ e^{-βX₂} + (β/λ²) X₁ e^{-λX₂} − (1/β²) e^{-βX₁} + (1/λ²) e^{-λX₁} − (β/λ²) X₁ + (1/λ) X₁ − 1/λ² + 1/β²
≈ (β/λ) X₁X₂(1 − λX₃) − X₁X₂(1 − λX₃) − (1/β) X₁(1 − βX₂) + (β/λ²) X₁(1 − λX₂) − (1/β²)(1 − βX₁) + (1/λ²)(1 − λX₁) − (β/λ²) X₁ + (1/λ) X₁ − 1/λ² + 1/β²
= (λ − β) X₁X₂X₃.
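The first-order reduction can be verified mechanically: if every exponential in the summand is replaced by its Maclaurin truncation e^{-sX} ≈ 1 − sX, all lower-degree terms cancel and exactly (λ − β)X₁X₂X₃ remains, up to floating-point rounding. A short sketch of that cancellation check:

```python
def kernel_linearized(x1, x2, x3, beta, lam):
    """The summand of delta_hat with every exponential replaced by its
    first-order Maclaurin truncation e^{-s x} ~ 1 - s x."""
    return ((beta / lam) * x1 * x2 * (1 - lam * x3)
            - x1 * x2 * (1 - lam * x3)
            - (1 / beta) * x1 * (1 - beta * x2)
            + (beta / lam ** 2) * x1 * (1 - lam * x2)
            - (1 / beta ** 2) * (1 - beta * x1)
            + (1 / lam ** 2) * (1 - lam * x1)
            - (beta / lam ** 2) * x1
            + (1 / lam) * x1
            - 1 / lam ** 2
            + 1 / beta ** 2)

beta, lam = 1.0, 2.0
x1, x2, x3 = 0.3, 0.7, 0.5
# All lower-order terms cancel; only (lam - beta) * x1 * x2 * x3 survives.
residual = kernel_linearized(x1, x2, x3, beta, lam) - (lam - beta) * x1 * x2 * x3
```

The residual is zero to machine precision for any inputs, confirming the displayed identity.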
Remark 3. 
In displayed expansions, indices ( i , j , k ) are used for summations and ( 1 , 2 , 3 ) for clarity in kernels; the expressions are symmetric after summation. Equalities obtained through Maclaurin expansions are clarified as approximations.
To ensure the test remains invariant, define Δ̂(β, λ) = δ̂(β, λ)/X̄³, where X̄ = (1/n) Σᵢ Xᵢ is the sample mean. Then,
Δ̂(β, λ) = (1/(n³ X̄³)) Σᵢ Σⱼ Σₖ [ (β/λ) Xᵢ Xⱼ e^{-λXₖ} − Xᵢ Xⱼ e^{-λXₖ} − (1/β) Xᵢ e^{-βXⱼ} + (β/λ²) Xᵢ e^{-λXⱼ} − (1/β²) e^{-βXᵢ} + (1/λ²) e^{-λXᵢ} − (β/λ²) Xᵢ + (1/λ) Xᵢ − 1/λ² + 1/β² ].  (7)
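Every summand in (7) is a product of functions each involving a single index, so the triple sum factorizes into one-pass totals and Δ̂ can be evaluated in O(n) instead of O(n³). A sketch of this computation (the factorized regrouping and the function name are ours):

```python
import math
import random

def ebucl_statistic(xs, beta, lam):
    """Scale-normalized statistic Delta_hat(beta, lam) of (7).

    The O(n^3) triple sum collapses to O(n) because each summand is a
    product of one-argument functions:
      sum_{ijk} X_i X_j e^{-lam X_k} = (sum X)^2 * (sum e^{-lam X}), etc.
    """
    n = len(xs)
    s1 = sum(xs)                                   # sum X_i
    e_lam = sum(math.exp(-lam * x) for x in xs)    # sum e^{-lam X_i}
    e_beta = sum(math.exp(-beta * x) for x in xs)  # sum e^{-beta X_i}
    triple = ((beta / lam - 1.0) * s1 * s1 * e_lam
              - (1.0 / beta) * n * s1 * e_beta
              + (beta / lam ** 2) * n * s1 * e_lam
              - (n * n / beta ** 2) * e_beta
              + (n * n / lam ** 2) * e_lam
              + n * n * s1 * (1.0 / lam - beta / lam ** 2)
              + n ** 3 * (1.0 / beta ** 2 - 1.0 / lam ** 2))
    delta_hat = triple / n ** 3
    xbar = s1 / n
    return delta_hat / xbar ** 3

rng = random.Random(11)
sample = [rng.expovariate(1.0) for _ in range(400)]
stat = ebucl_statistic(sample, beta=1.0, lam=2.0)
```

On a unit-exponential sample the statistic hovers near zero, in line with δ(β, λ) = 0 under H₀.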
Remark 4 
(Scale invariance). Although the expression of the test statistic involves the scale parameter λ, the normalization by the cube of the sample mean X ¯ 3 ensures invariance to scale. Specifically, if all observations are multiplied by a constant factor c > 0 , both the numerator and denominator of the statistic are rescaled in such a way that the dependence on c cancels out. Consequently, the proposed statistic is scale-invariant, and the explicit appearance of λ in intermediate expressions does not affect this property.
It is easy to show that E[Δ̂(β, λ)] = δ(β, λ). Now, set
ϕ(Xᵢ, Xⱼ, Xₖ) = (β/λ) Xᵢ Xⱼ e^{-λXₖ} − Xᵢ Xⱼ e^{-λXₖ} − (1/β) Xᵢ e^{-βXⱼ} + (β/λ²) Xᵢ e^{-λXⱼ} − (1/β²) e^{-βXᵢ} + (1/λ²) e^{-λXᵢ} − (β/λ²) Xᵢ + (1/λ) Xᵢ − 1/λ² + 1/β².  (8)
Next, define the symmetric kernel
ψ(Xᵢ, Xⱼ, Xₖ) = (1/3!) Σ_π ϕ(X_{π(i)}, X_{π(j)}, X_{π(k)}),
where the summation is taken over all permutations π of the triplet (Xᵢ, Xⱼ, Xₖ). By averaging over all configurations, the expression for Δ̂(β, λ) in Equation (7) aligns with the form of a U-statistic, given by
Uₙ = C(n, 3)⁻¹ Σ_{1≤i<j<k≤n} ψ(Xᵢ, Xⱼ, Xₖ).
Remark 5 
(Conditions for treating the estimator as a U-statistic). Let the base kernel ϕ ( X i , X j , X k ; μ ) be the symmetric function that appears in ( 8 ) when the true mean μ = E ( X ) is substituted. We use X ¯ (the sample mean) in the implemented statistic to enforce scale invariance. To justify replacing μ by X ¯ and applying U-statistic asymptotics, assume the following:
A1:
E ϕ ( X 1 , X 2 , X 3 ; μ ) 2 < .
A2:
The mapping μ ϕ ( X i , X j , X k ; μ ) is continuously differentiable in a neighborhood of the true μ and the derivative has finite second moment.
A3:
X̄ is a root-n consistent estimator of μ, i.e., √n (X̄ − μ) = O_P(1).
Under A1–A3, the Taylor expansion (delta method) and Hoeffding decomposition imply that the influence of X̄ − μ on the V-statistic is O_P(n^{-1/2}) (or can be represented as an additional influence term that is explicit and of order n^{-1/2}). Thus, the V-statistic with X̄ has the same first-order asymptotic normal distribution as the U-statistic obtained by symmetrizing the kernel with the true μ (with a possible explicit correction of the asymptotic variance to account for the estimated parameter).
The asymptotic normality of Δ n β , λ can be established by applying the following theorem.
Theorem 2. 
(a) As n → ∞, √n (Δ̂(β, λ) − Δ(β, λ)) is asymptotically normal with mean 0 and variance σ²(β, λ). The value of σ²(β, λ) is obtained by
σ²(β, λ) = Var[ 2(β/λ) X μ ϕ(λ) − 2 X μ ϕ(λ) + (β/λ) μ² e^{-λX} − μ² e^{-λX} − (1/β) X ϕ(β) − (1/β) μ e^{-βX} − (1/β) μ ϕ(β) + (β/λ²) X ϕ(λ) + (β/λ²) μ e^{-λX} + (β/λ²) μ ϕ(λ) − (1/β²) e^{-βX} − (2/β²) ϕ(β) + (1/λ²) e^{-λX} + (2/λ²) ϕ(λ) − (β/λ²) X − 2(β/λ²) μ + (1/λ) X + (2/λ) μ − 3/λ² + 3/β² ].  (10)
(b) Under H₀, the variance σ₀²(β, λ) is
σ₀²(β, λ) = (β − λ)² [2β⁴ + β³(11 + 2λ) + β²(25 + 13λ) + β(26 + 22λ) + 2(1 + λ)(5 + λ)] / [(1 + β)²(1 + 2β)(1 + λ)²(1 + β + λ)(1 + 2λ)].  (11)
Proof. 
With additional computations, the theory of U-statistics as developed by Lee [20] can be employed to derive both the mean and the variance of the statistic. Specifically, the variance can be expressed as
σ² = Var[η(X)],  (12)
where η(X) represents the sum of the conditional projections of the kernel onto a single variable:
η(X) = η₁(X) + η₂(X) + η₃(X),  (13)
η₁(X) = E[ϕ(X₁, X₂, X₃) | X₁ = X] = (β/λ) X μ ϕ(λ) − X μ ϕ(λ) − (1/β) X ϕ(β) + (β/λ²) X ϕ(λ) − (1/β²) e^{-βX} + (1/λ²) e^{-λX} − (β/λ²) X + (1/λ) X − 1/λ² + 1/β²,
η₂(X) = E[ϕ(X₁, X₂, X₃) | X₂ = X] = (β/λ) μ X ϕ(λ) − μ X ϕ(λ) − (1/β) μ e^{-βX} + (β/λ²) μ e^{-λX} − (1/β²) ϕ(β) + (1/λ²) ϕ(λ) − (β/λ²) μ + (1/λ) μ − 1/λ² + 1/β²,
and
η₃(X) = E[ϕ(X₁, X₂, X₃) | X₃ = X] = (β/λ) μ² e^{-λX} − μ² e^{-λX} − (1/β) μ ϕ(β) + (β/λ²) μ ϕ(λ) − (1/β²) ϕ(β) + (1/λ²) ϕ(λ) − (β/λ²) μ + (1/λ) μ − 1/λ² + 1/β²,
where μ = ∫₀^∞ x dF(x) and ϕ(s) = ∫₀^∞ e^{-sx} dF(x). Therefore,
η(X) = 2(β/λ) X μ ϕ(λ) − 2 X μ ϕ(λ) + (β/λ) μ² e^{-λX} − μ² e^{-λX} − (1/β) X ϕ(β) − (1/β) μ e^{-βX} − (1/β) μ ϕ(β) + (β/λ²) X ϕ(λ) + (β/λ²) μ e^{-λX} + (β/λ²) μ ϕ(λ) − (1/β²) e^{-βX} − (2/β²) ϕ(β) + (1/λ²) e^{-λX} + (2/λ²) ϕ(λ) − (β/λ²) X − 2(β/λ²) μ + (1/λ) X + (2/λ) μ − 3/λ² + 3/β².
Upon using (12) and (13), Equation (10) is obtained. Under H₀ (μ = 1, ϕ(s) = 1/(1 + s)), we get
η(X) = 2(β/λ) X/(1 + λ) − 2X/(1 + λ) + (β/λ) e^{-λX} − e^{-λX} − X/(β(1 + β)) − (1/β) e^{-βX} − 1/(β(1 + β)) + (β/λ²) X/(1 + λ) + (β/λ²) e^{-λX} + (β/λ²)/(1 + λ) − (1/β²) e^{-βX} − 2/(β²(1 + β)) + (1/λ²) e^{-λX} + 2/(λ²(1 + λ)) − (β/λ²) X − 2β/λ² + (1/λ) X + 2/λ − 3/λ² + 3/β²,
and then,
η(X) = [(βλ − λ² + β + 1)/λ²] e^{-λX} − [(β + 1)/β²] e^{-βX} + [(2β²λ + β³λ − βλ² − β²λ² + βλ − λ² − λ³)/(βλ²(1 + β)(1 + λ))] X + (2βλ² − 2β³ − β⁴ + 2βλ³ + λ² + λ³ − β² − 3β³λ − 2β⁴λ − β²λ + 2β²λ² + 2β³λ²)/(β²λ²(1 + β)(1 + λ)),  (14)
where
∫₀^∞ x dF(x) = 1, ∫₀^∞ e^{-λx} dF(x) = 1/(1 + λ),
and
σ₀²(β, λ) = Var[ ((βλ − λ² + β + 1)/λ²) e^{-λX} − ((β + 1)/β²) e^{-βX} + ((2β²λ + β³λ − βλ² − β²λ² + βλ − λ² − λ³)/(βλ²(1 + β)(1 + λ))) X + (2βλ² − 2β³ − β⁴ + 2βλ³ + λ² + λ³ − β² − 3β³λ − 2β⁴λ − β²λ + 2β²λ² + 2β³λ²)/(β²λ²(1 + β)(1 + λ)) ].
By performing the necessary computations starting from Equation (14), one can derive the resulting expression presented in Equation (11). This process involves applying the relevant transformations and simplifications to connect the intermediate steps with the final form of the statistic. □
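The passage from (14) to (11) can also be cross-checked numerically. Under H₀ the projection is a linear combination a e^{-λX} + b e^{-βX} + cX + d of X ~ Exp(1), so its variance follows from elementary moments: E[e^{-sX}] = 1/(1 + s), Var(e^{-sX}) = 1/(1 + 2s) − 1/(1 + s)², Cov(e^{-sX}, X) = −s/(1 + s)², Var(X) = 1. The sketch below (the function names and our parsing of the closed form are assumptions) confirms that both routes agree:

```python
def sigma0_sq_closed(beta, lam):
    """Closed rational form of the null variance, as in Equation (11)."""
    num = (beta - lam) ** 2 * (2 * beta ** 4 + beta ** 3 * (11 + 2 * lam)
                               + beta ** 2 * (25 + 13 * lam)
                               + beta * (26 + 22 * lam)
                               + 2 * (1 + lam) * (5 + lam))
    den = ((1 + beta) ** 2 * (1 + 2 * beta) * (1 + lam) ** 2
           * (1 + beta + lam) * (1 + 2 * lam))
    return num / den

def sigma0_sq_from_eta(beta, lam):
    """Variance of eta(X) in (14) for X ~ Exp(1), via exact exponential moments."""
    a = (beta * lam - lam ** 2 + beta + 1) / lam ** 2          # coeff of e^{-lam X}
    b = -(beta + 1) / beta ** 2                                # coeff of e^{-beta X}
    c = (2 * beta ** 2 * lam + beta ** 3 * lam - beta * lam ** 2
         - beta ** 2 * lam ** 2 + beta * lam - lam ** 2 - lam ** 3) \
        / (beta * lam ** 2 * (1 + beta) * (1 + lam))           # coeff of X
    var_e = lambda s: 1 / (1 + 2 * s) - 1 / (1 + s) ** 2       # Var(e^{-sX})
    cov_ee = 1 / (1 + lam + beta) - 1 / ((1 + lam) * (1 + beta))
    cov_ex = lambda s: -s / (1 + s) ** 2                       # Cov(e^{-sX}, X)
    return (a * a * var_e(lam) + b * b * var_e(beta) + c * c
            + 2 * a * b * cov_ee
            + 2 * a * c * cov_ex(lam) + 2 * b * c * cov_ex(beta))
```

For instance, at (β, λ) = (1, 2) both routes give σ₀² = 1/12.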
Theorem 3. 
If n → ∞, then
√n (Δ̂(β, λ) − Δ(β, λ)) / (3 Sₙ) →_d N(0, 1),
where Sₙ is the standard deviation. In particular, under H₀, we have
√n Δ̂(β, λ) / (3 Sₙ) →_d N(0, 1).
Now, we obtain a large-sample test for H₀ vs. H₁: for large n, if the observed value of √n Δ̂(β, λ)/(3 Sₙ) is too large, then we reject H₀.
Theorem 4. 
The test is asymptotically consistent.
Proof. 
Let P_{H₀}, P_{H₁} be the distributions of the test statistic under H₀ and H₁, respectively. Using Theorem 3, the probability of rejecting H₀ is
P_{H₀}( √n Δ̂(β, λ)/(3 Sₙ) > c ) → 1 − Φ(c), n → ∞,
where c is the critical value of the test. Under the alternative, we have
P_{H₁}( √n Δ̂(β, λ)/(3 Sₙ) > c ) = P_{H₁}( √n (Δ̂(β, λ) − Δ(β, λ))/(3 Sₙ) > c − √n Δ(β, λ)/(3 Sₙ) ) → 1, n → ∞.
The following facts are used in the above: under H₁, Δ(β, λ) > 0 and Sₙ² → σ² almost surely as n → ∞.
Therefore, the test is asymptotically consistent. □

Positivity of δ(β, λ) Under EBUCL Alternatives

We now connect δ(β, λ) to the Laplace-transform order that underpins the EBUCL class. Recall that, by Definition 1, X ∈ EBUCL means that X is ordered against the exponential baseline Exp(λ) in the Laplace sense; specifically, there exists a set of β > 0 (of positive measure) on which the exponential Laplace transform dominates strictly:
ϕ(β) < ϕ₀(β) = λ/(λ + β).  (15)
(Equivalently, X is “older” than Exp(λ) in the Laplace order used here.) This is the precise way in which EBUCL generalizes classical aging concepts via Laplace ordering.
Lemma 2 
(Strict positivity under H₁). If X ∈ EBUCL and X is not Exp(λ), then for every β > 0 satisfying ϕ(β) < ϕ₀(β),
δ(β, λ) = ϕ₀(β) − ϕ(β) > 0.
In particular, there exists a non-null set of β’s for which δ(β, λ) > 0.
Proof. 
By (15), ϕ ( β ) < ϕ 0 ( β ) on a set of β > 0 of positive measure whenever X belongs to E B U C L but is not exponential with rate λ . Substituting in (6) yields δ β , λ = ϕ 0 ( β ) ϕ ( β ) > 0 for such β . □

5. Pitman’s Asymptotic Efficiency of Δ̂(β, λ)

Pitman’s Asymptotic Efficiency (PAE) is a central concept in statistical theory, used to compare the performance of statistical tests and estimators based on their asymptotic behavior under sequences of local alternatives. It measures how rapidly the power of a test approaches one as the sample size increases, particularly when the null and alternative hypotheses are nearly indistinguishable. Formally, PAE is defined as the limiting ratio of the sample sizes required by two competing procedures to achieve the same power at a given significance level. A higher PAE indicates a more efficient method, capable of detecting small deviations from the null hypothesis with fewer observations. In addition to its role in hypothesis testing, PAE is instrumental in the evaluation of statistical estimators, providing a rigorous framework for identifying those with superior long-run performance. By quantifying relative efficiency, PAE assists statisticians in selecting optimal models and inference procedures, especially in scenarios involving large datasets or subtle differences between competing hypotheses. This paper explores the practical utility of PAE by applying it to a variety of lifetime distributions, namely, the Weibull, Makeham, and linear failure rate (LFR) models, offering valuable insights into their comparative performance in the context of reliability and survival analysis as follows:
(i)
Weibull distribution: F̄₁(x) = e^{-x^θ}; x ≥ 0, θ > 0.
(ii)
LFR distribution: F̄₂(x) = e^{-x − (θ/2)x²}; x ≥ 0, θ > 0.
(iii)
Makeham distribution: F̄₃(x) = e^{-x − θ(x + e^{-x} − 1)}; x ≥ 0, θ > 0.
Note that F̄₁(x) reduces to the exponential distribution when θ = 1, while F̄₂(x) and F̄₃(x) reduce to the exponential distribution when θ = 0. The following is one way the PAE might be presented (see Pitman [21]):
PAE(Δ(β, λ)) = (1/σ₀(β, λ)) | (d/dθ) δ_θ(β, λ) |_{θ → θ₀},  (16)
where
δ_θ(β, λ) = (β/λ) μ_θ² ϕ_θ(λ) − μ_θ² ϕ_θ(λ) − (μ_θ/β) ϕ_θ(β) + (β/λ²) μ_θ ϕ_θ(λ) − (1/β²) ϕ_θ(β) + (1/λ²) ϕ_θ(λ) − (β/λ²) μ_θ + (1/λ) μ_θ − 1/λ² + 1/β²,
ϕ_θ(β) = ∫₀^∞ e^{-βx} dF_θ(x), μ_θ = ∫₀^∞ F̄_θ(x) dx.
Hence,
(d/dθ) δ_θ(β, λ) = (β/λ) [μ_θ² ϕ′_θ(λ) + 2 μ_θ μ′_θ ϕ_θ(λ)] − [μ_θ² ϕ′_θ(λ) + 2 μ_θ μ′_θ ϕ_θ(λ)] − (1/β) [μ_θ ϕ′_θ(β) + μ′_θ ϕ_θ(β)] + (β/λ²) [μ_θ ϕ′_θ(λ) + μ′_θ ϕ_θ(λ)] − (1/β²) ϕ′_θ(β) + (1/λ²) ϕ′_θ(λ) − (β/λ²) μ′_θ + (1/λ) μ′_θ,
where the primes denote differentiation with respect to θ:
μ′_θ = (d/dθ) ∫₀^∞ F̄_θ(x) dx, ϕ′_θ(β) = (d/dθ) ∫₀^∞ e^{-βx} dF_θ(x).
Using the PAE’s definition in (16), we get
PAE(δ) = (1/σ₀) | (β/λ) [μ_θ² ϕ′_θ(λ) + 2 μ_θ μ′_θ ϕ_θ(λ)] − [μ_θ² ϕ′_θ(λ) + 2 μ_θ μ′_θ ϕ_θ(λ)] − (1/β) [μ_θ ϕ′_θ(β) + μ′_θ ϕ_θ(β)] + (β/λ²) [μ_θ ϕ′_θ(λ) + μ′_θ ϕ_θ(λ)] − (1/β²) ϕ′_θ(β) + (1/λ²) ϕ′_θ(λ) − (β/λ²) μ′_θ + (1/λ) μ′_θ |_{θ → θ₀}.
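As an illustration of how (16) can be evaluated in practice, the sketch below computes the PAE for the LFR alternative at θ₀ = 0, with μ_θ and ϕ_θ obtained by numerical quadrature (using ϕ_θ(s) = 1 − s ∫₀^∞ e^{-sx} F̄_θ(x) dx) and the θ-derivative by a forward difference. The quadrature cutoff, grid, and step size are our choices, and the null standard deviation is spelled out under our reading of the closed form in Theorem 2(b):

```python
import math

def sigma0(beta, lam):
    """Null standard deviation sigma_0(beta, lam), per Theorem 2(b)."""
    num = (beta - lam) ** 2 * (2 * beta ** 4 + beta ** 3 * (11 + 2 * lam)
                               + beta ** 2 * (25 + 13 * lam)
                               + beta * (26 + 22 * lam)
                               + 2 * (1 + lam) * (5 + lam))
    den = ((1 + beta) ** 2 * (1 + 2 * beta) * (1 + lam) ** 2
           * (1 + beta + lam) * (1 + 2 * lam))
    return math.sqrt(num / den)

def simpson(f, a, b, m=4000):
    """Composite Simpson rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = (f(a) + f(b)
         + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
         + 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2)))
    return s * h / 3

def delta_theta(theta, beta, lam, sf):
    """delta_theta(beta, lam) of (5) for a survival function sf(x, theta),
    with mu_theta and phi_theta evaluated by quadrature."""
    upper = 40.0  # effective upper limit; tails beyond are negligible here
    mu = simpson(lambda x: sf(x, theta), 0.0, upper)
    phi = lambda s: 1.0 - s * simpson(lambda x: math.exp(-s * x) * sf(x, theta), 0.0, upper)
    pl, pb = phi(lam), phi(beta)
    return ((beta / lam) * mu ** 2 * pl - mu ** 2 * pl - (mu / beta) * pb
            + (beta / lam ** 2) * mu * pl - pb / beta ** 2 + pl / lam ** 2
            - (beta / lam ** 2) * mu + mu / lam - 1.0 / lam ** 2 + 1.0 / beta ** 2)

def pae_lfr(beta, lam, h=1e-4):
    """PAE for the LFR alternative, theta_0 = 0 (exponential baseline)."""
    sf = lambda x, th: math.exp(-x - th * x * x / 2.0)
    d = (delta_theta(h, beta, lam, sf) - delta_theta(0.0, beta, lam, sf)) / h
    return abs(d) / sigma0(beta, lam)
```

Swapping in the Weibull or Makeham survival function for `sf` (with θ₀ = 1 and θ₀ = 0, respectively) reproduces the other two alternatives; the resulting values depend on the tuning pair (β, λ).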
For a few different values of (β, λ), Table 1 compares our test Δ̂(β, λ) to those of Mugdadi and Ahmad [22] (δ⁽³⁾(r, s)), Kango [23] (K*), and Gadallah [11] (δ(0.42, 0.3)).
The results show that our test is consistently superior. It should be emphasized that the recommended test, $\hat{\Delta}(\beta, \lambda)$, is highly efficient; several values of $(\beta, \lambda)$ were evaluated in order to maximize efficiency across all alternatives.
Table 2 shows new alternative values of the distribution parameters discussed previously, together with the corresponding Pitman efficiencies, illustrating the test's performance under local deviations from exponentiality.

6. The Monte Carlo Method

The Monte Carlo method is a computational technique that uses random sampling to approximate numerical results, particularly in situations where analytical solutions are difficult or impossible to obtain. It is widely employed in various fields such as statistics, physics, finance, and engineering to estimate integrals, solve differential equations, and model complex systems. The core idea involves generating a large number of random samples from a probability distribution and using these samples to approximate desired quantities, such as means, variances, or probabilities. In statistical inference, the Monte Carlo method is often used to evaluate the performance of estimators, compute confidence intervals, or simulate sampling distributions. Its strength lies in its flexibility and applicability to high-dimensional or non-linear problems, where traditional methods may fail or be computationally intensive.

6.1. Critical Points

In statistical inference, a critical value serves as a pivotal cutoff point used to make decisions in hypothesis testing and to construct confidence intervals. It defines the boundary between the acceptance and rejection regions of a statistical test, based on a predetermined significance level ( α ). By comparing a calculated test statistic, such as a z-score, t-score, or chi-square value, to the corresponding critical value, researchers can assess whether the observed result is statistically significant. This comparison helps determine whether to reject the null hypothesis, shaping the conclusions drawn from the data. The choice of critical value depends on several factors, including the underlying probability distribution of the test statistic, the confidence level desired, and whether the test is one-tailed or two-tailed. Commonly used distributions include the standard normal, t, chi-square, and F distributions. Additionally, critical values play an essential role in forming confidence intervals, influencing both their location and width, and providing a range within which the true population parameter is likely to fall with a specified degree of certainty.
In conclusion, critical values play a pivotal role in statistical testing, providing thresholds for evaluating hypotheses and guiding data-driven decisions. A clear understanding and careful estimation of these values are fundamental for maintaining the accuracy and credibility of statistical findings. This section focuses on simulating critical values under the null hypothesis using Monte Carlo methods, with 10,000 randomly generated samples of sizes ranging from n = 5 to n = 100 in increments of 5, implemented in Mathematica 10. The simulations estimate the upper percentiles of Δ ^ ( 0.1 , 0.2 ) , Δ ^ ( 0.01 , 3 ) , and Δ ^ ( 0.1 , 2.5 ) at the 90 % , 95 % , and 99 % confidence levels. As demonstrated in Table 3, Table 4 and Table 5 and Figure 1, Figure 2 and Figure 3, the critical values increase with higher confidence levels and tend to decrease as the sample size increases.
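The simulation recipe above can be sketched as follows. The statistic `stat` below is a hypothetical, scale-invariant placeholder standing in for $\hat{\Delta}(\beta, \lambda)$ (whose exact empirical form is defined earlier in the paper); only the Monte Carlo mechanics, i.e., 10,000 exponential null samples per sample size and an upper-percentile cutoff, mirror the text:

```python
import math
import random

def stat(sample, beta, lam):
    """Hypothetical placeholder for Delta-hat(beta, lam): empirical Laplace
    transforms of the mean-scaled sample (scale-invariant by construction)."""
    n = len(sample)
    xbar = sum(sample) / n
    phi_b = sum(math.exp(-beta * x / xbar) for x in sample) / n
    phi_l = sum(math.exp(-lam * x / xbar) for x in sample) / n
    return phi_l - phi_b

def critical_value(n, beta, lam, level=0.95, reps=10_000, seed=1):
    """Upper `level` percentile of the statistic under the exponential null."""
    rng = random.Random(seed)
    sims = sorted(stat([rng.expovariate(1.0) for _ in range(n)], beta, lam)
                  for _ in range(reps))
    return sims[int(level * reps)]

for n in (5, 50, 100):
    print(n, round(critical_value(n, 0.1, 2.5), 4))
```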

6.2. Estimated Power of the Proposed Test

Power estimates of statistical tests represent the probability that a test will correctly reject the null hypothesis when the alternative hypothesis is true. This measure, known as the power of the test, is a key indicator of the test’s sensitivity and effectiveness in detecting real effects. A high power value, typically 0.80 or above, suggests that the test is likely to identify a true effect, reducing the risk of a Type II error (failing to reject a false null hypothesis). Power estimates depend on several factors, including the sample size, effect size, variability within the data, and the chosen significance level α . In practice, power analysis is often conducted before data collection to determine the required sample size for achieving a desired level of power, thereby ensuring the reliability of the statistical conclusions. Additionally, comparing the power estimates of different tests helps researchers select the most appropriate method for detecting effects under specific conditions, especially in simulation studies or when evaluating test performance across various distributions.
In conclusion, power estimates of statistical tests play a crucial role in the research process, influencing sample size determination, study design, result interpretation, and evidence synthesis across studies. Power analyses improve the rigor and dependability of scientific research by measuring the sensitivity of tests to discover actual effects, thereby advancing knowledge in a variety of domains. The power of the proposed test was assessed from 10,000 simulated samples at the $(1 - \alpha)\%$ confidence level, with $\alpha = 0.05$. For $n = 10, 20$, and 30, values of $\theta$ were assumed for the Weibull, Gamma, and LFR distributions, respectively. Table 6, Table 7 and Table 8 show that the $\hat{\Delta}(0.1, 0.2)$, $\hat{\Delta}(0.01, 3)$, and $\hat{\Delta}(0.1, 2.5)$ tests have adequate power against all the alternatives considered.
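The power-estimation loop can be sketched in the same spirit. The statistic here is again a hypothetical placeholder for $\hat{\Delta}(\beta, \lambda)$, and the Weibull($\theta = 2$) generator illustrates one alternative; the rejection rule (statistic exceeds the simulated null percentile) follows the text:

```python
import math
import random

def stat(xs, beta=0.1, lam=2.5):
    """Placeholder for the paper's Delta-hat(beta, lam): a scale-invariant
    functional of the sample; the real statistic would slot in here."""
    n, xbar = len(xs), sum(xs) / len(xs)
    return sum(math.exp(-lam * x / xbar) - math.exp(-beta * x / xbar) for x in xs) / n

def estimated_power(n, draw_alt, reps=5_000, alpha=0.05, seed=3):
    """Fraction of alternative samples whose statistic exceeds the
    simulated (1 - alpha) percentile of the exponential null."""
    rng = random.Random(seed)
    null = sorted(stat([rng.expovariate(1.0) for _ in range(n)]) for _ in range(reps))
    crit = null[int((1 - alpha) * reps)]
    hits = sum(stat(draw_alt(n, rng)) > crit for _ in range(reps))
    return hits / reps

# Weibull(theta = 2) alternative via inverse-CDF sampling.
weibull2 = lambda n, rng: [math.sqrt(-math.log(1.0 - rng.random())) for _ in range(n)]
print(estimated_power(30, weibull2))
```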

7. Hypothesis Testing Under Censored Data

Testing for censored data is a crucial aspect of statistical analysis, particularly in fields such as survival analysis, reliability engineering, and biomedical research, where complete information on all observations is not always available. Censored data occur when the value of an observation is only partially known, for example, when a study ends before an event occurs or a subject drops out early. This incomplete data poses challenges for traditional statistical methods, which assume fully observed outcomes. As a result, specialized techniques and tests have been developed to handle censoring appropriately, including the Kaplan–Meier estimator, log-rank test, and likelihood-based methods. These methods account for the censored nature of the data, allowing researchers to make valid inferences about population parameters, compare survival distributions, and assess model fit. Proper handling of censored data ensures that statistical conclusions remain accurate and unbiased, preserving the integrity of the analysis despite missing or incomplete information.
An approach for hypothesis testing under censored conditions involves contrasting H 0 and H 1 using a test statistic derived from randomly right-censored data. In practical scenarios such as clinical trials or reliability studies, such censored observations may be the only information available, especially when subjects drop out or are lost before the completion of the study. The structure of this type of experiment can be formally described as follows: Suppose n items are under observation, and $X_1, X_2, \ldots, X_n$ denote their true lifetimes, assumed to be independent and identically distributed (i.i.d.) from a continuous life distribution F. In parallel, let $Y_1, Y_2, \ldots, Y_n$ be i.i.d. censoring times drawn from another continuous distribution G, with the X's and Y's assumed independent. The observed data then form the randomly right-censored sample $(Z_j, \delta_j)$, for $j = 1, \ldots, n$, where $Z_j = \min(X_j, Y_j)$ and
$$\delta_j = \begin{cases} 1, & \text{if } Z_j = X_j \ (j\text{-th observation is uncensored}), \\ 0, & \text{if } Z_j = Y_j \ (j\text{-th observation is censored}). \end{cases}$$
Let $Z_{(0)} = 0 < Z_{(1)} < Z_{(2)} < \cdots < Z_{(n)}$ denote the ordered values of the Z's, where $\delta_{(j)}$ is the $\delta_j$ associated with $Z_{(j)}$. Based on the censored data $(Z_j, \delta_j)$, $j = 1, \ldots, n$, Kaplan and Meier [24] introduced the product-limit estimator
$$\bar{F}_n(X) = \prod_{[\,j:\, Z_{(j)} \le X\,]} \left(\frac{n-j}{n-j+1}\right)^{\delta_{(j)}}; \qquad X \in \left[0, Z_{(n)}\right],$$
and
$$\xi_c(\beta,\lambda) = \beta\lambda\,\mu^2\,\phi(\lambda) - \mu^2\,\phi(\lambda) - \frac{1}{\beta}\,\mu\,\phi(\beta) + \frac{\beta}{\lambda^2}\,\mu\,\phi(\lambda) - \frac{1}{\beta^2}\,\phi(\beta) + \frac{1}{\lambda^2}\,\phi(\lambda) - \frac{\beta}{\lambda^2}\,\mu + \frac{1}{\lambda}\,\mu - \left(\frac{1}{\lambda^2} + \frac{1}{\beta^2}\right),$$
where $\phi(\beta) = \int_0^\infty e^{-\beta x}\, dF_n(x)$. To test $H_0: \xi_c = 0$ against $H_1: \xi_c > 0$ based on the randomly right-censored data, we propose a test statistic. For computational purposes, $\xi_c$ can be rewritten as
$$\xi_c(\beta,\lambda) = \beta\lambda\,\Omega^2\,\Psi - \Omega^2\,\Psi - \frac{1}{\beta}\,\Omega\,\eta + \frac{\beta}{\lambda^2}\,\Omega\,\Psi - \frac{1}{\beta^2}\,\eta + \frac{1}{\lambda^2}\,\Psi - \frac{\beta}{\lambda^2}\,\Omega + \frac{1}{\lambda}\,\Omega - \left(\frac{1}{\lambda^2} + \frac{1}{\beta^2}\right),$$
where
$$\Omega = \sum_{s=1}^{n} \prod_{m=1}^{s-1} C_m^{\delta_{(m)}} \left(Z_{(s)} - Z_{(s-1)}\right), \qquad \eta = \sum_{j=1}^{n} e^{-\beta Z_{(j)}} \left[\prod_{p=1}^{j-2} C_p^{\delta_{(p)}} - \prod_{p=1}^{j-1} C_p^{\delta_{(p)}}\right], \qquad \Psi = \sum_{j=1}^{n} e^{-\lambda Z_{(j)}} \left[\prod_{p=1}^{j-2} C_p^{\delta_{(p)}} - \prod_{p=1}^{j-1} C_p^{\delta_{(p)}}\right],$$
$$C_m^{\delta_{(m)}} = \left(\frac{n-m}{n-m+1}\right)^{\delta_{(m)}},$$
and
$$dF_n(Z_j) = \bar{F}_n(Z_{j-1}) - \bar{F}_n(Z_j), \qquad c_k = \frac{n-k}{n-k+1}.$$
Test invariance is achieved by letting
$$\hat{\Delta}_c = \frac{\xi_c}{\bar{Z}^3}, \quad \text{where } \bar{Z} = \frac{1}{n}\sum_{i=1}^{n} Z_{(i)}.$$
Lemma 3 
(Scale invariance under independent censoring). Suppose the observed data are $(Z_j, \delta_j)$ with $Z_j = \min(X_j, Y_j)$, $\delta_j = \mathbf{1}\{X_j \le Y_j\}$, and assume independent censoring as above. Let every lifetime $X_j$ be scaled by $c > 0$ and, simultaneously, every censoring time $Y_j$ be scaled by c. Then the Kaplan–Meier estimate $\hat{F}_n$ and the plug-in statistic $\hat{\Delta}_c$ (when normalized by $\bar{Z}^3$, with $\bar{Z}$ the mean of the Z's) are exactly invariant under the common scaling. If only the lifetimes $X_j$ are scaled and the censoring times $Y_j$ are unchanged, then under $H_0$ (where the lifetime distribution is exponential with rate λ), scaling $X_j$ by c is equivalent to changing λ to $\lambda/c$; because normalization uses the sample mean (or its consistent Kaplan–Meier analog), this change cancels in the limit, and the test statistic is asymptotically invariant provided the Kaplan–Meier mean estimator is consistent and $\sqrt{n}$-consistent.
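Assuming the reconstruction of $\Omega$, $\eta$, $\Psi$, and $\xi_c$ given above, the censored-data statistic can be computed directly from the pairs $(Z_j, \delta_j)$; the following is a sketch of that reconstruction, not the authors' code:

```python
import math

def delta_c(z, d, beta=0.1, lam=2.5):
    """Sketch of the censored-data statistic Delta-hat_c from observed times z
    and censoring indicators d (1 = uncensored); sign conventions follow the
    reconstructed formulas in the text."""
    pairs = sorted(zip(z, d))
    z = [0.0] + [p[0] for p in pairs]     # prepend Z_(0) = 0; 1-based indexing
    d = [None] + [p[1] for p in pairs]
    n = len(pairs)
    C = [((n - m) / (n - m + 1)) ** d[m] for m in range(1, n + 1)]  # C_m^{delta_(m)}

    def prodC(k):                          # product of C_1 .. C_k (empty = 1)
        out = 1.0
        for c in C[:max(k, 0)]:
            out *= c
        return out

    omega = sum(prodC(s - 1) * (z[s] - z[s - 1]) for s in range(1, n + 1))
    eta = sum(math.exp(-beta * z[j]) * (prodC(j - 2) - prodC(j - 1))
              for j in range(1, n + 1))
    psi = sum(math.exp(-lam * z[j]) * (prodC(j - 2) - prodC(j - 1))
              for j in range(1, n + 1))

    xi = (beta * lam * omega**2 * psi - omega**2 * psi
          - (1 / beta) * omega * eta + (beta / lam**2) * omega * psi
          - (1 / beta**2) * eta + (1 / lam**2) * psi
          - (beta / lam**2) * omega + (1 / lam) * omega
          - (1 / lam**2 + 1 / beta**2))
    zbar = sum(p[0] for p in pairs) / n
    return xi / zbar**3                    # normalization per the text

print(delta_c([1.2, 0.7, 3.1, 0.4, 2.2], [1, 1, 0, 1, 1]))
```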
Table 9 and Figure 4 present the critical percentile values of the $\hat{\Delta}_c$ test for samples of sizes $n = 10, 20, \ldots, 100$. The critical values of the null Monte Carlo distribution were obtained in Mathematica 10 from the standard exponential distribution with $\beta = 0.1$, $\lambda = 2.5$, and 10,000 replications. It is evident from Figure 4 and Table 9 that the critical values tend to increase with higher confidence levels and decrease with larger sample sizes.

Test Power Estimates Δ c ( β , λ )

We examined different values of the parameter θ using 10,000 samples of sizes $n = 10, 20$, and 30 from the Weibull, LFR, and Gamma distributions, respectively. Random right censoring was implemented by generating censoring times from an independent exponential distribution, with the rate chosen so that the expected censoring proportion was approximately 20%. This percentage is consistent with typical survival-data scenarios and ensures a balance between observed and censored lifetimes. We evaluated the test's power while maintaining a significance level of $\alpha = 0.05$. Table 10 shows that the test $\Delta_c(0.1, 2.5)$ achieved adequate power estimates for all parameter choices.
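When the lifetimes themselves are standard exponential, the 20% target determines the exponential censoring rate in closed form: for $Y \sim \mathrm{Exp}(\rho)$ independent of $X \sim \mathrm{Exp}(1)$, $P(Y < X) = \rho/(1+\rho)$, so $\rho = 0.2/0.8 = 0.25$. A quick simulation check (a sketch, not from the paper; for non-exponential lifetimes the rate would be tuned by simulation):

```python
import random

# Target censoring proportion p gives rho = p / (1 - p) for exponential lifetimes.
rho = 0.2 / (1 - 0.2)          # 0.25
rng = random.Random(4)
reps = 100_000
censored = sum(rng.expovariate(rho) < rng.expovariate(1.0) for _ in range(reps))
print(censored / reps)         # close to 0.20
```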

8. Analysis of Data with Both Censored and Uncensored Observations

Certain types of data present only a partial view, often due to censorship or missing values, which reflect gaps or limitations in the available information. These censored data points can complicate the analytical process, as they require the application of specialized statistical methods and assumptions to extract meaningful insights. In contrast, complete or uncensored data offer a more comprehensive and direct understanding of the phenomenon under study, as they include all relevant observations without restrictions. Analyzing uncensored data is generally more straightforward, allowing for the use of standard techniques and reducing the complexity associated with incomplete datasets. As a result, while censored data present valuable information, they demand more careful handling to ensure accurate and reliable conclusions.

8.1. Non-Censored Data

8.1.1. Dataset I: Polyester Fiber

This dataset relates to thirty measurements of the tensile strength of polyester fiber that Al-Omari and Dobbah [25] considered. Here are the observations: 0.023, 0.032, 0.054, 0.069, 0.081, 0.094, 0.105, 0.127, 0.148, 0.169, 0.188, 0.216, 0.255, 0.277, 0.311, 0.361, 0.376, 0.395, 0.432, 0.463, 0.481, 0.519, 0.529, 0.567, 0.642, 0.674, 0.752, 0.823, 0.887, 0.926.
To investigate the characteristics of the data, several nonparametric plots were employed. As illustrated in Figure 5, the distribution is unimodal with slight skewness and no apparent outliers. Moreover, the corresponding hazard function demonstrates a generally increasing pattern. The calculated value, $\hat{\Delta}(0.1, 2.5) = 0.596361$, exceeds the critical value listed in Table 5. This result supports the presence of the EBUCL property in the dataset, leading us to reject the exponentiality hypothesis $H_0$ in favor of the alternative $H_1$.
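One standard way to visualize the increasing-hazard behavior reported for this dataset is the scaled total-time-on-test (TTT) transform, whose points lie above the 45-degree diagonal for increasing failure rates; a minimal sketch (not part of the paper's analysis):

```python
# Scaled TTT transform for Dataset I (polyester fiber tensile strengths).
data = sorted([0.023, 0.032, 0.054, 0.069, 0.081, 0.094, 0.105, 0.127, 0.148,
               0.169, 0.188, 0.216, 0.255, 0.277, 0.311, 0.361, 0.376, 0.395,
               0.432, 0.463, 0.481, 0.519, 0.529, 0.567, 0.642, 0.674, 0.752,
               0.823, 0.887, 0.926])
n, total = len(data), sum(data)
# TTT_i = [sum of the i smallest values + (n - i) * x_(i)] / (sum of all values)
ttt = [(sum(data[:i]) + (n - i) * data[i - 1]) / total for i in range(1, n + 1)]
above = sum(t > i / n for i, t in enumerate(ttt, start=1))
print(f"{above}/{n} scaled-TTT points lie above the diagonal")
```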

8.1.2. Dataset II: Electrical Data

Alsadat et al. [26] examined experimental failure time data, recorded in seconds, for two varieties of electrical insulation subjected to a constant voltage load. For each insulation type, thirty samples were collected and evaluated under identical testing conditions.
  • The failure times of the first type, X, are as follows: 0.097, 0.014, 0.030, 0.134, 0.240, 0.084, 0.146, 0.024, 0.045, 0.004, 0.099, 0.277, 0.472, 0.094, 0.023, 0.146, 0.030, 0.031, 0.104, 0.105, 0.036, 0.065, 0.022, 0.098, 0.178, 0.059, 0.014, 0.007, 0.007, 0.286.
  • The failure times of the second type, Y, are as follows: 0.084, 0.236, 0.315, 0.199, 0.252, 0.103, 0.455, 0.135, 0.348, 0.321, 0.166, 0.040, 0.027, 0.519, 0.017, 0.821, 0.942, 0.270, 0.008, 0.030, 0.177, 0.268, 0.180, 0.796, 0.245, 0.703, 0.045, 0.314, 0.281, 0.652.
Figure 6 shows nonparametric plots for dataset II-X. The distribution appears multimodal and positively skewed, with no extreme outliers. Figure 7 displays the plots for dataset II-Y. The distribution is also multimodal but more spread out and asymmetrical. Comparing the calculated value $\hat{\Delta}(0.1, 2.5) = 0.151165$ with the critical value in Table 5 shows that the type-X data are consistent with an exponential character. For the type-Y data, however, $\hat{\Delta}(0.1, 2.5) = 0.400354$ exceeds the critical value in Table 5; these data therefore exhibit the EBUCL property rather than the exponentiality stated in $H_0$, and we reject $H_0$ for type Y.

8.1.3. Dataset III: The Long Glass Fibers

This collection of experimental data on the strength of 15 cm long glass fibers comes from Meintanis et al. [27]. The 46 observations listed below provide the observations: 0.37, 0.40, 0.70, 0.75, 0.80, 0.81, 0.83, 0.86, 0.92, 0.92, 0.94, 0.95, 0.98, 1.03, 1.06, 1.06, 1.08, 1.09, 1.10, 1.10, 1.13, 1.14, 1.15, 1.17, 1.20, 1.20, 1.21, 1.22, 1.25, 1.28, 1.28, 1.29, 1.29, 1.30, 1.35, 1.35, 1.37, 1.37, 1.38, 1.40, 1.40, 1.42, 1.43, 1.51, 1.53, 1.61.
Figure 8 presents the plots for dataset III (long glass fibers). The data show a clear multimodal distribution with a slight right skew. In this instance, the calculated value, $\hat{\Delta}(0.1, 2.5) = 0.519071$, is above the critical limit stated in Table 5. This implies that the data exhibit the EBUCL property rather than the exponentiality claimed by $H_0$, which we therefore reject.

8.1.4. Dataset IV: Strengths of Glass Fibers

This dataset records the strengths of glass fibers. The data, acquired by Ahmad et al. [28], are as follows: 1.014, 1.248, 1.267, 1.271, 1.272, 1.275, 1.276, 1.355, 1.361, 1.364, 1.379, 1.409, 1.426, 1.459, 2.456, 2.592, 1.292, 1.081, 1.082, 1.185, 1.223, 1.304, 1.306, 1.46, 1.476, 1.481, 1.484, 1.501, 1.506, 1.524, 1.526, 1.535, 1.541, 1.568, 1.735, 1.278, 1.286, 1.288, 1.867, 1.876, 1.878, 1.91, 1.916, 1.972, 2.012, 1.747, 1.748, 1.757, 1.8, 1.579, 1.581, 1.591, 1.593, 1.602, 1.666, 1.67, 1.684, 1.691, 1.704, 1.731, 1.806, 3.197, 4.121.
Figure 9 indicates that the data exhibit a slightly skewed, multimodal distribution with noticeable extreme values. The plots also suggest that the data are not symmetrically shaped. Moreover, the associated hazard function demonstrates an upward trend. The calculated value in this instance, $\hat{\Delta}(0.1, 2.5) = 0.347803$, exceeds the critical limit shown in Table 5; the data therefore satisfy the EBUCL property at the $\alpha = 0.05$ significance level.

8.1.5. Dataset V: Breaking Stress

A dataset from Ahmad et al.’s [28] study on the breaking stress of 100 carbon fibers (in Gba) was analyzed. The lifespans are the amounts of time that pass before the fibers break: “0.92, 0.928, 0.997, 0.9971, 1.061, 1.117, 1.162, 1.183, 1.187, 1.192, 1.196, 1.213, 1.215, 1.2199, 1.22, 1.224, 1.225, 1.228, 1.237, 1.24, 1.244, 1.259, 1.261, 1.263, 1.276, 1.31, 1.321, 1.329, 1.331, 1.337, 1.351, 1.359, 1.388, 1.408, 1.449, 1.4497, 1.45, 1.459, 1.471, 1.475, 1.477, 1.48, 1.489, 1.501, 1.507, 1.515, 1.53, 1.5304, 1.533, 1.544, 1.5443, 1.552, 1.556, 1.562, 1.566, 1.585, 1.586, 1.599, 1.602, 1.614, 1.616, 1.617, 1.628, 1.684, 1.711, 1.718, 1.733, 1.738, 1.743, 1.759, 1.777, 1.794, 1.799, 1.806, 1.814, 1.816, 1.828, 1.83, 1.884, 1.892, 1.944, 1.972, 1.984, 1.987, 2.02, 2.0304, 2.029, 2.035, 2.037, 2.043, 2.046, 2.059, 2.111, 2.165, 2.686, 2.778, 2.972, 3.504, 3.863, 5.306”.
Figure 10 shows that the data possess a moderately skewed, multimodal distribution with some extreme values present. The visual patterns confirm that the distribution lacks symmetry. In addition, the hazard function reveals an increasing trend, reflecting a growing likelihood of failure as time progresses. The calculated value in this instance, $\hat{\Delta}(0.1, 2.5) = 0.324445$, exceeds the critical threshold shown in Table 5; the data therefore satisfy the EBUCL property at the $\alpha = 0.05$ significance level.

8.1.6. Dataset VI: Tensile Strength

Because of its excellent tensile strength, low density, electrical conductivity, chemical stability, and high thermal conductivity, carbon fiber is widely employed in industry; fibers are used today to create components that must be both strong and lightweight. In this application, the tensile strength of impregnated 1000-carbon-fiber tows and single carbon fibers, measured in gigapascals (GPa) at a gauge length of 20 mm, is investigated. These data were addressed by Ahmad et al. [29] and are displayed below: 1.312, 1.314, 1.479, 1.552, 1.7, 1.803, 1.861, 1.865, 1.944, 1.958, 1.966, 1.997, 2.006, 2.021, 2.027, 2.055, 2.063, 2.098, 2.14, 2.179, 2.224, 2.24, 2.253, 2.27, 2.272, 2.274, 2.301, 2.301, 2.359, 2.382, 2.426, 2.434, 2.435, 2.382, 2.478, 2.554, 2.514, 2.511, 2.49, 2.535, 2.566, 2.57, 2.586, 2.629, 2.8, 2.773, 2.77, 2.809, 3.585, 2.818, 2.642, 2.726, 2.697, 2.684, 2.648, 2.633, 3.128, 3.09, 3.096, 3.233, 2.821, 2.88, 2.848, 2.818, 3.067, 2.821, 2.954, 2.809, 3.585, 3.084, 3.012, 2.88, 2.848, 3.433.
Figure 11 illustrates that the data follow a unimodal distribution, with no apparent outliers and a fairly symmetric shape. The nonparametric plots support the absence of skewness or irregularities. Moreover, the hazard function indicates an upward trend. The calculated value in this instance, $\hat{\Delta}(0.1, 2.5) = 0.214283$, exceeds the critical threshold shown in Table 5; the data therefore satisfy the EBUCL property at the $\alpha = 0.05$ significance level.

8.1.7. Dataset VII: Glass Fiber Strengths

Ekemezie et al. [30] examined the dataset, which includes 63 observations of 1.5 cm glass fiber strengths obtained at the National Physical Laboratory in England. The data is “0.55, 0.74, 0.77, 0.81, 0.84, 0.93, 1.04, 1.11, 1.13, 1.24, 1.25, 1.27, 1.28, 1.29, 1.30, 1.36, 1.39, 1.42, 1.48, 1.48, 1.49, 1.49, 1.50, 1.50, 1.51, 1.52, 1.53, 1.54, 1.55, 1.55, 1.58, 1.59, 1.60, 1.61, 1.61, 1.61, 1.61, 1.62, 1.62, 1.63, 1.64, 1.66, 1.66, 1.66, 1.67, 1.68, 1.68, 1.69, 1.70, 1.70, 1.73, 1.76, 1.76, 1.77, 1.78, 1.81, 1.82, 1.84, 1.84, 1.89, 2.00, 2.01, 2.24”.
Figure 12 demonstrates that the data follow a multimodal distribution, display no significant outliers, and have an asymmetrical structure. The shape of the plots confirms the lack of symmetry. Furthermore, the hazard rate shows an increasing pattern. The calculated value in this instance, $\hat{\Delta}(0.1, 2.5) = 0.387345$, substantially exceeds the critical limit in Table 5; hence the data satisfy the EBUCL property at the $\alpha = 0.05$ significance level.

8.1.8. Dataset VIII: Carbon Fibers

One hundred uncensored observations measuring the breaking stress of carbon fibers, expressed in gigapascals (GPa), make up the dataset under review. Shehata et al. [31] published these results, which are a useful tool for researching the mechanical characteristics of carbon fibers under stress. Because carbon fibers are strong and lightweight, they are utilized extensively in industries like aerospace and automotive, where it is essential to understand their breaking stress. The data is 3.70, 2.74, 2.73, 2.50, 3.60, 3.11, 3.27, 2.87, 1.47, 3.11, 4.42, 2.41, 3.19, 3.22, 1.69, 3.28, 3.09, 1.87, 3.15, 4.90, 3.75, 2.43, 2.95, 2.97, 3.39, 2.96, 2.53, 2.67, 2.93, 3.22, 3.39, 2.81, 4.20, 3.33, 2.55, 3.31, 3.31, 2.85, 2.56, 3.56, 3.15, 2.35, 2.55, 2.59, 2.38, 2.81, 2.77, 2.17, 2.83, 1.92, 1.41, 3.68, 2.97, 1.36, 0.98, 2.76, 4.91, 3.68, 1.84, 1.59, 3.19, 1.57, 0.81, 5.56, 1.73, 1.59, 2.00, 1.22, 1.12, 1.71, 2.17, 1.17, 5.08, 2.48, 1.18, 3.51, 2.17, 1.69, 1.25, 4.38, 1.84, 0.39, 3.68, 2.48, 0.85, 1.61, 2.79, 4.70, 2.03, 1.80, 1.57, 1.08, 2.03, 1.61, 2.12, 1.89, 2.88, 2.82, 2.05, 3.65.
To examine the properties of the data, nonparametric plots were generated. Figure 13 reveals that the data exhibit a unimodal distribution, are free from extreme values, and display a moderately asymmetric shape. Furthermore, the hazard rate indicates an increasing pattern over time. The calculated value, $\hat{\Delta}(0.1, 2.5) = 0.175172$, exceeds the critical limit shown in Table 5. This implies that the data exhibit the EBUCL property rather than the exponentiality claimed by $H_0$, which we therefore reject.

8.2. Censored Data

8.2.1. Dataset IX: Lung Cancer

Consider the dataset provided by Yu and Zhang [32], which presents the survival times, in weeks, of 61 patients diagnosed with terminal lung cancer and treated with cyclophosphamide. Among these patients, treatment was halted for some due to deteriorating health, resulting in 28 censored observations and 33 uncensored ones:
  • Censored observations: 0.14, 0.14, 0.29, 0.43, 0.57, 0.57, 1.86, 3, 3, 3.29, 3.29, 6, 6, 6.14, 8.71, 10.57, 11.86, 15.57, 16.57, 17.29, 18.71, 21.29, 23.86, 26, 27.57, 32.14, 33.14, 47.29.
  • Uncensored observations: 0.43, 2.86, 3.14, 3.14, 3.43, 3.43, 3.71, 3.86, 6.14, 6.86, 9, 9.43, 10.71, 10.86, 11.14, 13, 14.43, 15.71, 18.43, 18.57, 20.71, 29.14, 29.71, 40.57, 48.57, 49.43, 53.86, 61.86, 66.57, 68.71, 68.96, 72.86, 72.86.
The data visualization plots are presented in Figure 14 and Figure 15. The analysis reveals the presence of extreme observations, and the distribution is skewed to the right. Considering all of the data, both censored and uncensored, the computed statistic $\Delta_c(0.1, 2.5) = 2.27373 \times 10^{-54}$ is smaller than the critical value in Table 9. We therefore conclude that the data are exponential.

8.2.2. Dataset X: Melanoma Patient

Take the data of Das and Ghosh [33] as an example, which report the survival times of melanoma patients. The 46 complete lifetimes (uncensored data) are: 13, 14, 19, 19, 20, 21, 23, 23, 25, 26, 26, 27, 27, 31, 32, 34, 34, 37, 38, 38, 40, 46, 50, 53, 54, 57, 58, 59, 60, 65, 65, 66, 70, 85, 90, 98, 102, 103, 110, 118, 124, 130, 136, 138, 141, 234 (Figure 16).
The 35 censored observations are as follows: 16, 21, 44, 50, 55, 67, 73, 76, 80, 81, 86, 93, 100, 108, 114, 120, 124, 125, 129, 130, 132, 134, 140, 147, 148, 151, 152, 152, 158, 181, 190, 193, 194, 213, 215.
Figure 17 indicates that the data follow a unimodal distribution, with no apparent extreme values, and display an asymmetric shape. In addition, the hazard rate shows a clear increasing trend. Taking all of the survival times into account, both censored and uncensored, we obtain $\Delta_c(0.1, 2.5) = 1.06219 \times 10^{-119}$, which is less than the critical value in Table 9. Consequently, we conclude that these data exhibit exponential characteristics.

9. Conclusions

Reliability refers to the consistency of measurements or system performance over time and is crucial for the development of effective testing methods and analytical tools. In statistics and engineering, reliable results ensure meaningful comparisons across datasets and enhance the credibility of conclusions. While reliability focuses on consistency, validity pertains to the accuracy of the measurements, highlighting that a test can be reliable without necessarily being valid. In industrial engineering, reliability is associated with a system’s ability to perform its intended function efficiently over an extended period, despite the complexity of modern systems composed of interdependent components. The failure of a single part can compromise the entire system, making reliability analysis essential. This has led to the development of reliability theory, which emphasizes the quantification and evaluation of product performance. Additionally, foundational concepts such as symmetry, asymmetry, and stochastic ordering of probability distributions play a key role in fields like reliability, survival analysis, and economics. This study introduced a new nonparametric test for assessing exponentiality against the recently defined EBUCL class of life distributions. The test was constructed using Laplace transform ordering and demonstrated strong theoretical properties, including asymptotic normality and high Pitman asymptotic efficiency under various alternatives such as Weibull, Makeham, and LFR distributions. In selecting the alternatives and their parameters, our intention was to ensure both theoretical relevance and practical representativeness. The Weibull, Makeham, linear failure rate (LFR), and Gamma distributions were chosen because they constitute standard benchmarks in the literature on exponentiality testing and are frequently encountered in reliability and survival analysis. 
Their parameter values were specified to capture a spectrum of departures from exponentiality: for example, the Weibull with a shape parameter of 2 reflects a monotone increasing hazard, while the Gamma with a shape parameter of ≥3 provides heavier-tailed behavior. These settings enable us to evaluate the proposed test not only under strong alternatives, which emphasize the test’s power, but also under moderate deviations commonly observed in applied data. The parameter choices are further guided by Pitman’s asymptotic efficiency analysis, ensuring that the scenarios examined are theoretically informative as well as practically meaningful. Through extensive Monte Carlo simulations, critical values were obtained, and the power of the proposed test was shown to surpass several existing methods. Additionally, the test was successfully extended to accommodate right-censored data, ensuring its applicability in survival and reliability analyses. Empirical applications to a variety of real-world datasets, both censored and uncensored, further confirmed the robustness and versatility of the proposed method. For each dataset, we report the standardized statistic, the p-value, and a clear decision: “At the 5 % significance level, we reject H 0 in favor of H 1 ” (or “do not reject H 0 ”). This conforms to standard statistical reporting. The findings validate the practical utility of the test in detecting deviations from exponentiality and support its relevance in engineering, medical, and actuarial sciences.

Author Contributions

Conceptualization, O.S.B. and R.M.E.-S.; Methodology, W.B.H.E., M.E.B., A.M.A., O.S.B., and R.M.E.-S.; Software, W.B.H.E. and R.M.E.-S.; Validation, W.B.H.E., M.E.B., O.S.B., and R.M.E.-S.; Formal analysis, M.E.B., A.M.A., and R.M.E.-S.; Investigation, W.B.H.E., A.M.A., and R.M.E.-S.; Resources, W.B.H.E., M.E.B., A.M.A., and O.S.B.; Data curation, M.E.B., A.M.A., O.S.B., and R.M.E.-S.; Writing—original draft, W.B.H.E. and A.M.A.; Writing—review and editing, R.M.E.-S.; Visualization, M.E.B. and O.S.B.; Supervision, A.M.A. and R.M.E.-S.; Funding acquisition, A.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

Ongoing Research Funding program, (ORF-2025-1004), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This study was funded by Ongoing Research Funding program, (ORF-2025-1004), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bryson, M.C.; Siddiqui, M.M. Some criteria for ageing. J. Am. Stat. Assoc. 1969, 64, 1472–1483. [Google Scholar] [CrossRef]
  2. Barlow, R.E.; Proschan, F. Statistical Theory of Reliability and Life Testing; Hold, Reinhart and Wiston, Inc.: Silver Spring, MD, USA, 1981. [Google Scholar]
  3. Klefsjo, B. The HNBUE and HNWUE classes of life distributions. Nav. Res. Logist. Q. 1982, 29, 331–344. [Google Scholar] [CrossRef]
  4. Kumazawa, Y. A class of tests statistics for testing whether new is better than used. Commun. Stat.-Theory Methods 1983, 12, 311–321. [Google Scholar] [CrossRef]
  5. Cuparić, M.; Milošević, B. New characterization-based exponentiality tests for randomly censored data. Test 2022, 31, 461–487. [Google Scholar] [CrossRef]
  6. Hassan, N.A.; Said, M.M.; Attwa, R.A.E.W.; Radwan, T. Applying the Laplace transform procedure, testing exponentiality against the NBRUmgf class. Mathematics 2024, 12, 2045. [Google Scholar] [CrossRef]
  7. Etman, W.B.; Bakr, M.E.; Balogun, O.S.; EL-Sagheer, R.M. A novel statistical test for life distribution analysis: Assessing exponentiality against EBUCL class with applications in sustainability and reliability data. Axioms 2025, 14, 140. [Google Scholar] [CrossRef]
  8. Etman, W.B.; Eliwa, M.S.; Alqifari, H.N.; El-Morshedy, M.; Al-Essa, L.A.; EL-Sagheer, R.M. The NBRULC reliability class: Mathematical theory and goodness-of-fit testing with applications to asymmetric censored and uncensored data. Mathematics 2023, 11, 2805. [Google Scholar] [CrossRef]
  9. Mohamed, N.M.; Mansour, M.M.M. Testing exponentiality against NBUC class with some applications in health and engineering fields. Front. Sci. Res. Technol. 2024, 8, 90–97. [Google Scholar] [CrossRef]
  10. El-Arishy, S.M.; Diab, L.S.; El-Atfy, E.S. Characterizations on decreasing laplace transform of time to failure class and hypotheses testing. J. Comput. Sci. Comput. 2020, 10, 49–54. [Google Scholar] [CrossRef]
  11. Gadallah, A.M. Testing EBUmgf class of life distributions based on Laplace transform technique. J. Stat. Probab. 2017, 6, 471–477. [Google Scholar]
  12. Abu-Youssef, S.E.; El-Toony, A.A. A new class of life distribution based on Laplace transform and it’s applications. Inf. Sci. Lett. 2022, 11, 355–362. [Google Scholar] [CrossRef]
  13. Bakr, M.E.; Al-Babtain, A.A. Non-parametric hypothesis testing for unknown aged class of life distribution using real medical data. Axioms 2023, 12, 369. [Google Scholar] [CrossRef]
  14. Atallah, M.A.; Mahmoud, M.A.W.; Alzahrani, B.M. A new test for exponentiality versus NBUmgf life distribution based on Laplace transform. Qual. Reliab. Eng. Int. 2014, 30, 1353–1359. [Google Scholar] [CrossRef]
15. Al-Gashgari, F.H.; Shawky, A.I.; Mahmoud, M.A.W. A non-parametric test for exponentiality against NBUCA class of life distribution based on Laplace transform. Qual. Reliab. Eng. Int. 2016, 32, 29–36. [Google Scholar] [CrossRef]
  16. Abu-Youssef, S.E.; Ali, N.S.A.; El-Toony, A.A. Nonparametric test for a class of lifetime distribution UBAC(2) based on the Laplace transform. J. Egypt. Math. Soc. 2022, 30, 9. [Google Scholar] [CrossRef]
17. Mansour, M.M.M. Assessing treatment methods via testing exponential property for clinical data. J. Stat. Appl. Probab. 2022, 11, 109–113. [Google Scholar]
  18. Mansour, M.M.M. Non-parametric statistical test for testing exponentiality with applications in medical research. Stat. Med. Res. 2019, 29, 413–420. [Google Scholar] [CrossRef]
  19. Abu-Youssef, S.E.; Bakr, M.E. Laplace transform order for unknown age class of life distribution with some applications. J. Stat. Appl. Probab. 2018, 7, 105–114. [Google Scholar] [CrossRef]
  20. Lee, A.J. U-Statistics; Marcel Dekker: New York, NY, USA, 1989. [Google Scholar]
  21. Pitman, E.J.G. Some Basic Theory for Statistical Inference; Chapman & Hall: London, UK, 1979. [Google Scholar]
22. Mugdadi, A.R.; Ahmad, I.A. Moment inequalities derived from comparing life with its equilibrium form. J. Stat. Plan. Inference 2005, 134, 303–317. [Google Scholar]
  23. Kango, A.I. Testing for new is better than used. Commun. Stat.-Theory Methods 1993, 12, 311–321. [Google Scholar]
24. Kaplan, E.L.; Meier, P. Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc. 1958, 53, 457–481. [Google Scholar] [CrossRef]
  25. Al-Omari, A.I.; Dobbah, S.A. On the mixture of Shanker and gamma distributions with applications to engineering data. J. Radiat. Res. Appl. Sci. 2023, 16, 100533. [Google Scholar] [CrossRef]
  26. Alsadat, N.; Almetwally, E.M.; Elgarhy, M.; Ahmad, H.; Marei, G.A. Bayesian and non-Bayesian analysis with MCMC algorithm of stress-strength for a new two parameters lifetime model with applications. AIP Adv. 2023, 13, 095203. [Google Scholar] [CrossRef]
  27. Meintanis, S.G.; Milošević, B.; Jiménez–Gamero, M.D. Goodness-of-fit tests based on the min-characteristic function. Comput. Stat. Data Anal. 2024, 197, 107988. [Google Scholar] [CrossRef]
  28. Ahmad, A.; Alghamdi, F.M.; Ahmad, A.; Albalawi, O.; Zaagan, A.A.; Zakarya, M.; Almetwally, E.M.; Mekiso, G.T. New Arctan-generator family of distributions with an example of Frechet distribution: Simulation and analysis to strength of glass and carbon fiber data. Alex. Eng. J. 2024, 100, 42–52. [Google Scholar] [CrossRef]
  29. Ahmad, H.H.; Abo-Kasem, O.E.; Rabaiah, A.; Elshahhat, A. Survival analysis of newly extended Weibull data via adaptive progressive Type-II censoring and its modeling to Carbon fiber and electromigration. AIMS Math. 2025, 10, 10228–10262. [Google Scholar] [CrossRef]
  30. Ekemezie, D.F.N.; Alghamdi, F.M.; Aljohani, H.M.; Riad, F.H.; Abd El-Raouf, M.M.; Obulezi, O.J. A more flexible Lomax distribution: Characterization, estimation, group acceptance sampling plan and applications. Alex. Eng. J. 2024, 109, 520–531. [Google Scholar] [CrossRef]
  31. Shehata, W.A.; Aljadani, A.; Mansour, M.M.; Alrweili, H.; Hamed, M.S.; Yousof, H.M. A novel reciprocal-Weibull model for extreme reliability data: Statistical properties, reliability applications, reliability PORT-VaR and mean of order P risk analysis. Pak. Stat. Oper. Res. 2024, 20, 693–718. [Google Scholar] [CrossRef]
  32. Yu, H.; Zhang, L. Copula-based semiparametric nonnormal transformed linear model for survival data with dependent censoring. J. Stat. Plan. Inference 2025, 240, 106296. [Google Scholar] [CrossRef]
  33. Das, K.; Ghosh, S. A class of nonparametric tests for DMTTF alternatives based on moment inequality. Stat. Pap. 2025, 66, 47. [Google Scholar] [CrossRef]
Figure 1. The correlation between the critical values, sample size, and degree of confidence.
Figure 2. The correlation between the critical values, sample size, and degree of confidence.
Figure 3. The correlation between the critical values, sample size, and degree of confidence.
Figure 4. Relationship among sample size, confidence level, and critical values.
Figure 5. Nonparametric data visualizations for dataset I.
Figure 6. Nonparametric data visualizations for dataset II-X.
Figure 7. Nonparametric data visualizations for dataset II-Y.
Figure 8. Nonparametric data visualizations for dataset III.
Figure 9. Nonparametric data visualizations for dataset IV.
Figure 10. Nonparametric data visualizations for dataset V.
Figure 11. Nonparametric data visualizations for dataset VI.
Figure 12. Nonparametric data visualizations for dataset VII.
Figure 13. Nonparametric data visualizations for dataset VIII.
Figure 14. Nonparametric data visualizations for dataset IX-A.
Figure 15. Nonparametric data visualizations for dataset IX-B.
Figure 16. Nonparametric data visualizations for dataset X-A.
Figure 17. Nonparametric data visualizations for dataset X-B.
Table 1. Comparison of our test's PAE to that of various other tests.

Test                      Makeham   LFR     Weibull
Mugdadi and Ahmad [22]    0.039     0.408   0.170
Kango [23]                0.144     0.433   0.132
Gadallah [11]             0.145     0.742   0.594
Our test Δ̂(0.1, 0.2)      0.226     0.988   0.897
Our test Δ̂(0.01, 3)       0.266     0.991   1.090
Our test Δ̂(0.1, 2.5)      0.272     0.982   1.114
Table 2. Pitman values corresponding to different sets of parameters.

Test                      Makeham (θ = 2)   LFR (θ = 3)   Weibull (θ = 1.2)
Our test Δ̂(0.1, 0.2)      0.0094            0.0071        0.5072
Our test Δ̂(0.01, 3)       0.0087            0.0107        0.7105
Our test Δ̂(0.1, 2.5)      0.0092            0.0114        0.7425
Table 3. Critical values of the statistic Δ̂(0.1, 0.2).

n      90%        95%        99%
5      0.132381   0.139990   0.153147
10     0.072124   0.077458   0.085393
15     0.057263   0.062109   0.071611
20     0.049682   0.053970   0.061641
25     0.044793   0.049490   0.057716
30     0.041020   0.046517   0.053630
35     0.038360   0.043934   0.048450
40     0.035581   0.040748   0.047222
45     0.033402   0.038636   0.045624
50     0.030740   0.035554   0.042583
55     0.030189   0.034530   0.041209
60     0.029447   0.033506   0.040992
65     0.028375   0.032610   0.039151
70     0.027251   0.031274   0.038007
75     0.025987   0.030747   0.037385
80     0.025678   0.030005   0.036759
85     0.024398   0.028734   0.035350
90     0.023016   0.027027   0.034125
95     0.021859   0.025968   0.033407
100    0.020567   0.024699   0.032035
Table 4. Critical values of the statistic Δ̂(0.01, 3).

n      90%        95%        99%
5      1.48759    1.76472    2.38517
10     0.602490   0.756822   0.967755
15     0.544193   0.624039   0.839033
20     0.424918   0.547405   0.707837
25     0.392984   0.425220   0.654352
30     0.375216   0.408062   0.578600
35     0.356231   0.385086   0.453378
40     0.333083   0.362664   0.439174
45     0.319792   0.341507   0.412928
50     0.299270   0.322771   0.399814
55     0.261301   0.291429   0.368705
60     0.244068   0.268794   0.331664
65     0.211173   0.248680   0.304674
70     0.192491   0.216260   0.284462
75     0.174638   0.197486   0.275657
80     0.164234   0.170639   0.251628
85     0.142706   0.155122   0.231198
90     0.120333   0.147130   0.217063
95     0.115663   0.139733   0.202082
100    0.091231   0.110261   0.179874
Table 5. Critical values of the statistic Δ̂(0.1, 2.5).

n      90%        95%        99%
5      1.287390   1.569860   2.164830
10     0.554219   0.688155   0.876597
15     0.386233   0.488955   0.628620
20     0.321544   0.399036   0.538038
25     0.274414   0.325537   0.478149
30     0.249387   0.305586   0.415197
35     0.214790   0.260454   0.350213
40     0.210200   0.252420   0.339850
45     0.202027   0.238662   0.311667
50     0.192085   0.231428   0.307443
55     0.189854   0.220990   0.281793
60     0.158006   0.206775   0.241050
65     0.149570   0.193384   0.225811
70     0.145536   0.171140   0.215356
75     0.143663   0.165299   0.210371
80     0.140476   0.161961   0.208542
85     0.136217   0.157404   0.200942
90     0.121094   0.151137   0.191495
95     0.117998   0.146989   0.170360
100    0.112143   0.142445   0.166781
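Critical values such as those in Tables 3–5 are obtained by Monte Carlo simulation under the exponential null: simulate many samples from exp(1), evaluate the statistic on each, and take empirical quantiles. The sketch below illustrates that generic recipe only; `demo_stat` is a placeholder Laplace-transform-type functional of our own, not the paper's Δ̂, so its numbers will not match the tables.

```python
import numpy as np

def demo_stat(x, s=1.0):
    """Placeholder statistic (illustrative, not the paper's Δ̂): scale-free
    distance of the empirical Laplace transform from its exp(1) value."""
    xbar = x.mean()
    # Under a unit exponential, E[exp(-s*X/mean)] is close to 1/(1+s).
    return abs(np.mean(np.exp(-s * x / xbar)) - 1.0 / (1.0 + s))

def mc_critical_values(n, n_rep=10_000, levels=(0.90, 0.95, 0.99), seed=42):
    """Empirical null quantiles from n_rep simulated exp(1) samples of size n."""
    rng = np.random.default_rng(seed)
    stats = np.array([demo_stat(rng.exponential(1.0, n)) for _ in range(n_rep)])
    return {lvl: float(np.quantile(stats, lvl)) for lvl in levels}

cvs = mc_critical_values(n=50)
print(cvs)
```

As in the tables, the simulated critical values grow with the confidence level and shrink as the sample size increases.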
Table 6. Estimates of the power of Δ̂(0.1, 0.2).

n     θ    Weibull   Gamma   LFR
10    2    0.769     0.579   0.396
      3    0.992     0.816   0.485
      4    1.000     0.918   0.535
20    2    0.977     0.758   0.563
      3    1.000     0.945   0.709
      4    1.000     0.987   0.760
30    2    0.999     0.816   0.718
      3    1.000     0.973   0.804
      4    1.000     0.997   0.859
Table 7. Estimates of the power of Δ̂(0.01, 3).

n     θ    Weibull   Gamma   LFR
10    2    0.452     0.892   0.493
      3    0.875     0.965   0.694
      4    0.994     0.983   0.752
20    2    0.990     0.950   0.826
      3    1.000     0.994   0.909
      4    1.000     1.000   0.944
30    2    1.000     0.983   0.907
      3    1.000     1.000   0.973
      4    1.000     1.000   0.986
Table 8. Estimates of the power of Δ̂(0.1, 2.5).

n     θ    Weibull   Gamma   LFR
10    2    0.547     0.911   0.550
      3    0.966     0.981   0.693
      4    0.998     0.990   0.788
20    2    0.982     0.956   0.797
      3    1.000     0.997   0.907
      4    1.000     1.000   0.946
30    2    1.000     0.973   0.928
      3    1.000     0.998   0.966
      4    1.000     1.000   0.987
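Power estimates like those in Tables 6–8 come from the companion simulation: draw samples under each alternative (Weibull, gamma, LFR) and record how often the statistic exceeds the null critical value. A hedged sketch of that loop, reusing the same illustrative placeholder statistic (again, not the paper's Δ̂, so the rejection rates will differ from the tables):

```python
import numpy as np

def demo_stat(x, s=1.0):
    """Placeholder statistic (illustrative, not the paper's Δ̂)."""
    xbar = x.mean()
    return abs(np.mean(np.exp(-s * x / xbar)) - 1.0 / (1.0 + s))

def mc_power(sampler, n, crit, n_rep=5_000, seed=7):
    """Rejection rate of the rule 'statistic > crit' under a given alternative."""
    rng = np.random.default_rng(seed)
    return sum(demo_stat(sampler(rng, n)) > crit for _ in range(n_rep)) / n_rep

# 95% null critical value for n = 30, simulated from exp(1)
rng = np.random.default_rng(0)
crit95 = float(np.quantile(
    [demo_stat(rng.exponential(1.0, 30)) for _ in range(5_000)], 0.95
))

# estimated power against a Weibull(theta = 3) alternative
power = mc_power(lambda r, n: r.weibull(3.0, n), 30, crit95)
print(power)
```

Replacing the lambda with a gamma or linear-failure-rate sampler reproduces the other columns of such a table.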
Table 9. Upper percentiles of Δ̂c based on 10,000 replications.

n      90%        95%        99%
10     1017.030   1592.600   3706.060
20     476.400    667.211    1257.120
30     340.227    449.407    758.178
40     262.558    342.080    565.709
50     224.922    284.496    431.964
60     196.498    248.618    374.033
70     176.142    221.852    331.646
80     160.752    200.534    291.902
90     150.189    189.766    277.848
100    137.863    172.849    245.270
Table 10. Estimates of the power of Δ̂c(0.1, 2.5).

n     θ    Weibull   LFR     Gamma
10    2    0.971     0.856   0.914
      3    0.985     0.885   0.928
      4    0.992     0.923   0.956
20    2    0.947     0.897   0.964
      3    0.972     0.957   0.987
      4    0.995     0.984   0.991
30    2    1.000     0.968   1.000
      3    1.000     0.992   1.000
      4    1.000     1.000   1.000
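The censored-data results (Tables 9 and 10) rest on the Kaplan–Meier product-limit estimator [24], which replaces the empirical distribution when observations are right-censored. For reference, a minimal self-contained implementation (our own illustrative sketch, not the paper's code):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.

    times  : observed times (failure or censoring)
    events : 1 for an observed failure, 0 for a right-censored observation
    Returns (time, S_hat(time)) pairs in time order.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    # sort by time; at tied times, process failures before censorings
    order = np.lexsort((1 - events, times))
    times, events = times[order], events[order]
    at_risk, s, out = len(times), 1.0, []
    for t, d in zip(times, events):
        if d == 1:                 # failure: multiply in (r - 1) / r
            s *= (at_risk - 1) / at_risk
        at_risk -= 1               # failed or censored, leaves the risk set
        out.append((float(t), s))
    return out

print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
```

With no censoring the estimate reduces to the empirical survival function; a censored observation shrinks the risk set without dropping the curve.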
Citation: Etman, W.B.H.; Bakr, M.E.; Alshangiti, A.M.; Balogun, O.S.; EL-Sagheer, R.M. A Laplace Transform-Based Test for Exponentiality Against the EBUCL Class with Applications to Censored and Uncensored Data. Mathematics 2025, 13, 3379. https://doi.org/10.3390/math13213379