Article

Estimation and Optimal Censoring Plan for a New Unit Log-Log Model via Improved Adaptive Progressively Censored Data

1 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Statistics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Statistics, Faculty of Commerce, Zagazig University, Zagazig 44519, Egypt
4 Faculty of Technology and Development, Zagazig University, Zagazig 44519, Egypt
* Author to whom correspondence should be addressed.
Axioms 2024, 13(3), 152; https://doi.org/10.3390/axioms13030152
Submission received: 22 January 2024 / Revised: 18 February 2024 / Accepted: 22 February 2024 / Published: 26 February 2024

Abstract

Life-testing studies that run for an extended duration make it difficult to gather enough failure data. To get around this difficulty, a newly improved adaptive Type-II progressive censoring technique has been proposed that extends several well-known multi-stage censoring plans. Taking this scheme into account, this work focuses on some conventional and Bayesian estimation problems for the parameters and reliability indicators, where the unit log-log model acts as the baseline distribution. The point and interval estimations of the various parameters are first examined from a classical standpoint. In addition to the conventional approach, the Bayesian methodology is examined to derive credible intervals besides the Bayes point estimates by leveraging the squared error loss function and the Markov chain Monte Carlo technique. A simulation study is carried out under varied settings to compare the classical and Bayesian estimates. To illustrate the proposed procedures, two actual data sets are analyzed. Finally, multiple precision criteria are considered to pick the optimal progressive censoring scheme.

1. Introduction

In the modern world, product reliability is more crucial than ever. Customers today expect high quality and a long lifespan from every item they buy. One strategy used by manufacturers to attract customers in this highly competitive market is to offer lifetime guarantees. Product failure-time distributions are therefore a critical topic for producers to understand in order to design a cost-effective assurance. Reliability tests are conducted to obtain this information before products are released onto the market; see Balakrishnan and Aggarwala [1]. Because newly released products have a long lifespan, it is usual practice to obtain information about their lifetime through censored data. Censored sampling occurs when an experimenter fails to record the failure times of every unit put through a life test, whether on purpose or accidentally. The literature offers a wide variety of censoring techniques. One of the most commonly used plans is progressive Type-II censoring (T2-PC). This plan allows some still-surviving units to be removed during the experiment at predetermined points. Adaptive progressive Type-II censoring (AT2-PC) is a more general censoring scheme that was presented by Ng et al. [2]. In this technique, the experimenter can modify the removal pattern once a predetermined time is reached. This strategy is specifically designed to address a few observed issues with the progressive Type-I hybrid censoring method of Kundu and Joarder [3]. Numerous studies have considered the AT2-PC plan for various lifetime models; Sobhi and Soliman [4], Chen and Gui [5], Panahi and Moradi [6], Kohansal and Shoaee [7], Du and Gui [8], and Alotaibi et al. [9] are some examples. On the other hand, if the test units are very dependable, the testing period will be unduly long, and the AT2-PC scheme cannot guarantee a suitable overall test duration. To solve this issue, Yan et al. [10] created a new censoring method referred to as the improved adaptive progressive Type-II censoring (IT2-APC) mechanism.
The following is a thorough explanation of the IT2-APC sampling scheme. Assume that two thresholds T₁ < T₂, a progressive censoring plan (PSP) S = (S₁, …, S_m), and the number of observed failures m < n are assigned before the start of the test, which includes n independent and identical units at time zero. At the time of the first failure X_{1:m:n}, S₁ of the remaining items are randomly removed from the test. Again, following the second failure at time X_{2:m:n}, S₂ items are eliminated at random from the test, and so on. One of three possible outcomes can be obtained from the IT2-APC plan. Case-1: If X_{m:m:n} < T₁, the experiment ends at X_{m:m:n}, and at the mth failure all of the leftover items are removed, that is, S_m = n − m − Σ_{i=1}^{m−1} S_i. This case corresponds to the usual T2-PC sample. Case-2: The test ends at X_{m:m:n} if T₁ < X_{m:m:n} < T₂. All items that did not fail are eliminated at the mth failure, that is, S_m = n − m − Σ_{i=1}^{d₁} S_i, where d₁ denotes the number of failures observed before the first threshold T₁. It is significant to note that after experiencing X_{d₁:m:n}, no items are eliminated from the test until it ends. As a result, the PSP is changed to S = (S₁, …, S_{d₁}, 0, …, 0, S_m). This case corresponds to the AT2-PC sample. Case-3: The test stops at the threshold T₂ if T₁ < T₂ < X_{m:m:n}. All items that remain at this threshold are eliminated, i.e., S^* = n − d₂ − Σ_{i=1}^{d₁} S_i, where d₂ represents the total number of failures observed prior to T₂. When the experiment reaches T₁, the PSP adjustment of Case-2 is likewise applied here. As a result, the PSP becomes S = (S₁, …, S_{d₁}, 0, …, 0, S^*). The IT2-APC plan has not received much attention, according to a review of the literature. Several estimation issues for some lifetime models were examined using the IT2-APC scheme by Nassar and Elshahhat [11], Elshahhat and Nassar [12], Elbatal et al. [13], Alam and Nassar [14], and Dutta and Kayal [15]. Let us now denote the observed IT2-APC sample and its PSP, respectively, by x̲ = (x₁ < ⋯ < x_{d₁} < T₁ < ⋯ < x_{d₂} < T₂) and S. Then the likelihood function (LF) of the unknown parameters α can be formulated (see Yan et al. [10]) as
L(\alpha \mid \underline{x}) = C \prod_{i=1}^{J_2} f(x_i) \prod_{i=1}^{J_1} \left[1 - F(x_i)\right]^{S_i} \left[1 - F(T)\right]^{S^{*}},
where α is the vector of unknown parameters, C is a constant, and, for simplicity, x_i ≡ x_{i:m:n}. Table 1 lists the possible values of J₁, J₂, T, and S^* for Cases 1, 2, and 3.
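To make the three cases concrete, the short R sketch below classifies a set of ordered failure times into Case 1, 2, or 3 and returns the corresponding J₁, J₂, T, and S^* of Table 1. The function name it2apc_case and its arguments are our own illustrative choices, not part of any published package.

# Classify an IT2-APC outcome and return (J1, J2, T, S*) as in Table 1.
# x: ordered failure times (length m), S: progressive removal plan (length m),
# T1, T2: thresholds, n: number of units placed on test.
it2apc_case <- function(x, S, T1, T2, n) {
  m  <- length(S)
  d1 <- sum(x < T1)                        # failures observed before T1
  d2 <- sum(x < T2)                        # failures observed before T2
  if (x[m] < T1) {                         # Case 1: ordinary Type-II progressive censoring
    list(case = 1, J1 = m, J2 = m, T = x[m], Sstar = 0)
  } else if (x[m] < T2) {                  # Case 2: adaptive Type-II progressive censoring
    list(case = 2, J1 = d1, J2 = m, T = x[m],
         Sstar = n - m - sum(S[seq_len(d1)]))
  } else {                                 # Case 3: test truncated at T2
    list(case = 3, J1 = d1, J2 = d2, T = T2,
         Sstar = n - d2 - sum(S[seq_len(d1)]))
  }
}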
An essential issue in terms of inference is modelling real data sets with novel statistical models. A notable distinction can be made between bounded (with boundaries) and unbounded (without boundaries) distributions. Limited values such as fractions, percentages, and proportions are frequently encountered in real-world scenarios. Instances of such data include percentages of educational attainment, training data, fractional debt repayment, working hours, and international test scores. Consequently, modelling methodologies on the unit interval have proliferated over the past ten years. Recovery rates, death rates, proportions in educational assessments, and other particular challenges are the subject of these models. To represent random variables within the range zero to one, we require a unit distribution. Recently, Korkmaz and Korkmaz [16] introduced a novel two-parameter distribution defined on the bounded (0,1) interval, named the unit log-log (ULL) distribution. A random variable X is said to have the ULL distribution, denoted as X ∼ ULL(α), where α = (γ, σ), if its probability density function (PDF) and cumulative distribution function (CDF) are given by
f(x;\alpha) = \frac{\gamma \log(\sigma)}{x}\, [\omega(x)]^{\gamma-1}\, \sigma^{[\omega(x)]^{\gamma}}\, e^{1-\sigma^{[\omega(x)]^{\gamma}}}, \quad x \in (0,1)
and
F(x;\alpha) = e^{1-\sigma^{[\omega(x)]^{\gamma}}},
respectively, where ω(x) = −log(x). The reliability function (RF) and hazard rate function (HRF) corresponding to X are given, respectively, by
R(x;\alpha) = 1 - e^{1-\sigma^{[\omega(x)]^{\gamma}}}
and
h(x;\alpha) = \frac{\gamma \log(\sigma)\, [\omega(x)]^{\gamma-1}\, \sigma^{[\omega(x)]^{\gamma}}}{x \left[ e^{\sigma^{[\omega(x)]^{\gamma}} - 1} - 1 \right]},
where γ > 0 and σ > 1. The ULL lifetime model is mainly intended to model educational measurements as well as other real data sets supported on the unit interval; see Korkmaz and Korkmaz [16] for additional details. Korkmaz et al. [17] investigated six classical estimation methods for the ULL model using the complete sample. For several choices of γ and σ, Figure 1 indicates that the density in (2) can take different shapes, including unimodal and U-shaped. On the other hand, the hazard rate in (5) can be increasing, bathtub-shaped, or N-shaped. Figure 1 shows that the ULL model's varied density and hazard rate shapes make it highly flexible on the unit interval.
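For readers who wish to reproduce the density and hazard shapes in Figure 1, the following minimal R helpers implement (2)-(5) together with the quantile function. The names dull, pull, sull, hull, and qull are ours and only sketch the definitions above; they assume γ > 0, σ > 1, and 0 < x < 1.

# ULL density, CDF, reliability, hazard, and quantile functions.
dull <- function(x, gam, sig) {
  w <- -log(x)
  gam * log(sig) / x * w^(gam - 1) * sig^(w^gam) * exp(1 - sig^(w^gam))
}
pull <- function(x, gam, sig) exp(1 - sig^((-log(x))^gam))   # CDF
sull <- function(x, gam, sig) 1 - pull(x, gam, sig)          # reliability function
hull <- function(x, gam, sig) dull(x, gam, sig) / sull(x, gam, sig)
qull <- function(q, gam, sig)                                # quantile (inverse CDF)
  exp(-(log(1 - log(q)) / log(sig))^(1 / gam))

For instance, curve(dull(x, 0.75, 1.5), 0.01, 0.99) draws one of the unimodal density shapes of Figure 1.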
We are motivated to complete the current work for the following three reasons:
  • The superiority of the ULL model in fitting real data sets compared to several competing models, such as the beta and Kumaraswamy models, among others, is demonstrated later in the real data section.
  • To the best of our knowledge, this is the first investigation of the estimations of the ULL distribution under censorship plans. So we consider the IT2-APC scheme, which generalizes some common censoring plans such as Type-II censoring, T2-PC, and AT2-PC schemes. As a result, the estimations employing these schemes can be directly deduced from the findings of this study.
  • It is critical to understand the appropriate estimation approach for the ULL distribution and which PSP provides more information about the unknown parameters.
Before progressing further, we refer to α , RF and HRF as the unknown parameters. Considering the flexibility of the ULL model and the efficiency of the IT2-APC scheme, this paper has three specific objectives, as listed below:
  • Obtaining the traditional and Bayesian estimations of the unknown parameters. Employing the asymptotic properties (APs) of the maximum likelihood estimates (MLEs), the traditional maximum likelihood (ML) approach is taken into consideration in order to derive the approximate confidence intervals (ACIs) in addition to the MLEs. The squared error loss function combined with the Markov chain Monte Carlo (MCMC) method is then used to obtain the Bayesian estimates. Additionally, the highest posterior density (HPD) intervals are calculated.
  • Examining the effectiveness of the various point and interval estimators. Since assessing the different estimators theoretically is intractable, we employ simulations to accomplish this goal. Furthermore, we prove the validity of the ULL model and the suitability of the suggested techniques through the examination of two actual environmental and engineering data sets.
  • Researching the issue of choosing the best PSP for the ULL model when IT2-APC data are available. This is conducted using four precision standards. By analyzing the two given genuine data sets, these standards are numerically compared.
The remaining sections of this work are arranged as follows: Section 2 covers the classical estimation to obtain the MLEs and ACIs using the APs for various parameters. The Bayesian estimation, including prior information, posterior distribution and point and HPD interval estimations, is studied in Section 3. Section 4 presents the simulation design and simulation results based on several PSP, n, m and time boundary circumstances. In Section 5, two real environmental and engineering data sets are examined to demonstrate the effectiveness of the ULL model and the feasibility of the suggested approaches. Section 6 presents the PSP selection process as well as a comparative analysis of the various criteria that were considered. A few conclusions are presented in Section 7.

2. Likelihood Methodology

The ML estimation methodology is widely used for parameter estimation in statistical models. The ML estimation of the ULL parameters, including the RF and HRF, is discussed in this section. The estimation is carried out using an IT2-APC sample x̲ and PSP S = (S₁, …, S_{d₁}, 0, …, 0, S^*). Both point and interval estimates of the specified quantities are considered.

2.1. Point Estimation

Considering the observed IT2-APC sample x̲, the joint LF in (1), together with the PDF in (2) and the CDF in (3), can be used to write the LF of α, after omitting the constant term, as
L(\alpha \mid \underline{x}) = \gamma^{J_2} [\log(\sigma)]^{J_2} \exp\!\left\{ (\gamma-1) \sum_{i=1}^{J_2} \log[\omega(x_i)] + \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi(\underline{x};\gamma,\sigma) \right\},
where
\phi(\underline{x};\gamma,\sigma) = \sum_{i=1}^{J_2} \sigma^{[\omega(x_i)]^{\gamma}} + \sum_{i=1}^{J_1} S_i\, \sigma^{[\omega(x_i)]^{\gamma}} + S^{*} \sigma^{[\omega(T)]^{\gamma}}.
The log-LF of (6) is
\ell(\alpha \mid \underline{x}) = J_2 \log(\gamma) + J_2 \log[\log(\sigma)] + (\gamma-1) \sum_{i=1}^{J_2} \log[\omega(x_i)] + \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi(\underline{x};\gamma,\sigma).
The solution of the following two normal equations provides the MLEs of γ and σ, denoted by γ̂ and σ̂, respectively,
\frac{\partial \ell(\alpha \mid \underline{x})}{\partial \gamma} = \frac{J_2}{\gamma} + \sum_{i=1}^{J_2} \log[\omega(x_i)] + \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} \log[\omega(x_i)] - \phi_{1}(\underline{x};\gamma,\sigma) = 0
and
\frac{\partial \ell(\alpha \mid \underline{x})}{\partial \sigma} = \frac{J_2}{\sigma \log(\sigma)} + \frac{1}{\sigma} \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi_{2}(\underline{x};\gamma,\sigma) = 0,
where
\phi_{1}(\underline{x};\gamma,\sigma) = \log(\sigma) \left[ \sum_{i=1}^{J_2} \varpi_{1}(x_i;\gamma,\sigma) + \sum_{i=1}^{J_1} S_i\, \varpi_{1}(x_i;\gamma,\sigma) + S^{*} \varpi_{1}(T;\gamma,\sigma) \right],
and
\phi_{2}(\underline{x};\gamma,\sigma) = \frac{1}{\sigma} \left[ \sum_{i=1}^{J_2} \varpi_{2}(x_i;\gamma,\sigma) + \sum_{i=1}^{J_1} S_i\, \varpi_{2}(x_i;\gamma,\sigma) + S^{*} \varpi_{2}(T;\gamma,\sigma) \right],
with ϖ₁(x_i; γ, σ) = σ^{[ω(x_i)]^γ} [ω(x_i)]^γ log[ω(x_i)] and ϖ₂(x_i; γ, σ) = σ^{[ω(x_i)]^γ} [ω(x_i)]^γ. Given the nonlinear functions in (8) and (9), it is evident that the MLEs cannot be determined explicitly. To overcome this challenge, numerical techniques can be implemented to obtain the necessary estimates γ̂ and σ̂ (a brief sketch is given at the end of this subsection). The invariance property of the MLEs can be applied to find the MLEs of the RF and HRF at a given time t by replacing the true parameters in (4) and (5) with their MLEs, respectively, as
\hat{R}(t) = 1 - e^{1-\hat{\sigma}^{[\omega(t)]^{\hat{\gamma}}}}
and
\hat{h}(t) = \frac{\hat{\gamma} \log(\hat{\sigma})\, [\omega(t)]^{\hat{\gamma}-1}\, \hat{\sigma}^{[\omega(t)]^{\hat{\gamma}}}}{t \left[ e^{\hat{\sigma}^{[\omega(t)]^{\hat{\gamma}}} - 1} - 1 \right]}.
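The brief sketch promised above: a direct numerical maximization of the log-likelihood in (6) using base R's optim (the simulation section of this paper uses the 'maxLik' package instead). The objects x, S, J1, J2, Tval, and Sstar are placeholders for an observed IT2-APC sample, and the starting values are arbitrary.

# Negative log-likelihood of the ULL model under an IT2-APC sample.
# x: observed failures (length J2), S: removals attached to the first J1 failures,
# Tval: the threshold T of Table 1, Sstar: the final removal S*.
negll <- function(par, x, S, J1, J2, Tval, Sstar) {
  gam <- par[1]; sig <- par[2]
  if (gam <= 0 || sig <= 1) return(Inf)              # enforce the parameter space
  w   <- -log(x)
  phi <- sum(sig^(w[1:J2]^gam)) +
         sum(S[1:J1] * sig^(w[1:J1]^gam)) +
         Sstar * sig^((-log(Tval))^gam)
  ll  <- J2 * log(gam) + J2 * log(log(sig)) +
         (gam - 1) * sum(log(w[1:J2])) +
         log(sig) * sum(w[1:J2]^gam) - phi
  -ll
}
# fit <- optim(c(1, 2), negll, x = x, S = S, J1 = J1, J2 = J2,
#              Tval = Tval, Sstar = Sstar, hessian = TRUE)
# fit$par gives (gamma.hat, sigma.hat); fit$hessian is reused in Section 2.2.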

2.2. Interval Estimation of γ and σ

The ACIs of γ and σ can be constructed using the APs of the MLEs. The first step is to obtain the variance-covariance matrix, denoted by I^{-1}(α). Owing to the intricate form of the Fisher information matrix, we approximate the variance-covariance matrix by inverting the observed Fisher information matrix instead. Therefore, the approximate variance-covariance matrix can be expressed as
I^{-1}(\hat{\alpha}) = \begin{pmatrix} -\frac{\partial^{2} \ell(\alpha)}{\partial \gamma^{2}} & -\frac{\partial^{2} \ell(\alpha)}{\partial \gamma \partial \sigma} \\ -\frac{\partial^{2} \ell(\alpha)}{\partial \gamma \partial \sigma} & -\frac{\partial^{2} \ell(\alpha)}{\partial \sigma^{2}} \end{pmatrix}^{-1}_{\alpha=\hat{\alpha}} = \begin{pmatrix} \hat{V}_{1} & \hat{C}_{12} \\ \hat{C}_{12} & \hat{V}_{2} \end{pmatrix},
where α ^ = ( γ ^ , σ ^ ) and
\frac{\partial^{2} \ell(\alpha \mid \underline{x})}{\partial \gamma^{2}} = -\frac{J_2}{\gamma^{2}} + \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} \log^{2}[\omega(x_i)] - \phi_{11}(\underline{x};\gamma,\sigma),
\frac{\partial^{2} \ell(\alpha \mid \underline{x})}{\partial \sigma^{2}} = -\frac{J_2 [1+\log(\sigma)]}{\sigma^{2} \log^{2}(\sigma)} - \frac{1}{\sigma^{2}} \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi_{22}(\underline{x};\gamma,\sigma)
and
\frac{\partial^{2} \ell(\alpha \mid \underline{x})}{\partial \gamma \partial \sigma} = \frac{1}{\sigma} \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} \log[\omega(x_i)] - \phi_{12}(\underline{x};\gamma,\sigma),
where
\phi_{11}(\underline{x};\gamma,\sigma) = \log(\sigma) \left[ \sum_{i=1}^{J_2} \varpi_{11}(x_i;\gamma,\sigma) + \sum_{i=1}^{J_1} S_i\, \varpi_{11}(x_i;\gamma,\sigma) + S^{*} \varpi_{11}(T;\gamma,\sigma) \right],
\phi_{22}(\underline{x};\gamma,\sigma) = \frac{1}{\sigma^{2}} \left[ \sum_{i=1}^{J_2} \varpi_{22}(x_i;\gamma,\sigma) + \sum_{i=1}^{J_1} S_i\, \varpi_{22}(x_i;\gamma,\sigma) + S^{*} \varpi_{22}(T;\gamma,\sigma) \right]
and
\phi_{12}(\underline{x};\gamma,\sigma) = \frac{1}{\sigma^{2}} \left[ \sum_{i=1}^{J_2} \varpi_{12}(x_i;\gamma,\sigma) + \sum_{i=1}^{J_1} S_i\, \varpi_{12}(x_i;\gamma,\sigma) + S^{*} \varpi_{12}(T;\gamma,\sigma) \right],
with ϖ₁₁(x_i; γ, σ) = ϖ₁(x_i; γ, σ) log[ω(x_i)] {1 + [ω(x_i)]^γ log[ω(x_i)]}, ϖ₂₂(x_i; γ, σ) = ϖ₂(x_i; γ, σ) {[ω(x_i)]^γ − 1} and ϖ₁₂(x_i; γ, σ) = ϖ₂(x_i; γ, σ) {log(σ) [[ω(x_i)]^γ − 1] + 1}.
According to the APs of the MLEs, the asymptotic distribution of (α̂ − α) is bivariate normal with mean zero and estimated variance-covariance matrix I^{-1}(α̂), as displayed in (10). Now, at confidence level 100(1 − τ)%, one can compute the required ACIs of γ and σ, respectively, as
\hat{\gamma} \pm z_{\tau/2} \sqrt{\hat{V}_{1}} \quad \text{and} \quad \hat{\sigma} \pm z_{\tau/2} \sqrt{\hat{V}_{2}},
where z_{τ/2} is the upper (τ/2)th quantile of the standard normal distribution.
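Assuming fit is the optim output of the previous sketch (called with hessian = TRUE), the ACIs of γ and σ follow in a few lines, since the Hessian of the negative log-likelihood evaluated at the MLEs approximates the observed Fisher information.

# Observed information = Hessian of the negative log-likelihood at the MLEs,
# so its inverse approximates the variance-covariance matrix of (gamma.hat, sigma.hat).
vcov.hat <- solve(fit$hessian)
se.hat   <- sqrt(diag(vcov.hat))
z        <- qnorm(0.975)                 # 95% level, tau = 0.05
aci <- rbind(gamma = fit$par[1] + c(-1, 1) * z * se.hat[1],
             sigma = fit$par[2] + c(-1, 1) * z * se.hat[2])
colnames(aci) <- c("lower", "upper")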

2.3. Interval Estimation of RF and HRF

One of the problems statisticians face when developing an estimator of any function of unknown parameters is figuring out the variance of the estimator. This variance is necessary for confidence interval estimation and/or hypothesis testing. To obtain the variance of an estimator, statisticians use a procedure called the delta method, which essentially involves approximating the more complex function with a linear function that can be obtained using calculus techniques; see, for more detail, Greene [18] and Alevizakos and Koukouvinos [19]. In our case, we employ the delta method to approximate the variances of the MLEs of RF and HRF in order to obtain the ACIs of R ( t ) and h ( t ) . Let D 1 and D 2 be two vectors defined as
D_{1} = \left( \frac{\partial R(t)}{\partial \gamma}, \; \frac{\partial R(t)}{\partial \sigma} \right)\Big|_{\alpha = \hat{\alpha}} \quad \text{and} \quad D_{2} = \left( \frac{\partial h(t)}{\partial \gamma}, \; \frac{\partial h(t)}{\partial \sigma} \right)\Big|_{\alpha = \hat{\alpha}},
where
\frac{\partial R(t)}{\partial \gamma} = F(t;\alpha)\, \sigma^{[\omega(t)]^{\gamma}} [\omega(t)]^{\gamma} \log(\sigma) \log[\omega(t)], \qquad \frac{\partial R(t)}{\partial \sigma} = F(t;\alpha)\, \sigma^{[\omega(t)]^{\gamma}-1} [\omega(t)]^{\gamma},
\frac{\partial h(t)}{\partial \gamma} = h(t;\alpha) \left\{ \frac{1}{\gamma} + \log[\omega(t)] + [\omega(t)]^{\gamma} \log(\sigma) \log[\omega(t)] - \frac{t\, \omega(t) \log[\omega(t)]\, h(t;\alpha)}{\gamma\, F(t;\alpha)} \right\}
and
\frac{\partial h(t)}{\partial \sigma} = \frac{h(t;\alpha)}{\sigma} \left\{ \frac{1}{\log(\sigma)} + [\omega(t)]^{\gamma} - \frac{[\omega(t)]^{\gamma}\, \sigma^{[\omega(t)]^{\gamma}}}{R(t;\alpha)} \right\}.
Therefore, one can calculate the approximate estimates of the variances of the estimators of R ( t ) and h ( t ) , respectively, as given below
\hat{V}_{3} = D_{1}\, I^{-1}(\hat{\alpha})\, D_{1}^{\top} \quad \text{and} \quad \hat{V}_{4} = D_{2}\, I^{-1}(\hat{\alpha})\, D_{2}^{\top}.
Then, the 100(1 − τ)% ACIs for R(t) and h(t) are as follows:
\hat{R}(t) \pm z_{\tau/2} \sqrt{\hat{V}_{3}} \quad \text{and} \quad \hat{h}(t) \pm z_{\tau/2} \sqrt{\hat{V}_{4}}.
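A small sketch of the delta-method intervals, reusing fit and vcov.hat from the earlier sketches and the sull/hull helpers; here the gradients D₁ and D₂ are approximated by central differences rather than the closed-form derivatives above, which yields essentially the same intervals.

# Delta-method ACIs for R(t) and h(t) at time t0, using a numerical gradient.
num.grad <- function(g, par, eps = 1e-6) {
  sapply(seq_along(par), function(k) {
    e <- replace(numeric(length(par)), k, eps)
    (g(par + e) - g(par - e)) / (2 * eps)
  })
}
t0 <- 0.25
D1 <- num.grad(function(p) sull(t0, p[1], p[2]), fit$par)   # dR/d(gamma, sigma)
D2 <- num.grad(function(p) hull(t0, p[1], p[2]), fit$par)   # dh/d(gamma, sigma)
V3 <- drop(t(D1) %*% vcov.hat %*% D1)
V4 <- drop(t(D2) %*% vcov.hat %*% D2)
aci.R <- sull(t0, fit$par[1], fit$par[2]) + c(-1, 1) * z * sqrt(V3)
aci.h <- hull(t0, fit$par[1], fit$par[2]) + c(-1, 1) * z * sqrt(V4)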

3. Bayesian Methodology

This section focuses on finding the Bayes estimates of the unknown parameters, RF, and HRF, providing both point and HPD credible interval estimates. The Bayesian method not only offers an alternative analysis but also uses informative prior densities to incorporate historical knowledge about the parameters. Uncertainty about this knowledge is taken into account by considering noninformative priors. The Bayesian methodology uses the marginal posterior distributions to gain knowledge about the model parameters. Notably, the squared error loss function serves as the foundation for the Bayes estimates in this paper, while any other loss function can be used with ease.

3.1. Prior and Posterior Distributions

The prior distribution is a crucial component in Bayesian estimation, as it represents what is currently known about the unknown parameters. It is evident that the unknown parameters γ and σ do not have any conjugate priors. In addition, it is not easy to compute the Jeffreys prior due to the complex form of the variance-covariance matrix. We therefore presume that γ and σ are independent and have gamma prior distributions. The gamma (G) prior is chosen because of its adaptability, particularly in computational tasks. The parameter γ > 0 is assumed to follow G(θ₁, β₁). On the other hand, for σ > 1, we use the three-parameter G distribution with a location parameter equal to one, in the same way as Nassar et al. [20], i.e., σ ∼ G(θ₂, β₂, 1). It should be noted that all hyper-parameter values are nonnegative. Using these assumptions, the joint prior can be written as
p(\alpha) \propto \gamma^{\theta_{1}-1} (\sigma-1)^{\theta_{2}-1} e^{-[\beta_{1}\gamma + \beta_{2}(\sigma-1)]}, \quad \gamma > 0, \; \sigma > 1,
where θ j , β j > 0 , j = 1 , 2 . The joint posterior distribution of the unknown parameters can be determined by merging the data from observations produced by the LF, as provided by (6), with the prior information that is already known, as supplied by the joint prior distribution in (11), as follows
Q(\alpha \mid \underline{x}) = \frac{1}{A}\, \gamma^{J_2+\theta_{1}-1} (\sigma-1)^{\theta_{2}-1} [\log(\sigma)]^{J_2}\, e^{-\beta_{2}(\sigma-1)} \exp\!\left\{ (\gamma-1) \sum_{i=1}^{J_2} \log[\omega(x_i)] - \beta_{1}\gamma + \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi(\underline{x};\gamma,\sigma) \right\},
where
A = \int_{0}^{\infty}\!\!\int_{1}^{\infty} \gamma^{J_2+\theta_{1}-1} (\sigma-1)^{\theta_{2}-1} [\log(\sigma)]^{J_2}\, e^{-\beta_{2}(\sigma-1)} \exp\!\left\{ (\gamma-1) \sum_{i=1}^{J_2} \log[\omega(x_i)] - \beta_{1}\gamma + \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi(\underline{x};\gamma,\sigma) \right\} d\sigma\, d\gamma.
The posterior mean of any parametric function, say ξ ( α ) , can be used to obtain the Bayes estimator by using the squared error loss function. Let ξ ˜ ( α ) denote the Bayes estimator of ξ ( α ) . Then, from (12), ξ ˜ ( α ) can be derived as
\tilde{\xi}(\alpha) = \int_{0}^{\infty}\!\!\int_{1}^{\infty} \xi(\alpha)\, Q(\alpha \mid \underline{x})\, d\sigma\, d\gamma = \frac{\int_{0}^{\infty}\!\int_{1}^{\infty} \xi(\alpha)\, p(\alpha)\, L(\alpha \mid \underline{x})\, d\sigma\, d\gamma}{\int_{0}^{\infty}\!\int_{1}^{\infty} p(\alpha)\, L(\alpha \mid \underline{x})\, d\sigma\, d\gamma}.
As anticipated, the ratio of integrals in (13) cannot be evaluated in closed form, so the Bayes estimator ξ̃(α) is not available analytically. We suggest using the MCMC technique to overcome this issue and to obtain the required Bayes estimates as well as the HPD credible intervals; the next section discusses this topic. It is worth mentioning that another way to calculate the ratio of integrals in Equation (13) is through numerical integration. In Bayesian estimation, MCMC methods are commonly preferred over numerical integration for various reasons; this preference is particularly strong when dealing with models that involve two or more parameters and a complex censoring plan.
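As a rough illustration of the numerical-integration alternative, the posterior mean in (13) can be approximated on a two-dimensional grid. The sketch below assumes the negll function and data objects from the Section 2 sketch, uses the Pr.A hyper-parameters of Section 4 purely as an example, and is only practical because the model has two parameters.

# Grid approximation of the posterior mean of gamma under the squared error loss.
theta1 <- 7.5; beta1 <- 10; theta2 <- 5; beta2 <- 10      # Pr.A hyper-parameters (example)
log.post <- function(gam, sig) {                          # log prior + log likelihood
  if (gam <= 0 || sig <= 1) return(-Inf)                  # outside the parameter space
  (theta1 - 1) * log(gam) - beta1 * gam +
    (theta2 - 1) * log(sig - 1) - beta2 * (sig - 1) -
    negll(c(gam, sig), x, S, J1, J2, Tval, Sstar)
}
g.grid <- seq(0.05, 5, length.out = 300)
s.grid <- seq(1.001, 6, length.out = 300)
lp <- outer(g.grid, s.grid, Vectorize(log.post))
w  <- exp(lp - max(lp))                                   # stabilised posterior weights
post.mean.gamma <- sum(w * g.grid) / sum(w)               # approximates E(gamma | data)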

3.2. MCMC, Bayes Estimates and HPD Intervals

Employing the MCMC technique allows sampling from the posterior distribution to compute the relevant posterior quantities. As with other Monte Carlo techniques, the MCMC makes use of repeated random sampling to exploit the law of large numbers. Partitioning the joint posterior distribution into full conditional distributions for each model parameter is required when using this technique; see the work of Noii et al. [21] for more details about MCMC methods. To obtain the necessary estimates, a sample must be taken from each of these conditional distributions. For the parameters γ and σ, the full conditional distributions are given, respectively, by
Q_{1}(\gamma \mid \sigma, \underline{x}) \propto \gamma^{J_2+\theta_{1}-1} \exp\!\left\{ \gamma \left( \sum_{i=1}^{J_2} \log[\omega(x_i)] - \beta_{1} \right) + \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi(\underline{x};\gamma,\sigma) \right\}
and
Q_{2}(\sigma \mid \gamma, \underline{x}) \propto (\sigma-1)^{\theta_{2}-1} [\log(\sigma)]^{J_2}\, e^{-\beta_{2}\sigma} \exp\!\left\{ \log(\sigma) \sum_{i=1}^{J_2} [\omega(x_i)]^{\gamma} - \phi(\underline{x};\gamma,\sigma) \right\}.
It is important to determine whether or not the full conditional distributions correspond to any well-known distributions before applying the MCMC approach, since this determines which MCMC algorithm to utilize. As we can observe, there is no well-known distribution that matches (14) and (15). In this circumstance, the Metropolis-Hastings (M-H) method is appropriate for obtaining the required samples from Q₁(γ | σ, x̲) and Q₂(σ | γ, x̲). Following the next steps, we can use the M-H process with a normal proposal distribution (NPD) to collect the necessary samples.
Step 1. 
Set ( γ ( 0 ) , σ ( 0 ) ) = ( γ ^ , σ ^ ) .
Step 2. 
Put j = 1 .
Step 3. 
Use the NPD N(γ̂, V̂₁) and the M-H steps to simulate γ^{(j)} from (14).
Step 4. 
Based on the NPD N(σ̂, V̂₂) and the M-H steps, generate σ^{(j)} from (15).
Step 5. 
Use (γ^{(j)}, σ^{(j)}) to compute the RF and HRF as R^{(j)}(t) and h^{(j)}(t).
Step 6. 
Put j = j + 1 .
Step 7. 
Repeat Steps 3 to 6 M times to obtain
\left( \gamma^{(j)}, \sigma^{(j)}, R^{(j)}(t), h^{(j)}(t) \right), \quad j = 1, \ldots, M.
It is crucial to eliminate the impact of the initial guesses while applying the MCMC technique. Removing the initial B replications as a burn-in phase will accomplish this. The Bayes estimate for any of the four parameters in this instance, let us say λ , can be acquired, as shown below
\tilde{\lambda} = \frac{1}{M-B} \sum_{j=B+1}^{M} \lambda^{(j)}.
By sorting the retained draws λ^{(j)}, j = B+1, …, M, in ascending order as λ_{(1)} ≤ λ_{(2)} ≤ ⋯ ≤ λ_{(M−B)}, the HPD credible interval of λ can be computed as
\left[ \lambda_{(j^{*})},\; \lambda_{(j^{*}+[(1-\tau)(M-B)])} \right],
where j^{*} is chosen such that
\lambda_{(j^{*}+[(1-\tau)(M-B)])} - \lambda_{(j^{*})} = \min_{1 \le j \le \tau(M-B)} \left\{ \lambda_{(j+[(1-\tau)(M-B)])} - \lambda_{(j)} \right\},
where [ι] denotes the largest integer less than or equal to ι.
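The sampler and HPD computation above can be condensed into a few lines of R; the sketch below reuses log.post from the Section 3.1 sketch, starts the chain at the MLEs in fit, and takes the proposal variances from vcov.hat, in the spirit of Steps 1-7 (all object names are ours).

set.seed(1)
M <- 12000; B <- 2000
draws <- matrix(NA_real_, M, 2, dimnames = list(NULL, c("gamma", "sigma")))
cur <- fit$par                                   # Step 1: start the chain at the MLEs
for (j in 1:M) {
  for (k in 1:2) {                               # Steps 3-4: update gamma, then sigma
    prop <- cur; prop[k] <- rnorm(1, cur[k], sqrt(vcov.hat[k, k]))
    acc  <- log.post(prop[1], prop[2]) - log.post(cur[1], cur[2])
    if (is.finite(acc) && log(runif(1)) < acc) cur <- prop
  }
  draws[j, ] <- cur
}
post  <- draws[-(1:B), ]                         # drop the burn-in draws
bayes <- colMeans(post)                          # Bayes estimates under squared error loss
# HPD interval: shortest interval containing a fraction 'level' of the retained draws.
hpd <- function(v, level = 0.95) {
  v <- sort(v); k <- floor(level * length(v))
  i <- which.min(v[(k + 1):length(v)] - v[1:(length(v) - k)])
  c(v[i], v[i + k])
}
hpd(post[, "gamma"])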

4. Numerical Evaluations

This part establishes Monte-Carlo simulations to observe how well our estimates of γ , σ , R ( t ) , and h ( t ) from earlier sections work.

4.1. Simulation Scenarios

We constructed one thousand samples from ULL(0.75, 1.5) in order to assess the relative performance of the various estimators of γ, σ, R(t), and h(t). At t = 0.25, the true value of (R(t), h(t)) is taken as (0.49272, 1.93743). To evaluate the effectiveness of the proposed estimators under different conditions, Table 2 displays various combinations of T_i, i = 1, 2 (threshold times), n (total sample size), m (number of failures), and S (progressive pattern). Additionally, to show the effects of the removal patterns, five designs of S are utilized, namely the L (left), M (middle), R (right), D (doubly), and U (uniform) censoring fashions. For example, the censoring pattern (1^{20}) means that 1 is repeated 20 times. To show how the chosen times T_i, i = 1, 2, affect the estimates, for n (= 40, 80) we also consider two different options, T₁ (= 0.2, 0.4) and T₂ (= 0.3, 0.6). Different options of m are also determined as failure percentages (FPs) out of each n, namely m/n (= 50, 75)%.
To draw an IT2-APC sample, after assigning the values of (n, m), T_i, i = 1, 2, and S_i, i = 1, 2, …, m, perform the following steps (a short R sketch implementing them is given after the list):
Step 1. 
Fix the actual values of γ and σ .
Step 2. 
Obtain a T2-PC sample as:
a.
Simulate a uniform sample of size m, denoted (ϱ₁, ϱ₂, …, ϱ_m), from the standard uniform distribution.
b.
Set \delta_{i} = \varrho_{i}^{\left( i + \sum_{j=m-i+1}^{m} S_{j} \right)^{-1}}, \quad i = 1, 2, \ldots, m.
c.
Set U_{i} = 1 - \delta_{m}\, \delta_{m-1} \cdots \delta_{m-i+1} for i = 1, 2, \ldots, m.
d.
Obtain a T2-PC sample from ULL(γ, σ) as X_{i} = \exp\!\left\{ -\left[ \frac{\log(1-\log(U_{i}))}{\log(\sigma)} \right]^{1/\gamma} \right\}, \quad i = 1, 2, \ldots, m.
Step 3. 
Find d₁, the number of failures observed before T₁, and discard X_i for i = d₁ + 2, …, m.
Step 4. 
Generate the first m − d₁ − 1 order statistics (say X_{d₁+2}, …, X_m) from the truncated density f(x)/R(x_{d₁+1}) using a sample of size n − d₁ − 1 − Σ_{i=1}^{d₁} S_i.
Step 5. 
Obtain an IT2-APC sample case as follows:
a.
Case-1: If X m < T 1 < T 2 ; stop the test at X m .
b.
Case-2: If T 1 < X m < T 2 ; stop the test at X m .
c.
Case-3: If T 1 < T 2 < X m ; stop the test at T 2 .
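The R sketch promised before the list: it follows Steps 1-5 for a single IT2-APC sample, relying on the qull and pull helpers introduced with the ULL definitions. The function name r.it2apc.ull is ours.

# Simulate one IT2-APC sample from ULL(gamma, sigma).
r.it2apc.ull <- function(n, m, S, T1, T2, gam, sig) {
  # Step 2: Type-II progressively censored sample
  rho   <- runif(m)
  delta <- rho^(1 / (1:m + cumsum(rev(S))))      # delta_i = rho_i^{1/(i + S_m + ... + S_{m-i+1})}
  U     <- 1 - cumprod(rev(delta))               # U_i = 1 - delta_m * ... * delta_{m-i+1}
  X     <- qull(U, gam, sig)
  # Step 3: number of failures before T1
  d1 <- sum(X < T1)
  if (X[m] >= T1 && d1 < m - 1) {
    # Step 4: regenerate X_{d1+2}, ..., X_m from the ULL left-truncated at X_{d1+1}
    n.left <- n - d1 - 1 - sum(S[seq_len(d1)])
    Fl     <- pull(X[d1 + 1], gam, sig)
    V      <- sort(runif(n.left))[1:(m - d1 - 1)]  # first m-d1-1 order statistics
    X[(d1 + 2):m] <- qull(Fl + V * (1 - Fl), gam, sig)
  }
  # Step 5: apply the stopping rule
  if (X[m] < T2) X else X[X < T2]                # Cases 1-2 stop at X_m, Case 3 stops at T2
}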
Upon gathering the desired 1000 IT2-APC samples, we employ two packages in the R 4.2.2 programming environment:
  • The 'maxLik' package, developed by Henningsen and Toomet [22], to compute the classical estimates.
  • The 'coda' package, developed by Plummer et al. [23], to compute the Bayes estimates.
We discard the first 2000 iterations (out of 12,000) as a "burn-in" period under the M-H steps. The Bayes-MCMC estimates and 95% HPD intervals are then computed. The Bayes inferences are performed using two informative sets of (θ₁, θ₂, β₁, β₂), referred to as Pr.A: (7.5, 5, 10, 10) and Pr.B: (15, 10, 20, 20), which are compatible with both prior-mean and prior-variance criteria. The suggested values of (θ_i, β_i), i = 1, 2, are allocated so that the prior mean matches the true value of γ or σ. To obtain an appropriate sample from the target posterior distribution in the MCMC assessments, we must ensure the convergence of the generated chains. Four convergence diagnostics, (i) the autocorrelation function (ACF), (ii) trace plots, (iii) the Brooks-Gelman-Rubin (BGR) diagnostic, and (iv) thinned trace plots (here, every fifth point is used), are utilized to achieve this goal. These diagnostics are computed when (γ, σ) = (0.75, 1.5), (T₁, T₂) = (0.2, 0.3), n[FP%] = 40[50%], S = (1^{20}), and Pr.A are applied. Figure 2a demonstrates that the autocorrelation of each parameter dies out quickly as the lag increases, suggesting that the simulated chains mix well. Figure 2b shows that the variance within the Markovian chains and the variance between them are not significantly different; this also indicates that the chosen burn-in size is adequate to remove the effect of the starting points. Figure 2c displays that the simulated chains are substantially mixed. All Markovian chains, shown in Figure 3, seem to explore the same region around the actual parameter values of γ and σ, which is a good sign. These chains support the same facts displayed in Figure 2c, namely that the simulated chains are well mixed. As a result, the computed point (and interval) estimates of γ, σ, R(t), and h(t) are reliable. The same findings are observed when using Pr.B.
From the acquired estimates, say for the parameter σ as an example, we obtain some statistical measures, namely the average estimates (AEs), root mean squared-errors (RMSEs), mean absolute biases (MABs), average confidence widths (ACWs) and coverage percentages (CPs). The expressions of these measures are, respectively, given by
\mathrm{AE}(\grave{\sigma}) = \frac{1}{1000} \sum_{i=1}^{1000} \grave{\sigma}^{(i)},
\mathrm{RMSE}(\grave{\sigma}) = \sqrt{ \frac{1}{1000} \sum_{i=1}^{1000} \left( \grave{\sigma}^{(i)} - \sigma \right)^{2} },
\mathrm{MAB}(\grave{\sigma}) = \frac{1}{1000} \sum_{i=1}^{1000} \left| \grave{\sigma}^{(i)} - \sigma \right|,
\mathrm{ACW}_{(1-\tau)\%}(\sigma) = \frac{1}{1000} \sum_{i=1}^{1000} \left[ \mathcal{U}\!\left(\grave{\sigma}^{(i)}\right) - \mathcal{L}\!\left(\grave{\sigma}^{(i)}\right) \right],
and
\mathrm{CP}_{(1-\tau)\%}(\sigma) = \frac{1}{1000} \sum_{i=1}^{1000} \Psi_{\left( \mathcal{L}\left(\grave{\sigma}^{(i)}\right),\; \mathcal{U}\left(\grave{\sigma}^{(i)}\right) \right)}(\sigma),
where σ̀^{(i)} is the estimate of σ from the ith sample, Ψ(·) denotes the indicator operator, and (L(·), U(·)) denote the lower and upper limits of the (1 − τ)% ACI (or HPD) interval of σ.
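Given the 1000 replicate estimates and their interval limits, the five criteria reduce to a few vectorized R lines; est, lo, up, and sigma0 below are placeholder names for the simulated estimates, interval bounds, and true value.

# est: vector of 1000 point estimates of sigma; lo, up: matching interval limits;
# sigma0: the true value used to simulate the data.
AE   <- mean(est)
RMSE <- sqrt(mean((est - sigma0)^2))
MAB  <- mean(abs(est - sigma0))
ACW  <- mean(up - lo)
CP   <- mean(lo <= sigma0 & sigma0 <= up)     # proportion of intervals covering sigma0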

4.2. Simulation Results

All outcomes of the simulation of γ , σ , R ( t ) , and h ( t ) are displayed in the supplementary file. In Table 3 and Table 4, for brevity, the point and interval estimations of γ , σ , R ( t ) , and h ( t ) when n[FP%] = 40[50%] are presented. Considering the lowest values of RMSE, MAB, and ACW along with the greatest values of CP, we list the following observations:
  • The most significant finding is that the provided γ , σ , R ( t ) , or h ( t ) estimates are accurate.
  • As n (or m) grows, all estimates of γ, σ, R(t), or h(t) behave better. When Σ_{i=1}^{m} S_i decreases, a similar conclusion is reached.
  • As T i for i = 1 , 2 , increase, all offered estimates of γ , σ , R ( t ) , or h ( t ) perform satisfactorily.
  • Due to the additional information we already have about γ and σ , the Bayes estimates of all parameters are more accurate than other estimates, as expected. The same thing is noticed when comparing the HPD credible intervals with the ACIs.
  • By changing the hyperparameters from Pr.A to Pr.B, we can observe the same conclusion that the Bayes point estimates and HPD credible intervals outperform those based on the ML method.
  • Because the variance of Pr.B is smaller than the variance of Pr.A, all the Bayes estimations based on Pr.B are more accurate than others.
  • Comparing the proposed schemes L, M, R, D, and U, it is observed that all results of γ, σ, R(t), or h(t) behave best under the U ('uniform') censoring design, followed by the D ('doubly') design.
  • Therefore, in order to obtain accurate results for any lifetime parameter, the practitioner conducting the experiment should let the test run as long as possible, provided that the experimental budget allows it.
  • In summary, when dealing with data gathered using an IT2-APC process, it is recommended to use the Bayes’ framework with M-H sampling to estimate the ULL parameters ( γ and σ ) or reliability features ( R ( t ) and h ( t ) ).

5. Real-Life Applications

This part illustrates two examples that show how to use the suggested methods in real-life situations. These examples use real data from the fields of environmental science and engineering.

5.1. Environmental Data Analysis

In this application, we study a data set that records the maximum flood level (MFL) of the Susquehanna River at Harrisburg, Pennsylvania. The data are measured in millions of cubic feet per second; see Table 5. Dumonceaux and Antle [24] presented this information, which was later assessed by Dey et al. [25].
To highlight the superiority of the ULL lifetime model based on the full MFL data, we compare it with seven other models from the literature, namely:
(1)
unit-Birnbaum-Saunders (UBS ( γ , σ ) ) by Mazucheli et al. [26];
(2)
unit-Gompertz (UGom ( γ , σ ) ) by Mazucheli et al. [27];
(3)
unit-Weibull (UW ( γ , σ ) ) by Mazucheli et al. [28];
(4)
unit-gamma (UG ( γ , σ ) ) by Mazucheli et al. [29];
(5)
Topp-Leone (TL ( γ ) ) by Topp and Leone [30];
(6)
Kumaraswamy (Kum ( γ , σ ) ) by Mitnik and Baek [31];
(7)
Beta ( γ , σ ) by Gupta and Nadarajah [32].
To specify the best model among the ULL and its competitors, we consider the following metrics:
(1)
Estimated log-likelihood (say ℓ̂), where ℓ̂ = log L(α̂ | x̲);
(2)
Akaike information (AI), where AI = 2(k − ℓ̂), with k denoting the number of estimated parameters and n the sample size;
(3)
Bayesian information (BI), where BI = k log(n) − 2ℓ̂;
(4)
Consistent Akaike information (CAI), where CAI = k(1 + log(n)) − 2ℓ̂;
(5)
Hannan-Quinn information (HQI), where HQI = 2k log(log(n)) − 2ℓ̂;
(6)
The Kolmogorov-Smirnov (KS) statistic is defined as
Z = \sup_{x} \left| G_{n}(x) - G_{X}(x) \right| = \max\left( D^{+}, D^{-} \right),
where D^{+} = \max_{i=1,\ldots,n} \left[ \frac{i}{n} - G_{X}(x_{(i)}) \right] and D^{-} = \max_{i=1,\ldots,n} \left[ G_{X}(x_{(i)}) - \frac{i-1}{n} \right], with G_{n} the empirical CDF and G_{X} the fitted CDF (a short R sketch of this computation follows the list); its P-value is given by
\text{P-value} = \Pr(Z \ge x) = 2 \sum_{i=1}^{\infty} (-1)^{i-1} \exp\left( -2 i^{2} x^{2} \right).
(7)
Anderson-Darling (AD) statistic is defined as
A^{2} = -n - \frac{1}{n} \sum_{i=1}^{n} (2i-1) \left[ \log \Phi_{(i)} + \log\left( 1 - \Phi_{(n-i+1)} \right) \right],
where \Phi_{(i)} = \Phi\!\left( \frac{x_{(i)} - \bar{x}}{s} \right), Φ(·) is the CDF of the standard normal distribution, and s and x̄ denote the standard deviation and the mean of the data points. The P-value of the AD statistic is provided by
\text{P-value} = \Pr\left( A^{*} \ge x \right),
where A^{*} = A^{2} \left( 1 + \frac{3}{4n} + \frac{9}{4n^{2}} \right) is a modified statistic; see Table 4.9 in Stephens [33].
(8)
Cramér-von Mises (CvM) statistic is defined as
W^{2} = \frac{1}{12n} + \sum_{i=1}^{n} \left[ \Phi_{(i)} - \frac{2i-1}{2n} \right]^{2},
and its P-value is given by
\text{P-value} = \Pr\left( W^{*} \ge x \right),
where W^{*} = W^{2}\left( 1 + \frac{1}{2n} \right) is a modified statistic; see Table 4.9 in Stephens [33].
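The sketch referred to in the KS item: the statistic can be computed directly from its definition with the fitted ULL CDF, or obtained together with an asymptotic P-value from base R's ks.test. The pull helper and the placeholder MLEs gam.hat and sig.hat come from the earlier sketches.

# Kolmogorov-Smirnov distance between the empirical CDF and the fitted ULL CDF.
ks.ull <- function(x, gam, sig) {
  x  <- sort(x); n <- length(x)
  Fx <- pull(x, gam, sig)
  Dp <- max((1:n) / n - Fx)        # D+
  Dm <- max(Fx - (0:(n - 1)) / n)  # D-
  max(Dp, Dm)
}
# The same statistic and an asymptotic P-value via the built-in test:
# ks.test(x, function(q) pull(q, gam.hat, sig.hat))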
The best model is the one that yields the lowest values of all goodness-of-fit statistics and the highest P-values. Table 6 shows the MLEs (with standard errors (St.Ers)) of γ and σ for the ULL and its rivals based on the full MFL data, together with the fitted values of all the criteria mentioned above. Because the ULL lifetime model yields the greatest P-values from the AD, CvM, and KS tests and the lowest values for all other metrics, it is the best option among the fitted models, as presented in Table 6.
We also presented five graphical demonstrations of our fitting, namely:
(1)
Probability–probability (PP); see Figure 4a;
(2)
Data histograms with fitted density lines; see Figure 4b;
(3)
Fitted reliability lines; see Figure 4c;
(4)
Scaled–TTT transform; see Figure 4d;
(5)
Contour; see Figure 4e.
Figure 4a shows that the probability dots closely match the theoretical probability line. The fitted ULL density line in Figure 4b accurately captures the MFL data histograms. The fitted reliability line of the ULL model in Figure 4c is a better match for the empirical reliability line compared to the other models. Figure 4d also illustrates that the MFL data set has an increasing failure rate, which confirms the information shown in Figure 1. Additionally, Figure 4e displays that the estimated values of γ̂ and σ̂ exist and are unique. Henceforth, we suggest using γ ≈ 2.9192 and σ ≈ 1.9339 as starting points for any additional computations.
To examine the proposed inference methodologies, from Table 5, for a fixed FP = 50% and several choices of S and T_i, i = 1, 2, different IT2-APC samples of size d₂ are created; see Table 7. For each data set in Table 7, the MLE and Bayes' MCMC estimates (along with their St.Ers) as well as the 95% ACI/HPD interval estimates (along with their interval widths (IWs)) of γ, σ, R(t), and h(t) (at t = 0.35) are obtained; see Table 8. After running the MCMC sampler 50,000 times and discarding the first 10,000 iterations as burn-in, the Bayes point and HPD interval estimates are evaluated using vague (nearly noninformative) gamma priors; for computational convenience, we set (θ_i, β_i), i = 1, 2, equal to 0.001. The results reported in Table 8 indicate that the point and HPD interval estimates obtained using the Bayesian approach outperform the classical ones in terms of minimum St.Ers and IWs.
To show that the acquired MLEs of γ and σ exist and are unique, we look at their profile log-likelihood functions. Please refer to the supplementary file. They demonstrate, based on all samples produced from MFL, that the estimated values of γ or σ exist and are unique.
Both density and trace plots of γ , σ , R ( t ) , and h ( t ) for each data set mentioned in Table 7 are shown in the supplementary file to demonstrate the convergence of MCMC iterations. These plots indicate that the MCMC approach yields satisfactory results. The recommended number of samples to be discarded is sufficient to mitigate the impact of the recommended beginning values. In these plots, the dotted line represents the 95% HPD interval boundaries, and the solid line represents the Bayes estimate. Additionally, the findings show that while the estimated values of R ( t ) are negatively skewed, the values of γ , σ , and h ( t ) are almost symmetrical. We extracted several characteristics from the remaining 40,000 MCMC variates of γ , σ , R ( t ) , and h ( t ) , including mean, first quartile, median, third quartile, mode, standard deviation (St.Dv.), and skewness (Skew.), which are also provided in the supplementary file.

5.2. Engineering Data Analysis

This application examines a collection of data indicating the duration of twenty mechanical components (MCs) until they cease functioning. This data set was first provided by Murthy et al. [34]. In Table 9, the MCs data are provided.
Before considering the estimations, we have two concerns: (i) what is the validity of ULL for MCs data, and (ii) what is the superiority of the ULL model compared to the other models discussed in Section 5.1? Table 10 presents the MLEs (with their St.Ers) of γ and σ in addition to all fitted criteria (including: L , AI , BI , CAI , HQI , AD (p-value), CvM (p-value), and KS (p-value)). The findings reported in Table 10 show that the ULL distribution provides a good fit for the MCs data set compared to others.
Following the same graphical tools depicted in Figure 4, Figure 5a shows that the probability dots match the theoretical probability line more closely than those of the other models. The estimated ULL density line in Figure 5b shows that the ULL distribution matches the histograms of the MCs data better than the others. The ULL model's reliability line in Figure 5c matches the empirical reliability line better than the other models. Figure 5d shows that the MCs data set exhibits an increasing failure rate. Additionally, it can be observed from Figure 5e that the estimated values of γ̂ and σ̂ exist and are unique. We thus consider the estimates γ ≈ 6.0224 and σ ≈ 1.0038 as initial guess points for any additional computations based on the MCs data.
Just like the scenarios discussed in Section 5.1, from the complete MCs data we evaluate all offered estimators of γ, σ, R(t), and h(t) (at t = 0.1). From the data in Table 9, by taking m = 10 and some choices of T_i, i = 1, 2, and S, five different IT2-APC samples are generated; see Table 11. Frequentist and Bayes' MCMC estimates (with their St.Ers) of γ, σ, R(t), and h(t) are calculated; see Table 12. Additionally, in Table 12, two-sided 95% ACI/HPD interval estimates (with their IWs) of the same unknown parameters are reported. From Table 12, one can observe that the Bayesian point and HPD interval estimates are superior to the conventional estimates in terms of minimum St.Ers and IWs.
The profile log-likelihood functions of γ and σ (shown in the Supplementary File) indicate the existence and uniqueness of the estimates of γ ^ and σ ^ . We examine the trace plots of the variables γ , σ , R ( t ) , and h ( t ) to observe whether the MCMC algorithm is operating efficiently. All trace plots illustrate that applying the final 40,000 MCMC iterations is effective and produces good results for all the unknown values. Additionally, the vital statistics of γ , σ , R ( t ) , and h ( t ) (presented in the supplementary file) demonstrate that although the computed estimates for h ( t ) are negatively skewed, those for γ , σ , or R ( t ) are fairly symmetrical.

6. Optimum Progressive Scenario

During reliability trials, the goal is to assess and select the optimal (best) progressive censoring plan from a group of available options, so choosing the best progressive design has been a topic of interest in statistics; see, for example, Ng et al. [35] and Pradhan and Kundu [36], among others who have studied this topic. To determine the best progressive censoring fashion for gathering information about the unknown parameters under consideration, Table 13 lists different criteria that help us choose the best progressive plan.
Our objective for Crit[1] is to maximize the entries on the main diagonal of the estimated Fisher information matrix I(·). In contrast, for Crit[i], i = 2, 3, we aim to minimize the trace and the determinant of the approximated observed variance-covariance (OVC) matrix, respectively. Furthermore, criterion Crit[4] seeks to decrease the variance of the log-MLE of the qth quantile, denoted by V̂(log(P̂_q)), where
\log(\hat{P}_{q}) = -\left[ \frac{\log\left( 1 - \log(q) \right)}{\log(\hat{\sigma})} \right]^{1/\hat{\gamma}}, \quad 0 < q < 1,
and the delta method is reconsidered here to evaluate V̂(log(P̂_q)). Subsequently, to choose the best progressive design, one needs to find the progressive pattern that has the smallest values of Crit[i], i = 2, 3, 4, and the largest value of Crit[1].
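For a fitted sample, the four criteria reduce to simple summaries of the observed information matrix; the sketch below reuses fit, vcov.hat, and the num.grad helper from the earlier sketches, with q = 0.5 chosen only for illustration.

# Crit[1]: maximize the observed information (e.g., the sum of its diagonal entries);
# Crit[2]: minimize the trace of the variance-covariance matrix;
# Crit[3]: minimize its determinant;
# Crit[4]: minimize the variance of the log-MLE of the q-th quantile (delta method).
q <- 0.5
log.Pq <- function(p) -(log(1 - log(q)) / log(p[2]))^(1 / p[1])   # log of the q-th quantile
Dq <- num.grad(log.Pq, fit$par)
crit <- c(crit1 = sum(diag(fit$hessian)),
          crit2 = sum(diag(vcov.hat)),
          crit3 = det(vcov.hat),
          crit4 = drop(t(Dq) %*% vcov.hat %*% Dq))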

6.1. Optimum Progressive Using Environmental Data

To pick the optimum progressive scenario from the given MFL data, all optimum criteria reported in Table 13 are evaluated through the acquired MLEs γ ^ and σ ^ (which are provided in Table 8). In order to ascertain the optimal progressive design among the suggested schemes utilized in samples A, B, C, D, and E, Table 14 presents the optimum Crit[i] for i = 1 , 2 , 3 , 4 derived from the MFL data.
Results in Table 14 indicate that:
  • Via Crit[i] for i = 1, 3, 4, the R-censoring S = (0^{5}, 2^{5}) (in Sample C) is the optimal one compared with the others.
  • Via Crit[2], the U-censoring S = (1^{10}) (in Sample E) is the optimal one compared with the others.

6.2. Optimum Progressive Using Engineering Data

Using the MCs’ data, to find the optimum progressive pattern, all optimum criteria reported in Table 13 are evaluated through the acquired MLEs γ ^ and σ ^ (which are provided in Table 12). In Table 15, the fitted optimum Crit[i] for i = 1 , 2 , 3 , 4 , from the MCs’ data, are provided.
Results in Table 15 indicate that:
  • Via Crit[1], the L-censoring S = (2^{5}, 0^{5}) (in Sample A) is the optimal one compared with the others.
  • Via Crit[2], the U-censoring S = (1^{10}) (in Sample E) is the optimal one compared with the others.
  • Via Crit[i] for i = 3, 4, the R-censoring S = (0^{5}, 2^{5}) (in Sample C) is the optimal one compared with the others.
The best progressive designs, based on the maximum flood level and mechanical component data, support the same conclusions mentioned in Section 2. In simpler terms, based on the analyses conducted in the environmental and engineering fields, we can say that the suggested methods work well on real-world data and provide a good understanding of the lifetime model. These findings are limited by the use of the ULL model and the IT2-APC plan, and they cannot be generalised to other lifetime models or censoring techniques.

7. Concluding Remarks

This study provided a range of statistical inference methodologies covering the estimation of the unit log-log model, including the model parameters and some reliability benchmarks in the context of improved adaptive progressively Type-II censored data. The work in this paper is divided into three parts. The first part derives the point and interval estimations using classical and Bayesian approaches. The maximum likelihood and Bayesian estimation via squared error loss functions are employed for this purpose. The second part includes the numerical comparison between the various estimates. A simulation study under several situations is considered to compare the various point and interval estimations using some statistical standards, including the mean square error and coverage probability. Two genuine data sets from various domains are analyzed from a practical perspective to demonstrate the applicability of the proposed methodologies. The primary deduction drawn from the second part is that the Bayesian method outperforms the conventional method. Furthermore, the two applications demonstrated how adaptable the unit log-log model is as well as how it can yield better results than certain well-known models. The final part explored the selection of the appropriate progressive censoring approach. For this, four precision criteria are considered. We applied the given criteria to the real data sets examined in the real data section to demonstrate their significance. In future work, it is important to compare the proposed analytical methods with some other methods, such as the product of spacings and E-Bayesian methods.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/axioms13030152/s1, Table S1: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of γ when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S2: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of γ when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S3: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of σ when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S4: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of σ when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S5: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of R ( t ) when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S6: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of R ( t ) when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S7: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of h ( t ) when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S8: Av.Es (1st column), RMSEs (2nd column) and MABs (3rd column) of h ( t ) when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S9: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of γ when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S10: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of γ when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S11: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of σ when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S12: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of σ when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S13: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of R ( t ) when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S14: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of R ( t ) when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S15: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of h ( t ) when ( T 1 , T 2 ) = ( 0.2 , 0.3 ) ; Table S16: The ACWs (1st column) and CPs (2nd column) of 95% ACI/HPD intervals of h ( t ) when ( T 1 , T 2 ) = ( 0.4 , 0.6 ) ; Table S17: Properties of γ , σ , R ( t ) , and h ( t ) from MFL data; Table S18: Properties of γ , σ , R ( t ) , and h ( t ) from MCs data; Figure S1: Profile log-likelihoods of γ (left) and σ (right) from MFL data; Figure S2: Density (left) and Trace (right) plots of γ , σ , R ( t ) , and h ( t ) from MFL data; Figure S3: Profile log-likelihoods of γ (left) and σ (right) from MFL data; and Figure S4: Density (left) and Trace (right) plots of γ , σ , R ( t ) , and h ( t ) from MCs data.

Author Contributions

Methodology, R.A. and M.N.; Funding acquisition, R.A.; Software, A.E.; Supervision, M.N.; Writing—original draft, R.A. and M.N.; Writing—review and editing, A.E. and M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R50), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Acknowledgments

The authors would like to express their thanks to the Editor-in-Chief and the three anonymous referees for their constructive comments and suggestions, which significantly improved the paper. This research was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R50), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin, Germany, 2000. [Google Scholar]
  2. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Nav. Res. Logist. 2009, 56, 687–698. [Google Scholar] [CrossRef]
  3. Kundu, D.; Joarder, A. Analysis of Type-II progressively hybrid censored data. Comput. Stat. Data Anal. 2006, 50, 2509–2528. [Google Scholar] [CrossRef]
  4. Sobhi, M.M.A.; Soliman, A.A. Estimation for the exponentiated Weibull model with adaptive Type-II progressive censored schemes. Appl. Math. Model. 2016, 40, 1180–1192. [Google Scholar] [CrossRef]
  5. Chen, S.; Gui, W. Statistical analysis of a lifetime distribution with a bathtub-shaped failure rate function under adaptive progressive type-II censoring. Mathematics 2020, 8, 670. [Google Scholar] [CrossRef]
  6. Panahi, H.; Moradi, N. Estimation of the inverted exponentiated Rayleigh distribution based on adaptive Type II progressive hybrid censored sample. J. Comput. Appl. Math. 2020, 364, 112345. [Google Scholar] [CrossRef]
  7. Kohansal, A.; Shoaee, S. Bayesian and classical estimation of reliability in a multicomponent stress-strength model under adaptive hybrid progressive censored data. Stat. Pap. 2021, 62, 309–359. [Google Scholar] [CrossRef]
  8. Du, Y.; Gui, W. Statistical inference of adaptive type II progressive hybrid censored data with dependent competing risks under bivariate exponential distribution. J. Appl. Stat. 2022, 49, 3120–3140. [Google Scholar] [CrossRef] [PubMed]
  9. Alotaibi, R.; Elshahhat, A.; Rezk, H.; Nassar, M. Inferences for Alpha Power Exponential Distribution Using Adaptive Progressively Type-II Hybrid Censored Data with Applications. Symmetry 2022, 14, 651. [Google Scholar] [CrossRef]
  10. Yan, W.; Li, P.; Yu, Y. Statistical inference for the reliability of Burr-XII distribution under improved adaptive Type-II progressive censoring. Appl. Math. Model. 2021, 95, 38–52. [Google Scholar] [CrossRef]
  11. Nassar, M.; Elshahhat, A. Estimation procedures and optimal censoring schemes for an improved adaptive progressively type-II censored Weibull distribution. J. Appl. Stat. 2023, 1–25. [Google Scholar] [CrossRef]
  12. Elshahhat, A.; Nassar, M. Inference of improved adaptive progressively censored competing risks data for Weibull lifetime models. Stat. Pap. 2023, 1–34. [Google Scholar] [CrossRef]
  13. Elbatal, I.; Nassar, M.; Ben Ghorbal, A.; Diab, L.S.G.; Elshahhat, A. Reliability Analysis and Its Applications for a Newly Improved Type-II Adaptive Progressive Alpha Power Exponential Censored Sample. Symmetry 2023, 15, 2137. [Google Scholar] [CrossRef]
  14. Alam, F.M.A.; Nassar, M. On Entropy Estimation of Inverse Weibull Distribution under Improved Adaptive Progressively Type-II Censoring with Applications. Axioms 2023, 12, 751. [Google Scholar] [CrossRef]
  15. Dutta, S.; Kayal, S. Inference of a competing risks model with partially observed failure causes under improved adaptive type-II progressive censoring. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2023, 237, 765–780. [Google Scholar]
  16. Korkmaz, M.Ç.; Korkmaz, Z.S. The unit log–log distribution: A new unit distribution with alternative quantile regression modeling and educational measurements applications. J. Appl. Stat. 2023, 50, 889–908. [Google Scholar] [CrossRef] [PubMed]
  17. Korkmaz, M.Ç.; Karakaya, K.; Akdoğan, Y.; Yener, Ü.N.A.L. Parameters Estimation for the Unit log-log Distribution. Cumhur. Sci. J. 2023, 44, 224–228. [Google Scholar] [CrossRef]
  18. Greene, W.H. Econometric Analysis, 4th ed.; Prentice-Hall: NewYork, NY, USA, 2000. [Google Scholar]
  19. Alevizakos, V.; Koukouvinos, C. An asymptotic confidence interval for the process capability index Cpm. Commun.-Stat.-Theory Methods 2019, 48, 5138–5144. [Google Scholar] [CrossRef]
  20. Nassar, M.; Alotaibi, R.; Elshahhat, A. Reliability Estimation of XLindley Constant-Stress Partially Accelerated Life Tests using Progressively Censored Samples. Mathematics 2023, 11, 1331. [Google Scholar] [CrossRef]
  21. Noii, N.; Khodadadian, A.; Ulloa, J.; Aldakheel, F.; Wick, T.; Francois, S.; Wriggers, P. Bayesian inversion with open-source codes for various one-dimensional model problems in computational mechanics. Arch. Comput. Methods Eng. 2022, 29, 4285–4318. [Google Scholar] [CrossRef]
  22. Henningsen, A.; Toomet, O. maxLik: A package for maximum likelihood estimation in R. Comput. Stat. 2011, 26, 443–458. [Google Scholar] [CrossRef]
  23. Plummer, M.; Best, N.; Cowles, K.; Vines, K. CODA: Convergence diagnosis and output analysis for MCMC. R News 2006, 6, 7–11. [Google Scholar]
  24. Dumonceaux, R.; Antle, C.E. Discrimination between the log-normal and the Weibull distributions. Technometrics 1973, 15, 923–926. [Google Scholar] [CrossRef]
  25. Dey, S.; Menezes, A.F.; Mazucheli, J. Comparison of estimation methods for unit-gamma distribution. J. Data Sci. 2019, 17, 768–801. [Google Scholar] [CrossRef]
  26. Mazucheli, J.; Menezes, A.F.; Dey, S. The unit-Birnbaum-Saunders distribution with applications. Chil. J. Stat. 2018, 9, 47–57. [Google Scholar]
  27. Mazucheli, J.; Menezes, A.F.; Dey, S. Unit-Gompertz distribution with applications. Statistica 2019, 79, 25–43. [Google Scholar]
  28. Mazucheli, J.; Menezes, A.F.B.; Fern, L.B.; De Oliveira, R.P.; Ghitany, M.E. The unit-Weibull distribution as an alternative to the Kumaraswamy distribution for the modeling of quantiles conditional on covariates. J. Appl. Stat. 2020, 47, 954–974. [Google Scholar] [CrossRef] [PubMed]
  29. Mazucheli, J.; Menezes, A.F.B.; Dey, S. Improved maximum-likelihood estimators for the parameters of the unit-gamma distribution. Commun.-Stat.-Theory Methods 2018, 47, 3767–3778. [Google Scholar] [CrossRef]
  30. Topp, C.W.; Leone, F.C. A family of J-shaped frequency functions. J. Am. Stat. Assoc. 1955, 50, 209–219. [Google Scholar] [CrossRef]
  31. Mitnik, P.A.; Baek, S. The Kumaraswamy distribution: Median-dispersion re-parameterizations for regression modeling and simulation-based estimation. Stat. Pap. 2013, 54, 177–192. [Google Scholar] [CrossRef]
  32. Gupta, A.K.; Nadarajah, S. Handbook of Beta Distribution and Its Applications; CRC Press: Boca Raton, FL, USA, 2004. [Google Scholar]
  33. Stephens, M.A. Tests based on EDF statistics. In Goodness-of-Fit Techniques; D’Agostino, R.B., Stephens, M.A., Eds.; Marcel Dekker: New York, NY, USA, 1986. [Google Scholar]
  34. Murthy, D.N.P.; Xie, M.; Jiang, R. Weibull Models; Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2004. [Google Scholar]
  35. Ng, H.K.T.; Chan, C.S.; Balakrishnan, N. Optimal progressive censoring plans for the Weibull distribution. Technometrics 2004, 46, 470–481. [Google Scholar] [CrossRef]
  36. Pradhan, B.; Kundu, D. On progressively censored generalized exponential distribution. Test 2009, 18, 497–515. [Google Scholar] [CrossRef]
Figure 1. Density (left) and Hazard rate (right) shapes of the ULL distribution.
Figure 2. The MCMC visuals of γ, σ, R(t), and h(t), with (a) ACF, (b) BGR, and (c) trace plots, for the Monte Carlo simulation.
Figure 3. Trace (left panel) and density (right panel) plots of γ and σ in the Monte Carlo simulation.
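The diagnostics shown in Figures 2 and 3 (autocorrelation, Brooks-Gelman-Rubin, trace, and posterior density plots) are standard MCMC convergence checks. As an illustrative sketch only, and not the authors' code, the following R fragment shows how such plots can be produced with the coda package from two chains of posterior draws; the draws below are random placeholders rather than output of the paper's sampler.

```r
# Minimal sketch: MCMC convergence diagnostics with the 'coda' package.
# 'draws1' and 'draws2' stand in for two chains of posterior draws of
# (gamma, sigma); here they are placeholder normal variates.
library(coda)

set.seed(1)
draws1 <- mcmc(cbind(gamma = rnorm(5000, 0.8, 0.05),
                     sigma = rnorm(5000, 1.5, 0.08)))
draws2 <- mcmc(cbind(gamma = rnorm(5000, 0.8, 0.05),
                     sigma = rnorm(5000, 1.5, 0.08)))
chains <- mcmc.list(draws1, draws2)

autocorr.plot(draws1)   # (a) autocorrelation (ACF) plots
gelman.diag(chains)     # (b) Brooks-Gelman-Rubin (BGR) statistic
gelman.plot(chains)     #     BGR evolution plot
traceplot(draws1)       # (c) trace plots
densplot(draws1)        # posterior density plots
summary(draws1)         # posterior means, SDs, and quantiles
```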
Figure 4. Fitting visualizations, including (a) PP, (b) histogram, (c) reliability, (d) TTT, and (e) contour plots, from the MFL data.
Figure 5. Fitting visualizations, including (a) PP, (b) histogram, (c) reliability, (d) TTT, and (e) contour plots, from the MCs data.
Table 1. Possible values of J1, J2, T, and S.
Case | T | J1 | J2 | S
1 | x_{m:m:n} | m | m | 0
2 | x_{m:m:n} | d1 | m | n − m − Σ_{i=1}^{d1} S_i
3 | T2 | d1 | d2 | n − d2 − Σ_{i=1}^{d1} S_i
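To make Table 1 concrete, the following R sketch (not the authors' code) classifies an improved adaptive progressively censored outcome. It assumes the ordered progressively Type-II censored failure times x, the planned removal scheme S, the total sample size n, and the thresholds T1 < T2 are available; the function name it2apc_case is illustrative. In the usage line, only the first six failure times of sample A are taken from Table 7, and the four values beyond T2 are hypothetical placeholders, so the call should reproduce the reported d1, d2, S*, and T* of that sample.

```r
# Illustrative sketch of the IT2-APC cases in Table 1.
it2apc_case <- function(x, S, n, T1, T2) {
  m  <- length(x)
  d1 <- sum(x <= T1)                 # failures observed before T1
  d2 <- sum(x <= T2)                 # failures observed before T2
  if (x[m] <= T1) {
    # Case 1: the planned scheme S is applied unchanged; stop at x[m]
    list(case = 1, d1 = d1, d2 = d2, Tstar = x[m], Sstar = 0, data = x)
  } else if (x[m] <= T2) {
    # Case 2: removals after T1 are set to zero; stop at x[m]
    list(case = 2, d1 = d1, d2 = d2, Tstar = x[m],
         Sstar = n - m - sum(S[seq_len(d1)]), data = x)
  } else {
    # Case 3: stop at T2 with the d2 failures seen so far
    list(case = 3, d1 = d1, d2 = d2, Tstar = T2,
         Sstar = n - d2 - sum(S[seq_len(d1)]), data = x[x <= T2])
  }
}

# Sample A of Table 7 (MFL data, n = 20, m = 10); the last four failure
# times are hypothetical, since Table 7 lists only the observed portion.
x <- c(0.265, 0.297, 0.315, 0.324, 0.338, 0.379, 0.392, 0.402, 0.412, 0.423)
it2apc_case(x, S = c(rep(2, 5), rep(0, 5)), n = 20, T1 = 0.28, T2 = 0.38)
```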
Table 2. Testing scenarios performed in the Monte Carlo study.
Columns: n [FP%] | Test | Censoring | S.
40 [50%] | 1 | L | (2^10, 0^10)
40 [50%] | 2 | M | (0^5, 2^10, 0^5)
40 [50%] | 3 | R | (0^10, 2^10)
40 [50%] | 4 | D | (2^5, 0^10, 2^5)
40 [50%] | 5 | U | (1^20)
40 [75%] | 6 | L | (2^5, 0^25)
40 [75%] | 7 | M | (0^13, 2^5, 0^12)
40 [75%] | 8 | R | (0^25, 2^5)
40 [75%] | 9 | D | (2^3, 0^25, 2^2)
40 [75%] | 10 | U | (1^10, 0^20)
80 [50%] | 1 | L | (4^10, 0^30)
80 [50%] | 2 | M | (0^15, 4^10, 0^15)
80 [50%] | 3 | R | (0^30, 4^10)
80 [50%] | 4 | D | (4^5, 0^30, 4^5)
80 [50%] | 5 | U | (1^40)
80 [75%] | 6 | L | (4^5, 0^55)
80 [75%] | 7 | M | (0^27, 4^5, 0^28)
80 [75%] | 8 | R | (0^55, 4^5)
80 [75%] | 9 | D | (4^3, 0^55, 4^2)
80 [75%] | 10 | U | (1^20, 0^40)
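In Table 2, a compact entry such as (2^10, 0^10) means that 2 units are removed at each of the first ten failure times and none at the remaining ten. If one wished to set these plans up in R, the expansion is a one-liner with rep(); the object names below are purely illustrative.

```r
# Expanding the shorthand removal plans of Table 2 (n = 40, m = 20 block).
S_left   <- c(rep(2, 10), rep(0, 10))              # test 1: left-heavy removals
S_middle <- c(rep(0, 5), rep(2, 10), rep(0, 5))    # test 2: middle removals
S_unif   <- rep(1, 20)                             # test 5: uniform removals
# Each plan must remove n - m = 20 units in total:
stopifnot(all(c(sum(S_left), sum(S_middle), sum(S_unif)) == 40 - 20))
```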
Table 3. Av.Es (1st value), RMSEs (2nd value), and MABs (3rd value) of γ, σ, R(t), and h(t) when n[FP%] = 40[50%].
Each cell reports Av.E, RMSE, MAB; columns: Test | MLE | MCMC (Pr.A) | MCMC (Pr.B).
γ, (T1, T2) = (0.2, 0.3):
1 | 0.790, 0.215, 0.197 | 0.587, 0.184, 0.174 | 0.772, 0.141, 0.142
2 | 0.803, 0.192, 0.187 | 0.656, 0.160, 0.166 | 0.815, 0.135, 0.132
3 | 0.825, 0.175, 0.175 | 0.836, 0.157, 0.146 | 0.690, 0.122, 0.124
4 | 0.762, 0.163, 0.157 | 0.638, 0.146, 0.139 | 0.773, 0.114, 0.090
5 | 0.772, 0.153, 0.147 | 0.749, 0.135, 0.127 | 0.784, 0.109, 0.085
γ, (T1, T2) = (0.4, 0.6):
1 | 0.777, 0.224, 0.185 | 0.749, 0.180, 0.165 | 0.859, 0.125, 0.112
2 | 0.787, 0.207, 0.174 | 0.823, 0.154, 0.151 | 0.839, 0.117, 0.108
3 | 0.820, 0.187, 0.163 | 0.723, 0.146, 0.142 | 0.774, 0.101, 0.082
4 | 0.945, 0.154, 0.147 | 0.892, 0.137, 0.126 | 0.751, 0.092, 0.076
5 | 0.780, 0.145, 0.131 | 0.840, 0.132, 0.121 | 0.807, 0.082, 0.072
σ, (T1, T2) = (0.2, 0.3):
1 | 1.496, 0.214, 0.192 | 1.736, 0.153, 0.164 | 1.472, 0.136, 0.112
2 | 1.487, 0.193, 0.186 | 1.798, 0.141, 0.152 | 1.559, 0.130, 0.101
3 | 1.472, 0.174, 0.174 | 1.444, 0.135, 0.143 | 1.564, 0.124, 0.096
4 | 1.525, 0.164, 0.170 | 1.626, 0.129, 0.140 | 1.415, 0.112, 0.085
5 | 1.509, 0.157, 0.167 | 1.574, 0.124, 0.135 | 1.573, 0.107, 0.078
σ, (T1, T2) = (0.4, 0.6):
1 | 1.508, 0.204, 0.188 | 1.584, 0.146, 0.158 | 1.447, 0.100, 0.093
2 | 1.498, 0.187, 0.181 | 1.478, 0.135, 0.148 | 1.461, 0.098, 0.086
3 | 1.459, 0.169, 0.164 | 1.596, 0.129, 0.138 | 1.516, 0.093, 0.078
4 | 1.381, 0.157, 0.155 | 1.304, 0.118, 0.124 | 1.465, 0.084, 0.077
5 | 1.506, 0.151, 0.142 | 1.389, 0.113, 0.116 | 1.467, 0.078, 0.068
R(t), (T1, T2) = (0.2, 0.3):
1 | 0.484, 0.152, 0.128 | 0.603, 0.115, 0.113 | 0.475, 0.074, 0.087
2 | 0.481, 0.148, 0.121 | 0.648, 0.109, 0.110 | 0.544, 0.072, 0.081
3 | 0.463, 0.133, 0.114 | 0.452, 0.096, 0.092 | 0.527, 0.068, 0.079
4 | 0.503, 0.124, 0.098 | 0.547, 0.081, 0.085 | 0.430, 0.066, 0.074
5 | 0.494, 0.115, 0.095 | 0.539, 0.077, 0.082 | 0.549, 0.061, 0.070
R(t), (T1, T2) = (0.4, 0.6):
1 | 0.492, 0.147, 0.124 | 0.537, 0.113, 0.103 | 0.467, 0.070, 0.084
2 | 0.489, 0.136, 0.119 | 0.481, 0.083, 0.096 | 0.476, 0.069, 0.075
3 | 0.466, 0.127, 0.108 | 0.547, 0.080, 0.087 | 0.508, 0.066, 0.073
4 | 0.401, 0.119, 0.088 | 0.351, 0.077, 0.081 | 0.465, 0.063, 0.071
5 | 0.490, 0.109, 0.085 | 0.425, 0.074, 0.076 | 0.475, 0.061, 0.067
h(t), (T1, T2) = (0.2, 0.3):
1 | 1.958, 0.178, 0.182 | 1.826, 0.170, 0.162 | 1.955, 0.135, 0.096
2 | 1.973, 0.170, 0.173 | 1.958, 0.166, 0.157 | 2.108, 0.128, 0.088
3 | 1.939, 0.166, 0.165 | 1.964, 0.162, 0.154 | 1.918, 0.124, 0.084
4 | 1.955, 0.162, 0.160 | 1.841, 0.157, 0.149 | 1.884, 0.115, 0.079
5 | 1.960, 0.156, 0.152 | 2.001, 0.147, 0.141 | 2.073, 0.107, 0.076
h(t), (T1, T2) = (0.4, 0.6):
1 | 1.960, 0.153, 0.172 | 1.972, 0.133, 0.143 | 2.048, 0.124, 0.083
2 | 1.971, 0.147, 0.153 | 2.012, 0.125, 0.132 | 2.039, 0.118, 0.080
3 | 1.980, 0.141, 0.142 | 1.971, 0.119, 0.126 | 2.006, 0.109, 0.079
4 | 1.947, 0.135, 0.134 | 1.929, 0.110, 0.119 | 1.912, 0.104, 0.077
5 | 1.956, 0.129, 0.122 | 2.045, 0.101, 0.108 | 1.998, 0.097, 0.074
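The accuracy measures tabulated above follow their usual definitions. A small R sketch (with made-up estimates, purely for illustration) is given below.

```r
# Average estimate, root mean squared error, and mean absolute bias of a
# vector of replicated point estimates 'est' of a true value 'truth'.
summarize_sim <- function(est, truth) {
  c(Av.E = mean(est),
    RMSE = sqrt(mean((est - truth)^2)),
    MAB  = mean(abs(est - truth)))
}

# e.g., 1000 hypothetical estimates of gamma when the true value is 0.8:
summarize_sim(rnorm(1000, mean = 0.79, sd = 0.2), truth = 0.8)
```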
Table 4. The ACWs (1st value) and CPs (2nd value) of 95% ACI/HPD intervals of γ, σ, R(t), and h(t) when n[FP%] = 40[50%].
Each cell reports ACW, CP; columns: Test | ACI | HPD (Pr.A) | HPD (Pr.B).
γ, (T1, T2) = (0.2, 0.3):
1 | 0.566, 0.942 | 0.507, 0.950 | 0.365, 0.955
2 | 0.558, 0.945 | 0.482, 0.954 | 0.348, 0.958
3 | 0.530, 0.949 | 0.456, 0.958 | 0.334, 0.961
4 | 0.521, 0.951 | 0.442, 0.960 | 0.323, 0.963
5 | 0.512, 0.952 | 0.424, 0.961 | 0.313, 0.964
γ, (T1, T2) = (0.4, 0.6):
1 | 0.549, 0.947 | 0.487, 0.955 | 0.356, 0.959
2 | 0.533, 0.950 | 0.467, 0.958 | 0.339, 0.962
3 | 0.484, 0.952 | 0.445, 0.961 | 0.328, 0.964
4 | 0.464, 0.955 | 0.436, 0.963 | 0.316, 0.966
5 | 0.459, 0.957 | 0.427, 0.965 | 0.294, 0.968
σ, (T1, T2) = (0.2, 0.3):
1 | 0.554, 0.937 | 0.519, 0.940 | 0.358, 0.949
2 | 0.538, 0.939 | 0.507, 0.942 | 0.325, 0.951
3 | 0.522, 0.941 | 0.490, 0.944 | 0.317, 0.953
4 | 0.515, 0.942 | 0.476, 0.945 | 0.310, 0.954
5 | 0.504, 0.943 | 0.464, 0.946 | 0.284, 0.956
σ, (T1, T2) = (0.4, 0.6):
1 | 0.547, 0.941 | 0.494, 0.944 | 0.352, 0.952
2 | 0.532, 0.943 | 0.478, 0.946 | 0.323, 0.955
3 | 0.518, 0.944 | 0.469, 0.947 | 0.315, 0.956
4 | 0.507, 0.945 | 0.454, 0.948 | 0.304, 0.957
5 | 0.490, 0.947 | 0.439, 0.951 | 0.288, 0.960
R(t), (T1, T2) = (0.2, 0.3):
1 | 0.368, 0.953 | 0.338, 0.957 | 0.305, 0.962
2 | 0.336, 0.956 | 0.313, 0.960 | 0.291, 0.964
3 | 0.313, 0.958 | 0.282, 0.962 | 0.272, 0.965
4 | 0.294, 0.960 | 0.272, 0.964 | 0.257, 0.967
5 | 0.284, 0.961 | 0.266, 0.965 | 0.248, 0.969
R(t), (T1, T2) = (0.4, 0.6):
1 | 0.319, 0.956 | 0.306, 0.960 | 0.297, 0.965
2 | 0.289, 0.959 | 0.283, 0.963 | 0.276, 0.968
3 | 0.278, 0.961 | 0.270, 0.965 | 0.266, 0.969
4 | 0.274, 0.962 | 0.264, 0.966 | 0.254, 0.971
5 | 0.266, 0.963 | 0.255, 0.967 | 0.243, 0.973
h(t), (T1, T2) = (0.2, 0.3):
1 | 0.465, 0.946 | 0.416, 0.950 | 0.369, 0.954
2 | 0.434, 0.947 | 0.407, 0.951 | 0.356, 0.955
3 | 0.413, 0.949 | 0.388, 0.953 | 0.338, 0.957
4 | 0.399, 0.951 | 0.365, 0.955 | 0.324, 0.959
5 | 0.385, 0.953 | 0.359, 0.956 | 0.316, 0.960
h(t), (T1, T2) = (0.4, 0.6):
1 | 0.436, 0.948 | 0.399, 0.953 | 0.357, 0.956
2 | 0.427, 0.949 | 0.385, 0.954 | 0.349, 0.957
3 | 0.404, 0.951 | 0.367, 0.956 | 0.326, 0.960
4 | 0.384, 0.953 | 0.356, 0.958 | 0.318, 0.962
5 | 0.376, 0.955 | 0.342, 0.959 | 0.311, 0.963
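Table 4 combines an average confidence width (ACW) with an empirical coverage probability (CP) for each interval type. A minimal R sketch of these quantities is shown below; it uses a normal-approximation ACI, placeholder posterior draws, and coda::HPDinterval for the HPD part, with illustrative helper names.

```r
# Sketch of the interval summaries in Table 4.
library(coda)

# Asymptotic confidence interval from an MLE and its standard error.
aci <- function(mle, se, level = 0.95) {
  z <- qnorm(1 - (1 - level) / 2)
  c(lower = mle - z * se, upper = mle + z * se)
}

# Average confidence width and coverage probability over simulated intervals
# stored row-wise in 'lims' (columns "lower" and "upper").
acw_cp <- function(lims, truth) {
  c(ACW = mean(lims[, "upper"] - lims[, "lower"]),
    CP  = mean(lims[, "lower"] <= truth & truth <= lims[, "upper"]))
}

# Toy demonstration: 1000 ACIs around noisy estimates of a true value 0.8.
lims <- t(replicate(1000, aci(rnorm(1, 0.8, 0.1), se = 0.1)))
acw_cp(lims, truth = 0.8)

# 95% HPD interval from posterior draws (placeholder draws shown).
HPDinterval(mcmc(rnorm(10000, 0.8, 0.1)), prob = 0.95)
```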
Table 5. The MFL data of the Susquehanna River.
0.654, 0.613, 0.315, 0.449, 0.297, 0.402, 0.379, 0.423, 0.379, 0.324,
0.269, 0.740, 0.418, 0.412, 0.494, 0.416, 0.338, 0.392, 0.484, 0.265
Table 6. Fitting outputs of the ULL and its competitive models from the MFL data.
Columns: Model | γ̂ (St.Er) | σ̂ (St.Er) | −L̂ | AI | BI | CAI | HQI | AD (p-value) | CvM (p-value) | KS (p-value).
ULL | 2.9191 (0.5538) | 1.9338 (0.2183) | −16.581 | −29.163 | −27.171 | −28.457 | −28.774 | 0.312 (0.449) | 0.054 (0.450) | 0.136 (0.850)
UBS | 0.3782 (0.0598) | 0.8373 (0.0695) | −12.681 | −21.362 | −19.371 | −20.656 | −20.973 | 1.087 (0.008) | 0.184 (0.009) | 0.229 (0.247)
UGom | 0.0150 (0.0132) | 4.1252 (0.7504) | −16.362 | −28.724 | −26.732 | −28.018 | −28.335 | 0.314 (0.448) | 0.056 (0.424) | 0.146 (0.786)
UW | 1.0287 (0.2393) | 3.8928 (0.7087) | −16.081 | −28.163 | −26.171 | −27.457 | −27.774 | 0.358 (0.442) | 0.058 (0.401) | 0.138 (0.848)
UG | 8.7372 (2.7118) | 9.7335 (3.1095) | −14.191 | −24.383 | −22.391 | −23.677 | −23.994 | 0.531 (0.174) | 0.085 (0.180) | 0.150 (0.761)
TL | 2.2450 (0.5020) | – | −7.3682 | −12.736 | −11.741 | −12.514 | −12.542 | 0.743 (0.053) | 0.122 (0.057) | 0.335 (0.022)
Kum | 3.3634 (0.6033) | 11.788 (5.3583) | −12.869 | −21.737 | −19.746 | −21.031 | −21.348 | 1.014 (0.011) | 0.170 (0.013) | 0.211 (0.336)
Beta | 6.7594 (2.0954) | 9.1141 (2.8525) | −14.065 | −24.130 | −22.139 | −23.424 | −23.742 | 0.785 (0.042) | 0.129 (0.045) | 0.199 (0.408)
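The information criteria reported in Tables 6 and 10 follow their standard definitions. A short R sketch is given below; plugging in the ULL values from the first row of Table 6 (maximized log-likelihood 16.581, k = 2 parameters, n = 20 observations) approximately reproduces the tabulated AI, BI, CAI, and HQI.

```r
# Standard model-comparison measures from a maximized log-likelihood.
fit_criteria <- function(loglik, k, n) {
  c(negLL = -loglik,
    AI  = 2 * k - 2 * loglik,                                  # Akaike
    BI  = k * log(n) - 2 * loglik,                             # Bayesian (Schwarz)
    CAI = 2 * k - 2 * loglik + 2 * k * (k + 1) / (n - k - 1),  # consistent AIC
    HQI = 2 * k * log(log(n)) - 2 * loglik)                    # Hannan-Quinn
}

# ULL fit to the MFL data (cf. the first row of Table 6):
round(fit_criteria(16.581, k = 2, n = 20), 3)
# approx. -16.581, -29.162, -27.171, -28.456, -28.773
```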
Table 7. Artificial IT2-APC samples from the MFL data.
Columns: Sample | S | T1 (d1) | T2 (d2) | S* | T* | Data.
A | (2^5, 0^5) | 0.28 (1) | 0.38 (6) | 12 | 0.38 | 0.265, 0.297, 0.315, 0.324, 0.338, 0.379
B | (0^3, 2^5, 0^2) | 0.35 (5) | 0.40 (7) | 9 | 0.40 | 0.265, 0.269, 0.297, 0.315, 0.324, 0.379, 0.392
C | (0^5, 2^5) | 0.38 (7) | 0.41 (9) | 7 | 0.41 | 0.265, 0.269, 0.297, 0.315, 0.324, 0.338, 0.379, 0.392, 0.402
D | (2^3, 0^5, 2^2) | 0.30 (2) | 0.42 (8) | 8 | 0.42 | 0.265, 0.297, 0.315, 0.324, 0.338, 0.379, 0.402, 0.412
E | (1^10) | 0.40 (6) | 0.45 (9) | 5 | 0.45 | 0.265, 0.297, 0.315, 0.338, 0.379, 0.392, 0.402, 0.418, 0.449
Table 8. Estimates of γ, σ, R(t), and h(t) from the MFL data.
Columns: Sample | Par. | MLE Est. (St.Er) | Bayes Est. (St.Er) | 95% ACI (Lower, Upper, IW) | 95% HPD (Lower, Upper, IW).
A | γ | 2.4579 (0.7964) | 2.3583 (0.1394) | 0.8969, 4.0189, 3.1220 | 2.1549, 2.5446, 0.3896
A | σ | 2.1845 (0.3376) | 2.0836 (0.1411) | 1.5227, 2.8462, 1.3235 | 1.9019, 2.2834, 0.3815
A | R(0.35) | 0.7564 (0.0896) | 0.7195 (0.0502) | 0.5808, 0.9320, 0.3511 | 0.6516, 0.7817, 0.1301
A | h(0.35) | 4.5760 (1.9959) | 4.6472 (0.3034) | 0.6640, 8.4879, 7.8239 | 4.0739, 5.2101, 1.1362
B | γ | 2.3422 (0.7217) | 2.2417 (0.1394) | 0.9276, 3.7568, 2.8292 | 2.0564, 2.4362, 0.3798
B | σ | 2.1207 (0.3123) | 2.0148 (0.1457) | 1.5087, 2.7327, 1.2240 | 1.8208, 2.2102, 0.3895
B | R(0.35) | 0.7334 (0.0900) | 0.6918 (0.0560) | 0.5569, 0.9099, 0.3530 | 0.6186, 0.7631, 0.1445
B | h(0.35) | 4.5327 (1.8593) | 4.5880 (0.2904) | 0.8887, 8.1768, 7.2882 | 4.0626, 5.1461, 1.0834
C | γ | 2.6412 (0.7105) | 2.5412 (0.1392) | 1.2487, 4.0338, 2.7850 | 2.3555, 2.7355, 0.3800
C | σ | 2.0153 (0.2721) | 1.9090 (0.1459) | 1.4820, 2.5485, 1.0665 | 1.7154, 2.1047, 0.3893
C | R(0.35) | 0.7043 (0.0889) | 0.6574 (0.0632) | 0.5301, 0.8785, 0.3484 | 0.5749, 0.7381, 0.1632
C | h(0.35) | 5.3347 (1.8934) | 5.4167 (0.3249) | 1.6238, 9.0456, 7.4218 | 4.8409, 6.0460, 1.2051
D | γ | 2.6103 (0.7286) | 2.5099 (0.1395) | 1.1822, 4.0384, 2.8561 | 2.3246, 2.7047, 0.3801
D | σ | 2.1247 (0.3060) | 2.0190 (0.1454) | 1.5248, 2.7245, 1.1997 | 1.8248, 2.2145, 0.3898
D | R(0.35) | 0.7415 (0.0881) | 0.7002 (0.0557) | 0.5688, 0.9141, 0.3453 | 0.6270, 0.7711, 0.1441
D | h(0.35) | 4.9861 (1.8918) | 5.0804 (0.3223) | 1.2781, 8.6940, 7.4159 | 4.5321, 5.7340, 1.2019
E | γ | 2.5401 (0.6582) | 2.4398 (0.1393) | 1.2500, 3.8303, 2.5803 | 2.2546, 2.6346, 0.3800
E | σ | 2.2081 (0.3109) | 2.1022 (0.1456) | 1.5987, 2.8174, 1.2187 | 1.9081, 2.2979, 0.3898
E | R(0.35) | 0.7655 (0.0823) | 0.7278 (0.0507) | 0.6043, 0.9268, 0.3225 | 0.6613, 0.7920, 0.1307
E | h(0.35) | 4.6503 (1.6834) | 4.7470 (0.3131) | 1.3510, 7.9497, 6.5987 | 4.2212, 5.3891, 1.1679
Table 9. Failure times of 20 mechanical components.
0.067, 0.068, 0.076, 0.081, 0.084, 0.085, 0.085, 0.086, 0.089, 0.098,
0.098, 0.114, 0.114, 0.115, 0.121, 0.125, 0.131, 0.149, 0.160, 0.485
Table 10. Fitting outputs of the ULL and its competitive models from the MCs data.
Columns: Model | γ̂ (St.Er) | σ̂ (St.Er) | −L̂ | AI | BI | CAI | HQI | AD (p-value) | CvM (p-value) | KS (p-value).
ULL | 6.0225 (0.8731) | 1.0038 (0.0029) | −37.808 | −71.617 | −69.625 | −70.911 | −71.228 | 0.540 (0.167) | 0.067 (0.303) | 0.124 (0.919)
UBS | 0.2841 (0.0449) | 2.1427 (0.1347) | −26.103 | −48.205 | −46.214 | −47.499 | −47.817 | 2.787 (0.001) | 0.452 (0.001) | 0.276 (0.094)
UGom | 0.0022 (0.0005) | 2.6088 (0.1255) | −36.867 | −69.734 | −67.742 | −69.028 | −69.345 | 0.544 (0.163) | 0.070 (0.278) | 0.211 (0.335)
UW | 0.0031 (0.0013) | 6.7294 (0.4996) | −35.819 | −67.639 | −65.647 | −66.933 | −67.250 | 0.866 (0.026) | 0.112 (0.078) | 0.160 (0.683)
UG | 17.648 (5.5289) | 7.9145 (2.5150) | −29.272 | −54.544 | −52.553 | −53.839 | −54.156 | 1.647 (0.008) | 0.238 (0.009) | 0.215 (0.314)
TL | 0.6248 (0.1397) | – | −13.743 | −25.486 | −24.490 | −25.264 | −25.291 | 2.249 (0.005) | 0.348 (0.002) | 0.484 (0.005)
Kum | 1.5865 (0.2442) | 21.809 (10.172) | −25.648 | −47.297 | −45.305 | −46.591 | −46.908 | 2.764 (0.001) | 0.448 (0.001) | 0.263 (0.126)
Beta | 3.1129 (0.9369) | 21.826 (7.0425) | −27.881 | −51.763 | −49.771 | −51.057 | −51.374 | 2.415 (0.004) | 0.379 (0.002) | 0.254 (0.152)
Table 11. Artificial IT2-APC samples from the MCs data.
Columns: Sample | S | T1 (d1) | T2 (d2) | S* | T* | Data.
A | (2^5, 0^5) | 0.069 (1) | 0.087 (6) | 12 | 0.087 | 0.067, 0.076, 0.081, 0.084, 0.085, 0.085
B | (0^3, 2^5, 0^2) | 0.085 (5) | 0.090 (7) | 9 | 0.090 | 0.067, 0.068, 0.076, 0.081, 0.084, 0.086, 0.089
C | (0^5, 2^5) | 0.087 (7) | 0.099 (9) | 7 | 0.099 | 0.067, 0.068, 0.076, 0.081, 0.084, 0.085, 0.085, 0.089, 0.098
D | (2^3, 0^5, 2^2) | 0.070 (1) | 0.098 (8) | 10 | 0.098 | 0.067, 0.076, 0.081, 0.084, 0.085, 0.086, 0.089, 0.098
E | (1^10) | 0.118 (8) | 0.122 (9) | 3 | 0.122 | 0.067, 0.081, 0.084, 0.086, 0.089, 0.098, 0.114, 0.115, 0.121
Table 12. Estimates of γ, σ, R(t), and h(t) from the MCs data.
Columns: Sample | Par. | MLE Est. (St.Er) | Bayes Est. (St.Er) | 95% ACI (Lower, Upper, IW) | 95% HPD (Lower, Upper, IW).
A | γ | 7.7872 (3.9505) | 7.7872 (0.0001) | 0.0444, 15.530, 15.486 | 7.7870, 7.7874, 0.0004
A | σ | 1.0007 (0.0028) | 1.0007 (0.0001) | 0.9953, 1.0062, 0.0110 | 1.0005, 1.0009, 0.0004
A | R(0.1) | 0.4685 (0.2138) | 0.4666 (0.0571) | 0.0495, 0.8876, 0.8381 | 0.3545, 0.5770, 0.2225
A | h(0.1) | 30.672 (18.793) | 30.617 (0.8980) | 0.0000, 67.506, 67.506 | 28.841, 32.285, 3.4442
B | γ | 7.0692 (2.4827) | 7.0692 (0.0001) | 2.2031, 11.935, 9.7322 | 7.0690, 7.0694, 0.0004
B | σ | 1.0014 (0.0033) | 1.0014 (0.0001) | 0.9950, 1.0078, 0.0128 | 1.0012, 1.0016, 0.0004
B | R(0.1) | 0.4744 (0.1467) | 0.4738 (0.0315) | 0.1869, 0.7620, 0.5751 | 0.4122, 0.5355, 0.1233
B | h(0.1) | 27.759 (11.727) | 27.744 (0.4575) | 4.7745, 50.743, 45.969 | 26.840, 28.620, 1.7800
C | γ | 7.0653 (2.1581) | 7.0653 (0.0001) | 2.8354, 11.295, 8.4597 | 7.0651, 7.0654, 0.0004
C | σ | 1.0014 (0.0028) | 1.0014 (0.0001) | 0.9958, 1.0069, 0.0111 | 1.0012, 1.0016, 0.0004
C | R(0.1) | 0.4725 (0.1279) | 0.4719 (0.0315) | 0.2217, 0.7233, 0.5015 | 0.4104, 0.5334, 0.1230
C | h(0.1) | 27.772 (10.155) | 27.756 (0.4535) | 7.8678, 47.675, 39.807 | 26.861, 28.625, 1.7643
D | γ | 6.7975 (2.0105) | 6.7975 (0.0001) | 2.8570, 10.738, 7.8811 | 6.7974, 6.7977, 0.0004
D | σ | 1.0019 (0.0037) | 1.0019 (0.0001) | 0.9947, 1.0092, 0.0144 | 1.0017, 1.0021, 0.0004
D | R(0.1) | 0.5266 (0.1234) | 0.5261 (0.0241) | 0.2847, 0.7685, 0.4838 | 0.4791, 0.5733, 0.0942
D | h(0.1) | 25.900 (9.4594) | 25.892 (0.3983) | 7.3602, 44.440, 37.080 | 25.112, 26.667, 1.5548
E | γ | 5.4851 (1.4151) | 5.4851 (0.0001) | 2.7116, 8.2587, 5.5471 | 5.4849, 5.4853, 0.0004
E | σ | 1.0073 (0.0097) | 1.0073 (0.0001) | 0.9882, 1.0264, 0.0382 | 1.0071, 1.0075, 0.0004
E | R(0.1) | 0.6403 (0.1023) | 0.6402 (0.0070) | 0.4397, 0.8409, 0.4011 | 0.6265, 0.6541, 0.0276
E | h(0.1) | 19.063 (6.4921) | 19.063 (0.1364) | 6.3385, 31.787, 25.448 | 18.795, 19.329, 0.5340
Table 13. Criteria of the best progressive censoring.
Criterion | Subject
Crit[1] | Maximize trace(I(α̂))
Crit[2] | Minimize trace(V(α̂))
Crit[3] | Minimize det(V(α̂))
Crit[4] | Minimize V̂(log(P̂_q)), 0 < q < 1
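A hedged R sketch of Crit[1]-Crit[4] is given below, computed from an estimated variance-covariance matrix V of (γ, σ). The gradient needed for Crit[4] depends on the ULL quantile function, which is not restated here, so grad_logq is left as a user-supplied placeholder. The example matrix uses the standard errors of sample A from Table 8 with an assumed, not reported, correlation, so it is illustrative only.

```r
# Sketch of the optimal-censoring criteria in Table 13.
censoring_criteria <- function(V, grad_logq = NULL, q = c(0.3, 0.6, 0.9)) {
  I <- solve(V)                               # estimated Fisher information
  out <- c(`Crit[1]` = sum(diag(I)),          # maximize trace of I
           `Crit[2]` = sum(diag(V)),          # minimize trace of V
           `Crit[3]` = det(V))                # minimize det of V
  if (!is.null(grad_logq)) {
    # Crit[4]: delta-method variance of the estimated log-quantile,
    # g' V g with g = d log(P_q) / d(parameters), for each q.
    crit4 <- sapply(q, function(p) {
      g <- grad_logq(p)
      as.numeric(t(g) %*% V %*% g)
    })
    out <- c(out, setNames(crit4, paste0("Crit[4], q=", q)))
  }
  out
}

# Hypothetical variance-covariance matrix built from the St.Ers of sample A
# in Table 8 (0.7964 and 0.3376) with an assumed correlation of 0.8:
V <- matrix(c(0.7964^2, 0.8 * 0.7964 * 0.3376,
              0.8 * 0.7964 * 0.3376, 0.3376^2), 2, 2)
censoring_criteria(V)
```

Because the trace of V does not involve the covariance term, the Crit[2] value returned here (about 0.748) coincides with the sample A entry of Table 14, while Crit[1] and Crit[3] depend on the assumed correlation.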
Table 14. Optimum progressive plans from the MFL data.
Columns: Sample | Crit[1] | Crit[2] | Crit[3] | Crit[4] (q = 0.3) | Crit[4] (q = 0.6) | Crit[4] (q = 0.9).
A | 20.062 | 0.7483 | 0.03730 | 0.00085 | 0.00292 | 0.00870
B | 24.673 | 0.6184 | 0.02506 | 0.00085 | 0.00288 | 0.00836
C | 30.878 | 0.5788 | 0.01874 | 0.00057 | 0.00180 | 0.00549
D | 23.073 | 0.6246 | 0.02707 | 0.00066 | 0.00205 | 0.00620
E | 20.706 | 0.5299 | 0.02559 | 0.00067 | 0.00193 | 0.00572
Table 15. Optimum progressive plans from the MCs data.
Columns: Sample | Crit[1] | Crit[2] | Crit[3] | Crit[4] (q = 0.3) | Crit[4] (q = 0.6) | Crit[4] (q = 0.9).
A | 92,435,021.523 | 15.607 | 1.688 × 10^−7 | 5.729 × 10^−5 | 3.189 × 10^−4 | 2.284 × 10^−3
B | 28,664,462.023 | 6.1640 | 2.150 × 10^−7 | 3.654 × 10^−5 | 1.838 × 10^−4 | 1.348 × 10^−3
C | 28,897,096.810 | 4.6575 | 1.612 × 10^−7 | 2.981 × 10^−5 | 1.383 × 10^−4 | 1.007 × 10^−3
D | 13,740,904.859 | 4.0422 | 2.942 × 10^−7 | 3.508 × 10^−5 | 1.540 × 10^−4 | 1.086 × 10^−3
E | 974,049.19863 | 2.0026 | 2.056 × 10^−6 | 5.371 × 10^−5 | 2.193 × 10^−4 | 1.503 × 10^−3