Classical and Bayesian Inference of an Exponentiated Half-Logistic Distribution under Adaptive Type II Progressive Censoring

The point and interval estimations for the unknown parameters of an exponentiated half-logistic distribution based on adaptive type II progressive censoring are obtained in this article. At the beginning, the maximum likelihood estimators are derived. Afterward, the observed and expected Fisher information matrices are obtained to construct the asymptotic confidence intervals. Meanwhile, the percentile bootstrap method and the bootstrap-t method are put forward for the establishment of confidence intervals. With respect to Bayesian estimation, the Lindley method is used under three different loss functions. The importance sampling method is also applied to calculate Bayesian estimates and construct corresponding highest posterior density (HPD) credible intervals. Finally, numerous simulation studies are conducted on the basis of Markov Chain Monte Carlo (MCMC) samples to compare the performance of the estimators, and an authentic data set is analyzed for illustrative purposes.


Adaptive Type II Progressive Censoring Scheme
In this day and age, owing to the development of science and technology, industrial products have become highly reliable. As a result, obtaining sufficient failure times during a life-testing experiment for statistical analysis leads to a sharp increase in cost and time. Hence, the aim of reducing test time and saving cost leads us into the realm of censoring. With units purposefully removed before their failure times, the duration and cost of the experiment can be greatly reduced. Many statisticians have investigated various censoring schemes. The two most commonly used are the type I and type II censoring schemes. In type I censoring, the life-testing experiment terminates at a predetermined time, while, under type II censoring, the test stops once the number of observed failures reaches a predetermined value. For the sake of further reducing experimental cost and time, a combination of these two schemes called hybrid censoring was put forward. However, none of these schemes permits surviving units to be removed during the experiment, which lacks flexibility. Accordingly, the concept of progressive censoring was brought forward by [1] to allow units to be removed at times other than the terminal experimental time. A concise presentation of progressive type II censoring is as follows. Assume that there are n identical units in total in the test. The failure times of the units are denoted by X = (X(1:m:n), X(2:m:n), · · · , X(m−1:m:n), X(m:m:n)), and the censoring scheme by R = (R1, R2, · · · , Rm−1, Rm), where n − m = ∑_{i=1}^{m} Ri. When the first failure occurs at X1, R1 of the remaining n − 1 units are removed at random. In general, when the j-th failure occurs, Rj of the remaining n − j − ∑_{i=1}^{j−1} Ri surviving units are removed, and at the m-th failure all remaining Rm units are withdrawn, terminating the experiment.
In the adaptive type II progressive censoring scheme, a threshold time T is additionally prespecified. If the m-th failure occurs before T, the experiment proceeds exactly as under the conventional progressive scheme; otherwise, once T has passed, no further units are withdrawn (i.e., R_{J+1} = · · · = R_{m−1} = 0, where J is the number of failures observed before T), and at the m-th failure all remaining units are removed. In this way, m failures are guaranteed while the total test time is kept close to T. Since the adaptive type II progressive censoring scheme was proposed, its good properties have attracted a great number of researchers to this field. [4] further studied the adaptive progressive type II censoring model.
Under this censoring model, [5] also studied the estimators of the unknown parameters of the Weibull distribution; both classical and Bayesian estimations were derived under the scheme. [6] combined adaptive type II progressive censoring with the exponential step-stress accelerated life-testing model to derive confidence intervals. [7] extended this censoring scheme by taking account of competing risks under the two-parameter Rayleigh distribution and made classical and Bayesian inference.


The Exponentiated Half-Logistic Distribution
The exponentiated half-logistic (EHL) distribution is widely used in numerous applications, particularly in parameter estimation. It has been applied in many areas, including insurance, engineering, medicine, and education. As noted in Ref. [8], this distribution is suitable for modeling lifetime data and closely resembles other two-parameter families of distributions. For example, the Gamma distribution is an important member of the two-parameter family of distributions. However, compared to the Gamma distribution, the exponentiated half-logistic distribution has the advantage of a closed-form cumulative distribution function.
In this article, we focus on the exponentiated half-logistic distribution. The probability density function (PDF) is written as:

f(x; λ, σ) = (2λ/σ) e^{−x/σ} (1 − e^{−x/σ})^{λ−1} / (1 + e^{−x/σ})^{λ+1},  x > 0, λ, σ > 0, (1)

and the cumulative distribution function (CDF) is described as

F(x; λ, σ) = [(1 − e^{−x/σ}) / (1 + e^{−x/σ})]^{λ},  x > 0, (2)

where λ > 0 is the shape parameter and σ > 0 is the scale parameter. We denote this distribution as EHL(λ, σ). The corresponding reliability function is written as

S(t) = 1 − [(1 − e^{−t/σ}) / (1 + e^{−t/σ})]^{λ},

while the hazard rate function is

h(t) = f(t; λ, σ) / S(t).

From Figure 2, when λ > 1, the PDF of the exponentiated half-logistic distribution is unimodal, while for λ < 1 it is monotonically decreasing. When λ is fixed, the smaller σ is, the more sharply the PDF decreases. As for the CDF of the distribution, its growth becomes slower as σ increases; furthermore, a smaller λ results in a faster rising rate. When λ = 1, the exponentiated half-logistic distribution reduces to the renowned half-logistic distribution.
The half-logistic distribution is used extensively, particularly for censored data in the area of survival analysis, and has been studied by several researchers. The order statistics of the half-logistic distribution were studied in Ref. [9]. On the basis of progressively type II censored data, Ref. [10] derived the classical and Bayes estimators of the scale parameter of this distribution. Building on the results of [10], analytic expressions for the biases of the maximum likelihood estimators of the distribution were studied further in [11]. The generalized ranked-set sampling technique was employed to estimate the parameters of the half-logistic distribution in [12].
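Because the CDF (2) has a closed form, it can be inverted analytically, which gives a convenient inverse-transform sampler. The following is an illustrative Python sketch (the paper's own computations used R; the function names here are ours, not from the paper):

```python
import numpy as np

def ehl_pdf(x, lam, sigma):
    """PDF (1) of EHL(lam, sigma)."""
    u = np.exp(-np.asarray(x, float) / sigma)
    return 2.0 * lam / sigma * u * (1.0 - u) ** (lam - 1.0) / (1.0 + u) ** (lam + 1.0)

def ehl_cdf(x, lam, sigma):
    """Closed-form CDF (2) -- the stated advantage over the Gamma model."""
    u = np.exp(-np.asarray(x, float) / sigma)
    return ((1.0 - u) / (1.0 + u)) ** lam

def ehl_ppf(p, lam, sigma):
    """Inverse CDF: set ((1-u)/(1+u))^lam = p and solve for x = -sigma*ln(u)."""
    g = np.asarray(p, float) ** (1.0 / lam)
    return sigma * np.log((1.0 + g) / (1.0 - g))

# inverse-transform sampling: feed uniforms through the quantile function
rng = np.random.default_rng(2024)
sample = ehl_ppf(rng.uniform(size=5), lam=2.0, sigma=1.0)
```

Solving ((1 − u)/(1 + u))^λ = p for u = e^{−x/σ} gives x = σ ln((1 + p^{1/λ})/(1 − p^{1/λ})), which is what `ehl_ppf` implements.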
The exponentiated half-logistic distribution has recently attracted many researchers. On the basis of progressive type II censored data, Ref. [13] derived the maximum likelihood estimator of the scale parameter of an exponentiated half-logistic distribution and proposed some approximate maximum likelihood estimators as well. In addition to the MLE, Ref. [14] focused on the moment estimators and the entropy estimator for this distribution. For the purpose of promoting the practicability of the distribution, Ref. [15] extended the exponentiated half-logistic distribution by putting forward the exponentiated half-logistic family, a fresh generator of continuous distributions with two additional parameters.
Considering that the life test sometimes stops at a predetermined time, Ref. [16] developed acceptance sampling for the percentiles of this distribution; both the operating characteristic values of the sampling plans and the producer's risk were presented. Based on the distribution, Ref. [17] proposed an attribute control chart for time-truncated life tests with different shape parameters. Thus far, research associated with this distribution still leaves a great deal of room for exploration.
In this article, the problem of point and interval estimation of the parameters of the exponentiated half-logistic distribution under adaptive type II progressive censored data is considered. We organize the remainder of this paper as follows. In Section 2, the maximum likelihood estimates are derived and computed. Meanwhile, the observed and expected Fisher information matrices are acquired, and the asymptotic confidence intervals are then established. We employ the bootstrap resampling method to build two bootstrap confidence intervals in Section 3. In Section 4, Bayesian estimation under several loss functions is carried out by utilizing the Lindley method. The importance sampling method is also used to calculate the Bayesian estimates and construct the highest posterior density (HPD) credible intervals. Simulations are conducted, and the behaviors of the estimators obtained with the diverse methods are evaluated and compared in Section 5. An authentic data set is studied in Section 6 to illustrate the effectiveness of the estimation methods in the above sections. In the end, the conclusions on point and interval estimation are drawn in Section 7.

Point Estimation
In this section, maximum likelihood estimation is used to estimate the unknown parameters on the basis of the adaptive type II progressive censored data. Assume that the adaptive type II progressive censored data come from an exponentiated half-logistic distribution. Let x(i:m:n) denote the i-th observation, so that x(1:m:n) < x(2:m:n) < · · · < x(m:m:n). In addition, T represents the prespecified threshold time of the experiment and J denotes the index of the last failure before time T.
For the sake of simplicity, let x = (x1, x2, · · · , xm) denote (x(1:m:n), x(2:m:n), · · · , x(m:m:n)). Under adaptive type II progressive censoring, the likelihood function takes the form

L(σ, λ | x) = D [∏_{i=1}^{m} f(x_i)] [∏_{i=1}^{J} (1 − F(x_i))^{R_i}] [1 − F(x_m)]^{R*},

where D is a constant free of the parameters and R* = n − m − ∑_{i=1}^{J} R_i is the number of units removed at the final failure. Substituting the PDF (1) and the CDF (2) and writing u_i = e^{−x_i/σ} and g_i = (1 − u_i)/(1 + u_i), the log-likelihood function can be obtained as

ℓ(σ, λ) = ln D + m ln 2 + m ln λ − m ln σ − (1/σ) ∑_{i=1}^{m} x_i + (λ − 1) ∑_{i=1}^{m} ln(1 − u_i) − (λ + 1) ∑_{i=1}^{m} ln(1 + u_i) + ∑_{i=1}^{J} R_i ln(1 − g_i^λ) + R* ln(1 − g_m^λ).

Taking the partial derivatives with respect to λ and σ and setting them equal to zero yields the likelihood equations; for instance,

∂ℓ/∂λ = m/λ + ∑_{i=1}^{m} ln g_i − ∑_{i=1}^{J} R_i g_i^λ ln g_i / (1 − g_i^λ) − R* g_m^λ ln g_m / (1 − g_m^λ) = 0,

and the equation for σ follows analogously using ∂u_i/∂σ = u_i x_i/σ². The roots of these equations are the MLEs. However, owing to the nonlinearity of the equations, explicit expressions cannot be obtained. Thus, the Newton-Raphson method, which finds the roots of equations through successive Taylor-series linearizations, is employed to acquire the MLEs, written as σ̂ and λ̂.
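As a concrete illustration of the Newton-Raphson step described above, the sketch below codes the censored log-likelihood and a damped Newton iteration. This is illustrative Python rather than the paper's R; `ehl_loglik` and `mle_newton` are our own names, and the derivatives are taken by finite differences rather than from the analytic expressions:

```python
import numpy as np

def ehl_loglik(params, x, n, R, J):
    """Log-likelihood under adaptive type II progressive censoring.
    x: sorted failure times; R: removal numbers; J: failures observed before T."""
    sigma, lam = params
    if sigma <= 0.0 or lam <= 0.0:
        return -np.inf
    m = len(x)
    u = np.exp(-x / sigma)
    g = (1.0 - u) / (1.0 + u)
    r_star = n - m - sum(R[:J])  # units removed at the final failure
    ll = (m * np.log(2.0 * lam / sigma) - x.sum() / sigma
          + (lam - 1.0) * np.log(1.0 - u).sum()
          - (lam + 1.0) * np.log(1.0 + u).sum())
    ll += sum(R[i] * np.log1p(-g[i] ** lam) for i in range(J))
    if r_star > 0:
        ll += r_star * np.log1p(-g[-1] ** lam)
    return ll

def mle_newton(x, n, R, J, start=(1.0, 1.0), tol=1e-8, max_iter=200):
    """Damped Newton-Raphson with finite-difference gradient and Hessian."""
    f = lambda th: ehl_loglik(th, x, n, R, J)
    th = np.array(start, float)
    h = 1e-5
    for _ in range(max_iter):
        grad = np.zeros(2)
        hess = np.zeros((2, 2))
        for i in range(2):
            ei = np.zeros(2); ei[i] = h
            grad[i] = (f(th + ei) - f(th - ei)) / (2.0 * h)
            for j in range(2):
                ej = np.zeros(2); ej[j] = h
                hess[i, j] = (f(th + ei + ej) - f(th + ei - ej)
                              - f(th - ei + ej) + f(th - ei - ej)) / (4.0 * h * h)
        step = np.linalg.solve(hess, grad)
        t = 1.0  # halve the step until it stays in the parameter space and ascends
        while (np.any(th - t * step <= 0.0) or f(th - t * step) < f(th)) and t > 1e-8:
            t *= 0.5
        new = th - t * step
        if np.max(np.abs(new - th)) < tol:
            return new
        th = new
    return th
```

With a complete sample (all R_i = 0 and R* = 0) the censored terms vanish and the routine reduces to the ordinary complete-sample MLE.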

Asymptotic Confidence Interval
In this subsection, the asymptotic confidence intervals for σ and λ are established by employing Var(σ̂) and Var(λ̂), which are acquired from the variance-covariance matrix, i.e., the inverse of the Fisher information matrix. The Fisher information quantifies, in a certain sense, the average amount of information about the parameters that a sample of random variables can provide. For our two-parameter model, the expected Fisher information matrix (FIM) is

I(σ, λ) = −E ( ∂²ℓ/∂σ²    ∂²ℓ/∂σ∂λ
               ∂²ℓ/∂λ∂σ   ∂²ℓ/∂λ²  ).

Its entries are determined by the distribution of the order statistics X(i). The PDF of X(i) based on a progressive type II censored sample can generally be derived from [1], and since adaptive progressive type II censoring is an improvement of progressive type II censoring, the PDF of X(i) of EHL(λ, σ) under the adaptive scheme can be obtained accordingly; the FIM I(σ, λ) could then, in principle, be calculated directly. In order to avoid this complex calculation, the observed Fisher information matrix is employed to approximate the expected one, and the variance-covariance matrix is then obtained. The observed FIM is

I(σ̂, λ̂) = ( −∂²ℓ/∂σ²    −∂²ℓ/∂σ∂λ
             −∂²ℓ/∂λ∂σ   −∂²ℓ/∂λ²  ) evaluated at (σ, λ) = (σ̂, λ̂),

where σ̂ and λ̂ are the MLEs of σ and λ, respectively.
Then, the asymptotic variance-covariance matrix is the inverse of the observed Fisher Information matrix I σ,λ , denoted as I −1 σ,λ .
Thus, the 100(1 − α)% asymptotic confidence intervals for σ and λ can be constructed as

( σ̂ − d_{α/2} √Var(σ̂), σ̂ + d_{α/2} √Var(σ̂) ) and ( λ̂ − d_{α/2} √Var(λ̂), λ̂ + d_{α/2} √Var(λ̂) ),

where d_{α/2} denotes the upper (α/2)-th quantile of the standard normal distribution and Var(σ̂), Var(λ̂) are the diagonal elements of the inverse observed Fisher information matrix.
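The approximation of the expected FIM by the observed FIM translates directly into code: evaluate a finite-difference Hessian of the log-likelihood, invert its negative, and read the variances off the diagonal. A sketch under the simplifying assumption of a complete (uncensored) sample, with illustrative function names of our own:

```python
import numpy as np

def loglik(sigma, lam, x):
    # complete-sample EHL log-likelihood (censoring terms omitted for brevity)
    u = np.exp(-x / sigma)
    return (len(x) * np.log(2.0 * lam / sigma) - x.sum() / sigma
            + (lam - 1.0) * np.log(1.0 - u).sum()
            - (lam + 1.0) * np.log(1.0 + u).sum())

def asymptotic_ci(sig_hat, lam_hat, x):
    """95% Wald intervals from the observed information (numerical Hessian)."""
    h = 1e-5
    f = lambda s, l: loglik(s, l, x)
    H = np.empty((2, 2))
    H[0, 0] = (f(sig_hat + h, lam_hat) - 2.0 * f(sig_hat, lam_hat)
               + f(sig_hat - h, lam_hat)) / h ** 2
    H[1, 1] = (f(sig_hat, lam_hat + h) - 2.0 * f(sig_hat, lam_hat)
               + f(sig_hat, lam_hat - h)) / h ** 2
    H[0, 1] = H[1, 0] = (f(sig_hat + h, lam_hat + h) - f(sig_hat + h, lam_hat - h)
                         - f(sig_hat - h, lam_hat + h)
                         + f(sig_hat - h, lam_hat - h)) / (4.0 * h ** 2)
    cov = np.linalg.inv(-H)            # variance-covariance matrix, inverse FIM
    d = 1.959963984540054              # upper 0.025 quantile of N(0, 1)
    se = np.sqrt(np.diag(cov))
    return ((sig_hat - d * se[0], sig_hat + d * se[0]),
            (lam_hat - d * se[1], lam_hat + d * se[1]))
```

In practice the Hessian would be evaluated at the MLEs (σ̂, λ̂); the adaptive-censoring terms of the log-likelihood are omitted here only to keep the sketch short.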

Bootstrap Confidence Intervals
It is noticed that the classical asymptotic theory works well with a large sample size but can be unreliable when the sample size is small. Thus, bootstrap methods are applied to provide more precise confidence intervals.
The two most commonly used bootstrap methods were proposed in [18]. One is the percentile bootstrap method (boot-p), which replaces the distribution of the original sample statistic with the distribution of the bootstrap sample statistic to establish confidence intervals. The other is the bootstrap-t method (boot-t), whose core idea is to convert the bootstrap sample statistic into a corresponding t-statistic. The detailed simulation procedures of the two bootstrap methods are listed in Algorithms 1 and 2.

Algorithm 1:
Constructing boot-p confidence intervals step 1 Set the simulation number N boot in advance. step 2 Compute the MLEs of σ and λ under the original censored sample x = (x 1 , x 2 , · · · , x m ), denoted as σ̂ and λ̂. (If we carry out a simulation study, we should first generate an adaptive progressive type II censored sample x = (x 1 , x 2 , · · · , x m ) from EHL(λ, σ) with T, n, m, R as the original sample.) step 3 Generate a bootstrap sample x * using σ̂, λ̂ and the same censoring pattern (n, m, T, R). Then, calculate the bootstrap MLEs under sample x * , denoted as σ̂ * and λ̂ * . step 4 Repeat step 3 N boot times, sort the resulting bootstrap MLEs in ascending order, and obtain (σ̂ *(1) , · · · , σ̂ *(N boot) ) and (λ̂ *(1) , · · · , λ̂ *(N boot) ). step 5 The 100(1 − α)% boot-p confidence intervals are then (σ̂ *(K 1 ) , σ̂ *(K 2 ) ) and (λ̂ *(K 1 ) , λ̂ *(K 2 ) ), where K 1 and K 2 are the integer parts of (α/2) × N boot and (1 − α/2) × N boot , respectively.
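To make the boot-p recipe concrete without reproducing the full censored-sample machinery, the sketch below runs a parametric percentile bootstrap for λ with σ treated as known, in which case the complete-sample MLE has the closed form λ̂ = −m / ∑ ln g_i. This simplification, and all function names, are ours (illustrative Python, not the paper's R code):

```python
import numpy as np

def ehl_sample(lam, sigma, size, rng):
    # inverse-transform sampling from the closed-form CDF
    g = rng.uniform(size=size) ** (1.0 / lam)
    return sigma * np.log((1.0 + g) / (1.0 - g))

def lam_mle(x, sigma):
    # with sigma known, dl/dlam = m/lam + sum(ln g_i) = 0 has a closed form
    u = np.exp(-x / sigma)
    return -len(x) / np.log((1.0 - u) / (1.0 + u)).sum()

def boot_p_interval(x, sigma, alpha=0.05, n_boot=1000, seed=0):
    """Percentile bootstrap CI for lam: refit on parametric resamples, then
    read off the empirical alpha/2 and 1 - alpha/2 order statistics."""
    rng = np.random.default_rng(seed)
    lam_hat = lam_mle(x, sigma)
    boots = np.sort([lam_mle(ehl_sample(lam_hat, sigma, len(x), rng), sigma)
                     for _ in range(n_boot)])
    k1 = int(alpha / 2 * n_boot)
    k2 = int((1 - alpha / 2) * n_boot) - 1
    return boots[k1], boots[k2]
```

The full procedure of Algorithm 1 differs only in that each resample is an adaptive progressive censored sample and both parameters are re-estimated jointly.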

Bootstrap-t Confidence Intervals
Then, the 100(1 − α)% boot-t confidence intervals are given by

( σ̂ + S 1 *(K 1 ) √Var(σ̂), σ̂ + S 1 *(K 2 ) √Var(σ̂) ) and ( λ̂ + S 2 *(K 1 ) √Var(λ̂), λ̂ + S 2 *(K 2 ) √Var(λ̂) ),

where S 1 * and S 2 * are the ordered bootstrap t-statistics defined in Algorithm 2, and K 1 and K 2 are the integer parts of (α/2) × N boot and (1 − α/2) × N boot , respectively.

Algorithm 2:
Constructing bootstrap-t confidence intervals step 1 Set the simulation number N boot ahead. step 2 Compute the MLEs of σ and λ under the original censored sample x = (x 1 , x 2 , · · · , x m ), denoted as σ̂ and λ̂. (If we carry out a simulation study, we should first generate an adaptive progressive type II censored sample x = (x 1 , x 2 , · · · , x m ) from EHL(λ, σ) with T, n, m, R as the original sample.) step 3 Generate a bootstrap sample x * using σ̂, λ̂ and the same censoring pattern (n, m, T, R). Then, calculate the bootstrap MLEs σ̂ * and λ̂ * and their variances Var(σ̂ * ) and Var(λ̂ * ). step 4 Compute the bootstrap t-statistics S 1 * = (σ̂ * − σ̂)/√Var(σ̂ * ) and S 2 * = (λ̂ * − λ̂)/√Var(λ̂ * ).
step 5 Repeat steps 3-4 N boot times to acquire a series of bootstrap t-statistics, sorted in ascending order as (S 1 *(1) , · · · , S 1 *(N boot) ) and (S 2 *(1) , · · · , S 2 *(N boot) ).

Bayesian Estimation
In this section, we compute the Bayesian estimates of the unknown quantities by using the Lindley method and the importance sampling procedure. Unlike classical statistics, Bayesian statistics treats the quantities as random variables, combining prior information with the observed information.
The choice of prior distribution is a pivotal problem. Generally speaking, a conjugate prior is the first choice due to its algebraic simplicity. However, it is very difficult to find such a prior when both σ and λ are unknown, so it is reasonable to choose priors whose form matches the likelihood (6). Suppose that σ ∼ IG(γ, δ) and λ ∼ Ga(α, β), and that the two priors are independent. The PDFs of the prior distributions are

π 1 (σ) = (δ^γ / Γ(γ)) σ^{−(γ+1)} e^{−δ/σ},  σ > 0, and π 2 (λ) = (β^α / Γ(α)) λ^{α−1} e^{−βλ},  λ > 0.

The corresponding joint prior distribution is

π(σ, λ) = π 1 (σ) π 2 (λ) ∝ σ^{−(γ+1)} λ^{α−1} e^{−δ/σ − βλ}.

Given the sample x, the posterior distribution π(σ, λ|x) can be written as

π(σ, λ | x) = L(x | σ, λ) π(σ, λ) / ∫ 0 ^∞ ∫ 0 ^∞ L(x | σ, λ) π(σ, λ) dσ dλ.

Symmetric and Asymmetric Loss Functions
The loss function is employed to quantify the discrepancy between the estimate of a parameter and its true value. The squared error loss function is a symmetric loss function and is applied in many areas. However, when overestimation causes greater loss than underestimation, or vice versa, a symmetric loss function is not suitable; an asymmetric loss function is employed instead. Therefore, in this subsection we consider Bayesian estimation under one symmetric loss function, namely the squared error loss function (SELF), as well as two asymmetric loss functions, namely the Linex loss function (LLF) and the general entropy loss function (GELF).

Squared Error Loss Function (SELF)
The squared error loss function is a symmetric loss function that penalizes overestimation and underestimation equally. It is the squared distance between the estimate and the target value:

L(υ, υ̂) = (υ̂ − υ)²,

where υ̂ is the estimate of υ. The Bayesian estimate of υ under SELF is the posterior mean,

υ̂ SELF = E(υ | x).

Then, for the unknown parameters σ and λ, the Bayesian estimates under SELF are σ̂ SELF = E(σ | x) and λ̂ SELF = E(λ | x).

Linex Loss Function (LLF)
The Linex function is a well-known asymmetric loss function, defined as

L(υ, υ̂) = e^{p(υ̂ − υ)} − p(υ̂ − υ) − 1,  p ≠ 0.

The magnitude of p denotes the level of asymmetry and its sign represents the direction of asymmetry. For p < 0, LLF grows exponentially in the negative direction and linearly in the positive direction, so a negative bias has a more serious impact; for p > 0, positive errors are punished heavily. The larger |p| is, the larger the punishment intensity; when |p| approaches 0, LLF is almost symmetric.
The Bayesian estimate of υ under LLF is

υ̂ LLF = −(1/p) ln E(e^{−pυ} | x).

Then, for the unknown parameters σ and λ, the Bayesian estimates under LLF are σ̂ LLF = −(1/p) ln E(e^{−pσ} | x) and λ̂ LLF = −(1/p) ln E(e^{−pλ} | x).

General Entropy Loss Function (GELF)
The general entropy loss function (GELF) is another noted asymmetric loss function:

L(υ, υ̂) ∝ (υ̂/υ)^q − q ln(υ̂/υ) − 1.

For q > 0, overestimation has a more serious impact than underestimation, and vice versa. The Bayesian estimate of υ under GELF is

υ̂ GELF = [E(υ^{−q} | x)]^{−1/q}.

Notably, when q = −1, the Bayesian estimate under GELF has the same value as that under SELF. The Bayesian estimates of σ and λ under GELF are σ̂ GELF = [E(σ^{−q} | x)]^{−1/q} and λ̂ GELF = [E(λ^{−q} | x)]^{−1/q}. Note that all of these Bayesian estimates take the form of a ratio of two complicated integrals, for which explicit expressions cannot be obtained. Thus, the Lindley method is employed to solve this problem.
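Given posterior draws (e.g., from MCMC or the importance sampling procedure of this section), the three estimators reduce to simple sample functionals. A minimal Python sketch (the paper's computations used R; the function name is illustrative):

```python
import numpy as np

def bayes_estimates(samples, p=0.5, q=0.5):
    """Bayes estimates of one parameter from (approximate) posterior draws
    under SELF, LLF and GELF."""
    samples = np.asarray(samples, float)
    self_est = samples.mean()                               # E[v | x]
    llf_est = -np.log(np.mean(np.exp(-p * samples))) / p    # -(1/p) ln E[e^{-p v}]
    gelf_est = np.mean(samples ** (-q)) ** (-1.0 / q)       # [E v^{-q}]^{-1/q}
    return self_est, llf_est, gelf_est
```

Setting q = −1 makes the GELF estimate coincide with the posterior mean, matching the remark above.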

Squared Error Loss Function (SELF)
For a two-parameter model with θ = (σ, λ), the Lindley method approximates the posterior expectation of a function ϕ(σ, λ) by expanding the ratio of integrals around the MLE (σ̂, λ̂), using the third derivatives of the log-likelihood, the derivatives of the log-prior ρ = ln π(σ, λ), and the elements of the inverse observed information matrix, all evaluated at (σ̂, λ̂). For σ, let ϕ(σ, λ) = σ; then the Bayesian estimate of σ under SELF is the resulting Lindley approximation of E(σ | x). Similarly, taking ϕ(σ, λ) = λ yields the Bayesian estimate of λ under SELF.

General Entropy Loss Function (GELF)
For the parameter σ, let ϕ(σ, λ) = σ^{−q}; the Lindley approximation then gives Ê(σ^{−q} | x), and the Bayesian estimate of σ under GELF is σ̂ GELF = [Ê(σ^{−q} | x)]^{−1/q}. Similarly, for the parameter λ, taking ϕ(σ, λ) = λ^{−q} gives λ̂ GELF = [Ê(λ^{−q} | x)]^{−1/q}. Though the Lindley approximation is effective for obtaining point estimates by approximating the ratio of integrals, it cannot provide credible intervals for the unknown parameters. Therefore, the importance sampling method is adopted to obtain not only point estimates but also credible intervals.

Importance Sampling Procedure
The importance sampling procedure is an extension of the Monte Carlo method which can greatly reduce the number of sample points drawn in the simulation, and it is widely used in the reliability analysis of various models. From (6) and (21), the joint posterior distribution can be factorized into an importance density for σ, a conditional importance density for λ given σ, and a remaining weight function. Therefore, the Bayesian estimate of ϕ(σ, λ) is acquired by the following steps:

1. Generate σ from its importance density.
2. On the basis of step 1, generate λ from the conditional importance density given σ.
3. Repeat step 1 and step 2 M times to produce a series of samples (σ i , λ i ), i = 1, · · · , M, with associated weights w i .
4. Compute the self-normalized estimate ϕ̂ = ∑ i w i ϕ(σ i , λ i ) / ∑ i w i .
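The weighted estimate in step 4 and the HPD interval construction can be sketched as follows; `hpd_interval` uses the standard shortest-window search over sorted draws (illustrative Python with our own function names, not the paper's R code):

```python
import numpy as np

def importance_estimate(phi_vals, weights):
    # self-normalized importance sampling estimate of E[phi | x]
    w = np.asarray(weights, float)
    return np.sum(w * np.asarray(phi_vals, float)) / w.sum()

def hpd_interval(samples, alpha=0.05):
    """Shortest interval containing a (1 - alpha) fraction of the draws."""
    s = np.sort(np.asarray(samples, float))
    n = len(s)
    k = int(np.ceil((1.0 - alpha) * n))     # draws the interval must cover
    widths = s[k - 1:] - s[: n - k + 1]     # width of every candidate window
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]
```

For a unimodal posterior, the shortest-window interval coincides with the HPD region as the number of draws grows.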

Simulation
Plenty of Monte Carlo simulation experiments are carried out to appraise the performance of our estimators. The R software is employed for all the simulations. Point estimation is evaluated by the mean square error (MSE) and the estimation value (VALUE), while interval estimation is assessed by the coverage rate (CR) and the interval mean length (ML). For point estimation, a smaller mean square error and an estimation value closer to the true value indicate better performance. For interval estimation, the higher the coverage rate and the narrower the interval mean length, the better the estimation.
First of all, adaptive type II progressive censored data from an exponentiated halflogistic distribution should be generated. The algorithm for generating adaptive Type II progressive censored data from a general distribution can be obtained in [3]. The algorithm to generate the censored data is listed in Algorithm 3.

1. Generate a conventional progressive type II censored sample X 1 , · · · , X m of size m from EHL(λ, σ) with censoring scheme R, ignoring the threshold T.

2. Confirm the value of J (the number of failures observed before T), and abandon the sample X J+2 , · · · , X m .

3. Generate the first m − J − 1 order statistics of the remaining n − J − 1 − ∑ i=1 J R i surviving units from the distribution truncated on the left at X J+1 , i.e., with density f(x)/(1 − F(x J+1 )), x > x J+1 .

In order to carry out the simulations, we set σ = 1. Two families of censoring schemes are considered: Scheme I (Sch I): R 1 = n − m, R k = 0, k = 2, 3, · · · , m; Scheme II (Sch II): as specified in Table 1. In addition, the specific censoring schemes conceived for the simulation are listed in Table 1.
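The generation steps above can be sketched as follows, using the Balakrishnan-Sandhu algorithm for step 1 and inverse-transform sampling from the left-truncated CDF for step 3. This is an illustrative Python sketch (the paper's simulations used R; the function names are ours):

```python
import numpy as np

def ehl_cdf(x, lam, sigma):
    u = np.exp(-x / sigma)
    return ((1.0 - u) / (1.0 + u)) ** lam

def ehl_ppf(p, lam, sigma):
    g = np.asarray(p, float) ** (1.0 / lam)
    return sigma * np.log((1.0 + g) / (1.0 - g))

def progressive_sample(lam, sigma, R, rng):
    """Balakrishnan-Sandhu algorithm for conventional progressive type II data."""
    m = len(R)
    w = rng.uniform(size=m)
    v = np.array([w[i - 1] ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)])
    u = 1.0 - np.cumprod(v[::-1])           # uniform progressive order statistics
    return ehl_ppf(u, lam, sigma)

def adaptive_sample(lam, sigma, n, m, T, R, rng):
    """Steps 1-3 above: regenerate the tail beyond X_(J+1) from the truncated law."""
    x = progressive_sample(lam, sigma, R, rng)
    J = int(np.searchsorted(x, T))          # failures observed before T
    if J >= m - 1:
        return x, J                         # the scheme is effectively unchanged
    keep = x[: J + 1]
    n_left = n - (J + 1) - sum(R[: J])      # survivors once withdrawals stop
    base = float(ehl_cdf(keep[-1], lam, sigma))
    u = np.sort(rng.uniform(size=n_left))[: m - J - 1]
    tail = ehl_ppf(base + u * (1.0 - base), lam, sigma)
    return np.concatenate([keep, tail]), J
```

The left-truncation step maps uniforms into (F(x_{J+1}), 1) before inverting the CDF, which is exactly sampling from f(x)/(1 − F(x_{J+1})).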
For simplicity, we abbreviate the censoring schemes. For example, (1, 1, 1, 0, 0, 0, 0) is represented as (1*3, 0*4). In each case, the simulation is repeated 3000 times. Then, the associated MSEs and VALUEs with the point estimation and the related coverage rates and mean lengths with the interval estimation can be acquired through Monte Carlo simulations using R software.  For maximum likelihood estimation, the L-BFGS-B method is used and the simulation results are put into Table A1. In Bayesian estimation, we employ not only non-informative priors (non-infor) but also informative priors (infor). For the non-informative priors, we set α = β = γ = δ = 0.0001. Then, for the informative priors, we should first determine the hyper-parameters for Bayesian estimation. Generally speaking, the actual value of the parameter is usually considered as the expectation of the prior distribution. However, due to the complexity and interactive influence of the two prior distributions, the optimal value can not be found directly. Thus, we adopt a genetic algorithm and simulated annealing algorithm to determine the optimal hyper-parameters and the results are: γ = 4.5, δ = 7.5, α = 4.5, β = 4.5. To get Bayesian point estimation, the Lindley method and the importance sampling method are employed. Three loss functions are adopted separately for comparison purposes. The parameter p of LLF is set to 0.5 and 1 and the parameter q of GELF is set to −0.5 and 0.5.
The hyper-parameters of the informative priors are tuned by minimizing loss with respect to the true parameter values, and such tuning can only be performed when the true values are known. Hence, informative Bayes can only be seen as a reference, or an oracle method.
The results are presented in Tables A2-A9. In addition, the mean lengths and coverage rates of the asymptotic confidence intervals, boot-t intervals, boot-p intervals, and HPD intervals at the 95% confidence/credible level are shown in Tables A10 and A11. Owing to the large number of tables, it is not easy for readers to discern the patterns of the estimation. Therefore, some figures presenting the most representative simulation results are provided to show the patterns more intuitively. Figures 3 and 4 present the MSEs of the maximum likelihood estimates of the two parameters under censoring scheme I and censoring scheme II when T = 2. Figures 5 and 6 compare the MSEs of the maximum likelihood estimates with the Bayesian estimates with non-informative and informative priors obtained by importance sampling under censoring scheme I and T = 2. From Table A1, we can see that, when σ is considered, Sch I performs better than Sch II in all cases, yet when λ is considered, Sch II is more effective than Sch I except in the case of n = 30. Moreover, there is no specific pattern as T changes, which is apprehensible because the observed data may remain unaltered when T changes.
From Tables A2-A9, we can find that (1) Generally, the Bayesian estimates under the three loss functions with informative priors are more accurate than the MLEs in terms of MSE in all cases. This rule can be intuitively summarized from Figures 5 and 6. This is because the Bayesian method not only considers the data but also takes the prior information on the unknown parameters into account. In addition, the importance sampling procedure outperforms the Lindley method. (2) From Figures 5 and 6, it is clear that the performance of the Bayesian estimates with non-informative priors is very similar to that of the MLEs under all circumstances. This is because we have no information with respect to the unknown parameters; in other words, only the data are taken into account, so it is reasonable that the results are analogous to the MLEs. (3) The Bayesian estimates under GELF are superior to those under SELF and LLF. For LLF, Bayesian estimates under p = 1 are better than those under p = 0.5 for the parameter λ, while p = 0.5 is better than p = 1 for the estimate of σ. For GELF, both q = −0.5 and q = 0.5 perform satisfactorily. On the whole, the Bayesian estimates under GELF using the importance sampling procedure are the most effective, as they possess the minimal MSEs and the closest estimation values. (4) When σ is considered, Sch I performs better than Sch II except when n = 50, yet when λ is taken into account, Sch II is superior to Sch I in most cases.

From Tables A10 and A11, we can draw these conclusions: (1) The mean lengths of all the intervals become narrower as n and m increase, and this pattern holds for both σ and λ. In addition, the coverage rate of the intervals of σ increases while the coverage rate of the intervals of λ remains stable with the increase of m and n.
(2) The HPD credible intervals and boot-t intervals perform better than the asymptotic confidence intervals owing to their narrower mean lengths and higher coverage rates. In addition, the HPD credible intervals possess the narrowest mean lengths, while the boot-t intervals have the highest coverage rates. (3) The results of the two parameters' intervals have no obvious connection with the different censoring schemes.

Real Data Analysis
An authentic data set is analyzed in this section for expository purposes by employing the methods mentioned above. The data set originally comes from [20] and was further employed by [21,22]. The complete data set describes log times to breakdown in an insulating fluid testing experiment and is presented in Table 2. At the beginning, we should consider whether the distribution EHL(λ, σ) fits the data set well; the fitting effects of the exponentiated half-logistic distribution and the half-logistic distribution are therefore compared. The criteria for examining the goodness of fit include the negative log-likelihood (− ln L), the Kolmogorov-Smirnov (K-S) statistic with its p-value, the Bayesian information criterion (BIC), and the Akaike information criterion (AIC). The definitions are

AIC = 2d − 2 ln L,  BIC = d ln n − 2 ln L,  K-S = sup x |F n (x) − F(x)|,

where d is the number of parameters, L is the maximized value of the likelihood function, F n is the empirical distribution function, and n denotes the total number of observed values.
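The goodness-of-fit criteria are straightforward to compute once the fitted log-likelihood is available; a small illustrative Python sketch (function names are ours):

```python
import numpy as np

def ks_statistic(x, cdf):
    """K-S distance sup_x |F_n(x) - F(x)| for a fitted continuous CDF."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    F = cdf(x)
    # the supremum is attained just before or after a jump of F_n
    return max(np.max(np.arange(1, n + 1) / n - F), np.max(F - np.arange(n) / n))

def aic(loglik, d):
    # AIC = 2d - 2 ln L
    return 2.0 * d - 2.0 * loglik

def bic(loglik, d, n):
    # BIC = d ln n - 2 ln L
    return d * np.log(n) - 2.0 * loglik
```

The model with the smaller K-S, AIC, BIC, and − ln L values, and the larger K-S p-value, is preferred, which is the comparison reported in Table 3.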
The results of the K-S statistic, p-value, AIC, BIC, and − ln L for the two distributions are listed in Table 3. Obviously, the exponentiated half-logistic distribution fits the data better since it has lower K-S, AIC, BIC, and − ln L statistics and a higher p-value. Then, we can analyze these data on the basis of our model. We set n = 16, m = 12, and T = 3/2, 2. The two different censoring schemes are (4, 0*11) and (1*4, 0*8). Table 4 presents the specific adaptive type II censored data under the different schemes based on the data set. The point estimates of σ and λ are presented in Tables 5 and 6. For Bayesian estimation, since we have no informative prior, a non-informative prior is applied, namely α = β = γ = δ = 0.0001. Three loss functions are considered, and we still use the parameter values of the previous simulation. At the same time, 95% ACIs, boot-p, boot-t, and HPD intervals are established, and Tables 7 and 8 display the corresponding results. Let Lower denote the lower bound and Upper denote the upper bound. From Tables 5-8, the following conclusions are drawn: (1) The estimates of parameter σ using the Lindley method generally tend to be larger than those gained by the importance sampling procedure.

Conclusions
In this manuscript, classical and Bayesian inference for exponentiated half-logistic distribution under adaptive Type II progressive censoring is considered. The maximum likelihood estimates are derived through the Newton-Raphson algorithm. Bayesian estimation under three loss functions is also considered and the estimates are derived through importance sampling and the Lindley method. Meanwhile, we establish the confidence and credible intervals of σ and λ and contrast them with each other. Asymptotic confidence intervals are constructed based on observed and expected Fisher information matrices. In order to tackle the problem of small sample size, boot-p and boot-t intervals are computed.
In the simulation section, estimation values and mean square errors are calculated to test the performance of the point estimators, while mean lengths and coverage rates are considered for the interval estimators. According to the simulation results, it is clear that Bayesian estimation with suitable informative priors performs better than the MLEs under all circumstances. In more detail, the Bayesian estimates under GELF perform best among all the estimators, and the importance sampling procedure is more effective than the Lindley approximation. When it comes to interval estimation, boot-t and boot-p intervals perform better than asymptotic confidence intervals in the case of a small sample size. In addition, HPD credible intervals generally possess the shortest mean length, while boot-t intervals have the highest coverage rate compared with the other intervals.
Exponentiated half-logistic distribution under adaptive Type II progressive censoring is significant and practical due to the flexibility of the censoring scheme and the superior features of distribution. Furthermore, the competing risks and accelerated life test can be explored in the research field. In brief, carrying out further research on this model has great potential for survival and reliability analysis.

Data Availability Statement:
The data presented in this study are openly available in [20].

Conflicts of Interest:
The authors declare no conflict of interest.