Applying Transformer Insulation Using Weibull Extended Distribution Based on Progressive Censoring Scheme

In this paper, the Weibull extension distribution parameters are estimated under a progressive type-II censoring scheme with random removal. The model parameters are estimated using the maximum likelihood, maximum product spacing, and Bayesian estimation methods. For the classical methods (maximum likelihood and maximum product spacing), we used the Newton–Raphson algorithm. The Bayesian estimation is carried out using the Metropolis–Hastings algorithm under the squared-error loss function. The proposed estimation methods are compared using Monte Carlo simulations under a progressive type-II censoring scheme. An empirical study using a real transformer insulation data set and a simulation study are performed to validate the introduced methods of inference. Based on the results of our study, it can be concluded that the Bayesian method outperforms the maximum likelihood and maximum product-spacing methods for estimating the Weibull extension parameters under a progressive type-II censoring scheme in both the simulation and empirical studies.


Introduction
Several situations in life-testing and reliability experiments arise in which units are withdrawn or lost from the test before failure. The data from such tests or studies are called censored samples. Right, left, and interval censoring, single or multiple censoring, and type-I or type-II censoring are all examples of censoring schemes, but the conventional type-I and type-II schemes do not allow units to be withdrawn at any stage other than the end of the experiment. In type-II censoring, a total of n units are placed on test but, instead of continuing until all units fail, the test is terminated at the time of the mth failure (1 ≤ m ≤ n). Progressive type-II censoring is a generalization of this scheme and has recently sparked a great deal of interest among statisticians. Under this scheme, n units are placed on test at time zero and m failures are observed. When the first failure is observed, r_1 of the surviving units are randomly selected and removed, and so on. When the mth failure is observed, the remaining r_m = n − m − r_1 − r_2 − · · · − r_{m−1} units are all removed and the experiment terminates. For more information, see Balakrishnan and Aggarwala (2000) [1] and Balakrishnan (2007) [2].
Under progressive type-II censoring, Almetwally and Almongy (2019) [3] discussed the Bayesian approach to parameter estimation for the generalized power Weibull distribution. Hashem and Alyami (2021) [4] discussed inference on an exponential doubly Poisson distribution for a parallel-series structure. Abu-Moussa et al. (2021) [5] estimated the reliability of the stress-strength parameter for the Rayleigh distribution. Chen and Gui (2021) [6] estimated the unknown parameters of the truncated normal distribution.
The major objective of this paper is to address the estimation problem of the WE distribution parameters when the data are progressively type-II censored with binomial removals. Here, the MLE of the model parameters is derived and the MPS of this model is discussed. Moreover, the Bayesian estimation method is derived, and the Markov chain Monte Carlo (MCMC) method is used to approximate the integrals of the posterior model parameters. The performance of the MLE, MPS, and Bayesian estimation methods is investigated numerically for different sample sizes and parameter values.
The rest of this paper is organized as follows. The model description and notations are introduced in Section 2. MLE, MPS, and Bayesian estimation methods are given in Section 3. We provide a simulation study in Section 4. In Section 5, the transformer insulation application of real data is discussed. Finally, some remarks are offered in Section 6.

Model Description and Notations
Suppose a random variable t has the WE distribution with parameter vector Ω = (β, δ, λ). The reliability function of the WE distribution is given by Yuen and Tse (1996) [11] as follows:

S(t) = exp[λδ(1 − e^{(t/λ)^β})]; t > 0, for any λ, δ, β > 0. (1)

The corresponding hazard rate function of the WE model has the following form:

h(t) = δβ(t/λ)^{β−1} e^{(t/λ)^β}, (2)

and the pdf is given by the following:

f(t) = δβ(t/λ)^{β−1} e^{(t/λ)^β} exp[λδ(1 − e^{(t/λ)^β})]. (3)

The cumulative distribution function for the Weibull extension distribution is given by the following:

F(t) = 1 − exp[λδ(1 − e^{(t/λ)^β})]. (4)

The WE distribution is a versatile model that provides left-skewed, symmetrical, right-skewed, and reverse-J-shaped densities (see Figure 1, left). Its hazard rate (HR) function can provide decreasing, constant, increasing, upside-down-bathtub, bathtub, and reverse-J-shaped hazard rates (see Figure 1, right).
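To make the model concrete, the reliability, density, and quantile functions above can be coded directly. The sketch below is a minimal stdlib-Python illustration (function names `we_cdf`, `we_pdf`, and `we_quantile` are ours), assuming the Weibull-extension form with parameters (β, δ, λ) used in this section.

```python
import math
import random

# Minimal sketch of the WE distribution (parameters beta, delta, lam);
# names are ours. S(t) = exp(lam*delta*(1 - e^{(t/lam)^beta})).

def we_cdf(t, beta, delta, lam):
    """Cumulative distribution function F(t) = 1 - S(t)."""
    return 1.0 - math.exp(lam * delta * (1.0 - math.exp((t / lam) ** beta)))

def we_pdf(t, beta, delta, lam):
    """Density obtained by differentiating F(t)."""
    x = (t / lam) ** beta
    return (delta * beta * (t / lam) ** (beta - 1.0) * math.exp(x)
            * math.exp(lam * delta * (1.0 - math.exp(x))))

def we_quantile(u, beta, delta, lam):
    """Inverse cdf: t = lam * (ln(1 - ln(1 - u)/(lam*delta)))**(1/beta)."""
    return lam * math.log(1.0 - math.log(1.0 - u) / (lam * delta)) ** (1.0 / beta)

# Inverse-transform sampling from the WE distribution
random.seed(1)
sample = [we_quantile(random.random(), 0.5, 0.5, 0.5) for _ in range(5)]
```

Sampling through the quantile function in this way is also how the simulation study later draws WE data.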

The quantile function of the WE distribution is as follows:

t(u) = λ[ln(1 − ln(1 − u)/(λδ))]^{1/β}, 0 < u < 1. (5)

The progressive type-II censoring scheme (PTIICS) can be described as follows. Assume that n independent units are placed on life testing with the progressive censoring scheme R_i, i = 1, 2, . . . , m. At the time of the first failure, t_1, R_1 ∼ Binomial(n − m, P) units are randomly removed from the remaining (n − 1) surviving items. At the time of the second failure, t_2, R_2 ∼ Binomial(n − m − R_1, P) units of the remaining (n − 2 − R_1) are randomly removed, and so on. The test continues until the mth failure, at which time all the remaining n − m − R_1 − R_2 − · · · − R_{m−1} units are removed.
The experimenter determines the number of failures m and the removal probability P in the PTIICS. Assume that each unit removed from the test is independent of the others but has the same removal probability P. Then, at each failure time, the number of units removed follows a binomial distribution. The observed data take the form T_{1:m:n} < T_{2:m:n} < · · · < T_{m:m:n}.
The probability mass function of the number of units removed at each failure time is binomial:

Pr(R_1 = r_1) = C(n − m, r_1) P^{r_1} (1 − P)^{n − m − r_1},

while, for i = 2, 3, . . . , m − 1,

Pr(R_i = r_i | R_{i−1} = r_{i−1}, . . . , R_1 = r_1) = C(n − m − ∑_{j=1}^{i−1} r_j, r_i) P^{r_i} (1 − P)^{n − m − ∑_{j=1}^{i} r_j},
where 0 ≤ r_i ≤ n − m − ∑_{j=1}^{i−1} r_j. Moreover, suppose that R_i is independent of T_{i:m:n} for all i. Then the joint likelihood function can be derived as L(t_{i:m:n}, Ω) = L_1(t_{i:m:n}, Ω) Pr(R = r), where Pr(R = r) = Pr(R_1 = r_1, R_2 = r_2, . . . , R_{m−1} = r_{m−1}). The MLE of P can be quickly calculated by maximizing Equation (8); that is, by solving the equation below.
∂ ln L(t_{i:m:n}, Ω)/∂P = 0. Since L_1(t_{i:m:n}, Ω) does not involve the binomial parameter P, the likelihood function under PTIICS can be written as

L_1(t_{i:m:n}, Ω) = A ∏_{i=1}^{m} f(t_{i:m:n}; Ω) [S(t_{i:m:n}; Ω)]^{r_i}. (9)

In MPS, the joint product-spacings function under PTIICS can be written in the same manner as follows:

S_1(t_{i:m:n}, Ω) = A ∏_{i=1}^{m+1} D_i ∏_{i=1}^{m} [S(t_{i:m:n}; Ω)]^{r_i}, (10)

where A is a constant that does not depend on the parameters and D_i = F(t_{i:m:n}; Ω) − F(t_{i−1:m:n}; Ω); i = 2, . . . , m, with D_1 = F(t_{1:m:n}; Ω) and D_{m+1} = 1 − F(t_{m:m:n}; Ω).
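The binomial-removal mechanism and the generation of a progressive type-II censored sample can be sketched as follows in stdlib Python. The function names are ours, and the generation step follows the standard Balakrishnan-Sandhu algorithm cited in the simulation study; a quantile function for the target distribution is assumed to be supplied.

```python
import random

def binom(n, p):
    # Simple binomial draw using only the stdlib
    return sum(random.random() < p for _ in range(n))

def binomial_removals(n, m, p):
    """Draw R_1,...,R_m with R_i ~ Binomial(n - m - sum of previous r's, P);
    all units still on test are removed at the m-th failure."""
    rs, removed = [], 0
    for _ in range(m - 1):
        r = binom(n - m - removed, p)
        rs.append(r)
        removed += r
    rs.append(n - m - removed)  # R_m: everything left is removed
    return rs

def progressive_sample(n, m, rs, quantile):
    """Balakrishnan-Sandhu algorithm: generate the uniform progressive
    type-II order statistics, then invert the target cdf."""
    u = [random.random() for _ in range(m)]
    # V_i = U_i ** (1 / (i + R_m + ... + R_{m-i+1}))
    v = []
    for i in range(1, m + 1):
        expo = i + sum(rs[m - i:])
        v.append(u[i - 1] ** (1.0 / expo))
    # U_{i:m:n} = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    us, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= v[m - i]
        us.append(1.0 - prod)
    return [quantile(x) for x in us]
```

Passing the WE quantile function as `quantile` produces a WE progressively type-II censored sample with binomial removals.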

The MLE, MPS, and Bayesian Estimation Methods under PTIICS
The MLE, MPS, and Bayesian estimation methods of WE distribution parameters based on PTIICS data with binomial random removal are discussed in this section.

MLE Method
Using Equation (9), the likelihood function for the WE distribution based on PTIICS can be written as follows:

L_1(t_{i:m:n}, Ω) = A ∏_{i=1}^{m} δβ(t_{i:m:n}/λ)^{β−1} e^{(t_{i:m:n}/λ)^β} exp[(1 + r_i)λδ(1 − e^{(t_{i:m:n}/λ)^β})], (11)

where A is a constant that does not depend on the parameters.
The natural logarithm of the likelihood function can be obtained as follows:

ln L_1(t_{i:m:n}, Ω) = ln A + m ln(δβ) + (β − 1) ∑_{i=1}^{m} ln(t_{i:m:n}/λ) + ∑_{i=1}^{m} (t_{i:m:n}/λ)^β + λδ ∑_{i=1}^{m} (1 + r_i)(1 − e^{(t_{i:m:n}/λ)^β}). (12)

The partial derivatives of Equation (12) are given as follows:

∂ ln L_1(t_{i:m:n}, Ω)/∂β = m/β + ∑_{i=1}^{m} ln(t_{i:m:n}/λ) + ∑_{i=1}^{m} (t_{i:m:n}/λ)^β ln(t_{i:m:n}/λ) − λδ ∑_{i=1}^{m} (1 + r_i)(t_{i:m:n}/λ)^β e^{(t_{i:m:n}/λ)^β} ln(t_{i:m:n}/λ), (13)

∂ ln L_1(t_{i:m:n}, Ω)/∂δ = m/δ + λ ∑_{i=1}^{m} (1 + r_i)(1 − e^{(t_{i:m:n}/λ)^β}), (14)

and

∂ ln L_1(t_{i:m:n}, Ω)/∂λ = −m(β − 1)/λ − (β/λ) ∑_{i=1}^{m} (t_{i:m:n}/λ)^β + δ ∑_{i=1}^{m} (1 + r_i)(1 − e^{(t_{i:m:n}/λ)^β}) + δβ ∑_{i=1}^{m} (1 + r_i)(t_{i:m:n}/λ)^β e^{(t_{i:m:n}/λ)^β}. (15)

The MLE has no tractable closed form, since Equations (13)-(15) are difficult to solve analytically. However, using statistical software, they can be solved by standard optimization algorithms, such as the Newton-Raphson, Nelder-Mead, or quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms.
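Since the likelihood equations have no closed-form solution, the log-likelihood is maximized numerically. The stdlib-Python sketch below codes −ln L_1 from Equation (11), dropping the constant A, and uses a crude coordinate search purely as a stand-in for Newton-Raphson or BFGS; all names are ours.

```python
import math

def neg_loglik(params, ts, rs):
    """Negative PTIICS log-likelihood of the WE model (constant dropped).
    ts: ordered observed failure times; rs: removals at each failure."""
    beta, delta, lam = params
    if min(beta, delta, lam) <= 0:
        return float("inf")
    ll = 0.0
    for t, r in zip(ts, rs):
        x = (t / lam) ** beta
        log_sf = lam * delta * (1.0 - math.exp(x))          # ln S(t)
        log_pdf = (math.log(delta * beta)
                   + (beta - 1.0) * math.log(t / lam) + x + log_sf)
        ll += log_pdf + r * log_sf                          # ln f + r*ln S
    return -ll

def fit(ts, rs, start=(1.0, 1.0, 1.0), steps=200):
    """Coordinate search: try +/- moves on each parameter, halving the
    step when no move improves the objective."""
    best = list(start)
    best_val = neg_loglik(best, ts, rs)
    step = 0.5
    for _ in range(steps):
        improved = False
        for j in range(3):
            for d in (step, -step):
                cand = best[:]
                cand[j] += d
                val = neg_loglik(cand, ts, rs)
                if val < best_val:
                    best, best_val, improved = cand, val, True
        if not improved:
            step *= 0.5
            if step < 1e-8:
                break
    return best
```

In practice one would hand `neg_loglik` to a production optimizer; the search above merely illustrates that the objective is straightforward to evaluate.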

MPS Method
Using Equation (9), the MPS function of the WE distribution based on PTIICS can be written as in Equation (16), where A is a constant that does not depend on the parameters. The natural logarithm of the product-spacing function is given in Equation (17). Let s(Ω) = ln S_1(t_{i:m:n}, Ω); then the partial derivatives of Equation (17) by the MPS method are given by Equations (18)-(20), where H_i(β, λ) = e^{(t_{i:m:n}/λ)^β}. Using the Newton-Raphson algorithm, the MPS estimates of the WE distribution parameters can be obtained.
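For concreteness, the log product-spacings objective can be sketched as below (our naming): it forms the log-spacings of the WE cdf plus the r_i-weighted survival terms, and can be maximized with the same optimizers as the likelihood.

```python
import math

def we_cdf(t, beta, delta, lam):
    """WE cumulative distribution function."""
    return 1.0 - math.exp(lam * delta * (1.0 - math.exp((t / lam) ** beta)))

def log_mps(params, ts, rs):
    """Log product-spacings objective under PTIICS (constant A omitted):
    spacings between consecutive censored order statistics, with the
    censoring entering through the r_i survival factors."""
    beta, delta, lam = params
    # D_1 = F(t_1), D_i = F(t_i) - F(t_{i-1}), D_{m+1} = 1 - F(t_m)
    fs = [0.0] + [we_cdf(t, beta, delta, lam) for t in ts] + [1.0]
    out = 0.0
    for i in range(1, len(fs)):
        out += math.log(max(fs[i] - fs[i - 1], 1e-300))
    for t, r in zip(ts, rs):
        out += r * math.log(max(1.0 - we_cdf(t, beta, delta, lam), 1e-300))
    return out
```

The `max(..., 1e-300)` guards merely keep the logarithms finite when a spacing underflows during optimization.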

Bayesian Estimation
We consider the Bayesian approach in this section for estimating the WE distribution parameters based on PTIICS, under the assumption that the parameters Ω = (β, λ, δ) have independent gamma prior distributions: β ∼ Gamma(a_1, b_1), λ ∼ Gamma(a_2, b_2), and δ ∼ Gamma(a_3, b_3). Then, the joint prior density of β, λ, and δ can be written as in Equation (21). The posterior density is proportional to the product of the likelihood in Equation (11) and the joint prior density in Equation (21): Π(Ω|t_{i:m:n}) ∝ L_1(t_{i:m:n}, Ω)π(Ω). The squared-error (SE) loss function, which is the symmetric loss function used here, is defined by L(Ω, Ω̂) = (Ω̂ − Ω)². Under the SE loss function, the Bayes estimator of a parameter is the posterior mean. Therefore, the Bayesian estimators of the parameters Ω under SE loss, say Ω̂, are obtained as the posterior means from the corresponding conditional posterior densities. It is analytically very difficult to evaluate these integrals, so the MCMC method is used. Gibbs sampling and, more generally, Metropolis-within-Gibbs samplers are an important sub-class of MCMC techniques. Such algorithms were first developed by Metropolis et al. (1953) [28] and Hastings (1970) [29].
The Metropolis-Hastings (M-H) algorithm is, along with Gibbs sampling, one of the two most popular examples of the MCMC method. The M-H algorithm is similar to acceptance-rejection sampling in that, at each iteration of the algorithm, a candidate value is drawn from a proposal distribution, here taken to be a normal distribution.
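A bare-bones random-walk M-H sampler, as described above, can be sketched as follows (stdlib Python, our naming); `log_post` stands for the unnormalized log-posterior, i.e. the WE log-likelihood plus the gamma log-priors.

```python
import math
import random

def metropolis_hastings(log_post, start, n_iter=2000, scale=0.1, seed=0):
    """Random-walk M-H: propose cand ~ Normal(current, scale) in each
    coordinate, accept with probability min(1, post(cand)/post(cur))."""
    rng = random.Random(seed)
    chain, cur = [], list(start)
    cur_lp = log_post(cur)
    for _ in range(n_iter):
        cand = [c + rng.gauss(0.0, scale) for c in cur]
        cand_lp = log_post(cand)
        if math.log(rng.random() + 1e-300) < cand_lp - cur_lp:
            cur, cur_lp = cand, cand_lp
        chain.append(list(cur))
    return chain
```

Under the SE loss function, the Bayes estimates are then the means of the retained chain values after discarding a burn-in period.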

Simulation Study
For estimating the parameters of the WE lifetime distribution under PTIICS, Monte Carlo simulation was used to compare the MLE, MPS, and Bayesian estimation methods. The data were generated from the WE distribution using Equation (5), where t follows the WE distribution, for the following sets of true parameters:
(1) Case I: (β = 0.5, λ = 0.5, δ = 0.5). See Table 1.
(2) Case II: (β = 0.5, λ = 3, δ = 0.5). See Table 2.
(3) Case III: (β = 1.5, λ = 3, δ = 2). See Table 3.
After generating samples from the WE distribution with different sample sizes n of 40, 80, and 150, we set the removal probability P to 0.35 and 0.85 and determined the size of the censored sample through the ratio r = m/n of 0.7 and 0.9, so that m = nr. We generated the random removals R_i of the progressive censoring from the binomial distribution described above. Balakrishnan and Sandhu [3] defined the following algorithm to generate a progressive type-II censored sample: generate m independent Uniform(0, 1) observations U_1, U_2, . . . , U_m.
We generated the WE distributed sample based on PTIICS as follows. The MLE estimators are obtained by solving Equations (13)-(15), and the MPS estimators by solving Equations (18)-(20). We use the Newton-Raphson algorithm, discussed by Berndt et al. (1974) [30], to find the optimal solution of the MLE. To find the roots of the estimating equations of the WE distribution based on PTIICS by the MLE and MPS methods, we use the following algorithm:
(1) Start with initial values Ω_l^{⟨0⟩} near the true values, Ω = (β, λ, δ), l = 1, 2, 3, satisfying Ω_l^{⟨0⟩} > 0.
(2) Form the Jacobian matrix of the function vector f evaluated at Ω^{⟨j⟩}.

(3) Improve the root iteratively as Ω^{⟨j+1⟩} = Ω^{⟨j⟩} − J^{−1}(Ω^{⟨j⟩}) f(Ω^{⟨j⟩}).
(4) Repeat these steps M = 10,000 times to obtain the estimators of Ω_l.
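Steps (1)-(4) can be sketched generically in stdlib Python (our naming): a finite-difference Jacobian stands in for the analytic one, and a small Gaussian elimination solves the Newton system.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            fct = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fct * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def newton_raphson(f, x0, tol=1e-10, max_iter=100, h=1e-6):
    """Steps (1)-(4): start at x0, build a finite-difference Jacobian,
    update x <- x - J^{-1} f(x), and stop when f(x) is near zero."""
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        n = len(x)
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = x[:]
            xp[j] += h
            fp = f(xp)
            for i in range(n):
                J[i][j] = (fp[i] - fx[i]) / h
        step = solve(J, fx)
        x = [xi - si for xi, si in zip(x, step)]
        if max(abs(v) for v in f(x)) < tol:
            break
    return x

# Example: solve x^2 = 4, x*y = 6 (roots x = 2, y = 3)
root = newton_raphson(lambda v: [v[0] ** 2 - 4.0, v[0] * v[1] - 6.0], [1.0, 1.0])
```

Applying the same routine to the score functions of Equations (13)-(15) or (18)-(20) gives the MLE and MPS roots, respectively.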
For the Bayesian estimates, the M-H algorithm chooses a candidate point Ω* from the proposal q(Ω*), taken to be a normal distribution with mean equal to the current value of Ω.
The simulation results in Tables 1-3 can be summarized as follows. At fixed values of P and r, the bias of the three estimation methods decreases as n increases.
The most accurate method is the MLE, as it attains the minimum mean squared error (MSE).

As r, n, and P increase, the bias decreases for the MLE, MPS, and Bayesian estimates.
The Bayesian estimation method is the most accurate, as it has a smaller MSE than the other methods.


The bias decreases as the sample size increases, and the most accurate method is the Bayesian estimation method.
We find that the Bayesian estimators are more reliable than the MLE and MPS estimators in the vast majority of cases.

Transformer Insulation Application
A real dataset is analyzed in this section to illustrate the proposed model and the methods of the preceding sections. In addition, this dataset is used to demonstrate that the WE distribution based on the PTIICS sample can be a potential alternative to commonly known distributions, such as the extended odd Weibull exponential (EOW) distribution, discussed under the PTIICS sample by Alshenawy et al. (2020) [14]; the exponential Lomax (EL) distribution, introduced by El-Bassiouny et al. (2015) [31]; the Weibull Lomax (WL) distribution, introduced by Tahir et al. (2015) [32]; and the Odds Exponential-Pareto IV (OEPIV) distribution, introduced by Baharith et al. (2020) [33].
In the following subsection, we explain how to conduct a goodness-of-fit test for the transformer insulation data and the proposed WE distribution based on PTIICS. The empirical cdf, the histogram with the fitted pdf, and the P-P plots are displayed in Figure 2.

Modified Kolmogorov-Smirnov Algorithm for Censored Data Fitting
We must use a modified Kolmogorov-Smirnov (KS) goodness-of-fit test, since the data are progressively type-II censored rather than complete. Pakyari and Balakrishnan (2012) [35] originally produced the modified KS (MKS) statistic for PTIICS data. The procedure consists of the following steps:
(1) Estimate the parameters of the WE distribution based on PTIICS.
(2) Compute the MKS test statistic for the fitted distribution.
(3) Using the MKS test statistic and the sample size, calculate the p-value of the test.
The results of the MKS test for the WE distribution and the commonly known EOW, WL, EL, and OEPIV distributions, based on the MLE results, are summarized in Table 4. According to the MKS test with its p-value, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the corrected AIC (CAIC), and the Hannan-Quinn information criterion (HQIC), we conclude that the best model for fitting the transformer insulation data is the WE distribution based on PTIICS: it has the smallest values of the MKS statistic, AIC, BIC, CAIC, and HQIC, and the largest p-value.
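For reference, the information criteria reported in Table 4 follow their standard definitions; the small helper below (our naming) computes them from a model's maximized log-likelihood, its number of parameters k, and the sample size n.

```python
import math

def criteria(loglik, k, n):
    """Standard information criteria from a maximized log-likelihood."""
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * math.log(n)
    caic = aic + 2.0 * k * (k + 1) / (n - k - 1)          # small-sample AIC
    hqic = -2.0 * loglik + 2.0 * k * math.log(math.log(n))
    return {"AIC": aic, "BIC": bic, "CAIC": caic, "HQIC": hqic}
```

Smaller values indicate a better trade-off between fit and model complexity, which is how the WE model is preferred over EOW, EL, WL, and OEPIV in Table 4.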
Using the MLE, MPS, and Bayesian methods, Table 5 shows the parameter estimates and their standard errors (SEs) for the WE distribution based on PTIICS. We conclude that the Bayesian estimation method is the best. Figure 3 shows the trace (history) plots, the estimated marginal posterior densities, and the MCMC convergence of β, λ, and δ.

Conclusions
In this article, the Bayesian, MPS, and MLE methods were adopted for estimating the WE distribution parameters under a progressive type-II censored sample with binomial removals. Simulations were used to investigate the performance of the three proposed estimators for various parameter values and sample sizes. The Newton-Raphson and Metropolis-Hastings algorithms were employed for the non-Bayesian and Bayesian estimation methods, respectively. Furthermore, the simulation results were used to investigate the effects of sample size, failure size, and removal probability P on estimation accuracy. We may infer from our research that the Bayesian estimation method outperforms the MLE and MPS methods in estimating the WE parameters under PTIICS with random removal. Finally, to illustrate the methods of inference discussed in the paper, a real transformer insulation dataset from the engineering field was investigated.

Data Availability Statement: The data are available in this paper.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.
