Article

Classical Versus Bayesian Error-Controlled Sampling Under Lognormal Distributions with Type II Censoring

School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(5), 477; https://doi.org/10.3390/e27050477
Submission received: 9 March 2025 / Revised: 11 April 2025 / Accepted: 25 April 2025 / Published: 28 April 2025
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

This paper presents a comparative study of classical and Bayesian risks in the design of optimal failure-censored sampling plans for lognormal lifetime models. The analysis focuses on how variations in prior distributions, specifically the beta distribution for defect rates, influence the producer’s and consumer’s risks, along with the optimal sample size. We explore the sensitivity of the sampling plan’s risks to changes in the prior mean and variance, offering insight into the impacts of uncertainty in prior knowledge on sampling efficiency. Classical and Bayesian approaches are evaluated, highlighting the trade-offs between minimizing sample size and controlling risks for both the producer and the consumer. The results demonstrate that Bayesian methods generally provide more robust designs under uncertain prior information, while classical methods exhibit greater sensitivity to parameter changes. A computational procedure for determining the optimal sampling plans is provided, and the outcomes are validated through simulations, showcasing the practical implications for quality control in reliability testing and industrial applications.

1. Introduction

Type II censoring is a commonly used censoring scheme, especially in reliability testing and survival analysis. Under Type II censoring, the experiment is terminated once a predetermined number of failures has been observed, rather than at a fixed time. In experiments with time or cost constraints, this provides an efficient and resource-saving experimental design: by fixing the number of observed failures in advance, valid statistical inferences can be obtained within limited resources while maintaining reliability and accuracy. The topic has been studied extensively since the 1950s. Parameter estimation methods for various distributions under Type II censoring were collected in [1], providing significant support to later researchers. In recent years, several studies have explored this topic [2,3,4,5,6,7,8].
Failure-censored sampling plans aim to estimate the reliability parameters of a population by sampling product lifetimes or failure times. This problem has been extensively studied, particularly in the development of lifetime models and the optimization of sampling design methods. The earliest research on failure-censored sampling was introduced in [9], laying the foundation for subsequent advancements in this area. In recent years, further research has been conducted on sampling plans for different lifetime models. For instance, reliability design methods with double censoring, applicable to certain two-parameter distributions, were introduced in [10], extending the applicability of failure-censored sampling. Moreover, in [11], sampling schemes optimized for lifetime distributions characterized by log-location–scale parameters were explored, leading to the development of design approaches suitable for more sophisticated lifetime models. Additionally, various computational methods and optimization strategies for sampling design under the Weibull distribution were proposed in [12], particularly for designing efficient sampling plans in higher-dimensional and more complex scenarios. Furthermore, the determination of the optimal reliability acceptance sampling plan under cost constraints within the hybrid censoring framework was explored in [13], addressing the challenge of balancing reliability and cost efficiency. In [14], an acceptance sampling scheme utilizing the lifetime performance index was proposed for exponential populations with and without censoring. The method achieved similar performance compared to the approximation approach based on full-order observed exponential data. Similarly, in [15], an optimal reliability acceptance sampling plan was designed using the Type I generalized hybrid censoring scheme for non-repairable products, while a cost function approach was introduced for products with Weibull-distributed lifetimes. In addition, ref. [16] introduced two innovative variable reliability acceptance sampling plans, namely, repetitive group sampling and resubmitted sampling, both of which are suitable for reliability tests under failure-censoring. Building on this progress, a variable multiple-dependent-state sampling scheme based on the Weibull distribution with Type II right censoring was proposed in [17], incorporating the lifetime performance index. Compared to the single-variable sampling scheme, the proposed approach demonstrates higher cost efficiency and greater discriminatory power. In summary, failure-censored sampling plans require significantly fewer samples than attribute sampling plans, while also considerably reducing test time compared to complete variable sampling schemes. This makes failure-censored sampling plans highly valuable for reliability testing and quality control under resource constraints.
Traditional acceptance sampling typically assumes that the proportion of nonconforming items, denoted as p, is a fixed value for each production batch. However, in practical applications, the true value of p is usually unknown prior to inspection, and it may vary across different batches due to differences in materials, processing conditions, or environmental factors. If this uncertainty is ignored, classical methods that assume a known p may lead to suboptimal or misaligned sampling plans. To address this issue, Bayesian methods have been introduced. In the Bayesian framework, p is still assumed to be fixed for a given batch, but treated as a random variable to represent our uncertainty about its true value. A Beta distribution is commonly used as the prior for p, due to its flexibility on the interval [0,1] and its conjugate relationship with the binomial distribution, which simplifies posterior computation. The application of Beta priors for optimizing sampling schemes was first proposed in [18] and has since provided a foundational framework for Bayesian-based acceptance sampling.
In recent years, Bayesian sampling has continued to receive widespread attention and extensive research. For instance, in [19], the application of Bayesian sampling in acceptance sampling was significantly expanded by exploring various prior distributions for the defective rate p, which not only enriched the theoretical framework but also enhanced practical implementations. In [20], a systematic comparison of conventional and Bayesian sampling methods was conducted. This study provided an in-depth evaluation of both consumer and producer risks while examining the sensitivity of prior distribution parameter variations. These findings contributed to a more comprehensive understanding of Bayesian sampling performance and its practical implications. As research in this field progressed, in [21], a novel acceptance sampling plan was introduced to determine whether the received lot met the predefined acceptance criteria. This approach integrated a cost objective function that accounted for potential inspection errors. Additionally, Bayesian inference was applied to refine the probability distribution function of the nonconforming proportion, whereas a backward recursive approach was employed to evaluate terminal costs and derive optimal decisions. Expanding Bayesian sampling methodologies further, in [22], two Bayesian accelerated acceptance sampling plans were proposed for a lognormal lifetime distribution under Type I censoring. The first plan incorporated risk considerations for both producers and consumers, while the second exclusively focused on consumer risk. Moreover, a sensitivity analysis was conducted to assess the impact of prior distribution selection, thereby enhancing the robustness and applicability of these models. In [23], a Bayesian reliability sampling plan was presented for the Weibull distribution under progressively Type II censoring. The study identified the optimal strategy by analyzing sample sizes, recorded failure counts, binomial-based removal probabilities, and the minimum reliability threshold. This refinement further advanced Bayesian reliability sampling techniques, making them more adaptable to real-world reliability assessments. Most recently, in [24], a Bayesian double-group sampling plan was introduced to estimate the average number of nonconforming products. By incorporating Bayesian inference principles, this approach contributed to the growing body of research focused on improving the efficiency and accuracy of Bayesian-based acceptance sampling strategies.
This paper analyzes the average and posterior risks in designing optimal failure-censored sampling plans for lognormal lifetime models, where the defect rate p follows a beta distribution. In Section 2, we derive the operating characteristic function for the lognormal distribution under Type II censoring, which is used to evaluate the acceptance probability of products based on the lognormal lifetime distribution. Through large-sample approximations, we estimate the impact of censoring rate and defect probability on acceptance decisions, providing a theoretical foundation for optimizing sampling plans. In Section 3, we first introduce the producer’s risk and consumer’s risk under both classical and Bayesian sampling frameworks. Then, we present a computational procedure for determining the optimal sample size and decision threshold. Finally, we analyze the properties of prior distributions under different parameter settings and explore their influence on the optimization of sampling schemes. In Section 4, we compute the optimal sample sizes corresponding to different prior distribution parameters and conduct a sensitivity analysis on both classical and Bayesian sampling methods. Based on these results, we identify the optimal sampling scheme for specific prior distributions, balancing sample size and risk control. In Section 5, we synthesize the graphs and simulation results obtained in Section 4 to draw conclusions. We further provide recommendations for producers and consumers regarding the application of optimal sampling schemes. The findings contribute to reliability testing and quality control, aiding in the development of more robust sampling strategies.

2. Operating Characteristic Function

The lognormal distribution is one of the most widely used lifetime models in survival analysis, reliability analysis, and sampling inspection. In reliability testing, suppose the lifetime T of a batch of electronic components follows a two-parameter lognormal distribution. The logarithm of the lifetime, X = log(T), then follows a normal distribution with unknown location parameter μ and scale parameter σ. In this article, we primarily work with the logarithmic lifetime X.
The cumulative distribution function (CDF), probability density function (PDF), and survival function (SF) of the random variable X are given by:
F(x; μ, σ) = Φ((x − μ)/σ),
f(x; μ, σ) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²)),
S(x; μ, σ) = 1 − F(x; μ, σ) = 1 − Φ((x − μ)/σ).
In the following, Φ(·) denotes the standard normal distribution function and φ(·) the standard normal density function.
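As a quick numerical illustration, the three functions can be evaluated directly on the log scale; the following is a minimal sketch assuming SciPy is available (the helper name is ours):

```python
# Sketch: CDF, PDF, and survival function of the logarithmic lifetime X = log(T),
# assuming X ~ N(mu, sigma^2). Function and argument names are illustrative.
import numpy as np
from scipy.stats import norm

def log_lifetime_cdf_pdf_sf(x, mu, sigma):
    """Return F(x), f(x), S(x) for X = log(T) ~ N(mu, sigma^2)."""
    z = (np.asarray(x, dtype=float) - mu) / sigma
    F = norm.cdf(z)            # F(x; mu, sigma) = Phi((x - mu)/sigma)
    f = norm.pdf(z) / sigma    # f(x; mu, sigma) = phi((x - mu)/sigma) / sigma
    S = norm.sf(z)             # S(x; mu, sigma) = 1 - Phi((x - mu)/sigma)
    return F, f, S

# Example: a log-lifetime of 4.0 for a component with mu = 5.0 and sigma = 0.8
print(log_lifetime_cdf_pdf_sf(4.0, mu=5.0, sigma=0.8))
```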
Given a total sample size of n, the test is terminated upon the failure of the m-th component. At this point, the logarithmic lifetimes X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(m) are recorded, while the remaining n − m components have not yet failed and are considered censored, with the censoring rate defined as q = 1 − m/n, the proportion of units that are censored because they have not failed by the time the test ends. This parameter will be crucial in later derivations. Consequently, only the first m order statistics X_(1), …, X_(m) are available, and it is known that n − m units survive beyond X_(m).
When the X_i are independently and identically distributed as N(μ, σ²) and only the first m order statistics X_(1), …, X_(m) are available, their joint density function is:
f(x_(1), …, x_(m); μ, σ) = [n!/(n − m)!] ∏_{i=1}^{m} f_X(x_(i)) · [S_X(x_(m))]^{n − m}.
Here, the factor n!/(n − m)! arises from the joint density of order statistics and counts the number of ways the m observed failure times can arise among the n units on test, with the remaining n − m units being censored.
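The censored log-likelihood implied by this joint density is what the maximum likelihood estimates used later maximize. A minimal sketch (our own helper, assuming SciPy is available):

```python
# Sketch: log-likelihood of a Type II censored normal sample (first m order
# statistics observed, n - m units censored at x_(m)). Helper names are ours.
import numpy as np
from scipy.stats import norm
from scipy.special import gammaln

def type2_censored_loglik(params, x_obs, n):
    """params = (mu, sigma); x_obs = the m smallest (log-)lifetimes, sorted."""
    mu, sigma = params
    if sigma <= 0:
        return -np.inf
    x_obs = np.sort(np.asarray(x_obs, dtype=float))
    m = len(x_obs)
    z = (x_obs - mu) / sigma
    const = gammaln(n + 1) - gammaln(n - m + 1)               # log n!/(n - m)!
    ll_obs = np.sum(norm.logpdf(z)) - m * np.log(sigma)       # observed failures
    ll_cens = (n - m) * norm.logsf((x_obs[-1] - mu) / sigma)  # units censored at x_(m)
    return const + ll_obs + ll_cens
```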
The lifetime variable is assumed to follow a lognormal distribution, which is a log-location–scale lifetime model. Therefore, we apply a logarithmic transformation to the data and perform all subsequent analysis on the logarithmic scale, including the logarithmic lifetime X and the lower limit ℓ. Under this transformation, the lifetime variable becomes normally distributed, allowing us to formulate the model within the framework of the normal distribution. The transformation does not alter the acceptance decision for the products, and it also enables us to take advantage of the well-established theoretical results for Type II censoring under the normal distribution.
For an electronic component to meet quality standards, its lifetime must exceed a given lower limit ℓ (on the logarithmic scale). If X < ℓ, the product is considered defective, commonly referred to as a “bad value” in quality control. The defect rate p is defined as:
Pr(X < ℓ) = p.
Since X ∼ N(μ, σ²), we have Pr(X < ℓ) = Φ((ℓ − μ)/σ) = p; hence, solving for ℓ yields:
ℓ = μ + σ Φ⁻¹(p).
A batch of products is accepted if the maximum likelihood estimates (MLEs) μ̂ and σ̂ under Type II censoring satisfy:
μ̂ − k σ̂ > ℓ,
where k is a predetermined constant. Substituting ℓ = μ + σΦ⁻¹(p), we obtain (μ̂ − μ)/σ − k σ̂/σ > Φ⁻¹(p). Defining the pivotal quantities Y1 = (μ̂ − μ)/σ and Y2 = σ̂/σ, the inequality simplifies to Y1 − kY2 > Φ⁻¹(p). Thus, the operating characteristic (OC) curve is defined as:
L(p) = Pr(μ̂ − kσ̂ > ℓ) = Pr(Y1 − kY2 > Φ⁻¹(p)),  0 < p < 1.
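For concreteness, the censored MLEs can be obtained by numerical optimization and the rule μ̂ − kσ̂ > ℓ applied directly; the sketch below is ours, with illustrative starting values and optimizer settings:

```python
# Sketch: compute the MLEs (mu_hat, sigma_hat) from a Type II censored sample by
# numerical optimization, then apply the acceptance rule mu_hat - k*sigma_hat > l.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def accept_lot(x_obs, n, k, lower_limit):
    x_obs = np.sort(np.asarray(x_obs, dtype=float))   # first m order statistics
    m = len(x_obs)

    def negloglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)                      # keep sigma > 0
        z = (x_obs - mu) / sigma
        ll = np.sum(norm.logpdf(z)) - m * np.log(sigma)
        ll += (n - m) * norm.logsf((x_obs[-1] - mu) / sigma)
        return -ll

    start = np.array([x_obs.mean(), np.log(x_obs.std(ddof=1))])
    res = minimize(negloglik, start, method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    return mu_hat - k * sigma_hat > lower_limit, (mu_hat, sigma_hat)
```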
As shown in [25], the asymptotic normal approximation of the OC function L ( · ) is reasonable for the lognormal case under Type II censoring:
√n (Y1, Y2 − 1)ᵀ →d N(0, Σ).
The notation →d N(0, Σ) indicates that, as the sample size n → ∞, the random vector √n (Y1, Y2 − 1)ᵀ converges in distribution to a bivariate normal distribution with zero mean and covariance matrix Σ. Here, Y1 = (μ̂ − μ)/σ represents the standardized deviation of the MLE of the location parameter, and Y2 − 1 = (σ̂ − σ)/σ denotes the relative deviation of the MLE of the scale parameter.
Here, Σ is a 2 × 2 covariance matrix with elements γ i j ( i , j = 1 , 2 ) derived from the inverse of the Fisher information matrix under Type II censoring.
Let the information matrix be:
J = [J11, J12; J12, J22],
Then:
Σ = [γ11, γ12; γ12, γ22] = J⁻¹ = (1/(J11 J22 − J12²)) · [J22, −J12; −J12, J11].
Defining Z = Y1 − k(Y2 − 1), its asymptotic distribution is given by:
√n Z →d N(0, γ11 − 2kγ12 + k²γ22).
Thus, the decision rule can be rewritten as Z > Φ⁻¹(p) + k. By asymptotic normality,
Pr(Y1 − kY2 > Φ⁻¹(p)) ≈ 1 − Φ( √n (Φ⁻¹(p) + k) / √(γ11 − 2kγ12 + k²γ22) ).
Thus, the approximate expression for the OC curve under Type II censoring is:
L(p; n, m, k) ≈ 1 − Φ(λ_{p,n,m,k}),  0 < p < 1,
where
λ_{p,n,m,k} = √n (Φ⁻¹(p) + k) / √(γ11 − 2kγ12 + k²γ22).
For (μ̂, σ̂) under Type II censoring from a normal distribution, the elements of the Fisher information matrix are the negative expected values of the second-order partial derivatives of the log-likelihood with respect to the parameters, and the asymptotic variance–covariance matrix follows from its inverse. According to [1], if X ∼ N(μ, σ²), then
Φ11(q, ξ) = 1 + Ω(q, ξ) [Q(ξ) − ξ],
Φ12(q, ξ) = Ω(q, ξ) [ξ (Q(ξ) − ξ) − 1],
Φ22(q, ξ) = 2 + ξ Φ12(q, ξ).
In Type II censoring, the truncation point is denoted by w, and the standardized truncation point ξ is defined by ξ = (w − μ)/σ. The function Q(x) is given by Q(x) = φ(x)/(1 − Φ(x)), and Ω(h, ξ) is defined as Ω(h, ξ) = [h/(1 − h)] Q(ξ).
We have:
γ11 = Var(μ̂) = (σ²/n) μ11,  γ22 = Var(σ̂) = (σ²/n) μ22,
γ12 = γ21 = Cov(μ̂, σ̂) = (σ²/n) μ12.
Here, μ11, μ12, and μ22 are functions of Φ11, Φ12, and Φ22, the elements of the information matrix. In the case of Type II censoring with a total sample size of n and a censoring rate of q, μ11, μ12, and μ22 are computed as follows:
μ11 = (n/m) · Φ22 / (Φ11 Φ22 − Φ12²),
μ22 = (n/m) · Φ11 / (Φ11 Φ22 − Φ12²),
μ12 = −(n/m) · Φ12 / (Φ11 Φ22 − Φ12²).
In summary, Φ11, Φ12, and Φ22 determine the information matrix and, through its inverse, the elements γij of the covariance matrix Σ.
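Putting the pieces together, the approximate OC curve L(p; n, m, k) can be evaluated from q, ξ, and k alone. The sketch below is our own code, not the authors' implementation: it uses the Φij expressions above and interprets the quantities entering λ as the standardized variance factors μ11, μ12, μ22; all function and variable names are illustrative.

```python
# Sketch: large-sample OC curve L(p; n, m, k) for the Type II censored normal
# (log-lifetime) model. The g_ij below are the standardized asymptotic variance
# factors of (Y1, Y2 - 1); all names are ours.
import numpy as np
from scipy.stats import norm

def oc_curve(p, n, m, k):
    q = 1.0 - m / n                          # censoring rate
    xi = norm.ppf(1.0 - q)                   # standardized truncation point
    Q = norm.pdf(xi) / norm.sf(xi)           # Q(xi) = phi(xi) / (1 - Phi(xi))
    Om = q / (1.0 - q) * Q                   # Omega(q, xi)

    Phi11 = 1.0 + Om * (Q - xi)
    Phi12 = Om * (xi * (Q - xi) - 1.0)
    Phi22 = 2.0 + xi * Phi12
    det = Phi11 * Phi22 - Phi12 ** 2

    g11 = (n / m) * Phi22 / det              # standardized variance factor of Y1
    g22 = (n / m) * Phi11 / det              # standardized variance factor of Y2
    g12 = -(n / m) * Phi12 / det             # standardized covariance factor

    lam = np.sqrt(n) * (norm.ppf(p) + k) / np.sqrt(g11 - 2 * k * g12 + k ** 2 * g22)
    return 1.0 - norm.cdf(lam)               # L(p; n, m, k)

# Example: the 50%-censored AR design reported later, (n, m, k) = (26, 13, 1.54779)
p_grid = np.linspace(0.005, 0.20, 40)
print(oc_curve(p_grid, 26, 13, 1.54779))
```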
In practical applications, the probability bounds for the product failure rate p are typically determined according to industrial standards. These bounds define the region over which the producer’s and consumer’s risks are assessed, and they further guide the determination of the sample size n, the number of failures m, and the predetermined constant k for the test statistic. A more detailed mathematical formulation of this design process is provided in Section 3.

3. Risk Criteria and Prior Models

In reliability testing and quality control, both producers and consumers need to jointly establish quality standards based on the defect rate p to ensure that the production process controls the outflow of nonconforming products while not being overly stringent on conforming products. The acceptable quality level and the rejectable quality level are two key indicators, representing the highest defect rate p_τ1 acceptable to the producer and the lowest defect rate p_τ2 unacceptable to the consumer, respectively, where p_τ2 > p_τ1.
In quality inspection, sampling tests determine whether a production batch meets the required quality standards. If a batch satisfies the acceptance criteria, it is considered accepted; otherwise, it is rejected. Thus, batch acceptance indicates a passed test, while batch rejection signifies a failed test.
Based on this, the quality inspection process involves two main risks: the producer's risk (PR) refers to the probability that a conforming batch (p ≤ p_τ1) is rejected due to sampling error, with upper bound τ1; the consumer's risk (CR) refers to the probability that a nonconforming batch (p ≥ p_τ2) is accepted due to sampling error, with upper bound τ2.
The goal of the sampling plan is to determine the optimal scheme ( n , m , k ) (i.e., minimizing sample size) such that:
PR(n, m, k) ≤ τ1,  CR(n, m, k) ≤ τ2.
This ensures that both producer and consumer risks are controlled, balancing the acceptance rate of conforming products and the rejection rate of nonconforming products while maintaining a reasonable sample size and testing cost. A censoring mechanism is introduced to reduce testing time or cost, and prior distributions are further incorporated to optimize decision-making. Here, h 1 ( · ) and h 2 ( · ) represent the prior probability density functions of p for the producer and consumer, respectively, with the corresponding cumulative distribution functions H 1 ( · ) and H 2 ( · ) . Bayesian methods can further optimize the sampling strategy, making decisions statistically more rational.

3.1. Classical and Bayesian Risks

Different risk criteria guide the selection of the optimal sampling plan ( n , m , k ) . Given that the probability of batch acceptance is expressed as Pr ( batch is accepted ) = L ( p ; n , m , k ) , the classical average risks are defined as follows:
  • Average Producer’s Risk (APR)
    APR = Pr(batch is rejected | p ≤ p_τ1) = E_{h1}[1 − L(p) | p ≤ p_τ1]
        = 1 − (1/H1(p_τ1)) ∫_0^{p_τ1} L(p) h1(p) dp,
  • Average Consumer’s Risk (ACR)
    ACR = Pr(batch is accepted | p ≥ p_τ2) = E_{h2}[L(p) | p ≥ p_τ2]
        = (1/(1 − H2(p_τ2))) ∫_{p_τ2}^{1} L(p) h2(p) dp.
Additionally, Bayesian or posterior risks are defined as follows:
  • Bayesian Producer’s Risk (BPR)
    BPR = Pr(p ≤ p_τ1 | batch is rejected) = ∫_0^{p_τ1} (1 − L(p)) h1(p) dp / ∫_0^{1} (1 − L(p)) h1(p) dp,
  • Bayesian Consumer’s Risk (BCR)
    BCR = Pr(p ≥ p_τ2 | batch is accepted) = ∫_{p_τ2}^{1} L(p) h2(p) dp / ∫_0^{1} L(p) h2(p) dp.
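All four risks reduce to one-dimensional integrals of the OC curve against the prior density, so they can be evaluated by numerical quadrature. The sketch below is ours; oc_curve stands for any implementation of L(p; n, m, k), such as the one sketched in Section 2, and the quadrature choice and names are illustrative.

```python
# Sketch: average (classical) and Bayesian (posterior) risks under a Beta prior
# for the defect rate p, computed by numerical quadrature.
from scipy.stats import beta
from scipy.integrate import quad

def risks(n, m, k, a, b, p_tau1, p_tau2, oc_curve):
    h = lambda p: beta.pdf(p, a, b)              # prior density h(p)
    L = lambda p: oc_curve(p, n, m, k)           # acceptance probability L(p)

    # Classical (average) risks
    num_apr, _ = quad(lambda p: (1 - L(p)) * h(p), 0, p_tau1)
    apr = num_apr / beta.cdf(p_tau1, a, b)       # APR = Pr(reject | p <= p_tau1)
    num_acr, _ = quad(lambda p: L(p) * h(p), p_tau2, 1)
    acr = num_acr / beta.sf(p_tau2, a, b)        # ACR = Pr(accept | p >= p_tau2)

    # Bayesian (posterior) risks
    rej_all, _ = quad(lambda p: (1 - L(p)) * h(p), 0, 1)
    acc_all, _ = quad(lambda p: L(p) * h(p), 0, 1)
    bpr = num_apr / rej_all                      # BPR = Pr(p <= p_tau1 | reject)
    bcr = num_acr / acc_all                      # BCR = Pr(p >= p_tau2 | accept)
    return apr, acr, bpr, bcr
```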

3.2. A Computational Procedure

To obtain the optimal sampling plan ( n , m , k ) that satisfies the condition in Equation (3), we solve it using Algorithm 1. In this study, we employed the particle swarm optimization algorithm to estimate the model parameters. In the initial stage, we adopted a relatively broad search range to avoid premature convergence to local optima. Based on preliminary results, the parameter bounds were then refined. The final search intervals were set as:
n ∈ [2.0001, 50.0],  k ∈ [−10, 10].
The remaining particle swarm optimization parameters were configured as follows: swarm size of 500, maximum number of iterations set to 100, inertia weight w = 0.5 , and acceleration coefficients c 1 = c 2 = 0.7 . A boundary-clipping mechanism was applied at each iteration to ensure that all particles remained within the feasible search space, thereby enhancing the stability and convergence of the algorithm.
After obtaining the optimal solution (n₀, k₀) from the equation, we take into account that the sample size n must be a positive integer and therefore round n₀ up to the nearest integer to obtain the actual sample size n. Next, we substitute n and k₀ into Equations (3)–(7) for verification: if the condition in Equation (3) is satisfied, the number of failures m (computed from the fixed censoring rate as n(1 − q)) is set to its floor; otherwise, it is set to its ceiling.
Finally, the determined value of n is substituted into the two equations from Step 2 to compute the corresponding values k τ 1 and k τ 2 , and the final value of k is taken as the average of these two. A detailed description of the procedure is provided in the algorithm below.
Algorithm 1: Computational Procedure for Optimal Sampling Plan
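The following is a minimal, generic sketch of the particle swarm search described above (swarm size 500, 100 iterations, w = 0.5, c1 = c2 = 0.7, with boundary clipping). It is not the authors' exact Algorithm 1: the toy objective at the end merely stands in for the real one, which would be the sample size n plus large penalties whenever the risk constraints are violated.

```python
# Sketch: particle swarm optimization with the settings described in Section 3.2.
import numpy as np

def pso(objective, bounds, swarm_size=500, iters=100, w=0.5, c1=0.7, c2=0.7, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, size=(swarm_size, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                          # boundary clipping
        vals = np.apply_along_axis(objective, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy stand-in objective: minimize n subject to a made-up constraint on (n, k).
toy = lambda z: z[0] + 1e3 * max(0.0, 0.05 - z[0] * 0.002 - abs(z[1]) * 0.01)
print(pso(toy, bounds=[(2.0001, 50.0), (-10.0, 10.0)]))
```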

3.3. Prior Distribution of Defect Rate

In industrial production, the defect rate p of batches of electronic components is not fixed but varies randomly due to factors such as materials, processes, and environmental conditions. The overall defect rate p is generally unobservable, but it can be estimated based on historical data or empirical prior information. Suppose the prior probability density function of p follows a beta distribution:
p ∼ Beta(a, b),
with the probability density function given by:
h(p) = p^(a−1) (1 − p)^(b−1) / B(a, b),  0 ≤ p ≤ 1,
where B ( a , b ) is the beta function. When the producer’s and consumer’s risk assessments adopt the same prior information, i.e., h 1 ( p ) = h 2 ( p ) , it indicates that both parties rely on the same historical data and quality control standards. This assumption enables producers and consumers to optimize the sampling plan within a common informational framework. In the Bayesian approach, the prior mean and prior variance of the defect rate are expressed as:
μ_p = a/(a + b),  σ_p² = ab / ((a + b)² (a + b + 1)).
Here, μ p represents the prior mean of the defect rate, influencing the overall judgment of product quality by both producers and consumers, while σ p 2 quantifies uncertainty. As a and b increase, the weight of prior information strengthens, leading to a more conservative risk assessment. Figure 1 displays several beta densities for selected values of μ p and σ p .
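Inverting these two moment equations gives a + b = μ_p(1 − μ_p)/σ_p² − 1, with a = μ_p(a + b) and b = (1 − μ_p)(a + b). A small helper (ours) makes the mapping from (μ_p, σ_p) to (a, b) explicit:

```python
# Sketch: convert a desired prior mean mu_p and standard deviation sigma_p of the
# defect rate into Beta(a, b) parameters by moment matching. Helper name is ours.
def beta_params_from_moments(mu_p, sigma_p):
    nu = mu_p * (1.0 - mu_p) / sigma_p**2 - 1.0   # nu = a + b, must be positive
    if nu <= 0:
        raise ValueError("sigma_p is too large for the given mu_p")
    return mu_p * nu, (1.0 - mu_p) * nu           # (a, b)

# Example: the "medium" prior used in Section 4, mu_p = 0.063, sigma_p = 0.054
print(beta_params_from_moments(0.063, 0.054))     # roughly (a, b) ≈ (1.21, 18.03)
```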

4. Sensitivity of Optimal Designs

In this section, we define the optimal sampling design as a triplet (n, m, k) that minimizes the sample size n while satisfying the producer's risk constraint PR(n, m, k) ≤ τ1 and the consumer's risk constraint CR(n, m, k) ≤ τ2. The computational procedure proposed in Section 3 is designed to search for such optimal designs under various assumptions on the prior distributions. We primarily investigate the impact of variations in the prior beta distribution's mean μ_p and standard deviation σ_p on producer and consumer risks and conduct a sensitivity analysis of μ_p and σ_p.
In the following sampling schemes, we consider a lognormal sampling process with a censoring rate of 50%, where τ 1 = 0.05 and τ 2 = 0.10 . Table 1 also includes simulated producer and consumer risks corresponding to the optimal design solutions. In electronic component manufacturing and quality control, the defect rate p of product batches is not fixed but fluctuates due to factors such as raw materials, process stability, and environmental conditions. Consequently, relying on a single-point estimate may lead to producer risk and consumer risk exceeding acceptable limits, affecting the effectiveness of quality management.
To better characterize the uncertainty in p, we introduce the beta distribution as a prior distribution. The prior mean μ_p represents the average defect rate in the production process, while the prior standard deviation σ_p quantifies the extent of fluctuations in the defect rate. A smaller σ_p indicates a more stable production process, whereas a larger σ_p suggests greater quality variation, necessitating a stricter inspection strategy.
Based on the [26] quality standard, we select p τ 1 = 0.0319 and p τ 2 = 0.0942 as quality control parameters and analyze the impact of different μ p and σ p values on producer and consumer risks. This ensures that even with imperfect prior information, the sampling plan can be optimized to balance the interests of both producers and consumers while enhancing the robustness of quality management.
The selected values of μ_p lie within the interval (p_τ1, p_τ2), while σ_p is proportional to the length of this interval. The prior mean values of p, set at 0.032, 0.063, and 0.094, represent “low”, “medium”, and “high” levels, respectively. The additional values of 0.048 and 0.078 allow us to better capture the trend as μ_p changes, enabling a more detailed sensitivity analysis in subsequent steps. Similarly, σ_p is chosen as 0.047, 0.054, and 0.062, representing “low”, “medium”, and “high” levels, respectively.
For the Bayesian risk scheme, when μ p = 0.032 , this value is too close to 0.0319 , making the requirements overly stringent. The resulting sample size n < 5 is unrealistically small and lacks practical significance in real-world sampling, so specific results are not provided in the table. Observing the overall trend, we can see that, under the same prior distribution, the average risk method requires a larger sample size compared to the Bayesian risk approach. This outcome aligns with our expectations.
As discussed earlier, the optimal sampling plan proposed in this study is based on the large-sample theory. To evaluate the effectiveness of these asymptotic solutions, we perform a Monte Carlo simulation study. A detailed description of the simulation methodology can be found in [11]. The general procedure is as follows: First, we generate 5000 values of p following the current prior beta distribution. Then, we generate 5000 sets of samples from a lognormal distribution, each containing n observations. Subsequently, risk simulations are performed under a censoring rate of 50%. It is evident that under the lognormal distribution, the simulated average producer risk and consumer risk closely approximate the expected risks, τ 1 = 0.05 and τ 2 = 0.10 , respectively.
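A condensed sketch of this simulation loop is given below. It is our own reconstruction of the procedure just described: the nominal parameters μ = 0 and σ = 1, the optimizer, and the reduced replicate count in the example are illustrative choices.

```python
# Sketch: Monte Carlo check of the simulated risks. For each replicate, a defect
# rate p is drawn from the Beta prior, a Type II censored sample of log-lifetimes
# is generated, the censored MLEs are computed numerically, and the lot is
# accepted when mu_hat - k*sigma_hat > l.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, beta

def simulate_risks(n, m, k, a, b, p_tau1, p_tau2, reps=5000, mu=0.0, sigma=1.0, seed=1):
    rng = np.random.default_rng(seed)
    accept = np.empty(reps, dtype=bool)
    p_draws = beta.rvs(a, b, size=reps, random_state=rng)
    for i, p in enumerate(p_draws):
        l = mu + sigma * norm.ppf(p)                     # lower limit for this p
        x = np.sort(rng.normal(mu, sigma, size=n))[:m]   # first m order statistics

        def negloglik(theta):
            mu_, log_s = theta
            s = np.exp(log_s)
            ll = np.sum(norm.logpdf((x - mu_) / s)) - m * np.log(s)
            ll += (n - m) * norm.logsf((x[-1] - mu_) / s)
            return -ll

        res = minimize(negloglik, [x.mean(), np.log(x.std(ddof=1))], method="Nelder-Mead")
        mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
        accept[i] = mu_hat - k * sigma_hat > l

    good, bad = p_draws <= p_tau1, p_draws >= p_tau2
    apr_sim = np.mean(~accept[good])                     # simulated producer's risk
    acr_sim = np.mean(accept[bad])                       # simulated consumer's risk
    return apr_sim, acr_sim

# Example: the AR design (26, 13, 1.54779) with the medium prior (a, b) ≈ (1.21, 18.03)
print(simulate_risks(26, 13, 1.54779, 1.21, 18.03, 0.0319, 0.0942, reps=500))
```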
Using the results calculated in Table 1, we plot the OC curves for the lognormal distribution under classical sampling and Bayesian sampling, with a censoring rate of 50%, and a prior distribution with mean μ p = 0.063 and standard deviation σ p = 0.054 .
As shown in Figure 2, the AR design exhibits a curve that rapidly approaches 0, demonstrating high sensitivity to variations in quality levels, particularly near the acceptance threshold, where the acceptance probability declines sharply. This behavior aligns with the characteristics of classical sampling, which relies on a fixed standard without incorporating data or prior distributions, thus responding quickly to quality fluctuations. In contrast, the BR design exhibits a curve that gradually approaches 0, with a higher initial acceptance probability and greater tolerance for uncertainty. By incorporating prior information and updating iteratively, the Bayesian method enhances tolerance for low-quality batches, resulting in a smoother OC curve.

4.1. Influence of Prior Moments on Risks

4.1.1. Average Risk Design and Bayesian Risk Design

This study considers the optimal average risk sampling plan under given specifications, with prior μ p = 0.063 and σ p = 0.054 . By modifying the values of μ p and σ p , we illustrate their impact on design risks. To this end, Figure 3 presents the sensitivity of the average risk when μ p and σ p vary simultaneously, while Figure 4 provides the corresponding contour plots.
Clearly, the producer’s risk is highly sensitive to changes in both σ p and μ p . In the lower range of σ p , the APR initially exceeds the threshold τ 1 and gradually decreases towards τ 1 as σ p increases. When μ p is high, the reduction in APR is less pronounced, while at lower μ p values, the change is more significant. Furthermore, in the medium to high range of σ p and in lower values of μ p , APR decreases significantly. However, when σ p is high and μ p is low, the decrease in APR slows down.
The consumer risk is similarly sensitive to variations in σ p and μ p . In the lower range of σ p , ACR remains at a relatively high level and decreases as σ p increases. When μ p is higher, the reduction in ACR is more gradual; however, in the lower μ p range, the decline in ACR is more pronounced. From a three-dimensional perspective, ACR exhibits a clear downward trend as σ p increases, particularly in the low to moderate range μ p . In the high σ p and μ p regions, changes in ACR tend to stabilize. When σ p is high, the rate of ACR reduction slows down.
We analyze the Bayesian risk sampling plan under the same prior as in the AR design. Figure 5 shows how prior-moment variations affect Bayesian producer and consumer risks under the optimal design. These risks are evaluated using Equations (6) and (7) and compared with the reference planes B P R = τ 1 and B C R = τ 2 . The contour plots are in Figure 6.
In general, for small σ p , as μ p increases from low to moderate values, BPR exhibits a significant decline, while BCR grows more gradually. At higher μ p values, the decrease in BPR becomes more gradual, while BCR increases at a faster rate. For moderate to high values of σ p , changes in BPR tend to stabilize, while BCR increases significantly, indicating a lower sensitivity of both risks to variations in σ p .

4.1.2. An Illustrative Example

In the following, we present a comparative analysis of the risk variations in the optimal AR and BR sampling plans when the prior mean μ_p and standard deviation σ_p are modified. The analysis is conducted under a censoring rate of 50%, with a moderate prior mean of μ_p = 0.063 and a standard deviation of σ_p = 0.054.
The optimal A R design is given by:
( n , m , k ) = ( 26 , 13 , 1.54779 ) ,
and the optimal B R design is given by:
( n , m , k ) = ( 11 , 5 , 1.31573 ) .
Table 2 presents the impact of varying μ_p while keeping σ_p = 0.054 fixed. As μ_p increases from 0.032 to 0.094 (from 49% below to 49% above its baseline value of 0.063), the producer and consumer risks exhibit different trends under the optimal AR and BR designs. In the AR design, the producer's risk increases markedly with μ_p, with APR rising from 0.0162 to 0.0810, whereas the consumer's risk grows only mildly, with ACR moving from 0.0829 to 0.0965 (a relative change of about 16%).
Under the Bayesian design, the producer's risk instead decreases as μ_p increases, with BPR dropping from 0.0802 to 0.0134 (73.7% below its baseline of 0.0510), while the Bayesian consumer's risk rises substantially, with BCR increasing from 0.0334 to 0.2479 (146.8% above its baseline). Overall, in the AR design the producer's risk is the more sensitive to variations in μ_p while the consumer's risk remains relatively stable, whereas in the BR design the consumer's risk shows the larger swings. Compared with the classical design, the Bayesian design attains comparable or lower risks with a much smaller sample size, but it reacts more strongly as μ_p moves away from the baseline prior.
Table 3 presents the impact of varying σ_p while keeping μ_p = 0.063 fixed. As σ_p increases, the producer's risk decreases significantly: when σ_p rises to 0.068 (25% above its baseline of 0.054), the APR falls 28.4% below its baseline value. The consumer's risk also decreases with increasing σ_p, but the magnitude of change is smaller; at σ_p = 0.068 the ACR is 17.6% below its baseline, indicating that increasing the prior variance affects the consumer's risk as well, though less strongly than the producer's risk.
For the Bayesian design, the producer's risk also decreases gradually as σ_p increases, but the reduction is less pronounced than for the classical producer's risk. The Bayesian consumer's risk remains nearly unchanged as σ_p grows, especially at higher values of σ_p: even when σ_p reaches 0.068, the BCR changes by only −10.2%, indicating that the Bayesian consumer's risk is relatively insensitive to variations in the prior variance. For larger values of σ_p in particular, the Bayesian risks stabilize.
To eliminate the influence of the censoring rate on the sampling risks, we conducted additional experiments at censoring rates of 10% and 90%. For the same moderate prior mean and standard deviation (μ_p = 0.063, σ_p = 0.054), we recalculated the optimal sampling plans under these two censoring rates. Theoretical computations were then performed to compare how the risks of the optimal AR and BR sampling plans change as the prior mean μ_p and standard deviation σ_p vary.
Table 4 and Table 5 present the risk variations for the optimal AR and BR sampling plans under a censoring rate of 10%, given this prior mean and standard deviation. The comparison illustrates how changes in the prior mean μ_p and standard deviation σ_p influence the risks.
The optimal A R design is given by:
( n , m , k ) = ( 22 , 11 , 1.54781 ) ,
and the optimal B R design is given by:
( n , m , k ) = ( 8 , 4 , 1.28515 ) .
Table 6 and Table 7 present the risk variations for the optimal AR and BR sampling plans under a censoring rate of 90%, given the same prior mean and standard deviation (μ_p = 0.063, σ_p = 0.054). The comparison illustrates how changes in the prior mean μ_p and standard deviation σ_p influence the risks.
The optimal A R design is given by:
( n , m , k ) = ( 29 , 15 , 1.54799 ) ,
and the optimal B R design is given by:
( n , m , k ) = ( 14 , 7 , 1.31574 ) .
By observing the risk variations in the optimal sampling plans under low, medium, and high censoring levels as the prior mean μ p and standard deviation σ p change, we can conclude that the AR design exhibits higher sensitivity to parameter variations. In particular, when μ p is relatively low or σ p is high, the changes in risk become more pronounced. This makes the AR design more suitable for scenarios where fine-tuned optimization is required based on subtle parameter variations.
On the other hand, the BR design demonstrates lower sensitivity, with relatively minor changes in risk values. The BR design remains more robust to parameter variations, maintaining stable risk levels. Therefore, it is better suited for environments where greater parameter variability is expected and system stability is a key requirement.

4.2. Optimal Sample Sizes

In the following, we separately present the effects of changes in μ_p and σ_p on the optimal sample size. Figure 7 illustrates the sensitivity of the optimal sample size to changes in μ_p. When τ1 = 0.05, τ2 = 0.10, p_τ1 = 0.0319, p_τ2 = 0.0942, and σ_p = 0.054, the figure depicts the trend of the optimal sample size for the AR and BR designs as μ_p varies. It can be observed that the optimal sample size for both the AR and BR designs increases as μ_p increases. The increase is smaller for the BR design, indicating that it is relatively less sensitive to changes in μ_p.
Figure 8 illustrates the sensitivity of the optimal sample size to changes in σ_p. When τ1 = 0.05, τ2 = 0.10, p_τ1 = 0.0319, p_τ2 = 0.0942, and μ_p = 0.063, the figure shows how the AR and BR designs respond to variations in σ_p. As σ_p increases, the optimal sample size for the AR design decreases significantly in a stepwise manner, indicating that the AR design is highly sensitive to changes in the prior variance. In contrast, the BR design exhibits only minor fluctuations in sample size, demonstrating a relatively stable trend. This suggests that the BR design is more robust to variations in σ_p.
Additionally, we have highlighted in Figure 7 and Figure 8 the optimal sample sizes for the AR and BR designs when the prior parameters are set to μ_p = 0.063 and σ_p = 0.054. Under these conditions, the optimal sample sizes are n = 26 for the AR design and n = 11 for the BR design.

5. Concluding Remarks

Based on the experimental data analysis, we conclude that the Bayesian risks are more sensitive to variations in μ_p than to changes in σ_p. When μ_p varies, the producer's risk in the AR design is more affected than the consumer's risk, whereas in the BR design the consumer's risk dominates. In contrast, when σ_p changes, the producer's risk fluctuates more than the consumer's risk. Additionally, the optimal sample size remains stable under minor to moderate variations in the prior moments. When the absolute change in μ_p and σ_p is below 10%, sample size variations in the AR and BR designs stay within 1–2 units. However, with larger σ_p variations, the sample size of the AR design fluctuates significantly.
In practical production, selecting an appropriate batch sampling inspection plan requires evaluating the stability of the prior parameters to determine whether the AR or the BR design is more suitable. When σ_p is known but μ_p is subject to significant uncertainty, both the producer's and consumer's risks are highly sensitive to this uncertainty, especially in the BR design, where the consumer's risk fluctuates significantly. Therefore, the AR design is recommended in such cases, as it is less sensitive to changes in μ_p and is more suitable when the estimate of μ_p is unstable. Conversely, if μ_p is accurately estimated but σ_p is highly variable, the producer's risk is more affected, particularly in the AR design, where sample size variations become substantial. In this scenario, the BR design is recommended, as it offers greater stability in sample size and better controls the consumer's risk.
When variations in μ_p and σ_p are small, the optimal sample size n should be employed for sampling, as fluctuations in sample size remain within 1–2 units, minimizing the impact on production costs. This approach is particularly suitable for stable production environments with consistent raw material quality and low equipment error. However, when μ_p or σ_p undergoes significant variation, especially when σ_p decreases in the AR design, the optimal sample size may increase sharply. To maintain an acceptable producer's risk, it is advisable to increase the sample size accordingly and to adjust the sampling strategy dynamically. Optimizing the plan over successive batches helps mitigate quality control deviations caused by uncertainty in the prior information.
Overall, in practical production, if σ_p is stable but μ_p fluctuates, the AR design is preferable to reduce the risks arising from prior mean uncertainty, making it suitable for high-risk products such as medical devices and aerospace components. Conversely, if μ_p is stable but σ_p varies, the BR design is recommended to enhance sample size robustness and reduce sensitivity to variance deviations, making it more appropriate for products with high production consistency, such as consumer goods and electronic components.

Author Contributions

Conceptualization: H.Z. and W.G.; Methodology: H.Z. and W.G.; Software: H.Z.; Investigation: H.Z.; Writing—Original Draft: H.Z.; Writing—Review & Editing: W.G.; Supervision: W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202510004185 of the National Training Program of Innovation and Entrepreneurship for Undergraduates. Wenhao Gui's work was partially supported by the Science and Technology Research and Development Project of China State Railway Group Company, Ltd. (No. N2023Z020).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [26].

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cohen, A. Truncated and Censored Samples: Theory and Applications; CRC Press: Boca Raton, FL, USA, 1991.
  2. Dey, S.; Singh, S.; Tripathi, Y.M.; Asgharzadeh, A. Estimation and Prediction for a Progressively Censored Generalized Inverted Exponential Distribution. Stat. Methodol. 2016, 32, 185–202.
  3. Alsobhi, S.M.; Almutairi, R.; Almongy, F. Estimation for the Exponentiated Weibull Model with Adaptive Type-II Progressive Censored Schemes. Appl. Math. Model. 2016, 40, 1180–1192.
  4. Nassar, M.; Abo-Kasem, O.; Zhang, C. Analysis of Weibull Distribution Under Adaptive Type-II Progressive Hybrid Censoring Scheme. J. Indian Soc. Probab. Stat. 2018, 19, 25–65.
  5. Singh, S.; Tripathi, Y.M.; Wu, S.J. Bayesian Analysis for Lognormal Distribution Under Progressive Type-II Censoring. Hacet. J. Math. Stat. 2019, 48, 1488–1504.
  6. Panahi, M. Estimation of the Inverted Exponentiated Rayleigh Distribution Based on Adaptive Type II Progressive Hybrid Censored Sample. J. Comput. Appl. Math. 2020, 364, 112345.
  7. Dey, S.; Elshahhat, A.; Nassar, M. Analysis of Progressive Type-II Censored Gamma Distribution. Comput. Stat. 2023, 38, 481–508.
  8. Nassar, M.; Elshahhat, A. Estimation Procedures and Optimal Censoring Schemes for an Improved Adaptive Progressively Type-II Censored Weibull Distribution. J. Appl. Stat. 2023, 51, 1664–1688.
  9. Epstein, B. Truncated life test in the exponential case. Ann. Math. Stat. 1954, 25, 555–564.
  10. Fernández, A. Reliability inference and sample-size determination under double censoring for some two-parameter models. Comput. Stat. Data Anal. 2008, 52, 3426–3440.
  11. Fernández, A.J.; Pérez-González, C. Optimal acceptance sampling plans for log-location-scale lifetime models using average risk. Comput. Stat. Data Anal. 2012, 56, 719–731.
  12. Aslam, M.; Balamurali, S.; Jun, C.H.; Ahmad, M. A two-plan sampling system for life testing under Weibull distribution. Ind. Eng. Manag. Syst. 2010, 9, 54–59.
  13. Bhattacharya, R.; Pradhan, B.; Dewanji, A. Computation of Optimum Reliability Acceptance Sampling Plans in Presence of Hybrid Censoring. Comput. Stat. Data Anal. 2015, 83, 91–100.
  14. Wu, C.; Shu, M.; Chang, Y. Variable-sampling plans based on lifetime-performance index under exponential distribution with censoring and its extensions. Appl. Math. Model. 2018, 55, 81–93.
  15. Chakrabarty, J.B.; Chowdhury, S.; Roy, S. Optimum Reliability Acceptance Sampling Plan Using Type-I Generalized Hybrid Censoring Scheme for Products Under Warranty. Int. J. Qual. Reliab. Manag. 2020, 38, 780–799.
  16. Rasay, H.; Naderkhani, F.; Golmohammadi, A.M. Designing Variable Sampling Plans Based on Lifetime Performance Index Under Failure Censoring Reliability Tests. Qual. Eng. 2020, 32, 354–370.
  17. Wang, T.C.; Wu, C.W.; Shu, M.H. A Variables-Type Multiple-Dependent-State Sampling Plan Based on the Lifetime Performance Index under a Weibull Distribution. Ann. Oper. Res. 2022, 311, 381–399.
  18. Champernowne, D. The economics of sequential sampling procedure for defectives. Appl. Stat. 1953, 2, 118–130.
  19. Fernández, A.J.; Pérez-González, C. Generalized Beta prior models on fraction defective in reliability test planning. J. Comput. Appl. Math. 2012, 236, 3147–3159.
  20. Pérez-González, C.J.; Fernández, A.J. Classical versus Bayesian risks in acceptance sampling: A sensitivity analysis. Comput. Stat. 2013, 28, 1333–1350.
  21. Fallahnezhad, M.S.; Babadi, A. A new acceptance sampling plan using Bayesian approach in the presence of inspection errors. Trans. Inst. Meas. Control 2014, 37, 1060–1073.
  22. Li, X.; Chen, W.; Sun, F.; Liao, H.; Kang, R.; Li, R. Bayesian accelerated acceptance sampling plans for a lognormal lifetime distribution under Type-I censoring. Reliab. Eng. Syst. Saf. 2018, 171, 78–86.
  23. Salem, M.; Amin, Z.; Ismail, M. Designing Bayesian Reliability Sampling Plans for Weibull Lifetime Models Using Progressively Censored Data. Int. J. Reliab. Qual. Saf. Eng. 2018, 25, 1850012.
  24. Hafeez, W.; Wu, S.; Aziz, N. Bayesian double group sampling plan for assessing the average number of nonconforming products in manufacturing: A Poisson-Gamma distribution approach. Sci. Rep. 2025, 15, 6095.
  25. Schneider, H. Failure-censored variables-sampling plans for lognormal and Weibull distributions. Technometrics 1989, 31, 199–206.
  26. ANSI/ASQC. Standard ANSI Guide; ASQC: Milwaukee, WI, USA, 1993.
Figure 1. Beta(a, b) prior distributions for the defect rate p at given μ_p and σ_p. Subfigures illustrate the influence of different prior means and variances: (a) μ_p = 0.032; (b) μ_p = 0.048; (c) μ_p = 0.063; (d) μ_p = 0.078; (e) μ_p = 0.094, each with varying prior variances.
Figure 2. OC curves for AR design and BR design under 50% censoring rate with μ p = 0.063 and σ p = 0.054 .
Figure 3. Sensitivity of average risks to μ p and σ p variations in AR design, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 .
Figure 4. Sensitivity of average risks to μ p and σ p variations in AR design, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 .
Figure 5. Sensitivity of Bayesian risks to μ p and σ p variations in BR design, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 .
Figure 6. Sensitivity of Bayesian risks to μ p and σ p variations in BR design, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 .
Figure 7. Sensitivity of optimal sample size to μ p variations, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , and σ p = 0.054 .
Figure 8. Sensitivity of optimal sample size to σ p variations, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , and μ p = 0.063 .
Table 1. Sample sizes and sampling risks for optimal AR and BR designs under 50% failure-censored lognormal distribution at given μ p and σ p , with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , and p τ 2 = 0.0942 .
| μ_p | σ_p | AR: n | AR: APR (sim) | AR: ACR (sim) | BR: n | BR: BPR (sim) | BR: BCR (sim) |
|---|---|---|---|---|---|---|---|
| 0.032 | 0.047 | 18 | 0.051 | 0.099 | – | – | – |
| 0.032 | 0.054 | 16 | 0.049 | 0.100 | – | – | – |
| 0.032 | 0.062 | 13 | 0.048 | 0.103 | – | – | – |
| 0.048 | 0.047 | 25 | 0.051 | 0.096 | 6 | 0.048 | 0.103 |
| 0.048 | 0.054 | 22 | 0.051 | 0.104 | 6 | 0.049 | 0.098 |
| 0.048 | 0.062 | 19 | 0.049 | 0.101 | 6 | 0.051 | 0.108 |
| 0.063 | 0.047 | 29 | 0.050 | 0.093 | 11 | 0.054 | 0.099 |
| 0.063 | 0.054 | 26 | 0.049 | 0.101 | 11 | 0.050 | 0.097 |
| 0.063 | 0.062 | 23 | 0.051 | 0.972 | 10 | 0.051 | 0.094 |
| 0.078 | 0.047 | 32 | 0.052 | 0.098 | 15 | 0.047 | 0.102 |
| 0.078 | 0.054 | 29 | 0.050 | 0.106 | 15 | 0.050 | 0.099 |
| 0.078 | 0.062 | 25 | 0.053 | 0.097 | 14 | 0.049 | 0.098 |
| 0.094 | 0.047 | 34 | 0.047 | 0.099 | 20 | 0.047 | 0.097 |
| 0.094 | 0.054 | 31 | 0.051 | 0.098 | 18 | 0.049 | 0.097 |
| 0.094 | 0.062 | 27 | 0.054 | 0.102 | 17 | 0.049 | 0.098 |
Table 2. Effect of μ p on producer and consumer risks in AR and BR designs, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 under 50% censoring.
| μ_p | AR: APR | AR: ACR | BR: BPR | BR: BCR |
|---|---|---|---|---|
| 0.032 (−49%) | 0.0162 (−68.0%) | 0.0829 (−15.7%) | 0.0802 (57.3%) | 0.0334 (−66.8%) |
| 0.038 (−40%) | 0.0219 (−55.2%) | 0.0880 (−10.6%) | 0.0797 (56.2%) | 0.0426 (−57.6%) |
| 0.044 (−30%) | 0.0280 (−42.6%) | 0.0918 (−6.7%) | 0.0758 (48.6%) | 0.0532 (−47.0%) |
| 0.051 (−20%) | 0.0355 (−27.2%) | 0.0950 (−3.3%) | 0.0681 (33.6%) | 0.0678 (−32.4%) |
| 0.054 (−15%) | 0.0388 (−20.4%) | 0.0962 (−2.2%) | 0.0642 (25.8%) | 0.0735 (−25.3%) |
| 0.057 (−10%) | 0.0421 (−13.7%) | 0.0971 (−1.3%) | 0.0599 (17.5%) | 0.0827 (−17.6%) |
| 0.060 (−5%) | 0.0450 (−6.8%) | 0.0980 (−0.6%) | 0.0555 (8.8%) | 0.0912 (−9.1%) |
| 0.063 (0%) | 0.0488 (0.0%) | 0.0984 (0.0%) | 0.0510 (0.0%) | 0.1000 (0.0%) |
| 0.066 (5%) | 0.0522 (6.8%) | 0.0988 (0.3%) | 0.0464 (−8.8%) | 0.1100 (9.9%) |
| 0.069 (10%) | 0.0555 (13.6%) | 0.0985 (0.6%) | 0.0421 (−17.5%) | 0.1210 (20.6%) |
| 0.072 (15%) | 0.0587 (20.3%) | 0.0992 (0.8%) | 0.0377 (−26.0%) | 0.1328 (32.2%) |
| 0.076 (20%) | 0.0630 (29.1%) | 0.0992 (0.7%) | 0.0322 (−36.7%) | 0.1497 (49.1%) |
| 0.082 (30%) | 0.0693 (42.0%) | 0.0988 (0.35%) | 0.0248 (−51.3%) | 0.1784 (77.6%) |
| 0.088 (40%) | 0.0753 (54.4%) | 0.0979 (0.5%) | 0.0186 (−63.6%) | 0.2110 (110.2%) |
| 0.094 (49%) | 0.0810 (66.0%) | 0.0965 (1.0%) | 0.0134 (−73.7%) | 0.2479 (146.8%) |
Table 3. Effect of σ p on producer and consumer risks in AR and BR designs, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 under 50% censoring.
| σ_p | AR: APR | AR: ACR | BR: BPR | BR: BCR |
|---|---|---|---|---|
| 0.036 (−33%) | 0.0783 (60.4%) | 0.1316 (33.7%) | 0.0417 (−18.3%) | 0.0967 (−3.6%) |
| 0.041 (−25%) | 0.0684 (40.1%) | 0.1208 (22.7%) | 0.0465 (−8.9%) | 0.1012 (0.7%) |
| 0.044 (−18%) | 0.0631 (29.3%) | 0.1150 (16.8%) | 0.0484 (−5.1%) | 0.1023 (1.9%) |
| 0.048 (−12%) | 0.0568 (16.5%) | 0.1078 (9.5%) | 0.0500 (−1.9%) | 0.0827 (2.0%) |
| 0.051 (−6%) | 0.0527 (7.8%) | 0.1029 (4.6%) | 0.0507 (−0.5%) | 0.1020 (1.3%) |
| 0.054 (0%) | 0.0488 (0.0%) | 0.0984 (0.0%) | 0.0510 (0.0%) | 0.1000 (0.0%) |
| 0.057 (5%) | 0.0453 (−7.1%) | 0.0942 (−4.3%) | 0.0510 (0.0%) | 0.0987 (−1.6%) |
| 0.060 (11%) | 0.0421 (−13.6%) | 0.0902 (−8.3%) | 0.0508 (−0.4%) | 0.0967 (−3.7%) |
| 0.063 (15%) | 0.0392 (−19.6%) | 0.0866 (−12.0%) | 0.0503 (−1.3%) | 0.0944 (−5.9%) |
| 0.065 (20%) | 0.0374 (−23.3%) | 0.0843 (−14.3%) | 0.0500 (−1.9%) | 0.0928 (−7.6%) |
| 0.068 (25%) | 0.0349 (−28.4%) | 0.0810 (−17.6%) | 0.0494 (−3.2%) | 0.0902 (−10.2%) |
Table 4. Effect of μ p on producer and consumer risks in AR and BR designs, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 under 10% censoring.
| μ_p | AR: APR | AR: ACR | BR: BPR | BR: BCR |
|---|---|---|---|---|
| 0.032 (−49%) | 0.0170 (−66.7%) | 0.0855 (−15.6%) | 0.0890 (59.1%) | 0.0366 (−66.2%) |
| 0.038 (−40%) | 0.0213 (−55.1%) | 0.0910 (−10.5%) | 0.0882 (57.7%) | 0.0466 (−56.9%) |
| 0.044 (−30%) | 0.0293 (−42.5%) | 0.0946 (−6.6%) | 0.0837 (49.7%) | 0.0580 (−46.5%) |
| 0.051 (−20%) | 0.0371 (−27.1%) | 0.0980 (−3.3%) | 0.0751 (34.2%) | 0.0737 (−32.0%) |
| 0.054 (−15%) | 0.0405 (−20.4%) | 0.0991 (−2.2%) | 0.0706 (26.3%) | 0.0814 (−24.9%) |
| 0.057 (−10%) | 0.0440 (−13.6%) | 0.1000 (−1.3%) | 0.0659 (17.8%) | 0.0897 (−17.3%) |
| 0.060 (−5%) | 0.0475 (−6.8%) | 0.1000 (−0.6%) | 0.0609 (9.0%) | 0.0987 (−9.0%) |
| 0.063 (0%) | 0.0509 (0.0%) | 0.1014 (0.0%) | 0.0559 (0.0%) | 0.1084 (0.0%) |
| 0.066 (5%) | 0.0544 (6.8%) | 0.1018 (0.4%) | 0.0510 (−8.7%) | 0.1189 (9.7%) |
| 0.069 (10%) | 0.0578 (13.5%) | 0.1021 (0.7%) | 0.0461 (−17.6%) | 0.1305 (20.2%) |
| 0.072 (15%) | 0.0612 (20.1%) | 0.1022 (0.9%) | 0.0413 (−26.1%) | 0.1426 (31.5%) |
| 0.076 (20%) | 0.0656 (28.8%) | 0.1022 (0.9%) | 0.0353 (−36.8%) | 0.1605 (48.0%) |
| 0.082 (30%) | 0.0720 (42.0%) | 0.1020 (0.4%) | 0.0271 (−51.4%) | 0.1905 (75.7%) |
| 0.088 (40%) | 0.0783 (53.7%) | 0.1009 (0.5%) | 0.0202 (−63.8%) | 0.2246 (107.1%) |
| 0.094 (49%) | 0.0841 (66.0%) | 0.1015 (1.0%) | 0.0146 (−73.8%) | 0.2628 (142.4%) |
Table 5. Effect of σ p on producer and consumer risks in AR and BR designs, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 under 10% censoring.
| σ_p | AR: APR | AR: ACR | BR: BPR | BR: BCR |
|---|---|---|---|---|
| 0.036 (−33%) | 0.0813 (59.6%) | 0.1351 (33.3%) | 0.0452 (−19.1%) | 0.1030 (−5.2%) |
| 0.041 (−25%) | 0.0711 (39.6%) | 0.1241 (22.5%) | 0.0506 (−9.4%) | 0.1080 (−0.3%) |
| 0.044 (−18%) | 0.0657 (29.0%) | 0.1182 (16.6%) | 0.0528 (−6.0%) | 0.1095 (1.1%) |
| 0.048 (−12%) | 0.0592 (16.3%) | 0.1109 (9.4%) | 0.0540 (−2.2%) | 0.1102 (1.6%) |
| 0.051 (−6%) | 0.0549 (7.7%) | 0.1060 (4.6%) | 0.0507 (−0.5%) | 0.1020 (1.3%) |
| 0.054 (0%) | 0.0509 (0.0%) | 0.1014 (0.0%) | 0.0559 (0.0%) | 0.1084 (0.0%) |
| 0.057 (5%) | 0.0473 (−7.1%) | 0.0970 (−4.2%) | 0.0561 (0.3%) | 0.1068 (−1.5%) |
| 0.060 (11%) | 0.0440 (−13.6%) | 0.0930 (−8.2%) | 0.0559 (0.0%) | 0.1049 (−3.2%) |
| 0.063 (15%) | 0.0409 (−19.5%) | 0.0893 (−11.8%) | 0.0555 (−0.7%) | 0.1026 (−5.3%) |
| 0.065 (20%) | 0.0391 (−23.2%) | 0.0870 (−14.2%) | 0.0551 (−1.4%) | 0.1010 (−6.8%) |
| 0.068 (25%) | 0.0365 (−28.4%) | 0.0836 (−17.5%) | 0.0545 (−2.6%) | 0.0984 (−9.2%) |
Table 6. Effect of μ_p on producer and consumer risks in AR and BR designs, with τ1 = 0.05, τ2 = 0.10, p_τ1 = 0.0319, p_τ2 = 0.0942, μ_p = 0.063, and σ_p = 0.054 under 90% censoring.
| μ_p | AR: APR | AR: ACR | BR: BPR | BR: BCR |
|---|---|---|---|---|
| 0.032 (−49%) | 0.0161 (−66.9%) | 0.0827 (−15.7%) | 0.0789 (57.1%) | 0.0333 (−66.8%) |
| 0.038 (−40%) | 0.0218 (−55.2%) | 0.0876 (−10.6%) | 0.0784 (56.0%) | 0.0425 (−57.6%) |
| 0.044 (−30%) | 0.0279 (−42.7%) | 0.0915 (−6.7%) | 0.0746 (48.5%) | 0.0530 (−47.0%) |
| 0.051 (−20%) | 0.0371 (−27.3%) | 0.0948 (−3.3%) | 0.0671 (33.5%) | 0.0676 (−32.5%) |
| 0.054 (−15%) | 0.0387 (−20.5%) | 0.0959 (−2.2%) | 0.0632 (25.7%) | 0.0748 (−25.3%) |
| 0.057 (−10%) | 0.0420 (−13.7%) | 0.0968 (−1.3%) | 0.0590 (17.4%) | 0.0825 (−17.6%) |
| 0.060 (−5%) | 0.0453 (−6.9%) | 0.0975 (−0.6%) | 0.0547 (8.8%) | 0.0910 (−9.1%) |
| 0.063 (0%) | 0.0487 (0.0%) | 0.0981 (0.0%) | 0.0502 (0.0%) | 0.1001 (0.0%) |
| 0.066 (5%) | 0.0520 (6.8%) | 0.0984 (0.4%) | 0.0458 (−8.8%) | 0.1100 (9.9%) |
| 0.069 (10%) | 0.0553 (13.6%) | 0.0987 (0.7%) | 0.0414 (−17.5%) | 0.1208 (20.6%) |
| 0.072 (15%) | 0.0586 (20.3%) | 0.0988 (0.8%) | 0.0372 (−26.0%) | 0.1324 (32.2%) |
| 0.076 (20%) | 0.0629 (29.1%) | 0.0988 (0.8%) | 0.0318 (−36.7%) | 0.1493 (49.1%) |
| 0.082 (30%) | 0.0691 (42.0%) | 0.0984 (0.4%) | 0.0245 (−51.2%) | 0.1778 (77.6%) |
| 0.088 (40%) | 0.0751 (54.3%) | 0.0975 (−0.5%) | 0.0183 (−63.6%) | 0.2104 (110.2%) |
| 0.094 (49%) | 0.0808 (66.0%) | 0.0962 (−1.8%) | 0.0132 (−73.7%) | 0.2472 (146.9%) |
Table 7. Effect of σ p on producer and consumer risks in AR and BR designs, with τ 1 = 0.05 , τ 2 = 0.10 , p τ 1 = 0.0319 , p τ 2 = 0.0942 , μ p = 0.063 , and σ p = 0.054 under 90% censoring.
| σ_p | AR: APR | AR: ACR | BR: BPR | BR: BCR |
|---|---|---|---|---|
| 0.036 (−33%) | 0.0780 (60.3%) | 0.1312 (33.8%) | 0.0411 (−18.1%) | 0.0965 (−3.6%) |
| 0.041 (−25%) | 0.0682 (40.0%) | 0.1204 (22.7%) | 0.0459 (−8.7%) | 0.1009 (0.8%) |
| 0.044 (−18%) | 0.0630 (29.3%) | 0.1146 (16.8%) | 0.0477 (−5.0%) | 0.1020 (1.9%) |
| 0.048 (−12%) | 0.0567 (16.4%) | 0.1075 (9.5%) | 0.0493 (−1.9%) | 0.1021 (2.0%) |
| 0.051 (−6%) | 0.0525 (7.8%) | 0.1026 (4.6%) | 0.0500 (−0.6%) | 0.1013 (1.2%) |
| 0.054 (0%) | 0.0487 (0.0%) | 0.0981 (0.0%) | 0.0502 (0.0%) | 0.1001 (0.0%) |
| 0.057 (5%) | 0.0452 (−7.2%) | 0.0939 (−4.3%) | 0.0502 (−0.0%) | 0.0984 (−1.7%) |
| 0.060 (11%) | 0.0420 (−13.7%) | 0.0899 (−8.3%) | 0.0499 (0.4%) | 0.0964 (−3.7%) |
| 0.063 (15%) | 0.0391 (−19.6%) | 0.0863 (−11.9%) | 0.0496 (−1.3%) | 0.0941 (−6.0%) |
| 0.065 (20%) | 0.0373 (−23.3%) | 0.0840 (−14.3%) | 0.0492 (−2.0%) | 0.0924 (−7.6%) |
| 0.068 (25%) | 0.0348 (−28.5%) | 0.0808 (−17.6%) | 0.0486 (−3.3%) | 0.0899 (−10.2%) |