Article

Classical and Bayesian Inference for the Two-Parameter Rayleigh Distribution with Random Censored Data

School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
*
Author to whom correspondence should be addressed.
Entropy 2026, 28(3), 313; https://doi.org/10.3390/e28030313
Submission received: 4 February 2026 / Revised: 26 February 2026 / Accepted: 7 March 2026 / Published: 10 March 2026
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

This study focuses on parameter estimation and reliability analysis for the two-parameter Rayleigh distribution under random censoring. It is shown that directly fitting the standard Rayleigh distribution can lead to substantial estimation errors, especially when the dataset contains a markedly high minimum value. To overcome the limitation of the conventional single-parameter Rayleigh distribution, which lacks a threshold parameter in practical applications, a two-parameter Rayleigh distribution model is proposed. The main research contents include the following: establishing a randomly censored data model; deriving classical inference methods based on maximum likelihood estimation along with several other classical estimation techniques; and constructing a Bayesian estimation framework. We also analyze several reliability and experimental characteristics by deriving their corresponding estimates. A Monte Carlo simulation study is carried out to assess the performance of the proposed estimators. Finally, the practicality and superiority of the two-parameter model are validated using real strength datasets. The results demonstrate that the two-parameter Rayleigh distribution can more accurately describe survival data with threshold characteristics and outperforms the single-parameter model in terms of model fit and reliability estimation.

1. Introduction

Survival analysis constitutes a methodology for examining expected lifetimes or event occurrence times, finding extensive applications across medicine, engineering, and social sciences. Within reliability theory, event occurrence times are also termed lifetime data or failure time data. Modeling and analyzing such lifetime data enables the assessment and prediction of reliability levels for products or systems, facilitating accurate inferences regarding statistical patterns over temporal dimensions. In the fields of reliability theory and survival analysis, parametric lifetime models like the exponential, Weibull, and gamma distributions offer a concise mathematical framework for characterizing patterns of “event occurrence times”. Advances in communications technology and the growing need to analyze underlying failure mechanisms now demand more precise and robust statistical distributions to model failure data.
First proposed in 1880 to describe the results of superimposed random vibrations, the Rayleigh distribution later gained prominence as a widely used lifetime model, driven by practical engineering needs in fields such as wireless communications. Numerous researchers have conducted in-depth studies on it within reliability theory. In practical scenarios, constraints from multiple factors often render obtaining “complete event times” difficult or prohibitively costly. As a direct consequence, a diverse array of censoring strategies is intentionally adopted in the course of lifetime experimental studies. The most conventional methods are Type I and Type II censoring. For instance, partial Bayes estimation has been developed to enhance inference under informative priors [1]. Within a progressively Type-II censored framework, methods have been proposed specifically for estimating parameters of the two-parameter Rayleigh model, improving efficiency in life-testing experiments [2]. Furthermore, robust explicit estimators are derived for the shifted Rayleigh distribution under Type-II censoring [3], addressing estimation stability in the presence of location parameters. These developments illustrate the continued methodological refinement of Rayleigh-based models in reliability analysis. Within such methodological frameworks, either the censoring duration or the quantity of censored samples is commonly set in advance in a deliberate manner. In actual observations, subjects are frequently lost or randomly removed before failure. This phenomenon of random removal at non-termination points is termed random censoring.
With randomly censored data, the authors of [4,5,6,7] extensively investigated various distributions, including the exponential, Rayleigh, and Burr Type XII distributions. In recent research efforts, the studies documented in [8,9] have specifically investigated Bayesian estimation methodologies tailored to generalized exponential and Weibull distributions under the scenario of random censoring. The authors of [10,11] have investigated estimation for both the Maxwell distribution and the generalized inverted exponential distribution in the context of randomly censored data. The authors of [12] further extended the scope of this field by investigating the parameter estimation problem of the exponential distribution with an introduced location parameter under random censoring. Recent research continues to expand into the study of other flexible models. For instance, the authors of [13] employed a combination of classical and Bayesian approaches to establish a comprehensive analytical framework for the randomly censored Kumaraswamy distribution.
For the conventional study of data with random censoring, it is assumed that failure times and censoring times both start from the zero point. However, in practice, most event times possess an inherent threshold, meaning events cannot occur before a minimum time $\mu$. Consequently, the assumptions of the standard Rayleigh distribution prove overly idealized and unrealistic in practical applications. For mathematical convenience, the censoring time is assumed to share the same minimum threshold. Introducing a location parameter effectively corrects systematic fitting bias, enhances model flexibility and parameter interpretability, and indirectly improves inference accuracy. The introduction of a location parameter $\mu$ under random censoring is not a trivial extension—it brings new methodological challenges. Specifically, for the observed sample $y_1, \ldots, y_n$, the support of the distribution becomes data-dependent through the constraint $\mu < \min\{y_1, \ldots, y_n\}$, and the censoring mechanism interacts with the location parameter in ways that complicate both classical and Bayesian inference. Standard estimation procedures developed for the one-parameter case are no longer directly applicable, requiring the development of tailored inferential methods.
This paper addresses these challenges by developing comprehensive classical and Bayesian inference procedures for the two-parameter Rayleigh distribution under random censoring. We derive maximum likelihood estimators with closed-form expressions for μ , construct Fisher information matrices for asymptotic inference, and develop Gibbs sampling algorithms for Bayesian estimation under generalized entropy loss functions. Through extensive simulations and real data analysis, we demonstrate that the proposed methods not only extend the scope of existing censored-data methodologies but also provide more accurate and reliable estimates when the data exhibit a non-zero minimum lifetime.
The structure of the rest of this paper is set out below. Section 2 first introduces the basic definition of the two-parameter Rayleigh distribution, then constructs the mathematical model under random censoring. It specifies the distribution assumptions for failure and censoring times, along with the corresponding joint likelihood function of the model. Section 3 systematically expounds on classical parameter estimation methods. Section 4 moves on to the Bayesian theoretical framework, elaborating on the specific Bayesian estimation process constructed under the generalized entropy loss function with conjugate inverted-gamma prior distributions, the practical implementation of the Gibbs sampling algorithm for acquiring posterior sampling data, and the systematic construction of highest posterior density credible intervals by means of the Chen–Shao algorithm. Section 5 deals with the derivation and estimation of key reliability characteristics under the random-censoring model, including the mean time to system failure, hazard function, reliability function, and expected time on test. Section 6 conducts an extensive Monte Carlo simulation study to evaluate and compare the finite-sample performance of all the proposed estimators under various parameter configurations and prior settings. Section 7 provides a practical illustration using a carbon fiber strength dataset; goodness-of-fit tests and graphical comparisons demonstrate the superiority of the two-parameter Rayleigh model over its one-parameter counterpart, and the estimation methods developed in this paper are applied to obtain parameter estimates and reliability indices. Finally, Section 8 summarizes the work and discusses possible directions for future research. All numerical computations and simulations in this paper are performed using the statistical software R (Version 4.5.2).

2. The Model and Its Assumptions

The probability density function (PDF) pertaining to the two-parameter Rayleigh distribution can be expressed as follows:
$$f(x \mid \mu, \sigma) = \frac{x-\mu}{\sigma^2}\,\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}; \quad 0 \le \mu \le x < \infty, \ \sigma > 0,$$
where σ denotes the scale parameter and μ the location parameter, which stands for the minimum lifetime of the studied units. This distribution is denoted by Rayleigh ( μ , σ ) , and its corresponding cumulative distribution function (CDF) is defined as follows:
$$F(x \mid \mu, \sigma) = 1 - \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}; \quad 0 \le \mu \le x < \infty, \ \sigma > 0.$$
The two-parameter Rayleigh distribution offers greater flexibility than the standard Rayleigh distribution by incorporating a location parameter μ ( μ 0 ) . This flexibility is particularly important in modeling scenarios where lifetime measurements do not originate from zero. Furthermore, the two-parameter Rayleigh distribution is invariably uni-modal and is characterized by an increasing hazard rate function. This renders it particularly well-suited for modeling the lifetime distributions of components subject to rapid aging over time. The probability density function and cumulative distribution function of this distribution are plotted in Figure 1.
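As an illustration, the density and distribution function above can be coded directly. The paper's numerical work uses R; the following Python sketch, with hypothetical helper names `rayleigh2_pdf` and `rayleigh2_cdf`, is for exposition only.

```python
import math

def rayleigh2_pdf(x, mu, sigma):
    """Two-parameter Rayleigh PDF: ((x - mu) / sigma^2) * exp(-(x - mu)^2 / (2 sigma^2))."""
    if x < mu:
        return 0.0
    z = x - mu
    return (z / sigma**2) * math.exp(-z**2 / (2 * sigma**2))

def rayleigh2_cdf(x, mu, sigma):
    """Two-parameter Rayleigh CDF: 1 - exp(-(x - mu)^2 / (2 sigma^2))."""
    if x < mu:
        return 0.0
    return 1.0 - math.exp(-(x - mu)**2 / (2 * sigma**2))

# The median solves F(x) = 1/2, i.e. x = mu + sigma * sqrt(2 ln 2).
mu, sigma = 1.0, 2.0
median = mu + sigma * math.sqrt(2 * math.log(2))
print(round(rayleigh2_cdf(median, mu, sigma), 6))  # 0.5
```

Note that the density is zero below the threshold μ, which is exactly the feature the location parameter adds over the standard Rayleigh model.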
Consider a test conducted on n specimens, where their individual lifetimes are denoted by $X_1, X_2, \ldots, X_n$. These random variables are independent and identically distributed, with the corresponding PDF $f_X(x \mid \mu, \sigma)$ and CDF $F_X(x \mid \mu, \sigma)$.
Moreover, let T 1 , T 2 , , T n represent the random censoring times for the aforementioned specimens, with their probability density function and cumulative distribution function given by f T ( x μ , γ ) and F T ( x μ , γ ) , respectively. For mathematical tractability, we assume that the failure time X i and the censoring time T i share the same location parameter μ . This simplification, which has been commonly adopted in the literature on random censoring (e.g., [12]), enables a clean derivation of the joint likelihood and preserves the two-parameter Rayleigh structure for the marginal distribution of Y i . We further assume that X i and T i are mutually independent for each i. It is important to note that in practice, only one of X i and T i can be observed for each test unit. Let the actual observed time be denoted as Y i = min ( X i , T i ) for i = 1 , 2 , , n . In addition, an indicator variable D i is defined as follows:
$$D_i = \begin{cases} 1 & \text{if } X_i \le T_i \\ 0 & \text{if } X_i > T_i \end{cases}; \quad i = 1, 2, \ldots, n.$$
Note that D i is a Bernoulli random variable whose probability mass function is defined as
$$P(D_i = d_i) = p^{d_i}(1-p)^{1-d_i}; \quad d_i = 0, 1, \quad \text{where } p = P(X_i \le T_i).$$
The probability of failure prior to censoring is derived as follows:
$$p = P[\text{an item fails}] = P[D_i = 1] = P[X_i \le T_i] = \int_\mu^\infty f_T(t)\,F_X(t)\,dt = \int_\mu^\infty \frac{t-\mu}{\gamma^2}\, e^{-\frac{(t-\mu)^2}{2\gamma^2}} \left(1 - e^{-\frac{(t-\mu)^2}{2\sigma^2}}\right) dt = \frac{\gamma^2}{\sigma^2 + \gamma^2}.$$
Substituting this result gives the explicit marginal distribution:
$$P(D_i = d_i) = \left(\frac{\gamma^2}{\sigma^2 + \gamma^2}\right)^{d_i} \left(\frac{\sigma^2}{\sigma^2 + \gamma^2}\right)^{1-d_i}, \quad d_i = 0, 1.$$
Note that the expression for the above probability does not contain $\mu$. To see that $Y_i$ and $D_i$ are independent, consider the transformed variables $U_i = (X_i - \mu)^2$ and $V_i = (T_i - \mu)^2$. Under the assumptions that $X_i$ and $T_i$ are independent and share the same location parameter $\mu$, it follows that $U_i$ and $V_i$ are independent and exponentially distributed; in the rate parameterization, $U_i \sim \text{Exp}(1/(2\sigma^2))$ and $V_i \sim \text{Exp}(1/(2\gamma^2))$.
A classical result for exponential distributions (see [12]) states that, for two independent exponential variables, U and V, the event { U V } is independent of min ( U , V ) . Applying this result to U i and V i , we have that { U i V i } is independent of min ( U i , V i ) . Since { U i V i } = { X i T i } = { D i = 1 } , and min ( U i , V i ) is a one-to-one function of Y i = min ( X i , T i ) = μ + min ( U i , V i ) , it follows that D i and Y i are independent.
Therefore, the joint probability distribution of ( Y i , D i ) can be expressed as
$$
\begin{aligned}
f_{Y,D}(y_i, d_i \mid \mu, \sigma, \gamma) &= \left[ f_X(y_i \mid \mu, \sigma)\,\big(1 - F_T(y_i \mid \mu, \gamma)\big) \right]^{d_i} \left[ f_T(y_i \mid \mu, \gamma)\,\big(1 - F_X(y_i \mid \mu, \sigma)\big) \right]^{1-d_i} \\
&= \left[ \frac{y_i-\mu}{\sigma^2}\exp\left\{-\frac{(y_i-\mu)^2}{2\sigma^2}\right\}\exp\left\{-\frac{(y_i-\mu)^2}{2\gamma^2}\right\} \right]^{d_i} \left[ \frac{y_i-\mu}{\gamma^2}\exp\left\{-\frac{(y_i-\mu)^2}{2\gamma^2}\right\}\exp\left\{-\frac{(y_i-\mu)^2}{2\sigma^2}\right\} \right]^{1-d_i} \\
&= \left(\frac{y_i-\mu}{\sigma^2}\right)^{d_i} \left(\frac{y_i-\mu}{\gamma^2}\right)^{1-d_i} \exp\left\{-\frac{(y_i-\mu)^2}{2\sigma^2} - \frac{(y_i-\mu)^2}{2\gamma^2}\right\} \\
&= \left(\frac{y_i-\mu}{\sigma^2}\right)^{d_i} \left(\frac{y_i-\mu}{\gamma^2}\right)^{1-d_i} \exp\left\{-\frac{\beta\,(y_i-\mu)^2}{2}\right\},
\end{aligned}
$$
$$y_i > \mu \ge 0, \quad \sigma, \gamma > 0, \quad d_i = 0, 1, \quad \beta := \frac{1}{\sigma^2} + \frac{1}{\gamma^2},$$
where the parameter β is defined to simplify subsequent calculations.
The marginal distribution of Y i can be obtained by summing the joint distribution in (2) over d i = 0 , 1 :
$$f_Y(y_i \mid \mu, \sigma, \gamma) = \sum_{d_i=0}^{1} f_{Y,D}(y_i, d_i) = \left(\frac{1}{\sigma^2} + \frac{1}{\gamma^2}\right)(y_i-\mu)\exp\left\{-\frac{\beta\,(y_i-\mu)^2}{2}\right\} = \beta\,(y_i-\mu)\exp\left\{-\frac{\beta\,(y_i-\mu)^2}{2}\right\}, \quad y_i > \mu.$$
This is exactly the probability density function of a two-parameter Rayleigh distribution with location parameter $\mu$ and scale parameter $1/\sqrt{\beta}$, denoted as $\text{Rayleigh}(\mu, 1/\sqrt{\beta})$.
The expected values of $Y_i$ and $D_i$ are $E(Y_i) = \mu + \sqrt{\pi/(2\beta)}$ and $E(D_i) = \gamma^2/(\sigma^2 + \gamma^2) = p$, respectively. On this basis, the collection $\{(y_i, d_i);\ i = 1, 2, \ldots, n\}$ constitutes a randomly censored sample, whose joint probability function is given in (2) and whose marginal distributions are given by the expressions in (1) and (2).
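The censoring model above is straightforward to simulate: draw $X_i$ and $T_i$ from Rayleigh(μ, σ) and Rayleigh(μ, γ), then record $Y_i = \min(X_i, T_i)$ and $D_i$. The following Python sketch (the paper's own computations use R; all names here are illustrative) checks the derived values $p = \gamma^2/(\sigma^2+\gamma^2)$ and $E(Y) = \mu + \sqrt{\pi/(2\beta)}$ by Monte Carlo.

```python
import math, random

random.seed(42)

def rvs(mu, s):
    # Inverse-CDF draw from Rayleigh(mu, s): F^{-1}(u) = mu + s*sqrt(-2*log(1-u))
    return mu + s * math.sqrt(-2.0 * math.log(1.0 - random.random()))

mu, sigma, gamma = 2.0, 1.5, 1.0          # arbitrary illustrative parameters
n = 200_000
p_theory = gamma**2 / (sigma**2 + gamma**2)   # P(X_i <= T_i)
beta = 1.0 / sigma**2 + 1.0 / gamma**2

fails, total_y = 0, 0.0
for _ in range(n):
    x, t = rvs(mu, sigma), rvs(mu, gamma)
    total_y += min(x, t)                  # observed time Y_i
    fails += (x <= t)                     # censoring indicator D_i

print(abs(fails / n - p_theory) < 0.01)                                  # True
print(abs(total_y / n - (mu + math.sqrt(math.pi / (2 * beta)))) < 0.01)  # True
```

The empirical failure fraction and the empirical mean of Y agree with the closed forms derived above.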

3. Classical Estimation Methods

Maximum likelihood estimation (MLE) is employed to infer the unknown model parameters, where the ensuing analytical procedures are conducted on the basis of a randomly censored sample designated as ( y , d ) = { ( y i , d i ) ; i = 1 , 2 , , n } . The likelihood function associated with this sample is formulated as follows:
$$L(\mu, \sigma, \gamma \mid y, d) = \prod_{i=1}^n f(y_i, d_i) = \sigma^{-2\sum_{i=1}^n d_i}\, \gamma^{-2\left(n - \sum_{i=1}^n d_i\right)} \prod_{i=1}^n (y_i - \mu)\, \exp\left\{-\frac{\beta}{2}\sum_{i=1}^n (y_i-\mu)^2\right\},$$
$$y_i > \mu \ge 0; \quad \sigma, \gamma > 0, \quad \beta = \frac{1}{\sigma^2} + \frac{1}{\gamma^2}.$$
Now, the log–likelihood function becomes
$$\log L(\mu, \sigma, \gamma) = -2\sum_{i=1}^n d_i \log\sigma - 2\left(n - \sum_{i=1}^n d_i\right)\log\gamma + \sum_{i=1}^n \log(y_i - \mu) - \frac{\beta}{2}\sum_{i=1}^n (y_i - \mu)^2.$$
The likelihood is an increasing function of $\mu$, and $\mu \le \min\{y_1, \ldots, y_n\}$. Consequently, the MLE of the parameter $\mu$ is exactly given by $\hat\mu = \min\{y_1, \ldots, y_n\}$. Note that $\hat\mu = \min\{y_1, \ldots, y_n\} = y_{(1)}$ is a boundary estimator, as it coincides with the smallest observation, while the true parameter $\mu$ is strictly less than all realizations of $Y_i$. Consequently, this estimator exhibits positive bias in finite samples, with its expectation given in Proposition 1 as $E(\hat\mu) = \mu + \sqrt{\pi/(2n\beta)}$. The bias decreases as the sample size $n$ increases, confirming the consistency of $\hat\mu$.
Let $w_1 = \sum_{i=1}^n d_i$ and $w_2 = \sum_{i=1}^n (y_i - \hat\mu)^2$. Solving the two likelihood equations $\partial \log L(\mu, \sigma, \gamma)/\partial\sigma = 0$ and $\partial \log L(\mu, \sigma, \gamma)/\partial\gamma = 0$ yields the MLEs of $\sigma$ and $\gamma$ in closed form:
$$\hat\sigma^2 = \frac{\sum_{i=1}^n (y_i - \hat\mu)^2}{2\sum_{i=1}^n d_i} = \frac{w_2}{2w_1} \quad \text{and} \quad \hat\gamma^2 = \frac{\sum_{i=1}^n (y_i - \hat\mu)^2}{2\left(n - \sum_{i=1}^n d_i\right)} = \frac{w_2}{2(n - w_1)}.$$
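The closed-form MLEs can be sketched on simulated censored data as follows; this is a Python illustration (the paper's computations use R), and all function and variable names are ours, not the paper's.

```python
import math, random

random.seed(1)

def rvs(mu, s):
    # Inverse-CDF draw from Rayleigh(mu, s)
    return mu + s * math.sqrt(-2.0 * math.log(1.0 - random.random()))

# Simulate a randomly censored sample, then apply the closed-form MLEs:
# mu_hat = y_(1), sigma_hat^2 = w2/(2 w1), gamma_hat^2 = w2/(2 (n - w1)).
mu, sigma, gamma = 2.0, 1.5, 1.0
n = 100_000
y, d = [], []
for _ in range(n):
    x, t = rvs(mu, sigma), rvs(mu, gamma)
    y.append(min(x, t))
    d.append(1 if x <= t else 0)

mu_hat = min(y)                                  # boundary MLE of mu
w1 = sum(d)
w2 = sum((yi - mu_hat)**2 for yi in y)
sigma_hat = math.sqrt(w2 / (2 * w1))
gamma_hat = math.sqrt(w2 / (2 * (n - w1)))
print(round(mu_hat, 2), round(sigma_hat, 2), round(gamma_hat, 2))
```

At this sample size the three estimates land close to the generating values (2.0, 1.5, 1.0), with the positive bias of the boundary estimator of μ shrinking as n grows.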

3.1. Variance in the Maximum Likelihood Estimator

Proposition 1.
Let y 1 , y 2 , , y n denote independent and identically distributed random variables that share a common probability density function
$$f(y_i \mid \mu, \beta) = \beta\,(y_i - \mu)\exp\left\{-\frac{\beta\,(y_i - \mu)^2}{2}\right\}; \quad y_i > \mu \ge 0, \ \beta = \frac{1}{\sigma^2} + \frac{1}{\gamma^2}.$$
Then the minimum order statistic $y_{(1)} = \min\{y_1, y_2, \ldots, y_n\}$ follows a two-parameter Rayleigh distribution with location parameter $\mu$ and scale parameter $1/\sqrt{n\beta}$, i.e.,
$$y_{(1)} \sim \text{Rayleigh}\left(\mu, \frac{1}{\sqrt{n\beta}}\right).$$
Proof. 
The CDF corresponding to (6) is
$$F(y \mid \mu, \beta) = 1 - \exp\left\{-\frac{\beta\,(y - \mu)^2}{2}\right\}; \quad y > \mu \ge 0.$$
For the minimum of n independent random variables, we have
$$F_{y_{(1)}}(y) = 1 - P\left(y_{(1)} > y\right) = 1 - P\left(y_1 > y,\, y_2 > y,\, \ldots,\, y_n > y\right) = 1 - \prod_{i=1}^n P(y_i > y) = 1 - \left[1 - F(y)\right]^n.$$
Substituting (8) gives
$$F_{y_{(1)}}(y) = 1 - \left[1 - \left(1 - \exp\left\{-\frac{\beta\,(y-\mu)^2}{2}\right\}\right)\right]^n = 1 - \left[\exp\left\{-\frac{\beta\,(y-\mu)^2}{2}\right\}\right]^n = 1 - \exp\left\{-\frac{n\beta\,(y-\mu)^2}{2}\right\}.$$
Rewriting the exponent,
$$F_{y_{(1)}}(y) = 1 - \exp\left\{-\frac{(y-\mu)^2}{2\left(1/\sqrt{n\beta}\right)^2}\right\}; \quad y > \mu \ge 0.$$
Expression (9) coincides with the cumulative distribution function of a two-parameter Rayleigh distribution having location $\mu$ and scale $1/\sqrt{n\beta}$. Differentiating (9) with respect to $y$ gives the corresponding probability density function
$$f_{y_{(1)}}(y) = \frac{d}{dy} F_{y_{(1)}}(y) = n\beta\,(y - \mu)\exp\left\{-\frac{n\beta\,(y - \mu)^2}{2}\right\}; \quad y > \mu \ge 0.$$
This density matches precisely the form of a Rayleigh density with parameters $\mu$ and $1/\sqrt{n\beta}$.
Consequently, it follows that $y_{(1)} \sim \text{Rayleigh}\left(\mu, 1/\sqrt{n\beta}\right)$, which thereby completes the theoretical proof.    □
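Proposition 1 is easy to verify by Monte Carlo: the average of many simulated minima should match the Rayleigh(μ, 1/√(nβ)) mean, μ + √(π/(2nβ)). An illustrative Python check (the paper's computations are in R; parameter values are arbitrary):

```python
import math, random

random.seed(7)

# Monte Carlo check of Proposition 1: the minimum of n iid
# Rayleigh(mu, 1/sqrt(beta)) draws is Rayleigh(mu, 1/sqrt(n*beta)),
# hence E[y_(1)] = mu + sqrt(pi / (2 n beta)).
mu, beta, n = 2.0, 1.4, 10
s = 1.0 / math.sqrt(beta)

def sample_min():
    return min(mu + s * math.sqrt(-2.0 * math.log(1.0 - random.random()))
               for _ in range(n))

reps = 100_000
avg = sum(sample_min() for _ in range(reps)) / reps
print(abs(avg - (mu + math.sqrt(math.pi / (2 * n * beta)))) < 0.005)  # True
```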
Since $y_{(1)}$ follows $\text{Rayleigh}(\mu, 1/\sqrt{n\beta})$, the mean and variance of $\hat\mu$ are
$$E(\hat\mu) = E(y_{(1)}) = \mu + \sqrt{\frac{\pi}{2n\beta}} \quad \text{and} \quad \hat V(\hat\mu) = \frac{4-\pi}{2n\hat\beta} = \frac{(4-\pi)\,w_2}{4n^2}.$$
Note that μ ^ = y ( 1 ) is a consistent but biased estimator of μ . By estimating μ with the sample minimum, the original three-parameter estimation problem reduces to a two-parameter problem involving only σ and γ , simplifying subsequent inference.
The second-order partial derivatives, needed for the observed Fisher information matrix, are
$$\frac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \sigma^2} = \frac{2w_1}{\sigma^2} - \frac{3w_2}{\sigma^4},$$
$$\frac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \gamma^2} = \frac{2(n - w_1)}{\gamma^2} - \frac{3w_2}{\gamma^4},$$
$$\frac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \sigma\, \partial \gamma} = 0 = \frac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \gamma\, \partial \sigma}.$$
It is worth acknowledging that the maximum likelihood estimator of μ lies on the boundary of the parameter space, which may affect the regularity conditions underlying the Fisher information matrix. However, the MLE of μ is super-consistent and converges at rate n, which mitigates its impact on the asymptotic distribution of σ ^ and γ ^ . Nevertheless, the Fisher information matrix derived below focuses on the scale parameters σ and γ , which are interior points, and the resulting asymptotic intervals are commonly used in practice as approximations.
From (1), we have
$$E(w_1) = E\left(\sum_{i=1}^n D_i\right) = np = \frac{n\gamma^2}{\sigma^2 + \gamma^2}.$$
For the Rayleigh distribution, it can be shown that
$$E(w_2) = E\left(\sum_{i=1}^n (y_i - \hat\mu)^2\right) = \frac{2(n-1)\,\sigma^2\gamma^2}{\sigma^2 + \gamma^2}.$$
Now, the Fisher information matrix for ( σ , γ ) can be derived as
$$I(\sigma, \gamma) = -E \begin{pmatrix} \dfrac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \sigma^2} & \dfrac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \sigma\, \partial \gamma} \\[2ex] \dfrac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \gamma\, \partial \sigma} & \dfrac{\partial^2 \log L(\mu, \sigma, \gamma)}{\partial \gamma^2} \end{pmatrix} = \begin{pmatrix} \dfrac{4n\gamma^2}{\sigma^2(\sigma^2 + \gamma^2)} & 0 \\[2ex] 0 & \dfrac{4n\sigma^2}{\gamma^2(\sigma^2 + \gamma^2)} \end{pmatrix}.$$
Since the Fisher information matrix I ( σ , γ ) is diagonal here, it can be conveniently inverted by simply inverting each element along its main diagonal. The diagonal elements of the inverse information matrix I 1 ( σ , γ ) provide the asymptotic variance estimators for the maximum likelihood estimators (MLEs). Specifically, these variance estimates are given by
$$V(\hat\sigma) = \frac{\sigma^2(\sigma^2 + \gamma^2)}{4n\gamma^2} \quad \text{and} \quad V(\hat\gamma) = \frac{\gamma^2(\sigma^2 + \gamma^2)}{4n\sigma^2}.$$
These variance expressions depend on the unknown parameters σ and γ . For practical application, usable estimates of the variances are obtained by replacing σ and γ with their corresponding MLEs. The estimated variances for the estimators σ ^ and γ ^ are subsequently derived as follows:
$$\hat V(\hat\sigma) = \frac{\hat\sigma^2(\hat\sigma^2 + \hat\gamma^2)}{4n\hat\gamma^2} = \frac{w_2}{8w_1^2},$$
$$\hat V(\hat\gamma) = \frac{\hat\gamma^2(\hat\sigma^2 + \hat\gamma^2)}{4n\hat\sigma^2} = \frac{w_2}{8(n - w_1)^2}.$$

3.2. Asymptotic Confidence Intervals

Given that $y_{(1)}$ follows a $\text{Rayleigh}(\mu, 1/\sqrt{n\beta})$ distribution, the two-sided equal-tailed confidence interval of $(1-\alpha)\times 100\%$ for the parameter $\mu$ can be readily deduced in accordance with the method presented in reference [12]. Let
$$w_2 = \sum_{i=1}^n \left(y_i - y_{(1)}\right)^2.$$
Then, the confidence interval (CI) of μ is
$$\left( y_{(1)} - \sqrt{\frac{w_2}{2n(n-1)}\, F_{2,\,2(n-1)}(1-\alpha/2)},\ \ y_{(1)} - \sqrt{\frac{w_2}{2n(n-1)}\, F_{2,\,2(n-1)}(\alpha/2)} \right),$$
where $F_{2,\,2(n-1)}(\alpha/2)$ denotes the upper $\alpha/2 \times 100\%$ critical value of Snedecor's F-distribution with numerator and denominator degrees of freedom equal to $2$ and $2(n-1)$, respectively. Herein, $(1-\alpha)$ serves as the confidence coefficient, with the parameter $\alpha$ satisfying the constraint $0 < \alpha < 1$.
This interval construction assumes that $y_{(1)}$ is sufficient and that the data exactly follow the two-parameter Rayleigh distribution. Under model misspecification, the actual coverage probability may differ from the nominal level. In such cases, bootstrap methods offer a more robust alternative for interval estimation of $\mu$.
For the scale parameters σ and γ , the asymptotic sampling distribution associated with the MLEs ( σ ^ , γ ^ ) follows a bivariate normal distribution denoted as BND ( σ , γ ) , I 1 ( σ , γ ) . On this basis, we are able to derive the CI for the parameters ( σ , γ ) by means of the normal approximation method. The two-sided 100 × ( 1 α ) % CI corresponding to σ and γ can be expressed as
$$\text{CI}_\sigma:\ \hat\sigma \pm z_{1-\alpha/2}\sqrt{\widehat{\text{var}}(\hat\sigma)},$$
$$\text{CI}_\gamma:\ \hat\gamma \pm z_{1-\alpha/2}\sqrt{\widehat{\text{var}}(\hat\gamma)}.$$
Here, $z_{1-\alpha/2}$ denotes the $(1-\alpha/2)$th quantile of the standard normal distribution. It should be noted that these confidence intervals are based on asymptotic normal approximations, which may not be accurate for small or moderately sized samples, especially under heavy censoring. In practice, alternative methods such as bootstrap confidence intervals could be considered to potentially improve coverage accuracy in such scenarios.
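The normal-approximation intervals for σ and γ reduce to simple arithmetic on the summaries $w_1$ and $w_2$. A dependency-free Python sketch (the paper's computations use R): the inputs n = 50, w₁ = 30, w₂ = 90 are made-up illustrative numbers, and the 0.975 standard-normal quantile z is hard-coded.

```python
import math

def scale_cis(n, w1, w2, z=1.959964):
    """95% normal-approximation CIs for sigma and gamma from (n, w1, w2)."""
    sigma_hat = math.sqrt(w2 / (2 * w1))
    gamma_hat = math.sqrt(w2 / (2 * (n - w1)))
    se_sigma = math.sqrt(w2 / (8 * w1**2))        # sqrt of V-hat(sigma_hat)
    se_gamma = math.sqrt(w2 / (8 * (n - w1)**2))  # sqrt of V-hat(gamma_hat)
    return ((sigma_hat - z * se_sigma, sigma_hat + z * se_sigma),
            (gamma_hat - z * se_gamma, gamma_hat + z * se_gamma))

# Hypothetical summaries: n = 50 units, w1 = 30 observed failures, w2 = 90.
ci_s, ci_g = scale_cis(50, 30, 90.0)
print([round(v, 3) for v in ci_s + ci_g])  # [1.006, 1.444, 1.171, 1.829]
```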
In Section 6, we carry out a Monte Carlo simulation experiment to calculate the upper and lower confidence bounds of the parameters as well as the average lengths of their confidence intervals by using simulated samples, and further obtain the corresponding coverage probabilities for these intervals accordingly.

3.3. Method of Moments

Upon equating sample moments with the corresponding population moments, the relevant estimates of the model parameters are obtainable via this moment-matching approach. For the random variables $D$ and $Y$, we define their sample counterparts based on the observed data as $\bar d = \frac{1}{n}\sum_{i=1}^n d_i$, $\bar y = \frac{1}{n}\sum_{i=1}^n y_i$, $s_y^2 = \frac{1}{n}\sum_{i=1}^n (y_i - \bar y)^2$, and the population moments
$$E(D) = \frac{\gamma^2}{\sigma^2 + \gamma^2}, \qquad E(Y) = \mu + \sqrt{\frac{\pi\,\sigma^2\gamma^2}{2(\sigma^2 + \gamma^2)}}, \qquad V(Y) = \frac{4-\pi}{2}\cdot\frac{\sigma^2\gamma^2}{\sigma^2 + \gamma^2}.$$
By equating the respective moments to one another, we thus obtain
$$\bar d = \frac{\gamma^2}{\sigma^2 + \gamma^2}; \qquad \bar y = \mu + \sqrt{\frac{\pi\,\sigma^2\gamma^2}{2(\sigma^2 + \gamma^2)}}; \qquad s_y^2 = \frac{4-\pi}{2}\cdot\frac{\sigma^2\gamma^2}{\sigma^2 + \gamma^2}.$$
We can derive the moment estimators corresponding to each parameter as
$$\hat\mu = \bar y - s_y\sqrt{\frac{\pi}{4-\pi}}; \qquad \hat\sigma^2 = \frac{2 s_y^2}{(4-\pi)\,\bar d}; \qquad \hat\gamma^2 = \frac{2 s_y^2}{(4-\pi)(1-\bar d)}.$$
It can be observed that these moment estimators provide unique solutions for μ , σ 2 , and γ 2 since the sample moments satisfy 0 < d ¯ < 1 and s y 2 > 0 in practice. This indicates that the parameters are identifiable from the chosen moments.
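The three moment estimators can be sketched and sanity-checked on simulated data. This is an illustrative Python version (the paper uses R); the generating values (μ = 2, σ = 1.5, γ = 1) and the helper name `moment_estimates` are ours.

```python
import math, random

random.seed(2)

def rvs(m, s):
    # Inverse-CDF draw from Rayleigh(m, s)
    return m + s * math.sqrt(-2.0 * math.log(1.0 - random.random()))

def moment_estimates(y, d):
    """Moment estimators: mu_hat = ybar - s_y*sqrt(pi/(4-pi)),
    sigma2_hat = 2 s_y^2 / ((4-pi) dbar), gamma2_hat = 2 s_y^2 / ((4-pi)(1-dbar))."""
    n = len(y)
    ybar = sum(y) / n
    dbar = sum(d) / n
    sy2 = sum((yi - ybar) ** 2 for yi in y) / n
    sy = math.sqrt(sy2)
    mu_hat = ybar - sy * math.sqrt(math.pi / (4 - math.pi))
    sigma2_hat = 2 * sy2 / ((4 - math.pi) * dbar)
    gamma2_hat = 2 * sy2 / ((4 - math.pi) * (1 - dbar))
    return mu_hat, sigma2_hat, gamma2_hat

mu, sigma, gamma = 2.0, 1.5, 1.0
y, d = [], []
for _ in range(100_000):
    x, t = rvs(mu, sigma), rvs(mu, gamma)
    y.append(min(x, t))
    d.append(1 if x <= t else 0)

mu_hat, s2_hat, g2_hat = moment_estimates(y, d)
print(abs(mu_hat - mu) < 0.05, abs(s2_hat - sigma**2) < 0.1, abs(g2_hat - gamma**2) < 0.1)
```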

3.4. Least-Squares Estimates

Let $Y_{(1)} \le Y_{(2)} \le \cdots \le Y_{(n)}$ denote the order statistics from the marginal distribution of $Y$, where $Y$ obeys a two-parameter Rayleigh distribution with location parameter $\mu$ and scale parameter $1/\sqrt{\beta}$, with $\beta = \frac{1}{\sigma^2} + \frac{1}{\gamma^2}$. The cumulative distribution function (CDF) corresponding to the random variable $Y$ is expressed as
$$F_Y(y) = 1 - \exp\left\{-\frac{\beta}{2}(y - \mu)^2\right\}; \quad y > \mu \ge 0.$$
Since $F_Y(Y_{(i)})$ is the $i$th order statistic of a uniform sample, its expected value is
$$E\left[F_Y(Y_{(i)})\right] = \frac{i}{n+1}; \quad i = 1, 2, \ldots, n.$$
The least-squares estimates of the parameters μ , σ , and γ can be obtained by minimizing the following objective function
$$Q_1 = \sum_{i=1}^n \left[ F_Y(Y_{(i)}) - E\left[F_Y(Y_{(i)})\right] \right]^2 = \sum_{i=1}^n \left[ 1 - \exp\left\{-\frac{\beta}{2}\left(Y_{(i)} - \mu\right)^2\right\} - \frac{i}{n+1} \right]^2.$$
To account for the heterogeneity in the variances of F Y ( Y ( i ) ) , we consider the weighted least-squares method. The variance in F Y ( Y ( i ) ) is approximately
$$V\left[F_Y(Y_{(i)})\right] = \frac{i(n-i+1)}{(n+1)^2(n+2)}; \quad i = 1, 2, \ldots, n.$$
Taking the weights as k i = 1 / V [ F Y ( Y ( i ) ) ] , the weighted least-squares estimates are acquired through the minimization of
$$Q_2 = \sum_{i=1}^n k_i \left[ F_Y(Y_{(i)}) - E\left[F_Y(Y_{(i)})\right] \right]^2 = \sum_{i=1}^n \frac{(n+1)^2(n+2)}{i(n-i+1)} \left[ 1 - \exp\left\{-\frac{\beta}{2}\left(Y_{(i)} - \mu\right)^2\right\} - \frac{i}{n+1} \right]^2.$$
The minimization is carried out for μ , σ , and γ , where β = 1 σ 2 + 1 γ 2 .
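A minimal sketch of the unweighted criterion $Q_1$ in Python (the paper uses R): an off-the-shelf optimizer would be layered on top to obtain the actual LSEs, and here we only verify on simulated data that $Q_1$ prefers the generating parameters to a distorted σ. All names and parameter values are illustrative.

```python
import math, random

random.seed(3)

def q1(y_sorted, mu, sigma, gamma):
    """Least-squares criterion: sum of squared gaps between the fitted CDF
    at the order statistics and the plotting positions i/(n+1)."""
    n = len(y_sorted)
    beta = 1.0 / sigma**2 + 1.0 / gamma**2
    total = 0.0
    for i, yi in enumerate(y_sorted, start=1):
        f = 1.0 - math.exp(-0.5 * beta * (yi - mu)**2) if yi > mu else 0.0
        total += (f - i / (n + 1))**2
    return total

def rvs(m, s):
    return m + s * math.sqrt(-2.0 * math.log(1.0 - random.random()))

mu, sigma, gamma = 2.0, 1.5, 1.0
y = sorted(min(rvs(mu, sigma), rvs(mu, gamma)) for _ in range(2000))
print(q1(y, mu, sigma, gamma) < q1(y, mu, 2 * sigma, gamma))  # True
```

The weighted criterion $Q_2$ differs only by the factor $k_i = (n+1)^2(n+2) / (i(n-i+1))$ inside the sum.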

4. Bayesian Estimation

We proceed to develop a Bayesian inferential framework for the unknown parameters of the two-parameter Rayleigh distribution, where the whole analytical process is based on randomly censored data. On this basis, we derive the Bayes estimators under the generalized entropy loss function (GELF), as well as the associated highest posterior density (HPD) credible intervals for each parameter. For $\sigma$ and $\gamma$, we impose the prior assumption that $\sigma^2$ and $\gamma^2$, respectively, follow conjugate inverse gamma distributions with hyperparameters $(a_1, b_1)$ and $(a_2, b_2)$; the corresponding PDFs are expressed as
$$g_1(\sigma^2 \mid a_1, b_1) \propto \frac{1}{(\sigma^2)^{a_1+1}}\, e^{-b_1/\sigma^2}; \quad \sigma^2 > 0, \ a_1, b_1 > 0,$$
$$g_2(\gamma^2 \mid a_2, b_2) \propto \frac{1}{(\gamma^2)^{a_2+1}}\, e^{-b_2/\gamma^2}; \quad \gamma^2 > 0, \ a_2, b_2 > 0.$$
For the location parameter μ , we introduce the subsequent improper uniform prior distribution
$$g_3(\mu) \propto \frac{1}{c}; \quad 0 \le \mu < y_{(1)}, \ c > 0.$$
The parameters $\mu$, $\sigma^2$, and $\gamma^2$ are assumed to be independent a priori.
The joint posterior distribution is
$$\pi(\mu, \sigma^2, \gamma^2 \mid y, d) = \frac{L(y, d \mid \mu, \sigma^2, \gamma^2)\, g_1(\sigma^2 \mid a_1, b_1)\, g_2(\gamma^2 \mid a_2, b_2)\, g_3(\mu)}{\displaystyle\iiint L(y, d \mid \mu, \sigma^2, \gamma^2)\, g_1(\sigma^2 \mid a_1, b_1)\, g_2(\gamma^2 \mid a_2, b_2)\, g_3(\mu)\, d\mu\, d\sigma^2\, d\gamma^2}$$
$$= \frac{\displaystyle\prod_{i=1}^n (y_i - \mu)\, \frac{1}{(\sigma^2)^{a_1 + w_1 + 1}}\, e^{-\left(b_1 + \frac{1}{2}\sum_{i=1}^n (y_i - \mu)^2\right)/\sigma^2}\, \frac{1}{(\gamma^2)^{a_2 + n - w_1 + 1}}\, e^{-\left(b_2 + \frac{1}{2}\sum_{i=1}^n (y_i - \mu)^2\right)/\gamma^2}}{\displaystyle\int_0^{y_{(1)}} \int_0^\infty \int_0^\infty \prod_{i=1}^n (y_i - \mu)\, \frac{1}{(\sigma^2)^{a_1 + w_1 + 1}}\, e^{-\left(b_1 + \frac{1}{2}\sum_{i=1}^n (y_i - \mu)^2\right)/\sigma^2}\, \frac{1}{(\gamma^2)^{a_2 + n - w_1 + 1}}\, e^{-\left(b_2 + \frac{1}{2}\sum_{i=1}^n (y_i - \mu)^2\right)/\gamma^2}\, d\gamma^2\, d\sigma^2\, d\mu},$$
where $w_1 = \sum_{i=1}^n d_i$.
This joint posterior density function does not admit an explicit closed form. Therefore, to compute the Bayes estimates, one may employ a numerical integration procedure based on the posterior distribution given in (17) under an appropriate loss function. Alternatively, the importance sampling algorithm can be employed to acquire simulation-consistent Bayesian estimators and construct the corresponding HPD-credible intervals, in accordance with the methodological approach introduced by [14].
To implement the Gibbs sampling procedure, we employ a technique analogous to that described by [15]. Setting b 1 = b 2 = b , we take the posterior as
π ( μ , σ 2 , γ 2 y , d ) L ( y , d μ , σ 2 , γ 2 ) · g 1 ( σ 2 ) · g 2 ( γ 2 ) · g 3 ( μ ) ,
It follows that the joint posterior distribution can be written in the form of
$$\pi(\mu, \sigma^2, \gamma^2 \mid y, d) = \frac{f_1(\sigma^2 \mid \mu, y, d)\, f_2(\gamma^2 \mid \mu, y, d)\, f_3(\mu \mid y, d)}{\displaystyle\int_{\mu < y_{(1)}} \iint f_1(\sigma^2 \mid \mu, y, d)\, f_2(\gamma^2 \mid \mu, y, d)\, f_3(\mu \mid y, d)\, d\sigma^2\, d\gamma^2\, d\mu},$$
where,
$$
\begin{aligned}
f_1(\sigma^2 \mid \mu, y, d) &\propto (\sigma^2)^{-(a_1 + w_1 + 1)} \exp\left\{-\frac{1}{\sigma^2}\left(b + \frac{1}{2} S(\mu)\right)\right\}, \\
f_2(\gamma^2 \mid \mu, y, d) &\propto (\gamma^2)^{-(a_2 + n - w_1 + 1)} \exp\left\{-\frac{1}{\gamma^2}\left(b + \frac{1}{2} S(\mu)\right)\right\}, \\
f_3(\mu \mid y, d) &\propto \prod_{i=1}^n (y_i - \mu)\, \exp\left\{-\frac{\beta}{2} S(\mu)\right\} I_{(-\infty,\, y_{(1)})}(\mu), \quad \text{with } S(\mu) = \sum_{i=1}^n (y_i - \mu)^2.
\end{aligned}
$$
We note here that $f_1(\sigma^2 \mid \mu, y, d)$ and $f_2(\gamma^2 \mid \mu, y, d)$ correspond to inverse gamma density functions with a common scale parameter $b + \frac{1}{2}\sum_{i=1}^n (y_i - \mu)^2$ but different shape parameters $(a_1 + w_1)$ and $(a_2 + n - w_1)$, respectively.
The density function f 3 ( μ y , d ) is a valid proper density function. Its cumulative distribution function (CDF) can be derived as
$$F_3(\mu) = \frac{\left(b + \frac{1}{2}\sum_{i=1}^n \left(y_i - y_{(1)}\right)^2\right)^{a_1 + a_2 + n - 1}}{\left(b + \frac{1}{2}\sum_{i=1}^n (y_i - \mu)^2\right)^{a_1 + a_2 + n - 1}}; \quad 0 \le \mu < y_{(1)}.$$
This CDF is readily invertible, allowing for straightforward generation of random samples for $\mu$ via the probability integral transform. Specifically, for a uniform random variate $u \sim U(0, 1)$, we can obtain a sample of $\mu$ by solving
$$\mu = F_3^{-1}(u) = y_{(1)} - \sqrt{\frac{2}{n}\left[\left(b + \frac{1}{2}\sum_{i=1}^n \left(y_i - y_{(1)}\right)^2\right) u^{-1/(a_1 + a_2 + n - 1)} - b\right]}.$$
Since the prior for μ is now defined on the finite interval [ 0 , y ( 1 ) ) , it is a proper prior. Together with the proper priors for σ 2 and γ 2 , the joint posterior distribution π ( μ , σ 2 , γ 2 y , d ) is guaranteed to be proper, ensuring the validity of Bayesian inference even for small sample sizes. This invertibility facilitates the implementation of the Gibbs sampling procedure.
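In implementations, one can also invert $F_3$ numerically by bisection, which relies only on the ratio form of $F_3$ given above rather than on an algebraic inverse. A Python sketch (the paper uses R) with purely illustrative data and hyperparameter values:

```python
import math, random

random.seed(11)

def F3(mu, y, a1, a2, b):
    """Ratio-form CDF of mu given the data: [(b + S(y_(1))/2) / (b + S(mu)/2)]^c."""
    y1 = min(y)
    c = a1 + a2 + len(y) - 1
    num = b + 0.5 * sum((yi - y1) ** 2 for yi in y)
    den = b + 0.5 * sum((yi - mu) ** 2 for yi in y)
    return (num / den) ** c

def draw_mu(y, a1, a2, b, tol=1e-12):
    """One draw of mu by bisection on F3, restricted to the prior support [0, y_(1))."""
    f0 = F3(0.0, y, a1, a2, b)
    u = f0 + random.random() * (1.0 - f0)   # rescale u to the truncated support
    lo, hi = 0.0, min(y)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F3(mid, y, a1, a2, b) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y = [2.3, 2.9, 3.1, 3.8, 4.4]               # toy data, illustrative only
m = draw_mu(y, a1=1.0, a2=1.0, b=1.0)
print(0.0 <= m < min(y))  # True
```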

4.1. Bayes Estimates Under GELF

For Bayesian estimation, we employ the generalized entropy loss function (GELF). Although this loss function was introduced earlier [16], it remains relevant in modern reliability analysis, as demonstrated in recent Bayesian studies on lifetime distributions under various censoring schemes [13,17]. The generalized entropy loss function is defined as follows:
$$L(\theta^*, \theta) = \left(\frac{\theta^*}{\theta}\right)^{\delta} - \delta \log\left(\frac{\theta^*}{\theta}\right) - 1; \quad \delta \ne 0.$$
Here, $\theta^*$ is an estimate of $\theta$. For $\delta < 0$, underestimation incurs a more severe penalty than overestimation, whereas for $\delta > 0$, the situation is reversed, with overestimation being penalized more heavily. In particular, $\delta = 1$ corresponds to the entropy loss function (ELF), $\delta = -1$ corresponds to the squared error loss function (SELF), under which the Bayes estimator reduces to the posterior mean, and $\delta = -2$ corresponds to the precautionary loss function (PLF). These values represent different degrees of asymmetry and allow us to comprehensively assess the impact of the loss function on Bayesian estimates.
Within the Bayesian estimation framework, we select the θ value that minimizes the risk function r ( θ , θ ) = E L ( θ , θ ) | y , d . Deriving the Bayes estimator of θ involves computing the derivative of r ( θ , θ ) with respect to θ and equating the derived expression to zero. The Bayes estimator for θ and its related risk function are formulated as
$$\theta^* = \left[E\left(\theta^{-\delta} \mid y, d\right)\right]^{-1/\delta},$$
$$r(\theta^*, \theta) = E\left[\log \theta^{\delta} \mid y, d\right] + \log E\left[\theta^{-\delta} \mid y, d\right] = E\left[\log \theta^{\delta} \mid y, d\right] - \log (\theta^*)^{\delta}.$$
We can also characterize Bayesian estimation of σ and γ under the GELF.
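Given posterior draws, a GELF point estimate is just $\theta^* = \big(\text{mean of } \theta^{-\delta}\big)^{-1/\delta}$. A minimal Python sketch (the function name is ours, and the three draws are toy values):

```python
def gelf_estimate(draws, delta):
    """GELF point estimate from posterior draws: (E[theta^(-delta)])^(-1/delta)."""
    m = sum(th ** (-delta) for th in draws) / len(draws)
    return m ** (-1.0 / delta)

draws = [1.0, 2.0, 4.0]
print(round(gelf_estimate(draws, -1.0), 4))  # 2.3333 (posterior mean, SELF case)
print(round(gelf_estimate(draws, 1.0), 4))   # 1.7143 (harmonic mean, ELF case)
```

The two printed values illustrate the asymmetry: positive δ pulls the estimate below the posterior mean, guarding against overestimation.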
Although the improper uniform prior $g_3(\mu) \propto 1$ is used for the location parameter $\mu$, the resulting joint posterior distribution $\pi(\mu, \sigma^2, \gamma^2 \mid y, d)$ remains proper. This can be seen by noting that the likelihood function in (3) contains the factor $\prod_{i=1}^n (y_i - \mu)$, which is bounded on the interval $[0, y_{(1)})$ since $0 \le y_i - \mu \le y_i < \infty$. The exponential term $\exp\left\{-\frac{\beta}{2}\sum_{i=1}^n (y_i - \mu)^2\right\}$ is also bounded on this interval. Hence, the likelihood, viewed as a function of $\mu$, is bounded on the truncated support $[0, y_{(1)})$.
Moreover, the constraint μ < y ( 1 ) imposed by the observed data truncates the support of μ to the finite interval [ 0 , y ( 1 ) ) . Over this compact set, the bounded likelihood combined with the proper inverse gamma priors for σ 2 and γ 2 ensures that the joint posterior is integrable. More formally,
∫_0^{y_(1)} ∫_0^∞ ∫_0^∞ L(μ, σ², γ² | y, d) g1(σ²) g2(γ²) g3(μ) dσ² dγ² dμ < ∞,
because:
  • L(μ, σ², γ² | y, d) is bounded in μ on [0, y_(1));
  • g1(σ²) and g2(γ²) are proper inverse gamma densities;
  • the improper prior g3(μ) ∝ 1 integrates to the finite constant y_(1) over the finite interval [0, y_(1)).
Consequently, the joint posterior is well-defined and valid for Bayesian inference.

4.2. Gibbs Sampling Procedure

The Gibbs sampling algorithm (Algorithm 1), which is used to generate samples from the joint posterior distribution π ( μ , σ 2 , γ 2 y , d ) , is implemented as follows:
Algorithm 1 Gibbs Sampling for Bayesian Estimation under GELF.
Require: Initial values μ^(0), σ²^(0), γ²^(0); hyperparameters a1, a2, b; total iterations M; burn-in period B; loss function parameter δ
 1: for t = 1 to M do
 2:    Sample σ²^(t) ~ Inv-Gamma(a1 + w1, b + (1/2) ∑_{i=1}^n (y_i − μ^(t−1))²)
 3:    Sample γ²^(t) ~ Inv-Gamma(a2 + n − w1, b + (1/2) ∑_{i=1}^n (y_i − μ^(t−1))²)
 4:    Sample μ^(t) using the inverse CDF method:
 5:       generate u ~ U(0, 1)
 6:       compute μ^(t) = F3^(−1)(u)
 7: end for
 8: Burn-in: discard the first B samples {(μ^(j), σ^(j), γ^(j))}_{j=1}^B
 9: Bayesian estimation under GELF:
10:    μ̂ = [(1/(M−B)) ∑_{j=B+1}^M (μ^(j))^(−δ)]^(−1/δ)
11:    σ̂ = [(1/(M−B)) ∑_{j=B+1}^M (σ^(j))^(−δ)]^(−1/δ)
12:    γ̂ = [(1/(M−B)) ∑_{j=B+1}^M (γ^(j))^(−δ)]^(−1/δ)
13: Return: μ̂, σ̂, γ̂
When no prior information is available, we can employ non-informative prior distributions for the relevant parameters. For the scale parameters σ 2 and γ 2 , this corresponds to setting a 1 = a 2 = b = 0 in the inverted gamma priors. For the location parameter μ , we retain the improper uniform prior g 3 ( μ ) 1 . Under these non-informative priors, the posterior distribution is dominated by the likelihood.
Convergence of the Gibbs sampler was assessed using trace plots and the Gelman–Rubin diagnostic. We ran three parallel chains with over-dispersed starting values, each for M = 20,000 iterations, discarding the first B = 5000 iterations as burn-in. In all cases, R ^ values were below 1.1, indicating that convergence was achieved. The effective sample sizes (ESSs) for all parameters exceeded 1000, ensuring reliable posterior inference.
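For concreteness, the loop in Algorithm 1 can be sketched in pure Python (standard library only). This is a minimal illustration under stated assumptions, not the authors' implementation: the helper names are ours, and the draw of μ from its full conditional, π(μ | σ², γ², y) ∝ ∏(y_i − μ) exp(−(β/2) ∑(y_i − μ)²) on [0, y_(1)), replaces the inverse CDF F3^(−1) with a discretized inverse-CDF step on a fixed grid.

```python
import math
import random

def inv_gamma(shape, rate, rng):
    # If G ~ Gamma(shape, scale = 1/rate), then 1/G ~ Inv-Gamma(shape, rate).
    return 1.0 / rng.gammavariate(shape, 1.0 / rate)

def sample_mu(y, beta, n_grid, rng):
    """Discretized inverse-CDF draw from the full conditional of mu on [0, y_(1))."""
    y1 = min(y)
    grid = [y1 * (k + 0.5) / n_grid for k in range(n_grid)]
    logw = [sum(math.log(yi - mu) for yi in y)
            - 0.5 * beta * sum((yi - mu) ** 2 for yi in y)
            for mu in grid]
    top = max(logw)                       # stabilize before exponentiating
    w = [math.exp(l - top) for l in logw]
    u = rng.random() * sum(w)
    acc = 0.0
    for mu, wi in zip(grid, w):
        acc += wi
        if acc >= u:
            return mu
    return grid[-1]

def gibbs(y, d, a1, a2, b, M, B, rng=None):
    """Gibbs sampler for (mu, sigma, gamma); returns post-burn-in draws."""
    rng = rng or random.Random(1)
    n, w1 = len(y), sum(d)
    mu, draws = 0.0, []
    for t in range(M):
        rate = b + 0.5 * sum((yi - mu) ** 2 for yi in y)
        sigma2 = inv_gamma(a1 + w1, rate, rng)       # shared rate, per Algorithm 1
        gamma2 = inv_gamma(a2 + n - w1, rate, rng)
        beta = 1.0 / sigma2 + 1.0 / gamma2
        mu = sample_mu(y, beta, 200, rng)            # 200 grid points: speed/accuracy trade-off
        if t >= B:
            draws.append((mu, math.sqrt(sigma2), math.sqrt(gamma2)))
    return draws
```

The grid resolution (200 points) is an arbitrary illustrative choice; a production implementation would invert F3 numerically to the desired tolerance.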

4.3. HPD-Credible Intervals and Chen–Shao Algorithm

A 100(1 − α)% highest posterior density (HPD) credible interval is an interval that satisfies two criteria: it contains a posterior probability mass of (1 − α), and the posterior density at every point inside the interval exceeds the density at every point outside it. Consequently, the HPD interval is the shortest credible interval achievable for a specified probability level (1 − α). Building on these properties, ref. [14] proposed a computational procedure for deriving the HPD-credible interval for the parameter σ.
The Chen–Shao algorithm [14] is given in Algorithm 2.
Algorithm 2 Finding the HPD-credible interval.
Require: MCMC total iterations M, burn-in period B, significance level α
Step 1: Prepare effective posterior samples.
   Retain the samples after burn-in, {σ²_j, j = B+1, B+2, …, M}, giving an effective sample size N = M − B.
   Sort the effective sample set in ascending order to obtain the order statistics σ²_(1) ≤ σ²_(2) ≤ … ≤ σ²_(N).
Step 2: Generate candidate HPD intervals.
   Set K = ⌊(1 − α)N⌋.
   for j = 1 to N − K do
      I_j = [σ²_(j), σ²_(j+K)], with width w_j = σ²_(j+K) − σ²_(j)
   end for
Step 3: Identify the HPD interval (minimum width).
   I_HPD = argmin_{j=1,…,N−K} w_j = [σ²_low, σ²_up]
Step 4: Output the result.
   Return the 100(1 − α)% HPD-credible interval for σ²: [σ²_low, σ²_up].
The same procedure can be applied to obtain HPD-credible intervals for γ 2 and μ . Specifically, after obtaining the posterior samples { γ j 2 } j = B + 1 M and { μ j } j = B + 1 M from the Gibbs sampling algorithm, we sort each set of samples in ascending order and apply Steps 2–3 of Algorithm 2 to find the shortest intervals that contain the desired posterior probability. This yields the 100 ( 1 α ) % HPD-credible intervals for γ 2 and μ , respectively.
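A compact sketch of the Chen–Shao search, assuming the posterior draws are already available as a list (the function name hpd_interval is our own choice):

```python
def hpd_interval(samples, alpha):
    """Chen-Shao style HPD interval: among all intervals spanning
    floor((1 - alpha) * N) consecutive order statistics, return the shortest."""
    s = sorted(samples)                      # Step 1: order statistics
    n = len(s)
    k = int((1 - alpha) * n)                 # Step 2: window length K
    if k < 1 or k >= n:
        raise ValueError("need 0 < (1 - alpha) * N < N")
    best = min(range(n - k), key=lambda j: s[j + k] - s[j])  # Step 3: min width
    return s[best], s[best + k]              # Step 4: output
```

Applying the same function to the chains of γ² and μ yields their HPD-credible intervals, exactly as described above.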

5. Reliability and Experimental Characteristics Estimation

Within the present analytical framework, the two-parameter Rayleigh distribution parameterized by ( μ , σ ) is employed to model lifetime data, while its reliability metrics and experimental behavior are examined using the corresponding estimators derived under a randomly censored sampling scheme.
A key practical consideration is whether a life test can be finished within a predefined time interval. As experimental costs depend directly on the test duration, this information is key to selecting a suitable sampling strategy and determining the sample size for life test specimens.
In the random censoring model, the observed time Y_i = min(X_i, T_i) follows a two-parameter Rayleigh distribution Rayleigh(μ, 1/√β), where β = 1/σ² + 1/γ². The completion time of the experiment is determined by the maximum observed value, i.e., Y_(n) = max(Y_1, Y_2, …, Y_n).
The cumulative distribution function of Y ( n ) is
F_{Y_(n)}(y) = P[Y_(n) ≤ y] = [1 − exp(−(β/2)(y − μ)²)]^n,  y > μ ≥ 0.
The expected time on test (ETT) is defined as the expectation of Y ( n )
ETT = E[Y_(n)] = μ + ∫_0^∞ [1 − F_{Y_(n)}(μ + t)] dt.
Substituting (22) into (23) yields
ETT = μ + ∫_0^∞ [1 − (1 − e^(−βt²/2))^n] dt.
For the Rayleigh distribution, this integral lacks a straightforward closed-form expression yet can be solved via numerical methods. In practice, the observed time on test (OBTT), which is Y ( n ) itself, serves as a natural estimator of ETT.
For the complete data (no censoring) scenario, the test time is determined by the maximum failure time X ( n ) , and its expected value is
ETT_complete = E[X_(n)] = μ + ∫_0^∞ [1 − (1 − e^(−t²/(2σ²)))^n] dt.
In the Bayesian estimates of Table 1, {(μ_j, σ_j, γ_j); j = 1, 2, …, M} are the posterior samples obtained through Gibbs sampling, and ETT(μ, σ, γ) is computed using Equation (24).
Since the expected time on test does not have a closed-form expression for the Rayleigh distribution, both ETT ^ and ETT require numerical integration. For each parameter set ( μ , σ , γ ) , we compute the integral in Equation (24) using adaptive numerical integration methods. In the Bayesian framework, this computation is performed for each posterior sample ( μ j , σ j , γ j ) , and the results are then combined according to Table 1.
For the numerical computation of the expected time on test (ETT), we employed adaptive numerical integration as implemented in the R integrate function. The integration was performed over the interval [ 0 , μ + 10 max ( σ , γ ) ] , which was verified to capture the essential support of the distribution. The relative tolerance was set to 10 4 (for estimation) and 10 6 (for true value computation), with subdivisions up to 1000 to ensure accuracy. Based on our implementation, the computational cost of a single ETT evaluation is approximately 0.002 s on a standard desktop computer. In the simulation study with 10 parameter combinations, three sample sizes ( n = 20 , 50 , 100 ), and N sim = 5000 replications per configuration, the total time for ETT estimation was approximately 0.2 min, which was negligible compared to the overall computational burden.
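The paper's computations use R's integrate; as an assumed, illustrative alternative, the truncated integral in Equation (24) can be approximated with a composite trapezoidal rule in pure Python (the 10·max(σ, γ) cutoff mirrors the integration interval described above; the function name ett is ours):

```python
import math

def ett(mu, sigma, gamma, n, upper_mult=10.0, steps=2000):
    """Expected time on test for the randomly censored two-parameter Rayleigh
    model, ETT = mu + integral_0^inf [1 - (1 - exp(-beta t^2 / 2))^n] dt,
    approximated by the composite trapezoidal rule on a truncated interval."""
    beta = 1.0 / sigma ** 2 + 1.0 / gamma ** 2
    upper = upper_mult * max(sigma, gamma)   # tail beyond this is negligible
    h = upper / steps

    def f(t):
        return 1.0 - (1.0 - math.exp(-0.5 * beta * t * t)) ** n

    area = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        area += f(k * h)
    return mu + h * area
```

A quick sanity check: for n = 1, Y itself is Rayleigh(μ, 1/√β), so ETT should reduce to the Rayleigh mean μ + (1/√β)·√(π/2).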

6. Simulation Studies

A Monte Carlo simulation experiment is carried out to assess and compare the performance of the estimators proposed above. Specifically, to facilitate a direct comparison between classical and Bayesian inference methods, we generate randomly censored samples across a range of values of the parameters μ, σ, and γ. For each generated sample, we compute the MLEs along with their respective confidence intervals, as well as the Bayes estimators derived under the generalized entropy loss function together with the associated HPD-credible intervals.
Before detailing the simulation procedure, we clarify the selection of hyperparameters for informative priors. They are chosen so that the prior means equal the true parameter values (i.e., σ² = b/(a1 − 1) and γ² = b/(a2 − 1)), ensuring that the prior information is centered correctly. This allows for a meaningful evaluation of the Bayesian estimators' performance when prior information is accurate.
We now outline the simulation procedure step-by-step as Algorithm 3.
Algorithm 3 Monte Carlo simulation procedure.
Require: Conjugate inverted gamma priors with hyperparameters (a1, b) and (a2, b); sample sizes n = 20, 50, 100 to investigate the finite-sample performance under small, moderate, and large sample scenarios; number of simulations N = 5000; mission time s.
Ensure: Average values and mean squared errors of parameter estimates, reliability characteristics, ETT and OBTT.
1: Select values of the hyperparameters (a1, b) and (a2, b) for informative priors, and set a1 = a2 = b = 0 for non-informative priors.
2: Determine distinct parameter values equal to the means of the prior distributions, so that σ² = b/(a1 − 1) and γ² = b/(a2 − 1) hold, respectively.
3: Compute the true MTSF value using MTSF = μ + σ√(π/2), where μ is the location parameter (minimum lifetime). The remaining reliability and experimental performance indicators are then calculated at a mission time of s = μ + MTSF/2.
4: Generate a random sample y of size n = 20, 50, 100 from Rayleigh(μ, 1/√β), where β = 1/σ² + 1/γ² and the PDF is given in (2), for different parameter values. Similarly, generate d from the pmf in (1).
5: For the comparative study, compute the various estimators and corresponding intervals for different parameter and hyperparameter values, following the procedures outlined in Section 3 and Section 4.
6: Obtain the corresponding estimates of all relevant reliability characteristics, including ETT and OBTT.
7: Repeat Steps (4) to (6) a total of N = 5000 times for each combination of parameter values. For each estimator from Steps (5) and (6), calculate its AV and MSE.
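Step (4) of Algorithm 3, generating a randomly censored sample, can be sketched as follows (illustrative only; helper names are ours, and X and T are drawn by inverse-CDF sampling):

```python
import math
import random

def rayleigh2(mu, scale, rng):
    # Inverse-CDF draw for the two-parameter Rayleigh: X = mu + scale * sqrt(-2 ln U).
    u = 1.0 - rng.random()           # U in (0, 1], avoids log(0)
    return mu + scale * math.sqrt(-2.0 * math.log(u))

def censored_sample(mu, sigma, gamma, n, rng=None):
    """Generate (y, d): failure times X ~ Rayleigh(mu, sigma), censoring times
    T ~ Rayleigh(mu, gamma), with y_i = min(X_i, T_i) and d_i = 1{X_i <= T_i}."""
    rng = rng or random.Random(42)
    y, d = [], []
    for _ in range(n):
        x = rayleigh2(mu, sigma, rng)
        t = rayleigh2(mu, gamma, rng)
        y.append(min(x, t))
        d.append(1 if x <= t else 0)
    return y, d
```

With σ = γ, the expected failure proportion p = γ²/(σ² + γ²) is 1/2, which provides a quick check on the generator.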
To compare the performance of the Bayesian estimation method and the classical estimation method, we try different parameter values and hyperparameter values, with the primary simulation results shown in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7.
Based on the comprehensive analysis of the results presented in the aforementioned tables, it can be concluded that
  • Maximum likelihood estimation: MLE exhibits systematic bias that decreases with increasing sample size. The bias is more pronounced in small samples, while both bias and MSE improve significantly as sample size increases, confirming the consistency property of MLE but also highlighting its limitations in small-sample scenarios.
  • Least-squares and weighted least-squares: Both LS and WLS methods provide reasonable parameter estimates across all sample sizes. For small samples ( n = 20 ), LS and WLS show notable bias, particularly for the scale parameters σ and γ , with MSE values considerably larger than those of MLE and Bayesian methods. However, as sample size increases to n = 50 and n = 100 , the performance of both methods improves substantially, with bias and MSE decreasing markedly. WLS generally yields slightly lower MSE than LS for most parameter combinations, particularly for larger parameter values, confirming the benefit of incorporating variance heterogeneity in the estimation procedure.
  • Expected time on test (ETT): The ETT estimates exhibit similar patterns to the parameter estimates, with bias and MSE decreasing as sample size increases. For n = 20 , the ETT estimates show considerable bias, especially for combinations with larger parameter values. However, the bias is substantially reduced for n = 50 and n = 100 , with MSE decreasing by approximately 50% to 70% compared to the small sample case, indicating that reliable ETT estimation requires moderate to large sample sizes. The numerical integration procedure proved to be both accurate and computationally efficient, with negligible impact on overall simulation time.
  • Comparison of Bayesian methods: The SELF estimator outperforms both MLE and non-informative priors across all sample sizes, with its advantage being most pronounced in small samples. Although this advantage diminishes as sample size increases, it remains present, indicating that informative priors provide crucial stability when data are scarce.
  • Effect of prior information: Non-informative priors perform reasonably well but yield larger MSEs than SELF, with the difference being more substantial in small samples. As sample size increases, the gap narrows considerably. This demonstrates that the advantage of Bayesian methods stems primarily from the informative prior structure, and this benefit is most prominent when sample information is limited.
  • Interval estimation: The asymptotic confidence intervals for μ , σ , and γ show coverage probabilities that improve with increasing sample size. For n = 20 , the coverage probabilities are below the nominal 95% level for all parameters, particularly for μ . As sample size increases to n = 50 and n = 100 , coverage probabilities approach the nominal level for μ and show improvement for σ and γ , although some undercoverage remains. Interval lengths decrease consistently with increasing sample size, as expected. These results suggest that caution is warranted when interpreting confidence intervals in small samples, especially for the scale parameters.
  • Sample size effects: The most dramatic improvement across all estimators occurs between small and moderate sample sizes, with diminishing returns thereafter. This suggests that a moderate sample size provides a reasonable balance between estimation accuracy and experimental cost in practical applications.
  • Practical recommendations: Based on the systematic comparison across sample sizes, we recommend prioritizing Bayesian methods such as SELF for small samples; for moderate samples, SELF remains the preferred choice while non-informative priors and LS/WLS methods serve as viable alternatives; for large samples, all methods are acceptable, though Bayesian methods still offer some efficiency gains. For ETT estimation, moderate to large sample sizes ( n 50 ) are recommended to achieve reliable results. The interval estimation method for μ becomes reliable for n 50 , but caution is advised when interpreting σ and γ intervals, particularly in small samples.
To assess the impact of the loss function parameter δ on the Bayesian estimates, we compare the results obtained under three different loss functions: SELF (δ = −1), ELF (δ = 1), and PLF (δ = −2). Table 8 presents the parameter estimates for selected parameter combinations with sample size n = 50.
The results in Table 8 show that the parameter estimates are relatively stable across different loss functions. The differences between SELF, ELF, and PLF are generally small, with slightly larger variations observed for PLF in some cases (e.g., for σ in the (5,5,3) combination). This indicates that the Bayesian estimation procedure is robust to the choice of the loss function parameter δ within a reasonable range.
To provide a structured comparison, Table 9 presents the relative efficiency (RE = MSE_SELF/MSE_MLE) for selected parameter combinations. Values less than 1 indicate that SELF outperforms MLE.
The results show that all RE values are below 1, confirming the consistent superiority of Bayesian methods across all scenarios. The improvement is most pronounced for σ and γ estimation, with RE values as low as 0.04, and becomes more evident as sample size increases.

7. Real Data Example Analysis

To further validate the practical applicability of the proposed methodological framework, we conduct an empirical analysis of a carbon fiber strength dataset first published in [15]. This dataset records the strength values of single carbon fibers as well as impregnated 1000-carbon-fiber tows, with the full set of experimental data tabulated in Table 10. Following the goodness-of-fit tests and the scaled total time on test (TTT) transform put forward by [15], the Rayleigh distribution is verified to be a suitable probabilistic model for this carbon fiber strength dataset.
This analysis aims to show the superior precision of the location-scale Rayleigh model over the standard scale-only version. We consider model fitting for the dataset using: (i) a two-parameter Rayleigh model ( σ , μ ) and (ii) a one-parameter Rayleigh model ( σ ), with parameters estimated by maximum likelihood. The goodness of fit for these models is then evaluated based on several metrics: negative log–likelihood, Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), the Kolmogorov–Smirnov (K–S) statistic, and the empirical cumulative distribution function (ECDF).
For the K-S test, the null hypothesis H 0 is that the observed randomly censored data follow the specified Rayleigh distribution. A small p-value (typically < 0.05 ) leads to the rejection of H 0 , suggesting that the distributional assumption may not be appropriate.
The formula for calculating AIC is as follows:
AIC = 2k − 2 log(L),
where k denotes the number of parameters incorporated into the statistical model, and L represents the maximum likelihood value associated with the estimated model.
The BIC is mathematically expressed as
BIC = k log(n) − 2 log(L),
where k and L retain their definitions from the AIC formula, and n refers to the total number of observations in the dataset (Table 11 and Table 12).
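As a hedged illustration (not the authors' code) of how these criteria compare the two models, the sketch below profiles the two-parameter Rayleigh log-likelihood over a grid of μ values in [0, y_(1)) (a grid search standing in for a formal MLE routine; function names are ours) and reports the log-likelihood, AIC, and BIC:

```python
import math

def rayleigh_loglik(y, mu):
    """Profile log-likelihood of the two-parameter Rayleigh at location mu:
    sigma^2 is replaced by its conditional MLE, sum((y - mu)^2) / (2n)."""
    n = len(y)
    ss = sum((v - mu) ** 2 for v in y)
    s2 = ss / (2.0 * n)
    return sum(math.log(v - mu) for v in y) - n * math.log(s2) - ss / (2.0 * s2)

def fit_and_score(y, two_param=True, grid=400):
    """Grid-search MLE over mu in [0, y_(1)) for the two-parameter model
    (mu fixed at 0 for the one-parameter model); returns (loglik, AIC, BIC)."""
    n = len(y)
    if two_param:
        y1 = min(y)
        mus = [y1 * k / grid for k in range(grid)]   # excludes y_(1)
        ll, k = max(rayleigh_loglik(y, m) for m in mus), 2
    else:
        ll, k = rayleigh_loglik(y, 0.0), 1
    return ll, 2 * k - 2 * ll, k * math.log(n) - 2 * ll
```

For data whose smallest value sits well above zero, the two-parameter fit attains a markedly higher log-likelihood, and hence lower AIC and BIC, mirroring the comparison reported in Table 12.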
It should be noted, however, that although the two-parameter Rayleigh distribution provides a better fit than its one-parameter counterpart, the Kolmogorov–Smirnov test yields p-values below 0.05 for both models, indicating that neither fully captures the empirical distribution of the carbon fiber strength data at conventional significance levels. This suggests that while the two-parameter model offers a substantial improvement, there may still be room for further refinement, such as considering more flexible distributions or mixture models. Q-Q plots are provided in Figure 2, which visually confirm the improved fit of the two-parameter model. Although both models show significant deviation from the 45-degree line (K-S test p < 0.05 ), indicating imperfect fit, the two-parameter model demonstrates noticeably better alignment, particularly in the upper tail. This superior fit is consistent with its lower AIC and BIC values in Table 12, confirming that the inclusion of a location parameter improves model performance even when the overall fit is not ideal.
Based on all goodness-of-fit criteria, we find that the two-parameter Rayleigh distribution achieves substantially lower values of negative log–likelihood, AIC, and BIC compared to the one-parameter Rayleigh distribution. Therefore, the two-parameter Rayleigh distribution exhibits markedly superior fitting performance to the one-parameter Rayleigh distribution.
For the graphical comparison, we present the cumulative distribution curves and examine their respective behaviors. As shown in Figure 3, the CDF curve of the two-parameter Rayleigh distribution lies closer to the empirical CDF than that of the one-parameter model. Consequently, we conclude that fitting this real dataset with the two-parameter Rayleigh distribution yields a considerably better fit than the one-parameter Rayleigh distribution, which only includes a scale parameter.

8. Conclusions

Randomly censored data are commonly encountered in both survival analysis and reliability theory. Numerous statistical distributions have been extensively utilized in prior research to develop estimation procedures under this data framework. This study introduces location parameter modeling into the analysis of randomly censored data following the Rayleigh distribution, thereby establishing a novel analytical framework for lifetime processes characterized by monotonically increasing failure rates. The inclusion of a location parameter is expected to influence estimates derived from data. The core objective of this study is to integrate this location parameter into models for both lifetime and censoring time distributions.
This study investigates a diverse set of estimation approaches for the location and scale parameters within the framework of the two-parameter Rayleigh distribution. A number of classical estimation methods are explored herein. In addition, Bayesian estimators corresponding to the parameters, as well as their CIs and HPD-credible intervals, are constructed under multiple loss functions. Given the theoretical challenges associated with comparing different estimation methods, a Monte Carlo simulation study is conducted to assess the performance of the various parameter estimators and their associated reliability characteristics. Comparisons among these estimators are mainly based on two key metrics: bias and mean squared error. Owing to the integration of supplementary prior information, Bayesian estimators can generate more accurate results than maximum likelihood estimators (MLEs) in certain cases, especially when the parameter values are large and the sample sizes are small. To demonstrate the practical applicability of the estimation methodologies proposed in this study, a real-world case analysis is presented using carbon fiber strength data. The empirical results show that the fitting performance of the two-parameter Rayleigh distribution is significantly superior to that of its one-parameter counterpart.
Future research could relax the assumption that failure and censoring times share the same location parameter, allowing these two processes to have different location parameters. Furthermore, the modeling framework of the two-parameter Rayleigh distribution could be extended to more complex survival analysis scenarios. For instance, covariates could be introduced to establish regression models, thereby analyzing the influence of various factors on lifetime distributions. Alternatively, the distribution could be integrated with structures such as competing risks or frailty models to handle more complex failure mechanisms and heterogeneous data.

Author Contributions

Conceptualization, L.Z.; Methodology, L.Z. and Z.Z.; Software, L.Z., Z.Z. and M.L.; Validation, Z.Z.; Formal analysis, L.Z.; Investigation, L.Z.; Resources, M.L.; Data curation, M.L.; Writing—original draft, L.Z.; Writing—review & editing, W.G.; Visualization, M.L.; Supervision, W.G.; Project administration, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202610004010 of the National Training Program of Innovation and Entrepreneurship for Undergraduates. W.G.'s work was partially supported by the Science and Technology Research and Development Project of China State Railway Group Company, Ltd. (No. N2023Z020).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that this study received funding from the Science and Technology Research and Development Project of China State Railway Group Company, Ltd. (No. N2023Z020). The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.

Notation

The following notations are used throughout this paper.
Symbol: Description
μ: Location parameter (minimum lifetime)
σ: Scale parameter of the failure time distribution
γ: Scale parameter of the censoring time distribution
β: β = 1/σ² + 1/γ²
α: Significance level (e.g., in confidence intervals)
D_i: Censoring indicator (D_i = 1 if failure observed, 0 otherwise)
Y_i: Observed time, Y_i = min(X_i, T_i)
w1: Number of failures, w1 = ∑_{i=1}^n d_i
w2: w2 = ∑_{i=1}^n (y_i − μ̂)²

References

  1. Banerjee, P.; Bhunia, S.; Seal, B. Partial Bayes Estimation in Two-Parameter Rayleigh Distribution. Am. J. Math. Manag. Sci. 2024, 43, 123–140. [Google Scholar] [CrossRef]
  2. Dey, T.; Dey, S.; Kundu, D. On Progressively Type-II Censored Two-Parameter Rayleigh Distribution. Commun. Stat.-Simul. Comput. 2016, 45, 438–455. [Google Scholar] [CrossRef]
  3. Luo, L.; Ma, Z.; Wang, M. Robust explicit estimators of the shifted Rayleigh distribution under Type II censoring. J. Reliab. Eng. Qual. Manag. 2025, 17, 117–130. [Google Scholar] [CrossRef]
  4. Ghitany, M.E. A compound Rayleigh survival model and its application to randomly censored data. Stat. Pap. 2001, 42, 437–450. [Google Scholar] [CrossRef]
  5. Ghitany, M.E.; Al-Awadhi, S. Maximum likelihood estimation of Burr XII distribution parameters under random censoring. J. Appl. Stat. 2002, 29, 955–965. [Google Scholar] [CrossRef]
  6. Friesl, M.; Hurt, J. On Bayesian estimation in an exponential distribution under random censorship. Kybernetika 2007, 43, 45–60. [Google Scholar]
  7. Abu-Taleb, A.A.; Smadi, M.M.; Alawneh, A.J. Bayes estimation of the lifetime parameters for the exponential distribution. J. Math. Stat. 2007, 3, 106–108. [Google Scholar] [CrossRef]
  8. Danish, M.Y.; Aslam, M. Bayesian estimation for randomly censored generalized exponential distribution under asymmetric loss functions. J. Appl. Stat. 2013, 40, 1106–1119. [Google Scholar] [CrossRef]
  9. Danish, M.Y.; Aslam, M. Bayesian inference for the randomly censored Weibull distribution. J. Stat. Comput. Simul. 2014, 84, 215–230. [Google Scholar] [CrossRef]
  10. Krishna, H.; Vivekanand, K.K. Estimation in Maxwell distribution with randomly censored data. J. Stat. Comput. Simul. 2015, 85, 3560–3578. [Google Scholar] [CrossRef]
  11. Garg, R.; Dube, M.; Kumar, K.; Krishna, H. On randomly censored generalized inverted exponential distribution. Am. J. Math. Manag. Sci. 2016, 35, 361–379. [Google Scholar] [CrossRef]
  12. Krishna, H.; Goel, N. Classical and Bayesian inference in two parameter exponential distribution with randomly censored data. Comput. Stat. 2018, 33, 249–275. [Google Scholar] [CrossRef]
  13. Chaturvedi, A. Randomly Censored Kumaraswamy Distribution. J. Stat. Theory Appl. 2024, 23, 1–25. [Google Scholar] [CrossRef]
  14. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar] [CrossRef]
  15. Dey, S.; Dey, T.; Kundu, D. Two-parameter Rayleigh distribution: Different methods of estimation. Am. J. Math. Manag. Sci. 2014, 33, 55–74. [Google Scholar] [CrossRef]
  16. Calabria, R.; Pulcini, G. Point estimation under asymmetric loss functions for left-truncated exponential samples. Commun. Stat.-Methods 1996, 25, 585–600. [Google Scholar] [CrossRef]
  17. Kumari, A.; Kumar, K.; Kumar, I.B. Bayesian and Classical Inference in Maxwell Distribution Under Adaptive Progressively Type-II Censored Data. Int. J. Syst. Assur. Eng. Manag. 2023, 15, 1015–1036. [Google Scholar] [CrossRef]
Figure 1. Left: probability density functions (PDFs) of the two-parameter Rayleigh distribution for σ = μ = 0.3 (black), 0.5 (red), 1.0 (green), and 2.0 (blue); right: corresponding cumulative distribution functions (CDFs).
Figure 2. Q-Q plots for (a) two-parameter and (b) one-parameter Rayleigh distributions based on failure times.
Figure 3. CDF curves for the strength data. ECDF of the strength data (black), fitted CDFs from the two-parameter (red) and one-parameter (blue) Rayleigh distributions.
Table 1. Maximum likelihood estimates and Bayesian estimates under GELF of reliability and experimental characteristics.
Maximum Likelihood Estimate | Bayesian Estimate Under GELF
MTSF̂ = μ̂ + σ̂√(π/2) | MTSF = [(1/(M−B)) ∑_{j=B+1}^M (μ_j + σ_j√(π/2))^(−δ)]^(−1/δ)
ĥ(s) = (s − μ̂)/σ̂², μ̂ < s < ∞ | h(s) = [(1/(M−B)) ∑_{j=B+1}^M ((s − μ_j)/σ_j²)^(−δ)]^(−1/δ)
R̂(s) = exp(−(s − μ̂)²/(2σ̂²)), μ̂ < s < ∞ | R(s) = [(1/(M−B)) ∑_{j=B+1}^M exp(δ(s − μ_j)²/(2σ_j²))]^(−1/δ)
p̂ = γ̂²/(σ̂² + γ̂²) | p = [(1/(M−B)) ∑_{j=B+1}^M (γ_j²/(σ_j² + γ_j²))^(−δ)]^(−1/δ)
ETT̂ = μ̂ + ∫_0^∞ [1 − (1 − e^(−β̂t²/2))^n] dt, where β̂ = 1/σ̂² + 1/γ̂² | ETT = [(1/(M−B)) ∑_{j=B+1}^M ETT(μ_j, σ_j, γ_j)^(−δ)]^(−1/δ)
Table 2. MLEs for the parameters with different sample sizes.
n | μ, σ, γ | μ̂ (AV, MSE) | σ̂ (AV, MSE) | γ̂ (AV, MSE)
20 | 0.5, 1, 1 | 0.6995, 0.0509 | 0.5754, 0.1963 | 0.5752, 0.1955
20 | 1, 1, 1 | 1.1932, 0.0480 | 0.5738, 0.1958 | 0.5764, 0.1944
20 | 2, 1, 1 | 2.1957, 0.0489 | 0.5700, 0.1991 | 0.5808, 0.1921
20 | 5, 1, 2 | 5.2454, 0.0758 | 0.7363, 0.0850 | 0.6997, 1.7607
20 | 0.5, 2, 3 | 0.9641, 0.2750 | 1.3652, 0.4595 | 1.3455, 2.8740
20 | 1, 2, 3 | 1.4663, 0.2802 | 1.3631, 0.4648 | 1.3275, 2.9357
20 | 2, 3, 2 | 2.4656, 0.2759 | 1.3332, 2.9064 | 1.3651, 0.4634
20 | 5, 5, 3 | 5.7418, 0.7049 | 2.0655, 8.9963 | 2.0962, 0.9557
20 | 9, 1, 1 | 9.1973, 0.0495 | 0.5757, 0.1954 | 0.5777, 0.1935
50 | 0.5, 1, 1 | 0.6235, 0.0195 | 0.6245, 0.1472 | 0.6238, 0.1475
50 | 1, 1, 1 | 1.1243, 0.0196 | 0.6276, 0.1451 | 0.6222, 0.1490
50 | 2, 1, 1 | 2.1254, 0.0199 | 0.6248, 0.1468 | 0.6204, 0.1502
50 | 5, 1, 2 | 5.1565, 0.0312 | 0.7967, 0.0475 | 0.7843, 1.5002
50 | 0.5, 2, 3 | 0.7875, 0.1071 | 1.4734, 0.3053 | 1.4657, 2.4067
50 | 1, 2, 3 | 1.3022, 0.1146 | 1.4718, 0.3065 | 1.4584, 2.4290
50 | 2, 3, 2 | 2.2954, 0.1110 | 1.4526, 2.4406 | 1.4693, 0.3061
50 | 5, 5, 3 | 5.4545, 0.2626 | 2.2623, 7.6487 | 2.2849, 0.5751
50 | 9, 1, 1 | 9.1256, 0.0199 | 0.6263, 0.1457 | 0.6248, 0.1468
100 | 0.5, 1, 1 | 0.5913, 0.0106 | 0.6486, 0.1267 | 0.6492, 0.1261
100 | 1, 1, 1 | 1.0900, 0.0104 | 0.6486, 0.1265 | 0.6517, 0.1245
100 | 2, 1, 1 | 2.0875, 0.0096 | 0.6545, 0.1227 | 0.6514, 0.1248
100 | 5, 1, 2 | 5.1155, 0.0169 | 0.8218, 0.0348 | 0.8150, 1.4155
100 | 0.5, 2, 3 | 0.7064, 0.0541 | 1.5313, 0.2333 | 1.5274, 2.1952
100 | 1, 2, 3 | 1.2080, 0.0539 | 1.5319, 0.2319 | 1.5315, 2.1810
100 | 2, 3, 2 | 2.2106, 0.0571 | 1.5317, 2.1833 | 1.5260, 0.2379
100 | 5, 5, 3 | 5.3275, 0.1349 | 2.3504, 7.0922 | 2.3616, 0.4388
100 | 9, 1, 1 | 9.0893, 0.0101 | 0.6499, 0.1259 | 0.6511, 0.1249
Table 3. Expected time on test (ETT) estimates under different methods.
n | μ, σ, γ | Estimates (μ̂, σ̂, γ̂) | Bias (μ, σ, γ) | MSE (μ, σ, γ)
20 | 0.5, 1, 1 | 0.7022, 0.8366, 0.8376 | 0.2022, −0.1634, −0.1624 | 0.0513, 0.0554, 0.0559
20 | 1, 1, 1 | 1.1967, 0.8434, 0.8420 | 0.1967, −0.1566, −0.1580 | 0.0504, 0.0563, 0.0569
20 | 2, 1, 1 | 2.1920, 0.8393, 0.8389 | 0.1920, −0.1607, −0.1611 | 0.0471, 0.0500, 0.0500
20 | 5, 1, 2 | 5.2475, 0.8270, 1.7951 | 0.2475, −0.1730, −0.2049 | 0.0784, 0.0499, 0.4074
20 | 0.5, 2, 3 | 0.9602, 1.6533, 2.6078 | 0.4602, −0.3467, −0.3922 | 0.2701, 0.1914, 0.5927
20 | 1, 2, 3 | 1.4582, 1.6611, 2.6465 | 0.4582, −0.3389, −0.3535 | 0.2674, 0.1948, 0.7783
20 | 2, 3, 2 | 2.4662, 2.5636, 1.6689 | 0.4662, −0.4364, −0.3311 | 0.2780, 0.6209, 0.1871
20 | 5, 5, 3 | 5.7158, 4.3634, 2.4981 | 0.7158, −0.6366, −0.5019 | 0.6563, 1.9941, 0.4312
20 | 9, 1, 1 | 9.2004, 0.8414, 0.8398 | 0.2004, −0.1586, −0.1602 | 0.0512, 0.0493, 0.0560
50 | 0.5, 1, 1 | 0.6243, 0.8959, 0.8950 | 0.1243, −0.1041, −0.1050 | 0.0198, 0.0225, 0.0218
50 | 1, 1, 1 | 1.1251, 0.9000, 0.9029 | 0.1251, −0.1000, −0.0971 | 0.0196, 0.0223, 0.0209
50 | 2, 1, 1 | 2.1312, 0.8938, 0.8864 | 0.1312, −0.1062, −0.1136 | 0.0216, 0.0230, 0.0243
50 | 5, 1, 2 | 5.1546, 0.8927, 1.8532 | 0.1546, −0.1073, −0.1468 | 0.0299, 0.0195, 0.1381
50 | 0.5, 2, 3 | 0.7939, 1.7679, 2.6988 | 0.2939, −0.2321, −0.3012 | 0.1111, 0.0908, 0.2441
50 | 1, 2, 3 | 1.2927, 1.7852, 2.7110 | 0.2927, −0.2148, −0.2890 | 0.1091, 0.0838, 0.2663
50 | 2, 3, 2 | 2.2916, 2.7112, 1.7843 | 0.2916, −0.2888, −0.2157 | 0.1100, 0.2704, 0.0901
50 | 5, 5, 3 | 5.4569, 4.5279, 2.6759 | 0.4569, −0.4721, −0.3241 | 0.2612, 0.7908, 0.1803
50 | 9, 1, 1 | 9.1256, 0.8900, 0.8950 | 0.1256, −0.1100, −0.1050 | 0.0198, 0.0222, 0.0223
100 | 0.5, 1, 1 | 0.5873, 0.9287, 0.9316 | 0.0873, −0.0713, −0.0684 | 0.0097, 0.0108, 0.0107
100 | 1, 1, 1 | 1.0873, 0.9243, 0.9239 | 0.0873, −0.0757, −0.0761 | 0.0099, 0.0116, 0.0120
100 | 2, 1, 1 | 2.0938, 0.9237, 0.9203 | 0.0938, −0.0763, −0.0797 | 0.0112, 0.0119, 0.0127
100 | 5, 1, 2 | 5.1123, 0.9212, 1.8657 | 0.1123, −0.0788, −0.1343 | 0.0161, 0.0103, 0.0735
100 | 0.5, 2, 3 | 0.7098, 1.8432, 2.7859 | 0.2098, −0.1568, −0.2141 | 0.0559, 0.0451, 0.1269
100 | 1, 2, 3 | 1.2126, 1.8390, 2.7963 | 0.2126, −0.1610, −0.2037 | 0.0566, 0.0428, 0.1215
100 | 2, 3, 2 | 2.2051, 2.8055, 1.8442 | 0.2051, −0.1945, −0.1558 | 0.0527, 0.1153, 0.0424
100 | 5, 5, 3 | 5.3326, 4.6549, 2.7580 | 0.3326, −0.3451, −0.2420 | 0.1396, 0.3575, 0.0990
100 | 9, 1, 1 | 9.0893, 0.9271, 0.9232 | 0.0893, −0.0729, −0.0768 | 0.0102, 0.0118, 0.0119
Table 4. Bayesian estimates under the SELF for the parameters. (Each cell is AV/MSE.)

| n   | (μ, σ, γ)   | μ̂            | σ̂            | γ̂            |
| 20  | (0.5, 1, 1) | 0.4672/0.0129 | 1.0130/0.0295 | 1.0212/0.0279 |
|     | (1, 1, 1)   | 0.9797/0.0201 | 1.0086/0.0569 | 1.0352/0.0359 |
|     | (2, 1, 1)   | 1.9624/0.0166 | 1.0631/0.0400 | 1.0228/0.0391 |
|     | (5, 1, 2)   | 4.9684/0.0193 | 0.9985/0.0204 | 1.8485/0.1865 |
|     | (0.5, 2, 3) | 0.4651/0.0988 | 1.9474/0.1025 | 2.8835/0.4526 |
|     | (1, 2, 3)   | 0.9135/0.1076 | 2.0019/0.1210 | 2.8936/0.4886 |
|     | (2, 3, 2)   | 2.0401/0.0956 | 2.8111/0.5761 | 1.8882/0.1325 |
|     | (5, 5, 3)   | 4.9570/0.4902 | 4.5388/2.0185 | 2.8110/0.3373 |
|     | (9, 1, 1)   | 8.9407/0.0160 | 1.0297/0.0239 | 1.0804/0.0462 |
| 50  | (0.5, 1, 1) | 0.4828/0.0062 | 1.0180/0.0131 | 1.0141/0.0101 |
|     | (1, 1, 1)   | 0.9801/0.0047 | 1.0359/0.0122 | 1.0209/0.0141 |
|     | (2, 1, 1)   | 1.9681/0.0070 | 1.0274/0.0114 | 1.0161/0.0127 |
|     | (5, 1, 2)   | 4.9705/0.0112 | 1.0127/0.0070 | 2.0336/0.1010 |
|     | (0.5, 2, 3) | 0.4541/0.0296 | 1.9945/0.0417 | 2.9320/0.1761 |
|     | (1, 2, 3)   | 0.9647/0.0322 | 1.9894/0.0428 | 2.9833/0.1940 |
|     | (2, 3, 2)   | 1.9500/0.0329 | 2.9872/0.1560 | 2.0293/0.0496 |
|     | (5, 5, 3)   | 4.9072/0.0705 | 4.9659/0.6095 | 3.0175/0.0960 |
|     | (9, 1, 1)   | 8.9813/0.0060 | 1.0151/0.0099 | 1.0327/0.0129 |
| 100 | (0.5, 1, 1) | 0.4848/0.0028 | 1.0125/0.0068 | 0.9957/0.0050 |
|     | (1, 1, 1)   | 0.9960/0.0030 | 1.0063/0.0065 | 1.0060/0.0084 |
|     | (2, 1, 1)   | 1.9824/0.0027 | 1.0147/0.0082 | 1.0088/0.0072 |
|     | (5, 1, 2)   | 4.9853/0.0055 | 1.0034/0.0053 | 1.9798/0.0606 |
|     | (0.5, 2, 3) | 0.4772/0.0139 | 1.9915/0.0221 | 3.0067/0.0917 |
|     | (1, 2, 3)   | 0.9635/0.0126 | 2.0087/0.0198 | 3.0246/0.0861 |
|     | (2, 3, 2)   | 1.9459/0.0135 | 2.9983/0.0770 | 2.0129/0.0250 |
|     | (5, 5, 3)   | 4.9618/0.0393 | 5.0538/0.3835 | 2.9973/0.0523 |
|     | (9, 1, 1)   | 8.9863/0.0024 | 1.0109/0.0048 | 1.0170/0.0062 |
Table 5. Least-squares (LS) and weighted least-squares (WLS) estimates for the parameters. (Each cell is AV/MSE.)

| n   | (μ, σ, γ)   | LS μ̂         | LS σ̂         | LS γ̂         | WLS μ̂        | WLS σ̂        | WLS γ̂        |
| 20  | (0.5, 1, 1) | 0.7022/0.0513 | 0.5598/0.2073 | 0.5724/0.2106 | 0.7022/0.0513 | 0.5205/0.2414 | 0.5498/0.2308 |
|     | (1, 1, 1)   | 1.1967/0.0504 | 0.5659/0.2019 | 0.5902/0.2397 | 1.1967/0.0504 | 0.5239/0.2381 | 0.5681/0.2642 |
|     | (2, 1, 1)   | 2.1920/0.0471 | 0.5647/0.2021 | 0.5813/0.2071 | 2.1920/0.0471 | 0.5222/0.2386 | 0.5574/0.2275 |
|     | (5, 1, 2)   | 5.2475/0.0784 | 0.7418/0.0957 | 0.7925/1.6669 | 5.2475/0.0784 | 0.6914/0.1255 | 0.7896/1.6810 |
|     | (0.5, 2, 3) | 0.9602/0.2701 | 1.5988/0.6696 | 2.2024/5.2736 | 0.9602/0.2701 | 1.5118/0.7729 | 2.2186/5.4581 |
|     | (1, 2, 3)   | 1.4582/0.2674 | 1.6118/0.6086 | 2.0867/4.3347 | 1.4582/0.2674 | 1.5244/0.7174 | 2.1255/4.6352 |
|     | (2, 3, 2)   | 2.4662/0.2780 | 1.6352/2.3650 | 1.7572/0.9852 | 2.4662/0.2780 | 1.5451/2.6403 | 1.7174/1.0824 |
|     | (5, 5, 3)   | 5.7158/0.6563 | 2.9477/7.6179 | 3.1405/4.5675 | 5.7158/0.6563 | 2.8171/8.2234 | 3.0896/4.8803 |
|     | (9, 1, 1)   | 9.2004/0.0512 | 0.5665/0.2018 | 0.5901/0.1997 | 9.2004/0.0512 | 0.5241/0.2385 | 0.5651/0.2198 |
| 50  | (0.5, 1, 1) | 0.6243/0.0198 | 0.6052/0.1641 | 0.6104/0.1663 | 0.6243/0.0198 | 0.5615/0.1999 | 0.5823/0.1873 |
|     | (1, 1, 1)   | 1.1251/0.0196 | 0.6090/0.1613 | 0.6155/0.1633 | 1.1251/0.0196 | 0.5649/0.1962 | 0.5862/0.1854 |
|     | (2, 1, 1)   | 2.1312/0.0216 | 0.6034/0.1654 | 0.6131/0.1642 | 2.1312/0.0216 | 0.5538/0.2056 | 0.5763/0.1923 |
|     | (5, 1, 2)   | 5.1546/0.0299 | 0.8497/0.0443 | 0.9254/1.3316 | 5.1546/0.0299 | 0.7761/0.0720 | 0.9109/1.3798 |
|     | (0.5, 2, 3) | 0.7939/0.1111 | 2.0367/0.4976 | 2.2205/2.1552 | 0.7939/0.1111 | 1.8452/0.5614 | 2.1983/2.3823 |
|     | (1, 2, 3)   | 1.2927/0.1091 | 2.0436/0.4636 | 2.2075/2.2287 | 1.2927/0.1091 | 1.8440/0.5305 | 2.1961/2.6541 |
|     | (2, 3, 2)   | 2.2916/0.1100 | 1.9998/1.4228 | 2.0341/0.6237 | 2.2916/0.1100 | 1.8028/1.8835 | 1.8995/0.7341 |
|     | (5, 5, 3)   | 5.4569/0.2612 | 3.7844/4.4284 | 3.8922/4.6487 | 5.4569/0.2612 | 3.3844/5.7471 | 3.6419/4.8081 |
|     | (9, 1, 1)   | 9.1256/0.0198 | 0.6085/0.1607 | 0.6092/0.1666 | 9.1256/0.0198 | 0.5582/0.2016 | 0.5746/0.1944 |
| 100 | (0.5, 1, 1) | 0.5873/0.0097 | 0.6392/0.1355 | 0.6394/0.1382 | 0.5873/0.0097 | 0.5905/0.1731 | 0.6063/0.1636 |
|     | (1, 1, 1)   | 1.0873/0.0099 | 0.6309/0.1412 | 0.6382/0.1402 | 1.0873/0.0099 | 0.5832/0.1790 | 0.6016/0.1678 |
|     | (2, 1, 1)   | 2.0938/0.0112 | 0.6300/0.1423 | 0.6360/0.1409 | 2.0938/0.0112 | 0.5813/0.1801 | 0.6008/0.1673 |
|     | (5, 1, 2)   | 5.1123/0.0161 | 0.9014/0.0228 | 0.9253/1.2172 | 5.1123/0.0161 | 0.8180/0.0488 | 0.9037/1.2756 |
|     | (0.5, 2, 3) | 0.7098/0.0559 | 2.3520/0.4855 | 2.4415/1.3891 | 0.7098/0.0559 | 2.0646/0.4365 | 2.3404/1.7400 |
|     | (1, 2, 3)   | 1.2126/0.0566 | 2.3403/0.5024 | 2.4421/1.3195 | 1.2126/0.0566 | 2.0487/0.4746 | 2.3513/1.7114 |
|     | (2, 3, 2)   | 2.2051/0.0527 | 2.3457/0.7676 | 2.3452/0.5990 | 2.2051/0.0527 | 2.0567/1.3122 | 2.1233/0.6243 |
|     | (5, 5, 3)   | 5.3326/0.1396 | 4.5322/2.7985 | 4.5494/5.3068 | 5.3326/0.1396 | 3.8810/4.2438 | 4.0326/4.5262 |
|     | (9, 1, 1)   | 9.0893/0.0102 | 0.6321/0.1407 | 0.6408/0.1384 | 9.0893/0.0102 | 0.5837/0.1782 | 0.6082/0.1622 |
Table 6. Bayesian estimates under the non-informative prior for the parameters. (Each cell is AV/MSE.)

| n   | (μ, σ, γ)   | μ̂            | σ̂            | γ̂            |
| 20  | (0.5, 1, 1) | 0.4540/0.0146 | 1.0860/0.0578 | 1.0959/0.0537 |
|     | (1, 1, 1)   | 0.9663/0.0224 | 1.0840/0.1152 | 1.1168/0.0719 |
|     | (2, 1, 1)   | 1.9480/0.0192 | 1.1510/0.0817 | 1.0970/0.0723 |
|     | (5, 1, 2)   | 4.9446/0.0230 | 1.0487/0.0295 | 2.6038/1.9863 |
|     | (0.5, 2, 3) | 0.4067/0.1103 | 2.1224/0.1448 | 3.6692/3.5298 |
|     | (1, 2, 3)   | 0.8552/0.1263 | 2.1850/0.1967 | 3.5577/1.6286 |
|     | (2, 3, 2)   | 1.9859/0.1003 | 3.5332/2.3812 | 2.0550/0.1575 |
|     | (5, 5, 3)   | 4.8681/0.5123 | 6.0253/12.002 | 3.0668/0.3777 |
|     | (9, 1, 1)   | 8.9256/0.0188 | 1.1029/0.0456 | 1.1759/0.0996 |
| 50  | (0.5, 1, 1) | 0.4797/0.0065 | 1.0431/0.0173 | 1.0386/0.0132 |
|     | (1, 1, 1)   | 0.9766/0.0050 | 1.0636/0.0171 | 1.0467/0.0187 |
|     | (2, 1, 1)   | 1.9646/0.0073 | 1.0537/0.0154 | 1.0404/0.0162 |
|     | (5, 1, 2)   | 4.9649/0.0118 | 1.0296/0.0085 | 2.2640/0.2529 |
|     | (0.5, 2, 3) | 0.4394/0.0313 | 2.0580/0.0498 | 3.1440/0.2525 |
|     | (1, 2, 3)   | 0.9504/0.0338 | 2.0517/0.0501 | 3.2058/0.3199 |
|     | (2, 3, 2)   | 1.9347/0.0349 | 3.2052/0.2612 | 2.0955/0.0636 |
|     | (5, 5, 3)   | 4.8834/0.0772 | 5.4039/1.0454 | 3.1147/0.1176 |
|     | (9, 1, 1)   | 8.9773/0.0063 | 1.0398/0.0129 | 1.0595/0.0176 |
| 100 | (0.5, 1, 1) | 0.4836/0.0028 | 1.0243/0.0078 | 1.0066/0.0054 |
|     | (1, 1, 1)   | 0.9945/0.0030 | 1.0183/0.0073 | 1.0179/0.0095 |
|     | (2, 1, 1)   | 1.9811/0.0028 | 1.0265/0.0095 | 1.0207/0.0082 |
|     | (5, 1, 2)   | 4.9837/0.0056 | 1.0109/0.0056 | 2.0761/0.0842 |
|     | (0.5, 2, 3) | 0.4729/0.0144 | 2.0199/0.0237 | 3.1091/0.1197 |
|     | (1, 2, 3)   | 0.9592/0.0129 | 2.0380/0.0220 | 3.1280/0.1162 |
|     | (2, 3, 2)   | 1.9407/0.0142 | 3.1002/0.0996 | 2.0428/0.0278 |
|     | (5, 5, 3)   | 4.9547/0.0406 | 5.2677/0.5387 | 3.0410/0.0565 |
|     | (9, 1, 1)   | 8.9854/0.0024 | 1.0223/0.0056 | 1.0287/0.0072 |
Table 7. Interval estimation results with different sample sizes. (Each cell is Length/Coverage.)

| n   | (μ, σ, γ)   | μ            | σ            | γ            |
| 20  | (0.5, 1, 1) | 0.1665/0.406 | 0.5377/0.644 | 0.5385/0.642 |
|     | (1, 1, 1)   | 0.1672/0.440 | 0.5459/0.662 | 0.5433/0.646 |
|     | (2, 1, 1)   | 0.1673/0.442 | 0.5366/0.656 | 0.5368/0.628 |
|     | (5, 1, 2)   | 0.2107/0.436 | 0.4100/0.542 | 2.0724/0.738 |
|     | (0.5, 2, 3) | 0.3935/0.452 | 0.8787/0.592 | 2.2588/0.752 |
|     | (1, 2, 3)   | 0.3943/0.450 | 0.8864/0.612 | 2.3670/0.702 |
|     | (2, 3, 2)   | 0.3931/0.442 | 2.1825/0.724 | 0.8974/0.618 |
|     | (5, 5, 3)   | 0.6092/0.458 | 4.1680/0.718 | 1.2952/0.550 |
|     | (9, 1, 1)   | 0.1673/0.432 | 0.5403/0.662 | 0.5379/0.630 |
| 50  | (0.5, 1, 1) | 0.1141/0.490 | 0.3558/0.688 | 0.3550/0.698 |
|     | (1, 1, 1)   | 0.1148/0.492 | 0.3568/0.706 | 0.3593/0.720 |
|     | (2, 1, 1)   | 0.1134/0.460 | 0.3567/0.674 | 0.3508/0.654 |
|     | (5, 1, 2)   | 0.1449/0.486 | 0.2772/0.626 | 1.2222/0.816 |
|     | (0.5, 2, 3) | 0.2664/0.474 | 0.5918/0.594 | 1.3940/0.762 |
|     | (1, 2, 3)   | 0.2686/0.510 | 0.5987/0.650 | 1.3977/0.748 |
|     | (2, 3, 2)   | 0.2684/0.508 | 1.3988/0.730 | 0.5985/0.612 |
|     | (5, 5, 3)   | 0.4147/0.482 | 2.5328/0.758 | 0.8708/0.646 |
|     | (9, 1, 1)   | 0.1138/0.468 | 0.3522/0.682 | 0.3561/0.704 |
| 100 | (0.5, 1, 1) | 0.0841/0.496 | 0.2586/0.732 | 0.2602/0.744 |
|     | (1, 1, 1)   | 0.0836/0.522 | 0.2579/0.718 | 0.2577/0.726 |
|     | (2, 1, 1)   | 0.0834/0.480 | 0.2582/0.710 | 0.2562/0.678 |
|     | (5, 1, 2)   | 0.1057/0.512 | 0.2022/0.606 | 0.8376/0.796 |
|     | (0.5, 2, 3) | 0.1965/0.502 | 0.4356/0.632 | 1.0005/0.778 |
|     | (1, 2, 3)   | 0.1966/0.498 | 0.4334/0.634 | 1.0065/0.792 |
|     | (2, 3, 2)   | 0.1972/0.518 | 1.0105/0.806 | 0.4346/0.624 |
|     | (5, 5, 3)   | 0.3035/0.456 | 1.8116/0.800 | 0.6314/0.634 |
|     | (9, 1, 1)   | 0.0837/0.512 | 0.2593/0.692 | 0.2571/0.700 |
Table 8. Sensitivity analysis for different loss functions (n = 50). (Each cell lists the Bayes estimates μ̂, σ̂, γ̂.)

| (μ, σ, γ)   | SELF (δ = 2)           | ELF (δ = 1)            | PLF (δ = 1)            |
| (0.5, 1, 1) | 0.4828, 1.0180, 1.0141 | 0.4827, 1.0179, 1.0139 | 0.4909, 1.0258, 1.0214 |
| (5, 1, 2)   | 4.9705, 1.0127, 2.0336 | 4.9707, 1.0123, 2.0325 | 4.9717, 1.0182, 2.0672 |
| (0.5, 2, 3) | 0.4541, 1.9945, 2.9320 | 0.4536, 1.9951, 2.9338 | 0.5009, 2.0077, 2.9660 |
| (5, 5, 3)   | 4.9072, 4.9659, 3.0175 | 4.9057, 4.9678, 3.0194 | 4.9160, 5.0267, 3.0372 |
Table 9. Relative efficiency for selected parameter combinations.

| n   | (μ, σ, γ)   | Relative efficiency (SELF/MLE)   |
|     |             | RE(μ)  | RE(σ)  | RE(γ)  |
| 20  | (0.5, 1, 1) | 0.25   | 0.15   | 0.14   |
|     | (0.5, 2, 3) | 0.36   | 0.22   | 0.16   |
|     | (5.0, 5, 3) | 0.70   | 0.22   | 0.35   |
| 50  | (0.5, 1, 1) | 0.30   | 0.09   | 0.07   |
|     | (0.5, 2, 3) | 0.28   | 0.14   | 0.07   |
|     | (5.0, 5, 3) | 0.27   | 0.08   | 0.17   |
| 100 | (0.5, 1, 1) | 0.27   | 0.06   | 0.04   |
|     | (0.5, 2, 3) | 0.26   | 0.09   | 0.04   |
|     | (5.0, 5, 3) | 0.29   | 0.05   | 0.12   |
Table 10. Strength data.

0.562  0.564  0.729  0.802  0.950  1.053  1.111  1.115  1.194  1.208
1.216  1.247  1.256  1.271  1.277  1.305  1.313  1.348  1.390  1.429
1.474  1.490  1.503  1.520  1.522  1.524  1.551  1.551  1.609  1.632
1.632  1.676  1.684  1.685  1.728  1.740  1.761  1.764  1.785  1.804
1.816  1.824  1.836  1.879  1.883  1.892  1.898  1.934  1.947  1.976
2.020  2.023  2.050  2.059  2.068  2.071  2.098  2.130  2.204  2.262
2.317  2.334  2.340  2.346  2.378  2.483  2.683  2.835  2.835
Table 11. Parameter estimates.

| Distribution   | MLEs                     | Bayes Estimates        | ETT / OBTT                   | Confidence Intervals                     |
| Rayleigh(μ, σ) | μ̂ = 0.4688, σ̂ = 1.0585 | μ = 0.2822, σ = 0.9095 | ETT = 1.7955, OBTT = 2.6248  | μ: (0.0000, 0.7306); σ: (0.100, 1.2302)  |
| Rayleigh(0, σ) | σ̂ = 1.4701              | σ = 1.5883             | ETT = 1.8424, OBTT = 2.6248  | σ: (1.2248, 1.7153)                      |
Table 12. Fitting summary statistics for the model.

| Distribution   | −Log L  | AIC      | BIC      | K–S D   | K–S p-value   |
| Rayleigh(μ, σ) | 50.4060 | 104.8120 | 109.2803 | 0.2535  | 2.8141 × 10⁻⁴ |
| Rayleigh(0, σ) | 57.3398 | 116.6795 | 118.9136 | 0.27674 | 5.1408 × 10⁻⁵ |
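The AIC and BIC columns of Table 12 follow directly from the maximized log-likelihoods via AIC = 2k − 2 log L and BIC = k ln n − 2 log L, with k parameters and n = 69 strength observations. A quick arithmetic check (a sketch; it reproduces the table values up to rounding in the last digit):

```python
import math

def aic(loglik, k):
    # Akaike information criterion: 2k - 2 * log-likelihood
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: k * ln(n) - 2 * log-likelihood
    return k * math.log(n) - 2 * loglik

n = 69  # strength observations in Table 10
# Two-parameter Rayleigh(mu, sigma): log L = -50.4060, k = 2
# One-parameter Rayleigh(0, sigma):  log L = -57.3398, k = 1
print(round(aic(-50.4060, 2), 4), round(bic(-50.4060, 2, n), 4))
print(round(aic(-57.3398, 1), 4), round(bic(-57.3398, 1, n), 4))
```

Both criteria favor the two-parameter model, consistent with its smaller K–S distance in Table 12.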
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Zhang, L.; Gui, W.; Zhao, Z.; Liu, M. Classical and Bayesian Inference for the Two-Parameter Rayleigh Distribution with Random Censored Data. Entropy 2026, 28, 313. https://doi.org/10.3390/e28030313