Article

A Type I Generalized Logistic Distribution: Solving Its Estimation Problems with a Bayesian Approach and Numerical Applications Based on Simulated and Engineering Data

1 Department of Statistics, Universidad de Concepción, Concepción 4070409, Chile
2 School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(4), 655; https://doi.org/10.3390/sym14040655
Received: 19 January 2022 / Revised: 6 March 2022 / Accepted: 20 March 2022 / Published: 24 March 2022

Abstract: The family of logistic-type distributions has been widely studied and applied in the literature. However, certain estimation problems exist for some members of this family. In particular, the three-parameter type I generalized logistic distribution presents these problems, where the parameter space must be restricted for its maximum likelihood estimators to exist. In this paper, motivated by the complexities that arise in the inference under the likelihood approach utilizing this distribution, we propose a Bayesian approach to solve these problems. A simulation study is carried out to assess the performance of some posterior distributional characteristics, such as the mean, using Monte Carlo Markov chain methods. To illustrate the potentiality of the Bayesian estimation in the three-parameter type I generalized logistic distribution, we apply the proposed method to real-world data related to the copper metallurgical engineering area.

1. Introduction

There is a wide variety of probabilistic models that have been proposed to analyze data generated from continuous random variables [1,2]. One of these models is the logistic distribution [3]. Methods for estimating the parameters of some of these distributions have been well studied for different dimensionalities of the parameter vector [4].
The logistic distribution has been largely studied and generalized from different perspectives; see, for example, the works proposed in [5,6,7,8]. One of these models is the type I generalized logistic (IGL) distribution, which can be considered as a family of proportional reversed hazard functions with the logistic distribution being its base. The IGL distribution has been used to model data with an unimodal probability density function (PDF). For a comprehensive account of the theory and applications of logistic distributions, we refer the readers to [2,3,9].
A three-parameter IGL distribution was proposed in [10], where methods of estimation for its parameters were also presented. The authors demonstrated that its log-likelihood function is strictly decreasing in one of its parameters and is maximized only when this parameter tends to infinity. Consequently, its maximum likelihood (ML) estimators do not exist, and a hybrid method must be employed to optimize the log-likelihood function, for example, by plugging in the posterior mean, while at the same time solving the problem of non-existence.
A technique for estimating the location and scale parameters of the IGL distribution was proposed in [11] from a U-statistic constructed by using the best linear functions of order statistics as kernels. An efficiency comparison of the proposed estimators with respect to the ML estimator was also made by these authors. In a comparative study, the authors considered that there is an advantage when using a U-statistic for estimating the parameters from a practical point of view.
Some empirical evidence, based on stock markets and data sets of financial expected values, suggests that the true distribution of such expected values can be positively skewed in times of high inflation and more peaked than the normal distribution. The logistic distribution is often used in econometrics to model consumer inflation rates due to its heavy tails compared to the normal distribution [12]. Its generalizations, which additionally allow for asymmetries, have been applied to model extreme risks in the context of the German stock market [13], and have been shown to provide a simpler model for inflation rates based on the harmonized index of consumer prices [14].
A method for determining explicit estimators of the location and scale parameters was proposed in [15] by approximating the likelihood function using the Tiku method [16]. An application in genomic experiments utilizing the IGL distribution to determine whether a gene is differentially expressed under different conditions was presented in [17]. The authors reduced the computational complexity of carrying out likelihood ratio (LR) tests for several thousands of genes. Moreover, an approximate LR test was also proposed to generalize the two-class LR method to multi-class microarray data. The IGL PDF can be symmetrical, left-skewed, right-skewed, or reversed-J shaped, which provides high flexibility in the modeling of data.
The literature on the performance of the joint estimators of the three-parameter IGL distribution is limited, particularly for the ML estimators because, as mentioned, an estimation problem exists and the corresponding parameter space must be restricted for their existence. Despite this problem, an R package named glogis [18] was published without further discussion of this issue [19].
We can mention some other commonly used classical (or frequentist) estimation methods in addition to the ML approach. For example, we have the ordinary and weighted least squares methods [20], which are defined in terms of order statistics, making these methods robust and helpful for estimating the parameters of a skew distribution. Other estimation approaches related to percentiles, minimum distances, and bootstrapping can also be considered [21]. Furthermore, the probability-weighted moment (PWM) method may be mentioned, which has been investigated by many researchers. This method, originally proposed in [22], is commonly employed in theoretical and empirical studies and has produced frameworks that are more stable against outliers. The PWM method may be generally used to estimate the parameters of a distribution whose inverse form cannot be formulated explicitly. The L-moment method, introduced in [23], is based on linear combinations of probability-weighted moments. The L-moment method [24] is more easily related to the distribution shape and spread than the PWM method. However, when estimating parameters, there is no reason to distinguish between the L-moment and PWM methods, because they lead to identical parameter estimates, at least in their usual settings. An R package named lmom [25] implements the L-moment method.
When a distribution can be written as a mixture model [26,27], we may use the expectation-maximization algorithm proposed in [28] to efficiently estimate its parameters. Nevertheless, this algorithm may present some disadvantages at the maximization step due to the dependence on initial values for multimodal likelihood functions [29]. To solve this and other issues discussed in the following sections, an alternative method based on a Bayesian approach was proposed in [30], known as the stochastic expectation-maximization algorithm. This algorithm includes a stochastic simulation step based on Gibbs sampling and the Metropolis–Hastings technique applied to the posterior distribution of the parameters; that is, it takes prior information and utilizes it to deal with such disadvantages.
An approach based on importance sampling was used in [31] to estimate the shape and scale parameters of the generalized exponential distribution, in addition to the Gibbs and Metropolis samplers to predict the behavior of the distribution. To the best of our knowledge, this approach has not been employed for the IGL distribution.
The main objective of this investigation is to use a Bayesian approach to obtain reliable estimators of the three-parameter IGL distribution. Specifically, we apply a Monte Carlo Markov chain (MCMC) algorithm to guarantee the coverage of the stationary distribution support. Our secondary objectives are: (i) to conduct a simulation study to assess the performance of some posterior distributional characteristics, such as the mean, utilizing MCMC methods; and (ii) to apply the proposed method to real-world data related to the engineering area.
The outline of the paper is as follows. Section 2 presents the three-parameter IGL distribution. In Section 3, we introduce a classical approach to estimate the parameters of this distribution, whereas in Section 4, we provide the numerical approximation of the posterior distributions to make Bayesian inference by using MCMC methods. In Section 5, Monte Carlo simulation results are reported and discussed. Section 6 applies the obtained results to a real-world data set from the area of copper metallurgical engineering to illustrate the potentiality of the Bayesian analysis in the three-parameter IGL distribution. Finally, in Section 7, we sketch the conclusions of this study.

2. The Type I Generalized Logistic Distribution and Related Distributions

Let the random variable W follow an IGL distribution with location ( μ ), scale ( σ ), and shape (b) parameters. Then, the PDF and cumulative distribution function of W are, respectively, given by
$$ f_W(w;\mu,b,\sigma)=\frac{b}{\sigma}\,\frac{e^{-(w-\mu)/\sigma}}{\left(1+e^{-(w-\mu)/\sigma}\right)^{b+1}},\qquad F_W(w;\mu,b,\sigma)=\left(1+e^{-(w-\mu)/\sigma}\right)^{-b},\quad (1) $$
with $w\in\mathbb{R}$, $\mu\in\mathbb{R}$, $b>0$, $\sigma>0$, and "e" denoting the exponential function. The family of IGL distributions consists of members with skewed PDFs and coefficients of kurtosis greater than that of the logistic distribution. If $W$ is an IGL distributed random variable, then $-W$ has a type II generalized logistic (IIGL) distribution [32]. Additional generalizations of the logistic distribution are discussed in [1]. For $b=1$, the distribution is symmetric; for $b<1$, it is skewed to the left (negative asymmetry); and for $b>1$, it is skewed to the right (positive asymmetry).
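As a quick numerical check, the PDF and CDF in (1) can be sketched in Python (illustrative helper functions, not code from the paper; the names `igl_pdf` and `igl_cdf` are ours):

```python
import numpy as np

def igl_pdf(w, mu=0.0, sigma=1.0, b=1.0):
    """PDF of the type I generalized logistic (IGL) distribution, Eq. (1)."""
    z = np.exp(-(w - mu) / sigma)
    return (b / sigma) * z / (1.0 + z) ** (b + 1.0)

def igl_cdf(w, mu=0.0, sigma=1.0, b=1.0):
    """CDF of the IGL distribution, Eq. (1)."""
    return (1.0 + np.exp(-(w - mu) / sigma)) ** (-b)
```

For b = 1 the expressions reduce to the standard logistic distribution, so `igl_cdf(0.0)` returns 0.5 and `igl_pdf(0.0)` returns 0.25.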
The moment generating function of the IGL distribution is stated as
$$ M_W(r)=e^{\mu r}\,\frac{\Gamma(1-\sigma r)\,\Gamma(b+\sigma r)}{\Gamma(b)},\qquad -\frac{b}{\sigma}<r<\frac{1}{\sigma},\quad (2) $$
where $\Gamma$ is the usual gamma function defined as $\Gamma(z)=\int_0^{\infty}x^{z-1}e^{-x}\,dx$, for $z>0$. From (2), it follows that
$$ \mathrm{E}(W)=\mu+\sigma\,(\psi(b)-\psi(1)),\qquad \mathrm{Var}(W)=\sigma^{2}\,(\psi'(1)+\psi'(b)),\quad (3) $$
with $\psi(z)=\Gamma'(z)/\Gamma(z)$ and $\psi'(z)=d\psi(z)/dz$ being the digamma and trigamma functions, respectively. From the formulas established in (3), the coefficient of skewness for $W$, corresponding to the third standardized moment, is formulated as
$$ \mathrm{Skew}(W)=\frac{\psi''(b)-\psi''(1)}{\left(\psi'(b)+\psi'(1)\right)^{3/2}}=:\mathrm{Skew}_W(b),\quad (4) $$
where $\psi''(z)=d^{2}\psi(z)/dz^{2}$ is the tetragamma function. Note that the expression given in (4) does not depend on the parameter $\sigma$.
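These moment formulas are straightforward to evaluate with SciPy's polygamma functions. The following sketch (illustrative helpers of our own, assuming the tetragamma form of the skewness in (4)) also exhibits the limits of the skewness range, namely $(-2, 1.1396)$, that reappear in the moment estimation of Section 3:

```python
from scipy.special import digamma, polygamma

def igl_mean(mu, sigma, b):
    # E(W) = mu + sigma * (psi(b) - psi(1)), psi = digamma, Eq. (3)
    return mu + sigma * (digamma(b) - digamma(1.0))

def igl_var(sigma, b):
    # Var(W) = sigma^2 * (psi'(1) + psi'(b)), psi' = trigamma, Eq. (3)
    return sigma ** 2 * (polygamma(1, b) + polygamma(1, 1.0))

def igl_skew(b):
    # Skew(W) = (psi''(b) - psi''(1)) / (psi'(b) + psi'(1))^(3/2), Eq. (4)
    num = polygamma(2, b) - polygamma(2, 1.0)
    den = (polygamma(1, b) + polygamma(1, 1.0)) ** 1.5
    return num / den
```

Here `igl_skew(1.0)` is exactly 0 (the symmetric case), while `igl_skew(b)` approaches $-2$ as $b \to 0$ and approaches $-\psi''(1)/\psi'(1)^{3/2} \approx 1.1395$ as $b \to \infty$.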
For the sake of comparing how flexible a generalized logistic distribution can be when incorporating a fourth parameter, in relation to the IGL case, we introduce the type IV generalized logistic (IVGL) distribution with parameters $\mu$, $\sigma$, $\alpha$ and $\beta$. If $M\sim\mathrm{IVGL}(\alpha,\beta)$, then we get
$$ \mathrm{Skew}(M)=\frac{\psi''(\alpha)-\psi''(\beta)}{\left(\psi'(\alpha)+\psi'(\beta)\right)^{3/2}}=:\mathrm{Skew}_M(\alpha,\beta).\quad (5) $$
In Figure 1a, we show the effect of the shape parameter b for the IGL distribution when the other parameters μ and σ take different values. It is clear from this figure that the distribution is negatively skewed for b < 1 and positively skewed for b > 1. The IGL distribution is flexible in the sense that the rate of change of the coefficient of skewness is greater for negative values than for positive values in relation to the other generalized logistic distributions. In Figure 1b, we show the flexibility of the IVGL distribution to capture asymmetry, which is more pronounced for small values of the parameter β, approximately when β < 0.4; otherwise, the additional parameter of the IVGL distribution is not particularly relevant. For the IIGL distribution, the skewness is a decreasing function of b. Note that if the value of b tends to infinity, then the IIGL distribution has heavier tails. We observe that the coefficient of skewness is approximately constant for values of the shape parameter greater than 0.025. Therefore, the IGL distribution can be used in robustness studies of classical procedures, since extreme values are often observed in real-world data.

3. Inference: Classical Approaches

Next, we describe the moment method of estimation. The expressions stated in (3) are useful to find the moment estimators of $\mu$, $\sigma$ and $b$, namely $\hat\mu_M$, $\hat\sigma_M$ and $\hat b_M$, respectively. The (moment-based) estimator of $\mathrm{Skew}(W)$ for a sample $W_n=(W_1,\dots,W_n)$ from the $\mathrm{IGL}(\mu,\sigma,b)$ distribution is given by
$$ \widehat{\mathrm{Skew}}(W_n)=\frac{(n-3)^{-1}\sum_{i=1}^{n}(W_i-\bar W_n)^{3}}{\left[(n-1)^{-1}\sum_{i=1}^{n}(W_i-\bar W_n)^{2}\right]^{3/2}}.\quad (6) $$
If $-2<\widehat{\mathrm{Skew}}(W_n)<1.1396$, with $\widehat{\mathrm{Skew}}(W_n)$ being defined in (6), then it is possible to obtain a unique solution $\hat b_M$ from the condition stated as
$$ \frac{(n-3)^{-1}\sum_{i=1}^{n}(W_i-\bar W_n)^{3}}{\left[(n-1)^{-1}\sum_{i=1}^{n}(W_i-\bar W_n)^{2}\right]^{3/2}}=\frac{\psi''(\hat b_M)-\psi''(1)}{\left(\psi'(\hat b_M)+\psi'(1)\right)^{3/2}}.\quad (7) $$
Thus, under the condition given in (7), we obtain the moment estimators for $\mu$ and $\sigma$ by analytically solving the equations established as
$$ \frac{1}{n-1}\sum_{i=1}^{n}(W_i-\bar W_n)^{2}=\hat\sigma_M^{2}\,(\psi'(1)+\psi'(\hat b_M)),\qquad \bar W_n=\hat\mu_M+\hat\sigma_M\,(\psi(\hat b_M)-\psi(1)). $$
Moreover, as noted in [10], $(\hat\mu_M,\hat\sigma_M,\hat b_M)$ are consistent and asymptotically unbiased for $(\mu,\sigma,b)$ when $\hat b_M$ exists. However, from a practical numerical point of view, we see in our simulation study that the degree of sample asymmetry and the sample size affect the existence of $\hat b_M$: the restriction on $\widehat{\mathrm{Skew}}(W_n)$ is not guaranteed to hold, in which case $\hat b_M$ does not satisfy the condition given in (7).
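The moment method can be sketched as follows (an illustrative implementation with hypothetical helper names; the admissible-skewness check follows the condition in (7), and `brentq` solves the one-dimensional equation for the shape parameter):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma, polygamma

def skew_b(b):
    # theoretical IGL coefficient of skewness as a function of b
    return (polygamma(2, b) - polygamma(2, 1.0)) / \
           (polygamma(1, b) + polygamma(1, 1.0)) ** 1.5

def moment_estimates(w):
    n = len(w)
    wbar = w.mean()
    m2 = np.sum((w - wbar) ** 2) / (n - 1)
    m3 = np.sum((w - wbar) ** 3) / (n - 3)
    skew_hat = m3 / m2 ** 1.5
    if not -2.0 < skew_hat < 1.1396:
        # outside the admissible range, b_M does not exist
        raise ValueError("sample skewness outside (-2, 1.1396)")
    b_hat = brentq(lambda b: skew_b(b) - skew_hat, 1e-6, 1e6)  # solve (7)
    sigma_hat = np.sqrt(m2 / (polygamma(1, 1.0) + polygamma(1, b_hat)))
    mu_hat = wbar - sigma_hat * (digamma(b_hat) - digamma(1.0))
    return mu_hat, sigma_hat, b_hat
```

The `ValueError` branch makes the non-existence problem discussed above concrete: for samples whose empirical skewness falls outside the admissible range, no moment estimate of b can be produced.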
Next, we consider the ML method of estimation. Assuming $W_n$ is a sample of size $n$ from the $\mathrm{IGL}(\mu,\sigma,b)$ distribution, the log-likelihood function of $\theta=(\mu,\sigma,b)$ is formulated as $\ell:=\ell(\mu,\sigma,b)$, where
$$ \ell(\mu,\sigma,b)=n\log(b)-n\log(\sigma)-\frac{1}{\sigma}\sum_{i=1}^{n}(W_i-\mu)-(b+1)\sum_{i=1}^{n}\log\left(1+e^{-(W_i-\mu)/\sigma}\right).\quad (8) $$
From (8), we obtain a closed-form expression for estimating $b$ by the ML method, that is, we reach
$$ \hat b(\mu,\sigma):=\hat b=\bar T_n^{\,-1},\quad (9) $$
with $\bar T_n(\mu,\sigma,W_n):=\bar T_n=(1/n)\sum_{i=1}^{n}\log(1+e^{-(W_i-\mu)/\sigma})$. When plugging the estimator $\hat b$ into the log-likelihood function, we get the profile log-likelihood function, denoted in general by $\ell^*(\theta,g(\theta))$ (or concentrated log-likelihood function, a terminology borrowed from the ML theory, where the insertion of certain partial solutions $\phi=g(\theta)$ leads to such a concentration). Hence, the corresponding profile log-likelihood function is defined as
$$ \ell^*(\mu,\hat b,\sigma)=-n\bar T_n-n\log(n\bar T_n)+H(\mu,\sigma,W_n),\quad (10) $$
where $H$ is a function that does not depend on $\hat b$, given by
$$ H(\mu,\sigma,W_n)=n\,(\log(n)-\log(\sigma)-1)-n\bar Y_n,\quad (11) $$
with $\bar Y_n=(1/n)\sum_{i=1}^{n}(W_i-\mu)/\sigma$. Therefore, with a usual optimization algorithm to maximize the expression stated in (10), we find the ML estimate $\hat\sigma$. However, the ML estimation in the IGL distribution has some problems. In [10], it was shown that there is a path in the parameter space along which the likelihood function becomes unbounded.
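The profiling step can be sketched as follows (illustrative code with our own function names; it checks numerically that plugging $\hat b$ from (9) into the full log-likelihood (8) reproduces the profile form (10)):

```python
import numpy as np

def loglik(mu, sigma, b, w):
    # full log-likelihood, Eq. (8)
    n = len(w)
    z = (w - mu) / sigma
    return (n * np.log(b) - n * np.log(sigma) - z.sum()
            - (b + 1.0) * np.sum(np.log1p(np.exp(-z))))

def b_hat(mu, sigma, w):
    # closed-form ML estimator of b given (mu, sigma), Eq. (9)
    tbar = np.mean(np.log1p(np.exp(-(w - mu) / sigma)))
    return 1.0 / tbar

def profile_loglik(mu, sigma, w):
    # profile log-likelihood, Eq. (10)
    n = len(w)
    tbar = np.mean(np.log1p(np.exp(-(w - mu) / sigma)))
    ybar = np.mean((w - mu) / sigma)
    H = n * (np.log(n) - np.log(sigma) - 1.0) - n * ybar
    return -n * tbar - n * np.log(n * tbar) + H
```

For any $(\mu,\sigma)$, `loglik(mu, sigma, b_hat(mu, sigma, w), w)` equals `profile_loglik(mu, sigma, w)`, a convenient sanity check before handing (10) to a numerical optimizer.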
We present the Fisher information matrix (FIM) of $\theta$ in order to use it as a guide to define prior distributions for Bayesian inference on $\theta=(\mu,b,\sigma)=(\theta_1,\theta_2,\theta_3)$. The FIM of $\theta$, say $K(\theta)$, has elements given by
$$ \kappa_{rs}=-n\,\mathrm{E}\!\left[\frac{\partial^{2}\ell}{\partial\theta_r\,\partial\theta_s}\right],\qquad r,s\in\{1,2,3\},\quad (12) $$
with $\ell$ being the corresponding log-likelihood function. Thus, the FIM of $\theta$ is stated as $K(\theta)=nK_1(\theta)$, where $K_1(\theta)$ is the FIM when $n=1$, that is,
$$ K_1(\theta)=\mathrm{E}\!\left[\frac{\partial\ell}{\partial\theta_r}\,\frac{\partial\ell}{\partial\theta_s}\right]=-\mathrm{E}\!\left[\frac{\partial^{2}\ell}{\partial\theta_r\,\partial\theta_s}\right],\qquad r,s\in\{1,2,3\}.\quad (13) $$
In particular, we have
$$ \kappa_{11}=\frac{nb}{\sigma^{2}(b+2)},\qquad \kappa_{12}=\frac{n}{\sigma(b+1)},\qquad \kappa_{13}=\frac{nb}{\sigma^{2}(b+2)}\,(\psi(b+1)-\psi(2)),\qquad \kappa_{22}=\frac{n}{b^{2}}, $$
$$ \kappa_{23}=\frac{n}{\sigma(b+1)}\,(\psi(b)-\psi(2)),\qquad \kappa_{33}=\frac{n}{\sigma^{2}}\left[1+\frac{b}{b+2}\left(\psi'(b+1)+\psi'(2)+(\psi(b+1)-\psi(2))^{2}\right)\right].\quad (14) $$
Note that the FIM does not depend on $\mu$, so that we can write $K_1(\theta)=:K_1(\tilde\theta)$, with $\tilde\theta=(\sigma,b)$. Moreover, we observe that, if $b=1$, difficulties appear when applying the Fisher asymptotic theory, since the FIM becomes singular. Additional details can be found in [33].

4. Inference: A Bayesian Approach

One of the first steps in Bayesian statistics is to express the available information in terms of a PDF. Obtaining a prior distribution for a single parameter can be simple, if some experience in similar situations has been established. In Bayesian inference, the posterior PDF $p(\theta|W_n)$ is constructed from the likelihood function $p(W_n|\theta)$ and the prior PDF $p(\theta)$ by means of what is known as the Bayes rule, that is,
$$ p(\theta|W_n)=\frac{1}{p(W_n)}\,p(W_n|\theta)\,p(\theta)\propto p(W_n|\theta)\,p(\theta). $$
In this context, $p(W_n)$ is known as the marginal likelihood function and is given by
$$ p(W_n)=\int_{\Theta}p(W_n|\theta)\,p(\theta)\,d\theta,\quad (15) $$
which is often extremely hard to calculate when the likelihood function has a complex structure, as is the case when considering the IGL distribution. Hence, the Bayesian estimator of any function $g$ of $\theta$ under the squared error loss function is defined as
$$ \hat g(W_n):=\mathrm{E}(g(\theta)\,|\,W_n)=\frac{1}{p(W_n)}\int_{\Theta}g(\theta)\,p(W_n|\theta)\,p(\theta)\,d\theta,\quad (16) $$
and under the absolute error loss function, it is the posterior median of $g(\theta)$, that is,
$$ \tilde g(W_n):=\mathrm{Med}(g(\theta)\,|\,W_n). $$
Next, we conduct the prior specification. For a parameter vector, establishing a joint prior distribution can be much more complicated. It is also difficult to obtain information on the dependence of the parameters and express it in terms of a joint distribution.
We can consider a prior distribution for the parameter vector $\theta$, say $\theta=(\theta_1,\theta_2,\theta_3)$, as $p(\theta)=p(\theta_1|\theta_2,\theta_3)\times p(\theta_2|\theta_3)\times p(\theta_3)$. Thus, by considering that each parameter is independent of the others, the last expression reduces to $p(\theta)=p(\theta_1)\times p(\theta_2)\times p(\theta_3)$.
The posterior PDF of $\theta=(\mu,\sigma,b)$ is, up to the constant factor $p(W_n)$, stated as
$$ p(\theta|W_n)\propto b^{n}\,\sigma^{-n}\,e^{-S_n/\sigma+n\mu/\sigma}\prod_{i=1}^{n}\left(1+e^{-(W_i-\mu)/\sigma}\right)^{-(b+1)}p(\theta),\quad (17) $$
where
$$ p(W_n)=\int_{0}^{+\infty}\!\!\int_{0}^{+\infty}\!\!\int_{-\infty}^{+\infty} b^{n}\,\sigma^{-n}\,e^{-S_n/\sigma+n\mu/\sigma}\prod_{i=1}^{n}\left(1+e^{-(W_i-\mu)/\sigma}\right)^{-(b+1)}p(\theta)\,d\mu\,d\sigma\,db,\quad (18) $$
with $S_n:=\sum_{i=1}^{n}W_i$ and $p(\theta)$ acquiring the structures specified above (appropriate for its application). Therefore, the posterior PDF of $\theta=(\mu,\sigma,b)$ given the data is formulated as
$$ p(\mu,\sigma,b\,|\,W_n)=\frac{1}{p(W_n)}\,b^{n}\,\sigma^{-n}\,e^{-S_n/\sigma+n\mu/\sigma}\prod_{i=1}^{n}\left(1+e^{-(W_i-\mu)/\sigma}\right)^{-(b+1)}p(\mu,\sigma,b).\quad (19) $$
Note that the integrals stated in the expressions defined in (15) and (18), as well as the solution of the equation established in (16), do not have simple explicit closed forms. Moreover, in this paper, we can specify the prior PDF without supposing independence between μ, σ and b, via the Jeffreys prior. Note that the Jeffreys prior does not work well for models with multidimensional parameters [34]. Nonetheless, we use it in the next section to find the Bayesian estimates in a particular case for the sake of comparison.
Next, we proceed with the MCMC estimation. We focus on traditional tuned search procedures for the proposed and associated priors. We tested the method proposed in [35], which is available for R, Python, MATLAB, and C++; however, using the R implementation on our simulated scenarios, it was not possible to achieve reasonable results.
Considering the MCMC algorithm, that is, a candidate $\theta^{(k+1)}$ with distribution defined by $q(\cdot\,|\,\theta^{(k)})$ being accepted or rejected according to the Metropolis–Hastings (MH) algorithm, we set
$$ \alpha(\theta^{(k)};\theta^{(k+1)}):=\min\left\{1,\,\frac{p(\theta^{(k+1)}|W_n)\,q(\theta^{(k)}\,|\,\theta^{(k+1)})}{p(\theta^{(k)}|W_n)\,q(\theta^{(k+1)}\,|\,\theta^{(k)})}\right\},\quad (20) $$
where $p$ is defined in (17) and $q$ is a specific conditional PDF, also called the candidate kernel. Considering the likelihood functions previously defined and the general prior specification given, the posterior distribution has a complex form. Then, the Bayesian estimation can be implemented by employing MCMC methods, which make it simpler to obtain efficient sampling from the marginal posterior distributions. A crucial step in designing an effective sampling regime, implemented in the acceptance/rejection step stated in (20), is the choice of a kernel $q$.
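As an illustration of the acceptance/rejection step in (20), the following random-walk Metropolis sampler is a minimal sketch, not the paper's tuned kernels: it works on $(\mu,\log\sigma,\log b)$ with a symmetric Gaussian proposal, so the $q$-ratio in (20) cancels, and it accepts a user-supplied log-prior:

```python
import numpy as np

def log_posterior(theta, w, log_prior):
    # unnormalized log-posterior of (mu, log sigma, log b); the terms
    # log_sigma + log_b are the Jacobian of the change of variables
    mu, log_sigma, log_b = theta
    sigma, b = np.exp(log_sigma), np.exp(log_b)
    z = (w - mu) / sigma
    ll = (len(w) * (np.log(b) - np.log(sigma)) - z.sum()
          - (b + 1.0) * np.sum(np.log1p(np.exp(-z))))
    return ll + log_prior(mu, sigma, b) + log_sigma + log_b

def metropolis(w, log_prior, n_iter=6000, step=0.08, seed=1):
    rng = np.random.default_rng(seed)
    # crude start based on the logistic special case (b = 1)
    theta = np.array([np.median(w), np.log(w.std() * np.sqrt(3.0) / np.pi), 0.0])
    lp = log_posterior(theta, w, log_prior)
    chain = np.empty((n_iter, 3))
    for k in range(n_iter):
        prop = theta + step * rng.standard_normal(3)  # symmetric proposal
        lp_prop = log_posterior(prop, w, log_prior)
        if np.log(rng.random()) < lp_prop - lp:       # acceptance step (20)
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain
```

The returned chain can then be summarized by its posterior mean or median, as in (16), after discarding a burn-in period.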

5. Simulation Studies

We recall that, for large samples, the biases of the ML and Bayesian estimators are negligible. However, for small and moderate sample sizes, the second-order biases of the ML estimators may be large. We use Monte Carlo simulation to evaluate the finite sample performance of the ML and Bayes estimators for the parameter θ = ( μ , σ , b ) of the IGL distribution, with the PDF given in (1).
In order to analyze the point estimation results from the ML and Bayesian methods, we compute, for each sample size $n\in\{15,30,50,100\}$, each specified scenario representing the degree of skewness: severe negative $\theta=(0,2,0.05)$ (case 1); moderate negative $\theta=(0,4,0.01)$ (case 2); zero $\theta=(0,1,1)$ (case 3); and moderate positive $\theta=(0,6,10)$ (case 4). For each of the two estimation procedures, we report the true bias ($\mathrm{E}(\hat\theta)-\theta$); the relative bias ($(\mathrm{E}(\hat\theta)-\theta)/\theta$, with $\mathrm{E}(\hat\theta)$ estimated by Monte Carlo experiments); and the root mean squared error, that is, $\sqrt{\mathrm{MSE}}$, where MSE is the mean squared error estimated from the 5000 Monte Carlo replicates. The values of IGL distributed random variables were generated using the inversion method. More specifically, random samples from $W\sim\mathrm{IGL}(\mu,\sigma,b)$ can be easily obtained by inverting the CDF associated with the PDF defined in (1), that is, employing $W=\mu-\sigma\log(U^{-1/b}-1)$, where $U$ is uniformly distributed on $(0,1)$. To find the ML estimate, the log-likelihood function defined in (8) is maximized using the Newton–Raphson algorithm with analytic first and second derivatives, both implemented in the glogisfit function of the glogis package [36].
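The inversion step can be sketched as (an illustrative helper of our own, following $W=\mu-\sigma\log(U^{-1/b}-1)$):

```python
import numpy as np

def rigl(n, mu=0.0, sigma=1.0, b=1.0, rng=None):
    """Draw n IGL(mu, sigma, b) variates by inverting the CDF in (1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    return mu - sigma * np.log(u ** (-1.0 / b) - 1.0)
```

Since $F(w)=(1+e^{-(w-\mu)/\sigma})^{-b}$, the median of $\mathrm{IGL}(0,1,2)$ is $-\log(2^{1/2}-1)\approx 0.8814$, and about half of the simulated draws fall below it, which gives a quick check of the generator.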
To find the Bayesian estimate, we employ the MCMC algorithm by generating a candidate θ^(k+1) accepted with probability α(θ^(k); θ^(k+1)) given in (20). The Geweke and multivariate Gelman–Rubin criteria were used to evaluate convergence. Moreover, we utilize the Ljung-Box test to detect autocorrelations in each chain.
Next, we conduct the classical estimation approaches. In order to show numerically the difficulties that the ML and moment methods have when estimating the parameter vector θ, even if convergence is achieved in some replicates, we implement these methods on the generated simulations. Table 1 reports the percentages of updates by means of the moment estimation. In addition, we apply the minimum chi-square estimation procedure [37], implemented in [38] for the mipfp package of R, without achieving reliable results. We do not show these results in the paper due to space restrictions. Table 2 reports the results obtained for the estimates with the moment method and 1000 replicates for the sample sizes, cases, parameters, and indicators mentioned. These results are not satisfactory and, for the ML method, we do not report the results because they are totally unsatisfactory.
Now, the Bayesian approach is conducted. In the case where the parameter vector θ of the IGL distribution incorporates the three parameters, that is, μ, σ and b, we employ the traditional MCMC algorithm as an alternative to the classical estimation. Understanding that the choice of prior distributions is undoubtedly the most controversial aspect of any Bayesian analysis [39,40], we proceed to specify two prior classes (informative and non-informative) for the parameter space of the IGL distribution.
We use the most common convergence criteria in the literature on the topic, namely the Gelman–Rubin, Geweke, and Ljung-Box methods. The Gelman–Rubin statistic is built with at least two chains, comparing the variance within and between them. If the variability between the chains is similar to that within the chains, they are considered to have converged; in terms of its practical use, the value of the statistic must be less than 1.2 [41]. The Geweke test is constructed from a single chain, usually by taking the first 10% and the last 50% of the chain and comparing the means of both partitions, where significant differences indicate that the chain does not converge. The Ljung-Box test is applied to each chain and evaluates the correlation between chain values at different lags; in particular, it was evaluated at a maximum of 15 lags. If any significant correlation is observed, then the chain is autocorrelated and is not considered a random sample. We examine the smallest p-value over the 15 lags: if it is not significant, then the correlations at the other lags are not significant either.
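For instance, the Gelman–Rubin statistic for a single parameter can be computed as follows (a standard textbook form, not the paper's exact implementation):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for m chains of length n (one parameter)."""
    chains = np.asarray(chains, dtype=float)       # shape (m, n)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)                # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()          # within-chain variance
    var_hat = (n - 1) / n * W + B / n              # pooled variance estimate
    return np.sqrt(var_hat / W)                    # < 1.2 suggests convergence
```

Chains sampling the same distribution give values near 1, while chains stuck in different regions push the factor above the 1.2 threshold mentioned above.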
We consider a prior distribution given by p(θ) = p(μ, σ, b) = p(μ) p(σ) p(b), where p(μ), p(σ) and p(b) are specified as follows. For the case of severe negative skewness (case 1), we use μ ∼ Normal(γ1, δ1), σ ∼ Gamma(α1, β1) and b ∼ Beta(ν1, ψ1). For moderate negative skewness (case 2), we assume μ ∼ Normal(γ2, δ2), σ ∼ Lognormal(η, κ) and b ∼ Inverse-gamma(ρ, τ). When symmetry exists (case 3), we state μ ∼ Normal(γ3, δ3), σ ∼ Gamma(α2, β2) and b ∼ Gamma(α3, β3). For the case of positive skewness (case 4), we employ a jointly dependent prior for θ1, θ2 and θ3. We numerically tested different priors and hyperparameter values to reach convergence of the chains within a sufficient number of iterations. Understanding that there are other methods to set hyperparameter values, for example, empirical Bayes and James–Stein estimators, the development and implementation of these other methods will be studied in future work.
The criterion for eliciting the kernels was mainly oriented to choosing those structures that allowed us to make the algorithm more efficient, that is, to decrease the rejection rate in the step that involves the Metropolis–Hastings ratio, as stated in (20). Consequently, for the simulation study, we have that:
  • Case 1: E(μ_t | μ★, σ★, b★) = 0, Var(μ_t | μ★, σ★, b★) = (μ★ + σ★/2 + b★)^{1/2},
    E(σ_t | μ★, σ★, b★) = σ★ + b★, Var(σ_t | μ★, σ★, b★) = (μ★)² + 3σ★,
    E(b_t | μ★, σ★, b★) = (μ★)²/5 + b★, Var(b_t | μ★, σ★, b★) = σ★.
  • Case 2: E(μ_t | μ★, σ★, b★) = 0, Var(μ_t | μ★, σ★, b★) = (μ★)² + σ★/4 + b★/10,
    E(σ_t | μ★, σ★, b★) = σ★, Var(σ_t | μ★, σ★, b★) = (μ★)² + σ★ + b★,
    E(b_t | μ★, σ★, b★) = b★, Var(b_t | μ★, σ★, b★) = (μ★)² + σ★.
  • Case 3: E(μ_t | μ★, σ★, b★) = μ★, Var(μ_t | μ★, σ★, b★) = (σ★ + b★)²,
    E(σ_t | μ★, σ★, b★) = σ★, Var(σ_t | μ★, σ★, b★) = (μ★/2)² + b★.
  • Case 4: E(μ_t | μ★, σ★, b★) = 0, Var(μ_t | μ★, σ★, b★) = (μ★)² + σ★/4 + b★/10,
    E(σ_t | μ★, σ★, b★) = (μ★)² + σ★, Var(σ_t | μ★, σ★, b★) = (μ★)² + σ★/6 + b★/10,
    E(b_t | μ★, σ★, b★) = 9 + (μ★)² + b★/10, Var(b_t | μ★, σ★, b★) = (μ★)² + σ★/2,
where ★ denotes the current value of the corresponding parameter in the MCMC chain.
Next, we consider dependent and non-informative priors. A class of advisable prior distributions is of the non-informative type, whose construction method is relatively simple and does not privilege a specific behavior for any element of the parameter vector. This can be achieved via a Jeffreys prior defined as
$$ p(\theta)\propto\left(\det(K(\theta))\right)^{1/2},\quad (21) $$
where $K(\theta)$ is defined with the elements given in (12), and $\tilde\theta=(\sigma,b)$. We observe that the behavior of the Jeffreys prior depends only on $\tilde\theta=(\sigma,b)$. In Figure 2, we show the behavior of $(\det(K(\tilde\theta)))^{1/2}$ on the subspace of $\theta$, say $(\sigma,b)\in(0,7.0)\times(0,12.5)$, highlighting four regions: (a) $(0,0.5)\times(0,0.5)$; (b) $(0,0.5)\times(4,6)$; (c) $(4,6)\times(0,0.5)$; and (d) $(5,7)\times(9,11)$. Thus, in general, note that the PDF of a Jeffreys prior takes extreme values when the scale parameter ($\sigma$) is close to the origin, and to a greater degree when the shape parameter ($b$) is close to the origin (see Figure 3a,b, respectively), whereas if we move away from the origin (see Figure 3c,d), the Jeffreys prior PDF (denoted as z in this figure) takes smaller values. This is the reason why it is recommended in cases where the empirical asymmetry is positive, that is, $\theta=(0,6,10)$, in order to avoid those parts of the domain where the PDF changes very strongly, which damages the efficiency of the algorithm. The above does not preclude the use of independent and non-informative priors. To achieve a reasonable rejection rate, these results indicate that a Jeffreys prior on region (c) should be considered.
Table 3 and Table 4 report the results obtained for the posterior mean and median estimates, respectively, with the Bayesian method and 1000 replicates for the sample sizes, cases, parameters, and indicators mentioned. These results are quite satisfactory and helpful. We use the Gelman–Rubin, Geweke, and Ljung-Box convergence criteria, with their results being reported in Table 5. From these results, we observe that all the criteria employed here indicate that all chains converge, except, although debatably, in the case where the distribution has a severe negative skewness and the sample size is small. In a future study, we hope to provide a detailed discussion on the selection of prior distributions. We recall that the main objective of our study is to show that Bayesian estimation is an alternative to the estimation problem of the three-parameter IGL distribution.

6. Empirical Application

Next, we present an application to real-world data corresponding to the solvent extraction (SX) process of copper. This process considers several stages of extraction and re-extraction with their respective operating variables. These variables, as in any process, often generate difficulties in the operation, such as a lack of efficiency in the copper recovery itself. Generally, difficulties occur in the following stage, the electrowinning (EW), resulting from the dragging of unwanted impurities into the electrolyte. One of the purposes in the metallurgical area is to analyze the process in the SX and EW stages, identifying the most relevant variables according to their: (i) impact on the process; (ii) empirical and design operational conditions; and (iii) control parameters. The pregnant leach solution (an impure aqueous solution, known as PLS for short), coming from the leaching process, feeds the SX area where, by means of an organic solvent, the copper is transferred to a pure and concentrated copper sulfate solution, called rich electrolyte, which is sent to the EW area. The total PLS flow that feeds the plant is one of the main factors that determine the efficiency of copper extraction [42].
The daily PLS flow throughout the extraction process in the area of SX sulfides (W) was registered between the years 2015 and 2019, with a sample size of n = 728. A descriptive summary, a histogram, and the empirical cumulative distribution of the data set are shown in Table 6 and Figure 4, respectively. We can observe from Table 6 and Figure 4 that the data distribution has a moderate positive skewness. This is a very complex situation for estimating the parameters of the IGL distribution by the moment and ML methods, as shown in Section 3 and in the simulation study of Section 5. Hence, we use the Bayesian estimation method.
To apply Bayesian estimation, we consider independent and informative prior distributions for the parameter θ = ( μ , σ , b ), with p ( μ , σ , b ) = p ( μ ) × p ( σ ) × p ( b ), where μ ∼ Normal ( γ , δ ), σ ∼ Gamma ( ρ , τ ), and b ∼ Gamma ( ρ , τ ). The hyperparameters are set by utilizing the criteria mentioned above. For the transition kernels, we have:
E ( μ t | μ , σ , b ) = μ , Var ( μ t | μ , σ , b ) = ( σ / 100 + b / 100 ) 2 ,
E ( σ t | μ , σ , b ) = σ , Var ( σ t | μ , σ , b ) = μ / 4000 + b / 120 ,
E ( b t | μ , σ , b ) = b , Var ( b t | μ , σ , b ) = μ / 10,000 + σ / 1000 .
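A minimal random-walk Metropolis sketch of this scheme is given below. For simplicity, it uses fixed Gaussian proposal scales rather than the state-dependent kernel variances displayed above, and illustrative hyperparameter values (γ = 0, δ = 10, ρ = 2, τ = 1); these choices are ours, not those of the study. The IGL log-likelihood follows the density f(w) = (b/σ) e^{−z} (1 + e^{−z})^{−(b+1)}, with z = (w − μ)/σ.

```python
import numpy as np

def igl_loglik(w, mu, sigma, b):
    """Log-likelihood of the type I generalized logistic (IGL) distribution."""
    z = (w - mu) / sigma
    # log(1 + exp(-z)) computed stably via logaddexp
    return np.sum(np.log(b / sigma) - z - (b + 1.0) * np.logaddexp(0.0, -z))

def log_posterior(w, mu, sigma, b, gamma=0.0, delta=10.0, rho=2.0, tau=1.0):
    """Unnormalized log-posterior: mu ~ Normal(gamma, delta),
    sigma, b ~ Gamma(rho, tau); hyperparameter values are illustrative."""
    if sigma <= 0.0 or b <= 0.0:
        return -np.inf
    log_prior = (-0.5 * ((mu - gamma) / delta) ** 2
                 + (rho - 1.0) * np.log(sigma) - tau * sigma
                 + (rho - 1.0) * np.log(b) - tau * b)
    return igl_loglik(w, mu, sigma, b) + log_prior

def metropolis(w, n_iter=4000, scales=(0.1, 0.1, 0.1), seed=3):
    """Random-walk Metropolis with fixed Gaussian proposal scales
    (a simplification of the state-dependent kernels used in the paper)."""
    rng = np.random.default_rng(seed)
    theta = np.array([np.median(w), np.std(w), 1.0])   # crude starting values
    logp = log_posterior(w, *theta)
    chain = np.empty((n_iter, 3))
    for t in range(n_iter):
        prop = theta + rng.normal(0.0, scales)
        logp_prop = log_posterior(w, *prop)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
            theta, logp = prop, logp_prop
        chain[t] = theta
    return chain

# Illustrative run on data simulated from the IGL via its inverse CDF:
rng = np.random.default_rng(0)
u = rng.uniform(size=500)
mu, sigma, b = 0.0, 1.0, 1.0
w = mu - sigma * np.log(u ** (-1.0 / b) - 1.0)
chain = metropolis(w)
```

After a burn-in, the averages of the retained draws approximate the posterior means of (μ, σ, b); a full implementation would also tune the proposal scales and monitor convergence as in Section 4.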
In the diagnostic analysis, the multivariate Gelman–Rubin factor is 1.01. The p-values of the Geweke test are given in Table 7, from where we observe that only the chain associated with the shape parameter b yields a significant result at the 5% level. Moreover, Figure 4 shows the histogram with the fitted IGL PDF (a) and the empirical cumulative distribution function with the fitted IGL cumulative distribution function (b), from where we can see that a good fit is achieved in comparison to the other estimation methods considered in this article. The Bayesian estimate of θ = ( μ , σ , b ) is reported in Table 8, and the autocorrelation functions (ACFs) of the chains are shown in Figure 5. Thus, with the daily PLS flow distribution characterized via the estimated parameter values, the entire LX-SX-EW circuit is better controlled, since the behavior of the PLS flow is better described. In this way, the control becomes more efficient.
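The ACFs of Figure 5 can be computed directly from the stored chains; a minimal numpy version, applied here to a simulated AR(1) series as a stand-in for an MCMC chain, is:

```python
import numpy as np

def acf(x, nlags=20):
    """Sample autocorrelation function of a chain up to nlags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, nlags + 1)])

# AR(1) series with lag-1 autocorrelation 0.8, mimicking a correlated chain:
rng = np.random.default_rng(42)
e = rng.normal(size=20000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.8 * x[t - 1] + e[t]
rho = acf(x, nlags=5)
```

A slowly decaying ACF, as for this series, indicates strong autocorrelation and suggests thinning or longer runs.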

7. Conclusions

In relation to the classical methods used for estimating the parameters of the type I generalized logistic distribution, that is, the maximum likelihood (ML) and moment estimators, we can conclude from a numerical point of view that: (a) for the ML estimator, as expected, there are convergence problems, which are clearly visible when the distribution has negative skewness; (b) there is no pattern in the parameter ranges or sample sizes for which the estimators behave well; and (c) for moment estimation, the convergence problems are accentuated, evidence that complements the more analytical results described in [10,43].
The proposed analysis of a real-world data set based on the type I generalized logistic distribution using the Bayesian approach markedly corrects the problems detected with the maximum likelihood and moment estimators. Another aspect in favor of our proposal is its ability to produce coherent estimates regardless of the range of the parameter values and the sample size. In general, the Bayesian approach turned out to be robust in the estimation of the parameters, because the average of the estimates is close to the parameter values with which the different samples were simulated, with quite small variance. Thus, this work showed that the classical estimation methods cannot provide consistent solutions to this problem. Therefore, in light of the foregoing, future research will analytically address solutions to improve or prevent the non-existence of the maximum likelihood estimator of the third parameter of the type I generalized logistic distribution. The exploration of different computational algorithms for estimating the parameters of a class of distributions could be carried out along the lines of the work proposed in [44].
As mentioned, some other commonly used classical estimation approaches are based on ordinary/weighted least squares [20], percentiles, minimum distances, and bootstrap methods [21]. Comparing additional estimation methods with those considered in the present study is beyond our objectives; however, we are planning such a comparison for future work. Once the estimation problem of the type I generalized logistic distribution is solved, different aspects related to sampling, inference, and modeling can be studied. Another important challenge is calculating the sample size when estimating the mean or other parameters of the type I generalized logistic distribution [45], as well as formulating more complex modeling structures, such as regression, temporal, spatial, functional, partial least squares, and errors-in-variables settings [46,47,48,49,50]. We hope to report findings on these topics in new investigations.

Author Contributions

Data curation, N.J.-L. and B.L.-Á.; formal analysis, B.L.-Á., J.F.-Z. and V.L.; investigation, N.J.-L., B.L.-Á., J.F.-Z. and V.L.; methodology, N.J.-L., B.L.-Á., J.P.N., J.F.-Z. and V.L.; writing—original draft, N.J.-L., B.L.-Á., J.P.N. and J.F.-Z.; writing—review and editing, V.L. All authors have read and agreed to the submitted version of the manuscript.

Funding

Research work by N.J.-L., B.L.-Á., J.P.N., and J.F.-Z. was partially supported by project grant number VRID Nº2021000209INI, from the Vice-Rectory for Research and Advanced Studies of the Universidad de Concepción, Chile. Research by V.L. was partially supported by FONDECYT, project grant number 1200525, from the National Agency for Research and Development (ANID) of the Chilean government under the Ministry of Science, Technology, Knowledge and Innovation.

Data Availability Statement

The data and codes used in this study are available from the authors on request.

Acknowledgments

The authors would also like to thank the editors and reviewers for their constructive comments, which led to improving the presentation of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; Wiley: New York, NY, USA, 1994; Volume 1–2. [Google Scholar]
  2. Balakrishnan, N.; Nevzorov, V.B. A Primer on Statistical Distributions; Wiley: New York, NY, USA, 2004. [Google Scholar]
  3. Balakrishnan, N. Handbook of the Logistic Distribution; CRC Press: Boca Raton, FL, USA, 1991. [Google Scholar]
  4. Lai, C.D. Generalized Weibull distributions. In Generalized Weibull Distributions; Springer: New York, NY, USA, 2014; pp. 23–75. [Google Scholar]
  5. Dubey, S.D. A new derivation of the logistic distribution. Nav. Res. Logist. Q. 1969, 16, 37–40. [Google Scholar] [CrossRef]
  6. Al-Marzouki, S.; Jamal, F.; Chesneau, C.; Elgarhy, M. Half logistic inverse Lomax distribution with applications. Symmetry 2021, 13, 309. [Google Scholar] [CrossRef]
  7. Athayde, E.; Azevedo, A.; Barros, M.; Leiva, V. Failure rate of Birnbaum-Saunders distributions: Shape, change-point, estimation and robustness. Braz. J. Probab. Stat. 2019, 3, 301–328. [Google Scholar] [CrossRef][Green Version]
  8. Afify, A.Z.; Altum, E.; Alizadeh, M.; Ozel, G.; Hamedani, G.G. The odd exponentiated half-logistic-G family: Properties, characterizations and applications. Chil. J. Stat. 2017, 8, 65–91. [Google Scholar]
  9. Balakrishnan, N.; Hossain, A. Inference for the Type II generalized logistic distribution under progressive Type II censoring. J. Stat. Comput. Simul. 2007, 77, 1013–1031. [Google Scholar] [CrossRef]
  10. Zelterman, D. Parameter estimation in the generalized logistic distribution. Comput. Stat. Data Anal. 1987, 5, 177–184. [Google Scholar] [CrossRef]
  11. Sreekumar, N.; Thomas, P.Y. Estimation of the parameters of Type-I generalized logistic distribution using order statistics. Commun. Stat. Theory Methods 2008, 37, 1506–1524. [Google Scholar] [CrossRef]
  12. Batchelor, R.A.; Orr, A.B. Inflation expectations revisited. Economica 1988, 55, 317–331. [Google Scholar] [CrossRef]
  13. Tolikas, K.; Koulakiotis, A.; Brown, R.A. Extreme risk and value-at-risk in the German stock market. Eur. J. Financ. 2007, 13, 373–395. [Google Scholar] [CrossRef]
  14. Walter, N.; Bergheim, S. Productivity, Growth Potential and Monetary Policy in EMU; Technical Report 42, Reports on European Integration; Publications Office of the European Union: Luxembourg, 2006.
  15. Hossain, A.; Willan, A.R. Approximate MLEs of the parameters of location-scale models under type II censoring. Statistics 2007, 41, 385–394. [Google Scholar] [CrossRef]
  16. Lagos-Álvarez, B.; Ferreira, G.; Porcu, E. Modified maximum likelihood estimation in autoregressive processes with generalized exponential innovations. Open J. Stat. 2014, 4, 620. [Google Scholar] [CrossRef][Green Version]
  17. Hossain, A.; Beyene, J.; Willan, A.R.; Hu, P. A flexible approximate likelihood ratio test for detecting differential expression in microarray data. Comput. Stat. Data Anal. 2009, 53, 3685–3695. [Google Scholar] [CrossRef]
  18. Zeileis, A.; Windberger, T. Glogis: Fitting and Testing Generalized Logistic Distributions. R Package Version 1.0-1 2018. Available online: https://CRAN.R-project.org/package=glogis (accessed on 15 January 2022).
  19. Abberger, K. ML-Estimation in the Location-Scale-Shape Model of the Generalized Logistic Distribution; Discussion Paper Series/CoFE, Vol. 02/15; Konstanz Universitat: Konstanz, Germany, 2002. [Google Scholar]
  20. Swain, J.J.; Venkatraman, S.; Wilson, J.R. Least-squares estimation of distribution functions in Johnson’s translation system. J. Stat. Comput. Simul. 1988, 29, 271–297. [Google Scholar] [CrossRef]
  21. Dey, S.; Alzaatreh, A.; Ghosh, I. Parameter estimation methods for the Weibull-Pareto distribution. Comput. Math. Methods 2021, 3, e1053. [Google Scholar] [CrossRef][Green Version]
  22. Greenwood, J.A.; Landwehr, J.M.; Matalas, N.C.; Wallis, J.R. Probability weighted moments: Definition and relation to parameters of several distributions expressable in inverse form. Water Resour. Res. 1979, 15, 1049–1054. [Google Scholar] [CrossRef][Green Version]
  23. Hosking, J.R.M. L-moments: Analysis and estimation of distributions using linear combinations of order statistics. J. R. Stat. Soc. 1990, 52, 105–124. [Google Scholar] [CrossRef]
  24. Lillo, C.; Leiva, V.; Nicolis, O.; Aykroyd, R.G. L-moments of the Birnbaum-Saunders distribution and its extreme value version: Estimation, goodness of fit and application to earthquake data. J. Appl. Stat. 2018, 45, 187–209. [Google Scholar] [CrossRef]
  25. Hosking, J.R.M. lmom: L-Moments. R Package Version 2.8 2019. Available online: https://CRAN.R-project.org/package=lmom (accessed on 15 January 2022).
  26. Kotz, S.; Leiva, V.; Sanhueza, A. Two new mixture models related to the inverse Gaussian distribution. Methodol. Comput. Appl. Probab. 2010, 12, 199–212. [Google Scholar] [CrossRef]
  27. Balakrishnan, N.L.; Gupta, R.; Kundu, D.; Leiva, V.; Sanhueza, A. On some mixture models based on the Birnbaum-Saunders distribution and associated inference. J. Stat. Plan. Inference 2011, 141, 2175–2190. [Google Scholar] [CrossRef]
  28. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39, 1–22. [Google Scholar]
  29. Celeux, G.; Govaert, G. A classification EM algorithm for clustering and two stochastic versions. Comput. Stat. Data Anal. 1992, 14, 315–332. [Google Scholar] [CrossRef][Green Version]
  30. Celeux, G.; Didier, C.; Diebolt, J. Stochastic versions of the EM algorithm: An experimental study in the mixture case. J. Stat. Comput. Simul. 1996, 55, 287–314. [Google Scholar] [CrossRef]
  31. Raqab, M.Z.; Madi, M.T. Bayesian inference for the generalized exponential distribution. J. Stat. Comput. Simul. 2005, 75, 841–852. [Google Scholar] [CrossRef]
  32. Balakrishnan, N.; Leung, M. Order statistics from the type I generalized logistic distribution. Commun. Stat. Simul. Comput. 1988, 17, 25–50. [Google Scholar] [CrossRef]
  33. Nassar, M.; Elmasry, A. A study of generalized logistic distributions. J. Egypt. Math. Soc. 2012, 20, 126–133. [Google Scholar] [CrossRef][Green Version]
  34. Bernardo, J.M.; Smith, A.F. Bayesian Theory; Wiley: New York, NY, USA, 2009. [Google Scholar]
  35. Christen, J.A.; Fox, C. A general purpose sampling algorithm for continuous distributions (the t-walk). Bayesian Anal. 2010, 5, 263–281. [Google Scholar] [CrossRef]
  36. Windberger, T.; Zeileis, A. Structural breaks in inflation dynamics within the European Monetary Union. East. Eur. Econ. 2014, 52, 66–88. [Google Scholar] [CrossRef][Green Version]
  37. Harris, R.; Kanji, G. On the use of minimum chi-square estimation. J. R. Stat. Soc. D 1983, 32, 379–394. [Google Scholar] [CrossRef]
  38. Barthélemy, J.; Suesse, T. mipfp: An R package for multidimensional array fitting and simulating multivariate Bernoulli distributions. J. Stat. Softw. 2018, 86, 1–20. [Google Scholar] [CrossRef][Green Version]
  39. Lindley, D. Reconciliation of probability distributions. Oper. Res. 1983, 31, 866–880. [Google Scholar] [CrossRef]
  40. Walters, C.; Ludwig, D. Calculation of Bayes posterior probability distributions for key population parameters. Can. J. Fish. Aquat. Sci. 1994, 51, 713–722. [Google Scholar] [CrossRef]
  41. Gelman, A.; Rubin, D.B. A single series from the Gibbs sampler provides a false sense of security. Bayesian Stat. 1992, 4, 625–631. [Google Scholar]
  42. Jergensen, G.V. Copper leaching, solvent extraction, and electrowinning technology. Int. J. Surf. Min. Reclam. Environ. 1999, 13. [Google Scholar]
  43. Lagos-Álvarez, B.; Jiménez-Gamero, M.; Fernández, A. Bias correction in the type I generalized logistic distribution. Commun. Stat. Simul. Comput. 2011, 40, 511–531. [Google Scholar] [CrossRef]
  44. Couri, L.; Ospina, R.; da Silva, G.; Leiva, V.; Figueroa-Zuniga, J. A study on computational algorithms in the estimation of parameters for a class of beta regression models. Mathematics 2022, 10, 299. [Google Scholar] [CrossRef]
  45. Costa, E.; Santos-Neto, M.; Leiva, V. Optimal sample size for the Birnbaum-Saunders distribution under decision theory with symmetric and asymmetric loss functions. Symmetry 2021, 13, 926. [Google Scholar] [CrossRef]
  46. Saulo, H.; Dasilva, A.; Leiva, V.; Sanchez, L.; de la Fuente-Mella, H. Log-symmetric quantile regression models. Stat. Neerl. 2022; in press. [Google Scholar] [CrossRef]
  47. Liu, Y.; Mao, G.; Leiva, V.; Liu, S.; Tapia, A. Diagnostic analytics for an autoregressive model under the skew-normal distribution. Mathematics 2020, 8, 693. [Google Scholar] [CrossRef]
  48. Martinez, S.; Giraldo, R.; Leiva, V. Birnbaum–Saunders functional regression models for spatial data. Stoch. Environ. Res. Risk Assess. 2019, 33, 1765–1780. [Google Scholar] [CrossRef]
  49. Huerta, M.; Leiva, V.; Liu, S.; Rodriguez, M.; Villegas, D. On a partial least squares regression model for asymmetric data with a chemical application in mining. Chemom. Intell. Lab. Syst. 2019, 190, 55–68. [Google Scholar] [CrossRef]
  50. Figueroa-Zuniga, J.; Bayes, C.L.; Leiva, V.; Liu, S. Robust beta regression modeling with errors-in-variables: A Bayesian approach and numerical applications. Stat. Pap. 2022; in press. [Google Scholar] [CrossRef]
Figure 1. PDF of the IGL distribution (a) and d Skew M ( α , β ) / d α of the IGL distribution (b) for the indicated values of the parameters.
Figure 2. Behavior of the Jeffreys prior for ( σ , b ) ( 0 , 7.0 ) × ( 0 , 12.5 ) .
Figure 3. Behavior of the Jeffreys priors, here denoted as z, for ( σ , b ) : ( 0 , 0.5 ) × ( 0 , 0.5 ) (a); ( 0 , 0.5 ) × ( 4 , 6 ) (b); ( 4 , 6 ) × ( 0 , 0.5 ) (c); and ( 5 , 7 ) × ( 9 , 11 ) (d).
Figure 4. Histogram and fitted PDF (a) and empirical distribution function with fitted cumulative distribution function (b) and parameters estimated via classical and Bayesian methods using PLS daily flow data.
Figure 5. ACF of the chain μ | W n (a), σ | W n , (b), and b | W n (c) with PLS daily flow data.
Table 1. Percentage of updates by means of the moment estimation for the indicated values.
| μ | σ | b | n = 15 | n = 30 | n = 50 | n = 100 |
| 0 | 2 | 0.05 | 93.5% | 84.5% | 83.0% | 77.0% |
| 0 | 4 | 0.1 | 94.5% | 85.5% | 84.5% | 80.1% |
| 0 | 1 | 1 | 98.0% | 98.5% | 99.5% | 100.0% |
| 0 | 6 | 10 | 88.5% | 80.5% | 82.0% | 70.5% |
Table 2. Estimates with the moment method for 1000 replicates of the listed size sample, case, parameter, and indicator.
| n = 15, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −19.09638 | 13.31917 | 0.49371 | −18.81575 | 13.75909 | 0.51013 |
| Bias | −19.09638 | 11.31917 | 0.44371 | −18.81575 | 9.75909 | 0.41013 |
| Relative bias | - | 5.65958 | 8.8742 | - | 2.43977 | 4.1013 |
| Standard deviation | 14.25511 | 4.4811 | 0.21426 | 14.80999 | 4.53213 | 0.22428 |
| MSE | 567.87987 | 148.20384 | 0.24278 | 573.36827 | 115.78 | 0.21851 |

| n = 15, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.2572 | 0.98217 | 1.39323 | 11.42986 | 4.90052 | 2.78161 |
| Bias | −0.2572 | −0.01783 | 0.39323 | 11.42986 | −1.09948 | −7.21839 |
| Relative bias | - | −0.01783 | 0.39323 | - | −0.18325 | −0.72184 |
| Standard deviation | 1.0315 | 0.26367 | 1.42171 | 5.86088 | 1.39999 | 3.27409 |
| MSE | 1.13014 | 0.06984 | 2.17588 | 164.99163 | 3.16883 | 62.82485 |

| n = 30, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −14.43447 | 11.63792 | 0.38711 | −14.29753 | 12.14067 | 0.4039 |
| Bias | −14.43447 | 9.63792 | 0.33711 | −14.29753 | 8.14067 | 0.3039 |
| Relative bias | - | 4.81896 | 6.7422 | - | 2.03517 | 3.039 |
| Standard deviation | 9.2901 | 3.64442 | 0.14531 | 9.57106 | 3.64783 | 0.15087 |
| MSE | 294.6599 | 106.17127 | 0.13476 | 296.02457 | 79.57715 | 0.11512 |

| n = 30, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.1717 | 0.97755 | 1.468 | 11.33066 | 4.89198 | 3.03369 |
| Bias | −0.1717 | −0.02245 | 0.468 | 11.33066 | −1.10802 | −6.96631 |
| Relative bias | - | −0.02245 | 0.468 | - | −0.18467 | −0.69663 |
| Standard deviation | 1.04284 | 0.2548 | 3.32295 | 5.87308 | 1.14218 | 4.2669 |
| MSE | 1.117 | 0.06543 | 11.26101 | 162.87692 | 2.53227 | 66.73593 |

| n = 50, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −10.79721 | 10.10306 | 0.31954 | −10.5922 | 10.65166 | 0.33576 |
| Bias | −10.79721 | 8.10306 | 0.26954 | −10.5922 | 6.65166 | 0.23576 |
| Relative bias | - | 4.05153 | 5.3908 | - | 1.66291 | 2.3576 |
| Standard deviation | 8.38061 | 3.34231 | 0.13118 | 8.62776 | 3.34787 | 0.13503 |
| MSE | 186.81438 | 76.83061 | 0.08986 | 186.63291 | 55.45283 | 0.07382 |

| n = 50, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.12718 | 0.97281 | 1.30513 | 9.72513 | 5.09282 | 3.83445 |
| Bias | −0.12718 | −0.02719 | 0.30513 | 9.72513 | −0.90718 | −6.16555 |
| Relative bias | - | −0.02719 | 0.30513 | - | −0.1512 | −0.61656 |
| Standard deviation | 0.92368 | 0.21061 | 1.61055 | 5.55714 | 0.89339 | 5.36295 |
| MSE | 0.86936 | 0.04509 | 2.68698 | 125.45997 | 1.62112 | 66.77523 |

| n = 100, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −8.57027 | 9.05183 | 0.272 | −8.21102 | 9.56039 | 0.28589 |
| Bias | −8.57027 | 7.05183 | 0.222 | −8.21102 | 5.56039 | 0.18589 |
| Relative bias | - | 3.52591 | 4.44 | - | 1.3901 | 1.8589 |
| Standard deviation | 5.91348 | 2.60411 | 0.09776 | 6.22217 | 2.70276 | 0.10234 |
| MSE | 108.41877 | 56.50969 | 0.05884 | 106.13622 | 38.22284 | 0.04503 |

| n = 100, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.19489 | 1.00876 | 1.23021 | 7.13316 | 5.4028 | 5.68398 |
| Bias | −0.19489 | 0.00876 | 0.23021 | 7.13316 | −0.5972 | −4.31602 |
| Relative bias | - | 0.00876 | 0.23021 | - | −0.09953 | −0.4316 |
| Standard deviation | 0.67739 | 0.15986 | 0.68861 | 5.93529 | 0.75779 | 7.43532 |
| MSE | 0.49684 | 0.02563 | 0.52718 | 86.10964 | 0.93089 | 73.91197 |
Table 3. Bayesian estimate (posterior mean) for 1000 replicates of the listed size sample, case, parameter, and indicator.
| n = 15, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.019 | 3.102 | 0.081 | 0.061 | 3.711 | 0.093 |
| Bias | −0.019 | 1.102 | 0.031 | 0.061 | −0.289 | −0.007 |
| Relative bias | - | 0.551 | 0.62 | - | −0.072 | −0.07 |
| Standard deviation | 0.08 | 0.632 | 0.024 | 0.406 | 0.157 | 0.024 |
| MSE | 0.007 | 1.614 | 0.002 | 0.169 | 0.108 | 0.001 |

| n = 15, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | 0.086 | 0.962 | 1.363 | 0.084 | 5.918 | 10.517 |
| Bias | 0.086 | −0.038 | 0.363 | 0.084 | −0.082 | 0.517 |
| Relative bias | - | −0.038 | 0.363 | - | −0.014 | 0.052 |
| Standard deviation | 0.85 | 0.331 | 0.683 | 0.172 | 0.746 | 0.742 |
| MSE | 0.729 | 0.111 | 0.598 | 0.037 | 0.563 | 0.818 |

| n = 30, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.025 | 2.737 | 0.07 | 0.062 | 3.705 | 0.093 |
| Bias | −0.025 | 0.737 | 0.02 | 0.062 | −0.295 | −0.007 |
| Relative bias | - | 0.368 | 0.4 | - | −0.074 | −0.07 |
| Standard deviation | 0.122 | 0.719 | 0.02 | 0.558 | 0.221 | 0.018 |
| MSE | 0.015 | 1.061 | 0.001 | 0.315 | 0.136 | 0 |

| n = 30, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.097 | 0.996 | 1.419 | 0.088 | 5.956 | 10.429 |
| Bias | −0.097 | −0.004 | 0.419 | 0.088 | −0.044 | 0.429 |
| Relative bias | - | −0.004 | 0.419 | - | −0.007 | 0.043 |
| Standard deviation | 0.836 | 0.267 | 0.761 | 0.21 | 0.581 | 0.949 |
| MSE | 0.709 | 0.072 | 0.755 | 0.052 | 0.339 | 1.085 |

| n = 50, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.007 | 2.502 | 0.064 | 0.086 | 3.71 | 0.093 |
| Bias | −0.007 | 0.502 | 0.014 | 0.086 | −0.29 | −0.007 |
| Relative bias | - | 0.251 | 0.28 | - | −0.072 | −0.07 |
| Standard deviation | 0.151 | 0.666 | 0.018 | 0.634 | 0.241 | 0.014 |
| MSE | 0.023 | 0.696 | 0.001 | 0.409 | 0.142 | 0 |

| n = 50, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.186 | 1.017 | 1.404 | 0.078 | 5.961 | 10.379 |
| Bias | −0.186 | 0.017 | 0.404 | 0.078 | −0.039 | 0.379 |
| Relative bias | - | 0.017 | 0.404 | - | −0.006 | 0.038 |
| Standard deviation | 0.754 | 0.196 | 0.754 | 0.238 | 0.47 | 1.063 |
| MSE | 0.604 | 0.039 | 0.731 | 0.063 | 0.223 | 1.273 |

| n = 100, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.012 | 2.383 | 0.061 | 0.05 | 3.779 | 0.096 |
| Bias | −0.012 | 0.383 | 0.011 | 0.05 | −0.221 | −0.004 |
| Relative bias | - | 0.192 | 0.22 | - | −0.055 | −0.040 |
| Standard deviation | 0.194 | 0.572 | 0.016 | 0.707 | 0.272 | 0.011 |
| MSE | 0.038 | 0.474 | 0 | 0.502 | 0.123 | 0 |

| n = 100, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.221 | 1.027 | 1.314 | 0.067 | 5.96 | 10.389 |
| Bias | −0.221 | 0.027 | 0.314 | 0.067 | −0.04 | 0.389 |
| Relative bias | - | 0.027 | 0.314 | - | −0.007 | 0.039 |
| Standard deviation | 0.561 | 0.151 | 0.531 | 0.187 | 0.357 | 0.879 |
| MSE | 0.364 | 0.023 | 0.38 | 0.039 | 0.129 | 0.924 |
Table 4. Bayesian estimate (posterior median) for 1000 replicates of the listed size sample, case, parameter, and indicator.
| n = 15, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.026 | 2.747 | 0.069 | 0.065 | 3.638 | 0.087 |
| Bias | −0.026 | 0.747 | 0.019 | 0.065 | −0.362 | −0.013 |
| Relative bias | - | 0.374 | 0.38 | - | −0.09 | −0.13 |
| Standard deviation | 0.095 | 0.624 | 0.023 | 0.398 | 0.158 | 0.022 |
| MSE | 0.01 | 0.947 | 0.001 | 0.163 | 0.156 | 0.001 |

| n = 15, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | 0.22 | 0.907 | 0.935 | 0.081 | 5.84 | 10.267 |
| Bias | 0.22 | −0.093 | −0.065 | 0.081 | −0.16 | 0.267 |
| Relative bias | - | −0.093 | −0.065 | - | −0.027 | 0.027 |
| Standard deviation | 0.924 | 0.346 | 0.513 | 0.191 | 0.724 | 0.728 |
| MSE | 0.903 | 0.128 | 0.267 | 0.043 | 0.549 | 0.602 |

| n = 30, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.034 | 2.472 | 0.062 | 0.064 | 3.632 | 0.089 |
| Bias | −0.034 | 0.472 | 0.012 | 0.064 | −0.368 | −0.011 |
| Relative bias | - | 0.236 | 0.24 | - | −0.092 | −0.11 |
| Standard deviation | 0.141 | 0.717 | 0.02 | 0.543 | 0.22 | 0.017 |
| MSE | 0.021 | 0.737 | 0.001 | 0.299 | 0.184 | 0 |

| n = 30, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | 0.017 | 0.966 | 1.079 | 0.092 | 5.903 | 10.214 |
| Bias | 0.017 | −0.034 | 0.079 | 0.092 | −0.097 | 0.214 |
| Relative bias | - | −0.034 | 0.079 | - | −0.016 | 0.021 |
| Standard deviation | 0.845 | 0.27 | 0.571 | 0.245 | 0.568 | 0.935 |
| MSE | 0.714 | 0.074 | 0.332 | 0.068 | 0.332 | 0.92 |

| n = 50, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.014 | 2.304 | 0.058 | 0.09 | 3.644 | 0.09 |
| Bias | −0.014 | 0.304 | 0.008 | 0.09 | −0.356 | −0.01 |
| Relative bias | - | 0.152 | 0.16 | - | −0.089 | −0.1 |
| Standard deviation | 0.18 | 0.672 | 0.018 | 0.624 | 0.239 | 0.014 |
| MSE | 0.033 | 0.545 | 0 | 0.398 | 0.184 | 0 |

| n = 50, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.089 | 0.996 | 1.142 | 0.085 | 5.922 | 10.185 |
| Bias | −0.089 | −0.004 | 0.142 | 0.085 | −0.078 | 0.185 |
| Relative bias | - | −0.004 | 0.142 | - | −0.013 | 0.018 |
| Standard deviation | 0.751 | 0.197 | 0.57 | 0.282 | 0.46 | 1.045 |
| MSE | 0.572 | 0.039 | 0.346 | 0.087 | 0.217 | 1.126 |

| n = 100, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Mean | −0.022 | 2.258 | 0.057 | 0.064 | 3.72 | 0.093 |
| Bias | −0.022 | 0.258 | 0.007 | 0.064 | −0.28 | −0.007 |
| Relative bias | - | 0.129 | 0.14 | - | −0.07 | −0.07 |
| Standard deviation | 0.231 | 0.582 | 0.016 | 0.693 | 0.272 | 0.011 |
| MSE | 0.054 | 0.406 | 0 | 0.485 | 0.152 | 0 |

| n = 100, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Mean | −0.154 | 1.015 | 1.16 | 0.076 | 5.938 | 10.218 |
| Bias | −0.154 | 0.015 | 0.16 | 0.076 | −0.062 | 0.218 |
| Relative bias | - | 0.015 | 0.16 | - | −0.01 | 0.022 |
| Standard deviation | 0.554 | 0.152 | 0.43 | 0.223 | 0.349 | 0.856 |
| MSE | 0.331 | 0.023 | 0.211 | 0.055 | 0.125 | 0.781 |
Table 5. Results of the diagnostics for 1000 replicates of the listed size sample, case, parameter, and test.
| n = 15, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Gelman-Rubin | - | 1.2081 | - | - | 1.0644 | - |
| Geweke | 0.9516 | 0.648 | 0.6535 | 0.6399 | 0.3411 | 0.1022 |
| Ljung-Box | 0.112 | 0.2358 | 0.2497 | 0.0729 | 0.2563 | 0.1289 |

| n = 15, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Gelman-Rubin | - | 1.1755 | - | - | 1.0706 | - |
| Geweke | 0.2091 | 0.438 | 0.3774 | 0.7777 | 0.5109 | 0.7443 |
| Ljung-Box | 0.525 | 0.4431 | 0.2327 | 0.6502 | 0.1333 | 0.1462 |

| n = 30, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Gelman-Rubin | - | 1.017 | - | - | 1.0218 | - |
| Geweke | 0.2203 | 0.0943 | 0.1702 | 0.9785 | 0.9788 | 0.427 |
| Ljung-Box | 0.6654 | 0.0271 | 0.0022 | 0.737 | 0.4986 | 0.7545 |

| n = 30, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Gelman-Rubin | - | 1.0758 | - | - | 1.0414 | - |
| Geweke | 0.5641 | 0.2097 | 0.4419 | 0.722 | 0.5419 | 0.7722 |
| Ljung-Box | 0.2461 | 0.8968 | 0.193 | 0.0779 | 0.0902 | 0.3097 |

| n = 50, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Gelman-Rubin | - | 1.0209 | - | - | 1.0305 | - |
| Geweke | 0.8796 | 0.542 | 0.6853 | 0.5699 | 0.2915 | 0.5914 |
| Ljung-Box | 0.0822 | 0.4869 | 0.2854 | 0.2587 | 0.7124 | 0.6139 |

| n = 50, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Gelman-Rubin | - | 1.1348 | - | - | 1.1341 | - |
| Geweke | 0.5635 | 0.4084 | 0.5817 | 0.9604 | 0.7169 | 0.6362 |
| Ljung-Box | 0.1675 | 0.6055 | 0.0738 | 0.4626 | 0.2712 | 0.0961 |

| n = 100, Cases 1–2 | μ = 0 | σ = 2 | b = 0.05 | μ = 0 | σ = 4 | b = 0.1 |
| Gelman-Rubin | - | 1.1659 | - | - | 1.0252 | - |
| Geweke | 0.6261 | 0.5756 | 0.8681 | 0.8935 | 0.6845 | 0.9877 |
| Ljung-Box | 0.0754 | 0.1719 | 0.0878 | 0.0942 | 0.2024 | 0.5346 |

| n = 100, Cases 3–4 | μ = 0 | σ = 1 | b = 1 | μ = 0 | σ = 6 | b = 10 |
| Gelman-Rubin | - | 1.0038 | - | - | 1.1379 | - |
| Geweke | 0.4396 | 0.1397 | 0.4124 | 0.0997 | 0.8876 | 0.5265 |
| Ljung-Box | 0.7793 | 0.1469 | 0.3915 | 0.1579 | 0.2634 | 0.1191 |
Table 6. Descriptive summary of PLS daily flow data.
| Mean | Median | Variance | Standard Deviation | Coefficient of Skewness |
| 10,739.57 | 10,999.74 | 318,485.03 | 564.34 | −3.50 |
Table 7. p-value of the Geweke test for the indicated parameter with PLS daily flow data.
| Parameter | μ | σ | b |
| p-value | 0.60 | 0.34 | 0.04 |
Table 8. Classical and Bayesian estimates of θ = ( μ , σ , b ) with PLS daily flow data.
| Indicator | μ | σ | b |
| Estimate | 604.75 | 1184.63 | 3866.72 |
| Posterior mean | 11,000.27 | 177.55 | 0.49 |
| Posterior median | 11,000.24 | 177.55 | 0.49 |
| Variance | 1.18 | 4.15 | 0.00 |
| Standard deviation | 1.09 | 2.04 | 0.02 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
