Article

New Flexible Asymmetric Log-Birnbaum–Saunders Nonlinear Regression Model with Diagnostic Analysis

by Guillermo Martínez-Flórez 1,*,†, Inmaculada Barranco-Chamorro 2,*,† and Héctor W. Gómez 3,†
1 Departamento de Matemáticas y Estadística, Facultad de Ciencias Básicas, Universidad de Córdoba, Córdoba 230027, Colombia
2 Departamento de Estadística e I.O., Facultad de Matemáticas, Universidad de Sevilla, 41012 Sevilla, Spain
3 Departamento de Estadística y Ciencias de Datos, Universidad de Antofagasta, Antofagasta 1240000, Chile
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2024, 13(9), 576; https://doi.org/10.3390/axioms13090576
Submission received: 20 July 2024 / Revised: 17 August 2024 / Accepted: 21 August 2024 / Published: 23 August 2024
(This article belongs to the Special Issue Probability, Statistics and Estimations, 2nd Edition)

Abstract:
A nonlinear log-Birnbaum–Saunders regression model with additive errors is introduced. The error term is assumed to follow a flexible sinh-normal distribution, so the model can describe a variety of asymmetric, unimodal, and bimodal situations. This is a novelty, since few papers deal with nonlinear models with asymmetric errors, and even fewer can fit bimodal behavior. Influence diagnostics and martingale-type residuals are proposed to assess the effect of minor perturbations on the parameter estimates, check the fitted model, and detect possible outliers. A simulation study for the Michaelis–Menten model is carried out, covering a wide range of situations for the parameters. Two real applications are included, where the use of influence diagnostics and residual analysis is illustrated.

1. Introduction

Regression models are among the most common statistical techniques used to explain the relationship between a continuous response (or output) variable Y and a given set of explanatory covariates or predictors X_1, …, X_p, where p denotes the number of predictors. When Y is a continuous random variable, linear models stand out as the most frequently used in practice. However, in certain situations, the relationship between the response variable and the set of predictors is nonlinear, and it can be quite difficult to determine its form. Nonlinear models usually arise from an underlying theory about the relationships between the variables under study. By definition, given a set of covariates X_1, …, X_p and a function f(x_1, …, x_p; β_1, …, β_p), where β_1, …, β_p are unknown parameters, f is linear in β_j, j = 1, …, p, if and only if the first derivative of f with respect to β_j does not depend on β_j for j = 1, …, p; otherwise, f is nonlinear. That is, in the nonlinear case, some of the parameters β_j appear in f in a nonlinear way.
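The derivative criterion above can be checked symbolically. The sketch below (a minimal illustration using sympy; the helper `is_linear_in` and the example functions are our own, not from the paper) decides linearity in each parameter:

```python
# Linearity-in-parameters check: f is linear in beta_j iff
# d f / d beta_j does not involve beta_j.
import sympy as sp

x, b1, b2 = sp.symbols("x beta1 beta2")

def is_linear_in(f, beta):
    """True iff the first derivative of f w.r.t. beta is free of beta."""
    return beta not in sp.diff(f, beta).free_symbols

f_linear = b1 * x + b2 * x**2   # linear in both parameters
f_mm = b2 * x / (b1 + x)        # Michaelis-Menten type: nonlinear in beta1

print(is_linear_in(f_linear, b1))  # True
print(is_linear_in(f_mm, b1))      # False: derivative -b2*x/(b1+x)**2 involves b1
print(is_linear_in(f_mm, b2))      # True: derivative x/(b1+x) is free of b2
```

Note that the Michaelis–Menten function is linear in β_2 but not in β_1, which is why the full model counts as nonlinear.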
One very general form of nonlinear regression model considers additive errors. Generally speaking, a nonlinear regression model with additive errors is defined as
$$Y_i = \psi(\beta, x_i) + \epsilon_i, \quad i = 1, 2, \ldots, n,$$
where β is a vector of unknown parameters, x_i is a vector of known covariates, and ψ(·;·) is a nonlinear, injective, and continuous function, twice differentiable with respect to the elements of β. The error terms ϵ_i are independent and identically distributed random variables. Usually, it is assumed that ϵ_i ∼ N(0, σ²) with unknown σ² > 0. However, quite often, the assumption of normality of the errors does not hold, and alternative models should be considered. Among the most relevant ones, we can cite Cancho et al. [1], who proposed a nonlinear model where the error term follows Azzalini's skew-normal distribution introduced in [2], ϵ ∼ SN(0, σ, λ), with skewness parameter λ ∈ ℝ and scale parameter σ > 0. Another nonlinear regression model, with alpha-power distributed errors, was proposed in Martínez-Flórez et al. [3,4,5]. Details about the alpha-power model can be seen in Durrans [6].
We highlight that all these precedents are unimodal. However, in practice, other situations are possible. As an introductory example of interest, consider the distribution of RNA in patients with human immunodeficiency virus (HIV) undergoing highly active antiretroviral therapy (HAART). It can be seen in Li et al. [7] that the logarithm of RNA has an asymmetric bimodal distribution, which can be adjusted by a function of a given set of covariates x_1, x_2, …, x_p (for instance, the level of certain biomarkers, sex, age, kind of diet, employment, etc.) by using a nonlinear regression model with asymmetric and bimodal errors. In this context, we focus on nonlinear regression models based on the extension of the Birnbaum–Saunders (BS) distribution introduced in Martínez-Flórez et al. [8]. Our proposal may be preferable to considering a mixture of distributions, since it involves fewer parameters. Influence diagnostic tools are also given. Following a traditional approach, different kinds of perturbations for the parameters in the model are introduced, which can be used to detect outliers and to assess the sensitivity of parameter estimates. Martingale-type residuals are proposed to check the fit provided by the model. All of these tools aim to illustrate the use of asymmetric nonlinear models to describe lifetime data.
The outline of this paper is as follows. In Section 2, the background of our proposal is established. In Section 3, the nonlinear flexible log-Birnbaum–Saunders regression model is introduced. Results from inferences based on the maximum likelihood method are given. The observed Fisher information matrix is obtained, along with its applications to this regression model. Due to the possible complexity of nonlinear models, a key point is to study possible deficiencies in the fitted model. Therefore, Section 4 is devoted to influence diagnostics. First, the generalized Cook distance is considered. Later, local influence criteria are given to assess the effect of minor perturbations in the data and/or the proposed model on the statistical summaries. Different perturbation schemes are considered: case-weight, as well as perturbations of the response variable, of explanatory variables, and of the scale parameter. In Section 5, martingale-type residuals are presented to detect deficiencies in the structural part of the model and to detect possible outliers. A complete simulation study is given in Section 6. The two-parameter Michaelis–Menten model, widely used in chemical kinetics, is considered, and a variety of situations and error terms are covered. Two applications to real datasets can be seen in Section 7. There, a thoughtful discussion of different regression models is carried out. Moreover, the use of diagnostic criteria and residuals to improve the fitted nonlinear model is illustrated. Final conclusions are presented in Section 8. Technical details are provided in Appendix A.1, Appendix A.2 and Appendix A.3.

2. Materials and Methods

Recall that the BS distribution was introduced by Birnbaum and Saunders [9] in order to model the lifetime of certain structures under dynamic load, and that a random variable (RV) T follows a BS distribution, T ∼ BS(α, τ), if its probability density function (PDF) is
$$f_T(t) = \phi(a_t)\, A_t, \quad t > 0,$$
with
$$a_t = \frac{1}{\alpha}\left(\sqrt{\frac{t}{\tau}} - \sqrt{\frac{\tau}{t}}\right), \qquad A_t = \frac{t^{-3/2}(t + \tau)}{2\alpha\sqrt{\tau}},$$
and φ(·) denoting the PDF of the N(0, 1) distribution. In Equation (2), α > 0 is a shape parameter, and τ > 0 is a scale parameter and the median of the distribution.
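The reconstructed density in Equation (2) can be checked numerically. The sketch below (illustrative parameter values, not from the paper) verifies that it integrates to 1 and that τ is indeed the median:

```python
# Numerical sanity check of the BS(alpha, tau) density.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def bs_pdf(t, alpha, tau):
    a_t = (np.sqrt(t / tau) - np.sqrt(tau / t)) / alpha
    A_t = t ** (-1.5) * (t + tau) / (2.0 * alpha * np.sqrt(tau))
    return norm.pdf(a_t) * A_t

alpha, tau = 0.5, 2.0
total, _ = quad(bs_pdf, 0.0, np.inf, args=(alpha, tau))   # should be ~1
half, _ = quad(bs_pdf, 0.0, tau, args=(alpha, tau))       # should be ~0.5
print(round(total, 6), round(half, 6))
```

The second integral equals Φ(a_τ) = Φ(0) = 1/2, confirming that τ is the median.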
A number of regression models related to the BS distribution can be found in the literature. We first refer to the pioneering work carried out by Rieck and Nedelman [10], where the log-linear Birnbaum–Saunders regression model was introduced. There, it was assumed that the output variable Y follows a sinh-normal (SHN) distribution, SHN(α, μ, 2), whose PDF is
$$\varphi(y) = \frac{1}{\alpha}\cosh\!\left(\frac{y - \mu}{2}\right)\phi\!\left(\frac{2}{\alpha}\sinh\!\left(\frac{y - \mu}{2}\right)\right), \quad y \in \mathbb{R},$$
where α > 0 is a shape parameter and μ ∈ ℝ is a location parameter. Given a random sample T_i ∼ BS(α, τ) for i = 1, 2, …, n, let Y_i = log(T_i). Rieck and Nedelman [10] proposed the linear regression model
$$Y_i = \beta^\top x_i + \epsilon_i,$$
where x_i is a covariate vector, β is a vector of unknown parameters, and the errors ϵ_i follow an SHN distribution, ϵ_i ∼ SHN(α, 0, 2), i = 1, 2, …, n. In this case, E(Y_i) = μ_i = x_i^⊤β, and the errors ϵ_i are symmetric with respect to zero. Asymmetric extensions were proposed by Leiva et al. [11], who considered a skewed sinh-normal model and provided applications to pollution data in Santiago, Chile. Later, Lemonte et al. [12] proposed another asymmetric extension based on the skew-normal, and Martínez-Flórez et al. [3] introduced the asymmetric extension based on the alpha-power model of Durrans [6].
Another kind of asymmetric generalization of the sinh-normal model was proposed in Martínez-Flórez et al. [8,13]. This is based on the flexible skew-normal distribution introduced in Gómez et al. [14], and it is known as the flexible sinh-normal (FSHN) distribution, Y ∼ FSHN(α, ξ, σ, δ, λ), whose PDF is given by
$$\varphi(y) = c_\delta\, \frac{2}{\alpha\sigma}\cosh\!\left(\frac{y - \xi}{\sigma}\right)\phi\!\left(\left|\frac{2}{\alpha}\sinh\!\left(\frac{y - \xi}{\sigma}\right)\right| + \delta\right)\Phi\!\left(\lambda\,\frac{2}{\alpha}\sinh\!\left(\frac{y - \xi}{\sigma}\right)\right), \quad y \in \mathbb{R},$$
with c_δ = (1 − Φ(δ))^{-1} and Φ(·) denoting the cumulative distribution function (CDF) of the N(0, 1) distribution. In Equation (5), α > 0 is a shape parameter, ξ ∈ ℝ is a location parameter, σ > 0 is a scale parameter, δ ∈ ℝ controls the bimodality, and λ ∈ ℝ controls the skewness.
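A direct numerical check of Equation (5) (a sketch with illustrative parameter values) confirms that the constant c_δ = (1 − Φ(δ))⁻¹ normalizes the density; with δ < 0 the shape is bimodal:

```python
# Numerical normalization check of the FSHN(alpha, xi, sigma, delta, lambda) PDF.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def fshn_pdf(y, alpha, xi, sigma, delta, lam):
    c_delta = 1.0 / (1.0 - norm.cdf(delta))
    s = (2.0 / alpha) * np.sinh((y - xi) / sigma)   # the sinh (xi_2-type) term
    c = (2.0 / alpha) * np.cosh((y - xi) / sigma)   # the cosh (xi_1-type) term
    return (c_delta / sigma) * c * norm.pdf(np.abs(s) + delta) * norm.cdf(lam * s)

# Finite integration range: the tails decay super-exponentially, so (-20, 20)
# already contains essentially all the mass for these values.
total, _ = quad(fshn_pdf, -20.0, 20.0, args=(0.8, 0.0, 1.0, -1.0, 1.5), limit=200)
print(round(total, 6))  # ~1.0
```

The normalization works because φ(|z| + δ) is even in z, so the skewing factor Φ(λz) integrates out to 1/2 of the symmetric mass, which equals 1 − Φ(δ).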
It can be seen in [8] that particular cases of interest in the FSHN model are:
  • If δ = 0 , then the FSHN model reduces to the skewed sinh-normal distribution introduced by Leiva et al. [11].
  • If λ = 0, then a symmetric model, denoted as FSHN_{λ=0}(α, ξ, σ, δ), is obtained, which allows us to model symmetric bimodal data; see [8].
  • If λ = δ = 0 , then the FSHN model reduces to the sinh-normal distribution introduced by Rieck and Nedelman [10].
Furthermore, the following properties are proven in [8], which will be used subsequently.
Lemma 1. 
Let Y ∼ FSHN(α, ξ, σ, δ, λ). Then
1. 
Y = ξ + σ arcsinh(αZ/2), with Z ∼ FSN(δ, λ) (flexible skew-normal).
2. 
aY ∼ FSHN(α, aξ, aσ, δ, λ), for a > 0.
3. 
Y + b ∼ FSHN(α, ξ + b, σ, δ, λ), for b ∈ ℝ.
4. 
−Y ∼ FSHN(α, −ξ, σ, δ, −λ).
Remark 1. 
Properties given in Lemma 1 allow us to obtain features of Y ∼ FSHN(α, ξ, σ, δ, λ) in terms of Z ∼ FSN(δ, λ). For instance,
1. 
The p-th quantile of Y, y_p, can be obtained from the p-th quantile, z_p, of Z:
$$y_p = \xi + \sigma\,\mathrm{arcsinh}\!\left(\frac{\alpha z_p}{2}\right), \quad 0 < p < 1.$$
2. 
The moments of Y can be expressed in terms of the moments of the RV arcsinh(αZ/2), with Z ∼ FSN(δ, λ). This fact will be explicitly introduced in the notation. So, we will write
$$E_{FSN}\!\left[\mathrm{arcsinh}\!\left(\frac{\alpha Z}{2}\right)\right] = c_\delta \int_{-\infty}^{\infty} \mathrm{arcsinh}\!\left(\frac{\alpha z}{2}\right)\phi(|z| + \delta)\,\Phi(\lambda z)\,dz.$$
Additional details can be seen in Martínez-Flórez et al. [8] and Gómez et al. [14].
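The integral representation above can be evaluated by one-dimensional quadrature. The sketch below (illustrative parameter values; the helper `e_fsn_arcsinh` is our own) also shows that the moment vanishes in the symmetric case λ = 0, as expected for an odd integrand:

```python
# Numerical evaluation of E_FSN[arcsinh(alpha * Z / 2)] for Z ~ FSN(delta, lambda).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def e_fsn_arcsinh(alpha, delta, lam):
    c_delta = 1.0 / (1.0 - norm.cdf(delta))
    integrand = lambda z: (np.arcsinh(alpha * z / 2.0)
                           * norm.pdf(abs(z) + delta) * norm.cdf(lam * z))
    val, _ = quad(integrand, -np.inf, np.inf)
    return c_delta * val

print(round(e_fsn_arcsinh(0.8, -1.0, 0.0), 8))  # ~0: odd integrand when lambda = 0
print(e_fsn_arcsinh(0.8, -1.0, 1.5) > 0)        # positive skewness shifts it upward
```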
The FSHN model was used in Martínez-Flórez et al. [8] to introduce the flexible Birnbaum–Saunders (FBS) linear regression model, which is based on the fact that, if T ∼ FBS(α, τ, δ, λ), then
$$Y = \log(T) \sim FSHN(\alpha, \xi, 2, \delta, \lambda), \quad \text{with } \xi = \log(\tau).$$
So, given a random sample from Equation (7), Y_i = log(T_i) ∼ FSHN(α, ξ_i, 2, δ, λ), covariates can be considered to explain the response variable in a natural way, that is,
$$\xi_i = \beta^\top x_i, \quad \text{for } i = 1, 2, \ldots, n.$$
This is called the flexible log-linear Birnbaum–Saunders regression model, whose details can be seen in [8].
In this paper, we consider a nonlinear extension of the model introduced in [8]. The interest of our proposal lies in the fact that few papers deal with nonlinear extensions of the log-BS regression model. We can cite the BS nonlinear regression model proposed by Lemonte and Cordeiro [15], the study of diagnostics and influence analysis techniques in nonlinear log-BS models with asymmetric errors carried out by Lemonte [12], and the paper by Martínez-Flórez et al. [3] on the nonlinear log-BS exponentiated model.

3. Nonlinear Flexible Log-Birnbaum–Saunders

3.1. Regression Model

In this subsection, the flexible log-Birnbaum–Saunders nonlinear regression model is introduced. So, let us consider T_1, T_2, …, T_n independent RVs with T_i ∼ FBS(α_i, τ_i, δ_i, λ_i). Suppose now that the distribution of T_i depends on a set of p explanatory variables, denoted by x_i = (x_{i1}, x_{i2}, …, x_{ip})^⊤, in the following way:
  • τ i = exp ( f ( β , x i ) ) , i = 1 , 2 , , n , where β T = ( β 1 , β 2 , , β p ) is a p-dimensional vector of unknown parameters, and f is an injective and continuous nonlinear function, twice differentiable with respect to the elements of β .
  • The shape parameters do not involve x_i; that is, α_i = α, δ_i = δ, and λ_i = λ, for i = 1, 2, …, n.
Let Y i = log ( T i ) . Then, the nonlinear flexible log-Birnbaum–Saunders model is defined by
$$Y_i = f(\beta, x_i) + \epsilon_i,$$
where it is assumed that the error terms ϵ_i are independent and identically distributed, ϵ_i ∼ FSHN(α, 0, 2, δ, λ). Since ϵ_i ∼ FSHN(α, 0, 2, δ, λ), by applying results given in [8],
$$E(\epsilon_i) = 2\, c_1(\alpha, \delta, \lambda), \qquad Var(\epsilon_i) = 4\, V(\alpha, \delta, \lambda),$$
where
$$c_1(\alpha, \delta, \lambda) = E_{FSN}\!\left[\mathrm{arcsinh}\!\left(\frac{\alpha Z}{2}\right)\right] = c_\delta \int_{-\infty}^{\infty} \mathrm{arcsinh}\!\left(\frac{\alpha z}{2}\right)\phi(|z| + \delta)\,\Phi(\lambda z)\,dz,$$
and V(α, δ, λ) is the variance of the RV arcsinh(αZ/2), with Z ∼ FSN(δ, λ). The existence of c_1(α, δ, λ) and V(α, δ, λ) follows from arcsinh(x) = log(x + √(1 + x²)), x ∈ ℝ, where log(·) denotes the Napierian logarithm, and from the existence of moments of the skew-normal distribution [16]. Additional details can be seen in Appendix A.1.
By applying Lemma 1 to Equation (8), we have that Y_i ∼ FSHN(α, f(β, x_i), 2, δ, λ). Moreover, E(Y_i) = f(β, x_i) + E(ϵ_i), where E(ϵ_i) was given in Equation (9).
Corollary 1. 
Relevant nonlinear regression models that can be obtained as particular cases of the nonlinear flexible log-Birnbaum–Saunders are the following ones:
1. 
If δ = 0 , then the log-BS nonlinear regression model with asymmetric errors proposed by Lemonte and Cordeiro [15] is obtained.
2. 
If λ = 0 , then a nonlinear log-BS regression submodel with flexible sinh-normal errors is obtained. We highlight that this submodel may fit bimodal data for δ < 0 .
3. 
If δ = λ = 0 , then the nonlinear extension of the Rieck and Nedelman regression model is obtained [10].
Proof. 
Note that the nonlinear flexible log-Birnbaum–Saunders model is built on
$$Y_i \sim FSHN(\alpha, f(\beta, x_i), 2, \delta, \lambda).$$
From Equation (5), its PDF reduces to
$$f_Y(y_i) = \frac{c_\delta}{\alpha}\cosh(w_i)\,\phi\!\left(\left|\frac{2}{\alpha}\sinh(w_i)\right| + \delta\right)\Phi\!\left(\lambda\,\frac{2}{\alpha}\sinh(w_i)\right),$$
with $w_i = \frac{y_i - f(\beta, x_i)}{2}$ and $c_\delta = (1 - \Phi(\delta))^{-1}$. By considering the different values for the parameters in Equation (11), the proposed results follow. □
Remark 2. 
From now on, the following considerations must be taken into account:
  • The general model with a scale parameter σ > 0 is considered, Y_i ∼ FSHN(α, f(β, x_i), σ, δ, λ), whose PDF was given in Equation (5). Also note that E(ϵ_i) = σ c_1(α, δ, λ) and Var(ϵ_i) = σ² V(α, δ, λ).
  • To emphasize the origin of our proposal, that is, the use of the logarithm of a flexible Birnbaum–Saunders distribution as a regression model, the notation flexible log-Birnbaum–Saunders, Y_i ∼ FLBS(α, f(β, x_i), σ, δ, λ), will be used instead of Y_i ∼ FSHN(α, f(β, x_i), σ, δ, λ). Both notations are equivalent.

3.2. Inference

In this subsection, the maximum likelihood method is applied to estimate the parameters in the Y_i ∼ FLBS(α, f(β, x_i), σ, δ, λ) model. To simplify the exposition of results, let us denote θ = (α, β^⊤, σ, δ, λ)^⊤ and
$$\xi_{i1} = \xi_{i1}(\theta) = \frac{2}{\alpha}\cosh\!\left(\frac{y_i - f(\beta, x_i)}{\sigma}\right), \qquad \xi_{i2} = \xi_{i2}(\theta) = \frac{2}{\alpha}\sinh\!\left(\frac{y_i - f(\beta, x_i)}{\sigma}\right).$$
Proposition 1. 
Let Y_1, …, Y_n be a random sample of Y ∼ FLBS(α, f(β, x), σ, δ, λ). Then, up to an additive constant, the log-likelihood function is
$$l(\theta) = n\log(c_\delta) - n\log(\sigma) + \sum_{i=1}^n \log(\xi_{i1}) - \frac{1}{2}\sum_{i=1}^n\left(\xi_{i2}^2 + 2\delta\,\mathrm{sgn}(\xi_{i2})\,\xi_{i2} + \delta^2\right) + \sum_{i=1}^n \log\left(\Phi(\lambda \xi_{i2})\right),$$
where c_δ = (1 − Φ(δ))^{-1} and sgn(·) denotes the sign function.
Proof. 
By using the notation introduced in Equation (12), note that
$$f_Y(y_i) = \frac{c_\delta}{\sigma}\,\xi_{i1}\,\phi\!\left(|\xi_{i2}| + \delta\right)\Phi\!\left(\lambda\,\xi_{i2}\right).$$
Since we have a random sample, the log-likelihood function for θ = ( α , β , σ , δ , λ ) T is
$$l(\theta) = \sum_{i=1}^n \log f_Y(y_i; \theta).$$
Equation (13) follows from Equations (14) and (15) and from the fact that ϕ ( · ) is the PDF of an N ( 0 , 1 ) distribution. □
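The equivalence used in the proof can be verified numerically. The sketch below uses a hypothetical Michaelis–Menten mean function and illustrative parameter values (both are our own assumptions), and includes the additive normal constant −(n/2)log(2π) that Equation (13) drops:

```python
# Log-likelihood of the FLBS nonlinear model vs. the direct sum of log-densities.
import numpy as np
from scipy.stats import norm

def loglik(theta, y, x):
    alpha, b1, b2, sigma, delta, lam = theta
    mu = b2 * x / (b1 + x)                        # assumed mean function f(beta, x)
    xi1 = (2.0 / alpha) * np.cosh((y - mu) / sigma)
    xi2 = (2.0 / alpha) * np.sinh((y - mu) / sigma)
    n = y.size
    return (-n * np.log(1.0 - norm.cdf(delta)) - n * np.log(sigma)
            - 0.5 * n * np.log(2.0 * np.pi)       # the constant dropped in Eq. (13)
            + np.sum(np.log(xi1))
            - 0.5 * np.sum(xi2**2 + 2.0 * delta * np.abs(xi2) + delta**2)
            + np.sum(norm.logcdf(lam * xi2)))

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 2.0, 50)
y = 2.0 * x / (0.5 + x) + rng.normal(0.0, 0.1, 50)
theta = (0.8, 0.5, 2.0, 1.0, -1.0, 1.5)

# Cross-check against the density form in Equation (14).
alpha, b1, b2, sigma, delta, lam = theta
mu = b2 * x / (b1 + x)
xi1 = (2.0 / alpha) * np.cosh((y - mu) / sigma)
xi2 = (2.0 / alpha) * np.sinh((y - mu) / sigma)
direct = np.sum(np.log(xi1 / (sigma * (1.0 - norm.cdf(delta)))
                       * norm.pdf(np.abs(xi2) + delta) * norm.cdf(lam * xi2)))
print(np.isclose(loglik(theta, y, x), direct))  # True
```

Note that sgn(ξ_{i2})ξ_{i2} = |ξ_{i2}|, which is what the code exploits.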
Corollary 2. 
Let Y_1, …, Y_n be a random sample of Y_i ∼ FLBS(α, f(β, x_i), σ, δ, λ). Then:
1. The score functions are
$$\begin{aligned}
U(\alpha) &= -\frac{n}{\alpha} + \frac{1}{\alpha}\sum_{i=1}^n \xi_{i2}^2 - \frac{\lambda}{\alpha}\sum_{i=1}^n \xi_{i2} W_i + \frac{\delta}{\alpha}\sum_{i=1}^n \mathrm{sgn}(\xi_{i2})\,\xi_{i2},\\
U(\beta_j) &= \frac{1}{\sigma}\sum_{i=1}^n d_{i,j}\left(\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}}\right) - \frac{\lambda}{\sigma}\sum_{i=1}^n d_{i,j}\,\xi_{i1} W_i + \frac{\delta}{\sigma}\sum_{i=1}^n d_{i,j}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1}, \quad j = 1, 2, \ldots, p,\\
U(\sigma) &= -\frac{n}{\sigma} + \frac{1}{\sigma}\sum_{i=1}^n z_i\left(\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}}\right) - \frac{\lambda}{\sigma}\sum_{i=1}^n z_i\,\xi_{i1} W_i + \frac{\delta}{\sigma}\sum_{i=1}^n \mathrm{sgn}(\xi_{i2})\, z_i\,\xi_{i1},\\
U(\delta) &= \frac{n\,\phi(\delta)}{1 - \Phi(\delta)} - \sum_{i=1}^n \mathrm{sgn}(\xi_{i2})\,\xi_{i2} - n\delta,\\
U(\lambda) &= \sum_{i=1}^n \xi_{i2} W_i,
\end{aligned}$$
where sgn(·) denotes the sign function, and
$$d_{i,j} = \frac{\partial f(\beta, x_i)}{\partial \beta_j}, \qquad z_i = \frac{y_i - f(\beta, x_i)}{\sigma}, \qquad W_i = \frac{\phi(\lambda \xi_{i2})}{\Phi(\lambda \xi_{i2})}, \qquad j = 1, \ldots, p;\ i = 1, \ldots, n.$$
2. Maximum likelihood estimators for the regression parameter β and parameters α , σ , δ , and λ are obtained as solutions for U ( θ ) = 0 , which require numerical procedures.
Proof. 
It is straightforward by applying standard calculus techniques. □
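As a sketch of the numerical procedure mentioned in Corollary 2(2), the following simulates from the δ = λ = 0 submodel (SHN errors with σ = 2, generated via Lemma 1(1)) and maximizes the corresponding log-likelihood with a derivative-free optimizer. The design, true values, and mean function are illustrative assumptions:

```python
# Numerical MLE for (alpha, beta1, beta2) in the symmetric submodel.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, alpha0, b10, b20 = 500, 0.5, 0.5, 2.0
x = rng.uniform(0.1, 3.0, n)
# Lemma 1(1) with delta = lambda = 0: eps = 2 * arcsinh(alpha * Z / 2), Z ~ N(0,1)
eps = 2.0 * np.arcsinh(alpha0 * rng.standard_normal(n) / 2.0)
y = b20 * x / (b10 + x) + eps

def negloglik(theta):
    alpha, b1, b2 = theta
    if alpha <= 0 or b1 <= 0:
        return np.inf                      # keep the search in the valid region
    mu = b2 * x / (b1 + x)
    xi1 = (2.0 / alpha) * np.cosh((y - mu) / 2.0)
    xi2 = (2.0 / alpha) * np.sinh((y - mu) / 2.0)
    return -(np.sum(np.log(xi1)) - 0.5 * np.sum(xi2**2))

fit = minimize(negloglik, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print(np.round(fit.x, 2))  # should land near (0.5, 0.5, 2.0)
```

With δ and λ free, the same approach applies to the full five-parameter vector, typically with several starting points.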

Observed Information Matrix

Let us consider the matrix
$$J(\theta) = -H(\theta),$$
where H(θ) is the Hessian matrix of the log-likelihood function l(θ). The elements of J(θ) are denoted by j_{αα}, j_{β_jβ_{j'}}, j_{αβ_j}, …, j_{δδ}, j_{δλ}, j_{λλ}. Their expressions are given in Appendix A.2.
Recall that the Fisher (or expected) information matrix, I F ( θ ) , is given by the expected values of the elements in J ( θ ) . For large samples and under regularity conditions, the MLE of θ , θ ^ , is asymptotically normal, and its asymptotic covariance matrix is the inverse of the Fisher information matrix, I F 1 ( θ ) . Explicitly,
$$\hat\theta - \theta \xrightarrow{L} N_{p+4}\left(0,\, I_F^{-1}(\theta)\right),$$
where L denotes convergence in law or in distribution; see [17].
Let J(θ̂) be the observed information matrix, which is obtained by replacing the unknown parameters in Equation (16) by their MLEs. Since, for large n, J(θ̂) converges in probability to I_F(θ), in practice, Equation (17) is applied taking J^{-1}(θ̂) instead of I_F^{-1}(θ).
As for the existence of the previous matrices, recall that the flexible skew-normal (FSN) distribution and its properties are well established. Our model is obtained by applying the arcsinh(·) transformation to an FSN variable. Therefore, taking into account that the sinh(·) and cosh(·) functions are continuous and differentiable, and given the good analytical properties of the FSN model, the existence of the derivatives of the log-likelihood in the nonlinear FLBS model follows. Moreover, for λ = δ = 0, the rows (or columns) of the information matrix are linearly independent. All these facts support that the regularity conditions for the MLEs are met in practice.
Moreover, for large n, and due to the convergence in probability of J ( θ ^ ) to I F ( θ ) , the inverse of submatrix in Equation (16), which corresponds to β , J 1 ( β ) , can be used to obtain the asymptotic variance of μ ^ i = f ( β ^ , x i ) . Specifically,
$$Var(\hat\mu_i) = \mathrm{trace}\left(d_i\, d_i^\top\, J^{-1}(\hat\beta)\right) = d_i^\top J^{-1}(\hat\beta)\, d_i,$$
where D = ∂μ/∂β is a p × n matrix with μ = (μ_1, …, μ_n)^⊤, and d_i denotes the i-th column of the matrix D evaluated at β̂.
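The plug-in step can be sketched numerically: approximate J(θ̂) by a finite-difference Hessian of the negative log-likelihood and invert it for standard errors. A toy N(μ, σ²) sample is used here (our own illustration, not the FLBS model) so that the answer is known in closed form:

```python
# Observed information via a central finite-difference Hessian.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(1.0, 2.0, 400)

def negloglik(theta):
    mu, sigma = theta
    return (0.5 * y.size * np.log(2.0 * np.pi * sigma**2)
            + np.sum((y - mu)**2) / (2.0 * sigma**2))

theta_hat = np.array([y.mean(), y.std()])   # closed-form MLEs for this toy model

def num_hessian(f, t, h=1e-4):
    k = t.size
    H = np.zeros((k, k))
    E = np.eye(k) * h
    for i in range(k):
        for j in range(k):
            H[i, j] = (f(t + E[i] + E[j]) - f(t + E[i] - E[j])
                       - f(t - E[i] + E[j]) + f(t - E[i] - E[j])) / (4.0 * h**2)
    return H

J = num_hessian(negloglik, theta_hat)       # observed information J = -H(l)
se = np.sqrt(np.diag(np.linalg.inv(J)))
print(np.round(se, 4))  # approx (sigma_hat/sqrt(n), sigma_hat/sqrt(2n))
```

For the normal toy model, the observed information at the MLE is diag(n/σ̂², 2n/σ̂²), so the finite-difference standard errors can be checked exactly.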

4. Influence Diagnostics

In this section, influence diagnostic tools are proposed. Specifically, Cook’s generalized distance is considered in Section 4.1. This is a case-deletion kind of influence diagnostic that can be used to detect influential observations on parameter estimates. Later, local influence measures are introduced in Section 4.2. Four perturbation schemes are proposed, which may be used to carry out a sensitivity study and detect influential cases affecting the obtained inferential results.
First, the notation is introduced. Following Cook [18], let l(θ) be the log-likelihood corresponding to the model proposed in Equation (8), with θ the vector of unknown parameters. Perturbations of the model can be introduced through a q × 1 vector ω, restricted to some open subset Ω ⊂ ℝ^q. In practice, ω represents a given perturbation scheme of the initial model.
Let us now consider l(θ|ω), the log-likelihood associated with the perturbed model for a given ω ∈ Ω. It is assumed that there exists an ω₀ ∈ Ω such that l(θ|ω₀) = l(θ) for all θ; that is, ω₀ represents no perturbation of the model. It is also assumed that l(θ|ω) is twice continuously differentiable in (θ^⊤, ω^⊤)^⊤. Let us denote by θ̂ and θ̂_ω the MLEs of the unknown parameters under l(θ) and l(θ|ω), respectively.
The diagnostic curvature was introduced by Cook [18] as
$$C(u) = 2\,\left|u^\top H u\right|,$$
where u is a unit-norm direction vector, ‖u‖ = 1, and H is the q × q matrix with elements $h_{ii'} = \left.\frac{\partial^2 l(\theta|\omega)}{\partial \omega_i\,\partial \omega_{i'}}\right|_{\hat\theta_\omega}$. In practice, H can be obtained from the relationship
$$H = \Delta^\top J^{-1}(\hat\theta)\,\Delta,$$
evaluated at ω = ω₀, with J(θ) being the observed information matrix evaluated at θ = θ̂, and $\Delta = \frac{\partial^2 l(\theta|\omega)}{\partial \theta\,\partial \omega^\top}$ evaluated at θ = θ̂ and ω = ω₀.
Based on Equation (18), the influence diagnostic analysis of maximum curvature, C max , can be carried out. So, the eigenvector u max associated with the largest eigenvalue of the H matrix can be used to assess the local influence on the estimates of parameters in the log-BS nonlinear model. The effect of locally influential observations is determined by the perturbation of the data in the u max direction.
The previous tools can also be used to assess the local change in θ̂ due to the influence of ω, by using the likelihood displacement defined as
$$LD(\omega) = 2\left\{l(\hat\theta) - l(\hat\theta_\omega)\right\},$$
which compares θ̂_ω and θ̂ with respect to the non-perturbed log-likelihood.
To conclude, it is worth mentioning that, in order to study influential observations, Poon and Poon [19] proposed the conformal normal curvature, B_l, defined by
$$B_l = \left.\frac{u^\top \Delta^\top\left(-\ddot{L}^{-1}\right)\Delta\, u}{\sqrt{\mathrm{tr}\left[\left(\Delta^\top \ddot{L}^{-1}\Delta\right)^2\right]}}\right|_{\theta=\hat\theta,\,\omega=\omega_0},$$
where tr(A) denotes the trace of the matrix A. Moreover, B_l and LD are computationally equivalent. The conformal normal curvature at ω₀ is invariant under reparameterization, and for any direction l, 0 ≤ |B_l| ≤ 1, so B_l is a normalized measure, which allows two curvatures to be compared.
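The curvature machinery can be sketched with synthetic matrices (the Δ and J below are random stand-ins, not from any fitted model): form H, extract the direction of maximum curvature, and normalize the diagonal to conformal curvatures:

```python
# Local influence sketch: H = Delta' J^{-1} Delta, u_max, and conformal curvatures.
import numpy as np

rng = np.random.default_rng(0)
q_par, n = 5, 30                        # parameter dimension x number of cases
Delta = rng.normal(size=(q_par, n))
A = rng.normal(size=(q_par, q_par))
J = A @ A.T + q_par * np.eye(q_par)     # symmetric positive-definite stand-in

H = Delta.T @ np.linalg.inv(J) @ Delta  # n x n curvature matrix
eigval, eigvec = np.linalg.eigh(H)
u_max = eigvec[:, -1]                   # direction of maximum local influence
C_max = 2.0 * abs(eigval[-1])           # maximum curvature

# Conformal curvature for the basis directions (one per case); each lies in [0, 1].
B = np.diag(H) / np.sqrt(np.trace(H @ H))
print(C_max >= 0.0, bool(np.all((B >= 0.0) & (B <= 1.0))))
```

Cases with unusually large components of u_max (or large B_i) are flagged as locally influential.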

4.1. Cook Generalized Distance

For the vector of parameters in the log-BS nonlinear model, θ = ( α , β , σ , δ , λ ) , Cook’s generalized distance (GDC) measures the global effect on MLEs of θ when an observation is removed; details can be seen in [18]. If the number of regression coefficients in the fitted model is p , then the global influential statistic for the log-FBS nonlinear regression model is given by
$$GDC_i(\theta) = \frac{1}{p+4}\left(\hat\theta - \hat\theta_{(i)}\right)^\top \hat\Sigma_{\hat\theta}^{-1}\left(\hat\theta - \hat\theta_{(i)}\right), \quad i = 1, \ldots, n,$$
where Σ̂_θ̂ is the estimated covariance matrix of θ̂, and θ̂_{(i)} is the MLE of θ when the i-th observation is removed.
If we focus on the vector of regression coefficients β , then we only need the subvector that corresponds to these coefficients, and Cook’s generalized distance reduces to
$$GDC_i(\beta) = \frac{1}{p}\left(\hat\beta - \hat\beta_{(i)}\right)^\top \hat\Sigma_{\hat\beta}^{-1}\left(\hat\beta - \hat\beta_{(i)}\right), \quad i = 1, \ldots, n,$$
where Σ̂_β̂ is the submatrix of Σ̂_θ̂ associated with the vector β̂.
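A minimal case-deletion sketch of this statistic, using ordinary least squares as a stand-in for the MLE refits (in the log-FBS model each refit would be a numerical MLE), with one planted outlier:

```python
# Generalized Cook distance via leave-one-out refits.
import numpy as np

rng = np.random.default_rng(5)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.3, n)
y[0] += 3.0                                   # plant one gross outlier at case 0

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)
cov_inv = (X.T @ X) / s2                      # inverse of estimated Cov(beta_hat)

gdc = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    d = beta_hat - b_i
    gdc[i] = d @ cov_inv @ d / p              # Equation-(21)-style distance

print(int(np.argmax(gdc)))  # the planted outlier (case 0) should dominate
```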

4.2. Local Influence Measurements

In this subsection, we deal with the effect of minor perturbations on the data, since they may cause a considerable effect on the estimates of parameters in the fitted model [20]. The local influence diagnostics are useful to check the model’s assumptions and assess the effects of minor perturbations in the dataset or in the proposed model on estimates of regression parameters, scale parameters, and other parameters. In this way, problems with the error distribution assumptions or the fitted regression model can be detected. The following perturbation schemes are addressed:
  • Case-weight;
  • Perturbation of the response variable;
  • Perturbation of an explanatory variable;
  • Perturbation of the scale parameter.

4.2.1. Case-Weight Perturbation

Let ω be the n × 1 vector of case-weights for the log-FBS nonlinear regression model introduced in Equation (8). The relevant part of the log-likelihood for the perturbed model is
$$l(\theta|\omega) = -\sum_{i=1}^n \omega_i\left[\log(1-\Phi(\delta)) + \log(\sigma)\right] + \sum_{i=1}^n \omega_i \log(\xi_{i1}) - \frac{1}{2}\sum_{i=1}^n \omega_i\left(\xi_{i2}^2 + 2\delta\,\mathrm{sgn}(\xi_{i2})\,\xi_{i2} + \delta^2\right) + \sum_{i=1}^n \omega_i \log(\Phi(\lambda \xi_{i2})).$$
Then, for the i-th observation and the j-th coefficient β_j, it follows that
$$\Delta_{ij} = \frac{1}{\sigma}\, d_{i,j}\left(\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}}\right) - \frac{\lambda}{\sigma}\, d_{i,j}\,\xi_{i1} W_i + \frac{\delta}{\sigma}\, d_{i,j}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1}, \quad j = 1, 2, \ldots, p.$$
Therefore, the Δ_β matrix is given by
$$\Delta_\beta = \mathbf{D}\,\mathrm{diag}\{\kappa_1, \kappa_2, \ldots, \kappa_n\},$$
where D = {d_{ij}} is a p × n matrix and
$$\kappa_i = \frac{1}{\sigma}\left(\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}}\right) - \frac{\lambda}{\sigma}\,\xi_{i1} W_i + \frac{\delta}{\sigma}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1}, \quad i = 1, 2, \ldots, n.$$
Moreover, Δ_{α,σ,δ,λ} is a 4 × n matrix given by
$$\Delta_{\alpha,\sigma,\delta,\lambda} = (\nu_1, \nu_2, \ldots, \nu_n),$$
where
$$\nu_i = \begin{pmatrix} -\dfrac{1}{\alpha} + \dfrac{1}{\alpha}\,\xi_{i2}^2 - \dfrac{\lambda}{\alpha}\, W_i\,\xi_{i2} + \dfrac{\delta}{\alpha}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i2} \\[6pt] -\dfrac{1}{\sigma} + \dfrac{1}{\sigma}\, z_i\left(\xi_{i1}\xi_{i2} - \dfrac{\xi_{i2}}{\xi_{i1}}\right) - \dfrac{\lambda}{\sigma}\, z_i\,\xi_{i1} W_i + \dfrac{\delta}{\sigma}\,\mathrm{sgn}(\xi_{i2})\, z_i\,\xi_{i1} \\[6pt] \dfrac{\phi(\delta)}{1-\Phi(\delta)} - \mathrm{sgn}(\xi_{i2})\,\xi_{i2} - \delta \\[6pt] \xi_{i2}\, W_i \end{pmatrix}, \quad i = 1, \ldots, n.$$

4.2.2. Perturbation of the Response Variable

An additive perturbation scheme for the response variable is proposed as follows:
$$Y_{i\omega} = Y_i + \omega_i s_y, \quad i = 1, \ldots, n,$$
where s_y is the standard deviation of the response variable and ω_i ∈ ℝ. So, the relevant part of the perturbed log-likelihood is given by
$$l(\theta|\omega) = -n\log(1-\Phi(\delta)) - n\log(\sigma) + \sum_{i=1}^n \log(\xi_{i1\omega}) - \frac{1}{2}\sum_{i=1}^n\left(\xi_{i2\omega}^2 + 2\delta\,\mathrm{sgn}(\xi_{i2\omega})\,\xi_{i2\omega} + \delta^2\right) + \sum_{i=1}^n \log(\Phi(\lambda \xi_{i2\omega})),$$
where
$$\xi_{i1\omega} = \frac{2}{\alpha}\cosh\!\left(\frac{y_{i\omega} - f(\beta, x_i)}{\sigma}\right), \qquad \xi_{i2\omega} = \frac{2}{\alpha}\sinh\!\left(\frac{y_{i\omega} - f(\beta, x_i)}{\sigma}\right).$$
In this case, the elements of the Δ matrix are given by
$$\Delta_{ij}(\beta) = \frac{s_y}{\sigma^2}\, d_{i,j}\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - 1 + \frac{\xi_{i2}^2}{\xi_{i2}^2 + 4/\alpha^2}\right] - \frac{\lambda s_y}{\sigma^2}\, d_{i,j}\, W_i\left[\xi_{i2} - \lambda\,\xi_{i1}^2(\lambda \xi_{i2} + W_i)\right] + \frac{\delta s_y}{\sigma^2}\, d_{i,j}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i2}, \quad i = 1, \ldots, n,\; j = 1, \ldots, p,$$
and $\Delta_i(\alpha, \sigma, \delta, \lambda) = (\upsilon_{1i}, \upsilon_{2i}, \upsilon_{3i}, \upsilon_{4i})^\top$, i = 1, …, n, where
$$\begin{aligned}
\upsilon_{1i} &= \frac{2 s_y}{\alpha\sigma}\,\xi_{i1}\xi_{i2} - \frac{\lambda s_y}{\alpha\sigma}\,\xi_{i1} W_i\left[1 - \lambda \xi_{i2}(\lambda \xi_{i2} + W_i)\right] + \frac{\delta s_y}{\alpha\sigma}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1},\\
\upsilon_{2i} &= \frac{s_y}{\sigma^2}\left(\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}}\right) + \frac{s_y}{\sigma^2}\, z_i\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - 1 + \frac{\xi_{i2}^2}{\xi_{i2}^2 + 4/\alpha^2}\right] - \frac{\lambda s_y}{\sigma^2}\, W_i\left[\xi_{i1} + z_i\left(\xi_{i2} - \lambda\,\xi_{i1}^2(\lambda \xi_{i2} + W_i)\right)\right] + \frac{\delta s_y}{\sigma^2}\,\mathrm{sgn}(\xi_{i2})\left(\xi_{i1} + z_i\,\xi_{i2}\right),\\
\upsilon_{3i} &= -\frac{s_y}{\sigma}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1},\\
\upsilon_{4i} &= \frac{s_y}{\sigma}\,\xi_{i1} W_i\left[1 - \lambda \xi_{i2}(\lambda \xi_{i2} + W_i)\right].
\end{aligned}$$

4.2.3. Perturbation of an Explanatory Variable

Let us now consider the following additive perturbation of the explanatory variable x_q:
$$x_{iq\omega} = x_{iq} + \omega_i s_{x_q}, \quad i = 1, \ldots, n,$$
where s_{x_q} is the standard deviation of the variable x_q and ω_i ∈ ℝ. Then, the relevant part of the perturbed log-likelihood is
$$l(\theta|\omega) = -n\log(1-\Phi(\delta)) - n\log(\sigma) + \sum_{i=1}^n \log(\xi_{i1q\omega}) - \frac{1}{2}\sum_{i=1}^n\left(\xi_{i2q\omega}^2 + 2\delta\,\mathrm{sgn}(\xi_{i2q\omega})\,\xi_{i2q\omega} + \delta^2\right) + \sum_{i=1}^n \log(\Phi(\lambda \xi_{i2q\omega})),$$
where
$$\xi_{i1q\omega} = \frac{2}{\alpha}\cosh\!\left(\frac{y_i - f(\beta, x_{i\omega})}{\sigma}\right), \qquad \xi_{i2q\omega} = \frac{2}{\alpha}\sinh\!\left(\frac{y_i - f(\beta, x_{i\omega})}{\sigma}\right),$$
and x_{iω} = (x_{i1}, x_{i2}, …, x_{i(q−1)}, x_{iq} + ω_i s_{x_q}, x_{i(q+1)}, …, x_{ip})^⊤ denotes the perturbed covariate vector.
The elements of Δ(β) are
$$\Delta_{ij}(\beta) = \frac{s_{x_q}}{\sigma}\, r_{ijq}\left[\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}} - \lambda\,\xi_{i1} W_i + \delta\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1}\right] - \frac{\delta s_{x_q}}{\sigma^2}\, d_{i,j}\, r_{iq}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i2} - \frac{s_{x_q}}{\sigma^2}\, d_{i,j}\, r_{iq}\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - 1 + \frac{\xi_{i2}^2}{\xi_{i2}^2 + 4/\alpha^2} - \lambda W_i\left(\xi_{i2} - \lambda\,\xi_{i1}^2(\lambda \xi_{i2} + W_i)\right)\right],$$
with
$$r_{ijq} = \left.\frac{\partial^2 f(\beta, x_{i\omega})}{\partial \beta_j\,\partial x_{iq\omega}}\right|_{\theta=\hat\theta,\,\omega=0} \quad \text{and} \quad r_{iq} = \left.\frac{\partial f(\beta, x_{i\omega})}{\partial x_{iq\omega}}\right|_{\theta=\hat\theta,\,\omega=0}.$$
Analogously, for the parameters α, σ, δ, and λ, we have that
$$\Delta(\alpha, \sigma, \delta, \lambda) = (c_1, c_2, \ldots, c_n),$$
where $c_i = (c_{i\alpha}, c_{i\sigma}, c_{i\delta}, c_{i\lambda})^\top$ with
$$\begin{aligned}
c_{i\alpha} &= -\frac{2 s_{x_q}}{\alpha\sigma}\, r_{iq}\,\xi_{i1}\xi_{i2} + \frac{\lambda s_{x_q}}{\alpha\sigma}\, r_{iq}\,\xi_{i1} W_i\left[1 - \lambda \xi_{i2}(\lambda \xi_{i2} + W_i)\right] - \frac{\delta s_{x_q}}{\alpha\sigma}\, r_{iq}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1},\\
c_{i\sigma} &= -\frac{s_{x_q}}{\sigma^2}\, r_{iq}\left[\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}} - \lambda\,\xi_{i1} W_i + \delta\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1}\right] - \frac{s_{x_q}}{\sigma^2}\, r_{iq}\, z_i\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - 1 + \frac{\xi_{i2}^2}{\xi_{i2}^2 + 4/\alpha^2} - \lambda W_i\left(\xi_{i2} - \lambda\,\xi_{i1}^2(\lambda \xi_{i2} + W_i)\right) + \delta\,\mathrm{sgn}(\xi_{i2})\,\xi_{i2}\right],\\
c_{i\delta} &= \frac{s_{x_q}}{\sigma}\, r_{iq}\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1},\\
c_{i\lambda} &= -\frac{s_{x_q}}{\sigma}\, r_{iq}\,\xi_{i1} W_i\left[1 - \lambda \xi_{i2}(\lambda \xi_{i2} + W_i)\right].
\end{aligned}$$

4.2.4. Perturbation of a Scale Parameter

Now, we focus on the effect of a minor perturbation of the scale parameter σ, which may cause heteroscedasticity. In this subsection, the effect of this perturbation on the MLEs is studied. Let us assume that the error term in Equation (8) is distributed as ϵ_i ∼ FSHN(α, 0, σ/ω_i, δ, λ), i = 1, 2, …, n. Then, the perturbed log-likelihood is
$$l(\theta|\omega) = -n\log(1-\Phi(\delta)) - \sum_{i=1}^n \log(\sigma/\omega_i) + \sum_{i=1}^n \log(\xi_{i1\omega_1}) - \frac{1}{2}\sum_{i=1}^n\left(\xi_{i2\omega_1}^2 + 2\delta\,\mathrm{sgn}(\xi_{i2\omega_1})\,\xi_{i2\omega_1} + \delta^2\right) + \sum_{i=1}^n \log(\Phi(\lambda \xi_{i2\omega_1})),$$
where
$$\xi_{i1\omega_1} = \frac{2}{\alpha}\cosh(z_{i\omega_1}), \qquad \xi_{i2\omega_1} = \frac{2}{\alpha}\sinh(z_{i\omega_1}), \qquad z_{i\omega_1} = \frac{y_i - f(\beta, x_i)}{\sigma/\omega_i}.$$
In this case, we have
$$\Delta_{ij}(\beta) = \frac{1}{\sigma}\, d_{i,j}\, z_i\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - 1 + \frac{\xi_{i2}^2}{\xi_{i2}^2 + 4/\alpha^2} - \lambda W_i\left(\xi_{i2} - \lambda\,\xi_{i1}^2(\lambda \xi_{i2} + W_i)\right)\right] + \frac{\delta}{\sigma}\,\mathrm{sgn}(\xi_{i2})\, d_{i,j}\, z_i\,\xi_{i2},$$
and $\Delta_i(\alpha, \sigma, \delta, \lambda) = (r_1, r_2, \ldots, r_n)$ with $r_i = (r_{i\alpha}, r_{i\sigma}, r_{i\delta}, r_{i\lambda})^\top$, where
$$\begin{aligned}
r_{i\alpha} &= \frac{2}{\alpha}\, z_i\,\xi_{i1}\xi_{i2} - \frac{\lambda}{\alpha}\, z_i\,\xi_{i1} W_i\left[1 - \lambda \xi_{i2}(\lambda \xi_{i2} + W_i)\right] + \frac{\delta}{\alpha}\, z_i\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1},\\
r_{i\sigma} &= \frac{1}{\sigma}\, z_i\left[\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}} - \lambda\,\xi_{i1} W_i + \delta\,\mathrm{sgn}(\xi_{i2})\,\xi_{i1}\right] + \frac{1}{\sigma}\, z_i^2\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - 1 + \frac{\xi_{i2}^2}{\xi_{i2}^2 + 4/\alpha^2} - \lambda W_i\left(\xi_{i2} - \lambda\,\xi_{i1}^2(\lambda \xi_{i2} + W_i)\right) + \delta\,\mathrm{sgn}(\xi_{i2})\,\xi_{i2}\right],\\
r_{i\delta} &= -\mathrm{sgn}(\xi_{i2})\, z_i\,\xi_{i1},\\
r_{i\lambda} &= z_i\,\xi_{i1} W_i\left[1 - \lambda \xi_{i2}(\lambda \xi_{i2} + W_i)\right].
\end{aligned}$$

5. Residual Analysis

In this section, martingale-type residuals are considered to detect deficiencies of the fitted FBS nonlinear regression model with respect to the error distributional assumptions and to detect possible outliers. The study is based on the deviance component residual built on the martingale-type residuals proposed in Therneau et al. [21]. For the nonlinear log-FBS model, the martingale residuals can be obtained as
$$r_{M_i} = 1 + \ln\left(1 - F_{FSHN}(\hat{z}_i)\right), \quad i = 1, 2, \ldots, n,$$
where F_FSHN(·) is the CDF of the FSHN distribution and ẑ_i = ξ_{i2}(θ̂).
Therneau et al. [21] proposed the deviance component residual as a transformation of the martingale-type residual. For non-censored data, they can be taken as
$$r_{MT_i} = \mathrm{sgn}(r_{M_i})\left\{-2\left[r_{M_i} + \ln\left(1 - r_{M_i}\right)\right]\right\}^{1/2}, \quad i = 1, 2, \ldots, n.$$
Here, r_{MT_i} can be used as martingale-type residuals, since they are approximately symmetrically distributed around zero.
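These residuals can be sketched numerically: under a correctly specified model, F_FSHN(ẑ_i) is uniform on (0, 1), so uniform draws can stand in for it (an illustration, not a fit of the actual model). The martingale residuals then have mean zero and are bounded above by 1:

```python
# Martingale and deviance component residuals from uniform stand-ins.
import numpy as np

rng = np.random.default_rng(11)
u = rng.uniform(size=2000)                        # stand-in for F_FSHN(z_hat_i)
r_m = 1.0 + np.log(1.0 - u)                       # martingale-type, in (-inf, 1]
inner = np.minimum(r_m + np.log(1.0 - r_m), 0.0)  # guard against roundoff; true value <= 0
r_mt = np.sign(r_m) * np.sqrt(-2.0 * inner)       # deviance component residuals
print(round(r_m.mean(), 3), bool(r_m.max() <= 1.0))
```

The quantity r_M + ln(1 − r_M) is always non-positive (since ln t ≤ t − 1), so the square root in the deviance transform is well defined.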
On the other hand, Ortega et al. [22] proposed the consideration of the standardized residuals, r MT i * . For the log-BS nonlinear model, they are given by
$$r^{*}_{MT_i} = \frac{r_{MT_i}}{\sqrt{1 - GL_{ii}(\hat\theta)}}, \quad i = 1, \ldots, n,$$
where GL_{ii}(θ̂) is the i-th principal diagonal element of the generalized leverage matrix evaluated at θ̂; details can be seen in Wei et al. [23].
The generalized leverage matrix is defined by
$$GL(\theta) = D_\theta\left(-\ddot{L}\right)^{-1}\ddot{L}_{\theta y},$$
where $D_\theta = \partial \mu(\theta)/\partial \theta = (D, \mathbf{0})$, L̈ is the Hessian matrix of the log-likelihood, and $\ddot{L}_{\theta y} = \left(\ddot{L}_{\alpha y}, \ddot{L}_{\beta y}, \ddot{L}_{\sigma y}, \ddot{L}_{\delta y}, \ddot{L}_{\lambda y}\right)$, with
\[
\begin{aligned}
\ddot L_{\alpha y} &= \frac{2}{\alpha\sigma}\,\xi_{i1}\xi_{i2} + \frac{\lambda}{\alpha\sigma}\,\xi_{i1}W_i\left[-1+\lambda\xi_{i2}(\lambda\xi_{i2}+W_i)\right] + \frac{\delta}{\alpha\sigma}\,\operatorname{sgn}(\xi_{i2})\,\xi_{i1},\\
\ddot L_{\beta y_i} &= \frac{1}{\sigma^2}\, d_{i,j}\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - \frac{1+\xi_{i2}^2}{\xi_{i2}^2+4/\alpha^2} + \lambda W_i\left(-\xi_{i2}+\lambda\xi_{i1}^2(\lambda\xi_{i2}+W_i)\right)\right] + \frac{\delta}{\sigma^2}\,\operatorname{sgn}(\xi_{i2})\, d_{i,j}\,\xi_{i2},\\
\ddot L_{\sigma y} &= \frac{1}{\sigma^2}\left[\xi_{i1}\xi_{i2} - \frac{\xi_{i2}}{\xi_{i1}} - \lambda\xi_{i1}W_i + \delta\operatorname{sgn}(\xi_{i2})\,\xi_{i1}\right]\\
&\quad + \frac{1}{\sigma^2}\, z_i\left[2\xi_{i2}^2 + \frac{4}{\alpha^2} - \frac{1+\xi_{i2}^2}{\xi_{i2}^2+4/\alpha^2} + \lambda W_i\left(-\xi_{i2}+\lambda\xi_{i1}^2(\lambda\xi_{i2}+W_i)\right) + \delta\operatorname{sgn}(\xi_{i2})\,\xi_{i2}\right],\\
\ddot L_{\delta y} &= \frac{1}{\sigma}\,\operatorname{sgn}(\xi_{i2})\,\xi_{i1}, \qquad
\ddot L_{\lambda y} = \frac{1}{\sigma}\,\xi_{i1}W_i\left[-1+\lambda\xi_{i2}(\lambda\xi_{i2}+W_i)\right].
\end{aligned}
\]
In general, the distributions of martingale residuals and deviance component residuals are unknown. Based on the properties of deviance component residuals and the suggestions proposed in Atkinson [24], the residual analysis should be based on normal probability plots with simulated envelopes. We will follow this recommendation in the practical applications carried out in Section 7.
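The simulated envelopes can be built by Monte Carlo: simulate many standard-normal samples of the same size, sort each, and take pointwise percentiles of the resulting order statistics. The following is a minimal sketch of our own (the band levels and replicate count are arbitrary choices, not values from the paper):

```python
import numpy as np

def simulated_envelope(n, b=200, lo=2.5, hi=97.5, rng=None):
    """Pointwise envelope bands for the order statistics of N(0,1) samples.

    Returns (lower, upper) arrays of length n; a sorted residual falling
    outside these bands in a normal QQ-plot suggests a poor fit."""
    rng = np.random.default_rng(rng)
    sims = np.sort(rng.standard_normal((b, n)), axis=1)  # sort each replicate
    lower = np.percentile(sims, lo, axis=0)
    upper = np.percentile(sims, hi, axis=0)
    return lower, upper
```

Plotting the sorted \(r_{MT_i}^{*}\) against normal quantiles together with these bands reproduces the kind of envelope display used in Figures 1 and 7.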

6. Simulation

In order to study the performance of MLEs in the log-FBS nonlinear regression model, two simulation studies have been carried out. Both suggest the good performance of our proposal.

6.1. Simulation for the Two-Parameter Michaelis–Menten Model

Next, the two-parameter Michaelis–Menten model is introduced. This model is widely used in chemical kinetics to describe the reaction rate ( Y ) in terms of the substrate concentration ( X ). In practice, it is applied to single-substrate enzyme-catalyzed reactions and is used in a variety of situations, such as antigen–antibody binding, DNA–DNA hybridization, and protein–protein interactions.
The Michaelis–Menten model with log-FBS errors is given by
\[
y_i = \frac{\beta_2\, x_i}{\beta_1 + x_i} + \epsilon_i, \quad i = 1, \ldots, n,
\]
where x > 0 is the substrate concentration, β 2 > 0 represents the maximum reaction rate, β 1 > 0 is a kinetic constant (the Michaelis constant), and ϵ ∼ FSHN ( α , 0 , σ , δ , λ ).
Properties of Equation (30) were studied by Cysneiros and Vanegas [25] by assuming that β 1 = 0.0645 , β 2 = 212 , and the error term ϵ follows a Student’s t distribution. In our simulation, we consider Equation (30) with these values of β 1 and β 2 , but the error term is FSHN distributed as ϵ FSHN ( α , 0 , σ , δ , λ ) .
Without loss of generality, σ = 2 is fixed. For the shape parameters in the FSHN model, we take α = 0.75, 2.75, λ = 1, 2.5, and δ = −1.5, −0.75, 0.75, 1.5. We highlight that these scenarios cover a variety of situations for the shapes of the FSHN distribution (unimodal and bimodal), as can be seen in [8].
As for the sample size, for every scenario, we consider n = 30, 50, 100, 200. For the explanatory variable X, a random sample from a U(0, 1) distribution was generated. In every setting, 2000 simulations were carried out by using the maxLik function of the R software and applying the BFGS method. As statistical summaries, the standard deviation (sd) of the MLEs, the absolute bias (|bias|), and the root mean squared error (√MSE) are given in Table 1, Table 2, Table 3 and Table 4. In these tables, we can observe that, when the sample size increases, the standard deviations, biases, and √MSE decrease in all scenarios. Note also that the bias and √MSE of β̂1 and β̂2 are negligible, which supports the asymptotic unbiasedness of these estimators in the Michaelis–Menten model.
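To make the simulation design concrete, the sketch below generates one dataset from Equation (30). It is a Python illustration of our own, not the authors' R code, and it simplifies in two labeled ways: the errors are drawn from the symmetric sinh-normal special case (δ = 0, λ = 0) via the representation ε = σ arcsinh(αZ/2) with Z ∼ N(0, 1), and the curve is refitted by nonlinear least squares rather than by the full FSHN maximum likelihood used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2024)
n, beta1, beta2 = 200, 0.0645, 212.0   # true values used in the paper's scenarios
alpha, sigma = 0.75, 2.0               # sinh-normal error parameters

x = rng.uniform(0.0, 1.0, n)
# SHN(alpha, 0, sigma) errors: eps = sigma * arcsinh(alpha * Z / 2), Z ~ N(0,1)
eps = sigma * np.arcsinh(alpha * rng.standard_normal(n) / 2.0)
y = beta2 * x / (beta1 + x) + eps

def michaelis_menten(x, b1, b2):
    return b2 * x / (b1 + x)

(b1_hat, b2_hat), _ = curve_fit(michaelis_menten, x, y, p0=(0.1, 200.0))
```

With the asymmetric or bimodal error settings, the least-squares step would be replaced by maximization of the FSHN log-likelihood (maxLik with BFGS in the paper).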
Also note that we obtained satisfactory results in our estimates for δ < 0 , suggesting the good performance of our model in bimodal settings.
For the ( α , σ , δ , λ ) parameters, we highlight that the largest bias was obtained for δ > 0 and n = 50. However, as n increases, | bias | decreases. In all settings, the best results are obtained for negative values of δ, even for n = 30.
Since these simulations cover a variety of situations and shapes of the FSHN distribution, they suggest that the estimators of the parameters are consistent when the sample size increases.
As for the interest of our results in practice, we highlight the importance of the Michaelis–Menten model. Recall that, in chemical kinetics, this model is applied in the classical case of an enzyme–substrate mechanism in which the reaction timescale of the enzyme must be faster than that of the substrate.

6.2. Simulation for a Nonlinear Regression Model Proposed in [15]

A second simulation study is carried out for the nonlinear regression model with p = 3 covariates, 4 regression coefficients, and sinh-normal errors discussed in Lemonte and Cordeiro [15]. The model is
\[
\mu_i = \beta_1 x_{i,1} + \beta_2 x_{i,2} + \beta_3 \exp(\beta_4 x_{i,3}), \quad i = 1, \ldots, n.
\]
Next, the MLE properties of parameters in this model are studied under the assumption of FSHN errors.
As statistical summaries, we again give the standard deviation (sd) of the estimates, | bias |, and √MSE.
The scenarios under consideration are: β 1 = 4 , β 2 = 5 , β 3 = 3 , β 4 = 1.5 , and σ = 3 . As for the parameters for the error term, we considered α = 0.75 , 1.5 , λ = 1 , 2.5 , and δ = 2.5 , 0.75 . The sample sizes in our simulations were n = 50 , 100 , 150 . The covariates X 1 , X 2 , X 3 were random variables generated as uniform ( 0 , 1 ) . In every setting, the simulations were repeated 2000 times. The maxLik function of the R software and the BFGS method were applied.
Results are listed in Table 5 and Table 6. In general, we can see that the standard errors, the absolute bias, and the square root of the mean squared error decrease for all the parameters if the sample size increases.
We highlight that the estimates of parameters β 1 , β 2 , β 3 , and β 4 , that is, for those involved in the regression model, behave well, which suggests their asymptotic unbiasedness and consistency.

7. Real Applications

In this section, two illustrations of the log-FBS nonlinear regression model are given. In Section 7.1, a discussion of linear and nonlinear regression models is carried out. The residual analysis techniques proposed in Section 5 are applied to check the model’s adequacy. On the other hand, in Section 7.2, the emphasis is put on the use of diagnostic influence techniques proposed in Section 4.

7.1. Illustration I

7.1.1. Classical Discussion of Models

Let us consider the Australian Institute of Sport (AIS) dataset available from the sn package of R [26]. It consists of n = 202 measurements from high-performance athletes on various characteristics of the blood. We aim to explain the Hematocrit (Hc) variable in terms of Hemoglobin (Hg). A linear model with log-FBS errors and two nonlinear models with log-BS and log-FBS errors are considered for Y = log ( Hc ) . First, it is established that a nonlinear model is better. Later, by using a likelihood ratio test (LRT), it is seen that a nonlinear model with log-FBS errors must be preferred. Its adequacy is checked by envelope plots for the martingale-type residuals.
The regression models under consideration are:
  • Linear model for Y = log ( Hc ):
    \[
    Y_i = \log(\mathrm{Hc}_i) = \beta_1 + \beta_2 x_i + \epsilon_i, \quad i = 1, 2, \ldots, n,
    \]
    where \(\epsilon_i \sim \text{log-FBS}(\alpha, 0, 2, \delta, \lambda)\).
  • Nonlinear model for Y = log ( Hc ) with log-BS errors:
    \[
    Y_i = \log(\mathrm{Hc}_i) = \beta_1 x_i^{\beta_2} + \epsilon_i, \quad i = 1, 2, \ldots, n,
    \]
    where \(\epsilon_i \sim \text{log-BS}(\alpha, 0, 2)\).
  • Nonlinear model for Y = log ( Hc ) with log-FBS errors:
    \[
    Y_i = \log(\mathrm{Hc}_i) = \beta_1 x_i^{\beta_2} + \epsilon_i, \quad i = 1, 2, \ldots, n,
    \]
    where \(\epsilon_i \sim \text{log-FBS}(\alpha, 0, 2, \delta, \lambda)\).
Table 7 provides the parameter estimates by using maximum likelihood, along with the estimated standard errors (in parentheses) and the p-values of tests for the coefficients. The proposed models are compared by using the Akaike information criterion (AIC) and corrected AIC (AICc) defined as
\[
\mathrm{AIC} = -2\,\ell(\hat\theta) + 2k \qquad \text{and} \qquad \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},
\]
where k is the number of parameters in the model. The preferred model is the one with the lowest AIC or AICc.
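As a quick check of these formulas, a small helper (our own, with a hypothetical function name) computes both criteria from a maximized log-likelihood:

```python
def aic_aicc(loglik, k, n):
    """AIC = -2*loglik + 2k; AICc adds the small-sample correction 2k(k+1)/(n-k-1)."""
    aic = -2.0 * loglik + 2.0 * k
    return aic, aic + 2.0 * k * (k + 1) / (n - k - 1)
```

For example, a hypothetical fit with log-likelihood −100.0, k = 5 parameters, and n = 202 observations gives AIC = 210 and AICc ≈ 210.31; the correction term shrinks as n grows relative to k.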
From AIC and AICc in Table 7, the nonlinear log-BS and nonlinear log-FBS models provide the better fit.
Since nonlinear log-BS and nonlinear log-FBS are nested models, they can be compared with a likelihood ratio test (LRT), where
\[
H_0:\ (\delta,\lambda) = (0,0) \quad \text{versus} \quad H_1:\ (\delta,\lambda) \neq (0,0),
\]
and the test statistic is
\[
\Delta = \frac{L_{\mathrm{LBS}}(\tilde\alpha, \tilde\beta_1, \tilde\beta_2)}{L_{\mathrm{LFBS}}(\hat\alpha, \hat\beta_1, \hat\beta_2, \hat\delta, \hat\lambda)}.
\]
By taking α = 0.05 and using the estimates in Table 7, we obtain
\[
-2\log(\Delta) = -2\,(451.8546 - 457.0297) = 10.3502,
\]
which is greater than the critical point \(\chi^2_{2,0.95} = 5.9915\). Therefore, the null hypothesis is rejected, and it can be stated that the nonlinear log-FBS model provides a better fit than the nonlinear log-BS one.
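The numbers in this comparison can be reproduced in a few lines (reading the two reported values as the maximized log-likelihoods of the nested fits, which is our interpretation of Table 7):

```python
from scipy.stats import chi2

ll_lbs, ll_lfbs = 451.8546, 457.0297   # log-BS and log-FBS fits, respectively
lrt = 2.0 * (ll_lfbs - ll_lbs)         # -2 log(Delta) = 10.3502
crit = chi2.ppf(0.95, df=2)            # chi-square(2) 95% quantile, about 5.99
reject_h0 = lrt > crit                 # True: the log-FBS model is preferred
```

The degrees of freedom equal the two extra parameters (δ, λ) of the log-FBS model.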
Moreover, Figure 1a,b give the envelopes for the martingale-type residuals for both nonlinear models. Again, these plots suggest that the nonlinear log-FBS model provides a better fit than the nonlinear log-BS one.

7.1.2. Discussion of Models Based on Validation Techniques

Next, the regression models proposed above are evaluated by using validation techniques. Uses of cross-validation (CV) and generalized CV criteria to select models or optimal ridge parameters in partial linear regression models can be seen in [27,28], among others. In our case, we follow a validation approach.
We randomly split the 202 observations into two sets: a training set containing 101 of the data points, and a validation set with the remaining 101 observations. The regression model fitted to the training sample is later evaluated on the validation sample by using the mean squared error (MSE) as a measure of error. This process is repeated 100 times, that is, using 100 different random splits of the original sample into training and validation sets. In summary, the mean of the MSE i , i = 1 , , 100 , is given in Table 8.
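The splitting procedure just described can be sketched generically as follows (a Python illustration of our own, with hypothetical `fit`/`predict` callbacks standing in for the regression fits of the paper):

```python
import numpy as np

def repeated_split_mse(x, y, fit, predict, n_rep=100, rng=None):
    """Mean validation MSE over n_rep random 50/50 train/validation splits."""
    rng = np.random.default_rng(rng)
    n = len(y)
    mses = []
    for _ in range(n_rep):
        idx = rng.permutation(n)
        train, valid = idx[: n // 2], idx[n // 2 :]
        model = fit(x[train], y[train])
        resid = y[valid] - predict(model, x[valid])
        mses.append(np.mean(resid ** 2))
    return float(np.mean(mses))
```

For the AIS data, `fit` would refit the chosen regression model on the training half and `predict` would evaluate it on the validation half; the returned average is the quantity reported in Table 8.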
Since the nonlinear log-FBS model exhibits the smallest MSE in Table 8, this model would be preferred to describe our dataset.
As for goodness-of-fit tests to decide whether the sample comes from a specific model, the Anderson–Darling test, implemented in the goftest R package (version 1.2-3), is used [29]. The p-values of these tests are listed in Table 9. These summaries suggest that the nonlinear log-FBS model provides a good fit for this dataset.

7.2. Illustration II: Model Discussion with Emphasis on the Use of Diagnostic Influence Techniques

The real dataset under consideration consists of 46 independent observations of a metal specimen subject to cyclic stress. This dataset has been previously studied by Rieck and Nedelman [10], Galea et al. [30] and Xie and Wei [31]. The variables involved in this study are: the response variable ( T ), the number of cycles to failure, and the explanatory covariable ( X ), the work per cycle (mJ/m³). The aim is to model the number of cycles until a failure (crack) occurs. Rieck and Nedelman [10] proposed fitting the linear regression model
\[
Y_i = \log(T_i) = \beta_1 + \beta_2 \log(X_i) + \epsilon_i,
\]
where \(\epsilon_i \sim \mathrm{SHN}(\alpha, 0, 2)\).
Note that Equation (31) is linear in the logarithm of the explanatory variable X. This can make the interpretation of the parameters of the fitted model cumbersome. An alternative is to keep the original explanatory variable X and fit a nonlinear model. As for which nonlinear model to propose, the scatter plot of { ( x i , y i ) } may be useful; in this case, it suggests an exponential model. So, we propose to fit the nonlinear regression model
\[
Y_i = \log(T_i) = \beta_1 + \beta_2 \exp(\beta_3/X_i) + \epsilon_i,
\]
where \(\epsilon_i \sim \mathrm{FSHN}(\alpha, 0, \sigma, \delta, \lambda)\).
Note that
\[
E(Y_i) = \mu_i = \beta_1 + \beta_2 \exp(\beta_3/X_i) + E(\epsilon_i) = \beta_1^{*} + \beta_2 \exp(\beta_3/X_i),
\]
with \(\beta_1^{*} = \beta_1 + E(\epsilon)\) and \(\epsilon \sim \mathrm{FSHN}(\alpha, 0, \sigma, \delta, \lambda)\).
Remark 3. 
A brief discussion to illustrate that, in this case, the nonlinear proposal is superior to the linear one is given in Appendix A.3.
The maximum likelihood estimates for the parameters in Equation (32) along with their standard errors (in parentheses) are given in Table 10.
In Figure 2a,b, martingale-based residuals and the diagonal elements in matrix GL ii are plotted. These plots suggest that certain observations may be potentially influential. These are cases #1, #2, #3, #4, #5, #12, #32, and #46, among others.
Analogously, diagnostic analyses have been carried out for the parameter vector β and for the remaining parameters α, σ, δ, λ in the model. The statistic GDC i ( θ ) introduced in Section 4.1 is used in the following settings: case-weights, perturbation of the response variable, a perturbed covariable, and a perturbed scale parameter. Our results show that, in the case-weights setting, cases #1, #3, and #4 are detected as potentially influential, whereas for the perturbed response variable and the perturbed scale parameter settings, only case #44 was detected as potentially influential. As for the perturbed covariable setting, cases #1, #2, #3, #8, #13, and #19 are detected. The plots given in Figure 3a,b, Figure 4a,b, Figure 5a,b and Figure 6a,b show the results of these analyses.
Next, the influence of the previously mentioned cases is studied through the relative change statistic, defined as
\[
\mathrm{RC}_{\theta_j} = \left| \frac{\hat\theta_j - \hat\theta_{j(I)}}{\hat\theta_j} \right| \times 100\%,
\]
where θ ^ j denotes the maximum likelihood estimate for the parameter θ j , including all observations, and θ ^ j ( I ) denotes the estimate of the same parameter, deleting the influential observations.
The approximation for the standard error (SE) of RC proposed in Santana et al. [32] will be used:
\[
\mathrm{RC}\!\left(\widehat{\mathrm{SE}}(\theta_j)\right) = \left| \frac{\widehat{\mathrm{SE}}(\hat\theta_j) - \widehat{\mathrm{SE}}(\hat\theta_{j(I)})}{\widehat{\mathrm{SE}}(\hat\theta_j)} \right| \times 100\%,
\]
where SE ^ ( θ ^ j ) denotes the SE for the maximum likelihood estimate for the parameter θ j , including all observations, and SE ^ ( θ ^ j ( I ) ) denotes the SE for the estimate of the same parameter, deleting the influential observations.
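The relative change is a one-line computation; for instance, if an estimate moves from 2.0 with all observations to 1.5 after deleting a case, RC = 25%. A minimal sketch (our own helper name):

```python
def relative_change(est_full, est_deleted):
    """RC(%) = |(full-data estimate - deleted-case estimate) / full-data estimate| * 100."""
    return abs((est_full - est_deleted) / est_full) * 100.0
```

The same function applies to the SE version by passing the two estimated standard errors instead.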
Table 11 provides the RC ( % ), their RC ( SE ^ ( θ j ) ) in parentheses, and the p-values of the tests \(H_0: \beta_j = 0\) versus \(H_1: \beta_j \neq 0\) for j = 1, 2, 3.
From the results in Table 11, we may conclude that, individually, cases #2, #44, and #46 affect the estimates of the parameters β 1 , β 2 , and β 3 in the nonlinear regression model, as well as the scale ( σ ), shape ( δ ), and skewness ( λ ) parameters. It can also be seen that cases #2 and #46 have influence on the ML-estimator of α .
Moreover, the joint influence of cases { 2 , 44 } , { 2 , 46 } , { 44 , 46 } , and { 2 , 44 , 46 } is analyzed. It can be seen that, jointly, these observations have influence on the estimates of the model parameters. As for the hypothesis tests for the parameters in the nonlinear regression model, none of the observations (#2, #44, and #46) affect their significance.
Therefore, cases #2, #44, and #46 were eliminated. The new fitted model is
\[
\hat\mu_i = \hat y_i \mid x_i = 9.1327 - 5.3787\, e^{-21.3189/x_i},
\]
whose error terms are distributed as \(\epsilon_i \sim \mathrm{FSHN}(2.2841,\, 0,\, 0.9028,\, 1.0741,\, -4.6699)\).
The plots in Figure 7a,b show the envelope plot for the martingale-type residuals and the dispersion plot for the final fitted model given in Equation (33). Both plots suggest that this is a good choice for describing this dataset.

8. Conclusions

In this paper, a nonlinear log-Birnbaum–Saunders regression model with additive errors is introduced. It is assumed that the errors follow the flexible sinh-normal distribution introduced in [8]. As an advantage, we highlight that this model can describe a continuous response variable with asymmetric, unimodal, or bimodal behavior through a nonlinear function of covariates. Its theoretical properties are studied, along with results on the maximum likelihood estimation of the parameters. Influence diagnostics are introduced in order to provide tools to detect influential observations and deficiencies in the fitted model. Statistics based on Cook's generalized distance and local influence diagnostics are considered under several perturbation schemes: case-weights, perturbation of the response variable, perturbation of an explanatory variable, and perturbation of the scale parameter. Martingale-type residual analysis is presented to detect deficiencies in the fitted model. Simulation studies are included, where the Michaelis–Menten model, of interest in chemical kinetics, is considered. The simulations cover a wide range of parameter values; in particular, both negative and positive values are considered for the parameter that controls unimodality or bimodality in our model. Our results suggest that the estimators of the parameters are consistent and perform well in both unimodal and bimodal settings. To conclude, applications to real datasets are given in order to illustrate how to use our methodology.
We highlight that, in the first application, a discussion of linear and nonlinear regression models is carried out, first from a classical point of view. Several information criteria and the likelihood ratio test are applied. Martingale-type residual analysis techniques are applied to check the model’s adequacy. Second, this discussion is also carried out by using cross-validation techniques. In both cases, our proposal is superior to the other regression models under consideration. On the other hand, in the second real application, the emphasis is on the use of diagnostic influence techniques to detect influential observations.
As important novelties and advantages of our model, we highlight the nonlinearity of our proposal and the kind of distribution used for the errors, since it is assumed that they follow a flexible sinh-normal distribution. Therefore, it is useful for modeling unimodal and bimodal settings. Also, recall that it can be applied to nonnegative data and that it is of interest in reliability and medicine. Moreover, our approach is more general and reduces to other nonlinear regression models previously introduced in the literature, such as the log-BS nonlinear regression model with asymmetric errors introduced in [15].
As future research, we will consider the extension of these models to a censored continuous response variable Y . This kind of study is relevant, for instance, in medicine, where, quite often, we have data that can only be recorded above or below a certain threshold and that depend on the sensitivity of a laboratory test.

Author Contributions

Conceptualization, G.M.-F., I.B.-C. and H.W.G.; formal analysis, G.M.-F., I.B.-C. and H.W.G.; investigation, G.M.-F., I.B.-C. and H.W.G.; methodology, G.M.-F., I.B.-C. and H.W.G.; software, G.M.-F.; supervision, H.W.G.; validation, G.M.-F. and I.B.-C.; visualization, H.W.G. All authors have read and agreed to the published version of the manuscript.

Funding

The research of H.W. Gómez was supported by Grant SEMILLERO UA-2024 (Chile). The research of I. Barranco-Chamorro was supported by IOAP of the University of Seville, Spain. The research of G. Martínez-Flórez was supported by the Vice-rectorate for Research of the Universidad de Córdoba, Colombia, project grant FCB-06-22.

Data Availability Statement

Details about data availability have been given in Section 7. The numerical code is available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations for distributions are used in this manuscript:
BS: Birnbaum–Saunders
SHN: sinh-normal
FSHN: flexible sinh-normal
FBS: flexible Birnbaum–Saunders
FLBS: flexible log-Birnbaum–Saunders

Appendix A

Appendix A.1

The following lemma was used to establish the existence of Equations (9) and (10).
Lemma A1. 
Let \(\epsilon \sim \mathrm{FSHN}(\alpha, 0, 2, \delta, \lambda)\). Then,
\[
E(\epsilon) = 2\, c_1(\alpha,\delta,\lambda), \qquad \operatorname{Var}(\epsilon) = 4\, V(\alpha,\delta,\lambda),
\]
where
\[
c_1(\alpha,\delta,\lambda) = E_{\mathrm{FSN}}\!\left[\operatorname{arcsinh}\!\left(\frac{\alpha Z}{2}\right)\right] = c_\delta \int_{\mathbb{R}} \operatorname{arcsinh}\!\left(\frac{\alpha z}{2}\right) \phi(|z|+\delta)\,\Phi(\lambda z)\, dz,
\]
and \(V(\alpha,\delta,\lambda)\) denotes the variance of the RV \(\operatorname{arcsinh}(\alpha Z/2)\), with \(Z \sim\) Flexible Skew-Normal\((\delta, \lambda)\).
Proof. 
Since \(\epsilon \sim \mathrm{FSHN}(\alpha, 0, 2, \delta, \lambda)\), it can be seen in [8] that \(\epsilon = 2\operatorname{arcsinh}(\alpha Z/2)\), with \(Z \sim\) Flexible Skew-Normal\((\delta, \lambda)\). Recall that the PDF of Z is
\[
f_Z(z) = c_\delta\, \phi(|z|+\delta)\, \Phi(\lambda z), \quad z \in \mathbb{R}.
\]
Therefore,
\[
E(\epsilon) = 2\, E_{\mathrm{FSN}}\!\left[\operatorname{arcsinh}\!\left(\frac{\alpha Z}{2}\right)\right] = 2\, c_\delta \int_{\mathbb{R}} \operatorname{arcsinh}\!\left(\frac{\alpha z}{2}\right) \phi(|z|+\delta)\,\Phi(\lambda z)\, dz.
\]
Taking into account the relationship \(\operatorname{arcsinh}(x) = \log(x+\sqrt{x^2+1})\), \(x \in \mathbb{R}\), and the fact that, if \(w = x+\sqrt{x^2+1} > 0\), then \(\log(w) \leq w-1\) (see Love [33]), the integral
\[
\int_{\mathbb{R}} \operatorname{arcsinh}\!\left(\frac{\alpha z}{2}\right) \phi(|z|+\delta)\,\Phi(\lambda z)\, dz
\]
can be bounded in terms of incomplete moments of the skew-normal distribution. Since the moments of the skew-normal distribution exist (see Haas [16]), \(c_1(\alpha,\delta,\lambda)\) also exists.
We can proceed similarly for \(\operatorname{Var}(\epsilon)\) or, more generally, for \(E_{\mathrm{FSN}}\left[\operatorname{arcsinh}(\alpha Z/2)\right]^j\) with \(j \in \mathbb{Z}^{+}\), since the moments of the skew-normal distribution exist; in particular, \(V(\alpha,\delta,\lambda)\) exists. □
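Lemma A1 can also be checked numerically. The sketch below (an illustrative Python script of our own, not part of the paper; the parameter values are arbitrary) obtains the normalizing constant of the flexible skew-normal kernel by quadrature rather than from a closed form, and then evaluates E(ϵ) = 2c₁(α, δ, λ):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

alpha, delta, lam = 0.75, 1.5, 1.0   # arbitrary illustrative values

# Unnormalized flexible skew-normal kernel: phi(|z| + delta) * Phi(lambda * z)
kernel = lambda z: norm.pdf(abs(z) + delta) * norm.cdf(lam * z)
c_delta = 1.0 / quad(kernel, -np.inf, np.inf)[0]   # normalizing constant

# E(eps) = 2 * c_1(alpha, delta, lambda), per Lemma A1
integrand = lambda z: np.arcsinh(alpha * z / 2.0) * kernel(z)
mean_eps = 2.0 * c_delta * quad(integrand, -np.inf, np.inf)[0]
```

By the symmetry of φ(|z| + δ), the kernel integrates to 1 − Φ(δ), so the quadrature value of c_δ agrees with 1/(1 − Φ(δ)), and the finite result for E(ϵ) illustrates the existence argument of the lemma.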

Appendix A.2

Next, the elements of the observed information matrix are given.
Corollary A1. 
Let Y 1 , , Y n be a random sample of Y FLBS ( α , f ( β , x i ) , σ , δ , λ ) . Then, the elements of the observed information matrix are:
\[
\begin{aligned}
j_{\alpha\alpha} &= -\frac{n}{\alpha^2} + \frac{3}{\alpha^2}\sum_{i=1}^{n} \xi_{i2}^2 + \frac{\lambda}{\alpha^2}\sum_{i=1}^{n} W_i\,\xi_{i2}\left[-2+\lambda\xi_{i2}(\lambda\xi_{i2}+W_i)\right] + \frac{2\delta}{\alpha^2}\sum_{i=1}^{n} \operatorname{sgn}(\xi_{i2})\,\xi_{i2},\\
j_{\alpha\beta_j} &= \frac{2}{\alpha\sigma}\sum_{i=1}^{n} d_{i,j}\,\xi_{i1}\xi_{i2} + \frac{\lambda}{\alpha\sigma}\sum_{i=1}^{n} d_{i,j}\,W_i\,\xi_{i1}\left[-1+\lambda\xi_{i2}(\lambda\xi_{i2}+W_i)\right] + \frac{\delta}{\alpha\sigma}\sum_{i=1}^{n} d_{i,j}\operatorname{sgn}(\xi_{i2})\,\xi_{i1},\\
j_{\alpha\delta} &= -\frac{1}{\alpha}\sum_{i=1}^{n} \operatorname{sgn}(\xi_{i2})\,\xi_{i2}, \qquad
j_{\alpha\lambda} = -\frac{1}{\alpha}\sum_{i=1}^{n} \xi_{i2}\,W_i\left[-1+\lambda\xi_{i2}(\lambda\xi_{i2}+W_i)\right],\\
j_{\alpha\sigma} &= \frac{2}{\alpha\sigma}\sum_{i=1}^{n} z_i\,\xi_{i1}\xi_{i2} - \frac{\lambda}{\alpha\sigma}\sum_{i=1}^{n} z_i\,\xi_{i1}W_i\left[-1+\lambda\xi_{i2}(\lambda\xi_{i2}+W_i)\right] + \frac{\delta}{\alpha\sigma}\sum_{i=1}^{n} \operatorname{sgn}(\xi_{i2})\,z_i\,\xi_{i1},\\
j_{\beta_j\beta_{j'}} &= \frac{1}{\sigma}\sum_{i=1}^{n} g_{i,jj'}\left[\xi_{i1}\xi_{i2} - \lambda\xi_{i1}W_i + \delta\operatorname{sgn}(\xi_{i2})\,\xi_{i1} - \frac{\xi_{i2}}{\xi_{i1}}\right] + \frac{\delta}{\sigma^2}\sum_{i=1}^{n} d_{i,j}d_{i,j'}\operatorname{sgn}(\xi_{i2})\,\xi_{i2}\\
&\quad + \frac{1}{\sigma^2}\sum_{i=1}^{n} d_{i,j}d_{i,j'}\left[2\xi_{i2}^2+\frac{4}{\alpha^2}-\frac{1+\xi_{i2}^2}{\xi_{i2}^2+4/\alpha^2}\right] + \frac{\lambda}{\sigma^2}\sum_{i=1}^{n} d_{i,j}d_{i,j'}\,W_i\left(-\xi_{i2}+\lambda\xi_{i1}^2(\lambda\xi_{i2}+W_i)\right),\\
j_{\beta_j\sigma} &= \frac{1}{\sigma^2}\sum_{i=1}^{n} d_{i,j}\left[\xi_{i1}\xi_{i2}-\frac{\xi_{i2}}{\xi_{i1}}\right] + \frac{\delta}{\sigma^2}\sum_{i=1}^{n} d_{i,j}\operatorname{sgn}(\xi_{i2})\,\xi_{i1} + \frac{\delta}{\sigma^2}\sum_{i=1}^{n} d_{i,j}\operatorname{sgn}(\xi_{i2})\,z_i\,\xi_{i2} - \frac{\lambda}{\sigma^2}\sum_{i=1}^{n} d_{i,j}\,\xi_{i1}W_i\\
&\quad + \frac{1}{\sigma^2}\sum_{i=1}^{n} d_{i,j}\,z_i\left[2\xi_{i2}^2+\frac{4}{\alpha^2}-\frac{1+\xi_{i2}^2}{\xi_{i2}^2+4/\alpha^2}\right] + \frac{\lambda}{\sigma^2}\sum_{i=1}^{n} d_{i,j}\,z_i\,W_i\left(-\xi_{i2}+\lambda\xi_{i1}^2(\lambda\xi_{i2}+W_i)\right),\\
j_{\sigma\sigma} &= \frac{2}{\sigma^2}\sum_{i=1}^{n} z_i\left[\xi_{i1}\xi_{i2}-\frac{\xi_{i2}}{\xi_{i1}}\right] + \frac{\delta}{\sigma^2}\sum_{i=1}^{n} \operatorname{sgn}(\xi_{i2})\left[z_i^2\,\xi_{i1}+z_i\,\xi_{i2}\right] - \frac{2\lambda}{\sigma^2}\sum_{i=1}^{n} z_i\,\xi_{i1}W_i - \frac{n}{\sigma^2}\\
&\quad + \frac{1}{\sigma^2}\sum_{i=1}^{n} z_i^2\left[2\xi_{i2}^2+\frac{4}{\alpha^2}-\frac{1+\xi_{i2}^2}{\xi_{i2}^2+4/\alpha^2}\right] + \frac{\lambda}{\sigma^2}\sum_{i=1}^{n} z_i^2\,W_i\left(-\xi_{i2}+\lambda\xi_{i1}^2(\lambda\xi_{i2}+W_i)\right),\\
j_{\beta_j\lambda} &= \frac{1}{\sigma}\sum_{i=1}^{n} d_{i,j}\,W_i\,\xi_{i1}\left[-1+\lambda\xi_{i2}(\lambda\xi_{i2}+W_i)\right], \qquad
j_{\lambda\lambda} = \sum_{i=1}^{n} \xi_{i2}^2\,W_i\,(\lambda\xi_{i2}+W_i),\\
j_{\sigma\lambda} &= \frac{1}{\sigma}\sum_{i=1}^{n} z_i\,W_i\,\xi_{i1} - \frac{\lambda}{\sigma}\sum_{i=1}^{n} z_i\,\xi_{i1}\xi_{i2}\,W_i(\lambda\xi_{i2}+W_i), \qquad
j_{\sigma\delta} = \frac{1}{\sigma}\sum_{i=1}^{n} \operatorname{sgn}(\xi_{i2})\,z_i\,\xi_{i1},\\
j_{\beta_j\delta} &= \frac{1}{\sigma}\sum_{i=1}^{n} d_{i,j}\operatorname{sgn}(\xi_{i2})\,\xi_{i1}, \qquad
j_{\lambda\delta} = 0, \qquad
j_{\delta\delta} = n\left[W_\delta(\delta-W_\delta)+1\right],
\end{aligned}
\]
where \(W_\delta = \phi(\delta)/(1-\Phi(\delta))\).
Proof. 
It is straightforward by applying standard calculus techniques. □

Appendix A.3

In this appendix, it is shown that the nonlinear model proposed in Section 7.2 is superior to a linear model. So, for the dataset studied in Section 7.2, let us consider the linear model with sinh-normal errors or log-BS
\[
Y_i = \log(T_i) = \beta_1 + \beta_2 x_i + \epsilon_i,
\]
with \(\epsilon_i \sim \mathrm{SHN}(\alpha, 0, 2)\). Summaries for this model and the nonlinear one proposed in Equation (32) are given in Table A1. From the AIC and AICc in Table A1, it can be concluded that the nonlinear flexible log-BS model provides a better fit to this dataset than the linear log-BS model.
Table A1. Parameter estimates (standard errors).
Estimates | Log-BS | Flexible Log-BS
α̂ | 0.5202 (0.0543) | 2.3061 (1.3846)
p-value | (0.0000) | (0.0479)
β̂1 | 7.9849 (0.1546) | 10.1725 (1.4170)
p-value | (0.0000) | (0.0000)
β̂2 | −0.0406 (0.0033) | −5.8413 (1.0844)
p-value | (0.0000) | (0.0000)
β̂3 |  | −17.9475 (8.5355)
p-value |  | (0.0354)
σ̂ |  | 0.9499 (0.3100)
p-value |  | (0.0021)
δ̂ |  | 1.2984 (0.6121)
p-value |  | (0.0339)
λ̂ |  | −5.6195 (2.6171)
p-value |  | (0.0317)
AIC | 75.5638 | 53.6077
AICc | 76.5394 | 56.5550

References

  1. Cancho, V.G.; Lachos, V.H.; Ortega, E.M.M. A nonlinear regression model with skew-normal errors. Stat. Papers 2010, 51, 547–558. [Google Scholar] [CrossRef]
  2. Azzalini, A. A class of distributions which includes the normal ones. Scand. J. Stat. 1985, 12, 171–178. [Google Scholar]
  3. Martínez-Flórez, G.; Bolfarine, H.; Gómez, H.W. The Log-Linear Birnbaum–Saunders Power Model. Methodol. Comput. Appl. Probab. 2017, 19, 913–933. [Google Scholar] [CrossRef]
  4. Bolfarine, H.; Martínez–Flórez, G.; Salinas, H.S. Bimodal symmetric-asymmetric families. Commun. Stat. Theory Methods 2018, 47, 259–276. [Google Scholar] [CrossRef]
  5. Martínez-Flórez, G.; Bolfarine, H.; Gómez, H.W. An alpha-power extension for the Birnbaum–Saunders distribution. Statistics 2014, 48, 896–912. [Google Scholar] [CrossRef]
  6. Durrans, S.R. Distributions of fractional order statistics in hydrology. Water Resour. Res. 1992, 28, 1649–1655. [Google Scholar] [CrossRef]
  7. Li, X.; Chu, H.; Gallant, J.E.; Hoover, D.R.; Mack, W.J.; Chmiel, J.S.; Muñoz, A. Bimodal virologic response to antiretroviral therapy for HIV infection: An application using a mixture model with left censoring. J. Epidemiol. Commun. Health 2006, 60, 811–818. [Google Scholar] [CrossRef]
  8. Martínez-Flórez, G.; Barranco-Chamorro, I.; Gómez, H.W. Flexible Log-Linear Birnbaum–Saunders Model. Mathematics 2021, 9, 1188. [Google Scholar] [CrossRef]
  9. Birnbaum, Z.W.; Saunders, S.C. A New Family of Life Distributions. J. Appl. Probab. 1969, 6, 319–327. [Google Scholar] [CrossRef]
  10. Rieck, J.R.; Nedelman, J.R. A log-linear model for the Birnbaum–Saunders distribution. Technometrics 1991, 33, 51–60. [Google Scholar]
  11. Leiva, V.; Vilca-Labra, F.; Balakrishnan, N.; Sanhueza, A. A skewed sinh-normal distribution and its properties and application to air pollution. Commun. Stat. Theory Methods 2010, 39, 426–443. [Google Scholar] [CrossRef]
  12. Lemonte, A.J. A log-Birnbaum–Saunders regression model with asymmetric errors. J. Stat. Comput. Simul. 2012, 82, 1775–1787. [Google Scholar] [CrossRef]
  13. Martínez-Flórez, G.; Barranco-Chamorro, I.; Bolfarine, H.; Gómez, H.W. Flexible Birnbaum–Saunders Distribution. Symmetry 2019, 11, 1305. [Google Scholar] [CrossRef]
  14. Gómez, H.W.; Elal-Olivero, D.; Salinas, H.S.; Bolfarine, H. Bimodal extension based on the skew-normal distribution with application to pollen data. Environmetrics 2011, 22, 50–62. [Google Scholar] [CrossRef]
  15. Lemonte, A.J.; Cordeiro, G. Birnbaum–Saunders nonlinear regression models. Comput. Stat. Data Anal. 2009, 53, 4441–4452. [Google Scholar] [CrossRef]
  16. Haas, M. A Note on the Moments of the Skew-Normal Distribution. Econ. Bull. 2012, 32, 3306–3312. [Google Scholar]
  17. Lehman, L.E. Elements of Large-Sample Theory; Springer: New York, NY, USA, 1999. [Google Scholar]
  18. Cook, R.D. Assessment of Local Influence. J. R. Stat. Society Ser. B (Methodol.) 1986, 48, 133–169. [Google Scholar] [CrossRef]
  19. Poon, W.Y.; Poon, Y.S. Conformal normal curvature and assessment of local influence. J. R. Statist. Soc. B 1999, 61, 51–61. [Google Scholar] [CrossRef]
  20. Barros, M.; Galea, M.; Gonzalez, M.; Leiva, V. Influence diagnostics in the tobit censored response’ model. Stat. Methods Appl. 2010, 19, 379–397. [Google Scholar] [CrossRef]
  21. Therneau, T.; Grambsch, P.; Fleming, T. Martingale based residuals for survival models. Biometrika 1990, 77, 147–160. [Google Scholar] [CrossRef]
  22. Ortega, E.M.; Bolfarine, H.; Paula, G.A. Influence diagnostics in generalized log-gamma regression models. Comput. Stat. Data Anal. 2003, 42, 165–186. [Google Scholar] [CrossRef]
  23. Wei, B.C.; Hu, Y.Q.; Fung, W.K. Generalized leverage and its applications. Scand. J. Stat. 1998, 25, 25–37. [Google Scholar] [CrossRef]
  24. Atkinson, A.C. Two graphical displays for outlying and influential observations in regression. Biometrika 1981, 68, 13–20. [Google Scholar] [CrossRef]
  25. Cysneiros, F.J.A.; Vanegas, L.H. Residuals and their statistical properties in symmetrical nonlinear models. Stat. Probab. Lett. 2008, 78, 3269–3273. [Google Scholar] [CrossRef]
  26. Azzalini, A. The R Package ’sn’: The Skew-Normal and Related Distributions such as the Skew-t and the SUN (Version 2.1.1). Available online: https://cran.r-project.org/package=sn (accessed on 15 August 2024).
  27. Wasserman, L. All of Nonparametric Statistics; Springer Science+Business Media Inc.: New York, NY, USA, 2006. [Google Scholar]
  28. Amini, M.; Roozbeh, M. Optimal partial ridge estimation in restricted semiparametric regression models. J. Multivar. Anal. 2015, 136, 26–40. [Google Scholar] [CrossRef]
  29. Baddeley, B. Goftest: Classical Goodness-of-Fit Tests for Univariate Distributions. R Package Version 1.2-3. Available online: https://cran.r-project.org/web/packages/goftest (accessed on 5 June 2024).
  30. Galea, M.; Leiva-Sánchez, V.; Paula, G.A. Influence diagnostics in log-Birnbaum–Saunders regression models. J. Appl. Stat. 2004, 31, 1049–1064. [Google Scholar] [CrossRef]
  31. Xie, F.C.; Wei, B.C. Diagnostics analysis for log-Birnbaum–Saunders regression models. Comput. Statist. Data Anal. 2007, 51, 4692–4706. [Google Scholar] [CrossRef]
  32. Santana, L.; Vilca, F.; Leiva, V. Influence analysis in skew-Birnbaum–Saunders regression models and applications. J. Appl. Stat. 2011, 38, 1633–1649. [Google Scholar] [CrossRef]
  33. Love, E.R. Some Logarithm Inequalities. Math. Gaz. 1980, 64, 55–57. [Google Scholar] [CrossRef]
Figure 1. Normal probability plots for r MT i with envelopes of the Q–Q plots for the scaled residuals of the fitted models: (a) nonlinear log-BS model and (b) nonlinear log-FBS model.
Axioms 13 00576 g001
Figure 2. (a) Plots of deviance component residuals r MT i . (b) Elements in the diagonal of the generalized Leverage matrix GL ii .
Axioms 13 00576 g002
Figure 3. Plots for the C i ( · ) index in the case-weights scheme: (a) for β and (b) for θ 1 = ( α , σ , δ , λ ) .
Axioms 13 00576 g003
Figure 4. Plots for the C i ( · ) index in the perturbation of the response variable scheme: (a) for β and (b) for θ 1 = ( α , σ , δ , λ ) .
Axioms 13 00576 g004
Figure 5. Plots for the C i ( · ) index in the perturbed covariate scheme: (a) for β and (b) for θ 1 = ( α , σ , δ , λ ) .
Axioms 13 00576 g005
Figure 6. Plots for the C i ( · ) index for the perturbation of the scale parameter scheme: (a) for β and (b) for θ 1 = ( α , σ , δ , λ ) .
Axioms 13 00576 g006
Figure 7. (a) Normal probability plot for r MT i with envelopes of the Q–Q plots for the scaled residuals from the fitted model and (b) dispersion diagram for the nonlinear model with FSHN errors.
Axioms 13 00576 g007
Table 1. |bias|, sd, and √MSE for the MLEs in the nonlinear FSHN ( α , β 1 = 0.0645 , β 2 = 212 , σ = 2 , δ = 1.5 , λ ) regression model.
n = 30 n = 50 n = 100 n = 200
α λ θ j | bias | sd MSE | bias | sd MSE | bias | sd MSE | bias | sd MSE
α 1.00082.24622.45810.51391.47791.56420.30161.061.10180.1540.65480.6725
β 1 0.00000.00050.00050.00000.00040.000400.00030.000300.00020.0002
1 β 2 0.08200.36620.37510.03010.28390.28540.00760.19800.19800.0090.14470.1450
σ 0.10631.58691.58980.21531.45481.47010.31311.38631.42080.21941.08371.1054
δ 0.04370.77840.77930.02390.57660.57690.01320.43910.43920.00760.30880.3088
λ 0.19010.95660.97490.16340.70980.72810.13300.53320.54940.07490.35490.3626
0.75 α 0.88651.85572.05520.48391.37271.45490.24670.84040.87560.15510.70130.7180
β 1 0.00010.00050.00050.00000.00040.00040.00000.00030.00030.00000.00020.0002
2.5 β 2 0.14260.35310.38050.05940.27750.28370.01370.20940.20980.00360.18270.1827
σ 0.11771.60551.60850.27331.53321.55670.26571.30601.33240.22491.24411.2643
δ 0.32480.84750.90690.11410.63910.64890.04970.48780.49020.02070.42410.4245
λ 0.50842.10272.16320.44251.54521.60660.34411.11681.16820.20630.83700.8618
α 0.31202.45982.47840.19552.29032.29780.25322.10522.12030.03901.88501.8854
β 1 0.00010.00090.00090.00010.00060.000600.00040.00040.00000.00030.0003
1 β 2 0.23370.57740.62270.15740.43610.46350.06410.30650.31300.02960.21950.2214
σ 0.76591.47921.66520.54571.11951.2450.28450.70860.76340.11090.42290.4371
δ 0.44240.78250.89860.34150.63290.71890.18090.49460.52650.06360.40250.4074
λ 0.14960.50120.52290.10980.45370.46670.03270.40080.40200.02010.34710.3476
2.75 α 0.66892.53772.62440.54442.09952.16790.36471.74311.78020.01851.55181.5514
β 1 0.00040.00090.00100.00030.00070.00070.00010.00050.000500.00040.0004
2.5 β 2 0.58970.73490.94190.41890.59200.7250.22820.50540.55440.08450.36810.3776
σ 0.80941.62771.81690.68581.39421.55310.34380.72130.79880.13770.43210.4534
δ 0.22550.86030.88930.19950.70520.73250.15250.54210.56290.06480.41460.4195
λ 0.66431.55321.68840.54971.32911.43770.22371.25521.27450.05561.18711.1881
Table 2. | Bias | , sd, and MSE for the MLEs in the nonlinear FSHN ( α , β 1 = 0.0645 , β 2 = 212 , σ = 2 , δ = 0.75 , λ ) regression model.
n = 30 n = 50 n = 100 n = 200
α λ θ j | bias | sd MSE | bias | sd MSE | bias | sd MSE | bias | sd MSE
α 1.26192.52592.82220.67351.76901.89210.28221.10921.14410.12410.61430.6265
β 1 0.00000.00060.00060.00000.00040.00040.00000.00030.00030.00000.00020.0002
1 β 2 0.06090.38850.39300.02820.34600.34700.03380.24690.24910.02680.19480.1966
σ 0.72001.79571.93470.44711.67011.72890.35921.48411.52640.32801.26591.3073
δ 0.17140.81520.83250.19280.68070.70710.11970.49280.50700.06940.35540.3620
λ 0.30750.99141.03790.16890.98370.99760.19800.91470.93560.10460.53400.5440
0.75 α 0.86141.83352.02420.54841.42351.52460.32361.03911.08790.16460.78110.7980
β 1 0.00000.00050.00050.00000.00040.00040.00000.00020.00020.00000.00020.0002
2.5 β 2 0.03940.29510.29750.02490.21720.21860.01850.17390.17480.00350.13990.1399
σ 0.58311.74241.83740.38041.52031.56720.27471.38111.40760.17871.36841.3800
δ 0.02220.79530.79490.03160.74770.74830.01350.66220.66210.00900.55160.5516
λ 0.56652.18752.25770.74112.00102.13260.67561.65761.78930.49511.29551.3865
α 0.56633.11983.16940.49712.81852.86200.50862.50522.55560.38692.24282.2759
β 1 0.00000.00100.00100.00000.00080.00080.00000.00050.00050.00000.00040.0004
1 β 2 0.21570.67390.70730.12630.52170.53660.06450.37050.37590.03170.26340.2653
σ 0.63921.62061.74140.56441.37961.49010.27920.90330.94520.14800.58750.6057
δ 0.34710.89170.95650.27200.76950.81590.11180.63810.64770.03590.54420.5452
λ 0.15010.66070.67730.12470.61390.62640.08530.54340.54980.05760.48160.4849
2.75 α 0.81082.75042.86740.55402.17662.24500.45851.92121.97450.36501.57051.6119
β 1 0.00020.00100.00100.00010.00080.00080.00000.00060.00060.00000.00040.0004
2.5 β 2 0.33920.73200.80640.23000.62580.66650.10610.50860.51930.02670.44580.4465
σ 0.86291.78171.97870.81731.35721.58370.62501.14741.30620.34880.62210.7130
δ 0.38000.87190.95060.39740.73500.83520.30900.62640.69820.22970.50590.5554
λ 0.61131.70791.81300.54321.52391.61710.27081.44001.46470.09181.27921.2821
Table 3. | Bias | , sd, and MSE for the MLEs of parameters in the nonlinear FSHN ( α , β 1 = 0.0645 , β 2 = 212 , σ = 2 , δ = 0.75 , λ ) regression model.
n = 30 n = 50 n = 100 n = 200
α λ θ j | bias | sd MSE | bias | sd MSE | bias | sd MSE | bias | sd MSE
α 2.11082.96083.63091.40572.32452.71230.98751.90732.14460.80431.71831.8944
β 1 0.00010.00050.00050.00010.00040.00040.00010.00030.00030.00010.00020.0002
1 β 2 0.23690.34540.41880.10230.30460.32070.08360.23420.24870.07440.18900.2028
σ 0.40171.78761.82840.14401.77341.77550.10721.71181.71200.10681.55501.5587
δ 0.61621.72121.82460.19962.02352.02910.19201.85141.84020.19821.58571.5951
λ 1.80532.84853.37241.48742.41242.83411.31461.78552.21460.60681.06201.2214
0.75 α 1.29442.29542.63070.76441.67131.83490.55691.25431.37230.43821.31931.3885
β 1 0.00000.00040.00040.00010.00030.00030.00010.00020.00020.00010.00010.0001
2.5 β 2 0.05010.23430.23910.07610.18700.20160.07730.13520.15570.07630.10130.1267
σ 0.31801.73691.76190.30311.95231.97560.28791.64631.66880.28141.60881.6310
δ 0.68081.47751.62380.58121.46411.57260.56281.25401.37280.26941.28431.3105
λ 1.97292.67513.32391.52132.53742.95431.38131.98972.42211.22521.61782.0276
α 2.35672.88483.71810.89792.69632.84190.67772.00762.11690.25691.59971.6201
β 1 0.00000.00040.00040.00010.00100.00100.00010.00070.00070.00000.00060.0006
1 β 2 0.42840.95951.05080.20460.76240.78930.11210.65190.66080.02540.51950.5196
σ 1.69401.60902.33631.51321.63002.22411.04851.58811.90170.97071.36851.6767
δ 1.52422.29372.75391.00931.06231.46460.94491.04961.41140.80770.87221.1882
λ 0.26321.64981.66560.16911.07441.08660.10701.05301.05840.08100.93540.9380
2.75 α 1.60582.89963.31451.49572.38192.81261.05711.73762.03220.97871.63441.9038
β 1 0.00010.00100.00100.00010.00080.00080.00020.00050.00060.00020.00040.0005
2.5 β 2 0.71090.91531.15890.33150.66030.73880.30330.50260.58650.24660.44370.5073
σ 1.58891.62652.27381.44501.71682.24401.44561.66682.20501.20581.41301.8566
δ 1.32701.86472.28871.09631.36381.74821.14840.99651.51981.00000.96661.3903
λ 0.64182.85112.92240.40112.08582.12180.41222.01352.05520.37771.86701.9033
Table 4. | Bias | , sd, and MSE for the MLEs of parameters in the nonlinear FSHN ( α , β 1 = 0.0645 , β 2 = 212 , σ = 2 , δ = 1.5 , λ ) regression model.
n = 30 n = 50 n = 100 n = 200
α λ θ j | bias | sd MSE | bias | sd MSE | bias | sd MSE | bias | sd MSE
α 1.33213.19233.45641.24612.02852.37451.08681.77572.07691.00861.59791.8858
β 1 0.00010.00130.00130.00000.00030.00030.00010.00020.00020.00000.00010.0001
1 β 2 0.21900.83860.86670.15850.24940.29470.07210.16680.18130.02310.10410.1064
σ 0.18851.68091.69140.09902.22922.23140.02291.85721.85120.01441.61281.6084
δ 0.95102.20022.39690.48492.15572.20200.27642.01002.02230.19501.67971.6909
λ 1.28602.61392.91311.26952.36242.67440.99811.58621.86970.54230.94221.0849
0.75 α 0.88522.94063.07090.79701.67121.84640.66491.36871.51840.59231.28101.4088
β 1 0.00010.00120.00120.00010.00020.00020.00000.00020.00020.00000.00010.0001
2.5 β 2 0.01800.75500.75410.07980.15170.17090.08090.12350.14740.05130.09360.1065
σ 0.65001.95982.06200.13692.09622.10060.11551.49541.49600.07261.67041.6719
δ 1.12441.97152.26960.80331.83451.99700.45521.79691.84920.05881.73811.7353
λ 1.61032.87103.29171.57602.55882.99781.76092.16332.78501.10741.63141.9688
α 1.67043.05043.47290.26432.78612.79210.24612.43652.44270.07702.52282.5187
β 1 0.00010.00130.00130.00020.00100.00100.00030.00070.00070.00010.00050.0005
1 β 2 0.61370.86751.06260.26310.82150.86080.24250.69210.73170.17120.48360.5121
σ 1.83401.95992.68421.65711.79802.44521.02471.63131.92290.91101.63981.8729
δ 1.77301.82022.54101.32641.70922.16030.91271.80852.02170.35441.77831.8096
λ 0.83481.94102.11290.59451.69261.79400.46091.61221.67290.35651.11401.1675
2.75 α 1.14203.06523.26400.74012.52402.63030.63982.27132.35590.61942.10032.1866
β 1 0.00010.00110.00110.00010.00070.00070.00030.00050.00060.00020.00040.0004
2.5 β 2 0.58820.69130.90770.32360.58410.66770.27700.47720.55110.25100.38220.4568
σ 0.20231.82421.83091.98261.94312.77601.38131.82722.28801.20141.59711.9965
δ 1.24851.82922.21461.06251.85622.13501.17341.72242.08170.91191.67831.9077
λ 0.95922.97353.12440.84722.72572.84830.84692.36572.50890.60731.86131.9551
Table 5. | Bias | , empirical sd, and MSE for the MLEs of the parameters in the nonlinear FSHN ( α , 4 , 5 , 3 , 1.5 , 3 , 0.75 , λ ) regression model.
n = 50 n = 100 n = 150
α λ θ j | bias | sd MSE | bias | sd MSE | bias | sd MSE
α 1.32782.65942.97120.46401.46111.53240.20540.99081.0115
β 1 0.01270.52360.52350.00610.35580.35580.00410.28500.2850
β 2 0.00870.54650.54630.00490.36240.36240.00380.27890.2789
1 β 3 0.06920.48880.49350.01360.37810.37830.00080.30750.3077
β 4 0.00580.14990.14990.00690.11450.11460.00830.09090.0913
δ 0.10841.00041.00620.09670.69750.70420.08820.49290.5005
λ 0.25650.90380.93950.17440.90600.92230.10450.69230.6999
0.75 α 0.67961.83821.95850.36861.20261.25730.25191.01301.0435
β 1 0.01200.43370.43360.01290.28890.28910.00390.23680.2367
β 2 0.00650.44610.44580.00500.29520.29510.00100.22130.2213
2.5 β 3 0.04390.38290.38510.00370.26510.26500.00110.23150.2317
β 4 0.00500.11420.11420.00440.08000.08010.00300.06810.0681
δ 0.08460.89390.89780.04090.70710.70800.02940.64090.6413
λ 0.49542.06782.12480.68571.75621.88450.56971.50431.6080
α 1.17582.90683.13450.49512.05502.11320.30551.66891.6961
β 1 0.00690.80740.80710.00600.52880.52880.00380.42440.4244
β 2 0.03370.79600.79670.02640.53430.53500.00870.41570.4156
1 β 3 0.16200.69680.71510.05890.53600.53900.03810.43120.4328
β 4 0.01810.21460.21530.00120.16360.16360.00000.13370.1337
δ 0.15010.90770.91970.12180.65880.66980.08950.55210.5591
λ 0.06170.69450.69700.03720.68780.68860.02320.60230.6025
α 0.63112.43652.51550.09211.56921.57130.02191.29221.2920
1.5 β 1 0.01920.69080.69110.00880.46800.46810.00330.38180.3816
β 2 0.01300.70760.70770.00490.46300.46300.00240.38510.3849
2.5 β 3 0.13910.65100.66530.03490.48800.48900.02080.41390.4143
β 4 0.01070.18730.18750.00400.14450.14450.00290.12010.1201
δ 0.14160.94040.95100.13670.69790.71090.13500.61320.6279
λ 0.21272.13042.14090.16021.61501.62220.14971.50551.5124
Table 6. | Bias | , empirical sd, and MSE for the MLEs of the parameters in the nonlinear FSHN ( α , 4 , 5 , 3 , 1.5 , 3 , 2.5 , λ ) regression model.
n = 50 n = 100 n = 150
α λ θ j | bias | sd MSE | bias | sd MSE | bias | sd MSE
α 0.35061.20331.25290.15600.70210.71900.11130.54570.5568
β 1 0.00590.41370.41360.00330.26830.26840.00110.22210.2220
β 2 0.00850.41500.41500.00620.27700.27700.00360.22160.2215
1 β 3 0.08960.45430.46290.03560.28600.28810.00880.23730.2374
β 4 0.01130.13540.13580.00500.08240.08250.00040.06950.0695
δ 0.03070.68690.68740.02600.45740.45800.02040.36640.3670
λ 0.14760.73380.74820.14470.62290.63940.09540.36250.3748
0.75 α 0.92131.89642.10680.44601.08511.17270.27510.80860.8539
β 1 0.01460.42380.42370.03360.28020.28200.00070.22640.2263
β 2 0.02250.42020.42040.01650.28220.28250.00450.23190.2319
2.5 β 3 0.31200.65270.72290.20200.42190.46760.17250.33760.3790
β 4 0.05380.19030.19760.04130.11360.12090.03810.08940.0972
δ 0.51411.01951.14090.34820.71740.79710.25980.54290.6016
λ 0.79861.89372.05520.33641.59041.62480.14721.46681.4742
α 0.49571.52161.60030.27211.35241.37910.21531.12731.1474
β 1 0.02430.51790.51830.00030.34480.34470.00010.27740.2774
β 2 0.00850.49350.49340.00640.34650.34650.00310.27530.2752
1 β 3 0.24680.59590.64470.10230.42790.43980.04410.33250.3354
β 4 0.04500.17250.17820.01780.12090.12210.00570.09430.0945
δ 0.25370.73390.77620.07870.50170.50770.05820.40810.4122
λ 0.20880.74080.76970.12010.63180.64300.08870.47070.4789
α 0.57122.02482.10170.40651.42791.48390.31281.10081.1439
1.5 β 1 0.05440.48830.49080.02410.34540.34600.00690.27960.2796
β 2 0.00730.54430.54380.00700.34230.34220.00190.27050.2705
2.5 β 3 0.59420.99541.15830.38610.63570.74350.28160.51810.5895
β 4 0.09640.25680.27410.07480.16950.18520.05590.13930.1500
δ 0.22980.98351.00990.19680.63760.66700.17210.49510.5239
λ 0.20781.84511.85470.13311.62601.63050.02661.59681.5970
Table 7. Parameter estimates (standard error) and p-values for nonlinear log-BS, linear log-FBS, and nonlinear log-FBS models, along with AIC and AICc values.
Parameter Estimates    Nonlinear Log-BS    Linear Log-FBS    Nonlinear Log-FBS
α 0.0258 ( 0.0012 ) 0.0260 ( 0.0016 ) 0.0244 ( 0.0014 )
p-value ( 0.0000 ) ( 0.0000 ) ( 0.0000 )
β 1 2.0343 ( 0.0281 ) 2.9460 ( 0.0000 ) 2.0822 ( 0.0001 )
p-value ( 0.0000 ) ( 0.0000 ) ( 0.0000 )
β 2 0.2295 ( 0.0051 ) 0.0593 ( 0.0001 ) 0.2259 ( 0.0002 )
p-value ( 0.0000 ) ( 0.0000 ) ( 0.0000 )
δ 1.9303 ( 0.0070 ) 2.0949 ( 0.0062 )
p-value ( 0.0000 ) ( 0.0000 )
λ 1.5232 ( 0.0071 ) 1.3293 ( 0.0053 )
p-value ( 0.0000 ) ( 0.0000 )
AIC 897.70 893.27 904.05
AICc 897.58 892.96 903.75
Table 8. MSE obtained for the linear log-FBS, nonlinear log-BS, and nonlinear log-FBS models using a (50%, 50%) train–validation split, repeated 100 times.
Linear Log-FBS    Nonlinear Log-BS    Nonlinear Log-FBS
MSE 1.107259 0.8222637 0.004790429
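The (50%, 50%) validation scheme behind Table 8 amounts to repeatedly splitting the data at random, fitting on one half, and scoring mean squared error on the other. The sketch below illustrates the procedure with an ordinary least-squares Michaelis–Menten fit via `scipy.optimize.curve_fit`; the mean function, synthetic data, and starting values are illustrative stand-ins for the paper's log-FBS likelihood, not its actual estimation routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, b1, b2):
    # Mean function used in the simulation study: E[y] = b2 * x / (b1 + x).
    return b2 * x / (b1 + x)

def split_half_mse(x, y, n_rep=100, seed=1):
    """Average test MSE over n_rep random 50/50 train-test splits."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mses = []
    for _ in range(n_rep):
        idx = rng.permutation(n)
        tr, te = idx[: n // 2], idx[n // 2 :]
        popt, _ = curve_fit(michaelis_menten, x[tr], y[tr], p0=[0.1, 200.0])
        resid = y[te] - michaelis_menten(x[te], *popt)
        mses.append(np.mean(resid ** 2))
    return float(np.mean(mses))

# Synthetic data roughly matching the simulation setting (b1 = 0.0645, b2 = 212).
rng = np.random.default_rng(0)
x = rng.uniform(0.02, 1.1, 60)
y = michaelis_menten(x, 0.0645, 212.0) + rng.normal(0.0, 5.0, 60)
avg_mse = split_half_mse(x, y)
```

Running this comparison once per candidate model and averaging over the repetitions yields figures comparable to those reported in Table 8.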
Table 9. Results for the Anderson–Darling goodness-of-fit test.
Linear Log-FBS    Nonlinear Log-BS    Nonlinear Log-FBS
p-value 0.0052 0.0000 0.6750
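The Anderson–Darling statistic in Table 9 checks whether the scaled residuals of each fitted model are compatible with the reference normal law. In Python, `scipy.stats.anderson` returns the statistic together with critical values at fixed significance levels; the residual vector below is a simulated placeholder standing in for the residuals of a fitted model:

```python
import numpy as np
from scipy.stats import anderson

# Placeholder: scaled residuals from a well-fitting model should look N(0, 1).
rng = np.random.default_rng(42)
scaled_resid = rng.standard_normal(100)

result = anderson(scaled_resid, dist="norm")
# Reject normality at the 5% level if the statistic exceeds the critical
# value paired with significance_level == 5.0.
crit_5 = result.critical_values[list(result.significance_level).index(5.0)]
reject_at_5 = bool(result.statistic > crit_5)
```

A large p-value (as for the nonlinear log-FBS model in Table 9) corresponds to a statistic well below the critical value, i.e., no evidence against the fitted model.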
Table 10. Estimates of parameters in Equation (32).
Table 10. Estimates of parameters in Equation (32).
α ^ = 2.3061 β ^ 1 = 10.1725 β ^ 2 = 5.8413 β ^ 3 = 17.9475
( 1.3846 ) ( 1.4170 ) ( 1.0844 ) ( 8.5355 )
σ ^ = 0.9499 δ ^ = 1.2984 λ ^ = 5.6195
( 0.3100 ) ( 0.6121 ) ( 2.6171 )
Table 11. Relative change RC% of each estimate θ ^ j , with the relative change of its standard error, RC ( SE ^ ( θ ^ j ) ) , in parentheses, and p-values for the model parameters.
Table 11. Relative change, RC % ( RC ( SE ^ ( θ j ) ) ) , and p-values for model parameters.
Dropped Case α ^ β ^ 1 β ^ 2 β ^ 3 σ ^ δ ^ λ ^
{ 1 } RC65.06 (373.35)3.15 (95.07)1.58 (92.37)8.77 (98.23)18.43 (27.66)1.14 (392.14)188.39 (726.20)
p-value < 0.0001 < 0.0001 < 0.0001
{ 2 } RC52.71 (1.38)6.78 (0.46)10.31 (0.47)20.22 (0.18)20.22 (0.12)42.08 (2.27)6.76 (0.40)
p-value < 0.0001 < 0.0001 0.0019
{ 3 } RC32.49 (400.94)0.56 (31.20)1.80 (32.00)0.64 (31.48)15.48 (100.93)28.76 (447.30)123.62 (623.66)
p-value < 0.0001 < 0.0001 0.0022
{ 4 } RC21.11 (195.41)7.75 (77.04)15.98 (81.52)28.77 (41.05)1.92 (223.74)101.39 (230.88)665.89 (2330.91)
p-value < 0.0001 < 0.0001 < 0.0001
{ 5 } RC57.45 (209.88)0.82 (31.88)3.41 (35.85)5.17 (23.69)27.15 (14.58)34.87 (209.86)133.96 (398.36)
p-value < 0.0001 < 0.0001 0.0037
{ 8 } RC59.70 (420.19)0.17 (94.88)3.32 (91.86)2.06 (97.94)23.12 (42.53)25.68 (379.91)120.30 (575.81)
p-value < 0.0001 < 0.0001 < 0.0001
{ 12 } RC61.67 (565.47)3.47 (51.15)6.86 (55.68)14.91 (32.77)24.02 (72.43)15.74 (551.91)130.18 (751.91)
p-value < 0.0001 < 0.0001 0.003
{ 13 } RC77.45 (519.92)1.38 (37.44)4.55 (41.19)5.92 (27.83)21.63 (43.23)3.74 (510.58)175.01 (841.64)
p-value < 0.0001 < 0.0001 0.0020
{ 19 } RC50.87 (375.77)1.77 (37.60)4.93 (42.11)7.76 (25.55)17.93 (52.82)16.93 (391.25)146.89 (643.19)
p-value < 0.0001 < 0.0001 0.0023
{ 32 } RC11.88 (109.88)2.75 (94.60)6.83 (91.06)10.26 (97.26)13.98 (16.50)54.41 (149.02)77.57 (205.20)
p-value < 0.0001 < 0.0001 < 0.0001
{ 44 } RC3.87 (2.66)5.88 (0.96)8.57 (0.94)17.81 (0.98)7.59 (2.53)49.73 (4.33)17.77 (0.85)
p-value0.63620.00000.00000.00000.35000.55120.3396
{ 46 } RC7.67 (1.42)5.43 (0.95)7.63 (0.93)18.59 (0.98)8.77 (0.67)10.22 (2.17)18.89 (0.28)
p-value < 0.0001 < 0.0001 < 0.0001
{ 2 , 44 } RC71.66 (2.59)6.96 (0.98)11.05 (0.97)19.89 (0.99)21.76 (0.02)67.37 (3.55)3.05 (0.94)
p-value < 0.0001 < 0.0001 < 0.0001
{ 2 , 46 } RC134.88 (4.40)6.81 (0.96)10.86 (0.90)19.11 (0.96)32.51 (0.25)85.00 (4.63)12.13 (1.63)
p-value < 0.0001 < 0.0001 < 0.0001
{ 44 , 46 } RC3.27 (2.16)5.76 (0.98)4.09 (0.94)18.50 (0.99)21.85 (3.44)115.02 (5.22)48.58 (0.09)
p-value < 0.0001 < 0.0001 < 0.0001
{ 2 , 44 , 46 } RC0.95 (0.12)5.47 (0.22)7.91 (0.29)18.78 (0.12)4.95 (0.02)17.26 (0.53)16.89 (0.13)
p-value < 0.0001 < 0.0001 0.0262
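The relative changes in Table 11 come from case deletion: the model is refitted without case i and each estimate is compared with its full-data counterpart via the usual definition RC(θ̂_j) = |(θ̂_j − θ̂_j(i)) / θ̂_j| × 100. A minimal sketch on an ordinary linear least-squares fit (an illustrative stand-in for the nonlinear log-FBS likelihood; the function name and data are assumptions):

```python
import numpy as np

def relative_changes(X, y):
    """RC% of each coefficient when each case is deleted in turn."""
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    n = X.shape[0]
    rc = np.empty((n, X.shape[1]))
    for i in range(n):
        keep = np.delete(np.arange(n), i)         # drop case i
        beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        rc[i] = np.abs((beta_full - beta_i) / beta_full) * 100.0
    return rc

# Small synthetic example: rows with large RC% flag influential cases.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(25), rng.normal(size=25)])
y = 2.0 + 1.5 * X[:, 1] + rng.normal(scale=0.3, size=25)
rc = relative_changes(X, y)
```

The same loop applied to the maximum-likelihood fit, with the sets of cases listed in Table 11 dropped instead of single cases, produces the RC% values and the corresponding changes in standard errors.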
Share and Cite

Martínez-Flórez, G.; Barranco-Chamorro, I.; Gómez, H.W. New Flexible Asymmetric Log-Birnbaum–Saunders Nonlinear Regression Model with Diagnostic Analysis. Axioms 2024, 13, 576. https://doi.org/10.3390/axioms13090576