Article

Estimation Approach for a Linear Quantile-Regression Model with Long-Memory Stationary GARMA Errors

Mathematics and Applications Laboratory, Abdelmalek Essaadi University, Tangier 90000, Morocco
*
Author to whom correspondence should be addressed.
Modelling 2024, 5(2), 585-599; https://doi.org/10.3390/modelling5020031
Submission received: 29 April 2024 / Revised: 30 May 2024 / Accepted: 31 May 2024 / Published: 4 June 2024

Abstract

The aim of this paper is to assess the significant impact of using quantile analysis in multiple fields of scientific research. Here, we focus on estimating conditional quantile functions when the errors follow a GARMA (Generalized Auto-Regressive Moving Average) model. Our key theoretical contribution involves identifying the Quantile-Regression (QR) coefficients within the context of GARMA errors. We propose a modified maximum-likelihood estimation method using an EM algorithm to estimate the target coefficients and derive their statistical properties. The proposed procedure yields estimators that are strongly consistent and asymptotically normal under mild conditions. In order to evaluate the performance of the proposed estimators, a simulation study is conducted employing the bias and Root Mean Square Error (RMSE) criteria. Furthermore, an empirical application is given to demonstrate the effectiveness of the proposed methodology in practice.

1. Introduction

Linear regression is one of the most common and well-understood techniques for modeling the relationship between one or more covariates or predictor variables, X, and the conditional mean $E(Y \mid X = x)$ of the response variable Y, given $X = x$. The conditional mean minimizes the expected squared error: $E(Y \mid X = x) = \arg\min_{\mu} E\left[(Y - \mu)^2 \mid X = x\right]$. If the conditional mean of Y, given x, is linear and expressed as $\mu(x) = x'\beta$, then $\beta$ can be estimated by solving $\min_{\beta} \sum_i (y_i - x_i'\beta)^2$, which is the ordinary least squares solution of the linear regression model.
However, inference from this model requires specific assumptions about the distribution of the error (i.e., linearity, homoscedasticity, independence, or normality). In contrast, quantile regression, as introduced by Koenker and Bassett [1], extends these ideas to the estimation of conditional quantile function models, wherein the quantiles of the conditional distribution of the response variable are expressed as functions of observed covariates. For a pair of observed values $(x, y)$ of the random vector $(X, Y)$, a quantile regression has the form $q_\tau(x) = \inf\{y : P(Y \le y \mid x) \ge \tau\}$, where $P(Y \le y \mid x) = E(1_{\{Y \le y\}} \mid X = x)$; that is, the $\tau$-th quantile of Y is the smallest value of y at which the conditional distribution function reaches $\tau$, without making assumptions about the distribution of the error. In the case of classical quantile regressions, we can assume that these quantiles of the conditional distribution have a linear form,
$q_\tau(x) = x'\beta_\tau,$
where $\beta_\tau$ is a parameter vector (a vector of regression coefficients).
For the following, it may be useful to note that this expression can be written in an equivalent way: $Y = x'\beta_\tau + \epsilon_\tau$, with $q_\tau(\epsilon_\tau \mid x) = 0$. This condition is similar to the one imposed in standard linear regression, in which the conditional mean of the variable of interest Y is modelled as a linear expression of the explanatory variables X. One difference is that here we allow the coefficients to differ from quantile to quantile. This provides additional information not apparent from simple linear regression.
In the parameter estimation problem, the form of the quantile-regression function is known, but it contains unknown parameters, $\beta_\tau$. The most intuitive way to compute the standard estimator, $\hat{\beta}_\tau$, consists of ordering the n observations, the quantile of order $\tau$ being provided by the $\lceil n\tau \rceil$-th observation, where $\lceil n\tau \rceil$ is the smallest integer greater than or equal to $n\tau$. However, it is more useful to note that the popular method for estimating the unknown parameters, $\beta_\tau$, in a quantile-regression function is to solve the minimization problem
$\hat{\beta}_\tau = \arg\min_{\beta_\tau} \sum_{i=1}^{n} \rho_\tau\left(y_i - x_i'\beta_\tau\right),$
where $\rho_\tau(\cdot)$ is the check function defined by $\rho_\tau(u) = u\left(\tau - I(u < 0)\right)$ for some $\tau \in (0,1)$. Here, $I(\cdot)$ is an indicator function that takes the value one when its argument is true and zero otherwise, and $u = y_i - x_i'\beta_\tau$. However, this function is not differentiable at zero, and closed-form solutions to the minimization problem are unobtainable [2,3]. In quantile-regression methods, linear programming is frequently implemented for parameter estimation.
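As a concrete illustration (a minimal sketch on simulated data, not code from this paper), the quantreg package in R solves exactly this linear program:

```r
# Standard quantile regression: minimize the check-function loss by
# linear programming, as implemented in the quantreg package.
library(quantreg)

set.seed(1)
n <- 200
x <- rnorm(n)
y <- 3 + 2 * x + rnorm(n)                   # simulated data for illustration

fit <- rq(y ~ x, tau = c(0.25, 0.5, 0.75))  # one fit per quantile level
coef(fit)                                   # coefficients differ across tau
```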
The Expectation-Maximization (EM) algorithm and the Alternating Least Squares (ALS) algorithm are both iterative optimization techniques commonly employed in statistical modeling to estimate parameters iteratively. In quantile regression, aimed at estimating conditional quantiles of a response variable given covariates, the selection between EM and ALS relies on several considerations.
Opting for EM over ALS in quantile regression is justified by its capability to manage latent variables, its accommodation of diverse error distributions, and its alignment with the objectives of quantile regression. Additionally, EM avoids the swapping effect by updating parameters sequentially rather than alternately, enhancing the stability of the estimation. However, it is essential to note that the bias in EM hinges on factors such as sample size and model assumptions.
Although the speed of EM varies with the complexity of the model and the convergence criteria, its advantages in handling latent variables and error distributions make it a favorable choice in many quantile-regression applications.
There are numerous articles about the parameter estimation of quantile-regression models with the EM algorithm. Tian et al. [4] proposed a new method based on the Expectation-Maximization algorithm for a linear quantile-regression model with asymmetric Laplace error distribution. Furthermore, Tian et al. [5] used this method for a linear quantile-regression model with autoregressive errors. Zhou et al. [6] developed the EM algorithm and GEM algorithm for calculating the quantiles of linear and non-linear regression models.
All of these results are only for errors that are modeled by short-memory, time-series models. However, long-memory models are very important; they are used in various fields, such as hydrology, chemistry, and economics; see, for example, Hurst [7], Jeffreys [8], Student [9].
The most commonly used long-memory model, which deals with the modeling of cyclic behavior, is the Generalized Autoregressive Moving Average (GARMA) model. It was proposed by Gray et al. [10] to deal with non-additivity, non-normality, and heteroscedasticity in real time-series data.
This behavior has attracted considerable attention, prompting extensive research efforts over recent decades. For instance, Darmawan et al. [11] utilized a GARMA model to predict COVID-19 data trends in Indonesia, Albarracin et al. [12] analyzed the structure of GARMA models in practical scenarios, while Hunt et al. [13] introduced an R package, 'garma', specifically tailored for fitting and forecasting GARMA models in R.
The foundations of the methods for independent data have now been consolidated, and computational commands for the analysis of such data are provided by most of the available statistical software (e.g., R (version 4.4.1) and Python (version 3.13)).
In this paper, we will consider a linear quantile-regression model with Generalized Autoregressive Moving Average errors, defined as follows:
$y_t = x_t'\beta_\tau + \epsilon_t, \quad t = 1, 2, \dots, n, \quad \text{with} \quad q_\tau(y_t \mid x_t) = x_t'\beta_\tau,$
where $y_t$ is the t-th observation of the response variable, $x_t = (x_{t,1}, \dots, x_{t,M})'$ is an $M \times 1$ covariate vector, $\beta_\tau = (\beta_{\tau,1}, \dots, \beta_{\tau,M})'$ is a regression parameter vector, and $\{\epsilon_t, t \in \mathbb{Z}\}$ is a stationary long-memory process generated by the GARMA(p,0) model, as follows:
$\Phi_p(L)(1 - 2\eta L + L^2)^d\, \epsilon_t = \xi_t,$
where $\Phi_p(L) = 1 - \sum_{k=1}^{p} \phi_k L^k$, with $\Phi = (\phi_1, \dots, \phi_p)'$ the autoregressive parameter vector, L is the backshift operator defined by $LX_t = X_{t-1}$, $(1 - 2\eta L + L^2)^d$ represents the long-memory Gegenbauer component, and $\{\xi_t, t \in \mathbb{Z}\}$ is an i.i.d. process.
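To make the error model concrete, the following R sketch simulates a GARMA(1,0) process by truncating the expansion of $(1 - 2\eta L + L^2)^{-d}$, whose coefficients satisfy the standard Gegenbauer three-term recursion $jC_j = 2\eta(j + d - 1)C_{j-1} - (j + 2d - 2)C_{j-2}$ with $C_0 = 1$ and $C_1 = 2d\eta$. The Gaussian innovations and the truncation lag M are illustrative assumptions, not part of the model above.

```r
# Simulate eps_t from (1 - phi*L)(1 - 2*eta*L + L^2)^d eps_t = xi_t
# via a truncated MA(infinity) representation of the Gegenbauer operator.
garma_sim <- function(n, phi = 0.5, eta = 0.9, d = 0.4, M = 1000) {
  C <- numeric(M + 1)
  C[1] <- 1                # C_0
  C[2] <- 2 * d * eta      # C_1
  for (j in 2:M) {
    C[j + 1] <- (2 * eta * (j + d - 1) * C[j] - (j + 2 * d - 2) * C[j - 1]) / j
  }
  xi <- rnorm(n + M)       # i.i.d. innovations (Gaussian here for simplicity)
  u  <- stats::filter(xi, C, method = "convolution", sides = 1)  # long memory
  u  <- u[!is.na(u)]
  as.numeric(stats::filter(u, phi, method = "recursive"))[1:n]   # AR(1) part
}

eps <- garma_sim(500)
```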
In this work, we aim to estimate the parameters of our model with the EM algorithm (see [14]), based on the method of Tian et al. [4]. Moreover, we will derive the asymptotic properties of our estimators.
The outline of this paper is organized as follows: In Section 2, we provide the likelihood function and the estimation of the parameters by the EM algorithm. In Section 3, we derive the asymptotic properties of the estimators under some mild conditions. Simulation results are provided in Section 4, and finally a real data example is presented in Section 5.

2. Estimation Procedure via EM Algorithm

The maximum-likelihood method is a widely recognized technique for deriving estimators, and one of the principal reasons for its popularity is that the resulting estimator has many appealing properties. However, in some complicated problems, maximum-likelihood estimators are unsuitable or do not exist. This method estimates the parameters of various statistical models, including quantile-regression models. In such cases, to estimate $q_\tau(y_t \mid x_t)$ in model (1), it is natural to simultaneously estimate the regression coefficients and the GARMA parameters using quantile regression. Hence, this section focuses on the estimation of the parameter vector $(\beta_\tau, \Phi, \sigma)$ of models (3) and (4). It is also worth noting that maximum-likelihood estimation is related to other optimization techniques; in particular, if we assume a normal distribution, then it is equivalent to the ordinary least squares method. In practice, the normality assumption on the random error may be abandoned for many reasons, such as outliers, contaminated data, and heavy-tailed distributions. The Laplace distribution is a good robust alternative in this case [15]. Yu and Moyeed [16] found that minimizing the expression (1) is equivalent to maximizing a likelihood function formed by combining the independently distributed asymmetric Laplace error distribution (see [17]). In this work, we adopt a similar idea to build a likelihood function; the distribution of the process $\{\xi_t, t \in \mathbb{Z}\}$ in model (3) is assumed to be an asymmetric Laplace distribution with zero mean, scale parameter $\sigma > 0$, and skewness parameter $\tau \in (0,1)$; that is, $\xi_t \sim ALD(0, \sigma, \tau)$ with density:
$f(\xi_t) = \frac{\tau(1-\tau)}{\sigma} \exp\left(-\rho_\tau\left(\frac{\xi_t}{\sigma}\right)\right),$
where $\rho_\tau(u) = u\left(\tau - 1_{[u<0]}\right)$ is the quantile check function.
The mean and the variance of this distribution are given by $E(\xi_t) = \sigma\,\frac{1-2\tau}{\tau(1-\tau)}$ and $Var(\xi_t) = \sigma^2\,\frac{1-2\tau+2\tau^2}{(1-\tau)^2\tau^2}$ (see [18]).
The Laplace distribution was chosen for the error term in the GARMA model for several reasons. Firstly, it offers robustness to outliers: the Laplace distribution, also referred to as the double-exponential distribution, has heavier tails than the normal distribution, which makes it more robust to outliers in the data. In a GARMA error model, which aims to capture the time-series behavior of a process, robustness to outliers is essential for precise estimation and prediction. Secondly, unlike the normal distribution, the asymmetric Laplace distribution has the capability to capture skewness in the data. In time-series analysis, it is common to encounter data that exhibit asymmetric patterns; by using the Laplace distribution as the error term in the GARMA model, we align the model's assumptions with the potential skewness in the data, thus improving its ability to capture the underlying patterns. Additionally, the Laplace distribution possesses straightforward mathematical properties that enhance its computational efficiency, which is beneficial when fitting the GARMA model to extensive datasets or when performing simulations for different scenarios.
Under these circumstances, the maximum-likelihood estimates of the parameters $\beta_\tau$, $\Phi$, and $\sigma$ are obtained by maximizing the marginal density $f(y_t \mid \beta_\tau, \Phi, \sigma)$, which is obtained by replacing $\xi_t$ with $\Phi_p(L)(1 - 2\eta L + L^2)^d (y_t - x_t'\beta_\tau)$ in Equation (3), that is,
$L(\beta_\tau, \sigma, \tau) = \left(\frac{\tau(1-\tau)}{\sigma}\right)^n \exp\left(-\sum_{t=1}^{n} \rho_\tau\left(\frac{\Phi_p(L)(1 - 2\eta L + L^2)^d (y_t - x_t'\beta_\tau)}{\sigma}\right)\right).$
The maximization of (4) is equivalent to the maximization of the logarithm of the likelihood function:
$\log L(\beta_\tau, \sigma, \tau) = n \log\left(\frac{\tau(1-\tau)}{\sigma}\right) - \sum_{t=1}^{n} \rho_\tau\left(\frac{\Phi_p(L)(1 - 2\eta L + L^2)^d (y_t - x_t'\beta_\tau)}{\sigma}\right).$
Clearly, the objective function has no closed form. Hence, it is not possible to estimate the quantile-regression coefficients directly, because this objective function is non-convex with respect to $\beta_\tau$ and $\rho_\tau(\cdot)$ is not differentiable at 0. Therefore, to overcome this problem, we suggest using the representation of the asymmetric Laplace distribution as a normal distribution conditional on an exponential distribution, proposed by Yu and Moyeed [16], as follows:
$\xi_t = \theta z_t + \omega \sqrt{\sigma z_t}\, u_t,$
where $u_t \sim N(0,1)$, $z_t \sim \exp(1/\sigma)$, $\theta = \frac{1-2\tau}{\tau(1-\tau)}$, and $\omega^2 = \frac{2}{\tau(1-\tau)}$.
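This mixture representation is easy to verify by simulation; the following sketch (with illustrative values of $\tau$ and $\sigma$) draws $\xi_t$ from the mixture and compares the empirical mean and variance with the formulas quoted above:

```r
set.seed(2)
tau    <- 0.25
sigma  <- 1
theta  <- (1 - 2 * tau) / (tau * (1 - tau))
omega2 <- 2 / (tau * (1 - tau))

n  <- 1e6
z  <- rexp(n, rate = 1 / sigma)             # z_t ~ exp(1/sigma), mean sigma
u  <- rnorm(n)                              # u_t ~ N(0, 1)
xi <- theta * z + sqrt(omega2 * sigma * z) * u

c(mean(xi), sigma * (1 - 2 * tau) / (tau * (1 - tau)))                   # agree
c(var(xi),  sigma^2 * (1 - 2 * tau + 2 * tau^2) / ((1 - tau)^2 * tau^2)) # agree
```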
This representation can be seen as a reformulation of the previous objective function, using the Gegenbauer polynomials $\left(C_i^{(d)}(\eta),\ i \geq 0\right)$, defined by:
$(1 - 2\eta L + L^2)^d = \sum_{i=0}^{\infty} C_i^{(d)}(\eta)\, L^i.$
(See [19]).
Then, under the previous results, Equations (3), (8) and (9) lead to the following explicit expression:
$y_t = -\sum_{i=1}^{\infty} C_i^{(d)}(\eta)\, L^i y_t + (1 - 2\eta L + L^2)^d \left(\sum_{k=1}^{p} \phi_k L^k y_t + \Phi_p(L)\, x_t'\beta_\tau\right) + \theta z_t + \omega \sqrt{\sigma z_t}\, u_t.$
Therefore, we can conclude that $y_t$ is conditionally normally distributed (see [5] for more details). Then, the quantile-regression model here is represented as a normal regression model. The full joint density for $y = (y_1, \dots, y_n)'$ is given by:
$f(y \mid z, \beta_\tau) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi\,\omega^2 \sigma z_t}} \exp\left(-\frac{\left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau) - \theta z_t\right)^2}{2\omega^2 z_t \sigma}\right),$
with $z = (z_1, \dots, z_n)'$ and $P(L) = (1 - 2\eta L + L^2)^d$.
The estimators of the unknown quantile-regression parameters $\beta_\tau$, the scale $\sigma$, and the vector of GARMA parameters $\Phi$ may be obtained by maximizing the likelihood function:
$L(\beta_\tau, \Phi, \sigma \mid y, x, z) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi\,\omega^2 \sigma z_t}} \exp\left(-\frac{\left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau) - \theta z_t\right)^2}{2\omega^2 z_t \sigma}\right) \frac{1}{\sigma} \exp\left(-\frac{z_t}{\sigma}\right).$
The corresponding log-likelihood function is given by:
$\log L(\beta_\tau, \Phi, \sigma \mid y, x, z) = -n \log\left(\sqrt{2\pi}\,\omega\right) - \frac{3n}{2}\log\sigma - \frac{1}{2}\sum_{t=1}^{n}\log z_t - \sum_{t=1}^{n} \frac{\left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right)^2}{2\omega^2\sigma}\, z_t^{-1} + \frac{2\theta \sum_{t=1}^{n} \Phi_p(L)P(L)(y_t - x_t'\beta_\tau)}{2\omega^2\sigma} - \sum_{t=1}^{n}\left(\frac{\theta^2}{2\omega^2\sigma} + \frac{1}{\sigma}\right) z_t.$
Due to the unobserved variables $z_t$, the maximum likelihood becomes intractable. In this case, solving Equation (2) requires an iterative algorithm. The EM algorithm (see [20]) is a general method for finding the maximum-likelihood estimates of the parameters of an underlying distribution from a dataset that has missing values. In the next section, we present the Expectation-Maximization (EM) algorithm, which is derived from likelihood optimization.

2.1. EM Algorithm

The Expectation-Maximization (EM) algorithm [14] is a commonly utilized method for finding maximum-likelihood estimates in statistical models that involve missing data or unobservable latent variables. A latent variable is a variable that affects the observed data in ways that cannot be measured directly. The EM algorithm is an iterative approach that alternates between two modes. The first mode estimates the missing or hidden variables; this is known as the estimation step or E-step. The second mode optimizes the parameters of the model to provide the best explanation of the data; this is known as the maximization step or M-step.
In model (10), the EM iterations are based on regarding the random vector $z = (z_1, \dots, z_n)$ as a set of unobserved latent variables or missing values. We seek to estimate our model by maximizing the complete-data log-likelihood given in Equation (12), $\log L(\beta_\tau, \Phi, \sigma \mid y, x, z)$, where $\vartheta = (\beta_\tau, \Phi, \sigma)$ is the unknown parameter vector for which we wish to find the MLE. To apply the EM algorithm for estimation, we must first find the conditional pdf of the unobserved variable $z_t$, in order to estimate the missing variables in the dataset. Indeed, since $z_t$ in Equation (8) is assumed to be $\exp(1/\sigma)$ and $u_t \sim N(0,1)$, it can be observed that the conditional distribution of $z_t$ given y is:
$f(z_t \mid y, x) \propto \frac{1}{\sqrt{\sigma z_t}} \exp\left(-\frac{\left(\Phi_p(L)(1 - 2\eta L + L^2)^d (y_t - x_t'\beta_\tau) - \theta z_t\right)^2}{2\omega^2 z_t \sigma}\right) \frac{1}{\sigma} \exp\left(-\frac{z_t}{\sigma}\right)$
$\propto \sigma^{-3/2}\, z_t^{\frac{1}{2}-1} \exp\left(-\frac{\left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau) - \theta z_t\right)^2}{2\omega^2 z_t \sigma} - \frac{z_t}{\sigma}\right)$
$\propto \sigma^{-3/2}\, z_t^{\frac{1}{2}-1} \exp\left(-\frac{1}{2}\left[\frac{\left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right)^2}{\omega^2\sigma}\, z_t^{-1} + \left(\frac{\theta^2}{\omega^2\sigma} + \frac{2}{\sigma}\right) z_t\right]\right),$
so that
$z_t \mid y \sim GIG\left(\frac{1}{2},\ \sqrt{\frac{\left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right)^2}{\omega^2\sigma}},\ \sqrt{\frac{\theta^2}{\omega^2\sigma} + \frac{2}{\sigma}}\right),$
where GIG is the Generalized Inverse Gaussian distribution, a three-parameter distribution introduced by Good [21] that has been applied in a variety of fields of statistics. More recently, Sánchez et al. [22] used the GIG distribution as a mixing distribution to estimate the parameters of quantile-regression models. Furthermore, note that the r-th moment of $z_t \sim GIG(\alpha, \delta, \gamma)$ is given by:
$E(z^r) = \left(\frac{\delta}{\gamma}\right)^r \frac{K_{\alpha+r}(\delta\gamma)}{K_{\alpha}(\delta\gamma)}$
and
$E(\log z) = \left.\frac{dE(z^r)}{dr}\right|_{r=0},$
where K is a modified Bessel function of the second kind (see [23] for more details). The above relations (14) and (15) will be useful in calculating the conditional expectation of the log-likelihood function, with respect to the conditional distribution of $z_t$ given y, when applying the two EM steps described below. These moment formulas can also be evaluated numerically, as the following sketch illustrates:
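Here gig_moment and gig_logmoment are illustrative helper names (not from the paper); besselK is base R's modified Bessel function of the second kind, and $E(\log z)$ is approximated by a central difference of $E(z^r)$ at $r = 0$, following (15):

```r
# E(z^r) = (delta/gamma)^r * K_{alpha+r}(delta*gamma) / K_alpha(delta*gamma)
gig_moment <- function(r, alpha, delta, gam) {
  (delta / gam)^r * besselK(delta * gam, alpha + r) / besselK(delta * gam, alpha)
}

# E(log z) as the derivative of E(z^r) at r = 0 (numerical central difference)
gig_logmoment <- function(alpha, delta, gam, h = 1e-5) {
  (gig_moment(h, alpha, delta, gam) - gig_moment(-h, alpha, delta, gam)) / (2 * h)
}
```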
  • Step E
To apply EM to model (10) (with GARMA errors), we start by writing down the expected complete-data log-likelihood given by Equation (2), known as the Q-function, $Q(\vartheta \mid \vartheta^{(h)}) = E\left(\log L(\vartheta \mid y, z) \mid y, \vartheta^{(h)}\right)$, where the expectation is taken with respect to the unknown data z given the observed data y and the current parameter estimates $\vartheta^{(h)}$, the estimated value at the h-th iteration. Calculating this expectation at the $(h-1)$-th iteration yields
$Q(\beta_\tau, \Phi, \sigma \mid y, \vartheta^{(h-1)}) = E\left(\log L(\beta_\tau, \Phi, \sigma \mid y, x, z) \mid y, \vartheta^{(h-1)}\right).$
Then,
$Q(\beta_\tau, \Phi, \sigma \mid y, \vartheta^{(h-1)}) = -n \log\left(\sqrt{2\pi}\,\omega\right) - \frac{3n}{2}\log\sigma - \frac{1}{2}\sum_{t=1}^{n}\lambda_t - \sum_{t=1}^{n} \frac{\left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right)^2}{2\omega^2\sigma}\,\mu_t + \frac{2\theta \sum_{t=1}^{n} \Phi_p(L)P(L)(y_t - x_t'\beta_\tau)}{2\omega^2\sigma} - \sum_{t=1}^{n}\left(\frac{\theta^2}{2\omega^2\sigma} + \frac{1}{\sigma}\right)\nu_t,$
where we have defined the pseudo-values $\lambda_t = E\left(\log z_t \mid y, \vartheta^{(h-1)}\right)$, $\mu_t = E\left(z_t^{-1} \mid y, \vartheta^{(h-1)}\right)$, and $\nu_t = E\left(z_t \mid y, \vartheta^{(h-1)}\right)$.
For evaluating (2), it is necessary to compute $\lambda_t$, $\mu_t$, and $\nu_t$, as in (14) and (15), which depend on the conditional pdf of z in Equation (11).
These expressions are straightforward to derive using Equations (14) and (15), as discussed in Eberlein and Hammerstein [24]. Evaluated at $\vartheta^{(h-1)}$, they are as follows:
$\lambda_t = \left.\frac{dE\left(z_t^a \mid y, \vartheta^{(h-1)}\right)}{da}\right|_{a=0}$
$\mu_t = \frac{\sqrt{\theta^2 + 2\omega^2}}{\left|\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right|}$
$\nu_t = \frac{\left|\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right|}{\sqrt{\theta^2 + 2\omega^2}} \times \frac{K_{3/2}\left(\frac{\left|\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right|\sqrt{\theta^2 + 2\omega^2}}{\omega^2\sigma^{(h-1)}}\right)}{K_{1/2}\left(\frac{\left|\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right|\sqrt{\theta^2 + 2\omega^2}}{\omega^2\sigma^{(h-1)}}\right)}.$
We can calculate $\nu_t$ explicitly by applying the well-known recursion formula $xK_{a+1}(x) - xK_{a-1}(x) = 2aK_a(x)$, resulting in the following expression:
$\nu_t = \frac{\omega^2\sigma^{(h-1)}}{\theta^2 + 2\omega^2} + \frac{\left|\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right|}{\sqrt{\theta^2 + 2\omega^2}}.$
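In code, the two closed-form pseudo-values take one line each; in this sketch, `res` stands for the filtered residuals $\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)$ at the current parameter values (function and argument names are illustrative):

```r
# E-step pseudo-values from the closed-form expressions above
e_step <- function(res, theta, omega2, sigma) {
  a  <- abs(res)
  s2 <- theta^2 + 2 * omega2
  mu <- sqrt(s2) / a                         # mu_t = E(1/z_t | y)
  nu <- omega2 * sigma / s2 + a / sqrt(s2)   # nu_t = E(z_t | y)
  list(mu = mu, nu = nu)
}
```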
  • Step M (Maximization)
The above E-step of the EM algorithm simply uses $L(y \mid x, z, \vartheta^{(h-1)})$ to calculate the expectation of the unobservable information, z, given the observed data $(y, x)$ and the existing estimates of the unknown parameters $\vartheta^{(h-1)}$.
In the M-step, the parameters are re-estimated by maximizing the Q-function to find the new estimate $\vartheta^{(h)}$ at the h-th step; the M-step procedure is as follows:
$\vartheta^{(h)} = \arg\max_{\vartheta} Q\left(\vartheta, \vartheta^{(h-1)}\right) = \arg\max_{\vartheta} Q\left(\vartheta, \left(\beta_\tau^{(h-1)}, \Phi^{(h-1)}, \sigma^{(h-1)}\right)\right),$
where each parameter estimate can be obtained by partially maximizing the objective Q-function:
(A.1) $\beta_\tau^{(h)} = \arg\max_{\beta} Q\left(\left(\beta, \Phi^{(h-1)}, \sigma^{(h-1)}\right), \vartheta^{(h-1)}\right)$
(A.2) $\Phi^{(h)} = \arg\max_{\Phi} Q\left(\left(\beta_\tau^{(h)}, \Phi, \sigma^{(h-1)}\right), \vartheta^{(h-1)}\right)$
(A.3) $\sigma^{(h)} = \arg\max_{\sigma} Q\left(\left(\beta_\tau^{(h)}, \Phi^{(h)}, \sigma\right), \vartheta^{(h-1)}\right).$
In this regard, starting values $\vartheta^{(0)} = (\beta_\tau^{(0)}, \Phi^{(0)}, \sigma^{(0)})$ are necessary in order to initiate the iterative procedure. These can be obtained from the Least Squares Estimates (LSE) method (see [25]).
An estimate of $\vartheta = (\beta_\tau, \Phi, \sigma)$ can be obtained by equating the score vector, comprising the first-order partial derivatives of $Q(\vartheta)$ with respect to each parameter, $\frac{\partial Q(\vartheta)}{\partial \vartheta} = \left(\frac{\partial Q}{\partial \beta_\tau}, \frac{\partial Q}{\partial \phi_1}, \dots, \frac{\partial Q}{\partial \phi_p}, \frac{\partial Q}{\partial \sigma}\right)$, to the zero vector, leading to the Q-function equations. In particular, we have the following explicit expressions:

2.1.1. Estimator of β τ

Let $\beta_\tau$ be the parameter value at a local maximum of the Q-function in Equation (2). Differentiating the previous expression with respect to $\beta_\tau$, we find
$\frac{\partial Q}{\partial \beta_\tau} = 0 \;\Leftrightarrow\; \frac{1}{\omega^2\sigma}\sum_{t=1}^{n} \mu_t \left(\Phi_p(L)P(L)\right)^2 (y_t - x_t'\beta_\tau)\, x_t - \frac{\theta}{\omega^2\sigma}\sum_{t=1}^{n} \Phi_p(L)P(L)\, x_t = 0$
$\Leftrightarrow\; \sum_{t=1}^{n} \mu_t \left(\Phi_p(L)P(L)\right)^2 x_t x_t'\, \beta_\tau = \sum_{t=1}^{n} \mu_t \left(\Phi_p(L)P(L)\right)^2 y_t\, x_t - \theta \sum_{t=1}^{n} \Phi_p(L)P(L)\, x_t.$
In matrix form, we have:
$X'\Gamma^{(h-1)}X\, \beta_\tau = X'\Gamma^{(h-1)}Y - X'\Theta,$
where:
  • $X = \left(\Phi_p(L)P(L)x_1, \dots, \Phi_p(L)P(L)x_n\right)'$
  • $Y = \left(\Phi_p(L)P(L)y_1, \dots, \Phi_p(L)P(L)y_n\right)'$
  • $\Gamma^{(h-1)} = \mathrm{diag}(\mu_1, \dots, \mu_n)$ and $\Theta = (\theta, \dots, \theta)'$
Denoting:
$\hat{X} = \left(\hat{\Phi}_p^{(h-1)}(L)P(L)x_1, \dots, \hat{\Phi}_p^{(h-1)}(L)P(L)x_n\right)'$
$\hat{Y} = \left(\hat{\Phi}_p^{(h-1)}(L)P(L)y_1, \dots, \hat{\Phi}_p^{(h-1)}(L)P(L)y_n\right)'$
We obtain the h-th iteration estimator of the parameter β τ as follows:
$\hat{\beta}_\tau^{(h)} = \left(\hat{X}'\Gamma^{(h-1)}\hat{X}\right)^{-1}\left(\hat{X}'\Gamma^{(h-1)}\hat{Y} - \hat{X}'\Theta\right).$
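This update is an ordinary weighted least-squares solve. A minimal sketch (with illustrative names, where `Xh` and `Yh` are the filtered design matrix and response and `mu` the E-step weights):

```r
# beta update: solve (X' G X) beta = X' (G Y - Theta), with G = diag(mu)
update_beta <- function(Xh, Yh, mu, theta) {
  G <- diag(mu)
  solve(t(Xh) %*% G %*% Xh, t(Xh) %*% (G %*% Yh - rep(theta, length(Yh))))
}
```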

2.1.2. Estimator of Φ

Now, we apply the same procedure to the parameters $\phi_1, \dots, \phi_p$.
For $k \in \{1, \dots, p\}$, we have:
$\frac{\partial Q}{\partial \phi_k} = 0 \;\Leftrightarrow\; \frac{1}{\omega^2\sigma}\sum_{t=1}^{n} \mu_t\, P(L)(y_{t-k} - x_{t-k}'\beta_\tau)\, \Phi_p(L)P(L)(y_t - x_t'\beta_\tau) - \frac{\theta}{\omega^2\sigma}\sum_{t=1}^{n} P(L)(y_{t-k} - x_{t-k}'\beta_\tau) = 0$
$\Leftrightarrow\; \sum_{t=1}^{n} \mu_t\, P(L)(y_{t-k} - x_{t-k}'\beta_\tau)\, P(L)\left[(y_t - x_t'\beta_\tau) - \sum_{j=1}^{p} \phi_j (y_{t-j} - x_{t-j}'\beta_\tau)\right] - \theta \sum_{t=1}^{n} P(L)(y_{t-k} - x_{t-k}'\beta_\tau) = 0$
$\Leftrightarrow\; \sum_{j=1}^{p} \phi_j \sum_{t=1}^{n} \mu_t\, P(L)(y_{t-k} - x_{t-k}'\beta_\tau)\, P(L)(y_{t-j} - x_{t-j}'\beta_\tau) = \sum_{t=1}^{n} P(L)(y_{t-k} - x_{t-k}'\beta_\tau)\left[\mu_t\, P(L)(y_t - x_t'\beta_\tau) - \theta\right].$
Then, we write the matrix form as follows:
$\left(E'\Gamma^{(h)}E\right)\Phi = E'\left(\Gamma^{(h)}J - \Theta\right),$
where:
$E = (E_{i,j})_{n \times p}$, with $E_{i,j} = P(L)\left(y_{i-j} - x_{i-j}'\beta_\tau\right)$, and $J = (J_1, \dots, J_n)'$, with $J_i = P(L)\left(y_i - x_i'\beta_\tau\right)$ for $i = 1, \dots, n$.
Therefore, the h-th iteration estimators of the parameters ϕ j are given by:
$\hat{\Phi}^{(h)} = \left(\hat{E}'\Gamma^{(h)}\hat{E}\right)^{-1}\hat{E}'\left(\Gamma^{(h)}\hat{J} - \Theta\right).$

2.1.3. Estimator of σ

Finally, for σ , we have:
$\frac{\partial Q}{\partial \sigma} = 0 \;\Leftrightarrow\; \frac{3n}{2\sigma} = \frac{1}{2\omega^2\sigma^2}\left[\sum_{t=1}^{n} \left(\Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right)^2 \mu_t + \sum_{t=1}^{n} (\theta^2 + 2\omega^2)\,\nu_t - 2\theta \sum_{t=1}^{n} \Phi_p(L)P(L)(y_t - x_t'\beta_\tau)\right].$
Then, we get:
$\hat{\sigma}^{(h)} = \frac{\sum_{t=1}^{n} \left(\hat{\Phi}_p^{(h)}(L)P(L)(y_t - x_t'\hat{\beta}_\tau^{(h)})\right)^2 \mu_t + \sum_{t=1}^{n} (\theta^2 + 2\omega^2)\,\nu_t - 2\theta \sum_{t=1}^{n} \hat{\Phi}_p^{(h)}(L)P(L)(y_t - x_t'\hat{\beta}_\tau^{(h)})}{3n\,\omega^2}.$
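The σ update likewise reduces to a single closed-form expression; in this sketch, `res` denotes the filtered residuals at the h-th iteration and `mu`, `nu` the E-step pseudo-values (illustrative names, not code from the paper):

```r
# sigma update from the closed-form M-step expression above
update_sigma <- function(res, mu, nu, theta, omega2) {
  (sum(res^2 * mu) + (theta^2 + 2 * omega2) * sum(nu) - 2 * theta * sum(res)) /
    (3 * length(res) * omega2)
}
```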

3. Consistency and Asymptotic Normality

In this section, we establish the consistency and asymptotic results of our estimators in Equations (17)–(19). We will begin by stating and discussing a set of basic notations and some assumptions that will be used throughout the remaining parts of this paper.
For the derivation of the asymptotic normality of the estimator of $\beta_\tau$, we need consistency and some additional assumptions. Because we aim to show the asymptotic normality of $\sqrt{n}\left(\hat{\beta}_\tau - \beta_\tau\right)$, it is convenient to apply the notation introduced in (1)–(2) and make the following regularity assumptions:
C1:
$W := E\left(\left(\hat{\Phi}_p^{(h-1)}(L)P(L)x_1\right)^2 \mu_1^{(h-1)}\right) < \infty$
C2:
$T := E\left(\hat{\Phi}_p^{(h-1)}(L)P(L)x_1\, \mu_1^{(h-1)}\, \theta\, x_1'\right) < \infty.$
While condition (C1) is needed in order to apply the ergodic theorem, condition (C2) allows us to apply the martingale central limit theorem of Billingsley [26]. Then, we have the following theorem:
Theorem 1.
Let $\hat{\beta}_\tau^{(h)}$ be the EM estimator of $\beta_\tau$. Under (C1) and (C2), we have:
$\sqrt{n}\left(\hat{\beta}_\tau^{(h)} - \beta_\tau\right) \xrightarrow[n \to +\infty]{d} N\left(0, W^{-1}TW^{-1}\right).$
Proof. 
Note that:
$\hat{\beta}_\tau^{(h)} - \beta_\tau = \left(\hat{X}'\Gamma^{(h-1)}\hat{X}\right)^{-1}\left(\hat{X}'\Gamma^{(h-1)}\hat{Y} - \hat{X}'\Theta\right) - \beta_\tau.$
Then
$\sqrt{n}\left(\hat{\beta}_\tau^{(h)} - \beta_\tau\right) = \left(\frac{\hat{X}'\Gamma^{(h-1)}\hat{X}}{n}\right)^{-1} \frac{\hat{X}'\Gamma^{(h-1)}\hat{Y} - \hat{X}'\Theta - \hat{X}'\Gamma^{(h-1)}\hat{X}\beta_\tau}{\sqrt{n}} = \left(\frac{\hat{X}'\Gamma^{(h-1)}\hat{X}}{n}\right)^{-1} \frac{\hat{X}'\Gamma^{(h-1)}\hat{\xi} - \hat{X}'\Theta}{\sqrt{n}} \quad \left(\hat{\xi} = \hat{Y} - \hat{X}\beta_\tau\right) = A_n^{-1}B_n,$
where
$A_n = \frac{\hat{X}'\Gamma^{(h-1)}\hat{X}}{n} \quad \text{and} \quad B_n = \frac{\hat{X}'\Gamma^{(h-1)}\hat{\xi} - \hat{X}'\Theta}{\sqrt{n}}.$
Under (C1) and using the ergodic theorem, we find that:
$A_n \xrightarrow[n \to +\infty]{a.s.} W.$
Furthermore, using the martingale central limit theorem of Billingsley [26] and (C2), we find that:
$B_n \xrightarrow{d} N(0, T).$
Finally, we have:
$\sqrt{n}\left(\hat{\beta}_\tau^{(h)} - \beta_\tau\right) = A_n^{-1}B_n \xrightarrow[n \to +\infty]{d} N\left(0, W^{-1}TW^{-1}\right).$
To obtain the asymptotic convergence of the EM estimator of Φ , let:
$(G_t)_{p \times p}, \quad (G_t)_{i,j} = P(L)\left(y_{t-i} - x_{t-i}'\hat{\beta}_\tau^{(h)}\right) P(L)\left(y_{t-j} - x_{t-j}'\hat{\beta}_\tau^{(h)}\right)$
and
$H_t = \left(P(L)\left(y_{t-1} - x_{t-1}'\hat{\beta}_\tau^{(h)}\right), \dots, P(L)\left(y_{t-p} - x_{t-p}'\hat{\beta}_\tau^{(h)}\right)\right)',$
and consider the conditions
C3:
$G := E\left(\mu_1^{(h)}\hat{G}_1\right) < \infty$
C4:
$H := E\left(\left(\mu_1^{(h)}\left(\hat{y}_1 - \hat{x}_1'\hat{\beta}_\tau^{(h)}\right) - \theta\right)\hat{H}_1\right) < \infty.$
As with conditions (C1) and (C2), we will use condition (C3) to apply the ergodic theorem and condition (C4) for the martingale central limit theorem of Billingsley [26]. Thus, we have the following theorem:
Theorem 2.
Let $\hat{\Phi}^{(h)}$ be the EM estimator of Φ. Under (C3) and (C4), we have:
$\sqrt{n}\left(\hat{\Phi}^{(h)} - \Phi\right) \xrightarrow[n \to +\infty]{d} N\left(0, G^{-1}HG^{-1}\right).$
Proof. 
Noting that $\hat{\xi} = \hat{J} - \hat{E}\Phi$, we have:
$\sqrt{n}\left(\hat{\Phi}^{(h)} - \Phi\right) = \left(\frac{\hat{E}'\Gamma^{(h)}\hat{E}}{n}\right)^{-1} \frac{\hat{E}'\Gamma^{(h)}\hat{J} - \hat{E}'\Theta - \hat{E}'\Gamma^{(h)}\hat{E}\Phi}{\sqrt{n}} = \left(\frac{\hat{E}'\Gamma^{(h)}\hat{E}}{n}\right)^{-1} \frac{\hat{E}'\Gamma^{(h)}\hat{\xi} - \hat{E}'\Theta}{\sqrt{n}} = C_n^{-1}D_n.$
Similarly to the proof of Theorem 1, we find that:
$\sqrt{n}\left(\hat{\Phi}^{(h)} - \Phi\right) = C_n^{-1}D_n \xrightarrow[n \to +\infty]{d} N\left(0, G^{-1}HG^{-1}\right).$

4. Simulation

In this section, we present a simulation study to illustrate the performance of the proposed estimation procedure for $\vartheta = (\beta_\tau, \Phi, \sigma)$ under different scenarios. The study is based on the QR model with GARMA errors, designed with two covariates and n observations. The response variable $y_t$ is generated from the following equation:
$y_t = x_t'\beta + \epsilon_t = \beta_1 x_{t,1} + \beta_2 x_{t,2} + \epsilon_t, \quad t = 1, 2, \dots, n, \quad \text{with} \quad (1 - \phi L)(1 - 2\eta L + L^2)^d \epsilon_t = \xi_t.$
The simulation scenario considers the following setting:
The sample size n is fixed at $n = 50$, $n = 100$, and $n = 200$, with the true parameter values set to $\eta = 0.9$, $d = 0.4$, $\phi = 0.5$, $\beta = (\beta_1, \beta_2)' = (3, 2)'$, and $\sigma = 1$. Different degrees of asymmetry are considered by choosing $\tau = 0.25$, 0.5, and 0.75 as the quantile points for estimation, with 500 Monte Carlo replications for each n.
The covariate values $x_t = (x_{t,1}, x_{t,2})'$, $t = 1, 2, \dots, n$, are i.i.d. random vectors generated from the standard normal distribution. The data for the error term $\epsilon_t$ are generated under three innovation distributions: the standard Gaussian $N(0,1)$, the Student distribution with $k = 3$ degrees of freedom ($t_3$), and the skewed t distribution with $k = 5$ degrees of freedom ($st_5$). The performance and recovery of the estimators are assessed by the bias and Root Mean Square Error (RMSE), to ensure that the estimated parameters have small bias and small variance, defined respectively by:
$RMSE = \sqrt{E\left(\left(\beta_\tau - \hat{\beta}_\tau^{(h)}\right)^2\right)}$
and
$Bias\left(\hat{\beta}_\tau^{(h)}\right) = E\left(\hat{\beta}_\tau^{(h)}\right) - \beta_\tau,$
where $\beta_\tau$ and $\hat{\beta}_\tau^{(h)}$ are the true parameter value and its respective h-th maximum-likelihood estimate obtained with the EM algorithm. The Monte Carlo simulation experiments were performed using the R software (version 4.4.1); see www.r-project.org, accessed on 31 October 2023.
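For reference, the two criteria can be computed over the Monte Carlo replicates as follows (a sketch with illustrative names, where `est` is a replications-by-parameters matrix of EM estimates and `truth` the vector of true values):

```r
# Bias and RMSE over Monte Carlo replicates
mc_summary <- function(est, truth) {
  bias <- colMeans(est) - truth
  rmse <- sqrt(colMeans(sweep(est, 2, truth)^2))  # sweep subtracts truth column-wise
  rbind(Bias = bias, RMSE = rmse)
}
```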
The results derived from the simulation studies are presented in Table 1, Table 2 and Table 3. In all these tables, as expected, the bias and RMSE of the estimators generally decrease as the sample size increases, for all values of the quantile points ($\tau$) considered. Moreover, $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\phi}$ all appear to be consistent and asymptotically normally distributed.

5. Real Data Example

To evaluate the quantile-regression model with GARMA errors formulated in this paper, we used the Engel food expenditure dataset, sourced from the R package quantreg. Koenker and Bassett [27] utilized this dataset to exemplify the application of quantile regression in R using authentic, real-world data.
Figure 1, displayed below, shows the scatter plot of the original data on double-log axes, with the fitted regression quartile lines superimposed. The estimated slope parameters (Engel elasticities) are 0.8358, 0.8326, 0.8780, and 0.9170 for the quartiles corresponding to the 20th, 40th, 60th, and 80th percentiles, respectively. It is interesting to note that in this example the fitted quartile lines indicate an increasing conditional scale effect.
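The classical fit shown in Figure 1 can be reproduced with the quantreg package (a sketch of the i.i.d.-error benchmark, without the GARMA error structure proposed below); fitting on double-log axes, the slope at each quartile estimates the Engel elasticity:

```r
library(quantreg)
data(engel)   # household food expenditure vs. income

taus <- c(0.2, 0.4, 0.6, 0.8)
fit  <- rq(log10(foodexp) ~ log10(income), tau = taus, data = engel)
coef(fit)     # second row: Engel elasticity estimates at each quartile
```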
To investigate the significance of the GARMA error model within quantile regression, we suggest the following model, with the GARMA errors expressed as:
$y_t = \beta + x_{1,t}\beta_1 + \epsilon_t, \quad t = 1, 2, \dots, n,$
with
$(1 - \phi_1 L - \phi_2 L^2)(1 - 2\eta L + L^2)^d\, \epsilon_t = \xi_t,$
where:
  • y t is the food expenditure of observation t;
  • x 1 , t is the income of observation t;
  • d = 0.35 and η = 0.8 .
Our methodology commences by estimating the parameters β , β 1 , ϕ 1 , and ϕ 2 at the quartiles 0.2, 0.4, 0.6, and 0.8 via our method described in Section 2 with the R software (version 4.4.1). Table 4 shows the estimated coefficients (Estimate) and the corresponding predictions at four quantile levels, τ : 0.2 , 0.4 , 0.6 , and 0.8 .
Upon examination of Table 4, the estimates associated with the covariate "income" consistently increase as $\tau$ increases, in accordance with our expectations. Moreover, the predicted values are close to the true value of 750.32 and also tend to increase with $\tau$, as expected.
This closeness to the true value not only validates the efficacy of our method but also underscores its reliability in making accurate predictions.

6. Conclusions

In this article, we have used the maximum-likelihood estimation method and the EM algorithm to estimate the unknown parameters of the quantile-regression model with Generalized Autoregressive Moving Average errors. To assess the performance of our estimators, we conducted a simulation study. The results were promising, demonstrating that our parameter estimates closely approximate the true values. Additionally, we provided a practical illustration using real data.

Author Contributions

Conceptualization, O.E.; methodology, O.E.; validation, R.E.H. and S.H.; writing—original draft, O.E.; writing—review & editing, R.E.H. and S.H.; visualization, R.E.H. and S.H.; supervision, R.E.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors gratefully acknowledge financial support from the National Scientific Research and Technology Center (CNRST), Morocco.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Koenker, R.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50. [Google Scholar] [CrossRef]
  2. Angrist, J.D.; Pischke, J.S. Mostly Harmless Econometrics: An Empiricist’s Companion; Princeton University Press: Princeton, NJ, USA, 2009. [Google Scholar]
  3. Porter, S.R. Quantile regression: Analyzing changes in distributions instead of means. In Higher Education: Handbook of Theory and Research; Springer: Berlin/Heidelberg, Germany, 2014; Volume 30, pp. 335–381. [Google Scholar]
  4. Tian, Y.; Tian, M.; Zhu, Q. Linear quantile regression based on EM algorithm. Commun. Stat. Theory Methods 2014, 43, 3464–3484. [Google Scholar] [CrossRef]
  5. Tian, Y.; Tang, M.; Zang, Y.; Tian, M. Quantile regression for linear models with autoregressive errors using EM algorithm. Comput. Stat. 2018, 33, 1605–1625. [Google Scholar] [CrossRef]
  6. Zhou, Y.H.; Ni, Z.X.; Li, Y. Quantile regression via the EM algorithm. Commun. Stat. Simul. Comput. 2014, 43, 2162–2172. [Google Scholar] [CrossRef]
  7. Hurst, H.E. Long-Term Storage Capacity of Reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770–799. [Google Scholar] [CrossRef]
  8. Jeffreys, H. The Theory of Probability; Oxford University Press: Oxford, UK, 1939. [Google Scholar]
  9. Student. Errors of routine analysis. Biometrika 1927, 19, 151–164. [Google Scholar] [CrossRef]
  10. Gray, H.L.; Zhang, N.F.; Woodward, W.A. On generalized fractional processes. J. Time Ser. Anal. 1989, 10, 232–257. [Google Scholar] [CrossRef]
  11. Darmawan, G.; Rosadi, D.; Ruchjana, B.; Pontoh, R.; Asrirawan, A.; Setialaksana, W. Forecasting COVID-19 in Indonesia with various time series models. Media Stat. 2022, 15, 83–93. [Google Scholar] [CrossRef]
  12. Albarracin, Y.; Alencar, A.; Ho, L. Generalized autoregressive and moving average models: Multicollinearity, interpretation and a new modified model. J. Stat. Comput. Simul. 2019, 89, 1819–1840. [Google Scholar] [CrossRef]
  13. Hunt, R.; Peiris, S.; Weber, N. Estimation methods for stationary Gegenbauer processes. Stat. Pap. 2022, 63, 1707–1741. [Google Scholar] [CrossRef]
  14. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B 1977, 39, 12–22. [Google Scholar] [CrossRef]
  15. Buchinsky, M. Estimating the asymptotic covariance matrix for quantile regression models: A Monte Carlo study. J. Econom. 1995, 68, 303–338. [Google Scholar] [CrossRef]
  16. Yu, K.; Moyeed, R.A. Bayesian quantile regression. Stat. Proba. Lett. 2001, 54, 437–447. [Google Scholar] [CrossRef]
  17. Yanuar, F.; Yozza, H.; Zetra, A. Modified quantile regression for modeling the low birth weight. Front. Appl. Math. Stat. 2022, 8, 58. [Google Scholar] [CrossRef]
  18. Yu, K.; Zhang, J. A three-parameter asymmetric Laplace distribution and its extension. Commun. Stat. Theory Methods 2005, 34, 1867–1879. [Google Scholar] [CrossRef]
  19. Stein, E.M.; Weiss, G. Introduction to Fourier Analysis on Euclidean Spaces; Princeton University Press: Princeton, NJ, USA, 1971; Volume 1. [Google Scholar]
  20. Meng, X.L.; Van Dyk, D. The EM algorithm-an old folk-song sung to a fast new tune. J. R. Stat. Soc. Ser. B 1997, 59, 511–567. [Google Scholar] [CrossRef]
  21. Good, J. The population frequencies of species and the estimation of population parameters. Biometrika 1953, 40, 237–260. [Google Scholar] [CrossRef]
  22. Sánchez, B.L.; Lachos, H.V.; Labra, V.F. Likelihood based inference for quantile regression using the asymmetric Laplace distribution. J. Stat. Comput. Simul. 2013, 81, 1565–1578. [Google Scholar]
  23. Barndorff-Nielsen, O.E.; Shephard, N. Modelling by Lévy processes for financial econometrics. In Lévy Processes: Theory and Applications; Birkhäuser: Boston, MA, USA, 2001; pp. 283–318. [Google Scholar]
  24. Eberlein, E.; Hammerstein, E.A.V. Generalized hyperbolic and inverse Gaussian distributions: Limiting cases and approximation of processes. In Seminar on Stochastic Analysis, Random Fields and Applications IV: Centro Stefano Franscini, Ascona, May 2002; Birkhäuser: Basel, Switzerland, 2004; pp. 212–264. [Google Scholar]
  25. Rencher, A.C.; Christensen, W.F. Chapter 10, Multivariate regression–Section 10.1, Introduction. In Methods of Multivariate Analysis; Wiley Series in Probability and Statistics; John Wiley and Sons: Hoboken, NJ, USA, 2012; Volume 709. [Google Scholar]
  26. Billingsley, P. The Lindeberg-Levy Theorem for Martingales. Proc. Am. Math. Soc. 1961, 12, 78–92. [Google Scholar]
  27. Koenker, R.; Bassett, G. Robust Tests of Heteroscedasticity based on Regression Quantiles. Econometrica 1982, 50, 43–61. [Google Scholar] [CrossRef]
Figure 1. Quartile Engel curves for food.
Table 1. The estimation results for the Standard Normal distribution (out of 500 replications).

Parameter   τ      Metric    n = 50    n = 100   n = 200
β1          0.25   Bias       0.094     0.029     0.021
                   RMSE       0.341     0.233     0.087
β1          0.5    Bias       0.149     0.052    −0.014
                   RMSE       0.242     0.225     0.104
β1          0.75   Bias      −0.071    −0.023    −0.016
                   RMSE       0.328     0.291     0.124
β2          0.25   Bias       0.121    −0.083    −0.030
                   RMSE       0.317     0.226     0.136
β2          0.5    Bias      −0.131     0.065    −0.017
                   RMSE       0.342     0.225     0.085
β2          0.75   Bias       0.113     0.005     0.002
                   RMSE       0.317     0.203     0.139
ϕ           0.25   Bias       0.132     0.095     0.074
                   RMSE       0.344     0.222     0.176
ϕ           0.5    Bias       0.092     0.015     0.012
                   RMSE       0.324     0.175     0.109
ϕ           0.75   Bias      −0.070    −0.067     0.036
                   RMSE       0.330     0.212     0.154
Table 2. The estimation results for the Student ($t_3$) distribution (out of 500 replications).

Parameter   τ      Metric    n = 50    n = 100   n = 200
β1          0.25   Bias       0.0692    0.034    −0.012
                   RMSE       0.361     0.257     0.155
β1          0.5    Bias       0.095     0.027     0.017
                   RMSE       0.238     0.164     0.080
β1          0.75   Bias       0.095    −0.083     0.049
                   RMSE       0.330     0.215     0.085
β2          0.25   Bias       0.057     0.041    −0.002
                   RMSE       0.290     0.133     0.062
β2          0.5    Bias       0.065     0.038     0.025
                   RMSE       0.337     0.237     0.174
β2          0.75   Bias       0.088     0.040     0.039
                   RMSE       0.299     0.215     0.088
ϕ           0.25   Bias       0.125    −0.043     0.038
                   RMSE       0.245     0.134     0.094
ϕ           0.5    Bias       0.067     0.053     0.025
                   RMSE       0.283     0.192     0.085
ϕ           0.75   Bias       0.104    −0.033     0.001
                   RMSE       0.269     0.136     0.062
Table 3. The estimation results for the Skew Student ($st_5$) distribution (out of 500 replications).

Parameter   τ      Metric    n = 50    n = 100   n = 200
β1          0.25   Bias       0.0822    0.049    −0.011
                   RMSE       0.327     0.287     0.124
β1          0.5    Bias       0.086    −0.068     0.050
                   RMSE       0.271     0.184     0.097
β1          0.75   Bias       0.083     0.069    −0.040
                   RMSE       0.332     0.269     0.113
β2          0.25   Bias       0.109     0.034    −0.007
                   RMSE       0.359     0.213     0.096
β2          0.5    Bias       0.091     0.075     0.020
                   RMSE       0.334     0.280     0.137
β2          0.75   Bias      −0.017     0.040    −0.007
                   RMSE       0.292     0.169     0.085
ϕ           0.25   Bias      −0.113    −0.063     0.020
                   RMSE       0.310     0.193     0.100
ϕ           0.5    Bias       0.098     0.032    −0.025
                   RMSE       0.231     0.182     0.086
ϕ           0.75   Bias       0.109     0.071    −0.052
                   RMSE       0.325     0.223     0.090
Table 4. Estimated parameters for Engel data with the expectation-maximization algorithm.

Estimate                          τ = 0.2   τ = 0.4   τ = 0.6   τ = 0.8
Intercept                          96.35     94.8      86.94     76.60
Income                              0.58      0.59      0.62      0.63
ϕ1                                 −0.35     −0.27     −0.22     −0.23
ϕ2                                 −0.09     −0.12     −0.14     −0.06
Prediction for observation 235    721.6     723.27    736.9     744.35
