
Estimation in Partial Functional Linear Spatial Autoregressive Model

1 School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, China
2 Henan Key Laboratory of Financial Engineering, Zhengzhou University, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(10), 1680; https://doi.org/10.3390/math8101680
Submission received: 6 September 2020 / Revised: 24 September 2020 / Accepted: 24 September 2020 / Published: 1 October 2020
(This article belongs to the Section Probability and Statistics)

Abstract: Functional regression allows a scalar response to depend on a functional predictor; however, little work has been done for the case where the responses are spatially dependent. In this paper, we introduce a new partial functional linear spatial autoregressive model, which explores the relationship between a spatially dependent scalar response and explanatory variables containing both multiple real-valued scalar variables and a function-valued random variable. By means of functional principal component analysis and the instrumental variable estimation method, we obtain estimators of the parametric component and the slope function of the model. Under some regularity conditions, we establish the asymptotic normality of the parametric component and the convergence rate of the slope function estimator. Finally, we illustrate the finite-sample performance of the proposed methods with simulation studies.

1. Introduction

Over the last two decades, there has been an increasing interest in functional data analysis in econometrics, biometrics, chemometrics, and medical research, as well as other fields. Due to the infinite-dimensional nature of functional data, classical multivariate methods are no longer applicable. There has been a large amount of work in functional data analysis; see Ramsay and Silverman [1], Cardot et al. [2], Yao et al. [3], Lian and Li [4], Fan et al. [5], Feng and Xue [6], Kong et al. [7], and Yu et al. [8]. Some methods and theories on partial functional linear models have been proposed. For example, based on a two-stage nonparametric regression calibration method, Zhang et al. [9] discussed a partial functional linear model. Shin [10] proposed new estimators of the parameters and coefficient function of the partial functional linear model. Lu et al. [11] considered quantile regression for the functional partially linear model. Yu et al. [12] proposed a prediction procedure for the partial functional linear quantile regression model. The aforementioned articles, however, share a significant limitation: they assume that the response variables are independent. In many fields, such as economics, finance, and environmental studies, the responses are sometimes spatially dependent. Therefore, it is of practical interest to develop more flexible approaches that accommodate a broader family of data.
There has been considerable work on spatially dependent variables. One useful approach to spatial dependence is the spatial autoregressive model, which adds a weighted average of nearby values of the dependent variable to the base set of explanatory variables. Theories and methods based on parametric spatial autoregressive models have been extensively studied in Cliff and Ord [13], Anselin [14], and Cressie [15]. Lee [16] proposed the quasi-maximum likelihood estimation. Su and Jin [17] then extended the quasi-likelihood estimation method to partially linear spatial autoregressive models. Koch and Krisztin [18] developed a B-splines and genetic-algorithms method for partially linear spatial autoregressive models. Chen et al. [19] proposed a new estimation method based on kernel estimation. Du et al. [20] considered partially linear additive spatial autoregressive models, proposed an instrumental variable estimation method, and established the asymptotic normality of the parametric component.
Developing a more flexible approach that handles both features of the data requires a new model. Thus, in this paper, motivated by spatially dependent responses and functional covariates, we combine the spatial autoregressive model and the partial functional linear model, and propose a partial functional linear spatial autoregressive model.
Let $Y_i$ be a real-valued, spatially dependent response corresponding to the $i$th observation and $Z_i$ a $p$-dimensional vector of associated explanatory variables, for $i = 1, \ldots, n$. Let $X_i(t)$, $i = 1, \ldots, n$, be independent and identically distributed zero-mean random functions in $L^2(\mathcal{T})$. For simplicity, we suppose throughout that $\mathcal{T} = [0, 1]$. The partial functional linear spatial autoregressive model is given by
$$Y_i = \rho \sum_{j=1}^{n} w_{ij} Y_j + Z_i^T \beta + \int_0^1 \gamma(t) X_i(t)\,dt + \varepsilon_i, \qquad (1)$$
where $w_{ij}$ is the $(i,j)$th element of a specified $n \times n$ non-stochastic spatial weight matrix $W_n$, with $w_{ij} = 0$ whenever $i = j$. The specification of $W_n$ is typically based on the geographic arrangement or contiguity of the observations; more generally, $W_n$ can be specified via geographical distance decay, economic distance, or the structure of a social network. Furthermore, $\beta = (\beta_1, \ldots, \beta_p)^T$ is a vector of unknown parameters, $\gamma(t)$ is a square-integrable unknown slope function on $[0, 1]$, and the $\varepsilon_i$ are independent and identically distributed random errors with zero mean and finite variance $\sigma^2$.
There are many methods for dealing with functional data, such as the functional principal component method, spline methods, and the roughness penalty method. Functional principal component analysis (FPCA) reduces an infinite-dimensional problem to a finite-dimensional one; for this reason, FPCA is popular and widely used by researchers. Dauxois et al. [21] investigated the asymptotic theory of FPCA. Cardot et al. [22] applied FPCA to estimate the slope function of the functional linear model. Hall and Horowitz [23] and Hall and Hosseini-Nasab [24] showed the optimal convergence rates of the slope function based on the FPCA technique.
In this paper, we consider the estimating problem of the model (1). Based on FPCA and the instrumental variable estimation techniques, we obtain the estimators of the parameters and slope function of model (1) with the two-stage least squares method. Under some mild conditions, the rate of convergence and asymptotic normality of the resulting estimators are established. Finally, some simulation studies are carried out to assess the finite sample performance of the proposed method. The results are encouraging and show that all estimators perform well in finite samples. Overall, simulation experiments lend support to our asymptotic results.
The rest of the paper proceeds as follows. In Section 2, functional principal component analysis and the instrumental variable estimation method are used to estimate the partial functional linear spatial autoregressive model. In Section 3, the asymptotic properties are given. Some simulation studies are described in Section 4. Lastly, we conclude the paper in Section 5 with some future work.

2. Estimation Procedures

First, we introduce FPCA. Denote the covariance function of $X(t)$ by $K_X$. By Mercer's theorem, we obtain the spectral decomposition $K_X(s,t) = \sum_{k=1}^{\infty} \lambda_k \phi_k(s) \phi_k(t)$, where $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$ are the eigenvalues of the linear operator associated with $K_X(s,t)$ and the $\phi_k(t)$ are the corresponding eigenfunctions. By the Karhunen-Loève expansion, $X_i(t)$ can be represented as $X_i(t) = \sum_{k=1}^{\infty} \xi_{ik} \phi_k(t)$, where the coefficients $\xi_{ik} = \int X_i(t) \phi_k(t)\,dt$, called the functional principal component scores, are uncorrelated random variables with mean zero and variances $E(\xi_{ik}^2) = \lambda_k$. Expanded in the orthonormal eigenbasis $\{\phi_k(t)\}$, the slope function can be written as $\gamma(t) = \sum_{k=1}^{\infty} \gamma_k \phi_k(t)$. Based on the above FPCA, model (1) can be well approximated by
$$Y_i \doteq \rho \sum_{j=1}^{n} w_{ij} Y_j + Z_i^T \beta + \sum_{j=1}^{m} \gamma_j \langle X_i, \phi_j \rangle + \varepsilon_i, \qquad i = 1, \ldots, n, \qquad (2)$$
where $\langle \cdot, \cdot \rangle$ denotes the $L^2(\mathcal{T})$ inner product, $\gamma_j = \langle \gamma, \phi_j \rangle$, and $m$ is sufficiently large.
The approximate model (2) naturally suggests principal components regression. In practice, however, the $\phi_j$ are unknown and must be replaced by estimates in order to estimate $\beta$ and $\gamma_j$ ($j = 1, \ldots, m$). For this purpose, we consider the empirical version of $K_X(s,t)$, given by
$$\hat{K}_X(s,t) = \frac{1}{n} \sum_{i=1}^{n} X_i(s) X_i(t) = \sum_{j=1}^{\infty} \hat{\lambda}_j \hat{\phi}_j(s) \hat{\phi}_j(t),$$
where $(\hat{\lambda}_j, \hat{\phi}_j)$ are the eigenvalue-eigenfunction pairs of the covariance operator associated with $\hat{K}_X$, with $\hat{\lambda}_1 \geq \hat{\lambda}_2 \geq \cdots \geq 0$. We take $(\hat{\lambda}_j, \hat{\phi}_j)$ as the estimator of $(\lambda_j, \phi_j)$.
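On a discretized grid, the empirical eigenpairs $(\hat{\lambda}_j, \hat{\phi}_j)$ reduce to an ordinary matrix eigendecomposition of the sample covariance matrix weighted by the grid spacing. The following sketch illustrates this, assuming curves observed on an equally spaced grid on $[0, 1]$ (the function name and conventions are ours, not the authors'):

```python
import numpy as np

def empirical_fpca(X, n_components):
    """Eigenpairs of the empirical covariance K_hat(s,t) = (1/n) sum_i X_i(s) X_i(t).

    X: (n, T) array of mean-zero curves sampled on an equally spaced grid on [0, 1].
    Returns eigenvalue estimates lam_hat (descending order) and eigenfunction
    estimates phi_hat whose columns have unit L2 norm on [0, 1].
    """
    n, T = X.shape
    h = 1.0 / T                          # grid spacing = quadrature weight
    K = X.T @ X / n                      # discretized covariance, (T, T)
    vals, vecs = np.linalg.eigh(K * h)   # operator eigenproblem needs the weight h
    idx = np.argsort(vals)[::-1][:n_components]
    lam_hat = vals[idx]
    phi_hat = vecs[:, idx] / np.sqrt(h)  # rescale so that int phi_j(t)^2 dt = 1
    return lam_hat, phi_hat
```

The factor $h$ appears because the integral eigenproblem $\int K(s,t)\phi(t)\,dt = \lambda \phi(s)$ discretizes to $K h \phi = \lambda \phi$ on the grid, and the eigenvectors must be rescaled so the estimated eigenfunctions are orthonormal in $L^2[0,1]$ rather than in Euclidean norm.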
Replacing $\phi_j(t)$ by $\hat{\phi}_j(t)$, model (2) can be written as
$$Y_i \doteq \rho \sum_{j=1}^{n} w_{ij} Y_j + Z_i^T \beta + \sum_{j=1}^{m} \gamma_j \langle X_i, \hat{\phi}_j \rangle + \varepsilon_i, \qquad i = 1, \ldots, n. \qquad (3)$$
Let $Y_n = (Y_1, \ldots, Y_n)^T$, $Z_n = (Z_1, \ldots, Z_n)^T$, $X_n = (X_1(t), \ldots, X_n(t))^T$, $\langle X_n, \hat{\phi}_j \rangle = \big( \int_0^1 \hat{\phi}_j(t) X_1(t)\,dt, \ldots, \int_0^1 \hat{\phi}_j(t) X_n(t)\,dt \big)^T$, $\Pi = (\langle X_n, \hat{\phi}_1 \rangle, \ldots, \langle X_n, \hat{\phi}_m \rangle)$, $\alpha = (\gamma_1, \ldots, \gamma_m)^T$, and $\varepsilon_n = (\varepsilon_1, \ldots, \varepsilon_n)^T$. Then model (3) can be written in matrix notation as
$$Y_n \doteq \rho W_n Y_n + Z_n \beta + \Pi \alpha + \varepsilon_n. \qquad (4)$$
Let $P = \Pi (\Pi^T \Pi)^{-1} \Pi^T$ denote the projection matrix onto the space spanned by $\Pi$; then
$$(I - P) Y_n \doteq \rho (I - P) W_n Y_n + (I - P) Z_n \beta + (I - P) \varepsilon_n. \qquad (5)$$
Let $Q = (W_n Y_n, Z_n)$ and $\theta = (\rho, \beta^T)^T$. Applying the two-stage least squares procedure proposed by Kelejian and Prucha [25], we propose the estimator
$$\hat{\theta} = \left[ Q^T (I - P) M (I - P) Q \right]^{-1} Q^T (I - P) M (I - P) Y_n, \qquad (6)$$
where $M = H (H^T H)^{-1} H^T$ and $H$ is a matrix of instrumental variables. Moreover,
$$\hat{\alpha} = (\hat{\gamma}_1, \ldots, \hat{\gamma}_m)^T = (\Pi^T \Pi)^{-1} \Pi^T (Y_n - Q \hat{\theta}). \qquad (7)$$
Consequently, we use $\hat{\gamma}(t) = \sum_{k=1}^{m} \hat{\gamma}_k \hat{\phi}_k(t)$ as the estimator of $\gamma(t)$.
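Given the design matrices and an instrument matrix $H$, the closed forms (6) and (7) translate directly into code. A minimal sketch follows (the names are ours, not the authors' implementation):

```python
import numpy as np

def two_stage_estimates(Y, W, Z, Pi, H):
    """Two-stage least squares for theta = (rho, beta^T)^T and alpha:
        theta_hat = [Q'(I-P)M(I-P)Q]^{-1} Q'(I-P)M(I-P)Y,
        alpha_hat = (Pi'Pi)^{-1} Pi'(Y - Q theta_hat),
    where P projects onto span(Pi) and M projects onto span(H)."""
    n = len(Y)
    Q = np.column_stack([W @ Y, Z])
    P = Pi @ np.linalg.solve(Pi.T @ Pi, Pi.T)
    M = H @ np.linalg.solve(H.T @ H, H.T)
    A = np.eye(n) - P
    G = Q.T @ A @ M @ A                       # common left factor in (6)
    theta_hat = np.linalg.solve(G @ Q, G @ Y)
    alpha_hat = np.linalg.solve(Pi.T @ Pi, Pi.T @ (Y - Q @ theta_hat))
    return theta_hat, alpha_hat
```

Forming the full $n \times n$ projection matrices is for exposition; in practice one would use QR factorizations of $\Pi$ and $H$ instead.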
Similar to Zhang and Shen [26], we construct the instrumental variables $H$ in two steps. In the first step, we form
$$\tilde{H} = \left( W_n (I - \tilde{\rho} W_n)^{-1} (\Pi \tilde{\alpha}, Z_n),\; Z_n \right), \qquad (8)$$
where $\tilde{\rho}$ and $\tilde{\alpha}$ are obtained by simply regressing $Y_n$ on the pseudo-regressors $(W_n Y_n, Z_n, \Pi)$. In the second step, we use $\tilde{H}$ to obtain the estimators $\bar{\alpha}$ and $\bar{\theta} = (\bar{\rho}, \bar{\beta}^T)^T$, and then construct the instrumental variables
$$H = \left( W_n (I - \bar{\rho} W_n)^{-1} (\Pi \bar{\alpha} + Z_n \bar{\beta}),\; Z_n \right). \qquad (9)$$
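The instrument construction can be sketched as follows. For brevity, this sketch collapses the two steps into a single pass that uses OLS pilot estimates of $(\rho, \beta, \alpha)$ in place of the refined second-step estimates, so it only approximates the construction in (8) and (9); all names are ours:

```python
import numpy as np

def build_instruments(Y, W, Z, Pi):
    """Condensed one-pass version of the instrument matrix H of (9):
    an OLS pilot fit on the pseudo-regressors (W Y, Z, Pi) supplies
    preliminary (rho, beta, alpha)."""
    n = len(Y)
    D = np.column_stack([W @ Y, Z, Pi])           # pseudo-regressors
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)  # pilot estimates
    rho_p = coef[0]
    beta_p = coef[1:1 + Z.shape[1]]
    alpha_p = coef[1 + Z.shape[1]:]
    # W (I - rho W)^{-1} (Pi alpha + Z beta): the "ideal" instrument for W Y
    mean_WY = W @ np.linalg.solve(np.eye(n) - rho_p * W,
                                  Pi @ alpha_p + Z @ beta_p)
    return np.column_stack([mean_WY, Z])
```

The first column approximates the conditional mean of $W_n Y_n$ under the reduced form, which is what makes it a valid instrument: it is correlated with the endogenous regressor $W_n Y_n$ but, being a function of exogenous quantities only, uncorrelated with $\varepsilon_n$.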
To implement our estimation method, we need to choose the truncation parameter $m$, which we select by the AIC criterion. Specifically, we minimize
$$\mathrm{AIC}(m) = \log \mathrm{RSS}(m) + 2 n^{-1} m,$$
where
$$\mathrm{RSS}(m) = \sum_{i=1}^{n} \left[ Y_i - \hat{\rho} \sum_{j=1}^{n} w_{ij} Y_j - Z_i^T \hat{\beta} - \sum_{j=1}^{m} \hat{\gamma}_j \langle X_i, \hat{\phi}_j \rangle \right]^2,$$
with $\hat{\rho}$, $\hat{\beta}$, and $\hat{\gamma}_j$ being the estimated values.
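Once the residual sum of squares has been computed for each candidate truncation level, the AIC search itself is a one-line minimization; a sketch (the helper name is ours):

```python
import numpy as np

def select_truncation(rss_by_m, n):
    """AIC(m) = log RSS(m) + 2 m / n; return the minimizing m.
    rss_by_m: dict mapping each candidate m to its residual sum of squares."""
    aic = {m: np.log(rss) + 2.0 * m / n for m, rss in rss_by_m.items()}
    return min(aic, key=aic.get)
```

The penalty term $2m/n$ stops the criterion from always favoring larger $m$: adding a component is accepted only when the drop in $\log \mathrm{RSS}$ exceeds $2/n$.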

3. Asymptotic Properties

In this section, we discuss the asymptotic normality of θ ^ and the rate of convergence of γ ^ ( t ) . For convenience and simplicity, we let c denote a positive constant that may be different at each appearance. The following assumptions will be maintained throughout the paper.
Assumption 1.
The matrix $I - \rho W_n$ is nonsingular with $|\rho| < 1$.
Assumption 2.
The row and column sums of the matrices $W_n$ and $(I - \rho W_n)^{-1}$ are bounded uniformly in absolute value for any $|\rho| < 1$.
Assumption 3.
For the matrix $S = W_n (I - \rho W_n)^{-1}$, there exists a constant $\rho_c$ such that $\rho_c I - S S^T$ is a positive semidefinite matrix.
Assumption 4.
$\frac{1}{n} \tilde{Q}^T (I - P) M (I - P) \tilde{Q} \to \Sigma$ in probability for some positive definite matrix $\Sigma$, where $\tilde{Q} = (S(Z_n \beta + \eta), Z_n)$ and $\eta = \big( \int_0^1 \gamma(t) X_1(t)\,dt, \ldots, \int_0^1 \gamma(t) X_n(t)\,dt \big)^T$.
Assumption 5.
For the matrix $\tilde{Q} = (S(Z_n \beta + \eta), Z_n)$, there exists a constant $\rho_c^*$ such that $\rho_c^* I - \tilde{Q} \tilde{Q}^T$ is a positive semidefinite matrix.
Assumption 6.
The random vector Z has bounded fourth moments.
Assumption 7.
For any $c > 0$, there exists an $\epsilon > 0$ such that
$$\sup_{t \in [0,1]} E\{ |X(t)|^c \} < \infty, \qquad \sup_{s,t \in [0,1]} E\big\{ |s - t|^{-\epsilon} |X(s) - X(t)| \big\}^c < \infty.$$
For each integer $r \geq 1$, $\lambda_k^{-r} E(\xi_k^{2r})$ is bounded uniformly in $k$.
Assumption 8.
$X(t)$ is twice continuously differentiable on $[0, 1]$ with probability 1, and $\int E[X^{(2)}(t)]^4\,dt < \infty$, where $X^{(2)}(t)$ denotes the second derivative of $X(t)$.
Assumption 9.
There exist constants $a > 1$ and $b > a/2 + 1$ such that $\lambda_j - \lambda_{j+1} \geq C j^{-a-1}$ and $|\gamma_j| \leq C j^{-b}$ for $j \geq 1$.
Assumption 10.
For the truncation parameter $m$, we assume that $m = O(n^{1/(a+2b)})$.
Assumptions 1–3 impose restrictions on the spatial weight matrix; such restrictions are standard for spatial regression models (see Lee [16]; Zhang and Shen [26]; Du et al. [20]). For example, let $W_n = I_D \otimes B_F$, where $I_D$ is the $D \times D$ identity matrix, $B_F = (l_F l_F^T - I_F)/(F - 1)$, $l_F$ is the $F$-dimensional vector of ones, and $\otimes$ denotes the Kronecker product; then $W_n$ satisfies Assumptions 1–3. Assumption 4 is used to represent the asymptotic covariance matrix of $\hat{\theta}$. Assumption 5 is required to ensure the identifiability of the parameter $\theta$. Assumption 6 is a usual condition in proofs of the asymptotic properties of such estimators. Assumptions 7–9 are regularity assumptions for functional linear models (see Hall and Hosseini-Nasab [24]); for example, a Gaussian process with Hölder-continuous sample paths satisfies Assumption 7. Assumption 10 usually appears in functional linear regression (see Feng and Xue [6]; Shin [10]; Hall and Horowitz [23]).
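The block-diagonal weight matrix $W_n = I_D \otimes B_F$ used in this example (and again in Section 4) can be built directly; a sketch:

```python
import numpy as np

def block_weight_matrix(D, F):
    """W_n = I_D kron B_F with B_F = (l_F l_F^T - I_F)/(F - 1):
    D districts of F members each; every member's neighbors are the
    other F - 1 members of its own district, with equal weight."""
    B_F = (np.ones((F, F)) - np.eye(F)) / (F - 1)
    return np.kron(np.eye(D), B_F)
```

By construction the matrix is row-normalized (each row sums to 1) with a zero diagonal, which is what makes Assumptions 1–3 easy to verify for this design.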
The following Theorem 1 gives the asymptotic property of the estimator of the parameter vector $\theta = (\rho, \beta^T)^T$.
Theorem 1.
Under Assumptions 1–10,
$$\sqrt{n}\, (\hat{\theta} - \theta) \xrightarrow{D} N(0, \sigma^2 \Sigma^{-1}),$$
where $\hat{\theta} = (\hat{\rho}, \hat{\beta}^T)^T$ and "$\xrightarrow{D}$" denotes convergence in distribution.
Proof of Theorem 1.
Let $e_n = \eta - \Pi \alpha$; then
$$Y_n = Q\theta + \eta + \varepsilon_n = Q\theta + e_n + \Pi\alpha + \varepsilon_n.$$
By the definition of $\hat{\theta}$, and using $(I-P)\Pi = 0$ together with the idempotency of $I - P$, we have
$$\begin{aligned} \hat{\theta} - \theta &= \left[ Q^T (I-P) M (I-P) Q \right]^{-1} Q^T (I-P) M (I-P) Y_n - \theta \\ &= \left[ Q^T (I-P) M (I-P) Q \right]^{-1} Q^T (I-P) M (I-P) (Q\theta + e_n + \varepsilon_n) - \theta \\ &= \left[ Q^T (I-P) M (I-P) Q \right]^{-1} Q^T (I-P) M (I-P) (e_n + \varepsilon_n). \end{aligned}$$
First, consider $Q^T (I-P) M (I-P) Q$. Recall that $Y_n = (I - \rho W_n)^{-1} (Z_n \beta + \eta + \varepsilon_n)$, so
$$Q = (W_n Y_n, Z_n) = \left( W_n (I - \rho W_n)^{-1} (Z_n \beta + \eta),\; Z_n \right) + \left( W_n (I - \rho W_n)^{-1} \varepsilon_n,\; 0 \right) \triangleq \tilde{Q} + \tilde{e},$$
where $\tilde{Q} = (S(Z_n\beta + \eta), Z_n)$, $\tilde{e} = (S\varepsilon_n, 0)$, and $S = W_n (I - \rho W_n)^{-1}$.
Hence, one has
$$\begin{aligned} Q^T (I-P) M (I-P) Q &= (\tilde{Q} + \tilde{e})^T (I-P) M (I-P) (\tilde{Q} + \tilde{e}) \\ &= \tilde{Q}^T (I-P) M (I-P) \tilde{Q} + \tilde{e}^T (I-P) M (I-P) \tilde{e} \\ &\quad + \tilde{Q}^T (I-P) M (I-P) \tilde{e} + \tilde{e}^T (I-P) M (I-P) \tilde{Q} \\ &\triangleq R_{11} + R_{12} + R_{13} + R_{14}. \end{aligned}$$
By the properties of projection matrices and Assumption 3, we have
$$\begin{aligned} E\left[ \varepsilon_n^T S^T (I-P) M (I-P) S \varepsilon_n \right] &= E\,\mathrm{trace}\left[ (H^T H)^{-\frac{1}{2}} H^T (I-P) S \varepsilon_n \varepsilon_n^T S^T (I-P) H (H^T H)^{-\frac{1}{2}} \right] \\ &\leq \sigma^2 \rho_c\, E\,\mathrm{trace}\left[ (H^T H)^{-\frac{1}{2}} H^T (I-P) H (H^T H)^{-\frac{1}{2}} \right] \\ &\leq \sigma^2 \rho_c\, E\,\mathrm{trace}\left[ (H^T H)^{-\frac{1}{2}} H^T H (H^T H)^{-\frac{1}{2}} \right] = O(1). \end{aligned}$$
Hence, $\varepsilon_n^T S^T (I-P) M (I-P) S \varepsilon_n = O_p(1)$, and therefore
$$R_{12} = (S\varepsilon_n, 0)^T (I-P) M (I-P) (S\varepsilon_n, 0) = O_p(1).$$
By straightforward algebra, $E(R_{13}) = 0$. In addition, based on Assumptions 3 and 5,
$$\begin{aligned} E \left\| \tilde{Q}^T (I-P) M (I-P) S \varepsilon_n \right\|^2 &= E\,\mathrm{trace}\left[ \tilde{Q}^T (I-P) M (I-P) S \varepsilon_n \varepsilon_n^T S^T (I-P) M (I-P) \tilde{Q} \right] \\ &\leq \sigma^2 \rho_c\, E\,\mathrm{trace}\left[ \tilde{Q}^T (I-P) M (I-P) M (I-P) \tilde{Q} \right] \\ &\leq \sigma^2 \rho_c\, E\,\mathrm{trace}\left[ (H^T H)^{-\frac{1}{2}} H^T \tilde{Q} \tilde{Q}^T H (H^T H)^{-\frac{1}{2}} \right] \\ &\leq \sigma^2 \rho_c \rho_c^*\, E\,\mathrm{trace}\left[ (H^T H)^{-\frac{1}{2}} H^T H (H^T H)^{-\frac{1}{2}} \right] = O(1). \end{aligned}$$
Therefore $R_{13} = O_p(1)$, and similarly $R_{14} = O_p(1)$.
Combining the orders of $R_{12}$, $R_{13}$, and $R_{14}$, we have
$$Q^T (I-P) M (I-P) Q = R_{11} + O_p(1).$$
Now, consider $Q^T (I-P) M (I-P) e_n$. Obviously,
$$Q^T (I-P) M (I-P) e_n = \tilde{Q}^T (I-P) M (I-P) e_n + \varepsilon_n^T S^T (I-P) M (I-P) e_n \triangleq R_{21} + R_{22}.$$
By Lemma 1 of Hu et al. [27], the $i$th component of $e_n$ satisfies
$$e_{n,i} = \sum_{j=1}^{\infty} \gamma_j \langle X_i, \phi_j \rangle - \sum_{j=1}^{m} \gamma_j \langle X_i, \hat{\phi}_j \rangle = \sum_{j=1}^{m} \gamma_j \langle X_i, \phi_j - \hat{\phi}_j \rangle + \sum_{j=m+1}^{\infty} \gamma_j \langle X_i, \phi_j \rangle.$$
By Lemma 1(b) of Kong et al. [7], with the help of Assumptions 7–9, we have
$$\| \hat{\phi}_j - \phi_j \| = O_p\big( n^{-\frac{1}{2}} j \big).$$
By Assumptions 7 and 9, one has
$$\left| \sum_{j=1}^{m} \gamma_j \langle X_i, \phi_j - \hat{\phi}_j \rangle \right|^2 \leq \| X_i \|^2 \left( \sum_{j=1}^{m} |\gamma_j|\, \| \phi_j - \hat{\phi}_j \| \right)^2 = O_p\left( \Big( \sum_{j=1}^{m} j^{-b}\, j\, n^{-\frac{1}{2}} \Big)^2 \right) = O_p\big( n^{-1} m^{4-2b} \big).$$
By Assumptions 9 and 10, one has
$$E \left| \sum_{j=m+1}^{\infty} \gamma_j \langle X_i, \phi_j \rangle \right| \leq \sum_{j=m+1}^{\infty} |\gamma_j|\, E |\langle X_i, \phi_j \rangle| = O\big( m^{-b+\frac{1}{2}} \big).$$
Thus
$$\left| \sum_{j=m+1}^{\infty} \gamma_j \langle X_i, \phi_j \rangle \right|^2 = O_p\big( m^{1-2b} \big),$$
and hence
$$\| e_n \|^2 \leq 2 \sum_{i=1}^{n} \left| \sum_{j=1}^{m} \gamma_j \langle X_i, \phi_j - \hat{\phi}_j \rangle \right|^2 + 2 \sum_{i=1}^{n} \left| \sum_{j=m+1}^{\infty} \gamma_j \langle X_i, \phi_j \rangle \right|^2 = O_p\big( m^{4-2b} \big) + O_p\big( n m^{1-2b} \big) = o_p(n).$$
Combining this with Assumption 5, we have
$$E \| R_{21} \|^2 = E\,\mathrm{trace}\left[ e_n^T (I-P) M (I-P) \tilde{Q} \tilde{Q}^T (I-P) M (I-P) e_n \right] \leq \rho_c^*\, E\,\mathrm{trace}\left[ e_n^T (I-P) M (I-P) e_n \right] \leq \rho_c^*\, E\,\mathrm{trace}\left[ e_n^T e_n \right] = o(n).$$
Thus $R_{21} = o_p(\sqrt{n})$, and similarly $R_{22} = o_p(\sqrt{n})$.
Then, we find
$$\sqrt{n}\, (\hat{\theta} - \theta) = \sqrt{n} \left[ R_{11} + O_p(1) \right]^{-1} Q^T (I-P) M (I-P) (e_n + \varepsilon_n) = \left[ \frac{R_{11}}{n} + O_p(n^{-1}) \right]^{-1} \left[ \frac{\tilde{Q}^T (I-P) M (I-P) \varepsilon_n}{\sqrt{n}} + o_p(1) \right].$$
Invoking the central limit theorem and Slutsky's theorem, we obtain
$$\sqrt{n}\, (\hat{\theta} - \theta) \xrightarrow{D} N(0, \sigma^2 \Sigma^{-1}). \qquad \square$$
The rate of convergence of the slope function estimator $\hat{\gamma}(t) = \sum_{k=1}^{m} \hat{\gamma}_k \hat{\phi}_k(t)$ is given in the following theorem.
Theorem 2.
Under Assumptions 1–10,
$$\| \hat{\gamma}(t) - \gamma(t) \|^2 = O_p\left( n^{-\frac{2b-1}{a+2b}} \right).$$
The proof of Theorem 2 follows that of Theorem 2 in Shin [10] and is therefore omitted.

4. Simulation Study

In this section, we conduct simulation studies to assess the finite-sample performance of the proposed estimation method. The data $\{Y_i\}$ are generated from the model
$$Y_n = \rho W_n Y_n + Z_{n1} \beta_1 + Z_{n2} \beta_2 + \int_0^1 \gamma(t) X_n(t)\,dt + \varepsilon_n,$$
where $Z_{n1} = (Z_{11}, Z_{21}, \ldots, Z_{n1})^T$, $Z_{n2} = (Z_{12}, Z_{22}, \ldots, Z_{n2})^T$, and $Z_{i1}$ and $Z_{i2}$ are independent and follow uniform distributions on $[-1, 1]$ and $[0, 1]$, respectively, for $i = 1, 2, \ldots, n$; $\beta_1 = 1$, $\beta_2 = 1$, $\gamma(t) = \sqrt{2}\sin(\pi t/2) + 3\sqrt{2}\sin(3\pi t/2)$, and $\varepsilon_n \sim N(0, \sigma^2 I_n)$.
We suppose the functional predictors can be expressed as $X_i(t) = \sum_{j=1}^{50} U_{ij} v_j(t)$, where the $U_{ij}$ are independently normally distributed with mean 0 and variance $\lambda_j = ((j - 0.5)\pi)^{-2}$, and $v_j(t) = \sqrt{2}\sin((j - 0.5)\pi t)$. The actual observations are taken to be realizations of $\{X_i(\cdot)\}$ at an equally spaced grid of 100 points in $[0, 1]$. As noted in Section 2, the truncation parameter $m$ is selected by the AIC criterion in our simulations. Similar to Lee [16] and Case [28], we focus on the spatial scenario with $R$ districts, $q$ members in each district, and each neighbor of a member in a district given equal weight; that is, $W_n = I_R \otimes B_q$, where $B_q = (l_q l_q^T - I_q)/(q - 1)$, $l_q$ is the $q$-dimensional vector of ones, and $\otimes$ is the Kronecker product. Simulations are run with $R = 50$ and 70, $q = 2$, 5, and 8, and $\sigma^2 = 0.25$ and 1. For comparison, three values $\rho = 0.2, 0.5, 0.7$ are considered, representing weak, mild, and relatively strong spatial dependence of the responses, respectively.
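The data-generating design above can be sketched as follows (the function name, seed handling, and return signature are ours, not the paper's; $\beta_1 = \beta_2 = 1$ as stated in the text):

```python
import numpy as np

def generate_data(R, q, rho, sigma2, n_grid=100, J=50, seed=0):
    """Simulate one sample from the Section 4 design (a sketch)."""
    rng = np.random.default_rng(seed)
    n = R * q
    t = (np.arange(n_grid) + 0.5) / n_grid
    j = np.arange(1, J + 1)
    lam = ((j - 0.5) * np.pi) ** (-2.0)                    # score variances
    v = np.sqrt(2) * np.sin(np.outer(j - 0.5, np.pi * t))  # (J, n_grid) basis
    U = rng.normal(size=(n, J)) * np.sqrt(lam)
    X = U @ v                                              # functional predictors
    gamma = (np.sqrt(2) * np.sin(np.pi * t / 2)
             + 3 * np.sqrt(2) * np.sin(3 * np.pi * t / 2))
    fl = X @ gamma / n_grid              # int_0^1 gamma(t) X_i(t) dt (Riemann sum)
    Z1 = rng.uniform(-1, 1, n)
    Z2 = rng.uniform(0, 1, n)
    beta1, beta2 = 1.0, 1.0
    B_q = (np.ones((q, q)) - np.eye(q)) / (q - 1)
    W = np.kron(np.eye(R), B_q)
    eps = rng.normal(0.0, np.sqrt(sigma2), n)
    # reduced form: Y = (I - rho W)^{-1} (Z1 b1 + Z2 b2 + fl + eps)
    Y = np.linalg.solve(np.eye(n) - rho * W,
                        Z1 * beta1 + Z2 * beta2 + fl + eps)
    return Y, W, Z1, Z2, X, t
```

Solving the reduced form $(I - \rho W_n)^{-1}$ once per sample is what makes the spatial dependence exact rather than iterative.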
Throughout the simulations, for the scalar parameters $\rho$, $\beta_1$, and $\beta_2$, we use the average bias and standard deviation (SD) as measures of estimation accuracy. The performance of the estimator of the slope function $\gamma(t)$ is assessed using the square root of the average squared errors (RASE), defined as
$$\mathrm{RASE} = \left\{ \frac{1}{N} \sum_{l=1}^{N} \left[ \hat{\gamma}(t_l) - \gamma(t_l) \right]^2 \right\}^{1/2},$$
where $\{ t_l,\, l = 1, \ldots, N \}$ are the regular grid points at which $\hat{\gamma}(t)$ is evaluated; we use $N = 200$.
The sample size is $n = Rq$. We use 1000 Monte Carlo runs and summarize the results in Table 1, Table 2 and Table 3 and Figure 1 and Figure 2. The tables list the average bias and SD of the estimators of $\rho$, $\beta_1$, and $\beta_2$, and the average RASE of the estimator of $\gamma(t)$, over the 1000 replications; the figures present the average estimated curves of $\gamma(t)$.
From Table 1, Table 2 and Table 3 and Figure 1 and Figure 2 we can see that: (1) the biases of $\hat{\rho}$, $\hat{\beta}_1$, and $\hat{\beta}_2$ are fairly small in almost all cases; (2) the standard deviations of $\hat{\rho}$, $\hat{\beta}_1$, and $\hat{\beta}_2$ decrease as either $R$ or $q$ increases; (3) the RASEs of $\hat{\gamma}(t)$ are small in all cases and decrease as the sample size $n$ increases or $\sigma^2$ decreases, so the estimated curves fit the true curve increasingly well, in agreement with Figure 1 and Figure 2. Overall, the simulation results suggest that the proposed estimation procedure is effective for the partial functional linear spatial autoregressive model.

5. Conclusions

In this paper, we proposed a partial functional linear spatial autoregressive model to study the link between a spatially dependent scalar response variable and explanatory variables containing both multiple real-valued scalar variables and a functional predictor. We then used the functional principal component basis and instrumental variables to estimate the parametric vector and the slope function based on the two-stage least squares procedure. Under some mild conditions, we obtained the asymptotic normality of the estimator of the parametric vector. Furthermore, the rate of convergence of the proposed estimator of the slope function was also established. The simulation studies demonstrate that the proposed method performs satisfactorily and that the theoretical results are valid.
There are some interesting future directions. In this paper, we only considered estimation of the unknown parametric vector and slope function, without presenting a way to test for the effects of the covariates, an important aspect of any statistical analysis. In the future, we would like to identify the model structure by testing for the main effects of the scalar predictors and the functional predictor. Another interesting direction would be to extend the proposed procedure to a generalized partial functional linear spatial autoregressive model.

Author Contributions

Conceptualization, Y.H.; Software, S.W.; methodology, Y.H. and S.F.; writing—original draft preparation, Y.H. and J.J.; writing—review and editing, S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Social Science Foundation of China (18BTJ021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ramsay, J.O.; Silverman, B.W. Functional Data Analysis; Springer: New York, NY, USA, 1997. [Google Scholar]
  2. Cardot, H.; Ferraty, F.; Sarda, P. Spline estimators for the functional linear model. Statist. Sin. 2003, 13, 571–592. [Google Scholar]
  3. Yao, F.; Müller, H.G.; Wang, J.L. Functional linear regression analysis for longitudinal data. Ann. Statist. 2005, 33, 2873–2903. [Google Scholar] [CrossRef] [Green Version]
  4. Lian, H.; Li, G. Series expansion for functional sufficient dimension reduction. J. Multivariate Anal. 2014, 124, 150–165. [Google Scholar] [CrossRef]
  5. Fan, Y.; James, G.M.; Radchenko, P. Functional additive regression. Ann. Statist. 2015, 43, 2296–2325. [Google Scholar] [CrossRef]
  6. Feng, S.Y.; Xue, L.G. Partially functional linear varying coefficient model. Statistics 2016, 50, 717–732. [Google Scholar] [CrossRef]
  7. Kong, D.; Xue, K.; Yao, F.; Zhang, H.H. Partially functional linear regression in high dimensions. Biometrika 2016, 103, 147–159. [Google Scholar] [CrossRef] [Green Version]
  8. Yu, P.; Du, J.; Zhang, Z.Z. Varying-coefficient partially functional linear quantile regression models. J. Korean Statist. Soc. 2017, 46, 462–475. [Google Scholar] [CrossRef]
  9. Zhang, D.; Lin, X.; Sowers, M.F. Assessing the effects of reproductive hormone profiles on bone mineral density using functional two-stage mixed models. Biometrics 2007, 63, 351–362. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Shin, H. Partial functional linear regression. J. Statist. Plann. Inference 2009, 139, 3405–3418. [Google Scholar] [CrossRef]
  11. Lu, Y.; Du, J.; Sun, Z. Functional partially linear quantile regression model. Metrika 2014, 77, 317–332. [Google Scholar] [CrossRef]
  12. Yu, P.; Zhang, Z.Z.; Du, J. A test of linearity in partial functional linear regression. Metrika 2016, 79, 953–969. [Google Scholar] [CrossRef]
  13. Cliff, A.; Ord, J.K. Spatial Autocorrelation; Pion: London, UK, 1973. [Google Scholar]
  14. Anselin, L. Spatial Econometrics: Methods and Models; Kluwer: Dordrecht, The Netherlands, 1988. [Google Scholar]
  15. Cressie, N. Statistics for Spatial Data; John Wiley & Sons: New York, NY, USA, 1993. [Google Scholar]
  16. Lee, L.F. Asymptotic distributions of quasi-maximum likelihood estimators for spatial econometric models. Econometrica 2004, 72, 1899–1926. [Google Scholar] [CrossRef]
  17. Su, L.; Jin, S. Profile quasi-maximum likelihood estimation of partially linear spatial autoregressive models. J. Econom. 2010, 157, 18–33. [Google Scholar] [CrossRef]
  18. Koch, M.; Krisztin, T. Applications for asynchronous multi-agent teams in nonlinear applied spatial econometrics. J. Internet Technol. 2014, 12, 1007–1014. [Google Scholar]
  19. Chen, J.; Wang, R.; Huang, Y. Semiparametric spatial autoregressive model: A two-step bayesian approach. Ann. Public Health Res. 2015, 2, 1012–1024. [Google Scholar]
  20. Du, J.; Sun, X.Q.; Cao, R.Y.; Zhang, Z.Z. Statistical inference for partially linear additive spatial autoregressive models. Spat. Statist. 2018, 25, 52–67. [Google Scholar] [CrossRef]
  21. Dauxois, J.; Pousse, A.; Romain, Y. Asymptotic theory for the principal component analysis of a vector random function: Some applications to statistical inference. J. Multivariate Anal. 1982, 12, 136–154. [Google Scholar] [CrossRef] [Green Version]
  22. Cardot, H.; Ferraty, F.; Sarda, P. Functional linear model. Statist. Probab. Lett. 1999, 45, 11–22. [Google Scholar] [CrossRef]
  23. Hall, P.; Horowitz, J.L. Methodology and convergence rates for functional linear regression. Ann. Statist. 2007, 35, 70–91. [Google Scholar] [CrossRef] [Green Version]
  24. Hall, P.; Hosseini-Nasab, M. Theory for high-order bounds in functional principal components analysis. Math. Proc. Cambridge Philos. Soc. 2009, 146, 225–256. [Google Scholar] [CrossRef]
  25. Kelejian, H.H.; Prucha, I.R. A generalized spatial two-stage least squares procedure for estimating a spatial autoregressive model with autoregressive disturbances. J. Real Estate Financ. 1998, 17, 99–121. [Google Scholar] [CrossRef]
  26. Zhang, Y.Q.; Shen, D.M. Estimation of semi-parametric varying-coefficient spatial panel data models with random-effects. J. Statist. Plann. Inference 2015, 159, 64–80. [Google Scholar] [CrossRef]
  27. Hu, Y.P.; Xue, L.G.; Zhao, J.; Zhang, L.Y. Skew-normal partial functional linear model and homogeneity test. J. Statist. Plann. Inference 2020, 204, 116–127. [Google Scholar] [CrossRef]
  28. Case, A.C. Spatial patterns in household demand. Econometrica 1991, 59, 953–965. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Simulation result for $\hat{\gamma}(t)$ when $\rho = 0.2$, $R = 50$, $q = 5$, $\sigma^2 = 0.25$. The solid curve denotes the true function; the dashed curve denotes its estimate.
Figure 2. Simulation result for $\hat{\gamma}(t)$ when $\rho = 0.7$, $R = 70$, $q = 5$, $\sigma^2 = 1$. The solid curve denotes the true function; the dashed curve denotes its estimate.
Table 1. Simulation results for ρ = 0.2.

| σ² | R | Est. | Bias (q=2) | SD (q=2) | RASE (q=2) | Bias (q=5) | SD (q=5) | RASE (q=5) | Bias (q=8) | SD (q=8) | RASE (q=8) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.25 | 50 | ρ̂ | −1.0 × 10⁻⁴ | 0.022 | | −1.4 × 10⁻⁴ | 0.023 | | −0.001 | 0.021 | |
| | | β̂₁ | −0.002 | 0.045 | | −6.5 × 10⁻⁴ | 0.028 | | 4.0 × 10⁻⁴ | 0.022 | |
| | | β̂₂ | 0.003 | 0.048 | | −0.001 | 0.034 | | −0.002 | 0.030 | |
| | | γ̂(t) | | | 0.400 | | | 0.255 | | | 0.197 |
| | 70 | ρ̂ | −1.4 × 10⁻⁴ | 0.018 | | −0.001 | 0.019 | | −9.2 × 10⁻⁵ | 0.019 | |
| | | β̂₁ | 5.1 × 10⁻⁴ | 0.038 | | −2.2 × 10⁻⁴ | 0.023 | | 1.5 × 10⁻⁴ | 0.018 | |
| | | β̂₂ | 7.8 × 10⁻⁴ | 0.042 | | 4.5 × 10⁻⁴ | 0.029 | | −1.8 × 10⁻⁴ | 0.026 | |
| | | γ̂(t) | | | 0.345 | | | 0.211 | | | 0.168 |
| 1 | 50 | ρ̂ | −5.4 × 10⁻⁴ | 0.084 | | −0.006 | 0.092 | | −0.009 | 0.086 | |
| | | β̂₁ | −0.012 | 0.179 | | −0.005 | 0.113 | | 6.6 × 10⁻⁴ | 0.087 | |
| | | β̂₂ | 0.016 | 0.188 | | −0.006 | 0.136 | | −0.009 | 0.118 | |
| | | γ̂(t) | | | 0.601 | | | 0.386 | | | 0.296 |
| | 70 | ρ̂ | −0.001 | 0.070 | | −0.009 | 0.075 | | −0.004 | 0.075 | |
| | | β̂₁ | −5.8 × 10⁻⁴ | 0.151 | | −0.002 | 0.092 | | 1.6 × 10⁻⁴ | 0.072 | |
| | | β̂₂ | 0.003 | 0.166 | | −3.7 × 10⁻⁴ | 0.117 | | −0.003 | 0.104 | |
| | | γ̂(t) | | | 0.517 | | | 0.317 | | | 0.251 |
Table 2. Simulation results for ρ = 0.5.

| σ² | R | Est. | Bias (q=2) | SD (q=2) | RASE (q=2) | Bias (q=5) | SD (q=5) | RASE (q=5) | Bias (q=8) | SD (q=8) | RASE (q=8) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.25 | 50 | ρ̂ | −2.0 × 10⁻⁴ | 0.017 | | −1.3 × 10⁻⁴ | 0.015 | | −8.0 × 10⁻⁴ | 0.014 | |
| | | β̂₁ | −0.002 | 0.046 | | −6.3 × 10⁻⁴ | 0.029 | | 4.6 × 10⁻⁴ | 0.022 | |
| | | β̂₂ | 0.003 | 0.051 | | −0.001 | 0.035 | | −0.002 | 0.030 | |
| | | γ̂(t) | | | 0.401 | | | 0.255 | | | 0.197 |
| | 70 | ρ̂ | −1.9 × 10⁻⁴ | 0.014 | | −8.9 × 10⁻⁴ | 0.013 | | −7.2 × 10⁻⁵ | 0.012 | |
| | | β̂₁ | 6.0 × 10⁻⁴ | 0.039 | | −1.0 × 10⁻⁴ | 0.023 | | 1.6 × 10⁻⁴ | 0.018 | |
| | | β̂₂ | 6.0 × 10⁻⁴ | 0.045 | | 3.0 × 10⁻⁴ | 0.030 | | −2.0 × 10⁻⁴ | 0.027 | |
| | | γ̂(t) | | | 0.346 | | | 0.211 | | | 0.168 |
| 1 | 50 | ρ̂ | −0.002 | 0.066 | | −0.004 | 0.062 | | −0.006 | 0.056 | |
| | | β̂₁ | −0.010 | 0.182 | | −0.004 | 0.113 | | 0.001 | 0.088 | |
| | | β̂₂ | 0.013 | 0.200 | | −0.008 | 0.141 | | −0.010 | 0.121 | |
| | | γ̂(t) | | | 0.611 | | | 0.387 | | | 0.297 |
| | 70 | ρ̂ | −0.002 | 0.055 | | −0.006 | 0.051 | | −0.003 | 0.049 | |
| | | β̂₁ | 6.2 × 10⁻⁴ | 0.153 | | −0.001 | 0.093 | | 3.8 × 10⁻⁴ | 0.073 | |
| | | β̂₂ | 6.7 × 10⁻⁴ | 0.178 | | −0.002 | 0.121 | | −0.004 | 0.107 | |
| | | γ̂(t) | | | 0.525 | | | 0.317 | | | 0.251 |
Table 3. Simulation results for ρ = 0.7.

| σ² | R | Est. | Bias (q=2) | SD (q=2) | RASE (q=2) | Bias (q=5) | SD (q=5) | RASE (q=5) | Bias (q=8) | SD (q=8) | RASE (q=8) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.25 | 50 | ρ̂ | −1.9 × 10⁻⁴ | 0.012 | | −9.4 × 10⁻⁵ | 0.010 | | −5.0 × 10⁻⁴ | 0.009 | |
| | | β̂₁ | −0.001 | 0.047 | | −6.1 × 10⁻⁴ | 0.029 | | 5.1 × 10⁻⁴ | 0.022 | |
| | | β̂₂ | 0.003 | 0.053 | | −0.001 | 0.036 | | −0.002 | 0.031 | |
| | | γ̂(t) | | | 0.402 | | | 0.255 | | | 0.197 |
| | 70 | ρ̂ | −1.7 × 10⁻⁴ | 0.010 | | −5.7 × 10⁻⁴ | 0.008 | | −5.0 × 10⁻⁵ | 0.007 | |
| | | β̂₁ | 7.0 × 10⁻⁴ | 0.040 | | −2.0 × 10⁻⁵ | 0.023 | | 1.6 × 10⁻⁴ | 0.018 | |
| | | β̂₂ | 4.4 × 10⁻⁴ | 0.047 | | 1.9 × 10⁻⁴ | 0.031 | | −2.2 × 10⁻⁴ | 0.027 | |
| | | γ̂(t) | | | 0.348 | | | 0.211 | | | 0.168 |
| 1 | 50 | ρ̂ | −0.003 | 0.046 | | −0.003 | 0.039 | | −0.004 | 0.035 | |
| | | β̂₁ | −0.009 | 0.187 | | −0.004 | 0.114 | | 0.002 | 0.088 | |
| | | β̂₂ | 0.010 | 0.210 | | −0.009 | 0.145 | | −0.011 | 0.123 | |
| | | γ̂(t) | | | 0.622 | | | 0.389 | | | 0.298 |
| | 70 | ρ̂ | −0.002 | 0.038 | | −0.004 | 0.032 | | −0.002 | 0.030 | |
| | | β̂₁ | 0.002 | 0.157 | | −6.9 × 10⁻⁴ | 0.093 | | 5.5 × 10⁻⁴ | 0.073 | |
| | | β̂₂ | −0.002 | 0.187 | | −0.003 | 0.124 | | −0.004 | 0.109 | |
| | | γ̂(t) | | | 0.534 | | | 0.318 | | | 0.251 |

Hu, Y.; Wu, S.; Feng, S.; Jin, J. Estimation in Partial Functional Linear Spatial Autoregressive Model. Mathematics 2020, 8, 1680. https://doi.org/10.3390/math8101680


