Article

PQMLE and Generalized F-Test of Random Effects Semiparametric Model with Serially and Spatially Correlated Nonseparable Error

1 School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471000, China
2 School of Mathematics and Statistics & Fujian Provincial Key Laboratory of Statistics and Artificial Intelligence, Fujian Normal University, Fuzhou 350117, China
3 School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350117, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2024, 8(7), 386; https://doi.org/10.3390/fractalfract8070386
Submission received: 7 May 2024 / Revised: 24 June 2024 / Accepted: 27 June 2024 / Published: 28 June 2024
(This article belongs to the Special Issue Fractional Models and Statistical Applications)

Abstract
Semiparametric panel data models are powerful tools for analyzing data in which covariates enter both linearly and nonlinearly. This study investigates the estimation and testing of a random effects semiparametric model (RESPM) with a serially and spatially correlated nonseparable error, combining profile quasi-maximum likelihood estimation with local linear approximation. Profile quasi-maximum likelihood estimators (PQMLEs) of the unknowns and a generalized F-test statistic $F_{NT}$ are constructed to detect the existence of nonlinear relationships. The asymptotic properties of the PQMLEs and $F_{NT}$ are proven under regular assumptions. The Monte Carlo results imply that the PQMLEs and $F_{NT}$ perform well in finite samples, whereas ignoring the spatially and serially correlated error leads to inefficient and biased estimators. Indonesian rice-farming data are used to illustrate the proposed approach; the analysis indicates that land area exhibits a significant nonlinear relationship with rice yield, while high-yield varieties, mixed-yield varieties, and seed weight have significant positive impacts on rice yield.

1. Introduction

By tracking each individual in a predetermined sample over time, a panel dataset with multiple observations can be collected. In contrast to cross-sectional data (Cheng and Chen [1]), the panel data structure is rich enough to permit the estimation and testing of regression models that include not only individual characteristics observed by the researchers but also unobserved characteristics that vary across individuals and/or time periods, simply referred to as "individual and/or time effects". In recent decades, many studies have focused on parametric panel data models (Chamberlain [2], Chamberlain [3], Hsiao [4], Baltagi [5], Arellano [6], Wooldridge [7], Thapa et al. [8], and Rehfeldt and Weiss [9]).
In practice, many spatial panel datasets are collected across "locations" and require further study. Therefore, building on classic parametric panel data models, various parametric panel data models with spatial correlation across locations have been proposed. There are three fundamental types of spatial parametric panel data model (Elhorst [10]): the spatial autoregressive (SAR) model (Brueckner [11]), the spatial Durbin model (SDM) (LeSage [12], Elhorst [13]), and the spatial error model (SEM) (Allers and Elhorst [14]). A more detailed discussion can be found in Elhorst [15], Baltagi and Li [16], Pesaran [17], Kapoor et al. [18], Lee and Yu [19], Mutl and Pfaffermayr [20], and Baltagi et al. [21]. Among these, the SEM introduced by Anselin [22] accounts for interaction effects among spatially correlated determinants of the dependent variable that are excluded from the model, or among unobserved shocks that follow a spatial pattern (Elhorst [13]). The development of testing and estimation for the SEM has been summarized in books by Baltagi [5], Elhorst [10], and Anselin [22], as well as in surveys by Baltagi et al. [23], Bordignon et al. [24], and Kelejian and Prucha [25], among others. However, the aforementioned SEM assumes that the only correlation over time results from the same regional effect being present across the panel. In spatial panel data analysis, this assumption may be too restrictive for real situations. For example, if an unobserved shock influences investment across regions, behavioral relationships are affected at least over the next few periods. Thus, ignoring serially correlated errors may lead to inefficient regression coefficient estimators (Baltagi [5]).
By adding serially correlated errors to the SEM, parametric panel data models with serially and spatially correlated errors can be established. The extended models consider not only serial correlation within spatial units but also spatial correlation at every point in time. Researchers have established two types of parametric panel data models with serially and spatially correlated errors: separable and nonseparable. The model with a separable error was used to analyze the effects of public infrastructure investment on the costs and productivity of private enterprises (Cohen and Paul [26]). The MLE and corresponding asymptotic properties of a spatial panel data model with a serially and spatially correlated separable error were investigated by Lee and Yu [27]. A random effects parametric panel data model with a serially and spatially correlated error was introduced by Parent and LeSage [28], and the estimators were obtained using the Markov chain Monte Carlo (MCMC) method. Elhorst [29] constructed the MLE of a parametric panel data model with a serially and spatially correlated nonseparable error without establishing the asymptotic properties of the estimators. Lee and Yu [30] established the QMLE and corresponding asymptotic properties for linear panel data models with both separable and nonseparable serially and spatially correlated errors.
Although the theories and applications of the aforementioned linear parametric panel data models with serially and spatially correlated separable/nonseparable errors have been thoroughly explored, they are often impractical in applications because they lack flexibility and have limited ability to detect complicated structures (e.g., nonlinearity). Furthermore, misspecification of the data-generating process by a linear parametric panel data model may result in significant modeling bias and can lead to incorrect findings. For these reasons, Zhao et al. [31] investigated the estimation and testing of a partially linear single-index panel data model with serially and spatially correlated separable errors using the semiparametric minimum average variance method and an F-type test statistic. Li et al. [32] obtained the PQMLE, the generalized F-test, and the asymptotic properties for a partially linear nonparametric regression model with a serially and spatially correlated separable error. Li et al. [33] studied the PQMLEs of the unknowns for the fixed effects partially linear varying coefficient panel data model with a serially and spatially correlated nonseparable error, thereby deriving their consistency and asymptotic normality. Bai et al. [34] constructed estimators of the parametric and nonparametric components of a partially linear varying-coefficient panel data model with serially and spatially correlated separable errors using weighted semiparametric least squares and weighted polynomial spline series methods, respectively, and proved their asymptotic normality.
By adding a nonparametric component to a random effects parametric model with a serially and spatially correlated nonseparable error, a new random effects semiparametric model (RESPM) can be established that concurrently captures the linearity and nonlinearity of the covariates, the spatial and serial correlation of the errors, and individual random effects. This study explores its PQMLE and hypothesis testing issues from theoretical, simulation, and application perspectives. Our proposed model differs from the model proposed by Li et al. [32], which assumes that the spatially and serially correlated errors are separable. The difference in model structure leads to differences in the following aspects: (1) the parameter spaces (i.e., stationarity conditions) of the two models are different; (2) the estimation procedure, regularity conditions, and proofs of the theorems differ.
The remainder of this paper is organized as follows: In Section 2, a RESPM with serially and spatially correlated nonseparable error is introduced, and the estimation and testing procedures are established. Section 3 lists several regular assumptions and theorems for the estimators and test statistic. In Section 4 and Section 5, numerical simulations and real data analyses are presented. Finally, Section 6 summarizes the results. The Appendix contains the proofs of the lemmas and theorems.
Notation. Some important symbols and their definitions that appear throughout the paper are given in Table 1.

2. Estimation and Testing

2.1. The Model

The form of RESPM with serially and spatially correlated nonseparable errors is as follows:
\[
y_{it} = z_{it}\gamma + f(\iota_{it}) + b_i + \nu_{it}, \qquad (1)
\]
where $i\ (1 \le i \le N)$ denotes the $i$-th spatial unit; $t\ (1 \le t \le T)$ denotes the $t$-th period; $y_{it}$ denotes the observation of the response variable; $z_{it} = (z_{it1}, z_{it2}, \ldots, z_{itp})$ and $\iota_{it} = (\iota_{it1}, \iota_{it2}, \ldots, \iota_{itq})$ denote observations of the $p$- and $q$-dimensional covariates, respectively; $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_p)'$ is the $p$-dimensional regression coefficient vector of $z_{it}$; $f(\cdot)$ is an unknown nonparametric function; $b_i$ represents the $i$-th individual random effect; and $\nu_{it}$ represents the $it$-th error term and is independent of $b_i$.
Let $Y_t = (y_{1t}, \ldots, y_{Nt})'$, $Z_t = (z_{1t}, \ldots, z_{Nt})'$, $F_t = (f(\iota_{1t}), \ldots, f(\iota_{Nt}))'$, and $b = (b_1, \ldots, b_N)'$. Then, model (1) can be expressed in the following matrix form:
\[
Y_t = Z_t\gamma + F_t + b + \nu_t, \qquad (2)
\]
and its error term follows the serially and spatially correlated nonseparable structure
\[
\nu_t = \rho_1 W\nu_t + \rho_2\nu_{t-1} + \epsilon_t, \qquad (3)
\]
where $\nu_t = (\nu_{1t}, \ldots, \nu_{Nt})'$ and $W = (w_{ij})_{N\times N}$ is a predetermined spatial weights matrix; $\epsilon_t = (\epsilon_{1t}, \ldots, \epsilon_{Nt})'$, $\epsilon_{it} \sim IID(0, \xi_\epsilon^2)$, and $b_i \sim IID(0, \xi_b^2)$, where $\xi_\epsilon^2$ and $\xi_b^2$ are the variances of $\epsilon_{it}$ and $b_i$, respectively; and $\rho_1$ and $\rho_2$ denote the spatial and serial correlation coefficients, respectively. Similar to Elhorst [29], model stationarity requires not only $|\rho_2| < 1$ and $1/\omega_{min} < \rho_1 < 1/\omega_{max}$ but also
\[
\begin{cases}
|\rho_2| < 1 - \rho_1\omega_{max}, & \text{if } \rho_1 \ge 0, \\
|\rho_2| < 1 - \rho_1\omega_{min}, & \text{if } \rho_1 < 0,
\end{cases}
\]
where $\omega_{max}$ and $\omega_{min}$ denote the largest and smallest eigenvalues of $W$, respectively. Furthermore, let $\delta_0 = (\rho_{10}, \rho_{20}, \gamma_0, \xi_{\epsilon 0}^2, \xi_{b0}^2)$ denote the true value of $\delta = (\rho_1, \rho_2, \gamma, \xi_\epsilon^2, \xi_b^2)$, and let $f_0(\iota)$ denote the true value of $f(\iota)$.
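To make the stationarity region concrete, the following is a minimal Python sketch (not part of the original paper) that checks whether a candidate pair $(\rho_1, \rho_2)$ satisfies the conditions above for a given spatial weights matrix $W$; the small row-normalized matrix at the end is a purely illustrative example.

```python
import numpy as np

def is_stationary(W, rho1, rho2):
    """Check the stationarity conditions for the error process (3)."""
    eig = np.linalg.eigvals(W).real          # eigenvalues of the weights matrix
    w_max, w_min = eig.max(), eig.min()
    # basic requirements: |rho2| < 1 and 1/omega_min < rho1 < 1/omega_max
    if not (abs(rho2) < 1 and 1.0 / w_min < rho1 < 1.0 / w_max):
        return False
    # additional requirement, evaluated at the binding eigenvalue of W
    bound = 1 - rho1 * (w_max if rho1 >= 0 else w_min)
    return abs(rho2) < bound

# Illustrative 4-unit weights matrix (row-normalized Rook neighbours on a 2x2 grid)
W = np.array([[0, .5, .5, 0],
              [.5, 0, 0, .5],
              [.5, 0, 0, .5],
              [0, .5, .5, 0]])
print(is_stationary(W, rho1=0.2, rho2=0.7))   # True: inside the stationarity region
```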

2.2. PQMLE

To ensure homogeneity of variance, (3) can be rewritten as
\[
C\nu_t = \rho_2\nu_{t-1} + \epsilon_t,
\]
where $C = C(\rho_1) \equiv I_N - \rho_1 W$ and $I_N$ denotes the $N \times N$ identity matrix. Because $C$ is nonsingular, repeated substitution gives $C\nu_t = \epsilon_t + \rho_2 C^{-1}\epsilon_{t-1} + \rho_2^2 C^{-2}\epsilon_{t-2} + \cdots$. Therefore, $E(C\nu_t) = 0$, and the variance of $C\nu_t$ is
\[
Var(C\nu_t) = E(\epsilon_t\epsilon_t') + \rho_2^2 C^{-1}E(\epsilon_{t-1}\epsilon_{t-1}')C^{-1\prime} + \rho_2^4 C^{-2}E(\epsilon_{t-2}\epsilon_{t-2}')C^{-2\prime} + \cdots = \xi_\epsilon^2\left(I_N + \rho_2^2 C^{-1}C^{-1\prime} + \rho_2^4 C^{-2}C^{-2\prime} + \cdots\right) = \xi_\epsilon^2\left[I_N - \rho_2^2(CC')^{-1}\right]^{-1}.
\]
Substituting $\nu_t + b = Y_t - F_t - Z_t\gamma$ and $\nu_{t-1} + b = Y_{t-1} - F_{t-1} - Z_{t-1}\gamma$ into (3), we obtain
\[
CY_t = C(F_t + Z_t\gamma) + \rho_2(Y_{t-1} - F_{t-1} - Z_{t-1}\gamma) + \epsilon_t + b, \qquad t > 1.
\]
Because $E(\nu_t) = 0$ and $Var(\nu_t) = \xi_\epsilon^2 C^{-1}\left[I_N - \rho_2^2(CC')^{-1}\right]^{-1}C^{-1\prime}$, model (2) for the first period can be written as
\[
\Gamma^{1/2}CY_1 = \Gamma^{1/2}CF_1 + \Gamma^{1/2}CZ_1\gamma + \epsilon_1 + b,
\]
where $\Gamma = I_N - \rho_2^2(CC')^{-1}$ and $\Gamma^{1/2}$ may be obtained from the Cholesky or spectral decomposition of $\Gamma$; $U = I_N \otimes l_T$, where $\otimes$ denotes the Kronecker product. Letting $\eta_t = \epsilon_t + b$, then $\eta = Ub + \epsilon$, and the covariance matrix of $\eta$ can be calculated as follows:
\[
\Sigma = Var(Ub + \epsilon) = \xi_\epsilon^2 I + \xi_b^2\, I_N \otimes (l_T l_T'), \qquad |\Sigma| = \xi_\epsilon^{2N(T-1)}\left(\xi_\epsilon^2 + T\xi_b^2\right)^{N},
\]
\[
\Sigma^{-1} = \frac{1}{\xi_\epsilon^2} I + \left(\frac{1}{\xi_\epsilon^2 + T\xi_b^2} - \frac{1}{\xi_\epsilon^2}\right) I_N \otimes \left(\frac{1}{T} l_T l_T'\right),
\]
where $\eta = (\eta_1', \ldots, \eta_T')'$ and $\epsilon = (\epsilon_1', \ldots, \epsilon_T')'$; $I$ denotes the $NT \times NT$ identity matrix; and $l_T$ is a $T$-dimensional vector of ones. Hence, the log quasi-likelihood function of the proposed model is obtained as follows:
l o g L ( δ ) = N T 2 l o g ( 2 π ) + 1 2 l o g | Γ | + T l o g | C | N ( T 1 ) 2 l o g ξ ϵ 2 N 2 l o g ( ξ ϵ 2 + T ξ b 2 ) t = 2 T [ C ( Y t Z t γ F t ) ρ 2 ( Y t 1 Z t 1 γ F t 1 ) ] H [ C ( Y t Z t γ F t ) ρ 2 ( Y t 1 Z t 1 γ F t 1 ) ] 2 ( ξ ϵ 2 + T ξ b 2 ) t = 2 T [ C ( Y t Z t γ F t ) ρ 2 ( Y t 1 Z t 1 γ F t 1 ) ] ( I H ) [ C ( Y t Z t γ F t ) ρ 2 ( Y t 1 Z t 1 γ F t 1 ) ] 2 ξ ϵ 2 ( Y 1 Z 1 γ F 1 ) C Γ 1 / 2 ( I H ) Γ 1 / 2 C ( Y 1 Z 1 γ F 1 ) 2 ξ ϵ 2 ( Y 1 Z 1 γ F 1 ) C Γ 1 / 2 H Γ 1 / 2 C ( Y 1 Z 1 γ F 1 ) 2 ( ξ ϵ 2 + T ξ b 2 ) ,
where H = I N ( T 1 l T l T ) .
Because $F_t$ is unknown, Equation (6) cannot be maximized directly to obtain the quasi-maximum likelihood estimators of the unknowns. To solve this issue, the unknowns of the model are estimated by combining the PQMLE with the working independence approach (Cai [35]). The primary steps are as follows:
Step 1. Let $\hat f_{IN}(\iota)$ be the initial estimator of $f(\iota)$. We further assume that $\gamma$ is known and let $\hat f_{IN}(\iota) = \hat a_1$. Then, $\hat\phi = (\hat a_1, (S\hat a_2)')'$ can be obtained as
\[
\hat\phi = \arg\min_{a_1, a_2}\frac{1}{NT}\left[\tilde Y - \iota(\iota)\phi\right]'K_S(\iota)\left[\tilde Y - \iota(\iota)\phi\right],
\]
where $a_2$ is the first derivative of $f(\iota)$; $\tilde Y = (\tilde y_{11}, \ldots, \tilde y_{1T}, \ldots, \tilde y_{N1}, \ldots, \tilde y_{NT})'$ with $\tilde y_{it} = y_{it} - z_{it}\gamma$; $\iota(\iota) = \left[\iota_{11}(\iota), \ldots, \iota_{1T}(\iota), \ldots, \iota_{N1}(\iota), \ldots, \iota_{NT}(\iota)\right]'$ with $\iota_{it}(\iota) = \left[1, S^{-1}(\iota_{it} - \iota)\right]'$; $|S|$ is the determinant of $S = \mathrm{diag}(s_1, \ldots, s_q)$; $K_S(\iota) = \mathrm{diag}\left[k_S(\iota_{11} - \iota), \ldots, k_S(\iota_{1T} - \iota), \ldots, k_S(\iota_{N1} - \iota), \ldots, k_S(\iota_{NT} - \iota)\right]$ with $k_S(\iota) = |S|^{-1}k(S^{-1}\iota)$; and $k(\cdot)$ is a $q$-dimensional kernel function.
Let $\iota^{*}(\iota) = \left[\iota(\iota)'K_S(\iota)\iota(\iota)\right]^{-1}\iota(\iota)'K_S(\iota)$; then $\hat\phi = \iota^{*}(\iota)\tilde Y$ and
\[
\hat f_{IN}(\iota) = \iota_e(\iota)\tilde Y, \qquad (7)
\]
where $\iota_e(\iota) = e_0'\iota^{*}(\iota)$ and $e_0 = (1, 0, \ldots, 0)'$.
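As an illustration of Step 1, the following minimal Python sketch (with hypothetical function names, assuming a scalar covariate, $q = 1$, and a pilot value of $\gamma$) implements the working-independence local linear smoother: at each grid point, the intercept of a kernel-weighted linear fit of $\tilde y_{it} = y_{it} - z_{it}\gamma$ on $(\iota_{it} - \iota)/s$ gives $\hat f_{IN}(\iota)$.

```python
import numpy as np

def kernel(u):
    # kernel used later in the simulations: (3/(4*sqrt(5)))*(1 - u^2/5) on u^2 <= 5
    return 0.75 / np.sqrt(5) * (1 - u ** 2 / 5) * (u ** 2 <= 5)

def local_linear(iota_obs, y_tilde, grid, s):
    """Local linear estimate of f on `grid` with scalar bandwidth s."""
    f_hat = np.empty(len(grid))
    for j, u0 in enumerate(grid):
        d = iota_obs - u0                               # iota_it - iota
        X = np.column_stack([np.ones_like(d), d / s])   # local linear design
        w = kernel(d / s)                               # kernel weights
        XtW = X.T * w
        phi = np.linalg.solve(XtW @ X, XtW @ y_tilde)   # (a1_hat, s*a2_hat)
        f_hat[j] = phi[0]                               # intercept = f_hat_IN at u0
    return f_hat

# toy usage: recover f from noisy data with gamma assumed known
rng = np.random.default_rng(1)
iota = rng.uniform(-3, 3, 2000)
y_tilde = np.sin(0.5 * iota ** 2) + 0.6 * iota + 1 + rng.normal(0, 0.5, iota.size)
f_hat = local_linear(iota, y_tilde, grid=np.linspace(-2.5, 2.5, 11), s=0.3)
```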
Step 2. Denote $Y_1^{*} = \Gamma^{1/2}C(Y_1 - \iota_{e,1}Y)$, $Z_1^{*} = \Gamma^{1/2}C(Z_1 - \iota_{e,1}Z)$, $Y_t^{*} = C(Y_t - \iota_{e,t}Y) - \rho_2(Y_{t-1} - \iota_{e,t-1}Y)$, and $Z_t^{*} = C(Z_t - \iota_{e,t}Z) - \rho_2(Z_{t-1} - \iota_{e,t-1}Z)$ for $t = 2, \ldots, T$, where $\iota_{e,t}$ is the $t$-th $N \times NT$ block of $\iota_e(\iota)$ in Equation (7). Substituting $\hat f_{IN}(\iota)$ into (6), we obtain
\[
\log L(\delta, \hat f_{IN}(\iota)) = \frac{1}{2}\log|\Gamma| + T\log|C| - \frac{N(T-1)}{2}\log\xi_\epsilon^2 - \frac{N}{2}\log(\xi_\epsilon^2 + T\xi_b^2) - \frac{(Y^{*} - Z^{*}\gamma)'(I - H)(Y^{*} - Z^{*}\gamma)}{2\xi_\epsilon^2} - \frac{(Y^{*} - Z^{*}\gamma)'H(Y^{*} - Z^{*}\gamma)}{2(\xi_\epsilon^2 + T\xi_b^2)}, \qquad (8)
\]
where $Y^{*} = (Y_1^{*\prime}, \ldots, Y_T^{*\prime})'$ and $Z^{*} = (Z_1^{*\prime}, \ldots, Z_T^{*\prime})'$. Therefore, the initial estimators of $\gamma$, $\xi_\epsilon^2$, and $\xi_b^2$ are:
\[
\hat\xi_{\epsilon IN}^{2} = \frac{1}{N(T-1)}\,\tilde Y^{*\prime}(I - H)\tilde Y^{*}, \qquad (9)
\]
\[
\hat\xi_{bIN}^{2} = \frac{1}{NT}\,\tilde Y^{*\prime}H\tilde Y^{*} - \frac{1}{T}\hat\xi_{\epsilon IN}^{2}, \qquad (10)
\]
\[
\hat\gamma_{IN} = \left[Z^{*\prime}\!\left(\frac{I - H}{\hat\xi_{\epsilon IN}^{2}} + \frac{H}{\hat\xi_{\epsilon IN}^{2} + T\hat\xi_{bIN}^{2}}\right)Z^{*}\right]^{-1} Z^{*\prime}\!\left(\frac{I - H}{\hat\xi_{\epsilon IN}^{2}} + \frac{H}{\hat\xi_{\epsilon IN}^{2} + T\hat\xi_{bIN}^{2}}\right)Y^{*}, \qquad (11)
\]
where $\tilde Y^{*} = (\tilde Y_1^{*\prime}, \ldots, \tilde Y_T^{*\prime})'$ and $\tilde Y_t^{*} = Y_t^{*} - Z_t^{*}\gamma$.
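The following minimal Python sketch (hypothetical names; the transformed data of Step 2 stored as arrays of shape (T, N) and (T, N, p)) illustrates (9)–(11): within and between averages over time give the variance components, and a feasible GLS step, weighting the within part by $1/\hat\xi_{\epsilon IN}^2$ and the between part by $1/(\hat\xi_{\epsilon IN}^2 + T\hat\xi_{bIN}^2)$, gives $\hat\gamma_{IN}$.

```python
import numpy as np

def initial_estimates(Y_star, Z_star, gamma_pilot):
    """Y_star: (T, N); Z_star: (T, N, p); gamma_pilot: (p,). Returns (gamma, xi_eps2, xi_b2)."""
    T, N, p = Z_star.shape
    Yt = Y_star - np.einsum("tnp,p->tn", Z_star, gamma_pilot)   # Y~* = Y* - Z* gamma
    # between (H) and within (I - H) parts: time averages per spatial unit
    Yt_b = Yt.mean(axis=0, keepdims=True)
    Yt_w = Yt - Yt_b
    xi_eps2 = np.sum(Yt_w ** 2) / (N * (T - 1))                 # (9)
    xi_b2 = np.sum(Yt_b ** 2) / N - xi_eps2 / T                 # (10)

    w_within = 1.0 / xi_eps2
    w_between = 1.0 / (xi_eps2 + T * xi_b2)
    Zb = np.broadcast_to(Z_star.mean(axis=0, keepdims=True), Z_star.shape)
    Zw = Z_star - Zb
    Yb = np.broadcast_to(Y_star.mean(axis=0, keepdims=True), Y_star.shape)
    Yw = Y_star - Yb
    A = (w_within * np.einsum("tnp,tnq->pq", Zw, Zw)
         + w_between * np.einsum("tnp,tnq->pq", Zb, Zb))
    rhs = (w_within * np.einsum("tnp,tn->p", Zw, Yw)
           + w_between * np.einsum("tnp,tn->p", Zb, Yb))
    gamma = np.linalg.solve(A, rhs)                              # (11)
    return gamma, xi_eps2, xi_b2
```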
Step 3. By substituting (9)–(11) into (8), the concentrated log quasi-likelihood function of ρ 1 and ρ 2 can be calculated as follows:
\[
\log\tilde L(\rho_1, \rho_2) = \frac{1}{2}\log|\Gamma| + T\log|C| - \frac{N(T-1)}{2}\log\hat\xi_{\epsilon IN}^{2} - \frac{N}{2}\log(\hat\xi_{\epsilon IN}^{2} + T\hat\xi_{bIN}^{2}). \qquad (12)
\]
Because Equation (12) is a nonlinear function of $\rho_1$ and $\rho_2$, the corresponding estimators are obtained by a numerical optimization algorithm:
\[
(\hat\rho_1, \hat\rho_2) = \arg\max_{\rho_1, \rho_2} \log\tilde L(\rho_1, \rho_2).
\]
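A minimal sketch of Step 3, assuming a user-supplied function concentrated_loglik(rho1, rho2) that evaluates (12) is available (it depends on the data and on the Step 1 smoother and is not reproduced here): a coarse grid search over the stationarity region followed by a Nelder-Mead refinement.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_rho(concentrated_loglik, W, n_grid=41):
    """Maximize the concentrated log quasi-likelihood (12) over (rho1, rho2)."""
    eig = np.linalg.eigvals(W).real
    w_max, w_min = eig.max(), eig.min()

    def admissible(r1, r2):
        if not (1.0 / w_min < r1 < 1.0 / w_max and abs(r2) < 1):
            return False
        return abs(r2) < 1 - r1 * (w_max if r1 >= 0 else w_min)

    # coarse grid search over the admissible region
    best, best_val = None, -np.inf
    for r1 in np.linspace(1.0 / w_min + 1e-3, 1.0 / w_max - 1e-3, n_grid):
        for r2 in np.linspace(-0.99, 0.99, n_grid):
            if admissible(r1, r2):
                val = concentrated_loglik(r1, r2)
                if val > best_val:
                    best, best_val = (r1, r2), val

    # local refinement; points outside the admissible region are penalized
    neg = lambda p: -concentrated_loglik(*p) if admissible(*p) else np.inf
    res = minimize(neg, x0=np.array(best), method="Nelder-Mead")
    return res.x   # (rho1_hat, rho2_hat), to be plugged back into (9)-(11) and (7)
```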
Step 4. Substituting $\hat\rho_1$ and $\hat\rho_2$ into (9)–(11), we obtain the ultimate estimators of $\gamma$, $\xi_\epsilon^2$, and $\xi_b^2$, denoted $\hat\gamma$, $\hat\xi_\epsilon^2$, and $\hat\xi_b^2$, respectively. Replacing $\gamma$ with $\hat\gamma$ in (7) then yields the ultimate estimator $\hat f(\iota)$ of $f(\iota)$. Hence,
\[
\hat\xi_{\epsilon}^{2} = \frac{1}{N(T-1)}\,\hat{\tilde Y}^{*\prime}(I - H)\hat{\tilde Y}^{*},
\]
\[
\hat\xi_{b}^{2} = \frac{1}{NT}\,\hat{\tilde Y}^{*\prime}H\hat{\tilde Y}^{*} - \frac{1}{T}\hat\xi_{\epsilon}^{2},
\]
\[
\hat\gamma = \left[\hat Z^{*\prime}\!\left(\frac{I - H}{\hat\xi_{\epsilon}^{2}} + \frac{H}{\hat\xi_{\epsilon}^{2} + T\hat\xi_{b}^{2}}\right)\hat Z^{*}\right]^{-1}\hat Z^{*\prime}\!\left(\frac{I - H}{\hat\xi_{\epsilon}^{2}} + \frac{H}{\hat\xi_{\epsilon}^{2} + T\hat\xi_{b}^{2}}\right)\hat Y^{*},
\]
\[
\hat f(\iota) = \iota_e(\iota)\hat{\tilde Y},
\]
where $\hat{\tilde Y}^{*} = (\hat{\tilde Y}_1^{*\prime}, \ldots, \hat{\tilde Y}_T^{*\prime})'$, $\hat{\tilde Y}_t^{*} = \hat Y_t^{*} - \hat Z_t^{*}\hat\gamma$, $\hat Y^{*} = (\hat Y_1^{*\prime}, \ldots, \hat Y_T^{*\prime})'$, $\hat Z^{*} = (\hat Z_1^{*\prime}, \ldots, \hat Z_T^{*\prime})'$, $\hat Y_1^{*} = \hat\Gamma^{1/2}\hat C(Y_1 - \iota_{e,1}Y)$, $\hat Z_1^{*} = \hat\Gamma^{1/2}\hat C(Z_1 - \iota_{e,1}Z)$, $\hat Y_t^{*} = \hat C(Y_t - \iota_{e,t}Y) - \hat\rho_2(Y_{t-1} - \iota_{e,t-1}Y)$, $\hat Z_t^{*} = \hat C(Z_t - \iota_{e,t}Z) - \hat\rho_2(Z_{t-1} - \iota_{e,t-1}Z)$, $\hat\Gamma = I_N - \hat\rho_2^2(\hat C\hat C')^{-1}$, $\hat C = I_N - \hat\rho_1 W$, and $\hat{\tilde Y} = Y - Z\hat\gamma$.

2.3. Hypothesis Test for Nonparametric Component

As outlined in Section 2.2, a hypothesis test is constructed to ensure the rationality of nonparametric model specification (1) as follows:
\[
H_0: f(\iota_{it}) = \zeta_0 + \sum_{j=1}^{q}\zeta_j\iota_{itj} \quad \text{versus} \quad H_1: f(\iota_{it}) \neq \zeta_0 + \sum_{j=1}^{q}\zeta_j\iota_{itj},
\]
where $\zeta_0, \zeta_1, \ldots, \zeta_q$ are constant parameters. Following the generalized F-test statistic introduced by Fan et al. [36], let $\hat f(\iota_{it})$ be the estimator obtained by the PQML method in Section 2.2, and let $\tilde f(\iota_{it}) = \tilde\zeta_0 + \sum_{j=1}^{q}\tilde\zeta_j\iota_{itj}$, where the $\tilde\zeta_j$ are the quasi-maximum likelihood estimators under $H_0$. The residual sums of squares under the null and alternative hypotheses are, respectively,
\[
RSS(H_0) = \sum_{i=1}^{N}\sum_{t=1}^{T}\left[y_{it} - \tilde f(\iota_{it}) - z_{it}\tilde\gamma\right]^2, \qquad RSS(H_1) = \sum_{i=1}^{N}\sum_{t=1}^{T}\left[y_{it} - \hat f(\iota_{it}) - z_{it}\hat\gamma\right]^2.
\]
To test the null hypothesis, the test statistic is constructed as
\[
F_{NT} = \frac{NT}{2}\cdot\frac{RSS(H_0) - RSS(H_1)}{RSS(H_1)}.
\]
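A minimal sketch of the test statistic $F_{NT}$ above, assuming the fitted values under $H_0$ (linear specification) and under $H_1$ (the PQMLE fit) have already been computed; the names are hypothetical. A large value of $F_{NT}$ favors the nonlinear alternative; its null distribution is characterized in Theorem 4 below.

```python
import numpy as np

def generalized_f(y, fitted_h0, fitted_h1):
    """F_NT = (NT/2) * (RSS(H0) - RSS(H1)) / RSS(H1); inputs are flat arrays of length N*T."""
    rss0 = np.sum((y - fitted_h0) ** 2)   # residual sum of squares under H0
    rss1 = np.sum((y - fitted_h1) ** 2)   # residual sum of squares under H1
    return 0.5 * y.size * (rss0 - rss1) / rss1
```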

3. Asymptotic Theory

3.1. Assumptions

Before presenting the theoretical analysis of the proposed PQMLEs and test statistic, we make some regular assumptions.
Assumption 1.
(i) 
Covariates { z i t , ι i t } i = 1 , t = 1 N , T are nonstochastic regressors which are uniformly bounded (UB) in Z × Δ , where Z and Δ are parameter spaces of z i t and ι i t , respectively.
(ii) 
There exist $m_t(\cdot)$ and a continuous function $\varepsilon(\cdot)$ that satisfy $m_t(\cdot) > 0$ and $\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\varepsilon(\iota_i) = \int \varepsilon(\iota)m_t(\iota)\,d\iota$.
Assumption 2.
(i) 
w i i = 0 , w i j = O ( 1 / l N ) , where W = ( w i j ) N × N .
(ii) 
When $N \to \infty$, $l_N/N \to 0$ holds.
(iii) 
C is nonsingular in parameter space Λ.
(iv) 
The row and column sums of W and C 1 are UB.
Assumption 3.
As $N \to \infty$: $S \to 0$, $N|S|^2 \to \infty$, $S^4|S|^{-1} \to 0$, and $N|S|S^4 \to M \in [0, \infty)$, where $M$ is a constant.
Assumption 4.
$K(\cdot)$ is a kernel function on $R^q$ whose odd-order moments are zero.
Assumption 5.
The models (1)–(3) are tenable with δ 0 = ( ρ 10 , ρ 20 , γ 0 , ξ ϵ 0 2 , ξ b 0 2 ) .
Assumption 6.
Σ δ 0 is a positive definite matrix, as defined in Theorem 2.
Remark 1.
Assumption 1 provides the features of the covariates, random error term, and density function. Assumption 2 concerns the basic features of the spatial weights matrix and parallels Assumption 3 in Su and Jin [37] and Assumption 5 in Lee [38]. Assumptions 3 and 4 relate to the bandwidth and kernel functions. Specifically, Assumption 3 is similar to Assumption 4 of Hamilton and Truong [39] and Assumption 7 of Su and Ullah [40] and is easily satisfied by taking $S = \mathrm{diag}(s_1, \ldots, s_q)$ with $s_i \propto N^{-1/(4+q)}$ for $q < 4$. Assumptions 5 and 6 are important for proving the consistency and asymptotic properties of the estimators.

3.2. Asymptotic Properties

The reduced form of $y_{it}$ is obtained as follows:
\[
y_{it} = z_{it}\gamma_0 + f_0(\iota_{it}) + b_i + \nu_{it},
\]
\[
\nu_{it} = \rho_{10}\sum_{j=1}^{N} w_{ij}\nu_{jt} + \rho_{20}\nu_{i,t-1} + \epsilon_{it}.
\]
Note that $\iota_e = (\iota_{e11}, \ldots, \iota_{eNT})'$, where $\iota_{eit} = \iota_e(\iota_{it})$. Denote a typical entry of $\iota_e(\iota)$ by $\iota_e(\iota_{it}, \iota) = e_0'\left[\iota(\iota)'K_S(\iota)\iota(\iota)\right]^{-1}\iota_{it}(\iota)K_S(\iota_{it} - \iota)$, where $\iota_{it}(\iota)$ is a typical column of $\iota(\iota)$.
Lemma 1.
Upon fulfilling Assumptions 1–5, we have ι = O p ( N 1 / 2 | S | 1 / 2 ) .
Hamilton and Truong [39] provided the proof.
Lemma 2.
Upon fulfilling Assumptions 1–5, we have
(i) 
ι e ( ι i t , ι ) = N 1 K S ( ι i t ι ) m ¯ ( ι ) [ 1 + o p ( 1 ) ] , where m ¯ ( ι ) = t = 1 T m t ( ι ) .
(ii) 
l i m N P { N 1 [ ι ( ι ) K S ( ι ) ι ( ι ) ] i j M } = 1 for ι Δ , i , j = 1 , , q + 1 .
(iii) 
l i m N P { s u p ι m a x 1 i N , 1 t T | ι e ( ι i t , ι ) | O ( N T ) 1 | S | 1 } = 1 .
Su and Ullah [41] provided the proofs.
Lemma 3.
Upon fulfilling Assumptions 1–5, we have ( I ι ) ( F t ) N T × N T = O p ( S 2 ) .
Hamilton and Truong [39] provided the proof.
Using the lemmas listed above, we establish the following theorems:
Theorem 1.
Fulfillment of Assumptions 1–5 leads to $\hat\delta - \delta_0 = o_p(1)$.
Theorem 2.
Fulfillment of Assumptions 1–6 leads to
\[
\sqrt{NT}\,(\hat\delta - \delta_0) \xrightarrow{D} N\!\left(0,\ \lim_{N\to\infty}\left(\Sigma_{\delta_0}^{-1} + \Sigma_{\delta_0}^{-1}\Omega_{\delta_0}\Sigma_{\delta_0}^{-1}\right)\right),
\]
where $\Sigma_{\delta_0} = -E\!\left[\frac{1}{NT}\frac{\partial^2\log L(\delta_0)}{\partial\delta\,\partial\delta'}\right]$ and $\Omega_{\delta_0}$ is defined in (A7).
Theorem 3.
Fulfillment of Assumptions 1–4 leads to
\[
\sqrt{N|S|}\,(\hat\phi - \phi) \xrightarrow{D} N\!\left(0, \sigma^2(\phi)\right),
\]
where $\sigma^2(\phi) = \acute G'\iota(\iota)'\sigma_\eta^2\,\iota(\iota)\acute G$ and the form of $\acute G$ is provided in the Appendix.
Theorem 4.
Fulfillment of Assumptions 1–4 leads to
\[
r_k F_{NT} \xrightarrow{D} \chi^2_{df},
\]
where $df = r_k c_k|\Delta|/|S|$, $r_k = \dfrac{K(0) - \tfrac{1}{2}\int K^2(\iota)\,d\iota}{\int\left(K(\iota) - \tfrac{1}{2}K*K(\iota)\right)^2 d\iota}$, $c_k = K(0) - \tfrac{1}{2}\int K^2(\iota)\,d\iota$, $\Delta$ is the support set of $\iota$, and $K(\iota) = \mathrm{diag}\left(S^{-1}k(S^{-1}(\iota_{11} - \iota)), \ldots, S^{-1}k(S^{-1}(\iota_{NT} - \iota))\right)$.
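As a worked illustration of the constants in Theorem 4, the following Python sketch numerically evaluates $r_k$ and $c_k$ for the one-dimensional kernel used later in the simulations (Section 4.1) and the resulting degrees of freedom for an assumed support length and bandwidth; the grid step and the example values of $|\Delta|$ and $s$ are illustrative only, not taken from the paper.

```python
import numpy as np

def kernel(u):
    # kernel used in Section 4.1: (3/(4*sqrt(5)))*(1 - u^2/5) on u^2 <= 5
    return 0.75 / np.sqrt(5) * (1 - u ** 2 / 5) * (u ** 2 <= 5)

du = 1e-3
u = np.arange(-2 * np.sqrt(5), 2 * np.sqrt(5), du)   # wide symmetric grid
K = kernel(u)
KK = np.convolve(K, K, mode="same") * du             # convolution (K*K)(u) on the grid

c_k = kernel(0.0) - 0.5 * np.sum(K ** 2) * du
r_k = c_k / (np.sum((K - 0.5 * KK) ** 2) * du)

support_length, s = 6.0, 0.5                          # illustrative |Delta| and bandwidth
df = r_k * c_k * support_length / s                   # degrees of freedom in Theorem 4
print(round(r_k, 3), round(c_k, 3), round(df, 1))
```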
Remark 2.
This paper focuses on the case with a large N and fixed T. In terms of estimation procedure, there is no restriction on N and T. Therefore, the estimation method can be applied to both fixed and large T cases. However, the asymptotic theory of estimators with both the fixed T and large T cases may be different.

4. Monte Carlo Simulation

4.1. Performance of PQMLEs

This section uses Monte Carlo simulation to examine the performance of the PQMLEs in finite samples. The mean squared error (MSE), mean absolute deviation error (MADE), and standard deviation (SD) are used as evaluation criteria, where
\[
\mathrm{MADE}_j = Q^{-1}\sum_{q=1}^{Q}\left|\hat f(\iota_q) - f(\iota_q)\right|, \quad j = 1, 2, \ldots, srt,
\]
$\{\iota_q\}_{q=1}^{Q}$ are fixed nodes in the support set of $\iota$, and $srt$ denotes the number of replications. In the subsequent simulations, the kernel is $K(\iota) = \frac{3}{4\sqrt{5}}\left(1 - \frac{1}{5}\iota^2\right)1(\iota^2 \le 5)$ (Su [42]), the rule-of-thumb method (Mack and Silverman [43]) is used to select the bandwidth, and $srt$ is set to 1000.
The simulation data are generated using the following model:
\[
y_{it} = z_{it}\gamma + f(\iota_{it}) + b_i + \nu_{it}, \qquad \nu_{it} = \rho_1\sum_{j=1}^{N}w_{ij}\nu_{jt} + \rho_2\nu_{i,t-1} + \epsilon_{it}, \quad i = 1, \ldots, N,\ t = 1, \ldots, T, \qquad (21)
\]
where $N$ and $T$ are the numbers of spatial units and time periods, respectively. To study the performance of the PQMLEs in different finite-sample cases, we take $N = 40, 60, 80$ and $T = 10, 15, 20$. We set $z_{it} \sim U[-2, 2]$, $\iota_{it} \sim U[-3, 3]$, $b_i \overset{i.i.d.}{\sim} N(0, 0.5)$, $\epsilon_{it} \overset{i.i.d.}{\sim} N(0, 0.5)$, $f(\iota_{it}) = \sin(\frac{1}{2}\iota_{it}^2) + 0.6\iota_{it} + 1$, and $\gamma = 1$. According to the parameter space of $\rho_1$ and $\rho_2$, we set $\rho_1 = 0.2, 0.7$ and $\rho_2 = 0.4, 0.5$. $W = (w_{ij})_{N\times N}$ is taken to be the Rook weights matrix (Anselin [22]). The simulation results are reported in Figure 1 and Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7.
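The following minimal sketch (not the authors' code) generates one replication from model (21); a simple row-normalized neighbour matrix stands in for the Rook weights matrix, and a burn-in period is used so that $\nu_t$ starts near its stationary distribution.

```python
import numpy as np

def simulate(N, T, rho1, rho2, gamma=1.0, burn=50, seed=0):
    rng = np.random.default_rng(seed)
    # illustrative weights matrix: adjacent units on a line, row-normalized
    A = (np.abs(np.subtract.outer(np.arange(N), np.arange(N))) == 1).astype(float)
    W = A / A.sum(axis=1, keepdims=True)
    C_inv = np.linalg.inv(np.eye(N) - rho1 * W)

    z = rng.uniform(-2, 2, size=(T, N))
    iota = rng.uniform(-3, 3, size=(T, N))
    b = rng.normal(0, np.sqrt(0.5), size=N)                 # individual random effects
    f = np.sin(0.5 * iota ** 2) + 0.6 * iota + 1            # nonparametric component

    nu = np.zeros(N)
    Y = np.empty((T, N))
    for t in range(-burn, T):
        eps = rng.normal(0, np.sqrt(0.5), size=N)
        nu = C_inv @ (rho2 * nu + eps)      # (I - rho1*W) nu_t = rho2*nu_{t-1} + eps_t
        if t >= 0:
            Y[t] = gamma * z[t] + f[t] + b + nu
    return Y, z, iota, W

Y, z, iota, W = simulate(N=40, T=10, rho1=0.2, rho2=0.7)
```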
Figure 1 shows the trajectories of $\hat f(\iota)$ and its 95% confidence intervals for $\rho_1 = 0.2$ and $\rho_2 = 0.7$. In Figure 1, the green solid, red solid, and black dashed curves denote $\hat f(\iota)$, $f(\iota)$, and the corresponding 95% confidence intervals, respectively. Examining each curve and subgraph, $\hat f(\iota)$ is clearly quite close to $f(\iota)$ and its confidence band is tight, indicating that the PQMLE of the nonparametric component works effectively in finite samples.
In Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, we observe the following: First, the MSEs of $\hat\rho_1$, $\hat\rho_2$, and $\hat\gamma$ are extremely small and decrease as $N$ increases. The MSEs of $\hat\xi_\epsilon^2$ and $\hat\xi_b^2$ are slightly larger than those of the other parameter estimates but decrease as $N$ or $T$ increases. When $N$ is fixed and $T$ gradually increases, the behavior of the parameter estimates resembles that of varying $N$ under a fixed $T$. Second, the medians and SDs of the MADEs of $\hat f(\iota)$ decrease as $T$ or $N$ increases. Thus, the PQMLEs are convergent.

4.2. Monte Carlo Simulation of Multivariate Nonparametric Estimates

A two-dimensional nonparametric function is selected for this simulation experiment. Considering model (21), we set $f(\iota_{it}) = f(\iota_{it1}, \iota_{it2}) = \sin(\iota_{it1}^2 + \iota_{it2}^2 + 4\iota_{it1})$, $\iota_{it1} \sim U[0, 1]$, $\iota_{it2} \sim U[0, 1]$, $\rho_1 = 0.2$, $\rho_2 = 0.7$, $N = 40$, and $T = 10$. The settings of the other variables are the same as in Section 4.1. The parametric estimation results are similar to those in Section 4.1; to save space, only the nonparametric simulation results are presented. Figure 2 and Figure 3 show the trajectories of $f(\iota)$ and $\hat f(\iota)$, respectively. The median and SD values of $\hat f(\iota)$ are 0.2987 and 0.1328, respectively. The estimated trajectory is similar to the true one, and the fitting performance is acceptable.

4.3. Simulation Results of Different Models

To examine the necessity of including $f(\iota)$, $\rho_1$, and $\rho_2$ in model (21), four new models are established by successively eliminating $f(\iota)$; $\rho_1$; $\rho_2$; and both $\rho_1$ and $\rho_2$ from model (21), as follows:
\[
y_{it} = z_{it}\gamma + b_i + \nu_{it}, \qquad \nu_{it} = \rho_1\sum_{j=1}^{N}w_{ij}\nu_{jt} + \rho_2\nu_{i,t-1} + \epsilon_{it}, \qquad (22)
\]
\[
y_{it} = z_{it}\gamma + f(\iota_{it}) + b_i + \nu_{it}, \qquad \nu_{it} = \rho_2\nu_{i,t-1} + \epsilon_{it}, \qquad (23)
\]
\[
y_{it} = z_{it}\gamma + f(\iota_{it}) + b_i + \nu_{it}, \qquad \nu_{it} = \rho_1\sum_{j=1}^{N}w_{ij}\nu_{jt} + \epsilon_{it}, \qquad (24)
\]
\[
y_{it} = z_{it}\gamma + f(\iota_{it}) + b_i + \epsilon_{it}, \qquad (25)
\]
where all variables of the above four models have the same settings as in model (21). To save space, the following simulation results only present the cases $\rho_1 = 0.4$, $\rho_2 = 0.5$, $N = 40$, $T = 10, 15$, and $srt = 1000$. The PQMLE results for the parameters of models (21)–(25) are provided in Table 8.
Table 8 displays the parametric estimation results for models (21)–(25), where GRM represents the growth rate of the MSE relative to the MSE under model (21). From Table 8, the following facts can be deduced: (1) The Means of the parameter estimates in model (21) move closer to the true values as the sample size increases. The Means of the parameter estimates in models (22)–(25) also approach the true values with increasing sample size, except for $\hat\rho_1$, $\hat\rho_2$, and $\hat\xi_b^2$ in model (22), $\hat\gamma$ and $\hat\xi_b^2$ in model (23), $\hat\gamma$ and $\hat\xi_b^2$ in model (24), and all the parameters in model (25). (2) Compared with model (21), the MSEs of each parameter in models (22)–(25) increase, and for almost all parameter estimates they decrease with increasing sample size, except for $\hat\rho_2$ and $\hat\xi_b^2$ in model (22), $\hat\xi_b^2$ in model (23), and $\hat\rho_1$ in model (24). (3) The GRMs of all parameter estimates are large; however, the GRMs do not decrease with increasing sample size for some parameter estimates, such as $\hat\rho_2$ and $\hat\xi_b^2$ in model (22), $\hat\xi_\epsilon^2$ and $\hat\xi_b^2$ in model (23), and $\hat\rho_1$ and $\hat\xi_b^2$ in model (24). This implies that the consistency of the parameter estimates is difficult to ensure when the model is misspecified.

4.4. Performance of the Testing

In this section, to assess the performance of $F_{NT}$ in Section 2.3, its empirical power is calculated. Consider model (21), where the settings of $z_{it}$, $b_i$, $w_{ij}$, and $\epsilon_{it}$ remain unchanged from Section 4.1, and $\rho_1 = 0.2$, $\rho_2 = 0.2$, $N = 80$, $T = 8, 16$, and $srt = 1000$. We study the following hypotheses:
\[
H_0: f(\iota_{it}) = \zeta_0 + \zeta_1\iota_{it} \quad \text{versus} \quad H_1: f(\iota_{it}) \neq \zeta_0 + \zeta_1\iota_{it},
\]
where $f(\iota_{it})$ denotes a family of nonparametric functions $f(\iota_{it}) = \tau\sin(3\iota_{it}^2) + 0.6\iota_{it} + 1$ with $\iota_{it} \sim U[0.2, 1.6]$ and $\tau$ set to $(0, 0.05, 0.1, 0.2, 0.5)$. This $f(\iota_{it})$ is a combination of a nonparametric component controlled by $\tau$ and a linear function; when $\tau = 0$, $f(\iota_{it})$ is linear, and as $\tau$ increases slightly, $f(\iota_{it})$ becomes nonlinear. The trajectories of $f(\iota)$ at the different $\tau$ values are shown in Figure 4, where the trajectory at $\tau = 0$ clearly coincides with the null hypothesis and departs from it as $\tau$ increases. The powers of $F_{NT}$ are shown in Figure 5.
In Figure 5, (1) when $\tau = 0$, the empirical size approaches the nominal significance level, and the power surges as $\tau$ increases. This implies that $F_{NT}$ is sensitive to the alternative hypothesis of the given testing problem. (2) For the same $N$, testing at $T = 16$ outperforms testing at $T = 8$, demonstrating that testing performance improves with increasing sample size.

5. Real Data Analysis

In this section, the proposed estimation and testing methods are demonstrated on the Indonesian rice farm dataset. This dataset was collected by the Agricultural Economic Research Center of the Ministry of Agriculture of Indonesia. It covers 171 farms over six growing seasons (three wet and three dry) and has been widely used in random effects models (Feng and Horrace [44]).
The dataset includes observations on five variables: rice yield, high-yield varieties, mixed-yield varieties, seed weight, and land area. Rice yield is taken as the response variable and the others as covariates; their definitions are given in Table 9.
The testing method proposed in Section 2.3 is used to determine whether a nonlinear relationship exists between the covariates and the response variable. The outcomes of the F-test are listed in Table 10, which shows that, at a significance level of $\alpha = 0.01$, land area exhibits a significant nonlinear relationship with rice yield, whereas the other covariates enter linearly.
Therefore, the following model is established for the data analysis:
\[
y_{it} = z_{it}\gamma + f(\iota_{it}) + b_i + \nu_{it}, \qquad \nu_{it} = \rho_1\sum_{j=1}^{N}w_{ij}\nu_{jt} + \rho_2\nu_{i,t-1} + \epsilon_{it}, \quad 1 \le i \le 171,\ 1 \le t \le 6,
\]
where $y_{it}$, $z_{it}$, and $\iota_{it}$ denote the $it$-th observations of log(rice yield), of the high-yield variety, mixed-yield variety, and seed weight covariates, and of land area during the $t$-th growing season, respectively. Furthermore, $w_{ij}$ is determined using the following method (Druska and Horrace [45]):
\[
w_{ij} = \begin{cases} 1, & \text{if } i \neq j, \\ 0, & \text{if } i = j, \end{cases}
\]
where $i$ and $j$ denote the $i$-th and $j$-th villages, respectively. Moreover, the weights matrix is row-normalized so that the elements of each row sum to 1. Additionally, the weights matrix is assumed to be time invariant, so the $t$ subscript can be dropped.
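A minimal sketch of the weights matrix used in the empirical analysis: equal weights off the diagonal as in the display above, followed by row normalization so that each row sums to one. The cross-sectional dimension of 171 matches the number of farms in the sample; the function name is hypothetical.

```python
import numpy as np

def uniform_row_normalized_weights(n):
    W = np.ones((n, n)) - np.eye(n)          # w_ij = 1 for i != j, w_ii = 0
    return W / W.sum(axis=1, keepdims=True)  # row-normalize: each row sums to 1

W = uniform_row_normalized_weights(171)      # one row/column per cross-sectional unit
```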
Table 11 shows the parametric estimation results, which reveal the following: (1) The coefficient estimates of all linear covariates are positive ($\hat\gamma_i > 0$, $i = 1, 2, 3$), so they have positive impacts on rice yield. (2) $\hat\xi_\epsilon^2 = 0.3140$ indicates that rice yield is generally steady and not very susceptible to exogenous perturbations.
Figure 6 presents $\hat f(\iota)$ and its 95% confidence interval, where the blue short-dashed curve denotes $\hat f(\iota)$, and the red and green solid curves correspond to the 95% confidence bands. Clearly, land area has a nonlinear effect on rice yield.

6. Summary

This study mainly explores the PQMLE and generalized F-test for a RESPM with a serially and spatially correlated nonseparable error. Our model simultaneously captures the linear and nonlinear effects of the covariates, the spatial and serial correlation of the errors, and individual random effects. PQMLEs of the unknowns and the test statistic $F_{NT}$ are constructed to determine the existence of nonlinear relationships, and their asymptotic properties are derived under regular conditions. The Monte Carlo results indicate that the proposed estimators and test statistic behave well in finite samples and that ignoring the spatial and serial correlations of the errors may lead to inefficient and biased estimators. The proposed estimation and testing techniques are used to analyze Indonesian rice-farming data.
The paper can be extended in several ways. A spline or local polynomial method can be applied to approximate the nonparametric function, combined with GMM (Cheng and Chen [46]), quadratic inference function estimation (Qu et al. [47]), or PQMLE to obtain estimators of the unknowns. These methods can also be extended to other similar models, and Bayesian analysis, variable selection, and quantile regression for such models are also worth studying. In the future, the proposed estimation and testing methods may be used for empirical analyses, for example, investigating the drivers of CO2, PM2.5, and energy efficiency.

Author Contributions

Formal analysis, S.L. and D.C.; methodology, S.L. and J.C.; software and writing—original draft, S.L.; supervision, writing—review and editing, and funding acquisition, J.C.; data curation, D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Social Science Fund of China (22BTJ024) and Natural Science Foundation of Fujian Province (2020J01170, 2022J01193).

Data Availability Statement

Data is available upon request from the authors.

Acknowledgments

The authors thank the editors and reviewers for their hard work, which has contributed greatly to improving the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 1.
Define $Q_{NT} = \max_{\gamma, \xi_\epsilon^2} E[\log L(\delta, \hat f_{IN}(\iota))]$. A straightforward calculation provides
\[
Q_{NT}(\rho_1, \rho_2) = \frac{1}{2}\log|\Gamma| + T\log|C| - \frac{N(T-1)}{2}E\left[\log\hat\xi_{\epsilon IN}^2\right] - \frac{N}{2}E\left[\log(\hat\xi_{\epsilon IN}^2 + T\hat\xi_{bIN}^2)\right]
\]
and
\[
\log\tilde L(\rho_1, \rho_2) = \frac{1}{2}\log|\Gamma| + T\log|C| - \frac{N(T-1)}{2}\log\hat\xi_{\epsilon IN}^2 - \frac{N}{2}\log(\hat\xi_{\epsilon IN}^2 + T\hat\xi_{bIN}^2).
\]
The method for proving the consistency of $\hat\delta$ is adopted from Lee [38] and White [48] (Theorem 3.4); it suffices to show that
\[
\frac{1}{NT}\left[\log\tilde L(\rho_1, \rho_2) - Q_{NT}(\rho_1, \rho_2)\right] = o_p(1). \qquad (A1)
\]
By (11), we have 1 N T [ l o g L ( ρ 1 , ρ 2 ) Q n ( ρ 1 , ρ 2 ) ] = T 1 2 T [ l o g ξ ^ ϵ I N 2 l o g E ( ξ ^ ϵ I N 2 ) ] + 1 2 T [ l o g ( ξ ^ ϵ I N 2 + T ξ ^ b I N 2 ) E ( l o g ( ξ ^ ϵ I N 2 + T ξ ^ b I N 2 ) ) ] .
Therefore, to prove (A1), we only need to prove that
\[
\hat\xi_{\epsilon IN}^2 - E\left(\hat\xi_{\epsilon IN}^2\right) = o_p(1) \qquad (A2)
\]
and
\[
\hat\xi_{bIN}^2 - E\left(\hat\xi_{bIN}^2\right) = o_p(1). \qquad (A3)
\]
First, we prove that (A2) holds. Recall
\[
\hat\xi_{\epsilon IN}^{2} = \frac{1}{N(T-1)}\,\tilde Y^{*\prime}(I - H)\tilde Y^{*},
\]
where $\tilde Y^{*} = (\tilde Y_1^{*\prime}, \ldots, \tilde Y_T^{*\prime})'$, $\tilde Y_1^{*} = \Gamma^{1/2}C(Y_1 - \iota_{e,1}Y) - \Gamma^{1/2}C(Z_1 - \iota_{e,1}Z)\gamma$, and $\tilde Y_t^{*} = C(Y_t - \iota_{e,t}Y) - \rho_2(Y_{t-1} - \iota_{e,t-1}Y) - \left[C(Z_t - \iota_{e,t}Z) - \rho_2(Z_{t-1} - \iota_{e,t-1}Z)\right]\gamma$ $(t = 2, \ldots, T)$. By a straightforward calculation, we obtain
\[
\hat\xi_{\epsilon IN}^{2} = D_1 - D_2,
\]
where $D_1 = \frac{1}{N(T-1)}\tilde Y^{*\prime}\tilde Y^{*}$ and $D_2 = \frac{1}{N(T-1)}\tilde Y^{*\prime}H\tilde Y^{*}$.
For D 1 ,
D 1 = 1 N ( T 1 ) Y ˜ 1 Y ˜ 1 + 1 N ( T 1 ) t = 2 T Y ˜ t Y ˜ t .
For the first term of (A4), we have
1 N ( T 1 ) Y ˜ 1 Y ˜ 1 = [ Γ 1 / 2 C ( Y 1 ι e , 1 Y ) Γ 1 / 2 C ( Z 1 ι e , 1 Z ) ] [ Γ 1 / 2 C ( Y 1 ι e , 1 Y ) Γ 1 / 2 C ( Z 1 ι e , 1 Z ) ] N ( T 1 ) = [ Γ 0 1 2 C 0 F 10 + η 1 Γ 0 1 2 C 0 ι e , 1 ( Y Z γ ) ] N ( T 1 ) [ Γ 0 1 2 C 0 F 10 + η 1 Γ 0 1 2 C 0 ι e , 1 ( Y Z γ ) ] N ( T 1 )
and
ι e , 1 Y = ι e , 1 F 10 + Z 1 γ 0 + C 0 1 Γ 0 1 2 η 10 F 20 + Z 2 γ 0 + ρ 20 C 0 2 Γ 0 1 2 η 10 + C 0 1 η 20 F 30 + Z 3 γ 0 + ρ 20 2 C 0 3 Γ 0 1 2 η 10 + ρ 20 C 0 2 η 20 + C 0 1 η 30 F T 0 + Z T γ 0 + ρ 20 T 1 C 0 T Γ 0 1 2 η 10 + ρ 20 T 2 C 0 T + 1 η 20 + + C 0 1 η T 0 ,
then
1 N ( T 1 ) Y ˜ 1 Y ˜ 1 = D 11 + D 12 + D 13 ,
where
D 11 = 1 N ( T 1 ) ( Γ 0 1 2 C 0 F 10 Γ 0 1 2 C 0 ι e , 1 F 0 ) ( Γ 0 1 2 C 0 F 10 Γ 0 1 2 C 0 ι e , 1 F 0 ) , D 12 = 1 N ( T 1 ) ( η 10 C 0 1 Γ 0 1 2 η 10 ρ 20 C 0 2 Γ 0 1 2 e 10 + C 0 1 η 20 ρ 20 2 C 0 3 Γ 0 1 2 η 10 + ρ 20 C 0 2 η 20 + C 0 1 η 30 ρ 20 T 1 C 0 T Γ 0 1 2 η 10 + ρ 20 T 2 C 0 T + 1 η 20 + + C 0 1 η T 0 ) × ( Γ 0 1 2 C 0 F 10 Γ 0 1 2 C 0 ι e , 1 F 0 ) ,
D 13 = 1 N ( T 1 ) ( η 10 B 0 1 Γ 0 1 2 η 10 ρ 20 C 0 2 Γ 0 1 2 η 10 + C 0 1 η 20 ρ 20 2 C 0 3 Γ 0 1 2 η 10 + ρ 20 C 0 2 η 20 + C 0 1 η 30 ρ 20 T 1 C 0 T Γ 0 1 2 η 10 + ρ 20 T 2 C 0 T + 1 η 20 + + C 0 1 η T 0 ) × ( η 10 C 0 1 Γ 0 1 2 η 10 ρ 20 C 0 2 Γ 0 1 2 η 10 + C 0 1 η 20 ρ 20 2 C 0 3 Γ 0 1 2 η 10 + ρ 20 C 0 2 η 20 + C 0 1 η 30 ρ 20 T 1 C 0 T Γ 0 1 2 η 10 + ρ 20 T 2 C 0 T + 1 η 20 + + C 0 1 η T 0 ) ,
and F 0 = ( F 10 , , F T 0 ) .
For the above expression of $D_{11}$, we find that the first term converges in probability to its mean.
The above expression for D 12 can be decomposed as follows:
D 12 = D 121 D 122 D 123 + D 124 ,
where D 121 = 1 N ( T 1 ) η 10 Γ 0 1 2 C 0 F 10 , D 122 = 1 N ( T 1 ) η 10 Γ 0 1 2 C 0 ι e , 1 F 0 ,
D 123 = 1 N ( T 1 ) ( Γ 0 1 2 C 0 ι e , 1 C 0 1 Γ 0 1 2 η 10 ρ 20 C 0 2 Γ 0 1 2 e 10 + C 0 1 η 20 ρ 20 2 C 0 3 Γ 0 1 2 η 10 + ρ 20 C 0 2 η 20 + C 0 1 η 30 ρ 20 T 1 C 0 T Γ 0 1 2 η 10 + ρ 20 T 2 C 0 T + 1 η 20 + + C 0 1 η T 0 ) Γ 0 1 2 C 0 F 10 ,
D 124 = 1 N ( T 1 ) ( Γ 0 1 2 C 0 ι e , 1 C 0 1 Γ 0 1 2 η 10 ρ 20 C 0 2 Γ 0 1 2 η 10 + C 0 1 η 20 ρ 20 2 C 0 3 Γ 0 1 2 η 10 + ρ 20 C 0 2 η 20 + C 0 1 η 30 ρ 20 T 1 C 0 T Γ 0 1 2 η 10 + ρ 20 T 2 C 0 T + 1 η 20 + + C 0 1 η T 0 ) Γ 0 1 2 C 0 ι e , 1 F 0 .
By simple calculations and Lemma 1, we obtain
E ( D 121 ) = 0
and
V a r ( D 121 ) = 1 N 2 ( T 1 ) 2 v a r ( η 10 ) ( Γ 0 1 2 C 0 F 10 ) ( Γ 0 1 2 C 0 F 10 ) = O ( 1 ) .
According to Chebyshev's law of large numbers, $D_{121} - E[D_{121}] = o_p(1)$. Similarly, we obtain $D_{12i} - E[D_{12i}] = o_p(1)$ ($i = 2, 3, 4$). Therefore, $D_{12} - E[D_{12}] = o_p(1)$ is proven, and thus $D_1 - E[D_1] = o_p(1)$. The proof of $D_2 - E[D_2] = o_p(1)$ is similar to that of $D_1 - E[D_1] = o_p(1)$ and is omitted to avoid repetition. Combined with Assumption 5, the consistency of $\hat\delta$ is obtained. □
Proof of Theorem 2.
Applying a Taylor expansion to the first-order condition of (8) around $\delta_0$, we obtain
\[
\frac{\partial\log L(\delta)}{\partial\delta}\bigg|_{\delta=\delta_0} + \frac{\partial^2\log L(\delta)}{\partial\delta\,\partial\delta'}\bigg|_{\delta=\tilde\delta}\,(\hat\delta - \delta_0) = 0,
\]
where $\tilde\delta = (\tilde\rho_1, \tilde\rho_2, \tilde\gamma, \tilde\xi_\epsilon^2, \tilde\xi_b^2)$ lies between $\hat\delta$ and $\delta_0$. By Theorem 1, we have $\tilde\delta \xrightarrow{P} \delta_0$.
Denote
\[
\frac{\partial\log L(\delta_0)}{\partial\delta} \triangleq \frac{\partial\log L(\delta)}{\partial\delta}\bigg|_{\delta=\delta_0}, \qquad \frac{\partial^2\log L(\delta_0)}{\partial\delta\,\partial\delta'} \triangleq \frac{\partial^2\log L(\delta)}{\partial\delta\,\partial\delta'}\bigg|_{\delta=\delta_0}, \qquad \frac{\partial^2\log L(\tilde\delta)}{\partial\delta\,\partial\delta'} \triangleq \frac{\partial^2\log L(\delta)}{\partial\delta\,\partial\delta'}\bigg|_{\delta=\tilde\delta}.
\]
Thus,
\[
\sqrt{NT}\,(\hat\delta - \delta_0) = \left(-\frac{1}{NT}\frac{\partial^2\log L(\tilde\delta)}{\partial\delta\,\partial\delta'}\right)^{-1}\frac{1}{\sqrt{NT}}\frac{\partial\log L(\delta_0)}{\partial\delta}.
\]
Next, we need to prove
\[
\frac{1}{NT}\frac{\partial^2\log L(\tilde\delta)}{\partial\delta\,\partial\delta'} - \frac{1}{NT}\frac{\partial^2\log L(\delta_0)}{\partial\delta\,\partial\delta'} = o_p(1) \qquad (A5)
\]
and
\[
\frac{1}{\sqrt{NT}}\frac{\partial\log L(\delta_0)}{\partial\delta} \xrightarrow{D} N\left(0, \Sigma_{\delta_0} + \Omega_{\delta_0}\right). \qquad (A6)
\]
To prove (A5), we need to show that each element of $\frac{1}{NT}\frac{\partial^2\log L(\tilde\delta)}{\partial\delta\,\partial\delta'} - \frac{1}{NT}\frac{\partial^2\log L(\delta_0)}{\partial\delta\,\partial\delta'}$ converges to 0 in probability. It can be shown that
1 N T l o g L ( δ 0 ) ρ 1 = ρ 20 2 N T t r ( Γ 0 1 C 0 3 W ) T N t r ( C 0 1 W ) + η A 4 W ( I H ) A 1 η N T ξ ϵ 0 2 + η 1 W Γ 0 1 2 H Γ 0 1 2 C 0 η 1 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) + ρ 20 2 η 1 C 0 [ Γ 0 1 2 ( C 0 C 0 ) 2 W C 0 ] H Γ 0 1 2 C 0 η 1 + η A 4 W H A 1 η N T ( ξ ϵ 0 2 + T ξ b 0 2 ) + η 1 W Γ 0 1 2 ( I H ) Γ 0 1 2 C 0 η 1 + ρ 20 2 η 1 C 0 [ Γ 0 1 2 ( C 0 C 0 ) 2 W C 0 ] ( I H ) Γ 0 1 2 C 0 η 1 N T ξ ϵ 0 2 , 1 N T l o g L ( δ 0 ) ρ 2 = ρ 20 N T t r ( Γ 0 1 C 0 2 ) + ρ 20 η 1 C 0 Γ 0 1 2 ( C 0 C 0 ) 1 H Γ 0 1 2 C 0 η 1 + η A 2 A 3 H A 1 η N T ( ξ ϵ 0 2 + T ξ b 0 2 ) + ρ 20 η 1 C 0 Γ 0 1 2 ( C 0 C 0 ) 1 ( I H ) Γ 0 1 2 C 0 η 1 + η A 2 A 3 ( I H ) A 1 η N T ξ ϵ 0 2 ,
1 N T l o g L ( δ 0 ) γ = Z 1 C 0 Γ 0 1 2 H Γ 0 1 2 C 0 η 1 + Z ( C ˜ A 1 ρ 20 A 2 ) H A 1 η N T ( ξ ϵ 0 2 + T ξ b 0 2 ) + Z ( C ˜ A 1 ρ 20 A 2 ) ( I H ) A 1 η + Z 1 C 0 Γ 0 1 2 ( I H ) Γ 0 1 2 C 0 η 1 N T ξ ϵ 0 2 , 1 N T l o g L ( δ 0 ) ξ ϵ 2 = N ( T 1 ) 2 T ξ ϵ 0 2 N 2 T ( ξ ϵ 0 2 + T ξ b 0 2 ) + η 1 C 0 Γ 0 1 2 H Γ 0 1 2 C 0 η 1 2 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + η 1 C 0 Γ 0 1 2 ( I H ) Γ 0 1 2 C 0 η 1 2 N T ξ ϵ 0 2 + η A 1 H A 1 η 2 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + η A 1 ( I H ) A 1 η 2 N T ξ ϵ 0 4 , 1 N T l o g L ( δ 0 ) ξ b 2 = T 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + T η 1 C 0 Γ 0 1 2 H Γ 0 1 2 C 0 η 1 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + T η A 1 H A 1 η 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 ,
where C ˜ = I T C and I T is T × T identity matrix.
Consequently, we have
1 N T 2 l o g L ( δ 0 ) ρ 1 2 = ( 1 N T + 11 ρ 20 2 N T + 2 ρ 20 2 ξ b 0 2 N ) t r [ ( C 0 1 W ) 2 ] + o ( 1 ) , 1 N T 2 l o g L ( δ 0 ) ρ 1 ρ 2 = ρ 20 2 N T t r ( C 0 1 W ) 2 ρ 20 N T t r ( C 0 1 W ) + o ( 1 ) , 1 N T 2 l o g L ( δ 0 ) ρ 1 γ = 0 , 1 N T 2 l o g L ( δ 0 ) ρ 1 ξ ϵ 2 = t r ( Γ 0 C 0 W ) N T ( ξ ϵ 0 2 + T ξ b 0 2 ) ρ 20 2 N T t r ( C 0 1 W ) + o ( 1 ) , 1 N T 2 l o g L ( δ 0 ) ρ 1 ξ b 2 = t r ( Γ 0 C 0 W ) + ρ 20 2 t r ( C 0 1 W ) + t r ( A 4 W ˜ ) N ( ξ ϵ 0 2 + T ξ b 0 2 ) + o ( 1 ) , 1 N T 2 l o g L ( δ 0 ) ρ 2 2 = ( 1 T ) ξ b 0 2 ξ ϵ 0 2 + o ( 1 ) , 1 N T 2 l o g L ( δ 0 ) ρ 2 γ = 0 , 1 N T 2 l o g L ( δ 0 ) ρ 2 ξ ϵ 2 = 1 T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 ρ 20 ( ξ ϵ 0 2 + T ξ b 0 2 ) ξ ϵ 4 + o ( 1 ) ,
1 N T 2 l o g L ( δ 0 ) ρ 2 ξ b 2 = 1 ξ ϵ 0 2 + T ξ b 0 2 , 1 N T 2 l o g L ( δ 0 ) γ ξ ϵ 2 = 0 , 1 N T 2 l o g L ( δ 0 ) γ ξ b 2 = 0 , 1 N T 2 l o g L ( δ 0 ) γ γ = Z 1 C 0 Γ 0 1 2 H Γ 0 1 2 C 0 Z 1 Z ( C ˜ 0 A 1 ρ 20 A 2 ) H ( C ˜ 0 A 1 ρ 20 A 2 ) Z N T ( ξ ϵ 0 2 + T ξ b 0 2 ) Z 1 C 0 Γ 0 1 2 ( I H ) Γ 0 1 2 C 0 Z 1 + Z ( C ˜ 0 A 1 ρ 20 A 2 ) ( I H ) ( C ˜ 0 A 1 ρ 20 A 2 ) Z N T ξ ϵ 0 2 , 1 N T 2 l o g L ( δ 0 ) ξ ϵ 2 ξ ϵ 2 = 1 2 ξ ϵ 0 4 , 1 N T 2 l o g L ( δ 0 ) ξ ϵ 2 ξ b 2 = 1 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + o ( 1 ) , 1 N T 2 l o g L ( δ 0 ) ξ b 2 ξ b 2 = T 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + o ( 1 ) ,
Therefore, according to Assumption 2(iv) and Fact 2 in Lee [38], we can show that
1 N T 2 l o g L 1 ( δ 0 ) ρ 1 2 1 N T 2 l o g L 1 ( δ ˜ ) ρ 1 2 = 11 N T { ρ 20 2 t r [ ( C 0 1 W ) 2 ] ρ 20 2 t r [ ( C 1 W ) 2 ] + ρ 20 2 t r [ ( C 1 W ) 2 ρ 2 2 t r [ ( C 1 W ) 2 } + { t r [ ( C 0 1 W ) 2 ] t r [ ( C 1 ( ρ ˜ 1 ) W ) 2 ] } N T + { ρ 20 2 ξ b 0 2 t r [ ( C 0 1 W ) 2 ] ρ 2 2 ξ b 0 2 t r [ ( C 0 1 W ) 2 ] } N + { ρ 2 2 ξ b 0 2 t r [ ( C 0 1 W ) 2 ] ρ 2 2 ξ b 2 t r [ ( C 0 1 W ) 2 ] } N + { ρ 2 2 ξ b 2 t r [ ( C 0 1 W ) 2 ] ρ 2 2 ξ b 2 t r [ ( C 1 W ) 2 ] } N = o p ( 1 ) .
Then, we have
\[
\frac{1}{NT}\frac{\partial^2\log L(\tilde\delta)}{\partial\delta\,\partial\delta'} - \frac{1}{NT}\frac{\partial^2\log L(\delta_0)}{\partial\delta\,\partial\delta'} = o_p(1).
\]
Now, to prove (A6): from Assumption 1 and Theorem 1 in Kelejian and Prucha [25], it follows that $\frac{1}{\sqrt{NT}}\frac{\partial\log L(\delta_0)}{\partial\delta}$ is asymptotically normally distributed with mean 0. Its variance is calculated as follows:
\[
Var\!\left(\frac{1}{\sqrt{NT}}\frac{\partial\log L(\delta_0)}{\partial\delta}\right) = E\!\left(\frac{1}{NT}\frac{\partial\log L(\delta_0)}{\partial\delta}\cdot\frac{\partial\log L(\delta_0)}{\partial\delta'}\right) = E\!\left(-\frac{1}{NT}\frac{\partial^2\log L(\delta_0)}{\partial\delta\,\partial\delta'}\right) + \Omega_{\delta_0},
\]
where
1 N T 2 l o g L ( δ 0 ) δ δ = L 11 L 12 L 13 L 14 L 15 L 22 L 23 L 24 L 25 L 33 L 34 L 35 L 44 L 45 L 55 + o p ( 1 ) Σ δ 0 + o p ( 1 ) ,
Ω δ 0 = Π 11 Π 12 Π 13 Π 14 Π 15 Π 22 Π 23 Π 24 Π 25 Π 33 Π 34 Π 35 Π 44 Π 45 Π 55 ,
L 11 = ( 1 N T + 11 ρ 20 2 N T + 2 ρ 20 2 ξ b 0 2 N ) t r [ ( C 0 1 W ) 2 ] , L 12 = ρ 20 2 + 2 ρ 20 N T t r ( C 0 1 W ) , L 13 = 0 , L 14 = t r ( Γ 0 C 0 W ) N T ( ξ ϵ 0 2 + T ξ b 0 2 ) + ρ 20 2 N T t r ( C 0 1 W ) , L 15 = t r ( Γ 0 C 0 W ) N ( ξ ϵ 0 2 + T ξ b 0 2 ) + ρ 20 2 t r ( C 0 1 W ) N ( ξ ϵ 0 2 + T ξ b 0 2 ) + t r ( A 4 W ˜ ) N ( ξ ϵ 0 2 + T ξ b 0 2 ) , L 22 = ( T 1 ) ξ b 0 2 ξ ϵ 0 2 , L 23 = 0 , L 24 = 1 T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + ρ 20 ( ξ ϵ 0 2 + T ξ b 0 2 ) ξ ϵ 4 , L 25 = 1 ξ ϵ 0 2 + T ξ b 0 2 , L 33 = Z 1 C 0 Γ 0 1 2 H Γ 0 1 2 C 0 Z 1 + Z ( C ˜ 0 A 1 ρ 20 A 2 ) H ( C ˜ 0 A 1 ρ 20 A 2 ) Z N T ( ξ ϵ 0 2 + T ξ b 0 2 ) + Z 1 C 0 Γ 0 1 2 ( I H ) Γ 0 1 2 C 0 Z 1 + Z ( C ˜ 0 A 1 ρ 20 A 2 ) ( I H ) ( C ˜ 0 A 1 ρ 20 A 2 ) Z N T ξ ϵ 0 2 , L 34 = 0 , L 35 = 0 , L 44 = 1 2 ξ ϵ 0 4 , L 45 = 1 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 , L 55 = T 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 , Π 11 = ( μ 4 3 ξ 0 4 ) i = 1 N T ( A 60 i i ) 2 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + ( μ 4 3 σ 0 4 ) i = 1 N T ( A 70 i i ) 2 N T ξ ϵ 0 4 , Π 13 = μ 3 Z A 120 d i a g A 70 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 , Π 12 = ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 60 i i A 90 i i ) N T ξ ϵ 0 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 90 i i 2 ) + μ 3 Z A 130 d i a g A 70 N T ξ ϵ 0 4 , + ( μ 4 3 ξ 0 4 ) i = 1 N T ( A 60 i i A 80 i i ) N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 , Π 14 = μ 4 i = 1 N T ( A 60 i i 2 ) 3 ξ ϵ 0 4 i = 1 N T ( A 60 i i 2 ) N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 3 + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 70 i i A 100 i i ) N T ξ ϵ 0 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 70 i i A 110 i i ) N T ξ ϵ 0 8 , Π 15 = ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 60 i i A 100 i i ) 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 4 + ( μ 4 3 ξ 0 4 ) i = 1 N T ( A 70 i i A 100 i i ) 2 N ξ ϵ 0 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) , Π 22 = ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 80 i i 2 ) N T ( ξ e 0 2 + T ξ b 0 2 ) 2 + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 80 i i A 90 i i ) N T ξ ϵ 0 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 90 i i 2 ) N T ξ ϵ 0 4 , Π 33 = 0 , Π 25 = ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 80 i i A 110 i i ) 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 3 + ( μ 4 3 ξ 0 4 ) i = 1 N T ( A 70 i i A 110 i i ) 2 ξ ϵ 0 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 ,
Π 24 = ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 80 i i A 100 i i ) 2 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 3 + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 80 i i A 110 i i ) N T ξ e 0 4 ( ξ ϵ 0 2 + T ξ b 0 2 ) + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 90 i i A 110 i i ) N T ξ ϵ 0 6 ,
Π 34 = μ 3 Z A 120 d i a g A 100 2 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 3 + μ 3 Z A 120 d i a g A 110 N T ξ ϵ 0 4 ( ξ e 0 2 + T ξ b 0 ) , Π 35 = μ 3 Z A 120 d i a g A 100 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 3 + μ 3 Z A 130 d i a g A 100 2 N ξ ϵ 0 2 ( ξ e 0 2 + T ξ b 0 ) 2 , Π 45 = ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 100 i i 2 ) 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 4 + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 100 i i A 110 i i ) 2 N ξ ϵ 0 4 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 , Π 55 = T ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 100 i i 2 ) 2 N ( ξ ϵ 0 2 + T ξ b 0 2 ) 4 , Π 44 = ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 100 i i 2 ) 2 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 4 + ( μ 4 3 ξ 0 4 ) i = 1 N T ( A 100 i i A 110 i i ) 2 N T ξ ϵ 0 4 ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + ( μ 4 3 ξ ϵ 0 4 ) i = 1 N T ( A 110 i i 2 ) 4 N T ξ ϵ 0 8 , Π 23 = μ 3 Z A 120 d i a g A 80 N T ( ξ ϵ 0 2 + T ξ b 0 2 ) 2 + μ 3 Z A 130 d i a g A 80 N T ξ ϵ 0 2 ( ξ ϵ 0 2 + T ξ b 0 2 ) , A 1 = ( 0 N , I N ( T 1 ) × N ( T 1 ) ) , A 2 = ( I N ( T 1 ) × N ( T 1 ) , 0 N ) , A 5 = ( I N , 0 N × N ( T 1 ) ) , A 6 = A 4 W ˜ H A 1 , A 7 = A 4 W ˜ ( I H ) A 1 , A 8 = A 2 A 3 H A 1 , A 9 = A 2 A 3 ( I H ) A 1 , A 10 = A 1 H A 1 , A 11 = A 1 ( I H ) A 1 , A 12 = ( C ˜ A 1 ρ 2 A 2 ) H A 1 ,
A 3 = C 1 Γ 1 2 0 0 0 ρ 2 C 2 Γ 1 2 C 1 0 0 ρ 2 2 C 3 Γ 1 2 ρ 2 C 2 Γ 1 2 C 1 0 ρ 2 T 2 C T + 1 Γ 1 2 ρ 2 T 3 C T + 2 Γ 1 2 ρ 2 T 4 C T + 3 Γ 1 2 C 1 N ( T 1 ) × N ( T 1 ) ,
A 4 = ρ 2 C 2 Γ 1 2 C 1 0 0 ρ 2 2 C 3 Γ 1 2 ρ 2 C 2 Γ 1 2 C 1 0 ρ 2 T 1 C T Γ 1 2 ρ 2 T 2 C T + 1 Γ 1 2 ρ 2 T 3 C T + 2 Γ 1 2 C 1 N ( T 1 ) × N T ,
μ 3 and μ 4 are the third and fourth moments, and A i 0 ( i = 1 , , 13 ) are the true value of A i ( i = 1 , , 13 ) , C ˜ 0 is the true value of C ˜ , respectively.
Therefore, according to Assumption 6 and the forms of Σ δ 0 and Ω δ 0 , we obtain
\[
\sqrt{NT}\,(\hat\delta - \delta_0) \xrightarrow{D} N\!\left(0,\ \lim_{N\to\infty}\left(\Sigma_{\delta_0}^{-1} + \Sigma_{\delta_0}^{-1}\Omega_{\delta_0}\Sigma_{\delta_0}^{-1}\right)\right). \qquad \square
\]
Proof of Theorem 3.
Recall ϕ ^ = ι ( ι ) ( Y Z γ ^ ) and
Y = F 10 + Z 1 γ 0 + C 0 1 Γ 0 1 2 η 10 F 20 + Z 2 γ 0 + ρ 20 C 0 2 Γ 0 1 2 η 10 + C 0 1 η 20 F 30 + Z 3 γ 0 + ρ 20 2 C 0 3 Γ 0 1 2 η 10 + ρ 20 C 0 2 η 20 + C 0 1 η 30 F T 0 + Z T γ 0 + ρ 20 T 1 C 0 T Γ 0 1 2 η 10 + ρ 20 T 2 C 0 T + 1 η 20 + + C 0 1 η T 0 ,
then
ϕ ^ = ι ( ι ) ( Y Z γ ^ ) = ι ( ι ) [ f 0 ( ι ) + η ˜ Z ( γ ^ γ 0 ) ] ,
where η ˜ = G ´ ( η 10 , η 20 , , η T 0 ) and
G ´ = C 0 1 Γ 0 1 2 0 0 ρ 20 C 0 2 Γ 0 1 2 C 0 1 0 ρ 20 T 1 C 0 T Γ 0 1 2 ρ 20 T 2 C 0 T + 1 C 0 1 .
Let ϕ = f ( ι ) S f ˙ ( ι ) , by the second order Taylor expression, we know that f ( ι i t ) = ι i t ( ι ) ϕ + 1 2 ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) + o p ( S 2 ) . It is easy to obtain
ι ( ι ) f ( ι i t ) = [ ι ( ι ) K S ( ι ) ι ( ι ) ] 1 ι ( ι ) K S ( ι ) ι ( ι ) ϕ + 1 2 ι ( ι i t ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) = ϕ + 1 2 ι ( ι i t ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) .
Therefore,
ϕ ^ ϕ = 1 2 i = 1 N t = 1 T ι ( ι i t ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) + ι ( ι ) η ˜ ι ( ι ) Z ( γ ^ γ )
and
N | S | ( ϕ ^ ϕ ) = B 11 + B 12 B 13 ,
where B 11 = N | S | 2 i = 1 N t = 1 T ι ( ι i t ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) ,   B 12 = N | S | ι ( ι ) η ˜ and B 13 = N | S | ι ( ι ) Z ( γ ^ γ ) .
For B 11 , write
N | S | 2 i = 1 N t = 1 T ι ( ι i t ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) = N | S | 2 Q 1 1 N T i = 1 N t = 1 T ι i t ( ι ) K S ( ι i t ι ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) + o p ( 1 ) = Q 1 N | S | 2 1 N T i = 1 N t = 1 T [ 1 ( ι i t ι ) S 1 ] K S ( ι i t ι ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) + o p ( 1 ) = Q 1 N | S | 2 1 N T i = 1 N t = 1 T K S ( ι i t ι ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) i = 1 N t = 1 T ( ι i t ι ) S 1 K S ( ι i t ι ) ( ι i t ι ) f ¨ ( ι ) ( ι i t ι ) + o p ( 1 ) = Q 1 N | S | 2 O ( | S | 3 s 3 ) + o p ( 1 ) = o ( 1 ) .
For B 12 , it is easy to know that E [ B 12 ] = 0 and V a r ( B 12 ) = E [ B 12 2 ] = O ( 1 ) .
For B 13 , by Lemma 2 and Facts 1, 2 (Lee [38]) and Theorem 1, we can show that ι ( ι ) = O p ( N 1 / 2 | S | 1 ) and γ ^ γ 0 = O p ( 1 N T ) , then B 13 = o p ( 1 ) can be obtained. Following Slutsky’s theorem and central limit theorem, we obtain
\[
\sqrt{N|S|}\,(\hat\phi - \phi) \xrightarrow{D} N\!\left(0, \sigma^2(\phi)\right),
\]
where $\sigma^2(\phi) = \acute G'\iota(\iota)'\sigma_\eta^2\,\iota(\iota)\acute G$. □
Proof of Theorem 4.
It can be seen that
R S S ( H 1 ) = i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ^ ] 2 = i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ] 2 + { i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ^ ] 2 i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ] 2 } = R S S ( H 1 ) + T 1 ,
where R S S ( H 1 ) = i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ] 2 and T 1 = i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ^ ] 2 i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ] 2 . For T 1 , it can be obtained by Theorems 1–3 and the calculation that
T 1 = i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ^ ] 2 i = 1 N t = 1 T [ y i t f ^ ( ι i t ) z i t γ ] 2 = i = 1 N t = 1 T [ z i t γ + f ( ι i t ) + ν i t f ^ ( ι i t ) z i t γ ^ ] 2 i = 1 N t = 1 T [ z i t γ + f ( ι i t ) + ν i t f ^ ( ι i t ) z i t γ ] 2 = i = 1 N t = 1 T [ O p ( 1 N T ) + O ( | S | 2 ) + | S | 2 O p ( 1 N | S | ) + ν ] 2
i = 1 N t = 1 T [ O ( | S | 2 ) + | S | 2 O p ( 1 N | S | ) + ν ] 2 = O ( 1 ) .
Similarly, we have that
R S S ( H 0 ) = i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ˜ ] 2 = i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ] 2 + { i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ˜ ] 2 i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ] 2 } = R S S ( H 0 ) + T 0 ,
where R S S ( H 0 ) = i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ] 2 , and T 0 = i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ˜ ] 2 i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ] 2 . For T 0 , it can be obtained by Theorems 1–3 and the calculation that
T 0 = i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ˜ ] 2 i = 1 N t = 1 T [ y i t f ˜ ( ι i t ) z i t γ ] 2 = i = 1 N t = 1 T [ z i t γ + f ( ι i t ) + ν i t f ˜ ( ι i t ) z i t γ ˜ ] 2 i = 1 N t = 1 T [ z i t γ + f ( ι i t ) + ν i t f ˜ ( ι i t ) z i t γ ] 2 = i = 1 N t = 1 T [ O p ( 1 N T ) + f ˜ ( ι i t ) z i t γ ˜ ] 2 i = 1 N t = 1 T [ f ˜ ( ι i t ) z i t γ ˜ ] 2 = O p ( 1 ) .
Furthermore, we obtain that
F N T = N T 2 R S S ( H 0 ) R S S ( H 1 ) R S S ( H 1 ) ( 1 + o p ( 1 ) ) + O p ( 1 ) .
According to Remark 3.4 in Fan et al. [36], under $H_0$, we have
\[
r_k F_{NT} \xrightarrow{D} \chi^2_{r_k c_k|\Delta|/|S|},
\]
where $r_k = \dfrac{K(0) - \tfrac{1}{2}\int K^2(\iota)\,d\iota}{\int\left(K(\iota) - \tfrac{1}{2}K*K(\iota)\right)^2 d\iota}$, $c_k = K(0) - \tfrac{1}{2}\int K^2(\iota)\,d\iota$, $\Delta$ is the support set of $\iota$, $K(\iota) = \mathrm{diag}\left(S^{-1}k(S^{-1}(\iota_{11} - \iota)), \ldots, S^{-1}k(S^{-1}(\iota_{NT} - \iota))\right)$, and $K*K$ is the convolution of $K$ with itself. □

References

  1. Cheng, S.; Chen, J. GMM estimation of partially linear additive spatial autoregressive model. Comput. Stat. Data Anal. 2023, 182, 107712. [Google Scholar] [CrossRef]
  2. Chamberlain, G. Multivariate regression models for panel data. J. Econ. 1982, 18, 5–46. [Google Scholar] [CrossRef]
  3. Chamberlain, G. Panel data. Handb. Econom. 1984, 2, 1247–1318. [Google Scholar]
  4. Hsiao, C. Analysis of Panel Data; Cambridge University Press: Cambridge, UK, 2022; Volume 64. [Google Scholar]
  5. Baltagi, B.H. Econometric Analysis of Panel Data; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  6. Arellano, M. Panel Data Econometrics; Oxford University Press: Oxford, UK, 2003. [Google Scholar]
  7. Wooldridge, J.M. Econometric Analysis of Cross Section and Panel Data; MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
  8. Thapa, S.; Lomholt, M.A.; Krog, J.; Cherstvy, A.G.; Metzler, R. Bayesian analysis of single-particle tracking data using the nested-sampling algorithm: Maximum-likelihood model selection applied to stochastic-diffusivity data. Phys. Chem. Chem. Phys. 2018, 20, 29018–29037. [Google Scholar] [CrossRef] [PubMed]
  9. Rehfeldt, F.; Weiss, M. The random walker’s toolbox for analyzing single-particle tracking data. Soft Matter 2023, 19, 5206–5222. [Google Scholar] [CrossRef] [PubMed]
10. Elhorst, J.P. Spatial Econometrics: From Cross-Sectional Data to Spatial Panels; Springer: Berlin/Heidelberg, Germany, 2014.
11. Brueckner, J.K. Strategic interaction among governments: An overview of empirical studies. Int. Reg. Sci. Rev. 2003, 26, 175–188.
12. LeSage, J.P. An introduction to spatial econometrics. Rev. Econ. Ind. 2008, 123, 19–44.
13. Elhorst, J.P. Applied spatial econometrics: Raising the bar. Spat. Econ. Anal. 2010, 5, 9–28.
14. Allers, M.; Elhorst, J. Tax mimicking and yardstick competition among local governments in The Netherlands. Int. Tax Public Financ. 2005, 12, 493–513.
15. Elhorst, J.P. Specification and estimation of spatial panel data models. Int. Reg. Sci. Rev. 2003, 26, 244–268.
16. Baltagi, B.H.; Li, D. Prediction in the panel data model with spatial correlation. In Advances in Spatial Econometrics: Methodology, Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2004; pp. 283–295.
17. Pesaran, M.H. General diagnostic tests for cross-sectional dependence in panels. Empir. Econ. 2021, 60, 13–50.
18. Kapoor, M.; Kelejian, H.H.; Prucha, I.R. Panel data models with spatially correlated error components. J. Econ. 2007, 140, 97–130.
19. Lee, L.; Yu, J. Estimation of spatial autoregressive panel data models with fixed effects. J. Econ. 2010, 154, 165–185.
20. Mutl, J.; Pfaffermayr, M. The Hausman test in a Cliff and Ord panel model. Econ. J. 2011, 14, 48–76.
21. Baltagi, B.H.; Egger, P.; Pfaffermayr, M. A generalized spatial panel data model with random effects. Econ. Rev. 2013, 32, 650–685.
22. Anselin, L. The Scope of Spatial Econometrics. In Spatial Econometrics: Methods and Models; Springer: Dordrecht, The Netherlands, 1988; pp. 7–15.
23. Baltagi, B.H.; Song, S.H.; Koh, W. Testing panel data regression models with spatial error correlation. J. Econ. 2003, 117, 123–150.
24. Bordignon, M.; Cerniglia, F.; Revelli, F. In search of yardstick competition: A spatial analysis of Italian municipality property tax setting. J. Urban Econ. 2003, 54, 199–217.
25. Kelejian, H.H.; Prucha, I.R. On the asymptotic distribution of the Moran I test statistic with applications. J. Econ. 2001, 104, 219–257.
26. Cohen, J.P.; Paul, C.J. Public infrastructure investment, interstate spatial spillovers, and manufacturing costs. Rev. Econ. Stat. 2004, 86, 551–560.
27. Lee, L.; Yu, J. Spatial panels: Random components versus fixed effects. Int. Econ. Rev. 2012, 53, 1369–1412.
28. Parent, O.; LeSage, J.P. A space–time filter for panel data models containing random effects. Comput. Stat. Data Anal. 2011, 55, 475–490.
29. Elhorst, J.P. Serial and spatial error correlation. Econ. Lett. 2008, 100, 422–424.
30. Lee, L.; Yu, J. Estimation of fixed effects panel regression models with separable and nonseparable space–time filters. J. Econ. 2015, 184, 174–192.
31. Zhao, J.; Zhao, Y.; Lin, J.; Miao, Z.X.; Khaled, W. Estimation and testing for panel data partially linear single-index models with errors correlated in space and time. Random Matrices Theory Appl. 2020, 9, 2150005.
32. Li, S.; Chen, J.; Li, B. Estimation and Testing of Random Effects Semiparametric Regression Model with Separable Space-Time Filters. Fractal Fract. 2022, 6, 735.
33. Li, B.; Chen, J.; Li, S. Estimation of Fixed Effects Partially Linear Varying Coefficient Panel Data Regression Model with Nonseparable Space-Time Filters. Mathematics 2023, 11, 1531.
34. Bai, Y.; Hu, J.; You, J. Panel data partially linear varying-coefficient model with errors correlated in space and time. Stat. Sin. 2015, 35, 275–294.
35. Cai, Z. Trending time-varying coefficient time series models with serially correlated errors. J. Econ. 2007, 136, 163–188.
36. Fan, J.; Zhang, C.; Zhang, J. Generalized likelihood ratio statistics and Wilks phenomenon. Ann. Stat. 2001, 29, 153–193.
37. Su, L.; Jin, S. Profile quasi-maximum likelihood estimation of partially linear spatial autoregressive models. J. Econ. 2010, 157, 18–33.
38. Lee, L. Asymptotic distributions of quasi-maximum likelihood estimators for spatial autoregressive models. Econometrica 2004, 72, 1899–1925.
39. Hamilton, S.A.; Truong, Y.K. Local linear estimation in partly linear models. J. Multivar. Anal. 1997, 60, 1–19.
40. Su, L.; Ullah, A. Nonparametric and semiparametric panel econometric models: Estimation and testing. In Handbook of Empirical Economics and Finance; CRC: Boca Raton, FL, USA, 2011; pp. 455–497.
41. Su, L.; Ullah, A. Profile likelihood estimation of partially linear panel data models with fixed effects. Econ. Lett. 2006, 92, 75–81.
42. Su, L. Semiparametric GMM estimation of spatial autoregressive models. J. Econ. 2012, 167, 543–560.
43. Mack, Y.; Silverman, B.W. Weak and strong uniform consistency of kernel regression estimates. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1982, 61, 405–415.
44. Feng, Q.; Horrace, W.C. Alternative technical efficiency measures: Skew, bias, and scale. J. Appl. Econ. 2010, 27, 253–268.
45. Druska, V.; Horrace, W.C. Generalized moments estimation for spatial panel data: Indonesian rice farming. Am. J. Agric. Econ. 2004, 86, 185–198.
46. Cheng, S.; Chen, J. Estimation of partially linear single-index spatial autoregressive model. Stat. Pap. 2021, 62, 495–531.
47. Qu, A.; Lindsay, B.G.; Li, B. Improving generalized estimating equations using quadratic inference functions. Biometrika 2000, 87, 823–836.
48. White, H. Estimation, Inference and Specification Analysis; Cambridge University Press: Cambridge, UK, 1996.
Figure 1. f̂(ι) and its 95% confidence intervals with ρ_1 = 0.2 and ρ_2 = 0.7 under different N and T.
Figure 2. The trajectory of f(·).
Figure 3. The trajectory of f̂(·).
Figure 4. Trajectories of f(ι) under τ = 0, 0.05, 0.1, 0.2, and 0.5, respectively.
Figure 5. The test size (τ = 0) and the powers (τ ≠ 0) of the test statistic F_NT under α = 0.01 and 0.05, respectively, with sample size NT.
Figure 6. The nonlinear effect of ι_it (land area) on y_it (log(rice)).
Table 1. Some important symbols and their definitions.
Symbol | Definition
ω_max | Maximum eigenvalue of matrix W
ω_min | Minimum eigenvalue of matrix W
|ρ| | The absolute value of ρ
|Σ| | The determinant of matrix Σ
Σ′ | The transpose of matrix Σ
Σ⁻¹ | The inverse of matrix Σ
tr(Σ) | The trace of matrix Σ
diag(a_1, …, a_n) | Diagonal matrix composed of elements a_1, …, a_n
‖S‖ | [tr(S′S)]^{1/2} = {∑_i ∑_j |s_ij|²}^{1/2}
→_D | Convergence in distribution
→_P | Convergence in probability
i.i.d. | Independent and identically distributed
K∗K | The convolution of the function K with itself
⊗ | Kronecker product
≜ | “Defined as” or “marked as”
N(0, Σ) | Normal distribution with mean 0 and variance Σ
χ²_df | χ² distribution with degrees of freedom df
Table 2. Parametric simulation results with N = 40.
Parameter | True | Mean (T = 10) | MSE (T = 10) | Mean (T = 15) | MSE (T = 15) | Mean (T = 20) | MSE (T = 20)
ρ_1 = 0.2, ρ_2 = 0.7:
ρ̂_1 | 0.2000 | 0.1746 | 0.0017 | 0.2113 | 0.0015 | 0.2046 | 0.0012
ρ̂_2 | 0.7000 | 0.7646 | 0.0014 | 0.6786 | 0.0009 | 0.6895 | 0.0007
γ̂ | 1.6000 | 1.6034 | 0.0022 | 1.6010 | 0.0016 | 1.6008 | 0.0013
ξ̂_ε² | 0.5000 | 0.4746 | 0.0346 | 0.4932 | 0.0214 | 0.5035 | 0.0075
ξ̂_b² | 0.5000 | 0.5512 | 0.0154 | 0.5313 | 0.0045 | 0.5054 | 0.0014
ρ_1 = 0.4, ρ_2 = 0.5:
ρ̂_1 | 0.4000 | 0.4279 | 0.0038 | 0.3870 | 0.0028 | 0.3889 | 0.0020
ρ̂_2 | 0.5000 | 0.5590 | 0.0029 | 0.5589 | 0.0025 | 0.5330 | 0.0019
γ̂ | 1.6000 | 1.6005 | 0.0009 | 1.6003 | 0.0007 | 1.5998 | 0.0004
ξ̂_ε² | 0.5000 | 0.5700 | 0.0242 | 0.5353 | 0.0212 | 0.5243 | 0.0205
ξ̂_b² | 0.5000 | 0.5425 | 0.0351 | 0.5166 | 0.0138 | 0.5138 | 0.0108
ρ_1 = 0.5, ρ_2 = 0.4:
ρ̂_1 | 0.5000 | 0.5287 | 0.0037 | 0.4722 | 0.0032 | 0.4869 | 0.0031
ρ̂_2 | 0.4000 | 0.4640 | 0.0031 | 0.4264 | 0.0030 | 0.4113 | 0.0028
γ̂ | 1.6000 | 1.5981 | 0.0007 | 1.6005 | 0.0003 | 1.6004 | 0.0002
ξ̂_ε² | 0.5000 | 0.5390 | 0.0176 | 0.5184 | 0.0141 | 0.4935 | 0.0034
ξ̂_b² | 0.5000 | 0.5615 | 0.0177 | 0.4820 | 0.0119 | 0.5015 | 0.0039
ρ_1 = 0.7, ρ_2 = 0.2:
ρ̂_1 | 0.7000 | 0.6667 | 0.0055 | 0.7092 | 0.0041 | 0.7054 | 0.0040
ρ̂_2 | 0.2000 | 0.2860 | 0.0052 | 0.2319 | 0.0043 | 0.2038 | 0.0037
γ̂ | 1.6000 | 1.6020 | 0.0008 | 1.5996 | 0.0003 | 1.5997 | 0.0002
ξ̂_ε² | 0.5000 | 0.5446 | 0.0051 | 0.5372 | 0.0024 | 0.5232 | 0.0014
ξ̂_b² | 0.5000 | 0.5578 | 0.0152 | 0.5381 | 0.0091 | 0.5295 | 0.0027
Table 3. Parametric simulation results with N = 60.
Parameter | True | Mean (T = 10) | MSE (T = 10) | Mean (T = 15) | MSE (T = 15) | Mean (T = 20) | MSE (T = 20)
ρ_1 = 0.2, ρ_2 = 0.7:
ρ̂_1 | 0.2000 | 0.2130 | 0.0009 | 0.2081 | 0.0007 | 0.2007 | 0.0005
ρ̂_2 | 0.7000 | 0.7171 | 0.0016 | 0.6878 | 0.0014 | 0.6954 | 0.0004
γ̂ | 1.6000 | 1.5981 | 0.0011 | 1.6006 | 0.0010 | 1.6002 | 0.0008
ξ̂_ε² | 0.5000 | 0.4490 | 0.0287 | 0.4773 | 0.0213 | 0.5214 | 0.0057
ξ̂_b² | 0.5000 | 0.5626 | 0.0074 | 0.5441 | 0.0071 | 0.4947 | 0.0069
ρ_1 = 0.4, ρ_2 = 0.5:
ρ̂_1 | 0.4000 | 0.3912 | 0.0017 | 0.4074 | 0.0013 | 0.3949 | 0.0011
ρ̂_2 | 0.5000 | 0.5082 | 0.0018 | 0.5045 | 0.0015 | 0.5039 | 0.0012
γ̂ | 1.6000 | 1.6007 | 0.0005 | 1.6004 | 0.0003 | 1.5999 | 0.0002
ξ̂_ε² | 0.5000 | 0.5594 | 0.0205 | 0.4900 | 0.0200 | 0.5099 | 0.0195
ξ̂_b² | 0.5000 | 0.4714 | 0.0090 | 0.4847 | 0.0087 | 0.5077 | 0.0079
ρ_1 = 0.5, ρ_2 = 0.4:
ρ̂_1 | 0.5000 | 0.4730 | 0.0024 | 0.4823 | 0.0018 | 0.4858 | 0.0016
ρ̂_2 | 0.4000 | 0.4322 | 0.0022 | 0.4273 | 0.0017 | 0.4139 | 0.0014
γ̂ | 1.6000 | 1.6005 | 0.0003 | 1.6003 | 0.0002 | 1.5998 | 0.0001
ξ̂_ε² | 0.5000 | 0.5591 | 0.0071 | 0.5125 | 0.0065 | 0.5072 | 0.0026
ξ̂_b² | 0.5000 | 0.4851 | 0.0118 | 0.4856 | 0.0103 | 0.5100 | 0.0030
ρ_1 = 0.7, ρ_2 = 0.2:
ρ̂_1 | 0.7000 | 0.6808 | 0.0026 | 0.7130 | 0.0024 | 0.6876 | 0.0019
ρ̂_2 | 0.2000 | 0.2189 | 0.0025 | 0.2124 | 0.0023 | 0.1970 | 0.0016
γ̂ | 1.6000 | 1.6010 | 0.0003 | 1.6008 | 0.0002 | 1.6007 | 0.0001
ξ̂_ε² | 0.5000 | 0.5509 | 0.0035 | 0.5214 | 0.0020 | 0.5085 | 0.0012
ξ̂_b² | 0.5000 | 0.5556 | 0.0086 | 0.5302 | 0.0084 | 0.5170 | 0.0019
Table 4. Parametric simulation results with N = 80.
Parameter | True | Mean (T = 10) | MSE (T = 10) | Mean (T = 15) | MSE (T = 15) | Mean (T = 20) | MSE (T = 20)
ρ_1 = 0.2, ρ_2 = 0.7:
ρ̂_1 | 0.2000 | 0.2066 | 0.0004 | 0.2048 | 0.0004 | 0.2003 | 0.0001
ρ̂_2 | 0.7000 | 0.6909 | 0.0006 | 0.6978 | 0.0003 | 0.6989 | 0.0002
γ̂ | 1.6000 | 1.5991 | 0.0013 | 1.5997 | 0.0007 | 1.6002 | 0.0006
ξ̂_ε² | 0.5000 | 0.4755 | 0.0079 | 0.5193 | 0.0047 | 0.4967 | 0.0046
ξ̂_b² | 0.5000 | 0.5361 | 0.0063 | 0.5088 | 0.0061 | 0.5004 | 0.0039
ρ_1 = 0.4, ρ_2 = 0.5:
ρ̂_1 | 0.4000 | 0.4012 | 0.0011 | 0.4004 | 0.0009 | 0.3999 | 0.0007
ρ̂_2 | 0.5000 | 0.5092 | 0.0010 | 0.5081 | 0.0008 | 0.5055 | 0.0007
γ̂ | 1.6000 | 1.5986 | 0.0004 | 1.6010 | 0.0003 | 1.6003 | 0.0002
ξ̂_ε² | 0.5000 | 0.4763 | 0.0204 | 0.4857 | 0.0157 | 0.4961 | 0.0097
ξ̂_b² | 0.5000 | 0.5595 | 0.0087 | 0.4813 | 0.0074 | 0.4899 | 0.0068
ρ_1 = 0.5, ρ_2 = 0.4:
ρ̂_1 | 0.5000 | 0.4933 | 0.0018 | 0.4997 | 0.0013 | 0.4998 | 0.0010
ρ̂_2 | 0.4000 | 0.4401 | 0.0017 | 0.4200 | 0.0014 | 0.4064 | 0.0013
γ̂ | 1.6000 | 1.5991 | 0.0002 | 1.5994 | 0.0002 | 1.6002 | 0.0001
ξ̂_ε² | 0.5000 | 0.5034 | 0.0067 | 0.5008 | 0.0048 | 0.5004 | 0.0021
ξ̂_b² | 0.5000 | 0.4628 | 0.0066 | 0.4767 | 0.0053 | 0.4802 | 0.0025
ρ_1 = 0.7, ρ_2 = 0.2:
ρ̂_1 | 0.7000 | 0.7272 | 0.0028 | 0.7258 | 0.0019 | 0.7119 | 0.0012
ρ̂_2 | 0.2000 | 0.1840 | 0.0046 | 0.1879 | 0.0039 | 0.1927 | 0.0013
γ̂ | 1.6000 | 1.5993 | 0.0008 | 1.6004 | 0.0005 | 1.6003 | 0.0001
ξ̂_ε² | 0.5000 | 0.5043 | 0.0031 | 0.5005 | 0.0018 | 0.4997 | 0.0011
ξ̂_b² | 0.5000 | 0.5191 | 0.0048 | 0.4835 | 0.0034 | 0.5127 | 0.0010
Table 5. Nonparametric simulation results for medians and SDs of MADEs with N = 40.
Setting | Statistic | T = 10 | T = 15 | T = 20
ρ_1 = 0.2, ρ_2 = 0.7 | Median | 0.3000 | 0.2635 | 0.2198
ρ_1 = 0.2, ρ_2 = 0.7 | SD | 0.2826 | 0.2588 | 0.2518
ρ_1 = 0.4, ρ_2 = 0.5 | Median | 0.2430 | 0.2365 | 0.2313
ρ_1 = 0.4, ρ_2 = 0.5 | SD | 0.2939 | 0.2584 | 0.1742
ρ_1 = 0.5, ρ_2 = 0.4 | Median | 0.3013 | 0.2779 | 0.2281
ρ_1 = 0.5, ρ_2 = 0.4 | SD | 0.2089 | 0.2085 | 0.1833
ρ_1 = 0.7, ρ_2 = 0.2 | Median | 0.3635 | 0.3627 | 0.3502
ρ_1 = 0.7, ρ_2 = 0.2 | SD | 0.3116 | 0.3012 | 0.2903
Table 6. Nonparametric simulation results for medians and SDs of MADEs with N = 60.
Setting | Statistic | T = 10 | T = 15 | T = 20
ρ_1 = 0.2, ρ_2 = 0.7 | Median | 0.2656 | 0.2532 | 0.2145
ρ_1 = 0.2, ρ_2 = 0.7 | SD | 0.2088 | 0.2013 | 0.1466
ρ_1 = 0.4, ρ_2 = 0.5 | Median | 0.2099 | 0.2022 | 0.1768
ρ_1 = 0.4, ρ_2 = 0.5 | SD | 0.1872 | 0.1666 | 0.1653
ρ_1 = 0.5, ρ_2 = 0.4 | Median | 0.2917 | 0.2723 | 0.2188
ρ_1 = 0.5, ρ_2 = 0.4 | SD | 0.2044 | 0.2032 | 0.1809
ρ_1 = 0.7, ρ_2 = 0.2 | Median | 0.3550 | 0.3365 | 0.3244
ρ_1 = 0.7, ρ_2 = 0.2 | SD | 0.2857 | 0.2781 | 0.2235
Table 7. Nonparametric simulation results for medians and SDs of MADEs with N = 80.
Setting | Statistic | T = 10 | T = 15 | T = 20
ρ_1 = 0.2, ρ_2 = 0.7 | Median | 0.2425 | 0.2415 | 0.1743
ρ_1 = 0.2, ρ_2 = 0.7 | SD | 0.2049 | 0.1768 | 0.0952
ρ_1 = 0.4, ρ_2 = 0.5 | Median | 0.1991 | 0.1855 | 0.1701
ρ_1 = 0.4, ρ_2 = 0.5 | SD | 0.1807 | 0.1594 | 0.1530
ρ_1 = 0.5, ρ_2 = 0.4 | Median | 0.2779 | 0.2392 | 0.2082
ρ_1 = 0.5, ρ_2 = 0.4 | SD | 0.2008 | 0.1752 | 0.1747
ρ_1 = 0.7, ρ_2 = 0.2 | Median | 0.3074 | 0.3071 | 0.2835
ρ_1 = 0.7, ρ_2 = 0.2 | SD | 0.2008 | 0.1976 | 0.1598
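As a hedged reminder for reading Tables 5–7, the MADE criterion is not restated in this back matter; a common convention for the absolute deviation error of a nonparametric fit over grid points ι_1, …, ι_{n_0} (the grid size n_0 and this exact form are assumptions about the convention used, not a quotation from the text) is

\[ \mathrm{MADE} \;=\; \frac{1}{n_0}\sum_{k=1}^{n_0}\bigl|\hat{f}(\iota_k)-f(\iota_k)\bigr|, \]

with the tabulated medians and SDs taken across Monte Carlo replications.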
Table 8. Parameter simulation results of the models (22)–(25) with N = 40.
Columns: γ̂ | ρ̂_1 | ρ̂_2 | ξ̂_ε² | ξ̂_b² for T = 10, followed by the same five columns for T = 15.
model (21):
Mean | 1.6005 | 0.4279 | 0.5590 | 0.5700 | 0.5425 | 1.6003 | 0.3870 | 0.5589 | 0.5353 | 0.5166
MSE | 0.0009 | 0.0038 | 0.0029 | 0.0242 | 0.0351 | 0.0007 | 0.0028 | 0.0025 | 0.0212 | 0.0138
model (22):
Mean | 1.6036 | 0.5221 | 0.3696 | 0.4879 | 0.6991 | 1.6011 | 0.5386 | 0.3611 | 0.4888 | 0.7302
MSE | 0.2011 | 0.5039 | 0.7034 | 0.0054 | 0.1184 | 0.0008 | 0.1081 | 0.7051 | 0.0022 | 0.9068
GRM(%) | 22,244 | 13,160 | 24,155 | 4.96 | 237 | 14.28 | 3760 | 28,104 | 3 | 6471
model (23):
Mean | 1.5978 | - | 0.3610 | 0.7530 | 0.3147 | 1.6024 | - | 0.3840 | 0.6282 | 0.2515
MSE | 0.0309 | - | 1.0052 | 1.0206 | 0.0467 | 0.0013 | - | 0.3026 | 1.0065 | 1.0143
GRM(%) | 3333 | - | 34,562 | 4117 | 33 | 85.71 | - | 12,004 | 4647 | 7250
model (24):
Mean | 1.5980 | 0.5406 | - | 0.6458 | 0.4181 | 1.6024 | 0.5070 | - | 0.6282 | 0.3487
MSE | 0.0016 | 0.0083 | - | 0.1107 | 0.0513 | 0.0012 | 0.0124 | - | 0.0365 | 0.0294
GRM(%) | 77.77 | 118.4 | - | 357.4 | 46 | 71.42 | 342.8 | - | 72.2 | 113.0
model (25):
Mean | 1.6005 | - | - | 0.6258 | 0.4167 | 1.6025 | - | - | 0.6477 | 0.2888
MSE | 0.0013 | - | - | 0.0472 | 0.0593 | 0.0007 | - | - | 0.0452 | 0.0153
GRM(%) | 44.44 | - | - | 1855 | 68.9 | 0 | - | - | 113 | 10.86
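Most GRM(%) entries in Table 8 are consistent with the relative growth of MSE of a misspecified model over model (21); as a worked check using only the tabulated numbers (the formula below is inferred from the table entries, not quoted from the main text), for γ̂ of model (23) at T = 10,

\[ \mathrm{GRM} \;=\; \frac{\mathrm{MSE}_{\mathrm{misspecified}}-\mathrm{MSE}_{(21)}}{\mathrm{MSE}_{(21)}}\times 100\% \;=\; \frac{0.0309-0.0009}{0.0009}\times 100\% \;\approx\; 3333\%, \]

which matches the corresponding GRM(%) entry.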
Table 9. The description of variables.
Variable | Description
high yield varieties | Equals 1 if high-yield varieties are used, and 0 otherwise.
mixed yield varieties | Equals 1 if mixed-yield varieties are used, and 0 otherwise.
seed weight | Seed weight used in the t-th season on the i-th farm (unit: kilogram).
land area | Number of hectares of rice.
Table 10. F-test results with α = 0.01.
Variable | High Yield Varieties | Mixed Yield Varieties | Seed Weight | Land Area
F-test value | 4.43 | 5.16 | 1.27 | 181.64 ***
Note: *** indicates significance at the 1% level.
Table 11. Parameter estimation results.
Parameter | ρ̂_1 | ρ̂_2 | γ̂_1 | γ̂_2 | γ̂_3 | ξ̂_ε² | ξ̂_b²
Estimate | 0.9018 *** | 0.0124 *** | 0.2818 *** | 0.4881 *** | 0.0020 *** | 0.3140 *** | 0.2907 ***
Note: *** indicates significance at the 1% level.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
