Article

Modeling Spatial Data with Heteroscedasticity Using PLVCSAR Model: A Bayesian Quantile Regression Approach

1 School of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
2 Xiamen Software Supply Chain Security Public Technology Service Platform, Xiamen 361024, China
3 School of Mathematics and Statistics, Fujian Normal University, Fuzhou 350117, China
4 Fujian Provincial Key Laboratory of Statistics and Artificial Intelligence, Fuzhou 350117, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(7), 715; https://doi.org/10.3390/e27070715
Submission received: 4 June 2025 / Revised: 22 June 2025 / Accepted: 30 June 2025 / Published: 1 July 2025
(This article belongs to the Special Issue Bayesian Hierarchical Models with Applications)

Abstract

Spatial data not only enable smart cities to visualize, analyze, and interpret location- and space-related information, but also help departments make more informed decisions. We apply Bayesian quantile regression (BQR) to the partially linear varying coefficient spatial autoregressive (PLVCSAR) model for spatial data to improve prediction performance. The model captures both the linear and nonlinear effects of covariates on the response at different quantile points. By approximating the nonparametric functions with free-knot splines, we develop a Bayesian sampling approach that can be implemented via the Markov chain Monte Carlo (MCMC) method and design an efficient Metropolis–Hastings-within-Gibbs sampling algorithm to explore the joint posterior distributions. Computational efficiency is achieved through a modified reversible-jump MCMC algorithm that incorporates an adaptive movement step to accelerate chain convergence. Simulation results demonstrate that our estimator is robust to alternative spatial weight matrices and outperforms both quantile regression (QR) and instrumental variable quantile regression (IVQR) in finite samples at different quantiles. The effectiveness of the proposed model and estimation method is demonstrated with real data on Boston median house prices.

1. Introduction

Technological progress in spatial data management enables cities to better control the processing of spatial information. Geospatial applications allow people to access real-time maps to visualize spatial data, and this updated information can be used for decision-making. Spatial data, widely and inexpensively collected through geographical information systems on the internet, are frequently encountered in fields as diverse as real estate, finance, economics, geography, epidemiology, and environmetrics. The spatial autoregressive (SAR) model of Cliff and Ord [1] has attracted a great deal of interest among the class of spatial models, and it has been used extensively to address spatial interaction effects among geographical units in cross-sectional or panel data. A number of early studies are summarized in Anselin [2], Case [3], Cressie [4], LeSage [5], and Kazar and Celik [6], among others. These studies mainly focus on parametric SAR models, which are highly susceptible to model misspecification. In reality, however, the relationships between the covariates and the response variable are often nonlinear. Indeed, it has been shown in practice that many economic variables have highly nonlinear effects on response variables [7]. Neglecting potential nonlinear relationships often leads to inconsistent parameter estimates in spatial models, and even to misleading conclusions [8]. Although fully nonparametric models possess the flexibility to capture complex underlying nonlinear effects, they may suffer from the "curse of dimensionality" [9] with high-dimensional covariates. Semiparametric SAR models are used as an alternative for processing spatial data to overcome the problems mentioned above. They not only offer greater flexibility than generalized linear models but also alleviate the "curse of dimensionality" inherent in nonparametric models.
Semiparametric models have garnered significant attention in econometrics and statistics due to their parametric interpretability combined with nonparametric flexibility. Among these, the partially linear varying coefficient (PLVC) model [10] stands out as one of the most widely used. It provides a good tradeoff between the interpretation of a partially linear model and the adaptability of the varying coefficient model. In spatial data analysis, the partially linear varying coefficient spatial (PLVCS) model has been widely applied, improving the capacity of spatial analysis by exploring regression relationships. To approximate the varying coefficient functions in the PLVCS model, many authors have developed various methods, including the local smoothing method [11], basis expansion technique [12], Bayesian approach [13], profile quasi-maximum likelihood estimation method [14], etc. While much of the existing literature on PLVC models focuses on methodological developments and applications, rigorous theoretical analyses often assume data observed on grid points within a rectangular domain. To balance the interpretability and flexibility inherent in PLVC models while explicitly accounting for spatial dependence, we consider the PLVCSAR model.
It is well known that the basis spline (B-spline) [15] approximation technique is a popular method for modeling nonlinearities in semiparametric models [7,16]. Compared to kernel estimation, the main advantage of the spline method is that its calculations are relatively simple and easily combined with classical spatial econometric estimators. However, it requires the user to choose an appropriate number and position of knots. If the number of knots is too large, the model may be seriously overfitted; conversely, if it is too small, the splines may not fully capture the nonlinearity of the underlying curve. Various approaches have been proposed to solve this problem. A common approach varies the number of knots to minimize some criterion, such as an information criterion like AIC or BIC. Moreover, Bayesian penalized splines [17] offer an alternative through Bayesian shrinkage implemented via careful prior specification. In contrast, Bayesian free-knot splines ([18,19,20], among others) offer a distinct advantage: by treating both the position and number of spline knots as random variables within the modeling framework, this approach achieves intrinsic spatial adaptivity. It enables the model to automatically determine optimal smoothing parameters and implement variable bandwidth selection, a critical capability for handling heterogeneous patterns in data [19]. This adaptive capability constitutes the primary motivation for employing free-knot splines in our modeling framework, particularly when dealing with spatial or temporal processes exhibiting non-stationary characteristics.
The recent literature has increasingly focused on semiparametric SAR models. While quasi-likelihood estimation [8,21] remains prevalent, maximum likelihood-based approaches face computational challenges due to the need for repeated determinant calculations of large-dimensional matrices. Furthermore, these methods typically require homoscedastic error assumptions. Alternative estimators employing instrumental variables (IV) [22], generalized method of moments (GMM) [23], and Bayesian approach [24] relax the homoscedasticity constraint but remain fundamentally limited to mean regression frameworks. Crucially, all existing approaches rely on the restrictive zero-mean error assumption, thereby failing to characterize potential heterogeneous covariate effects across response quantiles.
Quantile regression (QR), introduced by Koenker and Bassett [25], offers significant advantages over traditional mean regression. It not only delivers robust estimation but also comprehensively delineates the relationship between covariates and the response variable throughout the entire conditional distribution. Crucially, QR captures heterogeneous covariate effects at different quantile levels of the response variable, thereby addressing a fundamental limitation of conventional approaches. For instance, Dai et al. [26] considered the IV approach for the quantile regression of PLVCSAR models. In the frequentist framework, the estimation strategy hinges on minimizing a sum of asymmetric absolute deviations; particularly at extreme quantile levels, the estimation accuracy is low and the estimates are prone to instability. As a result, more and more scholars tend to use the Bayesian approach to estimate such models. Pioneering work by Yu and Moyeed [27] established a Bayesian QR paradigm utilizing an asymmetric Laplace error distribution as a working likelihood for inference, and sampling the unknown quantities from the posterior distribution via MCMC is feasible even under very complex model specifications.
To sum up, Bayesian quantile regression (BQR) has emerged as a versatile framework, spanning diverse applications including longitudinal analysis [28], risk measurement [29], variable selection [30], empirical likelihood [31], etc. In the development of Bayesian spatial quantile regression methodology, Lee and Neocleous [32] first proposed a fixed-effects model for spatial count data; King and Song [33] then extended the framework to include random spatial effects. Related developments also include the spatial-process multiple (or individual) quantile regression method proposed by Lum and Gelfand [34] and the spatiotemporal joint (or simultaneous) quantile regression method established by Reich et al. [35]. The former models each quantile independently, while the latter estimates all quantiles simultaneously. In the spatial context, Castillo-Mateo et al. [36] considered multi-quantile regression for mixed-effects autoregressive models, while Chen and Tokdar [37] and Castillo-Mateo et al. [38] developed joint quantile regression for spatial data. However, to the best of our knowledge, BQR remains largely unexplored within semiparametric SAR models. Notably, Chen et al. [39] proposed a Bayesian P-splines approach for the quantile regression of PLVCSAR models.
Building upon these collective results, we present a Bayesian free-knot splines method for the quantile regression of the PLVCSAR model with spatially dependent responses. The contribution of this paper is to extend traditional PLVCSAR models by allowing for quantile-specific effects and unknown heteroscedasticity; the framework permits varying degrees of spatial dependence in the response distribution at different quantiles. We develop an enhanced Bayesian approach that estimates the unknown parameters and uses free-knot splines to approximate the nonparametric functions. To enhance algorithmic convergence, we modify the movement step of the reversible-jump MCMC algorithm so that each iteration can relocate the positions of all knots instead of only one. We develop a Bayesian sampling-based method that can be executed via MCMC, and we design a Gibbs sampler to explore the joint posterior distributions. The computational tractability of MCMC methods in deriving posterior distributions through careful prior specification makes Bayesian inference a particularly attractive framework, even for complex modeling scenarios.
The remainder of this article is organized as follows. Section 2 specifies the Bayesian quantile regression for the PLVCSAR model for spatial data, subsequently approximating the link function via B-splines to derive the likelihood function. To establish a Bayesian sampling-based analytical framework, we delineate the prior distributions and derive the full conditional posteriors of the latent variables and unknown parameters, and we describe the detailed sampling algorithms in Section 3. Section 4 reports Monte Carlo results for the finite sample performance of the BQR estimator, and for the comparisons with the QR and IVQR estimators at different quantile points. Section 5 provides an empirical illustration, as well as the study results. Finally, we conclude the paper in Section 6.

2. Methodology

2.1. Model

Consider the following quantile regression for the PLVCSAR model:
$$y_i = \rho_\tau \sum_{j=1}^{n} w_{ij} y_j + x_i^T \beta_\tau + z_i^T \alpha_\tau(u_i) + \epsilon_{\tau,i}, \quad i = 1, \ldots, n, \qquad (1)$$
where $y_i$ is the dependent variable for spatial unit $i$; $x_i = (x_{i1}, \ldots, x_{ip})^T$, $z_i = (z_{i1}, \ldots, z_{iq})^T$, and $u_i$ are the observations of the relevant covariates; $w_{ij}$ is a prespecified constant spatial weight; $\alpha_\tau(\cdot) = (\alpha_{\tau 1}(\cdot), \ldots, \alpha_{\tau q}(\cdot))^T$ is a $q$-dimensional vector of unknown smooth functions; $u_i$ is the smoothing variable; $\rho_\tau$ represents an unknown spatial autoregressive coefficient that measures neighbor-based autocorrelation, with stability constraint $\rho_\tau \in (-1, 1)$; $\beta_\tau = (\beta_{\tau 1}, \ldots, \beta_{\tau p})^T$ is a $p$-dimensional vector of unknown parameters; and $\epsilon_{\tau,i}$ is the error term whose $\tau$th quantile conditional on $(x_i, z_i, u_i)$ equals zero for $\tau \in (0, 1)$.

2.2. Likelihood

Quantile regression is usually carried out by solving a minimization problem based on the check loss function. Under model (1), the problem is to estimate $\rho_\tau$, $\beta_\tau$, and $\alpha_\tau(\cdot)$ by minimizing the following objective function:
$$L(y, x, z, u) = \sum_{i=1}^{n} \lambda_\tau\!\left( y_i - \rho_\tau \sum_{j=1}^{n} w_{ij} y_j - x_i^T \beta_\tau - z_i^T \alpha_\tau(u_i) \right), \qquad (2)$$
where $y = (y_1, \ldots, y_n)^T$, $x = (x_1, \ldots, x_n)^T$, $z = (z_1, \ldots, z_n)^T$, $u = (u_1, \ldots, u_n)^T$, $\lambda_\tau(\epsilon) = \epsilon\,(\tau - I(\epsilon < 0))$ is the so-called check function, and $I(\cdot)$ is the indicator function. In a Bayesian setup, we assume the error terms $\epsilon_{\tau,i} \sim \mathrm{ALD}(0, \delta_0, \tau)$ are mutually independent and identically distributed (i.i.d.) random variables from an asymmetric Laplace distribution with density
$$p(\epsilon_{\tau,i}) = \frac{\tau(1-\tau)}{\delta_0} \exp\!\left\{ -\frac{1}{\delta_0}\, \lambda_\tau(\epsilon_{\tau,i}) \right\},$$
where $\delta_0$ is the scale parameter. Then, the conditional distribution of $y$ takes the form
$$p(y \mid x) = \frac{\tau^n (1-\tau)^n}{\delta_0^{\,n}} \exp\!\left\{ -\frac{1}{\delta_0} \sum_{i=1}^{n} \lambda_\tau\!\left( y_i - \rho_\tau \sum_{j=1}^{n} w_{ij} y_j - x_i^T \beta_\tau - z_i^T \alpha_\tau(u_i) \right) \right\}. \qquad (3)$$
Therefore, maximizing the likelihood (3) is equivalent to minimizing the objective function (2), which gives (2) a likelihood-based interpretation. Using the location-scale mixture representation of the asymmetric Laplace distribution [40], model (1) can be equivalently written as
$$y_i = \rho_\tau \sum_{j=1}^{n} w_{ij} y_j + x_i^T \beta_\tau + z_i^T \alpha_\tau(u_i) + m_1 e_i + \sqrt{m_2 \delta_0 e_i}\, \nu_i, \quad i = 1, \ldots, n, \qquad (4)$$
where $m_1 = \frac{1 - 2\tau}{\tau(1-\tau)}$, $m_2 = \frac{2}{\tau(1-\tau)}$, and $e_i \sim \mathrm{Exp}(1/\delta_0)$ follows an exponential distribution with mean $\delta_0$ and is independent of $\nu_i \sim N(0, 1)$. For ease of notation, we omit $\tau$ in the following expressions.
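As a quick numerical sanity check of this mixture representation (a minimal Python sketch, not the authors' C++ implementation; the quantile level and scale are arbitrary illustrative choices), the code below simulates errors $m_1 e_i + \sqrt{m_2 \delta_0 e_i}\,\nu_i$ and verifies that their empirical $\tau$th quantile is approximately zero, as the ALD specification implies.

```python
import numpy as np

# Minimal sketch: simulate ALD(0, delta0, tau) errors via the
# exponential-normal location-scale mixture and check that their
# empirical tau-th quantile is close to zero.
rng = np.random.default_rng(0)
tau, delta0, n = 0.25, 1.0, 200_000           # illustrative values

m1 = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))
m2 = 2.0 / (tau * (1.0 - tau))

e = rng.exponential(scale=delta0, size=n)     # e_i ~ Exp(1/delta0), mean delta0
nu = rng.standard_normal(n)                   # nu_i ~ N(0, 1)
eps = m1 * e + np.sqrt(m2 * delta0 * e) * nu  # ALD(0, delta0, tau) draws

print(np.quantile(eps, tau))                  # should be close to 0
```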
For $j = 1, \ldots, q$, we represent the unknown functions $\alpha_j(\cdot)$ in model (1) via B-spline approximations [15]. Specifically, for $j = 1, \ldots, q$, $\alpha_j(\cdot)$ is modeled as a polynomial spline of degree $t_j$ with $k_j$ interior knots $\xi_j = (\xi_{j1}, \ldots, \xi_{jk_j})^T$ satisfying $a_j < \xi_{j1} < \cdots < \xi_{jk_j} < b_j$; i.e.,
$$\alpha_j(u_{ij}) \approx \sum_{l=1}^{K_j} B_{jl}(u_{ij}) \gamma_{jl} = B_j^T(u_{ij}) \gamma_j, \quad u_{ij} \in [a_j, b_j], \qquad (5)$$
where $K_j = 1 + t_j + k_j$, $B_j(u_{ij}) = (B_{j1}(u_{ij}), \ldots, B_{jK_j}(u_{ij}))^T$ denotes the $K_j \times 1$ B-spline basis vector determined by the knot vector $\xi_j$, $\gamma_j = (\gamma_{j1}, \ldots, \gamma_{jK_j})^T$ is the corresponding $K_j \times 1$ vector of spline coefficients, and
$$a_j = \min_{1 \le i \le n} \{u_{ij}\} \quad \text{and} \quad b_j = \max_{1 \le i \le n} \{u_{ij}\} \qquad (6)$$
are boundary knots.
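For readers who wish to reproduce the basis construction in (5) and (6), the sketch below (Python; the degree and interior knots are arbitrary illustrative choices, and this is not the authors' implementation) evaluates the $K_j = 1 + t_j + k_j$ B-spline basis functions by the Cox-de Boor recursion, with the boundary knots repeated $t_j + 1$ times.

```python
import numpy as np

def bspline_basis(u, interior_knots, degree, a, b):
    """Evaluate the (1 + degree + #interior_knots) B-spline basis
    functions at the points u on [a, b] (Cox-de Boor recursion)."""
    t = np.concatenate([[a] * (degree + 1),
                        np.sort(interior_knots),
                        [b] * (degree + 1)])          # full clamped knot vector
    n_basis = len(t) - degree - 1                     # = 1 + degree + k_j
    u = np.atleast_1d(np.asarray(u, dtype=float))

    # degree-0 basis: indicators of the half-open knot intervals
    B = np.zeros((len(u), len(t) - 1))
    for l in range(len(t) - 1):
        B[:, l] = (t[l] <= u) & (u < t[l + 1])
    B[u == b, np.searchsorted(t, b, side="left") - 1] = 1.0  # include right endpoint

    # Cox-de Boor recursion up to the requested degree
    for d in range(1, degree + 1):
        B_new = np.zeros((len(u), len(t) - d - 1))
        for l in range(len(t) - d - 1):
            left = (u - t[l]) / (t[l + d] - t[l]) if t[l + d] > t[l] else 0.0
            right = (t[l + d + 1] - u) / (t[l + d + 1] - t[l + 1]) \
                if t[l + d + 1] > t[l + 1] else 0.0
            B_new[:, l] = left * B[:, l] + right * B[:, l + 1]
        B = B_new
    return B[:, :n_basis]

# example: quadratic spline (degree 2) with 3 interior knots on [0, 1]
u = np.linspace(0.0, 1.0, 7)
B = bspline_basis(u, interior_knots=[0.25, 0.5, 0.75], degree=2, a=0.0, b=1.0)
print(B.shape)          # (7, 6): K_j = 1 + 2 + 3 basis functions
print(B.sum(axis=1))    # partition of unity: each row sums to 1
```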
Consequently, model (4) with the spline specification (5) takes the following representation:
$$y_i = \rho \sum_{j=1}^{n} w_{ij} y_j + x_i^T \beta + D^T(z_i, u_i) \gamma + m_1 e_i + \sqrt{m_2 \delta_0 e_i}\, \nu_i, \quad i = 1, \ldots, n, \qquad (7)$$
where $D(z_i, u_i) = (z_{i1} B_1(u_{i1}), \ldots, z_{iq} B_q(u_{iq}))^T$ and $\gamma = (\gamma_1^T, \ldots, \gamma_q^T)^T$. We view the $e_i$ as latent variables for $i = 1, \ldots, n$ and denote $e = (e_1, \ldots, e_n)^T$. Then, the matrix formulation of (7) can be written as
$$y = \rho W y + x^T \beta + D^T(z, u) \gamma + m_1 e + E^{\frac{1}{2}} \nu, \qquad (8)$$
where $W = (w_{ij})$ denotes the $n \times n$ prespecified spatial weight matrix, $\nu = (\nu_1, \ldots, \nu_n)^T$, $E = m_2 \delta_0\, \mathrm{diag}\{e_1, \ldots, e_n\}$, and $D^T(z, u)$ is the $n \times (K_1 + \cdots + K_q)$ matrix whose $i$th row is $D^T(z_i, u_i)$. Further, $D^T(z, u) = (D_1^T(z, u), \ldots, D_q^T(z, u))$, where each $D_j^T(z, u)$ is an $n \times K_j$ submatrix.
The likelihood function associated with model (8) is given by
$$
\begin{aligned}
p(x, y, z, u \mid e, \rho, \beta, \gamma, k, \xi, \delta_0)
&\propto |I_n - \rho W| \prod_{i=1}^{n} (\delta_0 e_i)^{-\frac{1}{2}} \exp\!\left\{ -\sum_{i=1}^{n} \frac{\left(\tilde{y}_i - x_i^T\beta - D_i^T(z_i, u_i)\gamma - m_1 e_i\right)^2}{2 m_2 \delta_0 e_i} \right\} \\
&\propto \left| (I_n - \rho W) E^{-\frac{1}{2}} \right| \exp\!\left\{ -\tfrac{1}{2}\, [y - \rho W y - x^T\beta - D^T(z,u)\gamma - m_1 e]^T E^{-1} [y - \rho W y - x^T\beta - D^T(z,u)\gamma - m_1 e] \right\} \\
&= \left| A(\rho) E^{-\frac{1}{2}} \right| \exp\!\left\{ -\tfrac{1}{2}\, [A(\rho) y - x^T\beta - D^T(z,u)\gamma - m_1 e]^T E^{-1} [A(\rho) y - x^T\beta - D^T(z,u)\gamma - m_1 e] \right\}, \qquad (9)
\end{aligned}
$$
where $k = (k_1, \ldots, k_q)^T$, $\xi = (\xi_1^T, \ldots, \xi_q^T)^T$, $I_n$ denotes the $n \times n$ identity matrix, $A(\rho) = I_n - \rho W$, and $\tilde{y} = A(\rho) y = (\tilde{y}_1, \ldots, \tilde{y}_n)^T$.
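The role of the Jacobian term $|A(\rho)|$ in (9) can be made concrete with the following sketch (Python; `X` and `D` stand for the $n \times p$ and $n \times (K_1 + \cdots + K_q)$ design matrices, all names are illustrative, and this is not the authors' code), which evaluates the working log-likelihood for given parameter values using a sign-checked log-determinant.

```python
import numpy as np

def sar_ald_loglik(rho, beta, gamma, e, delta0, tau, y, X, D, W):
    """Working log-likelihood of (9): SAR transform plus the conditional
    normal density given the latent exponential variables e."""
    n = len(y)
    m1 = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))
    m2 = 2.0 / (tau * (1.0 - tau))

    A = np.eye(n) - rho * W
    sign, logdet_A = np.linalg.slogdet(A)      # log|I_n - rho W|
    if sign <= 0:
        return -np.inf                          # outside the admissible region

    y_tilde = A @ y                             # spatially filtered response
    resid = y_tilde - X @ beta - D @ gamma - m1 * e
    var = m2 * delta0 * e                       # heteroscedastic conditional variances
    return (logdet_A
            - 0.5 * np.sum(np.log(2.0 * np.pi * var))
            - 0.5 * np.sum(resid ** 2 / var))
```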

3. Bayesian Estimation

We develop a Bayesian approach employing a Gibbs sampler to fit the proposed model. We begin by describing the prior distributions for all unknown parameters; we then derive the full conditional posteriors of the latent variables and all unknown parameters, and describe a detailed MCMC sampling scheme. Furthermore, we improve the movement step of the reversible-jump MCMC algorithm so that all knot positions are relocated at each iteration, rather than adjusting a single knot as in conventional implementations.

3.1. Priors

To complete the Bayesian model specification, we assign prior distributions for all unknown parameters, including the spatial autocorrelation coefficient ρ , regression coefficient vectors β and γ , and scale parameter δ 0 , as well as the number k and location ξ of knots for the splines.
For $j = 1, \ldots, q$, we choose a Poisson prior with mean $\lambda_j$,
$$\pi(k_j) = \frac{\lambda_j^{k_j}}{k_j!} e^{-\lambda_j},$$
for the number of knots $k_j \sim \mathcal{P}(\lambda_j)$, and a conditional uniform prior on the knot positions $\xi_j$ given $k_j$,
$$\pi(\xi_j \mid k_j) = \frac{k_j!}{(b_j - a_j)^{k_j}}\, \Delta_j,$$
where $\Delta_j = I\{a_j = \xi_{j0} < \xi_{j1} < \cdots < \xi_{jk_j} < \xi_{j,k_j+1} = b_j\}$, and $a_j$ and $b_j$ are defined in Equation (6). We assign a hierarchical prior for the coefficients $\beta \sim N(0, \tau_0)$ and $\gamma_j \sim N(0, \tau_j)$, which consists of a conjugate normal prior,
$$\pi(\beta \mid \tau_0) \propto (2\pi\tau_0)^{-\frac{p}{2}} \exp\!\left\{ -\frac{\beta^T\beta}{2\tau_0} \right\} \quad \text{and} \quad \pi(\gamma_j \mid k_j, \xi_j, \tau_j) \propto (2\pi\tau_j)^{-\frac{K_j}{2}} \exp\!\left\{ -\frac{\gamma_j^T\gamma_j}{2\tau_j} \right\},$$
and an inverse-gamma prior,
$$\pi(\tau_j) \propto \tau_j^{-\frac{r_{\tau_j}}{2}-1} \exp\!\left\{ -\frac{s_{\tau_j}^2}{2\tau_j} \right\},$$
on the unknown hyperparameter $\tau_j \sim IG\!\left(\frac{r_{\tau_j}}{2}, \frac{s_{\tau_j}^2}{2}\right)$, where $r_{\tau_j}$ and $s_{\tau_j}^2$ are pre-specified hyperparameters for $j = 0, 1, \ldots, q$. This hierarchical specification effectively induces heavy-tailed t-distribution priors for $\beta$ and $\gamma_j$. In addition, we assign a conjugate inverse-gamma prior,
$$\pi(\delta_0) \propto \delta_0^{-\frac{r_0}{2}-1} \exp\!\left\{ -\frac{s_0^2}{2\delta_0} \right\},$$
on the scale parameter $\delta_0 \sim IG\!\left(\frac{r_0}{2}, \frac{s_0^2}{2}\right)$, where $r_0$ and $s_0^2$ are pre-specified hyperparameters. Throughout this study, we specify $r_0 = s_0^2 = 1$ to induce a Cauchy prior for $\delta_0$. For the hyperparameters $\tau_j$ ($j = 0, 1, \ldots, q$), we employ weakly informative inverse-gamma priors with $r_{\tau_j} = 1$ and $s_{\tau_j}^2 = 0.005$. Finally, the spatial autocorrelation coefficient $\rho$ is assigned a uniform prior, $\rho \sim U(\lambda_{\min}^{-1}, \lambda_{\max}^{-1})$, where $\lambda_{\max}$ and $\lambda_{\min}$ denote the largest and smallest eigenvalues of the standardized spatial weight matrix $W$.
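As a small illustration of the support of the uniform prior on $\rho$, the sketch below (Python, with a toy row-standardized weight matrix; not the authors' code) computes $(\lambda_{\min}^{-1}, \lambda_{\max}^{-1})$ from the eigenvalues of $W$.

```python
import numpy as np

# Toy row-standardized weight matrix (4 units on a line, nearest neighbours).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)              # row standardization

eigvals = np.linalg.eigvals(W).real            # eigenvalues of W
lam_min, lam_max = eigvals.min(), eigvals.max()
rho_lower, rho_upper = 1.0 / lam_min, 1.0 / lam_max
print(rho_lower, rho_upper)                    # support of the uniform prior on rho
```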
The joint prior of all unknown parameters is specified as follows:
$$\pi(\rho, \beta, \gamma, k, \xi, \delta_0, \tau) = \pi(\rho)\, \pi(\delta_0)\, \pi(\tau_0)\, \pi(\beta \mid \tau_0) \prod_{j=1}^{q} \pi(k_j)\, \pi(\xi_j \mid k_j)\, \pi(\tau_j)\, \pi(\gamma_j \mid k_j, \xi_j, \tau_j), \qquad (10)$$
where $\tau = (\tau_0, \tau_1, \ldots, \tau_q)$ denotes the vector of hyperparameters. Note that the hyperparameters $\tau$ are explicitly included in the parameter vector for computational tractability.

3.2. The Full Conditional Posteriors of the Latent Variables

From the likelihood function (9) together with the standard exponential density, the full conditional posterior distribution of the latent variable $e_i$, for $i = 1, \ldots, n$, is obtained as follows. Given the observed data $(y, x, z, u)$ and the remaining unknown parameters, the conditional density of $e_i$ is proportional to
$$p(e_i \mid y, x, z, u, \rho, \beta, \gamma, k, \xi, \delta_0) \propto e_i^{-\frac{1}{2}} \exp\!\left\{ -\frac{\left(\tilde{y}_i - x_i^T\beta - D_i^T(z_i, u_i)\gamma - m_1 e_i\right)^2}{2 m_2 \delta_0 e_i} - \frac{e_i}{\delta_0} \right\} \propto e_i^{-\frac{1}{2}} \exp\!\left\{ -\frac{1}{2}\left( a_e^2 e_i^{-1} + b_e^2 e_i \right) \right\}, \qquad (11)$$
where $a_e^2 = \left(\tilde{y}_i - x_i^T\beta - D_i^T(z_i, u_i)\gamma\right)^2 / (m_2 \delta_0)$ and $b_e^2 = m_1^2/(m_2\delta_0) + 2/\delta_0$. Since (11) constitutes the kernel of a generalized inverse Gaussian distribution, it follows that
$$e_i \mid y, x, z, u, \rho, \beta, \gamma, k, \xi, \delta_0 \sim GIG\!\left(\tfrac{1}{2}, a_e, b_e\right).$$
The probability density function of $GIG(\upsilon, a, b)$ is given by
$$f(x \mid \upsilon, a, b) = \frac{(b/a)^{\upsilon}}{2 K_{\upsilon}(ab)}\, x^{\upsilon - 1} \exp\!\left\{ -\frac{1}{2}\left( a^2 x^{-1} + b^2 x \right) \right\}, \quad x > 0, \; -\infty < \upsilon < +\infty, \; a, b \ge 0,$$
where $K_{\upsilon}(\cdot)$ denotes the modified Bessel function of the third kind. The availability of efficient sampling algorithms for the generalized inverse Gaussian distribution [41] ensures the computational tractability of our Gibbs sampler for Bayesian quantile regression estimation.
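In practice, the $GIG(\tfrac{1}{2}, a_e, b_e)$ draws can be obtained with standard software. The sketch below (Python, using `scipy.stats.geninvgauss`, whose density is parameterized by shapes $p$, $b$ and a scale; the mapping shown is our own and the numerical inputs are purely illustrative, not output of the paper) converts the $(\upsilon, a, b)$ parametrization used here into that form.

```python
import numpy as np
from scipy.stats import geninvgauss

def sample_gig(upsilon, a, b, rng=None, size=None):
    """Draw from GIG(upsilon, a, b) with density proportional to
    x**(upsilon - 1) * exp(-0.5 * (a**2 / x + b**2 * x)),
    mapped onto scipy's geninvgauss(p, b, scale) parametrization."""
    return geninvgauss.rvs(p=upsilon, b=a * b, scale=a / b,
                           size=size, random_state=rng)

# Example: full conditional of one latent e_i with illustrative inputs.
tau, delta0 = 0.75, 1.0
m1 = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))
m2 = 2.0 / (tau * (1.0 - tau))
resid = 0.7                                    # illustrative residual y~_i - x_i'beta - D_i'gamma
a_e = np.sqrt(resid ** 2 / (m2 * delta0))
b_e = np.sqrt(m1 ** 2 / (m2 * delta0) + 2.0 / delta0)
e_draw = sample_gig(0.5, a_e, b_e, rng=np.random.default_rng(1))
print(e_draw)
```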

3.3. The Full Conditional Posterior Distributions of the Parameters

Given the analytical complexity of the joint posterior, direct sampling is impracticable. Consequently, we develop a Metropolis–Hastings-within-Gibbs algorithm to sample from the posterior distributions. To this end, we derive the full conditional posterior distributions for all parameters and delineate the corresponding Markov chain Monte Carlo sampling procedures.
It follows from the likelihood function (9) that given the remaining unknown quantities, the conditional posterior distribution of the spatial autocorrelation coefficient ρ is given by
$$p(\rho \mid y, x, z, u, e, \beta, \gamma, k, \xi, \delta_0, \tau) \propto |A(\rho)| \exp\!\left\{ -\tfrac{1}{2}\, [A(\rho)y - x^T\beta - D^T(z,u)\gamma - m_1 e]^T E^{-1} [A(\rho)y - x^T\beta - D^T(z,u)\gamma - m_1 e] \right\}. \qquad (12)$$
Direct sampling from (12) is analytically intractable because the density lacks a standard closed form. To address this, we implement a Metropolis–Hastings algorithm [42,43] with the following procedure: generate a candidate $\rho^*$ from a Cauchy distribution truncated to $(\lambda_{\min}^{-1}, \lambda_{\max}^{-1})$, centered at the current $\rho$ with scale $\sigma_\rho$, where $\sigma_\rho$ serves as the proposal tuning parameter, and accept $\rho^*$ with probability
$$\min\!\left\{ 1,\; \frac{p(\rho^* \mid y, x, z, u, e, \beta, \gamma, k, \xi, \delta_0, \tau)}{p(\rho \mid y, x, z, u, e, \beta, \gamma, k, \xi, \delta_0, \tau)} \times C_\rho \right\},$$
where the correction factor is
$$C_\rho = \frac{\arctan[(\lambda_{\max}^{-1} - \rho)/\sigma_\rho] - \arctan[(\lambda_{\min}^{-1} - \rho)/\sigma_\rho]}{\arctan[(\lambda_{\max}^{-1} - \rho^*)/\sigma_\rho] - \arctan[(\lambda_{\min}^{-1} - \rho^*)/\sigma_\rho]}.$$
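A compact sketch of this Metropolis–Hastings update is given below (Python; `log_post_rho` stands for the log of (12) and is assumed to be supplied by the caller, and the truncated Cauchy proposal is drawn by inverse-CDF sampling). This is an illustrative reimplementation, not the authors' code.

```python
import numpy as np

def mh_update_rho(rho, log_post_rho, lam_min_inv, lam_max_inv, sigma_rho, rng):
    """One Metropolis-Hastings step for rho with a Cauchy proposal
    truncated to (lam_min_inv, lam_max_inv), centred at the current rho."""
    # inverse-CDF draw from the truncated Cauchy centred at rho
    lo = np.arctan((lam_min_inv - rho) / sigma_rho)
    hi = np.arctan((lam_max_inv - rho) / sigma_rho)
    rho_cand = rho + sigma_rho * np.tan(rng.uniform(lo, hi))

    # correction factor C_rho for the state-dependent truncation
    lo_c = np.arctan((lam_min_inv - rho_cand) / sigma_rho)
    hi_c = np.arctan((lam_max_inv - rho_cand) / sigma_rho)
    log_C = np.log(hi - lo) - np.log(hi_c - lo_c)

    log_accept = log_post_rho(rho_cand) - log_post_rho(rho) + log_C
    if np.log(rng.uniform()) < log_accept:
        return rho_cand
    return rho
```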
Combining the likelihood function (9) with the standard exponential density, we obtain the full conditional posterior of the scale parameter $\delta_0$, conditional on $(y, x, z, u, e, \rho, \beta, \gamma, k, \xi, \tau)$, as
$$p(\delta_0 \mid y, x, z, u, e, \rho, \beta, \gamma, k, \xi, \tau) \propto \delta_0^{-\frac{3n + r_0}{2} - 1} \exp\!\left\{ -\frac{1}{\delta_0}\left[ \sum_{i=1}^{n} \frac{(\tilde{y}_i - \mu_i)^2}{2 m_2 e_i} + \sum_{i=1}^{n} e_i + \frac{s_0^2}{2} \right] \right\}, \qquad (13)$$
where $\mu_i = x_i^T\beta + D_i^T(z_i, u_i)\gamma + m_1 e_i$. Since (13) is an inverse-gamma distribution, we have
$$\delta_0 \mid y, x, z, u, e, \rho, \beta, \gamma, k, \xi, \tau \sim IG\!\left( \frac{\tilde{r}_0}{2}, \frac{\tilde{s}_0^2}{2} \right),$$
where $\tilde{r}_0 = 3n + r_0$ and $\tilde{s}_0^2 = s_0^2 + 2\sum_{i=1}^{n} e_i + \sum_{i=1}^{n} (\tilde{y}_i - \mu_i)^2/(m_2 e_i)$. Thus, updating the scale parameter imposes no computational impediment within our Gibbs sampling framework.
It follows directly from the likelihood function (9) and the priors (10) that the conditional joint posterior of $(\beta, \gamma, k, \xi)$ given $(y, x, z, u, e, \rho, \delta_0, \tau)$ takes the form
$$
\begin{aligned}
p(\beta, \gamma, k, \xi \mid y, x, z, u, e, \rho, \delta_0, \tau) \propto{}& \exp\!\left\{ -\tfrac{1}{2}\, [A(\rho)y - x^T\beta - D^T(z,u)\gamma - m_1 e]^T E^{-1} [A(\rho)y - x^T\beta - D^T(z,u)\gamma - m_1 e] \right\} \\
&\times \exp\!\left\{ -\frac{\beta^T\beta}{2\tau_0} \right\} \times \prod_{j=1}^{q} \left( \frac{\lambda_j}{b_j - a_j} \right)^{k_j} \tau_j^{-\frac{K_j}{2}} \exp\!\left\{ -\frac{\gamma_j^T\gamma_j}{2\tau_j} \right\}. \qquad (14)
\end{aligned}
$$
From the conditional joint posterior (14), the full conditional posterior of $\beta$ is given by
$$
\begin{aligned}
p(\beta \mid y, x, z, u, e, \rho, \gamma, k, \xi, \delta_0, \tau)
&\propto \exp\!\left\{ -\tfrac{1}{2}\, [A(\rho)y - x^T\beta - m_1 e - D^T(z,u)\gamma]^T E^{-1} [A(\rho)y - x^T\beta - m_1 e - D^T(z,u)\gamma] \right\} \times \exp\!\left\{ -\frac{\beta^T\beta}{2\tau_0} \right\} \\
&\propto \exp\!\left\{ -\tfrac{1}{2}\, (\hat{y} - x^T\beta)^T E^{-1} (\hat{y} - x^T\beta) - \frac{\beta^T\beta}{2\tau_0} \right\} \\
&\propto |\Xi_0|^{\frac{1}{2}} \exp\!\left\{ -\tfrac{1}{2}\, (\beta - \hat{\beta})^T \Xi_0 (\beta - \hat{\beta}) \right\}, \qquad (15)
\end{aligned}
$$
where $\hat{y} = \tilde{y} - m_1 e - D^T(z,u)\gamma$, $\Xi_0 = \tau_0^{-1} I_p + x E^{-1} x^T$, and $\hat{\beta} = \Xi_0^{-1} x E^{-1} \hat{y}$. It is straightforward to generate $\beta$ from the multivariate normal density (15).
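Drawing $\beta$ from (15) amounts to a single multivariate normal draw with precision matrix $\Xi_0$. A minimal sketch (Python; `X`, `Einv_diag`, and `y_hat` are illustrative names for the design matrix, the diagonal of $E^{-1}$, and $\hat{y}$, and this is not the authors' code) using a Cholesky factorization is shown below.

```python
import numpy as np

def draw_beta(X, Einv_diag, y_hat, tau0, rng):
    """Draw beta ~ N(beta_hat, Xi0^{-1}) as in (15), with
    Xi0 = tau0^{-1} I_p + X' E^{-1} X and beta_hat = Xi0^{-1} X' E^{-1} y_hat.
    Einv_diag holds the diagonal of E^{-1} = (m2 * delta0 * e)^{-1}."""
    p = X.shape[1]
    XtEinv = X.T * Einv_diag                   # p x n, equals X' E^{-1}
    Xi0 = np.eye(p) / tau0 + XtEinv @ X        # posterior precision matrix
    beta_hat = np.linalg.solve(Xi0, XtEinv @ y_hat)

    L = np.linalg.cholesky(Xi0)                # Xi0 = L L'
    z = rng.standard_normal(p)
    # solving L' w = z gives w with covariance Xi0^{-1}
    return beta_hat + np.linalg.solve(L.T, z)
```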
We have from the joint posterior (14) that the full conditional posterior of $(\gamma_j, k_j, \xi_j)$, for $j = 1, \ldots, q$, is given by
$$
\begin{aligned}
p(\gamma_j, k_j, \xi_j \mid y, x, z, u, e, \rho, \beta, \gamma_{-j}, k_{-j}, \xi_{-j}, \delta_0, \tau)
\propto{}& \exp\!\left\{ -\tfrac{1}{2}\, [A(\rho)y - x^T\beta - m_1 e - D^T(z,u)\gamma]^T E^{-1} [A(\rho)y - x^T\beta - m_1 e - D^T(z,u)\gamma] \right\} \\
&\times \left( \frac{\lambda_j}{b_j - a_j} \right)^{k_j} \tau_j^{-\frac{K_j}{2}} \exp\!\left\{ -\frac{\gamma_j^T\gamma_j}{2\tau_j} \right\} \\
\propto{}& \exp\!\left\{ -\tfrac{1}{2}\left[ (y^* - D_j^T(z,u)\gamma_j)^T E^{-1} (y^* - D_j^T(z,u)\gamma_j) + \tau_j^{-1}\gamma_j^T\gamma_j \right] \right\} \times \tau_j^{-\frac{K_j}{2}} \left( \frac{\lambda_j}{b_j - a_j} \right)^{k_j} \\
\propto{}& |\Xi_j|^{\frac{1}{2}} \exp\!\left\{ -\tfrac{1}{2}\, (\gamma_j - \hat{\gamma}_j)^T \Xi_j (\gamma_j - \hat{\gamma}_j) \right\} \times |\Xi_j|^{-\frac{1}{2}}\, \tau_j^{-\frac{K_j}{2}} \left( \frac{\lambda_j}{b_j - a_j} \right)^{k_j} \exp\!\left\{ -\frac{S_j}{2} \right\}, \qquad (16)
\end{aligned}
$$
where $y^* = \tilde{y} - x^T\beta - m_1 e - D_{-j}^T(z,u)\gamma_{-j}$, $\Xi_j = \tau_j^{-1} I_{K_j} + D_j(z,u) E^{-1} D_j^T(z,u)$, $\hat{\gamma}_j = \Xi_j^{-1} D_j(z,u) E^{-1} y^*$, and $S_j = y^{*T} E^{-1} y^* - \hat{\gamma}_j^T \Xi_j \hat{\gamma}_j$. This gives rise to the marginal posterior distribution
$$p(k_j, \xi_j \mid y, x, z, u, e, \rho, \beta, \gamma_{-j}, k_{-j}, \xi_{-j}, \delta_0, \tau) \propto |\Xi_j|^{-\frac{1}{2}}\, \tau_j^{-\frac{K_j}{2}} \left( \frac{\lambda_j}{b_j - a_j} \right)^{k_j} \exp\!\left\{ -\frac{S_j}{2} \right\}, \qquad (17)$$
where $\gamma_{-j}$, $k_{-j}$, $\xi_{-j}$, and $D_{-j}^T(z,u)$ denote $\gamma$, $k$, $\xi$, and $D^T(z,u)$ with $\gamma_j$, $k_j$, $\xi_j$, and $D_j^T(z,u)$ excluded, respectively.
We can see from (16) that the method of composition [44] can be used to draw $\gamma_j$ from the conditional normal posterior
$$p(\gamma_j \mid y, x, z, u, e, \rho, \beta, k, \xi, \delta_0, \tau) \propto |\Xi_j|^{\frac{1}{2}} \exp\!\left\{ -\tfrac{1}{2}\, (\gamma_j - \hat{\gamma}_j)^T \Xi_j (\gamma_j - \hat{\gamma}_j) \right\}. \qquad (18)$$
As it is convenient to generate $\gamma_j$ from the multivariate normal density (18), we mainly focus on sampling from (17). Hence, we design a partially collapsed Gibbs sampler [45] and use a reversible-jump sampler [18] to update the number $k_j$ and locations $\xi_j$ of the knots. Following the standard reversible-jump algorithm [46], we implement three transition operators: birth, death, and movement steps. While preserving the birth and death steps, we modify the movement step via the hit-and-run algorithm [47] so that all knots can be relocated in each iteration, instead of only one. That is, we generate a random direction vector $c_j = (c_{j1}, \ldots, c_{jk_j})^T$, determine the feasible range $[\omega_{j1}, \omega_{j2}] = \{\omega_j : \xi_j^* = \xi_j + \omega_j c_j \ \text{with} \ a_j < \xi_{ji}^* < b_j, \ i = 1, \ldots, k_j\}$, and sample a signed distance $\omega_j$ from a Cauchy distribution with location 0 and scale $\sigma_{\xi_j}$ truncated to $[\omega_{j1}, \omega_{j2}]$. Finally, we set $\xi_j^* = \xi_j + \omega_j c_j$, reorder the knots, and accept the candidate knots $\xi_j^*$ with probability
$$\min\!\left\{ 1,\; \left( \frac{|\Xi_j|}{|\Xi_j^*|} \right)^{\frac{1}{2}} \times \exp\!\left\{ \frac{S_j - S_j^*}{2} \right\} \times \frac{\arctan(\omega_{j2}/\sigma_{\xi_j}) - \arctan(\omega_{j1}/\sigma_{\xi_j})}{\arctan[(\omega_{j2} - \omega_j)/\sigma_{\xi_j}] - \arctan[(\omega_{j1} - \omega_j)/\sigma_{\xi_j}]} \right\},$$
where $\Xi_j^*$ and $S_j^*$ denote the candidate analogues of the current-state $\Xi_j$ and $S_j$.
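The modified movement step can be sketched as follows (Python; `log_marginal` stands for the log of the marginal posterior (17) evaluated at a knot configuration and is assumed to be supplied by the caller, and the feasible range is obtained from the linear constraints $a_j < \xi_{ji} + \omega c_{ji} < b_j$). This is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def hit_and_run_knot_move(xi, a, b, sigma_xi, log_marginal, rng):
    """Relocate all interior knots at once: propose xi + omega*c along a
    random direction c, with omega drawn from a Cauchy truncated to the
    feasible range, and accept by a Metropolis-Hastings step."""
    k = len(xi)
    c = rng.standard_normal(k)
    c /= np.linalg.norm(c)                     # random direction on the sphere

    # feasible omega-range keeping every knot inside (a, b)
    with np.errstate(divide="ignore", invalid="ignore"):
        lower = np.where(c > 0, (a - xi) / c, np.where(c < 0, (b - xi) / c, -np.inf))
        upper = np.where(c > 0, (b - xi) / c, np.where(c < 0, (a - xi) / c, np.inf))
    w1, w2 = lower.max(), upper.min()          # [omega_1, omega_2]

    # omega from a Cauchy(0, sigma_xi) truncated to [omega_1, omega_2]
    lo, hi = np.arctan(w1 / sigma_xi), np.arctan(w2 / sigma_xi)
    omega = sigma_xi * np.tan(rng.uniform(lo, hi))
    xi_cand = np.sort(xi + omega * c)          # reorder the proposed knots

    # truncation correction (reverse proposal is centred at the candidate)
    lo_b, hi_b = np.arctan((w1 - omega) / sigma_xi), np.arctan((w2 - omega) / sigma_xi)
    log_accept = (log_marginal(xi_cand) - log_marginal(xi)
                  + np.log(hi - lo) - np.log(hi_b - lo_b))
    if np.log(rng.uniform()) < log_accept:
        return xi_cand
    return xi
```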
Evidently, the hyperparameters $\tau_j$ ($j = 0, 1, \ldots, q$) exhibit mutual posterior independence. Their conditional posterior distributions follow inverse-gamma forms:
$$p(\tau_0 \mid \beta) \propto \tau_0^{-\frac{p + r_{\tau_0}}{2} - 1} \exp\!\left\{ -\frac{s_{\tau_0}^2 + \beta^T\beta}{2\tau_0} \right\}, \qquad (19)$$
and, for $j = 1, \ldots, q$,
$$p(\tau_j \mid \gamma_j, k, \xi) \propto \tau_j^{-\frac{K_j + r_{\tau_j}}{2} - 1} \exp\!\left\{ -\frac{s_{\tau_j}^2 + \gamma_j^T\gamma_j}{2\tau_j} \right\}, \qquad (20)$$
which can be sampled directly from (19) and (20), respectively.

3.4. Sampling

Bayesian estimates Θ = { ρ , β , γ , k , ξ , δ 0 , τ } are obtained via MCMC simulations from the full conditional posterior distributions of all parameters. The detailed pseudocode for our MCMC algorithm (Algorithm 1) is as follows:
Algorithm 1 The pseudocode of the MCMC sampling scheme
Input: Observed data $\{(y_i, x_i, z_i, u_i)\}_{i=1,\ldots,n}$.
Initialization: Set $t = 0$ with initial states $e^{(0)}$ and $\Theta^{(0)}$.
MCMC iterations: For $t = 1$ to $T$, given the current states $e^{(t-1)}$ and $\Theta^{(t-1)}$, sequentially update:
(a) Sample $e_i^{(t)}$ from $p(e_i \mid y_i, x_i, z_i, u_i, \Theta^{(t-1)})$ for $i = 1, \ldots, n$.
(b) Sample $\Theta^{(t)}$ from $p(\Theta \mid y, x, z, u, e^{(t-1)})$. Due to its complexity, step (b) is further decomposed into:
- Generate $\delta_0^{(t)}$ from $p(\delta_0 \mid y, x, z, u, e^{(t-1)}, \rho^{(t-1)}, \beta^{(t-1)}, \theta^{(t-1)})$;
- Generate $\rho^{(t)}$ from $p(\rho \mid y, x, z, u, e^{(t-1)}, \beta^{(t-1)}, \delta_0^{(t-1)}, \theta^{(t-1)})$;
- Generate $\beta^{(t)}$ from $p(\beta \mid y, x, z, u, e^{(t-1)}, \rho^{(t-1)}, \delta_0^{(t-1)}, \theta^{(t-1)})$, where $\theta^{(t-1)} = (\gamma^{(t-1)}, k^{(t-1)}, \xi^{(t-1)}, \tau^{(t-1)})$;
- For $j = 1, \ldots, q$, generate $(k_j^{(t)}, \xi_j^{(t)})$ from $p(k_j, \xi_j \mid y, x, z, u, e^{(t-1)}, \rho^{(t-1)}, \gamma_{-j}^{(t-1)}, k_{-j}^{(t-1)}, \xi_{-j}^{(t-1)}, \delta_0^{(t-1)}, \tau^{(t-1)})$;
- Sample $\gamma_j^{(t)}$ from $p(\gamma_j \mid y, x, z, u, e^{(t-1)}, \rho^{(t-1)}, \gamma_{-j}^{(t-1)}, \beta^{(t-1)}, k^{(t-1)}, \xi^{(t-1)}, \delta_0^{(t-1)}, \tau^{(t-1)})$;
- Sample $\tau_j^{(t)}$ from $p(\tau_j \mid \gamma^{(t-1)}, k^{(t-1)}, \xi^{(t-1)})$;
- Sample $\tau_0^{(t)}$ from $p(\tau_0 \mid \beta^{(t-1)})$.
Output: A sequence of MCMC draws $\{\Theta^{(t)}\}_{t=1}^{T}$ from the posterior distribution.

4. Monte Carlo Simulations

In this section, Monte Carlo simulations are used to assess the finite sample performance of the proposed model and estimation method. We assess the performance of the estimated varying-coefficient functions $\hat{\alpha}(\cdot)$ using the mean absolute deviation error (MADE) and the global mean absolute deviation error (GMADE), defined as
$$\mathrm{MADE}_j = \frac{1}{n_{\mathrm{grid}}} \sum_{i=1}^{n_{\mathrm{grid}}} \left| \hat{\alpha}_j(u_i) - \alpha_j(u_i) \right| \quad \text{and} \quad \mathrm{GMADE} = \frac{1}{q} \sum_{j=1}^{q} \mathrm{MADE}_j,$$
evaluated at $n_{\mathrm{grid}} = 100$ equidistant points $\{u_i\}_{i=1}^{n_{\mathrm{grid}}}$ spanning $[a_j, b_j]$.
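For concreteness, the two accuracy measures can be computed as in the short sketch below (Python; the fitted curves in the example are synthetic stand-ins, and the true functions are those of the simulation design described next).

```python
import numpy as np

def made(alpha_hat_grid, alpha_true_grid):
    """Mean absolute deviation error of one varying-coefficient function
    evaluated on a common grid of n_grid points."""
    return np.mean(np.abs(alpha_hat_grid - alpha_true_grid))

def gmade(alpha_hat_grids, alpha_true_grids):
    """Global MADE: average of MADE_j over the q functions."""
    return np.mean([made(h, t) for h, t in zip(alpha_hat_grids, alpha_true_grids)])

# example with the two functions used in the simulation design below
u = np.linspace(0.0, 1.0, 100)
alpha1 = 2.0 * np.cos(2.0 * np.pi * u) + 1.0
alpha2 = 0.5 * np.exp(-2.0 * (2.0 * u - 1.0) ** 2) + 2.0 * u
noisy1, noisy2 = alpha1 + 0.1, alpha2 - 0.05      # stand-ins for fitted curves
print(made(noisy1, alpha1), gmade([noisy1, noisy2], [alpha1, alpha2]))
```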
We conducted simulation studies using data generated from the following model:
$$y_i = \rho \sum_{j=1}^{n} w_{ij} y_j + x_i^T \beta + z_i^T \alpha(u_i) + \epsilon_i, \quad i = 1, \ldots, n, \qquad (21)$$
with covariates $x_i = (x_{i1}, x_{i2})^T \sim N_2(0, \Sigma)$, where
$$\Sigma = \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix},$$
and $z_i = (z_{i1}, z_{i2})^T$ with $z_{ik} \overset{\text{i.i.d.}}{\sim} U(-2, 0)$ for $k = 1, 2$; the scalar smoothing variable is $u_i \overset{\text{i.i.d.}}{\sim} U(0, 1)$. The varying-coefficient functions are $\alpha(u) = (\alpha_1(u), \alpha_2(u))^T$ with $\alpha_1(u) = 2\cos(2\pi u) + 1$ and $\alpha_2(u) = 0.5\exp\{-2(2u-1)^2\} + 2u$, and the regression coefficients are $\beta = (1, -1)^T$. The random error is $\epsilon_i = \varepsilon_i - \Phi^{-1}(\tau)$, where $\Phi$ is the standard normal CDF of $\varepsilon_i$, which ensures $P(\epsilon_i \le 0) = \tau$; by subtracting the $\tau$th quantile, we obtain a random error $\epsilon_i$ whose $\tau$th quantile is equal to zero. For comparison, we specify two types of spatial weight matrices to study the impact of $W$ on estimation performance: the Rook weight matrix [2] with $n \in \{100, 400\}$ and the Case weight matrix [3] with $(r, m) \in \{(20, 5), (80, 5)\}$, respectively. For each configuration, we consider spatial dependence parameters $\rho \in \{0.2, 0.5, 0.8\}$ at quantile levels $\tau \in \{0.25, 0.5, 0.75\}$, capturing weak to strong spatial autocorrelation.
According to the aforementioned design, we conducted 1000 replications for each simulation configuration. We set the hyperparameters $(r_0, s_0^2, r_{\tau_j}, s_{\tau_j}^2) = (1, 1, 1, 0.005)$ for $j = 0, 1, \ldots, q$ and applied quadratic B-splines in our computations. The initial states of the Markov chain were drawn from the respective prior distributions of the unknown parameters. The tuning parameters $\sigma_\rho$ and $\sigma_{\xi_j}$ ($j = 1, \ldots, q$) were adjusted so that the resulting acceptance rates were around 25%. For each replication, we generated 6000 MCMC draws and discarded the first 3000 as a burn-in period. Based on the last 3000 draws, we computed the posterior mean (Mean), standard error (SE), and the 95% posterior credible intervals (95% CI) of the parameters across the 1000 replications. Furthermore, we calculated the posterior standard deviation (SD) and compared it with the mean of the posterior SE. Following LeSage and Pace [48], we employed scalar summary measures for the marginal effects, derived as $\partial y / \partial x_j^T = (I_n - \rho W)^{-1} I_n \beta_j$ under the spatial specification in model (21). Total effects comprise direct effects (the mean of the diagonal entries) and indirect effects (the mean of the column-wise or row-wise sums of the off-diagonal entries, excluding the diagonal entries).
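The scalar effect summaries can be computed as in the following sketch (Python, following the LeSage and Pace decomposition; the inputs are illustrative and the example weight matrix is a toy one).

```python
import numpy as np

def sar_effects(rho, beta_j, W):
    """Direct, indirect, and total effects of covariate x_j in a SAR model:
    the effect matrix is (I - rho*W)^{-1} * beta_j (LeSage and Pace, 2009)."""
    n = W.shape[0]
    S = np.linalg.solve(np.eye(n) - rho * W, np.eye(n)) * beta_j
    direct = np.trace(S) / n                          # mean diagonal element
    total = S.sum() / n                               # mean row sum
    indirect = total - direct                         # mean off-diagonal row sum
    return direct, indirect, total

# example: with a row-standardized W, the total effect equals beta_j/(1 - rho)
W = np.full((4, 4), 1.0 / 3.0); np.fill_diagonal(W, 0.0)
print(sar_effects(0.5, 1.0, W))                       # total effect = 2.0
```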
To assess MCMC convergence, we ran five parallel Markov chains with distinct initial values in a Monte Carlo experiment using an arbitrarily selected replication. The sampled traces of some of the unknown quantities, including model parameters and fitted varying-coefficient functions evaluated at grid points, are plotted in Figure 1 (only a replication with $(r, m) = (80, 5)$ and $\rho = 0.5$ at the $\tau = 0.5$ quantile is displayed). It is clear that the five parallel MCMC chains are adequately mixed. We further calculated the "potential scale reduction factor" $\hat{R}$ for all model parameters and the fitted varying-coefficient functions at 10 equidistant grid points based on the five parallel sequences. Figure 2 (for $(\rho, \tau) = (0.5, 0.5)$) shows the evolution of the $\hat{R}$ values over the iterations. Convergence is achieved within 2000 burn-in iterations, as all $\hat{R}$ estimates stabilize below the recommended threshold of 1.2, following the suggestion of Gelman and Rubin [49].
Figure 3a shows the boxplots of the MADE and GMADE values with $\rho = 0.5$ at the $\tau = 0.5$ quantile under sample size $n = 100$. The medians are $\mathrm{MADE}_1 = 0.2199$, $\mathrm{MADE}_2 = 0.1745$, and $\mathrm{GMADE} = 0.2043$ under the Rook weight matrix, and $\mathrm{MADE}_1 = 0.2209$, $\mathrm{MADE}_2 = 0.1765$, and $\mathrm{GMADE} = 0.1995$ under the Case weight matrix. The boxplots of the MADE and GMADE with $\rho = 0.5$ at the $\tau = 0.5$ quantile under sample size $n = 400$ are displayed in Figure 3b. The corresponding medians are $\mathrm{MADE}_1 = 0.1193$, $\mathrm{MADE}_2 = 0.0882$, and $\mathrm{GMADE} = 0.1060$ under the Rook weight matrix, and $\mathrm{MADE}_1 = 0.1177$, $\mathrm{MADE}_2 = 0.0886$, and $\mathrm{GMADE} = 0.1057$ under the Case weight matrix. These results indicate that the MADE and GMADE values of the varying-coefficient functions decrease as the sample size increases, which suggests that the estimation of the unknown varying-coefficient functions improves as the sample size grows. Both the Case and Rook weight matrices yield reasonable estimates of the varying-coefficient functions.
The simulation results for parameter estimation are reported in Table 1 and Table 2. They demonstrate both minimal bias in the parameter estimates (the mean estimates closely align with the true values) and reliable uncertainty quantification (the SEs approximate the empirical SDs), confirming high estimation accuracy. Moreover, the effects of the covariates on the response differ across quantile points of the response distribution. Estimation precision improves with larger sample sizes under identical spatial weight matrices. Comparing the estimates of the spatial coefficient $\rho$ at the same quantile level and sample size, we observe that the estimation of $\rho$ becomes more accurate as $\rho$ increases, and that the results under the Case weight matrix are slightly better than those under the Rook weight matrix. Furthermore, Table 1 and Table 2 indicate that all parameter estimators produce larger estimation errors for the total effects under strong positive spatial dependence, regardless of the sample size. Repeating the above experiments with different starting values produced similar results, confirming the robustness of the proposed Gibbs sampler.
Figure 4 depicts the estimated varying-coefficient functions at different quantile points, together with their 95% pointwise posterior credible intervals, from a typical sample under $\rho = 0.5$ for sample sizes $n = 100$ and $n = 400$, respectively. A typical sample is selected so that its GMADE value equals the median of the 1000 replications. The results show that the estimation of the varying-coefficient functions improves as the sample size increases, while the effects exhibit quantile-specific heterogeneity across the response distribution.
The simulations were implemented in C++ on an Intel(R) Core(TM) i7-8750H processor (2.20 GHz PC). The mean CPU times per replication reached 5 s ( n = 100 ) and 25 s ( n = 400 ). The implementation code is available from the authors upon request.
In addition, to compare the performance of our Bayesian quantile regression estimator (BQRE) with the instrumental variable quantile regression estimator (IVQRE) of [26], we follow [26] and generate the spatial weight matrix $W = (w_{ij})$ with $w_{ij} = 0.3^{|i-j|}$ for $i, j = 1, \ldots, n$, followed by row normalization.
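This weight design can be generated as in the sketch below (Python; setting the diagonal to zero before row normalization is our assumption, since the text does not state how the diagonal is treated).

```python
import numpy as np

def make_power_decay_W(n, base=0.3):
    """Spatial weight matrix with w_ij = base**|i-j|, zero diagonal
    (assumed), then row-normalized so each row sums to one."""
    idx = np.arange(n)
    W = base ** np.abs(idx[:, None] - idx[None, :])
    np.fill_diagonal(W, 0.0)                  # assumption: no self-neighbours
    return W / W.sum(axis=1, keepdims=True)

W = make_power_decay_W(100)
print(W.shape, np.allclose(W.sum(axis=1), 1.0))   # (100, 100) True
```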
Example 1. 
The samples were generated as follows:
$$y_i = \rho \sum_{j=1}^{n} w_{ij} y_j + x_i \beta + z_{i1} \alpha_1(u_i) + z_{i2} \alpha_2(u_i) + \epsilon_i, \quad i = 1, \ldots, n, \qquad (22)$$
where $\beta = 1$, $\rho = 0.5$, $\alpha_1(u) = 1 - 0.5u$, and $\alpha_2(u) = 1 + \sin(2\pi u)$. The error term is specified as $\epsilon_i = \varepsilon_i - \Phi^{-1}(\tau)$, with $\Phi$ being the standard normal CDF of $\varepsilon_i$. Independent covariates are simulated such that $x_i \overset{\text{i.i.d.}}{\sim} N(0, 1)$, $u_i \overset{\text{i.i.d.}}{\sim} U[0, 2]$, and $z_i = (z_{i1}, z_{i2})$ is bivariate with $z_{i1} \overset{\text{i.i.d.}}{\sim} U[-2, 2]$ and $z_{i2} \overset{\text{i.i.d.}}{\sim} N(1, 1)$. For each configuration, 1000 simulation replications yield the bias and RMSE (in parentheses) of the parameter estimators, along with the MADE [in brackets] for the accuracy of the varying-coefficient functions. Table 3 compares the QR, IVQR, and BQR methods under homoscedastic errors.
Example 2. 
The samples were generated as follows:
$$y_i = \rho \sum_{j=1}^{n} w_{ij} y_j + x_i \beta + z_{i1} \alpha_1(u_i) + z_{i2} \alpha_2(u_i) + (1 + 0.5 z_{i1}) \epsilon_i, \quad i = 1, \ldots, n, \qquad (23)$$
where $\rho = 0.5$, $\beta = 1$, $\alpha_1(u) = 1 - 0.5u$, and $\alpha_2(u) = 0.5u^2 - u + 1$. The error term follows $\epsilon_i = \varepsilon_i - \Phi^{-1}(\tau)$, where $\Phi$ denotes the standard normal CDF of $\varepsilon_i \overset{\text{i.i.d.}}{\sim} N(0, 1)$. The covariates $x_i \overset{\text{i.i.d.}}{\sim} N(0, 1)$ and $u_i \overset{\text{i.i.d.}}{\sim} U(0, 2)$, and $z_i = (z_{i1}, z_{i2})^T$ is bivariate, where $z_{i1} \overset{\text{i.i.d.}}{\sim} N(0, 1)$ and $z_{i2} \overset{\text{i.i.d.}}{\sim} U[-2, 2]$ are generated independently.
Table 4 reports the comparison of the QR, IVQR, and BQR estimation methods with a heteroscedastic error term. The results in Table 3 and Table 4 indicate that the bias, RMSE, and MADE of all estimators decrease as the sample size increases. At the same time, the influence of the explanatory variables on the response differs significantly across quantile levels of the conditional distribution. Moreover, the BQR estimator yields a somewhat lower bias, RMSE, and MADE for both the parameters and the varying-coefficient functions than the QR and IVQR estimators at the same quantile level and sample size. Although QR and IVQR still produce reasonable estimates, the comparative advantage of BQR persists, supporting its use for spatial quantile regression.

5. Application

To demonstrate the proposed method, we analyzed a real data set: the well-known Boston housing data. The data were collected in the Boston Standard Metropolitan Statistical Area in 1970. The set contains information on 506 houses from a variety of locations and is available from the spData package in R developed by Bivand et al. [50]. Since our model and method investigate not only the effects of the covariates but also the spatial effects of the response variable at different quantile points, it is interesting to examine the socioeconomic drivers of housing price variation.
In this application, we mainly considered the following influencing factors for the median value of owner-occupied homes (MEDV): the pupil–teacher ratio by town school district (PTRATIO), the full-value property tax rate (TAX), the percentage of the lower-status population (LSTAT), the per capita crime rate by town (CRIM), and the nitric oxides concentration in parts per 10 million (NOX) in Boston. In order to perform fitting at different lower-status percentages in our model, we took LSTAT as the index variable. Meanwhile, MEDV, PTRATIO, and TAX were log-transformed to mitigate scale disparities induced by differences in domain magnitude. These considerations motivated the specification of the PLVCSAR model:
$$y_i = \rho \sum_{j=1}^{n} w_{ij} y_j + x_{i1} \beta_1 + x_{i2} \beta_2 + z_{i1} \alpha_1(u_i) + z_{i2} \alpha_2(u_i) + \epsilon_i, \quad i = 1, \ldots, n, \qquad (24)$$
where the response variable is $y_i = \log(\mathrm{MEDV}_i)$, $x_{i1} = \log(\mathrm{PTRATIO}_i)$, $x_{i2} = \log(\mathrm{TAX}_i)$, $u_i = \mathrm{LSTAT}_i$, $z_{i1} = \mathrm{CRIM}_i$, and $z_{i2} = \mathrm{NOX}_i$. We adopted the approach of Sun et al. [21], constructing spatial weights from the Euclidean distances between the housing coordinates (longitude, latitude). That is, the spatial weight is
$$w_{ij} = \exp\{ -\| s_i - s_j \| \} \Big/ \sum_{k \ne i} \exp\{ -\| s_i - s_k \| \},$$
where $s_i = (\mathrm{Lon}_i, \mathrm{Lat}_i)$ and $\|\cdot\|$ is the Euclidean norm. Based on the above design, we conducted 1000 replications of each experiment. We executed five independent runs of the proposed Gibbs sampler with different initial states and, in each experiment, generated 10,000 sampled values after a burn-in of 20,000 iterations. Trace plots for some of the parameters are shown in Figure 5 (only a replication at the median quantile, $\tau = 0.5$, is displayed), which indicate satisfactory mixing across the parallel sequences. Using the five parallel sequences, we computed the "potential scale reduction factor" $\hat{R}$, which is plotted in Figure 6 (for the $\tau = 0.5$ quantile). These diagnostics confirm Markov chain convergence within the 20,000-iteration burn-in period.
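The distance-based weights can be constructed as in the following sketch (Python; `lon` and `lat` stand for the coordinate columns of the Boston data, the toy coordinates are random, and treating longitude/latitude directly as planar Euclidean coordinates is a simplifying assumption).

```python
import numpy as np

def make_distance_W(lon, lat):
    """w_ij = exp(-||s_i - s_j||) / sum_{k != i} exp(-||s_i - s_k||),
    with s_i = (lon_i, lat_i) and the Euclidean norm."""
    S = np.column_stack([lon, lat])
    diff = S[:, None, :] - S[None, :, :]              # pairwise coordinate differences
    dist = np.sqrt((diff ** 2).sum(axis=-1))          # Euclidean distances
    W = np.exp(-dist)
    np.fill_diagonal(W, 0.0)                          # exclude k = i
    return W / W.sum(axis=1, keepdims=True)

# toy coordinates (illustrative, not the Boston data)
rng = np.random.default_rng(2)
W = make_distance_W(rng.uniform(-71.3, -70.8, 10), rng.uniform(42.0, 42.4, 10))
print(W.shape, np.allclose(W.sum(axis=1), 1.0))
```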
Table 5 lists the estimated parameters, their standard errors, and the 95% posterior credible intervals at different quantile points. The results reveal significant spatial autocorrelation, $\hat{\rho} = 0.5366$ ($\mathrm{SD} = 0.0033$) at $\tau = 0.5$, confirming statistically significant positive spatial spillover effects in the housing market. More interestingly, the spatial effect increases slightly with the quantile level; that is, the spatial effect changes across quantile points. On the other hand, the estimated regression coefficients of the two explanatory variables PTRATIO and TAX at the $\tau = 0.5$ quantile are $\hat{\beta}_1 = 0.5272$ and $\hat{\beta}_2 = 0.2461$, respectively, implying that PTRATIO and TAX have a positive and significant effect on housing prices. The effect of PTRATIO increases with the quantile level, while the effect of TAX decreases. This quantile-dependent heterogeneity demonstrates the differential mechanisms through which the covariates influence housing markets along the price distribution.
The estimated varying-coefficient functions at different quantile points, along with their corresponding 95% pointwise posterior credible intervals, are presented in Figure 7 and exhibit clear nonlinear characteristics. At the $\tau = 0.5$ quantile, $\alpha_1(u)$ reaches a local maximum of 0.0057 at around $u = 4.1264$, while $\alpha_2(u)$ attains a local maximum of 0.8079 at around $u = 1.6546$ and a local minimum of 0.2875 at around $u = 4.9988$. This provides evidence of significant nonlinear relationships: CRIM exhibits an inverted U-shaped effect on housing prices, whereas NOX demonstrates a U-shaped pattern. Furthermore, the varying-coefficient functions show distinct quantile-specific modulation across the price distribution, indicating differential response mechanisms at different market valuation tiers.

6. Conclusions

Spatial data analysis, ubiquitous in empirical research, frequently employs spatial autoregressive (SAR) frameworks. To address specification risks in conventional SAR modeling, we propose a Bayesian quantile regression approach integrating partially linear varying coefficient (PLVC) structures with SAR components. This unified framework simultaneously captures linear and nonlinear covariate effects across response quantiles while enhancing predictive accuracy. Methodologically, we develop a fully Bayesian free-knot spline estimation procedure coupled with an optimized Metropolis–Hastings-within-Gibbs sampler for posterior exploration. Computational efficiency is achieved through a modified reversible-jump MCMC algorithm incorporating an adaptive movement step to accelerate chain convergence. Monte Carlo simulations demonstrate the BQR estimator's robustness to spatial weight matrix specifications and its superior performance compared with the conventional QR and IVQR methods. While the alternative estimators produce reasonable results, our approach performs better across quantile levels in finite-sample scenarios. Empirical validation through real-world data analysis further substantiates the methodology's practical utility.
The proposed method can analyze spatial data exhibiting either homoscedastic or heteroscedastic errors without requiring a specification of the error distribution. A few issues still merit further research. While the PLVCSAR model is used herein to evaluate covariate effects on the response, other specifications, including partially linear single-index SAR and partially linear additive SAR models, could also be considered. Characterizing the asymptotic properties of the spatial quantile estimators under heterogeneous spatial dependence regimes is another direction worth pursuing. In addition, it would be interesting to study model selection and variable selection for high-dimensional predictors. Finally, the working error distribution in model-based quantile regression is the asymmetric Laplace distribution, which has the advantage of convenient computation but has limitations in accurately capturing uncertainty. Recent alternatives, such as the generalized family of error distributions proposed by Yan et al. [51], may provide a more robust modeling approach and constitute a promising direction for future research.

Author Contributions

Supervision, R.C. and Z.C.; software, R.C. and Z.C.; methodology, R.C. and Z.C.; writing—original draft preparation, R.C. and Z.C.; writing—review and editing, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

The project was supported by the National Social Science Foundation of China (Series number: 24BTJ067) and Open Fund of Xiamen Software Supply Chain Security Public Technology Service Platform (3502Z20231042).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in Reference [50].

Acknowledgments

The authors are deeply grateful to the editors and anonymous referees for their careful reading and insightful comments, which helped to significantly improve this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cliff, A.D.; Ord, J.K. Spatial Autocorrelation; Pion Ltd.: London, UK, 1973. [Google Scholar]
  2. Anselin, L. Spatial Econometrics: Methods and Models; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988. [Google Scholar]
  3. Case, A.C. Spatial patterns in household demand. Econometrica 1991, 59, 953–965. [Google Scholar] [CrossRef]
  4. Cressie, N. Statistics for Spatial Data; John Wiley and Sons: New York, NY, USA, 1993. [Google Scholar]
  5. LeSage, J.P. Bayesian estimation of spatial autoregressive models. Int. Reg. Sci. Rev. 1997, 20, 113–129. [Google Scholar] [CrossRef]
  6. Kazar, B.M.; Celik, M. Spatial Autoregressive Model; Springer Press: New York, NY, USA, 2012. [Google Scholar]
  7. Basile, R. Regional economic growth in Europe: A semiparametric spatial dependence approach. Pap. Reg. Sci. 2008, 87, 527–544. [Google Scholar] [CrossRef]
  8. Su, L.; Jin, S. Profile quasi-maximum likelihood estimation of partially linear spatial autoregressive models. J. Econom. 2010, 157, 18–33. [Google Scholar] [CrossRef]
  9. Bellman, R.E. Adaptive Control Processes; Princeton University Press: Princeton, NJ, USA, 1961. [Google Scholar]
  10. Hastie, T.J.; Tibshirani, R.J. Varying-coefficient models. J. R. Stat. Soc. Ser. B 1993, 55, 757–796. [Google Scholar] [CrossRef]
  11. Brunsdon, C.; Fotheringham, A.S.; Charlton, M.E. Geographically weighted regression: A method for exploring spatial nonstationarity. Geogr. Anal. 1996, 28, 281–298. [Google Scholar] [CrossRef]
  12. Mu, J.R.; Wang, G.N.; Wang, L. Estimation and inference in spatially varying coefficient models. Environmetrics 2018, 29, e2485. [Google Scholar] [CrossRef]
  13. Gelfand, A.E.; Kim, H.J.; Sirmans, C.F.; Banerjee, S. Spatial modeling with spatially varying coefficient processes. J. Am. Stat. Assoc. 2003, 98, 387–396. [Google Scholar] [CrossRef]
  14. Li, S.S.; Chen, J.B.; Chen, D.Q. PQMLE and Generalized F-Test of Random Effects Semiparametric Model with Serially and Spatially Correlated Nonseparable Error. Fractal Fract. 2024, 8, 386. [Google Scholar] [CrossRef]
  15. de Boor, C. A Practical Guide to Splines; Springer: New York, NY, USA, 1978. [Google Scholar]
  16. Chen, Z.Y.; Chen, J.B. Bayesian analysis of partially linear, single-index, spatial autoregressive models. Comput. Stat. 2021, 37, 327–353. [Google Scholar] [CrossRef]
  17. Eilers, P.H.C.; Marx, B.D. Flexible smoothing with B-splines and penalties. Stat. Sci. 1996, 11, 89–121. [Google Scholar] [CrossRef]
  18. Green, P. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 1995, 82, 711–732. [Google Scholar] [CrossRef]
  19. Holmes, C.C.; Mallick, B.K. Generalized nonlinear modeling with multivariate free-knot regression splines. J. Am. Stat. Assoc. 2003, 98, 352–368. [Google Scholar] [CrossRef]
  20. Wang, H.-B. Bayesian estimation and variable selection for single index models. Comput. Stat. Data Anal. 2009, 53, 2617–2627. [Google Scholar] [CrossRef]
  21. Sun, Y.; Yan, H.J.; Zhang, W.Y.; Lu, Z. A Semiparametric spatial dynamic model. Ann. Stat. 2014, 42, 700–727. [Google Scholar] [CrossRef]
  22. Xu, X.B.; Lee, L.F. A spatial autoregressive model with a nonlinear transformation of the dependent variable. J. Econom. 2015, 186, 1–18. [Google Scholar] [CrossRef]
  23. Hoshino, T. Semiparametric spatial autoregressive models with endogenous regressors: With an application to crime data. J. Bus. Econ. Stat. 2017, 36, 160–172. [Google Scholar] [CrossRef]
  24. Liu, T.; Xu, D.K.; Ke, S.Q. A Semiparametric Bayesian Approach to Heterogeneous Spatial Autoregressive Models. Entropy 2024, 26, 498. [Google Scholar] [CrossRef]
  25. Koenker, R.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50. [Google Scholar] [CrossRef]
  26. Dai, X.W.; Li, S.Y.; Tian, M.Z. Quantile regression for partially linear varying coefficient spatial autoregressive models. Commun. Stat.-Simul. Comput. 2016, 53, 4396–4411. [Google Scholar] [CrossRef]
  27. Yu, K.; Moyeed, R.A. Bayesian quantile regression. Stat. Probab. Lett. 2001, 54, 437–447. [Google Scholar] [CrossRef]
  28. Yuan, Y.; Yin, G.S. Bayesian quantile regression for longitudinal studies with nonignorable missing data. Biometrics 2010, 66, 105–114. [Google Scholar] [CrossRef] [PubMed]
  29. Bernardi, M.; Gayraud, G.; Petrella, L. Bayesian tail risk interdependence using quantile regression. Bayesian Anal. 2015, 10, 553–603. [Google Scholar] [CrossRef]
  30. Alhamzawi, R.; Yu, K.M. Variable selection in quantile regression via Gibbs sampling. J. Appl. Stat. 2012, 39, 799–813. [Google Scholar] [CrossRef]
  31. Yang, Y.W.; He, X.M. Bayesian empirical likelihood for quantile regression. Ann. Stat. 2012, 40, 1102–1131. [Google Scholar] [CrossRef]
  32. Lee, D.; Neocleous, T. Bayesian quantile regression for count data with application to environmental epidemiology. J. R. Stat. Soc. Ser. C 2010, 59, 905–920. [Google Scholar] [CrossRef]
  33. King, C.; Song, J.J. Bayesian spatial quantile regression for areal count data, with application on substitute care placements in Texas. J. Appl. Stat. 2019, 46, 580–597. [Google Scholar] [CrossRef]
  34. Lum, K.; Gelfand, A.E. Spatial quantile multiple regression using the asymmetric Laplace process. Bayesian Anal. 2012, 7, 235–258. [Google Scholar] [CrossRef]
  35. Reich, B.J.; Fuentes, M.; Dunson, D.B. Bayesian spatial quantile regression. J. Am. Stat. Assoc. 2012, 106, 6–20. [Google Scholar] [CrossRef]
  36. Castillo-Mateo, J.; Asín, J.; Cebrián, A.C.; Gelfand, A.E.; Abaurrea, J. Spatial quantile autoregression for season within year daily maximum temperature data. Ann. Appl. Stat. 2023, 17, 2305–2325. [Google Scholar] [CrossRef]
  37. Chen, X.; Tokdar, S.T. Joint quantile regression for spatial data. J. R. Stat. Soc. Ser. Stat. Methodol. 2021, 83, 826–852. [Google Scholar] [CrossRef]
  38. Castillo-Mateo, J.; Gelfand, A.E.; Asín, J.; Cebrián, A.C.; Abaurrea, J. Bayesian joint quantile autoregression. Test 2024, 33, 335–357. [Google Scholar] [CrossRef]
  39. Chen, Z.Y.; Chen, M.H.; Ju, F.Y. Bayesian P-splines quantile regression of partially linear varying coefficient spatial autoregressive models. Symmetry 2022, 14, 1175. [Google Scholar] [CrossRef]
  40. Kozumi, H.; Kobayashi, G. Gibbs sampling methods for bayesian quantile regression. J. Stat. Comput. Simul. 2011, 81, 1565–1578. [Google Scholar] [CrossRef]
  41. Dagpunar, J.S. An easily implemented generalised inverse Gaussian generator. Commun. Stat.-Simul. Comput. 1989, 18, 703–710. [Google Scholar]
  42. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  43. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1091. [Google Scholar] [CrossRef]
  44. Tanner, M.A. Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions, 2nd ed.; Springer: New York, NY, USA, 1993. [Google Scholar]
  45. Liu, J.S. The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem. J. Am. Stat. Assoc. 1994, 89, 958–966. [Google Scholar] [CrossRef]
  46. Poon, W.-Y.; Wang, H.-B. Bayesian analysis of generalized partially linear single-index models. Comput. Stat. Data Anal. 2013, 68, 251–261. [Google Scholar] [CrossRef]
  47. Chen, M.-H.; Schmeiser, B.W. General hit-and-run Monte Carlo sampling for evaluating multidimensional integrals. Oper. Res. Lett. 1996, 19, 161–169. [Google Scholar] [CrossRef]
  48. LeSage, J.P.; Pace, R.K. Introduction to Spatial Econometrics; CRC Press: Boca Raton, FL, USA; London, UK; New York, NY, USA, 2009. [Google Scholar]
  49. Gelman, A.; Rubin, D.B. Inference from iterative simulation using multiple sequences. Stat. Sci. 1992, 7, 457–511. [Google Scholar] [CrossRef]
  50. Bivand, R.; Nowosad, J.; Lovelace, R. spData: Datasets for Spatial Analysis. R Package Version 2.3.4. 2025. Available online: https://CRAN.R-project.org/package=spData (accessed on 1 May 2025).
  51. Yan, Y.F.; Zheng, X.T.; Kottas, A. A New Family of Error Distributions for Bayesian Quantile Regression. Bayesian Anal. 2025, 1, 1–29. [Google Scholar] [CrossRef]
Figure 1. Trace plots of five parallel sequences corresponding to different starting values for parts of the unknown quantities.
Figure 2. The “potential scale reduction factor” $\hat{R}$ for the simulation results.
Figure 3. Mean absolute deviation errors are presented in boxplots: (a) for $n = 100$ and (b) for $n = 400$ (the three left panels are based on the Rook weight matrix and the three right panels on the Case weight matrix, with $(\rho, \tau) = (0.5, 0.5)$).
Figure 4. The estimated varying-coefficient functions (dotted (τ = 0.25), starred (τ = 0.5), and forked (τ = 0.75) lines) and their 95% pointwise posterior credible intervals (dot-dashed lines) for a typical sample. The solid lines represent the true varying-coefficient functions.
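For completeness, the 95% pointwise credible intervals shown in Figure 4 (and later in Figure 7) can be formed once posterior draws of each varying-coefficient function are available on a grid. The sketch below uses a synthetic draw matrix `f_draws` and grid `u_grid` as hypothetical stand-ins; it is not the paper's sampler output.

```python
import numpy as np

# Hypothetical posterior draws of one varying-coefficient function on a grid:
# rows are MCMC draws, columns are grid points of the index variable u.
rng = np.random.default_rng(1)
u_grid = np.linspace(0.0, 1.0, 50)
f_draws = np.sin(2 * np.pi * u_grid) + rng.normal(scale=0.1, size=(3000, 50))

post_mean = f_draws.mean(axis=0)                            # pointwise posterior mean
lower, upper = np.percentile(f_draws, [2.5, 97.5], axis=0)  # 95% pointwise band

# post_mean, lower and upper would then be plotted against u_grid, as in Figure 4.
```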
Figure 5. Trace plots for five parallel MCMC chains with different initializations for a subset of the unknown quantities at quantile τ = 0.5.
Figure 6. The "potential scale reduction factor" R̂ for Boston housing price data.
Figure 7. The estimated function (dotted ( τ = 0.25 ), starred ( τ = 0.5 ), and forked ( τ = 0.75 ) lines) and its 95% pointwise posterior credible intervals (dot-dashed lines) at different quantile points in the model (24) for Boston housing price data.
Table 1. Simulation results for parameter estimation at τ = { 0.25 , 0.5 , 0.75 } .
τ | Para. | n | Rook Weight Matrix: Mean, SE, SD, 95% CI | (r, m) | Case Weight Matrix: Mean, SE, SD, 95% CI
0.25 ρ = 0.2000 100 0.2037 0.0605 0.0693 ( 0.0860 , 0.3230 ) (20,5) 0.2015 0.0566 0.0662 ( 0.0908 , 0.3126 )
β 1 = 1.0000 0.9861 0.1212 0.1426 ( 0.7485 , 1.2236 ) 0.9852 0.1211 0.1415 ( 0.7477 , 1.2223 )
β 2 = 1.0000 0.9886 0.1215 0.1462 ( 1.2267 , 0.7506 ) 0.9901 0.1216 0.1438 ( 1.2285 , 0.7522 )
Total effect
x 1 = 1.2500 1.2558 0.1826 0.2159 ( 0.9195 , 1.6369 ) 1.2492 0.1778 0.2096 ( 0.9192 , 1.6165 )
x 2 = 1.2500 1.2577 0.1826 0.2159 ( 1.6390 , 0.9220 ) 1.2542 0.1786 0.2051 ( 1.6237 , 0.9228 )
ρ = 0.5000 0.5023 0.0478 0.0546 ( 0.4084 , 0.5959 ) 0.5006 0.0379 0.0445 ( 0.4256 , 0.5742 )
β 1 = 1.0000 0.9860 0.1214 0.1433 ( 0.7480 , 1.2241 ) 0.9853 0.1212 0.1415 ( 0.7478 , 1.2226 )
β 2 = 1.0000 0.9885 0.1218 0.1463 ( 1.2274 , 0.7500 ) 0.9899 0.1216 0.1442 ( 1.2291 , 0.7522 )
Total effect
x 1 = 2.0000 2.0239 0.3162 0.3714 ( 1.4582 , 2.6991 ) 1.9990 0.2855 0.3350 ( 1.4713 , 2.5855 )
x 2 = 2.0000 2.0268 0.3152 0.3658 ( 2.7004 , 1.4630 ) 2.0062 0.2855 0.3280 ( 2.5971 , 1.4764 )
ρ = 0.8000 0.8010 0.0231 0.0270 ( 0.7556 , 0.8461 ) 0.8000 0.0163 0.0189 ( 0.7678 , 0.8314 )
β 1 = 1.0000 0.9876 0.1215 0.1434 ( 0.7497 , 1.2257 ) 0.9854 0.1218 0.1417 ( 0.7467 , 1.2240 )
β 2 = 1.0000 0.9884 0.1219 0.1465 ( 1.2278 , 0.7497 ) 0.9903 0.1222 0.1452 ( 1.2302 , 0.7513 )
Total effect
x 1 = 5.0000 5.1266 0.8818 1.0249 ( 3.6128 , 7.0745 ) 4.9950 0.7120 0.8344 ( 3.6735 , 6.4690 )
x 2 = 5.0000 5.1266 0.8761 1.0087 ( 7.0556 , 3.6195 ) 5.0147 0.7137 0.8220 ( 6.4916 , 3.6903 )
0.50 ρ = 0.2000 100 0.2076 0.0564 0.0620 ( 0.0962 , 0.3176 ) (20,5) 0.2042 0.0533 0.0588 ( 0.0987 , 0.3080 )
β 1 = 1.0000 0.9842 0.1221 0.1326 ( 0.7447 , 1.2236 ) 0.9840 0.1220 0.1325 ( 0.7447 , 1.2232 )
β 2 = 1.0000 0.9912 0.1228 0.1352 ( 1.2321 , 0.7504 ) 0.9920 0.1228 0.1355 ( 1.2325 , 0.7510 )
Total effect
x 1 = 1.2500 1.2567 0.1802 0.2013 ( 0.9208 , 1.6087 ) 1.2491 0.1763 0.1925 ( 0.9186 , 1.6104 )
x 2 = 1.2500 1.2643 0.1805 0.1957 ( 1.6361 , 0.9273 ) 1.2583 0.1772 0.1907 ( 1.6212 , 0.9250 )
ρ = 0.5000 0.5066 0.0424 0.0477 ( 0.4233 , 0.5893 ) 0.5033 0.0358 0.0392 ( 0.4324 , 0.5728 )
β 1 = 1.0000 0.9838 0.1223 0.1324 ( 0.7440 , 1.2236 ) 0.9836 0.1223 0.1325 ( 0.7440 , 1.2237 )
β 2 = 1.0000 0.9904 0.1228 0.1352 ( 1.2310 , 0.7496 ) 0.9916 0.1228 0.1359 ( 1.2324 , 0.7509 )
Total effect
x 1 = 2.0000 2.0288 0.3056 0.3438 ( 1.4698 , 2.6709 ) 2.0019 0.2829 0.3091 ( 1.4717 , 2.5831 )
x 2 = 2.0000 2.0390 0.3051 0.3290 ( 2.6787 , 1.4808 ) 2.0165 0.2847 0.3061 ( 2.6012 , 1.4830 )
ρ = 0.8000 0.8030 0.0188 0.0219 ( 0.7658 , 0.8394 ) 0.8010 0.0153 0.0168 ( 0.7704 , 0.8303 )
β 1 = 1.0000 0.9841 0.1221 0.1321 ( 0.7449 , 1.2237 ) 0.9838 0.1225 0.1324 ( 0.7438 , 1.2244 )
β 2 = 1.0000 0.9910 0.1230 0.1352 ( 1.2320 , 0.7500 ) 0.9838 0.1225 0.1364 ( 1.2327 , 0.7498 )
Total effect
x 1 = 5.0000 5.1079 0.7997 0.9103 ( 3.6654 , 6.8065 ) 5.0013 0.7053 0.7728 ( 3.6786 , 6.4519 )
x 2 = 5.0000 5.1358 0.8012 0.8745 ( 6.8398 , 3.6910 ) 5.0350 0.7099 0.7596 ( 6.4870 , 3.7008 )
0.75 ρ = 0.2000 100 0.2082 0.0511 0.0606 ( 0.1063 , 0.3065 ) (20,5) 0.2073 0.0486 0.0567 ( 0.1109 , 0.3016 )
β 1 = 1.0000 0.9851 0.1229 0.1450 ( 0.7444 , 1.2261 ) 0.9850 0.1228 0.1432 ( 0.7442 , 1.2260 )
β 2 = 1.0000 0.9914 0.1232 0.1438 ( 1.230 , 0.7490 ) 0.9909 0.1229 0.1432 ( 1.2320 , 0.7497 )
Total effect
x 1 = 1.2500 1.2570 0.1766 0.2115 ( 0.9236 , 1.6165 ) 1.2539 0.1737 0.2080 ( 0.9243 , 1.6062 )
x 2 = 1.2500 1.2642 0.1775 0.2041 ( 1.6251 , 0.9283 ) 1.2604 0.1746 0.1983 ( 1.6143 , 0.9292 )
ρ = 0.5000 0.5061 0.0367 0.0450 ( 0.4326 , 0.5758 ) 0.5041 0.0325 0.0385 ( 0.4390 , 0.5666 )
β 1 = 1.0000 0.9840 0.1230 0.1448 ( 0.7431 , 1.2258 ) 0.9844 0.1229 0.1461 ( 0.7438 , 1.2256 )
β 2 = 1.0000 0.9914 0.1232 0.1440 ( 1.2330 , 0.7490 ) 0.9904 0.1231 0.1441 ( 1.2323 , 0.7491 )
Total effect
x 1 = 2.0000 2.0206 0.2913 0.3554 ( 1.4742 , 2.6174 ) 2.0053 0.2784 0.3554 ( 1.4785 , 2.5717 )
x 2 = 2.0000 2.0340 0.2920 0.3554 ( 2.6311 , 1.4859 ) 2.0150 0.2788 0.3174 ( 2.5807 , 1.4865 )
ρ = 0.8000 0.8026 0.0152 0.0194 ( 0.7722 , 0.8318 ) 0.8017 0.0138 0.0163 ( 0.7739 , 0.8280 )
β 1 = 1.0000 0.9847 0.1227 0.1463 ( 0.7443 , 1.2256 ) 0.9844 0.1230 0.1463 ( 0.7439 , 1.2266 )
β 2 = 1.0000 0.9922 0.1229 0.1445 ( 1.2331 , 0.7511 ) 0.9891 0.1233 0.1446 ( 1.2309 , 0.7474 )
Total effect
x 1 = 5.0000 5.0667 0.7419 0.9047 ( 3.6926 , 6.5929 ) 5.0167 0.6945 0.8400 ( 3.7006 , 6.4274 )
x 2 = 5.0000 5.1011 0.7419 0.8712 ( 6.6265 , 3.7159 ) 5.0342 0.6977 0.7929 ( 6.4465 , 3.7098 )
Table 2. Simulation results for parameter estimation at τ = { 0.25 , 0.5 , 0.75 } (cont.).
τ | Para. | n | Rook Weight Matrix: Mean, SE, SD, 95% CI | (r, m) | Case Weight Matrix: Mean, SE, SD, 95% CI
0.25 ρ = 0.2000 400 0.2014 0.0293 0.0357 ( 0.1442 , 0.2591 ) (80,5) 0.1997 0.0267 0.0334 ( 0.1475 , 0.2520 )
β 1 = 1.0000 0.9967 0.0583 0.0737 ( 0.8824 , 1.1105 ) 0.9967 0.0583 0.0734 ( 0.8824 , 1.1108 )
β 2 = 1.0000 1.0035 0.0582 0.0723 ( 1.1169 , 0.8897 ) 1.0031 0.0582 0.0722 ( 1.1169 , 0.8892 )
Total effect
x 1 = 1.2500 1.2523 0.0859 0.1090 ( 1.0891 , 1.4250 ) 1.2487 0.0832 0.1034 ( 1.0894 , 1.4154 )
x 2 = 1.2500 1.2606 0.0860 0.1045 ( 1.4337 , 1.0976 ) 1.2570 0.0833 0.1051 ( 1.4244 , 1.0979 )
ρ = 0.5000 0.5013 0.0231 0.0286 ( 0.4561 , 0.5464 ) 0.4998 0.0180 0.0223 ( 0.4647 , 0.5349 )
β 1 = 1.0000 0.9967 0.0586 0.0740 ( 0.8821 , 1.1112 ) 0.9969 0.0585 0.0734 ( 0.8823 , 1.1114 )
β 2 = 1.0000 1.0032 0.0584 0.0726 ( 1.1171 , 0.8892 ) 1.0029 0.0582 0.0722 ( 1.1168 , 0.8889 )
Total effect
x 1 = 2.0000 2.0091 0.1466 0.1860 ( 1.7334 , 2.3072 ) 1.9988 0.1336 0.1656 ( 1.7436 , 2.2667 )
x 2 = 2.0000 2.0218 0.1466 0.1791 ( 2.3205 , 1.7465 ) 2.0113 0.1336 0.1677 ( 2.2793 , 1.7562 )
ρ = 0.8000 0.8006 0.0114 0.0146 ( 0.7782 , 0.8228 ) 0.7998 0.0076 0.0095 ( 0.7848 , 0.8145 )
β 1 = 1.0000 0.9965 0.0585 0.0734 ( 0.8819 , 1.1110 ) 0.9969 0.0588 0.0738 ( 0.8820 , 1.1118 )
β 2 = 1.0000 1.0035 0.0583 0.0727 ( 1.1175 , 0.8892 ) 1.0031 0.0584 0.0723 ( 1.1174 , 0.8888 )
Total effect
x 1 = 5.0000 5.0393 0.4022 0.5184 ( 4.2950 , 5.8690 ) 4.9954 0.3329 0.4144 ( 4.3593 , 5.6618 )
x 2 = 5.0000 5.0393 0.4022 0.4963 ( 5.9052 , 4.3280 ) 5.0271 0.3321 0.4178 ( 5.6920 , 4.3917 )
0.50 ρ = 0.2000 400 0.2018 0.02701 0.0306 ( 0.1488 , 0.2545 ) (80,5) 0.2008 0.0250 0.0290 ( 0.1516 , 0.2495 )
β 1 = 1.0000 0.9980 0.0582 0.0678 ( 0.8841 , 1.1119 ) 0.9978 0.0583 0.0672 ( 0.8837 , 1.1120 )
β 2 = 1.0000 0.9994 0.0584 0.0655 ( 1.1136 , 0.8852 ) 0.9994 0.0583 0.0653 ( 1.1138 , 0.8849 )
Total effect
x 1 = 1.2500 1.2535 0.0838 0.0966 ( 1.0932 , 1.4213 ) 1.2513 0.0821 0.0956 ( 1.0937 , 1.4152 )
x 2 = 1.2500 1.2553 0.0841 0.0937 ( 1.4213 , 1.0942 ) 1.2531 0.0827 0.0910 ( 1.4180 , 1.0942 )
ρ = 0.5000 0.5012 0.0204 0.0234 ( 0.4612 , 0.5408 ) 0.5004 0.0166 0.0196 ( 0.4677 , 0.5331 )
β 1 = 1.0000 0.9979 0.0583 0.0679 ( 0.8835 , 1.1119 ) 0.9978 0.0585 0.0674 ( 0.8834 , 1.1123 )
β 2 = 1.0000 0.9992 0.0585 0.0657 ( 1.1138 , 0.8845 ) 0.9994 0.0586 0.0654 ( 1.1141 , 0.8848 )
Total effect
x 1 = 2.0000 2.0077 0.1401 0.1624 ( 1.7414 , 2.2900 ) 2.0022 0.1317 0.1532 ( 1.7493 , 2.2653 )
x 2 = 2.0000 2.0105 0.1406 0.1579 ( 2.2939 , 1.7433 ) 2.0053 0.1320 0.1465 ( 2.2653 , 1.7516 )
ρ = 0.8000 0.8007 0.0094 0.0113 ( 0.7821 , 0.8189 ) 0.8002 0.0071 0.0083 ( 0.7861 , 0.8140 )
β 1 = 1.0000 0.9981 0.0583 0.0675 ( 0.8839 , 1.1119 ) 0.9976 0.0586 0.0673 ( 0.8830 , 1.1126 )
β 2 = 1.0000 0.9992 0.0585 0.0657 ( 1.1137 , 0.8847 ) 0.9995 0.0586 0.0656 ( 1.1142 , 0.8847 )
Total effect
x 1 = 5.0000 5.0341 0.3699 0.4364 ( 4.3375 , 5.7857 ) 5.0051 0.3290 0.3814 ( 4.3729 , 5.6620 )
x 2 = 5.0000 5.0392 0.3708 0.4213 ( 5.7916 , 4.3411 ) 5.0142 0.3300 0.3657 ( 5.6724 , 4.3798 )
0.75 ρ = 0.2000 400 0.2018 0.0245 0.0302 ( 0.1531 , 0.2492 ) (80,5) 0.2005 0.0230 0.0289 ( 0.1554 , 0.2451 )
β 1 = 1.0000 0.9986 0.0588 0.0739 ( 0.8837 , 1.1137 ) 0.9985 0.0588 0.0745 ( 0.8836 , 1.1134 )
β 2 = 1.0000 0.9990 0.0587 0.0708 ( 1.1139 , 0.8840 ) 0.9987 0.0587 0.0707 ( 1.1136 , 0.8840 )
Total effect
x 1 = 1.2500 1.2541 0.0828 0.1057 ( 1.0950 , 1.4187 ) 1.2518 0.0815 0.1059 ( 1.0948 , 1.4134 )
x 2 = 1.2500 1.2541 0.0828 0.0991 ( 1.4184 , 1.0955 ) 1.2515 0.0812 0.0955 ( 1.4127 , 1.0950 )
ρ = 0.5000 0.5014 0.0178 0.0225 ( 0.4664 , 0.5357 ) 0.5004 0.0155 0.0194 ( 0.4697 , 0.5302 )
β 1 = 1.0000 0.9985 0.0589 0.0744 ( 0.8833 , 1.1138 ) 0.9987 0.0588 0.0742 ( 0.8834 , 1.1138 )
β 2 = 1.0000 0.9986 0.0588 0.0710 ( 1.1138 , 0.8838 ) 0.9984 0.0587 0.0707 ( 1.1135 , 0.8837 )
Total effect
x 1 = 2.0000 2.0088 0.1359 0.1744 ( 1.7485 , 2.2800 ) 2.0039 0.1304 0.1696 ( 1.7521 , 2.2626 )
x 2 = 2.0000 2.0087 0.1358 0.1632 ( 2.2803 , 1.7487 ) 2.0024 0.1301 0.1528 ( 2.2608 , 1.7514 )
ρ = 0.8000 0.8008 0.0077 0.0101 ( 0.7856 , 0.8157 ) 0.8000 0.0066 0.0084 ( 0.7869 , 0.8127 )
β 1 = 1.0000 0.9986 0.0587 0.0744 ( 0.8836 , 1.1133 ) 0.9985 0.0592 0.0743 ( 0.8833 , 1.1141 )
β 2 = 1.0000 0.9988 0.0586 0.0710 ( 1.1135 , 0.8840 ) 0.9990 0.0593 0.0711 ( 1.1146 , 0.8839 )
Total effect
x 1 = 5.0000 5.0332 0.3476 0.4537 ( 4.3692 , 5.7299 ) 5.0060 0.3266 0.4240 ( 4.3786 , 5.6527 )
x 2 = 5.0000 5.0325 0.3478 0.4223 ( 5.7300 , 4.3691 ) 5.0062 0.3266 0.3825 ( 5.6509 , 4.3791 )
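The "Total effect" rows in Tables 1 and 2 are consistent with the standard SAR impact summary: with a row-normalized spatial weight matrix W, the average total effect of a linear covariate with coefficient β equals β times the average row sum of (I − ρW)⁻¹, which reduces to β/(1 − ρ). The snippet below is a minimal sketch of that calculation, following the impact measures of LeSage and Pace [48]; the small matrix W is an arbitrary row-normalized illustration, not the Rook or Case matrix used in the simulations, and the printout reproduces the true total effects 1.25, 2, and 5.

```python
import numpy as np

def average_total_effect(beta, rho, W):
    """Average total impact of a covariate in a SAR model:
    beta * mean of the row sums of (I - rho * W)^{-1}."""
    n = W.shape[0]
    multiplier = np.linalg.inv(np.eye(n) - rho * W) @ np.ones(n)
    return beta * multiplier.mean()

# A small row-normalized contiguity-style matrix chosen only for illustration.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

for rho, true_total in [(0.2, 1.25), (0.5, 2.0), (0.8, 5.0)]:
    print(rho, average_total_effect(beta=1.0, rho=rho, W=W), true_total)
    # matches beta / (1 - rho) because W is row-normalized
```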
Table 3. Simulation results for parameter estimation using model (22).
n | Para. | QR: τ = 0.25, 0.50, 0.75 | IVQR: τ = 0.25, 0.50, 0.75 | BQR: τ = 0.25, 0.50, 0.75
100 ρ 0.0214 0.0373 0.0528 0.0037 0.0025 0.0021 0.0001 0.0003 0.0086
( 0.0516 ) ( 0.0700 ) ( 0.0993 ) ( 0.1315 ) ( 0.1186 ) ( 0.1329 ) ( 0.0030 ) ( 0.0039 ) ( 0.0075 )
β 0.0063 0.0036 0.0149 0.0065 0.0030 0.0041 0.0091 0.0087 0.0086
( 0.1440 ) ( 0.1334 ) ( 0.1460 ) ( 0.1431 ) ( 0.1364 ) ( 0.1508 ) ( 0.0302 ) ( 0.0253 ) ( 0.0298 )
α 1 [ 0.2203 ] [ 0.1973 ] [ 0.2207 ] [ 0.2202 ] [ 0.2031 ] [ 0.2200 ] [ 0.1534 ] [ 0.1441 ] [ 0.1532 ]
α 2 [ 0.2038 ] [ 0.1930 ] [ 0.2002 ] [ 0.2139 ] [ 0.1971 ] [ 0.2145 ] [ 0.1929 ] [ 0.1764 ] [ 0.1838 ]
200 ρ 0.0198 0.0341 0.0569 0.0016 0.0008 0.0011 0.0009 0.0008 0.0001
( 0.0372 ) ( 0.0527 ) ( 0.0804 ) ( 0.0853 ) ( 0.0761 ) ( 0.0859 ) ( 0.0014 ) ( 0.0021 ) ( 0.0036 )
β 0.0054 0.0044 0.0171 0.0003 0.0016 0.0021 0.0040 0.0045 0.0042
( 0.1010 ) ( 0.0930 ) ( 0.1035 ) ( 0.1009 ) ( 0.0918 ) ( 0.0966 ) ( 0.0159 ) ( 0.0139 ) ( 0.0159 )
α 1 [ 0.1479 ] [ 0.1379 ] [ 0.1491 ] [ 0.1520 ] [ 0.1377 ] [ 0.1515 ] [ 0.1131 ] [ 0.1032 ] [ 0.1089 ]
α 2 [ 0.1533 ] [ 0.1425 ] [ 0.1452 ] [ 0.1513 ] [ 0.1423 ] [ 0.1530 ] [ 0.1398 ] [ 0.1263 ] [ 0.1320 ]
500 ρ 0.0213 0.0384 0.0572 0.0025 0.0006 0.0009 0.0001 0.0002 0.0017
( 0.0297 ) ( 0.0463 ) ( 0.0672 ) ( 0.0599 ) ( 0.0600 ) ( 0.0635 ) ( 0.0007 ) ( 0.0009 ) ( 0.0016 )
β 0.008 0.0103 0.0106 0.0011 0.0002 0.0010 0.0001 0.0019 0.0009
( 0.0600 ) ( 0.0590 ) ( 0.0635 ) ( 0.0599 ) ( 0.0600 ) ( 0.0635 ) ( 0.0059 ) ( 0.0048 ) ( 0.0061 )
α 1 [ 0.0925 ] [ 0.0862 ] [ 0.0921 ] [ 0.0919 ] [ 0.0857 ] [ 0.0914 ] [ 0.0745 ] [ 0.0678 ] [ 0.0749 ]
α 2 [ 0.1066 ] [ 0.1083 ] [ 0.1044 ] [ 0.1040 ] [ 0.1027 ] [ 0.1041 ] [ 0.0948 ] [ 0.0845 ] [ 0.0901 ]
800 ρ 0.0226 0.0362 0.0599 0.0002 0.0006 0.0004 0.0002 0.0010 0.0010
( 0.0280 ) ( 0.0413 ) ( 0.0660 ) ( 0.0405 ) ( 0.0385 ) ( 0.0402 ) ( 0.0004 ) ( 0.0005 ) ( 0.0009 )
β 0.0038 0.0064 0.0116 0.0029 0.0020 0.0005 0.0009 0.0011 0.0025
( 0.0486 ) ( 0.0451 ) ( 0.0485 ) ( 0.0478 ) ( 0.0443 ) ( 0.0476 ) ( 0.0031 ) ( 0.0026 ) ( 0.0036 )
α 1 [ 0.0721 ] [ 0.0675 ] [ 0.0722 ] [ 0.0711 ] [ 0.0674 ] [ 0.0701 ] [ 0.0598 ] [ 0.0539 ] [ 0.0605 ]
α 2 [ 0.0981 ] [ 0.0956 ] [ 0.0947 ] [ 0.0741 ] [ 0.0892 ] [ 0.0924 ] [ 0.0761 ] [ 0.0686 ] [ 0.0738 ]
Table 4. Simulation results for parameter estimation in model (23).
n | Para. | QR: τ = 0.25, 0.50, 0.75 | IVQR: τ = 0.25, 0.50, 0.75 | BQR: τ = 0.25, 0.50, 0.75
100 ρ 0.0477 0.0861 0.0560 0.0070 0.0011 0.0009 0.0176 0.0104 0.0106
( 0.0835 ) ( 0.1252 ) ( 0.0915 ) ( 0.1289 ) ( 0.1197 ) ( 0.1289 ) ( 0.0076 ) ( 0.0089 ) ( 0.0090 )
β 0.0147 0.0204 0.0101 0.0074 0.0014 0.0026 0.0044 0.0072 0.0054
( 0.1298 ) ( 0.1222 ) ( 0.1309 ) ( 0.1326 ) ( 0.1155 ) ( 0.1325 ) ( 0.0246 ) ( 0.0202 ) ( 0.0279 )
α 1 [ 0.2257 ] [ 0.1892 ] [ 0.2323 ] [ 0.2317 ] [ 0.1989 ] [ 0.2405 ] [ 0.1445 ] [ 0.1379 ] [ 0.2015 ]
α 2 [ 0.1953 ] [ 0.1782 ] [ 0.1982 ] [ 0.2004 ] [ 0.1775 ] [ 0.2030 ] [ 0.1469 ] [ 0.1399 ] [ 0.1500 ]
200 ρ 0.0445 0.0874 0.0531 0.0004 0.0036 0.0007 0.0074 0.0029 0.0037
( 0.0638 ) ( 0.1049 ) ( 0.0723 ) ( 0.0801 ) ( 0.0740 ) ( 0.0907 ) ( 0.0035 ) ( 0.0043 ) ( 0.0033 )
β 0.0114 0.0177 0.0133 0.0029 0.0008 0.0056 0.0034 0.0054 0.0047
( 0.0837 ) ( 0.0786 ) ( 0.0827 ) ( 0.0819 ) ( 0.0740 ) ( 0.0907 ) ( 0.0113 ) ( 0.0103 ) ( 0.0111 )
α 1 [ 0.1337 ] [ 0.1093 ] [ 0.1421 ] [ 0.1406 ] [ 0.1139 ] [ 0.1403 ] [ 0.0984 ] [ 0.0794 ] [ 0.1214 ]
α 2 [ 0.1231 ] [ 0.1141 ] [ 0.1211 ] [ 0.1256 ] [ 0.1117 ] [ 0.1235 ] [ 0.1016 ] [ 0.0955 ] [ 0.1032 ]
500 ρ 0.0433 0.0804 0.0510 0.0013 0.0009 0.0001 0.0032 0.0025 0.0029
( 0.0512 ) ( 0.0878 ) ( 0.0585 ) ( 0.0440 ) ( 0.0429 ) ( 0.0517 ) ( 0.0014 ) ( 0.0013 ) ( 0.0013 )
β 0.0104 0.0140 0.0117 0.0008 0.0008 0.0001 0.0003 0.0016 0.0013
( 0.0466 ) ( 0.0464 ) ( 0.0473 ) ( 0.0476 ) ( 0.0420 ) ( 0.0484 ) ( 0.0034 ) ( 0.0029 ) ( 0.0038 )
α 1 [ 0.0789 ] [ 0.0548 ] [ 0.0906 ] [ 0.0757 ] [ 0.0582 ] [ 0.0766 ] [ 0.0623 ] [ 0.0420 ] [ 0.0697 ]
α 2 [ 0.0703 ] [ 0.0643 ] [ 0.0691 ] [ 0.0706 ] [ 0.0628 ] [ 0.0715 ] [ 0.0603 ] [ 0.0545 ] [ 0.0607 ]
800 ρ 0.0417 0.0776 0.0495 0.0013 0.0013 0.0001 0.0021 0.0011 0.0006
( 0.0462 ) ( 0.0823 ) ( 0.0545 ) ( 0.0326 ) ( 0.0340 ) ( 0.0380 ) ( 0.0007 ) ( 0.0008 ) ( 0.0008 )
β 0.0083 0.0174 0.0081 0.0042 0.0009 0.0014 0.0008 0.0011 0.0017
( 0.0367 ) ( 0.0381 ) ( 0.0347 ) ( 0.0358 ) ( 0.0317 ) ( 0.0346 ) ( 0.0019 ) ( 0.0015 ) ( 0.0022 )
α 1 [ 0.0657 ] [ 0.0408 ] [ 0.0762 ] [ 0.0582 ] [ 0.0402 ] [ 0.0585 ] [ 0.0493 ] [ 0.0297 ] [ 0.0553 ]
α 2 [ 0.0522 ] [ 0.0502 ] [ 0.0530 ] [ 0.0521 ] [ 0.0483 ] [ 0.0526 ] [ 0.0448 ] [ 0.0398 ] [ 0.0453 ]
Table 5. Parameter estimation in model (24) for Boston housing data.
Para. | τ = 0.25: Mean, SE, 95% CI | τ = 0.5: Mean, SE, 95% CI | τ = 0.75: Mean, SE, 95% CI
ρ 0.5298 0.0016 ( 0.5268 , 0.5324 ) 0.5366 0.0033 ( 0.5319 , 0.5414 ) 0.5573 0.0006 ( 0.5563 , 0.5579 )
β 1 0.5196 0.0727 ( 0.3733 , 0.6573 ) 0.5272 0.0894 ( 0.3550 , 0.7054 ) 0.7077 0.0825 ( 0.5422 , 0.8645 )
β 2 0.2534 0.0410 ( 0.1767 , 0.3372 ) 0.2461 0.0494 ( 0.1482 , 0.3421 ) 0.1390 0.0455 ( 0.0513 , 0.2282 )
Total effect
x 1 1.1051 0.1818 ( 0.7415 , 1.4687 ) 1.1377 0.2146 ( 0.7085 , 1.5669 ) 1.5986 0.1897 ( 1.2192 , 1.9780 )
x 2 0.5389 0.1025 ( 0.3339 , 0.7439 ) 0.5311 0.1186 ( 0.2939 , 0.7683 ) 0.3140 0.1046 ( 0.1048 , 0.5232 )
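Under the same impact summary used for the simulations, and assuming a row-normalized weight matrix, the Boston estimates in Table 5 admit a back-of-the-envelope check: the average total effect is approximately β̂/(1 − ρ̂). For example, at τ = 0.25,

\[
\frac{\hat{\beta}_1}{1-\hat{\rho}} = \frac{0.5196}{1-0.5298} \approx 1.105,
\]

which agrees with the reported total effect of x1 (1.1051). This is only a consistency check, not a description of the authors' exact computation.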