
Multivariate Modeling of Some Datasets in Continuous Space and Discrete Time

Department of Statistics, Kansas State University, Manhattan, KS 66506, USA
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 837; https://doi.org/10.3390/e27080837
Submission received: 22 June 2025 / Revised: 31 July 2025 / Accepted: 31 July 2025 / Published: 6 August 2025

Abstract

Multivariate space–time datasets are often collected at discrete, regularly monitored time intervals and are typically treated as components of time series in environmental science and other applied fields. To effectively characterize such data in geostatistical frameworks, valid and practical covariance models are essential. In this work, we propose several classes of multivariate spatio-temporal covariance matrix functions to model underlying stochastic processes whose discrete temporal margins correspond to well-known autoregressive and moving average (ARMA) models. We derive sufficient and/or necessary conditions under which these functions yield valid covariance matrices. By leveraging established methodologies from time series analysis and spatial statistics, the proposed models are straightforward to identify and fit in practice. Finally, we demonstrate the utility of these multivariate covariance functions through an application to Kansas weather data, using co-kriging for prediction and comparing the results to those obtained from traditional spatio-temporal models.

1. Introduction

Multivariate space–time datasets frequently arise in environmental science, meteorology, geophysics, and many other fields. Examples include studying the impact of soil greenhouse gas fluxes on global warming potential, or analyzing temperature–precipitation relationships under climate change (see [1,2,3], among others). Typically, temporal data are collected at regularly spaced intervals, in contrast to spatial data that are often recorded at irregular locations, such as weather stations. With the increasing availability and complexity of such datasets, it is essential to develop efficient models that capture their intricate dependence structures.
This paper focuses on constructing valid covariance matrix functions that jointly incorporate spatial and temporal information for multivariate random fields. While the spatial statistics literature includes various spatial models, few account for discrete-time dependencies, despite time series playing a crucial role in most environmental and geophysical processes. Traditional approaches often rely on separable space–time covariance structures, which assume the overall covariance is the product of purely spatial and purely temporal components. While computationally convenient, these models ignore space–time interactions that are often fundamental to the underlying physical processes. An increasing body of work has highlighted the importance of nonseparable models. For example, ref. [4] introduced nonseparable stationary spatio-temporal covariance functions, and subsequent generalizations for both stationary and nonstationary processes were developed in [5,6,7], among others. Applications to environmental data such as air pollution are explored by [8,9], while ref. [10] incorporates an inflated gamma distribution to model precipitation trends with zero inflation. However, most of these models are constructed under the assumption of continuous time. In practice, time is typically observed on a discrete, regular grid, whereas spatial locations are distributed more irregularly. Although some models incorporate discrete time through stochastic differential equations or spectral methods (e.g., [11,12]), these approaches often lack closed-form expressions for the covariance structure. While ref. [13] deals with the univariate case, in this work, we derive explicit covariance matrix functions for multivariate space–time processes with discrete temporal components, where the temporal margins follow well-established autoregressive and moving average (ARMA) models. 
Leveraging the rich theoretical foundation of ARMA processes along with classical spatial modeling, we aim to build flexible, interpretable, and computationally feasible models.
In many modern scientific applications, such as geosciences, environmental monitoring, and economics, large numbers of variables are observed simultaneously. These variables are often correlated, and borrowing information from related (secondary) variables can improve the prediction of a primary variable, especially when the latter is sparsely observed. For simplicity, spatial variables are often modeled separately, ignoring cross-variable dependencies. A key contribution of this work is the development of multivariate spatial covariance structures that capture both within-variable spatial dependence and cross-variable covariances, while also incorporating discrete time information. This enables more accurate predictions through co-kriging across a wide range of applications. While previous efforts have been made in this direction, many are limited to purely spatial or continuous-time settings, or they rely on Bayesian frameworks. Notable contributions include [14,15,16,17], among others. For example, multivariate Poisson-lognormal spatial models have improved predictions in traffic safety studies [18], and recent works have established kriging formulas [19] and copula-based models [20] for multivariate spatial data. We aim to integrate parameter interpretability from analytic model expressions into a unified space–time framework to facilitate multivariate fitting and co-kriging.
On a global scale, many spatial datasets are collected using spherical coordinates. Euclidean-based distances and covariance structures can become distorted on the sphere, especially over large distances, making spherical modeling critical in geophysical and atmospheric sciences. Recent advances include constructions of isotropic positive definite functions on spheres [21], covariance functions for stationary and isotropic Gaussian vector fields [22], and isotropic variogram matrix functions expressed through ultraspherical polynomials [23]. Drawing from these approaches, we also extend some of our discrete-time multivariate spatio-temporal models to spherical domains to ensure validity across both Euclidean and spherical spaces.
We aim to develop a flexible multivariate spatio-temporal modeling framework that incorporates discrete-time structure, spatial correlation (in both Euclidean and spherical spaces), and cross-variable dependencies. Specifically, we consider a p-variate space–time random field
$$\left\{ \mathbf{Z}(\mathbf{s}, t) = \big( Z_1(\mathbf{s}, t), \ldots, Z_p(\mathbf{s}, t) \big)^\top,\ \mathbf{s} \in \mathbb{S}^d \text{ or } \mathbb{R}^d,\ t \in \mathbb{Z} \right\},$$
with covariance matrix function
$$\mathbf{C}(\mathbf{s}_1, \mathbf{s}_2; t_1, t_2) = \begin{pmatrix} C_{1,1}(\mathbf{s}_1, \mathbf{s}_2; t_1, t_2) & \cdots & C_{1,p}(\mathbf{s}_1, \mathbf{s}_2; t_1, t_2) \\ \vdots & \ddots & \vdots \\ C_{p,1}(\mathbf{s}_1, \mathbf{s}_2; t_1, t_2) & \cdots & C_{p,p}(\mathbf{s}_1, \mathbf{s}_2; t_1, t_2) \end{pmatrix},$$
where each entry
$$C_{i,j}(\mathbf{s}_1, \mathbf{s}_2; t_1, t_2) = \operatorname{Cov}\big( Z_i(\mathbf{s}_1, t_1),\, Z_j(\mathbf{s}_2, t_2) \big),$$
for $i, j = 1, \ldots, p$, where $\mathbb{S}^d$ and $\mathbb{R}^d$ denote the d-dimensional unit sphere and Euclidean space, respectively. The process is stationary in both space and time if $E(\mathbf{Z}(\mathbf{s}, t))$ is constant for all $(\mathbf{s}, t)$ and $\mathbf{C}(\mathbf{s}_0, \mathbf{s}_0 + \mathbf{s}; t_0, t_0 + t)$ depends only on the spatial lag $\mathbf{s}$ and the temporal lag $t$. We then denote the spatial and temporal margins as $\mathbf{C}(\mathbf{s}_1, \mathbf{s}_2; t, t)$ and $\mathbf{C}(\mathbf{s}, \mathbf{s}; t_1, t_2)$, respectively, following [24]. In practice, analyzing multivariate space–time data often begins with marginal exploration, applying time series models to study temporal behavior and multivariate spatial analysis to capture cross-variable structure. Given the substantial research advances in both areas, combining their strengths provides a robust foundation for model development, selection, and estimation.
The remainder of this paper is organized as follows. In Section 2, we propose several classes of multivariate spatio-temporal covariance matrix functions, whose discrete-time margins follow ARMA models. We derive necessary and sufficient conditions for these functions to define valid covariance matrices. Section 3 extends the models to incorporate general ARMA margins. In Section 4, we apply our models to Kansas weather data to demonstrate their performance in spatio-temporal prediction compared to traditional methods.

2. Moving-Average-Type Temporal Margin

We begin constructing the foundation of our overall framework by examining the covariance structure corresponding to a first-order moving average (MA(1)) model in the discrete temporal margin. It is straightforward to verify that Equation (1) satisfies the defining properties of an MA(1) process at a fixed spatial location. Notably, this structure does not rely on the assumption of temporal stationarity. The main challenge in proving the validity of Equation (1) lies in its nature as a discrete space–time matrix function that varies across different time scales, making it more complex than simply verifying a static covariance matrix. Theorem 8 in [25] offers useful insights that support the proof of the following Theorem 1 (see Appendix A).
Theorem 1.
Let $G_0(\mathbf{s}_1, \mathbf{s}_2)$ and $G_1(\mathbf{s}_1, \mathbf{s}_2)$, $\mathbf{s}_1, \mathbf{s}_2 \in D$, $D \subseteq \mathbb{R}^d$ or $\mathbb{S}^d$, $d \ge 1$, be $p \times p$ matrix functions, and let $G_0(\mathbf{s}_1, \mathbf{s}_2)$ be symmetric in the sense that $G_0(\mathbf{s}_1, \mathbf{s}_2) = G_0(\mathbf{s}_2, \mathbf{s}_1)^\top$. Then the $p \times p$ matrix function
$$C(\mathbf{s}_1, \mathbf{s}_2; t) = \begin{cases} G_0(\mathbf{s}_1, \mathbf{s}_2), & t = 0, \\ G_1(\mathbf{s}_1, \mathbf{s}_2), & t = 1, \\ G_1(\mathbf{s}_2, \mathbf{s}_1)^\top, & t = -1, \\ \mathbf{0}, & t \ne 0, \pm 1, \end{cases} \qquad t \in \mathbb{Z},\ \mathbf{s}_1, \mathbf{s}_2 \in D, \tag{1}$$
is a covariance matrix function on D × Z if and only if the following two conditions are satisfied:
(i) 
$G_0(\mathbf{s}_1, \mathbf{s}_2) + G_1(\mathbf{s}_1, \mathbf{s}_2) + G_1(\mathbf{s}_2, \mathbf{s}_1)^\top$ is a covariance matrix function on $D$;
(ii) 
$G_0(\mathbf{s}_1, \mathbf{s}_2) - G_1(\mathbf{s}_1, \mathbf{s}_2) - G_1(\mathbf{s}_2, \mathbf{s}_1)^\top$ is a covariance matrix function on $D$.
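On any finite set of locations, conditions (i) and (ii) reduce to positive semidefiniteness checks on block matrices, which can be verified numerically. A minimal sketch (the kernels `G0` and `G1` below are hypothetical toy choices for illustration, not models fitted in this paper):

```python
import numpy as np

def is_psd(mat, tol=1e-10):
    """Symmetric positive semidefiniteness check via eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh((mat + mat.T) / 2) >= -tol))

def check_theorem1(G0, G1, sites, p=2):
    """Check conditions (i) and (ii) of Theorem 1 on a finite set of
    locations: G0 +/- (G1(s1, s2) + G1(s2, s1)^T) must be PSD as a
    block matrix over all site pairs."""
    n = len(sites)
    plus = np.zeros((n * p, n * p))
    minus = np.zeros((n * p, n * p))
    for a, sa in enumerate(sites):
        for b, sb in enumerate(sites):
            sym = G1(sa, sb) + G1(sb, sa).T
            plus[a*p:(a+1)*p, b*p:(b+1)*p] = G0(sa, sb) + sym
            minus[a*p:(a+1)*p, b*p:(b+1)*p] = G0(sa, sb) - sym
    return is_psd(plus) and is_psd(minus)

# Toy separable choices (hypothetical, for illustration only)
R = np.array([[1.0, 0.5], [0.5, 1.0]])                # cross-variable correlation
G0 = lambda s1, s2: np.exp(-abs(s1 - s2)) * R         # t = 0 block
G1 = lambda s1, s2: 0.3 * np.exp(-abs(s1 - s2)) * R   # t = +/- 1 block

print(check_theorem1(G0, G1, sites=[0.0, 1.0, 2.5]))  # prints True
```

Increasing the factor 0.3 beyond 0.5 makes condition (ii) fail, mirroring the classical MA(1) constraint that the lag-one correlation cannot exceed 1/2 in absolute value.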
This theorem reduces the verification of a complex space–time problem to that of a purely spatial covariance model. Building upon the foundational structure developed earlier, we are now prepared to incorporate a broader range of spatial covariance margins to enrich the class of admissible models. Specifically, we integrate the widely used Matérn-type spatial covariance function into our framework and derive the full set of parameter conditions required to ensure validity. In Theorem 2, we begin with a parsimonious Matérn structure in which all scale parameters $\alpha$ in $M(\mathbf{h} \mid \mathbf{v}, \alpha)$ are assumed to be equal, as specified in Equation (4) below. Theorem 3 of [14] provides necessary and sufficient conditions under various settings for Equation (4) to define a valid covariance matrix. These results offer important insights that inform the conditions of the theorem and corollary that follow.
Theorem 2.
Let $\mathbf{v} = (v_1, v_2, \ldots, v_p)$, $\boldsymbol{\alpha} = (\alpha_1, \alpha_2)$, and $\boldsymbol{\beta} = (\beta_1, \beta_2)$ be constant vectors with $v_k > 0$, $\alpha_k > 0$, and $-1/2 \le \beta_k \le 1/2$, let $v_{ij} = (v_i + v_j)/2$, and let $D \subseteq \mathbb{R}^d$. A sufficient condition for the $p \times p$ matrix function
$$C(\mathbf{h}; t) = \begin{cases} c\, M(\mathbf{h} \mid \mathbf{v}, \alpha_1) + (1 - c)\, M(\mathbf{h} \mid \mathbf{v}, \alpha_2), & t = 0, \\ c\, M(\mathbf{h} \mid \mathbf{v}, \alpha_1)\, \beta_1 + (1 - c)\, M(\mathbf{h} \mid \mathbf{v}, \alpha_2)\, \beta_2, & t = \pm 1, \\ \mathbf{0}, & \text{otherwise}, \end{cases} \qquad \mathbf{h} \in D, \tag{2}$$
to be a correlation matrix function on $D \times \mathbb{Z}$ is that the constant $c$ satisfies
$$0 \le c \le 1. \tag{3}$$
If $p \ge 2$, condition (3) is also necessary. Here
$$M(\mathbf{h} \mid \mathbf{v}, \alpha) = \big( \rho_{ij}\, m(\mathbf{h} \mid v_{ij}, \alpha) \big)_{1 \le i, j \le p},$$
$$m(\mathbf{h} \mid v_{ij}, \alpha) = \frac{2^{1 - v_{ij}}}{\Gamma(v_{ij})}\, (\alpha \|\mathbf{h}\|)^{v_{ij}}\, K_{v_{ij}}(\alpha \|\mathbf{h}\|), \qquad \rho_{ij} = \frac{\Gamma(v_i + \frac{d}{2})^{1/2}}{\Gamma(v_i)^{1/2}}\, \frac{\Gamma(v_j + \frac{d}{2})^{1/2}}{\Gamma(v_j)^{1/2}}\, \frac{\Gamma(v_{ij})}{\Gamma(v_{ij} + \frac{d}{2})}. \tag{4}$$
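The parsimonious multivariate Matérn structure above can be evaluated directly with standard special functions. A sketch in Python (function names are ours; `scipy.special.kv` is the modified Bessel function $K_v$):

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, v, alpha):
    """Matern correlation m(h | v, alpha); equals 1 at h = 0."""
    h = np.asarray(h, dtype=float)
    out = np.ones_like(h)
    nz = h > 0
    x = alpha * h[nz]
    out[nz] = 2.0**(1 - v) / gamma(v) * x**v * kv(v, x)
    return out

def rho_ij(vi, vj, d):
    """Collocated cross-correlation coefficient rho_ij of Theorem 2."""
    vij = (vi + vj) / 2
    return (np.sqrt(gamma(vi + d / 2) / gamma(vi))
            * np.sqrt(gamma(vj + d / 2) / gamma(vj))
            * gamma(vij) / gamma(vij + d / 2))

def M(h, v, alpha, d=2):
    """Parsimonious multivariate Matern matrix (rho_ij m(h | v_ij, alpha))."""
    p = len(v)
    out = np.empty((p, p) + np.shape(h))
    for i in range(p):
        for j in range(p):
            vij = (v[i] + v[j]) / 2
            out[i, j] = rho_ij(v[i], v[j], d) * matern(h, vij, alpha)
    return out
```

With $v = 1/2$ the Matérn reduces to the exponential correlation $\exp(-\alpha \|\mathbf{h}\|)$, which is a convenient sanity check for the implementation.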
The following theorem generalizes the parsimonious Matérn covariance structure by relaxing the constraint that all scale parameters $\alpha$ in $M(\mathbf{h} \mid \mathbf{v}, \alpha)$ must be equal, as in Equation (4). Following [14], we assume that $M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12})$ is a general multivariate Matérn covariance function in Theorem 3. In addition, the choice of $c$ is assumed to satisfy the conditions of Theorem 2 in [13], ensuring that the main diagonal elements of the resulting matrix structure are valid univariate correlation functions.
Theorem 3.
Let $\mathbf{v} = (v_1, v_2, v_{12})$, $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \alpha_{12})$, $\boldsymbol{\alpha}' = (\alpha_1', \alpha_2', \alpha_{12}')$, and $\boldsymbol{\beta} = (\beta_1, \beta_2)$ be constant vectors with $v_k > 0$, $\alpha_k > 0$, $\alpha_k' > 0$, $-1/2 \le \beta_k \le 1/2$, and $D \subseteq \mathbb{R}^d$. A necessary and sufficient condition for the $2 \times 2$ matrix function ($p = 2$)
$$C(\mathbf{h}; t) = \begin{cases} c\, M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12}) + (1 - c)\, M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}', \rho_{12}'), & t = 0, \\ c\, M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12})\, \beta_1 + (1 - c)\, M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}', \rho_{12}')\, \beta_2, & t = \pm 1, \\ \mathbf{0}, & \text{otherwise}, \end{cases} \qquad \mathbf{h} \in D, \tag{5}$$
to be a correlation matrix function on $D \times \mathbb{Z}$ is that the constant $c$ satisfies
$$\inf_{\mathbf{h} \ne \mathbf{0},\, D(\mathbf{h}) > 0} \frac{c^2 (1 \pm 2\beta_1)^2 H(\mathbf{h}) + (1 - c)^2 (1 \pm 2\beta_2)^2 \tilde{H}(\mathbf{h})}{(1 \pm 2\beta_1)(1 \pm 2\beta_2)\, D(\mathbf{h})} \ge c\,(c - 1), \tag{6}$$
given $D(\mathbf{h}) \ne 0$, where
$$M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12}) = \begin{pmatrix} m_{11}(\mathbf{h} \mid v_1, \alpha_1) & \rho_{12}\, m_{12}(\mathbf{h} \mid v_{12}, \alpha_{12}) \\ \rho_{12}\, m_{12}(\mathbf{h} \mid v_{12}, \alpha_{12}) & m_{22}(\mathbf{h} \mid v_2, \alpha_2) \end{pmatrix},$$
$$m_{ij}(\mathbf{h} \mid v_k, \alpha_k) = \frac{2^{1 - v_k}}{\Gamma(v_k)}\, (\alpha_k \|\mathbf{h}\|)^{v_k}\, K_{v_k}(\alpha_k \|\mathbf{h}\|), \qquad i, j = 1, 2,\ k = 1, 2, 12,$$
$$H(\mathbf{h}) = \frac{\alpha_1^{2 v_1} \alpha_2^{2 v_2}\, c_{v_1} c_{v_2}}{(\alpha_1^2 + \|\mathbf{h}\|^2)^{v_1 + d/2} (\alpha_2^2 + \|\mathbf{h}\|^2)^{v_2 + d/2}} - \frac{\rho_{12}^2\, \alpha_{12}^{4 v_{12}}\, c_{v_{12}}^2}{(\alpha_{12}^2 + \|\mathbf{h}\|^2)^{2 v_{12} + d}},$$
$\tilde{H}(\mathbf{h})$ is defined like $H(\mathbf{h})$ with $\alpha_i$ replaced by $\alpha_i'$, $i = 1, 2, 12$, and
$$D(\mathbf{h}) = \frac{\alpha_1^{2 v_1} \alpha_2'^{\,2 v_2}\, c_{v_1} c_{v_2}}{(\alpha_1^2 + \|\mathbf{h}\|^2)^{v_1 + d/2} (\alpha_2'^{\,2} + \|\mathbf{h}\|^2)^{v_2 + d/2}} + \frac{\alpha_1'^{\,2 v_1} \alpha_2^{2 v_2}\, c_{v_1} c_{v_2}}{(\alpha_1'^{\,2} + \|\mathbf{h}\|^2)^{v_1 + d/2} (\alpha_2^2 + \|\mathbf{h}\|^2)^{v_2 + d/2}} - \frac{2 \rho_{12} \rho_{12}'\, \alpha_{12}^{2 v_{12}} \alpha_{12}'^{\,2 v_{12}}\, c_{v_{12}}^2}{\big( (\alpha_{12}^2 + \|\mathbf{h}\|^2)(\alpha_{12}'^{\,2} + \|\mathbf{h}\|^2) \big)^{v_{12} + d/2}}.$$
In fact, from [14], $M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12})$ is a valid covariance matrix if and only if
$$\rho_{12}^2 \le \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2}\, \frac{\alpha_1^{2 v_1} \alpha_2^{2 v_2}}{\alpha_{12}^{4 v_{12}}}\, \inf_{h \ge 0} \frac{(\alpha_{12}^2 + h^2)^{2 v_{12} + d}}{(\alpha_1^2 + h^2)^{v_1 + d/2} (\alpha_2^2 + h^2)^{v_2 + d/2}}, \tag{7}$$
$$\rho_{12}'^{\,2} \le \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2}\, \frac{\alpha_1'^{\,2 v_1} \alpha_2'^{\,2 v_2}}{\alpha_{12}'^{\,4 v_{12}}}\, \inf_{h \ge 0} \frac{(\alpha_{12}'^{\,2} + h^2)^{2 v_{12} + d}}{(\alpha_1'^{\,2} + h^2)^{v_1 + d/2} (\alpha_2'^{\,2} + h^2)^{v_2 + d/2}}, \tag{8}$$
where $c_v = \Gamma(v + d/2) / \big( \pi^{d/2}\, \Gamma(v) \big)$. Therefore, we can show that $H(\mathbf{h}) \ge 0$, $\tilde{H}(\mathbf{h}) \ge 0$, and $D(\mathbf{h}) \ge 0$. Under certain conditions, the minimum of the left-hand side of inequality (6) can equal zero, which leads to the following corollary.
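The infimum in the validity bounds (7) and (8) can be approximated numerically on a grid of spatial lags. A rough sketch (grid-based, so only an approximation of the true infimum; function name and grid settings are ours):

```python
import numpy as np
from scipy.special import gamma

def rho12_bound(v1, v2, v12, a1, a2, a12, d=2, hmax=200.0, n=40001):
    """Grid approximation of the upper bound on rho_12^2 from [14]:
    (c_v1 c_v2 / c_v12^2) (a1^{2 v1} a2^{2 v2} / a12^{4 v12})
        * inf_{h >= 0} (a12^2 + h^2)^{2 v12 + d}
                       / ((a1^2 + h^2)^{v1 + d/2} (a2^2 + h^2)^{v2 + d/2}),
    with c_v = Gamma(v + d/2) / (pi^{d/2} Gamma(v))."""
    c = lambda v: gamma(v + d / 2) / (np.pi**(d / 2) * gamma(v))
    h2 = np.linspace(0.0, hmax, n)**2          # grid over squared distances
    ratio = ((a12**2 + h2)**(2 * v12 + d)
             / ((a1**2 + h2)**(v1 + d / 2) * (a2**2 + h2)**(v2 + d / 2)))
    const = (c(v1) * c(v2) / c(v12)**2
             * a1**(2 * v1) * a2**(2 * v2) / a12**(4 * v12))
    return const * ratio.min()
```

When $v_{12} = (v_1 + v_2)/2$ and all scale parameters coincide, the bound equals 1, recovering the unconstrained case.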
Corollary 1.
The necessary and sufficient condition for Equation (5) to be a correlation matrix function reduces to $0 \le c \le 1$ in the following cases:
(a) When $\alpha_{12} \le \min(\alpha_1, \alpha_2)$, $\alpha_{12}' \le \min(\alpha_1', \alpha_2')$, $v_{12} = \frac{v_1 + v_2}{2}$, and
$$\rho_{12}^2 = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_{12}^2}{\alpha_1 \alpha_2} \right)^{d}, \qquad \rho_{12}'^{\,2} = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_{12}'^{\,2}}{\alpha_1' \alpha_2'} \right)^{d};$$
(b) When $\alpha_{12} \ge \max(\alpha_1, \alpha_2)$, $\alpha_{12}' \ge \max(\alpha_1', \alpha_2')$, $v_{12} = \frac{v_1 + v_2}{2}$, and
$$\rho_{12}^2 = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_1}{\alpha_{12}} \right)^{2 v_1} \left( \frac{\alpha_2}{\alpha_{12}} \right)^{2 v_2}, \qquad \rho_{12}'^{\,2} = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_1'}{\alpha_{12}'} \right)^{2 v_1} \left( \frac{\alpha_2'}{\alpha_{12}'} \right)^{2 v_2}.$$
The proofs of the theorems and corollary are deferred to Appendix A. It is well known that setting $v = 1/2$ in the Matérn covariance function yields the exponential form. This leads to the following example:
Example 1.
Let $\boldsymbol{\alpha}$, $\boldsymbol{\alpha}'$, $\rho_{12}$, $\rho_{12}'$, and $\beta_k$ ($k = 1, 2$) be as in Theorem 3, and take $v_1 = v_2 = v_{12} = \frac{1}{2}$. Then a necessary and sufficient condition for the matrix function of exponential type
$$C(\mathbf{h}; t) = \begin{cases} c\, E(\mathbf{h} \mid \boldsymbol{\alpha}, \rho_{12}) + (1 - c)\, E(\mathbf{h} \mid \boldsymbol{\alpha}', \rho_{12}'), & t = 0, \\ c\, E(\mathbf{h} \mid \boldsymbol{\alpha}, \rho_{12})\, \beta_1 + (1 - c)\, E(\mathbf{h} \mid \boldsymbol{\alpha}', \rho_{12}')\, \beta_2, & t = \pm 1, \\ \mathbf{0}, & \text{otherwise}, \end{cases} \qquad \mathbf{h} \in D,$$
to be a stationary correlation matrix function on $D \times \mathbb{Z}$ is that the constant $c$ satisfies inequality (6), where
$$E(\mathbf{h} \mid \boldsymbol{\alpha}, \rho_{12}) = \begin{pmatrix} e_{11}(\mathbf{h} \mid \alpha_1) & \rho_{12}\, e_{12}(\mathbf{h} \mid \alpha_{12}) \\ \rho_{12}\, e_{12}(\mathbf{h} \mid \alpha_{12}) & e_{22}(\mathbf{h} \mid \alpha_2) \end{pmatrix}, \qquad e_{ij}(\mathbf{h} \mid \alpha_k) = \exp(-\alpha_k \|\mathbf{h}\|),\quad i, j = 1, 2,\ k = 1, 2, 12.$$

3. ARMA Type Temporal Margin

In the previous section, we considered the spatio-temporal covariance structure with a moving average of order one (MA(1)) as the temporal margin. In this section, we extend the covariance matrix to more general cases involving other autoregressive and moving average (ARMA) temporal margins.
The following theorem establishes the sufficient and necessary conditions for a valid spatio-temporal covariance matrix with ARMA-type temporal dependence. As before, it assumes a common scale parameter $\alpha$ in $M(\mathbf{h} \mid \mathbf{v}, \alpha)$.
Theorem 4.
Let $\mathbf{v} = (v_1, v_2, v_{12})$ and $\boldsymbol{\beta} = (\beta_1, \beta_2)$ be constant vectors with $v_k > 0$, $\alpha_k > 0$, $-1 < \beta_k < 1$, let $v_{12} = (v_1 + v_2)/2$, and let $D \subseteq \mathbb{R}^d$ or $\mathbb{S}^d$. A sufficient condition for the $p \times p$ matrix function
$$C(\mathbf{h}; t) = c\, M(\mathbf{h} \mid \mathbf{v}, \alpha_1)\, \beta_1^{|t|} + (1 - c)\, M(\mathbf{h} \mid \mathbf{v}, \alpha_2)\, \beta_2^{|t|}, \qquad t \in \mathbb{Z},\ \mathbf{h} \in D, \tag{12}$$
to be a correlation matrix function on $D \times \mathbb{Z}$ is that the constant $c$ satisfies
$$0 \le c \le 1. \tag{13}$$
If $p \ge 2$, condition (13) is also necessary. Here
$$M(\mathbf{h} \mid \mathbf{v}, \alpha) = \big( \rho_{ij}\, m(\mathbf{h} \mid v_{ij}, \alpha) \big)_{1 \le i, j \le p},$$
$$m(\mathbf{h} \mid v_k, \alpha) = \frac{2^{1 - v_k}}{\Gamma(v_k)}\, (\alpha \|\mathbf{h}\|)^{v_k}\, K_{v_k}(\alpha \|\mathbf{h}\|),\quad k = 1, 2, 12, \qquad \rho_{12} = \frac{\Gamma(v_1 + \frac{d}{2})^{1/2}}{\Gamma(v_1)^{1/2}}\, \frac{\Gamma(v_2 + \frac{d}{2})^{1/2}}{\Gamma(v_2)^{1/2}}\, \frac{\Gamma(v_{12})}{\Gamma(v_{12} + \frac{d}{2})}.$$
We now extend this theorem to allow different scale parameters in $M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha})$. As in the preceding section, we follow [14] and assume that both $M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12})$ and $M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}', \rho_{12}')$ below are general multivariate Matérn covariance functions. Furthermore, we assume that the choice of $c$ satisfies the conditions of Theorem 4 in [13], ensuring that the main diagonal elements of the resulting matrix structure are valid univariate correlation functions.
Theorem 5.
Let $\mathbf{v} = (v_1, v_2, v_{12})$, $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \alpha_{12})$, $\boldsymbol{\alpha}' = (\alpha_1', \alpha_2', \alpha_{12}')$, and $\boldsymbol{\beta} = (\beta_1, \beta_2)$ be constant vectors with $v_k > 0$, $\alpha_k > 0$, $\alpha_k' > 0$, $-1 < \beta_k < 1$, and $D \subseteq \mathbb{R}^d$ or $\mathbb{S}^d$. A necessary and sufficient condition for the $2 \times 2$ matrix function ($p = 2$)
$$C(\mathbf{h}; t) = c\, M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12})\, \beta_1^{|t|} + (1 - c)\, M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}', \rho_{12}')\, \beta_2^{|t|}, \qquad t \in \mathbb{Z},\ \mathbf{h} \in D, \tag{15}$$
to be a correlation matrix function on $D \times \mathbb{Z}$ is that the constant $c$ satisfies
$$\inf_{\mathbf{h} \ne \mathbf{0},\, D(\mathbf{h}) > 0} \frac{c^2 (\beta_1')^2 H(\mathbf{h}) + (1 - c)^2 (\beta_2')^2 \tilde{H}(\mathbf{h})}{\beta_1' \beta_2'\, D(\mathbf{h})} \ge c\,(c - 1),$$
where
$$M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12}) = \begin{pmatrix} m_{11}(\mathbf{h} \mid v_1, \alpha_1) & \rho_{12}\, m_{12}(\mathbf{h} \mid v_{12}, \alpha_{12}) \\ \rho_{12}\, m_{12}(\mathbf{h} \mid v_{12}, \alpha_{12}) & m_{22}(\mathbf{h} \mid v_2, \alpha_2) \end{pmatrix},$$
$$m_{ij}(\mathbf{h} \mid v_k, \alpha_k) = \frac{2^{1 - v_k}}{\Gamma(v_k)}\, (\alpha_k \|\mathbf{h}\|)^{v_k}\, K_{v_k}(\alpha_k \|\mathbf{h}\|), \qquad \beta_i' = \frac{1 - \beta_i^2}{1 + \beta_i^2 - 2 \beta_i \cos(\omega)},\quad i, j = 1, 2,\ k = 1, 2, 12,$$
$$H(\mathbf{h}) = \frac{\alpha_1^{2 v_1} \alpha_2^{2 v_2}\, c_{v_1} c_{v_2}}{(\alpha_1^2 + \|\mathbf{h}\|^2)^{v_1 + d/2} (\alpha_2^2 + \|\mathbf{h}\|^2)^{v_2 + d/2}} - \frac{\rho_{12}^2\, \alpha_{12}^{4 v_{12}}\, c_{v_{12}}^2}{(\alpha_{12}^2 + \|\mathbf{h}\|^2)^{2 v_{12} + d}},$$
$\tilde{H}(\mathbf{h})$ is defined like $H(\mathbf{h})$, with $\alpha_i$ replaced by $\alpha_i'$, $i = 1, 2, 12$, and
$$D(\mathbf{h}) = \frac{\alpha_1^{2 v_1} \alpha_2'^{\,2 v_2}\, c_{v_1} c_{v_2}}{(\alpha_1^2 + \|\mathbf{h}\|^2)^{v_1 + d/2} (\alpha_2'^{\,2} + \|\mathbf{h}\|^2)^{v_2 + d/2}} + \frac{\alpha_1'^{\,2 v_1} \alpha_2^{2 v_2}\, c_{v_1} c_{v_2}}{(\alpha_1'^{\,2} + \|\mathbf{h}\|^2)^{v_1 + d/2} (\alpha_2^2 + \|\mathbf{h}\|^2)^{v_2 + d/2}} - \frac{2 \rho_{12} \rho_{12}'\, \alpha_{12}^{2 v_{12}} \alpha_{12}'^{\,2 v_{12}}\, c_{v_{12}}^2}{\big( (\alpha_{12}^2 + \|\mathbf{h}\|^2)(\alpha_{12}'^{\,2} + \|\mathbf{h}\|^2) \big)^{v_{12} + d/2}}.$$
Incorporating different $\alpha_i$ values into the model allows for more detailed spatial parameterization, enabling a more precise capture of spatial trends. Once again, the condition in this theorem simplifies in several special cases:
Corollary 2.
The necessary and sufficient condition for Equation (15) to be a correlation matrix function reduces to $0 \le c \le 1$ in the following cases:
(a) When $\alpha_{12} \le \min(\alpha_1, \alpha_2)$, $\alpha_{12}' \le \min(\alpha_1', \alpha_2')$, $v_{12} = \frac{v_1 + v_2}{2}$, and
$$\rho_{12}^2 = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_{12}^2}{\alpha_1 \alpha_2} \right)^{d}, \qquad \rho_{12}'^{\,2} = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_{12}'^{\,2}}{\alpha_1' \alpha_2'} \right)^{d};$$
(b) When $\alpha_{12} \ge \max(\alpha_1, \alpha_2)$, $\alpha_{12}' \ge \max(\alpha_1', \alpha_2')$, $v_{12} = \frac{v_1 + v_2}{2}$, and
$$\rho_{12}^2 = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_1}{\alpha_{12}} \right)^{2 v_1} \left( \frac{\alpha_2}{\alpha_{12}} \right)^{2 v_2}, \qquad \rho_{12}'^{\,2} = \frac{c_{v_1} c_{v_2}}{c_{v_{12}}^2} \left( \frac{\alpha_1'}{\alpha_{12}'} \right)^{2 v_1} \left( \frac{\alpha_2'}{\alpha_{12}'} \right)^{2 v_2}.$$
The proof of this corollary is similar to that of Corollary 1. The temporal margin in both theorems is given by
$$C(\mathbf{0}; t) = c\, I_{2 \times 2}\, \beta_1^{|t|} + (1 - c)\, I_{2 \times 2}\, \beta_2^{|t|}, \qquad t \in \mathbb{Z},$$
which is a linear combination of valid correlation matrices. This structure encompasses a family of valid spatio-temporal correlation functions with stationary AR(1) (first-order autoregressive model), AR(2), and ARMA(2,1) temporal margins. The parameters α k and ν k for k = 1 , 2 , 12 can be interpreted as the spatial scaling and smoothness parameters, respectively. The parameters β 1 and β 2 govern the temporal dynamics, while c serves as a mixing parameter balancing the two components.
To apply the proposed parametric models, one may first use time series techniques to fit ARMA models at each spatial location. This process can help determine the appropriate ARMA order and provide starting values for β 1 , β 2 , and c. Final parameter estimation can then be performed using either maximum likelihood estimation or the weighted least squares method of [26] (see also Equation (22) in [27]). For the spatial component, standard procedures in spatial statistics can be employed to estimate initial values for α i and the cross-correlation parameters ρ , ρ . For instance, one can use the fitted parameters from the marginal spatial and cross-correlation functions at different time lags as starting points. Additional insights into the temporal structure can be obtained using tools such as the autocorrelation function (ACF), partial autocorrelation function (PACF), and information criteria like AIC and BIC. Since the temporal margin can initially be analyzed independently, this step provides useful guidance for model selection. Ultimately, the choice of the final model should be guided by space–time fitting criteria, which are generally robust to small variations in the marginal temporal model. Simplicity is also an important consideration in final model selection. Therefore, the proposed models, along with the stepwise estimation strategy, offer a practical and flexible approach by decomposing the complex spatio-temporal modeling problem into two more manageable steps. The proposed framework also provides an intuitive path toward modeling multivariate spatio-temporal processes, where each spatial location may follow an ARMA-type temporal process. One benefit of this approach is that it allows the multivariate MA(1) process to be approximated by analyzing marginal trends. 
Since the spatial correlation structure can differ across variables and time lags, it is often beneficial to estimate the trend separately at each time lag to obtain more accurate initial values. These components can then be integrated into a unified model, which is subsequently refined using joint estimation.
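As a toy illustration of the stepwise strategy, one can fit the mixture temporal margin $c\,\beta_1^{|t|} + (1 - c)\,\beta_2^{|t|}$ to a sample ACF by least squares to obtain starting values for $c$, $\beta_1$, and $\beta_2$. A sketch under these assumptions (function names and starting point are ours):

```python
import numpy as np
from scipy.optimize import minimize

def sample_acf(x, max_lag):
    """Sample autocorrelation at lags 0..max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

def fit_ar1_mixture(acf):
    """Least-squares fit of rho(t) = c b1^|t| + (1 - c) b2^|t| to a sample
    ACF, giving starting values (c, beta1, beta2) for the space-time fit."""
    lags = np.arange(len(acf))
    def loss(theta):
        c, b1, b2 = theta
        return np.sum((c * b1**lags + (1 - c) * b2**lags - acf)**2)
    res = minimize(loss, x0=[0.5, 0.5, 0.1], method="L-BFGS-B",
                   bounds=[(0.0, 1.0), (-0.99, 0.99), (-0.99, 0.99)])
    return res.x
```

The fitted triple can then seed the joint space–time estimation; standard diagnostics (ACF, PACF, AIC/BIC) remain useful for choosing the ARMA order beforehand.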
For the data application presented in the next section, parameter estimation was conducted using the least squares method from [7] and the techniques developed in [27]. Extending these techniques to accommodate general ARMA(p, q) temporal margins would require further theoretical development of the results presented here. However, such extensions remain computationally feasible, particularly when using Cressie’s weighted least squares approach. We leave the exploration of more complex temporal margins for future research.

4. Data Example: Kansas Daily Temperature Data

This dataset is sourced from the National Oceanic and Atmospheric Administration (NOAA) and includes observations from 105 weather stations across Kansas. For our real-data application, we focus on two highly correlated variables: daily maximum and minimum temperatures recorded over 8030 days, from 1 January 1990 to 31 December 2011, across all 105 counties. To reduce short-term variability and obtain a more stable pattern, we preprocess the data by computing weekly averages over the 8030 days, resulting in 1144 weeks of average maximum and minimum temperatures, which we use as our raw dataset. We divide the dataset into training and testing sets: the first 800 weeks (approximately the first fifteen years) are used for training, and the remaining 344 weeks (the last seven years) are used for testing. To detrend and deseasonalize the data, we follow the procedure outlined in [27], subtracting the overall mean weekly temperature for each calendar week. Specifically:
Let $X_{y,w,i}$ be the weekly average temperature in year $y$, week $w$, at location $i$; let $\bar{X}_{w,i}$ be the average temperature for week $w$ at location $i$ across the $n$ years; and let $X'_{y,w,i}$ be the weekly value at location $i$ with the seasonal mean removed, defined as
$$X'_{y,w,i} = X_{y,w,i} - \bar{X}_{w,i},$$
where
$$\bar{X}_{w,i} = \frac{1}{n} \sum_{y=1}^{n} X_{y,w,i}, \qquad w = 1, 2, \ldots, 52.$$
This deseasonalization step removes the dominant annual signal and yields weekly anomalies, which reveals the underlying MA(1) correlation pattern.
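The deseasonalization step is a one-line array operation once the data are arranged by year, week, and location; a minimal sketch (the array layout is our assumption):

```python
import numpy as np

def deseasonalize(X):
    """Weekly anomalies: subtract, for each (week, location) pair, the mean
    over years, i.e. X'_{y,w,i} = X_{y,w,i} - (1/n) sum_y X_{y,w,i}.
    X has shape (n_years, 52, n_locations)."""
    return X - X.mean(axis=0, keepdims=True)
```

By construction the anomalies average to zero over years for every calendar week and location.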
We then compute the autocorrelation function (ACF) and cross-correlation function (CCF) of the de-trended minimum and maximum temperature series across the 105 counties using the training period. Figure 1 and Figure 2 display the ACFs of average maximum and minimum temperatures for all locations, as well as for three randomly chosen stations. Based on the ACF and CCF plots, both variables exhibit a pattern consistent with a moving average process of order one (MA(1)), supporting the use of a spatio-temporal model with an MA(1) temporal margin.
The next step is to compute space–time correlations from the detrended data $X'_{y,w,i}$ for model fitting. Since the data include many location pairs at each distance, it is hard to extract stable spatial trends across time lags. To reduce noise, we apply spatial binning with $h = 4$ and $\delta = 2$; that is, we average the spatial correlations within each 4-km bin and discard any empty bins. The binned correlations serve as the input data for further model fitting. We use least squares optimization to fit the empirical spatial correlations for minimum temperature, maximum temperature, and their cross-correlation at lag zero. These fits provide suitable initial values for the PMM, SMM, and Cauchy models introduced below.
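A minimal sketch of the distance-binning step (function name, bin handling, and the toy inputs below are illustrative; the paper's exact tolerance handling may differ):

```python
import numpy as np

def bin_correlations(dist, corr, width=4.0):
    """Average empirical correlations of station pairs into distance bins of
    the given width (km), discarding empty bins.
    Returns (bin_centers, bin_means)."""
    dist, corr = np.asarray(dist, float), np.asarray(corr, float)
    edges = np.arange(0.0, dist.max() + width, width)
    idx = np.digitize(dist, edges) - 1        # bin index for each pair
    centers, means = [], []
    for b in range(len(edges) - 1):
        mask = idx == b
        if mask.any():                        # skip empty bins
            centers.append(edges[b] + width / 2)
            means.append(corr[mask].mean())
    return np.array(centers), np.array(means)
```

The binned pairs (distance, mean correlation) then play the role of the empirical input to the weighted least squares fit.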
Guided by this exploratory analysis, an MA(1)-type temporal margin is a suitable choice for applying Theorem 3. While the correlation in Theorem 3 approaches 1 as the distance $\|\mathbf{h}\| \to 0$, real-world data often exhibit a nugget effect, which must be accounted for. Incorporating the nugget effect into the structure of Theorem 3, we formulate the proposed model, referred to as the PMM (Partially Mixed Model) and based on $C(\mathbf{h}; t)$ in Equation (5), as follows:
$$C_{PMM}(\mathbf{h}; t) = \begin{pmatrix} 1 - \eta_1 & 1 \\ 1 & 1 - \eta_2 \end{pmatrix} \circ C(\mathbf{h}; t) + C(\mathbf{0}; t) \circ \begin{pmatrix} \eta_1\, \mathbb{1}_{\{\mathbf{h} = \mathbf{0}\}} & 0 \\ 0 & \eta_2\, \mathbb{1}_{\{\mathbf{h} = \mathbf{0}\}} \end{pmatrix}, \tag{19}$$
where $\circ$ denotes the elementwise (Hadamard) product.
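The nugget adjustment above is a pair of Hadamard products; a sketch for a single $2 \times 2$ block (our helper, assuming the base correlation blocks $C(\mathbf{h}; t)$ and $C(\mathbf{0}; t)$ are already available):

```python
import numpy as np

def pmm_block(C_ht, C_0t, h_is_zero, eta1, eta2):
    """Nugget-adjusted PMM correlation block: an elementwise product that
    scales the diagonal of C(h; t) by (1 - eta) and restores the nugget via
    C(0; t) on the diagonal when h = 0."""
    scale = np.array([[1.0 - eta1, 1.0], [1.0, 1.0 - eta2]])
    nugget = np.diag([eta1, eta2]) * float(h_is_zero)
    return scale * C_ht + C_0t * nugget
```

At $\mathbf{h} = \mathbf{0}$, $t = 0$ the diagonal entries recombine to exactly 1, so the result remains a correlation block; away from the origin the diagonal drops by the nugget proportion.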
We use Cressie’s weighted-least-squares optimization method [26] for parameter estimation (Algorithm 1):
Algorithm 1 Estimation Procedure
Initialize parameters:
    $\theta^{(0)} = (\eta_1, \eta_2, \alpha_1, \alpha_1', \alpha_2, \alpha_2', \alpha_{12}, \alpha_{12}', c, \beta_1, \beta_2, \rho_{12}, \rho_{12}')$;
   Set iteration counter d = 0 ;
Repeat
   Compute predicted covariances in Equation (19) at t = 0 , 1 , 2 across all distances;
   Calculate weighted sum of squares:
        $\mathrm{WSS}^{(d)} = $ weighted sum of squared residuals at $t = 0, 1, 2$ across all distances;
   Update parameters θ ( d + 1 ) by minimizing WSS ( d ) using the L-BFGS-B algorithm;
    d d + 1 ;
until convergence: | WSS ( d + 1 ) WSS ( d ) | < δ , for a small threshold δ > 0 .
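Algorithm 1 can be sketched with `scipy.optimize.minimize`; the wrapper below is a generic weighted-least-squares fit (the model closure and the toy exponential example in the test are illustrative, not the full PMM):

```python
import numpy as np
from scipy.optimize import minimize

def fit_wls(theta0, model, emp_corr, weights, bounds, tol=1e-10):
    """Weighted least squares in the spirit of Algorithm 1: minimize the
    weighted sum of squared differences between model correlations and the
    binned empirical correlations (stacked over t = 0, 1, 2 and all
    distances) via L-BFGS-B.  `model(theta)` must return an array aligned
    with `emp_corr`."""
    def wss(theta):
        r = model(theta) - emp_corr
        return np.sum(weights * r**2)
    res = minimize(wss, theta0, method="L-BFGS-B", bounds=bounds, tol=tol)
    return res.x, res.fun
```

In practice `model` would assemble the PMM correlations of Equation (19) from the parameter vector, and `weights` would follow Cressie's weighting scheme.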
Finally, the fitted and estimated parameter values for the PMM model are as follows: $\eta_1 = 0.1014$, $\eta_2 = 0.1280$, $\alpha_1 = 0.000025$, $\alpha_1' = 0.004088$, $\alpha_2 = 0.003852$, $\alpha_2' = 0.000025$, $\alpha_{12} = 0.002868$, $\alpha_{12}' = 0.000100$, $c = 0.5254$, $\beta_1 = 0.2496$, $\beta_2 = 0.2591$, $\rho_{12} = 0.6964$, $\rho_{12}' = 0.6523$, and all $v_{ij}$ are set to 2.5. All of the estimated parameters satisfy the conditions in Theorem 3, ensuring that Equation (19) is a valid covariance matrix function; otherwise, the involved matrix would not be invertible and co-kriging could not be performed. Next, we apply the purely spatial multivariate Matérn model (SMM), as proposed in [14], with the nugget effect incorporated, for comparison.
$$C(\mathbf{h}) = \begin{pmatrix} 1 - \eta_1 & 1 \\ 1 & 1 - \eta_2 \end{pmatrix} \circ M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12}) + \begin{pmatrix} \eta_1\, \mathbb{1}_{\{\mathbf{h} = \mathbf{0}\}} & 0 \\ 0 & \eta_2\, \mathbb{1}_{\{\mathbf{h} = \mathbf{0}\}} \end{pmatrix}.$$
In addition, we compared the performance of the separable Cauchy model in continuous time, as proposed in [14], with the nugget effect incorporated:
$$C(\mathbf{h}; t) = \left\{ \begin{pmatrix} 1 - \eta_1 & 1 \\ 1 & 1 - \eta_2 \end{pmatrix} \circ M(\mathbf{h} \mid \mathbf{v}, \boldsymbol{\alpha}, \rho_{12}) + \begin{pmatrix} \eta_1\, \mathbb{1}_{\{\mathbf{h} = \mathbf{0}\}} & 0 \\ 0 & \eta_2\, \mathbb{1}_{\{\mathbf{h} = \mathbf{0}\}} \end{pmatrix} \right\} \cdot \big( 1 + a |t|^{2\alpha} \big)^{-1},$$
where $t \in \mathbb{R}$, $\mathbf{h} \in D$.
Figure 3 and Figure 4 show the fitted PMM, SMM, and Cauchy models at time lags of 0 and 1 for maximum temperature, minimum temperature, and their cross space–time correlations. In Figure 3, the PMM model fits the empirical correlations better than the SMM and Cauchy models, capturing the underlying structure more accurately. In Figure 4, for maximum and minimum temperature correlations at lag 1, the PMM model better captures the correlation patterns, while the Cauchy model performs slightly better for the cross-correlation.
Across both figures, the dispersion of the empirical correlations increases at long distances, as seen in the first plot of Figure 3. This pattern aligns with real-world expectations, where correlation typically decreases with distance, and it also contributes to reduced model fitting performance at long range. Figure 5 shows that at time lag 2, all correlations are near zero, highlighting the MA(1) temporal structure in the data; the PMM model's correlation estimates at lag 2 are exactly zero by construction.
After fitting the PMM, SMM, and Cauchy models on the training data, the next step is to perform co-kriging for prediction on the testing data, as described below.
The response variable $\hat{Y}(\mathbf{s}_0, t_0)$ at location $\mathbf{s}_0$ and time $t_0$ is estimated as
$$\hat{Y}(\mathbf{s}_0, t_0) = \sum_{i=1}^{n} \sum_{j=1}^{m} \lambda_{ij}^{Y}\, Y(\mathbf{s}_i, t_j) + \sum_{i=1}^{n} \sum_{j=1}^{m} \lambda_{ij}^{X}\, X(\mathbf{s}_i, t_j),$$
where the weights $\lambda_{ij}^{Y}$ and $\lambda_{ij}^{X}$ are obtained by solving
$$\boldsymbol{\lambda}_{ck} = K_{ck}^{-1}\, \mathbf{k}_{ck},$$
$$K_{ck} = \begin{pmatrix} C_{YY} & C_{YX} & \mathbf{1} & \mathbf{0} \\ C_{XY} & C_{XX} & \mathbf{0} & \mathbf{1} \\ \mathbf{1}^\top & \mathbf{0}^\top & 0 & 0 \\ \mathbf{0}^\top & \mathbf{1}^\top & 0 & 0 \end{pmatrix},$$
where:
$$C_{YY} = \begin{pmatrix} C_{YY}(\mathbf{s}_i - \mathbf{s}_j, t_1 - t_1) & \cdots & C_{YY}(\mathbf{s}_i - \mathbf{s}_j, t_1 - t_m) \\ \vdots & \ddots & \vdots \\ C_{YY}(\mathbf{s}_i - \mathbf{s}_j, t_m - t_1) & \cdots & C_{YY}(\mathbf{s}_i - \mathbf{s}_j, t_m - t_m) \end{pmatrix}.$$
Here $C_{YY}(\mathbf{s}_i - \mathbf{s}_j, t_1 - t_1)$ is the covariance matrix across all station pairs at time lag $t_1 - t_1$ for variable $Y$, and
$$\mathbf{k}_{ck} = \big( C_{YY}(\mathbf{s}_0 - \mathbf{s}_1, t_0 - t_1), \ldots, C_{YY}(\mathbf{s}_0 - \mathbf{s}_n, t_0 - t_m),\ C_{YX}(\mathbf{s}_0 - \mathbf{s}_1, t_0 - t_1), \ldots, C_{YX}(\mathbf{s}_0 - \mathbf{s}_n, t_0 - t_m),\ 1, 0 \big)^\top.$$
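The bordered co-kriging system can be solved directly with a linear solver; a sketch with a toy, ordinary-kriging-sized example (numbers below are illustrative, not from the Kansas data):

```python
import numpy as np

def cokrige(K_ck, k_ck, z):
    """Solve lambda = K_ck^{-1} k_ck and return the prediction lambda @ z.
    z stacks the observed Y and X values, padded with zeros in the positions
    corresponding to the Lagrange multipliers, so the multipliers do not
    contribute to the predicted value."""
    lam = np.linalg.solve(K_ck, k_ck)
    return lam @ z
```

For a symmetric two-observation toy system the solver reproduces the intuitive equal-weight average, which is a quick correctness check.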
In the PMM and Cauchy models, co-kriging is performed using the minimum and maximum temperatures across all locations at times $t_1$ and $t_2$ as input data, while the SMM model uses $t_1$ only.
In addition, we consider a traditional time series modeling approach. Since standard time series forecasting functions in R packages do not support prediction with fixed parameters, we developed a custom implementation of the innovations algorithm described in [4]. We fit the time series model on the training data for maximum temperatures at all 105 stations. Specifically, for each station we estimated the parameters $\theta$ and $\sigma$, the key components of a first-order moving average (MA(1)) process, and used them to generate predictions on the testing data.
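A minimal sketch of the innovations recursion for one-step MA(1) forecasts (a standard textbook recursion with $\gamma(0) = \sigma^2(1 + \theta^2)$ and $\gamma(1) = \sigma^2 \theta$; the parameter values in the test are illustrative):

```python
import numpy as np

def ma1_innovations_forecast(x, theta, sigma2):
    """One-step-ahead forecasts for a mean-zero MA(1) process via the
    innovations algorithm: theta_{t,1} = gamma(1) / v_{t-1} and
    v_t = gamma(0) - theta_{t,1}^2 v_{t-1}.
    Returns (in-sample one-step predictions, next-step forecast)."""
    g0, g1 = sigma2 * (1 + theta**2), sigma2 * theta
    n = len(x)
    xhat = np.zeros(n + 1)        # xhat[0] = 0 for a mean-zero process
    v = g0                        # v_0: MSE of the first predictor
    for t in range(n):
        th = g1 / v               # theta_{t,1}
        xhat[t + 1] = th * (x[t] - xhat[t])
        v = g0 - th**2 * v        # update the one-step MSE
    return xhat[:-1], xhat[-1]
```

As $t$ grows, the recursion's prediction MSE $v_t$ converges to $\sigma^2$, the innovation variance, which is a useful sanity check on any implementation.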
Finally, predictions were obtained for the testing period. The root mean squared error (RMSE) and the 95% data interval were computed for each method to assess predictive performance. Table 1 reports model performance across all counties for maximum temperature.
The percentage of stations with the lowest RMSE shows that the PMM model outperforms the others at most locations, achieving the lowest RMSE at 93.3% of the 105 stations. This demonstrates the model's broad applicability and consistency across different locations. Consequently, the PMM model also produces the lowest average RMSE across all locations. While this difference may seem small, it matters for de-seasonalized weekly average temperature data, where fluctuations are limited, making even small improvements both statistically and practically meaningful; see [28]. Moreover, the PMM model proves more reliable at individual stations, consistently providing better local predictions. This suggests that it captures more complex structure than a simple MA(1) temporal margin alone or a purely spatial model such as the SMM. The comparison models also perform well: all models share the same marginal-average-based starting values, so even slight improvements over them are meaningful. Based on this analysis, the proposed PMM model demonstrates consistently better predictive performance, particularly when the temporal margin of the space–time process is properly modeled with an MA(1) structure. The PMM model can also perform well when the primary variable has many missing values, by leveraging information from the secondary variable, which univariate time series models cannot utilize. Furthermore, in real-world applications involving complex spatio-temporal data, model selection can be challenging; the PMM model simplifies this choice, since the marginal spatial and temporal correlations directly suggest the appropriate structure. These results suggest that incorporating both strongly correlated spatial components and discrete-time dependence improves overall predictive accuracy.

5. Discussion

This work presents a foundational framework for direct modeling of space–time random fields with spatially correlated structures and time series components. The methodology developed here integrates spatial covariance models with certain autoregressive and moving average temporal structures, offering a tractable yet flexible approach for analyzing spatio-temporal data. Looking ahead, several avenues for further development are promising. One direction is to incorporate more complex forms of temporal dependence, such as general ARMA or nonstationary time dynamics, to better reflect the intricate temporal behaviors observed in environmental and geophysical data. From an inferential standpoint, parameter estimation can be enhanced by moving beyond least squares approaches: adopting maximum likelihood estimation to fit the full correlation structure could lead to more efficient and statistically robust inference, particularly when the data exhibit strong space–time interactions. Additionally, while the current framework relies in part on the Matérn class of spatial covariance functions for its theoretical and practical appeal, other families of spatial structures, including compactly supported or nonstationary models, may offer advantages in specific applications. Exploring these alternatives could further improve the adaptability of the modeling strategy to diverse scientific domains.
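Since the Matérn class is central to the spatial margins discussed above, a minimal sketch of the isotropic Matérn correlation function may help readers who wish to experiment with alternatives. The parameterization below, $M(h\mid\nu,\alpha)=\frac{2^{1-\nu}}{\Gamma(\nu)}(\alpha h)^{\nu}K_{\nu}(\alpha h)$, is one common convention and is assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern(h, nu, alpha):
    """Isotropic Matern correlation M(h | nu, alpha), with M(0) = 1.

    nu > 0 controls smoothness; alpha > 0 is an inverse range parameter.
    """
    h = np.asarray(h, dtype=float)
    out = np.ones_like(h)          # correlation equals 1 at distance zero
    nz = h > 0
    x = alpha * h[nz]
    out[nz] = (2.0 ** (1.0 - nu) / gamma(nu)) * x ** nu * kv(nu, x)
    return out

h = np.linspace(0.0, 5.0, 6)
```

A convenient sanity check: for $\nu = 0.5$ the function reduces to the exponential correlation $\exp(-\alpha h)$.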

Author Contributions

Conceptualization, J.D. and R.T.; methodology, J.D. and R.T.; software, R.T.; validation, R.T. and J.D.; formal analysis, R.T. and J.D.; investigation, R.T.; resources, J.D. and R.T.; data curation, R.T.; writing—original draft preparation, R.T.; writing—review and editing, J.D.; visualization, R.T.; supervision, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this paper will be made available by the authors upon request.

Acknowledgments

The authors sincerely thank the Associate Editor and the two anonymous reviewers for their careful reading of the previous version of the manuscript and for their constructive comments and suggestions, which have greatly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARMA  Autoregressive and Moving Average
ACF   Autocorrelation Function
PMM   Partially Mixed Model
RMSE  Root Mean Squared Error

Appendix A. Proofs

Proof of Theorem 1.
Under assumptions (i) and (ii), we apply Theorem 8 of [29] to verify that (1) is a covariance matrix function on $D\times\mathbb{Z}$. Clearly, $\{C(\mathbf{s}_1,\mathbf{s}_2;t)\}^\top=C(\mathbf{s}_2,\mathbf{s}_1;-t)$, $\mathbf{s}_1,\mathbf{s}_2\in D$, $t\in\mathbb{Z}$. Thus, it suffices to show that the inequality
$$\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{a}_i^\top C(\mathbf{s}_i,\mathbf{s}_j;i-j)\,\mathbf{a}_j\ge0,\qquad(A1)$$
or, equivalently,
$$\sum_{i=1}^{n}\mathbf{a}_i^\top G_0(\mathbf{s}_i,\mathbf{s}_i)\mathbf{a}_i+\sum_{i=1}^{n-1}\big\{\mathbf{a}_i^\top G_1(\mathbf{s}_{i+1},\mathbf{s}_i)\mathbf{a}_{i+1}+\mathbf{a}_{i+1}^\top G_1(\mathbf{s}_{i+1},\mathbf{s}_i)\mathbf{a}_i\big\}\ge0,$$
holds for every positive integer $n$, any $\mathbf{s}_k\in D$, and any $\mathbf{a}_k\in\mathbb{R}^m$.
Since $G_0(\mathbf{s}_1,\mathbf{s}_2)+G_1(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_1,\mathbf{s}_2)$ is a covariance matrix function on $D$, its transpose $\{G_0(\mathbf{s}_1,\mathbf{s}_2)+G_1(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_1,\mathbf{s}_2)\}^\top=G_0^\top(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_1,\mathbf{s}_2)+G_1(\mathbf{s}_1,\mathbf{s}_2)$ equals $G_0(\mathbf{s}_2,\mathbf{s}_1)+G_1(\mathbf{s}_2,\mathbf{s}_1)+G_1^\top(\mathbf{s}_2,\mathbf{s}_1)$, so that
$$G_1(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_1,\mathbf{s}_2)=G_1(\mathbf{s}_2,\mathbf{s}_1)+G_1^\top(\mathbf{s}_2,\mathbf{s}_1),\qquad\mathbf{s}_1,\mathbf{s}_2\in D.$$
Notice that the matrix function $\frac12\{C(\mathbf{s}_1,\mathbf{s}_2;t)+C^\top(\mathbf{s}_1,\mathbf{s}_2;t)\}$ can be written as follows:
$$\frac12\{C(\mathbf{s}_1,\mathbf{s}_2;t)+C^\top(\mathbf{s}_1,\mathbf{s}_2;t)\}=\begin{cases}G_0(\mathbf{s}_1,\mathbf{s}_2),&t=0,\\ \frac12\{G_1(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_1,\mathbf{s}_2)\},&t=1,\\ \frac12\{G_1(\mathbf{s}_2,\mathbf{s}_1)+G_1^\top(\mathbf{s}_2,\mathbf{s}_1)\},&t=-1,\\ 0,&t=\pm2,\pm3,\dots,\end{cases}\qquad\mathbf{s}_1,\mathbf{s}_2\in D,$$
which, by the identity above, equals
$$\begin{cases}G_0(\mathbf{s}_1,\mathbf{s}_2),&t=0,\\ \frac12\{G_1(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_1,\mathbf{s}_2)\},&t=\pm1,\\ 0,&t=\pm2,\pm3,\dots,\end{cases}$$
and hence decomposes as
$$\frac{G_0(\mathbf{s}_1,\mathbf{s}_2)+G_1(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_1,\mathbf{s}_2)}{2}\cdot\begin{cases}1,&t=0,\\ \frac12,&t=\pm1,\\ 0,&t=\pm2,\pm3,\dots,\end{cases}\;+\;\frac{G_0(\mathbf{s}_1,\mathbf{s}_2)-G_1(\mathbf{s}_1,\mathbf{s}_2)-G_1^\top(\mathbf{s}_1,\mathbf{s}_2)}{2}\cdot\begin{cases}1,&t=0,\\ -\frac12,&t=\pm1,\\ 0,&t=\pm2,\pm3,\dots.\end{cases}$$
This is a sum of two separable covariance matrix functions and is thus a covariance matrix function on $D\times\mathbb{Z}$. Based on Theorem 8 of [29], we obtain
$$\begin{aligned}
0&\le\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{a}_i^\top\big\{C(\mathbf{s}_i,\mathbf{s}_j;i-j)+C^\top(\mathbf{s}_i,\mathbf{s}_j;i-j)\big\}\mathbf{a}_j\\
&=\frac12\sum_{i=1}^{n}\mathbf{a}_i^\top\big\{C(\mathbf{s}_i,\mathbf{s}_i;0)+C^\top(\mathbf{s}_i,\mathbf{s}_i;0)\big\}\mathbf{a}_i+\frac12\sum_{i=1}^{n-1}\mathbf{a}_i^\top\big\{C(\mathbf{s}_i,\mathbf{s}_{i+1};-1)+C^\top(\mathbf{s}_i,\mathbf{s}_{i+1};-1)\big\}\mathbf{a}_{i+1}\\
&\qquad+\frac12\sum_{i=1}^{n-1}\mathbf{a}_{i+1}^\top\big\{C(\mathbf{s}_{i+1},\mathbf{s}_i;1)+C^\top(\mathbf{s}_{i+1},\mathbf{s}_i;1)\big\}\mathbf{a}_i\\
&=\sum_{i=1}^{n}\mathbf{a}_i^\top G_0(\mathbf{s}_i,\mathbf{s}_i)\mathbf{a}_i+\frac12\sum_{i=1}^{n-1}\mathbf{a}_i^\top\big\{G_1(\mathbf{s}_{i+1},\mathbf{s}_i)+G_1^\top(\mathbf{s}_{i+1},\mathbf{s}_i)\big\}\mathbf{a}_{i+1}+\frac12\sum_{i=1}^{n-1}\mathbf{a}_{i+1}^\top\big\{G_1(\mathbf{s}_{i+1},\mathbf{s}_i)+G_1^\top(\mathbf{s}_{i+1},\mathbf{s}_i)\big\}\mathbf{a}_i\\
&=\sum_{i=1}^{n}\mathbf{a}_i^\top G_0(\mathbf{s}_i,\mathbf{s}_i)\mathbf{a}_i+\sum_{i=1}^{n-1}\big\{\mathbf{a}_i^\top G_1(\mathbf{s}_{i+1},\mathbf{s}_i)\mathbf{a}_{i+1}+\mathbf{a}_{i+1}^\top G_1(\mathbf{s}_{i+1},\mathbf{s}_i)\mathbf{a}_i\big\},
\end{aligned}$$
where the last equality follows from $\mathbf{a}_i^\top G_1^\top(\mathbf{s}_{i+1},\mathbf{s}_i)\mathbf{a}_{i+1}=\mathbf{a}_{i+1}^\top G_1(\mathbf{s}_{i+1},\mathbf{s}_i)\mathbf{a}_i$. Thus, inequality (A1) is derived. Conversely, suppose that Equation (1) is a covariance matrix function on $D\times\mathbb{Z}$. Then, for arbitrary $n$ locations and $l$ integer time points at each location, we form the $nl$ pairs $(\mathbf{s}_i,t_j)$ with $t_j=j$, choose the corresponding vectors as the products $b_j\mathbf{a}_i$, $i=1,\dots,n$, $j=1,\dots,l$, and obtain
$$\sum_{i=1}^{n}\sum_{i'=1}^{n}\sum_{j=1}^{l}\sum_{j'=1}^{l}b_jb_{j'}\,\mathbf{a}_i^\top C(\mathbf{s}_i,\mathbf{s}_{i'};j-j')\,\mathbf{a}_{i'}\ge0,$$
or
$$\sum_{i=1}^{n}\sum_{i'=1}^{n}\mathbf{a}_i^\top\Big\{\sum_{j=1}^{l}b_j^2\,C(\mathbf{s}_i,\mathbf{s}_{i'};0)+\sum_{j=1}^{l-1}b_jb_{j+1}\big(C(\mathbf{s}_i,\mathbf{s}_{i'};1)+C(\mathbf{s}_i,\mathbf{s}_{i'};-1)\big)\Big\}\mathbf{a}_{i'}\ge0,$$
or
$$\sum_{i=1}^{n}\sum_{i'=1}^{n}\mathbf{a}_i^\top\Big\{\sum_{j=1}^{l}b_j^2\,G_0(\mathbf{s}_i,\mathbf{s}_{i'})+\sum_{j=1}^{l-1}b_jb_{j+1}\big(G_1(\mathbf{s}_i,\mathbf{s}_{i'})+G_1^\top(\mathbf{s}_{i'},\mathbf{s}_i)\big)\Big\}\mathbf{a}_{i'}\ge0.\qquad(A2)$$
In particular, in Equation (A2), taking $b_j=1$, $j=1,\dots,l$, and dividing both sides by $l$ yields
$$\sum_{i=1}^{n}\sum_{i'=1}^{n}\mathbf{a}_i^\top\Big\{G_0(\mathbf{s}_i,\mathbf{s}_{i'})+\frac{l-1}{l}\big(G_1(\mathbf{s}_i,\mathbf{s}_{i'})+G_1^\top(\mathbf{s}_{i'},\mathbf{s}_i)\big)\Big\}\mathbf{a}_{i'}\ge0.$$
Letting $l\to\infty$ gives
$$\sum_{i=1}^{n}\sum_{i'=1}^{n}\mathbf{a}_i^\top\big(G_0(\mathbf{s}_i,\mathbf{s}_{i'})+G_1(\mathbf{s}_i,\mathbf{s}_{i'})+G_1^\top(\mathbf{s}_{i'},\mathbf{s}_i)\big)\mathbf{a}_{i'}\ge0.$$
This implies that $G_0(\mathbf{s}_1,\mathbf{s}_2)+G_1(\mathbf{s}_1,\mathbf{s}_2)+G_1^\top(\mathbf{s}_2,\mathbf{s}_1)$ is a covariance matrix function on $D$, based on Theorem 8 of [29]. Thus, condition (i) is confirmed.
Similarly, in order to confirm condition (ii), in Equation (A2) we take $b_j=(-1)^j$, $j=1,\dots,l$, divide both sides by $l$, and obtain
$$\sum_{i=1}^{n}\sum_{i'=1}^{n}\mathbf{a}_i^\top\Big\{G_0(\mathbf{s}_i,\mathbf{s}_{i'})-\frac{l-1}{l}\big(G_1(\mathbf{s}_i,\mathbf{s}_{i'})+G_1^\top(\mathbf{s}_{i'},\mathbf{s}_i)\big)\Big\}\mathbf{a}_{i'}\ge0.$$
Letting $l\to\infty$ gives
$$\sum_{i=1}^{n}\sum_{i'=1}^{n}\mathbf{a}_i^\top\big(G_0(\mathbf{s}_i,\mathbf{s}_{i'})-G_1(\mathbf{s}_i,\mathbf{s}_{i'})-G_1^\top(\mathbf{s}_{i'},\mathbf{s}_i)\big)\mathbf{a}_{i'}\ge0.$$
This implies that $G_0(\mathbf{s}_1,\mathbf{s}_2)-G_1(\mathbf{s}_1,\mathbf{s}_2)-G_1^\top(\mathbf{s}_2,\mathbf{s}_1)$ is a covariance matrix function on $D$, based on Theorem 8 of [29]. □
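The sufficiency direction of Theorem 1 can be spot-checked numerically. The sketch below is not part of the formal proof; all choices are hypothetical. It takes the simplest scalar case $m=1$ with a stationary exponential $G_0$ and $G_1=\beta G_0$, $|\beta|\le\frac12$, so that conditions (i) and (ii) reduce to the validity of $(1\pm2\beta)G_0$, and verifies that the resulting space–time covariance matrix on a small monitoring design is nonnegative definite.

```python
import numpy as np

def st_cov(s, t, beta, alpha=1.0):
    """Scalar MA(1)-margin space-time covariance on (location, time) pairs:
    C(s1, s2; u) = G0(s1, s2) for u = 0, beta * G0(s1, s2) for |u| = 1,
    and 0 otherwise, with G0(s1, s2) = exp(-alpha * |s1 - s2|)."""
    S = np.abs(s[:, None] - s[None, :])
    U = np.abs(t[:, None] - t[None, :])
    G0 = np.exp(-alpha * S)
    # elementwise product of a valid spatial kernel and a valid MA(1) ACF
    return np.where(U == 0, G0, np.where(U == 1, beta * G0, 0.0))

rng = np.random.default_rng(1)
sites = rng.uniform(0.0, 10.0, 8)
s = np.repeat(sites, 5)            # 8 sites observed ...
t = np.tile(np.arange(5), 8)       # ... at 5 consecutive times each
C = st_cov(s, t, beta=0.5)         # beta = 1/2 sits on the boundary of validity
eigs = np.linalg.eigvalsh(C)       # all eigenvalues should be >= 0
```

Because the matrix is the Schur (elementwise) product of a positive semidefinite spatial kernel and a positive semidefinite MA(1) temporal correlation, its spectrum stays nonnegative, mirroring the separable decomposition used in the proof.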
Proof of Theorem 2.
Following Theorem 1, it is equivalent to show that inequality (3) is a necessary and sufficient condition for $G_0(h)\pm G_1(h)\pm G_1^\top(h)$ to be a valid covariance matrix function on $D$, with $h=\|\mathbf{s}_i-\mathbf{s}_j\|$. Under the scenario of this theorem,
$$G_0(h)\pm G_1(h)\pm G_1^\top(h)=c\,M(h\mid\boldsymbol{\nu},\alpha_1)(1\pm2\beta_1)+(1-c)\,M(h\mid\boldsymbol{\nu},\alpha_2)(1\pm2\beta_2),\qquad h\ge0.\qquad(A3)$$
For sufficiency in general, we apply Theorem 2 in [22]: a positive linear combination of two covariance matrix functions is again a valid covariance matrix function on $D$. Since condition (3) holds, i.e., $0\le c\le1$ and $-\tfrac12\le\beta_i\le\tfrac12$ for $i=1,2$, and $M(h)$ is a spatial correlation matrix function, Equation (A3) is a valid covariance matrix function. Furthermore, by Theorem 1, Equation (2) defines a valid spatio-temporal covariance matrix function.
When $p=2$, we show that condition (3) is both sufficient and necessary. To this end, consider the spectral density of the Matérn class (see Equation (32) of [30]). The Fourier transforms of $G_0(h)+G_1(h)+G_1^\top(h)$ and $G_0(h)-G_1(h)-G_1^\top(h)$ are given as follows:
$$F(\mathbf{h})=\begin{pmatrix}c_{\nu_1}f_{11}(\mathbf{h})&c_{\nu_{12}}\rho_{12}f_{12}(\mathbf{h})\\ c_{\nu_{12}}\rho_{12}f_{21}(\mathbf{h})&c_{\nu_2}f_{22}(\mathbf{h})\end{pmatrix};\qquad G(\mathbf{h})=\begin{pmatrix}c_{\nu_1}g_{11}(\mathbf{h})&c_{\nu_{12}}\rho_{12}g_{12}(\mathbf{h})\\ c_{\nu_{12}}\rho_{12}g_{21}(\mathbf{h})&c_{\nu_2}g_{22}(\mathbf{h})\end{pmatrix},$$
where $c_\nu=\pi^{-d/2}\,\Gamma(\nu+d/2)/\Gamma(\nu)$,
$$f_{ij}(\mathbf{h})=c\,\alpha_1^{\nu_i+\nu_j}\big(h^2+\alpha_1^2\big)^{-\frac{\nu_i+\nu_j}{2}-\frac d2}(1+2\beta_1)+(1-c)\,\alpha_2^{\nu_i+\nu_j}\big(h^2+\alpha_2^2\big)^{-\frac{\nu_i+\nu_j}{2}-\frac d2}(1+2\beta_2),$$
and
$$g_{ij}(\mathbf{h})=c\,\alpha_1^{\nu_i+\nu_j}\big(h^2+\alpha_1^2\big)^{-\frac{\nu_i+\nu_j}{2}-\frac d2}(1-2\beta_1)+(1-c)\,\alpha_2^{\nu_i+\nu_j}\big(h^2+\alpha_2^2\big)^{-\frac{\nu_i+\nu_j}{2}-\frac d2}(1-2\beta_2),$$
respectively, with $h=\|\mathbf{h}\|$. Hence, it reduces to showing that inequality (3) is necessary and sufficient for $F(\mathbf{h})$ and $G(\mathbf{h})$ to be nonnegative definite, which means $f_{11}(\mathbf{h})\ge0$, $f_{22}(\mathbf{h})\ge0$, $g_{11}(\mathbf{h})\ge0$, $g_{22}(\mathbf{h})\ge0$, and
$$c_{\nu_1}c_{\nu_2}f_{11}(\mathbf{h})f_{22}(\mathbf{h})-c_{\nu_{12}}^2\rho_{12}^2f_{12}(\mathbf{h})f_{21}(\mathbf{h})\ge0,\qquad(A4)$$
$$c_{\nu_1}c_{\nu_2}g_{11}(\mathbf{h})g_{22}(\mathbf{h})-c_{\nu_{12}}^2\rho_{12}^2g_{12}(\mathbf{h})g_{21}(\mathbf{h})\ge0,\qquad(A5)$$
based on Cramér's Theorem [31]. From Theorem 2 in [13], we already know that $f_{11}(\mathbf{h})\ge0$ and $g_{11}(\mathbf{h})\ge0$ if and only if
$$\Big\{1-\frac{\alpha_2^{d}(1-2\beta_1)}{\alpha_1^{d}(1-2\beta_2)}\Big\}^{-1}\le c\le\Big\{1-\frac{\alpha_1^{2\nu_1}(1+2\beta_1)}{\alpha_2^{2\nu_1}(1+2\beta_2)}\Big\}^{-1}.\qquad(A6)$$
Also, $f_{22}(\mathbf{h})\ge0$ and $g_{22}(\mathbf{h})\ge0$ if and only if
$$\Big\{1-\frac{\alpha_2^{d}(1-2\beta_1)}{\alpha_1^{d}(1-2\beta_2)}\Big\}^{-1}\le c\le\Big\{1-\frac{\alpha_1^{2\nu_2}(1+2\beta_1)}{\alpha_2^{2\nu_2}(1+2\beta_2)}\Big\}^{-1}.\qquad(A7)$$
Since $0<\nu_1\le\nu_2$, $0<\alpha_1\le\alpha_2$, and $-\tfrac12\le\beta_1\le\beta_2\le\tfrac12$, the requirements $f_{ii}\ge0$ and $g_{ii}\ge0$, $i=1,2$, entail
$$\Big\{1-\frac{\alpha_2^{d}(1-2\beta_1)}{\alpha_1^{d}(1-2\beta_2)}\Big\}^{-1}\le c\le\Big\{1-\frac{\alpha_1^{2\nu_2}(1+2\beta_1)}{\alpha_2^{2\nu_2}(1+2\beta_2)}\Big\}^{-1}.\qquad(A8)$$
To evaluate inequalities (A4) and (A5), noting that $c_{\nu_1}c_{\nu_2}=c_{\nu_{12}}^2\rho_{12}^2$, we expand the left-hand sides of Equations (A4) and (A5) with this positive factor removed:
$$\begin{aligned}
f_{11}(\mathbf{h})f_{22}(\mathbf{h})&-f_{12}(\mathbf{h})f_{21}(\mathbf{h})\\
&=c(1-c)\Big\{\alpha_1^{2\nu_2}\alpha_2^{2\nu_1}\big(h^2+\alpha_1^2\big)^{-\nu_2-\frac d2}\big(h^2+\alpha_2^2\big)^{-\nu_1-\frac d2}+\alpha_1^{2\nu_1}\alpha_2^{2\nu_2}\big(h^2+\alpha_1^2\big)^{-\nu_1-\frac d2}\big(h^2+\alpha_2^2\big)^{-\nu_2-\frac d2}\\
&\qquad\qquad-2\,\alpha_1^{\nu_1+\nu_2}\alpha_2^{\nu_1+\nu_2}\big[(h^2+\alpha_1^2)(h^2+\alpha_2^2)\big]^{-\frac{\nu_1+\nu_2}{2}-\frac d2}\Big\}(1\pm2\beta_1)(1\pm2\beta_2)\\
&=c(1-c)\,\alpha_1^{\nu_1+\nu_2}\alpha_2^{\nu_1+\nu_2}\big[(h^2+\alpha_1^2)(h^2+\alpha_2^2)\big]^{-\frac{\nu_1+\nu_2}{2}-\frac d2}\\
&\qquad\times\Big\{\Big(\frac{\alpha_1^2(h^2+\alpha_2^2)}{\alpha_2^2(h^2+\alpha_1^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}+\Big(\frac{\alpha_2^2(h^2+\alpha_1^2)}{\alpha_1^2(h^2+\alpha_2^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}-2\Big\}(1\pm2\beta_1)(1\pm2\beta_2),
\end{aligned}$$
where the terms in $c^2$ and $(1-c)^2$ cancel because $\nu_{12}=\frac{\nu_1+\nu_2}{2}$ implies $\alpha_k^{2\nu_1}\alpha_k^{2\nu_2}\big(h^2+\alpha_k^2\big)^{-\nu_1-\nu_2-d}=\big\{\alpha_k^{\nu_1+\nu_2}\big(h^2+\alpha_k^2\big)^{-\nu_{12}-\frac d2}\big\}^2$, $k=1,2$.
  • For the necessity part: with $(1\pm2\beta_1)(1\pm2\beta_2)\ge0$, letting $h\to+\infty$ in $\Big(\frac{\alpha_1^2(h^2+\alpha_2^2)}{\alpha_2^2(h^2+\alpha_1^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}+\Big(\frac{\alpha_2^2(h^2+\alpha_1^2)}{\alpha_1^2(h^2+\alpha_2^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}-2$ yields $\big(\frac{\alpha_1^2}{\alpha_2^2}\big)^{\frac{\nu_2-\nu_1}{2}}+\big(\frac{\alpha_2^2}{\alpha_1^2}\big)^{\frac{\nu_2-\nu_1}{2}}-2$, which is greater than zero, so $c\in(0,1)$ if inequalities (A4) and (A5) hold.
  • For the sufficiency part, instead of using Theorem 2 in [22], we can bound the same expression directly: applying the inequality $a+b\ge2\sqrt{ab}$ to the two reciprocal terms shows that $\Big(\frac{\alpha_1^2(h^2+\alpha_2^2)}{\alpha_2^2(h^2+\alpha_1^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}+\Big(\frac{\alpha_2^2(h^2+\alpha_1^2)}{\alpha_1^2(h^2+\alpha_2^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}-2\ge0$, which together with $c\in(0,1)$ proves both inequalities (A4) and (A5). Finally, noting that the condition $c\in(0,1)$ automatically satisfies inequalities (A7) and (A8), Equation (2) is a valid correlation matrix function if and only if $c\in(0,1)$. □
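The key bracket in the argument above can also be inspected numerically. The sketch below uses hypothetical parameter values to evaluate $r(h)^{p}+r(h)^{-p}-2$ with $r(h)=\frac{\alpha_1^2(h^2+\alpha_2^2)}{\alpha_2^2(h^2+\alpha_1^2)}$ and $p=(\nu_2-\nu_1)/2$, confirming that it vanishes at $h=0$, stays nonnegative, and tends to a strictly positive limit as $h\to\infty$.

```python
import numpy as np

# B(h) = r(h)**p + r(h)**(-p) - 2 >= 0 follows from a + b >= 2*sqrt(ab)
# with a*b = 1; hypothetical parameters with nu1 <= nu2 and alpha1 <= alpha2.
a1, a2, nu1, nu2 = 0.7, 2.0, 0.5, 1.5
p = (nu2 - nu1) / 2
h = np.linspace(0.0, 50.0, 2001)
r = a1 ** 2 * (h ** 2 + a2 ** 2) / (a2 ** 2 * (h ** 2 + a1 ** 2))
bracket = r ** p + r ** (-p) - 2       # the braces in the expansion above
```

At $h=0$ the ratio $r$ equals one, so the bracket is exactly zero there; its large-$h$ limit is $(\alpha_1^2/\alpha_2^2)^{p}+(\alpha_2^2/\alpha_1^2)^{p}-2>0$, which is what forces $c(1-c)\ge0$ in the necessity step.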
Proof of Theorem 3.
If we let
$$G_0(h)=c\,M(h\mid\boldsymbol{\nu},\boldsymbol{\alpha})+(1-c)\,M(h\mid\boldsymbol{\nu},\boldsymbol{\alpha}'),\qquad G_1(h)=c\,M(h\mid\boldsymbol{\nu},\boldsymbol{\alpha})\,\beta_1+(1-c)\,M(h\mid\boldsymbol{\nu},\boldsymbol{\alpha}')\,\beta_2$$
in Theorem 1, then we only need to show that $G_0(h)\pm2G_1(h)$ is a valid covariance matrix function. By Cramér's theorem in the spectral domain, it suffices to show that the Fourier transform of $G_0(h)\pm2G_1(h)$ is nonnegative definite. Consider the following Fourier transform matrix function:
$$\begin{pmatrix}f_{11}(\mathbf{h})&f_{12}(\mathbf{h})\\ f_{21}(\mathbf{h})&f_{22}(\mathbf{h})\end{pmatrix},$$
where
$$f_{11}(\mathbf{h})=c\,\sigma^2\big(h^2+\alpha_1^2\big)^{-\nu_1-\frac d2}\,\frac{\Gamma(\nu_1+\frac d2)\,\alpha_1^{2\nu_1}}{\Gamma(\nu_1)\,\pi^{d/2}}\,(1\pm2\beta_1)+(1-c)\,\sigma^2\big(h^2+\alpha_1'^2\big)^{-\nu_1-\frac d2}\,\frac{\Gamma(\nu_1+\frac d2)\,\alpha_1'^{2\nu_1}}{\Gamma(\nu_1)\,\pi^{d/2}}\,(1\pm2\beta_2),$$
$$f_{12}(\mathbf{h})=c\,\rho_{12}\,\sigma^2\big(h^2+\alpha_{12}^2\big)^{-\nu_{12}-\frac d2}\,\frac{\Gamma(\nu_{12}+\frac d2)\,\alpha_{12}^{2\nu_{12}}}{\Gamma(\nu_{12})\,\pi^{d/2}}\,(1\pm2\beta_1)+(1-c)\,\rho_{12}'\,\sigma^2\big(h^2+\alpha_{12}'^2\big)^{-\nu_{12}-\frac d2}\,\frac{\Gamma(\nu_{12}+\frac d2)\,\alpha_{12}'^{2\nu_{12}}}{\Gamma(\nu_{12})\,\pi^{d/2}}\,(1\pm2\beta_2),$$
with $\nu_{12}=\frac{\nu_1+\nu_2}{2}$,
$$f_{21}(\mathbf{h})=f_{12}(\mathbf{h}),$$
$$f_{22}(\mathbf{h})=c\,\sigma^2\big(h^2+\alpha_2^2\big)^{-\nu_2-\frac d2}\,\frac{\Gamma(\nu_2+\frac d2)\,\alpha_2^{2\nu_2}}{\Gamma(\nu_2)\,\pi^{d/2}}\,(1\pm2\beta_1)+(1-c)\,\sigma^2\big(h^2+\alpha_2'^2\big)^{-\nu_2-\frac d2}\,\frac{\Gamma(\nu_2+\frac d2)\,\alpha_2'^{2\nu_2}}{\Gamma(\nu_2)\,\pi^{d/2}}\,(1\pm2\beta_2);$$
then it suffices to show that condition (6) is equivalent to $f_{11}(\mathbf{h})f_{22}(\mathbf{h})-f_{12}(\mathbf{h})f_{21}(\mathbf{h})\ge0$ for any $h\ge0$, based on Cramér's Theorem, since $f_{11}(\mathbf{h})\ge0$ and $f_{22}(\mathbf{h})\ge0$ are ensured by conditions like (A8) with $\alpha_1$ and $\alpha_2$ replaced by $\alpha_i$ and $\alpha_i'$, $i=1,2$, following [13,14].
Let $h_1=h^2+\alpha_1^2$, $h_1'=h^2+\alpha_1'^2$, $h_2=h^2+\alpha_2^2$, $h_2'=h^2+\alpha_2'^2$, $h_{12}=h^2+\alpha_{12}^2$, $h_{12}'=h^2+\alpha_{12}'^2$, and $c_\nu=\pi^{-d/2}\,\Gamma(\nu+d/2)/\Gamma(\nu)$. Note that
$$\begin{aligned}
f_{11}(\mathbf{h})f_{22}(\mathbf{h})-f_{12}(\mathbf{h})f_{21}(\mathbf{h})
&=c^2(1\pm2\beta_1)^2\Big(h_1^{-\nu_1-\frac d2}h_2^{-\nu_2-\frac d2}\alpha_1^{2\nu_1}\alpha_2^{2\nu_2}c_{\nu_1}c_{\nu_2}-\rho_{12}^2\,h_{12}^{-2\nu_{12}-d}\alpha_{12}^{4\nu_{12}}c_{\nu_{12}}^2\Big)\\
&\quad+c(1-c)(1\pm2\beta_1)(1\pm2\beta_2)\Big(h_1^{-\nu_1-\frac d2}h_2'^{-\nu_2-\frac d2}\alpha_1^{2\nu_1}\alpha_2'^{2\nu_2}c_{\nu_1}c_{\nu_2}+h_1'^{-\nu_1-\frac d2}h_2^{-\nu_2-\frac d2}\alpha_1'^{2\nu_1}\alpha_2^{2\nu_2}c_{\nu_1}c_{\nu_2}\\
&\qquad\qquad-2\,\rho_{12}\rho_{12}'\,h_{12}^{-\nu_{12}-\frac d2}h_{12}'^{-\nu_{12}-\frac d2}\alpha_{12}^{2\nu_{12}}\alpha_{12}'^{2\nu_{12}}c_{\nu_{12}}^2\Big)\\
&\quad+(1-c)^2(1\pm2\beta_2)^2\Big(h_1'^{-\nu_1-\frac d2}h_2'^{-\nu_2-\frac d2}\alpha_1'^{2\nu_1}\alpha_2'^{2\nu_2}c_{\nu_1}c_{\nu_2}-\rho_{12}'^2\,h_{12}'^{-2\nu_{12}-d}\alpha_{12}'^{4\nu_{12}}c_{\nu_{12}}^2\Big)\\
&=c^2(1\pm2\beta_1)^2H(\mathbf{h})+c(1-c)(1\pm2\beta_1)(1\pm2\beta_2)D(\mathbf{h})+(1-c)^2(1\pm2\beta_2)^2\tilde H(\mathbf{h}).\qquad(A9)
\end{aligned}$$
Moreover, inequalities (8) and (9) imply $H(\mathbf{h})\ge0$ and $\tilde H(\mathbf{h})\ge0$. The equivalence then follows once we show that $D(\mathbf{h})\ge0$. Because
$$H(\mathbf{h})=\frac{\alpha_1^{2\nu_1}\alpha_2^{2\nu_2}c_{\nu_1}c_{\nu_2}}{(\alpha_1^2+h^2)^{\nu_1+d/2}(\alpha_2^2+h^2)^{\nu_2+d/2}}-\frac{\rho_{12}^2\,\alpha_{12}^{4\nu_{12}}c_{\nu_{12}}^2}{(\alpha_{12}^2+h^2)^{2\nu_{12}+d}}\ge0,$$
$$\tilde H(\mathbf{h})=\frac{\alpha_1'^{2\nu_1}\alpha_2'^{2\nu_2}c_{\nu_1}c_{\nu_2}}{(\alpha_1'^2+h^2)^{\nu_1+d/2}(\alpha_2'^2+h^2)^{\nu_2+d/2}}-\frac{\rho_{12}'^2\,\alpha_{12}'^{4\nu_{12}}c_{\nu_{12}}^2}{(\alpha_{12}'^2+h^2)^{2\nu_{12}+d}}\ge0,$$
by using the inequality $a^2+b^2\ge2ab$, we have
$$\begin{aligned}
D(\mathbf{h})&=\frac{\alpha_1^{2\nu_1}\alpha_2'^{2\nu_2}c_{\nu_1}c_{\nu_2}}{(\alpha_1^2+h^2)^{\nu_1+d/2}(\alpha_2'^2+h^2)^{\nu_2+d/2}}+\frac{\alpha_1'^{2\nu_1}\alpha_2^{2\nu_2}c_{\nu_1}c_{\nu_2}}{(\alpha_1'^2+h^2)^{\nu_1+d/2}(\alpha_2^2+h^2)^{\nu_2+d/2}}-\frac{2\,\rho_{12}\rho_{12}'\,\alpha_{12}^{2\nu_{12}}\alpha_{12}'^{2\nu_{12}}c_{\nu_{12}}^2}{\big[(\alpha_{12}^2+h^2)(\alpha_{12}'^2+h^2)\big]^{\nu_{12}+d/2}}\\
&\ge\frac{2\,c_{\nu_1}c_{\nu_2}\,\alpha_1^{\nu_1}\alpha_1'^{\nu_1}\alpha_2^{\nu_2}\alpha_2'^{\nu_2}}{(\alpha_1^2+h^2)^{\frac{\nu_1}{2}+\frac d4}(\alpha_1'^2+h^2)^{\frac{\nu_1}{2}+\frac d4}(\alpha_2^2+h^2)^{\frac{\nu_2}{2}+\frac d4}(\alpha_2'^2+h^2)^{\frac{\nu_2}{2}+\frac d4}}-\frac{2\,\rho_{12}\rho_{12}'\,\alpha_{12}^{2\nu_{12}}\alpha_{12}'^{2\nu_{12}}c_{\nu_{12}}^2}{\big[(\alpha_{12}^2+h^2)(\alpha_{12}'^2+h^2)\big]^{\nu_{12}+d/2}}\ge0,
\end{aligned}$$
where the last inequality holds because $a^2\ge b^2$ and $a'^2\ge b'^2$ imply $aa'\ge bb'$ whenever $a,a',b,b'>0$; here these follow from $H(\mathbf{h})\ge0$ and $\tilde H(\mathbf{h})\ge0$. When $D(\mathbf{h})=0$, it follows from (A9) that $f_{11}(\mathbf{h})f_{22}(\mathbf{h})-f_{12}(\mathbf{h})f_{21}(\mathbf{h})\ge0$ holds automatically. So the remaining case is $D(\mathbf{h})>0$, and the right-hand side of (A9) is greater than or equal to zero for all $h\ge0$ if and only if
$$\inf_{h\ge0,\;D(\mathbf{h})>0}\frac{c^2(1\pm2\beta_1)^2H(\mathbf{h})+(1-c)^2(1\pm2\beta_2)^2\tilde H(\mathbf{h})}{(1\pm2\beta_1)(1\pm2\beta_2)\,D(\mathbf{h})}\ge c(c-1).\qquad\Box$$
Proof of Corollary 1.
The sufficiency can be proved by following the proof of Theorem 2. For the necessity, first note that the stated conditions ensure (8) and (9) by Theorem 3 in [14]; inequality (6) is then evaluated in the following cases.
(a) When $\alpha_{12}\le\min(\alpha_1,\alpha_2)$, $\alpha_{12}'\le\min(\alpha_1',\alpha_2')$, and $\nu_{12}=\frac{\nu_1+\nu_2}{2}$, the equalities $\rho_{12}^2=\frac{c_{\nu_1}c_{\nu_2}}{c_{\nu_{12}}^2}\big(\frac{\alpha_{12}^2}{\alpha_1\alpha_2}\big)^{d}$ and $\rho_{12}'^2=\frac{c_{\nu_1}c_{\nu_2}}{c_{\nu_{12}}^2}\big(\frac{\alpha_{12}'^2}{\alpha_1'\alpha_2'}\big)^{d}$ hold. The minimum, zero, of the left-hand side of inequality (6) is attained at $h=0$, since
$$H(0)=\frac{\alpha_1^{2\nu_1}\alpha_2^{2\nu_2}c_{\nu_1}c_{\nu_2}}{\alpha_1^{2\nu_1+d}\,\alpha_2^{2\nu_2+d}}-\frac{c_{\nu_1}c_{\nu_2}}{c_{\nu_{12}}^2}\Big(\frac{\alpha_{12}^2}{\alpha_1\alpha_2}\Big)^{d}\,\frac{\alpha_{12}^{4\nu_{12}}c_{\nu_{12}}^2}{\alpha_{12}^{4\nu_{12}+2d}}=\frac{c_{\nu_1}c_{\nu_2}}{\alpha_1^{d}\alpha_2^{d}}-\frac{c_{\nu_1}c_{\nu_2}}{\alpha_1^{d}\alpha_2^{d}}=0.$$
Similarly, $\tilde H(0)=0$. So $0\le c\le1$, given that $c$ is also constrained by conditions like (A8) with $\alpha_1$ and $\alpha_2$ replaced by $\alpha_i$ and $\alpha_i'$, $i=1,2$.
(b) When $\alpha_{12}\ge\max(\alpha_1,\alpha_2)$, $\alpha_{12}'\ge\max(\alpha_1',\alpha_2')$, and $\nu_{12}=\frac{\nu_1+\nu_2}{2}$, the equalities $\rho_{12}^2=\frac{c_{\nu_1}c_{\nu_2}}{c_{\nu_{12}}^2}\big(\frac{\alpha_1}{\alpha_{12}}\big)^{2\nu_1}\big(\frac{\alpha_2}{\alpha_{12}}\big)^{2\nu_2}$ and $\rho_{12}'^2=\frac{c_{\nu_1}c_{\nu_2}}{c_{\nu_{12}}^2}\big(\frac{\alpha_1'}{\alpha_{12}'}\big)^{2\nu_1}\big(\frac{\alpha_2'}{\alpha_{12}'}\big)^{2\nu_2}$ hold; the minimum, zero, of the left-hand side of inequality (6) is obtained as $h\to\infty$:
$$H(\infty)=\lim_{h\to\infty}\bigg[\frac{\alpha_1^{2\nu_1}\alpha_2^{2\nu_2}c_{\nu_1}c_{\nu_2}}{(\alpha_1^2+h^2)^{\nu_1+d/2}(\alpha_2^2+h^2)^{\nu_2+d/2}}-\frac{\rho_{12}^2\,\alpha_{12}^{4\nu_{12}}c_{\nu_{12}}^2}{(\alpha_{12}^2+h^2)^{2\nu_{12}+d}}\bigg]=0.$$
Similarly, $\tilde H(\infty)=0$. So $0\le c\le1$, given that $c$ is also constrained by conditions like (A8) with $\alpha_1$ and $\alpha_2$ replaced by $\alpha_i$ and $\alpha_i'$, $i=1,2$. □
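Case (a) of the proof lends itself to a quick numerical check. In the sketch below (hypothetical parameter values with $\alpha_{12}\le\min(\alpha_1,\alpha_2)$ and $\nu_{12}=(\nu_1+\nu_2)/2$), $\rho_{12}^2$ is set to the boundary value $\frac{c_{\nu_1}c_{\nu_2}}{c_{\nu_{12}}^2}\big(\frac{\alpha_{12}^2}{\alpha_1\alpha_2}\big)^d$, and $H$ is seen to vanish at $h=0$ while remaining nonnegative elsewhere.

```python
import numpy as np
from scipy.special import gamma

d = 2
cn = lambda nu: np.pi ** (-d / 2) * gamma(nu + d / 2) / gamma(nu)

# hypothetical parameters with alpha12 <= min(alpha1, alpha2)
nu1, nu2, a1, a2, a12 = 0.5, 1.5, 1.0, 2.0, 0.8
nu12 = (nu1 + nu2) / 2
rho2 = cn(nu1) * cn(nu2) / cn(nu12) ** 2 * (a12 ** 2 / (a1 * a2)) ** d  # boundary value

def H(h):
    first = (a1 ** (2 * nu1) * a2 ** (2 * nu2) * cn(nu1) * cn(nu2)
             / ((a1 ** 2 + h ** 2) ** (nu1 + d / 2) * (a2 ** 2 + h ** 2) ** (nu2 + d / 2)))
    second = rho2 * a12 ** (4 * nu12) * cn(nu12) ** 2 / (a12 ** 2 + h ** 2) ** (2 * nu12 + d)
    return first - second

h = np.linspace(0.0, 20.0, 2001)
vals = H(h)
```

The value at $h=0$ cancels exactly, matching the display in case (a), and the function stays above zero away from the origin for this parameter configuration.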
Proof of Theorem 4.
The sufficiency can be established along the lines of the proof of Theorem 2. The necessity will be proved using Cramér's Theorem for the case $p=2$ as follows. The Fourier transform of (12) is equal to
$$F(\mathbf{h})=\begin{pmatrix}c_{\nu_1}f_{11}(\mathbf{h})&c_{\nu_{12}}\rho_{12}f_{12}(\mathbf{h})\\ c_{\nu_{12}}\rho_{12}f_{21}(\mathbf{h})&c_{\nu_2}f_{22}(\mathbf{h})\end{pmatrix},$$
where $c_\nu=\pi^{-d/2}\,\Gamma(\nu+d/2)/\Gamma(\nu)$ and
$$f_{ij}(\mathbf{h})=c\,\alpha_1^{\nu_i+\nu_j}\big(h^2+\alpha_1^2\big)^{-\frac{\nu_i+\nu_j}{2}-\frac d2}\beta_1+(1-c)\,\alpha_2^{\nu_i+\nu_j}\big(h^2+\alpha_2^2\big)^{-\frac{\nu_i+\nu_j}{2}-\frac d2}\beta_2,\qquad i,j=1,2.$$
Hence, it reduces to showing that $0\le c\le1$ is necessary for $F(\mathbf{h})$ to be nonnegative definite for any $h\ge0$, based on Cramér's Theorem; that is, $f_{11}(\mathbf{h})\ge0$, $f_{22}(\mathbf{h})\ge0$, and
$$c_{\nu_1}c_{\nu_2}f_{11}(\mathbf{h})f_{22}(\mathbf{h})-c_{\nu_{12}}^2\rho_{12}^2f_{12}(\mathbf{h})f_{21}(\mathbf{h})\ge0,\qquad h\ge0.\qquad(A10)$$
From [13], we already know that $f_{11}(\mathbf{h})\ge0$ if and only if
$$\Big\{1-\frac{\alpha_2^{d}(1-\beta_1)(1+\beta_2)}{\alpha_1^{d}(1+\beta_1)(1-\beta_2)}\Big\}^{-1}\le c\le\Big\{1-\frac{\alpha_1^{2\nu_1}(1+\beta_1)(1-\beta_2)}{\alpha_2^{2\nu_1}(1-\beta_1)(1+\beta_2)}\Big\}^{-1}.$$
Also, $f_{22}(\mathbf{h})\ge0$ if and only if
$$\Big\{1-\frac{\alpha_2^{d}(1-\beta_1)(1+\beta_2)}{\alpha_1^{d}(1+\beta_1)(1-\beta_2)}\Big\}^{-1}\le c\le\Big\{1-\frac{\alpha_1^{2\nu_2}(1+\beta_1)(1-\beta_2)}{\alpha_2^{2\nu_2}(1-\beta_1)(1+\beta_2)}\Big\}^{-1}.$$
To evaluate (A10), noting that $c_{\nu_1}c_{\nu_2}=c_{\nu_{12}}^2\rho_{12}^2$, we expand the left-hand side of (A10), omitting this positive factor. Letting $f_\alpha^\nu=(h^2+\alpha^2)^{-\nu-\frac d2}\,\alpha^{2\nu}$,
$$\begin{aligned}
\big(c f_{\alpha_1}^{\nu_1}\beta_1&+(1-c)f_{\alpha_2}^{\nu_1}\beta_2\big)\big(c f_{\alpha_1}^{\nu_2}\beta_1+(1-c)f_{\alpha_2}^{\nu_2}\beta_2\big)-\big(c f_{\alpha_1}^{\nu_{12}}\beta_1+(1-c)f_{\alpha_2}^{\nu_{12}}\beta_2\big)^2\\
&=c^2f_{\alpha_1}^{\nu_1}f_{\alpha_1}^{\nu_2}\beta_1^2+c(1-c)\big(f_{\alpha_1}^{\nu_2}f_{\alpha_2}^{\nu_1}+f_{\alpha_1}^{\nu_1}f_{\alpha_2}^{\nu_2}\big)\beta_1\beta_2+(1-c)^2f_{\alpha_2}^{\nu_1}f_{\alpha_2}^{\nu_2}\beta_2^2\\
&\qquad-c^2\big(f_{\alpha_1}^{\nu_{12}}\big)^2\beta_1^2-2c(1-c)f_{\alpha_1}^{\nu_{12}}f_{\alpha_2}^{\nu_{12}}\beta_1\beta_2-(1-c)^2\big(f_{\alpha_2}^{\nu_{12}}\big)^2\beta_2^2\\
&=c(1-c)\big\{f_{\alpha_1}^{\nu_2}f_{\alpha_2}^{\nu_1}+f_{\alpha_1}^{\nu_1}f_{\alpha_2}^{\nu_2}-2f_{\alpha_1}^{\nu_{12}}f_{\alpha_2}^{\nu_{12}}\big\}\beta_1\beta_2\\
&=c(1-c)\bigg\{\Big(\frac{\alpha_1^2(h^2+\alpha_2^2)}{\alpha_2^2(h^2+\alpha_1^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}+\Big(\frac{\alpha_2^2(h^2+\alpha_1^2)}{\alpha_1^2(h^2+\alpha_2^2)}\Big)^{\frac{\nu_2-\nu_1}{2}}-2\bigg\}f_{\alpha_1}^{\nu_{12}}f_{\alpha_2}^{\nu_{12}}\,\beta_1\beta_2,
\end{aligned}$$
since $f_{\alpha_1}^{\nu_1}f_{\alpha_1}^{\nu_2}=\big(f_{\alpha_1}^{\nu_{12}}\big)^2$ and $f_{\alpha_2}^{\nu_1}f_{\alpha_2}^{\nu_2}=\big(f_{\alpha_2}^{\nu_{12}}\big)^2$ when $\nu_{12}=\frac{\nu_1+\nu_2}{2}$.
With $\beta_1\beta_2\ge0$ and $f_{\alpha_1}^{\nu_{12}}f_{\alpha_2}^{\nu_{12}}\ge0$, letting $h\to\infty$ in the braces yields $\big(\frac{\alpha_1^2}{\alpha_2^2}\big)^{\frac{\nu_2-\nu_1}{2}}+\big(\frac{\alpha_2^2}{\alpha_1^2}\big)^{\frac{\nu_2-\nu_1}{2}}-2$, which is greater than zero, so $c\in(0,1)$. Moreover, $c\in(0,1)$ satisfies the constraint in inequality (13). Finally, the nonnegative definiteness of $F(\mathbf{h})$ for any $h\ge0$ implies $c\in(0,1)$. □
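The role of Cramér's Theorem here can be illustrated with a direct numerical check of the $2\times2$ spectral determinant in a simplified common-range case ($\alpha_1=\alpha_2=\alpha_{12}$, $\nu_{12}=(\nu_1+\nu_2)/2$; all values hypothetical), where the colocated correlation bound $\rho_{12}^2\le c_{\nu_1}c_{\nu_2}/c_{\nu_{12}}^2$ is explicit.

```python
import numpy as np
from scipy.special import gamma

d = 2
nu1, nu2, alpha = 0.5, 1.5, 1.0
nu12 = (nu1 + nu2) / 2

cn = lambda nu: np.pi ** (-d / 2) * gamma(nu + d / 2) / gamma(nu)
# Matern spectral density, up to the variance, with common range alpha
f = lambda nu, h: cn(nu) * alpha ** (2 * nu) * (h ** 2 + alpha ** 2) ** (-nu - d / 2)

rho_max = np.sqrt(cn(nu1) * cn(nu2)) / cn(nu12)   # boundary colocated correlation
rho12 = 0.9 * rho_max                              # strictly inside the valid region

h = np.linspace(0.0, 30.0, 1001)
det = f(nu1, h) * f(nu2, h) - rho12 ** 2 * f(nu12, h) ** 2   # 2x2 determinant
```

With $\nu_{12}=(\nu_1+\nu_2)/2$ and a common range, the two products share the same power of $(h^2+\alpha^2)$, so the determinant keeps a constant sign determined entirely by $\rho_{12}^2$ relative to its boundary value.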
Proof of Theorem 5.
The proof idea and procedure are similar to those of Theorem 3. Following a similar setup, we consider the Fourier transform of Equation (15):
$$\begin{pmatrix}f_{11}(\mathbf{h})&f_{12}(\mathbf{h})\\ f_{21}(\mathbf{h})&f_{22}(\mathbf{h})\end{pmatrix}=\begin{pmatrix}c f_{\alpha_1}^{\nu_1}\beta_1+(1-c)f_{\alpha_1'}^{\nu_1'}\beta_2 & c f_{\alpha_{12}}^{\nu_{12}}\beta_1\rho_{12}+(1-c)f_{\alpha_{12}'}^{\nu_{12}'}\beta_2\rho_{12}'\\ c f_{\alpha_{12}}^{\nu_{12}}\beta_1\rho_{12}+(1-c)f_{\alpha_{12}'}^{\nu_{12}'}\beta_2\rho_{12}' & c f_{\alpha_2}^{\nu_2}\beta_1+(1-c)f_{\alpha_2'}^{\nu_2'}\beta_2\end{pmatrix},$$
where
$$f_\alpha^\nu=\big(h^2+\alpha^2\big)^{-\nu-\frac d2}\cdot\frac{\Gamma(\nu+\frac d2)\,\alpha^{2\nu}}{\Gamma(\nu)\,\pi^{d/2}}.$$
It remains to show that condition (16) is equivalent to $f_{11}(\mathbf{h})f_{22}(\mathbf{h})-f_{12}(\mathbf{h})f_{21}(\mathbf{h})\ge0$ for all $h\ge0$, based on Cramér's Theorem; $f_{11}(\mathbf{h})\ge0$ and $f_{22}(\mathbf{h})\ge0$ are guaranteed by conditions similar to (A8), with $\alpha_1$ and $\alpha_2$ replaced by $\alpha_i$ and $\alpha_i'$, respectively, following [13,14], $i=1,2$. To this end, note that
$$\begin{aligned}
f_{11}(\mathbf{h})f_{22}(\mathbf{h})-f_{12}(\mathbf{h})f_{21}(\mathbf{h})
&=c^2\beta_1^2\big(f_{\alpha_1}^{\nu_1}f_{\alpha_2}^{\nu_2}-(f_{\alpha_{12}}^{\nu_{12}})^2\rho_{12}^2\big)\\
&\quad+c(1-c)\beta_1\beta_2\big(f_{\alpha_1}^{\nu_1}f_{\alpha_2'}^{\nu_2'}+f_{\alpha_1'}^{\nu_1'}f_{\alpha_2}^{\nu_2}-2\rho_{12}\rho_{12}'f_{\alpha_{12}}^{\nu_{12}}f_{\alpha_{12}'}^{\nu_{12}'}\big)\\
&\quad+(1-c)^2\beta_2^2\big(f_{\alpha_1'}^{\nu_1'}f_{\alpha_2'}^{\nu_2'}-(f_{\alpha_{12}'}^{\nu_{12}'})^2\rho_{12}'^2\big)\\
&=c^2\beta_1^2H(\mathbf{h})+c(1-c)\beta_1\beta_2D(\mathbf{h})+(1-c)^2\beta_2^2\tilde H(\mathbf{h}).\qquad(A13)
\end{aligned}$$
Similar to the proof of Theorem 3, inequalities (8) and (9) imply that $H(\mathbf{h})\ge0$, $\tilde H(\mathbf{h})\ge0$, and $D(\mathbf{h})\ge0$.
When $D(\mathbf{h})=0$, it follows from Equation (A13) that $f_{11}(\mathbf{h})f_{22}(\mathbf{h})-f_{12}(\mathbf{h})f_{21}(\mathbf{h})\ge0$ holds naturally. It turns out that $D(\mathbf{h})=0$ whenever
$$\frac{\alpha_1^{2\nu_1}\alpha_2^{2\nu_2}}{(\alpha_1^2+h^2)^{\nu_1+d/2}(\alpha_2^2+h^2)^{\nu_2+d/2}}=\frac{\alpha_1'^{2\nu_1'}\alpha_2'^{2\nu_2'}}{(\alpha_1'^2+h^2)^{\nu_1'+d/2}(\alpha_2'^2+h^2)^{\nu_2'+d/2}}.$$
Hence the remaining case is $D(\mathbf{h})>0$, and the right-hand side of Equation (A13) is nonnegative for all $h\ge0$ if and only if
$$\inf_{h\ge0,\;D(\mathbf{h})>0}\frac{c^2\beta_1^2H(\mathbf{h})+(1-c)^2\beta_2^2\tilde H(\mathbf{h})}{\beta_1\beta_2\,D(\mathbf{h})}\ge c(c-1).\qquad\Box$$
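The decomposition into $H$, $D$, and $\tilde H$ used in (A13) is an exact algebraic identity, which can be verified numerically. The parameter values below are hypothetical and serve only to exercise the formula (primed quantities carry a trailing `p`).

```python
import numpy as np
from scipy.special import gamma

d = 2
cn = lambda nu: np.pi ** (-d / 2) * gamma(nu + d / 2) / gamma(nu)
f = lambda nu, a, h: cn(nu) * a ** (2 * nu) * (h ** 2 + a ** 2) ** (-nu - d / 2)

h = np.linspace(0.0, 20.0, 501)
c, b1, b2 = 0.6, 0.4, 0.3                       # mixing weight and beta_1, beta_2
nu1, nu2, nu12, a1, a2, a12, rho = 0.5, 1.5, 1.0, 1.0, 2.0, 1.5, 0.2
nu1p, nu2p, nu12p, a1p, a2p, a12p, rhop = 0.8, 1.2, 1.0, 1.2, 1.8, 1.4, 0.1

# entries of the 2x2 Fourier-transform matrix
f11 = c * f(nu1, a1, h) * b1 + (1 - c) * f(nu1p, a1p, h) * b2
f22 = c * f(nu2, a2, h) * b1 + (1 - c) * f(nu2p, a2p, h) * b2
f12 = c * f(nu12, a12, h) * b1 * rho + (1 - c) * f(nu12p, a12p, h) * b2 * rhop

H = f(nu1, a1, h) * f(nu2, a2, h) - rho ** 2 * f(nu12, a12, h) ** 2
Ht = f(nu1p, a1p, h) * f(nu2p, a2p, h) - rhop ** 2 * f(nu12p, a12p, h) ** 2
D = (f(nu1, a1, h) * f(nu2p, a2p, h) + f(nu1p, a1p, h) * f(nu2, a2, h)
     - 2 * rho * rhop * f(nu12, a12, h) * f(nu12p, a12p, h))

lhs = f11 * f22 - f12 ** 2
rhs = c ** 2 * b1 ** 2 * H + c * (1 - c) * b1 * b2 * D + (1 - c) ** 2 * b2 ** 2 * Ht
```

Expanding the two mixture products term by term reproduces exactly the grouping into the $c^2$, $c(1-c)$, and $(1-c)^2$ coefficients, so `lhs` and `rhs` agree to floating-point precision over the whole frequency grid.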

References

  1. Gaspari, G.; Cohn, S.E. Construction of correlation functions in two and three dimensions. Q. J. R. Meteorol. Soc. 1999, 125, 723–757. [Google Scholar] [CrossRef]
  2. Sain, S.R.; Furrer, R.; Cressie, N. A spatial analysis of multivariate output from regional climate models. Ann. Appl. Stat. 2011, 5, 150–175. [Google Scholar] [CrossRef]
  3. Tebaldi, C.; Lobell, D.B. Towards probabilistic projections of climate change impacts on global crop yields. Geophys. Res. Lett. 2008, 35, L08705. [Google Scholar] [CrossRef]
  4. Cressie, N.; Huang, H.-C. Classes of nonseparable, spatio-temporal stationary covariance functions. J. Am. Stat. Assoc. 1999, 94, 1330–1340. [Google Scholar] [CrossRef]
  5. Ma, C. Families of spatio-temporal stationary covariance models. J. Stat. Plann. Inference 2003, 116, 489–501. [Google Scholar] [CrossRef]
  6. Castruccio, S.; Stein, M.L. Global space-time models for climate ensembles. Ann. Appl. Stat. 2013, 7, 1593–1611. [Google Scholar] [CrossRef]
  7. Cressie, N.; Wikle, C.K. Statistics for Spatio-Temporal Data; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  8. Wan, Y.; Xu, M.; Huang, H.; Chen, S.X. A spatio-temporal model for the analysis and prediction of fine particulate matter concentration in Beijing. Environmetrics 2021, 32, e2648. [Google Scholar] [CrossRef]
  9. Xu, J.; Yang, W.; Han, B.; Wang, M.; Wang, Z.; Zhao, Z.; Bai, Z.; Vedal, S. An advanced spatio-temporal model for particulate matter and gaseous pollutants in Beijing, China. Atmos. Environ. 2019, 211, 120–127. [Google Scholar] [CrossRef]
  10. Medeiros, E.S.; de Lima, R.R.; Olinda, R.A.; Dantas, L.G.; Santos, C.A.C. Space–time kriging of precipitation: Modeling the large-scale variation with model GAMLSS. Water 2019, 11, 2368. [Google Scholar] [CrossRef]
  11. Storvik, G.; Frigessi, A.; Hirst, D. Stationary space–time Gaussian fields and their time autoregressive representation. Stat. Probab. Lett. 2002, 60, 263–269. [Google Scholar]
  12. Stein, M.L. Statistical methods for regular monitoring data. J. R. Stat. Soc. Ser. B 2005, 67, 667–687. [Google Scholar] [CrossRef]
  13. Demel, S.S.; Du, J. Spatio-temporal models for some data sets in continuous space and discrete time. Stat. Sin. 2015, 25, 81–98. [Google Scholar]
  14. Gneiting, T.; Kleiber, W.; Schlather, M. Matérn cross-covariance functions for multivariate random fields. J. Am. Stat. Assoc. 2010, 105, 1167–1177. [Google Scholar] [CrossRef]
  15. Sain, S.R.; Cressie, N. A spatial model for multivariate lattice data. J. Econom. 2007, 140, 226–259. [Google Scholar] [CrossRef]
  16. Zhu, X.; Huang, D.; Pan, R.; Wang, H. Multivariate spatial autoregressive model for large scale social networks. J. Econom. 2020, 215, 591–606. [Google Scholar] [CrossRef]
  17. Dörr, C.; Schlather, M. Covariance models for multivariate random fields resulting from pseudo cross-variograms. J. Multivar. Anal. 2023, 205, 105199. [Google Scholar] [CrossRef]
  18. Hosseinpour, M.; Sahebi, S.; Zamzuri, Z.; Yahaya, A.; Ismail, N. Predicting crash frequency for multi-vehicle collision types using multivariate Poisson-lognormal spatial model: A comparative analysis. Accid. Anal. Prev. 2018, 118, 277–288. [Google Scholar] [CrossRef]
  19. Somayasa, W.; Makulau; Pasolon, Y.B.; Sutiari, D.K. Universal kriging of multivariate spatial data under multivariate isotropic power type variogram model. In Proceedings of the 7th International Conference on Mathematics—Pure, Applied and Computation (ICoMPAC 2020), Surabaya, Indonesia, 24 October 2020. [Google Scholar]
  20. Krupskii, P.; Genton, M.G. A copula model for non-Gaussian multivariate spatial data. J. Multivar. Anal. 2019, 169, 264–277. [Google Scholar] [CrossRef]
  21. Gneiting, T. Strictly and non-strictly positive definite functions on spheres. Bernoulli 2013, 19, 1327–1349. [Google Scholar] [CrossRef]
  22. Ma, C. Stationary and isotropic vector random fields on spheres. Math. Geosci. 2012, 44, 765–778. [Google Scholar] [CrossRef]
  23. Du, J.; Ma, C.; Li, Y. Isotropic variogram matrix functions on spheres. Math. Geosci. 2013, 45, 341–357. [Google Scholar] [CrossRef]
  24. Ma, C. Spatio-temporal variograms and covariance models. Adv. Appl. Probab. 2005, 37, 706–725. [Google Scholar] [CrossRef]
  25. Du, J.; Ma, C. Spherically invariant vector random fields in space and time. IEEE Trans. Signal Process. 2011, 59, 5921–5929. [Google Scholar] [CrossRef]
  26. Cressie, N. Statistics for Spatial Data, rev. ed.; Wiley: New York, NY, USA, 1993. [Google Scholar]
  27. Gneiting, T. Nonseparable, stationary covariance functions for space-time data. J. Am. Stat. Assoc. 2002, 97, 590–600. [Google Scholar] [CrossRef]
  28. Gneiting, T.; Genton, M.G.; Guttorp, P. Geostatistical space-time models, stationarity, separability, and full symmetry. Monogr. Stat. Appl. Probab. 2006, 107, 151–174. [Google Scholar]
  29. Ma, C. Vector random fields with second-order moments or second-order increments. Stoch. Anal. Appl. 2011, 29, 197–215. [Google Scholar] [CrossRef]
  30. Stein, M.L. Interpolation of Spatial Data: Some Theory for Kriging; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  31. Wackernagel, H. Multivariate Geostatistics, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
Figure 1. ACFs of maximum temperature in Kansas counties.
Figure 2. ACFs of minimum temperature in Kansas counties.
Figure 3. Empirical temperature space–time correlations and fitted models at time lag 0 in Kansas.
Figure 4. Empirical temperature space–time correlations and fitted models at time lag 1 in Kansas.
Figure 5. Empirical temperature space–time correlations at time lag 2 in Kansas.
Table 1. Kansas maximum temperature RMSE statistics.

Measure | PMM | SMM | Cauchy | Time Series
% Stations w/ Lowest RMSE | 93.3% | 4.8% | 0% | 1.9%
Avg. RMSE at All Stations | 3.887092 | 4.686701 | 3.938303 | 3.914282
95% Data Interval | [3.81, 3.96] | [4.64, 4.73] | [3.86, 4.02] | [3.84, 3.99]
Te, R.; Du, J. Multivariate Modeling of Some Datasets in Continuous Space and Discrete Time. Entropy 2025, 27, 837. https://doi.org/10.3390/e27080837