Article

A New Bivariate INAR(1) Model with Time-Dependent Innovation Vectors

Huaping Chen 1, Fukang Zhu 2 and Xiufang Liu 3
1 School of Mathematics and Statistics, Henan University, Kaifeng 475004, China
2 School of Mathematics, Jilin University, Changchun 130012, China
3 College of Mathematics, Taiyuan University of Technology, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Stats 2022, 5(3), 819-840; https://doi.org/10.3390/stats5030048
Submission received: 11 July 2022 / Revised: 15 August 2022 / Accepted: 15 August 2022 / Published: 19 August 2022
(This article belongs to the Section Time Series Analysis)

Abstract

Recently, there has been a growing interest in integer-valued time series models, especially in multivariate models. Motivated by the diversity of infinite-patch metapopulation models, we propose an extension of the popular bivariate INAR(1) model, whose innovation vector is assumed to be time-dependent in the sense that the mean of the innovation vector is linearly increased by the previous population size. We discuss the stationarity and ergodicity of the observed process and its subprocesses. We consider the conditional maximum likelihood estimation of the parameters of interest and establish its large-sample properties. The finite-sample performance of the estimator is assessed via simulations. Applications to crime data illustrate the model.

1. Introduction

Bivariate count data occur in many contexts, often as the counts of two events, objects or individuals during a certain period of time. For example, such counts occur in epidemiology when two kinds of related diseases are examined, in criminology when two kinds of crimes are committed, in business when the volume of sales of two correlated products are observed or in manufacturing when two similar products are produced.
In real applications, the observed time series data are often discrete, over-dispersed (the empirical variance is greater than the empirical mean) and exhibit other features such as time dependence. Many univariate models have been proposed to deal with integer-valued time series data based on the univariate binomial thinning operator "$\circ$", which was proposed by Steutel and van Harn [1]:
\[ \alpha \circ X = \sum_{i=1}^{X} W_i, \tag{1} \]
where $X$ is a non-negative integer-valued random variable and the $W_i$ are i.i.d. Bernoulli random variables with $P(W_i = 1) = 1 - P(W_i = 0) = \alpha$. The INAR(1) model [2], the BAR(1) model [3], the INAR(p) model [4], the PDINAR(1) model [5] and the BARIO model [6] are very popular in analyzing non-negative integer-valued time series; see Weiß [7], Scotto et al. [8] and Davis et al. [9] for recent reviews on this topic. Motivated by the infinite-patch metapopulation models discussed in Buckley and Pollett [10], Weiß [11] proposed an extension of the popular Poisson INAR(1) model, which is characterized by time-dependent innovations, i.e., the mean of the innovation is linearly increased by the previous population size. An important advantage of this model is that it gives a reasonable interpretation for immigration, which becomes more attractive if the current population is large; see Weiß [11] for an application to iceberg order data.
Univariate models are extensively investigated in the literature, but relatively few multivariate models, especially bivariate versions, have been studied in detail. Franke and Rao [12] proposed a multivariate INAR(1) model, which was generalized to the p-order case by Latour [13]. Pedeli and Karlis [14] discussed a tractable bivariate INAR(1) model, which can be used to deal with bivariate count data with equi-dispersion and over-dispersion, but with limited flexibility. See Pedeli and Karlis [15] for the estimation of the BINAR model and Pedeli and Karlis [16] for a further discussion of the properties of the multivariate INAR(1) model. Based on hierarchical dynamic models, Ravishanker et al. [17] described a Bayesian framework for estimation and prediction for multivariate time series of counts. Popović [18] proposed a bivariate INAR(1) model with random coefficients based on different binomial thinning operators. The above models assume the innovations of their marginal models are independent and identically distributed. Based on a finite range of counts, Scotto et al. [19] considered density-dependent bivariate binomial autoregressive models by using a state-dependent thinning concept. Li et al. [20] proposed a bivariate random coefficient INAR(1) model with asymmetric Hermite innovations.
Inspired by Weiß [11], we aim to provide a bivariate INAR model for analyzing bivariate time series with time dependence and cross-correlation. The first contribution is a workable method to capture the time-dependence trend by incorporating past information into the distribution of the innovation vector, which in turn induces cross-correlation between the two entries of the innovation vector. The second contribution is that the new model allows not only autocorrelation but also cross-correlation. The third contribution is that this paper establishes the stationarity and ergodicity of the extended bivariate INAR process and its two subprocesses, which is important for deriving the consistency and asymptotic normality of the CML estimators.
The remainder of the paper is organized as follows. In Section 2, we first give brief reviews of the bivariate Poisson distribution and the bivariate binomial thinning operator, based on which we give the definition of the new bivariate INAR(1) model. Conditional maximum likelihood (CML) estimation and the asymptotic properties of the estimators of the unknown parameters are discussed in Section 3. A simulation study and two real data examples that show the effectiveness of the new model are given in Section 4 and Section 5, respectively. Conclusions are drawn in Section 6.

2. A New Bivariate INAR(1) Model

For readability, we first give a brief review of the bivariate Poisson distribution.
Definition 1.
If the joint probability mass function (pmf) of $(X, Y)$ satisfies
\[ P(X = x, Y = y) = e^{-(\lambda_1 + \lambda_2 - \phi)} \frac{(\lambda_1 - \phi)^x}{x!} \frac{(\lambda_2 - \phi)^y}{y!} \sum_{i=0}^{\min(x,y)} \binom{x}{i} \binom{y}{i} i! \left( \frac{\phi}{(\lambda_1 - \phi)(\lambda_2 - \phi)} \right)^i, \tag{2} \]
where $\lambda_1, \lambda_2 > 0$ and $\phi \in (0, \min(\lambda_1, \lambda_2))$, then $(X, Y)$ is called a bivariate Poisson random vector with parameters $(\lambda_1, \lambda_2, \phi)$, i.e., $(X, Y) \sim \mathrm{BP}(\lambda_1, \lambda_2, \phi)$.
From Kocherlakota and Kocherlakota [21], if $(X, Y)$ follows $\mathrm{BP}(\lambda_1, \lambda_2, \phi)$, there must exist three mutually independent random variables $Z_1, Z_2, Z_3$ such that $X = Z_1 + Z_3$ and $Y = Z_2 + Z_3$, where $Z_1$, $Z_2$ and $Z_3$ follow $\mathrm{Poisson}(\lambda_1 - \phi)$, $\mathrm{Poisson}(\lambda_2 - \phi)$ and $\mathrm{Poisson}(\phi)$, respectively. It follows that $\mathrm{Cov}(X, Y) = \phi$. In addition, $P(X = x, Y = y)$, given in (2), is continuous and differentiable. For convenience, we denote $f(x, y, \lambda_1, \lambda_2, \phi) = P(X = x, Y = y)$. By using Lemma A3 in Li et al. [20], we obtain that
\[ \frac{\partial f(x, y, \lambda_1, \lambda_2, \phi)}{\partial \lambda_1} = f(x-1, y, \lambda_1, \lambda_2, \phi) - f(x, y, \lambda_1, \lambda_2, \phi), \tag{3} \]
\[ \frac{\partial f(x, y, \lambda_1, \lambda_2, \phi)}{\partial \lambda_2} = f(x, y-1, \lambda_1, \lambda_2, \phi) - f(x, y, \lambda_1, \lambda_2, \phi), \tag{4} \]
\[ \frac{\partial f(x, y, \lambda_1, \lambda_2, \phi)}{\partial \phi} = f(x, y, \lambda_1, \lambda_2, \phi) - f(x-1, y, \lambda_1, \lambda_2, \phi) - f(x, y-1, \lambda_1, \lambda_2, \phi) + f(x-1, y-1, \lambda_1, \lambda_2, \phi). \tag{5} \]
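For concreteness, the following R sketch evaluates the pmf (2) directly; the helper name dbpois and its interface are ours (the paper provides no code), and the sketch assumes $0 < \phi < \min(\lambda_1, \lambda_2)$.

```r
# Sketch of the bivariate Poisson pmf BP(lambda1, lambda2, phi) in (2);
# valid for 0 < phi < min(lambda1, lambda2).
dbpois <- function(x, y, lambda1, lambda2, phi) {
  if (x < 0 || y < 0) return(0)          # support is the non-negative integers
  i <- 0:min(x, y)
  s <- sum(choose(x, i) * choose(y, i) * factorial(i) *
           (phi / ((lambda1 - phi) * (lambda2 - phi)))^i)
  exp(-(lambda1 + lambda2 - phi)) * (lambda1 - phi)^x / factorial(x) *
    (lambda2 - phi)^y / factorial(y) * s
}
```

As a quick check, for $\phi$ close to 0 only the $i = 0$ term survives and the pmf factorizes into two independent Poisson pmfs, in line with the $Z_1, Z_2, Z_3$ representation above.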
Applying the univariate binomial thinning operator given in (1) to the bivariate case with $X = (X_1, X_2)^\top$ leads to the bivariate binomial thinning operator:
\[ A \circ X = \begin{pmatrix} \alpha_{11} \circ X_1 + \alpha_{12} \circ X_2 \\ \alpha_{21} \circ X_1 + \alpha_{22} \circ X_2 \end{pmatrix} \quad \text{with } A = (\alpha_{ij})_{2 \times 2}, \]
where $\alpha_{ij} \in (0, 1)$, $i, j = 1, 2$, $X_1$ and $X_2$ are non-negative integer-valued random variables, and all the thinnings are performed independently of each other.
By calculation, $E(A \circ X) = A E(X)$. Denoting by $V$ the $2 \times 2$ matrix of the variances of the Bernoulli counting variables, with $(V)_{ij} = \alpha_{ij}(1 - \alpha_{ij})$, $i, j = 1, 2$, we obtain that $E((A \circ X)(A \circ X)^\top) = A E(X X^\top) A^\top + \mathrm{diag}(V E(X))$. Furthermore, if all the counting series of $A \circ X$ and $B \circ Y$ are independent, $E((A \circ X)(B \circ Y)^\top) = A E(X Y^\top) B^\top$.
In the following, we give the definition of the new bivariate INAR(1) model, which not only retains the properties of the models defined by Pedeli and Karlis [14,16] but also allows the innovation vectors $\{\epsilon_t\}$ to be time-dependent.
Definition 2.
Let $X_t = (X_{1t}, X_{2t})^\top$ be a non-negative integer-valued bivariate random vector. If the process $\{X_t\}$ satisfies
\[ X_t = A \circ X_{t-1} + \epsilon_t, \quad t \in \mathbb{Z}, \tag{6} \]
then $\{X_t\}$ is said to follow the extended bivariate INAR(1) process, where $A = (\alpha_{ij})_{2 \times 2}$, $0 < \alpha_{ij} < 1$ for any $i, j = 1, 2$, and $\epsilon_t = (\epsilon_{1t}, \epsilon_{2t})^\top \sim \mathrm{BP}(\lambda_{1t}, \lambda_{2t}, \phi)$ with $(\lambda_{1t}, \lambda_{2t})^\top = B X_{t-1} + C$, $B = (b_{ij})_{2 \times 2}$, $C = (c_1, c_2)^\top$, $0 < b_{ij} < 1$, $c_i > 0$, $i, j = 1, 2$.
For simplicity, we denote the new model as the EBINAR(1) model. It is easy to see that the $i$th component equation of model (6) is given by:
\[ X_{it} = \alpha_{i1} \circ X_{1,t-1} + \alpha_{i2} \circ X_{2,t-1} + \epsilon_{it}, \quad i = 1, 2. \tag{7} \]
Notice that the model given by (7) is similar to the one discussed in Weiß [11]; the main difference is that $X_{it}$ involves two parallel groups of survivors, from $X_{1,t-1}$ and $X_{2,t-1}$. The EBINAR(1) process $\{X_t, t \in \mathbb{Z}\}$ has two parts: the first part consists of the survivors of the elements of the system at the preceding time $t-1$, namely $A \circ X_{t-1}$; the other part is the time-dependent innovation vector $\epsilon_t$, whose mean is linearly increased by the previous population size.
Remark 1. (1). If both $A$ and $B$ are diagonal matrices, the component equation given in (7) reduces to the one discussed by Weiß [11].
(2). If $A$ is diagonal and $B = 0$, model (6) becomes the one discussed in Pedeli and Karlis [14]; it is worth mentioning that the autoregression matrix in Pedeli and Karlis [14] is diagonal, so it induces no cross-correlation in the counts.
(3). If $A$ is non-diagonal and $B = 0$, model (6) becomes the one discussed in Pedeli and Karlis [16], which accounts for cross-correlation in the counts; however, the innovations of its marginal models remain independent and identically distributed, so that the time dependence cannot be captured.
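To make the data-generating mechanism concrete, here is a minimal R simulation sketch of (6); the code and function names are ours (the paper only states in Section 4 that R is used). rbpois draws from $\mathrm{BP}(\lambda_1, \lambda_2, \phi)$ via the trivariate-reduction representation above; since $\lambda_{it} \geq c_i$, requiring $\phi < \min(c_1, c_2)$ keeps the BP parameters valid at every step.

```r
# One draw from BP(lambda1, lambda2, phi) via Z1 + Z3, Z2 + Z3.
rbpois <- function(lambda1, lambda2, phi) {
  z <- rpois(3, c(lambda1 - phi, lambda2 - phi, phi))
  c(z[1] + z[3], z[2] + z[3])
}

# Simulate Tlen steps of the EBINAR(1) process in Definition 2.
simulate_ebinar1 <- function(Tlen, A, B, C, phi, x0 = c(0, 0)) {
  X <- matrix(0, nrow = Tlen + 1, ncol = 2)
  X[1, ] <- x0
  for (t in 1:Tlen) {
    lam <- as.vector(B %*% X[t, ] + C)   # (lambda_1t, lambda_2t) = B X_{t-1} + C
    surv <- c(rbinom(1, X[t, 1], A[1, 1]) + rbinom(1, X[t, 2], A[1, 2]),
              rbinom(1, X[t, 1], A[2, 1]) + rbinom(1, X[t, 2], A[2, 2]))
    X[t + 1, ] <- surv + rbpois(lam[1], lam[2], phi)  # survivors + innovation
  }
  X[-1, , drop = FALSE]
}
```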
To derive the pmf of the EBINAR(1) process, we first denote by $h(k, m_1, m_2, \alpha_1, \alpha_2) := P(X + Y = k)$, $k \geq 0$, the pmf of the convolution of $X \sim \mathrm{Bin}(m_1, \alpha_1)$ and $Y \sim \mathrm{Bin}(m_2, \alpha_2)$. By calculation, we obtain that
\[ h(k, m_1, m_2, \alpha_1, \alpha_2) = \sum_{j=0}^{k} P(X = j \mid m_1, \alpha_1) P(Y = k - j \mid m_2, \alpha_2). \tag{8} \]
Furthermore, by using Lemma A3 in Li et al. [20],
\[ \frac{\partial h(k, m_1, m_2, \alpha_1, \alpha_2)}{\partial \alpha_1} = m_1 \left[ h(k-1, m_1 - 1, m_2, \alpha_1, \alpha_2) - h(k, m_1 - 1, m_2, \alpha_1, \alpha_2) \right], \tag{9} \]
\[ \frac{\partial h(k, m_1, m_2, \alpha_1, \alpha_2)}{\partial \alpha_2} = m_2 \left[ h(k-1, m_1, m_2 - 1, \alpha_1, \alpha_2) - h(k, m_1, m_2 - 1, \alpha_1, \alpha_2) \right]. \tag{10} \]
Second, we denote $\varsigma = (\varsigma_1, \varsigma_2)^\top$, $\vartheta = (\vartheta_1, \vartheta_2)^\top$, $k = (k_1, k_2)^\top$ and let $x = \varsigma_1 - k_1$ and $y = \varsigma_2 - k_2$. Then, the conditional probability distribution of the EBINAR(1) process takes the following form:
\[
\begin{aligned}
P(\varsigma \mid \vartheta) := P(X_t = \varsigma \mid X_{t-1} = \vartheta)
&= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} P(A \circ X_{t-1} = k) \, P(\epsilon_t = \varsigma - k) \\
&= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} P(\alpha_{11} \circ X_{1,t-1} + \alpha_{12} \circ X_{2,t-1} = k_1) \, P(\alpha_{21} \circ X_{1,t-1} + \alpha_{22} \circ X_{2,t-1} = k_2) \, P(\epsilon_{1t} = \varsigma_1 - k_1, \epsilon_{2t} = \varsigma_2 - k_2) \\
&= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h(k_1, \vartheta_1, \vartheta_2, \alpha_{11}, \alpha_{12}) \, h(k_2, \vartheta_1, \vartheta_2, \alpha_{21}, \alpha_{22}) \, f(x, y, \lambda_{1t}, \lambda_{2t}, \phi), \tag{11}
\end{aligned}
\]
where $g_1 = \min(\varsigma_1, \vartheta_1)$, $g_2 = \min(\varsigma_2, \vartheta_2)$,
\[ f(x, y, \lambda_{1t}, \lambda_{2t}, \phi) = P(\epsilon_{1t} = x, \epsilon_{2t} = y) \overset{(2)}{=} e^{-(\lambda_{1t} + \lambda_{2t} - \phi)} \frac{(\lambda_{1t} - \phi)^x}{x!} \frac{(\lambda_{2t} - \phi)^y}{y!} \sum_{i=0}^{\min(x,y)} \binom{x}{i} \binom{y}{i} i! \left( \frac{\phi}{(\lambda_{1t} - \phi)(\lambda_{2t} - \phi)} \right)^i \]
with $\lambda_{1t} = b_{11} X_{1,t-1} + b_{12} X_{2,t-1} + c_1$ and $\lambda_{2t} = b_{21} X_{1,t-1} + b_{22} X_{2,t-1} + c_2$.
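The transition probability (11) is straightforward to evaluate numerically. The sketch below uses our own helper names (hconv for the binomial convolution (8), trans_prob for (11), building on the dbpois sketch above) and follows the truncation limits $g_1$, $g_2$ stated in the text.

```r
# Binomial convolution pmf h in (8): P(X + Y = k), X ~ Bin(m1, a1), Y ~ Bin(m2, a2).
hconv <- function(k, m1, m2, a1, a2) {
  if (k < 0) return(0)
  j <- 0:k
  sum(dbinom(j, m1, a1) * dbinom(k - j, m2, a2))
}

# Transition probability (11); s = x_t, v = x_{t-1}.
trans_prob <- function(s, v, A, B, C, phi) {
  lam <- as.vector(B %*% v + C)          # (lambda_1t, lambda_2t)
  total <- 0
  for (k1 in 0:min(s[1], v[1])) {        # g1 as in the text
    for (k2 in 0:min(s[2], v[2])) {      # g2 as in the text
      total <- total +
        hconv(k1, v[1], v[2], A[1, 1], A[1, 2]) *
        hconv(k2, v[1], v[2], A[2, 1], A[2, 2]) *
        dbpois(s[1] - k1, s[2] - k2, lam[1], lam[2], phi)
    }
  }
  total
}
```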
If the largest eigenvalue of the non-negative matrix $A$ is less than 1, then the bivariate marginal distribution of model (6) can be expressed in terms of the bivariate innovation vectors:
\[ X_t \overset{d}{=} A^k \circ X_{t-k} + \sum_{j=0}^{k-1} A^j \circ \epsilon_{t-j} = A^t \circ X_0 + \sum_{j=0}^{t-1} A^j \circ \epsilon_{t-j}, \quad k = 1, 2, \ldots, t, \tag{12} \]
where $A^0$ is the identity matrix and $X_0$ is the initial value of the process.
In what follows, we first discuss the stationarity and ergodicity of processes (6) and (7), respectively. Second, we obtain the first two moments of $\{X_t\}$ and $\{\epsilon_t\}$, respectively. Third, we give a necessary and sufficient condition for the existence of $E(X_{1t})^k$ and $E(X_{2t})^k$ for any fixed positive integer $k$. These properties are necessary for deriving the asymptotic properties of the estimators.
Theorem 1.
Let $\{X_t = (X_{1t}, X_{2t})^\top\}$ follow (6) and $\Gamma = A + B = (\gamma_{ij})_{i,j=1,2}$ with $0 < \gamma_{ij} < 1$. If the largest eigenvalue of $\Gamma$ is less than 1, there exists a strictly stationary and ergodic process satisfying (7).
Proof. 
Let $W_{t,k}$, $V_{t,l}$ and $\delta_{1t}$ be independent of each other, each of them independent and identically distributed with $W_{t,k} \sim \mathrm{Bin}(1, \alpha_{11}) * \mathrm{Poi}(b_{11})$, $V_{t,l} \sim \mathrm{Bin}(1, \alpha_{12}) * \mathrm{Poi}(b_{12})$ and $\delta_{1t} \sim \mathrm{Poi}(c_1)$, where $\mathrm{Bin}(1, \alpha_{1j}) * \mathrm{Poi}(b_{1j})$ means the convolution of the distributions $\mathrm{Bin}(1, \alpha_{1j})$ and $\mathrm{Poi}(b_{1j})$, $k = 1, 2, \ldots, X_{1,t-1}$ and $l = 1, 2, \ldots, X_{2,t-1}$; see Weiß [11] for details. According to the concept of bivariate binomial thinning and the additivity of the binomial and Poisson distributions, (7) can be rewritten as
\[ X_{1t} \overset{d}{=} W_{t,1} + \cdots + W_{t,X_{1,t-1}} + V_{t,1} + \cdots + V_{t,X_{2,t-1}} + \delta_{1t}. \]
Since $\gamma = \max(\alpha_{ij} + b_{ij}) < 1$, we have $E(W_{t,k}) = \alpha_{11} + b_{11} < 1$ and $E(V_{t,l}) = \alpha_{12} + b_{12} < 1$. Denote $H(n) = \sum_{k=1}^{n} \frac{1}{k}$ and $H(0) = 0$; then $E(H(\delta_{1t})) = \sum_{k=1}^{+\infty} \frac{1}{k} P(\delta_{1t} \geq k)$. In addition, $H(\delta_{1t}) \leq \delta_{1t}$ and $E(\delta_{1t}) = c_1 < \infty$; thus, $E(H(\delta_{1t})) \leq E(\delta_{1t}) < \infty$. Therefore, the theorem of Heathcote [22] applies. Hence, there exists a stationary marginal distribution of (7), i.e., there exists a strictly stationary process satisfying (7). A similar conclusion holds for $X_{2t}$. □
To prove the stationarity of the EBINAR(1) process, we first introduce a sequence of random variables $\{X_t^{(n)}\}$ that can be considered as approximations to $\{X_t\}$, with
\[ X_t^{(n)} = \begin{cases} 0, & n < 0, \\ R_t, & n = 0, \\ A \circ X_{t-1}^{(n-1)} + B \circ X_{t-1}^{(n-1)} + R_t, & n > 0, \end{cases} \tag{13} \]
where the largest eigenvalues of the non-negative matrices $A$, $B$ and $\Gamma := A + B$ are less than 1, the non-negative matrices $A$, $B$ and $I - \Gamma$ are invertible, $R_t = (R_{1t}, R_{2t})^\top$, $R_{1t}$ is independent of $R_{2t}$, and $R_{it}$ follows a Poisson distribution with parameter $c_i$, $i = 1, 2$.
Theorem 2.
If the conditions of Theorem 1 hold, there exists a strictly stationary process satisfying (6).
Proof. 
Because
\[ \begin{pmatrix} X_1^{(0)} \\ \vdots \\ X_k^{(0)} \end{pmatrix} = \begin{pmatrix} R_1 \\ \vdots \\ R_k \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} X_{h+1}^{(0)} \\ \vdots \\ X_{h+k}^{(0)} \end{pmatrix} = \begin{pmatrix} R_{h+1} \\ \vdots \\ R_{h+k} \end{pmatrix}, \]
and $(R_1, \ldots, R_k)$ and $(R_{h+1}, \ldots, R_{h+k})$ are identically distributed, the two left-hand sides are identically distributed. Thus, $\{X_t^{(0)}\}$ is strictly stationary. Now, suppose $\{X_t^{(n)}\}$ is strictly stationary. Then,
\[ \begin{pmatrix} X_1^{(n+1)} \\ \vdots \\ X_k^{(n+1)} \end{pmatrix} = \begin{pmatrix} R_1 \\ \vdots \\ R_k \end{pmatrix} + \begin{pmatrix} A & & 0 \\ & \ddots & \\ 0 & & A \end{pmatrix} \circ \begin{pmatrix} X_0^{(n)} \\ \vdots \\ X_{k-1}^{(n)} \end{pmatrix} + \begin{pmatrix} B & & 0 \\ & \ddots & \\ 0 & & B \end{pmatrix} \circ \begin{pmatrix} X_0^{(n)} \\ \vdots \\ X_{k-1}^{(n)} \end{pmatrix} \tag{14} \]
and
\[ \begin{pmatrix} X_{h+1}^{(n+1)} \\ \vdots \\ X_{h+k}^{(n+1)} \end{pmatrix} = \begin{pmatrix} R_{h+1} \\ \vdots \\ R_{h+k} \end{pmatrix} + \begin{pmatrix} A & & 0 \\ & \ddots & \\ 0 & & A \end{pmatrix} \circ \begin{pmatrix} X_h^{(n)} \\ \vdots \\ X_{h+k-1}^{(n)} \end{pmatrix} + \begin{pmatrix} B & & 0 \\ & \ddots & \\ 0 & & B \end{pmatrix} \circ \begin{pmatrix} X_h^{(n)} \\ \vdots \\ X_{h+k-1}^{(n)} \end{pmatrix}. \tag{15} \]
Because the joint distributions of the variables involved on the right-hand sides of (14) and (15) are identical, $(X_1^{(n+1)}, \ldots, X_k^{(n+1)})$ and $(X_{h+1}^{(n+1)}, \ldots, X_{h+k}^{(n+1)})$ on the left-hand sides are identically distributed. Hence, by induction, the process $\{X_t^{(n)}\}$ is strictly stationary for every $n$, and therefore $\{X_t\}$ is a strictly stationary process. □
Theorem 3.
If the conditions of Theorem 1 hold, $\{X_t\}$ is a recurrent and ergodic Markov chain.
Proof. 
First, we prove that $\{X_t\}$ is recurrent. Because, given $X_0 = 0$, $X_t \overset{d}{=} \sum_{j=0}^{t-1} A^j \circ \epsilon_{t-j}$, we have $P_{0,0}^t = P(X_t = 0 \mid X_0 = 0) = \prod_{j=0}^{t-1} P(A^j \circ \epsilon_{t-j} = 0)$. Let $A^j = (p_{il})$ and $\gamma = \max(p_{il})$, $i, l = 1, 2$. Then, we obtain that
\[
\begin{aligned}
P(A^j \circ \epsilon_{t-j} \neq 0) &= P(\{p_{11} \circ \epsilon_{1,t-j} + p_{12} \circ \epsilon_{2,t-j} \geq 1\} \cup \{p_{21} \circ \epsilon_{1,t-j} + p_{22} \circ \epsilon_{2,t-j} \geq 1\}) \\
&\leq P(p_{11} \circ \epsilon_{1,t-j} + p_{12} \circ \epsilon_{2,t-j} \geq 1) + P(p_{21} \circ \epsilon_{1,t-j} + p_{22} \circ \epsilon_{2,t-j} \geq 1) \\
&\leq P(p_{11} \circ \epsilon_{1,t-j} \geq 1) + P(p_{12} \circ \epsilon_{2,t-j} \geq 1) + P(p_{21} \circ \epsilon_{1,t-j} \geq 1) + P(p_{22} \circ \epsilon_{2,t-j} \geq 1) \\
&\leq 2 \left[ P(\gamma^j \circ \epsilon_{1,t-j} \geq 1) + P(\gamma^j \circ \epsilon_{2,t-j} \geq 1) \right] \\
&\leq 2 \left[ E(\gamma^j \circ \epsilon_{1,t-j}) + E(\gamma^j \circ \epsilon_{2,t-j}) \right] = 2 \gamma^j (\mu_{\epsilon_1} + \mu_{\epsilon_2}).
\end{aligned}
\]
According to Theorem 2, there exists an $M > 0$ such that $\mu_{\epsilon_i} \leq M/4$, $i = 1, 2$. Then, we have $P(A^j \circ \epsilon_{t-j} = 0) \geq 1 - M \gamma^j$. Hence,
\[ P_{0,0}^t \geq \prod_{j=0}^{t-1} P(A^j \circ \epsilon_{t-j} = 0) \geq \prod_{j=0}^{t-1} (1 - M \gamma^j) = \exp\Big\{ \sum_{j=0}^{t-1} \log(1 - M \gamma^j) \Big\} \geq \exp\{ \log(1 - M \gamma^\tau)/(1 - \gamma) \} > 0, \quad t > \tau. \]
Therefore, $\lim_{t \to \infty} P_{0,0}^t > 0$. Thus, $\sum_{t=0}^{\infty} P_{0,0}^t = \infty$, i.e., 0 is a recurrent state.
Second, we illustrate the ergodicity. For all states $\varsigma, \vartheta, \kappa_{t-2}, \kappa_{t-3}, \ldots$, we have
\[ P(X_t = \varsigma \mid X_{t-1} = \vartheta, X_{t-2} = \kappa_{t-2}, \ldots) = P(X_t = \varsigma \mid X_{t-1} = \vartheta) = P(\vartheta, \varsigma), \]
where $P(\vartheta, \varsigma)$ denotes the transition probability from state $\vartheta$ to state $\varsigma$. Thus, $\{X_t\}$ is a homogeneous Markov chain. Since $\alpha_{ij}, b_{ij} \in (0, 1)$, we have $P(\epsilon_{1t} = \varsigma_1 - k_1, \epsilon_{2t} = \varsigma_2 - k_2) > 0$. Denote by $P_{\varsigma\vartheta}^n$ the $n$-step transition probability from state $\varsigma$ to state $\vartheta$. For a given $X_{t-1}$, the conditional probability function of the random vector $X_t$ is derived by:
\[ P(X_{1t} = \varsigma_1, X_{2t} = \varsigma_2 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2) = P(X_{1t} = \varsigma_1 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2) \, P(X_{2t} = \varsigma_2 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2, X_{1t} = \varsigma_1), \]
so $P_{\tau\upsilon}^1 > 0$ for all $\tau, \upsilon \in \mathbb{N}_0^2$, i.e., every state can be reached from any other state with positive probability in a finite number of steps. Hence, $\{X_t\}$ is irreducible. By (12), the $k$-step conditional probability $P_{0,0}^k$ is obtained as:
\[ P_{0,0}^k = P(X_t = 0 \mid X_{t-k} = 0) = P\Big( A^k \circ X_{t-k} + \sum_{j=0}^{k-1} A^j \circ \epsilon_{t-j} = 0 \,\Big|\, X_{t-k} = 0 \Big) = \underbrace{P(A^k \circ X_{t-k} = 0 \mid X_{t-k} = 0)}_{(\mathrm{V})} \, \underbrace{\prod_{j=0}^{k-1} P(A^j \circ \epsilon_{t-j} = 0)}_{(\mathrm{VI})}. \]
Note that the first factor (V) is positive, which can be obtained by a method similar to (11). Denoting $A^j = (p_{il})_{i,l=1,2}$, we have:
\[ P(A^j \circ \epsilon_{t-j} = 0) = P(p_{11} \circ \epsilon_{1,t-j} + p_{12} \circ \epsilon_{2,t-j} = 0, \; p_{21} \circ \epsilon_{1,t-j} + p_{22} \circ \epsilon_{2,t-j} = 0) = \sum_{k=0}^{\infty} \sum_{s=0}^{\infty} (1 - p_{11})^k (1 - p_{12})^s (1 - p_{21})^k (1 - p_{22})^s P(\epsilon_{1,t-j} = k, \epsilon_{2,t-j} = s) > 0; \]
thus, the second factor (VI) is also positive. Therefore, $P_{0,0}^k > 0$, i.e., $\{X_t\}$ is aperiodic. Hence, $\{X_t\}$ is an ergodic Markov chain. □
Note that $E(X_t^{(n)}) = (I - A - B)^{-1} C < \infty$ and
\[ E(X_t^{(n)} X_t^{(n)\top}) = \Gamma E(X_t^{(n-1)} X_t^{(n-1)\top}) \Gamma^\top + \Psi = \cdots = \Gamma^n E(X_t^{(0)} X_t^{(0)\top}) (\Gamma^\top)^n + \Gamma^{n-1} \Psi (\Gamma^\top)^{n-1} + \cdots + \Gamma \Psi \Gamma^\top + \Psi, \]
where $\Psi$ involves the first moments of $X_t^{(n)}$ and $R_t$. Hence, the first two moments of $X_t^{(n)}$ are finite. Thus, $\{X_t^{(n)}\}$ is stationary and ergodic by Theorems 2 and 3 and Shumway and Stoffer [23].
Theorem 4.
If the conditions of Theorem 1 hold, the first two moments and the covariance matrix of $\{X_t\}$ exist and
(1). $E(X_t \mid X_{t-1}) = (A + B) X_{t-1} + C$;
(2). $E(X_t) = (I - A - B)^{-1} C$, if $(I - A - B)^{-1}$ exists, where $I$ denotes the identity matrix;
(3). $R(k) = \mathrm{Cov}(X_{t+k}, X_t) = (A + B)^k R(0)$, $k = 1, 2, \ldots$.
In addition, for $k = 0$, $R(0) = A R(0) A^\top + H^* + A R(0) B^\top + B R(0) A^\top + \Sigma$, where $H^* = \mathrm{diag}\big(\sum_{j=1}^{2} V_{ij} E(X_{j,t-1})\big)$, $V_{ij} = \alpha_{ij}(1 - \alpha_{ij})$ and $\Sigma = \mathrm{Cov}(\epsilon_t, \epsilon_t)$. Specifically, if $A$ and $B$ are diagonal matrices, $R(0) = (I - AA - 2AB)^{-1} (\Sigma + H^*)$.
Proof. 
(1) and (2) are easy to prove by the moment properties of $A \circ$, and we omit them. Here, we only give the proof of (3):
\[ R(k) = \mathrm{Cov}(A \circ X_{t+k-1} + \epsilon_{t+k}, X_t) = \mathrm{Cov}(A \circ X_{t+k-1}, X_t) + \mathrm{Cov}(\epsilon_{t+k}, X_t) = A \, \mathrm{Cov}(X_{t+k-1}, X_t) + \mathrm{Cov}(B X_{t+k-1} + C, X_t) = (A + B) \, \mathrm{Cov}(X_{t+k-1}, X_t) = (A + B) R(k-1) = \cdots = (A + B)^k R(0). \]
In fact, $\mathrm{Cov}(X_{t-1}, \epsilon_t) = \mathrm{Cov}(X_{t-1}, B X_{t-1} + C) = \mathrm{Cov}(X_{t-1}, X_{t-1}) B^\top$ and $\mathrm{Cov}(\epsilon_t, X_{t-1}) = \mathrm{Cov}(B X_{t-1} + C, X_{t-1}) = B \, \mathrm{Cov}(X_{t-1}, X_{t-1})$. Hence,
\[ R(0) = \mathrm{Cov}(X_t, X_t) = \mathrm{Cov}(A \circ X_{t-1} + \epsilon_t, A \circ X_{t-1} + \epsilon_t) = A \, \mathrm{Cov}(X_{t-1}, X_{t-1}) A^\top + H^* + A \, \mathrm{Cov}(X_{t-1}, X_{t-1}) B^\top + B \, \mathrm{Cov}(X_{t-1}, X_{t-1}) A^\top + \Sigma = A R(0) A^\top + H^* + A R(0) B^\top + B R(0) A^\top + \Sigma, \]
where $H^* = \mathrm{diag}\big(\sum_{j=1}^{2} V_{ij} E(X_{j,t-1})\big)$. Let $\lambda$, $\lambda_1$ and $\lambda_2$ be the largest eigenvalues of $AA + 2AB$, $A$ and $B$, respectively. If $A$ and $B$ are diagonal matrices,
\[ |\lambda| \leq |\lambda_1^2 + 2 \lambda_1 \lambda_2| = |\lambda_1 (\lambda_1 + \lambda_2) + \lambda_1 \lambda_2| \leq \lambda_1 |\lambda_1 + \lambda_2| + \lambda_2 |\lambda_1| \leq \lambda_1 + \lambda_2 < 1, \]
so $I - AA - 2AB$ is a nonsingular matrix. Hence, $R(0)$ is obtained. □
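As a quick sanity check of Theorem 4(2), the sample mean of a long path produced by the simulation sketch from earlier in this section should approximate $(I - A - B)^{-1} C$; the parameter values below are the non-diagonal combination used in Section 4, and the code is ours.

```r
# Empirical check of Theorem 4(2): stationary mean vs. (I - A - B)^{-1} C.
set.seed(1)
A <- matrix(c(0.3, 0.1, 0.1, 0.1), 2, 2, byrow = TRUE)
B <- matrix(c(0.2, 0.1, 0.1, 0.3), 2, 2, byrow = TRUE)
C <- c(0.6, 0.6); phi <- 0.5               # phi < min(c1, c2) keeps BP valid
X <- simulate_ebinar1(50000, A, B, C, phi)
colMeans(X)                                # sample mean of the simulated path
solve(diag(2) - A - B) %*% C               # theoretical mean, approx. (1.846, 1.615)
```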
Theorem 5.
If the conditions of Theorem 1 hold, the first two moments and covariance matrices of $\{\epsilon_t\}$ exist and:
(1). $E(\epsilon_t \mid X_{t-1}) = B X_{t-1} + C$;
(2). $E(\epsilon_t) = (I - A)(I - A - B)^{-1} C$, if $(I - A - B)^{-1}$ exists;
(3). $R_\epsilon(k) = \mathrm{Cov}(\epsilon_{t+k}, \epsilon_t) = B (A + B)^k R(0) B^\top$, $k = 0, 1, 2, \ldots$.
Proof. 
(1) is easy to prove by the distribution of $\epsilon_t$. We only need to prove (2) and (3).
(2). $E(\epsilon_t) = B \mu_X + C = B (I - A - B)^{-1} C + (I - A - B)(I - A - B)^{-1} C = (I - A)(I - A - B)^{-1} C = (I - A) E(X_t)$ by the definition of $\epsilon_t$; alternatively, $E(\epsilon_t)$ can be obtained directly from (6).
(3). According to the construction of the EBINAR(1) model, we have:
\[ \mathrm{Cov}(X_t, \epsilon_t) = \mathrm{Cov}(A \circ X_{t-1}, B X_{t-1} + C) + \mathrm{Cov}(B X_{t-1} + C, B X_{t-1} + C) = A \, \mathrm{Cov}(X_{t-1}, X_{t-1}) B^\top + B \, \mathrm{Cov}(X_{t-1}, X_{t-1}) B^\top = (A + B) R(0) B^\top, \tag{16} \]
\[ \begin{aligned} R_\epsilon(k) &= \mathrm{Cov}(B X_{t+k-1} + C, \epsilon_t) = B \, \mathrm{Cov}(X_{t+k-1}, \epsilon_t) = B \, \mathrm{Cov}(A \circ X_{t+k-2} + \epsilon_{t+k-1}, \epsilon_t) \\ &= BA \, \mathrm{Cov}(X_{t+k-2}, \epsilon_t) + B \, \mathrm{Cov}(\epsilon_{t+k-1}, \epsilon_t) = BA \, \mathrm{Cov}(X_{t+k-2}, \epsilon_t) + B \, \mathrm{Cov}(B X_{t+k-2} + C, \epsilon_t) \\ &= BA \, \mathrm{Cov}(X_{t+k-2}, \epsilon_t) + B^2 \, \mathrm{Cov}(X_{t+k-2}, \epsilon_t) = B (A + B) \, \mathrm{Cov}(X_{t+k-2}, \epsilon_t) = \cdots = B (A + B)^{k-1} \, \mathrm{Cov}(X_t, \epsilon_t), \tag{17} \end{aligned} \]
and $R_\epsilon(k)$ is achieved by substituting (16) into (17), i.e., $R_\epsilon(k) = B (A + B)^k R(0) B^\top$. Note that $\mathrm{Cov}(\epsilon_t, \epsilon_t) = \mathrm{Cov}(B X_{t-1} + C, B X_{t-1} + C) = B R(0) B^\top$, i.e., the formula holds for $k = 0$. □
Theorem 6.
For any fixed positive integer $k$, $E(X_{it})^k < \infty$, $i = 1, 2$, holds if and only if $\gamma < 1$.
Proof. 
For convenience, let A and B be diagonal matrices.
Necessity. According to Lemma 2.1 of Silva and Oliveira [24], $E[(\alpha_{11} \circ X_{1,t-1})^i (\epsilon_{1t})^{k-i}] = \alpha_{11}^i b_{11}^{k-i} E(X_{1,t-1})^k + \psi_1$, where $\psi_1 = \psi_1(X_{1,t-1})$ involves the moments of $X_{1,t-1}$ of order at most $k - 1$ and $i = 0, 1, 2, \ldots, k$. Then,
\[ E(X_{1t})^k = E(\alpha_{11} \circ X_{1,t-1} + \epsilon_{1t})^k = \sum_{i=0}^{k} \binom{k}{i} E[(\alpha_{11} \circ X_{1,t-1})^i (\epsilon_{1t})^{k-i}] = \sum_{i=0}^{k} \binom{k}{i} \alpha_{11}^i b_{11}^{k-i} E(X_{1,t-1})^k + \psi_1 = (\alpha_{11} + b_{11})^k E(X_{1,t-1})^k + \psi_1. \tag{18} \]
Thus, $E(X_{1t})^k = \psi_1 / \big(1 - (\alpha_{11} + b_{11})^k\big)$ by (18). Hence, $1 - (\alpha_{11} + b_{11})^k > 0$ if $E(X_{1t})^k < \infty$, i.e., $\alpha_{11} + b_{11} < 1$. Similarly, $\alpha_{22} + b_{22} < 1$ if $E(X_{2t})^k < \infty$. Hence, $\gamma < 1$ if $E(X_{it})^k < \infty$, $i = 1, 2$.
Sufficiency. We know that $E(X_{it})^k < \infty$ holds for $k = 1, 2$ by Theorems 4 and 5. The sufficiency can be proved by induction with respect to $k$. Now suppose that $E(X_{it})^{k-1} < \infty$, $k \geq 3$. According to (13), we define
\[ X_{1t}^{(n)} = \begin{cases} 0, & n < 0, \\ \delta_{1t}, & n = 0, \\ \delta_{1t} + \sum_{j=1}^{X_{1,t-1}^{(n-1)}} W_{t,j}, & n > 0, \end{cases} \quad \text{and} \quad X_{2t}^{(n)} = \begin{cases} 0, & n < 0, \\ \delta_{2t}, & n = 0, \\ \delta_{2t} + \sum_{s=1}^{X_{2,t-1}^{(n-1)}} V_{t,s}, & n > 0, \end{cases} \]
where $W_{t,j}$, $\delta_{1t}$, $V_{t,s}$ and $\delta_{2t}$ are independent of each other and each of them is independent and identically distributed, i.e., $W_{t,j} \sim \mathrm{Bin}(1, \alpha_{11}) * \mathrm{Poi}(b_{11})$, $\delta_{1t} \sim \mathrm{Poi}(c_1)$, $V_{t,s} \sim \mathrm{Bin}(1, \alpha_{22}) * \mathrm{Poi}(b_{22})$ and $\delta_{2t} \sim \mathrm{Poi}(c_2)$. Using the univariate binomial thinning operator, $X_{1t}^{(n)}$ and $X_{2t}^{(n)}$ admit the representations:
\[ X_{1t}^{(n)} = \delta_{1t} + (\alpha_{11} \circ X_{1,t-1}^{(n-1)} + Z_{1t}), \tag{19} \]
\[ X_{2t}^{(n)} = \delta_{2t} + (\alpha_{22} \circ X_{2,t-1}^{(n-1)} + Z_{2t}), \tag{20} \]
where $Z_{1t} \sim \mathrm{Poi}(b_{11} X_{1,t-1}^{(n-1)})$ and $Z_{2t} \sim \mathrm{Poi}(b_{22} X_{2,t-1}^{(n-1)})$. It is easy to see that both $\{X_{1t}^{(n)}\}_{n \in \mathbb{N}}$ and $\{X_{2t}^{(n)}\}_{n \in \mathbb{N}}$ are non-decreasing. According to Lemma 2.1 of [24], we have:
\[ \begin{aligned} E(\alpha_{11} \circ X_{1,t-1}^{(n-1)} + Z_{1t})^k &= (\alpha_{11} + b_{11})^k E(X_{1,t-1}^{(n-1)})^k + \psi_2 \leq (\alpha_{11} + b_{11})^k E(X_{1,t-1}^{(n)})^k + \psi_4, \\ E(\alpha_{22} \circ X_{2,t-1}^{(n-1)} + Z_{2t})^k &= (\alpha_{22} + b_{22})^k E(X_{2,t-1}^{(n-1)})^k + \psi_3 \leq (\alpha_{22} + b_{22})^k E(X_{2,t-1}^{(n)})^k + \psi_5, \end{aligned} \]
where $\psi_2 = \psi_2(X_{1,t-1}^{(n-1)})$ and $\psi_3 = \psi_3(X_{2,t-1}^{(n-1)})$ involve the moments of $X_{1,t-1}^{(n-1)}$ and $X_{2,t-1}^{(n-1)}$ of order at most $k - 1$, and $\psi_4 = \psi_4(X_{1,t-1}^{(n)})$ and $\psi_5 = \psi_5(X_{2,t-1}^{(n)})$ involve the moments of $X_{1,t-1}^{(n)}$ and $X_{2,t-1}^{(n)}$ of order at most $k - 1$, respectively. According to (19) and (20), we obtain:
\[ \begin{aligned} E(X_{1t}^{(n)})^k &= E(\delta_{1t})^k + E(\alpha_{11} \circ X_{1,t-1}^{(n-1)} + Z_{1t})^k + \sum_{j=1}^{k-1} \binom{k}{j} E(\delta_{1t})^{k-j} E(\alpha_{11} \circ X_{1,t-1}^{(n-1)} + Z_{1t})^j \\ &\leq E(\delta_{1t})^k + (\alpha_{11} + b_{11})^k E(X_{1,t-1}^{(n)})^k + \psi_4 + \sum_{j=1}^{k-1} \binom{k}{j} E(\delta_{1t})^{k-j} E(\alpha_{11} \circ X_{1,t-1}^{(n)} + Z_{1t})^j \\ &\leq c_1^k + \gamma^k E(X_{1,t-1}^{(n)})^k + \psi_4 + \sum_{j=1}^{k-1} \binom{k}{j} E(\delta_{1t})^{k-j} E(\alpha_{11} \circ X_{1,t-1}^{(n)} + Z_{1t})^j, \tag{21} \end{aligned} \]
\[ \begin{aligned} E(X_{2t}^{(n)})^k &= E(\delta_{2t})^k + E(\alpha_{22} \circ X_{2,t-1}^{(n-1)} + Z_{2t})^k + \sum_{j=1}^{k-1} \binom{k}{j} E(\delta_{2t})^{k-j} E(\alpha_{22} \circ X_{2,t-1}^{(n-1)} + Z_{2t})^j \\ &\leq E(\delta_{2t})^k + (\alpha_{22} + b_{22})^k E(X_{2,t-1}^{(n)})^k + \psi_5 + \sum_{j=1}^{k-1} \binom{k}{j} E(\delta_{2t})^{k-j} E(\alpha_{22} \circ X_{2,t-1}^{(n)} + Z_{2t})^j \\ &\leq c_2^k + \gamma^k E(X_{2,t-1}^{(n)})^k + \psi_5 + \sum_{j=1}^{k-1} \binom{k}{j} E(\delta_{2t})^{k-j} E(\alpha_{22} \circ X_{2,t-1}^{(n)} + Z_{2t})^j. \tag{22} \end{aligned} \]
Using (21) and (22),
\[ E(X_{1t}^{(n)})^k + E(X_{2t}^{(n)})^k \leq \frac{ \sum_{i=1}^{2} \Big[ c_i^k + \sum_{j=1}^{k-1} \binom{k}{j} E(\delta_{it})^{k-j} E(\alpha_{ii} \circ X_{i,t-1}^{(n)} + Z_{it})^j \Big] + \psi_6 }{1 - \gamma^k}, \tag{23} \]
where $\psi_6 = \psi_4 + \psi_5$. Note that the numerator in (23) involves the moments of $X_{1,t-1}^{(n)}$ and $X_{2,t-1}^{(n)}$ of order at most $k - 1$ and is finite; thus, $E(X_{1t}^{(n)})^k + E(X_{2t}^{(n)})^k$ is finite if $\gamma < 1$. Since both $E(X_{1t}^{(n)})^k$ and $E(X_{2t}^{(n)})^k$ are non-negative, each of them is finite. □

3. Parameter Estimation

In this section, we consider the conditional maximum likelihood estimation for model (6). Let $\theta = (\alpha_{ij}, b_{ij}, c_i, \phi)^\top$, $i, j = 1, 2$. Suppose that $X_0, X_1, \ldots, X_T$ are generated by the EBINAR(1) model with the true parameter value $\theta_0$.
By (11), the conditional log-likelihood function can be written as:
\[ \ell(\theta) = \sum_{t=1}^{T} \ln P_\theta(X_t \mid X_{t-1}), \tag{24} \]
where
\[ P_\theta(X_t \mid X_{t-1}) = \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h(k_1, X_{1,t-1}, X_{2,t-1}, \alpha_{11}, \alpha_{12}) \, h(k_2, X_{1,t-1}, X_{2,t-1}, \alpha_{21}, \alpha_{22}) \, f(X_{1t} - k_1, X_{2t} - k_2, \lambda_{1t}, \lambda_{2t}, \phi) \]
with $\lambda_{1t} = b_{11} X_{1,t-1} + b_{12} X_{2,t-1} + c_1$, $\lambda_{2t} = b_{21} X_{1,t-1} + b_{22} X_{2,t-1} + c_2$, $g_1 = \min(X_{1t}, X_{1,t-1})$, $g_2 = \min(X_{2t}, X_{2,t-1})$, and $f(\cdot)$ and $h(\cdot)$ given in (2) and (8), respectively.
By using (3)–(5), (9) and (10), we can derive the score equation:
\[ \frac{\partial \ell(\theta_0)}{\partial \theta} = \sum_{t=1}^{T} \frac{1}{P_{\theta_0}(X_t \mid X_{t-1})} \frac{\partial P_{\theta_0}(X_t \mid X_{t-1})}{\partial \theta} = 0, \tag{25} \]
where, writing $h_1 = h(k_1, X_{1,t-1}, X_{2,t-1}, \alpha_{11}, \alpha_{12})$, $h_2 = h(k_2, X_{1,t-1}, X_{2,t-1}, \alpha_{21}, \alpha_{22})$, $f(u, v) = f(u, v, \lambda_{1t}, \lambda_{2t}, \phi)$, $x = X_{1t} - k_1$ and $y = X_{2t} - k_2$, the partial derivatives follow from the chain rule together with (3)–(5), (9) and (10):
\[
\begin{aligned}
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial \alpha_{11}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} X_{1,t-1} \, h_2 \, f(x, y) \left[ h(k_1 - 1, X_{1,t-1} - 1, X_{2,t-1}, \alpha_{11}, \alpha_{12}) - h(k_1, X_{1,t-1} - 1, X_{2,t-1}, \alpha_{11}, \alpha_{12}) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial \alpha_{12}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} X_{2,t-1} \, h_2 \, f(x, y) \left[ h(k_1 - 1, X_{1,t-1}, X_{2,t-1} - 1, \alpha_{11}, \alpha_{12}) - h(k_1, X_{1,t-1}, X_{2,t-1} - 1, \alpha_{11}, \alpha_{12}) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial \alpha_{21}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} X_{1,t-1} \, h_1 \, f(x, y) \left[ h(k_2 - 1, X_{1,t-1} - 1, X_{2,t-1}, \alpha_{21}, \alpha_{22}) - h(k_2, X_{1,t-1} - 1, X_{2,t-1}, \alpha_{21}, \alpha_{22}) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial \alpha_{22}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} X_{2,t-1} \, h_1 \, f(x, y) \left[ h(k_2 - 1, X_{1,t-1}, X_{2,t-1} - 1, \alpha_{21}, \alpha_{22}) - h(k_2, X_{1,t-1}, X_{2,t-1} - 1, \alpha_{21}, \alpha_{22}) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial b_{11}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h_1 h_2 \, X_{1,t-1} \left[ f(x - 1, y) - f(x, y) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial b_{12}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h_1 h_2 \, X_{2,t-1} \left[ f(x - 1, y) - f(x, y) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial c_1} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h_1 h_2 \left[ f(x - 1, y) - f(x, y) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial b_{21}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h_1 h_2 \, X_{1,t-1} \left[ f(x, y - 1) - f(x, y) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial b_{22}} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h_1 h_2 \, X_{2,t-1} \left[ f(x, y - 1) - f(x, y) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial c_2} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h_1 h_2 \left[ f(x, y - 1) - f(x, y) \right], \\
\frac{\partial P_\theta(X_t \mid X_{t-1})}{\partial \phi} &= \sum_{k_1=0}^{g_1} \sum_{k_2=0}^{g_2} h_1 h_2 \left[ f(x, y) - f(x - 1, y) - f(x, y - 1) + f(x - 1, y - 1) \right].
\end{aligned}
\]
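In practice, (24) is maximized numerically. The following R sketch is ours (the paper states only that optim is used; see Section 4); it assembles the conditional log-likelihood from the trans_prob helper above, and the starting value, bounds and the validity guard $\phi < \min(c_1, c_2)$ are illustrative choices.

```r
# Negative conditional log-likelihood (24); X is a (T+1) x 2 data matrix.
neg_loglik <- function(theta, X) {
  A <- matrix(theta[1:4], 2, 2, byrow = TRUE)
  B <- matrix(theta[5:8], 2, 2, byrow = TRUE)
  C <- theta[9:10]; phi <- theta[11]
  if (phi <= 0 || phi >= min(C)) return(1e10)  # keep BP(lambda1t, lambda2t, phi) valid
  ll <- 0
  for (t in 2:nrow(X)) {
    p <- trans_prob(X[t, ], X[t - 1, ], A, B, C, phi)
    ll <- ll + log(max(p, 1e-300))             # guard against numerical underflow
  }
  -ll
}

theta0 <- c(rep(0.2, 8), 0.5, 0.5, 0.2)        # illustrative starting value
fit <- optim(theta0, neg_loglik, X = X, method = "L-BFGS-B",
             lower = rep(0.01, 11), upper = c(rep(0.95, 8), 10, 10, 5))
```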
The maximizer $\hat{\theta}_T$ of (24) is the CML estimate of $\theta_0$, which is obtained by numerically maximizing the log-likelihood (24) or by solving the score Equation (25). To study the asymptotic behaviour of the estimator, we make the following two assumptions about the parameter space and the underlying process.
Assumption 1.
The parameter space $\Theta$ is compact with $\Theta = \{\theta : \theta = (\alpha_{ij}, b_{ij}, c_1, c_2, \phi)^\top, \, i, j = 1, 2\}$, where $\underline{\delta} \leq \alpha_{ij}, b_{ij} \leq \bar{\delta}$, $\underline{c} \leq c_i \leq \bar{c}$, $\underline{\phi} \leq \phi \leq \bar{\phi}$ and $\gamma = \max(\alpha_{ij} + b_{ij}) < 1$; here $\underline{\delta}, \bar{\delta}, \underline{c}, \bar{c}, \underline{\phi}$ and $\bar{\phi}$ are finite positive constants, and $\theta_0$ is an interior point of $\Theta$.
Assumption 2.
If there exists a $t \geq 1$ such that $X_t(\theta_0) = X_t(\theta)$, $P_{\theta_0}$-a.s., then $\theta = \theta_0$, where $P_{\theta_0}$ is the probability measure under the true parameter $\theta_0$.
To establish the identifiability of the EBINAR(1) model, we give the following two lemmas.
Lemma 1.
Let $g_1(x, y, b_{11}, b_{12}, c_1) = b_{11} x + b_{12} y + c_1$, with $b_{11}, b_{12}, c_1 > 0$, for $(x, y) \in \mathbb{R}_+ \times \mathbb{R}_+$. If $g_1(x, y, b_{11}, b_{12}, c_1) = g_1(x, y, b_{11}^0, b_{12}^0, c_1^0)$, then $b_{11} = b_{11}^0$, $b_{12} = b_{12}^0$ and $c_1 = c_1^0$.
Proof. 
By the assumption:
\[ \frac{\partial g_1(x, y, b_{11}, b_{12}, c_1)}{\partial x} = \frac{\partial g_1(x, y, b_{11}^0, b_{12}^0, c_1^0)}{\partial x}, \quad \frac{\partial g_1(x, y, b_{11}, b_{12}, c_1)}{\partial y} = \frac{\partial g_1(x, y, b_{11}^0, b_{12}^0, c_1^0)}{\partial y}, \quad g_1(0, 0, b_{11}, b_{12}, c_1) = g_1(0, 0, b_{11}^0, b_{12}^0, c_1^0), \]
we obtain $b_{11} = b_{11}^0$, $b_{12} = b_{12}^0$ and $c_1 = c_1^0$. □
Similarly, denoting $g_2(x, y, b_{21}, b_{22}, c_2) = b_{21} x + b_{22} y + c_2$, if $g_2(x, y, b_{21}, b_{22}, c_2) = g_2(x, y, b_{21}^0, b_{22}^0, c_2^0)$, then $b_{21} = b_{21}^0$, $b_{22} = b_{22}^0$ and $c_2 = c_2^0$ by the same method.
Lemma 2.
If $\{X_t\}$ is the strictly stationary and ergodic solution of model (6) and Assumptions 1 and 2 hold, then model (6) is identifiable.
Proof. 
According to Lemma 1, if $\lambda_{1t}(b_{11}, b_{12}, c_1) = \lambda_{1t}(b_{11}^0, b_{12}^0, c_1^0)$, then $b_{11} = b_{11}^0$, $b_{12} = b_{12}^0$, $c_1 = c_1^0$. Similarly, if $\lambda_{2t}(b_{21}, b_{22}, c_2) = \lambda_{2t}(b_{21}^0, b_{22}^0, c_2^0)$, then $b_{21} = b_{21}^0$, $b_{22} = b_{22}^0$, $c_2 = c_2^0$. Thus, if $\epsilon_t(b_{ij}, c_i, \phi) = \epsilon_t(b_{ij}^0, c_i^0, \phi^0)$, $t \geq 1$, $i, j = 1, 2$, then $b_{ij} = b_{ij}^0$, $c_i = c_i^0$, $\phi = \phi^0$. According to (7), we have $\epsilon_{it} = X_{it} - \alpha_{i1} \circ X_{1,t-1} - \alpha_{i2} \circ X_{2,t-1}$, $i = 1, 2$. If $\epsilon_{it}(\theta) = \epsilon_{it}(\theta_0)$, then $\alpha_{i1} = \alpha_{i1}^0$ and $\alpha_{i2} = \alpha_{i2}^0$; otherwise,
\[ 0 = E(\epsilon_{it}(\theta)) - E(\epsilon_{it}(\theta_0)) = (\alpha_{i1}^0 - \alpha_{i1}) E(X_{1t}) + (\alpha_{i2}^0 - \alpha_{i2}) E(X_{2t}), \]
which would force $E(X_{1t}) = 0$ and $E(X_{2t}) = 0$, contradicting the fact that $E(X_{it}) > 0$, $i = 1, 2$.
By Assumption 2, for given $X_{1,t-1}$ and $X_{2,t-1}$, we have
\[ \phi = \mathrm{Cov}(X_{1t}(\theta), X_{2t}(\theta)) = \mathrm{Cov}(X_{1t}(\theta_0), X_{2t}(\theta_0)) = \phi^0. \]
Thus, $\phi = \phi^0$. Hence, model (6) is identifiable. □
Theorem 7.
Suppose that $\{X_t\}$ is the strictly stationary and ergodic solution of model (6) and that Assumptions 1 and 2 hold. As $T \to \infty$, there exists an estimator $\hat{\theta}_T$ such that $\hat{\theta}_T \overset{a.s.}{\longrightarrow} \theta_0$.
Proof. 
To prove the strong consistency of $\hat{\theta}_T$, we need to check the assumptions of Theorems 4.1.2 and 4.1.3 in Amemiya [25]. Let $W_t(\theta) = \ln P_\theta(X_t \mid X_{t-1})$, so that $\ell(\theta) = \sum_{t=1}^{T} W_t(\theta)$. We observe that $W_t(\theta)$ is a measurable function of $X_t$ for all $\theta \in \Theta$ and is continuous in an open and convex neighborhood $N(\theta_0)$ of $\theta_0$; then there exists at least one point $\bar{\theta} \in N(\theta_0)$ at which $W_t(\theta)$ attains its maximum value.
Thus,
\[ E\Big[\sup_{\theta \in N(\theta_0)} W_t(\theta)\Big] = E[\ln P_{\bar{\theta}}(X_t \mid X_{t-1})] \leq \ln E[P_{\bar{\theta}}(X_t \mid X_{t-1})] < \infty. \tag{26} \]
Note that $\{X_t\}$ is a stationary and ergodic time series; in terms of Theorem 4.1.2 in Amemiya [25], $\frac{1}{T} \sum_{t=1}^{T} W_t(\theta) \to E(W_t(\theta))$ in probability as $T \to \infty$. By Jensen's inequality, we have:
\[ E(W_t(\theta)) - E(W_t(\theta_0)) = E\Big[\ln \frac{P_\theta(X_t \mid X_{t-1})}{P_{\theta_0}(X_t \mid X_{t-1})}\Big] \leq \ln E\Big[\frac{P_\theta(X_t \mid X_{t-1})}{P_{\theta_0}(X_t \mid X_{t-1})}\Big] = 0. \]
Thus, $E(W_t(\theta))$ attains a strict local maximum at $\theta_0$ by (26) and Lemma 2. Hence, the conditions of Theorem 4.1.2 in Amemiya [25] are fulfilled, and there exists an estimator $\hat{\theta}_T$ such that $\hat{\theta}_T \to \theta_0$ as $T \to \infty$. □
Theorem 8.
If the conditions of Theorem 7 hold, then, as $T \to \infty$,
\[ \sqrt{T} (\hat{\theta}_T - \theta_0) \overset{d}{\longrightarrow} N\big(0, (J(\theta_0))^{-1} I(\theta_0) (J(\theta_0))^{-1}\big), \]
where $I(\theta_0) = \lim_{T \to \infty} T^{-1} E\big[\frac{\partial \ell(\theta_0)}{\partial \theta} \frac{\partial \ell(\theta_0)}{\partial \theta^\top}\big]$ and $J(\theta_0) = -\lim_{T \to \infty} T^{-1} E\big[\frac{\partial^2 \ell(\theta_0)}{\partial \theta \partial \theta^\top}\big]$.
Proof. 
To prove the asymptotic normality of θ ^ T , we need to verify the assumptions of Theorem 4.1.3 in Amemiya [25].
First, by Proposition 1 in Freeland and McCabe [26], all the partial derivatives are obtained in a similar way; in particular, $\frac{\partial W_t(\theta)}{\partial \theta_i}$ exists and $W_t(\theta)$ is three times continuously differentiable in $\Theta$, so $\frac{\partial^2 W_t(\theta)}{\partial \theta_i \partial \theta_j}$ exists and is continuous in $N(\theta_0)$ for any $i, j, k = 1, 2, \ldots, 11$. Thus, there exists at least one point $\tilde{\theta} \in N(\theta_0)$ at which $\big|\frac{\partial^2 W_t(\theta)}{\partial \theta_i \partial \theta_j}\big|$ attains its maximum value. Hence,
\[ E\Big[\sup_{\theta \in N(\theta_0)} \Big|\frac{\partial^2 W_t(\theta)}{\partial \theta_i \partial \theta_j}\Big|\Big] = E\Big[\Big|\frac{\partial^2 W_t(\tilde{\theta})}{\partial \theta_i \partial \theta_j}\Big|\Big] < \infty. \]
For convenience, we denote $\frac{\partial^2 \ell(\theta)}{\partial \theta \partial \theta^\top} = G(X_t, \theta) = (g_{ij}(X_t, \theta))$ and $E\big[\frac{\partial^2 \ell(\theta)}{\partial \theta \partial \theta^\top}\big] = G(\theta) = (g_{ij}(\theta))$. We only need to prove that $g_{ij}(X_t, \theta)$ converges to the finite and non-stochastic function $g_{ij}(\theta) = E[g_{ij}(X_t, \theta)]$. Let $h(X_t, \theta) = g_{ij}(X_t, \theta) - E[g_{ij}(X_t, \theta)]$; then $E[h(X_t, \theta)] = 0$. Hence, the conditions of Theorem 4.1.3 in [25] are fulfilled. Thus, $T^{-1} \sum_{t=1}^{T} h(X_t, \theta)$ converges to 0 in probability uniformly in $\theta \in N(\theta_0)$. Furthermore, $T^{-1} \sum_{t=1}^{T} h(X_t, \theta_T)$ converges to 0 in probability when $\theta_T \to \theta_0$, $T \to \infty$.
Second, it is easy to see that $\mathrm{Cov}\big(\partial W_t(\theta_0)/\partial \theta\big) = E\big[(\partial W_t(\theta_0)/\partial \theta)(\partial W_t(\theta_0)/\partial \theta)^\top\big]$ because $E[\partial W_t(\theta_0)/\partial \theta] = 0$.
Using the ergodic theorem,
\[ \frac{1}{T} \frac{\partial \ell(\theta_0)}{\partial \theta} \overset{p}{\longrightarrow} E\Big[\frac{1}{P_{\theta_0}(X_t \mid X_{t-1})} \frac{\partial P_{\theta_0}(X_t \mid X_{t-1})}{\partial \theta}\Big]. \]
Using the martingale central limit theorem and the Cramér–Wold device, it is direct to show that
\[ \frac{1}{\sqrt{T}} \frac{\partial \ell(\theta_0)}{\partial \theta} \overset{d}{\longrightarrow} N(0, I(\theta_0)) \quad \text{with} \quad I(\theta_0) = \lim_{T \to \infty} T^{-1} E\Big[\frac{\partial \ell(\theta_0)}{\partial \theta} \frac{\partial \ell(\theta_0)}{\partial \theta^\top}\Big]. \]
Third, there exists an $H(X_{1t}, X_{2t})$ such that $\big|\frac{\partial^3 \ell(\theta)}{\partial \theta_i \partial \theta_j \partial \theta_k}\big| \leq H(X_{1t}, X_{2t})$ and $E[H(X_{1t}, X_{2t})] < \infty$ by Theorem 4. By the Taylor expansion, we have
\[ \frac{\partial \ell(\hat{\theta}_T)}{\partial \theta} = \frac{\partial \ell(\theta_0)}{\partial \theta} + \frac{\partial^2 \ell(\theta_T^*)}{\partial \theta \partial \theta^\top} (\hat{\theta}_T - \theta_0), \tag{27} \]
where $\theta_T^*$ lies between $\hat{\theta}_T$ and $\theta_0$. Observing that $\partial \ell(\hat{\theta}_T)/\partial \theta = 0$ in (27) by (25), we obtain
\[ \sqrt{T} (\hat{\theta}_T - \theta_0) = -\Big( \frac{1}{T} \frac{\partial^2 \ell(\theta_T^*)}{\partial \theta \partial \theta^\top} \Big)^{-1} \frac{1}{\sqrt{T}} \frac{\partial \ell(\theta_0)}{\partial \theta}. \tag{28} \]
Hence, the asymptotic normality of $\hat{\theta}_T$ follows from (28). □
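The standard errors reported in Section 5 are computed from the estimated sandwich matrix $(J(\theta_0))^{-1} I(\theta_0) (J(\theta_0))^{-1}$. A rough R sketch of this computation is given below; the code is ours (it approximates the per-observation scores by central finite differences and the Hessian by stats::optimHess, building on neg_loglik and trans_prob above), not the authors' implementation.

```r
# Per-observation log-likelihood ln P_theta(X_t | X_{t-1}).
loglik_t <- function(theta, X, t) {
  A <- matrix(theta[1:4], 2, 2, byrow = TRUE)
  B <- matrix(theta[5:8], 2, 2, byrow = TRUE)
  log(trans_prob(X[t, ], X[t - 1, ], A, B, theta[9:10], theta[11]))
}

# Sandwich standard errors J^{-1} I J^{-1} / T at the CML estimate.
sandwich_se <- function(theta_hat, X, eps = 1e-5) {
  p <- length(theta_hat); n <- nrow(X) - 1
  S <- matrix(0, n, p)                           # finite-difference scores
  for (t in 2:nrow(X)) for (j in 1:p) {
    e <- numeric(p); e[j] <- eps
    S[t - 1, j] <- (loglik_t(theta_hat + e, X, t) -
                    loglik_t(theta_hat - e, X, t)) / (2 * eps)
  }
  I_hat <- crossprod(S) / n                      # estimate of I(theta_0)
  J_hat <- optimHess(theta_hat, neg_loglik, X = X) / n  # estimate of J(theta_0)
  sqrt(diag(solve(J_hat) %*% I_hat %*% solve(J_hat)) / n)
}
```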

4. Simulation

In this section, we conduct a simulation study to illustrate the finite-sample performance of the CML estimator. The simulation is carried out in R by using the optim function for the optimization of the conditional log-likelihood function.
In the simulation, we generate data from the non-diagonal EBINAR(1) model and the diagonal EBINAR(1) model. The sample sizes are chosen to be 50, 100, 200, 500 and 1000 to reflect relatively small, small, moderate, large and relatively large samples, and we use 500 replications. For the simulated samples, the performance of the estimators is evaluated by the mean squared error (MSE) and the mean absolute deviation error (MADE):
\[ \mathrm{MSE} = \frac{1}{m} \sum_{i=1}^{m} (\hat{\varphi}_i - \varphi)^2, \quad \mathrm{MADE} = \frac{1}{m} \sum_{i=1}^{m} |\hat{\varphi}_i - \varphi|, \]
where $\hat{\varphi}_i$ is the estimate of $\varphi$ in the $i$th replication and $m$ denotes the number of replications (a small helper sketch for these criteria follows the parameter list below). The parameter combinations of $\theta = (\alpha_{11}, \alpha_{12}, \alpha_{21}, \alpha_{22}, b_{11}, b_{12}, b_{21}, b_{22}, c_1, c_2, \phi)$ used are listed as follows:
(1). For a non-diagonal model: $\theta = (0.3, 0.1, 0.1, 0.1, 0.2, 0.1, 0.1, 0.3, 0.6, 0.6, 0.5)$;
(2). For a diagonal model:
I: $\theta = (0.2, 0, 0, 0.3, 0.3, 0, 0, 0.2, 0.6, 0.6, 0.5)$;
II: $\theta = (0.2, 0, 0, 0.3, 0.3, 0, 0, 0.2, 2, 2, 1)$;
III: $\theta = (0.1, 0, 0, 0.4, 0.4, 0, 0, 0.1, 0.6, 0.6, 0.5)$;
IV: $\theta = (0.1, 0, 0, 0.4, 0.4, 0, 0, 0.1, 2, 2, 1)$.
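A minimal sketch of the two criteria (helper names are ours), given an $m \times 11$ matrix est of replicated estimates and the true parameter vector theta:

```r
# MSE and MADE over m replications; est has one row per replication.
mse  <- function(est, theta) colMeans(sweep(est, 2, theta)^2)
made <- function(est, theta) colMeans(abs(sweep(est, 2, theta)))
```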
Table 1, Table 2, Table 3, Table 4 and Table 5 show that the MSE and MADE decrease as $T$ increases for both the diagonal and non-diagonal models, which implies that the estimators are consistent.
To illustrate the location and dispersion of the estimates, we present boxplots of the estimates for the non-diagonal parameter combination and diagonal combination I in Figure 1 and Figure 2; the others are similar.
Figure 1 and Figure 2 illustrate the large-sample properties of the estimators for limited sample sizes. In general, the estimated medians move closer to the true parameter values as the sample size increases. Regarding dispersion, both the interquartile ranges and the overall ranges of the estimates narrow as the sample size increases.

5. Illustrative Examples

In this section, we apply the proposed model to two crime datasets from different car beats, where a car beat is the unique ID for the observation unit's geography in the Pittsburgh Police Department. The crime data are available online in the crime-data section of "The Forecasting Principles" site (http://www.forecastingprinciples.com/index.php/crimedata) and were downloaded on 23 September 2016.
According to Cohen and Gorr [27], the occurrence of criminal mischief may be accompanied by burglary behavior, and the same holds for robbery. Hence, the monthly counts of burglary and CMIS (or those of burglary and robbery) may exhibit dependence. In this section, we take the monthly counts of burglary and CMIS in beat 11 and the monthly counts of burglary and robbery in beat 26 as examples.

5.1. Monthly Counts of Burglary and CMIS in Beat 11

In this part, we consider the monthly numbers of burglaries and criminal mischief (CMIS) incidents from January 1990 to December 2001 in the beat with geographic ID = 11. Table 6 gives summary statistics of the counts of burglaries and CMIS.
Table 6 shows that both the counts of burglaries and those of CMIS are over-dispersed, because their variances are greater than their means. Moreover, the relationship between the two series is illustrated by the sample cross-correlation plot given in Figure 3.
From Figure 3, the counts of burglaries are weakly correlated with those of CMIS. Their sample paths, autocorrelation functions (ACF) and partial autocorrelation functions (PACF) are given in Figure 4, which shows that the analyzed datasets are bivariate integer-valued time series with some characteristics of mutual influence.
To give quantitative results about cross-correlation, we compare our model with the following models:
  • Full BINAR-BP with $\epsilon_t$ following $\mathrm{BP}(\lambda_1, \lambda_2, \phi)$ [16];
  • Full BINAR-NB with $\epsilon_t$ following the bivariate negative binomial distribution with parameters $(\lambda_1, \lambda_2, \beta)$; see [14,16] for details.
As goodness-of-fit criteria, we use the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the mean square error of the Pearson residuals (PRMS), which equals $\sum_{t=1}^{n} Z_t^2 / (n - p^*)$, where $p^*$ denotes the number of estimated parameters and $Z_t$ denotes the standardized Pearson residuals.
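A one-line sketch of the PRMS criterion (the helper name is ours; mu and sigma are the fitted conditional means and standard deviations, which are model-specific):

```r
# PRMS: sum of squared standardized Pearson residuals over (n - p*).
prms <- function(x, mu, sigma, p_star) sum(((x - mu) / sigma)^2) / (length(x) - p_star)
```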
The CML estimates and approximate standard errors (SE) of the parameters, together with the fitted values of PRMS, AIC, BIC and the log-likelihood (Log Lik), are summarized in Table 7, where the approximate standard errors are computed by using the estimated version of the robust sandwich matrix $(J(\theta_0))^{-1} I(\theta_0) (J(\theta_0))^{-1}$; see Theorem 8 for details.
Table 7 shows that the EBINAR(1) model attains the highest Log Lik value and the lowest AIC, BIC and PRMS for the monthly numbers of burglaries and CMIS. Hence, the EBINAR(1) model is more suitable for these data.

5.2. Monthly Counts of Burglaries and Robberies in Beat 26

In this part, we consider the monthly numbers of burglaries and robberies from January 1990 to December 2001 in the beat with geographic ID = 26; see Table 8 for summary statistics.
Table 8 shows that the monthly numbers of burglaries and robberies are over-dispersed. Moreover, the relationship between the two series is illustrated by their cross-correlation plot given in Figure 5, which shows that the counts of burglaries are significantly correlated with those of robberies.
To further illustrate the monthly numbers of burglaries and robberies in beat 26, we present their sample paths, ACF and PACF plots in Figure 6, from which we can conclude that the analyzed dataset exhibits some characteristics of mutual influence.
To give a quantitative result about cross-correlation, we compare our model with the Full BINAR-BP and Full BINAR-NB models. The CML estimates and SEs, together with the fitted PRMS, AIC, BIC and Log Lik, are summarized in Table 9.
Table 9 shows that the EBINAR(1) model attains the highest Log Lik value and the lowest AIC, BIC and PRMS for burglaries and robberies in beat 26. Hence, the EBINAR(1) model is more suitable.
To sum up, our findings reveal that there are connections between burglary and CMIS in beat 11 and between burglary and robbery in beat 26, which agrees with the conclusion of Cohen and Gorr [27]. Of course, the counts of burglary may be affected by other crime activities, such as simple assault, vagrancy and trespassing, which will be examined in further work.
Remark 2.
For the two real datasets, our EBINAR(1) model performs best in-sample, but its performance on unseen data has not been assessed. One available way to further illustrate the predictive performance of the new model is to conduct an experiment in which the considered data are divided into a training set and a test set; such an experiment will be considered in a future study of the crime data.

6. Concluding Remarks

This paper proposes a more flexible model for bivariate integer-valued time series data, i.e., the EBINAR(1) model, whose innovation vector is time-dependent. It is a generalization of the EINAR(1) model [11] to the two-dimensional case as well as a more flexible generalization of the BINAR(1) model [14,16]. We discuss some necessary properties of the model, the CML estimators of the parameters and their large-sample properties. Simulations were conducted to examine the finite-sample performance of the estimators. Real data examples are provided to illustrate that our model is effective relative to existing models.
To make bivariate INAR-type models more flexible with respect to real-data applications, it may be interesting to include explanatory covariates or periodicity in the model to account for dependence through thinning operations on several factors, which will be considered in another project; see Aknouche et al. [28] and Chen and Khamthong [29].

Author Contributions

Conceptualization, H.C. and F.Z.; methodology, H.C. and F.Z.; software, H.C., F.Z. and X.L.; validation, H.C., F.Z. and X.L.; formal analysis, H.C.; investigation, H.C.; resources, F.Z.; data curation, X.L.; writing—original draft preparation, H.C. and F.Z.; writing—review and editing, H.C. and F.Z.; visualization, H.C. and F.Z.; supervision, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Chen’s work is supported by the Natural Science Foundation of Henan province (No.222300420127). Zhu’s work is supported by the National Natural Science Foundation of China (Nos.11871027, 11731015) and the Natural Science Foundation of Jilin Province (No.20210101143JC). Liu’s work is supported by the Basic Research Programs of Shanxi Province (No.202103021223084).

Data Availability Statement

Crime data are available online at The Forecasting Principles site and were downloaded on 23 September 2016 from http://www.forecastingprinciples.com/index.php/crimedata.

Acknowledgments

We are very grateful to anonymous referees for providing several exceptionally helpful comments which led to a significant improvement in the manuscript.

Conflicts of Interest

The authors declare no conflict of interest in the publication of this paper.

Abbreviations

The following abbreviations are used in this manuscript:
$A^{-1}$: inverse of the matrix $A$;
$A^\top$: transpose of the matrix or vector $A$;
$\|\cdot\|$: Euclidean norm of a matrix or vector;
$|\cdot|$: absolute value of a univariate variable;
$\overset{d}{\longrightarrow}$: convergence in distribution;
$\overset{p}{\longrightarrow}$: convergence in probability;
pmf: probability mass function;
CML: conditional maximum likelihood;
AIC: Akaike information criterion;
BIC: Bayesian information criterion;
SE: standard error;
PRMS: mean square error of the Pearson residuals;
Para.: parameter.

References

  1. Steutel, F.W.; van Harn, K. Discrete analogues of self-decomposability and stability. Ann. Probab. 1979, 7, 893–899.
  2. Al-Osh, M.A.; Alzaid, A.A. First-order integer-valued autoregressive process. J. Time Ser. Anal. 1987, 8, 261–275.
  3. McKenzie, E. Some simple models for discrete variate time series. Water Resour. Bull. 1985, 21, 645–650.
  4. Du, J.; Li, Y. The integer-valued autoregressive INAR(p) model. J. Time Ser. Anal. 1991, 12, 129–142.
  5. Alzaid, A.A.; Omair, M.A. Poisson difference integer valued autoregressive model of order one. Bull. Malays. Math. Sci. Soc. 2014, 37, 465–485.
  6. Chen, H.; Li, Q.; Zhu, F. Binomial AR(1) processes with innovational outliers. Commun. Stat. Theory Methods 2021, 50, 446–472.
  7. Weiß, C.H. Thinning operations for modeling time series of counts—A survey. Adv. Stat. Anal. 2008, 92, 319–341.
  8. Scotto, M.G.; Weiß, C.H.; Gouveia, S. Thinning-based models in the analysis of integer-valued time series: A review. Stat. Model. 2015, 15, 590–618.
  9. Davis, R.A.; Fokianos, K.; Holan, S.H.; Joe, H.; Livsey, J.; Lund, R.; Pipiras, V.; Ravishanker, N. Count time series: A methodological review. J. Am. Stat. Assoc. 2021, 116, 1533–1547.
  10. Buckley, F.M.; Pollett, P.K. Limit theorems for discrete-time metapopulation models. Probab. Surv. 2010, 7, 53–83.
  11. Weiß, C.H. A Poisson INAR(1) model with serially dependent innovations. Metrika 2015, 78, 829–851.
  12. Franke, J.; Rao, T.S. Multivariate First-Order Integer Valued Autoregressions; Technical Report; Department of Mathematics, UMIST: Manchester, UK, 1995.
  13. Latour, A. The multivariate GINAR(p) process. Adv. Appl. Probab. 1997, 29, 228–248.
  14. Pedeli, X.; Karlis, D. A bivariate INAR(1) process with application. Stat. Model. 2011, 11, 325–349.
  15. Pedeli, X.; Karlis, D. On estimation of the bivariate Poisson INAR process. Commun. Stat. Simul. Comput. 2013, 42, 514–533.
  16. Pedeli, X.; Karlis, D. Some properties of multivariate INAR(1) processes. Comput. Stat. Data Anal. 2013, 67, 213–225.
  17. Ravishanker, N.; Serhiyenko, V.; Willig, M.R. Hierarchical dynamic models for multivariate time series of counts. Stat. Interface 2014, 7, 559–570.
  18. Popović, P.M. A bivariate INAR(1) model with different thinning parameters. Stat. Pap. 2016, 57, 517–538.
  19. Scotto, M.G.; Weiß, C.H.; Silva, M.E.; Pereira, I. Bivariate binomial autoregressive models. J. Multivar. Anal. 2014, 125, 233–251.
  20. Li, Q.; Chen, H.; Liu, X. A new bivariate random coefficient INAR(1) model with applications. Symmetry 2022, 14, 39.
  21. Kocherlakota, S.; Kocherlakota, K. Bivariate Discrete Distributions; Marcel Dekker: New York, NY, USA, 1992; pp. 87–97.
  22. Heathcote, C.R. Corrections and comments on the paper "A branching process allowing immigration". J. R. Stat. Soc. Ser. B 1966, 28, 213–217.
  23. Shumway, R.H.; Stoffer, D.S. Time Series Analysis and Its Applications with R Examples, 3rd ed.; Springer: New York, NY, USA, 2011.
  24. Silva, M.E.; Oliveira, V.L. Difference equations for the higher-order moments and cumulants of the INAR(1) model. J. Time Ser. Anal. 2004, 25, 317–333.
  25. Amemiya, T. Advanced Econometrics; Harvard University Press: Cambridge, MA, USA, 1985; pp. 110–112.
  26. Freeland, R.K.; McCabe, B.P.M. Analysis of low count time series data by Poisson autoregression. J. Time Ser. Anal. 2004, 25, 701–722.
  27. Cohen, J.; Gorr, W.L. Development of Crime Forecasting and Mapping Systems for Use by Police; Inter-University Consortium for Political and Social Research: New York, NY, USA, 2005.
  28. Aknouche, A.; Bentarzi, W.; Demouche, N. On periodic ergodicity of a general periodic mixed Poisson autoregression. Stat. Probab. Lett. 2018, 134, 15–21.
  29. Chen, C.W.S.; Khamthong, K. Bayesian modelling of nonlinear negative binomial integer-valued GARCHX models. Stat. Model. 2020, 20, 537–561.
Figure 1. Boxplots of the CML estimates for the non-diagonal EBINAR(1) model.
Figure 2. Boxplots of the CML estimates for the diagonal EBINAR(1) model with parameter combination I.
Figure 3. Cross-correlation between the monthly numbers of burglaries and CMIS in beat 11.
Figure 4. Beat 11: (1) monthly number of burglaries, (2) monthly number of CMIS, (3) ACF of burglaries, (4) ACF of CMIS, (5) PACF of burglaries, (6) PACF of CMIS.
Figure 5. Cross-correlation between the monthly numbers of burglaries and robberies in beat 26.
Figure 6. Beat 26: (1) monthly number of burglaries, (2) monthly number of robberies, (3) ACF of burglaries, (4) ACF of robberies, (5) PACF of burglaries, (6) PACF of robberies.
Table 1. Results for the non-diagonal EBINAR(1) model.

Size         α11     α12     α21     α22     b11     b12     b21     b22     c1      c2      φ
50    MSE    0.0095  0.0122  0.0082  0.0177  0.0048  0.0187  0.0100  0.0380  0.0172  0.0246  0.0160
      MADE   0.0561  0.0537  0.0390  0.0460  0.0371  0.0487  0.0409  0.0750  0.0857  0.0991  0.0811
100   MSE    0.0069  0.0043  0.0033  0.0127  0.0063  0.0070  0.0099  0.0090  0.0128  0.0210  0.0150
      MADE   0.0473  0.0269  0.0287  0.0417  0.0399  0.0355  0.0378  0.0578  0.0763  0.0913  0.0736
200   MSE    0.0044  0.0019  0.0034  0.0108  0.0058  0.0063  0.0054  0.0063  0.0178  0.0185  0.0170
      MADE   0.0421  0.0281  0.0326  0.0291  0.0426  0.0363  0.0372  0.0532  0.0901  0.0876  0.0824
500   MSE    0.0033  0.0008  0.0011  0.0105  0.0044  0.0027  0.0008  0.0061  0.0029  0.0044  0.0063
      MADE   0.0317  0.0161  0.0213  0.0469  0.0379  0.0293  0.0225  0.0556  0.0446  0.0529  0.0465
1000  MSE    0.0002  0.0001  0.0005  0.0041  0.0014  0.0007  0.0002  0.0006  0.0006  0.0015  0.0048
      MADE   0.0116  0.0079  0.0167  0.0413  0.0280  0.0225  0.0114  0.0190  0.0199  0.0345  0.0379
Table 2. Results for the diagonal EBINAR(1) model with parameter combination I.

Size         α11     α22     b11     b22     c1      c2      φ
50    MSE    0.0031  0.0146  0.0205  0.0030  1.2188  0.6134  0.3256
      MADE   0.0406  0.1021  0.1250  0.0398  0.7710  0.5880  0.5059
100   MSE    0.0020  0.0062  0.0134  0.0023  0.7887  0.4527  0.2778
      MADE   0.0323  0.0665  0.0976  0.0330  0.6125  0.4978  0.4843
200   MSE    0.0015  0.0045  0.0088  0.0010  0.4832  0.3250  0.2572
      MADE   0.0300  0.0524  0.0775  0.0217  0.5016  0.3995  0.4742
500   MSE    0.0007  0.0031  0.0043  0.0010  0.2240  0.1528  0.2198
      MADE   0.0202  0.0396  0.0495  0.0227  0.3655  0.2670  0.4312
1000  MSE    0.0005  0.0020  0.0022  0.0004  0.1965  0.1147  0.1789
      MADE   0.0156  0.0352  0.0377  0.0142  0.3100  0.2084  0.3954
Table 3. Results for the diagonal EBINAR(1) model with parameter combination II.

Size         α11     α22     b11     b22     c1      c2      φ
50    MSE    0.0154  0.0183  0.0122  0.0191  0.7510  1.1007  0.3183
      MADE   0.0871  0.0988  0.0903  0.0949  0.6894  0.8269  0.5008
100   MSE    0.0059  0.0089  0.0059  0.0072  0.4333  0.6728  0.2336
      MADE   0.0470  0.0742  0.0599  0.0582  0.4957  0.5889  0.4442
200   MSE    0.0042  0.0044  0.0041  0.0053  0.2876  0.4939  0.1983
      MADE   0.0411  0.0499  0.0475  0.0470  0.3796  0.4866  0.4193
500   MSE    0.0027  0.0035  0.0036  0.0025  0.1038  0.3240  0.1899
      MADE   0.0344  0.0400  0.0414  0.0326  0.2636  0.4216  0.4107
1000  MSE    0.0013  0.0022  0.0017  0.0011  0.0730  0.0855  0.1352
      MADE   0.0238  0.0303  0.0307  0.0204  0.1978  0.2221  0.3512
Table 4. Results for the diagonal EBINAR(1) model with parameter combination III.

Size         α11     α22     b11     b22     c1      c2      φ
50    MSE    0.0027  0.0048  0.0473  0.0083  0.0078  0.0083  0.0013
      MADE   0.0258  0.0428  0.1533  0.0546  0.0620  0.0586  0.0316
100   MSE    0.0036  0.0064  0.0429  0.0102  0.0089  0.0087  0.0017
      MADE   0.0359  0.0485  0.1486  0.0640  0.0680  0.0638  0.0330
200   MSE    0.0060  0.0059  0.0380  0.0059  0.0054  0.0047  0.0017
      MADE   0.0341  0.0469  0.1239  0.0469  0.0541  0.0507  0.0321
500   MSE    0.0018  0.0042  0.0082  0.0031  0.0046  0.0039  0.0016
      MADE   0.0312  0.0429  0.0638  0.0380  0.0477  0.0426  0.0287
1000  MSE    0.0011  0.0030  0.0037  0.0020  0.0020  0.0016  0.0005
      MADE   0.0253  0.0395  0.0380  0.0331  0.0384  0.0341  0.0182
Table 5. Results for the diagonal EBINAR(1) model with parameter combination IV.

Size         α11     α22     b11     b22     c1      c2      φ
50    MSE    0.0031  0.0146  0.0205  0.0030  1.2188  0.6134  0.3256
      MADE   0.0406  0.1021  0.1250  0.0398  0.7710  0.5880  0.5059
100   MSE    0.0020  0.0062  0.0134  0.0023  0.7887  0.4527  0.2778
      MADE   0.0323  0.0665  0.0976  0.0330  0.6125  0.4978  0.4843
200   MSE    0.0015  0.0045  0.0088  0.0010  0.4832  0.3250  0.2572
      MADE   0.0300  0.0524  0.0775  0.0217  0.5016  0.3995  0.4742
500   MSE    0.0007  0.0031  0.0043  0.0010  0.2240  0.1528  0.2198
      MADE   0.0202  0.0396  0.0495  0.0227  0.3655  0.2670  0.4312
1000  MSE    0.0005  0.0020  0.0022  0.0004  0.1965  0.1147  0.1789
      MADE   0.0156  0.0352  0.0377  0.0142  0.3100  0.2084  0.3954
Table 6. Summary statistics for the monthly numbers of burglaries and CMIS in beat 11.

Data       Mean    Variance  Minimum  Median  Maximum
Burglary   2.8819  4.1188    0        3       10
CMIS       6.3819  10.0839   1        6       22
Table 7. Estimates for the monthly numbers of burglaries and CMIS in beat 11.

          EBINAR(1)             Full BINAR(1)-NB        Full BINAR(1)-BP
Para.     Estimate   SE         Para.   Estimate  SE        Para.   Estimate  SE
α11       0.1689     0.1559     α11     0.2784    0.0665    α11     0.2993    0.0838
α12       0.0179     0.0411     α12     0.0217    0.0092    α12     0.0217    0.0215
α21       0.0390     0.1447     α21     0.1060    0.0550    α21     0.1060    0.0719
α22       0.1131     0.1236     α22     0.5010    0.0295    α22     0.1934    0.0551
b11       0.0690     0.1460
b12       0.0093     0.0559
b21       0.1014     0.1809
b22       0.1354     0.1414
c1        1.0007     0.4372     λ1      1.9814    0.0186    λ1      1.5164    0.2561
c2        3.3190     0.5478     λ2      2.3137    0.0166    λ2      4.5258    0.4493
φ         0.5273     0.2628     β       0.1374    0.9759    φ       0.4044    0.2274
PRMS      0.0064                PRMS    0.0245              PRMS    0.0103
AIC       1315.4620             AIC     1387.8913           AIC     1350.9488
BIC       1348.1300             BIC     1408.6800           BIC     1371.7375
Log Lik   −646.7310             Log Lik −686.9457           Log Lik −668.4744
Table 8. Summary statistics for the monthly numbers of burglaries and robberies in beat 26.

Data       Mean    Variance  Minimum  Median  Maximum
Burglary   3.9306  9.7434    0        3       15
Robbery    3.0625  9.6394    0        2       17
Table 9. Estimates for the monthly numbers of burglaries and robberies in beat 26.

          EBINAR(1)             Full BINAR(1)-NB        Full BINAR(1)-BP
Para.     Estimate   SE         Para.   Estimate  SE        Para.   Estimate  SE
α11       0.3117     0.0654     α11     0.2314    0.3042    α11     0.2765    0.0537
α12       0.2086     0.0611     α12     0.3172    0.2442    α12     0.0927    0.0471
α21       0.0900     0.0511     α21     0.1099    0.2834    α21     0.0001    0.0000
α22       0.1906     0.1163     α22     0.4361    0.2244    α22     0.4249    0.0415
b11       0.0671     0.0706
b12       0.2280     0.0653
b21       0.1233     0.0511
b22       0.3358     0.1161
c1        0.2043     0.2048     λ1      2.2310    0.0026    λ1      1.7652    0.2048
c2        0.4139     0.1139     λ2      1.1708    0.0076    λ2      0.9604    0.1601
φ         0.5599     0.1187     β       0.4073    0.7189    φ       0.7778    0.1494
PRMS      0.0087                PRMS    0.0748              PRMS    0.0992
AIC       1320.8092             AIC     1344.6968           AIC     1357.7718
BIC       1353.4771             BIC     1365.4855           BIC     1378.5604
Log Lik   −649.4046             Log Lik −665.3484           Log Lik −671.8859