Article

A Time-Varying Mixture Integer-Valued Threshold Autoregressive Process Driven by Explanatory Variables

1 School of Mathematics and Statistics, Liaoning University, Shenyang 110031, China
2 School of Mathematics and Statistics, Changchun University of Technology, Changchun 130012, China
3 State Grid Jilin Electric Power Company Limited Information and Telecommunication Company, Changchun 132400, China
* Authors to whom correspondence should be addressed.
Entropy 2024, 26(2), 140; https://doi.org/10.3390/e26020140
Submission received: 4 January 2024 / Revised: 1 February 2024 / Accepted: 2 February 2024 / Published: 4 February 2024
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

In this paper, a time-varying first-order mixture integer-valued threshold autoregressive process driven by explanatory variables is introduced. The basic probabilistic and statistical properties of this model are studied in depth. We derive estimators using the conditional least squares (CLS) and conditional maximum likelihood (CML) methods and establish the asymptotic properties of the CLS estimator. Furthermore, we employ the CLS and CML score functions to infer the threshold parameter. Additionally, three test statistics are constructed to detect the existence of the piecewise structure and of the explanatory variables. To support our findings, we conduct simulation studies and apply our model to two applications concerning the daily stock trading volumes of VOW.

1. Introduction

An integer-valued time series represents count data reflecting the states of a particular phenomenon at different time points. It finds widespread applications across various real-world domains. For instance, in the field of economics, Ref. [1] employed the integer-valued moving average model to describe the number of transactions in intra-day stock data. In industrial contexts, Ref. [2] utilized the compound Poisson integer-valued autoregressive (INAR) model to characterize the count of workers in the heavy manufacturing industry receiving benefits due to burn-related injuries. Within the realm of insurance actuarial studies, Ref. [3] explored an extension of the classical discrete-time risk model, incorporating an INAR(1) process to capture temporal dependence among claim counts. A straightforward approach to modeling and analyzing count time series is to build an integer-valued autoregressive model from thinning operators. Ever since [4] introduced the first INAR(1) time series model, which relied on the binomial thinning operator [5], it has become prevalent to construct INAR-type models using various thinning operators (see [6,7,8,9,10], among others). This approach has been extensively applied across diverse fields such as epidemiology, social sciences, economics, life sciences, and more.
Even though INAR-type models are commonly used in practical applications, they often fall short when confronted with nonlinear phenomena. For instance, researchers in the field of epidemiology, as exemplified by [11], have detected temporal fluctuations in the incidence rate of epidemics. Therefore, trying to represent such data with an INAR-type model may not be the most appropriate approach. Furthermore, time series data frequently undergo sudden changes that can either temporarily or permanently disrupt their dynamics. In such situations, the conventional INAR model may also prove inadequate in delivering an accurate fit. To address the nonlinear aspects of integer-valued time series data, Ref. [12] introduced the integer-valued self-exciting threshold autoregressive (SETINAR(2,1)) process, which relies on the binomial thinning operator (“∘”); Ref. [13] presented the self-excited threshold Poisson autoregressive (SETPAR) model and applied it to analyze major global earthquake data; Ref. [14] proposed a basic self-exciting threshold binomial AR(1) model (SETBAR(1)) with values across a finite range of counts; Ref. [15] investigated an integer-valued threshold autoregressive process (NBTINAR(1)) based on the negative binomial thinning operator (“∗”) and applied it to analyze the annual counts of major earthquakes with a magnitude of 7 or above from 1900 to 2015. In a comprehensive review, Ref. [16] surveyed threshold models for integer-valued time series with an infinite range and introduced two novel models tailored to cases with a finite range of values. In the latest research, Ref. [17] pointed out that employing different operators before and after the threshold can enhance the model’s ability to explain a wider range of phenomena. As a result, she proposed the following threshold autoregressive model using mixed operators:
$$X_t = \begin{cases} \alpha_1 \circ X_{t-1} + Z_{1,t}, & X_{t-1} \le r, \\ \alpha_2 * X_{t-1} + Z_{2,t}, & X_{t-1} > r, \end{cases}$$
where $\{Z_{1,t}\}$ and $\{Z_{2,t}\}$ are sequences of i.i.d. Poisson and Geometric distributed random variables, respectively. However, it is worth noting that the use of constant autoregressive coefficients in this model ignores the effect of exogenous variables on the observed data. For example, denote the daily trading volume of a specific stock as $X_t$. Clearly, in practice, its autoregressive coefficient is often not static but changes over time with external factors, such as:
  • Market factors: the overall volatility of the stock market can impact the volatility of individual stocks;
  • Industry factors: specific events or trends within an industry can also affect stock price fluctuations;
  • Interest rates and monetary policy: changes in interest rates and monetary policies can have a wide-ranging impact on the stock market;
  • Political and geopolitical factors: political events, elections, international relations, and geopolitical tensions can introduce uncertainty and volatility to the stock market;
  • Emotional and investor behavior: investor sentiment and behavior can significantly influence stock price movements, among other factors.
Inspired by the above discussion, and borrowing the approach of constructing models driven by explanatory variables (see [18,19], just to name a few), in this paper, we propose a first-order time-varying mixture thinning integer-valued threshold autoregressive (TVMTTINAR(1)) process driven by explanatory variables. The definition of the TVMTTINAR(1) model is given, and statistical inference for the proposed model is studied. Furthermore, considering that verifying the existence of a piecewise structure and of explanatory variables is key to model construction, we propose three kinds of test statistics. Finally, the simulation studies and two applications show that the proposed model is very competitive.
The organization of this paper is as follows. Section 2 gives the definition of the proposed model, and some properties are also investigated. In Section 3, the estimates of the model parameters are derived by using the conditional least squares (CLS) and conditional maximum likelihood (CML) methods. Three test statistics are also constructed to test the existence of the piecewise structure and explanatory variables, respectively. Some simulation studies are carried out to investigate the performances of the proposed estimates and test statistics in Section 4. Two real data examples are given in Section 5. Some concluding remarks are given in Section 6. All proofs are postponed to Appendix A.

2. The First-Order Time-Varying Mixture Thinning Integer-Valued Threshold Autoregressive Model

We first introduce the definition of the TVMTTINAR(1) process. Furthermore, we investigate the statistical properties of the proposed model.
Definition 1.
The process $\{X_t\}$ is called the TVMTTINAR(1) process if $X_t$ follows the recursion
$$X_t = \begin{cases} \phi_{1,t} \circ X_{t-1} + \varepsilon_t, & X_{t-1} \le r, \\ \phi_{2,t} * X_{t-1} + \varepsilon_t, & X_{t-1} > r, \end{cases} \quad (1)$$
or
$$X_t = \begin{cases} \phi_{2,t} * X_{t-1} + \varepsilon_t, & X_{t-1} \le r, \\ \phi_{1,t} \circ X_{t-1} + \varepsilon_t, & X_{t-1} > r. \end{cases} \quad (2)$$
For convenience, we write the above two models by the symbol $R$ as follows:
$$X_t = (\phi_{1,t} \circ X_{t-1}) I_{1,t}^R + (\phi_{2,t} * X_{t-1}) I_{2,t}^R + \varepsilon_t, \quad t \in \mathbb{Z}, \quad (3)$$
where:
  • $I_{1,t}^R = \begin{cases} I\{X_{t-1} \le r\}, & R = 0, \\ I\{X_{t-1} > r\}, & R = 1, \end{cases}$ and $I_{2,t}^R = 1 - I_{1,t}^R = \begin{cases} I\{X_{t-1} > r\}, & R = 0, \\ I\{X_{t-1} \le r\}, & R = 1; \end{cases}$ that is, $R = 0$ indicates that TVMTTINAR(1) represents the process (1), and $R = 1$ the process (2).
  • For fixed $i \in \{1, 2\}$, $\phi_{i,t} \in (0, 1)$ satisfies
    $$\log\frac{\phi_{i,t}}{1 - \phi_{i,t}} = Z_t^{\top}\beta_i,$$
    where $\beta_i = (\beta_{i,0}, \beta_{i,1}, \ldots, \beta_{i,q})^{\top}$ are the regression coefficients and $\{Z_t := (1, Z_{1,t}, \ldots, Z_{q,t})^{\top}\}_{t \in \mathbb{Z}}$ is a sequence of stationary, weakly dependent, and observable explanatory variables with a constant mean vector and covariance matrix. For fixed $t$, $Z_t$ is assumed to be independent of $\{X_{t-l}\}_{l \ge 1}$.
  • The binomial thinning operator “∘”, proposed by [5], is defined as $\phi \circ X = \sum_{i=1}^{X} B_i$, where $\phi \in (0, 1)$ and $\{B_i\}$ is a sequence of i.i.d. Bernoulli random variables satisfying $P(B_i = 1) = 1 - P(B_i = 0) = \phi$, with $B_i$ independent of $X$.
  • The negative binomial thinning operator “∗”, proposed by [20], is defined as $\phi * X = \sum_{i=1}^{X} W_i$, where $\phi \in (0, 1)$ and $\{W_i\}$ is a sequence of i.i.d. Geometric random variables with parameter $\frac{\phi}{1+\phi}$, with $W_i$ independent of $X$.
  • $\{\varepsilon_t\}$ is a sequence of i.i.d. Poisson distributed random variables with mean $\lambda$. For fixed $t$, $\varepsilon_t$ is assumed to be independent of $\phi_{1,t} \circ X_{t-1}$, $\phi_{2,t} * X_{t-1}$, and $X_{t-l}$ for all $l \ge 1$.
In contrast to the common SETINAR-type model, the TVMTTINAR model does not require $\beta_1 \ne \beta_2$. This is mainly due to the existence of the mixed thinning operators: even when $\beta_1 = \beta_2$, there is a piecewise structure. However, $\beta_1 = \beta_2$ is a small-probability event, and the model inference problem in this case is not specially considered in this paper. In addition, in practical applications, we can usually choose which of the two TVMTTINAR(1) models ($R = 0$ or $R = 1$) is more applicable based on criteria such as the AIC and BIC. We will conduct a specific analysis through data examples in Section 5.
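To make Definition 1 concrete, here is a minimal simulation sketch in R (the language of the numerical work in Section 3.2) for the $R = 0$ case. The parameter values and the single N(0,1) covariate are illustrative choices, not taken from the paper:

    set.seed(1)
    n <- 500; r <- 6; lambda <- 5
    beta1 <- c(0.1, 0.3)             # regime-1 coefficients (illustrative)
    beta2 <- c(0.5, 0.6)             # regime-2 coefficients (illustrative)
    Z <- cbind(1, rnorm(n + 1))      # Z_t = (1, Z_{1,t}), Z_{1,t} ~ N(0,1)
    logistic <- function(u) exp(u) / (1 + exp(u))
    x <- numeric(n + 1); x[1] <- rpois(1, lambda)
    for (t in 2:(n + 1)) {
      phi1 <- logistic(sum(Z[t, ] * beta1))
      phi2 <- logistic(sum(Z[t, ] * beta2))
      if (x[t - 1] <= r) {
        # binomial thinning phi1 o X_{t-1}: Bernoulli(phi1) survivors
        thin <- rbinom(1, size = x[t - 1], prob = phi1)
      } else {
        # negative binomial thinning phi2 * X_{t-1}: sum of X_{t-1} i.i.d.
        # Geometric(phi2/(1+phi2)) variables, each with mean phi2
        thin <- sum(rgeom(x[t - 1], prob = 1 / (1 + phi2)))
      }
      x[t] <- thin + rpois(1, lambda)  # Poisson(lambda) innovation
    }

Setting $R = 1$ only swaps which operator acts below and above the threshold.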
Next, we are ready to state the strict stationarity and ergodicity of the TVMTTINAR(1) process. Note that, under the assumption that $\beta_i$ satisfies $\sup_{Z \in \mathbb{R}^{q+1}} |\beta_i^{\top} Z| < \infty$ for $i = 1, 2$, there is
$$\phi_{i,t} = \frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)} \in (0, 1), \quad \forall t.$$
Thereby, similar to the method in [19], it is easy to verify that Theorem 3.1 in [21] holds, and the TVMTTINAR(1) process is a strictly stationary and ergodic Markov chain. Moreover, its transition probabilities are given by
$$\begin{aligned} P(z_t, x_{t-1}, x_t) &= P(X_t = x_t \mid X_{t-1} = x_{t-1}, Z_t = z_t) \\ &= P\big(\phi_{1,t} \circ X_{t-1} I_{1,t}^R + \phi_{2,t} * X_{t-1} I_{2,t}^R + \varepsilon_t = x_t \mid X_{t-1} = x_{t-1}, Z_t = z_t\big) \\ &= p(x_{t-1}, x_t, \phi_{1,t}, \lambda) I_{1,t}^R + p(x_{t-1}, x_t, \phi_{2,t}, \lambda) I_{2,t}^R = \sum_{i=1}^{2} p(x_{t-1}, x_t, \phi_{i,t} I_{i,t}^R, \lambda), \quad (4) \end{aligned}$$
where
$$\begin{aligned} p(x_{t-1}, x_t, \phi_{1,t} I_{1,t}^R, \lambda) &= I_{1,t}^R \sum_{m=0}^{\min(x_{t-1}, x_t)} \binom{x_{t-1}}{m} \frac{e^{-\lambda}\lambda^{x_t - m}}{(x_t - m)!}\, \phi_{1,t}^m (1 - \phi_{1,t})^{x_{t-1} - m}, \\ p(x_{t-1}, x_t, \phi_{2,t} I_{2,t}^R, \lambda) &= I_{2,t}^R \sum_{m=0}^{x_t} \frac{\Gamma(x_{t-1} + m)}{\Gamma(x_{t-1})\Gamma(m + 1)} \frac{\phi_{2,t}^m}{(1 + \phi_{2,t})^{x_{t-1} + m}} \frac{e^{-\lambda}\lambda^{x_t - m}}{(x_t - m)!}. \end{aligned}$$
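The two summands of the transition probability map directly onto standard probability kernels: the binomial thinning part is a binomial-Poisson convolution, and the negative binomial thinning part is a negative binomial-Poisson convolution (R's dnbinom(m, size = x, prob = 1/(1+phi)) equals the Gamma-ratio term above). A minimal R sketch of (4), with below indicating whether the binomial-thinning regime is active:

    trans_prob <- function(x_prev, x_curr, phi, lambda, below) {
      if (below) {
        # regime with binomial thinning (phi o X)
        m <- 0:min(x_prev, x_curr)
        sum(dbinom(m, size = x_prev, prob = phi) * dpois(x_curr - m, lambda))
      } else {
        # regime with negative binomial thinning (phi * X): the thinned
        # count is NegBin(size = x_prev, prob = 1/(1 + phi))
        m <- 0:x_curr
        sum(dnbinom(m, size = x_prev, prob = 1 / (1 + phi)) *
              dpois(x_curr - m, lambda))
      }
    }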
Since the existence of the first four moments under observable explanatory variables is a necessary condition for deriving the asymptotic properties of the parameter estimation in Section 3, we then give the following proposition.
Proposition 1.
Let $\{X_t\}$ be the process defined by Definition 1. Then, the first four conditional moments are bounded; that is, $E(X_t^k \mid Z_0, \ldots, Z_t) < \infty$ for $k = 1, 2, 3, 4$.
Next, we consider the moments and conditional moments of TVMTTINAR(1). For simplicity of notation, we denote $E(I_{1,t}^R) = p_1$, $E(I_{2,t}^R) = p_2 = 1 - p_1$, $\mu_1 := E(X_t \mid X_t \le r)$, $\mu_2 := E(X_t \mid X_t > r)$, $\mu_{\phi_1}(h) := E\big(\frac{\exp(Z_{t+h}^{\top}\beta_1)}{1 + \exp(Z_{t+h}^{\top}\beta_1)} \mid X_{t+h-1} \le r\big)$, $\mu_{\phi_2}(h) := E\big(\frac{\exp(Z_{t+h}^{\top}\beta_2)}{1 + \exp(Z_{t+h}^{\top}\beta_2)} \mid X_{t+h-1} > r\big)$, $\phi_i := E\big(\frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)}\big)$ $(i = 1, 2)$, $\sigma_{\phi_i}^2 := \mathrm{Var}\big(\frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)}\big)$ $(i = 1, 2)$, $\sigma_1^2 := \mathrm{Var}(X_t \mid X_t \le r)$, $\sigma_2^2 := \mathrm{Var}(X_t \mid X_t > r)$, $\gamma_h^{(1)} := \mathrm{Cov}(X_t, X_{t+h} \mid X_{t+h} \le r)$, and $\gamma_h^{(2)} := \mathrm{Cov}(X_t, X_{t+h} \mid X_{t+h} > r)$, where $\gamma_0^{(i)} = (\sigma_i^2 + \mu_i^2) - \mu_i E(X_t)$, $i = 1, 2$.
The conditional mean and variance for the TVMTTINAR(1) model can be given by
$$\begin{aligned} E(X_t \mid X_{t-1}, Z_t) &= \sum_{i=1}^{2} \frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)} X_{t-1} I_{i,t}^R + \lambda, \\ E(X_t \mid Z_t) &= \sum_{i=1}^{2} \frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)} p_i \mu_i + \lambda, \\ \mathrm{Var}(X_t \mid X_{t-1}, Z_t) &= \frac{\exp(Z_t^{\top}\beta_1)}{[1 + \exp(Z_t^{\top}\beta_1)]^2} X_{t-1} I_{1,t}^R + \frac{\exp(Z_t^{\top}\beta_2)[1 + 2\exp(Z_t^{\top}\beta_2)]}{[1 + \exp(Z_t^{\top}\beta_2)]^2} X_{t-1} I_{2,t}^R + \lambda, \\ \mathrm{Var}(X_t \mid Z_t) &= \sum_{i=1}^{2} \Big(\frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)}\Big)^2 \big[p_i(\sigma_i^2 + \mu_i^2) - p_i^2 \mu_i^2\big] + \frac{\exp(Z_t^{\top}\beta_1)}{[1 + \exp(Z_t^{\top}\beta_1)]^2} p_1 \mu_1 \\ &\quad + \frac{\exp(Z_t^{\top}\beta_2)[1 + 2\exp(Z_t^{\top}\beta_2)]}{[1 + \exp(Z_t^{\top}\beta_2)]^2} p_2 \mu_2 - 2 \prod_{i=1}^{2} \frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)} p_i \mu_i + \lambda. \end{aligned}$$
The unconditional expressions for the marginal mean and variance of the TVMTTINAR(1) model are
$$\begin{aligned} E(X_t) &= \sum_{i=1}^{2} p_i \phi_i \mu_i + \lambda, \\ \mathrm{Var}(X_t) &= \sum_{i=1}^{2} \Big\{ \phi_i^2 \big[p_i(\sigma_i^2 + \mu_i^2) - p_i^2 \mu_i^2\big] + p_i \sigma_{\phi_i}^2 (\sigma_i^2 + \mu_i^2) \Big\} + p_1 \mu_1 (\phi_1 - \sigma_{\phi_1}^2 - \phi_1^2) + p_2 \mu_2 (\phi_2 + \sigma_{\phi_2}^2 + \phi_2^2) - 2 p_1 p_2 \phi_1 \phi_2 \mu_1 \mu_2 + \lambda. \end{aligned}$$
Then, we have the autocovariance and autocorrelation functions (ACF):
$$\begin{aligned} \mathrm{Cov}(X_t, X_{t+h} \mid Z_{t+1}, \ldots, Z_{t+h}) &= \sum_{i=1}^{2} \frac{\exp(Z_{t+h}^{\top}\beta_i)}{1 + \exp(Z_{t+h}^{\top}\beta_i)} p_i \gamma_{h-1}^{(i)}, \\ \mathrm{Cov}(X_t, X_{t+h}) &= \sum_{i=1}^{2} \mu_{\phi_i}(h)\, p_i \gamma_{h-1}^{(i)}, \\ \rho(h) := \mathrm{Corr}(X_t, X_{t+h}) &= \frac{\sum_{i=1}^{2} \mu_{\phi_i}(h)\, p_i \gamma_{h-1}^{(i)}}{\mathrm{Var}(X_t)}. \end{aligned}$$

3. Parameter Estimation and Testing

Suppose we have a series of observations $\{X_t\}_{t=1}^{n}$ generated from the TVMTTINAR(1) process. Denote $\theta = (\beta_1^{\top}, \beta_2^{\top}, \lambda)^{\top}$ as the parameter vector under the known threshold parameter $r$, and $\eta = (\beta_1^{\top}, \beta_2^{\top}, \lambda, r)^{\top}$ as the parameter vector under the unknown-$r$ case. Their parameter spaces are
$$\Theta_\theta = \{\theta \in \mathbb{R}^{q+1} \times \mathbb{R}^{q+1} \times (0, +\infty)\},$$
$$\Theta_\eta = \{\eta \in \mathbb{R}^{q+1} \times \mathbb{R}^{q+1} \times (0, +\infty) \times \mathbb{N}\}.$$
Furthermore, suppose the true values $\theta_0 = (\beta_{1,0}^{\top}, \beta_{2,0}^{\top}, \lambda_0)^{\top}$ and $\eta_0 = (\beta_{1,0}^{\top}, \beta_{2,0}^{\top}, \lambda_0, r_0)^{\top}$ of $\theta$ and $\eta$ are interior points of $\Theta_\theta$ and $\Theta_\eta$, respectively. In this section, we implement parameter estimation based on two different approaches, namely the conditional least squares (CLS) and conditional maximum likelihood (CML) methods. The objective function is not differentiable with respect to the threshold variable $r$ since $r$ is an integer. Therefore, we first propose solutions to estimate $\theta$ under the assumption that the threshold variable $r$ is known. Later, in Section 3.3, we turn to estimating the threshold variable $r$ based on the estimation methods mentioned before. All the proofs are presented in Appendix A.

3.1. Conditional Least Squares Estimation

Denote
$$g(\theta, X_{t-1}, Z_t) = E(X_t \mid X_{t-1}, Z_t) = \frac{\exp(Z_t^{\top}\beta_1)}{1 + \exp(Z_t^{\top}\beta_1)} X_{t-1} I_{1,t}^R + \frac{\exp(Z_t^{\top}\beta_2)}{1 + \exp(Z_t^{\top}\beta_2)} X_{t-1} I_{2,t}^R + \lambda,$$
$$Q(\theta) := \sum_{t=1}^{n} \big(X_t - g(\theta, X_{t-1}, Z_t)\big)^2 = \sum_{t=1}^{n} U_t^2(\theta),$$
where
$$U_t(\theta) = X_t - \frac{\exp(Z_t^{\top}\beta_1)}{1 + \exp(Z_t^{\top}\beta_1)} X_{t-1} I_{1,t}^R - \frac{\exp(Z_t^{\top}\beta_2)}{1 + \exp(Z_t^{\top}\beta_2)} X_{t-1} I_{2,t}^R - \lambda.$$
Then, the CLS estimator $\hat{\theta}_{CLS} := (\hat{\beta}_{1,CLS}^{\top}, \hat{\beta}_{2,CLS}^{\top}, \hat{\lambda}_{CLS})^{\top}$ of $\theta$ is obtained by minimizing the sum of the squared deviations, that is,
$$\hat{\theta}_{CLS} = \arg\min_{\theta \in \Theta_\theta} Q(\theta). \quad (6)$$
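A minimal R sketch of the CLS criterion and its numerical minimization (reusing logistic() and the simulated x and Z from the sketch in Section 2; the starting value theta0 and the box constraint on $\lambda$ are ad hoc choices):

    Q_cls <- function(theta, x, Z, r, q) {
      beta1 <- theta[1:(q + 1)]
      beta2 <- theta[(q + 2):(2 * q + 2)]
      lambda <- theta[2 * q + 3]
      phi1 <- logistic(drop(Z %*% beta1))
      phi2 <- logistic(drop(Z %*% beta2))
      t <- 2:length(x)
      below <- x[t - 1] <= r                    # I_{1,t}^R with R = 0
      g <- ifelse(below, phi1[t], phi2[t]) * x[t - 1] + lambda
      sum((x[t] - g)^2)                         # Q(theta) = sum of U_t^2
    }
    theta0 <- c(rep(0, 4), 1)                   # crude start for q = 1
    fit_cls <- optim(theta0, Q_cls, x = x, Z = Z, r = 6, q = 1,
                     method = "L-BFGS-B", lower = c(rep(-Inf, 4), 1e-6))
    fit_cls$par                                 # (beta1, beta2, lambda)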
Since the TVMTTINAR(1) model is stationary and ergodic and its first four moments are bounded, using a Taylor expansion and the martingale central limit theorem, the following theorem on the consistency and asymptotic normality of the CLS estimator can be obtained. The detailed proof is presented in Appendix A.
Theorem 1.
Let $\{X_t\}$ be a TVMTTINAR(1) process. Then, the CLS estimator $\hat{\theta}_{CLS}$ is consistent and has the asymptotic distribution
$$\sqrt{n}(\hat{\theta}_{CLS} - \theta_0) \xrightarrow{d} N(0, V^{-1} W V^{-1}), \quad (9)$$
where $V$ and $W$ are square matrices of order $2q + 3$ with the $(j, k)$th elements given by
$$V_{jk} := E\Big[\frac{\partial}{\partial \theta_j} g(\theta, X_{t-1}, Z_t)\, \frac{\partial}{\partial \theta_k} g(\theta, X_{t-1}, Z_t)\Big]\Big|_{\theta_0},$$
$$W_{jk} := E\Big[U_t^2(\theta)\, \frac{\partial}{\partial \theta_j} g(\theta, X_{t-1}, Z_t)\, \frac{\partial}{\partial \theta_k} g(\theta, X_{t-1}, Z_t)\Big]\Big|_{\theta_0}.$$

3.2. Conditional Maximum Likelihood Estimation

For a fixed value of $x_0$, the conditional log-likelihood function for the TVMTTINAR(1) model can be written as
$$L(\theta) := \sum_{t=1}^{n} \ell_t(\theta) = \sum_{t=1}^{n} \log P(z_t, X_{t-1}, X_t),$$
where $P(z_t, X_{t-1}, X_t)$ is the transition probability defined in (4). The CML estimator $\hat{\theta}_{CML} := (\hat{\beta}_{1,CML}^{\top}, \hat{\beta}_{2,CML}^{\top}, \hat{\lambda}_{CML})^{\top}$ of $\theta$ is obtained by maximizing the conditional log-likelihood function, that is,
$$\hat{\theta}_{CML} = \arg\max_{\theta \in \Theta_\theta} L(\theta).$$
Since $\phi_{i,t}$ is nonlinear in $\beta_{i,j}$ for arbitrary $i = 1, 2$ and $j = 0, 1, \ldots, q$, there are no closed-form expressions for the CLS and CML estimators. Numerical solutions can be obtained with the MATLAB (2021b) function “fmincon” or the R (4.2.1) function “optim”. The implementation details and performance are discussed in Section 4.
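As an illustration, here is a minimal R sketch of the conditional log-likelihood, built from trans_prob() of Section 2 and maximized with optim() (fmincon plays the same role in MATLAB); the log(0) guard is a numerical safeguard, not part of the model:

    loglik <- function(theta, x, Z, r, q) {
      beta1 <- theta[1:(q + 1)]
      beta2 <- theta[(q + 2):(2 * q + 2)]
      lambda <- theta[2 * q + 3]
      ll <- 0
      for (t in 2:length(x)) {
        below <- x[t - 1] <= r                  # R = 0 convention
        phi <- logistic(sum(Z[t, ] * (if (below) beta1 else beta2)))
        p <- trans_prob(x[t - 1], x[t], phi, lambda, below)
        ll <- ll + log(max(p, .Machine$double.xmin))
      }
      ll
    }
    fit_cml <- optim(theta0, loglik, x = x, Z = Z, r = 6, q = 1,
                     method = "L-BFGS-B", lower = c(rep(-Inf, 4), 1e-6),
                     control = list(fnscale = -1))   # fnscale = -1: maximize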
Theorem 2.
Let $\{X_t\}$ be a TVMTTINAR(1) process. Assume that the function $E[\ell_t(\theta)]$ has a unique maximizer in the compact parameter space $\Theta_\theta$, that $E\big[\frac{1}{n} \frac{\partial^2 \ell(\theta)}{\partial \theta \partial \theta^{\top}}\big]\big|_{\theta_0}$ is a nonsingular matrix, and that, for a neighborhood $N(\theta_0)$ of $\theta_0$ and any $i, j, k = 1, \ldots, 2q + 3$,
$$\overline{\lim_{n \to \infty}}\ \sup_{\theta \in N(\theta_0)} \Big|\frac{\partial^3 \ell_t(\theta)}{\partial \theta_i \partial \theta_j \partial \theta_k}\Big| < \infty.$$
Then, the CML estimator $\hat{\theta}_{CML}$ is consistent and has the asymptotic distribution
$$\sqrt{n}(\hat{\theta}_{CML} - \theta_0) \xrightarrow{d} N\big(0, J^{-1}(\theta_0) I(\theta_0) J^{-1}(\theta_0)\big),$$
as $n \to \infty$, where $I(\theta_0) = E\big[\frac{\partial \ell_t(\theta)}{\partial \theta} \frac{\partial \ell_t(\theta)}{\partial \theta^{\top}}\big]\big|_{\theta_0}$ and $J(\theta_0) = E\big[\frac{\partial^2 \ell_t(\theta)}{\partial \theta \partial \theta^{\top}}\big]\big|_{\theta_0}$.

3.3. Inference Methods for Threshold r

In this section, we concentrate on the estimation of the threshold variable $r$. Since $r$ is an integer, unlike continuous-type threshold models, integer-type threshold models usually perform a one-by-one search over a fixed interval $[\underline{r}, \bar{r}]$ for the value that optimizes the loss function. In applications, the empirical 10th and 90th quantiles of the sample are typically used as $\underline{r}$ and $\bar{r}$, respectively. The methods commonly used at present are the CLS and CML; a minimal code sketch of this grid search is given after the steps below. For the CLS method, the estimation of the threshold variable $r$ can be achieved based on the following steps:
  • Step 1. Denote $\underline{r}$ and $\bar{r}$ as the 10th and 90th quantiles of the observations $\{X_1, \ldots, X_n\}$; for each $r \in [\underline{r}, \bar{r}] \cap \mathbb{N}$, find $\hat{r}_{CLS}$ such that
    $$\hat{r}_{CLS} = \arg\min_{r \in [\underline{r}, \bar{r}] \cap \mathbb{N}} Q(\theta(r)).$$
  • Step 2. The parameter vector $\hat{\theta}_{CLS}(\hat{r}_{CLS})$ is estimated by (6) under the estimator $\hat{r}_{CLS}$, and all the parameters under the unknown-$r$ case are
    $$\hat{\eta}_{CLS} = (\hat{\theta}_{CLS}^{\top}(\hat{r}_{CLS}), \hat{r}_{CLS})^{\top}.$$
Similarly, the CML estimate of the threshold variable $r$ can be obtained with the following steps:
  • Step 1. Denote $\underline{r}$ and $\bar{r}$ as the 10th and 90th quantiles of the observations $\{X_1, \ldots, X_n\}$; for each $r \in [\underline{r}, \bar{r}] \cap \mathbb{N}$, find $\hat{r}_{CML}$ such that
    $$\hat{r}_{CML} = \arg\max_{r \in [\underline{r}, \bar{r}] \cap \mathbb{N}} L(\theta(r)).$$
  • Step 2. The parameter vector $\hat{\theta}_{CML}(\hat{r}_{CML})$ is estimated by maximizing $L(\theta)$ under the estimator $\hat{r}_{CML}$, and all the parameters under the unknown-$r$ case are
    $$\hat{\eta}_{CML} = (\hat{\theta}_{CML}^{\top}(\hat{r}_{CML}), \hat{r}_{CML})^{\top}.$$
  • Similar procedures can be found in [13,14].
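A minimal R sketch of the CLS version of this grid search, with Q_cls() from Section 3.1 (the CML variant replaces the minimization of Q_cls by the maximization of the log-likelihood):

    r_grid <- seq(ceiling(quantile(x, 0.1)), floor(quantile(x, 0.9)))
    profile_Q <- sapply(r_grid, function(r) {
      optim(theta0, Q_cls, x = x, Z = Z, r = r, q = 1,
            method = "L-BFGS-B", lower = c(rep(-Inf, 4), 1e-6))$value
    })
    r_hat <- r_grid[which.min(profile_Q)]   # threshold estimate r_CLS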

3.4. Testing the Existence of the Piecewise Structure

Threshold models are typically characterized by piecewise linearization, which divides a complex system into regimes using a specific threshold. Therefore, testing for the existence of a piecewise structure is necessary. To date, many researchers have come up with different test statistics. A common high-performance method is to construct a likelihood ratio (LR) test based on the conditional likelihood function; see [14]. However, the LR test cannot be implemented here because the TVMTTINAR(1) model is constructed from two different operators. In this paper, we construct a Wald test statistic to detect the existence of a piecewise structure in the TVMTTINAR(1) model. The null hypothesis and the alternative hypothesis take the form:
$$H_0^{(1)}: \beta_1 = \beta_2 \quad \text{vs.} \quad H_1^{(1)}: \beta_1 \ne \beta_2. \quad (10)$$
Note that, although the TVMTTINAR(1) model does not degenerate to an INAR-type model when $\beta_1 = \beta_2$, the probability of this happening is extremely small and will not be considered here. That is to say, we only establish the existence of the piecewise structure by testing $\beta_1 \ne \beta_2$. A simple idea, learned from [22], is to use the test of the difference between two normal population means, based on the asymptotic normality of some consistent estimators. Then, we construct the Wald test based on the asymptotic distribution (9) of the CLS estimator $\hat{\theta}_{CLS}$ and obtain the following result.
Let $\hat{\Sigma} = \hat{V}^{-1} \hat{W} \hat{V}^{-1}$, where $\hat{V}$ and $\hat{W}$ are square matrices of order $2q + 3$ with the $(j, k)$th elements given by
$$\hat{V}_{jk} := \frac{1}{n} \sum_{t=1}^{n} \frac{\partial}{\partial \theta_j} g(\theta, X_{t-1}, Z_t)\, \frac{\partial}{\partial \theta_k} g(\theta, X_{t-1}, Z_t)\Big|_{\theta = \hat{\theta}_{CLS}},$$
$$\hat{W}_{jk} := \frac{1}{n} \sum_{t=1}^{n} U_t^2(\theta)\, \frac{\partial}{\partial \theta_j} g(\theta, X_{t-1}, Z_t)\, \frac{\partial}{\partial \theta_k} g(\theta, X_{t-1}, Z_t)\Big|_{\theta = \hat{\theta}_{CLS}}.$$
Obviously, they are consistent estimators of $V$ and $W$ (defined in Theorem 1). Then, the statistic for the testing problem (10) is defined by
$$T_n^{(1)} = \frac{I(\hat{\beta}_{1,CLS} - \hat{\beta}_{2,CLS})}{\sqrt{A \hat{\Sigma} A^{\top} / n}},$$
where $I = (1, \ldots, 1)_{1 \times (q+1)}$ and $A = (I, -I)_{1 \times 2(q+1)}$. Furthermore, under $H_0^{(1)}$,
$$T_n^{(1)} \xrightarrow{d} N(0, 1), \quad \text{as } n \to \infty.$$
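Computationally, the statistic is a single quadratic form. In the minimal R sketch below, a zero is appended to the contrast for the $\lambda$ component of $\hat{\Sigma}$ so that the dimensions conform; this padding is our reading of $A$, which the text defines only on the $\beta$-block:

    wald_T1 <- function(beta1_hat, beta2_hat, Sigma_hat, n) {
      k <- length(beta1_hat)
      A <- c(rep(1, k), rep(-1, k), 0)   # contrast I(beta1 - beta2); 0 for lambda
      num <- sum(beta1_hat - beta2_hat)
      num / sqrt(drop(t(A) %*% Sigma_hat %*% A) / n)
    }
    # reject H_0^(1) when the statistic exceeds the critical value
    # used in Section 4 (1.65 at the 0.05 level)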

3.5. Testing the Existence of Explanatory Variables

The existence of observable explanatory variables is what gives the TVMTTINAR(1) model its time-varying characteristics. Once the explanatory variables are not present, the model degrades to a constant-coefficient mixture thinning operator threshold INAR model (MTTINAR(1)), i.e.,
$$X_t = (\alpha_1 \circ X_{t-1}) I_{1,t}^R + (\alpha_2 * X_{t-1}) I_{2,t}^R + \varepsilon_t, \quad t \in \mathbb{Z}, \quad (11)$$
where $\alpha_i \in (0, 1)$, $i = 1, 2$, and the random variables are defined similarly to Definition 1. A more general version of such a problem is to test whether the explanatory variable coefficients $\beta_{i,j}$ $(i = 1, 2,\ j = 1, 2, \ldots, q)$ in each regime are all zeros, i.e.,
$$H_0^{(2)}: \beta_{i,j} = 0,\ i = 1, 2,\ j = 1, \ldots, q \quad \text{vs.} \quad H_1^{(2)}: \text{at least one } \beta_{i,j} \ne 0,\ i \in \{1, 2\},\ 1 \le j \le q. \quad (12)$$
For this, we construct the following two test statistics. The first method is to construct a test statistic using the asymptotic normality of the estimator $\hat{\theta}_{CLS}$. Let $0_{j \times k}$ be a zero matrix with $j$ rows and $k$ columns, $B = (0_{q \times 1}, I_{q \times q})$, and
$$C = \begin{pmatrix} B & 0_{q \times (q+1)} \\ 0_{q \times (q+1)} & B \end{pmatrix}.$$
We construct the test statistic as follows:
$$T_n^{(2)} = n\, \hat{\theta}_{CLS}^{\top} C^{\top} (C \hat{\Sigma} C^{\top})^{-1} C \hat{\theta}_{CLS}.$$
Then, under $H_0^{(2)}$,
$$T_n^{(2)} \xrightarrow{d} \chi^2_{2q}, \quad \text{as } n \to \infty.$$
Another approach is to construct a classical likelihood ratio (LR) test statistic. Let $\tilde{\theta} := (\alpha_1, \alpha_2, \lambda)^{\top}$ be the parameter of the MTTINAR(1) model with the parameter set
$$\Theta_{\tilde{\theta}} = \{\tilde{\theta} \in (0, 1) \times (0, 1) \times (0, +\infty)\}.$$
Then, the LR statistic for the testing problem (12) is defined by
$$T_n^{(3)} = 2\big(\max_{\Theta_\theta} L(\theta) - \max_{\Theta_{\tilde{\theta}}} \tilde{L}(\tilde{\theta})\big),$$
where $\tilde{L}(\tilde{\theta})$ is the conditional log-likelihood function for the MTTINAR(1) model (11). Suppose we have a series of observations $\{x_t\}_{t=1}^{n}$ generated from the MTTINAR(1) process; then, $\tilde{L}(\tilde{\theta})$ is given by
$$\tilde{L}(\tilde{\theta}) := \sum_{t=1}^{n} \tilde{\ell}_t(\tilde{\theta}) = \sum_{t=1}^{n} \log \tilde{P}(x_{t-1}, x_t),$$
$$\tilde{P}(x_{t-1}, x_t) = I_{1,t}^R \sum_{m=0}^{\min(x_{t-1}, x_t)} \binom{x_{t-1}}{m} \frac{e^{-\lambda}\lambda^{x_t - m}}{(x_t - m)!}\, \alpha_1^m (1 - \alpha_1)^{x_{t-1} - m} + I_{2,t}^R \sum_{m=0}^{x_t} \frac{\Gamma(x_{t-1} + m)}{\Gamma(x_{t-1})\Gamma(m + 1)} \frac{\alpha_2^m}{(1 + \alpha_2)^{x_{t-1} + m}} \frac{e^{-\lambda}\lambda^{x_t - m}}{(x_t - m)!}.$$
Furthermore, under $H_0^{(2)}$,
$$T_n^{(3)} \xrightarrow{d} \chi^2_{2q}, \quad \text{as } n \to \infty.$$
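Under $H_0^{(2)}$, the MTTINAR(1) likelihood coincides with the TVMTTINAR(1) likelihood in which the covariate columns of $Z_t$ are zeroed out (then $\phi_{i,t} = \mathrm{logistic}(\beta_{i,0})$ is constant; the slope coordinates become inert and do not affect the maximized value). A minimal R sketch of $T_n^{(3)}$ exploiting this, with loglik() and fit_cml from Section 3.2:

    Z0 <- Z; Z0[, -1] <- 0                   # zeroed covariates: MTTINAR(1) null
    fit0 <- optim(theta0, loglik, x = x, Z = Z0, r = 6, q = 1,
                  method = "L-BFGS-B", lower = c(rep(-Inf, 4), 1e-6),
                  control = list(fnscale = -1))
    T3 <- 2 * (fit_cml$value - fit0$value)   # compare with qchisq(0.95, 2 * q)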

4. Simulation Studies

To evaluate the finite-sample performance of the proposed inference methods and test statistics, we conducted extensive simulation studies, split into the following four parts. In the first two parts, we considered the performance of the CLS and CML estimators in the two cases where the threshold $r$ is known and unknown. In the third and fourth parts, we mainly focused on the performance of the proposed test statistics through empirical sizes and powers.
To get started, we first introduce the following models applied in Section 4.1 and Section 4.2. The models are divided into A-type and B-type models, which represent the $R = 0$ and $R = 1$ TVMTTINAR(1) models, respectively. The two types of models use similar parameters. To save space, we describe the A-type models, with the parentheses indicating the corresponding B-type models:
  • Model A1 (B1): generated from the TVMTTINAR(1) process (3) with $R = 0$ ($R = 1$), $\lambda = 5$, $r = 6$, $(\beta_{1,0}, \beta_{1,1}) = (0.1, 0.3)$, $(\beta_{2,0}, \beta_{2,1}) = (0.5, 0.6)$. The explanatory variable $Z_{1,t}$ is generated from the i.i.d. normal distribution $N(0, 1)$.
  • Model A2 (B2): generated from the TVMTTINAR(1) process (3) with $R = 0$ ($R = 1$), $\lambda = 5$, $r = 6$, $(\beta_{1,0}, \beta_{1,1}) = (0.3, 0.3)$, $(\beta_{2,0}, \beta_{2,1}) = (0.5, 0.6)$. The explanatory variable $Z_{1,t}$ is generated from an AR(1) process, i.e., $Z_{1,t} = 0.5 Z_{1,t-1} + \epsilon_t$ with $Z_{1,0} = 0$, $\epsilon_t \sim N(0, 1)$.
  • Model A3 (B3): generated from the TVMTTINAR(1) process (3) with $R = 0$ ($R = 1$), $\lambda = 5$, $r = 6$, $(\beta_{1,0}, \beta_{1,1}, \beta_{1,2}) = (0.1, 0.5, 0.3)$, $(\beta_{2,0}, \beta_{2,1}, \beta_{2,2}) = (0.3, 0.5, 0.6)$. The explanatory variable $Z_{1,t}$ is generated from an AR(1) process, i.e., $Z_{1,t} = 0.5 Z_{1,t-1} + \epsilon_t$ with $Z_{1,0} = 0$, $\epsilon_t \sim N(0, 1)$; $Z_{2,t}$ is generated from a seasonal series, i.e., $Z_{2,t} = \sin(2\pi t / 12) + \epsilon_t$ with $\epsilon_t \sim N(0, 0.25)$.
All simulations were implemented in MATLAB. The sample sizes considered in all simulations were $n = 200, 500, 1000$. For each model, the value of $r$ was chosen such that the observations in each regime comprised at least 20% of the total sample size. The empirical results displayed in the tables and box plots, that is, the empirical biases and mean squared errors (MSEs), were computed over 10,000 replications.

4.1. Simulation Study When r Is Known

Table 1 and Table 2 report the bias and MSE of the CLS and CML estimators for Models A1–B3 when $r$ is known. It is easy to see that all the simulation results improved as $n$ increased, which implies that the two estimation methods lead to good and consistent estimators when $r$ is known. It is worth mentioning that, although no asymptotic result for the CML estimator is verified in the main conclusions, the simulations showed that the CML estimators are consistent. In addition, $\hat{\theta}_{CML}$ has smaller bias and MSE, which means that $\hat{\theta}_{CML}$ is better than $\hat{\theta}_{CLS}$.
For the sake of intuition, we also give the box plots and QQ plots of the CLS and CML estimators. Figure 1 plots the bias of 10,000 CLS and CML simulation estimates for Models A1 and B1. Note that the box plots are symmetric and centered on zero bias; both the bias and MSE for the CML estimators are smaller than for the CLS estimators, which is consistent with the previous conclusions. Figure 2 shows the QQ plots of the CLS and CML estimators for Models A1 and B1 with the sample size $n = 1000$. It is easy to see that the CLS and CML estimators are asymptotically normal for all parameters, even the CML estimator, whose asymptotic normality was not formally verified here. Similar results were obtained for the remaining models, and the figures are omitted to save space.

4.2. Simulation Study When r Is Unknown

Table 3 and Table 4 report the performance of the proposed CLS and CML estimators of Section 3.3 for Models A1–B3 when $r$ is unknown. It is easy to draw the following conclusions from the tabular results. For small sample sizes (such as $n = 200$), the bias and MSE of the estimators are still relatively large. However, as the sample size increases, this deviation decreases very quickly, mainly because the accuracy of the threshold estimation improves greatly with the sample size. Moreover, the CML estimator demonstrates a noticeable advantage over the CLS estimator. Nevertheless, this does not imply that the CLS estimator lacks any advantages. Table 5 reports the percentage of replications correctly identifying $r$ (Frequency) and the average time (s) across 10,000 replications. Although neither method has a closed-form solution, the CLS estimation method retains a clear advantage in computation speed.

4.3. Empirical Sizes and Powers of the Wald Test

Some simulations were conducted to investigate the performance of the Wald test $T_n^{(1)}$. We selected the significance level $\alpha = 0.05$ (the associated critical value was 1.65). For analyzing the empirical size, we first introduce the following two time-varying integer-valued autoregressive (TVINAR) models:
$$\text{TVINAR}(1)\text{-B}: \quad X_t = \frac{\exp(Z_t^{\top}\beta_1)}{1 + \exp(Z_t^{\top}\beta_1)} \circ X_{t-1} + \varepsilon_t, \quad (13)$$
$$\text{TVINAR}(1)\text{-G}: \quad X_t = \frac{\exp(Z_t^{\top}\beta_2)}{1 + \exp(Z_t^{\top}\beta_2)} * X_{t-1} + \varepsilon_t, \quad (14)$$
where the explanatory variable $Z_{1,t}$ is generated from an AR(1) process, i.e., $Z_{1,t} = 0.5 Z_{1,t-1} + \epsilon_t$ with $Z_{1,0} = 0$, $\epsilon_t \sim N(0, 1)$, and $Z_{2,t}$ is generated from a seasonal series, i.e., $Z_{2,t} = \sin(2\pi t / 12) + \epsilon_t$ with $\epsilon_t \sim N(0, 0.25)$.
For analyzing the empirical size, Models $T_{11}$–$T_{22}$ were considered. For analyzing the empirical power, Models $T_{31}$–$T_{32}$ were considered:
  • Model $T_{11}$: generated from the TVINAR(1)-B process (13) with $(\beta_1^{\top}, \lambda) = (\beta_{1,0}, \beta_{1,1}, \beta_{1,2}, \lambda) = (0.1, 0.5, 0.3, 5)$.
  • Model $T_{12}$: generated from the TVINAR(1)-B process (13) with $(\beta_1^{\top}, \lambda) = (\beta_{1,0}, \beta_{1,1}, \beta_{1,2}, \lambda) = (0.7, 0.8, 0.6, 2)$.
  • Model $T_{21}$: generated from the TVINAR(1)-G process (14) with $(\beta_2^{\top}, \lambda) = (\beta_{2,0}, \beta_{2,1}, \beta_{2,2}, \lambda) = (0.3, 0.3, 0.6, 2)$.
  • Model $T_{22}$: generated from the TVINAR(1)-G process (14) with $(\beta_2^{\top}, \lambda) = (\beta_{2,0}, \beta_{2,1}, \beta_{2,2}, \lambda) = (0.6, 0.8, 0.5, 7)$.
  • Model $T_{31}$: generated from the TVMTTINAR(1) process (3) with $R = 0$, $\lambda = 5$, $r = 6$, $\beta_1 = (\beta_{1,0}, \beta_{1,1}, \beta_{1,2}) = (0.1, 0.5, 0.3)$, $\beta_2 = (\beta_{2,0}, \beta_{2,1}, \beta_{2,2}) = (0.3, 0.3, 0.6)$. The explanatory variable $Z_{1,t}$ is generated from an AR(1) process, i.e., $Z_{1,t} = 0.5 Z_{1,t-1} + \epsilon_t$ with $Z_{1,0} = 0$, $\epsilon_t \sim N(0, 1)$; $Z_{2,t}$ is generated from a seasonal series, i.e., $Z_{2,t} = \sin(2\pi t / 12) + \epsilon_t$ with $\epsilon_t \sim N(0, 0.25)$.
  • Model $T_{32}$: generated from the TVMTTINAR(1) process (3) with $R = 1$, $\lambda = 3$, $r = 6$, $\beta_1 = (\beta_{1,0}, \beta_{1,1}, \beta_{1,2}) = (0.4, 0.8, 0.3)$, $\beta_2 = (\beta_{2,0}, \beta_{2,1}, \beta_{2,2}) = (0.3, 0.7, 0.4)$. The explanatory variable $Z_{1,t}$ is generated from an AR(1) process, i.e., $Z_{1,t} = 0.5 Z_{1,t-1} + \epsilon_t$ with $Z_{1,0} = 0$, $\epsilon_t \sim N(0, 1)$; $Z_{2,t}$ is generated from a seasonal series, i.e., $Z_{2,t} = \sin(2\pi t / 12) + \epsilon_t$ with $\epsilon_t \sim N(0, 0.25)$.
The results are reported in Table 6. As can be seen from Table 6, for the empirical sizes, the Wald test $T_n^{(1)}$ gave satisfactory performance, and the empirical sizes for Models $T_{11}$–$T_{22}$ came closer to the significance level $\alpha = 0.05$ as the sample size increased. For the empirical powers, Table 6 also indicates that $T_n^{(1)}$ attained high values in almost every case. The above discussion shows the success of the proposed Wald test $T_n^{(1)}$ in detecting the existence of the piecewise structure.

4.4. Empirical Sizes and Powers of the Proposed Test in Section 3.5

Similarly, we investigated the performance of the test $T_n^{(2)}$ and the LR test $T_n^{(3)}$ with the following models:
  • Model $T_{41}$: generated from the MTTINAR(1) process (11) with $(\alpha_1, \alpha_2, \lambda, r, R) = (0.6225, 0.4502, 5, 6, 0)$.
  • Model $T_{42}$: generated from the MTTINAR(1) process (11) with $(\alpha_1, \alpha_2, \lambda, r, R) = (0.4502, 0.6225, 3, 4, 1)$.
  • Model $T_{51}$: generated from the TVMTTINAR(1) process (3) with $R = 0$, $\lambda = 5$, $r = 6$, $\beta_1 = (\beta_{1,0}, \beta_{1,1}) = (0.5, 0.3)$, $\beta_2 = (\beta_{2,0}, \beta_{2,1}) = (0.2, 0.2)$. The explanatory variable $Z_{1,t}$ is generated from the i.i.d. uniform distribution $U(10, 1)$.
  • Model $T_{52}$: generated from the TVMTTINAR(1) process (3) with $R = 1$, $\lambda = 5$, $r = 6$, $\beta_1 = (\beta_{1,0}, \beta_{1,1}) = (0.5, 0.3)$, $\beta_2 = (\beta_{2,0}, \beta_{2,1}) = (0.2, 0.2)$. The explanatory variable $Z_{1,t}$ is generated from the i.i.d. uniform distribution $U(10, 1)$.
Models $T_{41}$ and $T_{42}$ were used to analyze the empirical sizes; Models $T_{51}$ and $T_{52}$ were applied to analyze the empirical powers. We selected the significance level $\alpha = 0.05$ (since $q = 1$, the associated critical value was 5.991). Table 7 shows the empirical size and power results. It is easy to see that, for the empirical sizes, both tests gave satisfactory performance, and the empirical sizes for Models $T_{41}$–$T_{42}$ came closer to the significance level $\alpha = 0.05$ as the sample size increased. For the empirical powers, both proposed tests were increasingly close to 1 as the sample size increased in each case. In addition, although both tests can successfully detect the existence of the explanatory variables, the LR test performed significantly better.

5. Real Data Example

In this section, we utilize the TVMTTINAR(1) model to fit the daily stock trading volume dataset of an automotive company, Volkswagen Corporation (VOW). Some explanation is in order for the selection of the factors affecting the trading volume of stocks in the automotive industry. The volume of stock trading in the automotive industry can be affected by a number of factors, the most well known of which are the state of the economy and the state of the oil market. After all, the state of the economy largely determines consumers’ ability and willingness to buy. The fluctuation of oil prices, on the one hand, increases production costs and, on the other hand, affects consumers’ willingness to buy traditional fuel vehicles, thereby directly or indirectly affecting the automobile industry.
Economic conditions can be represented by some stock market indices. We selected the Dow Jones Industrial Average index (DJI) here. The DJI can reflect the overall performance of the stock market and is also used as an indicator of the health of the economy. Therefore, it is reasonable to choose the DJI stock data series as an economic indicator. On the other hand, oil prices vary widely between countries and regions, making it difficult to find a uniform measure; hence, the Crude Oil (Co) stock data series was selected here to represent the oil market.
All datasets were originally downloaded from the Yahoo Finance web site (https://hk.finance.yahoo.com/, accessed on 7 December 2010). Both the DJI and Co stock data series include open, high, low, close, and adjusted (Adj) close prices, of which the Adj close price datasets were selected as the explanatory variables for the analysis. In addition, the two explanatory variable series were differenced to reflect fluctuations in the economy and the oil market.

5.1. Volkswagen Corporation Daily Stock Trading Volume Data

We first considered the VOW daily stock trading volume dataset, which consists of 281 observations starting on 7 December 2010 and ending on 11 January 2012. As the data values are relatively large, for convenience of calculation, we analyzed the data in units of $2 \times 10^5$ shares of trading volume. Figure 3 shows the sample path and the sample autocorrelation (ACF) of the observations, where the first row shows the sample path and ACF of the VOW daily stock trading volume dataset and the second row shows the sample paths of the differenced DJI and Co Adj close prices.
Next, we used the TVMTTINAR(1) model and the following integer-valued threshold autoregressive models to fit the VOW corporation dataset and compare different models via the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC):
  • SETINAR(2,1) model [12].
  • NBTINAR(1) model [15].
  • RCTINAR(1) model [11].
  • BiNB-MTTINAR(1) ( R = 0 ) model [17].
  • BiNB-MTTINAR(1) ( R = 1 ) model [17].
For each of the above models, we computed the CML estimates of the parameters and the threshold $r$ over the range $r \in \{4, 5, \ldots, 12\}$, where 4 and 12 are the 10th and 90th quantiles of the data. Furthermore, the standard error (SE) of $\hat{\theta}_{CML}$, the root mean square of the differences between the observations and forecasts (RMS), and the AIC and BIC values are also given. The standard errors of the CML estimator were obtained as the square roots of the diagonal elements of the inverse of the negative Hessian of the log-likelihood evaluated at the CML estimates. The RMS is defined as follows:
$$\mathrm{RMS} = \sqrt{\frac{1}{n-1} \sum_{t=2}^{n} \Big( X_t - \sum_{i=1}^{2} \frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)} X_{t-1} I_{i,t}^R - \lambda \Big)^2}.$$
The fitting results are summarized in Table 8. As seen from the results presented in Table 8, the proposed TVMTTINAR(1) ( R = 0 ) model outperformed the other SETINAR models when considering the AIC as an information criterion. However, due to the excessive number of parameters, when considering the BIC as an information criterion, the model appeared slightly less favorable. Additionally, the TVMTTINAR(1) ( R = 0 ) model had the lowest RMS value. Taking all factors into account, the TVMTTINAR(1) ( R = 0 ) model was highly competitive, making it reasonable to apply it for fitting this VOW dataset.
Then, we computed the (standardized) Pearson residuals $\mathrm{Pr}_t(\hat{\theta})$ to check whether the fitted model was adequate for the data:
$$\mathrm{Pr}_t(\hat{\theta}) = \frac{X_t - \sum_{i=1}^{2} \frac{\exp(Z_t^{\top}\beta_i)}{1 + \exp(Z_t^{\top}\beta_i)} X_{t-1} I_{i,t}^R - \lambda}{\sqrt{\frac{\exp(Z_t^{\top}\beta_1)}{[1 + \exp(Z_t^{\top}\beta_1)]^2} X_{t-1} I_{1,t}^R + \frac{\exp(Z_t^{\top}\beta_2)[1 + 2\exp(Z_t^{\top}\beta_2)]}{[1 + \exp(Z_t^{\top}\beta_2)]^2} X_{t-1} I_{2,t}^R + \lambda}}.$$
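A minimal R sketch of these residuals for an $R = 0$ fit, using the conditional mean and variance from Section 2 (logistic(), x, and Z as in the earlier sketches; the parameter unpacking matches the $q = 1$ layout used there):

    pearson_resid <- function(beta1, beta2, lambda, x, Z, r) {
      t <- 2:length(x)
      phi1 <- logistic(drop(Z %*% beta1))[t]
      phi2 <- logistic(drop(Z %*% beta2))[t]
      below <- x[t - 1] <= r
      mu <- ifelse(below, phi1, phi2) * x[t - 1] + lambda
      # conditional variance: phi(1-phi)X under binomial thinning,
      # phi(1+phi)X under negative binomial thinning, plus lambda
      v <- ifelse(below, phi1 * (1 - phi1), phi2 * (1 + phi2)) * x[t - 1] + lambda
      (x[t] - mu) / sqrt(v)
    }
    res <- pearson_resid(fit_cml$par[1:2], fit_cml$par[3:4], fit_cml$par[5],
                         x, Z, r = 6)
    c(mean(res), var(res))   # values near (0, 1) indicate an adequate fit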
We proceeded to apply the TVMTTINAR(1) ( R = 0 ) model to fit this dataset and computed some additional fitting-related information beyond Table 8. These details are summarized in Table 9, encompassing the proportion of samples below the threshold value relative to the total sample size (rate), the test statistic T n ( 1 ) for testing the presence of the segmented structure, the test statistic T n ( 3 ) for testing the existence of explanatory variables, along with the mean and variance of the Pearson residuals.
After computing, the proportion of estimated values below the threshold was 0.5018, indicating reliable results on both sides of the threshold. $T_n^{(1)} = 2.2343$, surpassing the critical value of 1.65 at the 0.05 significance level, leading us to reject the null hypothesis $H_0^{(1)}: \beta_1 = \beta_2$ and confirming the presence of a piecewise structure. $T_n^{(3)} = 35.2307$, exceeding the critical value of 9.487 at the 0.05 significance level, compelling us to accept the alternative hypothesis $H_1^{(2)}$: at least one $\beta_{i,j} \ne 0$, $i \in \{1, 2\}$, $1 \le j \le q$. Although the TVMTTINAR(1) ($R = 0$) model in Table 8 demonstrates superiority in terms of the AIC and RMS, its margin of superiority is not large. However, it is important to highlight that these test results fully justify the introduction of the mixture thinning operators and the observable explanatory variables. This provides another level of evidence supporting the suitability of the TVMTTINAR(1) ($R = 0$) model for fitting this dataset, thus highlighting the model’s competitiveness. Additionally, the fitted model’s Pearson residuals exhibited a mean of 0.0041 and a variance of 1.1116, indicating a well-balanced fit.
Finally, Figure 4 shows the diagnostic checking plots for our fitted model, including the (a) standardized residuals, (b) histogram of the standardized residuals, (c) ACF plot of the residuals, and (d) PACF plot of the residuals. From Figure 4, it can be seen that the ACF and PACF of the Pearson residuals are close to zero, which reveals that our fitted model is suitable.

5.2. Another VOW Daily Stock Trading Volume Dataset

Similar to the first data analysis, we considered another VOW daily stock trading volume dataset, which also consists of 281 observations, starting on 4 May 2010 and ending on 6 June 2011. As before, given the relatively large data values, we analyzed the data in units of $2 \times 10^5$ shares of trading volume. Figure 5 shows the sample path and the sample autocorrelation (ACF) of the observations.
Next, we compared the performances of SETINAR(2,1), NBTINAR(1), RCTINAR(1), and BiNB-MTTINAR(1) versus TVMTTINAR(1) for this series. The estimation results are shown in Table 10. Clearly, for this dataset, the TVMTTINAR(1) ($R = 1$) model demonstrated superior performance in terms of the AIC and RMS. Additionally, it is worth noting that, while both analyzed series belong to the VOW dataset and in particular overlap (from 7 December 2010 to 6 June 2011), the optimal model selection changed. This indicates the presence of a change point in this period, suggesting a need for further discussion and analysis in subsequent research.
Then, we also summarize the rate, the test statistics $T_n^{(1)}$ and $T_n^{(3)}$, and the mean and variance of the Pearson residuals in Table 11. After computing, the proportion of estimated values below the threshold was 0.5267, indicating reliable results on both sides of the threshold. $T_n^{(1)} = 1.6682$, surpassing the critical value of 1.65 at the 0.05 significance level, leading us to reject the null hypothesis $H_0^{(1)}: \beta_1 = \beta_2$ and confirming the presence of a piecewise structure. $T_n^{(3)} = 63.9769$, exceeding the critical value of 9.487 at the 0.05 significance level, compelling us to accept the alternative hypothesis $H_1^{(2)}$: at least one $\beta_{i,j} \ne 0$, $i \in \{1, 2\}$, $1 \le j \le q$. Also, it is worth noting that the TVMTTINAR(1) ($R = 1$) model in Table 10 exhibited some superiority in terms of the AIC and RMS, although the margin was not large. The aforementioned test results thoroughly justify the need to introduce the mixture thinning operators and the observable explanatory variables. These findings provide further evidence that the TVMTTINAR(1) ($R = 1$) model is highly suitable for fitting this dataset, thereby highlighting its competitiveness. Additionally, the fitted model’s Pearson residuals exhibited a mean of 0.0015 and a variance of 1.0148, indicating a well-balanced fit.
Finally, Figure 6 shows the diagnostic checking plots for our fitted model, including the (a) standardized residuals, (b) histogram of the standardized residuals, (c) ACF plot of the residuals, and (d) PACF plot of the residuals. From Figure 6, it can be seen that the ACF and PACF of the Pearson residuals are close to zero, which reveals that our fitted model is suitable.

6. Conclusions

This article introduces a first-order time-varying coefficient mixture thinning threshold integer-valued autoregressive process. The process was proven to be stationary and ergodic. We investigated the CLS and CML techniques for parameter estimation, and the asymptotic properties of the estimators were demonstrated. Two methods were suggested for estimating the unknown threshold parameter $r$, based on the CLS and CML score functions. Additionally, we constructed a Wald test statistic to check for the existence of the piecewise structure and two test statistics to test the existence of the explanatory variables. Finally, we successfully applied the TVMTTINAR(1) model to Volkswagen Corporation’s daily stock trading volume datasets. From the real data studies, potential problems for future research include extending the results to a mixture thinning threshold INAR model with random coefficients and studying a TVMTTINAR(1) model with change points. These will remain the subject of future research.

Author Contributions

Conceptualization, D.S. and J.Z.; methodology, D.S. and X.W.; software, D.S. and Y.Z.; validation, D.S., J.Z. and X.W.; formal analysis, D.S.; data curation, Y.Z.; writing—original draft preparation, D.S., J.Z. and X.W.; writing—review and editing, D.S., J.Z. and X.W.; visualization, Y.Z.; supervision, D.S.; project administration, D.W.; funding acquisition, D.W., J.Z. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 12271231, 12101417), the Natural Science Foundation of Jilin Province (No. YDZJ202301ZYTS384), and the Liaoning Provincial Social Science Foundation (No. L22ZD065).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Author Yiran Zhai was employed by the company State Grid Jilin Electric Power Company Limited Information and Telecommunication Company. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

Proof of Proposition 1.
Denote $\phi_{\max,t} = \max\{\phi_{1,t}, \phi_{2,t}\}$ for all $t$ and $\phi_{\max} = \sup_t \phi_{\max,t} < 1$. To simplify the notation, denote $\mathcal{Z}_t = (Z_0, \ldots, Z_t)$. Compute to see that, under the stationary distribution,
$$\begin{aligned} E(X_t \mid \mathcal{Z}_t) &= E[(\phi_{1,t} \circ X_{t-1}) I_{1,t}^R \mid \mathcal{Z}_t] + E[(\phi_{2,t} * X_{t-1}) I_{2,t}^R \mid \mathcal{Z}_t] + \lambda \\ &= E[E((\phi_{1,t} \circ X_{t-1}) I_{1,t}^R \mid X_{t-1}) \mid \mathcal{Z}_t] + E[E((\phi_{2,t} * X_{t-1}) I_{2,t}^R \mid X_{t-1}) \mid \mathcal{Z}_t] + \lambda \\ &\le \phi_{\max,t} E(X_{t-1} \mid \mathcal{Z}_t) + \lambda \le \cdots \le (\phi_{\max})^t E(X_0 \mid \mathcal{Z}_t) + \lambda \sum_{i=0}^{t-1} (\phi_{\max})^i. \quad \text{(A1)} \end{aligned}$$
Similarly, we have
$$\begin{aligned} E(X_t^2 \mid \mathcal{Z}_t) &= E\big[(\phi_{1,t} \circ X_{t-1})^2 I_{1,t}^R + (\phi_{2,t} * X_{t-1})^2 I_{2,t}^R + \varepsilon_t^2 + 2(\phi_{1,t} \circ X_{t-1} I_{1,t}^R \varepsilon_t + \phi_{2,t} * X_{t-1} I_{2,t}^R \varepsilon_t) \mid \mathcal{Z}_t\big] \\ &\le (\phi_{1,t} - \phi_{1,t}^2 + 2\phi_{1,t}\lambda) E(X_{t-1} \mid \mathcal{Z}_t) + \phi_{1,t}^2 E(X_{t-1}^2 \mid \mathcal{Z}_t) + (\phi_{2,t} + \phi_{2,t}^2 + 2\phi_{2,t}\lambda) E(X_{t-1} \mid \mathcal{Z}_t) + \phi_{2,t}^2 E(X_{t-1}^2 \mid \mathcal{Z}_t) + \lambda + \lambda^2 \\ &\le u_{\max,t} E(X_{t-1} \mid \mathcal{Z}_t) + \phi_{\max,t}^2 E(X_{t-1}^2 \mid \mathcal{Z}_t) + \lambda + \lambda^2, \end{aligned}$$
where $u_{\max,t} = \max\{\phi_{1,t} - \phi_{1,t}^2 + 2\phi_{1,t}\lambda,\ \phi_{2,t} + \phi_{2,t}^2 + 2\phi_{2,t}\lambda\}$ for all $t$, with $u_{\max} = \sup_t u_{\max,t}$. If $t = 1$, $E(X_1^2 \mid \mathcal{Z}_t) \le u_{\max} E(X_0 \mid \mathcal{Z}_t) + \phi_{\max}^2 E(X_0^2 \mid \mathcal{Z}_t) + \lambda + \lambda^2 < \infty$. Else, if $t \ge 2$,
$$E(X_t^2 \mid \mathcal{Z}_t) \le \sum_{i=0}^{t-1} u_{\max} \phi_{\max}^{t-1+i} E(X_0 \mid \mathcal{Z}_t) + \lambda \sum_{i=0}^{t-2} u_{\max} \phi_{\max}^{t-2+i} + (\lambda + \lambda^2) \sum_{i=0}^{t-1} \phi_{\max}^{2i} < \infty. \quad \text{(A2)}$$
A similar, but tedious, calculation shows that $E(X_t^3 \mid \mathcal{Z}_t) < \infty$ and $E(X_t^4 \mid \mathcal{Z}_t) < \infty$. Combining (A1) and (A2), one can see that $E(X_t^k \mid \mathcal{Z}_t) < \infty$ for $k = 1, 2, 3, 4$. □
Proof of the moment results in Section 2.
The expressions for $E(X_t \mid X_{t-1}, Z_t)$, $E(X_t \mid Z_t)$, and $\mathrm{Var}(X_t \mid X_{t-1}, Z_t)$ are straightforward to verify. We prove the other results on the moments and conditional moments:
(1) The variance of X t under the condition Z t is given by
$$\mathrm{Var}(X_t \mid Z_t) = \mathrm{Var}[I_{1,t}^R (\phi_{1,t} \circ X_{t-1}) \mid Z_t] + \mathrm{Var}[I_{2,t}^R (\phi_{2,t} * X_{t-1}) \mid Z_t] + 2\,\mathrm{Cov}\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}), I_{2,t}^R (\phi_{2,t} * X_{t-1}) \mid Z_t\big) + \lambda =: \mathrm{I} + \mathrm{II} + \mathrm{III} + \lambda. \quad \text{(A3)}$$
A direct calculation shows
$$\begin{aligned} \mathrm{I} &= \mathrm{Var}\big[E\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}) \mid X_{t-1}\big) \mid Z_t\big] + E\big[\mathrm{Var}\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}) \mid X_{t-1}\big) \mid Z_t\big] \\ &= \phi_{1,t}^2 \mathrm{Var}(I_{1,t}^R X_{t-1}) + \phi_{1,t}(1 - \phi_{1,t}) E(I_{1,t}^R X_{t-1}) \\ &= \Big(\frac{\exp(Z_t^{\top}\beta_1)}{1 + \exp(Z_t^{\top}\beta_1)}\Big)^2 \big[p_1(\sigma_1^2 + \mu_1^2) - p_1^2 \mu_1^2\big] + \frac{\exp(Z_t^{\top}\beta_1)}{[1 + \exp(Z_t^{\top}\beta_1)]^2} p_1 \mu_1. \quad \text{(A4)} \end{aligned}$$
Similarly, we have
$$\begin{aligned} \mathrm{II} &= \mathrm{Var}\big[E\big(I_{2,t}^R (\phi_{2,t} * X_{t-1}) \mid X_{t-1}\big) \mid Z_t\big] + E\big[\mathrm{Var}\big(I_{2,t}^R (\phi_{2,t} * X_{t-1}) \mid X_{t-1}\big) \mid Z_t\big] \\ &= \phi_{2,t}^2 \mathrm{Var}(I_{2,t}^R X_{t-1}) + \phi_{2,t}(1 + \phi_{2,t}) E(I_{2,t}^R X_{t-1}) \\ &= \Big(\frac{\exp(Z_t^{\top}\beta_2)}{1 + \exp(Z_t^{\top}\beta_2)}\Big)^2 \big[p_2(\sigma_2^2 + \mu_2^2) - p_2^2 \mu_2^2\big] + \frac{\exp(Z_t^{\top}\beta_2)[1 + 2\exp(Z_t^{\top}\beta_2)]}{[1 + \exp(Z_t^{\top}\beta_2)]^2} p_2 \mu_2, \quad \text{(A5)} \end{aligned}$$
and
$$\mathrm{III} = 2\,\mathrm{Cov}\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}), I_{2,t}^R (\phi_{2,t} * X_{t-1}) \mid Z_t\big) = -2\, \frac{\exp(Z_t^{\top}\beta_1)}{1 + \exp(Z_t^{\top}\beta_1)} \frac{\exp(Z_t^{\top}\beta_2)}{1 + \exp(Z_t^{\top}\beta_2)} p_1 p_2 \mu_1 \mu_2. \quad \text{(A6)}$$
Then, Var ( X t | Z t ) follows by replacing (A4), (A5) and (A6) in (A3) and some algebra.
(2) The variance of X t is given by
$$\mathrm{Var}(X_t) = \mathrm{Var}[I_{1,t}^R (\phi_{1,t} \circ X_{t-1})] + \mathrm{Var}[I_{2,t}^R (\phi_{2,t} * X_{t-1})] + 2\,\mathrm{Cov}\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}), I_{2,t}^R (\phi_{2,t} * X_{t-1})\big) + \lambda =: \mathrm{I} + \mathrm{II} + \mathrm{III} + \lambda. \quad \text{(A7)}$$
Similar to the derivation of Var ( X t | Z t ) ,
$$\begin{aligned} \mathrm{I} &= \mathrm{Var}\big[E\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}) \mid X_{t-1}\big)\big] + E\big[\mathrm{Var}\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}) \mid X_{t-1}\big)\big] \\ &= \mathrm{Var}\Big(I_{1,t}^R X_{t-1} \int f(\phi_{1,t})\, \phi_{1,t}\, d\phi_{1,t}\Big) + E\big[I_{1,t}^R X_{t-1} E[\phi_{1,t}(1 - \phi_{1,t})] + I_{1,t}^R X_{t-1}^2 \mathrm{Var}(\phi_{1,t})\big] \\ &= \phi_1^2 \big[p_1(\sigma_1^2 + \mu_1^2) - p_1^2 \mu_1^2\big] + p_1 \mu_1 (\phi_1 - \sigma_{\phi_1}^2 - \phi_1^2) + p_1 \sigma_{\phi_1}^2 (\sigma_1^2 + \mu_1^2). \quad \text{(A8)} \end{aligned}$$
Similarly, we have
$$\begin{aligned} \mathrm{II} &= \mathrm{Var}\big[E\big(I_{2,t}^R (\phi_{2,t} * X_{t-1}) \mid X_{t-1}\big)\big] + E\big[\mathrm{Var}\big(I_{2,t}^R (\phi_{2,t} * X_{t-1}) \mid X_{t-1}\big)\big] \\ &= \phi_2^2 \big[p_2(\sigma_2^2 + \mu_2^2) - p_2^2 \mu_2^2\big] + p_2 \mu_2 (\phi_2 + \sigma_{\phi_2}^2 + \phi_2^2) + p_2 \sigma_{\phi_2}^2 (\sigma_2^2 + \mu_2^2), \quad \text{(A9)} \end{aligned}$$
and
$$\mathrm{III} = 2\,\mathrm{Cov}\big(I_{1,t}^R (\phi_{1,t} \circ X_{t-1}), I_{2,t}^R (\phi_{2,t} * X_{t-1})\big) = -2\, \phi_1 \phi_2 p_1 p_2 \mu_1 \mu_2. \quad \text{(A10)}$$
Then, Var ( X t ) follows by replacing (A8)–(A10) in (A7) and some algebra.
(3) For the conditional autocovariance $\mathrm{Cov}(X_t, X_{t+h} \mid Z_{t+1}, \ldots, Z_{t+h})$: when $h = 1$,
$$\begin{aligned} \mathrm{Cov}(X_t, X_{t+1} \mid Z_{t+1}) &= \mathrm{Cov}[X_t, E(X_{t+1} \mid X_t) \mid Z_{t+1}] \\ &= \mathrm{Cov}\{X_t, (\phi_{1,t+1} X_t + \lambda) I_{1,t+1}^R + (\phi_{2,t+1} X_t + \lambda) I_{2,t+1}^R \mid Z_{t+1}\} = \sum_{i=1}^{2} \phi_{i,t+1} \mathrm{Cov}(X_t, I_{i,t+1}^R X_t), \end{aligned}$$
where
$$\mathrm{Cov}(X_t, I_{i,t+1}^R X_t) = E(I_{i,t+1}^R X_t \cdot X_t) - E(I_{i,t+1}^R X_t) E(X_t) = p_i(\sigma_i^2 + \mu_i^2) - p_i \mu_i E(X_t).$$
Then,
$$\mathrm{Cov}(X_t, X_{t+1} \mid Z_{t+1}) = \sum_{i=1}^{2} \phi_{i,t+1}\, p_i \big[(\sigma_i^2 + \mu_i^2) - \mu_i E(X_t)\big] = \sum_{i=1}^{2} \phi_{i,t+1}\, p_i \gamma_0^{(i)}.$$
When h > 1 , there is
$$\begin{aligned} \mathrm{Cov}(X_t, X_{t+h} \mid Z_{t+h}) &= \mathrm{Cov}[X_t, E(X_{t+h} \mid X_{t+h-1}) \mid Z_{t+h}] \\ &= \mathrm{Cov}\{X_t, (\phi_{1,t+h} X_{t+h-1}) I_{1,t+h}^R + (\phi_{2,t+h} X_{t+h-1}) I_{2,t+h}^R + \lambda \mid Z_{t+h}\} \\ &= \sum_{i=1}^{2} \phi_{i,t+h} \mathrm{Cov}(X_t, I_{i,t+h}^R X_{t+h-1}) = \sum_{i=1}^{2} \frac{\exp(Z_{t+h}^{\top}\beta_i)}{1 + \exp(Z_{t+h}^{\top}\beta_i)} \mathrm{Cov}(X_t, I_{i,t+h}^R X_{t+h-1}), \end{aligned}$$
where, for $i = 1$ (and analogously with $X_{t+h-1} > r$ for $i = 2$),
$$\mathrm{Cov}(X_t, I_{i,t+h}^R X_{t+h-1}) = p_i \mathrm{Cov}(X_t, X_{t+h-1} \mid X_{t+h-1} \le r) = p_i \gamma_{h-1}^{(i)}.$$
Then, we obtain
$$\mathrm{Cov}(X_t, X_{t+h} \mid Z_{t+h}) = \sum_{i=1}^{2} \frac{\exp(Z_{t+h}^{\top}\beta_i)}{1 + \exp(Z_{t+h}^{\top}\beta_i)} p_i \gamma_{h-1}^{(i)}.$$
(4) For the unconditional autocovariance $\mathrm{Cov}(X_t, X_{t+h})$: when $h = 1$,
$$\mathrm{Cov}(X_t, X_{t+1}) = \mathrm{Cov}[X_t, E(X_{t+1} \mid X_t, Z_{t+1})] = \mathrm{Cov}\{X_t, (\phi_{1,t+1} X_t) I_{1,t+1}^R + (\phi_{2,t+1} X_t) I_{2,t+1}^R + \lambda\} = \sum_{i=1}^{2} \mathrm{Cov}(X_t, \phi_{i,t+1} I_{i,t+1}^R X_t),$$
where
$$\mathrm{Cov}(X_t, \phi_{i,t+1} I_{i,t+1}^R X_t) = E(\phi_{i,t+1} I_{i,t+1}^R X_t \cdot X_t) - E(\phi_{i,t+1} I_{i,t+1}^R X_t) E(X_t) = \mu_{\phi_i}(1)\, p_i(\sigma_i^2 + \mu_i^2) - \mu_{\phi_i}(1)\, p_i \mu_i E(X_t).$$
Then,
$$\mathrm{Cov}(X_t, X_{t+1}) = \sum_{i=1}^{2} \mu_{\phi_i}(1)\, p_i \big[(\sigma_i^2 + \mu_i^2) - \mu_i E(X_t)\big] = \sum_{i=1}^{2} \mu_{\phi_i}(1)\, p_i \gamma_0^{(i)}.$$
When h > 1 , there is
$$\mathrm{Cov}(X_t, X_{t+h}) = \mathrm{Cov}[X_t, E(X_{t+h} \mid X_{t+h-1})] = \mathrm{Cov}\{X_t, (\phi_{1,t+h} X_{t+h-1}) I_{1,t+h}^R + (\phi_{2,t+h} X_{t+h-1}) I_{2,t+h}^R + \lambda\} = \sum_{i=1}^{2} \mathrm{Cov}(X_t, \phi_{i,t+h} I_{i,t+h}^R X_{t+h-1}),$$
where, for $i = 1$ (and analogously with $X_{t+h-1} > r$ for $i = 2$),
$$\mathrm{Cov}(X_t, \phi_{i,t+h} I_{i,t+h}^R X_{t+h-1}) = \mu_{\phi_i}(h)\, p_i \mathrm{Cov}(X_t, X_{t+h-1} \mid X_{t+h-1} \le r) = \mu_{\phi_i}(h)\, p_i \gamma_{h-1}^{(i)}.$$
Then, we obtain
$$\mathrm{Cov}(X_t, X_{t+h}) = \sum_{i=1}^{2} \mu_{\phi_i}(h)\, p_i \gamma_{h-1}^{(i)}.$$
Thus, the autocorrelation function is $\rho(h) = \big[\sum_{i=1}^{2} \mu_{\phi_i}(h)\, p_i \gamma_{h-1}^{(i)}\big] / \mathrm{Var}(X_t)$. □
Proof of Theorem 1.
By Taylor’s expansion, there is
$$0 = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial \frac{1}{2} U_t^2(\hat{\theta}_{CLS})}{\partial \theta} = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial \frac{1}{2} U_t^2(\theta_0)}{\partial \theta} + \frac{1}{n} \sum_{t=1}^{n} \frac{\partial^2 \frac{1}{2} U_t^2(\theta_0)}{\partial \theta \partial \theta^{\top}} \sqrt{n}(\hat{\theta}_{CLS} - \theta_0) + o_p(n^{-1/2}).$$
We first prove
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial \frac{1}{2} U_t^2(\theta_0)}{\partial \theta} \xrightarrow{d} N(0, W).$$
Now, let $\mathcal{F}_n = \sigma\{X_0, X_1, \ldots, X_n, Z_1, \ldots, Z_{n+1}\}$, and, for $i = 1, 2$,
$$M_n^{(i,0)} = \sum_{t=1}^{n} \frac{\partial \frac{1}{2} U_t^2(\theta_0)}{\partial \beta_{i,0}} = -\sum_{t=1}^{n} U_t(\theta_0)\, X_{t-1} I_{i,t}^R \frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2}.$$
Then, we have
$$E(M_n^{(i,0)} \mid \mathcal{F}_{n-1}) = M_{n-1}^{(i,0)} - E\Big\{U_n(\theta_0)\, X_{n-1} I_{i,n}^R \frac{\exp(Z_n^{\top}\beta_i)}{[1 + \exp(Z_n^{\top}\beta_i)]^2} \Big| \mathcal{F}_{n-1}\Big\} = M_{n-1}^{(i,0)},$$
i.e., $\{M_n^{(i,0)}, \mathcal{F}_n, n \ge 0\}$ is a martingale. By Proposition 1, $E(X_t^4 \mid \mathcal{Z}_t) < \infty$, so
$$E\Big( U_t^2(\theta_0)\, X_{t-1}^2 I_{i,t}^R \Big(\frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2}\Big)^2 \Big| \mathcal{F}_{t-1}\Big) < \infty.$$
Then, by the ergodic theorem, we have
$$\frac{1}{n} \sum_{t=2}^{n} U_t^2(\theta_0)\, X_{t-1}^2 I_{i,t}^R \Big(\frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2}\Big)^2 \xrightarrow{a.s.} E\Big[\Big(U_t(\theta_0)\, X_{t-1} I_{i,t}^R \frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2}\Big)^2\Big] = E\Big[U_t^2(\theta_0) \Big(\frac{\partial}{\partial \beta_{i,0}} g(\theta_0, X_{t-1}, Z_t)\Big)^2\Big] = W_{(i-1)q+i,\,(i-1)q+i}.$$
Thus, by Corollary 3.2 of [23], the martingale central limit theorem applies, and we obtain $M_n^{(i,0)} / \sqrt{n} \xrightarrow{d} N(0, W_{(i-1)q+i,\,(i-1)q+i})$. Similarly, we can prove that, for any $i = 1, 2$ and $s = 1, \ldots, q$,
$$M_n^{(i,s)} = \sum_{t=1}^{n} \frac{\partial \frac{1}{2} U_t^2(\theta_0)}{\partial \beta_{i,s}} = -\sum_{t=1}^{n} U_t(\theta_0)\, X_{t-1} I_{i,t}^R Z_{s,t} \frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2}$$
is a martingale, and we have that
$$\frac{1}{n} \sum_{t=1}^{n} \Big[U_t(\theta_0)\, X_{t-1} I_{i,t}^R Z_{s,t} \frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2}\Big]^2 \xrightarrow{a.s.} E\Big[U_t(\theta_0)\, X_{t-1} I_{i,t}^R Z_{s,t} \frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2}\Big]^2 = E\Big[U_t^2(\theta_0) \Big(\frac{\partial}{\partial \beta_{i,s}} g(\theta_0, X_{t-1}, Z_t)\Big)^2\Big] = W_{(i-1)q+i+s,\,(i-1)q+i+s},$$
that is, $M_n^{(i,s)} / \sqrt{n} \xrightarrow{d} N(0, W_{(i-1)q+i+s,\,(i-1)q+i+s})$. Furthermore, we can also prove that
$$M_n^{(2q+3)} = \sum_{t=1}^{n} \frac{\partial \frac{1}{2} U_t^2(\theta_0)}{\partial \lambda} = -\sum_{t=1}^{n} U_t(\theta_0)$$
is a martingale and
$$\frac{1}{n} \sum_{t=1}^{n} U_t^2(\theta_0) \xrightarrow{a.s.} E\big[U_t^2(\theta_0)\big] = W_{2q+3,\,2q+3},$$
that is, $M_n^{(2q+3)} / \sqrt{n} \xrightarrow{d} N(0, W_{2q+3,\,2q+3})$.
In the same way, for any $c = (c_1, \ldots, c_{2q+3})^{\top} \in \mathbb{R}^{2q+3} \setminus \{0_{2q+3 \times 1}\}$, we have (with $Z_{0,t} := 1$)
$$\frac{1}{\sqrt{n}} c^{\top} \big(M_n^{(1,0)}, \ldots, M_n^{(1,q)}, M_n^{(2,0)}, \ldots, M_n^{(2,q)}, M_n^{(2q+3)}\big)^{\top} = -\frac{1}{\sqrt{n}} \sum_{t=1}^{n} U_t(\theta_0) \Big[\sum_{i=1}^{2} \sum_{s=0}^{q} c_{(i-1)q+s+i}\, X_{t-1} Z_{s,t} \frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2} I_{i,t}^R + c_{2q+3}\Big] \xrightarrow{d} N\Big(0,\ E\Big[U_t(\theta_0) \Big(\sum_{i=1}^{2} \sum_{s=0}^{q} c_{(i-1)q+s+i}\, X_{t-1} Z_{s,t} \frac{\exp(Z_t^{\top}\beta_i)}{[1 + \exp(Z_t^{\top}\beta_i)]^2} I_{i,t}^R + c_{2q+3}\Big)\Big]^2\Big).$$
Thus, by the Cramér–Wold device,
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial \frac{1}{2} U_t^2(\theta_0)}{\partial \theta} = \frac{1}{\sqrt{n}} \big(M_n^{(1,0)}, \ldots, M_n^{(1,q)}, M_n^{(2,0)}, \ldots, M_n^{(2,q)}, M_n^{(2q+3)}\big)^{\top} \xrightarrow{d} N(0, W).$$
Consider the second term of Taylor’s expansion, $\frac{1}{n} \sum_{t=1}^{n} \frac{\partial^2 \frac{1}{2} U_t^2(\theta_0)}{\partial \theta \partial \theta^{\top}}$:
$$\frac{1}{n} \sum_{t=1}^{n} \frac{\partial^2 \frac{1}{2} U_t^2(\theta_0)}{\partial \theta \partial \theta^{\top}} = \frac{1}{n} \sum_{t=1}^{n} \Big[\frac{\partial g(\theta_0, X_{t-1}, Z_t)}{\partial \theta} \frac{\partial g(\theta_0, X_{t-1}, Z_t)}{\partial \theta^{\top}} - \frac{\partial^2 g(\theta_0, X_{t-1}, Z_t)}{\partial \theta \partial \theta^{\top}} \big(X_t - g(\theta_0, X_{t-1}, Z_t)\big)\Big].$$
Note that
$$E\Big(\frac{\partial^2 g(\theta_0, X_{t-1}, Z_t)}{\partial \theta \partial \theta^{\top}} \big(X_t - g(\theta_0, X_{t-1}, Z_t)\big)\Big) = 0,$$
and then, by the ergodic theorem, we have
$$\lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} \frac{\partial^2 \frac{1}{2} U_t^2(\theta_0)}{\partial \theta \partial \theta^{\top}} = V.$$
Hence, we have that
$$\sqrt{n}(\hat{\theta}_{CLS} - \theta_0) \xrightarrow{d} N(0, V^{-1} W V^{-1}),$$
and the proof has been completed. □
Proof of Theorem 2.
Considering that the case $R = 0$ is similar to $R = 1$, we only prove the case $R = 0$. We first prove the consistency of $\hat{\theta}_{CML}$; see [24] for a similar technique. Clearly, $L(\theta)$ is a measurable function of $x_t$ for all $\theta \in \Theta_\theta$, and it is continuous in an open and convex neighborhood $N(\theta_0)$. The assumption in Theorem 2 ensures that $E[\ell_t(\theta)]$ has a unique maximizer in the compact set $\Theta_\theta$, say $\check{\theta}$. We can assume $\check{\theta} \in N(\theta_0)$. Meanwhile, for an arbitrary point $\theta \in N(\theta_0)$, by Jensen’s inequality, we have
$$E[\ell_t(\theta) - \ell_t(\theta_0)] = E\Big[\log \frac{P_\theta(z_t, X_{t-1}, X_t)}{P_{\theta_0}(z_t, X_{t-1}, X_t)}\Big] \le \log E\Big[\frac{P_\theta(z_t, X_{t-1}, X_t)}{P_{\theta_0}(z_t, X_{t-1}, X_t)}\Big] = 0.$$
That is, $E[\ell_t(\theta)]$ attains a strict local maximum at $\theta_0$.
In the following, we will show that the log-likelihood function $\sum_{t=1}^{n} \ell_t(\theta) / n$ converges almost surely and uniformly on $\Theta_\theta$ to $E[\ell_t(\theta)]$, that is,
$$\Big\|\frac{1}{n} \sum_{t=1}^{n} \ell_t(\theta) - E[\ell_t(\theta)]\Big\|_{\Theta_\theta} \xrightarrow{a.s.} 0,$$
as $n \to \infty$. Then, the almost sure convergence $\hat{\theta}_{CML} \xrightarrow{a.s.} \theta_0$ follows by standard arguments due to [25]. Note that $\{X_t\}$ is stationary and ergodic; then, $\ell_t(\theta)$ is a stationary and ergodic sequence of random elements taking values in the space of continuous functions $C(\Theta_\theta, \mathbb{R})$ equipped with the uniform norm $\|\cdot\|_{\Theta_\theta}$. Therefore, the desired convergence result follows by an application of the ergodic theorem of [26] if the uniform integrability condition $E\|\ell_t(\theta)\|_{\Theta_\theta} < \infty$ is satisfied. Note that
$$|\ell_t(\theta)| = \left| \log\!\left( I_{1,t}^{R} \sum_{m=0}^{\min(x_{t-1}, x_t)} \binom{x_{t-1}}{m} \frac{e^{-\lambda}\lambda^{x_t-m}}{(x_t-m)!}\, \phi_{1,t}^{m} (1-\phi_{1,t})^{x_{t-1}-m} + I_{2,t}^{R} \sum_{m=0}^{x_t} \frac{\Gamma(x_{t-1}+m)}{\Gamma(x_{t-1})\Gamma(m+1)} \frac{\phi_{2,t}^{m}}{(1+\phi_{2,t})^{x_{t-1}+m}} \frac{e^{-\lambda}\lambda^{x_t-m}}{(x_t-m)!} \right) \right|.$$
For convenience, we first assume that $x_{t-1} \le r$ and $x_{t-1} < x_t$, i.e., $x_t - x_{t-1} \ge 1$; then, together with $\log(x) < x$, we have
$$|\ell_t(\theta)| \le \left| \log \frac{e^{-\lambda}\lambda^{x_t - x_{t-1}}}{(x_t - x_{t-1})!}\, \phi_{1,t}^{x_{t-1}} \right| \le \lambda - (x_t - x_{t-1})\log(\lambda) + \log[(x_t - x_{t-1})!] - x_{t-1}\log(\phi_{1,t}) \le \lambda - (x_t - x_{t-1})\log(\lambda) + (x_t - x_{t-1}) - x_{t-1}\log(\phi_{1,t}) = x_t[1 - \log(\lambda)] - x_{t-1}[\log(\phi_{1,t}) - \log(\lambda) + 1] + \lambda$$
almost surely for any $\theta \in \Theta_\theta$. Similarly, if $x_{t-1} = x_t$ and we assume $x_{t-1} \le r$,
$$|\ell_t(\theta)| \le \lambda - x_t \log \phi_{1,t}$$
almost surely for any $\theta \in \Theta_\theta$. If $x_{t-1} > x_t$ and we assume $x_{t-1} \le r$,
$$|\ell_t(\theta)| \le \left| \log\!\left[ \binom{x_{t-1}}{x_t} e^{-\lambda}\, \phi_{1,t}^{x_t} (1-\phi_{1,t})^{x_{t-1}-x_t} \right] \right| \le \sum_{j=1}^{x_{t-1}-x_t} \log j - \sum_{j=1}^{x_{t-1}} \log j + \lambda + x_t \{ \log(1-\phi_{1,t}) - \log(\phi_{1,t}) \} - x_{t-1}\log(1-\phi_{1,t}) \le \lambda + x_t \{ \log(1-\phi_{1,t}) - \log(\phi_{1,t}) \} - x_{t-1}\log(1-\phi_{1,t})$$
almost surely for any $\theta \in \Theta_\theta$. Analogously, for $x_{t-1} > r$, we have
$$|\ell_t(\theta)| \le \lambda + x_t \{ \log(1+\phi_{2,t}) - \log(\phi_{2,t}) \} + x_{t-1}\log(1+\phi_{2,t})$$
almost surely for any $\theta \in \Theta_\theta$. Clearly, $E(X_t) < \infty$, so we conclude that $E\|\ell_t(\theta)\|_{\Theta_\theta} < \infty$, and the strong consistency is proven.
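Before turning to asymptotic normality, note that the mixture transition probability appearing inside the logarithm above can be evaluated stably in log-space. The sketch below is an illustrative implementation under the same conventions as the previous sketch (the helper name `log_trans_prob` and the log-sum-exp strategy are our assumptions, not the authors' code):

```python
import numpy as np
from scipy.special import gammaln

def log_trans_prob(x_prev, x, phi1, phi2, lam, r):
    """Log of P(X_t = x | X_{t-1} = x_prev): binomial thinning + Poisson
    innovation when x_prev <= r, negative binomial thinning + Poisson
    innovation when x_prev > r (the R = 0 regime rule)."""
    if x_prev <= r:
        m = np.arange(0, min(x_prev, x) + 1)
        terms = (gammaln(x_prev + 1) - gammaln(m + 1) - gammaln(x_prev - m + 1)
                 + m * np.log(phi1) + (x_prev - m) * np.log(1.0 - phi1)
                 - lam + (x - m) * np.log(lam) - gammaln(x - m + 1))
    else:
        m = np.arange(0, x + 1)
        terms = (gammaln(x_prev + m) - gammaln(x_prev) - gammaln(m + 1)
                 + m * np.log(phi2) - (x_prev + m) * np.log(1.0 + phi2)
                 - lam + (x - m) * np.log(lam) - gammaln(x - m + 1))
    return np.logaddexp.reduce(terms)

# Sanity check: the transition probabilities should sum to 1 over x
total = sum(np.exp(log_trans_prob(5, x, 0.6, 0.4, 2.0, r=3)) for x in range(100))
print(total)  # approximately 1.0
```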
To prove the asymptotic normality, we follow the method in [15] and perform a Taylor expansion of the score vector around $\theta_0$:
$$0 = \frac{1}{\sqrt{n}} \frac{\partial L(\hat{\theta}_{CML})}{\partial \theta} = \frac{1}{\sqrt{n}} \frac{\partial L(\theta_0)}{\partial \theta} + \frac{1}{n} \frac{\partial^2 L(\theta_n^*)}{\partial \theta\, \partial \theta^\top}\, \sqrt{n}\,(\hat{\theta}_{CML} - \theta_0),$$
where $\theta_n^*$ lies between $\hat{\theta}_{CML}$ and $\theta_0$. It is easy to see that $E\big[\partial \ell_t(\theta)/\partial \theta\big]_{\theta_0} = 0$, which implies that $\{\partial \ell_t(\theta_0)/\partial \theta\}$ is a martingale difference sequence, and
$$\mathrm{Cov}\!\left( \frac{\partial \ell_t(\theta_0)}{\partial \theta} \right) = E\left[ \frac{\partial \ell_t(\theta)}{\partial \theta} \frac{\partial \ell_t(\theta)}{\partial \theta^\top} \right]_{\theta_0}.$$
By the Cramér–Wold device and the central limit theorem in Theorem 18.3 of Billingsley [27], it follows that
$$\frac{1}{\sqrt{n}} \frac{\partial L(\theta_0)}{\partial \theta} \xrightarrow{d} N\big(0, I(\theta_0)\big), \quad \text{with } I(\theta_0) = E\left[ \frac{\partial \ell_t(\theta)}{\partial \theta} \frac{\partial \ell_t(\theta)}{\partial \theta^\top} \right]_{\theta_0}.$$
Next, we aim to show that $\frac{1}{n} \frac{\partial^2 L(\theta_n^*)}{\partial \theta\, \partial \theta^\top}$ converges to the finite nonsingular matrix
$$J(\theta_0) = E\left[ \frac{\partial^2 \ell_t(\theta)}{\partial \theta\, \partial \theta^\top} \right]_{\theta_0}$$
for any $\theta_n^* \xrightarrow{a.s.} \theta_0$ with $\theta_n^*$ between $\theta_0$ and $\hat{\theta}_{CML}$. Note that, for any $i, j = 1, \dots, 2q+3$, the Taylor expansion of the second-order derivatives of $L(\theta_n^*)$ at $\theta_0$ gives
$$\frac{1}{n}\sum_{t=1}^{n} \frac{\partial^2 \ell_t(\theta_n^*)}{\partial \theta_i\, \partial \theta_j} = \frac{1}{n}\sum_{t=1}^{n} \frac{\partial^2 \ell_t(\theta_0)}{\partial \theta_i\, \partial \theta_j} + \frac{1}{n}\sum_{t=1}^{n} \frac{\partial^3 \ell_t(\tilde{\theta})}{\partial \theta_i\, \partial \theta_j\, \partial \theta^\top} (\theta_n^* - \theta_0),$$
where $\tilde{\theta}$ lies between $\theta_0$ and $\theta_n^*$. According to the assumption in Theorem 2,
$$\varlimsup_{n\to\infty}\, \sup_{\theta \in N(\theta_0)} \left| \frac{\partial^3 \ell_t(\theta)}{\partial \theta_i\, \partial \theta_j\, \partial \theta_k} \right| < \infty,$$
for any $i, j, k = 1, \dots, 2q+3$. Thus, combining the assumption that $E\big[\frac{1}{n} \frac{\partial^2 L(\theta)}{\partial \theta\, \partial \theta^\top}\big]_{\theta_0}$ is a nonsingular matrix with the strong consistency of the estimates and the ergodic theorem in [28], we obtain
$$\frac{1}{n} \frac{\partial^2 L(\theta_n^*)}{\partial \theta\, \partial \theta^\top} \xrightarrow{a.s.} J(\theta_0),$$
as $n \to \infty$. To sum up,
$$\sqrt{n}\,(\hat{\theta}_{CML} - \theta_0) \xrightarrow{d} N\big(0,\; J^{-1}(\theta_0)\, I(\theta_0)\, J^{-1}(\theta_0)\big)$$
as $n \to \infty$, and the proof is completed. □
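Numerically, the CML estimate and its sandwich covariance $J^{-1}(\theta_0) I(\theta_0) J^{-1}(\theta_0)$ can be approximated with a generic optimizer and finite differences. The sketch below reuses `X`, `Z`, and `log_trans_prob` from the previous sketches and again treats $r = 3$ as known; the finite-difference step sizes, starting value, and helper names are illustrative choices, not part of the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize

def loglik_t(theta, t, X, Z, r):
    # Per-observation log-likelihood l_t(theta) with logistic links
    b1, b2, lam = theta[:2], theta[2:4], theta[4]
    p1 = 1.0 / (1.0 + np.exp(-(Z[t] @ b1)))
    p2 = 1.0 / (1.0 + np.exp(-(Z[t] @ b2)))
    return log_trans_prob(X[t - 1], X[t], p1, p2, lam, r)

def neg_loglik(theta, X, Z, r):
    if theta[4] <= 0:  # keep lambda inside the parameter space
        return np.inf
    return -sum(loglik_t(theta, t, X, Z, r) for t in range(1, len(X)))

fit = minimize(neg_loglik, np.array([0.4, -0.2, -0.1, 0.3, 1.8]),
               args=(X, Z, 3), method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000})
theta_hat, k, n = fit.x, 5, len(X)

# I(theta_0): average outer product of numerical per-observation scores
h = 1e-5
H = np.eye(k) * h
I_hat = np.zeros((k, k))
for t in range(1, n):
    s = np.array([(loglik_t(theta_hat + H[j], t, X, Z, 3)
                   - loglik_t(theta_hat - H[j], t, X, Z, 3)) / (2 * h)
                  for j in range(k)])
    I_hat += np.outer(s, s) / (n - 1)

# J(theta_0): numerical Hessian of the average log-likelihood
h2 = 1e-4
H2 = np.eye(k) * h2
def avg_ll(theta):
    return -neg_loglik(theta, X, Z, 3) / (n - 1)
J_hat = np.zeros((k, k))
for a in range(k):
    for b in range(k):
        J_hat[a, b] = (avg_ll(theta_hat + H2[a] + H2[b])
                       - avg_ll(theta_hat + H2[a] - H2[b])
                       - avg_ll(theta_hat - H2[a] + H2[b])
                       + avg_ll(theta_hat - H2[a] - H2[b])) / (4 * h2 ** 2)

Jinv = np.linalg.inv(J_hat)
se = np.sqrt(np.diag(Jinv @ I_hat @ Jinv) / (n - 1))
print("CML:", np.round(theta_hat, 3), " sandwich SE:", np.round(se, 3))
```

A grid search over candidate thresholds, refitting at each value and keeping the likelihood maximizer, is one standard way to obtain threshold estimates of the kind whose recovery frequencies are reported in Table 5 below.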

References

  1. Brännäs, K.; Shahiduzzaman Quoreshi, A.M.M. Integer-valued moving average modelling of the number of transactions in stocks. Appl. Financ. Econ. 2010, 20, 1429–1440.
  2. Schweer, S.; Weiß, C.H. Compound Poisson INAR(1) processes: Stochastic properties and testing for overdispersion. Comput. Stat. Data Anal. 2014, 77, 267–284.
  3. Guan, G.; Hu, X. On the analysis of a discrete-time risk model with INAR(1) processes. Scand. Actuar. J. 2022, 2022, 115–138.
  4. Al-Osh, M.A.; Alzaid, A.A. First-order integer-valued autoregressive (INAR(1)) process. J. Time Ser. Anal. 1987, 8, 261–275.
  5. Steutel, F.W.; Van Harn, K. Discrete analogues of self-decomposability and stability. Ann. Probab. 1979, 7, 893–899.
  6. Weiß, C.H. Thinning operations for modeling time series of counts—A survey. AStA Adv. Stat. Anal. 2008, 92, 319.
  7. Freeland, R.K. True integer value time series. AStA Adv. Stat. Anal. 2010, 94, 217–229.
  8. Kachour, M.; Truquet, L. A p-order signed integer-valued autoregressive (SINAR(p)) model. J. Time Ser. Anal. 2011, 32, 223–236.
  9. Nastić, A.S.; Ristić, M.M.; Bakouch, H.S. A combined geometric INAR(p) model based on negative binomial thinning. Math. Comput. Model. 2012, 55, 1665–1672.
  10. Khoo, W.C.; Ong, S.H.; Biswas, A. Modeling time series of counts with a new class of INAR(1) model. Stat. Pap. 2017, 58, 1–24.
  11. Li, H.; Yang, K.; Zhao, S.; Wang, D. First-order random coefficients integer-valued threshold autoregressive processes. AStA Adv. Stat. Anal. 2018, 102, 305–331.
  12. Monteiro, M.; Scotto, M.G.; Pereira, I. Integer-valued self-exciting threshold autoregressive processes. Commun. Stat.-Theory Methods 2012, 41, 2717–2737.
  13. Wang, C.; Liu, H.; Yao, J.F.; Davis, R.A.; Li, W.K. Self-excited threshold Poisson autoregression. J. Am. Stat. Assoc. 2014, 109, 776–787.
  14. Möller, T.A.; Silva, M.E.; Weiß, C.H.; Scotto, M.G.; Pereira, I. Self-exciting threshold binomial autoregressive processes. AStA Adv. Stat. Anal. 2016, 100, 369–400.
  15. Yang, K.; Wang, D.; Jia, B.; Li, H. An integer-valued threshold autoregressive process based on negative binomial thinning. Stat. Pap. 2018, 59, 1131–1160.
  16. Möller, T.A.; Weiß, C.H. Threshold models for integer-valued time series with infinite or finite range. Stoch. Model. Stat. Their Appl. 2015, 122, 327–334.
  17. Sheng, D.; Wang, D.; Sun, L.Q. A new first-order mixture integer-valued threshold autoregressive process based on binomial thinning and negative binomial thinning. J. Stat. Plan. Inference 2024, 231, 106143.
  18. Yang, K.; Li, H.; Wang, D.; Zhang, C. Random coefficients integer-valued threshold autoregressive processes driven by logistic regression. AStA Adv. Stat. Anal. 2021, 105, 533–557.
  19. Sheng, D.; Wang, D.; Kang, Y. A new RCAR(1) model based on explanatory variables and observations. Commun. Stat.-Theory Methods 2022, 1–22.
  20. Ristić, M.M.; Bakouch, H.S.; Nastić, A.S. A new geometric first-order integer-valued autoregressive (NGINAR(1)) process. J. Stat. Plan. Inference 2009, 139, 2218–2226.
  21. Tweedie, R.L. Sufficient conditions for ergodicity and recurrence of Markov chains on a general state space. Stoch. Process. Appl. 1975, 3, 385–403.
  22. Yang, K.; Li, H.; Wang, D. Estimation of parameters in the self-exciting threshold autoregressive processes for nonlinear time series of counts. Appl. Math. Model. 2018, 57, 226–247.
  23. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Academic Press: New York, NY, USA, 1980.
  24. Gorgi, P. Integer-valued autoregressive models with survival probability driven by a stochastic recurrence equation. J. Time Ser. Anal. 2018, 39, 150–171.
  25. Wald, A. Note on the consistency of the maximum likelihood estimate. Ann. Math. Stat. 1949, 20, 595–601.
  26. Rao, R.R. Relations between weak and uniform convergence of measures with applications. Ann. Math. Stat. 1962, 33, 659–680.
  27. Billingsley, P. Convergence of Probability Measures, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1999.
  28. Durrett, R. Probability: Theory and Examples; Cambridge University Press: Cambridge, UK, 2019.
Figure 1. Box plots of the CLS and CML estimates from 10,000 replications for Models A1 and B1, with sample sizes n = 200, 500, and 1000.
Figure 2. QQ plots of CLS and CML estimators for Models A1 and B1, with the sample size n = 200.
Figure 3. Sample path and ACF of the VOW daily stock trading volume dataset (from 7 December 2010 to 11 January 2012).
Figure 4. Diagnostic checking plots for the first VOW dataset.
Figure 5. Sample path and ACF of the VOW daily stock trading volume dataset (from 7 December 2010 to 6 June 2011).
Figure 6. Diagnostic checking plots for the second VOW dataset.
Table 1. Simulation results for Models A1–B2 when r is known.

| Sample Size | Para. | A1 CLS Bias | A1 CLS MSE | A1 CML Bias | A1 CML MSE | A2 CLS Bias | A2 CLS MSE | A2 CML Bias | A2 CML MSE |
|---|---|---|---|---|---|---|---|---|---|
| n = 200 | $\beta_{1,0}$ | −0.0544 | 0.3579 | 0.0076 | 0.2669 | −0.0856 | 0.3338 | 0.0071 | 0.2476 |
| | $\beta_{1,1}$ | 0.0430 | 0.1017 | 0.0398 | 0.0955 | 0.0261 | 0.0915 | 0.0273 | 0.0882 |
| | $\beta_{2,0}$ | −0.0607 | 0.0991 | −0.0302 | 0.0731 | −0.0837 | 0.0931 | −0.0419 | 0.0671 |
| | $\beta_{2,1}$ | −0.0216 | 0.0255 | −0.0164 | 0.0229 | −0.0186 | 0.0216 | −0.0160 | 0.0189 |
| | $\lambda$ | 0.0856 | 0.4181 | 0.0245 | 0.3047 | 0.1343 | 0.3908 | 0.0438 | 0.2752 |
| n = 500 | $\beta_{1,0}$ | −0.0245 | 0.1947 | 0.0020 | 0.1150 | −0.0472 | 0.1886 | 0.0032 | 0.1149 |
| | $\beta_{1,1}$ | 0.0209 | 0.0374 | 0.0156 | 0.0337 | 0.0158 | 0.0323 | 0.0141 | 0.0297 |
| | $\beta_{2,0}$ | −0.0324 | 0.0547 | −0.0156 | 0.0344 | −0.0464 | 0.0515 | −0.0194 | 0.0313 |
| | $\beta_{2,1}$ | −0.0114 | 0.0105 | −0.0077 | 0.0090 | −0.0098 | 0.0085 | −0.0077 | 0.0068 |
| | $\lambda$ | 0.0449 | 0.2357 | 0.0141 | 0.1417 | 0.0778 | 0.2224 | 0.0214 | 0.1307 |
| n = 1000 | $\beta_{1,0}$ | −0.0157 | 0.1056 | −0.0014 | 0.0564 | −0.0190 | 0.1021 | 0.0046 | 0.0534 |
| | $\beta_{1,1}$ | 0.0094 | 0.0169 | 0.0067 | 0.0154 | 0.0084 | 0.0152 | 0.0063 | 0.0135 |
| | $\beta_{2,0}$ | −0.0197 | 0.0300 | −0.0101 | 0.0169 | −0.0226 | 0.0283 | −0.0075 | 0.0154 |
| | $\beta_{2,1}$ | −0.0070 | 0.0053 | −0.0048 | 0.0045 | −0.0058 | 0.0044 | −0.0037 | 0.0035 |
| | $\lambda$ | 0.0279 | 0.1324 | 0.0108 | 0.0717 | 0.0369 | 0.1239 | 0.0073 | 0.0645 |

| Sample Size | Para. | B1 CLS Bias | B1 CLS MSE | B1 CML Bias | B1 CML MSE | B2 CLS Bias | B2 CLS MSE | B2 CML Bias | B2 CML MSE |
|---|---|---|---|---|---|---|---|---|---|
| n = 200 | $\beta_{1,0}$ | −0.0084 | 0.0553 | 0.0089 | 0.0338 | −0.0304 | 0.0466 | 0.0031 | 0.0271 |
| | $\beta_{1,1}$ | 0.0057 | 0.0070 | 0.0045 | 0.0066 | 0.0052 | 0.0039 | 0.0041 | 0.0036 |
| | $\beta_{2,0}$ | 0.0718 | 0.3459 | 0.0684 | 0.2918 | 0.1148 | 0.4075 | 0.1362 | 0.3766 |
| | $\beta_{2,1}$ | 0.0583 | 0.2060 | 0.0587 | 0.2037 | 0.0767 | 0.2385 | 0.0745 | 0.2307 |
| | $\lambda$ | 0.0100 | 0.3801 | −0.0315 | 0.2366 | 0.0794 | 0.3770 | −0.0158 | 0.2228 |
| n = 500 | $\beta_{1,0}$ | −0.0031 | 0.0257 | 0.0013 | 0.0147 | −0.0135 | 0.0225 | −0.0006 | 0.0121 |
| | $\beta_{1,1}$ | 0.0024 | 0.0026 | 0.0018 | 0.0025 | 0.0026 | 0.0018 | 0.0019 | 0.0017 |
| | $\beta_{2,0}$ | 0.0155 | 0.1914 | 0.0101 | 0.1454 | 0.0323 | 0.2214 | 0.0393 | 0.1878 |
| | $\beta_{2,1}$ | −0.0002 | 0.0982 | 0.0005 | 0.0951 | 0.0209 | 0.1220 | 0.0198 | 0.1179 |
| | $\lambda$ | 0.0050 | 0.1838 | −0.0043 | 0.1083 | 0.0370 | 0.1855 | 0.0003 | 0.1014 |
| n = 1000 | $\beta_{1,0}$ | −0.0053 | 0.0140 | −0.0007 | 0.0077 | −0.0100 | 0.0110 | −0.0032 | 0.0058 |
| | $\beta_{1,1}$ | 0.0012 | 0.0013 | 0.0007 | 0.0012 | 0.0014 | 0.0008 | 0.0010 | 0.0008 |
| | $\beta_{2,0}$ | −0.0084 | 0.1157 | −0.0018 | 0.0860 | −0.0045 | 0.1376 | 0.0055 | 0.1119 |
| | $\beta_{2,1}$ | −0.0163 | 0.0590 | −0.0135 | 0.0567 | −0.0127 | 0.0662 | −0.0114 | 0.0639 |
| | $\lambda$ | 0.0121 | 0.1020 | 0.0007 | 0.0578 | 0.0278 | 0.0934 | 0.0083 | 0.0504 |
Table 2. Simulation results for Models A3–B3 when r is known.

| Sample Size | Para. | A3 CLS Bias | A3 CLS MSE | A3 CML Bias | A3 CML MSE | B3 CLS Bias | B3 CLS MSE | B3 CML Bias | B3 CML MSE |
|---|---|---|---|---|---|---|---|---|---|
| n = 200 | $\beta_{1,0}$ | −0.0905 | 0.4153 | 0.0203 | 0.3111 | −0.0285 | 0.0593 | 0.0005 | 0.0371 |
| | $\beta_{1,1}$ | 0.0539 | 0.1056 | 0.0588 | 0.1019 | 0.0088 | 0.0070 | 0.0075 | 0.0065 |
| | $\beta_{1,2}$ | −0.0004 | 0.2013 | 0.0026 | 0.1929 | 0.0071 | 0.0126 | 0.0041 | 0.0118 |
| | $\beta_{2,0}$ | −0.0837 | 0.1029 | −0.0360 | 0.0698 | 0.0269 | 0.3911 | 0.0432 | 0.3349 |
| | $\beta_{2,1}$ | −0.0179 | 0.0196 | −0.0150 | 0.0172 | −0.0159 | 0.1750 | −0.0101 | 0.1707 |
| | $\beta_{2,2}$ | −0.0283 | 0.0371 | −0.0203 | 0.0329 | 0.1030 | 0.2927 | 0.1025 | 0.2824 |
| | $\lambda$ | 0.1419 | 0.4508 | 0.0369 | 0.3043 | 0.0598 | 0.3590 | −0.0090 | 0.2323 |
| n = 500 | $\beta_{1,0}$ | −0.0467 | 0.2223 | 0.0027 | 0.1373 | −0.0137 | 0.0258 | −0.0014 | 0.0149 |
| | $\beta_{1,1}$ | 0.0394 | 0.0423 | 0.0351 | 0.0392 | 0.0043 | 0.0025 | 0.0040 | 0.0023 |
| | $\beta_{1,2}$ | 0.0183 | 0.0768 | 0.0148 | 0.0708 | 0.0040 | 0.0052 | 0.0028 | 0.0050 |
| | $\beta_{2,0}$ | −0.0398 | 0.0490 | −0.0166 | 0.0300 | −0.0221 | 0.2194 | −0.0052 | 0.1678 |
| | $\beta_{2,1}$ | −0.0082 | 0.0069 | −0.0066 | 0.0059 | −0.0422 | 0.0848 | −0.0352 | 0.0803 |
| | $\beta_{2,2}$ | −0.0118 | 0.0142 | −0.0068 | 0.0121 | 0.0182 | 0.1339 | 0.0204 | 0.1276 |
| | $\lambda$ | 0.0716 | 0.2330 | 0.0216 | 0.1395 | 0.0312 | 0.1635 | 0.0011 | 0.0974 |
| n = 1000 | $\beta_{1,0}$ | −0.0152 | 0.1161 | 0.0076 | 0.0638 | −0.0089 | 0.0140 | −0.0018 | 0.0077 |
| | $\beta_{1,1}$ | 0.0222 | 0.0216 | 0.0172 | 0.0193 | 0.0016 | 0.0013 | 0.0013 | 0.0012 |
| | $\beta_{1,2}$ | 0.0101 | 0.0356 | 0.0070 | 0.0329 | 0.0019 | 0.0025 | 0.0013 | 0.0023 |
| | $\beta_{2,0}$ | −0.0174 | 0.0252 | −0.0053 | 0.0144 | −0.0197 | 0.1215 | −0.0047 | 0.0866 |
| | $\beta_{2,1}$ | −0.0037 | 0.0036 | −0.0028 | 0.0032 | −0.0282 | 0.0507 | −0.0225 | 0.0470 |
| | $\beta_{2,2}$ | −0.0062 | 0.0071 | −0.0028 | 0.0061 | −0.0064 | 0.0781 | −0.0025 | 0.0748 |
| | $\lambda$ | 0.0305 | 0.1260 | 0.0049 | 0.0684 | 0.0202 | 0.0922 | 0.0025 | 0.0517 |
Table 3. Simulation results for Models A1–B2 when r is unknown.

| Sample Size | Para. | A1 CLS Bias | A1 CLS MSE | A1 CML Bias | A1 CML MSE | A2 CLS Bias | A2 CLS MSE | A2 CML Bias | A2 CML MSE |
|---|---|---|---|---|---|---|---|---|---|
| n = 200 | $\beta_{1,0}$ | −0.1673 | 0.4649 | −0.0095 | 0.3159 | −0.2118 | 0.4925 | −0.0109 | 0.2932 |
| | $\beta_{1,1}$ | −0.0600 | 0.2097 | 0.0236 | 0.1342 | −0.0609 | 0.1886 | 0.0198 | 0.1131 |
| | $\beta_{2,0}$ | −0.0811 | 0.0973 | −0.0319 | 0.0738 | −0.1050 | 0.0948 | −0.0450 | 0.0680 |
| | $\beta_{2,1}$ | −0.0372 | 0.0451 | −0.0259 | 0.0283 | −0.0250 | 0.0350 | −0.0222 | 0.0212 |
| | $\lambda$ | 0.1526 | 0.4610 | 0.0338 | 0.3177 | 0.2131 | 0.4566 | 0.0547 | 0.2883 |
| | $r$ | 1.0142 | 5.3096 | 0.2577 | 1.2313 | 0.8968 | 5.2976 | 0.1642 | 0.8174 |
| n = 500 | $\beta_{1,0}$ | −0.0566 | 0.2308 | −0.0009 | 0.1222 | −0.0682 | 0.2172 | 0.0014 | 0.1180 |
| | $\beta_{1,1}$ | 0.0054 | 0.0536 | 0.0157 | 0.0367 | 0.0061 | 0.0416 | 0.0140 | 0.0307 |
| | $\beta_{2,0}$ | −0.0408 | 0.0566 | −0.0166 | 0.0350 | −0.0517 | 0.0534 | −0.0199 | 0.0316 |
| | $\beta_{2,1}$ | −0.0187 | 0.0132 | −0.0093 | 0.0093 | −0.0123 | 0.0096 | −0.0082 | 0.0069 |
| | $\lambda$ | 0.0703 | 0.2566 | 0.0166 | 0.1451 | 0.0935 | 0.2394 | 0.0226 | 0.1321 |
| | $r$ | 0.1719 | 0.8427 | 0.0181 | 0.0705 | 0.1054 | 0.6042 | 0.0067 | 0.0281 |
| n = 1000 | $\beta_{1,0}$ | −0.0188 | 0.1093 | −0.0015 | 0.0565 | −0.0191 | 0.1023 | 0.0046 | 0.0534 |
| | $\beta_{1,1}$ | 0.0080 | 0.0181 | 0.0067 | 0.0155 | 0.0083 | 0.0152 | 0.0064 | 0.0135 |
| | $\beta_{2,0}$ | −0.0207 | 0.0304 | −0.0101 | 0.0169 | −0.0226 | 0.0284 | −0.0075 | 0.0154 |
| | $\beta_{2,1}$ | −0.0078 | 0.0055 | −0.0048 | 0.0045 | −0.0058 | 0.0044 | −0.0037 | 0.0035 |
| | $\lambda$ | 0.0304 | 0.1350 | 0.0109 | 0.0718 | 0.0370 | 0.1240 | 0.0073 | 0.0645 |
| | $r$ | 0.0121 | 0.0411 | 0.0009 | 0.0059 | 0.0003 | 0.0003 | −0.0001 | 0.0001 |

| Sample Size | Para. | B1 CLS Bias | B1 CLS MSE | B1 CML Bias | B1 CML MSE | B2 CLS Bias | B2 CLS MSE | B2 CML Bias | B2 CML MSE |
|---|---|---|---|---|---|---|---|---|---|
| n = 200 | $\beta_{1,0}$ | 0.0394 | 0.0738 | 0.0146 | 0.0382 | 0.0001 | 0.0673 | −0.0008 | 0.0333 |
| | $\beta_{1,1}$ | 0.0164 | 0.0232 | 0.0078 | 0.0074 | 0.0200 | 0.0155 | 0.0066 | 0.0040 |
| | $\beta_{2,0}$ | 0.3713 | 0.6543 | 0.1386 | 0.3408 | 0.5624 | 0.8281 | 0.3572 | 0.4869 |
| | $\beta_{2,1}$ | 0.3190 | 0.4057 | 0.1768 | 0.2646 | 0.5179 | 0.5136 | 0.3818 | 0.3591 |
| | $\lambda$ | −0.2125 | 0.7856 | −0.0367 | 0.3023 | −0.1115 | 0.8718 | 0.0031 | 0.3169 |
| | $r$ | 2.5127 | 15.9965 | 0.4938 | 1.2152 | 3.2435 | 25.3375 | 0.1035 | 1.0955 |
| n = 500 | $\beta_{1,0}$ | 0.0188 | 0.0330 | 0.0021 | 0.0152 | 0.0361 | 0.0372 | −0.0037 | 0.0137 |
| | $\beta_{1,1}$ | 0.0077 | 0.0048 | 0.0025 | 0.0025 | 0.0114 | 0.0055 | 0.0025 | 0.0017 |
| | $\beta_{2,0}$ | 0.1333 | 0.3180 | 0.0325 | 0.1566 | 0.5518 | 0.6754 | 0.3191 | 0.2900 |
| | $\beta_{2,1}$ | 0.0926 | 0.1770 | 0.0390 | 0.1133 | 0.4852 | 0.4148 | 0.3708 | 0.2623 |
| | $\lambda$ | −0.0858 | 0.3009 | −0.0053 | 0.1144 | −0.1831 | 0.4647 | 0.0112 | 0.1245 |
| | $r$ | 0.8616 | 5.2522 | 0.1170 | 0.1640 | 2.3496 | 18.1162 | −0.1513 | 0.3999 |
| n = 1000 | $\beta_{1,0}$ | −0.0019 | 0.0150 | −0.0007 | 0.0077 | 0.0423 | 0.0197 | −0.0004 | 0.0063 |
| | $\beta_{1,1}$ | 0.0020 | 0.0015 | 0.0008 | 0.0012 | 0.0072 | 0.0018 | 0.0013 | 0.0008 |
| | $\beta_{2,0}$ | 0.0105 | 0.1346 | 0.0023 | 0.0884 | 0.5346 | 0.5129 | 0.3481 | 0.2311 |
| | $\beta_{2,1}$ | 0.0005 | 0.0723 | −0.0054 | 0.0608 | 0.4726 | 0.3270 | 0.3973 | 0.2285 |
| | $\lambda$ | −0.0011 | 0.1162 | 0.0008 | 0.0582 | −0.1783 | 0.2391 | −0.0017 | 0.0570 |
| | $r$ | 0.1303 | 0.7299 | 0.0207 | 0.0231 | 1.2562 | 9.8408 | −0.1482 | 0.2074 |
Table 4. Simulation results for Models A3–B3 when r is unknown.

| Sample Size | Para. | A3 CLS Bias | A3 CLS MSE | A3 CML Bias | A3 CML MSE | B3 CLS Bias | B3 CLS MSE | B3 CML Bias | B3 CML MSE |
|---|---|---|---|---|---|---|---|---|---|
| n = 200 | $\beta_{1,0}$ | −0.2088 | 0.4902 | 0.0025 | 0.3416 | −0.0076 | 0.0699 | 0.0050 | 0.0394 |
| | $\beta_{1,1}$ | −0.1031 | 0.2460 | 0.0372 | 0.1274 | 0.0179 | 0.0152 | 0.0107 | 0.0069 |
| | $\beta_{1,2}$ | −0.1352 | 0.3164 | −0.0169 | 0.2225 | 0.0144 | 0.0274 | 0.0055 | 0.0123 |
| | $\beta_{2,0}$ | −0.1069 | 0.1053 | −0.0389 | 0.0711 | 0.1478 | 0.4684 | 0.0648 | 0.3516 |
| | $\beta_{2,1}$ | −0.0330 | 0.0369 | −0.0219 | 0.0196 | 0.1450 | 0.3211 | 0.0455 | 0.2047 |
| | $\beta_{2,2}$ | −0.0315 | 0.0598 | −0.0251 | 0.0365 | 0.2349 | 0.3862 | 0.1481 | 0.3138 |
| | $\lambda$ | 0.2373 | 0.5380 | 0.0512 | 0.3236 | −0.0311 | 0.5100 | −0.0161 | 0.2623 |
| | $r$ | 1.2538 | 7.7742 | 0.1827 | 0.8443 | 1.3943 | 9.2127 | 0.2116 | 0.5360 |
| n = 500 | $\beta_{1,0}$ | −0.0612 | 0.2341 | 0.0012 | 0.1395 | −0.0109 | 0.0266 | −0.0009 | 0.0150 |
| | $\beta_{1,1}$ | 0.0284 | 0.0513 | 0.0352 | 0.0402 | 0.0052 | 0.0027 | 0.0042 | 0.0023 |
| | $\beta_{1,2}$ | 0.0084 | 0.0855 | 0.0148 | 0.0721 | 0.0045 | 0.0057 | 0.0029 | 0.0050 |
| | $\beta_{2,0}$ | −0.0433 | 0.0499 | −0.0171 | 0.0302 | −0.0082 | 0.2310 | −0.0029 | 0.1692 |
| | $\beta_{2,1}$ | −0.0106 | 0.0078 | −0.0069 | 0.0059 | −0.0301 | 0.0983 | −0.0304 | 0.0834 |
| | $\beta_{2,2}$ | −0.0137 | 0.0151 | −0.0071 | 0.0122 | 0.0305 | 0.1447 | 0.0251 | 0.1307 |
| | $\lambda$ | 0.0837 | 0.2443 | 0.0228 | 0.1406 | 0.0206 | 0.1745 | 0.0001 | 0.0982 |
| | $r$ | 0.0853 | 0.4439 | 0.0044 | 0.0140 | 0.1002 | 0.5810 | 0.0163 | 0.0277 |
| n = 1000 | $\beta_{1,0}$ | −0.0161 | 0.1170 | 0.0075 | 0.0639 | −0.0088 | 0.0141 | −0.0018 | 0.0077 |
| | $\beta_{1,1}$ | 0.0216 | 0.0218 | 0.0172 | 0.0193 | 0.0017 | 0.0013 | 0.0013 | 0.0012 |
| | $\beta_{1,2}$ | 0.0095 | 0.0359 | 0.0070 | 0.0330 | 0.0020 | 0.0025 | 0.0013 | 0.0023 |
| | $\beta_{2,0}$ | −0.0177 | 0.0253 | −0.0053 | 0.0144 | −0.0190 | 0.1224 | −0.0045 | 0.0867 |
| | $\beta_{2,1}$ | −0.0039 | 0.0036 | −0.0028 | 0.0032 | −0.0280 | 0.0513 | −0.0221 | 0.0473 |
| | $\beta_{2,2}$ | −0.0063 | 0.0071 | −0.0028 | 0.0061 | −0.0060 | 0.0785 | −0.0020 | 0.0750 |
| | $\lambda$ | 0.0313 | 0.1269 | 0.0049 | 0.0685 | 0.0198 | 0.0927 | 0.0024 | 0.0517 |
| | $r$ | 0.0031 | 0.0103 | 0.0000 | 0.0006 | 0.0036 | 0.0228 | 0.0014 | 0.0020 |
Table 5. The performance of $\hat{r}$ for Models A1–B3.

| Model | Sample Size | CLS Frequency | CLS Average Time (s) | CML Frequency | CML Average Time (s) |
|---|---|---|---|---|---|
| A1 | 200 | 0.6330 | 0.5269 | 0.7841 | 2.3501 |
| | 500 | 0.9159 | 0.6343 | 0.9694 | 4.8531 |
| | 1000 | 0.9913 | 0.7666 | 0.9989 | 9.0141 |
| A2 | 200 | 0.7164 | 0.4614 | 0.8564 | 2.2148 |
| | 500 | 0.9619 | 0.6495 | 0.9885 | 5.2616 |
| | 1000 | 0.9997 | 0.7582 | 0.9999 | 9.3752 |
| A3 | 200 | 0.6700 | 0.6728 | 0.8479 | 3.8789 |
| | 500 | 0.9619 | 0.9298 | 0.9880 | 9.2877 |
| | 1000 | 0.9974 | 1.0712 | 0.9994 | 16.3961 |
| B1 | 200 | 0.4341 | 0.3850 | 0.6511 | 2.0574 |
| | 500 | 0.7767 | 0.4581 | 0.8909 | 4.6666 |
| | 1000 | 0.9574 | 0.5514 | 0.9793 | 9.2278 |
| B2 | 200 | 0.1633 | 0.4005 | 0.4463 | 2.4881 |
| | 500 | 0.2720 | 0.5062 | 0.6398 | 5.6456 |
| | 1000 | 0.5250 | 0.6105 | 0.7944 | 11.5242 |
| B3 | 200 | 0.6513 | 0.4865 | 0.8069 | 3.1971 |
| | 500 | 0.9540 | 0.6448 | 0.9776 | 7.5834 |
| | 1000 | 0.9965 | 0.8010 | 0.9980 | 14.9456 |
Table 6. Empirical sizes and powers of $T_n^{(1)}$ at level 0.05.

Empirical sizes, significance level $\alpha = 0.05$:

| Model | Method | n = 200 | n = 500 | n = 1000 |
|---|---|---|---|---|
| T11 | $T_n^{(1)}$ | 0.0497 | 0.0515 | 0.0517 |
| T12 | $T_n^{(1)}$ | 0.0255 | 0.0517 | 0.0559 |
| T21 | $T_n^{(1)}$ | 0.0123 | 0.0448 | 0.0527 |
| T22 | $T_n^{(1)}$ | 0.0012 | 0.0127 | 0.0501 |

Empirical powers, significance level $\alpha = 0.05$:

| Model | Method | n = 200 | n = 500 | n = 1000 |
|---|---|---|---|---|
| T31 | $T_n^{(1)}$ | 0.8414 | 0.9507 | 0.9972 |
| T32 | $T_n^{(1)}$ | 0.5016 | 0.7979 | 0.967 |
Table 7. Empirical sizes and powers of $T_n^{(2)}$ and $T_n^{(3)}$ at level 0.05.

Empirical sizes, significance level $\alpha = 0.05$:

| Model | Method | n = 200 | n = 500 | n = 1000 |
|---|---|---|---|---|
| T41 | $T_n^{(2)}$ | 0.0580 | 0.0466 | 0.0475 |
| | $T_n^{(3)}$ | 0.0504 | 0.0502 | 0.0495 |
| T42 | $T_n^{(2)}$ | 0.0320 | 0.0413 | 0.0445 |
| | $T_n^{(3)}$ | 0.0456 | 0.0487 | 0.0509 |

Empirical powers, significance level $\alpha = 0.05$:

| Model | Method | n = 200 | n = 500 | n = 1000 |
|---|---|---|---|---|
| T51 | $T_n^{(2)}$ | 0.9987 | 1.0000 | 1.0000 |
| | $T_n^{(3)}$ | 1.0000 | 1.0000 | 1.0000 |
| T52 | $T_n^{(2)}$ | 0.9987 | 1.0000 | 1.0000 |
| | $T_n^{(3)}$ | 1.0000 | 1.0000 | 1.0000 |
Table 8. Fitting results of different models: CML, SE, $\hat{r}_{CML}$, AIC, BIC, and RMS based on the first VOW dataset.

| Model | Para. | CML | SE | $\hat{r}_{CML}$ | AIC | BIC | RMS |
|---|---|---|---|---|---|---|---|
| SETINAR(2,1) | $\alpha_1$ | 0.0169 | 0.0221 | 6 | 1395.9685 | 1406.8836 | 3.3266 |
| | $\alpha_2$ | 0.3071 | 0.0092 | | | | |
| | $\lambda$ | 5.9064 | 0.0032 | | | | |
| NBTINAR(1) | $\alpha_1$ | 0.2689 | 0.0034 | 6 | 1369.2936 | 1380.2087 | 12.3344 |
| | $\alpha_2$ | 0.3371 | 0.0041 | | | | |
| | $v$ | 17.0000 | 0.0004 | | | | |
| RCTINAR(1) | $\phi_1$ | 0.0000 | 4.4123 | 6 | 1405.0371 | 1415.9521 | 3.3656 |
| | $\phi_2$ | 0.2295 | 0.0034 | | | | |
| | $\lambda$ | 6.3869 | 0.0006 | | | | |
| BiNB-MTTINAR(1) (R = 0) | $\phi_1$ | 0.4852 | 0.0076 | 10 | 1358.6573 | 1369.5724 | 3.3655 |
| | $\phi_2$ | 0.5202 | 0.0027 | | | | |
| | $\lambda$ | 3.7781 | 0.0013 | | | | |
| BiNB-MTTINAR(1) (R = 1) | $\phi_1$ | 0.3910 | 0.0038 | 4 | 1430.5887 | 1441.5038 | 3.7434 |
| | $\phi_2$ | 0.6003 | 0.0012 | | | | |
| | $\lambda$ | 4.6621 | 0.0061 | | | | |
| TVMTTINAR(1) (R = 0) | $\beta_{1,0}$ | −0.5391 | −0.0006 | 6 | 1352.1336 | 1377.6020 | 3.1897 |
| | $\beta_{1,1}$ | 0.0203 | 0.0001 | | | | |
| | $\beta_{1,2}$ | 0.0215 | 0.0000 | | | | |
| | $\beta_{2,0}$ | 0.0300 | 0.0001 | | | | |
| | $\beta_{2,1}$ | −0.2134 | −0.0005 | | | | |
| | $\beta_{2,2}$ | 0.0573 | 0.0003 | | | | |
| | $\lambda$ | 4.0801 | 0.0030 | | | | |
| TVMTTINAR(1) (R = 1) | $\beta_{1,0}$ | −0.1093 | −0.0002 | 12 | 1366.3313 | 1391.7998 | 3.2179 |
| | $\beta_{1,1}$ | −0.1298 | −0.0003 | | | | |
| | $\beta_{1,2}$ | 0.0793 | 0.0002 | | | | |
| | $\beta_{2,0}$ | −0.0765 | −0.0002 | | | | |
| | $\beta_{2,1}$ | −0.2039 | −0.0007 | | | | |
| | $\beta_{2,2}$ | 0.0367 | 0.0002 | | | | |
| | $\lambda$ | 3.8956 | 0.0030 | | | | |
Table 9. The other fitting results of the TVMTTINAR(1) model.

| Rate | $T_n^{(1)}$ | $T_n^{(3)}$ | Mean($\mathrm{Pr}_t(\hat{\theta})$) | Var($\mathrm{Pr}_t(\hat{\theta})$) |
|---|---|---|---|---|
| 0.5018 | 2.2343 | 35.2307 | 0.0041 | 1.1116 |
Table 10. Fitting results of different models: CML, SE, $\hat{r}_{CML}$, AIC, BIC, and RMS based on the second VOW dataset.

| Model | Para. | CML | SE | $\hat{r}_{CML}$ | AIC | BIC | RMS |
|---|---|---|---|---|---|---|---|
| SETINAR(2,1) | $\alpha_1$ | 0.2470 | 0.0028 | 6 | 1257.6000 | 1268.5151 | 2.5162 |
| | $\alpha_2$ | 0.3511 | 0.0068 | | | | |
| | $\lambda$ | 4.2692 | 0.0053 | | | | |
| NBTINAR(1) | $\alpha_1$ | 0.1245 | 0.0172 | 6 | 1259.0943 | 1270.0094 | 33.9008 |
| | $\alpha_2$ | 0.1605 | 0.0104 | | | | |
| | $v$ | 39.0000 | 0.0001 | | | | |
| RCTINAR(1) | $\phi_1$ | 0.0228 | 0.0079 | 6 | 1268.9508 | 1279.8659 | 2.5308 |
| | $\phi_2$ | 0.2181 | 0.0017 | | | | |
| | $\lambda$ | 5.4232 | 0.0007 | | | | |
| BiNB-MTTINAR(1) (R = 0) | $\phi_1$ | 0.4788 | 0.0073 | 8 | 1259.9032 | 1270.8183 | 2.5413 |
| | $\phi_2$ | 0.4955 | 0.0022 | | | | |
| | $\lambda$ | 3.1526 | 0.0010 | | | | |
| BiNB-MTTINAR(1) (R = 1) | $\phi_1$ | 0.4441 | 0.0022 | 4 | 1295.9314 | 1306.8464 | 2.9048 |
| | $\phi_2$ | 0.6682 | 0.0018 | | | | |
| | $\lambda$ | 3.3134 | 0.0041 | | | | |
| TVMTTINAR(1) (R = 0) | $\beta_{1,0}$ | −0.3448 | −0.0003 | 6 | 1258.9090 | 1284.3775 | 2.5260 |
| | $\beta_{1,1}$ | 0.0404 | 0.0000 | | | | |
| | $\beta_{1,2}$ | 0.0176 | 0.0000 | | | | |
| | $\beta_{2,0}$ | −0.2200 | −0.0002 | | | | |
| | $\beta_{2,1}$ | 0.0163 | 0.0000 | | | | |
| | $\beta_{2,2}$ | −0.0080 | 0.0000 | | | | |
| | $\lambda$ | 3.4359 | 0.0063 | | | | |
| TVMTTINAR(1) (R = 1) | $\beta_{1,0}$ | −1.1063 | −0.0010 | 5 | 1254.5145 | 1279.9829 | 2.4856 |
| | $\beta_{1,1}$ | 0.1417 | 0.0001 | | | | |
| | $\beta_{1,2}$ | −0.0427 | 0.0000 | | | | |
| | $\beta_{2,0}$ | −19.9998 | −4.3550 | | | | |
| | $\beta_{2,1}$ | −0.2643 | −0.0002 | | | | |
| | $\beta_{2,2}$ | 3.3629 | 0.0030 | | | | |
| | $\lambda$ | 4.9869 | 0.0072 | | | | |
Table 11. The other fitting results of the TVMTTINAR(1) model.

| Rate | $T_n^{(1)}$ | $T_n^{(3)}$ | Mean($\mathrm{Pr}_t(\hat{\theta})$) | Var($\mathrm{Pr}_t(\hat{\theta})$) |
|---|---|---|---|---|
| 0.5267 | 1.6682 | 63.9769 | 0.0015 | 1.0148 |