Article

A New Random Coefficient Autoregressive Model Driven by an Unobservable State Variable

Yuxin Pang 1,† and Dehui Wang 2,*,†
1 School of Mathematics, Jilin University, Changchun 130012, China
2 School of Mathematics and Statistics, Liaoning University, Shenyang 110031, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2024, 12(24), 3890; https://doi.org/10.3390/math12243890
Submission received: 30 October 2024 / Revised: 5 December 2024 / Accepted: 9 December 2024 / Published: 10 December 2024
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 2nd Edition)

Abstract: A novel random coefficient autoregressive model is proposed; a key feature of the model is the non-stationarity of its state equation. The autoregressive coefficient is an unknown function of an unobservable state variable and is estimated by the local linear regression method. An iterative algorithm based on the ordinary least squares method is constructed to estimate the parameters, and the ordinary least squares residuals are used to estimate the variances of the errors. The Kalman-smoothing method is used to estimate the unobservable state variable because of its ability to deal with non-stationary stochastic processes. These methods yield analytical solutions. The performance of the estimation methods is evaluated through numerical simulation, and the model is validated using actual time series data from the S&P/HKEX Large Cap Index.

1. Introduction

Autoregressive (AR) models constitute a pivotal class within time series analysis and have found extensive applications across various domains, including economics, finance, industry, and meteorology. The traditional AR model, with fixed autoregressive coefficients, has seen rapid advancements. However, in practical problems, observational data are often nonlinear and dynamic, which cannot be explained by traditional AR models. By introducing random coefficients, the parameters of the model are allowed to change over time. This lets the models better adapt to the characteristics of and changes in complex data, thereby enhancing their fitting and predictive capabilities. This research has attracted wide interest among statisticians and scholars in related fields, and the literature on nonlinear time series is burgeoning. Notably, a well-known class of nonlinear AR models is the random coefficient AR (RCAR) models. Nicholls and Quinn (1982) [1] provided a detailed study of RCAR models, including their statistical properties, estimation methods, and hypothesis testing. Building on this foundation, Aue et al. (2006) [2] proposed the quasi-maximum likelihood method for estimating the parameters of an RCAR(1) process. Zhao et al. (2015) [3] investigated the parameter estimation of the RCAR model and its limiting properties under a sequence of martingale errors. This work was subsequently expanded by Horváth and Trapani (2016) [4] to encompass panel data, broadening the applicability of the theory. More recently, Proia and Soltane (2018) [5] introduced an RCAR process in which the coefficients are correlated. Regis et al. (2022) [6] presented a structured overview of the literature on RCAR models. The autoregressive coefficient discussed in Regis et al. (2022) [6] incorporates a stochastic process, a characteristic also shared by our model; a key distinction, however, lies in the time-varying variance of the random coefficient in our model. Furthermore, the existing structure in Regis et al. (2022) [6] to which our model is closest is the Dynamic Factor Model (DFM) mentioned in Section 6.2 of that paper. The DFM consists of three equations relating an observation, an unobservable latent process, and a time-varying parameter. Both models incorporate a state variable evolving as a random walk. The difference is that in the DFM the latent process has a linear time-varying autoregressive (TV-AR) structure, whereas in our model it is the observed process that has a nonlinear TV-AR structure. In this respect, the proposed model generalizes these existing structures. This research introduces a novel model aimed at addressing gaps in the existing literature, with an enhanced ability to handle non-stationarity and nonlinearity. By leveraging the strengths of multiple approaches, the proposed model offers a more robust framework for capturing the inherent complexity of the data. Moreover, conventional nonparametric autoregressive models were discussed by Härdle et al. (1998) [7], Kreiss and Neumann (1998) [8], and Vogt (2012) [9], where the nonparametric component is typically related to past observations. It is important to note that, despite their nonlinear nature, these models are fundamentally based on stationary processes.
Differing from the aforementioned work, this paper presents a nonlinear, non-stationary AR model. There has been some work on non-stationary RCAR models. For instance, Berkes et al. (2009) [10] considered a non-stationary RCAR(1) model by controlling the log expectation of the random coefficients. They demonstrated that, under such conditions, the variance of the error term cannot be estimated via the quasi-maximum likelihood method, while proving the asymptotic normality of the quasi-maximum likelihood estimators of the other parameters. Subsequently, Aue and Horváth (2011) [11] proposed a unified quasi-likelihood estimation procedure for both the stationary and non-stationary cases of the model. By contrast, our model features an autoregressive coefficient represented as a potentially nonlinear unknown function, whose argument at each time point is defined by a non-stationary unobservable state equation. While the estimation process becomes more intricate, this significantly broadens the model's applicability. Our model can thus be regarded as a novel random coefficient autoregressive model incorporating a time-varying state equation.
Time-varying parameter models are recognized for their superior predictive capabilities and adaptability to data, especially in economic contexts. Their defining characteristic is that the parameters change over time. The class of TV-AR time series models has been extensively studied. Ito et al. (2014, 2016, 2022) [13,14,12] studied the estimation of TV-AR and time-varying vector autoregressive (TV-VAR) models and applied them to stock prices and exchange rates. Dahlhaus et al. (1999) [15] delved into nonparametric estimation for TV-AR processes. The autoregressive coefficient in our model is an unknown function that depends on an unobservable, time-varying state variable. Compared with the traditional TV-AR model, our model is better suited to handling nonlinear and dynamic time series data.
Another important feature of our model is its non-stationarity; the combination of time-varying elements and non-stationary traits is particularly significant. The non-stationarity is primarily reflected through the state equation, which allows the model to be interpreted within the framework of state-space models. These models were initially introduced for predicting rocket trajectories in engineering control, as described by Kalman (1960) [16]. A key advantage of state-space models is their capacity to incorporate unobservable state variables into an observable model, thereby enabling the derivation of estimation results. The integration of "state-space" concepts with time series analysis has emerged as a significant trend in statistics and economics, offering effective solutions to a variety of time series analysis problems. Theil and Wage (1964) [17] discussed, from the perspective of adaptive forecasting, a model that decomposes a time series into a trend term and a seasonal term; the model was based on an autoregressive integrated moving average (ARIMA) process. Gardner (1985) [18] discussed the ARIMA(0,1,1) process from the perspective of exponential smoothing and demonstrated the rationality of the exponential smoothing method through a state-space model. State-space models offer a flexible way to decompose time series into observation and state equations. This flexibility enables better modeling of various components, including level, trend, and seasonality. Durbin and Koopman (2012) [19] provided a comprehensive overview of the application of state-space methods to time series, encompassing linear and nonlinear models, estimation methods, statistical properties, and simulation studies. In recent years, there have been advancements in applying state-space models to autoregressive time series. Kreuzer and Czado (2020) [20] presented a Gibbs sampling approach for general nonlinear state-space models with an autoregressive state equation. Azman et al. (2022) [21] implemented the state-space model framework for volatility incorporating the Kalman filter and directly forecasted cryptocurrency prices. Additionally, Sbrana (2023) [22] proposed a novel state-space method for time series forecasting by integrating a random walk with drift and autoregressive components.
Existing RCAR and state-space models primarily analyze stationary processes. However, in practical applications such as financial markets and macroeconomic data, time series often exhibit non-stationarity. Moreover, the effect of the previous moment on the current moment may not be fixed and is likely to change over time. To address these challenges, this paper introduces a novel model designed to accommodate such dynamic changes. The autoregressive coefficient of our model is an unknown function of an unobservable state variable that is governed by a non-stationary autoregressive state equation. This combination of nonlinearity, state-space formulation, and non-stationarity is novel and better suited to explaining real phenomena. Our model is applicable to a wide range of data types without presupposing the data distribution. Moreover, it offers an enhanced portrayal of data volatility, particularly for non-stationary data with trends and seasonality. Given its nonlinear and non-stationary nature, our model is well suited to capturing the complexities of time-varying financial and economic data. The accuracy of model fitting and prediction is improved because the characteristics and variations of the data are taken into account more comprehensively.
Regarding estimation, the existing methods are mainly based on the least squares and maximum likelihood classes of methods. These traditional methods usually require many conditions, such as the error terms being independent and identically distributed (i.i.d.) or normally distributed. The introduction of the state equation enables estimation using the Kalman-smoothing method. This method eliminates the need for the error terms to be i.i.d. or Gaussian, making it better suited to nonlinear and non-stationary scenarios while enhancing computational efficiency. It provides another efficient approach, especially when the model is extended to more complicated cases. A brief description of the methods used in this paper follows. The unknown function is initially estimated using the local linear method. The ordinary least squares (OLS) and Kalman-smoothing methods are used to estimate the unobservable state variable. The variances of the errors are estimated using the OLS residuals. The theoretical underpinnings and development of these methods can be found in many studies in the literature. Fan and Gijbels (1996) [23] and Fan (1993) [24] presented the theory and use of local linear regression techniques. The OLS method, noted for its flexibility and broad applicability, is further elaborated by Balestra (1970) [25] and Young and Basawa (1993) [26]. For non-stationary stochastic processes, the Kalman-smoothing estimation method is particularly suitable, as discussed by Durbin and Koopman (2012) [19]. Given the nonlinearity of our model, the idea of extended Kalman filtering (EKF) (Durbin and Koopman (2012) [19]) is used in this paper. Under the detectability condition that the filtering error tends to zero, Picard (1991) [27] proved that the EKF is a suboptimal filter and also investigated the smoothing problem. Pascual et al. (2019) [28] contributed to the understanding of the EKF for the simultaneous estimation of state and parameters in a generalized autoregressive conditional heteroscedastic (GARCH) process. Traditional treatments of Kalman filtering can also be found in Ansley and Kohn (1985) [29], Durbin and Koopman (2012) [19], Pei et al. (2019) [30], and Hamilton (1994) [31].
The rest of this paper is organized as follows. Section 2 introduces the random coefficient autoregressive model driven by an unobservable state variable. Section 3 details the estimation methods and their corresponding algorithms. Section 4 presents the results of numerical simulations. The model is applied to a real stock indices dataset for trend forecasting in Section 5. Finally, Section 6 concludes the paper.

2. Model Definition

In this section, a new random coefficient autoregressive model is introduced. Its coefficient differs from traditional random coefficients, which usually consist of a constant and a random function of time. This aspect of the model presents one of the methodological challenges addressed in this paper.
The random coefficient autoregressive model driven by an unobservable state variable is defined as
$y_t = g(\beta_t)\, y_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim N(0, \sigma_\varepsilon^2),$
$\beta_t = \beta_{t-1} + \eta_t, \quad \eta_t \sim N(0, \sigma_\eta^2),$  (1)
for $t = 1, \ldots, T$, where:
(i) $y_t$ is an observable variable and $\beta_t$ is an unobservable state variable;
(ii) $\{\varepsilon_t\}$ and $\{\eta_t\}$ are two independent sequences of i.i.d. random variables; $\varepsilon_t$ is independent of $y_{t-1}$, and $\eta_t$ is independent of $\beta_{t-1}$. The error variances, $\sigma_\varepsilon^2$ and $\sigma_\eta^2$, are assumed to be constant;
(iii) the unknown function $g(\cdot)$ is bounded, with bounded and continuous derivatives up to order 2;
(iv) for the initial values of $\{y_t\}$ and $\{\beta_t\}$, we assume $y_0 = 0$ and $\beta_0 = 0$.
Remark 1. More general initial-value conditions can also be considered. A "burn-in" period would yield a more appropriate representation of a time series for $y_0$, and $\beta_0$ could be assumed to follow a normal distribution with known mean and variance, i.e., $\beta_0 \sim N(\mu_0, \sigma_0^2)$. This may increase the volatility of the model. As our model is already non-stationary, these assumptions would not affect the subsequent studies.
The first equation of (1) is known as the observation equation, while the second is referred to as the state equation. The non-stationarity of our model stems from the cumulative nature of the state equation: its recursive formulation, $\beta_t = \beta_0 + \sum_{i=1}^{t} \eta_i$, represents a random walk process. In particular, when $g(\cdot)$ is the identity, the model coincides with the TV-AR model of Ito et al. (2022) [12].
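To make the data-generating process concrete, the following minimal sketch (in Python, with illustrative names) simulates model (1) under the initial conditions (iv); the choice $g(w) = \cos(\pi w)$ and the variance values mirror the simulation settings of Section 4.

```python
import numpy as np

def simulate_model(T, sigma_eps, sigma_eta, g=lambda b: np.cos(np.pi * b), seed=0):
    """Simulate y_t = g(beta_t) y_{t-1} + eps_t with a random-walk state (model (1))."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T + 1)      # y[0] = y_0 = 0 (initial condition (iv))
    beta = np.zeros(T + 1)   # beta[0] = beta_0 = 0
    for t in range(1, T + 1):
        beta[t] = beta[t - 1] + rng.normal(0.0, sigma_eta)          # state equation
        y[t] = g(beta[t]) * y[t - 1] + rng.normal(0.0, sigma_eps)   # observation equation
    return y, beta

# one path under the SNR = 1 setting of Section 4
y, beta = simulate_model(T=300, sigma_eps=0.02, sigma_eta=0.02)
```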
The subsequent proposition establishes the conditional mean, second-order conditional origin moment, and conditional variance of the model, which play an important role in the study of the process properties and parameter estimation.
Proposition 1. Suppose $\{y_t\}$ is a process defined by (1), and $\mathcal{F}_{y,t-1}$ is the σ-field generated by $\{y_1, \ldots, y_{t-1}, \eta_1, \ldots, \eta_t, \beta_0\}$. Then, for $t \geq 1$, we have
(1) $E[y_t \mid \mathcal{F}_{y,t-1}] = g(\beta_t) y_{t-1}$;
(2) $E[y_t^2 \mid \mathcal{F}_{y,t-1}] = g^2(\beta_t) y_{t-1}^2 + \sigma_\varepsilon^2$;
(3) $\mathrm{Var}[y_t \mid \mathcal{F}_{y,t-1}] = \sigma_\varepsilon^2$.
Proof. According to (1), we have
(1) $E[y_t \mid \mathcal{F}_{y,t-1}] = E[g(\beta_t) y_{t-1} + \varepsilon_t \mid \mathcal{F}_{y,t-1}] = E[g(\beta_t) y_{t-1} \mid \mathcal{F}_{y,t-1}] + E[\varepsilon_t \mid \mathcal{F}_{y,t-1}] = g(\beta_t) y_{t-1}$;
(2) $E[y_t^2 \mid \mathcal{F}_{y,t-1}] = E[(g(\beta_t) y_{t-1} + \varepsilon_t)^2 \mid \mathcal{F}_{y,t-1}] = E[(g(\beta_t) y_{t-1})^2 \mid \mathcal{F}_{y,t-1}] + E[\varepsilon_t^2 \mid \mathcal{F}_{y,t-1}] + 2 g(\beta_t) y_{t-1} E[\varepsilon_t \mid \mathcal{F}_{y,t-1}] = g^2(\beta_t) y_{t-1}^2 + \sigma_\varepsilon^2$;
(3) $\mathrm{Var}[y_t \mid \mathcal{F}_{y,t-1}] = E[y_t^2 \mid \mathcal{F}_{y,t-1}] - E^2[y_t \mid \mathcal{F}_{y,t-1}] = \sigma_\varepsilon^2$. □

3. Methodology

The model presented in (1) has three interesting aspects: the unknown function $g(\cdot)$, the two error variances ($\sigma_\varepsilon^2$ and $\sigma_\eta^2$), and the unobservable state variable $\{\beta_t\}_{t=1}^{T}$. The primary objective of this section is to estimate them. Assume $\{y_t\}_{t=1}^{T}$ is a sample derived from model (1). The function $g(\cdot)$ is approximated using the local linear regression method. The unobservable state variable $\{\beta_t\}_{t=1}^{T}$ is estimated utilizing both the OLS and Kalman-smoothing methods. The OLS residuals are used to estimate $\sigma_\varepsilon^2$ and $\sigma_\eta^2$, and these estimators $\hat\sigma_\varepsilon^2$ and $\hat\sigma_\eta^2$ are then incorporated into the Kalman-smoothing method. Additionally, the corresponding algorithmic implementations for these estimation methods are provided.

3.1. Local Linear Regression Method

Suppose that $\{y_t\}_{t=1}^{T}$ is a sample generated from model (1) and that $\{\beta_t\}_{t=1}^{T}$ is known at this stage. To estimate the unknown function $g(\cdot)$, we employ the local linear method. Namely, for any given point w, consider the first-order Taylor series expansion
$g(W) \approx g(w) + g'(w)(W - w),$  (2)
where W is a point in a small neighborhood of w. Denote $a = g(w)$ and $b = g'(w)$. For predetermined parameters $\beta_t$, the estimators $\hat g(w; \beta_t) = \hat a$ and $\hat g'(w; \beta_t) = \hat b$ of $g(w)$ and $g'(w)$ in (2) are found by minimizing the weighted sum of squares
$\sum_{t=1}^{T} \{y_t - [a + b(\beta_t - w)] y_{t-1}\}^2 K_h(\beta_t - w)$  (3)
with respect to a and b. The function $K_h(\cdot) = K(\cdot/h)/h$ is a kernel weight, with $K(\cdot)$ a non-negative kernel function, often a symmetric probability density function, and $h = h(T) > 0$ a bandwidth. Upon straightforward derivation, we obtain the following estimators:
$\hat g(w; \beta_t) = \frac{A_1 B_1 - A_2 B_0}{A_1^2 - A_0 A_2}, \qquad \hat g'(w; \beta_t) = \frac{A_1 B_0 - A_0 B_1}{A_1^2 - A_0 A_2},$  (4)
provided that $A_1^2 - A_0 A_2 \neq 0$. The sums $A_j$ and $B_j$ are defined, respectively, as
$A_j = \sum_{t=1}^{T} y_{t-1}^2 (\beta_t - w)^j K_h(\beta_t - w), \quad j = 0, 1, 2,$
and
$B_j = \sum_{t=1}^{T} y_t y_{t-1} (\beta_t - w)^j K_h(\beta_t - w), \quad j = 0, 1.$
The derivation is detailed in Appendix A. With the estimator $\hat g(\cdot; \beta_t)$ for the unknown function now available, the estimation of the parameter $\beta_t$ is discussed in the subsequent sections.
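As a sketch of how the estimators in (4) can be evaluated, the helper below computes $\hat g(w; \beta_t)$ and $\hat g'(w; \beta_t)$ with a Gaussian kernel and the rule-of-thumb bandwidth adopted in Section 4; the function names and array layout (with `y` and `beta` as produced by the simulation sketch above) are illustrative assumptions, as is reading the bandwidth's σ as the standard deviation of the smoothing variable.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def local_linear_g(w, y, beta, h):
    """Evaluate g_hat(w; beta_t) and g_hat'(w; beta_t) from Equation (4).
    y and beta have length T + 1, with y[0] = y_0 and beta[0] = beta_0."""
    yt, ylag, b = y[1:], y[:-1], beta[1:]
    Kh = gaussian_kernel((b - w) / h) / h                           # K_h(beta_t - w)
    A = [np.sum(ylag ** 2 * (b - w) ** j * Kh) for j in range(3)]   # A_0, A_1, A_2
    B = [np.sum(yt * ylag * (b - w) ** j * Kh) for j in range(2)]   # B_0, B_1
    denom = A[1] ** 2 - A[0] * A[2]                                 # must be nonzero
    return (A[1] * B[1] - A[2] * B[0]) / denom, (A[1] * B[0] - A[0] * B[1]) / denom

# Silverman's rule-of-thumb bandwidth (Section 4), applied to the smoothing variable
h = 1.06 * np.std(beta[1:]) * (len(beta) - 1) ** (-1 / 5)
g0, g0p = local_linear_g(0.0, y, beta, h)
```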

3.2. OLS Estimation and Its Implementation

In practice, the variances of the errors, which represent the uncertainty or noise in the data, are often unknown. In this subsection, we estimate them using OLS. First, $\beta_t$ is estimated by OLS; then, the OLS residuals are used to estimate $\sigma_\varepsilon^2$ and $\sigma_\eta^2$.
Now, we have the estimator $\hat g(\cdot; \beta_t)$. Assume an initial estimate $\hat\beta_t$ of $\beta_t$. Similar to (2), $g(\beta_t)$ can be replaced by the first-order Taylor series expansion of $\hat g(\beta_t)$ at $\hat\beta_t$. By neglecting higher-order terms, the model (1) can be linearized as
$y_t = [\hat g(\hat\beta_t; \hat\beta_t) + \hat g'(\hat\beta_t; \hat\beta_t)(\beta_t - \hat\beta_t)] y_{t-1} + \varepsilon_t, \qquad \beta_t = \beta_{t-1} + \eta_t.$  (5)
To simplify the problem, we express it in the following matrix form:
$\mathbf{Y} = \mathbf{Z} \hat{\mathbf{g}}_{\hat\beta_t} - \mathbf{Z} \hat{\mathbf{g}}'_{\hat\beta_t} \hat{\boldsymbol\beta}_t + \mathbf{Z} \hat{\mathbf{g}}'_{\hat\beta_t} \boldsymbol\beta + \boldsymbol\varepsilon, \qquad \boldsymbol\beta = \mathbf{C}(\boldsymbol\beta_0 + \boldsymbol\eta),$  (6)
where
$\mathbf{Y} = (y_1, \ldots, y_T)^T$, $\boldsymbol\beta = (\beta_1, \ldots, \beta_T)^T$, $\mathbf{Z} = \mathrm{diag}(y_0, \ldots, y_{T-1})$,
$\hat{\boldsymbol\beta}_t = (\hat\beta_1, \ldots, \hat\beta_T)^T$, $\hat{\mathbf{g}}_{\hat\beta_t} = (\hat g(\hat\beta_1), \ldots, \hat g(\hat\beta_T))^T$, $\hat{\mathbf{g}}'_{\hat\beta_t} = \mathrm{diag}(\hat g'(\hat\beta_1), \ldots, \hat g'(\hat\beta_T))$,
$\boldsymbol\beta_0 = (\beta_0, 0, \ldots, 0)^T$, $\boldsymbol\varepsilon = (\varepsilon_1, \ldots, \varepsilon_T)^T$, $\boldsymbol\eta = (\eta_1, \ldots, \eta_T)^T$, and $\mathbf{C}$ is the $T \times T$ lower-triangular matrix of ones.
The model (6) can be written in another matrix form to apply conventional regression analysis:
$\begin{pmatrix} \mathbf{Y} - \mathbf{Z}\hat{\mathbf{g}}_{\hat\beta_t} + \mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t}\hat{\boldsymbol\beta}_t \\ \boldsymbol\beta_0 \end{pmatrix} = \begin{pmatrix} \mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t} \\ \mathbf{C}^{-1} \end{pmatrix} \boldsymbol\beta + \begin{pmatrix} \boldsymbol\varepsilon \\ -\boldsymbol\eta \end{pmatrix}.$  (7)
Then, the OLS estimate of $\boldsymbol\beta$ is
$\hat{\boldsymbol\beta} = \big[\hat{\mathbf{g}}'^{2}_{\hat\beta_t} \mathbf{Z}^2 + (\mathbf{C}^T)^{-1}\mathbf{C}^{-1}\big]^{-1} \big[\hat{\mathbf{g}}'_{\hat\beta_t} \mathbf{Z} \mathbf{Y} - \hat{\mathbf{g}}'_{\hat\beta_t} \mathbf{Z}^2 \hat{\mathbf{g}}_{\hat\beta_t} + \hat{\mathbf{g}}'^{2}_{\hat\beta_t} \mathbf{Z}^2 \hat{\boldsymbol\beta}_t + (\mathbf{C}^T)^{-1} \boldsymbol\beta_0\big].$  (8)
The derivation is detailed in Appendix B.
For the implementation of the OLS estimation, an iterative algorithm is outlined as follows (see the sketch after Equation (9)):
  • Step 1. Initialization: specify the initial estimator $\hat{\boldsymbol\beta}$ by fitting the state equation, or simply set it to some reasonable numerical value.
  • Step 2. Calculate $\hat g(\cdot; \hat\beta_t)$ and $\hat g'(\cdot; \hat\beta_t)$ using Equation (4), replacing $\beta_t$ with $\hat\beta_t$ in all formulas.
  • Step 3. Update $\hat{\boldsymbol\beta}$ using Equation (8).
  • Step 4. Repeat Steps 2 and 3 until convergence to obtain the iterative estimate of $\boldsymbol\beta$.
Then, the estimators of $\sigma_\varepsilon^2$ and $\sigma_\eta^2$ are defined by
$\hat\sigma_\varepsilon^2 = \frac{1}{T} \sum_{t=1}^{T} [y_t - \hat g(\hat\beta_t; \hat\beta_t) y_{t-1}]^2, \qquad \hat\sigma_\eta^2 = \frac{1}{T} \sum_{t=1}^{T} (\hat\beta_t - \hat\beta_{t-1})^2.$  (9)
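The following sketch assembles Steps 1–4 and the variance estimators (9), reusing `local_linear_g` from the previous sketch; it assumes $\beta_0 = 0$ (so the $(\mathbf{C}^T)^{-1}\boldsymbol\beta_0$ term in (8) vanishes), the random-walk initialization with an illustrative step size follows Step 1's suggestion, and the stopping rule is an illustrative choice.

```python
import numpy as np

def ols_estimate(y, h, n_iter=50, tol=1e-6):
    """Iterative OLS estimation of beta via Equation (8), plus Equation (9)."""
    T = len(y) - 1
    Z = np.diag(y[:-1])                       # Z = diag(y_0, ..., y_{T-1})
    C = np.tril(np.ones((T, T)))              # random-walk accumulation matrix
    Cinv = np.linalg.inv(C)
    P = Cinv.T @ Cinv                         # (C^T)^{-1} C^{-1}
    Y = y[1:]
    rng = np.random.default_rng(0)
    beta_hat = np.cumsum(rng.normal(0.0, 0.02, T))   # Step 1: random-walk start
    for _ in range(n_iter):
        bfull = np.concatenate(([0.0], beta_hat))
        est = np.array([local_linear_g(b, y, bfull, h) for b in beta_hat])
        g, gp = est[:, 0], est[:, 1]          # Step 2: g_hat and g_hat' at beta_hat
        Gp = np.diag(gp)
        lhs = Gp @ Z @ Z @ Gp + P             # Step 3: Equation (8), beta_0 term = 0
        rhs = Gp @ Z @ Y - Gp @ Z @ Z @ g + Gp @ Z @ Z @ Gp @ beta_hat
        beta_new = np.linalg.solve(lhs, rhs)
        done = np.max(np.abs(beta_new - beta_hat)) < tol
        beta_hat = beta_new                   # Step 4: iterate until convergence
        if done:
            break
    bfull = np.concatenate(([0.0], beta_hat))
    g = np.array([local_linear_g(b, y, bfull, h)[0] for b in beta_hat])
    sigma_eps2 = np.mean((Y - g * y[:-1]) ** 2)      # Equation (9)
    sigma_eta2 = np.mean(np.diff(bfull) ** 2)
    return beta_hat, sigma_eps2, sigma_eta2
```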

3.3. Kalman-Smoothing Estimation and Its Implementation

The extended Kalman filter (EKF) operates on the principle of linearizing the model before applying the Kalman filter to the linearized version. We apply the EKF concept to linearize the model and derive the Kalman-smoothing estimation for the parameters within this linearized framework. Expand the unknown function $g(\beta_t)$ from (1) in a Taylor series around the expected value $E(\beta_t)$; the model (1) implicitly poses $\beta_t \sim N(\beta_0, t\sigma_\eta^2)$. By neglecting higher-order terms, the linearized model is derived as
$y_t = [g(\beta_0) + g'(\beta_0)(\beta_t - \beta_0)] y_{t-1} + \varepsilon_t, \qquad \beta_t = \beta_{t-1} + \eta_t,$  (10)
where $g(\beta_0)$ and $g'(\beta_0)$ are determined using (4). Following Durbin and Koopman (2012) [19], we use the matrix form of model (10) to derive the Kalman-smoothed estimate of $\boldsymbol\beta$ under the assumption $\beta_0 = 0$. For $t = 1, \ldots, T$, the model is expressed as
$\mathbf{Y} = \mathbf{Z}\mathbf{g}_{\beta_0} + \mathbf{Z}\mathbf{g}'_{\beta_0}\boldsymbol\beta + \boldsymbol\varepsilon, \quad \boldsymbol\varepsilon \sim N(\mathbf{0}, \Sigma_\varepsilon), \qquad \boldsymbol\beta = \mathbf{C}\boldsymbol\eta, \quad \boldsymbol\eta \sim N(\mathbf{0}, \Sigma_\eta),$  (11)
with $\Sigma_\varepsilon = \sigma_\varepsilon^2 \mathbf{I}_T$ and $\Sigma_\eta = \sigma_\eta^2 \mathbf{I}_T$.
According to the regression lemma of Durbin and Koopman (2012) [19], the Kalman-smoothed estimate of $\boldsymbol\beta$ is the conditional expectation given all observations $y_t$:
$\hat{\boldsymbol\beta} = E(\boldsymbol\beta \mid \mathbf{Y}) = E(\boldsymbol\beta) + \mathrm{Cov}(\boldsymbol\beta, \mathbf{Y}) [\mathrm{Var}(\mathbf{Y})]^{-1} [\mathbf{Y} - E(\mathbf{Y})].$  (12)
Proposition 2. Under the condition that the error variance matrices ($\Sigma_\varepsilon$ and $\Sigma_\eta$) are known, the Kalman-smoothed estimator for model (11) is given by
$\hat{\boldsymbol\beta} = \mathbf{C}\Sigma_\eta\mathbf{C}^T \mathbf{g}'^{T}_{\beta_0} \mathbf{Z}^T \big[\mathbf{Z}\mathbf{g}'_{\beta_0} \mathbf{C}\Sigma_\eta\mathbf{C}^T \mathbf{g}'^{T}_{\beta_0} \mathbf{Z}^T + \Sigma_\varepsilon\big]^{-1} \big[\mathbf{Y} - \mathbf{Z}\mathbf{g}_{\beta_0}\big].$  (13)
Proof. From model (11), we establish the following:
(i) $E(\boldsymbol\beta) = E(\mathbf{C}\boldsymbol\eta) = \mathbf{0}$;
(ii) $\mathrm{Cov}(\boldsymbol\beta, \mathbf{Y} \mid \mathbf{Z}) = \mathrm{Var}(\boldsymbol\beta) \mathbf{g}'^{T}_{\beta_0} \mathbf{Z}^T = \mathbf{C}\Sigma_\eta\mathbf{C}^T \mathbf{g}'^{T}_{\beta_0} \mathbf{Z}^T$;
(iii) $\mathrm{Var}(\mathbf{Y} \mid \mathbf{Z}) = \mathbf{Z}\mathbf{g}'_{\beta_0} \mathrm{Var}(\boldsymbol\beta) \mathbf{g}'^{T}_{\beta_0} \mathbf{Z}^T + \Sigma_\varepsilon = \mathbf{Z}\mathbf{g}'_{\beta_0} \mathbf{C}\Sigma_\eta\mathbf{C}^T \mathbf{g}'^{T}_{\beta_0} \mathbf{Z}^T + \Sigma_\varepsilon$;
(iv) $E(\mathbf{Y} \mid \mathbf{Z}) = E(\mathbf{Z}\mathbf{g}_{\beta_0} + \mathbf{Z}\mathbf{g}'_{\beta_0}\boldsymbol\beta + \boldsymbol\varepsilon \mid \mathbf{Z}) = \mathbf{Z}\mathbf{g}_{\beta_0}$.
Substituting these into Equation (12) yields the Kalman-smoothed estimate of $\boldsymbol\beta$. □
Remark 2. The elements of the matrices $\Sigma_\varepsilon$ and $\Sigma_\eta$ are obtained from the OLS residual estimators in Equation (9).
To implement the Kalman-smoothing estimation, the iterative algorithm is as follows (a sketch of the update follows this list):
  • Step 1. Initialization: specify the initial estimator $\hat{\boldsymbol\beta}$ by fitting the state equation, or simply set it to some reasonable numerical value.
  • Step 2. Compute $\hat{\mathbf{g}}_{\beta_0}$ and $\hat{\mathbf{g}}'_{\beta_0}$ by calculating $\hat g(\beta_0; \hat\beta_t)$ and $\hat g'(\beta_0; \hat\beta_t)$ with (4).
  • Step 3. Update $\hat{\boldsymbol\beta}$ using
    $\hat{\boldsymbol\beta}_{\mathrm{new}} = \mathbf{C}\Sigma_\eta\mathbf{C}^T \hat{\mathbf{g}}'(\beta_0; \hat{\boldsymbol\beta})^T \mathbf{Z}^T \big[\mathbf{Z}\hat{\mathbf{g}}'(\beta_0; \hat{\boldsymbol\beta}) \mathbf{C}\Sigma_\eta\mathbf{C}^T \hat{\mathbf{g}}'(\beta_0; \hat{\boldsymbol\beta})^T \mathbf{Z}^T + \Sigma_\varepsilon\big]^{-1} \big[\mathbf{Y} - \mathbf{Z}\hat{\mathbf{g}}(\beta_0; \hat{\boldsymbol\beta})\big].$
  • Step 4. Repeat Steps 2 and 3 until convergence to obtain the iterative estimate of $\boldsymbol\beta$.
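A minimal sketch of the Step 3 update, Equation (13), assuming $\beta_0 = 0$ and $\Sigma_\varepsilon = \hat\sigma_\varepsilon^2 \mathbf{I}$, $\Sigma_\eta = \hat\sigma_\eta^2 \mathbf{I}$ from the OLS residuals (Remark 2); in the full algorithm this update is alternated with re-evaluating $\hat g(\beta_0; \hat{\boldsymbol\beta})$ and $\hat g'(\beta_0; \hat{\boldsymbol\beta})$ from (4) until convergence.

```python
import numpy as np

def kalman_smooth_update(y, g0, g0p, sigma_eps2, sigma_eta2):
    """One Kalman-smoothing update of beta from Equation (13), with beta_0 = 0.
    g0 and g0p are g_hat(beta_0; beta_hat) and g_hat'(beta_0; beta_hat) from (4)."""
    T = len(y) - 1
    C = np.tril(np.ones((T, T)))
    V = sigma_eta2 * (C @ C.T)                    # Var(beta) = C Sigma_eta C^T
    ZG = g0p * np.diag(y[:-1])                    # Z g'_{beta_0}
    S = ZG @ V @ ZG.T + sigma_eps2 * np.eye(T)    # Var(Y | Z)
    resid = y[1:] - g0 * y[:-1]                   # Y - Z g_{beta_0} = Y - E(Y | Z)
    return V @ ZG.T @ np.linalg.solve(S, resid)   # Cov(beta, Y) Var(Y)^{-1} (Y - E Y)
```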

4. Simulation

In this section, numerical simulations of the two proposed methods are used to assess the effectiveness of parameter estimation under identical conditions. We select sample sizes $T = 100, 200, 300$ with $M = 1000$ replications for each parameter configuration. Given the non-stationarity of our model, the Gaussian kernel $K(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ is applied throughout the simulation. Compared with other kernel functions, the Gaussian kernel is widely used for its smoothness and good mathematical properties, especially when dealing with data with different variances and distributions, which helps to reduce the variance of the estimates and provide stable estimates. Since the Gaussian kernel is used, we employ Silverman's (1986) [32] "rule of thumb" to select the bandwidth, $h = 1.06 \sigma T^{-1/5}$, where σ is the standard deviation of the dependent variable; this is a common practice in the field. In the data generation process, experimental data are generated according to model (1), with $g(w) = \cos(\pi w)$, $\sigma_\varepsilon^2 \in \{0.07^2, 0.02^2, 0.01^2\}$, and $\sigma_\eta^2 = 0.02^2$. The selection of these parameters is guided by the results of the real data example. Meanwhile, the signal-to-noise ratio (SNR), calculated as the variance of $\eta_t$ relative to that of $\varepsilon_t$ (Ito et al. (2022) [12]), serves as a guide; three representative SNR values, {0.08, 1, 4}, are obtained by adjusting the variance of the error term in the observation equation. Three sample paths of our model are plotted in Figure 1. We can see that the sample paths are non-stationary and that varying the parameter combinations changes the dispersion of the samples.
The performance of the parameter estimates derived from the two methods is assessed using the mean absolute deviation (MAD) and the mean squared error (MSE). Let $\beta_{t,m}$ represent the true values and $\hat\beta_{t,m}$ the corresponding estimates. The sample means and the evaluation criteria are defined as
$\bar\beta = \frac{1}{MT} \sum_{m=1}^{M} \sum_{t=1}^{T} \beta_{t,m}, \qquad \bar{\hat\beta} = \frac{1}{MT} \sum_{m=1}^{M} \sum_{t=1}^{T} \hat\beta_{t,m},$
$\mathrm{MAD} = \frac{1}{MT} \sum_{m=1}^{M} \sum_{t=1}^{T} |\beta_{t,m} - \hat\beta_{t,m}|, \qquad \mathrm{MSE} = \frac{1}{MT} \sum_{m=1}^{M} \sum_{t=1}^{T} (\beta_{t,m} - \hat\beta_{t,m})^2.$  (14)
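Computed directly, the criteria in (14) are averages over all $M \times T$ entries; a minimal sketch with illustrative array names (rows indexing replications):

```python
import numpy as np

def criteria(beta_true, beta_est):
    """Sample means, MAD, and MSE of Equation (14); inputs are M x T arrays."""
    mad = np.mean(np.abs(beta_true - beta_est))
    mse = np.mean((beta_true - beta_est) ** 2)
    return beta_true.mean(), beta_est.mean(), mad, mse
```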
The MAD compares each element of $\boldsymbol\beta$ and can be interpreted as the average distance between the estimate and the true process, reflecting the level of similarity between $\boldsymbol\beta$ and $\hat{\boldsymbol\beta}$. The non-stationary nature of the data generation process can sometimes lead to outliers. In our analysis, we focus on the means of the estimated parameters. The simulation results are summarized in Table 1 and Table 2. The estimators of $\sigma_\varepsilon^2$ and $\sigma_\eta^2$, derived from Equation (9), are formulated as
$\hat\sigma_\varepsilon^2 = \frac{1}{MT} \sum_{m=1}^{M} \sum_{t=1}^{T} [y_{t,m} - \hat g(\hat\beta_{t,m}; \hat\beta_{t,m}) y_{t-1,m}]^2, \qquad \hat\sigma_\eta^2 = \frac{1}{MT} \sum_{m=1}^{M} \sum_{t=1}^{T} (\hat\beta_{t,m} - \hat\beta_{t-1,m})^2,$
where $y_{t,m}$ represents the observation at time t generated in each replication. The simulation results are given in Table 3. As the sample size increases, the variance estimates asymptotically approach the true values, reflecting the consistency of the estimators.
Table 1 and Table 2 display important statistical metrics for the estimators obtained from 1000 replications across various sample sizes. First, note that the order of magnitude of the MAD is much larger than that of $\bar\beta$. As shown in (14), $\bar\beta$ and $\bar{\hat\beta}$ are averages over all true and estimated values. Since the expectation of $\beta_t$ is 0, the overall sample means are all very close to 0; the values of $\beta_{t,m}$ can be both positive and negative and may cancel each other out when absolute values are not used. In contrast, the MAD compares the true and estimated values element-wise at each time point. This, combined with the variability of $\beta_t$, explains why the MAD exhibits a relatively large magnitude. Observing the trend, both the MAD and MSE decrease as the sample size increases, indicating that the precision of the OLS and Kalman-smoothing estimations improves with sample size. Specifically, for smaller sample sizes (e.g., T = 100), the estimation error is relatively large. However, as the sample size increases to T = 300, the error metrics for both methods decrease significantly, indicating that larger sample sizes reduce estimation error and improve predictive performance. Additionally, while varying parameter selections have a minimal impact on the estimation outcomes, it is evident that the impact on the two methods differs: OLS performs better at lower SNR values, as indicated by the smaller MAD and MSE in Table 1 for T = 300, whereas the Kalman-smoothing method works relatively well when the SNR approaches 1. Furthermore, comparing the 18 MAD and MSE values in Table 1 and Table 2 across the nine parameter settings (various SNR values and sample sizes T), the OLS estimation performs better for 4 values, while the Kalman-smoothing estimation performs better for the remaining 14. Their differences, however, are very small. In practice, the Kalman-smoothing estimation is observed to be more computationally efficient; therefore, for models of this nature, it may be deemed more appropriate. Table 3 shows the behavior of the OLS residual estimators: as the sample size increases, they get closer to the true values, an expected result since larger samples typically yield higher accuracy.
We calculate the mean of the estimation measures to focus on the overall performance of the methods. However, this approach does not allow us to verify the proximity of each estimated value $\hat\beta_{t,m}$ to the true $\beta_{t,m}$ across the sample period. Moreover, as the sample size increases, the number of parameters to be estimated also grows, potentially introducing a significant bias in fitting the function $g(\cdot)$. To address this, we fit the curve of $\cos(\pi w)$ by substituting the true $\beta_{t,m}$ into the estimator $\hat g(w; \beta_{t,m})$ from Equation (4). Because there are as many estimators as observations T and the model is non-stationary, asymptotic theory is not pursued in this section; the asymptotic behavior of $\hat g(\cdot)$ is instead examined empirically using histograms and Q-Q plots. Since the values of $\beta_t$ are near 0, Figure 2 shows the histograms and Q-Q plots of $\hat g(\cdot)$ at the point 0 for the three parameter selections with sample size T = 300. The vertical bars and fitted normal density curves in the histograms, and the scatter and reference lines in the Q-Q plots, are very close to each other, meaning that empirically the estimates of the unknown function are asymptotically normal. Figure 3 presents the fitted curves of the function for SNR = {0.08, 1, 4} with sample size T = 300; the fitted curves are evidently close to the real curves. The estimators of the unknown function $g(\cdot)$ are assessed using the root mean squared error, $\mathrm{RMSE} = \big[\frac{1}{T_{grid}} \sum_{t=1}^{T_{grid}} (\hat g(w_t) - g(w_t))^2\big]^{1/2}$, where $\{w_t, t = 1, \ldots, T_{grid}\}$ are regular grid points. Table 4 indicates that the RMSE values for $\hat g(\cdot)$ are small and decrease as the sample size increases.
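The RMSE computation can be sketched as follows, reusing `local_linear_g` from Section 3; the grid around 0 in the usage comment is an illustrative choice motivated by the values of $\beta_t$ lying near 0.

```python
import numpy as np

def rmse_g(y, beta_true, h, grid):
    """RMSE of the fitted g_hat over regular grid points, against g(w) = cos(pi w)."""
    g_fit = np.array([local_linear_g(w, y, beta_true, h)[0] for w in grid])
    return np.sqrt(np.mean((g_fit - np.cos(np.pi * grid)) ** 2))

# e.g. rmse_g(y, beta, h, np.linspace(-0.2, 0.2, 41)) on one simulated path
```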
In summary, the simulation results substantiate the validity and effectiveness of the estimation methods used in this paper.

5. Real Data Example

This section applies our model and estimation methods to the closing points of the S&P/HKEX Large Cap Index (SPHKL). The dataset, spanning 6 September 2020 to 28 January 2024, is accessible online at https://cn.investing.com/indices/s-p-hkex-lc-chart (accessed on 11 February 2024). A stock index encapsulates the overall trend and volatility of stock prices in the market, making it a complex financial time series with characteristics such as time dependence, nonlinearity, and non-stationarity. Given these attributes, forecasting stock indices holds substantial practical importance for both investors and regulatory bodies. The dataset comprises 178 weekly observations; Figure 4 illustrates the closing points along with the partial autocorrelation function (PACF) plot. It is intuitively obvious from the sample path that the dataset is non-stationary.
We have also performed the augmented Dickey–Fuller (ADF) test to examine the stationarity of the data. The results indicate the presence of a unit root, with a p-value of 0.2172, suggesting that the dataset exhibits non-stationary behavior with a stochastic trend. The PACF plot corroborates this by revealing first-order autocorrelation, supporting the suitability of our model for this dataset. The descriptive statistics for the data are displayed in Table 5. The large variance implies that the data exhibit high volatility and randomness, and the complexity of data interpretation increases accordingly.
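A minimal way to reproduce such a check in Python, assuming the weekly closing points are held in an array `closing`; `statsmodels` provides the ADF test:

```python
from statsmodels.tsa.stattools import adfuller

# adfuller returns (statistic, p-value, used lags, nobs, critical values, icbest)
adf_stat, p_value = adfuller(closing)[:2]
print(f"ADF statistic = {adf_stat:.4f}, p-value = {p_value:.4f}")
# a p-value around 0.22, as reported above, fails to reject the unit-root null
```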
For the analysis, the dataset is divided into a training set, encompassing 168 data points from 6 September 2020 to 19 November 2023, and a test set, comprising 10 data points from 26 November 2023 to 28 January 2024. The training set is used to fit the model and estimate the parameters, while the test set serves to evaluate the predictive ability of the model. The OLS method is used to estimate the parameters, and the local linear regression method, via $\hat g(\hat\beta_t; \hat\beta_t)$ in Equation (4), is used to estimate the function. The estimated function $\hat g(\cdot)$ is presented in Figure 5. The figure shows clear, unsteady volatility, suggesting that the model captures the dynamics in the time series and that the autoregressive coefficient is not constant but varies over time. The estimated value of $\bar{\hat\beta}$ is 0.0016, while the variances are estimated as $\hat\sigma_\varepsilon^2 = 0.00787$ and $\hat\sigma_\eta^2 = 0.00005$.
The h-step-ahead predictive values of the stock index data are formulated as
$E(y_{t+h} \mid y_t) = \prod_{k=1}^{h} g(\hat\beta_{t+k})\, y_t.$
The training set expands as more observations become available because $\hat\beta_t$ changes with t; that is, for each additional prediction step, a further $\beta_t$ must be estimated. Because of the large order of magnitude of the sample data, the data are min–max normalized, $(y_t - \min(y_t)) / (\max(y_t) - \min(y_t))$, to eliminate the impact of the order of magnitude and allow the data to be analyzed on the same scale. The descriptive statistics for the transformed data are displayed in Table 6. The sample path of the normalized data and the forecasts are presented in Figure 5. The prediction on the test set appears to overestimate variability. This may be due to the complexity of our model, which could lead to overfitting; the model may then capture noise in the training data rather than the underlying data-generating process, thereby inflating the estimated variability.
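A simplified forecasting sketch follows: the data are min–max normalized, and, since the best prediction of a random walk is its last value, $\hat\beta_{t+k}$ is held at the last smoothed estimate here; the paper instead re-estimates $\boldsymbol\beta$ on the expanding training set at each step, so this sketch is a simplifying assumption.

```python
import numpy as np

def minmax(x):
    """Min-max normalization applied before fitting and forecasting."""
    return (x - x.min()) / (x.max() - x.min())

def forecast(y_train, beta_hat, h, g_fn):
    """h-step-ahead forecast in the spirit of E(y_{t+h} | y_t) = prod_k g(beta_{t+k}) y_t,
    holding the state at its last smoothed value (a simplifying assumption)."""
    g_val = g_fn(beta_hat[-1])
    return np.array([g_val ** k * y_train[-1] for k in range(1, h + 1)])
```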
The mean absolute deviation (MAD) and the root mean square error (RMSE) between the real stock index data and the predicted data are used to evaluate the predictive effectiveness of the model. Table 7 presents the results, comparing our model's predictive performance with that of the RCA(1) model proposed by Nicholls and Quinn (1982) [1], defined as $y_t = (\alpha + B_t) y_{t-1} + \varepsilon_t$, where α is a constant parameter and $B_t$ is a random term with mean zero and variance γ. The parameter estimates obtained using the least squares method are $\hat\alpha = 0.99207$ and $\hat\gamma = 0.05494$. The data predicted by the RCA(1) model are also shown in Figure 5. A comparison of the prediction curves demonstrates that our model outperforms the RCA(1) model in predicting non-stationary data.
The results verify the performance of our model and method. To assess the adequacy of the model, we analyze the standardized Pearson residuals (a sketch of this check follows). Figure 6 exhibits the ACF and PACF plots of the residuals, which indicate the absence of correlation among the residuals. For our model, the mean and variance of the Pearson residuals are 0.0685 and 1.0003, respectively. As discussed in Aleksandrov and Weiß (2019) [33], for an adequately chosen model, the variance of the residuals should be approximately 1. Accordingly, the proposed model is deemed to fit the data satisfactorily.
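As a sketch of the residual check, the standardized Pearson residuals implied by the conditional moments in Proposition 1 are $e_t = (y_t - \hat g(\hat\beta_t) y_{t-1}) / \hat\sigma_\varepsilon$; an adequately chosen model should give a residual variance close to 1.

```python
import numpy as np

def pearson_residuals(y, g_hat_vals, sigma_eps2):
    """Standardized Pearson residuals e_t = (y_t - g_hat(beta_hat_t) y_{t-1}) / sigma_hat_eps,
    based on the conditional mean and variance in Proposition 1."""
    e = (y[1:] - g_hat_vals * y[:-1]) / np.sqrt(sigma_eps2)
    return e, e.mean(), e.var()
```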

6. Conclusions

In recent years, despite advances in autoregressive models with random coefficients, research on nonlinear time series models driven by non-stationary state equations remains scarce. To address the intricacies of nonlinear and non-stationary characteristics, we introduce a novel random coefficient autoregressive model incorporating an unobservable state variable. This model significantly enhances flexibility and efficiency in handling non-stationary data, particularly within the realms of economics and finance. To estimate the model's unknown function and parameters, we have developed methodologies using local linear regression, OLS, and Kalman smoothing, and we present the analytical formulas for $\hat g(\cdot)$, $\hat g'(\cdot)$, $\hat{\boldsymbol\beta}$, $\hat\sigma_\varepsilon^2$, and $\hat\sigma_\eta^2$ derived from these methods. Numerical simulations demonstrate that our estimation approach is reliable, given a reasonably large sample size, and the model exhibits commendable performance on a real data example. Future research will focus on proving the asymptotic theory of the estimators in non-stationary processes, while extending our findings to a higher-order AR(p) model represents a significant direction for advancing this field.

Author Contributions

All authors contributed equally to the development of this paper. Conceptualization, D.W. and Y.P.; methodology, Y.P.; software, Y.P.; validation, Y.P. and D.W.; formal analysis, Y.P.; investigation, Y.P.; resources, D.W.; data curation, Y.P.; writing—original draft preparation, Y.P.; writing—review and editing, D.W.; visualization, Y.P.; supervision, D.W.; project administration, D.W.; funding acquisition, D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 12271231, 12001229, 1247012719) and the Social Science Planning Foundation of Liaoning Province (No. L22ZD065).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Solving Problem (3)

Taking partial derivatives of
$\sum_{t=1}^{T} \{y_t - [a + b(\beta_t - w)] y_{t-1}\}^2 K_h(\beta_t - w)$
with respect to a and b, and setting the partial derivatives equal to 0, yields the following system of equations:
$-2 \sum_{t=1}^{T} \{y_t - [a + b(\beta_t - w)] y_{t-1}\} y_{t-1} K_h(\beta_t - w) = 0,$
$-2 \sum_{t=1}^{T} \{y_t - [a + b(\beta_t - w)] y_{t-1}\} (\beta_t - w) y_{t-1} K_h(\beta_t - w) = 0.$  (A1)
From Equation (A1), we have
$a = \frac{\sum_{t=1}^{T} [y_t y_{t-1} K_h(\beta_t - w) - b\, y_{t-1}^2 (\beta_t - w) K_h(\beta_t - w)]}{\sum_{t=1}^{T} y_{t-1}^2 K_h(\beta_t - w)},$  (A2)
$b = \frac{\sum_{t=1}^{T} [y_t y_{t-1} (\beta_t - w) K_h(\beta_t - w) - a\, y_{t-1}^2 (\beta_t - w) K_h(\beta_t - w)]}{\sum_{t=1}^{T} y_{t-1}^2 (\beta_t - w)^2 K_h(\beta_t - w)}.$  (A3)
Substituting (A3) into (A2) gives
$a = \frac{\sum_{t=1}^{T} y_{t-1}^2 (\beta_t - w) K_h(\beta_t - w) \sum_{t=1}^{T} y_t y_{t-1} (\beta_t - w) K_h(\beta_t - w) - \sum_{t=1}^{T} y_{t-1}^2 (\beta_t - w)^2 K_h(\beta_t - w) \sum_{t=1}^{T} y_t y_{t-1} K_h(\beta_t - w)}{\big[\sum_{t=1}^{T} y_{t-1}^2 (\beta_t - w) K_h(\beta_t - w)\big]^2 - \sum_{t=1}^{T} y_{t-1}^2 K_h(\beta_t - w) \sum_{t=1}^{T} y_{t-1}^2 (\beta_t - w)^2 K_h(\beta_t - w)}.$
Denote
$A_j = \sum_{t=1}^{T} y_{t-1}^2 (\beta_t - w)^j K_h(\beta_t - w), \quad j = 0, 1, 2,$
and
$B_j = \sum_{t=1}^{T} y_t y_{t-1} (\beta_t - w)^j K_h(\beta_t - w), \quad j = 0, 1.$
Then,
$a = \frac{A_1 B_1 - A_2 B_0}{A_1^2 - A_0 A_2},$
and, by substituting a into (A3) and simplifying, we obtain
$b = \frac{A_1 B_0 - A_0 B_1}{A_1^2 - A_0 A_2}.$

Appendix B. Solving Problem (7)

The OLS estimate of $\boldsymbol\beta$ is
$\hat{\boldsymbol\beta} = \left[ \begin{pmatrix} \mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t} \\ \mathbf{C}^{-1} \end{pmatrix}^{T} \begin{pmatrix} \mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t} \\ \mathbf{C}^{-1} \end{pmatrix} \right]^{-1} \begin{pmatrix} \mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t} \\ \mathbf{C}^{-1} \end{pmatrix}^{T} \begin{pmatrix} \mathbf{Y} - \mathbf{Z}\hat{\mathbf{g}}_{\hat\beta_t} + \mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t}\hat{\boldsymbol\beta}_t \\ \boldsymbol\beta_0 \end{pmatrix}$
$= \big[\hat{\mathbf{g}}'^{T}_{\hat\beta_t}\mathbf{Z}^T\mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t} + (\mathbf{C}^T)^{-1}\mathbf{C}^{-1}\big]^{-1} \big[\hat{\mathbf{g}}'^{T}_{\hat\beta_t}\mathbf{Z}^T\mathbf{Y} - \hat{\mathbf{g}}'^{T}_{\hat\beta_t}\mathbf{Z}^T\mathbf{Z}\hat{\mathbf{g}}_{\hat\beta_t} + \hat{\mathbf{g}}'^{T}_{\hat\beta_t}\mathbf{Z}^T\mathbf{Z}\hat{\mathbf{g}}'_{\hat\beta_t}\hat{\boldsymbol\beta}_t + (\mathbf{C}^T)^{-1}\boldsymbol\beta_0\big].$
Since $\hat{\mathbf{g}}'_{\hat\beta_t}$ and $\mathbf{Z}$ are diagonal matrices, this simplifies to
$\hat{\boldsymbol\beta} = \big[\hat{\mathbf{g}}'^{2}_{\hat\beta_t}\mathbf{Z}^2 + (\mathbf{C}^T)^{-1}\mathbf{C}^{-1}\big]^{-1} \big[\hat{\mathbf{g}}'_{\hat\beta_t}\mathbf{Z}\mathbf{Y} - \hat{\mathbf{g}}'_{\hat\beta_t}\mathbf{Z}^2\hat{\mathbf{g}}_{\hat\beta_t} + \hat{\mathbf{g}}'^{2}_{\hat\beta_t}\mathbf{Z}^2\hat{\boldsymbol\beta}_t + (\mathbf{C}^T)^{-1}\boldsymbol\beta_0\big].$

References

1. Nicholls, D.S.; Quinn, B.G. Random Coefficient Autoregressive Models: An Introduction; Springer: New York, NY, USA, 1982.
2. Aue, A.; Horváth, L.; Steinebach, J. Estimation in random coefficient autoregressive models. J. Time Ser. Anal. 2006, 27, 61–76.
3. Zhao, Z.; Wang, D.; Peng, C.; Zhang, M. Empirical likelihood-based inference for stationary-ergodicity of the generalized random coefficient autoregressive model. Commun. Stat. Theory Methods 2015, 44, 2586–2599.
4. Horváth, L.; Trapani, L. Statistical inference in a random coefficient panel model. J. Econom. 2016, 193, 54–75.
5. Proia, F.; Soltane, M. A test of correlation in the random coefficients of an autoregressive process. Math. Methods Stat. 2018, 27, 119–144.
6. Regis, M.; Serra, P.; van den Heuvel, E.R. Random autoregressive models: A structured overview. Econom. Rev. 2022, 41, 207–230.
7. Härdle, W.; Tsybakov, A.; Yang, L. Nonparametric vector autoregression. J. Stat. Plan. Inference 1998, 68, 221–245.
8. Kreiss, J.P.; Neumann, M.H. Regression-type inference in nonparametric autoregression. Ann. Stat. 1998, 26, 1570–1613.
9. Vogt, M. Nonparametric regression for locally stationary time series. Ann. Stat. 2012, 40, 2601–2633.
10. Berkes, I.; Horváth, L.; Ling, S. Estimation in nonstationary random coefficient autoregressive models. J. Time Ser. Anal. 2009, 30, 395–416.
11. Aue, A.; Horváth, L. Quasi-likelihood estimation in stationary and nonstationary autoregressive models with random coefficients. Stat. Sin. 2011, 21, 973–999.
12. Ito, M.; Noda, A.; Wada, T. An alternative estimation method for time-varying parameter models. Econometrics 2022, 10, 23.
13. Ito, M.; Noda, A.; Wada, T. International stock market efficiency: A non-Bayesian time-varying model approach. Appl. Econ. 2014, 46, 2744–2754.
14. Ito, M.; Noda, A.; Wada, T. The evolution of stock market efficiency in the US: A non-Bayesian time-varying model approach. Appl. Econ. 2016, 48, 621–635.
15. Dahlhaus, R.; Neumann, M.H.; von Sachs, R. Nonlinear wavelet estimation of time-varying autoregressive processes. Bernoulli 1999, 5, 873–906.
16. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Fluids Eng. 1960, 82, 35–45.
17. Theil, H.; Wage, S. Some observations on adaptive forecasting. Manag. Sci. 1964, 10, 198–206.
18. Gardner, E. Exponential smoothing: The state of the art. J. Forecast. 1985, 4, 1–28.
19. Durbin, J.; Koopman, S.J. Time Series Analysis by State Space Methods, 2nd ed.; Oxford University Press: Oxford, UK, 2012.
20. Kreuzer, A.; Czado, C. Efficient Bayesian inference for nonlinear state space models with univariate autoregressive state equation. J. Comput. Graph. Stat. 2020, 29, 523–534.
21. Azman, S.; Pathmanathan, D.; Thavaneswaran, A. Forecasting the volatility of cryptocurrencies in the presence of COVID-19 with the state space model and Kalman filter. Mathematics 2022, 10, 3190.
22. Sbrana, G. The RWDAR model: A novel state-space approach to forecasting. Int. J. Forecast. 2023, 39, 922–937.
23. Fan, J.; Gijbels, I. Local Polynomial Modelling and Its Applications; Chapman and Hall: London, UK, 1996.
24. Fan, J. Local linear regression smoothers and their minimax efficiencies. Ann. Stat. 1993, 21, 196–216.
25. Balestra, P. On the efficiency of ordinary least-squares in regression models. J. Am. Stat. Assoc. 1970, 65, 1330–1337.
26. Young, H.S.; Basawa, I.V. Parameter estimation in a regression model with random coefficient autoregressive errors. J. Stat. Plan. Inference 1993, 36, 57–67.
27. Picard, J. Efficiency of the extended Kalman filter for nonlinear systems with small noise. SIAM J. Appl. Math. 1991, 51, 843–885.
28. Pascual, J.P.; Von-Ellenrieder, N.; Areta, J.; Muravchik, C.H. Non-linear Kalman filters comparison for generalised autoregressive conditional heteroscedastic clutter parameter estimation. IET Signal Process. 2019, 13, 606–613.
29. Ansley, C.F.; Kohn, R. Estimation, filtering, and smoothing in state space models with incompletely specified initial conditions. Ann. Stat. 1985, 13, 1286–1316.
30. Pei, Y.; Biswas, S.; Fussell, D.S.; Pingali, K. An elementary introduction to Kalman filtering. Commun. ACM 2019, 62, 122–133.
31. Hamilton, J.D. Time Series Analysis; Princeton University Press: Princeton, NJ, USA, 1994.
32. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman & Hall/CRC: London, UK, 1986; p. 48.
33. Aleksandrov, B.; Weiß, C.H. Testing the dispersion structure of count time series using Pearson residuals. AStA Adv. Stat. Anal. 2019, 104, 325–361.
Figure 1. Sample paths for SNR = {0.08, 1, 4} with sample size T = 300.
Figure 2. The histograms and Q-Q plots of $\hat g(0)$ for the three parameter selections with sample size T = 300. The red line in the histogram is the normal density curve.
Figure 3. The real curve (red solid) and the fitted curve (blue dashed) of the function $g(w) = \cos(\pi w)$ when the sample size is T = 300.
Figure 4. Time plot and PACF plot of SPHKL closing points from 6 September 2020 to 28 January 2024.
Figure 5. The estimated function curve at the time points t and the sample path of the normalized data. The vertical line separates the training sample from the test sample. To the right of the vertical line, blue denotes real data, red denotes forecasts from the proposed model, and yellow denotes forecasts from the RCA(1) model.
Figure 6. The ACF and PACF plots of the Pearson residuals.
Table 1. Simulation results for different settings; we report the mean of the true values ($\bar\beta$), the mean of the estimated parameters ($\bar{\hat\beta}$), MAD, and MSE for the OLS estimation.

| $\sigma_\varepsilon^2$ | SNR | T | $\bar\beta$ | $\bar{\hat\beta}$ | MAD | MSE |
|---|---|---|---|---|---|---|
| 0.07² | 0.08 | 100 | −0.00170 | −0.00264 | 0.11497 | 0.01997 |
| | | 200 | 0.00026 | 0.00248 | 0.08160 | 0.01003 |
| | | 300 | 0.00029 | 0.00169 | 0.06627 | 0.00667 |
| 0.02² | 1 | 100 | 0.00272 | 0.00369 | 0.11423 | 0.01983 |
| | | 200 | −0.00259 | −0.00299 | 0.08184 | 0.01008 |
| | | 300 | 0.00160 | 0.00232 | 0.06695 | 0.00673 |
| 0.01² | 4 | 100 | 0.00115 | 0.00328 | 0.11272 | 0.02003 |
| | | 200 | −0.00107 | −0.00336 | 0.08114 | 0.00993 |
| | | 300 | 0.00010 | 0.00274 | 0.06820 | 0.00690 |
Table 2. Simulation results for different settings; we report the mean of the true values ($\bar\beta$), the mean of the estimated parameters ($\bar{\hat\beta}$), MAD, and MSE for the Kalman-smoothing estimation.

| $\sigma_\varepsilon^2$ | SNR | T | $\bar\beta$ | $\bar{\hat\beta}$ | MAD | MSE |
|---|---|---|---|---|---|---|
| 0.07² | 0.08 | 100 | 0.00296 | 0.00157 | 0.11152 | 0.01912 |
| | | 200 | −0.00193 | −0.00099 | 0.08011 | 0.00974 |
| | | 300 | −0.00105 | −0.00254 | 0.06698 | 0.00674 |
| 0.02² | 1 | 100 | −0.00003 | −0.00093 | 0.11334 | 0.01964 |
| | | 200 | 0.00161 | 0.00099 | 0.08133 | 0.00992 |
| | | 300 | −0.00368 | −0.00277 | 0.06587 | 0.00654 |
| 0.01² | 4 | 100 | 0.00005 | 0.00348 | 0.11294 | 0.01954 |
| | | 200 | 0.00287 | 0.00113 | 0.08119 | 0.00992 |
| | | 300 | 0.00029 | 0.00069 | 0.06720 | 0.00675 |
Table 3. Simulation results for different settings; we report the OLS residual estimators.

| | SNR = 0.08 | | SNR = 1 | | SNR = 4 | |
|---|---|---|---|---|---|---|
| T | $\hat\sigma_\varepsilon^2$ | $\hat\sigma_\eta^2$ | $\hat\sigma_\varepsilon^2$ | $\hat\sigma_\eta^2$ | $\hat\sigma_\varepsilon^2$ | $\hat\sigma_\eta^2$ |
| 100 | 0.00541 | 0.00030 | 0.00031 | 0.00034 | 0.00021 | 0.00036 |
| 200 | 0.00449 | 0.00034 | 0.00037 | 0.00037 | 0.00018 | 0.00039 |
| 300 | 0.00479 | 0.00038 | 0.00042 | 0.00038 | 0.00013 | 0.00039 |
Table 4. Simulation results of RMSE for $\hat g(\cdot)$ under various SNR values and sample sizes.

| SNR | T = 100 | T = 200 | T = 300 |
|---|---|---|---|
| 0.08 | 0.4763 | 0.1924 | 0.1847 |
| 1 | 0.5247 | 0.1458 | 0.1086 |
| 4 | 0.5609 | 0.1483 | 0.0800 |
Table 5. Descriptive statistics for the stock index data.

| Sample Size | Minimum | Maximum | Median | Mean | Variance |
|---|---|---|---|---|---|
| 178 | 20,559.76 | 48,702.22 | 30,917.10 | 32,965.24 | 47,385,547.70 |
Table 6. Descriptive statistics for the normalized data.

| Sample Size | Minimum | Maximum | Median | Mean | Variance |
|---|---|---|---|---|---|
| 178 | 0 | 1 | 0.3680 | 0.4408 | 0.0598 |
Table 7. The distance and error between forecasts and observations for the stock index data.

| Model | MAD | RMSE |
|---|---|---|
| Model (1) | 0.0583 | 0.0642 |
| RCA(1) | 0.0678 | 0.0762 |