Prior Sensitivity Analysis in a Semi-Parametric Integer-Valued Time Series Model

Helton Graziadei 1, Antonio Lijoi 2, Hedibert F. Lopes 3, Paulo C. Marques F. 3 and Igor Prünster 2

1 Instituto de Matemática e Estatística, Universidade de São Paulo, São Paulo 05508-090, Brazil
2 Department of Decision Sciences and BIDSA, Bocconi University, via Röntgen 1, 20136 Milano, Italy
3 Insper Institute of Education and Research, Rua Quatá 300, São Paulo 04546-042, Brazil
* Author to whom correspondence should be addressed.
Entropy 2020, 22(1), 69; https://doi.org/10.3390/e22010069
Received: 26 November 2019 / Revised: 23 December 2019 / Accepted: 3 January 2020 / Published: 6 January 2020
(This article belongs to the Special Issue Data Science: Measuring Uncertainties)

Abstract
We examine issues of prior sensitivity in a semi-parametric hierarchical extension of the INAR(p) model with innovation rates clustered according to a Pitman–Yor process placed at the top of the model hierarchy. Our main finding is a graphical criterion that guides the specification of the hyperparameters of the Pitman–Yor process base measure. We show how the discount and concentration parameters interact with the chosen base measure to yield a gain in terms of the robustness of the inferential results. The forecasting performance of the model is exemplified in the analysis of a time series of worldwide earthquake events, for which the new model outperforms the original INAR(p) model.

1. Introduction

Integer-valued time series are relevant to many fields of knowledge, ranging from finance and econometrics to ecology and meteorology. Many models for this kind of data have been proposed since the introduction of the INAR(1) model in the pioneering works of McKenzie [1] and Al-Osh and Alzaid [2] (see also the book by Weiß [3]). A higher-order INAR(p) model was considered in the work of Du and Li [4].
In this paper, we generalize the Bayesian version of the INAR(p) model studied by Neal and Kypraios [5]. In our model, the innovation rates are allowed to vary through time, with the distribution of the innovation rates being modeled hierarchically by means of a Pitman–Yor process [6]. In this way, we account for potential heterogeneity in the innovation rates as the process evolves through time, and this feature is automatically incorporated in the Bayesian forecasting capabilities of the model.
The semi-parametric form of the model demands a robustness analysis of our inferential conclusions as we vary the hyperparameters of the Pitman–Yor process. We investigate this prior sensitivity issue carefully and find ways to control the hyperparameters in order to achieve robust results.
This paper is organized as follows. In Section 2, we construct a generalized INAR(p) model with variable innovation rates. The likelihood function of the generalized model is derived and a data augmentation scheme is developed, which gives a specification of the model in terms of conditional distributions. This data-augmented representation of the model enables the derivation in Section 4 of full conditional distributions in simple analytical form, which are essential for the stochastic simulations in Section 5. Section 3 reviews the main properties of the Pitman–Yor process, including its clustering properties, which are used to define the PY-INAR(p) model in Section 4. In building the PY-INAR(p) model, we propose a form for the prior distribution of the vector of thinning parameters which improves on the choice made for the Bayesian INAR(p) model studied in [5]. In Section 5, we investigate the robustness of the inference with respect to changes in the Pitman–Yor process hyperparameters. Using the full conditional distributions of the innovation rates derived in Section 4, we inspect the behavior of the model as we concentrate or spread the mass of the Pitman–Yor base measure. This leads us to a graphical criterion that identifies an elbow in the posterior expectation of the number of clusters as we vary the hyperparameters of the base measure. Once we have control over the base measure, we study its interaction with the concentration and discount hyperparameters, showing how to make choices that yield robust results. In the course of this development, we use geometrical tools to inspect the clustering of the innovation rates produced by the model. Section 6 puts the graphical criterion to work on simulated data. In Section 7, using a time series of worldwide earthquake events, we finish the paper by comparing the forecasting performance of the PY-INAR(p) model against the original INAR(p) model, with favorable results.

2. A Generalization of the INAR(p) Model

We begin by generalizing the original INAR(p) model of Du and Li [4] as follows.
Let $\{Y_t\}_{t \ge 1}$ be an integer-valued time series and, for some integer $p \ge 1$, let the innovations $\{Z_t\}_{t \ge p+1}$, given positive parameters $\{\lambda_t\}_{t \ge p+1}$, be a sequence of conditionally independent $\mathrm{Poisson}(\lambda_t)$ random variables. For a given vector of parameters $\alpha = (\alpha_1, \dots, \alpha_p) \in [0,1]^p$, let $F_i = \{B_{ij}^{(t)} : j \ge 0,\ t \ge 2\}$ be a family of conditionally independent and identically distributed $\mathrm{Bernoulli}(\alpha_i)$ random variables. For $i \ne k$, suppose that $F_i$ and $F_k$ are conditionally independent, given $\alpha$. Furthermore, assume that the innovations $\{Z_t\}_{t \ge p+1}$ and the families $F_1, \dots, F_p$ are conditionally independent, given $\alpha$ and $\lambda$. The generalized INAR(p) model is defined by the functional relation
$$Y_t = \alpha_1 \circ Y_{t-1} + \cdots + \alpha_p \circ Y_{t-p} + Z_t,$$
for $t \ge p+1$, in which $\circ$ denotes the binomial thinning operator, defined by $\alpha_i \circ Y_{t-i} = \sum_{j=1}^{Y_{t-i}} B_{ij}^{(t)}$, if $Y_{t-i} > 0$, and $\alpha_i \circ Y_{t-i} = 0$, if $Y_{t-i} = 0$. In the homogeneous case, when all the $\lambda_t$'s are assumed to be equal, we recover the original INAR(p) model.
When $p = 1$, this model can be interpreted as specifying a birth-and-death process in which, at epoch $t$, the number of cases $Y_t$ equals the new cases $Z_t$ plus the cases that survived from the previous epoch; the role of the binomial thinning operator is to remove a random number of the $Y_{t-1}$ cases present at the previous epoch $t-1$ (see [7] for an interpretation of the order $p$ case as a birth-and-death process with immigration).
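As a concrete illustration, the generative mechanism above can be simulated in a few lines. The sketch below is our own (the function and parameter names are not from the paper); at each epoch it draws the binomial thinning survivors for every lag and adds a Poisson innovation:

```python
import numpy as np

def simulate_inar(alpha, lam, y_init, rng):
    """Simulate the generalized INAR(p) process: at epoch t, each of the
    p lagged counts is thinned by a Binomial(y_{t-i}, alpha_i) draw, and
    a Poisson(lam[t]) innovation is added."""
    p = len(alpha)
    y = list(y_init)  # Y_1 = y_1, ..., Y_p = y_p with probability one
    for t in range(p, len(lam)):
        survivors = sum(rng.binomial(y[t - i], a)
                        for i, a in enumerate(alpha, start=1))
        y.append(survivors + rng.poisson(lam[t]))
    return np.array(y)

rng = np.random.default_rng(0)
# constant innovation rates recover the original (homogeneous) INAR(1) model
y = simulate_inar(alpha=[0.15], lam=np.full(500, 5.0), y_init=[5], rng=rng)
```

Time-varying rates, as in the generalized model, are obtained simply by passing a non-constant `lam` vector.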
Let $y = (y_1, \dots, y_T)$ denote the values of an observed time series. For simplicity, we assume that $Y_1 = y_1, \dots, Y_p = y_p$ with probability one. The joint distribution of $Y_1, \dots, Y_T$, given parameters $\alpha$ and $\lambda = (\lambda_{p+1}, \dots, \lambda_T)$, can be factored as
$$\Pr\{Y_1 = y_1, \dots, Y_T = y_T \mid \alpha, \lambda\} = \prod_{t=p+1}^{T} \Pr\{Y_t = y_t \mid Y_{t-1} = y_{t-1}, \dots, Y_{t-p} = y_{t-p}, \alpha, \lambda_t\}.$$
Since, with probability one, $\alpha_i \circ Y_{t-i} \le Y_{t-i}$ and $Z_t \ge 0$, the likelihood function of the generalized INAR(p) model is given by
$$L_y(\alpha, \lambda) = \prod_{t=p+1}^{T} \sum_{m_{1,t}=0}^{\min\{y_t,\, y_{t-1}\}} \cdots \sum_{m_{p,t}=0}^{\min\{y_t - \sum_{j=1}^{p-1} m_{j,t},\, y_{t-p}\}} \left[ \prod_{i=1}^{p} \binom{y_{t-i}}{m_{i,t}} \alpha_i^{m_{i,t}} (1 - \alpha_i)^{y_{t-i} - m_{i,t}} \right] \times \frac{e^{-\lambda_t} \lambda_t^{y_t - \sum_{j=1}^{p} m_{j,t}}}{\left(y_t - \sum_{j=1}^{p} m_{j,t}\right)!}.$$
For some epoch $t$ and $i = 1, \dots, p$, suppose that we could observe the values of the latent maturations $M_{i,t}$. Postulate that $M_{i,t} \mid Y_{t-i} = y_{t-i}, \alpha_i \sim \mathrm{Binomial}(y_{t-i}, \alpha_i)$, so that the conditional probability function of $M_{i,t}$ is given by
$$p(m_{i,t} \mid y_{t-i}, \alpha_i) = \Pr\{M_{i,t} = m_{i,t} \mid Y_{t-i} = y_{t-i}, \alpha_i\} = \binom{y_{t-i}}{m_{i,t}} \alpha_i^{m_{i,t}} (1 - \alpha_i)^{y_{t-i} - m_{i,t}} \, I_{\{0, \dots, y_{t-i}\}}(m_{i,t}).$$
Furthermore, suppose that
$$p(y_t \mid m_{1,t}, \dots, m_{p,t}, \lambda_t) = \Pr\{Y_t = y_t \mid M_{1,t} = m_{1,t}, \dots, M_{p,t} = m_{p,t}, \lambda_t\} = \frac{e^{-\lambda_t} \lambda_t^{y_t - \sum_{j=1}^{p} m_{j,t}}}{\left(y_t - \sum_{j=1}^{p} m_{j,t}\right)!} \, I_{\{\sum_{j=1}^{p} m_{j,t},\, \sum_{j=1}^{p} m_{j,t} + 1,\, \dots\}}(y_t).$$
Using the law of total probability and the product rule, we have that
$$p(y_t \mid y_{t-1}, \dots, y_{t-p}, \alpha, \lambda_t) = \sum_{m_{1,t}=0}^{y_{t-1}} \cdots \sum_{m_{p,t}=0}^{y_{t-p}} p(y_t, m_{1,t}, \dots, m_{p,t} \mid y_{t-1}, \dots, y_{t-p}, \alpha, \lambda_t) = \sum_{m_{1,t}=0}^{y_{t-1}} \cdots \sum_{m_{p,t}=0}^{y_{t-p}} p(y_t \mid m_{1,t}, \dots, m_{p,t}, \lambda_t) \times \prod_{i=1}^{p} p(m_{i,t} \mid y_{t-i}, \alpha_i).$$
Since
$$I_{\{\sum_{j=1}^{p} m_{j,t},\, \sum_{j=1}^{p} m_{j,t}+1,\, \dots\}}(y_t) = I_{\{0, \dots, y_t\}}\left(\sum_{j=1}^{p} m_{j,t}\right) = I_{\{0, \dots, y_t\}}(m_{1,t}) \times \cdots \times I_{\{0, \dots,\, y_t - \sum_{j=1}^{p-1} m_{j,t}\}}(m_{p,t})$$
and
$$I_{\{\sum_{j=1}^{p} m_{j,t},\, \sum_{j=1}^{p} m_{j,t}+1,\, \dots\}}(y_t) \times I_{\{0, \dots, y_{t-i}\}}(m_{i,t}) = I_{\{0, 1, \dots,\, \min\{y_t - \sum_{j \ne i} m_{j,t},\, y_{t-i}\}\}}(m_{i,t}),$$
we recover the original likelihood of the generalized INAR(p) model, showing that the introduction of the latent maturations $M_{i,t}$ with the specified distributions is a valid data augmentation scheme (see [8,9] for a general discussion of data augmentation techniques).
In the next section, we review the needed definitions and properties of the Pitman–Yor process.

3. Pitman–Yor Process

Let the random probability measure $\tilde{G} \sim \mathrm{DP}(\tau, G_0)$ be a Dirichlet process [10,11,12] with concentration parameter $\tau$ and base measure $G_0$. If the random variables $X_1, \dots, X_n$, given $\tilde{G} = G$, are conditionally independent and identically distributed as $G$, then it follows that
$$\Pr\{X_{n+1} \in B \mid X_1 = x_1, \dots, X_n = x_n\} = \frac{\tau}{\tau + n} G_0(B) + \frac{1}{\tau + n} \sum_{i=1}^{n} I_B(x_i),$$
for every Borel set $B$. If we imagine the sequential generation of the $X_i$'s, for $i = 1, \dots, n$, the former expression shows that a value is generated anew from $G_0$ with probability proportional to $\tau$, or we repeat one of the previously generated values with probability proportional to its multiplicity. Therefore, almost surely, realizations of a Dirichlet process are discrete probability measures, perhaps with denumerably infinite support, depending on the nature of $G_0$. Also, this data-generating process, known as the Pólya–Blackwell–MacQueen urn, implies that the $X_i$'s are “softly clustered”, in the sense that in one realization of the process the elements of a subset of the $X_i$'s may have exactly the same value.
The Pitman–Yor process [6] is a generalization of the Dirichlet process which results in a model with added flexibility. Essentially, the Pitman–Yor process modifies the probability associated with the Pólya–Blackwell–MacQueen urn by introducing a new parameter, so that the posterior predictive probability becomes
$$\Pr\{X_{n+1} \in B \mid X_1 = x_1, \dots, X_n = x_n\} = \frac{\tau + k\sigma}{\tau + n} G_0(B) + \frac{1}{\tau + n} \sum_{i=1}^{n} \left(1 - \frac{\sigma}{n_i}\right) I_B(x_i),$$
in which $0 \le \sigma < 1$ is the discount parameter, $\tau > -\sigma$, $k$ is the number of distinct elements in $\{X_1, \dots, X_n\}$, and $n_i$ is the number of elements in $\{X_1, \dots, X_n\}$ which are equal to $x_i$, for $i = 1, \dots, n$. It is well known that $E[\tilde{G}(B)] = G_0(B)$ and
$$\mathrm{Var}[\tilde{G}(B)] = \frac{1 - \sigma}{\tau + 1} \, G_0(B) \, (1 - G_0(B)),$$
for every Borel set $B$. Hence, $\tilde{G}$ is centered on the base probability measure $G_0$, while $\tau$ and $\sigma$ control the concentration of $\tilde{G}$ around $G_0$. We use the notation $\tilde{G} \sim \mathrm{PY}(\tau, \sigma, G_0)$. When $\sigma = 0$, we recover the Dirichlet process as a special case. The PY process is also defined for $\sigma < 0$ and $\tau = |\sigma| m$, for some positive integer $m$. For our purposes, it is enough to consider the case of non-negative $\sigma$.
Pitman [6] derived the distribution of the number of clusters $K$ (the number of distinct $X_i$'s), conditionally on both the concentration parameter $\tau$ and the discount parameter $\sigma$, as
$$\Pr\{K = k \mid \tau, \sigma\} = \frac{\prod_{i=1}^{k-1} (\tau + i\sigma)}{\sigma^k \, (\tau + 1)_{n-1}} \, C(n, k; \sigma),$$
in which $(x)_n = \Gamma(x + n)/\Gamma(x)$ is the rising factorial and $C(n, k; \sigma)$ is the generalized factorial coefficient [13].
In the next section, we use a Pitman–Yor process to model the distribution of the innovation rates in the generalized INAR(p) model.

4. PY-INAR(p) Model

The PY-INAR(p) model is a hierarchical extension of the generalized INAR(p) model defined in Section 2. Given a random measure $\tilde{G} \sim \mathrm{PY}(\tau, \sigma, G_0)$, in which $G_0$ is a $\mathrm{Gamma}(a_0, b_0)$ distribution, let the innovation rates $\lambda_{p+1}, \dots, \lambda_T$ be conditionally independent and identically distributed with distribution $\Pr\{\lambda_t \in B \mid \tilde{G} = G\} = G(B)$.
To complete the PY-INAR(p) model, we need to specify the form of the prior distribution for the vector of thinning parameters $\alpha = (\alpha_1, \dots, \alpha_p)$. By comparison with standard results from the theory of the AR(p) model [14], Du and Li [4] found that in the INAR(p) model the constraint $\sum_{i=1}^{p} \alpha_i < 1$ must be fulfilled to guarantee the non-explosiveness of the process. In their Bayesian analysis of the INAR(p) model, Neal and Kypraios [5] considered independent beta distributions for the $\alpha_i$'s. Unfortunately, this choice is problematic. For example, in the particular case when the $\alpha_i$'s have independent uniform distributions, it is possible to show that $\Pr\{\sum_{i=1}^{p} \alpha_i < 1\} = 1/p!$, implying that we would be concentrating most of the prior mass on the explosive region even for moderate values of the model order $p$. We circumvent this problem using a prior distribution for $\alpha$ that places all of its mass on the nonexplosive region and still allows us to derive the full conditional distributions of the $\alpha_i$'s in simple closed form. Specifically, we take the prior distribution of $\alpha$ to be a Dirichlet distribution with hyperparameters $(a_1, \dots, a_p; a_{p+1})$ and corresponding density
$$\pi(\alpha) = \frac{\Gamma\left(\sum_{i=1}^{p+1} a_i\right)}{\prod_{i=1}^{p+1} \Gamma(a_i)} \prod_{i=1}^{p+1} \alpha_i^{a_i - 1},$$
in which $a_i > 0$, for $i = 1, \dots, p+1$, and $\alpha_{p+1} = 1 - \sum_{i=1}^{p} \alpha_i$.
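Both prior-mass claims above are easy to verify numerically. In this quick Monte Carlo check (our own, purely illustrative), independent uniforms put only $1/p!$ of their mass on the nonexplosive region, while draws of $(\alpha_1, \dots, \alpha_p)$ obtained by dropping the last coordinate of a Dirichlet vector satisfy the constraint by construction:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)
p, n = 4, 200_000

# independent Uniform(0,1) priors: Pr{alpha_1 + ... + alpha_p < 1} = 1/p!
u = rng.random((n, p))
mc = (u.sum(axis=1) < 1.0).mean()   # close to 1/4! ~ 0.0417

# Dirichlet prior on (alpha_1, ..., alpha_p, alpha_{p+1}):
# the first p coordinates always sum to less than one
d = rng.dirichlet(np.ones(p + 1), size=n)[:, :p]
frac = (d.sum(axis=1) < 1.0).mean()  # equals 1.0
```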
Let $m = \{m_{i,t} : i = 1, \dots, p,\; t = p+1, \dots, T\}$ denote the set of all maturations, and let $\mu_{\tilde{G}}$ be the distribution of $\tilde{G}$. Our strategy to derive the full conditional distributions of the model parameters and latent variables is to consider the marginal distribution
$$p(y, m, \alpha, \lambda) = \int p(y, m, \alpha, \lambda \mid G) \, d\mu_{\tilde{G}}(G) = \prod_{t=p+1}^{T} \left[ p(y_t \mid m_{1,t}, \dots, m_{p,t}, \lambda_t) \prod_{i=1}^{p} p(m_{i,t} \mid y_{t-i}, \alpha_i) \right] \times \pi(\alpha) \times \int \prod_{t=p+1}^{T} p(\lambda_t \mid G) \, d\mu_{\tilde{G}}(G).$$
From this expression, using the results in Section 3, the derivation of the full conditional distributions is straightforward. In the following expressions, the symbol $\propto$ denotes proportionality up to a suitable normalization factor, and the label “all others” designates the observed counts $y$ and all the other latent variables and model parameters, with the exception of the one under consideration.
Let $\lambda_{\setminus t}$ denote the set $\{\lambda_{p+1}, \dots, \lambda_T\}$ with the element $\lambda_t$ removed, and write $m_t = \sum_{i=1}^{p} m_{i,t}$. Then, for $t = p+1, \dots, T$, we have
$$\lambda_t \mid \text{all others} \sim w_t \times \mathrm{Gamma}(y_t - m_t + a_0,\, b_0 + 1) + \sum_{r \ne t} \left(1 - \frac{\sigma}{n_r}\right) \lambda_r^{y_t - m_t} e^{-\lambda_r} \, \delta_{\{\lambda_r\}},$$
in which the weight
$$w_t = (\tau + k_{\setminus t} \sigma) \times \frac{b_0^{a_0} \, \Gamma(y_t - m_t + a_0)}{\Gamma(a_0) \, (b_0 + 1)^{y_t - m_t + a_0}},$$
$n_r$ is the number of elements in $\lambda_{\setminus t}$ which are equal to $\lambda_r$, and $k_{\setminus t}$ is the number of distinct elements in $\lambda_{\setminus t}$. In this mixture, we suppressed the normalization constant that makes all weights add up to one.
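For illustration, a single draw from this mixture can be coded with log-weights for numerical stability. The sketch below is our own (the helper name and arguments are ours); it takes the residual count $y_t - m_t$ as given and groups the atoms of $\lambda_{\setminus t}$ by cluster, so each cluster receives total weight proportional to $(n_r - \sigma)\,\lambda_r^{y_t - m_t} e^{-\lambda_r}$:

```python
import numpy as np
from math import lgamma, log

def sample_lambda_t(t, lam, y_res, tau, sigma, a0, b0, rng):
    """One draw from the full conditional of lambda_t in the PY-INAR model.
    lam holds the current innovation rates; y_res = y_t - m_t."""
    others = np.delete(lam, t)
    uniq, counts = np.unique(others, return_counts=True)
    k = len(uniq)
    # log-weight of a fresh draw from the Gamma(a0, b0) base measure
    log_w_new = (log(tau + k * sigma) + a0 * log(b0)
                 + lgamma(y_res + a0) - lgamma(a0)
                 - (y_res + a0) * log(b0 + 1.0))
    # log-weights of the existing atoms, grouped by cluster
    log_w_old = np.log(counts - sigma) + y_res * np.log(uniq) - uniq
    logw = np.append(log_w_old, log_w_new)
    w = np.exp(logw - logw.max())
    j = rng.choice(k + 1, p=w / w.sum())
    if j == k:  # open a new cluster
        return rng.gamma(y_res + a0, 1.0 / (b0 + 1.0))
    return uniq[j]  # join an existing cluster

rng = np.random.default_rng(3)
lam = np.array([1.0, 2.0, 2.0, 5.0])
new_rate = sample_lambda_t(0, lam, y_res=3, tau=1.0, sigma=0.25,
                           a0=2.0, b0=1.0, rng=rng)
```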
Making the choice $a_{p+1} = 1$, we have
$$\alpha_i \mid \text{all others} \sim \mathrm{TBeta}\left(a_i + \sum_{t=p+1}^{T} m_{i,t},\; 1 + \sum_{t=p+1}^{T} (y_{t-i} - m_{i,t}),\; 1 - \sum_{j \ne i} \alpha_j\right),$$
for $i = 1, \dots, p$, in which $\mathrm{TBeta}$ denotes the right-truncated Beta distribution with support $\left(0,\, 1 - \sum_{j \ne i} \alpha_j\right)$.
For the latent maturations, we find
$$p(m_{i,t} \mid \text{all others}) \propto \frac{1}{m_{i,t}! \left(y_t - \sum_{j=1}^{p} m_{j,t}\right)! \, (y_{t-i} - m_{i,t})!} \left(\frac{\alpha_i}{\lambda_t (1 - \alpha_i)}\right)^{m_{i,t}} \times I_{\{0, 1, \dots,\, \min\{y_t - \sum_{j \ne i} m_{j,t},\, y_{t-i}\}\}}(m_{i,t}).$$
To explore the posterior distribution of the model, we build a Gibbs sampler [15] using these full conditional distributions. Escobar and West [16] showed, in a similar context, that we can improve mixing by resampling simultaneously the values of all $\lambda_t$'s inside the same cluster at the end of each iteration of the Gibbs sampler. Letting $(\lambda_1^*, \dots, \lambda_k^*)$ be the $k$ unique values among $(\lambda_{p+1}, \dots, \lambda_T)$, define the number of occupants of cluster $j$ by $\nu_j = \sum_{t=p+1}^{T} I_{\{\lambda_j^*\}}(\lambda_t)$, for $j = 1, \dots, k$. It follows that
$$\lambda_j^* \mid \text{all others} \sim \mathrm{Gamma}\left(a_0 + \sum_{t=p+1}^{T} \left(y_t - \sum_{i=1}^{p} m_{i,t}\right) I_{\{\lambda_j^*\}}(\lambda_t),\; b_0 + \nu_j\right),$$
for $j = 1, \dots, k$. At the end of each iteration of the Gibbs sampler, we update the values of all $\lambda_t$'s inside each cluster by the corresponding $\lambda_j^*$ drawn from this distribution.

5. Prior Sensitivity

As is often the case for Bayesian models with nonparametric components, a choice of the prior parameters for the PY-INAR(p) model which yields robustness of the posterior distribution is nontrivial [17].
The first aspect to be considered is the fact that the base measure $G_0$ plays a crucial role in the determination of the posterior distribution of the number of clusters $K$. This can be seen directly by inspecting the form of the full conditional distributions derived in Section 4. Recalling that $G_0$ is a gamma distribution with mean $a_0/b_0$ and variance $a_0/b_0^2$, from the full conditional distribution of $\lambda_t$ one may note that the probability of generating, on each iteration of the Gibbs sampler, a value for $\lambda_t$ anew from $G_0$ is proportional to
$$(\tau + k_{\setminus t} \sigma) \times \frac{b_0^{a_0} \, \Gamma(y_t - m_t + a_0)}{\Gamma(a_0) \, (b_0 + 1)^{y_t - m_t + a_0}}.$$
Therefore, supposing that all the other terms are fixed, if we concentrate the mass of $G_0$ around zero by making $b_0 \to \infty$, this probability decreases to zero. This is not problematic, because it is hardly the case that we would want to make such a drastic choice for $G_0$. The behavior in the other direction is more revealing: taking $b_0 \to 0$, in order to spread the mass of $G_0$, also drives this probability to zero. Due to this behavior, we need to establish a criterion for choosing the hyperparameters of the base measure which avoids these extreme cases.
In our analysis, it is convenient to have a single hyperparameter regulating how the mass of $G_0$ is spread over its support. For a given $\lambda_{\max} > 0$, we find numerically the values of $a_0$ and $b_0$ which minimize the Kullback–Leibler divergence between a uniform distribution on the interval $[0, \lambda_{\max}]$ and $G_0$. This Kullback–Leibler divergence can be computed explicitly as
$$-\log \lambda_{\max} - a_0 \log b_0 + \log \Gamma(a_0) - (a_0 - 1)(\log \lambda_{\max} - 1) + \frac{b_0 \lambda_{\max}}{2}.$$
In this new parameterization, our goal is to make a sensible choice for $\lambda_{\max}$. It is worth emphasizing that by this procedure we are not truncating the support of $G_0$, but only using the uniform distribution on the interval $[0, \lambda_{\max}]$ as a reference for our choice of the base measure hyperparameters $a_0$ and $b_0$.
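As an illustration of this step (our own sketch; the grid bounds are arbitrary), note that setting the derivative of the divergence with respect to $b_0$ to zero gives $b_0 = 2 a_0 / \lambda_{\max}$, so the Gamma mean $a_0/b_0$ matches the uniform mean $\lambda_{\max}/2$ and the minimization reduces to a one-dimensional search over $a_0$:

```python
import numpy as np
from math import lgamma, log

def base_measure_params(lam_max):
    """Minimize the KL divergence between Uniform[0, lam_max] and
    Gamma(a0, b0) over (a0, b0), profiling out b0 = 2 * a0 / lam_max."""
    def kl(a0, b0):
        return (-log(lam_max) - a0 * log(b0) + lgamma(a0)
                - (a0 - 1.0) * (log(lam_max) - 1.0) + b0 * lam_max / 2.0)
    grid = np.linspace(0.1, 10.0, 2000)
    a0 = min(grid, key=lambda a: kl(a, 2.0 * a / lam_max))
    return a0, 2.0 * a0 / lam_max

a0, b0 = base_measure_params(lam_max=20.0)  # Gamma mean a0/b0 = lam_max/2
```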
Our proposal to choose $\lambda_{\max}$ goes as follows. We fix some value $0 \le \sigma < 1$ for the discount parameter and choose an integer $k_0$ as the prior expectation of the number of clusters $K$, which, using the results at the end of Section 3, can be computed explicitly as
$$E[K] = \begin{cases} \tau \left( \psi(\tau + T - p) - \psi(\tau) \right) & \text{if } \sigma = 0; \\[4pt] \dfrac{(\tau + \sigma)_{T-p}}{\sigma \, (\tau + 1)_{T-p-1}} - \dfrac{\tau}{\sigma} & \text{if } \sigma > 0, \end{cases}$$
in which $\psi(x)$ is the digamma function (see [6] for a derivation of this result). Next, we find the value of the concentration parameter $\tau$ by solving $E[K] = k_0$ numerically. After this, for each $\lambda_{\max}$ in a grid of values, we run the Gibbs sampler and compute the posterior expectation of the number of clusters $E[K \mid y]$. Finally, in the corresponding graph, we look for the value of $\lambda_{\max}$ located at the “elbow” of the curve, that is, the value of $\lambda_{\max}$ at which the values of $E[K \mid y]$ level off.
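The two branches of $E[K]$ and the numerical solve for $\tau$ can be sketched as follows (our own helper names; rising factorials are computed on the log scale, and bisection works because $E[K]$ is increasing in $\tau$ on its domain $\tau > -\sigma$):

```python
from math import exp, lgamma

def prior_mean_k(tau, sigma, n):
    """Prior expectation of the number of clusters K among n draws
    from a PY(tau, sigma) process."""
    if sigma == 0.0:
        # tau * (psi(tau + n) - psi(tau)) = tau * sum_{i=0}^{n-1} 1/(tau + i)
        return tau * sum(1.0 / (tau + i) for i in range(n))
    # ((tau + sigma)_n / (sigma * (tau + 1)_{n-1})) - tau / sigma
    log_num = lgamma(tau + sigma + n) - lgamma(tau + sigma)
    log_den = lgamma(tau + n) - lgamma(tau + 1.0)
    return exp(log_num - log_den) / sigma - tau / sigma

def solve_tau(k0, sigma, n):
    """Bisection for the tau satisfying E[K] = k0, with tau > -sigma."""
    lo, hi = -sigma + 1e-8, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if prior_mean_k(mid, sigma, n) < k0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau = solve_tau(k0=10.0, sigma=0.5, n=999)
```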

6. Simulated Data

As an explicit example of the graphical criterion in action, we used the functional form of a first-order model with thinning parameter $\alpha = 0.15$ to simulate a time series of length $T = 1000$, for which the distribution of the innovations is a symmetric mixture of three Poisson distributions with parameters 1, 8, and 15. Figure 1 shows the formation of the elbows for two values of the discount parameter: $\sigma = 0.5$ and $\sigma = 0.75$.
For the simulated time series, Figure 2, Figure 3, Figure 4 and Figure 5 display the behavior of the posterior distributions obtained using the elbow method for $(k_0, \sigma) \in \{4, 10, 16, 30\} \times \{0, 0.25, 0.5, 0.75\}$. These figures make the relation between the choice of the value of the discount parameter $\sigma$ and the achieved robustness of the posterior distribution quite explicit: as we increase the value of the discount parameter $\sigma$, the posterior becomes insensitive to the choice of $k_0$. In particular, for $\sigma = 0.75$, the posterior mode is always near 3, which is the number of components used in the distribution of the innovations of the simulated time series.
Once we understand the influence of the prior parameters on the robustness of the posterior distribution, an interesting question is how to get a point estimate for the distribution of clusters, in the sense that each $\lambda_t$, for $t = p+1, \dots, T$, would be assigned to one of the available clusters.
From the Gibbs sampler, we can easily get a Monte Carlo approximation for the probabilities $d_{rt} = \Pr\{\lambda_r \ne \lambda_t \mid y\}$, for $r, t = p+1, \dots, T$. These probabilities define a dissimilarity matrix $D = (d_{rt})$ among the innovation rates. Although $D$ is not a distance matrix, we can use it as a starting point to represent the innovation rates in a two-dimensional Euclidean space using the technique of metric multidimensional scaling (see [18] for a general discussion). From this two-dimensional representation, we use hierarchical clustering techniques to build a dendrogram, which is appropriately cut in order to define three clusters, allowing us to assign a single cluster label to each innovation rate.
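A minimal sketch of this point-estimate procedure (our own implementation choices: classical metric MDS via double-centering of the squared dissimilarities, and Ward linkage cut at the desired number of clusters):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_innovation_rates(D, n_clusters=3):
    """Map a dissimilarity matrix D to 2-D by classical MDS, then cut a
    Ward dendrogram into n_clusters labels."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:2]         # two leading eigenpairs
    coords = v[:, top] * np.sqrt(np.maximum(w[top], 0.0))
    Z = linkage(coords, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# toy check: two well-separated groups are recovered exactly
D = np.ones((6, 6))
D[:3, :3] = 0.0
D[3:, 3:] = 0.0
labels = cluster_innovation_rates(D, n_clusters=2)
```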
Table 1 displays the confusion matrix of this assignment, showing that 83% of the innovations were grouped correctly in the clusters which correspond to the mixture components used to simulate the time series.

7. Earthquake Data

In this section, we analyze a time series of yearly worldwide earthquake events of substantial magnitude (equal to or greater than 7 points on the Richter scale) from 1900 to 2018 (http://www.usgs.gov/natural-hazards/earthquake-hazards/earthquakes).
The forecasting performances of the INAR(p) and the PY-INAR(p) models are compared using a cross-validation procedure in which the models are trained with data ranging from the beginning of the time series up to a certain time, and predictions are made for epochs outside this training range.
Using this cross-validation procedure, we trained the INAR(p) and the PY-INAR(p) models with orders $p = 1, 2$, and $3$, and made one-step-ahead predictions. Table 2 shows the out-of-sample mean absolute errors (MAE) for the INAR(p) and the PY-INAR(p) models. In this table, the MAEs are computed predicting the counts for the last 36 observations of the series. For the three model orders, the PY-INAR(p) model yields a smaller MAE than the original INAR(p) model.
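The rolling-origin evaluation just described can be expressed generically; in this sketch (our own, with a naive last-value forecaster standing in for the models' one-step-ahead posterior predictions):

```python
import numpy as np

def one_step_mae(y, predict, t0):
    """Train on y[:t] and predict y[t], for each t >= t0; return the MAE.
    `predict` maps an observed history to a point forecast."""
    errors = [abs(predict(y[:t]) - y[t]) for t in range(t0, len(y))]
    return float(np.mean(errors))

y = np.array([5, 7, 6, 8, 7, 9, 8])
mae = one_step_mae(y, predict=lambda history: history[-1], t0=3)  # 1.5
```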

Author Contributions

Theoretical development: H.G., A.L., H.F.L., P.C.M.F., I.P. Software development: H.G., P.C.M.F. All authors have read and agreed to the published version of the manuscript.

Funding

Helton Graziadei and Hedibert F. Lopes thank FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo) for financial support through grant numbers 2017/10096-6 and 2017/22914-5. Antonio Lijoi and Igor Prünster are partially supported by MIUR, PRIN Project 2015SNS29B.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. McKenzie, E. Some simple models for discrete variate time series. J. Am. Water Resour. Assoc. 1985, 21, 645–650.
  2. Al-Osh, M.; Alzaid, A. First-order integer-valued autoregressive (INAR(1)) process: Distributional and regression properties. Stat. Neerl. 1988, 42, 53–61.
  3. Weiß, C. An Introduction to Discrete-Valued Time Series; John Wiley & Sons: Hoboken, NJ, USA, 2018.
  4. Du, J.G.; Li, Y. The integer-valued autoregressive (INAR(p)) model. J. Time Ser. Anal. 1991, 12, 129–142.
  5. Neal, P.; Kypraios, T. Exact Bayesian inference via data augmentation. Stat. Comput. 2015, 25, 333–347.
  6. Pitman, J. Combinatorial Stochastic Processes; Technical Report 621; Springer: Berlin/Heidelberg, Germany, 2002.
  7. Dion, J.; Gauthier, G.; Latour, A. Branching processes with immigration and integer-valued time series. Serdica Math. J. 1995, 21, 123–136.
  8. Van Dyk, D.; Meng, X.L. The art of data augmentation. J. Comput. Graph. Stat. 2001, 10, 1–50.
  9. Tanner, M.; Wong, W. The calculation of posterior distributions by data augmentation. J. Am. Stat. Assoc. 1987, 82, 528–540.
  10. Ferguson, T. A Bayesian analysis of some nonparametric problems. Ann. Stat. 1973, 1, 209–230.
  11. Schervish, M.J. Theory of Statistics; Springer: Berlin/Heidelberg, Germany, 1995; pp. 52–60.
  12. Hjort, N.; Holmes, C.; Müller, P.; Walker, S. Bayesian Nonparametrics; Cambridge University Press: Cambridge, UK, 2010.
  13. Lijoi, A.; Mena, R.H.; Prünster, I. Bayesian nonparametric estimation of the probability of discovering new species. Biometrika 2007, 94, 769–786.
  14. Hamilton, J. Time Series Analysis; Princeton University Press: Princeton, NJ, USA, 1994; Volume 2, pp. 43–71.
  15. Gamerman, D.; Lopes, H. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006.
  16. Escobar, M.; West, M. Computing nonparametric hierarchical models. In Practical Nonparametric and Semiparametric Bayesian Statistics; Dey, D., Müller, P., Sinha, D., Eds.; Springer: Berlin/Heidelberg, Germany, 1998.
  17. Canale, A.; Prünster, I. Robustifying Bayesian nonparametric mixtures for count data. Biometrics 2017, 73, 174–184.
  18. Friedman, J.; Hastie, T.; Tibshirani, R. The Elements of Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2009; pp. 570–572.
Figure 1. Formation of the elbows for $\sigma = 0.5$ (left) and $\sigma = 0.75$ (right). The red dotted lines indicate the chosen values of $\lambda_{\max}$.
Figure 2. Posterior distributions of the number of clusters $K$ for the simulated time series with $\sigma = 0$ and $k_0 = 4, 10, 16, 30$. The red dotted lines indicate the value of $k_0$.
Figure 3. Posterior distributions of the number of clusters $K$ for the simulated time series with $\sigma = 0.25$ and $k_0 = 4, 10, 16, 30$. The red dotted lines indicate the value of $k_0$.
Figure 4. Posterior distributions of the number of clusters $K$ for the simulated time series with $\sigma = 0.5$ and $k_0 = 4, 10, 16, 30$. The red dotted lines indicate the value of $k_0$.
Figure 5. Posterior distributions of the number of clusters $K$ for the simulated time series with $\sigma = 0.75$ and $k_0 = 4, 10, 16, 30$. The red dotted lines indicate the value of $k_0$.
Table 1. Confusion matrix for the cluster assignments (rows: predicted cluster; columns: true cluster).

               True
Predicted     1     2     3
    1       297     3    20
    2        11   217    42
    3         0    84   316
Table 2. Out-of-sample MAEs for the INAR(p) and the PY-INAR(p) models, with orders $p = 1, 2$, and $3$. The last column shows the relative variations of the MAEs for the PY-INAR(p) models with respect to the corresponding MAEs for the INAR(p) models.

          INAR     PY-INAR    Δ PY-INAR
p = 1     3.861    3.583      −0.072
p = 2     3.583    3.417      −0.046
p = 3     3.972    3.305      −0.202
