Article

Sensitivity Analysis of Priors in the Bayesian Dirichlet Auto-Regressive Moving Average Model

1 Department of Statistics & Data Science, University of California, Los Angeles, 8125 Math Sciences Building, Los Angeles, CA 90095-1554, USA
2 Data Science—Forecasting, Airbnb, Inc., 888 Brannan Street, San Francisco, CA 94103, USA
3 Department of Biostatistics, Fielding School of Public Health, University of California, Los Angeles, 650 Charles E. Young Dr. South, Los Angeles, CA 90095-1772, USA
* Authors to whom correspondence should be addressed.
Forecasting 2025, 7(3), 32; https://doi.org/10.3390/forecast7030032
Submission received: 18 April 2025 / Revised: 22 May 2025 / Accepted: 17 June 2025 / Published: 20 June 2025
(This article belongs to the Section Forecasting in Economics and Management)

Abstract

We examine how prior specification affects the Bayesian Dirichlet Auto-Regressive Moving Average (B-DARMA) model for compositional time series. Through three simulation scenarios—correct specification, overfitting, and underfitting—we compare five priors: informative, horseshoe, Laplace, mixture of normals, and hierarchical. Under correct model specification, all priors perform similarly, although the horseshoe and hierarchical priors produce slightly lower bias. When the model overfits, strong shrinkage—particularly from the horseshoe prior—proves advantageous. However, none of the priors can compensate for model misspecification if key VAR/VMA terms are omitted. We apply B-DARMA to daily S&P 500 sector trading data, using a large-lag model to demonstrate overparameterization risks. Shrinkage priors effectively mitigate spurious complexity, whereas weakly informative priors inflate errors in volatile sectors. These findings highlight the critical role of carefully selecting priors and managing model complexity in compositional time-series analysis, particularly in high-dimensional settings.

1. Introduction

Compositional time series, in which observations are vectors of proportions constrained to sum to one, arise in a wide range of applications. For instance, market researchers track evolving shares of competing products [1], ecologists monitor species composition over time, and sociologists or political scientists follow shifting demographic or budgetary profiles [2]. In each case, the data lie within a simplex, making the Dirichlet model a common starting point [3,4]. However, when temporal dependence is also present—for example, today’s composition influences tomorrow’s—the Dirichlet framework must be extended to account for dynamics in a way that respects the simplex constraints.
Several approaches have emerged to handle dynamic compositional data, notably by coupling the Dirichlet with Auto-Regressive Moving Average (ARMA)-like structures [5], by using a logistic-normal representation [6], or by adopting new families of innovation distributions [7]. Others have adapted the model to cope with zeros [8] or extreme heavy-tailed behaviors. In particular, the Bayesian Dirichlet Auto-Regressive Moving Average (B-DARMA) model [9] addresses key compositional modeling challenges by introducing Vector Auto-Regression (VAR) and Vector Moving Average (VMA) terms for multivariate compositional data. Under B-DARMA, the composition at each day, or time point, is Dirichlet-distributed with parameters that evolve according to a VARMA process in the additive log-ratio space, capturing both compositional constraints and serial correlation.
Although B-DARMA provides a flexible foundation, practitioners must still specify priors for potentially high-dimensional parameter spaces, which can be prone to overfitting or omitted-lag bias. The growing literature on Bayesian shrinkage priors offers numerous solutions, from global–local shrinkage frameworks [10] to hierarchical approaches [11], along with classic spike-and-slab [12,13] and Laplace priors [14]. These priors can encourage sparsity and suppress extraneous lags in over-parameterized models, a crucial feature for many real-world compositional applications in which the number of possible lags or covariates exceeds the sample size. The horseshoe prior [15,16] has shown particular promise in forecasting studies where only a minority of parameters matter and many are effectively zero. At the same time, hierarchical shrinkage [11] facilitates partial pooling across related coefficient blocks, an appealing property when working with multi-sector or multi-species compositions.
In this paper, we systematically investigate how five different prior families—informative normal, horseshoe, Laplace, spike-and-slab, and hierarchical shrinkage—affect parameter recovery and predictive accuracy in the B-DARMA model. We compare their performance across three main scenarios using simulated data: (i) correct specification, where the model order matches the true process; (ii) overfitting, where extraneous VAR/VMA terms inflate model dimensionality; and (iii) underfitting, where key VAR/VMA terms are missing altogether. Our findings confirm that shrinkage priors—especially horseshoe and hierarchical variants—can successfully rein in overfitting, providing more robust parameter estimates and improved forecasts. Conversely, no amount of shrinkage compensates for omitted terms, highlighting the need for careful model specification.
To demonstrate the practical impact, we also apply B-DARMA to daily S&P 500 sector trading data, a large-scale compositional time series characterized by multiple seasonalities and long-lag behavior. Consistent with the simulation insights, we find that more aggressive shrinkage priors significantly reduce spurious complexity and improve forecast accuracy, especially for volatile sectors. These outcomes reinforce that while B-DARMA provides a natural scaffolding for compositional dependence, judicious prior selection and meaningful lag choices are pivotal.
In the remainder of this paper, we first review the B-DARMA model (Section 2) and outline how our five prior families (informative, horseshoe, Laplace, spike-and-slab, and hierarchical) encode distinct shrinkage behaviors. We then present the design and results of three simulation studies (Section 3 and Section 4) before turning to the empirical S&P 500 application (Section 5). We conclude with recommendations for practitioners modeling compositional time series with complex temporal dynamics and large parameter counts.

2. Background

2.1. Compositional Data and the Dirichlet Distribution

Compositional data consist of vectors of proportions, each strictly between zero and one and summing to unity [3]. Formally, let
$$\mathbf{y}_t = (y_{t1}, y_{t2}, \ldots, y_{tJ})', \qquad t = 1, \ldots, T,$$
where each $y_{tj} > 0$ and $\sum_{j=1}^{J} y_{tj} = 1$. The vector $\mathbf{y}_t$ resides in the $(J-1)$-dimensional simplex.
A natural choice for modeling such compositional vectors is the Dirichlet distribution. In its basic form, a Dirichlet random vector $\mathbf{x} = (x_1, \ldots, x_K)$ is parameterized by a concentration vector $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_K)$ with $\alpha_k > 0$. The probability density function is
$$p(\mathbf{x} \mid \boldsymbol{\alpha}) = \frac{1}{B(\boldsymbol{\alpha})} \prod_{k=1}^{K} x_k^{\alpha_k - 1},$$
where
$$B(\boldsymbol{\alpha}) = \frac{\prod_{k=1}^{K} \Gamma(\alpha_k)}{\Gamma\!\left(\sum_{k=1}^{K} \alpha_k\right)},$$
and Γ ( · ) is the Gamma function. This parameterization captures both the support of compositional data (the simplex) and the potential correlation structure among components.
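As a brief illustration (a minimal sketch; the concentration values below are arbitrary), a Dirichlet draw can be generated in base R by normalizing independent Gamma variates, which also makes the simplex constraint explicit:

```r
# Draw one Dirichlet(alpha) vector by normalizing independent Gamma variates:
# if g_k ~ Gamma(alpha_k, 1), then g / sum(g) ~ Dirichlet(alpha).
rdirichlet_one <- function(alpha) {
  g <- rgamma(length(alpha), shape = alpha, rate = 1)
  g / sum(g)
}

set.seed(1)
alpha <- c(2, 5, 1, 1, 3, 8)   # illustrative concentration vector (K = 6)
x <- rdirichlet_one(alpha)
sum(x)                         # equals 1: the draw lies in the simplex
```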

2.2. B-DARMA Model

To capture temporal dependence in compositional data, the Bayesian Dirichlet Auto-Regressive Moving Average (B-DARMA) model [9] augments the Dirichlet framework with VAR and VMA dynamics. Specifically, for each time t = 1 , , T , let y t be the observed composition. We assume
$$\mathbf{y}_t \mid \boldsymbol{\mu}_t, \phi_t \sim \text{Dirichlet}(\phi_t \boldsymbol{\mu}_t),$$
with density $f(\mathbf{y}_t \mid \boldsymbol{\mu}_t, \phi_t) \propto \prod_{j=1}^{J} y_{tj}^{\phi_t \mu_{tj} - 1}$, where $\boldsymbol{\mu}_t = (\mu_{t1}, \ldots, \mu_{tJ})'$ is the mean composition and $\phi_t > 0$ is a precision parameter. Both $\boldsymbol{\mu}_t$ and $\phi_t$ may vary with time.
  • ALR link.
Because each $\boldsymbol{\mu}_t$ lies in the simplex, we map it to an unconstrained $(J-1)$-dimensional vector via the additive log-ratio transform
$$\text{alr}(\boldsymbol{\mu}_t) = \left(\ln\frac{\mu_{t1}}{\mu_{tJ}}, \ldots, \ln\frac{\mu_{t,J-1}}{\mu_{tJ}}\right)'.$$
We denote
$$\boldsymbol{\eta}_t = \text{alr}(\boldsymbol{\mu}_t) \in \mathbb{R}^{J-1}.$$
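In code, the transform and its inverse are a few lines of R (a minimal sketch using the J-th component as the reference):

```r
# Additive log-ratio (ALR) transform and its inverse, with the J-th
# component as the reference category.
alr <- function(mu) {
  J <- length(mu)
  log(mu[-J] / mu[J])          # (J-1)-vector in unconstrained space
}

inv_alr <- function(eta) {
  e <- exp(c(eta, 0))          # reference component has log-ratio 0
  e / sum(e)                   # back onto the simplex
}

mu  <- c(0.30, 0.25, 0.20, 0.10, 0.10, 0.05)
eta <- alr(mu)
all.equal(inv_alr(eta), mu)    # TRUE: the transform is invertible
```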
  • VARMA structure.
To incorporate serial dependence, we assume that $\boldsymbol{\eta}_t$ follows a VARMA process in the transformed space
$$\boldsymbol{\eta}_t = \sum_{p=1}^{P} A_p \left[\text{alr}(\mathbf{y}_{t-p}) - X_{t-p}\boldsymbol{\beta}\right] + \sum_{q=1}^{Q} B_q \left[\text{alr}(\mathbf{y}_{t-q}) - \boldsymbol{\eta}_{t-q}\right] + X_t \boldsymbol{\beta},$$
for $t = m+1, \ldots, T$, where $m = \max(P, Q)$. In this notation, $A_p$ and $B_q$ are each $(J-1) \times (J-1)$ coefficient matrices; $X_t$ is a known $(J-1) \times r_\beta$ covariate matrix, including any intercepts, trends, or seasonality; and $\boldsymbol{\beta} \in \mathbb{R}^{r_\beta}$ is a vector of regression coefficients shared across the $(J-1)$ components.
  • Precision parameter.
The Dirichlet precision $\phi_t$ can also evolve over time. For an $r_\gamma$-vector of covariates $\mathbf{z}_t$, we set
$$\phi_t = \exp(\mathbf{z}_t' \boldsymbol{\gamma}),$$
with $\boldsymbol{\gamma} \in \mathbb{R}^{r_\gamma}$. In the absence of covariates, we simply have $\log \phi_t = \gamma$ for all $t$, so $\phi_t$ becomes a constant.
  • Parameter vector.
We gather all unknown parameters into a vector of length $C$,
$$\boldsymbol{\theta} = \left(A_{p,rs},\, B_{q,rs},\, \boldsymbol{\beta},\, \boldsymbol{\gamma}\right),$$
where $p = 1, \ldots, P$, $q = 1, \ldots, Q$, and $r, s = 1, \ldots, J-1$, and $\theta_j$ is the $j$-th element of $\boldsymbol{\theta}$. The total number of free parameters is thus
$$C = (P + Q)(J-1)^2 + r_\beta + r_\gamma.$$
Bayesian inference begins by positing a prior distribution p ( θ ) over the model parameters θ . Bayes’ theorem updates this prior to form the posterior
$$p(\boldsymbol{\theta} \mid \mathbf{y}_{1:T}) = \frac{p(\boldsymbol{\theta})\, p(\mathbf{y}_{(m+1):T} \mid \boldsymbol{\theta}, \mathbf{y}_{1:m})}{p(\mathbf{y}_{(m+1):T} \mid \mathbf{y}_{1:m})},$$
where
$$p(\mathbf{y}_{(m+1):T} \mid \boldsymbol{\theta}, \mathbf{y}_{1:m}) = \prod_{t=m+1}^{T} p\left(\mathbf{y}_t \mid \boldsymbol{\theta}, \mathbf{y}_{(t-m):(t-1)}\right),$$
and $p(\mathbf{y}_{(m+1):T} \mid \mathbf{y}_{1:m})$ is the normalizing constant obtained by integrating over $\boldsymbol{\theta}$. Each $p(\mathbf{y}_t \mid \boldsymbol{\theta}, \mathbf{y}_{(t-m):(t-1)})$ follows the Dirichlet likelihood (1).
Next, to generate predictions for the subsequent $S$ time points, $\mathbf{y}_{(T+1):(T+S)}$, we construct the joint predictive distribution
$$p\left(\mathbf{y}_{(T+1):(T+S)} \mid \mathbf{y}_{1:T}\right) = \int p\left(\mathbf{y}_{(T+1):(T+S)} \mid \boldsymbol{\theta}, \mathbf{y}_{1:T}\right) p\left(\boldsymbol{\theta} \mid \mathbf{y}_{1:T}\right) d\boldsymbol{\theta}.$$
In practice, analysts often summarize this distribution at future time points t ( T + 1 ) : ( T + S ) by reporting measures such as the posterior mean or median.
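For instance (a minimal sketch; the array of predictive draws below is a stand-in generated from normalized Gamma noise rather than from a fitted model), posterior means and 95% intervals of the joint predictive distribution can be summarized directly from simulated future paths:

```r
# Given an array of joint predictive draws of dimension (n_draws x S x J),
# one simulated future path per posterior draw, summarize each future
# composition by its posterior mean and 95% interval.
summarize_forecast <- function(draws) {
  list(
    mean  = apply(draws, c(2, 3), mean),
    lower = apply(draws, c(2, 3), quantile, probs = 0.025),
    upper = apply(draws, c(2, 3), quantile, probs = 0.975)
  )
}

# Illustrative stand-in for real predictive draws (n_draws = 100, S = 5, J = 6):
set.seed(2)
fake <- array(rgamma(100 * 5 * 6, shape = 2), dim = c(100, 5, 6))
fake <- fake / apply(fake, c(1, 2), sum)   # normalize each draw onto the simplex
str(summarize_forecast(fake))
```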

2.3. Bayesian Shrinkage Priors for B-DARMA Coefficients

In a fully Bayesian approach, all unknown parameters in the B-DARMA model—including the VARMA coefficients in A p and B q , the regression vector β , and the precision-related parameters γ —require prior distributions. Different shrinkage priors can produce significantly different outcomes, particularly in high-dimensional or sparse settings where many coefficients may be small. This section discusses five popular priors (normal, horseshoe, Laplace, spike-and-slab, and hierarchical), highlighting how each encodes shrinkage or sparsity assumptions.
An informative normal prior serves as a straightforward baseline. We model each coefficient $\theta_j$ by $\theta_j \sim N(a, b^2)$. The mean $a$ often defaults to zero unless prior knowledge indicates a different center. The variance $b^2$ determines the shrinkage strength; smaller $b^2$ yields tighter concentration around $a$. In B-DARMA applications, it may be sensible to set $b = 1$ for the VARMA parameters if they are believed to be small on average, whereas for the covariates $\boldsymbol{\beta}$, a smaller prior variance such as $b^2 = 0.01$ can reflect stronger beliefs that regression effects are modest.
The horseshoe prior [15] is well-suited for sparse problems. We model each coefficient $\nu$ as $\nu \mid \tau, \lambda_\nu \sim N(0, \tau^2 \lambda_\nu^2)$, with a global scale $\tau \sim \text{Cauchy}^+(0, 1)$ and local scales $\lambda_\nu \sim \text{Cauchy}^+(0, 1)$. Large local scales allow some coefficients to remain sizable, whereas most are heavily shrunk. This can be beneficial if the B-DARMA model includes many possible lags or covariates, only a small subset of which are expected to matter for compositional forecasting.
A Laplace (double-exponential) prior [14] employs the density $p(\nu \mid b) = \frac{1}{2b}\exp(-|\nu|/b)$. This enforces an $\ell_1$-type penalty that can drive many coefficients close to zero while still allowing moderate signals to persist. The scale $b$ can be chosen a priori or assigned its own hyperprior, such as a half-Cauchy, so that the data adaptively determine the shrinkage level. In a B-DARMA context, choosing a smaller $b$ for high-dimensional VARMA terms can prevent spurious estimates from inflating the parameter space.
A spike-and-slab prior [12] can introduce explicit sparsity by placing a point mass at zero. Let $\tau_j \sim \text{Beta}(1, 1)$ be the mixing parameter for the $j$th coefficient. Then, each coefficient $\theta_j$ follows the mixture
$$\theta_j \sim \tau_j\, N(0, 1) + (1 - \tau_j)\, \delta_0,$$
where δ 0 is the Dirac measure at zero. Coefficients drawn from the spike component remain exactly zero, effectively excluding them from the model, while coefficients from the slab remain freely estimated. This setup allows the B-DARMA specification to adapt by discarding irrelevant lags or covariates.
A hierarchical shrinkage prior [17] can encourage partial pooling across groups of coefficients. We model each coefficient $\nu$ by $\nu \mid \sigma \sim N(0, \sigma^2)$ and then place a half-Cauchy prior on $\sigma$. In B-DARMA, one could assign separate group scales to the AR, MA, and regression blocks, thereby allowing the model to learn an appropriate overall variability for each group of parameters.
These priors differ in how strongly they push coefficients toward zero and whether they favor a few large coefficients or moderate shrinkage for all. The normal prior provides baseline continuous shrinkage, the horseshoe prior excels when only a minority of parameters are truly large, the Laplace prior induces an $\ell_1$-type penalty that pushes many coefficients toward zero, the spike-and-slab prior explicitly discards some parameters, and the hierarchical prior enables group-level learning of shrinkage scales. The next sections illustrate how these choices affect both parameter recovery and compositional forecasting in various simulation and real-data scenarios.
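As a rough illustration (a minimal sketch with unit scales rather than the exact hyperparameters used later), the five prior families can be compared by drawing coefficients from each in base R; a small median of $|\theta|$ paired with a heavy 99th percentile indicates a prior that shrinks most coefficients while letting a few escape:

```r
# Draw n coefficients from each prior family to compare shrinkage profiles
# (unit scales throughout; illustrative, not the hyperparameters of Section 3.2).
set.seed(3)
n <- 5000

normal_prior <- rnorm(n, 0, 1)

# Horseshoe: global and local half-Cauchy(0, 1) scales.
tau    <- abs(rcauchy(1, 0, 1))
lambda <- abs(rcauchy(n, 0, 1))
horseshoe_prior <- rnorm(n, 0, tau * lambda)

# Laplace(0, b = 1): the difference of two Exp(1) variates is Laplace(0, 1).
laplace_prior <- rexp(n, rate = 1) - rexp(n, rate = 1)

# Spike-and-slab: point mass at zero mixed with a N(0, 1) slab,
# with mixing weight tau_j ~ Beta(1, 1).
tau_j <- rbeta(n, 1, 1)
slab  <- rbinom(n, 1, tau_j)
spike_slab_prior <- slab * rnorm(n, 0, 1)

# Hierarchical: one shared half-Cauchy scale for the whole group.
sigma <- abs(rcauchy(1, 0, 1))
hierarchical_prior <- rnorm(n, 0, sigma)

sapply(list(normal = normal_prior, horseshoe = horseshoe_prior,
            laplace = laplace_prior, spike_slab = spike_slab_prior,
            hierarchical = hierarchical_prior),
       function(x) quantile(abs(x), c(0.5, 0.99)))
```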

2.4. Posterior Computation

All B-DARMA models are fit with Stan [18] in R using Hamiltonian Monte Carlo. We run 4 chains, each with 500 warm-up and 750 sampling iterations, yielding 3000 posterior draws. The sampler uses adapt_delta = 0.85, max_treedepth = 11, and random initial values drawn uniformly from [−0.25, 0.25].
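A call matching these settings might look as follows (a sketch: the model file name bdarma.stan and the contents of stan_data are placeholders, not files distributed with the paper):

```r
library(rstan)

# Sampler settings described above; "bdarma.stan" and `stan_data` are
# placeholders for the actual model file and its input list.
fit <- stan(
  file    = "bdarma.stan",
  data    = stan_data,                 # list of y, X, P, Q, etc. (assumed)
  chains  = 4,
  warmup  = 500,
  iter    = 1250,                      # 500 warm-up + 750 sampling per chain
  control = list(adapt_delta = 0.85, max_treedepth = 11),
  init_r  = 0.25,                      # random inits drawn from [-0.25, 0.25]
  seed    = 123
)
```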

2.5. Stationarity and Structural Considerations

  • Open theoretical gap. B-DARMA inherits the difficulty noted by Zheng and Chen [5]: after the additive log-ratio (ALR) link, the innovation sequence is not a martingale-difference sequence (MDS). Consequently, classical stationarity results for VARMA(p, q) processes do not transfer directly, and a full set of strict- or weak-stationarity conditions remains an open problem ([9], §2.2).
  • Why Bayesian inference remains valid. Bayesian estimation proceeds via the joint posterior $p(\boldsymbol{\theta} \mid \mathbf{y}_{1:T}) \propto p(\mathbf{y}_{1:T} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta})$, which does not require the likelihood-error sequence to be an MDS [19]. We therefore regard B-DARMA as a flexible likelihood-based filter for compositional time series rather than assuming that the fitted parameters represent a strictly stationary data-generating process.

3. Simulation Studies

We conduct three simulation studies to investigate how different priors affect parameter inference and forecasting in a B-DARMA model. All studies use the same sparse DARMA(2,1) data-generating process (DGP) but vary the fitted model to be correctly specified, deliberately overfitted, or underfitted. We first describe the DGP and the priors considered, and then outline the study designs and performance metrics.

3.1. Data-Generating Process

We simulate a six-dimensional compositional time series $\mathbf{y}_t$ whose components sum to one at each time point. The true process is DARMA(2,1) with fixed precision $\phi = 500$, $\boldsymbol{\beta} = (0.1, 0.05, 0.03, 0.02, 0.04)'$, and $X_t = I_5$, a $5 \times 5$ identity matrix. We set the VAR and VMA coefficient matrices to
$$A_1 = \begin{pmatrix} 0.80 & 0.05 & 0.04 & 0.05 & 0.05 \\ 0.01 & 0.70 & 0.03 & 0.02 & 0.01 \\ 0.02 & 0.00 & 0.90 & 0.02 & 0.04 \\ 0.03 & 0.07 & 0.02 & 0.85 & 0.01 \\ 0.04 & 0.02 & 0.01 & 0.01 & 0.75 \end{pmatrix}, \qquad
A_2 = \begin{pmatrix} 0.30 & 0.03 & 0.02 & 0.05 & 0.04 \\ 0.02 & 0.20 & 0.01 & 0.02 & 0.01 \\ 0.01 & 0.05 & 0.25 & 0.01 & 0.01 \\ 0.01 & 0.04 & 0.01 & 0.15 & 0.00 \\ 0.06 & 0.00 & 0.11 & 0.02 & 0.20 \end{pmatrix},$$
$$B_1 = \begin{pmatrix} 0.50 & 0.02 & 0.03 & 0.00 & 0.03 \\ 0.05 & 0.40 & 0.03 & 0.01 & 0.02 \\ 0.02 & 0.01 & 0.45 & 0.02 & 0.13 \\ 0.01 & 0.10 & 0.05 & 0.35 & 0.01 \\ 0.01 & 0.04 & 0.11 & 0.10 & 0.40 \end{pmatrix}.$$
We initialize $\mathbf{y}_1$ and $\mathbf{y}_2$ from a Dirichlet$(\mathbf{1})$ distribution, simulate $T = 100$ observations, and replicate this procedure 50 times. We focus on posterior inference for $A_p$, $B_q$, and $\boldsymbol{\beta}$, as well as on forecasting accuracy.
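The generator can be sketched in a few lines of R, reusing the alr(), inv_alr(), and rdirichlet_one() helpers from Section 2. To keep the snippet short and numerically stable, the coefficient matrices below are simple diagonal stand-ins rather than the exact A1, A2, and B1 displayed above:

```r
# Simulate T observations from a DARMA(2,1) process in ALR space
# (a sketch: diagonal illustrative matrices, not the paper's exact ones;
#  beta, phi, and X_t = I_5 follow Section 3.1).
simulate_darma21 <- function(T, A1, A2, B1, beta, phi) {
  J <- length(beta) + 1
  y   <- matrix(NA_real_, T, J)
  eta <- matrix(NA_real_, T, J - 1)
  # Initialize the first two compositions from Dirichlet(1, ..., 1).
  y[1, ] <- rdirichlet_one(rep(1, J))
  y[2, ] <- rdirichlet_one(rep(1, J))
  eta[1, ] <- alr(y[1, ])
  eta[2, ] <- alr(y[2, ])
  for (t in 3:T) {
    eta[t, ] <- A1 %*% (alr(y[t - 1, ]) - beta) +
                A2 %*% (alr(y[t - 2, ]) - beta) +
                B1 %*% (alr(y[t - 1, ]) - eta[t - 1, ]) +
                beta                      # X_t beta reduces to beta since X_t = I_5
    mu <- inv_alr(eta[t, ])
    y[t, ] <- rdirichlet_one(phi * mu)
  }
  y
}

set.seed(4)
A1 <- diag(rep(0.5, 5))    # illustrative, stable stand-in for A1
A2 <- diag(rep(0.2, 5))    # illustrative stand-in for A2
B1 <- diag(rep(0.3, 5))    # illustrative stand-in for B1
beta <- c(0.1, 0.05, 0.03, 0.02, 0.04)
y_sim <- simulate_darma21(T = 100, A1, A2, B1, beta, phi = 500)
rowSums(y_sim)[1:5]        # each row sums to 1
```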

3.2. Prior Distributions and Hyperparameters

We specify five candidate priors for the coefficient vectors: normal (mean 0, variance 1), horseshoe (global and local $\text{Cauchy}^+(0,1)$ scales), Laplace (scale $b = 1$), spike-and-slab (with local mixing parameters $\tau_j \sim \text{Beta}(1,1)$), and hierarchical normal (with a half-Cauchy prior on the group-level scale). Although these priors remain the same across simulations, they are applied to different sets of coefficients: $A_1$, $A_2$, $B_1$, and $\boldsymbol{\beta}$ in Simulation 1; $A_1$–$A_4$, $B_1$–$B_2$, and $\boldsymbol{\beta}$ in Simulation 2; and $A_1$ and $\boldsymbol{\beta}$ in Simulation 3. For the Dirichlet precision parameter, we use $\gamma_\phi \sim N(7, 1.5)$. The normal prior on $\boldsymbol{\beta}$ is $N(0, 0.1)$, and the Laplace and hierarchical normal versions adopt smaller scales to impose stronger shrinkage on intercepts.

3.3. Study Designs

We fit three B-DARMA specifications to the same DARMA(2,1) data:
  • Study 1 (Correct Specification): B-DARMA(2,1) matches the true DGP.
  • Study 2 (Overfitting): B-DARMA(4,2) includes extraneous higher-order VAR and VMA terms.
  • Study 3 (Underfitting): B-DARMA(1,0) omits the second VAR lag and the MA(1) term.
Each configuration is paired with each of the five priors, yielding 15 total fitted models. We repeat the simulation for 50 synthetic datasets of length T.

3.4. Evaluation Metrics

We assess both parameter recovery and forecasting performance. Let $\theta_{j,\text{true}}$ be the true value of parameter $j$ and $\hat{\theta}_j^{(s)}$ its posterior mean in simulation $s$. We compute
$$\text{Bias}_j = \frac{1}{50}\sum_{s=1}^{50}\left(\hat{\theta}_j^{(s)} - \theta_{j,\text{true}}\right), \qquad \text{RMSE}_j = \sqrt{\frac{1}{50}\sum_{s=1}^{50}\left(\hat{\theta}_j^{(s)} - \theta_{j,\text{true}}\right)^2},$$
along with 95% credible-interval coverage and interval length. If a parameter is omitted from the fitted model (as in underfitting), we exclude it from these summaries.
For forecasting, we use the first 80 time points for training and the remaining 20 for testing. Let $\hat{y}_{t,k}^{(s)}$ be the posterior mean forecast for component $k$ at time $t$ in simulation $s$. We define
$$\text{RMSE}_{\text{forecast}} = \sqrt{\frac{1}{50 \times 20 \times 6}\sum_{s=1}^{50}\sum_{t=1}^{20}\sum_{k=1}^{6}\left(y_{t,k}^{(s)} - \hat{y}_{t,k}^{(s)}\right)^2},$$
where $y_{t,k}^{(s)}$ denotes the true test-set composition for the $s$-th simulation. Each of the three study designs is then evaluated under each of the five priors, illuminating how prior choice interacts with model misspecification.
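Given arrays of posterior summaries collected across the 50 replications, these metrics reduce to a few lines of R (a sketch; the input objects est, lo, hi, truth, y_true, and y_hat are placeholders):

```r
# Sketch of the evaluation metrics. Placeholder inputs:
#   est[s, j]            posterior mean of parameter j in simulation s
#   lo[s, j], hi[s, j]   95% credible-interval endpoints
#   truth[j]             true value of parameter j
#   y_true[s, t, k], y_hat[s, t, k]   test-set compositions and mean forecasts
param_summary <- function(est, lo, hi, truth) {
  data.frame(
    bias     = colMeans(sweep(est, 2, truth)),           # mean of (estimate - truth)
    rmse     = sqrt(colMeans(sweep(est, 2, truth)^2)),
    coverage = colMeans(sweep(lo, 2, truth) <= 0 & sweep(hi, 2, truth) >= 0),
    ci_len   = colMeans(hi - lo)
  )
}

forecast_rmse <- function(y_true, y_hat) {
  sqrt(mean((y_true - y_hat)^2))   # pooled over simulations, horizons, components
}
```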

4. Results

The summarized parameter estimation results of the three simulation studies are shown in Table 1, Table 3 and Table 5, and the summarized forecast results are shown in Table 2, Table 4 and Table 6.

4.1. Study 1: Correct Model Specification (DARMA(2,1))

4.1.1. Parameter Estimation

Table 1 shows the results for the correctly specified DARMA(2,1). All priors yield estimates that align with the true parameter values, and both bias and RMSE remain modest. Certain shrinkage priors, including the horseshoe and Laplace, produce slightly lower RMSE and narrower intervals than the normal prior. Hierarchical priors also reduce parameter uncertainty to some extent, although the differences relative to other shrinkage methods are not pronounced. Under spike-and-slab, small but non-zero signals can be set to zero, occasionally lowering coverage (for example, a coverage rate of 0.824 for $\boldsymbol{\beta}$).

4.1.2. Forecasting Performance

In Table 2 (Sim. 1 columns), the average forecast RMSE spans 0.031–0.032, with minimal variation among priors. The horseshoe, Laplace, and hierarchical priors provide slight gains in predictive accuracy compared to the informative normal or spike-and-slab approaches.

4.2. Study 2: Overfitting Scenario (B-DARMA(4,2))

4.2.1. Parameter Estimation

When B-DARMA(4,2) is applied to data generated by a DARMA(2,1) process, additional VAR and VMA terms that should be zero are introduced. In Table 3, relatively large biases and inflated RMSE values appear under the informative normal prior for these extraneous coefficients. Horseshoe priors shrink many of those coefficients near zero, reflected in low RMSE (for instance, 0.0176 or 0.0161 for $A_3$ and $A_4$) and coverage rates close to the nominal level. Spike-and-slab also suppresses unwanted terms, although borderline signals may be excessively reduced.

4.2.2. Forecasting Performance

The forecast RMSE under the overfitting conditions is listed in Table 2 (Sim. 2 columns). Higher errors (0.0341) are observed for the informative normal prior, while the horseshoe prior produces the lowest RMSE (0.0315). The Laplace and hierarchical approaches also outperform the weakly regularized alternative. The ratios in Table 4 indicate that the horseshoe’s forecast RMSE in the overfitted model differs little from the correctly specified case.

4.3. Study 3: Underfitting Scenario (B-DARMA(1,0))

4.3.1. Parameter Estimation

In Table 5, broad increases in RMSE and reduced coverage are noted when the second VAR lag and the VMA(1) term are omitted, regardless of the prior used. Horseshoe, Laplace, and spike-and-slab cannot compensate for missing structural components, which leads to biased AR(1) estimates and coverage gaps. The hierarchical prior exhibits comparable issues.

4.3.2. Forecasting Performance

The forecast RMSE remains near 0.032–0.033, indicating a 3–4% rise over the correct specification. This shortfall is consistent across priors (Table 4), confirming that underspecification cannot be resolved by shrinkage.

4.4. Overview of Results

These simulations illustrate how model specification and prior choice jointly affect B-DARMA outcomes. Under a correctly specified DARMA(2,1), all priors capture the process well, although the horseshoe and hierarchical priors yield slightly lower RMSE and narrower intervals. In overfitted scenarios, shrinkage priors—especially the horseshoe—readily suppress spurious parameters and maintain robust forecasts. Underfitting cannot be mitigated by any prior, as the exclusion of critical VAR or VMA terms inflates bias and diminishes coverage. Further exploration of alternative coefficient matrices, described in the Supplementary Materials, reinforces these findings: shrinkage priors protect against overfitting but cannot repair underspecified models. The horseshoe, Laplace, and hierarchical priors yield stable forecasts across various parameter settings, whereas the informative normal approach is more susceptible to inflated estimates in large or sparse models.

5. Application to S&P 500 Sector Trading Values

5.1. Motivation and Data Description

A fixed daily trading value in the S&P 500 is allocated across different sectors, giving rise to compositional data that captures how investors distribute their capital each day. Tracking these proportions over time can reveal macroeconomic trends, sector rotation, and shifts in investor sentiment. In our analysis, we examined the daily proportions of eleven S&P 500 sectors from January 2021 to December 2023. These sectors include the following:
  • Technology: software, hardware, and related services.
  • Healthcare: pharmaceuticals, biotechnology, and healthcare services.
  • Financial Services: banking, insurance, and investment services.
  • Consumer Discretionary: non-essential goods and services.
  • Industrials: manufacturing, aerospace, defense, and machinery.
  • Consumer Staples: essential goods, such as food and household items.
  • Energy: oil, gas, and renewable resources.
  • Utilities: public services such as electricity and water.
  • Real Estate: REITs and property management.
  • Materials: chemicals, metals, and construction materials.
  • Communication Services: media, telecommunication, and internet services.
Let $V_{kt}$ denote the dollar trading value executed in sector $k$ on trading day $t$, for $k = 1, \ldots, K$ ($K = 11$ sectors). The market-wide total is
$$g_t = \sum_{k=1}^{K} V_{kt},$$
and the composition we analyze is the vector of sector shares
$$\mathbf{y}_t = (y_{1t}, \ldots, y_{Kt})', \qquad y_{kt} = V_{kt}/g_t, \qquad \sum_{k=1}^{K} y_{kt} = 1.$$
The stationarity (or non-stationarity) of the compositional time series $\mathbf{y}_t$ is independent of that of the gross total $g_t$: $g_t$ may drift without affecting the stationarity of the shares, and, conversely, the shares could be non-stationary even if $g_t$ is itself stationary.
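Constructing the shares from a matrix of raw trading values is a one-line normalization (a sketch; trading_value is a placeholder T × K matrix of dollar values with rows indexing trading days and columns indexing the 11 sectors):

```r
# Convert dollar trading values into daily sector shares.
# `trading_value` is a placeholder T x K matrix; each row of `shares` sums to one.
g_t    <- rowSums(trading_value)
shares <- trading_value / g_t           # y_kt = V_kt / g_t, row-wise division
stopifnot(all(abs(rowSums(shares) - 1) < 1e-12))
```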
We used data from 1 January 2021 to 30 June 2023 for training, as shown in Figure 1. The forecast evaluation was conducted on 126 trading days, spanning 1 July through 31 December 2023.
Short-term variability, cyclical tendencies, and gradual long-term shifts emerged across the series. Technology and Financial Services consistently held the largest fractions of the fixed daily trading value, whereas Utilities and Real Estate occupied much smaller shares. Day-to-day fluctuations tended to be more pronounced in Consumer Discretionary and Communication Services, indicating greater sensitivity to rapidly changing market conditions. Seasonal patterns and moderate differences across weekdays were also evident, as illustrated in Figure 2, Figure 3 and Figure 4, further underscoring the multifaceted nature of sector-level trading dynamics.
Model specification (purposeful over-parameterization). A B-DARMA$(P = 10, Q = 0)$ model is fitted by design with a lag order that exceeds the horizon over which sector reallocations are generally thought to propagate. Ten trading days (roughly two calendar weeks) therefore represent a deliberate overfit, allowing us to evaluate how strongly the global–local shrinkage priors suppress redundant dynamics. Each sector’s additive log-ratio is regressed on its own composition at lags $1, \ldots, 10$, yielding 1000 VAR coefficients in the matrices $A_1, \ldots, A_{10}$.
Seasonality is modeled with Fourier bases that are fixed regardless of the chosen lag: two sine–cosine pairs capture the 5-day trading cycle (Monday–Friday), and five pairs capture the annual cycle of roughly 252 trading days. These 14 terms, together with a sector-specific intercept, form the design matrix for the linear predictor; the same seasonal structure enters the model for the Dirichlet precision ϕ t .
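A minimal sketch of how such a fixed Fourier design could be built in R (the helper name fourier_basis and the 630-day index are illustrative, not taken from the paper's scripts):

```r
# Fourier seasonal basis: K sine-cosine pairs for a cycle of a given period.
fourier_basis <- function(t, period, K) {
  out <- sapply(seq_len(K), function(k)
    cbind(sin(2 * pi * k * t / period), cos(2 * pi * k * t / period)))
  matrix(out, nrow = length(t))          # length(t) x 2K matrix
}

t_idx    <- seq_len(630)                                 # ~2.5 years of trading days
weekly   <- fourier_basis(t_idx, period = 5,   K = 2)    # 4 columns: 5-day cycle
annual   <- fourier_basis(t_idx, period = 252, K = 5)    # 10 columns: ~annual cycle
seasonal <- cbind(weekly, annual)                        # 14 seasonal regressors
dim(seasonal)                                            # 630 x 14
```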
In total, the specification estimates 1165 parameters: 1000 VAR coefficients, 140 seasonal coefficients, 10 intercepts, and 15 precision-related terms. The intentionally generous lag order thus functions as a stress test.
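The total follows directly from the parameter-count formula $C = (P+Q)(J-1)^2 + r_\beta + r_\gamma$ of Section 2.2, as a quick arithmetic check confirms:

```r
# Parameter count for the B-DARMA(10, 0) stress-test specification.
J <- 11; P <- 10; Q <- 0
var_coefs <- (P + Q) * (J - 1)^2      # 1000 VAR coefficients
r_beta    <- (J - 1) * (1 + 14)       # 10 intercepts + 140 seasonal coefficients
r_gamma   <- 1 + 14                   # precision intercept + its 14 seasonal terms
var_coefs + r_beta + r_gamma          # 1165 parameters in total
```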

5.2. Priors and Hyperparameters

For all priors, the intercept in $\gamma_\phi$ is given an $N(7, 1.5)$ prior, and all seasonal Fourier terms in $\gamma_\phi$ receive $N(0, 0.1)$ priors.
Under the informative normal prior, every element of $\boldsymbol{\beta}$ is modeled with an $N(0, 0.1)$ prior, and each element of $A_p$ has an $N(0, 1)$ prior.
The Laplace prior [14] is implemented via an exponential mixture: each coefficient $\theta_i$ satisfies $\theta_i \mid \nu \sim N(0, \nu)$ with $\nu \sim \text{Exponential}(1/b)$, leading to an overall Laplace$(0, b)$ prior. We set $b_\beta = 1.0$ and $b_A = 1.0$ for $\boldsymbol{\beta}$ and $A_p$, respectively.
For the spike-and-slab prior [12], each coefficient $\theta_i$ is drawn from a mixture: $\theta_i = 0$ with probability $(1 - \tau_i)$ or $\theta_i \sim N(0, 1)$ with probability $\tau_i$, where $\tau_i \sim \text{Beta}(1, 1)$ is the mixing weight.
The horseshoe prior [15] introduces one global scale parameter $\tau$ and local scales $\lambda_i$, all drawn from $\text{Cauchy}^+(0, 1)$. Each coefficient $\theta_i$ then follows $N(0, \tau \lambda_i)$.
Finally, under the hierarchical prior [17], the coefficients are partitioned into three groups: (i) the elements of $\boldsymbol{\beta}$, (ii) the diagonal entries of each lag matrix $A_p$, and (iii) the off-diagonal entries of each $A_p$. Then, the coefficients in $\boldsymbol{\beta}$ are modeled as $\beta_i \mid \sigma_\beta \sim N(0, \sigma_\beta)$, while for each lag $p$ in $A_p$, the diagonal entries follow $N(0.5, \sigma_A)$ and the off-diagonal entries follow $N(0, \sigma_{A,\text{off}})$.

5.3. Evaluation Metrics and Forecasting

A B-DARMA(10,0) model was fit with each of the five priors using the training data. A 126-day forecast was then generated for the test period using the mean of the joint predictive distribution. Mean absolute error (MAE) measures the average discrepancy between predicted and observed shares, while root mean squared error (RMSE) weights larger deviations more heavily.
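As a sketch (one plausible aggregation; actual and forecast are placeholder 126 × 11 matrices of observed and predicted shares), the per-sector and average errors can be computed as follows:

```r
# Per-sector and overall forecast accuracy for the 126-day test window.
# `actual` and `forecast` are placeholder 126 x 11 matrices of shares.
err         <- actual - forecast
rmse_sector <- sqrt(colMeans(err^2))     # one RMSE per sector (cf. Figure 6)
mae_sector  <- colMeans(abs(err))        # one MAE per sector
c(RMSE = mean(rmse_sector), MAE = mean(mae_sector))   # averages as in Table 7
```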

5.4. Results

In Figure 5, the forecasts (in turquoise) are shown together with the actual daily proportions (in red) for each sector over the 126-day test interval. The hierarchical and horseshoe priors led to predictions that aligned more closely with the observed values than the informative prior, particularly in volatile sectors such as Energy and Technology. Under the spike-and-slab prior, smaller signals were at times set to zero, which occasionally delayed responses to abrupt shifts in sector composition.
The sector-level RMSE values under each prior are shown in Figure 6, and the average RMSE and MAE values are listed in Table 7. The hierarchical and spike-and-slab priors yielded the smallest RMSE in multiple sectors, and horseshoe priors were found to be effective for larger fluctuations in the Consumer Cyclical and Financial Services sectors. The informative prior generally had higher errors, reflecting the value of regularization in a high-dimensional parameter space.

Summary of Findings

The findings in this real-data application align with our earlier simulation results. In the large-scale B-DARMA model, shrinkage priors reduced extraneous complexity and lowered forecast errors. The horseshoe, Laplace, and spike-and-slab priors limited the number of active coefficients and delivered stable predictions. Hierarchical priors performed similarly, in part because they partially pool information across related parameters, shrinking them toward a common distribution while still allowing differences where the data support them. Sectors with higher volatility, such as Technology, Communication Services, and Consumer Cyclical, appeared to benefit most from strong shrinkage, whereas more stable sectors like Utilities and Basic Materials performed comparably under all priors. Overall, these outcomes highlight the value of effective regularization in models with multiple lags and complex seasonality, especially in markets that experience rapid fluctuations.

6. Discussion

6.1. Comparisons Across Simulations and Real Data

Our three simulation studies demonstrated how prior selection interacts with model specification in B-DARMA. When the model order matched the true data-generating process (DARMA(2,1)), all priors yielded acceptable results, although the horseshoe and hierarchical priors produced slightly lower RMSE and better interval coverage. The overfitting scenario, in which B-DARMA(4,2) was fit to DARMA(2,1) data, confirmed that strong shrinkage—particularly the horseshoe—helps suppress spurious higher-order coefficients and avoids inflating forecast errors. Conversely, underfitting remained impervious to prior choice: omitting the critical AR(2) and MA(1) terms led to uniformly higher biases and coverage shortfalls across all priors. All four shrinkage priors neutralized redundant lags, and any of them is preferable to an under-specified alternative.
These lessons translate directly to real data, where we intentionally specified a large-lag B-DARMA model for S&P 500 sector trading. Just as in the overfitting simulation, the horseshoe, Laplace, and spike-and-slab priors effectively shrank extraneous parameters and preserved forecast accuracy. Hierarchical priors provided comparably strong performance through partial pooling, whereas the informative prior yielded higher forecast errors in volatile sectors such as Energy and Technology. These empirical patterns mirror the simulation findings, reinforcing that the combination of a potentially over-parameterized model and minimal shrinkage can lead to unstable estimates and suboptimal forecasts.
Computation time. Using identical Stan settings (four chains, 1250 iterations, 500 of them warm-up), the informative prior completed fastest, at about 10 min per chain, and served as our baseline. Laplace added roughly 20% to the wall-clock time (12 min), reflecting the heavier double-exponential tails that require more leapfrog steps. The horseshoe and hierarchical priors were slower by about 40% (14–15 min) because their global–local scale hierarchies force smaller step sizes during HMC integration. Spike-and-slab was the clear outlier, nearly doubling the run-time (20 min) owing to the latent inclusion indicators that create funnel-shaped geometry. Practitioners with tight computational budgets might prefer the Laplace prior, while more aggressive sparsity (horseshoe, hierarchical, or spike-and-slab) warrants proportionally longer runs.

6.2. Implications and Guidelines

Overall, our results underscore three main points relevant to both simulated and real-world compositional time-series modeling:
  • Shrinkage priors mitigate overfitting. Horseshoe priors are especially adept at handling sparse dynamics and large parameter spaces, as shown by the minimal performance degradation in both the simulated and real overfitting scenarios.
  • Hierarchical priors offer robust partial pooling. They attain performance comparable to the horseshoe prior while retaining smooth shrinkage across correlated parameters, making them a flexible choice for multi-sector or multi-component series.
  • Model misspecification overshadows prior advantages. Underfitting in simulations, or failing to include essential lags in practice, leads to systematic bias and reduced coverage that shrinkage alone cannot remedy. Model identification and appropriate lag selection remain critical for accurate inference.
The S&P 500 data analysis underscores these points: the shrinkage priors consistently outperformed the informative prior in managing a high-dimensional model, yet sector-specific volatility still limited accuracy. Sectors with greater day-to-day variability, such as Technology or Consumer Cyclical, benefited more from aggressive regularization than stable ones (e.g., Utilities and Basic Materials).
Incorporating a dynamic selection of lag structures or exploring alternative priors may further improve compositional forecasting in settings with complex seasonalities or extremely large parameter spaces. Meanwhile, practitioners should pair robust prior modeling with careful model diagnostics (e.g., residual checks and information criteria) to ensure that underfitting and overfitting are detected early.
Overall, these findings reinforce the need for carefully chosen priors, especially in high-dimensional compositional time-series contexts. The horseshoe, Laplace, and hierarchical priors successfully protect against overfitting while preserving essential signals, as illustrated by both the simulation and real-data (S&P 500) analyses. Nonetheless, fundamental model adequacy remains paramount, as even the best prior cannot rescue a structurally under-specified model. We recommend that analysts combine thorough sensitivity checks of priors with well-informed decisions about lag order, covariate inclusion, and potential seasonal effects in B-DARMA applications.
Code Availability
All R analysis scripts and Stan model files are publicly archived at https://github.com/harrisonekatz/bdarma_sensitivity_analysis (accessed on 15 June 2025; snapshot main@18Jun2025).

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/forecast7030032/s1: Table S1. Parameter Estimation Summary for Simulation Study 4 (Correct Specification); Table S2. Parameter Estimation Summary for Simulation Study 5 (Overfitting); Table S3. Parameter Estimation Summary for Simulation Study 6 (Underfitting); Table S4. Forecast Performance Summary Across Supplementary Simulations; Table S5. Forecasting Performance Ratios for Mean RMSE and SD RMSE; Table S6. Forecasting Performance Ratios Within Simulations (Best Model as Denominator).

Author Contributions

Conceptualization, H.K. and R.E.W.; methodology, H.K.; software, H.K. and L.M.; validation, H.K. and L.M.; formal analysis, H.K.; investigation, H.K. and L.M.; data curation, L.M.; writing—original draft preparation, H.K. and L.M.; writing—review and editing, H.K., L.M. and R.E.W.; visualization, H.K.; supervision, R.E.W.; project administration, H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Raw market data were downloaded from the open-access “S&P 500 Stocks” dataset by Andrew Mvd on Kaggle (https://www.kaggle.com/datasets/andrewmvd/sp-500-stocks?select=sp500_stocks.csv; accessed 1 January 2025). Processed sector-share matrices derived from these CSV files are stored in data_analysis/ within the GitHub archive. No new data were created or analyzed beyond these publicly available sources; all Monte Carlo outputs can be regenerated by rerunning the scripts with their fixed random-number seeds.

Conflicts of Interest

Authors Harrison Katz and Liz Medina were employed by the company Airbnb. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Boonen, T.J.; Guillén, M.; Santolino, M. Forecasting compositional risk allocations. Insur. Math. Econ. 2019, 84, 79–86. [Google Scholar] [CrossRef]
  2. Lipsmeyer, C.S.; Philips, A.Q.; Rutherford, A.; Whitten, G.D. Comparing Dynamic Pies: A Strategy for Modeling Compositional Variables in Time and Space. Political Sci. Res. Methods 2019, 7, 523–540. [Google Scholar] [CrossRef]
  3. Aitchison, J. The Statistical Analysis of Compositional Data; Chapman and Hall: London, UK, 1986. [Google Scholar]
  4. Greenacre, M. Compositional Data Analysis in Practice; Chapman and Hall/CRC: New York, NY, USA, 2018; pp. 1–120. ISBN 978-0-429-45553-7. [Google Scholar] [CrossRef]
  5. Zheng, T.; Chen, R. Dirichlet ARMA models for compositional time series. J. Multivar. Anal. 2017, 158, 31–46. [Google Scholar] [CrossRef]
  6. Casarin, R.; Grassi, S.; Ravazzolo, F.; Van Dijk, H.K. A Bayesian Dynamic Compositional Model for Large Density Combinations in Finance; Discussion Paper 21-016/III; Tinbergen Institute: Amsterdam, The Netherlands, 2021; pp. 1–51. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3783342 (accessed on 17 June 2025).
  7. Makgai, S.; Bekker, A.; Arashi, M. Compositional Data Modeling through Dirichlet Innovations. Mathematics 2021, 9, 2477. [Google Scholar] [CrossRef]
  8. Dong, Z.; Shang, H.L.; Hui, F.; Bruhn, A. A compositional approach to modelling cause-specific mortality with zero counts. Ann. Actuar. Sci. 2025, 1–26. [Google Scholar] [CrossRef]
  9. Katz, H.; Brusch, K.T.; Weiss, R.E. A Bayesian Dirichlet auto-regressive moving average model for forecasting lead times. Int. J. Forecast. 2024, 40, 1556–1567. [Google Scholar] [CrossRef]
  10. Griffin, J.E.; Brown, P.J. Bayesian global-local shrinkage methods for regularisation in the high-dimensional linear model. Chemom. Intell. Lab. Syst. 2021, 210, 104255. [Google Scholar] [CrossRef]
  11. Bitto, A.; Frühwirth-Schnatter, S. Achieving shrinkage in a time-varying parameter model framework. J. Econom. 2019, 210, 75–97. [Google Scholar] [CrossRef]
  12. Mitchell, T.J.; Beauchamp, J.J. Bayesian Variable Selection in Linear Regression. J. Am. Stat. Assoc. 1988, 83, 1023–1032. [Google Scholar] [CrossRef]
  13. Follett, L.; Yu, C. Achieving parsimony in Bayesian vector autoregressions with the horseshoe prior. Econom. Stat. 2019, 11, 130–144. [Google Scholar] [CrossRef]
  14. Park, T.; Casella, G. The Bayesian Lasso. J. Am. Stat. Assoc. 2008, 103, 681–686. [Google Scholar] [CrossRef]
  15. Carvalho, C.M.; Polson, N.G.; Scott, J.G. The Horseshoe Estimator for Sparse Signals. Biometrika 2010, 97, 465–480. [Google Scholar] [CrossRef]
  16. Polson, N.G.; Sokolov, V. Bayesian regularization: From Tikhonov to horseshoe. Wiley Interdiscip. Rev. Comput. Stat. 2019, 11, e1463. [Google Scholar] [CrossRef]
  17. Polson, N.G.; Scott, J.G. Local Shrinkage Rules, Lévy Processes and Regularized Regression. J. R. Stat. Soc. Ser. B Stat. Methodol. 2012, 74, 287–311. [Google Scholar] [CrossRef]
  18. Stan Development Team. RStan: The R Interface to Stan, R Package Version 2.32.7; Stan Development Team: New York, NY, USA, 2025. Available online: https://mc-stan.org/rstan/ (accessed on 17 June 2025).
  19. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis, 4th ed.; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
Figure 1. Daily proportions of S&P 500 trading flows by sector from January 2021 to June 2023. Sector colors are indicated in the legend.
Figure 2. Daily S&P 500 trading flows by sector for January 2021. Sector colors are indicated in the legend.
Figure 3. Time series of S&P 500 trading flows from January 2021 to January 2022. Sector colors are indicated in the legend.
Figure 4. Box plots of S&P 500 sector trading proportions by day of the week.
Figure 5. Forecast and actuals by sector and prior for S&P 500 data. Red lines indicate actual daily sector proportions; turquoise lines are the forecasts from the fitted B-DARMA model. The horseshoe and hierarchical priors tend to track actual series more tightly than the informative prior in many sectors.
Figure 6. RMSE by prior and sector for S&P 500 forecasting. Each facet shows one sector’s RMSE across the five priors.
Table 1. Parameter estimation summary for simulation study 1 (correct specification). We show the mean bias, RMSE, average credible-interval length, and coverage for each prior, summarizing over all parameters in each of β , A 1 , A 2 , and B 1 . Lower RMSE and shorter intervals typically indicate more effective shrinkage, while coverage near the nominal 0.95 is desirable.
Coefficient | Prior | Mean Bias | Mean RMSE | Mean CI Length | Coverage
β | Informative | −0.014 | 0.040 | 0.210 | 0.976
β | Horseshoe | −0.016 | 0.042 | 0.181 | 0.948
β | Laplace | −0.012 | 0.050 | 0.251 | 0.972
β | Spike-and-Slab | −0.012 | 0.074 | 0.283 | 0.824
β | Hierarchical | −0.014 | 0.042 | 0.182 | 0.948
A1 | Informative | −0.037 | 0.179 | 0.644 | 0.889
A1 | Horseshoe | −0.007 | 0.084 | 0.306 | 0.969
A1 | Laplace | −0.007 | 0.111 | 0.523 | 0.978
A1 | Spike-and-Slab | −0.010 | 0.208 | 0.589 | 0.804
A1 | Hierarchical | −0.032 | 0.139 | 0.511 | 0.937
A2 | Informative | 0.041 | 0.164 | 0.562 | 0.876
A2 | Horseshoe | 0.008 | 0.069 | 0.257 | 0.955
A2 | Laplace | 0.007 | 0.094 | 0.442 | 0.982
A2 | Spike-and-Slab | 0.009 | 0.180 | 0.497 | 0.791
A2 | Hierarchical | 0.020 | 0.110 | 0.439 | 0.957
B1 | Informative | 0.021 | 0.170 | 0.684 | 0.941
B1 | Horseshoe | −0.017 | 0.100 | 0.395 | 0.961
B1 | Laplace | −0.005 | 0.130 | 0.599 | 0.966
B1 | Spike-and-Slab | 0.008 | 0.241 | 0.642 | 0.754
B1 | Hierarchical | 0.002 | 0.122 | 0.580 | 0.975
Table 2. Forecast performance summary across simulations. M-RMSE is the mean (across simulations) of the root mean squared error on the test set, and SD-RMSE is its standard deviation.
(Sim. 1: True DGP; Sim. 2: Overfitting; Sim. 3: Underfitting)
Prior | Sim. 1 M-RMSE | Sim. 1 SD-RMSE | Sim. 2 M-RMSE | Sim. 2 SD-RMSE | Sim. 3 M-RMSE | Sim. 3 SD-RMSE
Informative | 0.0313 | 0.0039 | 0.0324 | 0.0039 | 0.0322 | 0.0041
Horseshoe | 0.0310 | 0.0036 | 0.0305 | 0.0035 | 0.0323 | 0.0041
Laplace | 0.0313 | 0.0039 | 0.0314 | 0.0040 | 0.0326 | 0.0043
Spike-and-Slab | 0.0323 | 0.0044 | 0.0327 | 0.0042 | 0.0331 | 0.0048
Hierarchical | 0.0312 | 0.0038 | 0.0305 | 0.0033 | 0.0322 | 0.0041
Table 3. Parameter estimation summary for simulation study 2 (overfitting). This table reflects the setting where the fitted model (B-DARMA(4,2)) exceeds the true DARMA(2,1) order. We report the mean bias, RMSE, credible-interval length, and coverage for each prior and parameter.
Coefficient | Prior | Mean Bias | Mean RMSE | Mean CI Length | Coverage
β | Informative | −0.012 | 0.043 | 0.205 | 0.956
β | Horseshoe | −0.010 | 0.042 | 0.176 | 0.944
β | Laplace | −0.008 | 0.053 | 0.219 | 0.932
β | Spike-and-Slab | −0.004 | 0.099 | 0.303 | 0.740
β | Hierarchical | −0.011 | 0.042 | 0.192 | 0.960
A1 | Informative | −0.061 | 0.224 | 0.751 | 0.809
A1 | Horseshoe | −0.053 | 0.157 | 0.310 | 0.915
A1 | Laplace | −0.046 | 0.178 | 0.657 | 0.954
A1 | Spike-and-Slab | −0.022 | 0.217 | 0.684 | 0.805
A1 | Hierarchical | 0.072 | 0.157 | 0.158 | 0.650
A2 | Informative | 0.033 | 0.176 | 0.756 | 0.925
A2 | Horseshoe | 0.032 | 0.091 | 0.225 | 0.898
A2 | Laplace | 0.017 | 0.128 | 0.611 | 0.958
A2 | Spike-and-Slab | 0.002 | 0.203 | 0.691 | 0.810
A2 | Hierarchical | −0.078 | 0.180 | 0.197 | 0.683
A3 | Informative | 0.017 | 0.143 | 0.670 | 0.966
A3 | Horseshoe | −0.004 | 0.022 | 0.168 | 1.000
A3 | Laplace | 0.002 | 0.100 | 0.514 | 0.986
A3 | Spike-and-Slab | 0.004 | 0.183 | 0.600 | 0.818
A3 | Hierarchical | 0.035 | 0.093 | 0.172 | 0.962
A4 | Informative | 0.002 | 0.113 | 0.473 | 0.949
A4 | Horseshoe | −0.002 | 0.017 | 0.137 | 0.999
A4 | Laplace | −0.005 | 0.081 | 0.363 | 0.974
A4 | Spike-and-Slab | −0.006 | 0.135 | 0.409 | 0.805
A4 | Hierarchical | −0.008 | 0.035 | 0.118 | 0.990
B1 | Informative | 0.051 | 0.231 | 0.845 | 0.854
B1 | Horseshoe | 0.026 | 0.141 | 0.419 | 0.946
B1 | Laplace | 0.040 | 0.202 | 0.769 | 0.949
B1 | Spike-and-Slab | 0.012 | 0.233 | 0.749 | 0.812
B1 | Hierarchical | −0.096 | 0.187 | 0.237 | 0.756
B2 | Informative | 0.055 | 0.231 | 0.862 | 0.861
B2 | Horseshoe | 0.014 | 0.072 | 0.349 | 0.996
B2 | Laplace | 0.028 | 0.171 | 0.755 | 0.969
B2 | Spike-and-Slab | 0.019 | 0.216 | 0.753 | 0.851
B2 | Hierarchical | −0.005 | 0.023 | 0.236 | 1.000
Table 4. Forecasting performance ratios for mean RMSE and SD RMSE. “S2” = overfitting scenario; “S3” = underfitting scenario; “S1” = correct DGP. Columns show how each simulation compares in terms of the mean RMSE (left) and SD RMSE (right). Ratios > 1 indicate worse performance relative to the denominator; < 1 indicates better.
Prior | Mean RMSE S2/S1 | Mean RMSE S3/S1 | Mean RMSE S3/S2 | Mean RMSE S2/S3 | SD RMSE S2/S1 | SD RMSE S3/S1 | SD RMSE S3/S2 | SD RMSE S2/S3
Informative | 1.091 | 1.030 | 0.945 | 1.058 | 1.385 | 1.051 | 0.759 | 1.318
Horseshoe | 1.016 | 1.042 | 1.026 | 0.975 | 1.278 | 1.139 | 0.891 | 1.122
Laplace | 1.060 | 1.042 | 0.983 | 1.017 | 1.385 | 1.103 | 0.796 | 1.256
Spike-and-Slab | 1.042 | 1.025 | 0.984 | 1.016 | 1.227 | 1.091 | 0.889 | 1.125
Hierarchical | 1.022 | 1.032 | 1.010 | 0.990 | 1.211 | 1.079 | 0.891 | 1.122
Table 5. Parameter estimation summary for simulation study 3 (underfitting). This table reports key metrics (mean bias, RMSE, average CI length, and coverage) when crucial AR(2) and MA(1) terms are omitted. All priors suffer from higher errors and coverage shortfalls, indicating that structural misspecification is the dominant source of inaccuracy.
Coefficient | Prior | Mean Bias | Mean RMSE | Mean CI Length | Coverage
β | Informative | −0.017 | 0.042 | 0.229 | 0.980
β | Horseshoe | −0.018 | 0.046 | 0.224 | 0.944
β | Laplace | −0.017 | 0.051 | 0.354 | 0.988
β | Spike-and-Slab | −0.013 | 0.064 | 0.837 | 1.000
β | Hierarchical | −0.017 | 0.044 | 0.222 | 0.964
A1 | Informative | −0.032 | 0.214 | 0.361 | 0.657
A1 | Horseshoe | −0.031 | 0.195 | 0.315 | 0.708
A1 | Laplace | −0.029 | 0.210 | 0.346 | 0.693
A1 | Spike-and-Slab | −0.026 | 0.233 | 0.390 | 0.656
A1 | Hierarchical | −0.036 | 0.215 | 0.358 | 0.653
Table 6. Forecasting performance ratios within simulations (best model as denominator).
(Sim. 1: True DGP; Sim. 2: Overfitting; Sim. 3: Underfitting)
Prior | Sim. 1 Mean RMSE | Sim. 1 SD RMSE | Sim. 2 Mean RMSE | Sim. 2 SD RMSE | Sim. 3 Mean RMSE | Sim. 3 SD RMSE
Informative | 1.010 | 1.083 | 1.083 | 1.174 | 1.000 | 1.000
Horseshoe | 1.000 | 1.000 | 1.000 | 1.000 | 1.003 | 1.000
Laplace | 1.010 | 1.083 | 1.053 | 1.174 | 1.012 | 1.049
Spike-and-Slab | 1.042 | 1.222 | 1.071 | 1.174 | 1.029 | 1.171
Hierarchical | 1.007 | 1.056 | 1.011 | 1.000 | 1.000 | 1.000
Table 7. Mean RMSE and MAE by model for the S&P 500 analysis. We aggregate forecast errors across all 11 sectors under each prior. Lower RMSE and MAE values indicate better overall predictive performance, revealing that the hierarchical and horseshoe strategies often outperform the more permissive informative prior.
Model | RMSE | MAE
Informative | 0.0358 | 0.03190
Horseshoe | 0.0138 | 0.01060
Laplace | 0.0148 | 0.01180
Spike-and-Slab | 0.0140 | 0.01110
Hierarchical | 0.0121 | 0.00910
