Bayesian Learning in an Affine GARCH Model with Application to Portfolio Optimization

This paper develops a methodology to accommodate uncertainty in a GARCH model with the goal of improving portfolio decisions via Bayesian learning. Given the abundant evidence of uncertainty in estimating expected returns, we focus our analyses on the single parameter driving expected returns. After deriving an Uncertainty-Implied GARCH (UI-GARCH) model, we investigate how learning about uncertainty affects investments in a dynamic portfolio optimization problem. We consider an investor with constant relative risk aversion (CRRA) utility who wants to maximize her expected utility from terminal wealth under an Affine GARCH(1,1) model. The corresponding stock evolution, and therefore the wealth process, is treated as a Bayesian information model that learns about the expected return with each period. We explore the one- and two-period cases, demonstrating a significant impact of uncertainty on the optimal allocation and wealth-equivalent losses, particularly in the case of a small sample size or large standard errors in the parameter estimation. These analyses are conducted under well-documented parametric choices. The methodology can be adapted to other GARCH models and applications beyond portfolio optimization.


Introduction
Working with statistical models requires the estimation of parameters. In most cases, these parameters are not directly observable and, therefore, carry the risk of an inaccurate (e.g., large-variance, biased) estimate. When using these models in portfolio construction, inaccurate estimates of the parameters driving expected returns, variances, and covariances represent an additional source of risk to the investor. This risk, also known as estimation error or parameter risk, should be considered when making decisions based on the model.
A main direction in the literature for dealing with estimation error is the Bayesian approach. Here, the unknown parameters are treated as random variables. A prior over the parameters, together with observations, is used to calculate a posterior distribution of the parameters and an update of the return distribution. Hence, a Bayesian optimal portfolio can be built by maximizing an objective function with respect to the updated distribution. This direction of Bayesian statistics has been applied in various studies, such as [1], relying on diffuse priors, and [2], adding economic objectives to the prior. Both studies work within one-period mean-variance theory [3], while [4] extended them to a multi-period model for a general posterior. Similarly, ref. [5] proposed a Bayesian approach in a discrete setting for normally distributed returns with an unknown mean and constant volatility, while ref. [6] extended this approach to a multivariate setting, still assuming constant volatility. In general, ref. [7] offered a comprehensive framework for addressing parameter error within the context of stochastic control models and Bayesian analysis.
An alternative approach to studying estimation error consists of building a confidence interval around the parameter of interest and adding the resulting interval as a constraint to the optimization; see, for instance, [8] for one-period mean-variance models and [9] for continuous-time models. We should also mention the work of [10], who used fuzzy numbers to tackle parameter risk in a mean-variance framework.
The formulation and study of an expected-utility-maximizing control model, within a Bayesian setting for parameter error in a GARCH model, is the main objective of this work. We focus on the error arising from the market price of risk (MPR) driving the expected asset return, to derive a semi-closed-form representation for the optimal allocation. The expected return is much harder to estimate than variances and covariances; see [11]. Ref. [12] found that errors in estimating expected returns are over 10 times as costly as errors in estimating variances, and over 20 times as costly as errors in estimating covariances, in terms of the cash-equivalent loss from using estimated rather than true parameters.
It is important to emphasize that this work does not enter the larger but also more complex area of ambiguity aversion, also known as model uncertainty or robustness analysis, which is a future area of research. A Bayesian approach assumes a single prior on the unknown parameter, which is recognized in the literature as neutral to uncertainty in the sense of [13]. As per the seminal work of [14], decision makers might not be neutral to ambiguity; see [15,16] for works allowing for multiple priors as a way of capturing ambiguity/uncertainty. This phenomenon occurs when people have little competence in assessing the probability distribution or feel that other people are more qualified to evaluate the risk of their portfolios.
One of the objectives here is to derive closed-form solutions, which are rare for multi-period portfolio analysis. This is not the case for continuous-time analyses, where the ground-breaking work of [17] provided analytical expressions in the context of Expected Utility Theory (EUT). Other possible frameworks for portfolio optimization, such as minimax or multi-objective optimization, are shown in [18]. In the common EUT framework, solutions have been derived not only for models with stochastic volatility (see [19]), but also in the presence of ambiguity aversion; see the seminal work of [20], who used EUT and robust control to obtain closed-form solutions while modeling the stocks via geometric Brownian motion (GBM). This work was extended to study ambiguity aversion in a stochastic volatility setting by [21]. To the best of our knowledge, estimation error, as opposed to ambiguity aversion, has not been fully addressed in continuous time. A related, but not equivalent, topic is filtering analysis. In filtering, instead of parameters, the investor wants to learn about an unobservable process, reminiscent of the parameter from the Bayesian perspective; see [22].
In practice, continuous-time models can be challenging to calibrate, particularly those with stochastic volatility. This has led to a preference for models in discrete time, not only due to their ease of estimation, but also due to their ability to provide a more flexible representation of financial markets for investors. Nonetheless, to the best of our knowledge, there is no study in the literature addressing GARCH models in portfolio optimization together with parameter errors via Bayesian analysis. The Affine GARCH model, pioneered by [23], which permits closed-form pricing of options, has led to recent analytical results within EUT; see [24,25] for extensions to other Affine GARCH models. In these works, the authors did not account for parameter uncertainty. The availability of analytical solutions in the world of Affine GARCH models makes them the best candidate for the Bayesian analysis conducted here.
For clarity, we outline the following specific contributions of this work:

•
We adapt the control model theory developed in [7] to the setting of an Affine GARCH model with Bayesian learning on the expected return parameter λ. This leads to a new non-affine GARCH model named the Uncertainty-Implied GARCH (UI-GARCH) model. Although we start with an Affine GARCH, a non-affine structure appears due to the formulation of parameter uncertainty.

•
We pioneer the study of an expected utility portfolio optimization problem with a CRRA utility under the UI-GARCH model. This allows us to understand the impact of uncertainty on portfolio decisions. We derive closed-form solutions for the optimal allocation in a one-period and a two-period representation.

•
We perform numerical analyses for two well-documented parametric sets, using maximum likelihood estimates and standard errors for the prior. Compared to the case of no uncertainty, we find large changes in the optimal allocation (in the range of 20% to 120%) and significant wealth-equivalent losses, which can be up to 20% in the extreme case of low risk aversion and low sample size.

•
Given the impact of uncertainty on λ within a GARCH model for one- and two-period representations, we conclude that the importance of accounting for uncertainty increases with the number of periods and the variety of uncertain parameters. This highlights the need for further research in this area.

•
The methodology developed here can be expanded to other GARCH models, not exclusively Affine, as well as objectives beyond portfolio theory.
The paper is structured as follows. Section 2 introduces the mathematical setup, presents the control model as well as extensions incorporating uncertainty, and considers the optimization problem within this framework. Section 3 presents numerical results on the portfolio optimization, displays a sensitivity analysis, and performs a comparison to strategies without the inclusion of uncertainty. Section 4 concludes the paper. Appendix A provides proofs and calculations as well as complementary material.

GARCH Model and Control Settings
Let (Ω, F, P) be a complete probability space with filtration {F_n}_{n∈{0,...,N}}. All stochastic processes are defined on this space. In this setting, the logarithm of a risky asset price is modeled by the affine GARCH model introduced by [23] (referred to as HN-GARCH). The dynamics of this model are given by:

X_n = X_{n−1} + r + λ h_n + √(h_n) z_n,
h_{n+1} = ω + β h_n + α (z_n − θ √(h_n))²,

where X_0 is non-random, r is the continuously compounded single-period risk-free rate, z_n is a sequence of independent standard normal innovations, and h_n is the conditional variance of the log-return X_n of the asset between n − 1 and n, with β + αθ² < 1 ensuring stationarity. The long-term average of the variance, h̄, is given by:

h̄ = (ω + α) / (1 − β − αθ²).

As the model uses the log-returns of the stock prices, we are also interested in the log-wealth process of a portfolio consisting of a risky and a risk-free asset. By B_n we denote the risk-free asset, i.e., the bank account, continuously compounded at the interest rate r. The self-financing condition reads:

V_n = φ_{S,n} S_n + φ_{B,n} B_n,

where φ_{S,n} denotes the number of stocks and φ_{B,n} the number of units in the cash account at time n, while the proportion of wealth invested in the risky asset S_n at any time n is defined as π_n = φ_{S,n} S_n / V_n. Using a Taylor approximation of order two, [24] derived an approximated log-wealth process. The log-return Y_n follows a N(r + λ h_n, h_n) distribution, and with the HN-GARCH model substituted for the log-return, the approximated log-wealth process becomes:

W_n = W_{n−1} + r + π_{n−1} (λ + 1/2) h_n − (1/2) π²_{n−1} h_n + π_{n−1} √(h_n) z_n.

The authors show that the impact of the approximation is negligible.
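For concreteness, the HN-GARCH recursion can be simulated in a few lines of Python. This is a minimal sketch; the parameter values below are placeholders chosen to satisfy the stationarity condition β + αθ² < 1, not the estimates used later in the paper.

```python
import numpy as np

def simulate_hn_garch(n_periods, r, lam, omega, alpha, beta, theta, h0, rng=None):
    """Simulate log-return increments and conditional variances of an HN-GARCH(1,1) path."""
    if rng is None:
        rng = np.random.default_rng(0)
    h = np.empty(n_periods + 1)
    x = np.zeros(n_periods)  # log-return increments X_n - X_{n-1}
    h[0] = h0
    for n in range(n_periods):
        z = rng.standard_normal()
        x[n] = r + lam * h[n] + np.sqrt(h[n]) * z
        h[n + 1] = omega + beta * h[n] + alpha * (z - theta * np.sqrt(h[n])) ** 2

    return x, h

# Illustrative (not estimated) parameters; beta + alpha * theta**2 = 0.71 < 1
x, h = simulate_hn_garch(252, r=0.01 / 252, lam=2.0, omega=1e-6,
                         alpha=1e-6, beta=0.7, theta=100.0, h0=3.56e-5)
```

Note that the variance recursion keeps h strictly positive whenever ω > 0, which the loop above preserves by construction.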
In this paper, we wish to perform portfolio optimization in an HN-GARCH model, assuming the risk premium λ to be unknown but described by a prior distribution µ_0. The prior distribution's variance quantifies uncertainty; with zero prior variance, there is no uncertainty. For the portfolio optimization, we choose a power utility function of the form U(V) = V^γ/γ. This power utility characterizes the investor as having a constant level of relative risk aversion (CRRA) of 1 − γ. For the portfolio optimization problem, we assume γ < 0.
The case without uncertainty and a final time horizon N has already been studied in [24], with a value function describing the maximal expected utility over N periods given by: Including uncertainty transforms the problem into: We begin by rewriting Problem (1) in the framework of a stochastic control model. We will extend this model and use this formal setup to study Problem (2).

Stochastic Control Model
Following [7], we set up the N-staged stochastic control model (CM) described by the tuple: where the state space is given by (W, h) = s ∈ S and Q : S × Y → R_{≥0} is the transition probability measure on Y, a σ-algebra on Y, where we use the Borel σ-algebra on R, resp. R_{≥0}, as a standard. More specifically, Q is given by: The transition function T : S × A × Y → S is given by: The terminal utility function Φ_0 : S → R is: The control model can be read as follows. At time n ∈ N_0, we have a log-wealth of W_n and observe a variance of h_{n+1}, stored in the state s_n ∈ S. We now choose a portfolio weight a_n ∈ A. A random movement Y_n of the underlying asset occurs according to Q and leads via the transition function T(s_n, a_n, Y_n) to a new state s_{n+1} ∈ S. At the final time horizon N ∈ N_0, a terminal utility of Φ_0(s_N) is obtained.
We define an N-stage policy π_N = (f_n)_{n=0}^{N−1} as a sequence of measurable mappings f_n : S → A. Defining F := {f : S → A measurable, f(s) ∈ A, ∀ s ∈ S} as the set of all one-stage policies, F^N denotes the set of all N-stage policies π_N.
Next, we define the expected terminal utility. For this and f ∈ F, we will use the following notation: and consequently: Let now N ∈ N_0 be the terminal time and π = (f_0, ..., f_{N−1}) ∈ F^N a sequence of one-stage policies. The expected terminal utility is then given by: and the maximal expected terminal utility is given by: Thus, Problem (1) is equivalent to solving Φ_N in the control model (CM). A useful tool to solve Problem (1) is the value iteration. For this, we introduce the following two operators for all admissible functions v : S → R (following [24], a function Φ_0 : S → R is called admissible if there exists a set of functions M ⊂ {v : S → R : E[|v|] < ∞, v concave in the first component of S}, such that U : M → M, Φ_0(s_0) ∈ M, and that for all v ∈ M there exists an f_v ∈ F such that f_v(s) maximizes a ↦ Lv(s, a) on A for all s ∈ S): Uv(s) := max_{a∈A} Lv(s, a).

Theorem 1. (Value iteration)
Let M be a set of functions such that Φ_0 is admissible. Then: The value iteration implies that Problem (1) can be rewritten as: and therefore be solved recursively.

Uncertainty Control Model
In the previous model (CM), there was no parameter uncertainty. To include it, we set up an Uncertainty Control Model (UCM). In this model, the risk premium λ is now considered to be unknown but fixed. The model (UCM) is given by the tuple: where S, A, Y, and Φ_0 are as in (CM) and Λ describes the parameter space. The transition probability measure Q : Λ × S × Y → R_{≥0} on Y now depends on the uncertain but fixed parameter: as does the transition function: We note that in the case of |Λ| = 1, the (UCM) becomes a (CM).
The idea is now to collect information about the unknown parameter over time, and thus improve the decision process. Therefore, we will need the notion of the set of prehistories up to time n, which is recursively defined by: An element H_n = (s_0, a_0, y_1, s_1, a_1, y_2, ..., s_n) is the prehistory at time n. Using the prehistory, we can define a policy π_N = (f_n)_{n=0}^{N−1} as a sequence of measurable mappings. In a similar manner as before, we define the corresponding operators, and thus have the expected terminal utility given by: and the maximal expected terminal utility by: This model takes uncertainty into account, but in general, there is no N-stage policy which is optimal for all λ ∈ Λ. To overcome this problem, we apply the so-called Bayes principle, i.e., we assume a prior distribution µ_0 ∈ P(Λ), the set of all probability measures on Λ, and aim to solve the optimization problem:

Bayesian Information Model
From historical data, we have information on the risk premium λ that we can incorporate into the model. Assuming a prior distribution of the risk premium, we can derive the sequence of posterior distributions using Bayes' theorem. Therefore, for a distribution µ ∈ P(Λ) and q, the density of Q, we define the Bayes operator as the usual normalized product of prior and likelihood, with the convention that the prior is returned if the denominator equals 0. With a prior distribution µ_0, we can use the Bayes operator to derive the sequence of posterior distributions depending on the prehistory recursively, that is: We assume a N(m_0, σ_0²) prior distribution for λ. This implies a sequence of N(m_n, σ_n²) posterior distributions with:

1/σ_n² = 1/σ_0² + Σ_{k=1}^{n} h_k,    m_n = σ_n² ( m_0/σ_0² + Σ_{k=1}^{n} (y_k − r) );

see Appendix A.1 for the derivation.
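Since λ enters the return mean linearly with known conditional variances, the update is conjugate. The following sketch implements such a normal-normal update, assuming the likelihood y_k ~ N(r + λh_k, h_k) implied by the model; the function name and notation are ours, not the paper's.

```python
import numpy as np

def posterior_lambda(m0, s0_sq, y, h, r):
    """Conjugate normal update for lambda given log-returns y_k ~ N(r + lambda*h_k, h_k).

    The sufficient statistics are sum(h_k) and sum(y_k - r), mirroring the
    two-dimensional information state in the text (a sketch under the stated
    likelihood, not the authors' exact notation)."""
    y, h = np.asarray(y, dtype=float), np.asarray(h, dtype=float)
    prec = 1.0 / s0_sq + h.sum()               # posterior precision
    s_sq = 1.0 / prec                          # posterior variance
    m = s_sq * (m0 / s0_sq + (y - r).sum())    # posterior mean
    return m, s_sq

# With no observations, the posterior equals the prior; each observation
# shrinks the posterior variance.
m, s = posterior_lambda(2.0, 1.0, [], [], 0.0)
m1, s1 = posterior_lambda(2.0, 1.0, [0.5], [0.25], 0.0)
```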
The current information at time n on λ is thus captured by µ_n ∈ P(Λ), and we recognize that we only need the actual state s_n and the information in the posterior µ_n(H_n; ·) to decide on the optimal one-period policy at time n. Over time and via the posterior, we improve our knowledge through observing the underlying. We will make use of this feature by adding an information state to the (UCM).
So far, the transition function T_2 depends on the unknown parameter λ. To derive a transition function for the Bayesian information model, independent of λ, we add the variance as a second random variable to the transition probability measure, as both y and h are driven by the innovation z. The transition probability measure Q : S × P(Λ) × Y ⊗ H → R_{≥0} is now defined on Y ⊗ H, the product σ-algebra of Y and H, the σ-algebra on the variance space R_{≥0}. Using the Dirac delta function δ, the joint conditional density of Q, written in terms of z, is now given by: Combining Q with the current state of information, the transition probability measure Q′ is defined by: We can explicitly calculate Q′'s density q′ as: for some constants c and c′ (see Appendix A.1 for the complete calculations).
From this, we can derive the uncertainty-adjusted transition function T_2, allowing us to reduce Q′ back to an adjusted transition probability measure for y. For this, let s = (W, h) be the state and H the history available at some point in time.
Thus, we define the transition function T ′ : S × P(Λ) × A × Y → S ′ by: with: We can derive the following corollary.
Corollary 1. The information-implied process dynamics are given by: We call this process the UI-GARCH model. This process, which is derived from an affine GARCH, is no longer affine. The non-affine structure is a result of considering parameter uncertainty. For a strategy π, the UI-GARCH implies a wealth process W′ given by: We note that the sequence of posterior distributions so far depends on the complete prehistory H_n. However, to update the posterior distribution, only some of the information stored in the prehistory is relevant. Filtering this information is achieved by the sufficient statistic t_n : H_n → I, mapping into the information space I = R²: Using this, we can rewrite the update of the posterior distribution in terms of the information i_n, with a corresponding probability measure μ̂ : I → P(Λ).
Using the posterior distributions and the sufficient statistic, we can define a so-called Bayesian Information Model (BIM) describing the flow of information, given by the tuple: where A, Y, and Φ_0 remain the same as in (CM). The transition probability measure Q′ is now given by: and the transition function with: The one- and N-stage strategies in this model are also denoted with a prime and depend on the state s′ = (s, i) ∈ S′. All other definitions of the (CM) transfer correspondingly.
Using Q′, we can define, as above, the expected terminal utility in the BIM as: and the maximal expected terminal utility in the BIM as: With the adjusted form: the value iteration, Theorem 1, also holds for the maximal expected terminal utility in the BIM if we replace S, Φ, L, U by S′, Φ′, L′, U′. Furthermore, due to the martingale property of the Bayes operator, the following holds for the expected terminal utilities of the (UCM) and the (BIM) (see [7], Proposition 23.1.16, p. 400): In combination with the value iteration of the (BIM), this implies that we can reformulate Problem (2) to:

Analytical Solutions
To solve the problem, we start by considering it in one period, i.e., N = 1. At time n = 0, we have not yet collected any information, and thus the information state is given by i_0 = (0, 0). The implied process is: where h_1, h′_1 are F_0-measurable. The problem in one period is: and its solution is shown next.
Proposition 1. The solution to (14) is given by: We note that this solution looks very similar to the solution of [17]. However, it has an additional component in the denominator that, as γ < 0, reduces the portfolio weighting as the degree of uncertainty, either h_1 or σ_0, increases.
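To illustrate, the one-period optimization can be checked numerically. The sketch below uses illustrative parameter values (not the paper's estimates), integrates the normal prior on λ out of the second-order log-wealth approximation (so the CRRA expected utility becomes a Gaussian moment-generating function), and compares a brute-force grid maximization against a candidate Merton-type closed form with an uncertainty term in the denominator. The closed form is our reconstruction under these assumptions, not necessarily the exact expression of Proposition 1.

```python
import numpy as np

def expected_utility(pi, gamma, r, m0, s0_sq, h1):
    """E[U(V_1)] for CRRA U(v) = v^gamma / gamma under the second-order
    log-wealth approximation, with lambda ~ N(m0, s0_sq) integrated out
    analytically: the log-wealth is then normal, so the expectation is a
    Gaussian moment-generating function."""
    mu = r + pi * (m0 + 0.5) * h1 - 0.5 * pi**2 * h1
    var = pi**2 * (h1 + h1**2 * s0_sq)
    return np.exp(gamma * mu + 0.5 * gamma**2 * var) / gamma

gamma, r, m0, s0_sq, h1 = -5.0, 0.01 / 252, 2.0, 0.25, 3.56e-5

# Candidate closed form (our reconstruction): a Merton-type ratio with an
# extra uncertainty term in the denominator, matching the text's description
# that the weight shrinks as h1 or sigma_0 grows (gamma < 0).
pi_closed = (m0 + 0.5) / (1 - gamma * (1 + h1 * s0_sq))

# Brute-force verification on a fine grid.
grid = np.linspace(0.0, 2.0, 200_001)
pi_numeric = grid[np.argmax(expected_utility(grid, gamma, r, m0, s0_sq, h1))]
```

Setting s0_sq = 0 recovers the classical certainty case (m0 + 1/2)/(1 − γ), consistent with the comparison to [17] made above.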
In two periods, the problem becomes: To solve it completely, we have to resort to numerical methods, but we can prove the well-definedness of the problem. Proposition 2. Assuming that α > 0, γ < 0, and the remaining parameters are non-negative, the solution to Problem (15) is well-defined. In particular, for n = 1:
We again find that the portfolio weight decreases with increasing uncertainty, but it is additionally affected by the first-period observation contained in m(i 1 ) and σ 2 (i 1 ).As a result, the learning about the uncertain parameter λ impacts the optimal portfolio weight via the updated mean and variance.

Wealth Equivalent Loss
To assess the strategy under uncertainty in one and two periods, we derive its wealth-equivalent loss (WEL) in comparison to a suboptimal strategy. Following [21], we define the WEL from following a suboptimal strategy, represented by superscript s, in comparison to the optimal strategy, represented by superscript *, and starting with an initial log-wealth W_0 = log(v_0), as the solution L of: The wealth-equivalent loss in one period can be found by solving (16) for L and N = 1: From Equation (A5) in the proof of Proposition 1, we know: Using this, we get: Hence: Analogously, we can express the WEL occurring in two periods by: Calculations for (18) can be found in Appendix A.1. In Section 3.3, we will evaluate this expression numerically.
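By homogeneity of CRRA utility, the defining equation of the WEL reduces to a ratio of expected utilities. A minimal numerical sketch follows; it reuses a normal log-wealth approximation with the prior on λ integrated out, which is an assumption for illustration, and all parameter values are placeholders, not the paper's estimates.

```python
import numpy as np

def expected_utility(pi, gamma, r, m0, s0_sq, h1):
    # One-period CRRA expected utility under a normal log-wealth sketch
    # (an illustrative assumption, not the paper's exact expression).
    mu = r + pi * (m0 + 0.5) * h1 - 0.5 * pi**2 * h1
    var = pi**2 * (h1 + h1**2 * s0_sq)
    return np.exp(gamma * mu + 0.5 * gamma**2 * var) / gamma

def wealth_equivalent_loss(eu_opt, eu_sub, gamma):
    """CRRA homogeneity gives (1 - L)^gamma * EU_opt = EU_sub, hence
    L = 1 - (EU_sub / EU_opt)**(1 / gamma)."""
    return 1.0 - (eu_sub / eu_opt) ** (1.0 / gamma)

gamma, r, m0, s0_sq = -5.0, 0.01 / 252, 2.0, 1.0
h1 = 3.56e-5 * 252                                      # annual-scale variance (illustrative)
pi_opt = (m0 + 0.5) / (1 - gamma * (1 + h1 * s0_sq))    # uncertainty-adjusted weight (sketch)
pi_sub = (m0 + 0.5) / (1 - gamma)                       # weight that ignores uncertainty
wel = wealth_equivalent_loss(expected_utility(pi_opt, gamma, r, m0, s0_sq, h1),
                             expected_utility(pi_sub, gamma, r, m0, s0_sq, h1),
                             gamma)
```

Because pi_opt maximizes the expected utility under uncertainty, the ratio inside the loss is at most one after the 1/γ power, so the WEL is non-negative by construction.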

Numerical Analysis
This section presents the numerical analyses of our study. In Section 3.1, we describe the optimal allocation as a function of uncertainty-related quantities like sample size and standard error. The sensitivity of the optimal solution is then discussed in Section 3.2. In Section 3.3, we compare the performance of our strategy under uncertainty, in terms of wealth-equivalent losses, with the strategy under certainty.

Portfolio Optimization-Numerical Results
We will determine the optimal solution for two different sets of HN-GARCH parameters: the maximum likelihood estimates (MLE) by [26] (referred to as C-H-J-2006) and the ones by [27] (referred to as B-C-H-J-2018), which can be found in Table 1 (since both parameter sets show similar behavior, only C-H-J-2006 is discussed in the main section; the numerical results for the second parameter set, B-C-H-J-2018, can be found in Appendix A.2). Standard errors for the estimates are provided in brackets below the parameters, indicating their robustness. Not only is λ a critical parameter conceptually, it also exhibits a substantial standard error, necessitating an investigation of the impact of this uncertainty. From the MLEs, we have an estimate for λ, a priori our best candidate for λ. The degree of variation of this estimate is given by its standard error. It is, therefore, natural to choose these two values as the mean and standard deviation for the prior distribution of λ. For the presentation that follows, it is also convenient to think of uncertainty as the standard error described before. We will denote this parameter as σ_0.
The parameters in the table were obtained using daily data. In practice, portfolio weights are not always adjusted daily; e.g., some investors adjust quarterly or yearly. Thus, we follow [23], Appendix B, Equation (B3), to derive scaled parameters for different frequencies, and therefore construct HN-GARCH models for frequencies other than daily. This can be interpreted as investors who model at a non-daily frequency, and therefore rebalance their portfolios at the corresponding non-daily frequency.
This extra degree of freedom, the frequency, will allow us to study the impact of uncertainty on investors with different rebalancing frequencies, e.g., daily and annual. The parameters of the daily and Δ frequencies can be related as follows (the stationary variance for a Δ frequency relates to the stationary variance of the estimated frequency as h̄_Δ = Δ h̄): Applying this scaling to the MLEs of the two parameter sets at an annual frequency, i.e., Δ = 252, yields the parameters displayed in Table 2. Note that working with parameters for models with lower frequencies can be interpreted as if fewer data were available for parameter estimation. The impact of the sample size on the standard error can be accounted for explicitly, since the standard error of the MLE depends on the sample size n via σ_0 = σ/√n, where σ² is the true, unknown variance in the population. This relation shows that standard errors for lower frequencies, e.g., yearly, can be much larger than for daily frequencies.
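The sample-size adjustment of the standard error can be sketched directly from σ_0 = σ/√n; the numbers below are placeholders, not the estimated standard errors from Table 1.

```python
import numpy as np

def scaled_std_error(se_ref, n_ref, n_new):
    """Rescale a standard error to a different sample size via se = sigma / sqrt(n):
    recover the implied population sigma from the reference estimate, then rescale."""
    sigma = se_ref * np.sqrt(n_ref)
    return sigma / np.sqrt(n_new)

# A standard error estimated from 10,000 daily observations grows tenfold
# when only 100 observations are available (placeholder values).
se_small = scaled_std_error(0.1, 10_000, 100)  # ~ 1.0
```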
As an illustration of the interplay of sample size and standard error, Figure 1 shows the effect of a varying sample size on the standard error σ_0. In the following simulations, we generate 100,000 scenarios for the innovation z_1 with the following configuration for the remaining parameters: r = 0.01/252, h_0 := h̄ = 3.56 × 10^{−5}, γ = −5.
We are now ready to assess the impact of uncertainty in the first period of the portfolio analysis. We start with daily rebalancing. The results of the portfolio optimization are shown in Table 3, while Figure 2 provides the histogram of optimal allocations (due to the unobservability of the innovation z caused by the uncertainty induced by λ, we assume z_0 = 0 in this section; for more details, see Appendix A.2).
Table 3. Portfolio weights of optimal solution in two periods for daily parameters.
Recall that π_1 is not deterministic (see Proposition 2). It includes the first Bayesian update from the first observation. The histogram in Figure 2 shows the variation of π_1, which lies within a range of 2%. The mean of the updated portfolio weights is indistinguishable from the no-uncertainty portfolio weight. This should be seen in light of the fact that we are dealing with daily parameters and an investment horizon of N = 2, i.e., two days.
It seems intuitive that the range of portfolio weights would increase for larger time horizons, i.e., a lower frequency of rebalancing. The left histogram in Figure 3 confirms this intuition. The updated portfolio weights range from 35% up to 70%. The mean of the updated weights deviates slightly from the certain weight. The right histogram in Figure 3 shows the effect of increased uncertainty, i.e., a greater standard error. The increase in the standard error, interpretable as a decreased sample size, leads to updated portfolio weights in the range from 0% to 120%. Comparing this range of 120% to the former range of 2% underlines the effect of incorporating parameter uncertainty and Bayesian updating.
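The scenario exercise behind these histograms can be sketched as follows. The conjugate update and the uncertainty-adjusted weight are our reconstructions under the stated model assumptions, and all parameter values are illustrative placeholders, not the scaled estimates of Table 2.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, r = -5.0, 0.01 / 252
h1 = 3.56e-5 * 252        # annual-scale variance (illustrative)
m0, s0_sq = 2.0, 1.0      # prior mean and variance for lambda (placeholders)

# One observed log-return per scenario, generated at the prior mean of lambda.
z = rng.standard_normal(100_000)
y1 = r + m0 * h1 + np.sqrt(h1) * z

# Conjugate normal update of the prior on lambda after one observation.
prec = 1.0 / s0_sq + h1
m1 = (m0 / s0_sq + (y1 - r)) / prec
s1_sq = 1.0 / prec

# Updated weight per scenario, using the one-period uncertainty-adjusted
# ratio as a stand-in for the exact pi_1 of Proposition 2.
pi1 = (m1 + 0.5) / (1 - gamma * (1 + h1 * s1_sq))
spread = pi1.max() - pi1.min()
```

Increasing s0_sq widens the spread of pi1 across scenarios, which is the qualitative effect the histograms document: the first observation moves the allocation more when the initial uncertainty is larger.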

Sensitivity Analysis
This subsection deals with the sensitivity of the optimal solution stated in Proposition 2 to changes in the parameters. We first examine the sensitivity with respect to the risk aversion γ. Then, given the importance of σ_0, we perform a more detailed analysis of the impact of σ_0 (i.e., uncertainty) on the optimal allocation. In all cases, we hold all other parameters constant.
We vary the risk aversion γ in a range from −10 to −0.1 and the standard error σ_0 up to a value corresponding to a standard error adjustment for a sample size of 50 observations. We perform the analysis for both daily and annually scaled parameters. The results are shown in Figures 4-6. In Figures 4 and 5, the abscissa shows the different values of the parameter, with the corresponding portfolio weight on the ordinate. Figure 6 shows a histogram for π_1 as a function of the varying parameter.
In Figure 4, we see an increase in investment as the risk aversion decreases, i.e., as γ gets closer to zero. This is what one would expect. We further observe that the solution under uncertainty always lies below the solution under certainty. This is in line with the theoretical comparison of the two solutions. More interesting is the behavior induced by a variation in the standard error σ_0, displayed in Figure 5. It shows the solution without uncertainty in two periods as well as the solution in two periods including uncertainty, exhibiting the mean as a proxy for π_1.
In the left part of Figure 5, we observe an interesting phenomenon (the uneven pattern in the sensitivity of π_0 is due to the precision of the numerical integration when solving Equation (A6)). In the case of no uncertainty, the portfolio weight slightly decreases when getting closer to the investment horizon. If parameter uncertainty is included, we already observe a different behavior of the solution in two periods. Namely, depending on the degree of uncertainty, the average portfolio weight after the first observation may be greater than the starting weight of the portfolio, reflecting the knowledge gained about the unknown parameter. The greater the initial level of uncertainty, the larger the effect of an additional observation, resulting in a higher portfolio weight at time n = 1 compared to n = 0. The same effect can be seen in the right part of Figure 5, where the portfolio weight always increases after an information update.

Looking at the mean of π_1 gives us an incomplete picture of the first Bayesian update. Therefore, Figure 6 shows the sensitivity of the distribution of the updated portfolio weights. In this figure, we observe a similar qualitative behavior of the optimal weight. When varying the level of uncertainty σ_0, we observe (left figure) a significant increase in the range of π_1's distribution. This effect is consistent with the intuition that additional information has a greater impact on the optimal allocation when the initial level of uncertainty is high. The solution including uncertainty converges towards the solution without uncertainty as σ_0 moves towards zero. In the right figure, we can spot the same behavior as for the mean before, i.e., an increasing investment in the risky asset as risk aversion decreases. Interestingly, the range of the allocation seems to increase at the same time. Overall, the degree of uncertainty has a strong influence on the behavior of the solution. This will be further quantified, in dollar terms, in the next subsection.

Wealth Equivalent Loss
We saw the impact of uncertainty on the optimal allocation. However, does this really matter? To address this question, we determine the wealth-equivalent loss (WEL) occurring if one does not account for uncertainty. In Section 2, we derived the WEL theoretically. In this subsection, we report the WEL for daily as well as annual parameters. Furthermore, we investigate the impact of having small sample sizes for a first estimate of λ (the WEL in (18) has been calculated via numerical integration; simulation yields similar results).
Table 4 displays the wealth-equivalent losses corresponding to the level of uncertainty for an annual parameter frequency with the C-H-J-2006 parameters. The WEL observed in this configuration is relatively low. Yet, when working with lower frequencies, such as annual, one should adjust the level of uncertainty, as fewer data are available for the MLE. Figure 7 shows the WEL as a function of the sample size, starting with the initially available daily sample size down to a sample size of 50. In this extreme case, the WEL can be as high as 12%. Combining a smaller sample size with a lower risk aversion can lead to even higher losses, as can be seen in Figure 8.

Conclusions
This paper presents an approach for incorporating parameter uncertainty in a dynamic portfolio optimization problem by utilizing stochastic control model theory. Starting with a simple stochastic control model, we extend it to a Bayesian information model, incorporating the risk of parameter uncertainty in the optimal allocation. Our study focuses on portfolio optimization for a risk-averse investor maximizing a terminal CRRA utility function, where the log-returns are assumed to follow an HN-GARCH model.
The proposed Bayesian information model leads to the development of a new GARCH process, called the UI-GARCH, which accounts for the uncertainty of the risk premium parameter λ. Unlike its Affine GARCH predecessor, this new process is not affine. Using a two-period investment horizon, we derive the optimal allocation while incorporating a Bayesian update and prove the well-definedness of the initial portfolio weight. Finally, we performed numerical evaluations of the derived expressions and analyzed their sensitivity to parameter changes. We evaluated the optimal allocation for annual trading periods over an investment horizon of two periods. The results showed that the behavior of the portfolio weights can differ significantly from the solution without uncertainty, exposing a high sensitivity to the degree of parameter uncertainty (σ_0, which can be translated into the sample size available for estimation). This results in significant wealth-equivalent losses.
A multitude of research questions emerge from this study. A natural continuation of this work is to extend the numerical implementation to a larger number of investment periods. Additionally, it remains an open question whether an analytical solution can be derived when the HN-GARCH is replaced by non-Gaussian Affine GARCH or inverse Gaussian GARCH models.
Simplifying the sum with the sufficient statistic t_n, we set: so that we get: Following Lemma 2.1 in [28], the posterior distribution is then given by:
Using: and rearranging, we get: and: Calculations for Equation (7).
Proof. Let y_{n+1} := y_{n+1}(z) and h_{n+2} := h_{n+2}(z) for ease of notation. Then: Taking a closer look at the exponential, we see that: Inserting everything into the exponential in the above integral gives: We can, therefore, derive q′(y_{n+1}, h_{n+2}), which thus follows a normal distribution: Proof of Proposition 1.
Table A1. Portfolio weights of the optimal solution in two periods for daily parameters.

Figure 1 .
Figure 1. Dependence of the standard error on the sample size.

Figure 3 .
Figure 3. Histogram of 100,000 scenarios for π_1 with C-H-J-2006 annually scaled parameters (left) and with σ_0 adjusted to a sample size of n = 100 (right). The portfolio weight without uncertainty (red line) is π_1 = 0.5453. The mean (black line) is 0.5319 (left) and 0.5436 (right).

Figure 4 .
Figure 4. Sensitivity of π_0 and the mean of π_1 with respect to variation in the risk aversion γ for daily C-H-J-2006 parameters (left) and annual C-H-J-2006 parameters (right).

Figure 5 .
Figure 5. Sensitivity of π_0 and the mean of π_1 with respect to variation in σ_0 for daily C-H-J-2006 parameters (left) and annual C-H-J-2006 parameters (right).

Figure 7 .
Figure 7. Annualized WEL in two periods for daily C-H-J-2006 parameters (left) and annually scaled C-H-J-2006 parameters (right), depending on the sample size.

Figure 8 .
Figure 8. Annualized WEL in two periods for daily C-H-J-2006 parameters (left) and annual C-H-J-2006 parameters (right), depending on the sample size and risk aversion γ.
, is a solution to the optimization problem. Proof of Proposition 2.

Figure A3 .
Figure A3. Histogram for π_1 with B-C-H-J-2018 parameters over 100,000 scenarios. The portfolio weight without uncertainty is shown in red.

Figure A4 .
Figure A4. Histogram of 100,000 scenarios for π_1 with B-C-H-J-2018 annually scaled parameters (left) and with σ_0 adjusted to a sample size of n = 100 (right). The portfolio weight without uncertainty is shown in red.

Figure A5 .
Figure A5. Sensitivity of π_0 and the mean of π_1 with respect to variation in σ_0 for daily B-C-H-J-2018 parameters (left) and annual B-C-H-J-2018 parameters (right).

Figure A6 .
Figure A6. Annualized WEL in two periods for daily B-C-H-J-2018 parameters (left) and annual B-C-H-J-2018 parameters (right), depending on the sample size and risk aversion γ.

Table 4 .
Annualized wealth-equivalent losses for standard parameter configuration.