General Conditions of Weak Convergence of Discrete-Time Multiplicative Scheme to Asset Price with Memory

Abstract: We present general conditions for the weak convergence of a discrete-time additive scheme to a stochastic process with memory in the space D[0, T]. Then we investigate the convergence of the related multiplicative scheme to a process that can be interpreted as an asset price with memory. As an example, we study an additive scheme that converges to fractional Brownian motion and is based on the Cholesky decomposition of its covariance matrix. The second example is a scheme converging to the Riemann–Liouville fractional Brownian motion. The multiplicative counterparts of these two schemes are also considered. As an auxiliary result of independent interest, we obtain sufficient conditions for monotonicity along diagonals in the Cholesky decomposition of the covariance matrix of a stationary Gaussian process.

Author Contributions: Conceptualization, Y.M.; investigation, Y.M., K.R. and S.S.; writing–original draft preparation, Y.M., K.R. and S.S.; writing–review and editing, Y.M., K.R. and S.S. All authors have read and agreed to the published version of the manuscript.


Introduction
The question of approximating prices in financial markets with continuous time by prices in markets with discrete time goes back to the approximation of Black–Scholes prices by prices changing in discrete time. For an initial acquaintance with the subject, we recommend the book by Föllmer and Schied (2011), which starts with the central limit theorem for the approximation of the Black–Scholes model by the Cox–Ross–Rubinstein model. However, this area of research is immeasurably wider, since there are many more market models. They function in discrete time, but their analytical study is easier to carry out in continuous time. Therefore, we need various theorems on the convergence of random sequences to continuous-time random processes, and it is also desirable to establish the convergence of some related functionals. For example, functional limit theorems make it possible to pass to the limit in stochastic integrals and stochastic differential equations. Such equations are widely used for modeling in physics, biology, finance and other fields. Concerning finance, functional limit theorems allow us to investigate how the convergence of stock prices affects the convergence of option prices. The latter question is widely considered in many papers; here we mention only Hubalek and Schachermayer (1998). Concerning weak limit theorems for financial markets, we mention the book by Prigent (2003). Diffusion approximation of financial markets was described, in particular, in the papers Mishura (2015a, 2015b, 2015c); see references therein. All the above-mentioned works relate to the case when the limit stochastic process and the corresponding market model are Markov, that is, they have no memory.
However, the presence of memory in financial markets has been recorded so convincingly that models capable of capturing this memory have been studied for many years, and the question of approximating non-Markov asset prices and other components of financial market processes by discrete-time random sequences has been studied as well. As regards purely theoretical results on functional limit theorems in which the limit process is not Markov, we cite the papers Davydov (1970), where the limit process is stationary, and Gorodetskii (1977), where the limit process is semi-stable and Gaussian. In turn, with regard to memory modeling, when considering processes with short- or long-range dependence, it is easiest to use fractional Brownian motion, a Gaussian self-similar process with stationary, correlated increments. There are two approaches to the problem: to model prices themselves using processes with memory, in particular, to consider models involving fBm, or to concentrate the model's memory in stochastic volatility. The first approach has the peculiarity that the limit market with memory allows arbitrage, while the prelimit markets can be arbitrage-free. The existence of arbitrage was first established in the paper Rogers (1997) and discussed in detail in the book Mishura (2008). However, such an approach has the right to exist, if only because, regardless of possible financial applications, it is reasonable to prove functional limit theorems in which the limit process is a fractional Brownian motion or some related process. For the first time, a discrete approximation of fractional Brownian motion by a binomial market model was considered in the paper Sottinen (2001), and a fairly thorough analysis of the number of arbitrage points in such a market was made in the paper Cordero et al. (2016).
However, even the fractional Black–Scholes model can be approximated by various discrete-time sequences, and the purpose of this article is to formulate, and illustrate by examples, a functional limit theorem and its multiplicative version, in which both the prelimit sequence of processes and the limit process are quite general, yet simple to work with. Moreover, the fractional binomial market considered by Sottinen (2001) is a special case of our model.
Thus, the main objectives of this article and its novelty are as follows. To start with, we consider an additive stochastic sequence that is based on a sequence of iid random variables and has coefficients that allow this stochastic sequence to depend on the past. For such a sequence, we formulate the conditions of weak convergence to some limit process in terms of the coefficients and the characteristic function of a basic random variable. These conditions are stated in Theorem 1. This theorem is of course a special case of general functional limit theorems, but it has the advantages that it is formulated in terms of the coefficients, that the coefficients immediately show the dependence on the past, and that the limit process is not required to have any special properties such as a particular distribution, self-similarity, etc. Then, in order to apply our general theorem to more practical situations, in Theorem 2 we adapt the general conditions to the case where the limit process is Gaussian. Next we turn to the multiplicative scheme in order to get an almost surely positive limit process that can model an asset price on the financial market. So, we assume that all multipliers in the prelimit multiplicative scheme are positive, which imposes additional restrictions on the coefficients, and in addition we consider only Bernoulli basic random variables. The next goal is to apply these general results to the case where the limit processes in the additive scheme are fractional Brownian motion (fBm) and Riemann–Liouville fBm. In the case of the limit fBm we consider prelimit processes that are constructed via the Cholesky decomposition of the covariance function of fBm.
The result concerning fBm is new in the sense that the multiplicative scheme with the exponential of fBm in the limit has not been considered before, and the result concerning Riemann–Liouville fBm is new in the sense that neither Riemann–Liouville fBm itself nor its exponential has previously been considered as the limit process. In both cases we were fortunate that the same coefficients are suitable also for the multiplicative scheme. Our proofs require a deep study of the properties of the Cholesky decomposition of the covariance matrix of fBm. It turns out that all elements of the upper-triangular matrix in this decomposition are positive and, moreover, the rows of this matrix are increasing. We also conjecture that the columns of this matrix are decreasing. This conjecture is confirmed by numerical results; however, its proof remains an interesting open problem. For the moment, we can only prove a uniform upper bound for the elements in each column, which is sufficient for our purposes.
As for stochastic volatility with memory, it is not considered in the present paper, but we can refer the reader to Gatheral et al. (2018) and Bezborodov et al. (2019), among many others.
The paper is organized as follows. In Section 2 we establish sufficient conditions for the weak convergence of continuous-time random walks of rather general form to some limit in the space D[0, T]. The case of a Gaussian limit is studied in more detail. A multiplicative version of this result is also obtained. Sections 3 and 5 are devoted to two particular examples of the general scheme investigated in Section 2. In Section 3 we consider a discrete process that converges to fractional Brownian motion. This example is based on the Cholesky decomposition of the covariance matrix of fractional Brownian motion. In Section 4 we investigate possible perturbations of the coefficients in the scheme studied in Section 3. Moreover, Section 4 contains a numerical example, which illustrates the results of Section 3; there we also discuss some of our conjectures and open problems. Section 5 is devoted to another example, where the limit process is the so-called Riemann–Liouville fractional Brownian motion. In Appendix A we establish auxiliary results concerning the Cholesky decomposition of the covariance matrix of a stationary Gaussian process. In particular, we explore the connection between this decomposition and time series prediction problems.

General Conditions of Weak Convergence
Let T > 0 and let (Ω, F, F = (F_t)_{t∈[0,T]}, P) be a stochastic basis, i.e., a complete probability space (Ω, F, P) with a filtration F satisfying the standard assumptions.

Convergence of Sums
For any N ≥ 1 consider the uniform partition 0 = t_0 < t_1 < · · · < t_N = T of [0, T] with t_k = kT/N. Let {ξ_j, j ≥ 1} be a sequence of iid random variables, and assume we are given, for each N ≥ 1, a triangular array of real numbers c^N_{j,k}, 1 ≤ j ≤ k ≤ N. Define a stochastic process X_N by
X_N(t) = ∑_{j=1}^{[Nt/T]} c^N_{j,[Nt/T]} ξ_j, t ∈ [0, T].
Because of the dependence of the coefficients c^N_{j,k} on k, the increments of X_N may depend on the past, and the dependence may be strong. Let us first establish general conditions of weak convergence of the sequence X_N, N ≥ 1, in terms of the coefficients c^N_{j,k} and the characteristic function ϕ(λ) = E e^{iλξ_1} of the underlying noise. We use the Skorokhod topology in the space of càdlàg functions D([0, T]). A detailed discussion of the choice of topology can be found in (Billingsley 1999, Chapter 13).
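To make the construction concrete, here is a minimal numerical sketch of such a continuous-time random walk, assuming the scheme has the form X_N(t) = ∑_{j=1}^{[Nt/T]} c^N_{j,[Nt/T]} ξ_j; the constant Donsker-type coefficients below are illustrative only and are not the coefficients studied later in the paper.

```python
import numpy as np

def scheme_path(c, xi):
    """Evaluate X_N on the grid t_i = iT/N, where
    X_N(t_i) = sum_{j=1}^{i} c[j-1, i-1] * xi[j-1]."""
    N = len(xi)
    x = np.zeros(N + 1)
    for k in range(1, N + 1):
        # the k-th increment reads the whole k-th column, hence memory
        x[k] = c[:k, k - 1] @ xi[:k]
    return x

# Illustration: constant coefficients c^N_{j,k} = N^{-1/2} reduce the scheme
# to a scaled random walk (the classical Donsker situation, no memory).
rng = np.random.default_rng(0)
N = 400
c = np.triu(np.full((N, N), N ** -0.5))
xi = rng.choice([-1.0, 1.0], size=N)       # Rademacher noise
x = scheme_path(c, xi)
```

With k-dependent columns c^N_{j,k} the same function produces paths whose increments depend on all past ξ_j, which is exactly the feature the theorem below controls.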
Theorem 1. Assume that the following assumptions hold: (A1) There exists a stochastic process {X(t), t ∈ [0, T]} such that X_N → X, N → ∞, in the sense of finite-dimensional distributions, that is, for any l ≥ 1, (A2) There exist positive constants K and α such that for all integers N ≥ 1 and all 0 ≤ t_1 < t_2 ≤ T, (2) Then the weak convergence of measures holds: X_N ⇒ X, N → ∞, in D([0, T]).

Proof. First, note that
Therefore, the convergence of finite-dimensional distributions is equivalent to Condition (1). In order to prove the weak convergence, it suffices to establish the tightness of the sequence X_N, N ≥ 1. To start with, let us mention that for 0 ≤ t_1 < t_2 ≤ T, Further, let us prove that there exists C > 0 such that (5) holds for all N ≥ 1 and all 0 ≤ t_1 < t_2 < t_3 ≤ T. We consider two cases. Case 1: t_3 − t_1 < T/N. In this case we have that [Nt_3/T] − [Nt_1/T] < N(t_3 − t_1)/T + 1 < 2, which means that [Nt_3/T] − [Nt_1/T] ≤ 1. This implies that at least one of the following equalities is true: [Nt_2/T] = [Nt_1/T] or [Nt_2/T] = [Nt_3/T]. Case 2: t_3 − t_1 ≥ T/N. In this case, Condition (2) implies that The same bound holds for A_N(t_2, t_3). Therefore, we have that (5) holds with C = K²(2/T)^{2+2α}.
Thus, (5) is proved in both cases. Now, using Inequalities (4) and (5), we may write Consequently, the sequence of processes {X_N, N ≥ 1} is tight (Billingsley 1999, Theorem 13.5). Hence, the statement follows.
If X is a Gaussian process, we can formulate the sufficient conditions for the convergence of finite-dimensional distributions in terms of the covariance function.
Theorem 2. Assume that there exists a stochastic process X = {X(t), t ∈ [0, 1]} such that the following conditions hold: (C1) X is Gaussian and centered.
(C2) For all t, s ∈ [0, 1], (C3) max_{1≤j≤k≤N} |c^N_{j,k}| → 0 as N → ∞. Then the finite-dimensional distributions of X_N converge to those of X as N → ∞.
Proof. Let us consider the characteristic function of the l-dimensional distribution: According to (3), For every N ≥ 1, the random variables η_{N,p} := α_{N,p} ξ_p, p ≥ 1, are independent. We will apply Lindeberg's CLT (see Billingsley 1995, Theorem 27.2) to the scheme of series η_{N,1}, . . . , η_{N,[Nt_l]}. Let us calculate the variance: Hence, by assumption (C2), we have Now we are ready to verify Lindeberg's condition. For any ε > 0 we have We can estimate α_{N,p} for [Nt_{n−1}] + 1 ≤ p ≤ [Nt_n] as follows Note that due to (C3) and (6), a_N → +∞ as N → ∞. Hence, the Lindeberg condition follows by the dominated convergence theorem. According to Lindeberg's CLT, ∑_{m=1}^l λ_m X_N(t_m) converges in distribution to ∑_{m=1}^l λ_m X(t_m).

Multiplicative Scheme
Consider now a multiplicative counterpart of the process considered above. Namely, let b^N_{j,k}, 1 ≤ j ≤ k ≤ N, be a triangular array of real numbers. Define To ensure that the values of S_N are positive, we assume that {ξ_n, n ≥ 1} are iid Rademacher variables, i.e., P(ξ_n = 1) = P(ξ_n = −1) = 1/2, and that the b^N_{j,k} satisfy the following assumption: Our aim is to investigate the weak convergence of S_N to some positive process S. It is more convenient to work with logarithms, i.e., to consider We will need a uniform version of the above boundedness assumption: We will also need the following assumption. Theorem 3. Assume (B1) and (B2). Let also assumptions (A1) and (A2) hold for Proof. Let sup_{N≥1, k=1,...,N} ∑_{j=1}^k b^N_{j,k} = a ∈ (0, 1). By the Taylor formula, for x ∈ (−a, a), . By Slutsky's theorem (see, e.g., Grimmett and Stirzaker 2001, p. 318), the claim follows.
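As a numerical illustration, here is a sketch of such a multiplicative scheme, assuming the product form S^N_k = ∏_{m=1}^{k} (1 + Z^N_m) with Z^N_m = ∑_{j=1}^{m} b^N_{j,m} ξ_j; the toy coefficients b^N_{j,k} = 1/(2N) are our own choice, taken only so that ∑_j |b^N_{j,k}| ≤ 1/2 < 1, as the boundedness assumption requires.

```python
import numpy as np

def multiplicative_path(b, xi):
    """S^N_k = prod_{m=1}^{k} (1 + Z^N_m),
    Z^N_m = sum_{j=1}^{m} b[j-1, m-1] * xi[j-1]."""
    N = len(xi)
    Z = np.array([b[:m, m - 1] @ xi[:m] for m in range(1, N + 1)])
    # |xi_j| <= 1 together with sum_j |b_{j,m}| < 1 forces Z_m > -1,
    # so every factor, and hence every partial product, is positive.
    assert np.all(Z > -1.0)
    return np.cumprod(1.0 + Z)

rng = np.random.default_rng(1)
N = 300
b = np.triu(np.full((N, N), 1.0 / (2 * N)))   # sum_j b_{j,k} = k/(2N) <= 1/2
xi = rng.choice([-1.0, 1.0], size=N)          # Rademacher multipliers
S = multiplicative_path(b, xi)
```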

Remark 1. The statement of Theorem 3 remains valid if we replace the Rademacher random variables by any other iid sequence {ξ_n, n ≥ 1} with E[ξ_n] = 0, E[ξ_n²] = 1, and |ξ_n| ≤ 1 for all n ≥ 1. The latter condition, along with assumption (B1), ensures that Z^N_k > −1 for all 1 ≤ k ≤ N, and consequently the values of S_N are positive.

Fractional Brownian Motion as a Limit Process and Prelimit Coefficients Taken from the Cholesky Decomposition of Its Covariance Function
Let H ∈ (1/2, 1), T = 1. Let B^H = {B^H_t, t ∈ [0, 1]} be a fractional Brownian motion, i.e., a centered Gaussian process with covariance function
R(t, s) = E[B^H_t B^H_s] = ½ (t^{2H} + s^{2H} − |t − s|^{2H}).
For N ≥ 1 we define the triangular array d_{j,k}, 1 ≤ j ≤ k ≤ N, by the following relation: It is known that such a sequence d_{j,k}, 1 ≤ j ≤ k ≤ N, exists and is unique, since (8) is the Cholesky decomposition of a positive definite matrix (the covariance matrix of fBm). Define In order to prove Theorem 4, we need to verify the conditions of Theorem 1 in the particular case X = B^H. Since fractional Brownian motion is a Gaussian process, we will use Theorem 2 in order to prove the convergence of finite-dimensional distributions. Let us start with assumption (A2).
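Numerically, the coefficients d_{j,k} are simply the upper-triangular Cholesky factor of the fBm covariance matrix. A minimal sketch (the integer grid 1, …, n is taken for illustration), which also checks the positivity and row-monotonicity properties established below:

```python
import numpy as np

def fbm_cov(n, H):
    """Covariance R(j, k) = (j^{2H} + k^{2H} - |j-k|^{2H}) / 2, j, k = 1..n."""
    t = np.arange(1, n + 1, dtype=float)
    return 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                  - np.abs(t[:, None] - t[None, :])**(2*H))

H, n = 0.75, 10
R = fbm_cov(n, H)
# numpy returns the lower-triangular factor L with R = L @ L.T;
# transpose to get the upper-triangular array d_{j,k} = D[j-1, k-1].
D = np.linalg.cholesky(R).T

# Numerical check of the structural properties: all entries on and above
# the diagonal are positive, and every row is increasing.
upper_positive = all(D[j, k] > 0 for j in range(n) for k in range(j, n))
rows_increasing = all(np.all(np.diff(D[j, j:]) >= 0) for j in range(n))
```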
It is not hard to see that the sequences d_{j,k}, 1 ≤ j ≤ k ≤ N, and ℓ_{j,k}, 1 ≤ j ≤ k ≤ N, are related as follows Indeed, by (8) we have (here d_{p,p−1} := 0), and comparing this expression with (14), we see that (20) holds. From (20) and Lemma 2, we immediately obtain the following result.
Proof. By (8), for 1 < j < r we have Therefore, taking into account Inequality (21), we get Note that the maximal value of the function f(x) = x^{2H−1} + (r − x)^{2H−1}, x ∈ [0, r], is attained at the point x = r/2 and equals f(r/2) = 2 · (r/2)^{2H−1} = 2^{2−2H} r^{2H−1}. Therefore Using (21), we get Similarly, in the case j = 1 we have Finally, if j = r, then using (21), (22) and (24), we obtain

Remark 3. Lemma 4 claims that max_{1≤j≤r} d_{j,r} = O(r^{2H−1}) as r → ∞. Note that the asymptotic rate O(r^{2H−1}) is exact, since Moreover, we conjecture that max_{1≤j≤r} d_{j,r} = d_{1,r}; however, the proof of this equality remains an open problem, see Section 4 for further details.
Lemma 5. Let X N be the process defined in Theorem 4. Then the finite-dimensional distributions of X N converge to those of B H .
Proof. Let us check the conditions of Theorem 2. Evidently, the assumption (C1) holds, because fractional Brownian motion is a centered Gaussian process.

Multiplicative Scheme
Now let us verify the conditions of Theorem 3 for the sequence where the coefficients c N j,k are defined by (9). Hence, the conditions of Theorem 1 for c N j,k are satisfied. In the following Lemmas 6 and 7 we check the assumptions (B2) and (B1), respectively.
Proof. It follows from (27), (8) and (7) that for all 1 ≤ k ≤ N,

Proof. Using the Cauchy–Schwarz inequality and Equality (29), we obtain that for N ≥ 2 and 1 ≤ k ≤ N, Thus, we have proved that all assumptions of Theorem 3 are satisfied. As a consequence, we obtain the following result.
Theorem 5. Assume that b^N_{j,k}, 1 ≤ j ≤ k ≤ N, is a triangular array of real numbers defined by (27) and ξ_j, j ≥ 1, are iid Rademacher random variables. Then the sequence of stochastic processes

Remark 4. Theorems 4 and 5 suggest one possible way to approximate fractional Brownian motion by a discrete model. Another scheme was proposed by Sottinen (2001). It is worth noting that his approximation is also a particular case of the general scheme described in Section 2, but with the following coefficients: Note that assumptions (A1) and (A2) for such c^N_{j,k} are verified in the proof of (Sottinen 2001, Theorem 1); in particular, the tightness condition (A2) is established in (Sottinen 2001, Equation (8)).
The coefficients b^N_{j,k} of the corresponding multiplicative scheme are equal to Then, by the Cauchy–Schwarz inequality, we have The function z(t, s) is the kernel of the following Molchan–Golosov representation of the fractional Brownian motion as an integral with respect to the Wiener process W = {W_t, t ≥ 0}: Therefore the covariance function of B^H equals R(t, s) = ∫_0^{t∧s} z(t, u) z(s, u) du, and we obtain from (31) that Conditions (B2) and (B1) are derived from the bound (32) similarly to the derivation of Lemmas 6 and 7 from Equality (29).

Possible Perturbations of the Coefficients in Cholesky Decomposition
We now discuss the question of how the coefficients (8) and (9) in the prelimit sequence can be perturbed so that the convergence to fractional Brownian motion is preserved. In particular, we estimate the rate at which the perturbations must tend to zero in order to preserve the convergence.
Theorem 6. 1. Let the coefficients c^N_{j,k}, 1 ≤ j ≤ k ≤ N, and the random variables {ξ_i, i ≥ 1} be the same as in Theorem 4. Consider the perturbed coefficients where the sequence {ε^N_j, 1 ≤ j ≤ N} satisfies the following conditions: (i) There exist positive constants C and α such that Then the processes converge to B^H, as N → ∞, weakly in D([0, 1]).
2. Assume, additionally, that the following assumption holds: (iii) There exists N_0 > 0 such that for all N ≥ N_0, Then the sequence of stochastic processes Proof. 1. Let us prove that conditions (A2), (C2) and (C3) remain valid if we replace the coefficients c^N_{j,k} by c̃^N_{j,k}. Applying (i) and Lemma 1, we get ∑_j ε^N_j c^N_{j,[Nt]}. (33) The first term on the right-hand side of (33) converges to R(t, s) by (25) as N → ∞. The second term is bounded by the sum ∑_{j=1}^N (ε^N_j)², which tends to zero according to assumption (ii). Using the Cauchy–Schwarz inequality and (7), we can bound the third term as follows: 2. Now let us verify conditions (B1) and (B2) for the convergence of the corresponding multiplicative scheme. Note that Therefore, Consequently, using (ii) and Lemma 6 we get i.e., (B2) holds. Applying successively (34), (30), and (iii), we get for all N ≥ N_0 and all 1 ≤ k ≤ N. Hence, assumption (B1) is also satisfied, and the convergence of the multiplicative scheme follows from Theorem 3.
Example 1. The following sequences of ε N j satisfy the conditions (i)-(iii) of Theorem 6:

Numerical Example and Discussion of Open Problems
First, let us illustrate the results of Section 3 with a numerical example. Take H = 0.75, N = 10. In this case the covariance matrix of fractional Brownian motion R(j, k) = cov(B^H_j, B^H_k), j, k = 1, . . . , 10, equals Then the upper-triangular matrix of its Cholesky decomposition is given by We see that this example confirms the results of Lemmas 3 and 4. In particular, all elements of this matrix are non-negative and its rows are increasing. Moreover, all diagonal elements are less than or equal to 1, and are bigger than √(2 − 2^{2H−1}) = 0.76537. Furthermore, the bound (23) also holds. Indeed, for H = 0.75, C_H = H · 2^{2−2H} / √(2 − 2^{2H−1}) = 1.38582. For example, for r = 10 we have that max_{1≤j≤10} d_{j,10} = 2.81139, which is less than C_H · 10^{2H−1} = 4.38235.
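The figures of this example can be reproduced with a few lines of numpy (a sketch; values rounded to five decimals as in the text):

```python
import numpy as np

H, r = 0.75, 10
t = np.arange(1, r + 1, dtype=float)
# Covariance matrix R(j, k) = (j^{2H} + k^{2H} - |j-k|^{2H}) / 2.
R = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
           - np.abs(t[:, None] - t[None, :])**(2*H))
D = np.linalg.cholesky(R).T                     # upper-triangular factor d_{j,k}

C_H = H * 2**(2 - 2*H) / np.sqrt(2 - 2**(2*H - 1))
col_max = D[:, r - 1].max()                     # max_{1 <= j <= r} d_{j,r}
bound = C_H * r**(2*H - 1)                      # right-hand side of (23)
print(round(C_H, 5), round(col_max, 5), round(bound, 5))
```

Running this reproduces C_H = 1.38582, the column maximum 2.81139 (attained at j = 1, in line with the conjecture below), and the bound 4.38235.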
We suppose that the columns of the above matrix are decreasing. More precisely, our conjecture can be formulated as follows.
However, the proof of this fact is an open problem.
Further, the corresponding covariance matrix of the increment process B^H_k − B^H_{k−1}, k = 1, . . . , 10, is equal to and the upper-triangular matrix of the corresponding Cholesky decomposition has the following form: We observe that the values along all diagonals of this matrix decrease. These numerical results allow us to formulate the following conjecture.
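The diagonal monotonicity observed here can be checked numerically as follows (a sketch with the values H = 0.75, n = 10 of this example):

```python
import numpy as np

H, n = 0.75, 10
m = np.arange(n)
# Autocovariance of the increments (fractional Gaussian noise):
# gamma(m) = ((m+1)^{2H} - 2 m^{2H} + |m-1|^{2H}) / 2.
gamma = 0.5 * ((m + 1.0)**(2*H) - 2.0*m**(2*H) + np.abs(m - 1.0)**(2*H))
Gamma = gamma[np.abs(m[:, None] - m[None, :])]   # Toeplitz covariance matrix
U = np.linalg.cholesky(Gamma).T                  # upper-triangular factor ell_{j,k}

# Conjectured monotonicity: entries decrease along every diagonal,
# i.e. ell_{j, j+d} >= ell_{j+1, j+1+d} for every offset d.
diagonals_decreasing = all(np.all(np.diff(np.diag(U, d)) <= 1e-9)
                           for d in range(n))
```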

Remark 5.
1. For the moment, we can prove only the non-strict inequality in the case j = k, i.e., the monotonicity along the main diagonal; see Remark A2 below. 2. Conjecture A2 implies Conjecture A1. This becomes clear if we rewrite the relation (20) between d_{j,k} and ℓ_{j,k} as follows: d_{j,k} = ℓ_{j,j} + ℓ_{j,j+1} + ℓ_{j,j+2} + · · · + ℓ_{j,k}, 1 ≤ j ≤ k.
3. In Appendix A.3 below we formulate Conjecture A3, which is a sufficient condition for Conjecture A2.

Riemann-Liouville Fractional Brownian Motion as a Limit Process
Let H ∈ (1/2, 1), T = 1. Let us define
Z^H_t = ∫_0^t (t − u)^{H−1/2} dW_u, t ∈ [0, 1], (35)
where W = {W_u, u ∈ [0, 1]} is a Wiener process. The process Z^H is known as Riemann–Liouville fractional Brownian motion or type II fractional Brownian motion; see, e.g., Davidson and Hashimzade (2009); Marinucci and Robinson (1999). Define Theorem 7. Let {ξ_i, i ≥ 1} be a sequence of iid random variables with E[ξ_i] = 0 and E[ξ_i²] = 1. Assume that c^N_{j,k}, 1 ≤ j ≤ k ≤ N, are defined by (37) and First, let us prove the convergence of finite-dimensional distributions.
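For intuition, the defining integral can be approximated by a left-point Riemann sum on the grid u_j = j/N; the sketch below uses the unnormalized kernel (t − u)^{H−1/2} and is an illustration only, not the exact coefficients (37).

```python
import numpy as np

def rl_fbm_path(N, H, rng):
    """Approximate Z^H_{k/N} = int_0^{k/N} (k/N - u)^{H-1/2} dW_u by
    sum_{j=0}^{k-1} ((k - j)/N)^{H-1/2} * (W_{(j+1)/N} - W_{j/N})."""
    dW = rng.standard_normal(N) / np.sqrt(N)   # Wiener increments on the grid
    Z = np.zeros(N + 1)
    for k in range(1, N + 1):
        j = np.arange(k)
        Z[k] = np.sum(((k - j) / N) ** (H - 0.5) * dW[:k])
    return Z

# Sanity check of the discretization: the variance of the approximation at
# t = 1 equals (1/N) * sum_{m=1}^{N} (m/N)^{2H-1}, which converges to
# Var Z^H_1 = int_0^1 u^{2H-1} du = 1/(2H) as N grows.
H, N = 0.75, 200
discrete_var = np.sum((np.arange(1, N + 1) / N) ** (2*H - 1)) / N
Z = rl_fbm_path(50, H, np.random.default_rng(0))
```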
Lemma 8. Under assumptions of Theorem 7, the finite-dimensional distributions of X N converge to those of Z H .
Proof. In order to prove the lemma, we will verify the conditions of Theorem 2. Let us start with proving that the covariance function of X_N converges to the covariance function of Z^H as N → ∞. Indeed, for s ≤ t we have E[X_N(s) X_N(t)] = ∑_{j=1}^{[Ns]} c^N_{j,[Ns]} c^N_{j,[Nt]}. Therefore ∑_{j=1}^{[Ns]} c^N_{j,[Ns]} c^N_{j,[Nt]} → E[Z^H_s Z^H_t], N → ∞; hence, condition (C2) is satisfied. Note that by (37) and (38), condition (C3) also holds. Thus the assumptions of Theorem 2 are satisfied. This concludes the proof. Now let us verify the tightness condition (A2).
Lemma 9. Let the numbers c^N_{j,k}, 1 ≤ j ≤ k ≤ N, be defined by (37). Then there exists a constant C > 0 such that for all 1 ≤ j < k ≤ N, Proof. We estimate each of the sums on the left-hand side of (40). Using (37), we get Consequently, we have Now we estimate the second sum in (40). By the change of variables m = k − n + 1, we get We bound the inner sum using (41) and estimate m^{2H−1} ≤ (k − j)^{2H−1}. We obtain Combining (42) and (43), we conclude the proof. Now let us verify the conditions of Theorem 3. We start with condition (B1).
Lemma 10. Let b^N_{j,k}, 1 ≤ j ≤ k ≤ N, be a triangular array of non-negative numbers defined by (36). Then for all N ≥ 1 and for all k = 1, . . . , N, Proof. It follows from (36) and (41) that If k ≥ 2, then k^{H−1/2} and (44) is valid. In the remaining case k = 1 we have and (44) also holds. Now let us verify condition (B2).
Thus, we have proved that all assumptions of Theorem 3 are satisfied. As a consequence, we obtain the following result.
Theorem 8. Assume that b^N_{j,k}, 1 ≤ j ≤ k ≤ N, is a triangular array of real numbers defined by (36), ξ_j, j ≥ 1, are iid Rademacher random variables, and the process Z^H is given by (35). Then the sequence of stochastic processes

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Monotonicity Along the Diagonal of a Triangular Matrix in the Cholesky Decomposition of a Positive Definite Toeplitz Matrix
In this appendix we establish the connection between the prediction of a stationary Gaussian process and the Cholesky decomposition of its covariance matrix. It turns out that the positivity of the coefficients of the predictor implies monotonicity along the diagonals of the triangular matrix in the Cholesky decomposition.
Let X = {X_n, n ∈ Z} be a centered stationary Gaussian discrete-time process with known autocovariance function. Assume that all finite-dimensional distributions of X are non-degenerate multivariate normal distributions. In other words, we assume that for all n ∈ N, the symmetric covariance matrix of n subsequent values of X is non-degenerate.

Appendix A.1. Prediction of a Stationary Stochastic Process
Let us construct the predictor of X_n from the observations X_1, . . . , X_m. Since the joint distribution of X_1, . . . , X_m, X_n is Gaussian, the conditional distribution of X_n given (X_1, . . . , X_m) is also Gaussian (Anderson 2003, Section 2.5). The parameters of this distribution are given by The following theorem claims that if the coefficients of the one-step-ahead predictor are positive, then the coefficients of the multi-step-ahead predictor are also positive.
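For a Gaussian vector, the predictor is the usual conditional-mean formula: with Γ_m = (γ(i − j))_{i,j=1}^m and g = (cov(X_n, X_k))_{k=1}^m, one has E[X_n | X_1, …, X_m] = ∑_k α_{n,m,k} X_k, where α solves Γ_m α = g. A sketch computing these coefficients for fractional Gaussian noise and checking their positivity numerically (the range of m below is our own choice for illustration):

```python
import numpy as np

def fgn_autocov(max_lag, H):
    """gamma(m) = ((m+1)^{2H} - 2 m^{2H} + |m-1|^{2H}) / 2, m = 0..max_lag."""
    m = np.arange(max_lag + 1)
    return 0.5 * ((m + 1.0)**(2*H) - 2.0*m**(2*H) + np.abs(m - 1.0)**(2*H))

def one_step_predictor(m, H):
    """Coefficients alpha_{m+1,m,k}, k = 1..m, solving Gamma_m alpha = g,
    where g[k-1] = cov(X_{m+1}, X_k) = gamma(m + 1 - k)."""
    gamma = fgn_autocov(m + 1, H)
    idx = np.arange(m)
    Gamma_m = gamma[np.abs(idx[:, None] - idx[None, :])]
    g = gamma[m - idx]              # lags m, m-1, ..., 1
    return np.linalg.solve(Gamma_m, g)

H = 0.75
# Numerical evidence for the positivity assumed in the theorem below.
all_positive = all(np.all(one_step_predictor(m, H) > 0) for m in range(1, 21))
```

For m = 1 the predictor reduces to α = γ(1)/γ(0), which for H = 0.75 equals 2^{2H−1} − 1 ≈ 0.41421.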

Appendix A.2. Cholesky Decomposition of the Covariance Matrix
Fix N. Recall that, by assumption, the covariance matrix Γ_N of the random vector (X_1, . . . , X_N) is non-degenerate. In this case the matrix Γ_N can be uniquely represented as a product where L_N is a lower-triangular matrix with positive diagonal elements. This representation is called the Cholesky decomposition.
Remark A1. The numbers ℓ_{k,n} do not depend on N: ℓ_{k,n}(N_1) = ℓ_{k,n}(N_2) for all k ≤ n ≤ min(N_1, N_2), where ℓ_{k,n}(N) denotes an element of the matrix L_N in Decomposition (A9) for a certain value of N.
Since L_N^{−1} 0_{N×1} = 0_{N×1} is a zero vector and L_N^{−1} Γ_N (L_N^{−1})^⊤ = I_N is an identity matrix, we see that the random vector ζ = L_N^{−1} (X_1, . . . , X_N)^⊤ has the N-dimensional standard normal distribution. Hence, the values of the stochastic process {X_n, n = 1, . . . , N} can be represented as follows: X_n = ∑_{k=1}^n ℓ_{k,n} ζ_k, where ζ_k, k = 1, 2, . . . , N, are independent standard normal random variables. Taking into account Remark A1, Equalities (A10) and (A11) can be generalized for all k: where for 1 ≤ k ≤ N the matrix L_k is a sub-matrix of L_N. For all k the matrix L_k is non-degenerate, and L_k L_k^⊤ = Γ_k is the covariance matrix of the vector (X_1, . . . , X_k)^⊤. This implies that the σ-algebra generated by the random variables X_1, . . . , X_k coincides with the σ-algebra generated by the random variables ζ_1, . . . , ζ_k: F_k = σ(X_1, . . . , X_k) = σ(ζ_1, . . . , ζ_k).
Theorem A2. Assume that the condition of Theorem A1 is satisfied: for all k and m such that 1 ≤ k ≤ m, the inequality α_{m+1,m,k} > 0 holds. Then for all 1 ≤ k ≤ n, ℓ_{k,n} > 0, and Var[X_n | F_m] = σ²_{n,m}.
The standard Gaussian random variables ζ_k are measurable with respect to the σ-algebra F_m for k ≤ m and independent of the σ-algebra F_m for k > m. Therefore Hence, It follows from (A15) and (A17) that ∑_{k=1}^m ℓ_{k,n} ζ_k = ∑_{k=1}^m α_{n,m,k} X_k.
Inequality (A14) is proved in the case n = k. It follows from the orthonormality of the random variables ζ_k and from the representation (A12) that Since ℓ_{m,m} > 0 (as a diagonal element of the triangular matrix in the Cholesky decomposition) and α_{n,m,m} > 0 (by Theorem A1), we see that ℓ_{m,n} > 0. Hence, Inequality (A13) is proved.
Thus Inequality (A14) is proved in the case n > k. This completes the proof.
can be proved without the assumption α m+1,m,k > 0 for all 1 ≤ k ≤ m.

Appendix A.3. Application to Fractional Brownian Motion
In order to prove Conjecture A2, it suffices to apply Theorem A2 to the stationary Gaussian process G_k = B^H_{k+1} − B^H_k with covariance matrix Γ_N given by (11) (its Cholesky decomposition is considered in Lemma 2). To this end, we need to prove the positivity of the coefficients α_{m+1,m,k}, 1 ≤ k ≤ m, of the corresponding one-step-ahead predictor. According to (A2), these coefficients are given by