Article

General Conditions of Weak Convergence of Discrete-Time Multiplicative Scheme to Asset Price with Memory

Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Volodymyrska 64, Kyiv 01601, Ukraine
*
Author to whom correspondence should be addressed.
Risks 2020, 8(1), 11; https://doi.org/10.3390/risks8010011
Submission received: 25 December 2019 / Revised: 20 January 2020 / Accepted: 27 January 2020 / Published: 30 January 2020
(This article belongs to the Special Issue Stochastic Modelling in Financial Mathematics)

Abstract

We present general conditions for the weak convergence of a discrete-time additive scheme to a stochastic process with memory in the space $D[0,T]$. Then we investigate the convergence of the related multiplicative scheme to a process that can be interpreted as an asset price with memory. As an example, we study an additive scheme that converges to fractional Brownian motion, which is based on the Cholesky decomposition of its covariance matrix. The second example is a scheme converging to the Riemann–Liouville fractional Brownian motion. The multiplicative counterparts for these two schemes are also considered. As an auxiliary result of independent interest, we obtain sufficient conditions for monotonicity along diagonals in the Cholesky decomposition of the covariance matrix of a stationary Gaussian process.

1. Introduction

The question of approximating prices in financial markets with continuous time by prices in markets with discrete time goes back to the approximation of Black–Scholes prices by prices changing in discrete time. For an initial acquaintance with the subject, we recommend the book Föllmer and Schied (2011), which starts with the central limit theorem for the approximation of the Black–Scholes model by the Cox–Ross–Rubinstein model. However, this area of research is immeasurably wider, since there are many more market models. Of course, real markets function in discrete time, but their analytical study is easier to carry out in continuous time. Therefore, we need various theorems on the convergence of random sequences to random processes with continuous time, and it is also desirable to establish the convergence of related functionals. For example, functional limit theorems make it possible to pass to the limit in stochastic integrals and stochastic differential equations. Such equations are widely used for modeling in physics, biology, finance and other fields. Concerning finance, functional limit theorems allow us to investigate how the convergence of stock prices affects the convergence of option prices. The latter question is widely considered in many papers; here we mention only Hubalek and Schachermayer (1998). Concerning weak limit theorems for financial markets, we mention the book Prigent (2003). Diffusion approximation of financial markets was described, in particular, in the papers Mishura (2015a, 2015b, 2015c); see the references therein. All the above-mentioned works relate to the case when the limiting stochastic process and the corresponding market model are Markov, that is, they have no memory. However, the presence of memory in financial markets has already been so convincingly recorded that for many years models have been studied that can capture this memory, and the question of approximating non-Markov asset prices and other components of financial market processes by discrete-time random sequences has also been studied. As regards purely theoretical results on functional limit theorems in which the limit process is not Markov, we cite the papers Davydov (1970), where the limit process is stationary, and Gorodetskii (1977), where the limit process is semi-stable and Gaussian. In turn, with regard to memory modeling, that is, processes with short- or long-range dependence, it is easiest to use fractional Brownian motion, a Gaussian self-similar process with stationary correlated increments. There are two approaches to the problem: to model prices themselves using processes with memory, in particular, to consider models involving fBm, or to concentrate the model's memory in stochastic volatility. The first approach has the peculiarity that the limiting market with memory allows arbitrage, while prelimit markets can be arbitrage-free. The existence of arbitrage was first established in the paper Rogers (1997) and discussed in detail in the book Mishura (2008). However, such an approach has the right to exist, if only because, regardless of possible financial applications, it is reasonable to prove functional limit theorems in which the limit process is a fractional Brownian motion or some related process.
For the first time, a discrete approximation of fractional Brownian motion by a binomial market model was considered in the paper Sottinen (2001), and a fairly thorough analysis of the number of arbitrage points in such a market was made in the paper Cordero et al. (2016). However, even the fractional Black–Scholes model can be approximated by various discrete-time sequences, and the purpose of this article is to formulate, and illustrate by examples, a functional limit theorem and its multiplicative version in which both the prelimit sequence of processes and the limiting process are quite general, yet simple to handle. Moreover, the fractional binomial market considered by Sottinen (2001) is a special case of our model.
Thus, the main objectives of this article and its novelty are as follows. To start with, we consider an additive stochastic sequence that is based on a sequence of iid random variables and has coefficients that allow this stochastic sequence to depend on the past. For such a sequence, we formulate the conditions of weak convergence to some limit process in terms of the coefficients and the characteristic function of a basic random variable. These conditions are stated in Theorem 1. This theorem is of course a special case of general functional limit theorems, but it has the advantages that it is formulated in terms of the coefficients, that the coefficients immediately show the dependence on the past, and that the limit process is not required to have any special properties with respect to distribution, self-similarity, etc. Then, in order to apply our general theorem to more practical situations, in Theorem 2 we adapt the general conditions to the case where the limit process is Gaussian. Next we pass to the multiplicative scheme in order to get an almost surely positive limit process that can model an asset price on the financial market. So, we assume that all multipliers in the prelimit multiplicative scheme are positive; this imposes additional restrictions on the coefficients, and in addition, we consider only Bernoulli basic random variables. The next goal is to apply these general results to the cases where the limit processes in the additive scheme are fractional Brownian motion (fBm) and Riemann–Liouville fBm. In the case of the limit fBm we consider prelimit processes constructed via the Cholesky decomposition of the covariance matrix of fBm. The result concerning fBm is new in the sense that nobody before has considered a multiplicative scheme with the exponential of fBm in the limit, and the result concerning Riemann–Liouville fBm is new in the sense that nobody before has considered Riemann–Liouville fBm itself, or its exponential, as the limiting process. In both cases we were lucky in the sense that such coefficients are suitable also for the multiplicative scheme. Our proofs require a deep study of the properties of the Cholesky decomposition of the covariance matrix of fBm. It turns out that all elements of the upper-triangular matrix in this decomposition are positive and, moreover, the rows of this matrix are increasing. We also conjecture that the columns of this matrix are decreasing. This conjecture is confirmed by numerical results; however, its proof remains an interesting open problem. For the moment, we can only prove a uniform upper bound for the elements in each column, which is sufficient for our purposes.
As for stochastic volatility with memory, it is not considered in the present paper, but we can refer the reader to Gatheral et al. (2018) and Bezborodov et al. (2019), among many others.
The paper is organized as follows. In Section 2 we establish sufficient conditions for the weak convergence of continuous-time random walks of rather general form to some limit in the space $D[0,T]$. The case of a Gaussian limit is studied in more detail. A multiplicative version of this result is also obtained. Section 3 and Section 5 are devoted to two particular examples of the general scheme investigated in Section 2. In Section 3 we consider a discrete process that converges to fractional Brownian motion. This example is based on the Cholesky decomposition of the covariance matrix of fractional Brownian motion. In Section 4 we investigate possible perturbations of the coefficients in the scheme studied in Section 3. Moreover, Section 4 contains a numerical example which illustrates the results of Section 3; we also discuss there some of our conjectures and open problems. Section 5 is devoted to another example, where the limit process is the so-called Riemann–Liouville fractional Brownian motion. In Appendix A we establish auxiliary results concerning the Cholesky decomposition of the covariance matrix of a stationary Gaussian process. In particular, we explore the connection between this decomposition and time series prediction problems.

2. General Conditions of Weak Convergence

Let $T > 0$ and let $(\Omega, \mathcal{F}, \mathbb{F} = (\mathcal{F}_t)_{t \in [0,T]}, \mathsf{P})$ be a stochastic basis, i.e., a complete probability space $(\Omega, \mathcal{F}, \mathsf{P})$ with a filtration $\mathbb{F}$ satisfying the standard assumptions.

2.1. Convergence of Sums

For any $N \ge 1$ consider the uniform partition $t_k^N = kT/N$, $0 \le k \le N$, of $[0,T]$. Let $\xi_i$, $i \ge 1$, be a sequence of iid random variables with $\mathsf{E}\xi_i = 0$ and $\mathsf{E}\xi_i^2 = 1$.
Assume that for each $N \ge 1$ we are given a triangular array of real numbers $c_{j,k}^N$, $1 \le j \le k \le N$. Define a stochastic process
$$X_N(t_k^N) = \sum_{j=1}^{k} c_{j,k}^N \xi_j, \quad k = 0, 1, \dots, N;$$
let $X_N(t) = X_N(t_{k-1}^N)$ for $t \in [t_{k-1}^N, t_k^N)$, $k = 1, \dots, N$. Because of the dependence of the coefficients $c_{j,k}^N$ on $k$, the increments of $X_N$ may depend on the past, and the dependence may be strong.
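For illustration, here is a minimal simulation sketch of this additive scheme (ours, not part of the original paper); `coef(j, k, N)` is a hypothetical user-supplied function returning $c_{j,k}^N$:

```python
import numpy as np

def additive_scheme(coef, N, T=1.0, rng=None):
    """Simulate X_N on the grid t_k^N = k*T/N from a triangular array c_{j,k}^N.

    coef(j, k, N) -- hypothetical callable returning c_{j,k}^N, 1 <= j <= k <= N.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rademacher noise: iid with mean 0 and variance 1
    xi = rng.choice([-1.0, 1.0], size=N)
    X = np.zeros(N + 1)                      # X_N(t_0^N) = 0 (empty sum)
    for k in range(1, N + 1):
        X[k] = sum(coef(j, k, N) * xi[j - 1] for j in range(1, k + 1))
    return np.linspace(0.0, T, N + 1), X
```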
Let us first establish general conditions for the weak convergence of the sequence $X_N$, $N \ge 1$, in terms of the coefficients $c_{j,k}^N$ and the characteristic function $\varphi(\lambda) = \mathsf{E} e^{i\lambda\xi_1}$ of the underlying noise. We use the Skorokhod topology in the space $D([0,T])$ of càdlàg functions. A detailed discussion of the choice of topology on $D([0,T])$ can be found in (Billingsley 1999, Chapter 13).
Theorem 1.
Assume that the following assumptions hold:
(A1)
There exists a stochastic process $X(t)$, $t \in [0,T]$, such that $X_N \to X$, $N \to \infty$, in the sense of finite-dimensional distributions, that is, for any $l \ge 1$, $0 = t_0 < t_1 < t_2 < \dots < t_l \le T$, and $\lambda_1, \dots, \lambda_l \in \mathbb{R}$,
$$\prod_{n=1}^{l} \prod_{p=[Nt_{n-1}/T]+1}^{[Nt_n/T]} \varphi\!\left( \sum_{m=n}^{l} \lambda_m c_{p,[Nt_m/T]}^N \right) \to \mathsf{E} \exp\!\left( i \sum_{n=1}^{l} \lambda_n X(t_n) \right), \quad N \to \infty. \quad (1)$$
(A2)
There exist positive constants $K$ and $\alpha$ such that for all integers $N \ge 1$ and $0 \le j < k \le N$,
$$\sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 + \sum_{n=j+1}^{k} \left( c_{n,k}^N \right)^2 \le K \left( \frac{k-j}{N} \right)^{1+\alpha}. \quad (2)$$
Then the weak convergence of measures holds: $X_N \Rightarrow X$, $N \to \infty$, in $D([0,T])$.
Proof. 
First, note that
$$\begin{aligned}
\mathsf{E} \exp\!\left( i \sum_{m=1}^{l} \lambda_m X_N(t_m) \right)
&= \mathsf{E} \exp\!\left( i \sum_{m=1}^{l} \lambda_m \sum_{p=1}^{[Nt_m/T]} \xi_p c_{p,[Nt_m/T]}^N \right)
= \mathsf{E} \exp\!\left( i \sum_{m=1}^{l} \lambda_m \sum_{n=1}^{m} \sum_{p=[Nt_{n-1}/T]+1}^{[Nt_n/T]} \xi_p c_{p,[Nt_m/T]}^N \right) \\
&= \mathsf{E} \exp\!\left( i \sum_{n=1}^{l} \sum_{p=[Nt_{n-1}/T]+1}^{[Nt_n/T]} \xi_p \sum_{m=n}^{l} \lambda_m c_{p,[Nt_m/T]}^N \right)
= \prod_{n=1}^{l} \prod_{p=[Nt_{n-1}/T]+1}^{[Nt_n/T]} \varphi\!\left( \sum_{m=n}^{l} \lambda_m c_{p,[Nt_m/T]}^N \right). \quad (3)
\end{aligned}$$
Therefore, the convergence of finite-dimensional distributions is equivalent to Condition (1).
In order to prove the weak convergence, it suffices to establish the tightness of the sequence $X_N$, $N \ge 1$. To start with, let us mention that for $0 \le t_1 < t_2 \le T$,
$$\mathsf{E}\left( X_N(t_2) - X_N(t_1) \right)^2 = \mathsf{E}\left( \sum_{n=1}^{[Nt_1/T]} \left( c_{n,[Nt_2/T]}^N - c_{n,[Nt_1/T]}^N \right) \xi_n \right)^2 + \mathsf{E}\left( \sum_{n=[Nt_1/T]+1}^{[Nt_2/T]} c_{n,[Nt_2/T]}^N \xi_n \right)^2 = \sum_{n=1}^{[Nt_1/T]} \left( c_{n,[Nt_2/T]}^N - c_{n,[Nt_1/T]}^N \right)^2 + \sum_{n=[Nt_1/T]+1}^{[Nt_2/T]} \left( c_{n,[Nt_2/T]}^N \right)^2 =: A_N(t_1, t_2). \quad (4)$$
Further, let us prove that there exists $C > 0$ such that
$$A_N(t_1, t_2)\, A_N(t_2, t_3) \le C (t_3 - t_1)^{2+2\alpha} \quad (5)$$
for all $N \ge 1$ and all $0 \le t_1 < t_2 < t_3 \le T$. We consider two cases.
Case 1: $t_3 - t_1 < T/N$. In this case we have $[Nt_3/T] - [Nt_1/T] < N(t_3 - t_1)/T + 1 < 2$, which means that $[Nt_3/T] - [Nt_1/T] \le 1$. This implies that at least one of the following equalities is true: $[Nt_1/T] = [Nt_2/T]$ or $[Nt_2/T] = [Nt_3/T]$. If $[Nt_1/T] = [Nt_2/T]$, then $A_N(t_1, t_2) = 0$ and Inequality (5) holds for any $C > 0$. Similarly, it holds in the case $[Nt_2/T] = [Nt_3/T]$, because $A_N(t_2, t_3) = 0$.
Case 2: $t_3 - t_1 \ge T/N$. In this case
$$[Nt_2/T] - [Nt_1/T] \le \frac{N(t_2 - t_1)}{T} + 1 \le \frac{2N(t_3 - t_1)}{T}.$$
Then Condition (2) implies that
$$A_N(t_1, t_2) \le K \left( \frac{[Nt_2/T] - [Nt_1/T]}{N} \right)^{1+\alpha} \le K \left( \frac{2}{T} \right)^{1+\alpha} (t_3 - t_1)^{1+\alpha}.$$
The same bound holds for $A_N(t_2, t_3)$. Therefore, we have
$$A_N(t_1, t_2)\, A_N(t_2, t_3) \le K^2 \left( \frac{2}{T} \right)^{2+2\alpha} (t_3 - t_1)^{2+2\alpha},$$
that is, (5) holds with $C = K^2 (2/T)^{2+2\alpha}$.
Thus, (5) is proved in both cases. Now, using Inequalities (4) and (5), we may write
$$\mathsf{E}\Big[ |X_N(t_2) - X_N(t_1)|\, |X_N(t_3) - X_N(t_2)| \Big] \le \left( \mathsf{E}|X_N(t_2) - X_N(t_1)|^2\, \mathsf{E}|X_N(t_3) - X_N(t_2)|^2 \right)^{1/2} = \left( A_N(t_1, t_2)\, A_N(t_2, t_3) \right)^{1/2} \le C^{1/2} (t_3 - t_1)^{1+\alpha}.$$
Consequently, the sequence of processes is tight (Billingsley 1999, Theorem 13.5). Hence, the statement follows. □
If $X$ is a Gaussian process, we can formulate sufficient conditions for the convergence of finite-dimensional distributions in terms of the covariance function.
Theorem 2.
Assume that there exists a stochastic process $X = \{X(t), t \in [0,1]\}$ such that the following conditions hold:
(C1)
$X$ is Gaussian and centered.
(C2)
For all $t, s \in [0,1]$,
$$\sum_{j=1}^{[Nt] \wedge [Ns]} c_{j,[Nt]}^N c_{j,[Ns]}^N \to \operatorname{cov}\left( X(t), X(s) \right), \quad \text{as } N \to \infty.$$
(C3)
$$\max_{1 \le j \le k \le N} |c_{j,k}^N| \to 0, \quad \text{as } N \to \infty.$$
Then the finite-dimensional distributions of $X_N$ converge to those of $X$ as $N \to \infty$.
Proof. 
Let us consider the characteristic function of an $l$-dimensional distribution:
$$\mathsf{E} \exp\!\left( i \sum_{m=1}^{l} \lambda_m X_N(t_m) \right), \quad 0 = t_0 < t_1 < t_2 < \dots < t_l \le 1, \quad \lambda_1, \dots, \lambda_l \in \mathbb{R}.$$
According to (3),
$$Z_N := \sum_{m=1}^{l} \lambda_m X_N(t_m) = \sum_{p=1}^{[Nt_l]} \alpha_{N,p} \xi_p,$$
where for $1 \le n \le l$, $[Nt_{n-1}] + 1 \le p \le [Nt_n]$, $N \ge 1$, we define
$$\alpha_{N,p} := \sum_{m=n}^{l} \lambda_m c_{p,[Nt_m]}^N.$$
For every $N \ge 1$, the random variables $\eta_{N,p} := \alpha_{N,p} \xi_p$, $p \ge 1$, are independent. We will apply Lindeberg's CLT (see Billingsley 1995, Theorem 27.2) to the scheme of series $\eta_{N,1}, \dots, \eta_{N,[Nt_l]}$. Let us calculate the variance:
$$\begin{aligned}
\sigma_N^2 := \operatorname{Var} \sum_{p=1}^{[Nt_l]} \eta_{N,p} = \sum_{p=1}^{[Nt_l]} \alpha_{N,p}^2
&= \sum_{n=1}^{l} \sum_{p=[Nt_{n-1}]+1}^{[Nt_n]} \left( \sum_{m=n}^{l} \lambda_m c_{p,[Nt_m]}^N \right)^2
= \sum_{n=1}^{l} \sum_{p=[Nt_{n-1}]+1}^{[Nt_n]} \sum_{m=n}^{l} \sum_{q=n}^{l} \lambda_m \lambda_q c_{p,[Nt_m]}^N c_{p,[Nt_q]}^N \\
&= \sum_{m=1}^{l} \sum_{q=1}^{l} \lambda_m \lambda_q \sum_{n=1}^{m \wedge q} \sum_{p=[Nt_{n-1}]+1}^{[Nt_n]} c_{p,[Nt_m]}^N c_{p,[Nt_q]}^N
= \sum_{m=1}^{l} \sum_{q=1}^{l} \lambda_m \lambda_q \sum_{p=1}^{[Nt_m] \wedge [Nt_q]} c_{p,[Nt_m]}^N c_{p,[Nt_q]}^N.
\end{aligned}$$
Hence, by assumption (C2), we have
$$\sigma_N^2 \to \sum_{m=1}^{l} \sum_{q=1}^{l} \lambda_m \lambda_q \operatorname{cov}\left( X(t_m), X(t_q) \right), \quad N \to \infty. \quad (6)$$
Now we are ready to verify Lindeberg's condition. We have, for any $\varepsilon > 0$,
$$L_N(\varepsilon) := \frac{1}{\sigma_N^2} \sum_{p=1}^{[Nt_l]} \mathsf{E}\left[ \eta_{N,p}^2\, \mathbb{1}_{|\eta_{N,p}| \ge \varepsilon \sigma_N} \right] = \frac{1}{\sigma_N^2} \sum_{p=1}^{[Nt_l]} \alpha_{N,p}^2\, \mathsf{E}\left[ \xi_p^2\, \mathbb{1}_{|\xi_p| \ge \varepsilon \sigma_N / |\alpha_{N,p}|} \right].$$
We can estimate $|\alpha_{N,p}|$ for $[Nt_{n-1}] + 1 \le p \le [Nt_n]$ as follows:
$$|\alpha_{N,p}| = \left| \sum_{m=n}^{l} \lambda_m c_{p,[Nt_m]}^N \right| \le \max_{1 \le j \le k \le N} |c_{j,k}^N| \sum_{m=1}^{l} |\lambda_m|.$$
Therefore,
$$\frac{\sigma_N}{|\alpha_{N,p}|} \ge \frac{\sigma_N}{\max_{1 \le j \le k \le N} |c_{j,k}^N| \sum_{m=1}^{l} |\lambda_m|} =: a_N.$$
Note that due to (C3) and (6), $a_N \to +\infty$, $N \to \infty$. Hence,
$$L_N(\varepsilon) \le \frac{1}{\sigma_N^2} \sum_{p=1}^{[Nt_l]} \alpha_{N,p}^2\, \mathsf{E}\left[ \xi_p^2\, \mathbb{1}_{|\xi_p| \ge \varepsilon a_N} \right] = \frac{1}{\sigma_N^2} \sum_{p=1}^{[Nt_l]} \alpha_{N,p}^2\, \mathsf{E}\left[ \xi_1^2\, \mathbb{1}_{|\xi_1| \ge \varepsilon a_N} \right],$$
because $\xi_p$, $p \ge 1$, are iid. Since $\sigma_N^2 = \sum_{p=1}^{[Nt_l]} \alpha_{N,p}^2$, we see that
$$L_N(\varepsilon) \le \mathsf{E}\left[ \xi_1^2\, \mathbb{1}_{|\xi_1| \ge \varepsilon a_N} \right] \to 0, \quad N \to \infty,$$
by the dominated convergence theorem.
According to Lindeberg's CLT, $Z_N \Rightarrow \mathcal{N}(0, \sigma^2)$, where
$$\sigma^2 = \lim_{N \to \infty} \sigma_N^2 = \sum_{m=1}^{l} \sum_{q=1}^{l} \lambda_m \lambda_q \operatorname{cov}\left( X(t_m), X(t_q) \right),$$
see (6). Thus,
$$\mathsf{E} \exp\!\left( i \sum_{m=1}^{l} \lambda_m X_N(t_m) \right) = \mathsf{E} \exp\left( i Z_N \right) \to \exp\!\left( -\frac{\sigma^2}{2} \right) = \exp\!\left( -\frac{1}{2} \sum_{m=1}^{l} \sum_{q=1}^{l} \lambda_m \lambda_q \operatorname{cov}\left( X(t_m), X(t_q) \right) \right) = \mathsf{E} \exp\!\left( i \sum_{m=1}^{l} \lambda_m X(t_m) \right).$$
 □

2.2. Multiplicative Scheme

Consider now a multiplicative counterpart of the process considered above. Namely, let $b_{j,k}^N$, $1 \le j \le k \le N$, be a triangular array of real numbers. Define
$$S_N(t) = \prod_{k=1}^{[Nt/T]} \left( 1 + Z_k^N \right), \quad t \in [0,T],$$
where
$$Z_k^N = \sum_{j=1}^{k} b_{j,k}^N \xi_j, \quad k = 1, \dots, N.$$
To ensure that the values of $S_N$ are positive, we assume that $\xi_n$, $n \ge 1$, are iid Rademacher variables, i.e., $\mathsf{P}(\xi_n = 1) = \mathsf{P}(\xi_n = -1) = 1/2$, and that the $b_{j,k}^N$ satisfy the following assumption:
$$\sum_{j=1}^{k} |b_{j,k}^N| < 1, \quad k = 1, \dots, N.$$
Our aim is to investigate the weak convergence of $S_N$ to some positive process $S$. It is more convenient to work with logarithms, i.e., to consider
$$\log S_N(t) = \sum_{k=1}^{[Nt/T]} \log\left( 1 + Z_k^N \right), \quad t \in [0,T].$$
We will need a uniform version of the above boundedness assumption:
(B1)
$$\sup_{N \ge 1,\; k = 1, \dots, N}\; \sum_{j=1}^{k} |b_{j,k}^N| < 1.$$
We will also need the following assumption:
(B2)
$$\sum_{k=1}^{N} \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 \to 0, \quad N \to \infty.$$
Theorem 3.
Assume (B1) and (B2). Let also assumptions (A1) and (A2) hold for
$$c_{j,k}^N = \sum_{i=j}^{k} b_{j,i}^N, \quad 1 \le j \le k \le N,$$
with some process $X$. Then $S_N(t)$, $t \in [0,T]$, converges in $D([0,T])$ to $S(t) = e^{X(t)}$, $t \in [0,T]$.
Proof. 
Let $\sup_{N \ge 1,\, k = 1, \dots, N} \sum_{j=1}^{k} |b_{j,k}^N| = a \in (0,1)$.
By the Taylor formula, for $x \in (-a, a)$,
$$\log(1 + x) = x - \theta(x) x^2,$$
where $0 < \theta(x) \le C(a)$. Therefore,
$$\log S_N(t) = \sum_{k=1}^{[Nt/T]} \log\left( 1 + Z_k^N \right) = \sum_{k=1}^{[Nt/T]} Z_k^N - \sum_{k=1}^{[Nt/T]} \theta(Z_k^N) \left( Z_k^N \right)^2 =: X_1^N(t) + X_2^N(t).$$
Write
$$X_1^N(t) = \sum_{k=1}^{[Nt/T]} \sum_{j=1}^{k} b_{j,k}^N \xi_j = \sum_{j=1}^{[Nt/T]} \xi_j \sum_{k=j}^{[Nt/T]} b_{j,k}^N = \sum_{j=1}^{[Nt/T]} c_{j,[Nt/T]}^N \xi_j.$$
By Theorem 1, $X_1^N \Rightarrow X$ in $D([0,T])$. Further,
$$\mathsf{E} \sup_{t \in [0,T]} |X_2^N(t)| \le C(a) \sum_{k=1}^{N} \mathsf{E}\left( Z_k^N \right)^2 = C(a) \sum_{k=1}^{N} \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 \to 0, \quad N \to \infty.$$
Therefore, $\sup_{t \in [0,T]} |X_2^N(t)| \xrightarrow{\mathsf{P}} 0$, $N \to \infty$, so $X_2^N \Rightarrow 0$, $N \to \infty$, in $D([0,T])$. By Slutsky's theorem (see, e.g., Grimmett and Stirzaker 2001, p. 318), we get $\log S_N \Rightarrow X$, $N \to \infty$, in $D([0,T])$, whence the claim follows. □
Remark 1.
The statement of Theorem 3 remains valid if we replace the Rademacher random variables by any other sequence $\xi_n$, $n \ge 1$, of iid random variables such that $\mathsf{E}[\xi_n] = 0$, $\mathsf{E}[\xi_n^2] = 1$, and $|\xi_n| \le 1$ for all $n \ge 1$. The latter condition, together with assumption (B1), ensures that $Z_k^N > -1$ for all $1 \le k \le N$, and consequently the values of $S_N$ are positive.
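A corresponding sketch of the multiplicative scheme (ours, under the same hypothetical-coefficient convention; the caller is responsible for assumption (B1), which keeps every factor positive):

```python
import numpy as np

def multiplicative_scheme(b, N, T=1.0, rng=None):
    """Simulate S_N(t) = prod_{k<=Nt/T} (1 + Z_k^N), Z_k^N = sum_j b_{j,k}^N xi_j.

    b(j, k, N) -- hypothetical callable returning b_{j,k}^N; it must satisfy
    sum_{j<=k} |b_{j,k}^N| < 1 so that 1 + Z_k^N > 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.choice([-1.0, 1.0], size=N)     # iid Rademacher variables
    S = np.ones(N + 1)
    for k in range(1, N + 1):
        Z = sum(b(j, k, N) * xi[j - 1] for j in range(1, k + 1))
        S[k] = S[k - 1] * (1.0 + Z)
    return np.linspace(0.0, T, N + 1), S
```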

3. Fractional Brownian Motion as a Limit Process and Prelimit Coefficients Taken from Cholesky Decomposition of its Covariance Function

Let $H \in (\frac{1}{2}, 1)$, $T = 1$. Let $B^H = \{B_t^H, t \in [0,1]\}$ be a fractional Brownian motion, i.e., a centered Gaussian process with covariance function
$$R(s,t) = \frac{1}{2}\left( s^{2H} + t^{2H} - |t - s|^{2H} \right). \quad (7)$$
For $N \ge 1$ we define the triangular array $d_{j,k}$, $1 \le j \le k \le N$, by the following relation:
$$\sum_{j=1}^{p \wedge r} d_{j,p} d_{j,r} = R(p, r). \quad (8)$$
It is known that such a sequence $d_{j,k}$, $1 \le j \le k \le N$, exists and is unique, since (8) is the Cholesky decomposition of a positive definite matrix (the covariance matrix of fBm).
Define
$$c_{j,k}^N = \frac{d_{j,k}}{N^H}. \quad (9)$$
Theorem 4.
Let $\xi_i$, $i \ge 1$, be a sequence of iid random variables with $\mathsf{E}\xi_i = 0$ and $\mathsf{E}\xi_i^2 = 1$. Assume that $c_{j,k}^N$, $1 \le j \le k \le N$, are defined by (9) and
$$X_N(t) = \sum_{j=1}^{[Nt]} c_{j,[Nt]}^N \xi_j, \quad t \in [0,1].$$
Then $X_N \Rightarrow B^H$, $N \to \infty$, weakly in $D([0,1])$.
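In code, the coefficients (8)–(9) are simply the (transposed) Cholesky factor of the fBm covariance matrix, scaled by $N^{-H}$; a small sketch (ours; the helper name `fbm_cholesky_coeffs` is our own):

```python
import numpy as np

def fbm_cholesky_coeffs(N, H):
    """d_{j,k} from (8): the covariance matrix R(p, r) of B^H at integer
    times 1..N, factored as D^T D with D upper-triangular."""
    idx = np.arange(1, N + 1)
    p, r = np.meshgrid(idx, idx, indexing="ij")
    R = 0.5 * (p**(2*H) + r**(2*H) - np.abs(p - r)**(2*H))
    L = np.linalg.cholesky(R)      # lower-triangular, R = L @ L.T
    return L.T                     # entry [j-1, k-1] equals d_{j,k}

N, H = 10, 0.75
d = fbm_cholesky_coeffs(N, H)
c = d / N**H                       # c_{j,k}^N from (9)
```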
In order to prove Theorem 4, we need to verify the conditions of Theorem 1 in the particular case $X = B^H$. Since fractional Brownian motion is a Gaussian process, we will use Theorem 2 in order to prove the convergence of finite-dimensional distributions. Let us start with assumption (A2).
Lemma 1.
Let $c_{j,k}^N$, $1 \le j \le k \le N$, be defined by (9). Then for all $1 \le j < k \le N$,
$$\sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 + \sum_{n=j+1}^{k} \left( c_{n,k}^N \right)^2 = \frac{(k-j)^{2H}}{N^{2H}}.$$
Proof. 
By applying successively (9), (8) and (7), we get
$$\begin{aligned}
\sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 + \sum_{n=j+1}^{k} \left( c_{n,k}^N \right)^2
&= \frac{1}{N^{2H}} \left( \sum_{n=1}^{j} \left( d_{n,k} - d_{n,j} \right)^2 + \sum_{n=j+1}^{k} d_{n,k}^2 \right)
= \frac{1}{N^{2H}} \left( \sum_{n=1}^{j} d_{n,j}^2 + \sum_{n=1}^{k} d_{n,k}^2 - 2 \sum_{n=1}^{j} d_{n,j} d_{n,k} \right) \\
&= \frac{1}{N^{2H}} \left( R(j,j) + R(k,k) - 2 R(j,k) \right) = \frac{(k-j)^{2H}}{N^{2H}}.
\end{aligned}$$
 □
Thus assumption (A2) holds. In order to check condition (A1), we will use Theorem 2. First, we establish some further properties of the coefficients $d_{j,k}$. Let us consider the discrete-time stochastic process $G_k := B_k^H - B_{k-1}^H$, $k \ge 1$. It is a stationary process with covariance
$$\gamma_k = \mathsf{E} G_1 G_{k+1} = \mathsf{E}\left[ B_1^H \left( B_{k+1}^H - B_k^H \right) \right] = \frac{1}{2} \left( |k+1|^{2H} + |k-1|^{2H} - 2|k|^{2H} \right). \quad (10)$$
In other words, $(G_1, G_2, \dots, G_N)$ is a centered Gaussian vector with covariance matrix
$$\Gamma_N = \begin{pmatrix}
\gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{N-2} & \gamma_{N-1} \\
\gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{N-3} & \gamma_{N-2} \\
\gamma_2 & \gamma_1 & \gamma_0 & \cdots & \gamma_{N-4} & \gamma_{N-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\gamma_{N-2} & \gamma_{N-3} & \gamma_{N-4} & \cdots & \gamma_0 & \gamma_1 \\
\gamma_{N-1} & \gamma_{N-2} & \gamma_{N-3} & \cdots & \gamma_1 & \gamma_0
\end{pmatrix}. \quad (11)$$
Lemma 2.
The Cholesky decomposition of $\Gamma_N$, given by
$$\Gamma_N = \begin{pmatrix}
\ell_{1,1} & 0 & 0 & \cdots & 0 \\
\ell_{1,2} & \ell_{2,2} & 0 & \cdots & 0 \\
\ell_{1,3} & \ell_{2,3} & \ell_{3,3} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\ell_{1,N} & \ell_{2,N} & \ell_{3,N} & \cdots & \ell_{N,N}
\end{pmatrix}
\begin{pmatrix}
\ell_{1,1} & \ell_{1,2} & \ell_{1,3} & \cdots & \ell_{1,N} \\
0 & \ell_{2,2} & \ell_{2,3} & \cdots & \ell_{2,N} \\
0 & 0 & \ell_{3,3} & \cdots & \ell_{3,N} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \ell_{N,N}
\end{pmatrix},$$
has the following properties:
$$\ell_{i,j} \ge 0, \quad 1 \le i \le j \le N, \quad (12)$$
$$\sqrt{2 - 2^{2H-1}} \le \ell_{i,i} \le 1 = \ell_{1,1}, \quad 1 \le i \le N. \quad (13)$$
Remark 2.
Similarly to (8), we can write the Cholesky decomposition of $\Gamma_N$ coordinatewise as follows:
$$\sum_{j=1}^{p \wedge r} \ell_{j,p} \ell_{j,r} = \gamma_{|p-r|}, \quad p, r \in \{1, \dots, N\}. \quad (14)$$
Proof. 
In order to prove (12), we will apply (Berenhaut and Bandyopadhyay 2005, Theorem 1). To this end we need to verify the following conditions on the sequence $\gamma_0, \gamma_1, \dots, \gamma_N$:
(a)
Monotonicity and positivity:
$$\gamma_0 \ge \gamma_1 \ge \gamma_2 \ge \dots \ge \gamma_N \ge 0; \quad (15)$$
(b)
Convexity:
$$\gamma_0 - \gamma_1 \ge \gamma_1 - \gamma_2 \ge \dots \ge \gamma_{N-1} - \gamma_N \ge 0. \quad (16)$$
Note that by (10),
$$\gamma_0 = 1 \quad \text{and} \quad \gamma_1 = 2^{2H-1} - 1, \quad (17)$$
hence, $\gamma_1 \le \gamma_0$ (recall that $1/2 < H < 1$). For $k \ge 1$, we have
$$\gamma_k = H(2H-1) \int_0^1 \int_0^1 (z - y + k)^{2H-2}\, dz\, dy \ge H(2H-1) \int_0^1 \int_0^1 (z - y + k + 1)^{2H-2}\, dz\, dy = \gamma_{k+1} \ge 0, \quad (18)$$
whence (15) follows.
Now let us prove (16). By (17), $\gamma_0 - \gamma_1 = 2 - 2^{2H-1}$. Applying (18), we may write
$$\begin{aligned}
\gamma_1 - \gamma_2 &= H(2H-1) \int_0^1 \int_0^1 \left( (z - y + 1)^{2H-2} - (z - y + 2)^{2H-2} \right) dz\, dy
= H(2H-1)(2 - 2H) \int_0^1 \int_0^1 \int_0^1 (z - y + 1 + u)^{2H-3}\, du\, dz\, dy \\
&\le H(2H-1)(2 - 2H) \int_0^1 \int_0^1 \int_0^1 (z + u)^{2H-3}\, du\, dz\, dy
= H(2H-1) \int_0^1 \left( z^{2H-2} - (z+1)^{2H-2} \right) dz
= H\left( 2 - 2^{2H-1} \right) \le 2 - 2^{2H-1} = \gamma_0 - \gamma_1;
\end{aligned}$$
here we have used that the function $y \mapsto (z - y + 1 + u)^{2H-3}$ is increasing for any $z, u \in (0,1)$. Similarly, for $k \ge 2$, we have
$$\gamma_k - \gamma_{k+1} = H(2H-1)(2 - 2H) \int_0^1 \int_0^1 \int_0^1 (z - y + k + u)^{2H-3}\, du\, dz\, dy \le H(2H-1)(2 - 2H) \int_0^1 \int_0^1 \int_0^1 (z - y + k - 1 + u)^{2H-3}\, du\, dz\, dy = \gamma_{k-1} - \gamma_k.$$
Hence, (16) is proved.
Then by applying (Berenhaut and Bandyopadhyay 2005, Theorem 1) we get (12). Further, by (Berenhaut and Bandyopadhyay 2005, Corollary 1), together with (15) and (16), we have the lower bound
$$\ell_{i,i}^2 \ge \gamma_0 - \gamma_1, \quad 2 \le i \le N, \quad (19)$$
whence
$$\ell_{i,i} \ge \sqrt{\gamma_0 - \gamma_1} = \sqrt{2 - 2^{2H-1}}, \quad 2 \le i \le N.$$
Finally, Equality (14) implies that
$$\sum_{j=1}^{p} \ell_{j,p}^2 = \gamma_0 = 1, \quad 1 \le p \le N.$$
Therefore, $\ell_{1,1} = 1$ and $\ell_{p,p} \le 1$ for all $1 \le p \le N$. □
It is not hard to see that the sequences $d_{j,k}$, $1 \le j \le k \le N$, and $\ell_{j,k}$, $1 \le j \le k \le N$, are related as follows:
$$\ell_{j,k} = \begin{cases} d_{j,k} - d_{j,k-1}, & j < k, \\ d_{j,j}, & j = k. \end{cases} \quad (20)$$
Indeed, by (8) we have
$$\begin{aligned}
\gamma_{|p-r|} &= \operatorname{cov}\left( B_p^H - B_{p-1}^H,\; B_r^H - B_{r-1}^H \right)
= \sum_{j=1}^{p \wedge r} d_{j,p} d_{j,r} - \sum_{j=1}^{p \wedge (r-1)} d_{j,p} d_{j,r-1} - \sum_{j=1}^{(p-1) \wedge r} d_{j,p-1} d_{j,r} + \sum_{j=1}^{(p-1) \wedge (r-1)} d_{j,p-1} d_{j,r-1} \\
&= \sum_{j=1}^{p \wedge r} \left( d_{j,p} - d_{j,p-1} \right) \left( d_{j,r} - d_{j,r-1} \right)
\end{aligned}$$
(here $d_{p,p-1} := 0$), and comparing this expression with (14), we see that (20) holds. From (20) and Lemma 2, we immediately obtain the following result.
Lemma 3.
The coefficients $d_{i,j}$, $1 \le i \le j \le N$, are increasing with respect to $j$ and are positive:
$$0 < \sqrt{2 - 2^{2H-1}} \le d_{i,i} \le d_{i,i+1} \le \dots \le d_{i,N}, \quad 1 \le i \le N; \quad (21)$$
moreover,
$$d_{i,i} \le 1 = d_{1,1}, \quad 1 \le i \le N. \quad (22)$$
Lemma 4.
For all $r \ge 1$,
$$\max_{1 \le j \le r} d_{j,r} \le C_H\, r^{2H-1}, \quad (23)$$
where $C_H = \dfrac{H \cdot 2^{2-2H}}{\sqrt{2 - 2^{2H-1}}}$.
Proof. 
By (8), for $1 < j < r$ we have
$$\sum_{i=1}^{j} d_{i,j} d_{i,r} = R(j, r).$$
Therefore, taking into account Inequality (21), we get
$$\begin{aligned}
d_{j,j} d_{j,r} &= R(j,r) - \sum_{i=1}^{j-1} d_{i,j} d_{i,r} \le R(j,r) - \sum_{i=1}^{j-1} d_{i,j-1} d_{i,r} = R(j,r) - R(j-1,r) \\
&= \frac{1}{2} \left( j^{2H} - (r-j)^{2H} - (j-1)^{2H} + (r-j+1)^{2H} \right) = H \int_{j-1}^{j} \left( x^{2H-1} + (r-x)^{2H-1} \right) dx.
\end{aligned}$$
Note that the maximal value of the function $f(x) = x^{2H-1} + (r-x)^{2H-1}$, $x \in [0,r]$, is attained at the point $x = \frac{r}{2}$ and equals $f\!\left(\frac{r}{2}\right) = 2 \left( \frac{r}{2} \right)^{2H-1} = 2^{2-2H}\, r^{2H-1}$. Therefore,
$$d_{j,j} d_{j,r} \le H \cdot 2^{2-2H}\, r^{2H-1}.$$
Using (21), we get
$$d_{j,r} \le \frac{H \cdot 2^{2-2H}\, r^{2H-1}}{d_{j,j}} \le \frac{H \cdot 2^{2-2H}\, r^{2H-1}}{\sqrt{2 - 2^{2H-1}}} = C_H\, r^{2H-1}.$$
Similarly, in the case $j = 1$ we have
$$d_{1,r} = R(1, r) = \frac{1}{2} \left( 1 + r^{2H} - (r-1)^{2H} \right) = H \int_0^1 \left( x^{2H-1} + (r-x)^{2H-1} \right) dx \le H \cdot 2^{2-2H}\, r^{2H-1} \le C_H\, r^{2H-1}. \quad (24)$$
Finally, if $j = r$, then using (21), (22) and (24), we obtain
$$d_{r,r} \le d_{1,1} \le d_{1,r} \le C_H\, r^{2H-1}.$$
 □
Remark 3.
Lemma 4 claims that $\max_{1 \le j \le r} d_{j,r} = O(r^{2H-1})$, as $r \to \infty$. Note that the asymptotic rate $O(r^{2H-1})$ is exact, since
$$\max_{1 \le j \le r} d_{j,r} \ge d_{1,r} = \frac{1}{2} \left( 1 + r^{2H} - (r-1)^{2H} \right) = H r^{2H-1} + O(1), \quad r \to \infty.$$
Moreover, we conjecture that $\max_{1 \le j \le r} d_{j,r} = d_{1,r}$; however, the proof of this equality remains an open problem, see Section 4 for further details.
Lemma 5.
Let $X_N$ be the process defined in Theorem 4. Then the finite-dimensional distributions of $X_N$ converge to those of $B^H$.
Proof. 
Let us check the conditions of Theorem 2. Evidently, the assumption (C1) holds, because fractional Brownian motion is a centered Gaussian process.
The verification of (C2) is straightforward. Applying (8), we get
$$\sum_{j=1}^{[Nt] \wedge [Ns]} c_{j,[Nt]}^N c_{j,[Ns]}^N = \frac{1}{N^{2H}} \sum_{j=1}^{[Nt] \wedge [Ns]} d_{j,[Nt]} d_{j,[Ns]} = \frac{1}{N^{2H}} R\left( [Nt], [Ns] \right) = R\!\left( \frac{[Nt]}{N}, \frac{[Ns]}{N} \right) \to R(t, s), \quad N \to \infty. \quad (25)$$
Finally, using Lemma 4, we can estimate
$$\max_{1 \le j \le k \le N} |c_{j,k}^N| = \frac{1}{N^H} \max_{1 \le j \le k \le N} d_{j,k} \le \frac{C_H}{N^H} \max_{1 \le k \le N} k^{2H-1} = C_H\, N^{H-1} \to 0, \quad N \to \infty. \quad (26)$$
Hence, the assumption (C3) also holds.
Thus, the result follows from Theorem 2. □

Multiplicative Scheme

Now let us verify the conditions of Theorem 3 for the sequence
$$b_{j,k}^N = \begin{cases} \dfrac{d_{j,k} - d_{j,k-1}}{N^H}, & j < k, \\[4pt] \dfrac{d_{j,j}}{N^H}, & j = k. \end{cases} \quad (27)$$
Evidently,
$$\sum_{i=j}^{k} b_{j,i}^N = \frac{d_{j,k}}{N^H} = c_{j,k}^N,$$
where the coefficients $c_{j,k}^N$ are defined by (9). Hence, the conditions of Theorem 1 for $c_{j,k}^N$ are satisfied. In the following Lemmas 6 and 7 we check assumptions (B2) and (B1), respectively.
Lemma 6.
Let $b_{j,k}^N$, $1 \le j \le k \le N$, be defined by (27). Then
$$\sum_{k=1}^{N} \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 \to 0, \quad \text{as } N \to \infty.$$
Proof. 
It follows from (27), (8) and (7) that for all $1 \le k \le N$,
$$\begin{aligned}
\sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 &= \frac{1}{N^{2H}} \left( \sum_{j=1}^{k-1} \left( d_{j,k} - d_{j,k-1} \right)^2 + d_{k,k}^2 \right)
= \frac{1}{N^{2H}} \left( \sum_{j=1}^{k} d_{j,k}^2 + \sum_{j=1}^{k-1} d_{j,k-1}^2 - 2 \sum_{j=1}^{k-1} d_{j,k} d_{j,k-1} \right) \\
&= \frac{1}{N^{2H}} \left( R(k,k) + R(k-1,k-1) - 2 R(k,k-1) \right)
= \frac{1}{N^{2H}} \left( k^{2H} + (k-1)^{2H} - k^{2H} - (k-1)^{2H} + 1 \right) = \frac{1}{N^{2H}}. \quad (29)
\end{aligned}$$
Therefore,
$$\sum_{k=1}^{N} \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 = N^{1-2H} \to 0, \quad \text{as } N \to \infty.$$
 □
Lemma 7.
Let $b_{j,k}^N$, $1 \le j \le k \le N$, be defined by (27). Then
$$\sup_{N \ge 2,\; 1 \le k \le N}\; \sum_{j=1}^{k} |b_{j,k}^N| < 1.$$
Proof. 
Using the Cauchy–Schwarz inequality and Equality (29), we obtain that for $N \ge 2$ and $1 \le k \le N$,
$$\sum_{j=1}^{k} |b_{j,k}^N| \le \sqrt{k} \left( \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 \right)^{1/2} = \frac{\sqrt{k}}{N^H} \le N^{\frac{1}{2}-H} < 1. \quad (30)$$
 □
Thus, we have proved that all assumptions of Theorem 3 are satisfied. As a consequence, we obtain the following result.
Theorem 5.
Assume that $b_{j,k}^N$, $1 \le j \le k \le N$, is a triangular array of real numbers defined by (27) and $\xi_j$, $j \ge 1$, are iid Rademacher random variables. Then the sequence of stochastic processes
$$S_N(t) = \prod_{k=1}^{[Nt]} \left( 1 + \sum_{j=1}^{k} b_{j,k}^N \xi_j \right), \quad t \in [0,1],$$
converges in $D[0,1]$ to $S(t) = e^{B_t^H}$, $t \in [0,1]$.
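Combining the sketches above, one can simulate a path of $S_N$; with the coefficients (27), the product stays positive by Lemma 7 and, for large $N$, behaves like $e^{B^H}$ (again a sketch, ours, reusing `fbm_cholesky_coeffs` from the sketch after Theorem 4):

```python
import numpy as np

N, H = 200, 0.75
d = fbm_cholesky_coeffs(N, H)
rng = np.random.default_rng(seed=42)
xi = rng.choice([-1.0, 1.0], size=N)       # iid Rademacher variables

S = np.ones(N + 1)
for k in range(1, N + 1):
    # b_{j,k}^N from (27): row increments of d for j < k, diagonal for j = k
    b = np.empty(k)
    b[k - 1] = d[k - 1, k - 1] / N**H
    if k > 1:
        b[:k - 1] = (d[:k - 1, k - 1] - d[:k - 1, k - 2]) / N**H
    Z = b @ xi[:k]
    S[k] = S[k - 1] * (1.0 + Z)            # 1 + Z > 0 by Lemma 7
```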
Remark 4.
Theorems 4 and 5 suggest one possible way to approximate fractional Brownian motion by a discrete model. Another scheme was proposed by Sottinen (2001). It is worth noting that his approximation is also a particular case of the general scheme described in Section 2, but with the following coefficients:
$$c_{j,k}^N = \sqrt{N} \int_{\frac{j-1}{N}}^{\frac{j}{N}} z\!\left( \frac{k}{N}, s \right) ds,$$
where
$$z(t,s) = \sqrt{\frac{2H\, \Gamma\!\left(\frac{3}{2} - H\right)}{\Gamma\!\left(H + \frac{1}{2}\right) \Gamma(2 - 2H)}} \left( H - \frac{1}{2} \right) s^{\frac{1}{2}-H} \int_s^t u^{H-\frac{1}{2}} (u - s)^{H-\frac{3}{2}}\, du.$$
Note that assumptions (A1) and (A2) for these $c_{j,k}^N$ are verified in the proof of (Sottinen 2001, Theorem 1); in particular, the tightness condition (A2) is established in (Sottinen 2001, Equation (8)).
The coefficients $b_{j,k}^N$ of the corresponding multiplicative scheme are equal to
$$b_{j,k}^N = c_{j,k}^N - c_{j,k-1}^N = \sqrt{N} \int_{\frac{j-1}{N}}^{\frac{j}{N}} \left( z\!\left( \frac{k}{N}, s \right) - z\!\left( \frac{k-1}{N}, s \right) \right) ds, \quad j \le k-1, \qquad
b_{k,k}^N = \sqrt{N} \int_{\frac{k-1}{N}}^{\frac{k}{N}} z\!\left( \frac{k}{N}, s \right) ds.$$
Then, by the Cauchy–Schwarz inequality, we have
$$\begin{aligned}
\sum_{j=1}^{k} \left( b_{j,k}^N \right)^2
&= N \sum_{j=1}^{k-1} \left( \int_{\frac{j-1}{N}}^{\frac{j}{N}} \left( z\!\left( \tfrac{k}{N}, s \right) - z\!\left( \tfrac{k-1}{N}, s \right) \right) ds \right)^2 + N \left( \int_{\frac{k-1}{N}}^{\frac{k}{N}} z\!\left( \tfrac{k}{N}, s \right) ds \right)^2 \\
&\le \sum_{j=1}^{k-1} \int_{\frac{j-1}{N}}^{\frac{j}{N}} \left( z\!\left( \tfrac{k}{N}, s \right) - z\!\left( \tfrac{k-1}{N}, s \right) \right)^2 ds + \int_{\frac{k-1}{N}}^{\frac{k}{N}} z\!\left( \tfrac{k}{N}, s \right)^2 ds \\
&= \int_0^{\frac{k-1}{N}} z\!\left( \tfrac{k-1}{N}, s \right)^2 ds + \int_0^{\frac{k}{N}} z\!\left( \tfrac{k}{N}, s \right)^2 ds - 2 \int_0^{\frac{k-1}{N}} z\!\left( \tfrac{k}{N}, s \right) z\!\left( \tfrac{k-1}{N}, s \right) ds. \quad (31)
\end{aligned}$$
The function $z(t,s)$ is the kernel of the Molchan–Golosov representation of fractional Brownian motion as an integral with respect to a Wiener process $W = \{W_t, t \ge 0\}$:
$$B_t^H = \int_0^t z(t,s)\, dW_s.$$
Therefore the covariance function of $B^H$ equals
$$R(t,s) = \int_0^{t \wedge s} z(t,u)\, z(s,u)\, du,$$
and we obtain from (31) that
$$\sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 \le R\!\left( \frac{k-1}{N}, \frac{k-1}{N} \right) + R\!\left( \frac{k}{N}, \frac{k}{N} \right) - 2 R\!\left( \frac{k-1}{N}, \frac{k}{N} \right) = \frac{1}{N^{2H}}. \quad (32)$$
Conditions (B2) and (B1) are derived from the bound (32) in the same way as Lemmas 6 and 7 were derived from Equality (29).

4. Possible Perturbations of the Coefficients in Cholesky Decomposition. Numerical Example and Discussion of Open Problems

4.1. Possible Perturbations of the Coefficients in Cholesky Decomposition

We now discuss how the coefficients (8) and (9) in the prelimit sequence can be perturbed so that the convergence to fractional Brownian motion is preserved. In this sense, we estimate the rate of convergence of the perturbations to zero that is sufficient to preserve the convergence.
Theorem 6.
1. Let the coefficients $c_{j,k}^N$, $1 \le j \le k \le N$, and the random variables $\xi_i$, $i \ge 1$, be the same as in Theorem 4. Consider the perturbed coefficients
$$\tilde{c}_{j,k}^N = c_{j,k}^N + \varepsilon_j^N,$$
where the sequence $\{\varepsilon_j^N, 1 \le j \le N\}$ satisfies the following conditions:
(i)
There exist positive constants $C$ and $\alpha$ such that
$$\sum_{n=j+1}^{k} \left( \varepsilon_n^N \right)^2 \le C \left( \frac{k-j}{N} \right)^{1+\alpha}$$
for all $0 \le j < k \le N$;
(ii)
$$\sum_{j=1}^{N} \left( \varepsilon_j^N \right)^2 \to 0, \quad \text{as } N \to \infty.$$
Then the processes
$$X_N(t) = \sum_{j=1}^{[Nt]} \tilde{c}_{j,[Nt]}^N \xi_j, \quad t \in [0,1],$$
converge to $B^H$, as $N \to \infty$, weakly in $D([0,1])$.
2. Assume, additionally, that the following condition holds:
(iii)
There exists $N_0 > 0$ such that for all $N \ge N_0$ and all $1 \le j \le N$,
$$|\varepsilon_j^N| < 1 - \frac{j^{1/2}}{N^H},$$
and that $\xi_j$, $j \ge 1$, are iid Rademacher random variables. Let
$$\tilde{b}_{j,k}^N = \begin{cases} \tilde{c}_{j,k}^N - \tilde{c}_{j,k-1}^N, & j < k, \\ \tilde{c}_{j,j}^N, & j = k. \end{cases}$$
Then the sequence of stochastic processes
$$S_N(t) = \prod_{k=1}^{[Nt]} \left( 1 + \sum_{j=1}^{k} \tilde{b}_{j,k}^N \xi_j \right), \quad t \in [0,1], \quad N \ge N_0,$$
converges in $D[0,1]$ to $S(t) = e^{B_t^H}$, $t \in [0,1]$.
Proof. 
1. Let us prove that conditions (A2), (C2) and (C3) remain valid if we replace the coefficients $c_{j,k}^N$ by $\tilde{c}_{j,k}^N$. Applying (i) and Lemma 1, we get
$$\begin{aligned}
\sum_{n=1}^{j} \left( \tilde{c}_{n,k}^N - \tilde{c}_{n,j}^N \right)^2 + \sum_{n=j+1}^{k} \left( \tilde{c}_{n,k}^N \right)^2
&= \sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 + \sum_{n=j+1}^{k} \left( c_{n,k}^N + \varepsilon_n^N \right)^2 \\
&\le 2 \sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 + 2 \sum_{n=j+1}^{k} \left( c_{n,k}^N \right)^2 + 2 \sum_{n=j+1}^{k} \left( \varepsilon_n^N \right)^2
\le 2 \left( \frac{k-j}{N} \right)^{2H} + 2C \left( \frac{k-j}{N} \right)^{1+\alpha},
\end{aligned}$$
whence (A2) follows.
Further, for $0 \le s \le t \le 1$, we have
$$\sum_{j=1}^{[Nt] \wedge [Ns]} \tilde{c}_{j,[Nt]}^N \tilde{c}_{j,[Ns]}^N = \sum_{j=1}^{[Ns]} \left( c_{j,[Nt]}^N + \varepsilon_j^N \right) \left( c_{j,[Ns]}^N + \varepsilon_j^N \right) = \sum_{j=1}^{[Ns]} c_{j,[Nt]}^N c_{j,[Ns]}^N + \sum_{j=1}^{[Ns]} \left( \varepsilon_j^N \right)^2 + \sum_{j=1}^{[Ns]} \varepsilon_j^N c_{j,[Ns]}^N + \sum_{j=1}^{[Ns]} \varepsilon_j^N c_{j,[Nt]}^N. \quad (33)$$
The first term on the right-hand side of (33) converges to $R(t,s)$ by (25), as $N \to \infty$. The second term is bounded by the sum $\sum_{j=1}^{N} (\varepsilon_j^N)^2$, which tends to zero according to assumption (ii). Using the Cauchy–Schwarz inequality and (7), we can bound the third term as follows:
$$\left| \sum_{j=1}^{[Ns]} \varepsilon_j^N c_{j,[Ns]}^N \right| \le \left( \sum_{j=1}^{[Ns]} \left( \varepsilon_j^N \right)^2 \right)^{1/2} \left( \sum_{j=1}^{[Ns]} \left( c_{j,[Ns]}^N \right)^2 \right)^{1/2} \le \left( \sum_{j=1}^{N} \left( \varepsilon_j^N \right)^2 \right)^{1/2} \left( R\!\left( \frac{[Ns]}{N}, \frac{[Ns]}{N} \right) \right)^{1/2} \to 0, \quad \text{as } N \to \infty,$$
by (ii), since $R\!\left( \frac{[Ns]}{N}, \frac{[Ns]}{N} \right) = \left( \frac{[Ns]}{N} \right)^{2H} \to s^{2H}$, as $N \to \infty$. Similarly, for the fourth term we have
$$\left| \sum_{j=1}^{[Ns]} \varepsilon_j^N c_{j,[Nt]}^N \right| \le \left( \sum_{j=1}^{[Ns]} \left( \varepsilon_j^N \right)^2 \right)^{1/2} \left( \sum_{j=1}^{[Ns]} \left( c_{j,[Nt]}^N \right)^2 \right)^{1/2} \le \left( \sum_{j=1}^{N} \left( \varepsilon_j^N \right)^2 \right)^{1/2} \left( \sum_{j=1}^{[Nt]} \left( c_{j,[Nt]}^N \right)^2 \right)^{1/2} \to 0, \quad \text{as } N \to \infty.$$
Thus, $\sum_{j=1}^{[Nt] \wedge [Ns]} \tilde{c}_{j,[Nt]}^N \tilde{c}_{j,[Ns]}^N \to R(t,s)$, as $N \to \infty$, i.e., assumption (C2) is verified.
Finally, assumption (C3) also holds, since
$$\max_{1 \le j \le k \le N} |\tilde{c}_{j,k}^N| \le \max_{1 \le j \le k \le N} |c_{j,k}^N| + \max_{1 \le j \le N} |\varepsilon_j^N|,$$
where $\max_{1 \le j \le k \le N} |c_{j,k}^N| \to 0$, $N \to \infty$, by (26), and $\max_{1 \le j \le N} |\varepsilon_j^N| \le \left( \sum_{j=1}^{N} (\varepsilon_j^N)^2 \right)^{1/2} \to 0$, $N \to \infty$, by (ii).
2. Now let us verify conditions (B1) and (B2) for the convergence of the corresponding multiplicative scheme. Note that
$$\tilde{b}_{j,k}^N = \begin{cases} \tilde{c}_{j,k}^N - \tilde{c}_{j,k-1}^N, & j < k, \\ \tilde{c}_{j,j}^N, & j = k, \end{cases}
= \begin{cases} c_{j,k}^N - c_{j,k-1}^N, & j < k, \\ c_{j,j}^N + \varepsilon_j^N, & j = k, \end{cases}
= \begin{cases} b_{j,k}^N, & j < k, \\ b_{j,j}^N + \varepsilon_j^N, & j = k. \end{cases} \quad (34)$$
Therefore,
$$\sum_{j=1}^{k} \left( \tilde{b}_{j,k}^N \right)^2 = \sum_{j=1}^{k-1} \left( b_{j,k}^N \right)^2 + \left( b_{k,k}^N + \varepsilon_k^N \right)^2 \le 2 \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 + 2 \left( \varepsilon_k^N \right)^2.$$
Consequently, using (ii) and Lemma 6 we get
$$\sum_{k=1}^{N} \sum_{j=1}^{k} \left( \tilde{b}_{j,k}^N \right)^2 \le 2 \sum_{k=1}^{N} \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 + 2 \sum_{k=1}^{N} \left( \varepsilon_k^N \right)^2 \to 0, \quad \text{as } N \to \infty,$$
i.e., (B2) holds. Applying successively (34), (30), and (iii), we get
$$\sum_{j=1}^{k} |\tilde{b}_{j,k}^N| \le \sum_{j=1}^{k} |b_{j,k}^N| + |\varepsilon_k^N| \le \frac{k^{1/2}}{N^H} + |\varepsilon_k^N| < 1,$$
for all $N \ge N_0$ and all $1 \le k \le N$. Hence, assumption (B1) is also satisfied, and the convergence of the multiplicative scheme follows from Theorem 3. □
Example 1.
The following sequences $\varepsilon_j^N$ satisfy the conditions (i)–(iii) of Theorem 6 (a short verification for the first sequence is sketched below):
  • $\varepsilon_j^N = N^{-\beta}$, $\beta > \frac{1}{2}$;
  • $\varepsilon_j^N = j^{\gamma} N^{-\beta}$, $\gamma \ge 0$, $\beta - \gamma > \frac{1}{2}$.
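For instance, for $\varepsilon_j^N = N^{-\beta}$ one can take $\alpha = 2\beta - 1 > 0$ in (i); a short check (ours), using only that $k - j \ge 1$ implies $N^{-\alpha} \le \left( \frac{k-j}{N} \right)^{\alpha}$:
$$\sum_{n=j+1}^{k} \left( \varepsilon_n^N \right)^2 = (k - j)\, N^{-2\beta} = \frac{k-j}{N}\, N^{-\alpha} \le \frac{k-j}{N} \left( \frac{k-j}{N} \right)^{\alpha} = \left( \frac{k-j}{N} \right)^{1+\alpha}.$$
Condition (ii) holds because $\sum_{j=1}^{N} (\varepsilon_j^N)^2 = N^{1-2\beta} \to 0$, and (iii) holds for all sufficiently large $N$, since $N^{-\beta} \to 0$ while $1 - j^{1/2} N^{-H} \ge 1 - N^{1/2-H} \to 1$.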

4.2. Numerical Example and Discussion of Open Problems

First, let us illustrate the results of Section 3 with a numerical example. Take $H = 0.75$, $N = 10$. In this case the covariance matrix of fractional Brownian motion, $R(j,k) = \operatorname{cov}(B_j^H, B_k^H)$, $j, k = 1, \dots, 10$, equals
$$\begin{pmatrix}
1.00000 & 1.41421 & 1.68386 & 1.90192 & 2.09017 & 2.25830 & 2.41166 & 2.55358 & 2.68629 & 2.81139 \\
1.41421 & 2.82843 & 3.51229 & 4.00000 & 4.40631 & 4.76268 & 5.08417 & 5.37945 & 5.65408 & 5.91189 \\
1.68386 & 3.51229 & 5.19615 & 6.09808 & 6.77403 & 7.34847 & 7.85821 & 8.32161 & 8.74961 & 9.14933 \\
1.90192 & 4.00000 & 6.09808 & 8.00000 & 9.09017 & 9.93426 & 10.6621 & 11.3137 & 11.9098 & 12.4629 \\
2.09017 & 4.40631 & 6.77403 & 9.09017 & 11.1803 & 12.4386 & 13.4361 & 14.3058 & 15.0902 & 15.8114 \\
2.25830 & 4.76268 & 7.34847 & 9.93426 & 12.4386 & 14.6969 & 16.1086 & 17.2480 & 18.2504 & 19.1599 \\
2.41166 & 5.08417 & 7.85821 & 10.6621 & 13.4361 & 16.1086 & 18.5203 & 20.0738 & 21.3459 & 22.4734 \\
2.55358 & 5.37945 & 8.32161 & 11.3137 & 14.3058 & 17.2480 & 20.0738 & 22.6274 & 24.3137 & 25.7109 \\
2.68629 & 5.65408 & 8.74961 & 11.9098 & 15.0902 & 18.2504 & 21.3459 & 24.3137 & 27.0000 & 28.8114 \\
2.81139 & 5.91189 & 9.14933 & 12.4629 & 15.8114 & 19.1599 & 22.4734 & 25.7109 & 28.8114 & 31.6228
\end{pmatrix}.$$
Then the upper-triangular matrix of its Cholesky decomposition is given by
$$\begin{pmatrix}
1 & 1.41421 & 1.68386 & 1.90192 & 2.09017 & 2.25830 & 2.41166 & 2.55358 & 2.68629 & 2.81139 \\
0 & 0.91018 & 1.24256 & 1.43958 & 1.59349 & 1.72380 & 1.83873 & 1.94263 & 2.03816 & 2.12704 \\
0 & 0 & 0.90378 & 1.22457 & 1.41016 & 1.55336 & 1.67362 & 1.77909 & 1.87406 & 1.96107 \\
0 & 0 & 0 & 0.90040 & 1.21504 & 1.39427 & 1.53132 & 1.64570 & 1.74555 & 1.83513 \\
0 & 0 & 0 & 0 & 0.89857 & 1.20967 & 1.38511 & 1.51842 & 1.62917 & 1.72551 \\
0 & 0 & 0 & 0 & 0 & 0.89741 & 1.20618 & 1.37909 & 1.50985 & 1.61811 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.89661 & 1.20373 & 1.37483 & 1.50375 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.89603 & 1.20192 & 1.37165 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.89558 & 1.20053 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.89523
\end{pmatrix}.$$
We see that this example confirms the results of Lemmas 3 and 4. In particular, all elements of this matrix are non-negative and its rows are increasing. Moreover, all diagonal elements are less than or equal to 1 and are greater than $\sqrt{2 - 2^{2H-1}} = 0.76537$. Furthermore, the bound (23) also holds. Indeed, for $H = 0.75$, $C_H = \frac{H \cdot 2^{2-2H}}{\sqrt{2 - 2^{2H-1}}} = 1.38582$. For example, for $r = 10$ we have $\max_{1 \le j \le 10} d_{j,10} = 2.81139$, which is less than $C_H \cdot 10^{2H-1} = 4.38235$.
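The matrices above, and the bounds just mentioned, can be reproduced in a few lines (a sketch, ours):

```python
import numpy as np

H, N = 0.75, 10
idx = np.arange(1, N + 1)
p, r = np.meshgrid(idx, idx, indexing="ij")
R = 0.5 * (p**(2*H) + r**(2*H) - np.abs(p - r)**(2*H))   # covariance of B^H
D = np.linalg.cholesky(R).T               # upper-triangular, entries d_{j,k}

print(np.round(D, 5))                     # reproduces the table above
print(D.diagonal().min() >= np.sqrt(2 - 2**(2*H - 1)))   # lower bound in (21)
C_H = H * 2**(2 - 2*H) / np.sqrt(2 - 2**(2*H - 1))       # C_H = 1.38582
print(D[:, -1].max() <= C_H * N**(2*H - 1))              # bound (23) for r = 10
```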
We conjecture that the columns of the above matrix are decreasing. More precisely, our conjecture can be formulated as follows.
Conjecture A1.
For all $r \ge 1$,
$$d_{1,r} > d_{2,r} > \dots > d_{r,r}.$$
However, the proof of this fact is an open problem.
Further, the corresponding covariance matrix of the increment process $B_k^H - B_{k-1}^H$, $k = 1, \dots, 10$, is equal to
$$\begin{pmatrix}
1.00000 & 0.41421 & 0.26965 & 0.21806 & 0.18825 & 0.16813 & 0.15336 & 0.14192 & 0.13271 & 0.12510 \\
0.41421 & 1.00000 & 0.41421 & 0.26965 & 0.21806 & 0.18825 & 0.16813 & 0.15336 & 0.14192 & 0.13271 \\
0.26965 & 0.41421 & 1.00000 & 0.41421 & 0.26965 & 0.21806 & 0.18825 & 0.16813 & 0.15336 & 0.14192 \\
0.21806 & 0.26965 & 0.41421 & 1.00000 & 0.41421 & 0.26965 & 0.21806 & 0.18825 & 0.16813 & 0.15336 \\
0.18825 & 0.21806 & 0.26965 & 0.41421 & 1.00000 & 0.41421 & 0.26965 & 0.21806 & 0.18825 & 0.16813 \\
0.16813 & 0.18825 & 0.21806 & 0.26965 & 0.41421 & 1.00000 & 0.41421 & 0.26965 & 0.21806 & 0.18825 \\
0.15336 & 0.16813 & 0.18825 & 0.21806 & 0.26965 & 0.41421 & 1.00000 & 0.41421 & 0.26965 & 0.21806 \\
0.14192 & 0.15336 & 0.16813 & 0.18825 & 0.21806 & 0.26965 & 0.41421 & 1.00000 & 0.41421 & 0.26965 \\
0.13271 & 0.14192 & 0.15336 & 0.16813 & 0.18825 & 0.21806 & 0.26965 & 0.41421 & 1.00000 & 0.41421 \\
0.12510 & 0.13271 & 0.14192 & 0.15336 & 0.16813 & 0.18825 & 0.21806 & 0.26965 & 0.41421 & 1.00000
\end{pmatrix},$$
and the upper-triangular matrix of the corresponding Cholesky decomposition has the following form:
$$\begin{pmatrix}
1 & 0.41421 & 0.26965 & 0.21806 & 0.18825 & 0.16813 & 0.15336 & 0.14192 & 0.13271 & 0.12510 \\
0 & 0.91018 & 0.33238 & 0.19702 & 0.15391 & 0.13031 & 0.11493 & 0.10391 & 0.09553 & 0.08888 \\
0 & 0 & 0.90378 & 0.32080 & 0.18559 & 0.14319 & 0.12027 & 0.10547 & 0.09496 & 0.08702 \\
0 & 0 & 0 & 0.90040 & 0.31464 & 0.17923 & 0.13705 & 0.11438 & 0.09985 & 0.08958 \\
0 & 0 & 0 & 0 & 0.89857 & 0.31109 & 0.17545 & 0.13331 & 0.11075 & 0.09634 \\
0 & 0 & 0 & 0 & 0 & 0.89741 & 0.30877 & 0.17291 & 0.13077 & 0.10825 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.89661 & 0.30712 & 0.17110 & 0.12892 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.89603 & 0.30590 & 0.16973 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.89558 & 0.30495 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.89523
\end{pmatrix}.$$
We observe that the values along all diagonals of this matrix decrease. These numerical results allow us to formulate the following conjecture.
Conjecture A2.
For all $1 \le j \le k$,
$$\ell_{j,k} > \ell_{j+1,k+1}.$$
Remark 5.
1.
For the moment, we can prove only the non-strict inequality in the case $j = k$, i.e., the monotonicity along the main diagonal; see Remark A2 below.
2.
Conjecture A2 implies Conjecture A1. This becomes clear if we rewrite the relation (20) between $d_{j,k}$ and $\ell_{j,k}$ as follows:
$$d_{j,k} = \ell_{j,j} + \ell_{j,j+1} + \ell_{j,j+2} + \dots + \ell_{j,k}, \quad 1 \le j \le k.$$
3.
In Appendix A.3 below we formulate Conjecture A3, which is a sufficient condition for Conjecture A2.

5. Riemann–Liouville Fractional Brownian Motion as a Limit Process

Let $H \in (\frac{1}{2}, 1)$, $T = 1$. Let us define
$$Z^H(t) = \int_0^t (t - u)^{H - 1/2}\, dW_u, \quad t \in [0,1], \quad (35)$$
where $W = \{W_u, u \in [0,1]\}$ is a Wiener process. The process $Z^H$ is known as Riemann–Liouville fractional Brownian motion or type II fractional Brownian motion; see, e.g., Davidson and Hashimzade (2009); Marinucci and Robinson (1999).
Define
$$b_{j,k}^N = \left( H - \tfrac{1}{2} \right) N^{-H} (k - j + 1)^{H - 3/2}, \quad 1 \le j \le k \le N, \quad (36)$$
$$c_{j,k}^N = \sum_{i=j}^{k} b_{j,i}^N = \left( H - \tfrac{1}{2} \right) N^{-H} \sum_{i=j}^{k} (i - j + 1)^{H - 3/2} = \left( H - \tfrac{1}{2} \right) N^{-H} \sum_{l=1}^{k-j+1} l^{H - 3/2}, \quad 1 \le j \le k \le N. \quad (37)$$
Theorem 7.
Let $\xi_i$, $i \ge 1$, be a sequence of iid random variables with $\mathsf{E}\xi_i = 0$ and $\mathsf{E}\xi_i^2 = 1$. Assume that $c_{j,k}^N$, $1 \le j \le k \le N$, are defined by (37) and
$$X_N(t) = \sum_{j=1}^{[Nt]} c_{j,[Nt]}^N \xi_j, \quad t \in [0,1].$$
Then $X_N \Rightarrow Z^H$, $N \to \infty$, weakly in $D([0,1])$.
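A simulation sketch of this scheme (ours); note that the coefficients (37) are partial sums of $l^{H-3/2}$ and can therefore be precomputed with one cumulative sum:

```python
import numpy as np

def rl_fbm_scheme(N, H, rng=None):
    """Simulate the additive scheme (37), which converges to the
    Riemann-Liouville fBm Z^H on [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.choice([-1.0, 1.0], size=N)
    # cums[m] = sum_{l=1}^{m+1} l^(H - 3/2)
    cums = np.cumsum(np.arange(1, N + 1, dtype=float) ** (H - 1.5))
    X = np.zeros(N + 1)
    for k in range(1, N + 1):
        j = np.arange(1, k + 1)
        c = (H - 0.5) * N**(-H) * cums[k - j]   # c_{j,k}^N: l runs to k-j+1
        X[k] = c @ xi[:k]
    return np.linspace(0.0, 1.0, N + 1), X
```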
First, let us prove the convergence of finite-dimensional distributions.
Lemma 8.
Under the assumptions of Theorem 7, the finite-dimensional distributions of $X_N$ converge to those of $Z^H$.
Proof. 
In order to prove the lemma, we will verify the conditions of Theorem 2. Let us start by proving that the covariance function of $X_N$ converges to the covariance function of $Z^H$ as $N \to \infty$. Indeed, for $s \le t$ we have
$$\sum_{j=1}^{[Ns]} c_{j,[Nt]}^N c_{j,[Ns]}^N = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{j=1}^{[Ns]} \left( \sum_{l=1}^{[Nt]-j+1} l^{H-3/2} \right) \left( \sum_{k=1}^{[Ns]-j+1} k^{H-3/2} \right).$$
By the Euler–Maclaurin formula,
$$\sum_{l=1}^{N} l^{H-3/2} = \frac{N^{H-1/2}}{H - \frac{1}{2}} + O(1), \quad \text{as } N \to \infty. \quad (38)$$
Therefore,
$$\begin{aligned}
\sum_{j=1}^{[Ns]} c_{j,[Nt]}^N c_{j,[Ns]}^N
&\sim \frac{1}{N^{2H}} \sum_{j=1}^{[Ns]} \left( [Nt] - j + 1 \right)^{H-1/2} \left( [Ns] - j + 1 \right)^{H-1/2}
= \frac{[Ns]^{2H}}{N^{2H}} \cdot \frac{1}{[Ns]} \sum_{j=1}^{[Ns]} \left( \frac{[Nt]}{[Ns]} - \frac{j-1}{[Ns]} \right)^{H-1/2} \left( 1 - \frac{j-1}{[Ns]} \right)^{H-1/2} \\
&\to s^{2H} \int_0^1 \left( \frac{t}{s} - y \right)^{H-1/2} (1 - y)^{H-1/2}\, dy = \int_0^s (t-u)^{H-1/2} (s-u)^{H-1/2}\, du \\
&= \mathsf{E}\left[ \int_0^t (t-u)^{H-1/2}\, dW_u \int_0^s (s-u)^{H-1/2}\, dW_u \right] = \operatorname{cov}\left( Z^H(t), Z^H(s) \right),
\end{aligned}$$
hence, the condition (C2) is satisfied.
Note that by (37) and (38),
$$\max_{1 \le j \le k \le N} |c_{j,k}^N| = \frac{H - \frac{1}{2}}{N^H} \max_{1 \le j \le k \le N} \sum_{l=1}^{k-j+1} l^{H-3/2} = \frac{H - \frac{1}{2}}{N^H} \sum_{l=1}^{N} l^{H-3/2} = O\!\left( N^{-1/2} \right), \quad N \to \infty,$$
and the condition (C3) also holds.
Thus the assumptions of Theorem 2 are satisfied. This concludes the proof. □
Now let us verify the tightness condition (A2).
Lemma 9.
Let the numbers $c_{j,k}^N$, $1 \le j \le k \le N$, be defined by (37). Then there exists a constant $C > 0$ such that for all $1 \le j < k \le N$,
$$\sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 + \sum_{n=j+1}^{k} \left( c_{n,k}^N \right)^2 \le C \left( \frac{k-j}{N} \right)^{H + 1/2}. \quad (40)$$
Proof. 
We estimate each of the sums on the left-hand side of (40). Using (37), we get
$$\sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{n=1}^{j} \left( \sum_{l=j-n+2}^{k-n+1} l^{H-3/2} \right)^2 = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{i=1}^{j} \left( \sum_{l=i+1}^{i+k-j} l^{H-3/2} \right)^2.$$
The inner sum can be bounded as follows:
$$\sum_{l=i+1}^{i+k-j} l^{H-3/2} \le i^{\frac{H}{2} - \frac{3}{4}} \sum_{l=i+1}^{i+k-j} l^{\frac{H}{2} - \frac{3}{4}} \le i^{\frac{H}{2} - \frac{3}{4}} \sum_{l=i+1}^{i+k-j} \int_{l-1}^{l} x^{\frac{H}{2} - \frac{3}{4}}\, dx = i^{\frac{H}{2} - \frac{3}{4}} \int_{i}^{i+k-j} x^{\frac{H}{2} - \frac{3}{4}}\, dx = i^{\frac{H}{2} - \frac{3}{4}}\, \frac{(i+k-j)^{\frac{H}{2} + \frac{1}{4}} - i^{\frac{H}{2} + \frac{1}{4}}}{\frac{H}{2} + \frac{1}{4}} \le i^{\frac{H}{2} - \frac{3}{4}}\, \frac{(k-j)^{\frac{H}{2} + \frac{1}{4}}}{\frac{H}{2} + \frac{1}{4}}, \quad (41)$$
where we used the inequality $x^{\beta} - y^{\beta} \le (x - y)^{\beta}$ for $x > y > 0$ and $0 < \beta \le 1$.
n = 1 j c n , k N c n , j N 2 ( H 1 2 ) 2 ( H 2 + 1 4 ) 2 ( k j ) H + 1 2 N 2 H i = 1 j i H 3 2 .
Since $i^{H-3/2} \le x^{H-3/2}$ for $x \in [i-1, i]$, we see that
$$\sum_{i=1}^{j} i^{H-3/2} \le \sum_{i=1}^{j} \int_{i-1}^{i} x^{H-3/2}\, dx = \int_0^j x^{H-3/2}\, dx = \frac{j^{H-1/2}}{H - \frac{1}{2}}.$$
Consequently, we have
$$\sum_{n=1}^{j} \left( c_{n,k}^N - c_{n,j}^N \right)^2 \le \frac{H - \frac{1}{2}}{\left( \frac{H}{2} + \frac{1}{4} \right)^2} \left( \frac{j}{N} \right)^{H - \frac{1}{2}} \left( \frac{k-j}{N} \right)^{H + \frac{1}{2}} \le \frac{H - \frac{1}{2}}{\left( \frac{H}{2} + \frac{1}{4} \right)^2} \left( \frac{k-j}{N} \right)^{H + \frac{1}{2}}. \quad (42)$$
Now we estimate the second sum in (40). By the change of variables $m = k - n + 1$, we get
$$\sum_{n=j+1}^{k} \left( c_{n,k}^N \right)^2 = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{n=j+1}^{k} \left( \sum_{l=1}^{k-n+1} l^{H-3/2} \right)^2 = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{m=1}^{k-j} \left( \sum_{l=1}^{m} l^{H-3/2} \right)^2.$$
We bound the inner sum using (41) and the estimate $m^{2H-1} \le (k-j)^{2H-1}$. We obtain
$$\sum_{n=j+1}^{k} \left( c_{n,k}^N \right)^2 \le \frac{1}{N^{2H}} \sum_{m=1}^{k-j} m^{2H-1} \le \frac{(k-j)^{2H}}{N^{2H}} = \left( \frac{k-j}{N} \right)^{H - \frac{1}{2}} \left( \frac{k-j}{N} \right)^{H + \frac{1}{2}} \le \left( \frac{k-j}{N} \right)^{H + \frac{1}{2}}. \quad (43)$$
Combining (42) and (43), we conclude the proof. □
Now let us verify the conditions of Theorem 3. We start with condition (B1).
Lemma 10.
Let $b_{j,k}^N$, $1 \le j \le k \le N$, be the triangular array of non-negative numbers defined by (36). Then for all $N \ge 1$ and all $k = 1, \dots, N$,
$$\sum_{j=1}^{k} b_{j,k}^N \le \frac{1}{\sqrt{2}}. \quad (44)$$
Proof. 
It follows from (36) and (41) that
$$\sum_{j=1}^{k} b_{j,k}^N = \frac{H - \frac{1}{2}}{N^H} \sum_{j=1}^{k} (k - j + 1)^{H - 3/2} = \frac{H - \frac{1}{2}}{N^H} \sum_{l=1}^{k} l^{H - 3/2} \le \frac{k^{H - 1/2}}{N^H}.$$
If $k \ge 2$, then $\frac{k^{H-1/2}}{N^H} = \left( \frac{k}{N} \right)^H \frac{1}{\sqrt{k}} \le \frac{1}{\sqrt{2}}$, and (44) is valid. In the remaining case $k = 1$ we have
$$\sum_{j=1}^{k} b_{j,k}^N = b_{1,1}^N = \frac{H - \frac{1}{2}}{N^H} \le H - \frac{1}{2} < \frac{1}{2} < \frac{1}{\sqrt{2}},$$
and (44) also holds. □
Now let us verify the condition (B2).
Lemma 11.
The numbers $b_{j,k}^N$, $1 \le j \le k \le N$, defined by (36) satisfy condition (B2).
Proof. 
We have
$$\sum_{k=1}^{N} \sum_{j=1}^{k} \left( b_{j,k}^N \right)^2 = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{k=1}^{N} \sum_{j=1}^{k} (k - j + 1)^{2H-3} = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{k=1}^{N} \sum_{l=1}^{k} l^{2H-3} = \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H}} \sum_{l=1}^{N} l^{2H-3} (N - l + 1) \le \frac{\left( H - \frac{1}{2} \right)^2}{N^{2H-1}} \sum_{l=1}^{N} l^{2H-3} \to 0, \quad \text{as } N \to \infty,$$
since $H \in (1/2, 1)$ and $\sum_{l=1}^{\infty} l^{2H-3} < \infty$. □
Thus, we have proved that all assumptions of Theorem 3 are satisfied. As a consequence, we obtain the following result.
Theorem 8.
Assume that $b_{j,k}^N$, $1 \le j \le k \le N$, is a triangular array of real numbers defined by (36), $\xi_j$, $j \ge 1$, are iid Rademacher random variables, and the process $Z^H$ is given by (35). Then the sequence of stochastic processes
$$S_N(t) = \prod_{k=1}^{[Nt]} \left( 1 + \sum_{j=1}^{k} b_{j,k}^N \xi_j \right), \quad t \in [0,1],$$
converges in $D[0,1]$ to $S(t) = e^{Z^H(t)}$, $t \in [0,1]$.

Author Contributions

Conceptualization, Y.M.; investigation, Y.M., K.R. and S.S.; writing–original draft preparation, Y.M., K.R. and S.S.; writing–review and editing, Y.M., K.R. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

Y.M. and K.R. acknowledge that the present research was carried out within the frame and with the support of the ToppForsk project nr. 274410 of the Research Council of Norway with title STORM: Stochastics for Time-Space Risk Models.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Monotonicity Along the Diagonal of a Triangular Matrix in the Cholesky Decomposition of a Positive Definite Toeplitz Matrix

In this appendix we establish the connection between the prediction of a stationary Gaussian process and the Cholesky decomposition of its covariance matrix. It turns out that the positivity of the coefficients of the predictor implies the monotonicity along the diagonals of the triangular matrix in the Cholesky decomposition.
Let $X = \{X_n, n \in \mathbb{Z}\}$ be a centered stationary Gaussian discrete-time process with known autocovariance function
$$\gamma(k) = \operatorname{cov}(X_{n+k}, X_n) = \mathsf{E} X_{n+k} X_n.$$
Assume that all finite-dimensional distributions of $X$ are non-degenerate multivariate normal distributions. In other words, we assume that for all $n \in \mathbb{N}$, the symmetric covariance matrix of $n$ subsequent values of $X$,
$$\Gamma_n = \operatorname{cov}\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix} = \begin{pmatrix}
\gamma(0) & \gamma(1) & \cdots & \gamma(n-1) \\
\gamma(1) & \gamma(0) & \cdots & \gamma(n-2) \\
\vdots & \vdots & \ddots & \vdots \\
\gamma(n-1) & \gamma(n-2) & \cdots & \gamma(0)
\end{pmatrix},$$
is non-degenerate.

Appendix A.1. Prediction of a Stationary Stochastic Process

Let us construct the predictor of $X_n$ from the observations $X_1, \dots, X_m$. Since the joint distribution of $(X_1, \dots, X_m, X_n)$ is Gaussian, the conditional distribution of $X_n$ given $(X_1, \dots, X_m)$ is also Gaussian (Anderson 2003, Section 2.5). The parameters of this distribution are given by
$$\mathsf{E}[X_n \mid X_1, \dots, X_m] = \operatorname{cov}\left( X_n, \begin{pmatrix} X_1 \\ \vdots \\ X_m \end{pmatrix} \right) \left( \operatorname{cov}\begin{pmatrix} X_1 \\ \vdots \\ X_m \end{pmatrix} \right)^{-1} \begin{pmatrix} X_1 \\ \vdots \\ X_m \end{pmatrix} = \left( \gamma(n-1), \dots, \gamma(n-m) \right) \Gamma_m^{-1} \begin{pmatrix} X_1 \\ \vdots \\ X_m \end{pmatrix},$$
$$\operatorname{Var}[X_n \mid X_1, \dots, X_m] = \operatorname{Var}(X_n) - \left( \gamma(n-1), \dots, \gamma(n-m) \right) \Gamma_m^{-1} \begin{pmatrix} \gamma(n-1) \\ \vdots \\ \gamma(n-m) \end{pmatrix} = \gamma(0) - \left( \gamma(n-1), \dots, \gamma(n-m) \right) \Gamma_m^{-1} \begin{pmatrix} \gamma(n-1) \\ \vdots \\ \gamma(n-m) \end{pmatrix}.$$
Denote
$$\begin{pmatrix} \alpha_{n,m,1} \\ \vdots \\ \alpha_{n,m,m} \end{pmatrix} = \Gamma_m^{-1} \begin{pmatrix} \gamma(n-1) \\ \vdots \\ \gamma(n-m) \end{pmatrix}, \qquad \sigma_{n,m}^2 = \gamma(0) - \left( \gamma(n-1), \dots, \gamma(n-m) \right) \Gamma_m^{-1} \begin{pmatrix} \gamma(n-1) \\ \vdots \\ \gamma(n-m) \end{pmatrix}. \quad (A2)$$
Then
$$X_n \mid (X_1, \dots, X_m) \sim \mathcal{N}\!\left( \sum_{k=1}^{m} \alpha_{n,m,k} X_k,\; \sigma_{n,m}^2 \right). \quad (A3)$$
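Numerically, (A2)–(A3) amount to solving a single linear system with the Toeplitz matrix $\Gamma_m$; a sketch (ours; the helper name `predictor` is our own, and `scipy.linalg.solve_toeplitz` exploits the Toeplitz structure):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictor(gamma, n, m):
    """Coefficients alpha_{n,m,k}, k = 1..m, and variance sigma_{n,m}^2
    from (A2), for predicting X_n from X_1, ..., X_m (m < n).

    gamma -- 1-d array with gamma[k] = autocovariance at lag k, k = 0..n-1.
    """
    rhs = gamma[n - m:n][::-1]                # (gamma(n-1), ..., gamma(n-m))
    alpha = solve_toeplitz(gamma[:m], rhs)    # solves Gamma_m @ alpha = rhs
    sigma2 = gamma[0] - rhs @ alpha
    return alpha, sigma2
```

For example, with the autocovariance (10) of the fBm increments, `predictor(gamma, m + 1, m)` returns the one-step coefficients $\alpha_{m+1,m,k}$ whose positivity is assumed in Theorem A1 below.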
The following theorem claims that if the coefficients of the one-step-ahead predictor are positive, then the coefficients of the multi-step-ahead predictor are also positive.
Theorem A1.
Assume that the inequality $\alpha_{m+1,m,k} > 0$ holds for all $1 \le k \le m$. Then for all $1 \le k \le m < n$,
$$\alpha_{n,m,k} > 0, \quad (A4)$$
$$\alpha_{n,m,k} > \alpha_{n+1,m+1,k+1}. \quad (A5)$$
Proof. 
For a discrete-time stochastic process, the coefficients of the multi-step-ahead predictor can be evaluated by the following recursive formula:
$$\alpha_{n+1,m,k} = \alpha_{n+1,n,k} + \sum_{i=m+1}^{n} \alpha_{n+1,n,i}\, \alpha_{i,m,k}, \quad 1 \le k \le m \le n. \quad (A6)$$
(Here, by convention, $\sum_{i=n+1}^{n} = 0$.)
Fix $m$ and $k$, $1 \le k \le m$. By assumption, $\alpha_{m+1,m,k} > 0$. According to (A6), if $\alpha_{i,m,k} > 0$ for all $i = m+1, \dots, n$, then $\alpha_{n+1,m,k} > 0$. Hence, by induction, $\alpha_{n+1,m,k} > 0$ for all $n \ge m+1$. Inequality (A4) is proved.
Now let us establish Inequality (A5). Again, we prove it by induction. We use the following recursive formula from the Levinson–Durbin algorithm (Shumway and Stoffer 2017; Subba Rao 2018):
$$\alpha_{m+2,m+1,k+1} = \alpha_{m+1,m,k} - \alpha_{m+2,m+1,1}\, \alpha_{m+1,m,m+1-k}, \quad 1 \le k \le m.$$
It implies that
$$\alpha_{m+2,m+1,k+1} < \alpha_{m+1,m,k}, \quad 1 \le k \le m.$$
The induction hypothesis is the following: for all $m$ and $k$, $1 \le k \le m$, and for all $j = 1, \dots, J$,
$$\alpha_{m+j+1,m+1,k+1} < \alpha_{m+j,m,k}.$$
Hence, we need to prove the inequality
$$\alpha_{m+J+2,m+1,k+1} < \alpha_{m+J+1,m,k}, \quad 1 \le k \le m. \quad (A8)$$
Using (A6), we obtain
$$\begin{aligned}
\alpha_{m+J+2,m+1,k+1} &= \alpha_{m+J+2,m+J+1,k+1} + \sum_{i=m+2}^{m+J+1} \alpha_{m+J+2,m+J+1,i}\, \alpha_{i,m+1,k+1}
= \alpha_{m+J+2,m+J+1,k+1} + \sum_{i=m+1}^{m+J} \alpha_{m+J+2,m+J+1,i+1}\, \alpha_{i+1,m+1,k+1} \\
&< \alpha_{m+J+1,m+J,k} + \sum_{i=m+1}^{m+J} \alpha_{m+J+1,m+J,i}\, \alpha_{i,m,k} = \alpha_{m+J+1,m,k}.
\end{aligned}$$
Thus, Inequality (A8) is proved. Hence, $\alpha_{m+j+1,m+1,k+1} < \alpha_{m+j,m,k}$ for all $m$ and $k$, $1 \le k \le m$, and for all $j \in \mathbb{N}$. Equivalently, Inequality (A5) holds for all $n$, $m$ and $k$ such that $1 \le k \le m < n$. □
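The one-step recursions used above, together with (A21) and (A22) below, are exactly the Levinson–Durbin algorithm; a compact sketch (ours, in the standard lag-ordered parametrization $\phi_{m,k}$, so that $\alpha_{m+1,m,k} = \phi_{m,m+1-k}$):

```python
import numpy as np

def levinson_durbin(gamma, m):
    """Levinson-Durbin recursion (cf. Shumway and Stoffer 2017).

    gamma -- autocovariances gamma[0..m]; returns (phi, sigma2), where
    phi[k-1] is the coefficient of X_{m+1-k} in the one-step predictor of
    X_{m+1}, and sigma2 is the prediction variance sigma_{m+1,m}^2.
    """
    phi = np.array([gamma[1] / gamma[0]])
    sigma2 = gamma[0] * (1.0 - phi[0]**2)           # cf. (A21)
    for n in range(2, m + 1):
        kappa = (gamma[n] - phi @ gamma[n - 1:0:-1]) / sigma2
        phi = np.append(phi - kappa * phi[::-1], kappa)
        sigma2 *= 1.0 - kappa**2                    # cf. (A22)
    return phi, sigma2
```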

Appendix A.2. Cholesky Decomposition of the Covariance Matrix

Fix $N$. Recall that, by assumption, the covariance matrix $\Gamma_N$ of the random vector $(X_1, \dots, X_N)$ is non-degenerate. In this case the matrix $\Gamma_N$ can be uniquely represented as a product
$$\Gamma_N = L_N L_N^\top = \begin{pmatrix}
\ell_{1,1} & 0 & \cdots & 0 \\
\ell_{1,2} & \ell_{2,2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
\ell_{1,N} & \ell_{2,N} & \cdots & \ell_{N,N}
\end{pmatrix}
\begin{pmatrix}
\ell_{1,1} & \ell_{1,2} & \cdots & \ell_{1,N} \\
0 & \ell_{2,2} & \cdots & \ell_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \ell_{N,N}
\end{pmatrix}, \quad (A9)$$
where $L_N$ is a lower-triangular matrix with positive diagonal elements. This representation is called the Cholesky decomposition.
Remark A1.
The numbers $\ell_{k,n}$ do not depend on $N$:
$$\ell_{k,n}(N_1) = \ell_{k,n}(N_2) \quad \text{for all } k \le n \le \min(N_1, N_2),$$
where $\ell_{k,n}(N)$ denotes an element of the matrix $L_N$ in Decomposition (A9) for a certain value of $N$.
Since $L_N^{-1} 0_{N \times 1} = 0_{N \times 1}$ is a zero vector and $L_N^{-1} \Gamma_N (L_N^{-1})^\top = I_N$ is an identity matrix, we see that the random vector
$$\begin{pmatrix} \zeta_1 \\ \vdots \\ \zeta_N \end{pmatrix} = L_N^{-1} \begin{pmatrix} X_1 \\ \vdots \\ X_N \end{pmatrix} \quad (A10)$$
has an $N$-dimensional standard normal distribution,
$$\begin{pmatrix} \zeta_1 \\ \vdots \\ \zeta_N \end{pmatrix} \sim \mathcal{N}(0_{N \times 1}, I_N).$$
We have
$$\begin{pmatrix} X_1 \\ \vdots \\ X_N \end{pmatrix} = L_N \begin{pmatrix} \zeta_1 \\ \vdots \\ \zeta_N \end{pmatrix}. \quad (A11)$$
Hence, the values of the stochastic process $\{X_n, n = 1, \dots, N\}$ can be represented as follows:
$$X_n = \sum_{k=1}^{n} \ell_{k,n} \zeta_k, \quad (A12)$$
where $\zeta_k$, $k = 1, 2, \dots, N$, are independent standard normal random variables.
Taking into account Remark A1, Equalities (A10) and (A11) can be generalized to all $k$:
$$\begin{pmatrix} \zeta_1 \\ \vdots \\ \zeta_k \end{pmatrix} = L_k^{-1} \begin{pmatrix} X_1 \\ \vdots \\ X_k \end{pmatrix}, \qquad \begin{pmatrix} X_1 \\ \vdots \\ X_k \end{pmatrix} = L_k \begin{pmatrix} \zeta_1 \\ \vdots \\ \zeta_k \end{pmatrix},$$
where for $1 \le k \le N$ the matrix $L_k$ is a sub-matrix of $L_N$,
$$L_k = \begin{pmatrix} \ell_{1,1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ \ell_{1,k} & \cdots & \ell_{k,k} \end{pmatrix}.$$
For all $k$ the matrix $L_k$ is non-degenerate, and $L_k L_k^\top = \Gamma_k$ is the covariance matrix of the vector $(X_1, \dots, X_k)$. This implies that the $\sigma$-algebra generated by the random variables $X_1, \dots, X_k$ coincides with the $\sigma$-algebra generated by the random variables $\zeta_1, \dots, \zeta_k$:
$$\mathcal{F}_k = \sigma(X_1, \dots, X_k) = \sigma(\zeta_1, \dots, \zeta_k).$$
Theorem A2.
Assume that the condition of Theorem A1 is satisfied: for all $k$ and $m$ such that $1 \le k \le m$, the inequality $\alpha_{m+1,m,k} > 0$ holds. Then for all $1 \le k \le n$,
$$\ell_{k,n} > 0, \quad (A13)$$
$$\ell_{k,n} > \ell_{k+1,n+1}. \quad (A14)$$
Proof. 
Equation (A3) implies that for $1 \le m < n$,
$$\mathsf{E}[X_n \mid \mathcal{F}_m] = \sum_{k=1}^{m} \alpha_{n,m,k} X_k, \quad (A15)$$
$$\operatorname{Var}[X_n \mid \mathcal{F}_m] = \sigma_{n,m}^2. \quad (A16)$$
The standard Gaussian random variables $\zeta_k$ are measurable with respect to the $\sigma$-algebra $\mathcal{F}_m$ for $k \le m$ and independent of $\mathcal{F}_m$ for $k > m$. Therefore,
$$\mathsf{E}[\zeta_k \mid \mathcal{F}_m] = \begin{cases} \zeta_k & \text{for } k \le m, \\ 0 & \text{for } k > m. \end{cases} \quad (A17)$$
Hence,
$$\mathsf{E}[X_n \mid \mathcal{F}_m] = \mathsf{E}\left[ \sum_{k=1}^{n} \ell_{k,n} \zeta_k \,\Big|\, \mathcal{F}_m \right] = \sum_{k=1}^{n} \ell_{k,n}\, \mathsf{E}[\zeta_k \mid \mathcal{F}_m] = \sum_{k=1}^{m} \ell_{k,n} \zeta_k.$$
It follows from (A15) and (A17) that
$$\sum_{k=1}^{m} \ell_{k,n} \zeta_k = \sum_{k=1}^{m} \alpha_{n,m,k} X_k. \quad (A18)$$
We apply (A12) and (A18) with $n = m + 1$. We obtain
$$X_{m+1} = \sum_{k=1}^{m} \alpha_{m+1,m,k} X_k + \ell_{m+1,m+1} \zeta_{m+1}.$$
Further,
$$\sigma_{m+1,m}^2 = \operatorname{Var}[X_{m+1} \mid \mathcal{F}_m] = \mathsf{E}\left[ \left( X_{m+1} - \mathsf{E}[X_{m+1} \mid \mathcal{F}_m] \right)^2 \,\Big|\, \mathcal{F}_m \right] = \mathsf{E}\left( X_{m+1} - \sum_{k=1}^{m} \alpha_{m+1,m,k} X_k \right)^2 = \mathsf{E}\left( \ell_{m+1,m+1} \zeta_{m+1} \right)^2 = \ell_{m+1,m+1}^2.$$
Here we have used that
$$\operatorname{Var}[X_{m+1} \mid \mathcal{F}_m] = \mathsf{E}[\xi \mid \mathcal{F}_m] \text{ is deterministic}, \qquad \operatorname{Var}[X_{m+1} \mid \mathcal{F}_m] = \mathsf{E}\left[ \operatorname{Var}[X_{m+1} \mid \mathcal{F}_m] \right] = \mathsf{E}\left[ \mathsf{E}[\xi \mid \mathcal{F}_m] \right] = \mathsf{E}[\xi],$$
where $\xi = \left( X_{m+1} - \mathsf{E}[X_{m+1} \mid \mathcal{F}_m] \right)^2$. Hence,
$$\sigma_{m+1,m}^2 = \ell_{m+1,m+1}^2. \quad (A19)$$
Moreover, note that
$$\gamma(0) = \mathsf{E} X_1^2 = \mathsf{E}\left( \ell_{1,1} \zeta_1 \right)^2 = \ell_{1,1}^2. \quad (A20)$$
In the Levinson–Durbin method the conditional variance of the one-step-ahead predictor is calculated as follows (Shumway and Stoffer 2017; Subba Rao 2018):
$$\sigma_{2,1}^2 = \gamma(0) \left( 1 - \alpha_{2,1,1}^2 \right); \quad (A21)$$
$$\sigma_{m+1,m}^2 = \sigma_{m,m-1}^2 \left( 1 - \alpha_{m+1,m,1}^2 \right), \quad m = 2, 3, \dots \quad (A22)$$
Taking into account Equalities (A19) and (A20) together with (A21) and (A22), we obtain
$$\ell_{m+1,m+1}^2 = \ell_{m,m}^2 \left( 1 - \alpha_{m+1,m,1}^2 \right), \quad m \in \mathbb{N}. \quad (A23)$$
Since $\ell_{m,m} > 0$ and $\alpha_{m+1,m,1} > 0$ for all $m$, we get
$$\ell_{1,1}^2 > \ell_{2,2}^2 > \ell_{3,3}^2 > \dots, \qquad \ell_{1,1} > \ell_{2,2} > \ell_{3,3} > \dots \quad (A24)$$
Inequality (A14) is proved in the case $n = k$.
It follows from the orthonormality of the random variables $\zeta_k$ and from the representation (A12) that
$$\mathsf{E}[\zeta_k \zeta_m] = \begin{cases} 1 & \text{if } k = m, \\ 0 & \text{if } k \ne m; \end{cases} \qquad \mathsf{E}[X_k \zeta_m] = \begin{cases} \ell_{m,k} & \text{if } k \ge m, \\ 0 & \text{if } k < m. \end{cases} \quad (A25)$$
Let us transform Equality (A18), multiplying it by $\zeta_m$ and taking expectations:
$$\mathsf{E}\left[ \sum_{k=1}^{m} \ell_{k,n} \zeta_k \zeta_m \right] = \mathsf{E}\left[ \sum_{k=1}^{m} \alpha_{n,m,k} X_k \zeta_m \right], \qquad \sum_{k=1}^{m} \ell_{k,n}\, \mathsf{E}[\zeta_k \zeta_m] = \sum_{k=1}^{m} \alpha_{n,m,k}\, \mathsf{E}[X_k \zeta_m],$$
and taking into account (A25), we get
$$\ell_{m,n} = \alpha_{n,m,m}\, \ell_{m,m}. \quad (A26)$$
Since $\ell_{m,m} > 0$ (as a diagonal element of the triangular matrix in the Cholesky decomposition) and $\alpha_{n,m,m} > 0$ (by Theorem A1), we see that $\ell_{m,n} > 0$. Hence, Inequality (A13) is proved.
By Theorem A1, Inequality (A24) and Equality (A26), we get that for $k < n$,
$$\ell_{k+1,n+1} = \alpha_{n+1,k+1,k+1}\, \ell_{k+1,k+1} < \alpha_{n,k,k}\, \ell_{k,k} = \ell_{k,n}.$$
Thus Inequality (A14) is proved in the case $n > k$. This completes the proof. □
Remark A2.
It follows from (A23) that the non-strict inequalities for the elements of the main diagonal,
$$\ell_{1,1} \ge \ell_{2,2} \ge \ell_{3,3} \ge \dots,$$
can be proved without the assumption $\alpha_{m+1,m,k} > 0$ for all $1 \le k \le m$.

Appendix A.3. Application to Fractional Brownian Motion

In order to prove Conjecture A2, it suffices to apply Theorem A2 to the stationary Gaussian process $G_k = B_{k+1}^H - B_k^H$ with covariance matrix $\Gamma_N$ given by (11) (its Cholesky decomposition is considered in Lemma 2). To this end, we need to prove the positivity of the coefficients $\alpha_{m+1,m,k}$, $1 \le k \le m$, of the corresponding one-step-ahead predictor. According to (A2), these coefficients are given by
$$\begin{pmatrix} \alpha_{m+1,m,1} \\ \vdots \\ \alpha_{m+1,m,m} \end{pmatrix} = \Gamma_m^{-1} \begin{pmatrix} \gamma_m \\ \vdots \\ \gamma_1 \end{pmatrix}, \quad m \ge 1.$$
Consequently, in order to verify Conjecture A2, it suffices to establish the following result.
Conjecture A3.
For any $m \ge 1$, the solution $x \in \mathbb{R}^m$ of the linear system of equations
$$\Gamma_m x = \begin{pmatrix} \gamma_m \\ \vdots \\ \gamma_1 \end{pmatrix}$$
is positive.
For $m = 1$, we have the equation $\gamma_0 x_1 = \gamma_1$ with the positive solution $x_1 = \gamma_1 / \gamma_0 = 2^{2H-1} - 1$, see (17). Below we show that the above conjecture holds also in the particular case $m = 2$. The proof in the general case is an open problem.
Proof of Conjecture A3 for the particular case m = 2.
We have the following system:
$$\begin{cases} x_1 + \gamma_1 x_2 = \gamma_2, \\ \gamma_1 x_1 + x_2 = \gamma_1, \end{cases}$$
where $\gamma_1 = 2^{2H-1} - 1$ and $\gamma_2 = \frac{1}{2}\left( 3^{2H} + 1 \right) - 2^{2H}$, see (10). The determinant
$$\Delta := \det \Gamma_2 = \begin{vmatrix} 1 & \gamma_1 \\ \gamma_1 & 1 \end{vmatrix} = 1 - \gamma_1^2 > 0,$$
since $\gamma_1 = 2^{2H-1} - 1 < 1$. The solution is given by
$$x_1 = \frac{1}{\Delta} \begin{vmatrix} \gamma_2 & \gamma_1 \\ \gamma_1 & 1 \end{vmatrix} = \frac{\gamma_2 - \gamma_1^2}{\Delta}, \qquad x_2 = \frac{1}{\Delta} \begin{vmatrix} 1 & \gamma_2 \\ \gamma_1 & \gamma_1 \end{vmatrix} = \frac{\gamma_1 (1 - \gamma_2)}{\Delta}.$$
By (15), $0 \le \gamma_2 \le \gamma_1 \le \gamma_0 = 1$. Therefore, $x_2 > 0$. In order to prove that $x_1 > 0$, we need to establish the inequality $\gamma_2 > \gamma_1^2$, which is equivalent to
$$\frac{1}{2}\left( 3^{2H} + 1 \right) - 2^{2H} > \left( 2^{2H-1} - 1 \right)^2.$$
This inequality can be simplified to
$$2^{4H} - 2 \cdot 3^{2H} + 2 < 0.$$
Let $x = 2H \in (1, 2)$. We need to prove that
$$f(x) := 4^x - 2 \cdot 3^x + 2 < 0, \quad x \in (1, 2).$$
Note that $f(1) = f(2) = 0$. The first and second derivatives are equal to
$$f'(x) = 4^x \log 4 - 2 \cdot 3^x \log 3, \qquad f''(x) = 4^x \log^2 4 - 2 \cdot 3^x \log^2 3.$$
Hence, in order to show that $f''(x) > 0$, we need to prove that $4^x \log^2 4 > 2 \cdot 3^x \log^2 3$, which is equivalent to $\left( \frac{4}{3} \right)^x > \frac{2 \log^2 3}{\log^2 4}$. Since $x > 1$, it suffices to show that $\frac{4}{3} > \frac{2 \log^2 3}{\log^2 4}$, which is true and can be checked by a straightforward calculation.
Thus, we have proved that $f''(x) > 0$ for $x \in (1,2)$, i.e., the function $f$ is convex on $(1,2)$. Then $f$ is negative on $(1,2)$, since $f(1) = f(2) = 0$. Consequently, we have $x_1 > 0$. □
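Beyond $m = 2$, the conjecture can at least be checked numerically (a sketch, ours; the helper name `check_conjecture_a3` is our own):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def check_conjecture_a3(H, m_max=100):
    """Check numerically that Gamma_m x = (gamma_m, ..., gamma_1)^T has a
    positive solution for m = 1, ..., m_max; returns the first m where
    positivity fails, or None if no counterexample is found."""
    k = np.arange(0, m_max + 1, dtype=float)
    gamma = 0.5 * ((k + 1)**(2*H) + np.abs(k - 1)**(2*H) - 2 * k**(2*H))
    for m in range(1, m_max + 1):
        x = solve_toeplitz(gamma[:m], gamma[1:m + 1][::-1])
        if not np.all(x > 0):
            return m
    return None

print(check_conjecture_a3(0.75))   # expected: None (conjecture holds numerically)
```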

References

  1. Anderson, Theodore Wilbur. 2003. An Introduction to Multivariate Statistical Analysis. New York: Wiley.
  2. Berenhaut, Kenneth S., and Dipankar Bandyopadhyay. 2005. Monotone convex sequences and Cholesky decomposition of symmetric Toeplitz matrices. Linear Algebra and its Applications 403: 75–85.
  3. Bezborodov, Viktor, Luca Di Persio, and Yuliya Mishura. 2019. Option pricing with fractional stochastic volatility and discontinuous payoff function of polynomial growth. Methodology and Computing in Applied Probability 21: 331–66.
  4. Billingsley, Patrick. 1995. Probability and Measure, 3rd ed. New York: John Wiley & Sons.
  5. Billingsley, Patrick. 1999. Convergence of Probability Measures, 2nd ed. New York: John Wiley & Sons.
  6. Cordero, Fernando, Irene Klein, and Lavinia Perez-Ostafe. 2016. Asymptotic proportion of arbitrage points in fractional binary markets. Stochastic Processes and their Applications 126: 315–36.
  7. Davidson, James, and Nigar Hashimzade. 2009. Type I and type II fractional Brownian motions: A reconsideration. Computational Statistics & Data Analysis 53: 2089–106.
  8. Davydov, Yurii Aleksandrovich. 1970. The invariance principle for stationary processes. Theory of Probability & Its Applications 15: 487–98.
  9. Föllmer, Hans, and Alexander Schied. 2011. Stochastic Finance: An Introduction in Discrete Time, extended ed. Berlin: Walter de Gruyter & Co.
  10. Gatheral, Jim, Thibault Jaisson, and Mathieu Rosenbaum. 2018. Volatility is rough. Quantitative Finance 18: 933–49.
  11. Gorodetskii, Vasiliy V. 1977. On convergence to semi-stable Gaussian processes. Theory of Probability & Its Applications 22: 498–508.
  12. Grimmett, Geoffrey R., and David R. Stirzaker. 2001. Probability and Random Processes, 3rd ed. New York: Oxford University Press.
  13. Hubalek, Friedrich, and Walter Schachermayer. 1998. When does convergence of asset price processes imply convergence of option prices? Mathematical Finance 8: 385–403.
  14. Marinucci, Domenico, and Peter M. Robinson. 1999. Alternative forms of fractional Brownian motion. Journal of Statistical Planning and Inference 80: 111–22.
  15. Mishura, Yuliya. 2008. Stochastic Calculus for Fractional Brownian Motion and Related Processes. Berlin: Springer-Verlag, vol. 1929.
  16. Mishura, Yuliya. 2015a. Diffusion approximation of recurrent schemes for financial markets, with application to the Ornstein-Uhlenbeck process. Opuscula Mathematica 35: 99–116.
  17. Mishura, Yuliya. 2015b. The rate of convergence of option prices on the asset following a geometric Ornstein-Uhlenbeck process. Lithuanian Mathematical Journal 55: 134–49.
  18. Mishura, Yuliya. 2015c. The rate of convergence of option prices when general martingale discrete-time scheme approximates the Black-Scholes model. Banach Center Publications 104: 151–65.
  19. Prigent, Jean-Luc. 2003. Weak Convergence of Financial Markets. Berlin: Springer-Verlag.
  20. Rogers, Leonard C. G. 1997. Arbitrage with fractional Brownian motion. Mathematical Finance 7: 95–105.
  21. Shumway, Robert H., and David S. Stoffer. 2017. Time Series Analysis and Its Applications: With R Examples, 4th ed. Cham: Springer.
  22. Sottinen, Tommi. 2001. Fractional Brownian motion, random walks and binary market models. Finance and Stochastics 5: 343–55.
  23. Subba Rao, Suhasini. 2018. A Course in Time Series Analysis. Available online: https://www.stat.tamu.edu/~suhasini/teaching673/time_series.pdf (accessed on 29 January 2020).
