
Quadratic Spline Wavelets for Sparse Discretization of Jump–Diffusion Models

Department of Mathematics and Didactics of Mathematics, Technical University in Liberec, Studentská 2, Liberec 46117, Czech Republic
Symmetry 2019, 11(8), 999; https://doi.org/10.3390/sym11080999
Received: 29 June 2019 / Revised: 24 July 2019 / Accepted: 30 July 2019 / Published: 3 August 2019
(This article belongs to the Special Issue Symmetry in Special Functions and Orthogonal Polynomials)

Abstract
This paper is concerned with a construction of new quadratic spline wavelets on a bounded interval satisfying homogeneous Dirichlet boundary conditions. The inner wavelets are translations and dilations of four generators. Two of them are symmetrical and two anti-symmetrical. The wavelets have three vanishing moments and the basis is well-conditioned. Furthermore, wavelets at levels $i$ and $j$ where $|i-j| > 2$ are orthogonal. Thus, matrices arising from discretization by the Galerkin method with this basis have $\mathcal{O}(1)$ nonzero entries in each column for various types of differential equations, which is not the case for most other wavelet bases. To illustrate applicability, the constructed bases are used for option pricing under jump–diffusion models, which are represented by partial integro-differential equations. Due to the orthogonality property and the decay of entries of matrices corresponding to the integral term, the Crank–Nicolson method with Richardson extrapolation combined with the wavelet–Galerkin method also leads to matrices that can be approximated by matrices with $\mathcal{O}(1)$ nonzero entries in each column. Numerical experiments are provided for European options under the Merton model.

1. Introduction

Nowadays, various models and methodologies are available to calculate theoretical values of options, including the famous Black–Scholes model as well as stochastic volatility models such as the Heston or the Stein and Stein models. These models assume that the price of the underlying asset is represented by a continuous function, which is not always consistent with real market prices. This paper focuses on jump–diffusion models originally suggested by Merton [1] and later generalized (e.g., in [2]). These models assume that the spot price $S_{\tilde t}$ of the underlying asset at time $\tilde t$ follows the jump–diffusion process
$$\frac{dS_{\tilde t}}{S_{\tilde t^-}} = \mu \, d\tilde t + \sigma \, dW_{\tilde t} + d\left( \sum_{i=1}^{N_{\tilde t}} \left( V_i - 1 \right) \right),$$
where $S_{\tilde t^-}$ denotes the left-hand limit of $S$ at $\tilde t$, $\mu$ is the drift rate, $W_{\tilde t}$ is a standard Brownian motion and $\sigma$ is the volatility associated with the Brownian component of the process. $N_{\tilde t}$ is a Poisson process with intensity $\lambda$ that is independent of $W_{\tilde t}$, and $(V_i)$ is a sequence of independent identically distributed nonnegative random variables such that $Y_i = \log V_i$ has a distribution with the probability density function $g$. There are several possible choices for the function $g$. In the Merton model, $Y_i$ is normally distributed with mean $\mu_J$ and standard deviation $\sigma_J$, and
$$g(x) = \frac{1}{\sqrt{2\pi}\,\sigma_J} \, e^{-\frac{\left( x - \mu_J \right)^2}{2\sigma_J^2}}.$$
For other models and more details, see [1,2,3]. Let the variable $t = T - \tilde t$ represent time to maturity and $r$ be a risk-free rate. Using the no-arbitrage principle and standard Itô calculus, one can derive that the market price $U(S,t)$ of the option at time to maturity $t$ is the solution of the equation [1,2,3,4]:
$$\frac{\partial U}{\partial t} - \mathcal{D} U - \mathcal{I} U = 0, \quad S > 0, \ t \in (0,T),$$
where the operators $\mathcal{D}$ and $\mathcal{I}$ are given by
$$\mathcal{D} U = \frac{\sigma^2 S^2}{2} \frac{\partial^2 U}{\partial S^2} + \left( r - \lambda\kappa \right) S \frac{\partial U}{\partial S} - \left( r + \lambda \right) U$$
and
$$\mathcal{I} U = \lambda \int_{\mathbb{R}} U\left( S e^x, t \right) g(x) \, dx.$$
The parameter $\kappa$ represents the expected value $E\left( V_i - 1 \right)$. The initial and boundary conditions depend on the type of the option.
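To make the jump part of the model concrete, the following sketch evaluates the Merton density $g$ and the parameter $\kappa = E(V_i - 1)$, for which the lognormal distribution of $V_i = e^{Y_i}$ gives the closed form $e^{\mu_J + \sigma_J^2/2} - 1$; the numerical values of $\mu_J$ and $\sigma_J$ here are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def trap(fx, x):
    # simple composite trapezoidal rule
    return float(np.sum((fx[1:] + fx[:-1]) * np.diff(x)) / 2.0)

# Merton jump-size density g; mu_J and sigma_J are illustrative values.
mu_J, sigma_J = -0.9, 0.45

def g(x):
    return np.exp(-(x - mu_J)**2 / (2.0*sigma_J**2)) / (np.sqrt(2.0*np.pi)*sigma_J)

# kappa = E(V_i - 1) with V_i = e^{Y_i}, Y_i ~ N(mu_J, sigma_J^2);
# the lognormal mean gives the closed form exp(mu_J + sigma_J^2/2) - 1.
kappa = np.exp(mu_J + sigma_J**2/2.0) - 1.0

x = np.linspace(-6.0, 6.0, 20001)
mass = trap(g(x), x)                          # total probability, close to 1
kappa_quad = trap((np.exp(x) - 1.0)*g(x), x)  # quadrature value of E(V_i - 1)
print(mass, kappa_quad, kappa)
```

The quadrature value of $E(V_i - 1)$ agrees with the closed form, which is the quantity entering the drift correction $r - \lambda\kappa$ above.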
In Section 3, Equation (3) is transformed to logarithmic prices and restricted to a bounded domain. Then, the Galerkin method combined with the Crank–Nicolson scheme is used for its numerical solution. The stability of the scheme and convergence of the method have already been investigated (see [4,5,6] and references therein). This paper focuses on the structure of the discretization matrices. It is known that matrices arising from wavelet discretization of differential operators have a so-called finger pattern (see Figure 1, left). In [7,8,9,10], cubic and quartic wavelet bases were constructed such that the bi-infinite stiffness and mass matrices have a finite number of nonzero entries in each column and have a similar structure, as shown in Figure 1 (left). This simplifies the algorithm and increases the efficiency of wavelet-based methods. However, a construction of quadratic spline wavelets with this property has not yet been proposed, and such a construction is the main aim of this paper. It should also be mentioned that the wavelet bases from [11,12,13], which are semiorthogonal with respect to the $H^1$-seminorm, lead to banded matrices for the one-dimensional Poisson equation, and the $L^2$-orthogonal wavelets from [14,15] lead to diagonal mass matrices.
The differential operator $\mathcal{D}$ is a special case of an operator $\mathcal{A}$ defined by
$$\mathcal{A} u = -\sum_{k,l=1}^{d} \frac{\partial}{\partial x_k}\left( p_{k,l} \frac{\partial u}{\partial x_l} \right) + \sum_{k=1}^{d} q_k \frac{\partial u}{\partial x_k} + p_0 u,$$
for $d \in \mathbb{N}$. Let $p_{k,l} = p_{l,k}$ and $P = \left( p_{k,l} \right)_{k,l=1}^{d}$ be positive definite on $\Omega$. Then, the problem
$$\mathcal{A} u = f \ \text{ on } \ \Omega = \prod_{i=1}^{d} \left( a_i, b_i \right), \qquad u = 0 \ \text{ on } \ \partial\Omega,$$
covers the Poisson equation, the Helmholtz equation, the stationary convection–diffusion equation and various equations from financial mathematics, such as equations representing the Black–Scholes model and the Heston model semidiscretized in time using the $\theta$-scheme.
For $d = 1$, denote $a = a_1$, $b = b_1$, $h = (b-a)/2^j$, $I_k = \left( (k-1)h, kh \right)$,
$$P_{m,j}(a,b) = \left\{ v \in L^2(a,b) : v|_{I_k} \in \Pi_m\left( I_k \right), \ k = 1, \dots, 2^j \right\},$$
and
$$P^0_{m,j}(a,b) = P_{m,j}(a,b) \cap C(a,b),$$
where $\Pi_m(I_k)$ represents the set of all polynomials on $I_k$ of degree at most $m$. The main aim of this paper is to construct a quadratic spline wavelet basis $\Psi$ on the interval $(a,b)$ satisfying homogeneous Dirichlet boundary conditions such that the matrix
$$\mathcal{A}_{\Psi,\Psi} = \left( \left\langle \mathcal{A} \psi_\lambda, \psi_\mu \right\rangle \right)_{\psi_\lambda, \psi_\mu \in \Psi},$$
where $\langle \cdot, \cdot \rangle$ is the $L^2$-inner product, has $\mathcal{O}(1)$ nonzero entries in each column if $p_{1,1} \in P^0_{2,l}(a,b)$, $q_1 \in P^0_{1,l}(a,b)$, and $p_0 \in P_{0,l}(a,b)$ for some $l \in \mathbb{N}$.
Wavelet bases on product domains can be constructed using an isotropic or anisotropic tensor product approach (see, e.g., [16,17]). The discretization matrices for such multidimensional wavelet bases also have $\mathcal{O}(1)$ nonzero entries in each row if
$$p_{k,l} \in \bigotimes_{k=1}^{d} P^0_{2,l}\left( a_k, b_k \right), \qquad q_k \in \bigotimes_{k=1}^{d} P^0_{1,l}\left( a_k, b_k \right), \qquad p_0 \in \bigotimes_{k=1}^{d} P_{0,l}\left( a_k, b_k \right).$$
Although the property that the matrix has $\mathcal{O}(1)$ nonzero entries in each column is valid for any $l \in \mathbb{N}$, in most applications $l$ should be small.
Since the differential operator $\mathcal{D}$ is a special case of the operator $\mathcal{A}$, the discretization matrix for this operator is sparse. This paper also aims to derive decay estimates for the entries of the matrix arising from discretization of the integral term using the Galerkin method with the proposed wavelet basis and to show that truncated discretization matrices for the problem in Equation (3) are sparse. The sparsity of the discretization matrix has two advantages. First, the multiplication of this matrix with a vector requires $\mathcal{O}(N)$ floating-point operations, while other quadratic spline wavelet bases require $\mathcal{O}(N \log N)$ operations and other bases such as a quadratic B-spline basis require $\mathcal{O}(N^2)$ operations because the matrix is typically full. This increases the efficiency of iterative methods for the numerical solution of the resulting discrete system. Second, due to the smaller number of elements, the computation of the discretization matrix is faster for the Galerkin method with the proposed basis than for the Galerkin method with other bases of the same order, e.g., other quadratic spline wavelet bases and quadratic B-spline bases.
In addition to the orthogonality property, it is required that the wavelets have vanishing moments and that the wavelet basis is well conditioned. Vanishing wavelet moments determine the decay of entries of discretization matrices. Since the constructed wavelets have three vanishing moments, this decay is fast (see Theorem 5 in Section 3). Furthermore, due to vanishing moments, the basis can be used in adaptive wavelet methods (see [17]). It is important that the basis is well-conditioned because the condition numbers of discretization matrices depend on the condition numbers of the basis, and small condition numbers of system matrices guarantee the stability of computation and influence the number of iterations of iterative methods used for the numerical solution of the resulting system.
Due to these interesting properties, the wavelet basis proposed in this paper can be used in many applications such as the numerical solution of various types of operator equations using the wavelet-Galerkin method, an adaptive wavelet method or a collocation method. For a survey of such applications, refer to [4,17,18].
The paper is organized as follows. In Section 2, a construction of a wavelet basis satisfying the aforementioned properties is proposed and a rigorous proof of its Riesz basis property is provided. It is shown that the condition numbers of the basis are small with respect to both the $L^2$-norm and the $H^1$-seminorm. In Section 3, the problem in Equation (3) is discretized and the properties of the discretization matrices are studied. Finally, in Section 4, numerical experiments are provided for pricing European options under the Merton model, and it is shown that the proposed method is efficient because it can achieve high-order convergence with respect to both time and spatial variables, the number of iterations is small, and, due to the sparsity of system matrices, one iteration only requires a small number of floating-point operations. Furthermore, in comparison with methods from [19,20,21], the proposed method requires a smaller number of degrees of freedom to obtain a sufficiently accurate solution.

2. Construction of Wavelets

First, briefly recall the concept of a wavelet basis. Let $\mathcal{J}$ be an at most countable index set such that each index $\lambda \in \mathcal{J}$ takes the form $\lambda = (j,k)$, and denote $|\lambda| = j \in \mathbb{Z}$. The norm of $\mathbf{v} = \left( v_\lambda \right)_{\lambda \in \mathcal{J}}$, $v_\lambda \in \mathbb{R}$, is defined by
$$\left\| \mathbf{v} \right\| = \sqrt{ \sum_{\lambda \in \mathcal{J}} v_\lambda^2 }.$$
The space of all sequences $\mathbf{v}$ with finite norm is denoted by
$$\ell^2\left( \mathcal{J} \right) = \left\{ \mathbf{v} : \mathbf{v} = \left( v_\lambda \right)_{\lambda \in \mathcal{J}}, \ v_\lambda \in \mathbb{R}, \ \left\| \mathbf{v} \right\| < \infty \right\}.$$
The symbol $L^2(a,b)$ denotes the space of square-integrable functions defined on $(a,b)$. Let $H \subset L^2(a,b)$ be a real Hilbert space equipped with the inner product $\langle \cdot, \cdot \rangle_H$ and the norm $\| \cdot \|_H$; e.g., $H$ is the Sobolev space $H^1_0(a,b)$ of functions that vanish at the boundary points and whose first weak derivatives are in $L^2(a,b)$. The aim is to construct a wavelet basis for $H$ in the sense of the following definition.
Definition 1.
A family $\Psi = \left\{ \psi_\lambda, \ \lambda \in \mathcal{J} \right\}$ is called a wavelet basis of $H$ if:
(i) 
$\Psi$ is a Riesz basis for $H$, i.e., the span of $\Psi$ is dense in $H$ and there exist constants $c, C \in (0,\infty)$ such that
$$c \left\| \mathbf{b} \right\| \le \left\| \sum_{\lambda \in \mathcal{J}} b_\lambda \psi_\lambda \right\|_H \le C \left\| \mathbf{b} \right\|,$$
for all $\mathbf{b} = \left( b_\lambda \right)_{\lambda \in \mathcal{J}} \in \ell^2\left( \mathcal{J} \right)$.
(ii) 
The functions are local in the sense that
$$\operatorname{diam}\left( \operatorname{supp} \psi_\lambda \right) \le C \, 2^{-|\lambda|}, \quad \lambda \in \mathcal{J},$$
where the constant $C$ does not depend on $\lambda$, and at a given level $j$ the supports of only finitely many wavelets overlap at any point $x$.
(iii) 
The family $\Psi$ has the hierarchical structure
$$\Psi = \Phi_{j_0} \cup \bigcup_{j=j_0}^{K} \Psi_j$$
for some $K \in \mathbb{N} \cup \{\infty\}$.
(iv) 
There exists $L \ge 1$ such that all functions $\psi_\lambda \in \Psi_j$, $j_0 \le j \le K$, have $L$ vanishing moments, i.e.,
$$\int_a^b x^k \psi_\lambda(x) \, dx = 0, \quad k = 0, \dots, L-1.$$
For two countable sets of functions $\Gamma, \Theta \subset L^2(\Omega)$, the symbol $\langle \Gamma, \Theta \rangle$ denotes the matrix
$$\langle \Gamma, \Theta \rangle = \left( \langle \gamma, \theta \rangle \right)_{\gamma \in \Gamma, \, \theta \in \Theta}.$$
The constants
$$c_\Psi = \sup \left\{ c : c \text{ satisfies (14)} \right\} \quad \text{and} \quad C_\Psi = \inf \left\{ C : C \text{ satisfies (14)} \right\}$$
are called a lower and upper Riesz bound, respectively, and the number $\operatorname{cond} \Psi = C_\Psi / c_\Psi$ is called the condition number of $\Psi$. In some papers, the squares of norms are used in Equation (14) and the Riesz bounds are defined as $c_\Psi^2$ and $C_\Psi^2$. The Gram matrix $\langle \Psi, \Psi \rangle$ can be finite or biinfinite, and it is known that it represents a linear operator that is continuous, positive definite, and self-adjoint, and that the constants $c_\Psi$ and $C_\Psi$ satisfy
$$c_\Psi = \sqrt{ \lambda_{min}\left( \langle \Psi, \Psi \rangle \right) }, \qquad C_\Psi = \sqrt{ \lambda_{max}\left( \langle \Psi, \Psi \rangle \right) }.$$
If Ψ satisfies Equation (14) but the span of Ψ is not necessarily dense in H, then Ψ is called a Riesz sequence in H.
The definition of a wavelet basis is not unified in the mathematical literature, and Conditions (i)–(iv) from Definition 1 can be generalized. The functions from the set $\Phi_{j_0}$ are called scaling functions, and the functions from the set $\Psi_j$, $j \ge j_0$, are called wavelets on the level $j$. Wavelets in the inner part of the interval are typically translations and dilations of one function $\psi$ or several functions $\psi^1, \dots, \psi^p$, also called wavelets, i.e.,
$$\psi_{j,k}(x) = 2^{j/2} \psi^l\left( 2^j x - m \right),$$
for some $l \in \{1, \dots, p\}$ and some $m \in \mathbb{Z}$, and similarly the wavelets near the boundary are derived from functions called boundary wavelets.
In the following, a construction of a new wavelet basis is proposed. Scaling functions are defined as in [22,23,24]. Let ϕ and ϕ b be quadratic B-splines on knots [ 0 , 1 , 2 , 3 ] and [ 0 , 0 , 1 , 2 ] , respectively. Then, ϕ and ϕ b have the explicit form
$$\phi(x) = \begin{cases} \dfrac{x^2}{2}, & x \in [0,1], \\[2pt] -x^2 + 3x - \dfrac{3}{2}, & x \in [1,2], \\[2pt] \dfrac{x^2}{2} - 3x + \dfrac{9}{2}, & x \in [2,3], \\[2pt] 0, & \text{otherwise}; \end{cases} \qquad \phi_b(x) = \begin{cases} -\dfrac{3x^2}{2} + 2x, & x \in [0,1], \\[2pt] \dfrac{x^2}{2} - 2x + 2, & x \in [1,2], \\[2pt] 0, & \text{otherwise}. \end{cases}$$
The graphs of the functions ϕ b and ϕ are displayed in Figure 2.
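The piecewise forms above can be evaluated directly; the following sketch checks the continuity of $\phi$ at the knots and its unit integral by quadrature (a plausibility check of the formulas, not part of the construction).

```python
import numpy as np

# Pointwise evaluation of the quadratic B-splines phi (knots [0,1,2,3]) and
# phi_b (knots [0,0,1,2]) from the piecewise forms above.
def phi(x):
    if 0.0 <= x <= 1.0:
        return x*x/2.0
    if 1.0 < x <= 2.0:
        return -x*x + 3.0*x - 1.5
    if 2.0 < x <= 3.0:
        return x*x/2.0 - 3.0*x + 4.5
    return 0.0

def phi_b(x):
    # phi_b(0) = 0, so the boundary scaling functions satisfy the
    # homogeneous Dirichlet condition at the endpoint.
    if 0.0 <= x <= 1.0:
        return -1.5*x*x + 2.0*x
    if 1.0 < x <= 2.0:
        return x*x/2.0 - 2.0*x + 2.0
    return 0.0

xs = np.linspace(0.0, 3.0, 30001)
vals = np.array([phi(t) for t in xs])
integral = float(np.sum((vals[1:] + vals[:-1]) * np.diff(xs)) / 2.0)
print(integral)   # B-splines have unit integral
```

Both pieces of $\phi$ take the value $1/2$ at the interior knots, confirming continuity.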
For $j \ge 2$, the functions
$$\phi_{j,k}(x) = 2^{j/2} \phi\left( 2^j x - k + 2 \right), \quad k = 2, \dots, 2^j - 1,$$
$$\phi_{j,1}(x) = 2^{j/2} \phi_b\left( 2^j x \right), \qquad \phi_{j,2^j}(x) = 2^{j/2} \phi_b\left( 2^j (1-x) \right),$$
where $x \in [0,1]$, form the scaling basis $\Phi_j = \left\{ \phi_{j,k}, \ k = 1, \dots, 2^j \right\}$, and the spaces $V_j = \operatorname{span} \Phi_j$ form a multiresolution analysis.
For simplicity, denote $P_{m,j} = P_{m,j}(0,1)$ and $P^0_{m,j} = P^0_{m,j}(0,1)$. To obtain sparse discretization matrices, dual spaces $\tilde V_j$ and complement spaces $W_j$ are defined by
$$\tilde V_j = P_{2,j-2} + P_{0,j-1}, \qquad W_j = \tilde V_j^{\perp} \cap V_{j+1},$$
where $\tilde V_j^{\perp}$ is the $L^2$-orthogonal complement of $\tilde V_j$.
Lemma 1.
The functions $g \in W_i$ and $h \in W_j$, $i, j \ge 2$, $|i-j| > 2$, satisfy
$$\left\langle p_0 g, h \right\rangle = 0, \qquad \left\langle q_1 g', h \right\rangle = 0, \qquad \text{and} \qquad \left\langle p_{1,1} g', h' \right\rangle = 0$$
for $p_{1,1} \in P^0_{2,l}$, $q_1 \in P^0_{1,l}$, and $p_0 \in P_{0,l}$, $l \le \max(i,j) - 2$. Furthermore, the functions $g \in W_i$ and $h \in W_j$, $i, j \ge 2$, $|i-j| > 1$, satisfy $\langle g, h \rangle = 0$.
Proof. 
Assume that $j > i + 2$, $i, j \ge 2$, and $l \le \max(i,j) - 2$. Then, $g \in W_i \subset V_{i+1} \subset V_{j-2} \subset \tilde V_j$, and thus $p_0 g \in \tilde V_j$. Since $h \in W_j$ and $W_j$ is orthogonal to $\tilde V_j$, $\langle p_0 g, h \rangle = 0$ is obtained. Using a similar argument and the relations $\left\langle q_1 g', h \right\rangle = -\left\langle g, q_1' h + q_1 h' \right\rangle$ and $\left\langle p_{1,1} g', h' \right\rangle = -\left\langle p_{1,1}' g' + p_{1,1} g'', h \right\rangle$, the remaining part of the lemma is proved. ☐
Therefore, if wavelets are defined as basis functions for the spaces $W_j$, then the matrices in Equation (10) will be sparse. Figure 1 shows the cases $p_{1,1} = 1$, $p_0 = q_1 = 0$ and $p_0 = 1$, $q_1 = p_{1,1} = 0$. The inner wavelet generators are defined by
$$\psi^l(x) = \sum_{k=0}^{5} h_k^l \, \phi(2x - k), \qquad \psi^{l+2}(x) = \sum_{k=0}^{13} h_k^{l+2} \, \phi(2x - k),$$
for $l = 1, 2$, and the boundary wavelet generator is defined by
$$\psi_b(x) = h_{-1}^b \, \phi_b(2x) + \sum_{k=0}^{5} h_k^b \, \phi(2x - k).$$
The coefficients $h_k^l$ are computed such that $\psi^l$ is $L^2$-orthogonal to $p_m(x - 4k)$, $k \in \mathbb{Z}$, where
$$p_i(x) = \begin{cases} (x-2)^i, & x \in (0,4), \\ 0, & \text{otherwise}, \end{cases} \qquad p_{i+2}(x) = \begin{cases} 1, & x \in (2i-2, \, 2i), \\ 0, & \text{otherwise}, \end{cases}$$
for $i = 1, 2$.
for i = 1 , 2 . This leads to systems of linear algebraic equations with infinitely many solutions. Using numerical experiments, the coefficients h k l that lead to a well-conditioned wavelet basis were found, namely
$$\left( h_0^1, \dots, h_5^1 \right) = \left( -1, \, 3, \, -2, \, -2, \, 3, \, -1 \right),$$
$$\left( h_0^2, \dots, h_5^2 \right) = \left( -1, \, \tfrac{7}{3}, \, -2, \, 2, \, -\tfrac{7}{3}, \, 1 \right),$$
$$\left( h_0^3, \dots, h_6^3 \right) = \left( \tfrac{7}{10}, \, \tfrac{11}{20}, \, -\tfrac{24}{25}, \, -\tfrac{27}{10}, \, -\tfrac{653}{250}, \, \tfrac{5343}{500}, \, -\tfrac{708}{125} \right),$$
$$\left( h_0^4, \dots, h_6^4 \right) = \left( \tfrac{13}{20}, \, -\tfrac{8}{25}, \, -\tfrac{9}{100}, \, -\tfrac{153}{100}, \, -\tfrac{423}{200}, \, \tfrac{4823}{600}, \, -\tfrac{139}{20} \right),$$
and $h_k^3 = h_{13-k}^3$, $h_k^4 = -h_{13-k}^4$, for $k = 7, \dots, 13$, and for the boundary wavelet
$$\left( h_{-1}^b, \dots, h_5^b \right) = \left( -\tfrac{121}{20}, \, \tfrac{841}{120}, \, -\tfrac{371}{200}, \, -\tfrac{131}{100}, \, -\tfrac{17}{100}, \, -\tfrac{4}{25}, \, \tfrac{13}{25} \right).$$
The graphs of constructed wavelets are displayed in Figure 2.
The functions $\psi^1$ and $\psi^3$ are symmetric; the functions $\psi^2$ and $\psi^4$ are antisymmetric; $\operatorname{supp} \psi^l = [0,4]$ for $l = b, 1, 2$; $\operatorname{supp} \psi^l = [0,8]$ for $l = 3, 4$; and all the wavelets have three vanishing moments.
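As a numerical sanity check of the construction, the sketch below assembles the inner generator $\psi^1$ from the two-scale relation and verifies its three vanishing moments by quadrature; the signed coefficient vector $(-1, 3, -2, -2, 3, -1)$ used here is consistent with the orthogonality conditions above.

```python
import numpy as np

# Quadratic B-spline phi on knots [0,1,2,3], vectorized.
def phi(x):
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x <= 1), x**2/2,
           np.where((x > 1) & (x <= 2), -x**2 + 3*x - 1.5,
           np.where((x > 2) & (x <= 3), x**2/2 - 3*x + 4.5, 0.0)))

# psi^1(x) = sum_k h_k phi(2x - k); supp psi^1 = [0, 4].
h1 = [-1.0, 3.0, -2.0, -2.0, 3.0, -1.0]
x = np.linspace(0.0, 4.0, 400001)
dx = x[1] - x[0]
psi1 = sum(h*phi(2.0*x - k) for k, h in enumerate(h1))

# trapezoidal moments int x^m psi^1(x) dx for m = 0, 1, 2
moments = [float(np.sum((x**m*psi1)[1:] + (x**m*psi1)[:-1]) * dx / 2.0)
           for m in range(3)]
print(moments)   # all three moments are (numerically) zero
```

The three moments vanish up to quadrature error, which is the property that drives the decay estimates of Theorem 5.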
For $j \ge 2$, a wavelet basis on the level $j$,
$$\Psi_j = \left\{ \psi_{j,k} / \left\| \psi_{j,k} \right\|_{L^2(0,1)}, \ k = 1, \dots, 2^j \right\},$$
contains the functions
$$\psi_{j,1}(x) = 2^{j/2} \psi_b\left( 2^j x \right), \qquad \psi_{j,2^j}(x) = 2^{j/2} \psi_b\left( 2^j (1-x) \right),$$
$$\psi_{j,4k+l+1}(x) = 2^{j/2} \psi^l\left( 2^j x - 4k \right), \quad l = 1, \dots, 4, \quad 1 < 4k+l+1 < 2^j,$$
for $x \in [0,1]$.
For $j \ge j_0$ and $j_0 = 2$, the sets
$$\Psi = \Phi_{j_0} \cup \bigcup_{j=j_0}^{\infty} \Psi_j, \qquad \Psi_s = \Phi_{j_0} \cup \bigcup_{j=j_0}^{j_0+s-1} \Psi_j,$$
are a wavelet basis in the space $L^2(0,1)$ and its finite-dimensional subset, respectively. In the following, the proof of the Riesz basis property of $\Psi$ is provided.
Theorem 1.
The wavelets $\psi_{j,k}$ have three vanishing moments, and $\Psi_j$, $j \ge 2$, are Riesz bases of the spaces $W_j$ such that their lower Riesz bounds $c_{\Psi_j}$ and upper Riesz bounds $C_{\Psi_j}$ are uniformly bounded, i.e., they satisfy $0 < c < c_{\Psi_j} \le C_{\Psi_j} < C < \infty$ for some constants $c$ and $C$ independent of $j$.
Proof. 
It was already mentioned that $\psi^l$, $l = 1, 2, 3, 4, b$, have three vanishing moments. Indeed, since $\psi^l$ are defined to be $L^2$-orthogonal to $p_m(x - 4k)$, $k \in \mathbb{Z}$, and the polynomials $x^i$, $i = 0, 1, 2$, restricted to the support of $\psi^l$ are linear combinations of $p_m(x - 4k)$, the relation $\langle \psi^l, x^i \rangle = 0$ is obtained, and thus $\psi^l$ have three vanishing moments. Therefore, the wavelets $\psi_{j,k}$ have three vanishing moments as well.
The $L^2$-orthogonality of $\psi^l$ and $p_m(x - 4k)$, $k \in \mathbb{Z}$, implies the $L^2$-orthogonality of $\psi_{j,k}$ to the functions $p_m\left( 2^j x - 4l \right)$, $m = 1, \dots, 4$, $l = 0, \dots, 2^{j-2} - 1$, which form a basis of $\tilde V_j$, and thus $\psi_{j,k} \in W_j$. Since the number of elements in $\Psi_j$ is equal to the dimension of $W_j$, the set $\Psi_j$ is a basis of $W_j$. Every finite-dimensional basis is a Riesz basis, and thus it remains to be proven that the Riesz bounds for $\Psi_j$ are uniformly bounded. Let $\Gamma_j = \langle \Psi_j, \Psi_j \rangle$ and $n = 2^j$. The matrix $\Gamma_j$ has a similar structure as the matrix $G_j$ in Equation (42), and since its entries are $L^2$-products of piecewise polynomial functions, one is able to compute them precisely or with arbitrary precision. For $j \ge 2$, the Gershgorin circle theorem from [25] yields
$$c_{\Psi_j} = \sqrt{ \lambda_{min}\left( \Gamma_j \right) } \ge \sqrt{ \min_{k=1,\dots,n} \left( \left( \Gamma_j \right)_{k,k} - \sum_{\substack{l=1 \\ l \ne k}}^{n} \left| \left( \Gamma_j \right)_{k,l} \right| \right) } \ge 0.3,$$
$$C_{\Psi_j} = \sqrt{ \lambda_{max}\left( \Gamma_j \right) } \le \sqrt{ \max_{k=1,\dots,n} \left( \left( \Gamma_j \right)_{k,k} + \sum_{\substack{l=1 \\ l \ne k}}^{n} \left| \left( \Gamma_j \right)_{k,l} \right| \right) } \le 1.7.$$
 ☐
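The Gershgorin estimate used in the proof can be reproduced in a few lines; the sketch below bounds the extreme eigenvalues of a symmetric matrix by its diagonal entries plus or minus the off-diagonal row sums. The small matrix is a made-up Gram-like example, not $\Gamma_j$ itself.

```python
import numpy as np

# Gershgorin bounds for a symmetric matrix: every eigenvalue lies in
# [min_k(a_kk - r_k), max_k(a_kk + r_k)], with r_k the off-diagonal row sum.
def gershgorin(A):
    r = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return float((np.diag(A) - r).min()), float((np.diag(A) + r).max())

# Example symmetric matrix (an assumption for illustration, not Gamma_j).
A = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.1],
              [0.0, 0.1, 1.0]])
lo, hi = gershgorin(A)
eig = np.linalg.eigvalsh(A)
print(lo, eig.min(), eig.max(), hi)   # lo <= all eigenvalues <= hi
```

For Gram matrices of the wavelet bases, such bounds translate into Riesz bounds via the square roots of the extreme eigenvalues.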
The proof of the Riesz basis property in Equation (14) for Ψ is based on the following theorem [8,10,26].
Theorem 2.
Let $j_0 \in \mathbb{N}$ and, for $j \ge j_0$, let $V_j$ and $\tilde V_j$ be subspaces of the space $V \subset L^2(0,1)$ such that $V_j \subset V_{j+1}$, $\tilde V_j \subset \tilde V_{j+1}$, $\dim V_j = \dim \tilde V_j < \infty$. Let $\Phi_j$ be bases of $V_j$, $\tilde\Phi_j$ be bases of $\tilde V_j$, and $\Psi_j$ be bases of $\tilde V_j^{\perp} \cap V_{j+1}$, such that the Riesz bounds with respect to the $L^2$-norm of $\Phi_j$, $\tilde\Phi_j$ and $\Psi_j$ are uniformly bounded, and let $\Psi$ be composed of $\Phi_{j_0}$ and $\Psi_j$, $j \ge j_0$, as in Equation (31). Furthermore, assume that
$$G_j = \left\langle \Phi_j, \tilde\Phi_j \right\rangle$$
is invertible and that the spectral norm of $G_j^{-1}$ is bounded independently of $j$. In addition, for some positive constants $C$, $\gamma$ and $d$, such that $\gamma < d$, let
$$\inf_{v_j \in V_j} \left\| v - v_j \right\|_{L^2(0,1)} \le C \, 2^{-jt} \left\| v \right\|_{H^t(0,1)}, \quad v \in H^t(0,1) \cap V, \quad 0 \le t \le d,$$
and
$$\left\| v_j \right\|_{H^s(0,1)} \le C \, 2^{js} \left\| v_j \right\|_{L^2(0,1)}, \quad v_j \in V_j, \quad 0 \le s < \gamma,$$
and similarly let Equations (35) and (36) hold for $\tilde\gamma$ and $\tilde d$ on the dual side. Then,
$$\left\{ \psi_{j,k} / \left\| \psi_{j,k} \right\|_{H^s(0,1)}, \ \psi_{j,k} \in \Psi \right\}$$
is a Riesz sequence in $H^s(0,1)$ for $s \in \left( -\tilde\gamma, \gamma \right)$.
The following theorem shows that the spaces V j and V ˜ j defined above satisfy the assumptions of Theorem 2.
Theorem 3.
There exist uniform Riesz bases $\tilde\Phi_j$ of $\tilde V_j$ such that the matrices $G_j$ defined by Equation (34) are invertible and the spectral norms of $G_j^{-1}$ are bounded independently of $j$.
Proof. 
Let $p_k$ be defined by Equation (24). For $l = 1, \dots, 4$, let
$$\tilde\phi^l = \sum_{k=1}^{4} c_k^l \, p_k,$$
such that
$$\left\langle \phi(\cdot - m), \tilde\phi^l \right\rangle = 0, \quad |m - l| > 1, \qquad \left\langle \phi(\cdot - l), \tilde\phi^l \right\rangle = 1.$$
Since $\operatorname{supp} \phi(\cdot - m) \cap \operatorname{supp} \tilde\phi^l$ has measure zero for $m \notin \{-2, \dots, 4\}$, the relations in Equation (38) lead to a system of four linear algebraic equations with four unknown coefficients for each function $\tilde\phi^l$. The invertibility of all four system matrices was verified using symbolic computations. Thus, the functions $\tilde\phi^l$ exist and are unique. Then,
$$\tilde\Phi_j = \left\{ \tilde\phi_{j,k}, \ k = 1, \dots, 2^j \right\},$$
where
$$\tilde\phi_{j,4k+l}(x) = 2^{j/2} \tilde\phi^l\left( 2^j x - 4k \right), \quad l = 1, 2, 3, 4, \quad k \in \mathbb{Z},$$
is a basis of $\tilde V_j$, and the matrix $G_j$ defined by Equation (34) is tridiagonal, with a block structure composed of a boundary block $G_L$, repeated inner blocks $G$, and a boundary block $G_R$ (block diagram omitted).
Using symbolic computation and rounding the resulting elements of the matrix $G$ to three decimal digits,
$$G = \begin{pmatrix} 0.423 & 0 & 0 & 0 \\ 1.000 & 0.422 & 0 & 0 \\ 0.502 & 1.000 & 0.167 & 0 \\ 0 & 0.159 & 1.000 & 0.403 \\ 0 & 0 & 0.417 & 1.000 \\ 0 & 0 & 0 & 0.406 \end{pmatrix}.$$
Similarly, the matrices $G_L$ and $G_R$ are given by
$$G_L = \begin{pmatrix} 0.577 & 0.422 & 0 & 0 \\ 0.502 & 1.000 & 0.169 & 0 \\ 0 & 0.159 & 1.000 & 0.403 \\ 0 & 0 & 0.417 & 1.000 \\ 0 & 0 & 0 & 0.406 \end{pmatrix}$$
and
$$G_R = \begin{pmatrix} 0.423 & 0 & 0 & 0 \\ 1.000 & 0.422 & 0 & 0 \\ 0.502 & 1.000 & 0.169 & 0 \\ 0 & 0.159 & 1.000 & 0.403 \\ 0 & 0 & 0.417 & 0.594 \end{pmatrix}.$$
Thus, the matrices $G_j$ and $G_j^T$ are diagonally dominant and invertible. Due to Johnson's lower bound for the smallest singular value [27],
$$\sigma_{min}\left( G_j \right) \ge \min_{k=1,\dots,n} \left( \left( G_j \right)_{k,k} - \frac{1}{2} \sum_{\substack{l=1 \\ l \ne k}}^{n} \left( \left| \left( G_j \right)_{k,l} \right| + \left| \left( G_j \right)_{l,k} \right| \right) \right) \ge 0.115,$$
where $n = 2^j$. Therefore, the spectral norm of the inverse matrix satisfies
$$\left\| G_j^{-1} \right\| = \left( \sigma_{min}\left( G_j \right) \right)^{-1} \le 8.696.$$
It remains to be proven that $\tilde\Phi_j$ are uniform Riesz bases of $\tilde V_j$. The matrices
$$\tilde\Gamma_j = \left\langle \tilde\Phi_j, \tilde\Phi_j \right\rangle$$
are block diagonal matrices with $4 \times 4$ boundary blocks $\tilde\Gamma_L$ and $\tilde\Gamma_R$ and inner blocks $\tilde\Gamma$ that do not depend on $j$. The Riesz lower bound $c_{\tilde\Phi_j}$ and the Riesz upper bound $C_{\tilde\Phi_j}$ satisfy
$$c_{\tilde\Phi_j} = \sqrt{ \lambda_{min}\left( \tilde\Gamma_j \right) } = \sqrt{ \min\left\{ \lambda_{min}\left( \tilde\Gamma_L \right), \lambda_{min}\left( \tilde\Gamma_R \right), \lambda_{min}\left( \tilde\Gamma \right) \right\} } \ge 1.60,$$
$$C_{\tilde\Phi_j} = \sqrt{ \lambda_{max}\left( \tilde\Gamma_j \right) } = \sqrt{ \max\left\{ \lambda_{max}\left( \tilde\Gamma_L \right), \lambda_{max}\left( \tilde\Gamma_R \right), \lambda_{max}\left( \tilde\Gamma \right) \right\} } \le 7.74.$$
 ☐
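Johnson's lower bound from the proof above is equally easy to check numerically; the matrix below is an illustrative diagonally dominant example, not the actual $G_j$.

```python
import numpy as np

# Johnson's lower bound for the smallest singular value:
# sigma_min(A) >= min_k ( |a_kk| - (1/2) * sum_{l != k} (|a_kl| + |a_lk|) ).
def johnson_bound(A):
    off = np.sum(np.abs(A), axis=1) + np.sum(np.abs(A), axis=0) \
          - 2.0*np.abs(np.diag(A))
    return float(np.min(np.abs(np.diag(A)) - 0.5*off))

# Small diagonally dominant example (illustrative only).
G = np.array([[2.0, 0.3, 0.0],
              [0.2, 2.0, 0.1],
              [0.0, 0.4, 2.0]])
bound = johnson_bound(G)
sigma_min = float(np.linalg.svd(G, compute_uv=False).min())
print(bound, sigma_min)   # bound <= sigma_min, and ||G^{-1}|| = 1/sigma_min
```

A positive bound directly gives a uniform estimate of the spectral norm of the inverse, exactly as in the proof.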
Theorem 4.
The set $\Psi$ satisfies Equation (14) for $H = H^s(0,1)$, $s \in (-0.5, 1.5)$; especially, $\Psi$ is a Riesz basis of the space $L^2(0,1)$, and $\Psi$ normalized in the $H^1$-norm or in the $H^1$-seminorm is a Riesz basis of the space $H^1_0(0,1)$.
Proof. 
Using the Gershgorin circle theorem similarly to the proof of Theorem 1, the estimates $c_{\Phi_j} \ge 0.1$ and $C_{\Phi_j} \le 1$ for the Riesz lower and upper bounds of $\Phi_j$, respectively, are obtained. The estimates in Equations (35) and (36) are satisfied for $d = 3$, $\tilde d = 3$, $\gamma = 1.5$, and $\tilde\gamma = 0.5$. These parameters depend on the polynomial exactness and smoothness of the primal and dual spaces (see [26]). Due to these facts, as well as Theorems 2 and 3, the proof is complete. ☐
Table 1 presents the minimal and maximal eigenvalues and the condition numbers (cond) of diagonally preconditioned stiffness and mass matrices, i.e.,
$$\kappa_{L^2} = \operatorname{cond} \left\langle \left\{ \frac{\psi_\lambda}{\left\| \psi_\lambda \right\|_{L^2(0,1)}} \right\}_{\psi_\lambda \in \Psi_s}, \ \left\{ \frac{\psi_\lambda}{\left\| \psi_\lambda \right\|_{L^2(0,1)}} \right\}_{\psi_\lambda \in \Psi_s} \right\rangle$$
and
$$\kappa_{H^1} = \operatorname{cond} \left\langle \left\{ \frac{\psi_\lambda'}{\left| \psi_\lambda \right|_{H^1(0,1)}} \right\}_{\psi_\lambda \in \Psi_s}, \ \left\{ \frac{\psi_\lambda'}{\left| \psi_\lambda \right|_{H^1(0,1)}} \right\}_{\psi_\lambda \in \Psi_s} \right\rangle,$$
where $\psi_\lambda'$ denotes the first derivative of $\psi_\lambda$.
These values correspond to the lower and upper Riesz bounds and the condition numbers of the normalized wavelet bases with respect to the $L^2$-norm and to the $H^1$-seminorm $| \cdot |_{H^1(0,1)}$. Although the aim is to construct a quadratic spline wavelet basis that leads to sparse discretization matrices rather than the optimization of the condition number, the resulting basis is better conditioned than many other quadratic spline wavelet bases (see the comparison of quadratic spline wavelet bases in [16]).
A wavelet basis on a bounded interval $(a,b)$ can be constructed from the proposed wavelet basis on the unit interval using the simple linear transform $y = a + (b-a)x$, $x \in [0,1]$. Wavelet bases on the hypercube that are constructed using an isotropic, anisotropic, or sparse tensor product approach (see, e.g., [4,16,17]) preserve the properties of the wavelet basis on the interval, such as the Riesz basis property, vanishing moments, and the sparse structure of the discretization matrices.

3. Discretization of the Jump–Diffusion Option Pricing Models

In this section, the Galerkin method with the constructed wavelet basis is used for valuation of options under jump–diffusion models. The choice of the method is motivated by the fact that the Galerkin method using a wavelet basis, also called the wavelet-Galerkin method, has several advantages for equations containing an integral term. As mentioned above, the discretization matrices for the wavelet-Galerkin method are sparse or quasi-sparse, while most of the standard methods suffer from the fact that the discretization matrices are full. Furthermore, the wavelet-Galerkin method is higher-order accurate if higher-order bases are used and the solution is sufficiently smooth. For many types of equations, the discretization matrices are well-conditioned, which results in a small number of iterations when using iterative methods for solving the resulting discrete system. For details on the methods for the numerical solution of integral equations and operator equations containing the integral term, see [4,18].
Recall that the jump–diffusion models are represented by the partial integro-differential Equation (3). The initial and boundary conditions depend on the type of the option. Here, the method is presented for a European put option. The value of a European call option can be computed using the put–call parity [3]. The initial condition for a vanilla European put option is $U(S,0) = \max(K - S, 0)$, where $K$ is the strike price, and the boundary conditions have the form $U(0,t) = K e^{-rt}$, $U(S,t) \to 0$ for $S \to \infty$.
The minimal value $S_{min} > 0$ and the maximal value $S_{max}$ are chosen such that the domain $\Omega = \left( S_{min}, S_{max} \right)$ approximates the unbounded domain $(0, \infty)$. Since $U(S,t) \approx K e^{-rt} - S$ for small $S$, the boundary conditions have the form
$$U\left( S_{min}, t \right) = K e^{-rt} - S_{min}, \qquad U\left( S_{max}, t \right) = 0.$$
It is convenient to transform Equation (3) to logarithmic prices $x = \log S$, because the transformed differential operator
$$\hat{\mathcal{D}} \hat U = \frac{\sigma^2}{2} \frac{\partial^2 \hat U}{\partial x^2} + \left( r - \lambda\kappa - \frac{\sigma^2}{2} \right) \frac{\partial \hat U}{\partial x} - \left( r + \lambda \right) \hat U$$
has constant coefficients.
The transformed equation is given by
$$\frac{\partial \hat U}{\partial t} - \hat{\mathcal{D}} \hat U - \hat{\mathcal{I}} \hat U = 0, \quad x \in X, \ t \in (0,T),$$
where $\hat U(x,t) = U\left( e^x, t \right)$, $X = \left( x_{min}, x_{max} \right)$, $x_{min} = \log S_{min}$, $x_{max} = \log S_{max}$, and
$$\hat{\mathcal{I}} \hat U = \lambda \int_{\mathbb{R}} \hat U(x+y, t) \, g(y) \, dy = \lambda \int_{\mathbb{R}} \hat U(y, t) \, g(y-x) \, dy.$$
The error caused by localization, i.e., by solving Equation (54) on a bounded domain $X$ instead of on the whole real line, was studied in [4,28]. Due to the decay of the value of a put option and of the probability density function at infinity,
$$\int_{x_{max}}^{\infty} \hat U(y,t) \, g(y-x) \, dy \to 0 \quad \text{for} \quad x_{max} \to \infty.$$
Furthermore, since $U(S,t) \approx K e^{-rt} - S$ for $S$ close to zero, the integral term $\hat{\mathcal{I}}$ can be approximated by
$$\hat{\mathcal{I}} \hat U \approx I_1 + I_2 \hat U,$$
where
$$I_1 = \lambda \int_{-\infty}^{x_{min}} \left( K e^{-rt} - e^y \right) g(y-x) \, dy, \qquad I_2 \hat U = \lambda \int_{x_{min}}^{x_{max}} \hat U(y,t) \, g(y-x) \, dy.$$
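A direct quadrature sketch of the truncated integral term $I_2$: for each grid point $x$, the integral over $(x_{min}, x_{max})$ is approximated by the trapezoidal rule. All parameter values and the put-like profile chosen for $\hat U$ are assumptions for illustration.

```python
import numpy as np

# Trapezoidal sketch of I_2(x) = lambda * int_{x_min}^{x_max} U(y) g(y-x) dy.
lam, mu_J, sigma_J = 0.1, -0.9, 0.45           # illustrative parameters (assumed)
x_min, x_max, n = -3.0, 3.0, 601
y = np.linspace(x_min, x_max, n)
dy = y[1] - y[0]
U = np.maximum(1.0 - np.exp(y), 0.0)           # put-like profile with K = 1 (assumed)

def g(z):
    return np.exp(-(z - mu_J)**2 / (2.0*sigma_J**2)) / (np.sqrt(2.0*np.pi)*sigma_J)

def trap(fy):
    return float(np.sum(fy[1:] + fy[:-1]) * dy / 2.0)

I2U = np.array([lam * trap(U * g(y - xi)) for xi in y])
print(I2U.shape, float(I2U.min()), float(I2U.max()))
```

Since the kernel depends only on $y - x$, this is a convolution; an FFT-based application would reduce the cost of the loop above from $\mathcal{O}(n^2)$ to $\mathcal{O}(n \log n)$, although the wavelet compression discussed below is what the paper exploits instead.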
The boundary condition at the point $x_{min}$ is a non-homogeneous Dirichlet boundary condition. Therefore, one can transform Equation (54) into an equation with homogeneous Dirichlet boundary conditions. Let $\tilde U = \hat U - W$, where $\hat U$ is the solution of Equation (54) satisfying the initial and boundary conditions defined above, and $W$ is a sufficiently smooth function satisfying the boundary conditions. A possible choice of the function $W$ is
$$W(x,t) = \left( K e^{-rt} - e^{x_{min}} \right) \left( 1 + x_{min} - x \right)$$
for $x \in \bar X$ and $t \in [0,T]$. Then, $\tilde U$ is the solution of the equation
$$\frac{\partial \tilde U}{\partial t} - \hat{\mathcal{D}} \tilde U - I_2 \tilde U = f_W, \qquad f_W = -\frac{\partial W}{\partial t} + \hat{\mathcal{D}} W + I_2 W + I_1,$$
satisfying the initial condition
$$\tilde U(x,0) = U\left( e^x, 0 \right) - W(x,0), \quad x \in X,$$
and the boundary conditions
$$\tilde U\left( x_{min}, t \right) = 0, \qquad \tilde U\left( x_{max}, t \right) = 0, \quad t \in (0,T).$$
The symbol $L^2(0,T;B)$ denotes the Bochner space of functions $f$ such that $f(t) = f(\cdot, t) \in B$ for $t \in (0,T)$ and
$$\left\| f \right\|_{L^2(0,T;B)} = \left( \int_0^T \left\| f(t) \right\|_B^2 \, dt \right)^{1/2} < \infty,$$
with $\| \cdot \|_B$ being the norm in the Banach space $B$. Let $a$ be a bilinear form defined by
$$a(u,v) = \left\langle \hat{\mathcal{D}} u, v \right\rangle + \left\langle I_2 u, v \right\rangle,$$
for all $u, v \in H^1_0(X)$.
Then, the variational formulation of Equation (60) reads as:
Find $\tilde U \in L^2\left( 0,T; H^1_0(X) \right)$ such that $\frac{\partial \tilde U}{\partial t} \in L^2\left( 0,T; H^{-1}(X) \right)$, and $\tilde U$ satisfies Equation (61) and
$$\left\langle \frac{\partial \tilde U}{\partial t}, v \right\rangle - a\left( \tilde U, v \right) = \left\langle f_W, v \right\rangle, \quad \forall v \in H^1_0(X),$$
almost everywhere in $(0,T)$.
It can be shown that the bilinear form a is continuous and satisfies a Gårding inequality, which implies the existence of a unique solution to this problem (see [4]).
The Crank–Nicolson scheme is used for time discretization. Let
$$M \in \mathbb{N}, \qquad \tau = \frac{T}{M}, \qquad t_l = l\tau, \quad l = 0, \dots, M,$$
and denote
$$\tilde U_l(x) = \tilde U\left( x, t_l \right), \qquad f_l(x) = f_W\left( x, t_l \right).$$
The Crank–Nicolson scheme has the form
$$\frac{\left\langle \tilde U_{l+1}, v \right\rangle}{\tau} - \frac{a\left( \tilde U_{l+1}, v \right)}{2} = \frac{\left\langle \tilde U_l, v \right\rangle}{\tau} + \frac{a\left( \tilde U_l, v \right)}{2} + \left\langle \frac{f_l + f_{l+1}}{2}, v \right\rangle$$
for $l = 0, \dots, M-1$.
Let $\Psi$ be a wavelet basis for the space $L^2(X)$ such that $\Psi$ normalized in the $H^1$-norm is a wavelet basis for the space $H^1_0(X)$. Let $\Psi_s$ be a finite-dimensional subset of $\Psi$ with $s$ levels of wavelets, i.e., $\Psi_s$ has the structure in Equation (31), and denote $V_s = \operatorname{span} \Psi_s$. The Galerkin method consists in finding $\tilde U^s_{l+1} \in V_s$ such that
$$\frac{\left\langle \tilde U^s_{l+1}, v \right\rangle}{\tau} - \frac{a\left( \tilde U^s_{l+1}, v \right)}{2} = \frac{\left\langle \tilde U^s_l, v \right\rangle}{\tau} + \frac{a\left( \tilde U^s_l, v \right)}{2} + \left\langle \frac{f_l + f_{l+1}}{2}, v \right\rangle$$
for all $v \in V_s$. Setting $v = \psi_\mu \in \Psi_s$ and expanding $\tilde U^s_{l+1}$ in the basis $\Psi_s$, i.e.,
$$\tilde U^s_{l+1} = \sum_{\psi_\lambda \in \Psi_s} u^s_\lambda \psi_\lambda,$$
the vector of coefficients $\mathbf{u}^s = \left( u^s_\lambda \right)$ is the solution of the system of linear algebraic equations $\mathbf{A}^s \mathbf{u}^s = \mathbf{f}^s$, where
$$A^s_{\mu,\lambda} = \frac{\left\langle \psi_\lambda, \psi_\mu \right\rangle}{\tau} - \frac{a\left( \psi_\lambda, \psi_\mu \right)}{2}$$
and
$$f^s_\mu = \frac{\left\langle \tilde U^s_l, \psi_\mu \right\rangle}{\tau} + \frac{a\left( \tilde U^s_l, \psi_\mu \right)}{2} + \left\langle \frac{f_l + f_{l+1}}{2}, \psi_\mu \right\rangle.$$
It is obvious that $\mathbf{f}^s$ and $\mathbf{u}^s$ depend on the time level $t_l$, but for simplicity the index $l$ is omitted.
The stability of the Crank–Nicolson scheme and error estimates for the Galerkin method combined with the Crank–Nicolson scheme have already been studied (see, e.g., [4,5,6] and references therein). For quadratic spline basis functions and for sufficiently smooth solutions, the $L^2$-norm of the error depends on the error of approximation of the function representing the initial condition in the space $V_s$ and a term of order $\mathcal{O}\left( \tau^2 + h^3 \right)$, where $h$ represents the spatial step, in this case given by $h = 1/N$ for $N$ basis functions.
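The time discretization can be illustrated on a model problem. The sketch below applies the Crank–Nicolson scheme to the heat equation $u_t = u_{xx}$ with homogeneous Dirichlet conditions, using second-order finite differences in space as a stand-in for the wavelet–Galerkin discretization of the paper, and compares the result with the exact solution $u(x,t) = e^{-\pi^2 t}\sin(\pi x)$.

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on (0,1), u(0) = u(1) = 0; finite differences
# in space stand in for the wavelet-Galerkin discretization of the paper.
n, M, T = 199, 200, 0.1
h, tau = 1.0/(n + 1), T/M
x = np.linspace(h, 1.0 - h, n)
A = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
     + np.diag(np.ones(n-1), -1)) / h**2
Impl = np.eye(n) - 0.5*tau*A       # implicit (left-hand) operator of each step
Expl = np.eye(n) + 0.5*tau*A       # explicit (right-hand) operator of each step
u = np.sin(np.pi*x)                # initial condition
for _ in range(M):
    u = np.linalg.solve(Impl, Expl @ u)
err = float(np.max(np.abs(u - np.exp(-np.pi**2*T)*np.sin(np.pi*x))))
print(err)                          # small: the scheme is O(tau^2 + h^2) here
```

With the wavelet basis of the paper, the dense solve above would be replaced by an iterative method applied to the sparse preconditioned system.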
In the following, the structure of the discretization matrix is studied. Using the Jacobi diagonal preconditioner $\mathbf{D}^s$, where the diagonal elements of $\mathbf{D}^s$ satisfy
$$D^s_{\lambda,\lambda} = \sqrt{ A^s_{\lambda,\lambda} },$$
gives the preconditioned system
$$\tilde{\mathbf{A}}^s \tilde{\mathbf{u}}^s = \tilde{\mathbf{f}}^s$$
with
$$\tilde{\mathbf{A}}^s = \left( \mathbf{D}^s \right)^{-1} \mathbf{A}^s \left( \mathbf{D}^s \right)^{-1}, \qquad \tilde{\mathbf{f}}^s = \left( \mathbf{D}^s \right)^{-1} \mathbf{f}^s, \qquad \tilde{\mathbf{u}}^s = \mathbf{D}^s \mathbf{u}^s.$$
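A minimal sketch of this diagonal scaling: with $D_{\lambda,\lambda} = \sqrt{A_{\lambda,\lambda}}$, the preconditioned matrix $D^{-1} A D^{-1}$ has unit diagonal. The SPD test matrix below is an arbitrary stand-in, not an actual wavelet stiffness matrix.

```python
import numpy as np

# Jacobi diagonal preconditioning: A_tilde = D^{-1} A D^{-1},
# D = diag(sqrt(A_kk)); the scaled matrix has unit diagonal.
rng = np.random.default_rng(1)
B = rng.standard_normal((40, 40))
A = B @ B.T + 40.0*np.eye(40)          # SPD stand-in for A^s (assumed example)
d = np.sqrt(np.diag(A))
A_tilde = A / np.outer(d, d)           # equivalent to D^{-1} A D^{-1}
print(np.allclose(np.diag(A_tilde), 1.0))
```

Because the scaling is symmetric, $\tilde A$ stays symmetric positive definite, which keeps iterative solvers such as conjugate gradients applicable.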
As is already known from the previous section, the matrix arising from discretization of the differential operator $\hat{\mathcal{D}}$ is sparse and has the structure displayed in Figure 1 (middle). Hence, the main focus is on the properties of the matrix $\mathbf{C}^s$ corresponding to the integral term, i.e., the matrix with entries
$$C^s_{\mu,\lambda} = \left\langle I_2 \psi_\lambda, \psi_\mu \right\rangle, \quad \psi_\lambda, \psi_\mu \in \Psi_s.$$
For the Galerkin method with the standard spline basis, the matrix $\mathbf{C}^s$ is full. However, it is known that, for integral operators with some types of kernels and for wavelet bases with vanishing moments, many entries of discretization matrices are small and can be thresholded, and the matrices can be approximated by matrices that are sparse or quasi-sparse (see, e.g., [4,18,29,30]). The following theorem provides the decay estimates for the entries of the matrix $\mathbf{C}^s$ corresponding to general wavelets with $L$ vanishing moments.
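The compression effect can be illustrated synthetically. The sketch below generates a matrix whose entries follow the level-wise decay $2^{-(L+1/2)(i+j)}$ of the following theorem with $L = 3$ vanishing moments, truncates entries below $10^{-7}$, and checks that the truncated matrix is sparse while matrix–vector products change only negligibly; the decay model and dimensions are assumptions for illustration.

```python
import numpy as np

# Synthetic model of the decay |C_{(i,k),(j,l)}| <= c * 2^{-(L+1/2)(i+j)},
# with L = 3; entries below eps are dropped.
rng = np.random.default_rng(0)
L = 3
lev = np.concatenate([np.full(2**j, j) for j in range(2, 8)])   # levels 2..7
n = lev.size
C = rng.uniform(-1.0, 1.0, (n, n)) * 2.0**(-(L + 0.5)*(lev[:, None] + lev[None, :]))
eps = 1e-7
C_trunc = np.where(np.abs(C) > eps, C, 0.0)
v = rng.standard_normal(n)
nnz_ratio = np.count_nonzero(C_trunc) / float(n*n)
err = float(np.max(np.abs(C @ v - C_trunc @ v)))
print(n, nnz_ratio, err)   # few entries kept; matvec error of order n * eps
```

Only the blocks coupling low levels survive the truncation, which is the mechanism behind the quasi-sparse structure of the truncated matrix discussed at the end of this section.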
Theorem 5.
Let $\Psi$ be a wavelet basis with $L$ vanishing moments, i.e., Conditions (i)–(iv) from Definition 1 are satisfied. Let $\psi_{i,k}, \psi_{j,l} \in \Psi$ be wavelets that are generated from the generators $\psi^l$ and $\psi^m$, respectively, via translations and dilations as in Equation (21). Denote by $C_1$ the maximum of the lengths of the supports of $\psi^l$ and $\psi^m$, and
$$C_2 = \max_{n=l,m} \int_{\operatorname{supp} \psi^n} \left| \psi^n(x) \right| dx.$$
Denote $\left[ a_{i,k}, b_{i,k} \right] = \Omega_{i,k} = \operatorname{supp} \psi_{i,k}$, $\left[ a_{j,l}, b_{j,l} \right] = \Omega_{j,l} = \operatorname{supp} \psi_{j,l}$, and $I_{i,j,k,l} = \left( a_{j,l} - b_{i,k}, \ b_{j,l} - a_{i,k} \right)$. If $g \in C^{2L}\left( I_{i,j,k,l} \right)$, then
$$\left| \left\langle I_2 \psi_{i,k}, \psi_{j,l} \right\rangle \right| \le C_{i,j,k,l} \, 2^{-\left( L + \frac{1}{2} \right)(i+j)},$$
with
$$C_{i,j,k,l} = \frac{C_1^{2L} C_2^2 \, \lambda}{4^L \left( L! \right)^2} \max_{(x,y) \in \Omega_{i,j,k,l}} \left| g^{(2L)}(y-x) \right|.$$
Consequently, if $\Omega \subset \left( 0, S_{max} \right)^2$ is a bounded set such that the function
$$K(x,y) = \lambda \, g(y-x)$$
satisfies $K \in C^{2L}(\Omega)$, then there exists a constant $C$ independent of $i, j, k, l$ such that
$$\left| \left\langle I_2 \psi_{i,k}, \psi_{j,l} \right\rangle \right| \le C \, 2^{-\left( L + \frac{1}{2} \right)(i+j)},$$
for all wavelets $\psi_{i,k}$ and $\psi_{j,l}$ such that the set $\Omega_{i,j,k,l} = \Omega_{i,k} \times \Omega_{j,l}$ satisfies $\Omega_{i,j,k,l} \subset \Omega$.
Proof. 
The proof is based on a Taylor expansion of the kernel, in a similar way as in [18,29]. Let the centers of the supports of $\psi_{i,k}$ and $\psi_{j,l}$ be denoted by $x_{i,k}$ and $y_{j,l}$, respectively. If $g \in C^{2L}\left(I_{i,j,k,l}\right)$, then the function $K$ defined by Equation (80) satisfies $K \in C^{2L}\left(\Omega_{i,j,k,l}\right)$. By the Taylor theorem, there exist a function $P$ that is a polynomial of degree at most $L-1$ with respect to $x$ and a function $Q$ that is a polynomial of degree at most $L-1$ with respect to $y$ such that
$$K(x,y) = P(x,y) + Q(x,y) + R(x,y) \left(x - x_{i,k}\right)^L \left(y - y_{j,l}\right)^L,$$
where
$$R(x,y) = \frac{1}{\left(L!\right)^2} \frac{\partial^{2L} K\left(\xi(x,y)\right)}{\partial x^L \, \partial y^L}$$
and
$$\xi(x,y) = \left(x_{i,k}, y_{j,l}\right) + \alpha \left( (x,y) - \left(x_{i,k}, y_{j,l}\right) \right)$$
for some $\alpha \in (0,1)$. Due to the $L$ vanishing moments of the wavelets $\psi_{i,k}$ and $\psi_{j,l}$,
$$\int_{\Omega_{i,j,k,l}} P(x,y) \, \psi_{i,k}(x) \, \psi_{j,l}(y) \, dx \, dy = 0$$
is obtained, and similarly for $Q$.
Using Property (ii) from Definition 1 gives
$$\left| x - x_{i,k} \right|^L \leq \frac{C_1^L}{2^L} \, 2^{-Li}, \quad \left| y - y_{j,l} \right|^L \leq \frac{C_1^L}{2^L} \, 2^{-Lj}.$$
Equation (31) implies
$$\int_{\Omega_{i,k}} \left| \psi_{i,k}(x) \right| dx \leq C_2 \, 2^{-i/2}.$$
Hence,
$$\left| I_2\left(\psi_{i,k}, \psi_{j,l}\right) \right| = \left| \int_{\Omega_{i,j,k,l}} K(x,y) \, \psi_{i,k}(x) \, \psi_{j,l}(y) \, dx \, dy \right| \leq C \int_{\Omega_{i,j,k,l}} \left| x - x_{i,k} \right|^L \left| y - y_{j,l} \right|^L \left| \psi_{i,k}(x) \right| \left| \psi_{j,l}(y) \right| dx \, dy \leq C \, \frac{C_1^{2L} C_2^2}{4^L} \, 2^{-Li - Lj - i/2 - j/2},$$
with
$$C = \frac{1}{\left(L!\right)^2} \max_{x,y \in \Omega_{i,j,k,l}} \left| \frac{\partial^{2L} K(x,y)}{\partial x^L \, \partial y^L} \right|.$$
This proves the theorem. ☐
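The predicted decay can be checked numerically. The following sketch (illustrative only, not the paper's basis) uses Haar wavelets, which have $L = 1$ vanishing moment, and a smooth shifted-Gaussian stand-in for the kernel $g$; Theorem 5 then predicts that the entry shrinks by a factor of $2^{-(2L+1)} = 1/8$ when both levels $i$ and $j$ increase by one:

```python
import numpy as np

def haar(i, k, x):
    # L2-normalized Haar wavelet psi_{i,k}(x) = 2^{i/2} psi(2^i x - k),
    # which has L = 1 vanishing moment.
    t = 2.0**i * x - k
    return 2.0**(i / 2) * (((t >= 0) & (t < 0.5)).astype(float)
                           - ((t >= 0.5) & (t < 1.0)).astype(float))

def entry(i, j, g, n=1000):
    # Midpoint quadrature of I2 = integral of g(y - x) psi_{i,0}(x) psi_{j,0}(y)
    # over supp psi_{i,0} x supp psi_{j,0} = [0, 2^-i] x [0, 2^-j].
    hx, hy = 2.0**-i / n, 2.0**-j / n
    x = (np.arange(n) + 0.5) * hx
    y = (np.arange(n) + 0.5) * hy
    X, Y = np.meshgrid(x, y, indexing="ij")
    return np.sum(g(Y - X) * haar(i, 0, X) * haar(j, 0, Y)) * hx * hy

# Smooth kernel mimicking a shifted Gaussian jump density (illustrative).
g = lambda u: np.exp(-(u - 0.2)**2)

e3, e4 = abs(entry(3, 3, g)), abs(entry(4, 4, g))
# Theorem 5 with L = 1 predicts |entry| <= C * 2^{-3(i+j)/2}, i.e., a factor
# of about 1/8 when i + j increases by two.
print(e4 / e3)
```

The observed ratio is close to $1/8$, in agreement with the estimate.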
Let
$$\tilde{C}^s = \left(D^s\right)^{-1} C^s \left(D^s\right)^{-1}$$
with $D^s$ defined by Equation (73). Then, the discretization matrix $\tilde{A}^s$ is the sum of the matrix $\tilde{C}^s$ and the matrix arising from discretization of the differential operator $\hat{D}$. Due to Theorem 5, many entries of the matrix $\tilde{A}^s$ are small and can be thresholded, and thus the matrix $\tilde{A}^s$ can be represented by a sparse matrix. The structure of the truncated matrix $\tilde{A}^s$ is presented in Figure 3. This matrix contains only the entries whose absolute value exceeds $10^{-7}$, and it was computed for the option with the parameters from Example 1 and the wavelet basis from this paper containing eight levels of wavelets.
In some papers [18,29], decay estimates were derived for integrals with a kernel $K$ that has a singularity or a maximal value on the diagonal $y = x$ and decays with $|y - x|$. However, in some models such as the Merton model, with the density from Equation (2) that is used in the numerical experiments in Section 4, these estimates cannot be used, because the kernel has its maximal values for $y = x + \mu_J$ and decays exponentially with $|y - x - \mu_J|$.
Since the matrix $\tilde{A}^s$ is the same for all time levels, the system matrix can be computed, analyzed, and compressed only once, as a preprocessing step, and one can then work with the compressed matrix. However, since the computation of all integrals in Equation (76) can be time consuming, it is more convenient to use the estimates in Equations (78) and (81) to compute only the significant entries of the matrix $\tilde{C}^s$. More precisely, the following strategy can be used:
(1)
Choose a tolerance ϵ .
(2)
Compute all the entries $\tilde{C}^s_{\lambda,\mu}$ for indexes $\lambda = (i,k)$ and $\mu = (j,l)$ such that $g \notin C^{2L}\left(I_{i,j,k,l}\right)$.
(3)
Based on the estimate in Equation (81), determine the level $\tilde{L}$ such that $\left|\tilde{C}^s_{\lambda,\mu}\right| < \epsilon$ for all $i + j > \tilde{L}$ and all $\lambda, \mu$ such that $g \in C^{2L}\left(I_{i,j,k,l}\right)$.
(4)
If $i + j \leq \tilde{L}$, then use the local estimate in Equation (81) to compute only those entries for which it is not guaranteed that $\left|\tilde{C}^s_{\lambda,\mu}\right| < \epsilon$.
Note that Step (4) enables one to obtain the matrix $\tilde{C}^s$, and thus also $\tilde{A}^s$, with more zero entries. To obtain a sparse matrix, it is already sufficient to use Steps (1)–(3), i.e., to compute the entries for which $i + j \leq \tilde{L}$ and the entries in the regions where $g(y-x)$ is not smooth, and to set all other entries to zero.
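A minimal sketch of Steps (1)–(3) follows; it is not the paper's code, and the model function `bound` stands in for the actual decay estimate in Equation (81), with an assumed constant `C_const` and the three vanishing moments of the constructed basis:

```python
# Sketch of the compression strategy, Steps (1)-(3). The true entries would
# be computed by quadrature; here only the level selection is illustrated.
L = 3          # vanishing moments of the basis constructed in this paper
eps = 1e-7     # Step (1): tolerance
C_const = 1.0  # hypothetical stand-in for the constant C in Equation (81)

def bound(i, j):
    # Decay estimate |entry| <= C * 2^{-(2L+1)(i+j)/2} from Theorem 5
    return C_const * 2.0 ** (-(2 * L + 1) * (i + j) / 2)

# Step (3): level L_tilde such that bound(i, j) < eps whenever i + j > L_tilde
levels = range(1, 9)
L_tilde = max((i + j for i in levels for j in levels
               if bound(i, j) >= eps), default=0)

computed = [(i, j) for i in levels for j in levels if i + j <= L_tilde]
skipped = [(i, j) for i in levels for j in levels if i + j > L_tilde]
print(L_tilde, len(computed), len(skipped))
```

With these assumed values, only the level pairs with $i + j \leq \tilde{L}$ are computed; all remaining blocks are set to zero without evaluating any integrals.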
The impact of the truncation on the solution of the system in Equation (74) can be described as follows. Let $\hat{A}^s$ be the truncated matrix and $\hat{u}^s$ be the solution of the system $\hat{A}^s \hat{u}^s = \tilde{f}^s$. If
$$C_A = \left\| \left(\hat{A}^s\right)^{-1} \right\| \cdot \left\| \tilde{A}^s - \hat{A}^s \right\| < 1,$$
then
$$\frac{\left\| \tilde{u}^s - \hat{u}^s \right\|}{\left\| \tilde{u}^s \right\|} \leq \frac{\operatorname{cond} \tilde{A}^s}{1 - C_A} \, \frac{\left\| \tilde{A}^s - \hat{A}^s \right\|}{\left\| \tilde{A}^s \right\|}$$
(see [31]). Moreover, the matrices $\tilde{A}^s$ have uniformly bounded condition numbers [4], i.e., $\operatorname{cond} \tilde{A}^s < C$ with $C$ independent of $s$. Hence, if the threshold is chosen small enough, then $\hat{u}^s$ is close to $\tilde{u}^s$.

4. Numerical Example

In this section, numerical experiments are provided for a vanilla European option under the Merton model. Since an analytic solution is known for this model (see [1]), the errors of the numerical solution can be computed.
Example 1.
The efficiency of the proposed method is illustrated on the valuation of vanilla European options under the Merton model [1]. Put and call options with the same parameters as in [19,20,21] are considered, so that the numerical results can be compared with the methods proposed therein. The parameters are the following: option maturity $T = 0.25$ years, interest rate $r = 0.05$, volatility $\sigma = 0.15$, jump intensity $\lambda = 0.1$, and strike price $K = 100$. The probability density function is given by Equation (2) with parameters $\mu_J = -0.9$ and $\sigma_J = 0.45$. The domain of interest is the interval $[0, 2K]$, but because the localization error is largest near the right endpoint of the interval (see [4,28]), $S_{max} = 4K$ is chosen. To obtain $x_{min}$, one has to choose $S_{min} > 0$; thus, $S_{min} = 30$ is chosen. For the matrix compression, the tolerance $\epsilon = 10^{-7}$ is set.
The method proposed in the previous section is used to compute the value of the put option, and the put–call parity is used to compute the value of the call option. The resulting functions representing the values of the options are displayed in Figure 4.
In Table 2, the resulting values of the options for the asset prices $S = 90$, $S = 100$, and $S = 110$ are listed. The reference values for the put option, computed by the analytic formula from [1], are 9.28541807 for $S = 90$, 3.14902574 for $S = 100$, and 1.40118588 for $S = 110$. Table 2 also presents the pointwise errors, i.e., the differences between the computed values and the reference values.
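Such reference values can be obtained from Merton's series formula [1], which expresses the price as a Poisson-weighted sum of Black–Scholes prices. The following is a hedged sketch (not the paper's code) of the standard formula, with $\kappa = e^{\mu_J + \sigma_J^2/2} - 1$ and the jump-size mean taken as $-0.9$, the value used for this benchmark in [19,20,21]:

```python
from math import exp, log, sqrt, erf, factorial

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    # Black-Scholes price of a European put
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def merton_put(S, K, r, sigma, T, lam, mu_J, sigma_J, n_terms=50):
    # Merton's series: Poisson-weighted Black-Scholes prices with a
    # jump-adjusted rate and volatility in the n-jump term.
    kappa = exp(mu_J + 0.5 * sigma_J**2) - 1.0
    lam_p = lam * (1.0 + kappa)
    price = 0.0
    for n in range(n_terms):
        sigma_n = sqrt(sigma**2 + n * sigma_J**2 / T)
        r_n = r - lam * kappa + n * log(1.0 + kappa) / T
        weight = exp(-lam_p * T) * (lam_p * T)**n / factorial(n)
        price += weight * bs_put(S, K, r_n, sigma_n, T)
    return price

# Parameters of Example 1 (jump-size mean assumed -0.9 as in [19,20,21])
put_90 = merton_put(90.0, 100.0, 0.05, 0.15, 0.25, 0.1, -0.9, 0.45)
print(put_90)   # close to the reference value 9.28541807
```

Since the put–call parity $C - P = S - K e^{-rT}$ holds exactly for the series, the call values follow directly from the put values, as in the computation above.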
In this paper, the main focus is on the spatial discretization using the Galerkin method with the proposed basis. However, this method can also be combined with time discretization schemes other than Crank–Nicolson to obtain a higher order of convergence with respect to time. Here, the order of convergence with respect to $\tau$ is improved by simple postprocessing based on Richardson extrapolation, similar to the approach in [32,33]: the approximate solution $U_{h,\tau}$ is computed using the spatial step $h$ and the time step $\tau$, and the approximate solution $U_{h,\tau/2}$ using the spatial step $h$ and the time step $\tau/2$. Richardson extrapolation consists of computing a new approximate solution
$$U^R_{h,\tau/2} = \frac{4 U_{h,\tau/2} - U_{h,\tau}}{3}.$$
Then, it is possible to compute a sufficiently accurate solution with a significantly smaller number of time steps (see Table 2 and Table 3).
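The effect of this extrapolation can be illustrated on a toy problem (not the pricing equation): Crank–Nicolson applied to $u' = -u$, $u(0) = 1$, has an error expansion in even powers of $\tau$, so the combination above cancels the leading $O(\tau^2)$ term:

```python
import numpy as np

def crank_nicolson(M, T=1.0):
    # Crank-Nicolson for the toy problem u' = -u, u(0) = 1 (an illustrative
    # stand-in for the time discretization of the pricing equation).
    tau = T / M
    u = 1.0
    for _ in range(M):
        u *= (1.0 - tau / 2.0) / (1.0 + tau / 2.0)
    return u

exact = np.exp(-1.0)
M = 10
u_tau = crank_nicolson(M)               # time step tau
u_tau2 = crank_nicolson(2 * M)          # time step tau/2
u_rich = (4.0 * u_tau2 - u_tau) / 3.0   # Richardson extrapolation

print(abs(u_tau - exact), abs(u_rich - exact))
```

The extrapolated value is several orders of magnitude more accurate than the plain Crank–Nicolson value at the same step size, which is why far fewer time steps suffice.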
Table 3 lists the errors in the norms of the spaces $L^\infty(0, 2K)$ and $L^2(0, 2K)$. The optimal order for the Crank–Nicolson scheme is $O(\tau^2)$, and the optimal order for the quadratic spline approximation is $O(h^3)$, where $h = 1/N$ represents the spatial step. Thus, in Table 2 and Table 3, if the spatial step is decreased to $h/2$, the time step is decreased to $\tau/\sqrt{8}$, with the number of time steps rounded up ($\lceil \cdot \rceil$ denoting the upper integer part). Hence, the experimental rate of convergence with respect to the spatial step $h$ is computed as
$$\mathrm{rate} = \frac{\log\left(\mathrm{error}_{N/2,\,M/\sqrt{a}}\right) - \log\left(\mathrm{error}_{N,\,M}\right)}{\log 2}$$
for $a = 8$. Experimental rates of convergence for the method with Richardson extrapolation are computed using Equation (94) with $a = 2\sqrt{2}$.
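The rate computation can be reproduced directly from two consecutive rows of Table 3; for instance, the $L^2$ errors of the Crank–Nicolson columns for $N = 64$ and $N = 128$:

```python
from math import log

def conv_rate(error_coarse, error_fine):
    # Experimental order of convergence when the spatial step is halved:
    # rate = log(error(N/2) / error(N)) / log 2
    return log(error_coarse / error_fine) / log(2.0)

# Two consecutive L2-norm errors from Table 3 (Crank-Nicolson columns,
# N = 64 and N = 128)
rate = conv_rate(8.00e-3, 8.80e-4)
print(round(rate, 2))   # 3.18, as reported in Table 3
```

The result matches the rate 3.18 reported in the corresponding row of Table 3, close to the optimal third order of the quadratic spline approximation.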
Figure 5 shows the pointwise errors for the values of European options under the Merton model with the same parameters, in dependence on the spatial parameter $N$. In this figure, WG denotes the wavelet–Galerkin method combined with the Crank–Nicolson scheme with Richardson extrapolation as discussed in this paper, FD denotes the finite difference method from [21], FPI denotes the method from [19], which is based on fixed point iterations and the Crank–Nicolson scheme, and IMEX denotes the explicit–implicit method from [20], which is a combination of the cubic spline collocation method and the backward difference method. The number of time steps for the present method is comparable to or smaller than for the other methods (see Table 2 and the numerical experiments in [19,20,21]). Other numerical methods for the Merton model can be found in [34,35,36,37,38,39]. In comparison with the methods from [19,20,21], the spatial parameter $N$ is significantly smaller for the present method, and thus significantly smaller matrices are involved in the computation.
Table 4 presents the condition numbers (cond) of the diagonally preconditioned matrices $\tilde{A}^s$. Furthermore, the table lists the numbers of outer and inner iterations needed to solve the resulting system of equations by the generalized minimal residual method (GMRES) with the following input parameters: restart after ten iterations, a maximum of 100 outer iterations, and termination when the relative residual is less than $10^{-12}$.

5. Conclusions

In this paper, the construction of a quadratic spline wavelet basis on a bounded interval is proposed. The basis is adapted to homogeneous Dirichlet boundary conditions, the wavelets have three vanishing moments, and the wavelets on levels $i$ and $j$ are orthogonal both with respect to the $L^2$-norm and with respect to the $H^1$-seminorm if $|i - j| > 2$. A rigorous proof of the Riesz basis property is provided, and the condition numbers of the basis are presented. Using the tensor product and appropriate parametric mappings, this basis can be adapted to higher-dimensional bounded domains. Due to the properties listed in Equation (23), the matrices arising from discretization using the Galerkin method or an adaptive wavelet method are sparse for various types of equations, such as the Poisson, Helmholtz, Black–Scholes, and convection–diffusion equations.
The applicability of the constructed basis is illustrated on the jump–diffusion option pricing problem, which is represented by a partial integro-differential equation. The equation is transformed to logarithmic prices, and the Crank–Nicolson scheme is used for time discretization, combined with the wavelet–Galerkin method for spatial discretization. Decay estimates are proved for the entries of the resulting discretization matrices and, based on these estimates, a strategy for approximating the system matrix by a sparse matrix is proposed. Then, numerical results are presented for European option pricing under the Merton model. The advantage of the proposed method is the sparse structure of the matrices and, in comparison with the methods from [19,20,21], the presented method requires a significantly smaller number of degrees of freedom to compute the solution with the desired accuracy. The time step size needed to compute a sufficiently accurate solution is reduced using Richardson extrapolation. Finally, it is shown that the matrices are well-conditioned and that the number of iterations of the GMRES method used for the numerical solution of the resulting system of linear algebraic equations is small.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Merton, R.C. Option pricing when underlying stock returns are discontinuous. J. Financ. Econ. 1976, 3, 125–144.
  2. Kou, S. A jump-diffusion model for option pricing. Manag. Sci. 2002, 48, 1086–1101.
  3. Achdou, Y.; Pironneau, O. Computational Methods for Option Pricing; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2005.
  4. Hilber, N.; Reichmann, O.; Schwab, C.; Winter, C. Computational Methods for Quantitative Finance; Springer: Berlin, Germany, 2013.
  5. Hilber, N.; Schwab, C.; Winter, C. Variational sensitivity analysis of parametric Markovian market models. Banach Cent. Publ. 2008, 83, 85–106.
  6. Winter, C. Wavelet Galerkin Schemes for Option Pricing in Multidimensional Lévy Models. Ph.D. Thesis, ETH Zurich, Zürich, Switzerland, 2009.
  7. Černá, D.; Finěk, V. Sparse wavelet representation of differential operators with piecewise polynomial coefficients. Axioms 2017, 6, 4.
  8. Chegini, N.; Stevenson, R. Adaptive wavelet schemes for parabolic problems: Sparse matrices and numerical results. SIAM J. Numer. Anal. 2011, 49, 182–212.
  9. Chegini, N.; Stevenson, R. The adaptive tensor product wavelet scheme: Sparse matrices and the application to singularly perturbed problems. IMA J. Numer. Anal. 2012, 32, 75–104.
  10. Dijkema, T.J.; Stevenson, R. A sparse Laplacian in tensor product wavelet coordinates. Numer. Math. 2010, 115, 433–449.
  11. Han, B.; Michelle, M. Derivative-orthogonal Riesz wavelets in Sobolev spaces with applications to differential equations. Appl. Comput. Harm. Anal. 2017.
  12. Jia, R.Q.; Liu, S.T. Wavelet bases of Hermite cubic splines on the interval. Adv. Comput. Math. 2006, 25, 23–39.
  13. Shumilov, B.M. Semi-orthogonal spline-wavelets with derivatives and the algorithm with splitting. Numer. Anal. Appl. 2017, 10, 90–100.
  14. Donovan, G.; Geronimo, J.; Hardin, D. Intertwining multiresolution analyses and the construction of piecewise-polynomial wavelets. SIAM J. Math. Anal. 1996, 27, 1791–1815.
  15. Alpert, B.K. A class of bases in L2 for the sparse representation of integral operators. SIAM J. Math. Anal. 1993, 24, 246–262.
  16. Černá, D.; Finěk, V. Quadratic spline wavelets with short support satisfying homogeneous boundary conditions. Electron. Trans. Numer. Anal. 2018, 48, 15–39.
  17. Urban, K. Wavelet Methods for Elliptic Partial Differential Equations; Oxford University Press: Oxford, UK, 2009.
  18. Chen, Z.; Micchelli, C.A.; Xu, Y. Multiscale Methods for Fredholm Integral Equations; Cambridge University Press: Cambridge, UK, 2016.
  19. D'Halluin, Y.; Forsyth, P.A.; Vetzal, K.R. Robust numerical methods for contingent claims under jump diffusion processes. IMA J. Numer. Anal. 2005, 25, 87–112.
  20. Kadalbajoo, M.K.; Tripathi, L.P.; Kumar, A. Second order accurate IMEX methods for option pricing under Merton and Kou jump-diffusion models. J. Sci. Comput. 2015, 65, 979–1024.
  21. Kwon, Y.; Lee, Y. A second-order finite difference method for option pricing under jump-diffusion models. SIAM J. Numer. Anal. 2011, 49, 2598–2617.
  22. Černá, D.; Finěk, V. Construction of optimally conditioned cubic spline wavelets on the interval. Adv. Comput. Math. 2011, 34, 219–252.
  23. Chui, C.K.; Quak, E. Wavelets on a bounded interval. In Numerical Methods of Approximation Theory; Braess, D., Schumaker, L.L., Eds.; Birkhäuser: Basel, Switzerland, 1992; pp. 53–75.
  24. Primbs, M. New stable biorthogonal spline-wavelets on the interval. Results Math. 2010, 57, 121–162.
  25. Gerschgorin, S. Über die Abgrenzung der Eigenwerte einer Matrix. Izv. Akad. Nauk. USSR Otd. Fiz.-Mat. Nauk. 1931, 6, 749–754.
  26. Dahmen, W. Stability of multiscale transformations. J. Fourier Anal. Appl. 1996, 4, 341–362.
  27. Johnson, C.R. A Gershgorin-type lower bound for the smallest singular value. Linear Algebra Appl. 1989, 112, 1–7.
  28. Cont, R.; Voltchkova, E. A finite difference scheme for option pricing in jump diffusion and exponential Lévy models. SIAM J. Numer. Anal. 2005, 43, 1596–1626.
  29. Beylkin, G.; Coifman, R.; Rokhlin, V. Fast wavelet transforms and numerical algorithms I. Commun. Pure Appl. Math. 1991, 44, 141–183.
  30. Černá, D. Cubic spline wavelets with four vanishing moments on the interval and their applications to option pricing under Kou model. Int. J. Wavelets Multiresolut. Inf. Process. 2019, 17, 1850061.
  31. Trefethen, L.N.; Bau, D. Numerical Linear Algebra; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1997.
  32. Arciniega, A.; Allen, E. Extrapolation of difference methods in option valuation. Appl. Math. Comput. 2004, 153, 165–186.
  33. Finěk, V. Fourth order scheme for wavelet based solution of Black-Scholes equation. In Proceedings of the 43rd International Conference of Mathematics in Engineering and Economics (AMEE'17), Sozopol, Bulgaria, 8–13 June 2017; American Institute of Physics: Melville, NY, USA, 2017.
  34. Almendral, A.; Oosterlee, C.W. Numerical valuation of options with jumps in the underlying. Appl. Numer. Math. 2005, 53, 1–18.
  35. Cosma, A.; Galluccio, S.; Pederzoli, P.; Scaillet, O. Early exercise decision in American options with dividends, stochastic volatility and jumps. J. Financ. Quant. Anal. 2019, 54, 1–26.
  36. Feng, L.; Linetski, V. Pricing options in jump-diffusion models: An extrapolation approach. Oper. Res. 2008, 56, 304–325.
  37. Itkin, A. Pricing Derivatives under Lévy Models: Modern Finite-Difference and Pseudo-Differential Operators Approach; Birkhäuser: Basel, Switzerland, 2017.
  38. Mayo, A. Methods for the rapid solution of the pricing PIDEs in exponential and Merton models. J. Comput. Appl. Math. 2008, 222, 128–143.
  39. Patel, K.S.; Mehra, M. Fourth-order compact scheme for option pricing under the Merton's and Kou's jump-diffusion models. Int. J. Theor. Appl. Financ. 2018, 21, 1850027.
Figure 1. Sparse structure of the upper left part of the (infinite) matrix $\left\langle \Psi', \Psi' \right\rangle$ for the wavelet basis $\Psi$ from [16] (left); the basis from this paper (middle); and the structure of the upper left part of the mass matrix $\left\langle \Psi, \Psi \right\rangle$ for the basis from this paper (right).
Figure 2. Scaling functions ϕ b and ϕ (left); wavelets ψ b , ψ 1 and ψ 2 (middle); and wavelets ψ 3 and ψ 4 (right).
Figure 3. The structure of the matrix $\tilde{A}^8$ truncated using the threshold $10^{-7}$.
Figure 4. Functions representing the values of a European put (left) and call (right) option for the Merton model.
Figure 5. Pointwise errors for S = 90 , S = 100 , and S = 110 , in dependence on the spatial parameter N for four methods: WG from this paper, FD from [21], FPI from [19], and IMEX from [20].
Table 1. The condition numbers of the wavelet bases Ψ s with respect to the L 2 -norm and the H 1 -seminorm. s is the number of wavelet levels and N is the number of basis functions.
| s | N | $\lambda_{min}$ ($L^2$) | $\lambda_{max}$ ($L^2$) | $\kappa_{L^2}$ | $\lambda_{min}$ ($H^1$) | $\lambda_{max}$ ($H^1$) | $\kappa_{H^1}$ |
|---|------|------|------|-------|------|------|------|
| 1 | 8    | 0.22 | 1.77 | 8.06  | 0.50 | 1.53 | 3.05 |
| 2 | 16   | 0.18 | 1.87 | 10.23 | 0.50 | 1.67 | 3.34 |
| 3 | 32   | 0.16 | 1.96 | 12.21 | 0.50 | 1.72 | 3.45 |
| 4 | 64   | 0.15 | 2.01 | 13.70 | 0.50 | 1.74 | 3.49 |
| 5 | 128  | 0.14 | 2.04 | 14.86 | 0.50 | 1.75 | 3.51 |
| 6 | 256  | 0.13 | 2.06 | 15.79 | 0.50 | 1.76 | 3.52 |
| 7 | 512  | 0.13 | 2.08 | 16.59 | 0.50 | 1.76 | 3.53 |
| 8 | 1024 | 0.12 | 2.10 | 17.28 | 0.50 | 1.77 | 3.53 |
| 9 | 2048 | 0.12 | 2.11 | 17.86 | 0.50 | 1.77 | 3.54 |
| 10 | 4096 | 0.12 | 2.12 | 18.35 | 0.50 | 1.77 | 3.54 |
Table 2. Values of vanilla European put options and pointwise errors.
| S | N | M (CN) | Put (CN) | Error (CN) | M (Rich.) | Put (Rich.) | Error (Rich.) |
|-----|------|------|----------|----------------|-----|----------|----------------|
| 90  | 32   | 6    | 9.284895 | 5.23 × 10^{-4} | 10  | 9.287676 | 2.26 × 10^{-3} |
|     | 64   | 16   | 9.282265 | 3.15 × 10^{-3} | 16  | 9.282679 | 2.74 × 10^{-3} |
|     | 128  | 46   | 9.285090 | 3.78 × 10^{-4} | 27  | 9.285143 | 2.75 × 10^{-4} |
|     | 256  | 128  | 9.285427 | 8.69 × 10^{-6} | 46  | 9.285433 | 1.51 × 10^{-5} |
|     | 512  | 363  | 9.285413 | 4.97 × 10^{-6} | 77  | 9.285414 | 4.17 × 10^{-6} |
|     | 1024 | 1024 | 9.285417 | 6.24 × 10^{-7} | 128 | 9.285418 | 5.39 × 10^{-7} |
| 100 | 32   | 6    | 3.166832 | 1.78 × 10^{-2} | 10  | 3.165157 | 1.61 × 10^{-2} |
|     | 64   | 16   | 3.148937 | 8.86 × 10^{-5} | 16  | 3.148590 | 4.36 × 10^{-4} |
|     | 128  | 46   | 3.149050 | 2.44 × 10^{-5} | 27  | 3.149012 | 1.30 × 10^{-5} |
|     | 256  | 128  | 3.149038 | 1.20 × 10^{-5} | 46  | 3.149032 | 6.56 × 10^{-6} |
|     | 512  | 363  | 3.149027 | 9.07 × 10^{-7} | 77  | 3.149026 | 2.20 × 10^{-7} |
|     | 1024 | 1024 | 3.149026 | 5.64 × 10^{-8} | 128 | 3.149026 | 1.76 × 10^{-7} |
| 110 | 32   | 6    | 1.389539 | 1.16 × 10^{-2} | 10  | 1.389646 | 1.15 × 10^{-2} |
|     | 64   | 16   | 1.401664 | 4.78 × 10^{-4} | 16  | 1.401750 | 5.65 × 10^{-4} |
|     | 128  | 46   | 1.401350 | 1.64 × 10^{-4} | 27  | 1.401362 | 1.76 × 10^{-4} |
|     | 256  | 128  | 1.401196 | 1.06 × 10^{-5} | 46  | 1.401198 | 1.20 × 10^{-5} |
|     | 512  | 363  | 1.401183 | 3.22 × 10^{-6} | 77  | 1.401183 | 3.04 × 10^{-6} |
|     | 1024 | 1024 | 1.401186 | 3.22 × 10^{-7} | 128 | 1.401186 | 3.00 × 10^{-7} |
Table 3. Errors in the $L^\infty(0, 2K)$-norm and in the $L^2(0, 2K)$-norm and the corresponding experimental rates of convergence.
| N | M (CN) | $L^\infty$ error | Rate | $L^2$ error | Rate | M (Rich.) | $L^\infty$ error | Rate | $L^2$ error | Rate |
|------|------|----------------|------|----------------|------|-----|----------------|------|----------------|------|
| 32   | 6    | 3.48 × 10^{-2} | –    | 9.91 × 10^{-2} | –    | 10  | 3.26 × 10^{-2} | –    | 9.63 × 10^{-2} | –    |
| 64   | 16   | 3.31 × 10^{-3} | 3.39 | 8.00 × 10^{-3} | 3.63 | 16  | 2.89 × 10^{-3} | 3.49 | 7.67 × 10^{-3} | 3.65 |
| 128  | 46   | 3.59 × 10^{-4} | 3.20 | 8.80 × 10^{-4} | 3.18 | 27  | 3.07 × 10^{-4} | 3.23 | 8.55 × 10^{-4} | 3.17 |
| 256  | 128  | 4.24 × 10^{-5} | 3.08 | 1.07 × 10^{-4} | 3.04 | 46  | 3.78 × 10^{-5} | 3.02 | 1.05 × 10^{-4} | 3.03 |
| 512  | 363  | 5.28 × 10^{-6} | 3.01 | 1.33 × 10^{-5} | 3.01 | 77  | 4.66 × 10^{-6} | 3.02 | 1.30 × 10^{-5} | 3.01 |
| 1024 | 1024 | 7.35 × 10^{-7} | 2.85 | 1.91 × 10^{-6} | 2.80 | 128 | 6.52 × 10^{-7} | 2.84 | 1.87 × 10^{-6} | 2.80 |
Table 4. The condition numbers (cond) of the discretization matrices and the numbers of GMRES iterations (it), given as the number of outer iterations with the number of inner iterations in parentheses.
| N | M | cond | it |
|------|------|------|--------|
| 32   | 6    | 11.5 | 5 (1)  |
| 64   | 16   | 12.9 | 5 (3)  |
| 128  | 46   | 14.0 | 5 (7)  |
| 256  | 128  | 14.9 | 5 (10) |
| 512  | 363  | 15.7 | 6 (2)  |
| 1024 | 1024 | 16.3 | 6 (3)  |
