Sparse Wavelet Representation of Differential Operators with Piecewise Polynomial Coefficients
Department of Mathematics and Didactics of Mathematics, Technical University in Liberec, Studentská 2, 461 17 Liberec, Czech Republic
* Author to whom correspondence should be addressed.
Academic Editors: Palle E.T. Jorgensen and Azita Mayeli
Received: 12 January 2017 / Accepted: 20 February 2017 / Published: 22 February 2017

Abstract:
We propose a construction of a Hermite cubic spline-wavelet basis on the interval and hypercube. The basis is adapted to homogeneous Dirichlet boundary conditions. The wavelets are orthogonal to piecewise polynomials of degree at most seven on a uniform grid. Therefore, the wavelets have eight vanishing moments, and the matrices arising from discretization of differential equations with coefficients that are piecewise polynomials of degree at most four on uniform grids are sparse. Numerical examples demonstrate the efficiency of an adaptive wavelet method with the constructed wavelet basis for solving the one-dimensional elliptic equation and the two-dimensional Black–Scholes equation with a quadratic volatility.
Keywords:
Riesz basis; wavelet; spline; interval; differential equation; sparse matrix; Black–Scholes equation

1. Introduction

Wavelets are a powerful and useful tool for analyzing signals, detecting singularities, compressing data and numerically solving partial differential and integral equations. One of the most important properties of wavelets is that they have vanishing moments. Vanishing wavelet moments ensure the so-called compression property of wavelets. This means that a function f that is smooth except at some isolated singularities typically has a sparse representation in a wavelet basis, i.e., only a small number of wavelet coefficients carry most of the information on f. Similarly to functions, certain differential and integral operators also have a sparse or quasi-sparse representation in a wavelet basis. This compression property of wavelets has led to the design of many multiscale wavelet-based methods for the solution of differential equations. The first wavelet methods used orthogonal wavelets, e.g., Daubechies wavelets or coiflets [1,2]. Their disadvantage is that most orthogonal wavelets are not known in a closed form and that their smoothness typically depends on the length of the support. The orthogonal wavelets that are known in a closed form are the Haar wavelets. They were successfully used for solving differential equations, e.g., in [3,4,5]. Another useful tool is the short Haar wavelet transform, which was derived and used for solving differential equations in [6,7,8]. Since spline wavelets are known in a closed form, are smoother and have more vanishing moments than orthogonal wavelets with the same length of support, many wavelet methods using spline wavelets have been proposed [9,10,11]. For a review of wavelet methods for solving differential equations, see also [12,13].
It is known that spectral methods can be used to study singularity formation for PDE solutions [14,15,16]. Due to their compression property, wavelets can also be used for this purpose. The wavelet approach consists in analyzing the wavelet coefficients, which are large in regions where a singularity occurs and very small in regions where the function is smooth and its derivatives are relatively small. Many adaptive wavelet methods are based on this property [17,18].
We focus on an adaptive wavelet method that was originally designed in [17,18] and later modified in many papers [19,20,21], because it has the following advantages:
  • Optimality: For a large class of differential equations, both linear and nonlinear, it was shown that this method converges and is asymptotically optimal in the sense that the storage and number of floating point operations, needed to resolve the problem with desired accuracy, depend linearly on the number of parameters representing the solution, and the number of these parameters is small. Thus, the computational complexity for all steps of the algorithm is controlled.
  • High-order approximation: The method enables a high order of approximation. The order of approximation depends on the order of the spline wavelet basis.
  • Sparsity: The solutions and the right-hand side of the equation have sparse representation in a wavelet basis, i.e., they are represented by a small number of numerically significant parameters. In the beginning, iterations start for a small vector of parameters, and the size of the vector increases successively until the required tolerance is reached. The differential operator is represented by a sparse or quasi-sparse matrix, and a procedure for computing the product of this matrix with a finite-length vector with linear complexity is known.
  • Preconditioning: For a large class of problems, the matrices arising from a discretization using wavelet bases can be simply preconditioned by a diagonal preconditioner, and the condition numbers of these preconditioned matrices are uniformly bounded. It is important that the preconditioner is simple, such as the diagonal preconditioner, because in some implementations, only nonzero elements in columns of matrices corresponding to significant coefficients of solutions are stored and used.
It should be noted that other spline wavelet methods also utilize some of these features, but to our knowledge, no wavelet methods other than the adaptive methods based on the ideas from [17,18] have all of these properties. For more details about adaptive wavelet methods, see Section 6 and [17,18,19,20,21,22,23].
In this paper, we are concerned with the wavelet discretization of the partial differential equation:
$$-\sum_{k,l=1}^{d} \frac{\partial}{\partial x_k}\left(p_{k,l}\,\frac{\partial u}{\partial x_l}\right) + \sum_{k=1}^{d} q_k\,\frac{\partial u}{\partial x_k} + p_0\,u = f \quad \text{on } \Omega = (0,1)^d, \qquad u = 0 \quad \text{on } \partial\Omega.$$
We assume that $q_k(x) \geq Q > 0$, that the functions $p_{k,l}$, $q_k$, $p_0$ and $f$ are sufficiently smooth and bounded on $\Omega$, and that the $p_{k,l}$ satisfy the uniform ellipticity condition:
$$\sum_{k=1}^{d}\sum_{l=1}^{d} p_{k,l}\,x_k\,x_l \geq C \sum_{k=1}^{d} x_k^2, \qquad x = (x_1,\dots,x_d),$$
where $C > 0$ is independent of x. The discretization matrix for wavelet bases is typically not sparse but only quasi-sparse, i.e., a matrix of size $N \times N$ has $\mathcal{O}(N \log N)$ nonzero entries. For the multiplication of this matrix with a vector, a routine called APPLY has to be used [18,19,24]. However, it was observed in several papers, e.g., in [25], that "quantitatively the application of the APPLY routine is very demanding, where this routine is also not easy to implement". Therefore, in [25], a wavelet basis was constructed with respect to which the discretization matrix for Equation (1) with constant coefficients is sparse, i.e., it has $\mathcal{O}(N)$ nonzero entries. The construction from [25] was modified in [26,27] with the aim of improving the condition number of the discretization matrices. Some numerical experiments with these bases can be found in [28,29]. In this paper, our aim is to construct a wavelet basis such that the discretization matrix corresponding to (1) is sparse if the coefficients $p_{k,l}$, $q_k$ and $p_0$ are piecewise polynomial functions of degree at most n on a uniform grid, where $n = 6$ for $p_{k,l}$, $n = 5$ for $q_k$ and $n = 4$ for $p_0$. Our construction is based on Hermite cubic splines. Let us mention that cubic Hermite wavelets were also constructed in [25,26,27,30,31,32,33,34,35].
Example 1.
We have recently implemented the adaptive wavelet method for solving the Black–Scholes equation:
$$\frac{\partial V}{\partial t} - \sum_{k,l=1}^{d} \frac{\rho_{k,l}}{2}\,\sigma_k \sigma_l S_k S_l\,\frac{\partial^2 V}{\partial S_k\,\partial S_l} - r\sum_{k=1}^{d} S_k\,\frac{\partial V}{\partial S_k} + r\,V = 0,$$
where $(S_1,\dots,S_d,t) \in (0, S_1^{max}) \times \dots \times (0, S_d^{max}) \times (0,T)$. We used the θ-scheme for the time discretization and tested the performance of the adaptive method with respect to the choice of the wavelet basis for $d = 1, 2, 3$. Some results can be found in [28]. In the case of cubic spline wavelets, the smallest number of iterations was required for the wavelet basis from [36]. The discretization matrix for most spline wavelet bases is not sparse but only quasi-sparse, and thus, the above-mentioned routine APPLY has to be used. For the wavelet bases from [25,26,27], the discretization matrix corresponding to the Black–Scholes operator is sparse if the volatilities $\sigma_i$ are constant. However, in more realistic models, volatilities are represented by non-constant functions, e.g., piecewise polynomial functions [37]. For the basis constructed in this paper, the discretization matrix is sparse also for the Black–Scholes equation with volatilities $\sigma_i$ that are piecewise quadratic.

2. Wavelet Bases

In this section, we briefly review the concept of a wavelet basis in Sobolev spaces and introduce notation; for more details, see, e.g., [23]. Let H be a Hilbert space with the inner product $\langle\cdot,\cdot\rangle_H$ and the norm $\|\cdot\|_H$, and let $\langle\cdot,\cdot\rangle$ denote the $L^2$-inner product. Let $\mathcal{J}$ be an index set, and let each index $\lambda \in \mathcal{J}$ take the form $\lambda = (j,k)$, where $|\lambda| := j \in \mathbb{Z}$ is a level. For $v = \{v_\lambda\}_{\lambda\in\mathcal{J}}$, $v_\lambda \in \mathbb{R}$, we define:
$$\|v\|_2 := \left(\sum_{\lambda\in\mathcal{J}} v_\lambda^2\right)^{1/2}, \qquad \ell^2(\mathcal{J}) := \left\{v : \|v\|_2 < \infty\right\}.$$
Our aim is to construct a wavelet basis in the sense of the following definition.
Definition 1.
A family $\Psi := \{\psi_\lambda,\ \lambda\in\mathcal{J}\}$ is called a wavelet basis of H if:
(i) 
Ψ is a Riesz basis for H, i.e., the closure of the span of Ψ is H, and there exist constants $c, C \in (0,\infty)$, such that:
$$c\,\|b\|_2 \leq \left\|\sum_{\lambda\in\mathcal{J}} b_\lambda\,\psi_\lambda\right\|_H \leq C\,\|b\|_2, \qquad \text{for all } b := \{b_\lambda\}_{\lambda\in\mathcal{J}} \in \ell^2(\mathcal{J}).$$
(ii) 
The functions are local in the sense that $\operatorname{diam}\left(\operatorname{supp}\psi_\lambda\right) \leq \tilde C\,2^{-|\lambda|}$ for all $\lambda \in \mathcal{J}$, and at a given level j, the supports of only finitely many wavelets overlap at any point x.
For two countable sets of functions $\Gamma, \tilde\Gamma \subset H$, the symbol $\langle\Gamma,\tilde\Gamma\rangle_H$ denotes the matrix:
$$\langle\Gamma,\tilde\Gamma\rangle_H := \left\{\langle\gamma,\tilde\gamma\rangle_H\right\}_{\gamma\in\Gamma,\,\tilde\gamma\in\tilde\Gamma}.$$
The constants $c_\Psi := \sup\{c : c \text{ satisfies } (5)\}$ and $C_\Psi := \inf\{C : C \text{ satisfies } (5)\}$ are called Riesz bounds, and the number $\operatorname{cond}\Psi = C_\Psi/c_\Psi$ is called the condition number of Ψ. It is known that:
$$c_\Psi = \sqrt{\lambda_{min}\left(\langle\Psi,\Psi\rangle_H\right)}, \qquad C_\Psi = \sqrt{\lambda_{max}\left(\langle\Psi,\Psi\rangle_H\right)},$$
where $\lambda_{min}(\langle\Psi,\Psi\rangle_H)$ and $\lambda_{max}(\langle\Psi,\Psi\rangle_H)$ are the smallest and the largest eigenvalues of the matrix $\langle\Psi,\Psi\rangle_H$, respectively.
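This spectral characterization of the Riesz bounds is easy to check numerically. The following sketch computes the bounds from the eigenvalues of the Gram matrix of a small finite family of vectors (the vectors themselves are an illustrative assumption, not the wavelet basis constructed below) and verifies inequality (5) for a random coefficient vector:

```python
import numpy as np

# Illustrative "basis": three slightly tilted unit vectors in R^3.
basis = np.array([[1.0, 0.1, 0.0],
                  [0.0, 1.0, 0.1],
                  [0.0, 0.0, 1.0]])

gram = basis @ basis.T              # Gram matrix: gram[i, j] = <psi_i, psi_j>
eigs = np.linalg.eigvalsh(gram)     # real eigenvalues (gram is symmetric)

c_psi = np.sqrt(eigs.min())         # lower Riesz bound
C_psi = np.sqrt(eigs.max())         # upper Riesz bound
cond = C_psi / c_psi                # condition number of the family

# Check the Riesz inequality (5) for one random coefficient vector b:
rng = np.random.default_rng(0)
b = rng.standard_normal(3)
norm_sum = np.linalg.norm(b @ basis)    # || sum_k b_k psi_k ||
assert c_psi * np.linalg.norm(b) <= norm_sum + 1e-12
assert norm_sum <= C_psi * np.linalg.norm(b) + 1e-12
```

The inequality holds for every b because $\|\sum_\lambda b_\lambda\psi_\lambda\|^2 = b^T \langle\Psi,\Psi\rangle\, b$ is bounded between $\lambda_{min}\|b\|_2^2$ and $\lambda_{max}\|b\|_2^2$.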
Let M be a Lebesgue measurable subset of $\mathbb{R}^d$. The space $L^2(M)$ is the space of all Lebesgue measurable functions on M such that the norm:
$$\|f\| = \left(\int_M \left|f(x)\right|^2 dx\right)^{1/2}$$
is finite. The space $L^2(M)$ is a Hilbert space with the inner product:
$$\langle f,g\rangle = \int_M f(x)\,\overline{g(x)}\,dx, \qquad f,g \in L^2(M).$$
The Sobolev space $H^s(\mathbb{R}^d)$ for $s \geq 0$ is defined as the space of all functions $f \in L^2(\mathbb{R}^d)$ such that the seminorm:
$$|f|_{H^s(\mathbb{R}^d)} = \left(\frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} \left|\hat f(\xi)\right|^2 |\xi|^{2s}\,d\xi\right)^{1/2}$$
is finite. The symbol $\hat f$ denotes the Fourier transform of the function f defined by:
$$\hat f(\xi) = \int_{\mathbb{R}^d} f(x)\,e^{-i\,\xi\cdot x}\,dx.$$
The space $H^s(\mathbb{R}^d)$ is a Hilbert space with the inner product:
$$\langle f,g\rangle_{H^s(\mathbb{R}^d)} = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} \hat f(\xi)\,\overline{\hat g(\xi)}\left(1+|\xi|^2\right)^s d\xi, \qquad f,g \in H^s(\mathbb{R}^d),$$
and the norm:
$$\|f\|_{H^s(\mathbb{R}^d)} = \sqrt{\langle f,f\rangle_{H^s(\mathbb{R}^d)}}.$$
For an open set $M \subset \mathbb{R}^d$, $H^s(M)$ is the set of restrictions of functions from $H^s(\mathbb{R}^d)$ to M, equipped with the norm:
$$\|f\|_{H^s(M)} = \inf\left\{\|g\|_{H^s(\mathbb{R}^d)} : g \in H^s(\mathbb{R}^d) \text{ and } g|_M = f\right\}.$$
Let $C_0^\infty(M)$ be the space of all continuous functions with support in M that have continuous derivatives of order r for any $r \in \mathbb{N}$. The space $H_0^s(M)$ is defined as the closure of $C_0^\infty(M)$ in $H^s(\mathbb{R}^d)$. It is known that:
$$\|f\|_{H^1(M)} = \left(|f|^2_{H^1(M)} + \|f\|^2\right)^{1/2},$$
where:
$$|f|_{H^1(M)} = \sqrt{\langle\nabla f,\nabla f\rangle}$$
is the seminorm in $H^1(M)$ and $\nabla f$ denotes the gradient of f.

3. Construction of Scaling Functions

We start with the same scaling functions as in [25,26,27,30,31,32,33,34]. Let:
$$\phi_1(x) = \begin{cases} (x+1)^2(1-2x), & x \in [-1,0], \\ (1-x)^2(1+2x), & x \in [0,1], \\ 0, & \text{otherwise}, \end{cases} \qquad \phi_2(x) = \begin{cases} (x+1)^2\,x, & x \in [-1,0], \\ (1-x)^2\,x, & x \in [0,1], \\ 0, & \text{otherwise}. \end{cases}$$
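The two generators in (16) are the standard Hermite cubic interpolatory splines. A minimal NumPy sketch (the function names are ours) evaluates them and checks the interpolation properties $\phi_1(0) = 1$, $\phi_1'(0) = 0$, $\phi_2(0) = 0$, $\phi_2'(0) = 1$:

```python
import numpy as np

def phi1(x):
    """Hermite cubic interpolating values: phi1(0)=1, phi1'(0)=0."""
    x = np.asarray(x, dtype=float)
    left  = (x + 1)**2 * (1 - 2*x)   # branch for x in [-1, 0]
    right = (1 - x)**2 * (1 + 2*x)   # branch for x in [0, 1]
    return np.where(x < 0, left, right) * (np.abs(x) <= 1)

def phi2(x):
    """Hermite cubic interpolating slopes: phi2(0)=0, phi2'(0)=1."""
    x = np.asarray(x, dtype=float)
    left  = (x + 1)**2 * x
    right = (1 - x)**2 * x
    return np.where(x < 0, left, right) * (np.abs(x) <= 1)

# Interpolation properties at the nodes -1, 0, 1:
assert np.allclose(phi1([-1.0, 0.0, 1.0]), [0.0, 1.0, 0.0])
assert np.allclose(phi2([-1.0, 0.0, 1.0]), [0.0, 0.0, 0.0])
# phi1'(0) = 0 and phi2'(0) = 1 (central finite differences):
h = 1e-6
assert abs((phi1(h) - phi1(-h)) / (2*h)) < 1e-5
assert abs((phi2(h) - phi2(-h)) / (2*h) - 1.0) < 1e-5
```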
For $j \geq 3$ and $x \in [0,1]$, we define:
$$\phi_{j,2k+l-1}(x) = 2^{j/2}\,\phi_l\left(2^j x - k\right) \quad \text{for } k = 1,\dots,2^j-1,\ l = 1,2,$$
$$\phi_{j,1}(x) = 2^{j/2}\,\phi_2\left(2^j x\right), \qquad \phi_{j,2^{j+1}}(x) = 2^{j/2}\,\phi_2\left(2^j(x-1)\right),$$
and:
$$\Phi_j = \left\{\phi_{j,k},\ k = 1,\dots,2^{j+1}\right\}, \qquad V_j = \operatorname{span}\Phi_j.$$
The scaling functions ϕ j , k on the level j = 3 are displayed in Figure 1.
Then, the spaces $V_j$ form a multiresolution analysis. We choose the dual space $\tilde V_j$ as the set of all functions $v \in L^2(0,1)$ such that v restricted to the interval $[(k-1)2^{2-j},\,k\,2^{2-j}]$ is a polynomial of degree less than eight for any $k = 1,\dots,2^{j-2}$, i.e.,
$$\tilde V_j = \left\{v \in L^2(0,1) : v|_{[(k-1)2^{2-j},\,k2^{2-j}]} \in \Pi_8\left((k-1)2^{2-j},\,k2^{2-j}\right) \text{ for } k = 1,\dots,2^{j-2}\right\},$$
where $\Pi_8(a,b)$ denotes the set of all polynomials on $[a,b]$ of degree less than eight. Let:
$$W_j = \tilde V_j^{\perp} \cap V_{j+1},$$
where $\tilde V_j^{\perp}$ is the orthogonal complement of $\tilde V_j$ with respect to the $L^2$-inner product. If a function g is a piecewise polynomial of degree n, we write $\deg g = n$.
Lemma 1.
Let the spaces $W_j$, $j \geq 3$, be defined as above. Then, all functions $g \in W_i$ and $h \in W_j$, $i,j \geq 3$, $|i-j| > 2$, satisfy:
$$\langle a\,g, h\rangle = 0, \qquad \langle b\,g', h\rangle = 0, \qquad \langle c\,g', h'\rangle = 0,$$
where a, b, c are piecewise polynomial functions such that $a, b, c \in \tilde V_p$, $p \leq \max(i,j)$, $\deg a \leq 4$, $\deg b \leq 5$ and $\deg c \leq 6$.
Proof of Lemma 1.
Let us assume that $j > i + 2$. We have $g \in W_i \subset V_{i+1} \subset V_{j-2} \subset \tilde V_j$, $\deg g \leq 3$, $a \in \tilde V_j$, $\deg a \leq 4$, and thus, $a\,g \in \tilde V_j$. Since $h \in W_j$ and $W_j$ is orthogonal to $\tilde V_j$, we obtain $\langle a\,g, h\rangle = 0$. Similarly, the relation $\langle b\,g', h\rangle = 0$ is a consequence of the fact that $b\,g' \in \tilde V_j$ and $h \in W_j$. Using integration by parts, we obtain:
$$\langle c\,g', h'\rangle = -\langle c'\,g' + c\,g'', h\rangle.$$
Since $c'\,g' + c\,g'' \in \tilde V_j$ and $h \in W_j$, we have $\langle c\,g', h'\rangle = 0$. The situation for $j < i - 2$ is similar. ☐
Therefore, the discretization matrix for Equation (1) is sparse. Let $\Psi_j$ be a basis of $W_j$. The proof that:
$$\Psi = \left\{\psi_\lambda,\ \lambda\in\mathcal{J}\right\} = \Phi_3 \cup \bigcup_{j=3}^{\infty}\Psi_j$$
is a Riesz basis of the space $L^2(0,1)$, and that Ψ, when normalized with respect to the $H^1$-norm, is a Riesz basis of the space $H_0^1(0,1)$, is based on the following theorem [25,38].
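The mechanism behind Lemma 1 can be illustrated discretely: a vector orthogonal to all sampled polynomials of degree at most seven (eight "vanishing moments") is automatically orthogonal to any product $a\,g$ with $\deg a \leq 4$ and $\deg g \leq 3$, since $\deg(a\,g) \leq 7$. The grid, the polynomials and the construction of h in this sketch are illustrative choices, not the actual wavelets:

```python
import numpy as np

# Grid on [0, 1] and a Vandermonde matrix whose columns sample 1, x, ..., x^7.
x = np.linspace(0.0, 1.0, 50)
V = np.vander(x, 8, increasing=True)

# Build h in the orthogonal complement of span{1, x, ..., x^7}:
# the trailing columns of a complete QR factorization are orthogonal to col(V).
Q, _ = np.linalg.qr(V, mode='complete')
h = Q[:, 8]

a = 2.0 - 3.0*x + x**4          # samples of a polynomial with deg a = 4
g = 1.0 + x - 0.5*x**3          # samples of a polynomial with deg g = 3

# <a*g, h> = 0 because a*g is again a polynomial of degree <= 7:
assert abs(np.dot(a * g, h)) < 1e-10
```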
Theorem 2.
Let $J \in \mathbb{N}$, and let $V_j$ and $\tilde V_j$, $j \geq J$, be subspaces of $L^2(0,1)$ such that:
$$V_j \subset V_{j+1}, \qquad \tilde V_j \subset \tilde V_{j+1}, \qquad \dim V_j = \dim \tilde V_j < \infty, \qquad j \geq J.$$
Let $\Phi_j$ be bases of $V_j$, $\tilde\Phi_j$ be bases of $\tilde V_j$ and $\Psi_j$ be bases of $\tilde V_j^{\perp} \cap V_{j+1}$, such that the Riesz bounds with respect to the $L^2$-norm of $\Phi_j$, $\tilde\Phi_j$ and $\Psi_j$ are uniformly bounded, and let Ψ be given by (24). Furthermore, let the matrix:
$$G_j := \langle\Phi_j, \tilde\Phi_j\rangle$$
be invertible, and let the spectral norm of $G_j^{-1}$ be bounded independently of j. In addition, for some positive constants C, γ and d, such that $\gamma < d$, let:
$$\inf_{v_j \in V_j}\|v - v_j\| \leq C\,2^{-jd}\,\|v\|_{H^d(0,1)}, \qquad v \in H^d(0,1),$$
and for $0 \leq s < \gamma$, let:
$$\|v_j\|_{H^s(0,1)} \leq C\,2^{js}\,\|v_j\|, \qquad v_j \in V_j;$$
and let similar estimates (27) and (28) hold for $\tilde\gamma$ and $\tilde d$ on the dual side. Then, there exist constants k and K, $0 < k \leq K < \infty$, such that:
$$k\,\|b\|_2 \leq \left\|\sum_{\lambda\in\mathcal{J}} b_\lambda\,2^{-|\lambda|s}\,\psi_\lambda\right\|_{H^s(0,1)} \leq K\,\|b\|_2, \qquad b := \{b_\lambda\}_{\lambda\in\mathcal{J}} \in \ell^2(\mathcal{J}),$$
holds for $s \in (-\tilde\gamma, \gamma)$.
We focus on the spaces $V_j$ and $\tilde V_j$ defined by (19) and (20), respectively, and we show that they satisfy the assumptions of Theorem 2.
Theorem 3.
There exist uniform Riesz bases $\hat\Phi_j$ of $V_j$ and $\tilde\Phi_j$ of $\tilde V_j$, such that the matrix:
$$G_j = \langle\hat\Phi_j, \tilde\Phi_j\rangle$$
is invertible and the spectral norm of $G_j^{-1}$ is bounded independently of j.
Proof of Theorem 3.
Let $\Phi_j$, $V_j$ and $\tilde V_j$ be defined as above. For $i = 0,\dots,7$, we define:
$$p_i(x) = \begin{cases}\left(x - 1/2\right)^i, & x \in [0,1], \\ 0, & \text{otherwise},\end{cases}$$
and:
$$\theta_{j,8k+i+1} = 2^{(j-2)/2}\,p_i\left(2^{j-2}x - k\right), \qquad k = 0,\dots,2^{j-2}-1,\ i = 0,\dots,7.$$
Then, the set $\Theta_j = \{\theta_{j,k},\ k = 1,\dots,2^{j+1}\}$ is a basis of $\tilde V_j$, and the matrix $A_j = \langle\Phi_j,\Theta_j\rangle$, $j \geq 3$, has the structure:
(block structure of $A_j$; figure omitted)
where A is a matrix of size $10 \times 8$. Our aim is to apply several transforms to $\Phi_j$ and $\Theta_j$, such that the new bases $\hat\Phi_j$ of $V_j$ and $\tilde\Phi_j$ of $\tilde V_j$ are local, and the matrix $G_j$ defined by (30) and its transpose $G_j^T$ are both strictly diagonally dominant. First, we replace the functions $\theta_{j,k}$ by functions $g_{j,k}$ in such a way that the matrix of $L^2$-inner products of $\phi_{j,k}$ and $g_{j,l}$ is tridiagonal. Therefore, we define:
$$g_{j,8k+i+1} = \sum_{l=1}^{8} c_{j,l}^{\,i}\,\theta_{j,8k+l}, \qquad i = 0,\dots,7,\ k = 0,\dots,2^{j-2}-1,$$
where the coefficients $c_{j,l}^{\,i}$ are chosen such that:
$$\langle\phi_{j,p}, g_{j,q}\rangle = 0 \ \text{ for } |p-q| > 1, \qquad \langle\phi_{j,p}, g_{j,p}\rangle = 1, \qquad p = 1,\dots,2^{j+1}-1.$$
For $m = 8k+i+1$, $i = 0,\dots,7$, $k = 1,\dots,2^{j-2}-2$, we substitute (34) into (35), and using $\operatorname{supp}\phi_{j,8k+l+1} \cap \operatorname{supp} g_{j,m} = \emptyset$ for $l \notin \{0,\dots,9\}$, we obtain systems of eight linear algebraic equations with eight unknown coefficients:
$$A^i c^i = e^i, \qquad \text{for } k = 2,\dots,2^{j-2}-2, \qquad c^i = \left\{c_{j,l}^{\,i}\right\}_{l=1}^{8},$$
where the system matrices $A^i$ are submatrices of A containing all rows of A except the i-th and the $(i+2)$-th rows, and $e^i$ are unit vectors such that $(e^i)_l = \delta_{i,l}$. The symbol $\delta_{i,l}$ denotes the Kronecker delta. We computed all of the system matrices precisely using symbolic computations and verified that they are regular. Thus, the coefficients $c_{j,l}^{\,i}$ exist and are unique.
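The row-deletion-and-solve step above can be sketched as follows. The matrix A of inner products is not reproduced in this chunk, so a random full-rank stand-in is used purely to show the mechanics of forming $A^i$ and solving $A^i c^i = e^i$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 8))     # illustrative stand-in for the 10x8 matrix A

def solve_coeffs(A, i):
    """Solve A^i c^i = e^i, where A^i drops rows i and i+2 of A (0-based)."""
    keep = [r for r in range(10) if r not in (i, i + 2)]
    Ai = A[keep, :]                  # resulting 8x8 system matrix
    ei = np.zeros(8)
    ei[i] = 1.0                      # unit right-hand side, (e^i)_l = delta_{i,l}
    return np.linalg.solve(Ai, ei)

c0 = solve_coeffs(A, 0)
# The computed coefficients satisfy the reduced system:
keep = [r for r in range(10) if r not in (0, 2)]
assert np.allclose(A[keep, :] @ c0, np.eye(8)[0])
```

In the paper, the entries of A are exact rational inner products and the systems are solved symbolically, which is how the regularity of each $A^i$ was verified.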
The matrix $B_j$ defined by $(B_j)_{k,l} = \langle\phi_{j,k}, g_{j,l}\rangle$, $k,l = 1,\dots,2^{j+1}$, is tridiagonal and has the structure:

(block structure of $B_j$; figure omitted)
where:
$$B = \begin{pmatrix}
13.199 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1.000 & 0.098 & 0 & 0 & 0 & 0 & 0 & 0 \\
2.185 & 1.000 & 24.781 & 0 & 0 & 0 & 0 & 0 \\
0 & 0.138 & 1.000 & 0.104 & 0 & 0 & 0 & 0 \\
0 & 0 & 13.887 & 1.000 & 6.026 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.074 & 1.000 & 0.041 & 0 & 0 \\
0 & 0 & 0 & 0 & 34.953 & 1.000 & 8.824 & 0 \\
0 & 0 & 0 & 0 & 0 & 0.018 & 1.000 & 0.023 \\
0 & 0 & 0 & 0 & 0 & 0 & 9.423 & 1.000 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.092
\end{pmatrix},$$
and:
$$B^L = B^{(2,\dots,10)}, \qquad B^R = B^{(1,\dots,8,10)}.$$
The symbol $B^{(\mathcal{M})}$ denotes the submatrix of the matrix B containing the rows of B with indices from $\mathcal{M}$. In (38), the numbers are rounded to three decimal digits.
We apply several transforms to $\phi_{j,k}$ and denote the new functions by $\phi_{j,k}^i$. In the following, let:
$$B_{j,k,l} = \left(B_j\right)_{k,l}, \qquad B_{j,k,l}^{\,i} = \left\langle\phi_{j,l}^{\,i},\, g_{j,k}\right\rangle, \qquad i = 1,\dots,4.$$
We define:
$$\phi_{j,k}^1 = \phi_{j,k} - \frac{B_{j,k,k+1}}{B_{j,k+1,k+1}}\,\phi_{j,k+1} \ \text{ for } k \text{ even}, \qquad \phi_{j,k}^1 = \phi_{j,k} \ \text{ for } k \text{ odd},$$
$$\phi_{j,k}^2 = \phi_{j,k}^1 - \frac{B_{j,k,k-1}^1}{B_{j,k-1,k-1}^1}\,\phi_{j,k-1}^1 \ \text{ for } k \text{ even}, \qquad \phi_{j,k}^2 = \phi_{j,k}^1 \ \text{ for } k \text{ odd},$$
$$\phi_{j,4+8k}^3 = \phi_{j,4+8k}^2 - \frac{B_{j,4+8k,2+8k}^2}{B_{j,2+8k,2+8k}^2}\,\phi_{j,2+8k}^1 \ \text{ for } k = 1,\dots,2^{j-2}, \qquad \phi_{j,l}^3 = \phi_{j,l}^2 \ \text{ otherwise},$$
$$\phi_{j,6+8k}^4 = \phi_{j,6+8k}^3 - \frac{B_{j,6+8k,4+8k}^3}{B_{j,4+8k,4+8k}^3}\,\phi_{j,4+8k}^1 \ \text{ for } k = 1,\dots,2^{j-2}, \qquad \phi_{j,l}^4 = \phi_{j,l}^3 \ \text{ otherwise},$$
and:
$$\hat\phi_{j,l} = \begin{cases} 2.1\,\phi_{j,l}^4, & l = 4+8k, \\ 10\,\phi_{j,l}^4, & l = 2^{j+1}, \\ \phi_{j,l}^4, & \text{otherwise}. \end{cases}$$
Furthermore, we set $\tilde\phi_{j,2+8k} = 1.3\,g_{j,2+8k}$ for $k = 0,\dots,2^{j-2}-1$ and $\tilde\phi_{j,l} = g_{j,l}$ for $l \neq 2+8k$. Let $\hat\Phi_j = \{\hat\phi_{j,l},\ l = 1,\dots,2^{j+1}\}$ and $\tilde\Phi_j = \{\tilde\phi_{j,l},\ l = 1,\dots,2^{j+1}\}$. The matrix $G_j$ defined by (30) has the same structure as $A_j$ and $B_j$, i.e.,
$$\left(G_j\right)_{8i+k,\,8i+l} = G_{k,l}, \quad k = 0,\dots,14,\ l = 1,\dots,8, \quad i = 2,\dots,2^{j-2}-2,$$
$$\left(G_j\right)_{k,l} = \left(G^L\right)_{k,l}, \quad k = 1,\dots,14,\ l = 1,\dots,8, \qquad \left(G_j\right)_{2^{j+1}-8+k,\,2^{j+1}-8+l} = \left(G^R\right)_{k,l}, \quad k = 0,\dots,8,\ l = 1,\dots,8,$$
where:
$$G = \begin{pmatrix}
0 & 1.6863 & 0 & 0 & 0 & 0 & 0 & 0 \\
1.0000 & 0.1278 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 2.8555 & 0 & 2.5773 & 0 & 0 & 0 & 0 \\
0 & 0.1790 & 1.0000 & 0.1040 & 0 & 0 & 0 & 0 \\
0 & 0 & 2.8443 & 0 & 0.5229 & 0 & 0 & 0 \\
0 & 0 & 0.0737 & 1.0000 & 0.0413 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.7599 & 0 & 0.2011 & 0 & 0 \\
0 & 0 & 0 & 0.0179 & 1.0000 & 0.0228 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.1689 & 0 & 2.4295 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.0920 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.2011 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.3675 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.3330 & 0
\end{pmatrix}$$
and:
$$G^L = G^{(2,\dots,15)}, \qquad G^R = \begin{pmatrix} G^{(1,\dots,8)} \\ 0\ \cdots\ 0\ \ 0.0920 \end{pmatrix}.$$
Thus, the matrices $G_j$ and $G_j^T$ are diagonally dominant and invertible, and due to Johnson's lower bound for the smallest singular value [39], we have:
$$\sigma_{min}\left(G_j\right) \geq 0.117, \qquad \left\|G_j^{-1}\right\|_2 = \frac{1}{\sigma_{min}\left(G_j\right)} \leq 8.527.$$
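The identity $\|G^{-1}\|_2 = 1/\sigma_{min}(G)$ used here holds for any invertible matrix and is easy to verify numerically; the small diagonally dominant matrix below is an illustrative stand-in for $G_j$:

```python
import numpy as np

# Illustrative diagonally dominant matrix (stand-in for G_j):
G = np.array([[ 2.0, -0.3,  0.0],
              [-0.2,  1.5, -0.4],
              [ 0.0, -0.1,  1.8]])

sigma_min = np.linalg.svd(G, compute_uv=False).min()   # smallest singular value
inv_norm  = np.linalg.norm(np.linalg.inv(G), 2)        # spectral norm of G^{-1}

# Spectral norm of the inverse equals the reciprocal smallest singular value:
assert abs(inv_norm - 1.0 / sigma_min) < 1e-9
```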
It remains to prove that $\hat\Phi_j$ are uniform Riesz bases of $V_j$ and $\tilde\Phi_j$ are uniform Riesz bases of $\tilde V_j$. Since the $\hat\phi_{j,k}$ are locally supported and there exists M independent of j and k such that $\|\hat\phi_{j,k}\|_{L^2(0,1)} \leq M$, we have:
$$\left\|\sum_k c_{j,k}\,\hat\phi_{j,k}\right\|^2_{L^2(0,1)} = \sum_k\sum_l c_{j,k}\,c_{j,l}\int_0^1 \hat\phi_{j,k}(x)\,\hat\phi_{j,l}(x)\,dx \leq C\,\|c\|_2^2,$$
and similarly for $\tilde\phi_{j,k}$, we have:
$$\left\|\sum_k c_{j,k}\,\tilde\phi_{j,k}\right\|^2_{L^2(0,1)} \leq C\,\|c\|_2^2.$$
By the same argument as in the proof of Theorem 3.3 in [25], from (47), (48), the invertibility of $G_j$ and (46), we can conclude that $\hat\Phi_j$ and $\tilde\Phi_j$ are uniform Riesz bases of their spans. ☐

4. Construction of Wavelets

Now, we construct a basis $\Psi_j$ of the space $W_j = \tilde V_j^{\perp} \cap V_{j+1}$, such that $\operatorname{cond}\Psi_j \leq C$, where C is a constant independent of j, and the functions from $\Psi_j$ are translations and dilations of a few generators. We propose one boundary generator $\psi_b$ and functions $\psi_i$, $i = 1,\dots,8$, generating inner wavelets, such that the sets:
$$\Psi_j = \left\{\psi_{j,k},\ k = 1,\dots,2^{j+1}\right\}, \qquad j \geq 3,$$
contain the functions $\psi_{j,k}$ defined for $x \in [0,1]$ by:
$$\psi_{j,1}(x) = 2^{j/2}\,\psi_b\left(2^j x\right), \qquad \psi_{j,2^{j+1}}(x) = 2^{j/2}\,\psi_b\left(2^j(1-x)\right),$$
$$\psi_{j,8k+l+1}(x) = 2^{j/2}\,\psi_l\left(2^j x - 4k\right), \qquad l = 1,\dots,8, \quad 1 < 8k+l+1 < 2^{j+1}.$$
We denote the scaling functions on the level $j = 1$ by:
$$\phi_{1,2k+l-2}(x) = 2^{1/2}\,\phi_l\left(2x - k\right) \qquad \text{for } k \in \mathbb{Z},\ l = 1,2,\ x \in \mathbb{R}.$$
For $l = 1,\dots,6$, let the functions $\psi_l$ have the form:
$$\psi_l = \sum_{k=1}^{14} h_{l,k}\,\phi_{1,k}$$
and be such that $\psi_1$, $\psi_2$ and $\psi_3$ are antisymmetric and $\psi_4$, $\psi_5$ and $\psi_6$ are symmetric. Thus, $\operatorname{supp}\psi_l \subset [0,4]$ for $l = 1,\dots,6$. Let $p_i$ be the polynomials defined by (31). It is clear that if:
$$\left\langle\psi_l(x),\, p_i\left(\frac{x}{4}\right)\right\rangle = 0, \qquad i = 0,\dots,7,\ l = 1,\dots,6,$$
then $\langle\psi_l(2^j x - k), p_i(2^{j-2}x - m)\rangle = 0$ for $k \in 4\mathbb{Z}$, $m \in \mathbb{Z}$, and thus, $\langle\psi_{j,8i+l}, g\rangle = 0$ for any $g \in \tilde V_j$, $i = 0,\dots,2^{j-2}-1$, and $l = 1,\dots,6$. Substituting (52) into (53), we obtain a system of linear algebraic equations with the solution $h_l = \{h_{l,k}\}_{k=1}^{14}$ of the form:
$$h_l = a_{l,1}\,u_1 + a_{l,2}\,u_2 + a_{l,3}\,u_3, \qquad l = 1,2,3,$$
and:
$$h_l = b_{l-3,1}\,v_1 + b_{l-3,2}\,v_2 + b_{l-3,3}\,v_3, \qquad l = 4,5,6,$$
where $a_{l,k}$ and $b_{l,k}$ are chosen real parameters and:
$$\left(u_1, u_2, u_3\right) = \begin{pmatrix}
29/95361 & 120/31787 & 5716/31787 \\
0 & 0 & 1 \\
592/95361 & 13477/95361 & 56300/95361 \\
0 & 1 & 0 \\
13456/95361 & 39892/95361 & 49671/31787 \\
1 & 0 & 0 \\
0 & 0 & 0 \\
11708/4541 & 26022/4541 & 116428/4541 \\
13456/95361 & 39892/95361 & 49671/31787 \\
1 & 0 & 0 \\
592/95361 & 13477/95361 & 56300/95361 \\
0 & 1 & 0 \\
29/95361 & 120/31787 & 5716/31787 \\
0 & 0 & 1
\end{pmatrix}, \qquad
\left(v_1, v_2, v_3\right) = \begin{pmatrix}
11/10500 & 17/2625 & 1727/10500 \\
0 & 0 & 1 \\
13/875 & 293/2625 & 1123/2625 \\
0 & 1 & 0 \\
1/12 & 5/21 & 53/84 \\
1 & 0 & 0 \\
34/175 & 6/25 & 386/525 \\
0 & 0 & 0 \\
1/12 & 5/21 & 53/84 \\
1 & 0 & 0 \\
13/875 & 293/2625 & 1123/2625 \\
0 & 1 & 0 \\
11/10500 & 17/2625 & 1727/10500 \\
0 & 0 & 1
\end{pmatrix}.$$
For $l \in \{7,8\}$, let the functions $\psi_l$ have the form:
$$\psi_l = \sum_{k=1}^{28} h_{l,k}\,\phi_{1,k}.$$
These functions are uniquely determined by imposing that $\psi_7$ is symmetric, $\psi_8$ is antisymmetric, both $\psi_7$ and $\psi_8$ are $L^2$-orthogonal to the functions $\psi_l$, $l = 1,\dots,6$, they are normalized with respect to the $L^2$-norm and:
$$\left\langle\psi_l(x),\, p_i\left(\frac{x}{4}\right)\right\rangle = 0, \qquad \text{for } i = 0,\dots,7.$$
It remains to construct the boundary function $\psi_b$. Let:
$$\psi_b = \sum_{k=0}^{14} h_{b,k}\,\phi_{1,k}|_{[0,\infty)}.$$
Substituting (59) into:
$$\left\langle\psi_b(x),\, p_i\left(\frac{x}{4}\right)\right\rangle = 0, \qquad \text{for } i = 0,\dots,7,$$
we obtain a system of eight equations for 15 unknown coefficients. The solution $h_b = \{h_{b,k}\}_{k=0}^{14}$ is a linear combination of the vectors $w_i$ given by:
$$w_1 = \left(\frac{150}{83}, \frac{1429}{1992}, \frac{4509}{664}, \frac{74}{249}, \frac{2839}{166}, \frac{1897}{1992}, \frac{6741}{664}, \frac{53}{249}, 1, 0, 0, 0, 0, 0, 0\right)^T,$$
$$w_l = \begin{pmatrix} 0 \\ u_{l-1} \end{pmatrix},\ l = 2,3,4, \qquad w_l = \begin{pmatrix} 0 \\ v_{l-4} \end{pmatrix},\ l = 5,6,7,$$
i.e.,
$$h_b = \sum_{i=1}^{7} d_i\,w_i,$$
where the $d_i$ are chosen real parameters.
Hence, the set $\Psi_j$ depends on the choice of $a_{k,l}$, $b_{k,l}$ and $d_i$. However, it is not true that $\operatorname{cond}\Psi_j \leq C$ for all possible choices of these parameters. Moreover, for some choices, the condition numbers of $\Psi_j$ are uniformly bounded, but the condition number of the resulting basis Ψ is large, e.g., $10^6$.
Therefore, we optimize the construction to improve the condition number of Ψ. We choose $a_{k,1}$, and then, we set $a_{k,2}$ and $a_{k,3}$ such that $\langle\psi_i,\psi_j\rangle = \delta_{i,j}$ for $i,j = 1,2,3$; similarly, we choose $b_{k,1}$ and then set $b_{k,2}$ and $b_{k,3}$ such that $\langle\psi_i,\psi_j\rangle = \delta_{i,j}$ for $i,j = 4,5,6$. Moreover, the functions $\psi_7$ and $\psi_8$ are constructed such that they are orthogonal to $\psi_i$ for $i = 1,\dots,6$, and due to the symmetry and antisymmetry, we have $\langle\psi_i,\psi_j\rangle = \delta_{i,j}$ for $i = 1,2,3$ and $j = 4,5,6$ and $\langle\psi_7,\psi_8\rangle = 0$. In summary, $\psi_i$ is orthogonal to $\psi_j$ with respect to the $L^2$-norm for $i,j = 1,\dots,8$, $i \neq j$. To further improve the condition number, we orthogonalize the scaling functions on the coarsest level $j = 3$, i.e., we determine the set:
$$\Phi_3^{ort} := K^{-1}\Phi_3, \qquad K = \left\langle\Phi_3,\Phi_3\right\rangle^{1/2},$$
and we redefine $\Phi_3$ as $\Phi_3 := \Phi_3^{ort}$.
Furthermore, we wrote a program that computes the condition number of the wavelet basis containing all wavelets up to the level seven with respect to both the $L^2$-norm and the $H^1$-norm for given parameters $a_{k,l}$, $b_{k,l}$ and $d_i$, and we performed extensive numerical experiments. In the following, we consider the parameters that lead to good results:
$$a_1 = (a_{1,1}, a_{1,2}, a_{1,3}) = (4.62,\ 4.43,\ 0.67),$$
$$a_2 = (a_{2,1}, a_{2,2}, a_{2,3}) = (7.196227729728021,\ 4.658487033189625,\ 2.279869518963229),$$
$$a_3 = (a_{3,1}, a_{3,2}, a_{3,3}) = (0.775021413514386,\ 0.613425421561151,\ 0.151825757948663),$$
$$b_1 = (b_{1,1}, b_{1,2}, b_{1,3}) = (0.24,\ 3.92,\ 4.17),$$
$$b_2 = (b_{2,1}, b_{2,2}, b_{2,3}) = (4.214132381596882,\ 2.612399654970785,\ 1.411579368326525),$$
$$b_3 = (b_{3,1}, b_{3,2}, b_{3,3}) = (0.601286696663076,\ 0.778487053796787,\ 0.180033928710130),$$
$$d = (d_1,\dots,d_7) = (0.075,\ 0.363,\ 0.616,\ 0.134,\ 0.344,\ 0.580,\ 0.099),$$
and after computing $\psi_b$ and $\psi_i$, $i = 1,\dots,8$, using these parameters, we normalize them with respect to the $L^2$-norm, i.e., we redefine $\psi_b := \psi_b/\|\psi_b\|$ and $\psi_i := \psi_i/\|\psi_i\|$. The wavelets $\psi_{3,1},\dots,\psi_{3,9}$, which are dilations of $\psi_b, \psi_1,\dots,\psi_8$, are displayed in Figure 2.
Theorem 4.
The sets $\Psi_j$ with the parameters given by (64) are uniform Riesz bases of $W_j$ for $j \geq 3$.
Proof of Theorem 4.
Let $N_j = \langle\Psi_j,\Psi_j\rangle$ be the Gram matrix of $\Psi_j$; its structure is displayed in Figure 3. Since we constructed the wavelets such that many of them are mutually orthogonal, only a small number of entries of $N_j$ are nonzero. Since the wavelets are normalized with respect to the $L^2$-norm, we have:
$$\left(N_j\right)_{k,k} = 1.$$
Direct computation yields:
$$\left(N_j\right)_{k=1,\ l=2,\dots,9} = \left(N_j\right)_{k=2^{j+1},\ l=2^{j+1}-1,\dots,2^{j+1}-8} = z, \qquad \left(N_j\right)_{k=2,\dots,9,\ l=1} = \left(N_j\right)_{k=2^{j+1}-1,\dots,2^{j+1}-8,\ l=2^{j+1}} = z^T,$$
where:
$$z = \left(0.0022,\ 0.0927,\ 0.0166,\ 0.0339,\ 0.0075,\ 0.0045,\ 0.2652,\ 0.2439\right),$$
and for $i = 1,\dots,2^{j-2}-2$, we have:
$$\left(N_j\right)_{k=8i,8i+1,\ l=8i+8,8i+9} = N, \qquad \left(N_j\right)_{k=8i+8,8i+9,\ l=8i,8i+1} = N^T,$$
where:
$$N = \begin{pmatrix} 0.2048 & 0.1885 \\ 0.1885 & 0.1734 \end{pmatrix}.$$
The numbers in (67) and (69) are rounded to four decimal digits. All other entries of $N_j$ are zero.
Using the Gershgorin circle theorem, the smallest eigenvalue satisfies $\lambda_{min}(N_j) \geq 0.21$, and the largest eigenvalue satisfies $\lambda_{max}(N_j) \leq 1.79$. Therefore, the $\Psi_j$ are uniform Riesz bases of their spans. ☐
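The Gershgorin argument used in this proof can be sketched numerically. The small matrix below reuses a few of the coupling values of $N_j$ listed above, but its size and the placement of the entries are illustrative assumptions, not the actual Gram matrix:

```python
import numpy as np

def gershgorin_bounds(M):
    """Return an interval [lo, hi] containing all eigenvalues of symmetric M."""
    d = np.diag(M)
    radii = np.sum(np.abs(M), axis=1) - np.abs(d)   # off-diagonal row sums
    return (d - radii).min(), (d + radii).max()

# Unit diagonal with small symmetric off-diagonal couplings:
N = np.eye(5)
N[0, 1] = N[1, 0] = 0.2652
N[1, 2] = N[2, 1] = 0.2048
N[2, 3] = N[3, 2] = 0.1885
N[3, 4] = N[4, 3] = 0.1734

lo, hi = gershgorin_bounds(N)
eigs = np.linalg.eigvalsh(N)
assert lo <= eigs.min() and eigs.max() <= hi    # Gershgorin encloses spectrum
assert lo > 0.0                                 # hence N is positive definite
```

A strictly positive lower Gershgorin bound is exactly what guarantees the uniform Riesz basis property in the proof.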
Theorem 5.
The set Ψ is a Riesz basis of L 2 0 , 1 , and when normalized with respect to the H 1 -norm, it is a Riesz basis of H 0 1 0 , 1 .
Proof of Theorem 5.
Due to Theorems 2, 3 and 4, the relation (29) holds both for $s = 0$ and $s = 1$. Hence, Ψ is a Riesz basis of $L^2(0,1)$ and:
$$\left\{2^{-3}\phi_{3,k},\ k = 1,\dots,16\right\} \cup \left\{2^{-j}\psi_{j,k},\ j \geq 3,\ k = 1,\dots,2^{j+1}\right\}$$
is a Riesz basis of $H_0^1(0,1)$. To show that also:
$$\left\{\frac{\phi_{3,k}}{\|\phi_{3,k}\|_{H^1(0,1)}},\ k = 1,\dots,16\right\} \cup \left\{\frac{\psi_{j,k}}{\|\psi_{j,k}\|_{H^1(0,1)}},\ j \geq 3,\ k = 1,\dots,2^{j+1}\right\}$$
is a Riesz basis of $H_0^1(0,1)$, we follow the proof of Theorem 2 in [40]. From (18) and (50), there exist positive constants $C_1$ and $C_2$ such that:
$$C_1\,2^j \leq \|\psi_{j,k}\|_{H_0^1(\Omega)} \leq C_2\,2^j, \qquad \text{for } j \geq 3,\ k = 1,\dots,2^{j+1},$$
and:
$$C_1\,2^3 \leq \|\phi_{3,k}\|_{H_0^1(\Omega)} \leq C_2\,2^3, \qquad \text{for } k = 1,\dots,16.$$
Let $\hat b = \{\hat a_{3,k},\ k = 1,\dots,16\} \cup \{\hat b_{j,k},\ j \geq 3,\ k = 1,\dots,2^{j+1}\}$ be such that:
$$\|\hat b\|_2^2 = \sum_{k=1}^{16}\hat a_{3,k}^2 + \sum_{j=3}^{\infty}\sum_{k=1}^{2^{j+1}}\hat b_{j,k}^2 < \infty.$$
We define:
$$a_{3,k} = \frac{2^3\,\hat a_{3,k}}{\|\phi_{3,k}\|_{H_0^1(0,1)}},\ k = 1,\dots,16, \qquad b_{j,k} = \frac{2^j\,\hat b_{j,k}}{\|\psi_{j,k}\|_{H_0^1(0,1)}},\ j \geq 3,\ k = 1,\dots,2^{j+1},$$
and $b = \{a_{3,k},\ k = 1,\dots,16\} \cup \{b_{j,k},\ j \geq 3,\ k = 1,\dots,2^{j+1}\}$. Then:
$$\|b\|_2 \leq \frac{\|\hat b\|_2}{C_1} < \infty.$$
Since the set (70) is a Riesz basis of $H_0^1(0,1)$, there exist constants $C_3$ and $C_4$ such that:
$$C_3\,\|b\|_2 \leq \left\|\sum_{k=1}^{16} a_{3,k}\,2^{-3}\phi_{3,k} + \sum_{j=3}^{\infty}\sum_{k=1}^{2^{j+1}} b_{j,k}\,2^{-j}\psi_{j,k}\right\|_{H_0^1(0,1)} \leq C_4\,\|b\|_2.$$
Therefore:
$$\frac{C_4}{C_1}\,\|\hat b\|_2 \geq C_4\,\|b\|_2 \geq \left\|\sum_{k=1}^{16} a_{3,k}\,2^{-3}\phi_{3,k} + \sum_{j=3}^{\infty}\sum_{k=1}^{2^{j+1}} b_{j,k}\,2^{-j}\psi_{j,k}\right\|_{H_0^1(0,1)} = \left\|\sum_{k=1}^{16}\frac{\hat a_{3,k}\,\phi_{3,k}}{\|\phi_{3,k}\|_{H_0^1(0,1)}} + \sum_{j=3}^{\infty}\sum_{k=1}^{2^{j+1}}\frac{\hat b_{j,k}\,\psi_{j,k}}{\|\psi_{j,k}\|_{H_0^1(0,1)}}\right\|_{H_0^1(0,1)},$$
and similarly:
$$\frac{C_3}{C_2}\,\|\hat b\|_2 \leq \left\|\sum_{k=1}^{16}\frac{\hat a_{3,k}\,\phi_{3,k}}{\|\phi_{3,k}\|_{H_0^1(0,1)}} + \sum_{j=3}^{\infty}\sum_{k=1}^{2^{j+1}}\frac{\hat b_{j,k}\,\psi_{j,k}}{\|\psi_{j,k}\|_{H_0^1(0,1)}}\right\|_{H_0^1(0,1)}. \qquad ☐$$
The condition number of the resulting wavelet basis with wavelets up to the level 10 with respect to the $L^2$-norm is 17.2, and the condition number of this basis normalized with respect to the $H^1$-norm is 6.0. The sparsity patterns of the matrices arising from a discretization using the wavelet basis constructed in this paper and the wavelet basis from [25] for the one-dimensional Black–Scholes equation with quadratic volatility from Example 1 are displayed in Figure 4.

5. Wavelets on the Hypercube

We present a well-known construction of a multivariate wavelet basis on the unit hypercube Ω = 0 , 1 d ; for more details, see, e.g., [23]. It is based on tensorizing univariate wavelet bases and preserves the Riesz basis property, the locality of wavelets, vanishing moments and polynomial exactness. This approach is known as an anisotropic approach.
For notational simplicity, we denote $\mathcal{J}_j = \{1,\dots,2^{j+1}\}$ for $j \geq 3$, and:
$$\psi_{2,k} := \phi_{3,k},\ k \in \mathcal{J}_2 := \mathcal{J}_3, \qquad \mathcal{J} := \left\{(j,k),\ j \geq 2,\ k \in \mathcal{J}_j\right\}.$$
Then, we can write:
$$\Psi = \left\{\psi_{j,k},\ j \geq 2,\ k \in \mathcal{J}_j\right\} = \left\{\psi_\lambda,\ \lambda \in \mathcal{J}\right\}.$$
We use $u \otimes v$ to denote the tensor product of the functions u and v, i.e., $(u \otimes v)(x_1,x_2) = u(x_1)\,v(x_2)$. We define the multivariate basis functions as:
$$\psi_\lambda = \bigotimes_{i=1}^{d}\psi_{\lambda_i}, \qquad \lambda = (\lambda_1,\dots,\lambda_d) \in \boldsymbol{\mathcal{J}}, \qquad \boldsymbol{\mathcal{J}} = \mathcal{J}^d = \mathcal{J}\times\dots\times\mathcal{J}.$$
Since Ψ is a Riesz basis of $L^2(0,1)$ and Ψ normalized with respect to the $H^1$-norm is a Riesz basis of $H_0^1(0,1)$, the set:
$$\Psi^{ani} := \left\{\psi_\lambda,\ \lambda \in \boldsymbol{\mathcal{J}}\right\}$$
is a Riesz basis of $L^2(\Omega)$, and its normalization with respect to the $H^1$-norm is a Riesz basis of $H_0^1(\Omega)$. Using the same argument as in the proof of Lemma 1, we conclude that for this basis, the discretization matrix is sparse for Equation (1) with piecewise polynomial coefficients on uniform meshes such that $\deg p_{k,l} \leq 6$, $\deg q_k \leq 5$ and $\deg p_0 \leq 4$.
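On a grid, the tensor-product construction reduces to an outer product of the sampled univariate factors. The two factors in this sketch are arbitrary illustrative functions, not the constructed Hermite spline wavelets:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 65)
psi_a = np.sin(np.pi * x)        # illustrative stand-in for psi_{lambda_1}
psi_b = np.sin(2 * np.pi * x)    # illustrative stand-in for psi_{lambda_2}

# Samples of (psi_a (x) psi_b)(x_i, y_j) on the tensor grid:
Psi2d = np.outer(psi_a, psi_b)

# Separability: every bivariate sample is the product of the two factors.
i, j = 20, 40
assert np.isclose(Psi2d[i, j], psi_a[i] * psi_b[j])
```

This separability is what lets the multivariate discretization matrix inherit the sparsity of the univariate one, as the integrals factor into products of one-dimensional integrals.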

6. Numerical Examples

In this section, we solve the elliptic Equation (1) and the equation with the Black–Scholes operator from Example 1 by the adaptive wavelet method with the basis constructed in this paper. We briefly describe the algorithm. While classical adaptive methods typically refine a mesh according to a posteriori local error estimates, the wavelet approach is different; it comprises the following steps [13,17,18]:
  • One starts with a variational formulation for a suitable wavelet basis, but instead of turning to a finite dimensional approximation, the continuous problem is transformed into an infinite-dimensional $\ell^2$-problem.
  • Then, one proposes a convergent iteration for the $\ell^2$-problem.
  • Finally, one derives an implementable version of this idealized iteration, where all infinite-dimensional quantities are replaced by finitely supported ones.
To the left-hand side of Equation (1), we associate the following bilinear form:
$$a(v,w) := \int_\Omega\left(\sum_{k,l=1}^{d} p_{k,l}\,\frac{\partial v}{\partial x_k}\,\frac{\partial w}{\partial x_l} + \sum_{k=1}^{d} q_k\,\frac{\partial v}{\partial x_k}\,w + p_0\,v\,w\right)dx.$$
The weak formulation of (1) reads as follows: Find $u \in H_0^1(\Omega)$ such that:
$$a(u,v) = \langle f,v\rangle \qquad \text{for all } v \in H_0^1(\Omega).$$
Instead of turning to a finite dimensional approximation, Equation (85) is reformulated as an equivalent bi-infinite matrix equation $A\,u = f$, where:
$$A_{\lambda,\mu} = a\left(\psi_\lambda,\psi_\mu\right), \qquad f_\lambda = \langle f,\psi_\lambda\rangle,$$
for $\psi_\lambda, \psi_\mu \in \Psi$, where Ψ is a wavelet basis of $H_0^1(\Omega)$.
We use the standard Jacobi diagonal preconditioner D for preconditioning this equation, i.e., $D_{\lambda,\mu} = A_{\lambda,\lambda}\,\delta_{\lambda,\mu}$. If the coefficients are constant, one can also use the efficient diagonal preconditioner from [41]. The algorithm for solving the $\ell^2$-problem is the following:
  • Compute sparse representation f j of the right-hand side f , such that f f j 2 is smaller than a given tolerance  ϵ j 1 . The computation of a sparse representation insists on thresholding the smallest coefficients and working only with the largest ones. We denote the routine as f j : = RHS [ f , ϵ j 1 ] .
  • Compute $K$ steps of GMRES for solving the system $\mathbf{A} \mathbf{v} = \mathbf{f}_j$ with the initial vector $\mathbf{v}_j$. Each iteration of GMRES requires a multiplication of the infinite-dimensional matrix with a finitely supported vector. Since the matrix is sparse for the wavelet basis constructed in this paper, this product can be computed exactly; otherwise, it is computed approximately with the given tolerance $\epsilon_j^2$ by the method from [24]. We denote this routine by $\mathbf{z} := \mathrm{GMRES}[\mathbf{A}, \mathbf{f}_j, \mathbf{v}_j, K]$.
  • Compute a sparse representation $\mathbf{v}_{j+1}$ of $\mathbf{z}$ with an error smaller than $\epsilon_j^2$. We denote this routine by $\mathbf{v}_{j+1} := \mathrm{COARSE}[\mathbf{z}, \epsilon_j^2]$. It consists of thresholding the coefficients.
We repeat Steps 1, 2 and 3 until the norm of the residual $\mathbf{r}_j = \mathbf{f} - \mathbf{A} \mathbf{v}_j$ drops below the required tolerance $\tilde{\epsilon}$. Since we work with sparse representations of both the right-hand side and the vector representing the solution, the method is adaptive. It is known that wavelet coefficients are small in regions where the function is smooth and large in regions where the function has a singularity. Therefore, singularities are detected automatically by this method.
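The coarsening step can be sketched in a few lines: sort the coefficients by magnitude and discard the largest possible set of small coefficients whose total $\ell_2$-energy stays below the tolerance. This is a minimal NumPy sketch, not the authors' implementation; the function name `coarse` is ours.

```python
import numpy as np

def coarse(z, eps):
    """Coarsening by thresholding: zero out the largest possible set of
    smallest-magnitude coefficients of z so that the l2-norm of the
    discarded part stays below eps."""
    order = np.argsort(np.abs(z))       # indices from smallest to largest magnitude
    tail = np.cumsum(z[order] ** 2)     # energy of the candidate discarded tail
    keep = tail > eps ** 2              # dropping any more would exceed eps
    v = np.zeros_like(z)
    v[order[keep]] = z[order[keep]]
    return v
```

For example, `coarse(np.array([0.1, 3.0, 0.2, 0.05]), 0.3)` keeps only the coefficient 3.0; the discarded part has norm about 0.23, below the tolerance 0.3.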
We use the following algorithm that is a modified version of the original algorithm from [17,18]:
Algorithm 1 $\mathbf{u} := \mathrm{SOLVE}[\mathbf{A}, \mathbf{f}, \tilde{\epsilon}]$
1. Choose $k_0, k_1, k_2 \in (0, 1)$ and $K \in \mathbb{N}$.
2. Set $j := 0$, $\mathbf{v}_0 := \mathbf{0}$ and $\epsilon := \|\mathbf{f}\|_2$.
3. While $\epsilon > \tilde{\epsilon}$:
        $j := j + 1$,
        $\epsilon := k_0 \epsilon$,
        $\epsilon_j^1 := k_1 \epsilon$,
        $\epsilon_j^2 := k_2 \epsilon$,
        $\mathbf{f}_j := \mathrm{RHS}[\mathbf{f}, \epsilon_j^1]$,
        $\mathbf{z} := \mathrm{GMRES}[\mathbf{A}, \mathbf{f}_j, \mathbf{v}_{j-1}, K]$,
        $\mathbf{v}_j := \mathrm{COARSE}[\mathbf{z}, \epsilon_j^2]$,
        estimate $\mathbf{r}_j = \mathbf{f} - \mathbf{A} \mathbf{v}_j$ and set $\epsilon := \|\mathbf{r}_j\|_2$.
   End while.
4. $\mathbf{u} := \mathbf{v}_j$.
5. Compute the approximate solution $\tilde{u} = \sum_{\lambda} u_\lambda \psi_\lambda$, where $u_\lambda$ are the entries of $\mathbf{u}$.
For an appropriate choice of the parameters $k_0$, $k_1$, $k_2$ and $K$, and for more details about the routines RHS and COARSE, we refer to [17,18,23].
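Under the assumption that $\mathbf{A}$ has been truncated to a finite sparse matrix, the loop of Algorithm 1 can be sketched as follows. The helper `threshold` stands in for both RHS and COARSE, and SciPy's `gmres` with `maxiter=K` stands in for the $K$ GMRES steps; all names and default parameter values are illustrative, not the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def threshold(z, eps):
    """Zero out the smallest-magnitude entries of z while keeping the
    l2-norm of the discarded part below eps (stands in for RHS and COARSE)."""
    order = np.argsort(np.abs(z))
    drop = np.cumsum(z[order] ** 2) <= eps ** 2
    v = z.copy()
    v[order[drop]] = 0.0
    return v

def solve(A, f, tol, k0=0.5, k1=0.1, k2=0.1, K=10):
    """Schematic version of Algorithm 1 for a finite (truncated) matrix A."""
    v = np.zeros_like(f)
    eps = np.linalg.norm(f)
    while eps > tol:
        eps = k0 * eps
        fj = threshold(f, k1 * eps)           # routine RHS
        z, _ = gmres(A, fj, x0=v, maxiter=K)  # K restart cycles of GMRES
        v = threshold(z, k2 * eps)            # routine COARSE
        eps = np.linalg.norm(f - A @ v)       # residual estimate
    return v
```

The adaptivity is visible in the sketch: the intermediate vectors stay sparse because small coefficients are discarded after every GMRES sweep, while the tolerances are tightened geometrically.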
Example 2.
We solve the equation:
$$-\epsilon u'' + x^2 u' + u = f \quad \text{on } (0, 1), \qquad u(0) = u(1) = 0, \tag{87}$$

where $\epsilon = 0.001$ and the right-hand side $f$ corresponds to the exact solution:

$$u(x) = x \left( 1 - e^{50(x - 1)} \right) \quad \text{for } x \in [0, 1].$$
We solve this equation using the adaptive wavelet method described above with the wavelet basis constructed in this paper. The approximate solution and its derivative, computed using only 79 coefficients, are displayed in Figure 5. The significant coefficients are located near the point $x = 1$, because the solution has a large derivative there.
The sparsity patterns of the matrices arising from the discretization of Equation (87) using the wavelets constructed in this paper and the wavelets from [25] are the same as the sparsity patterns of the matrices for Example 1 displayed in Figure 4. The convergence history is displayed in Figure 6. The number of iterations equals the parameter $j$ from Algorithm 1; the number of basis functions determining the approximate solution in the $j$-th iteration equals the number of nonzero entries of the vector $\mathbf{v}_j$; and the $L^\infty$-norm of the error is given by:
$$\| u - \tilde{u} \|_\infty = \max_{x \in [0, 1]} | u(x) - \tilde{u}(x) |.$$
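As a small illustration of this error measure, the $L^\infty$-norm can be approximated by sampling on a fine grid. Here `u` is the exact solution $u(x) = x(1 - e^{50(x-1)})$ of this example, and `u_tilde` is a deliberately crude hypothetical approximation, not the wavelet solution from the paper.

```python
import numpy as np

# Approximate the L-infinity error by sampling on a fine grid. u is the exact
# solution of the example; u_tilde is a crude hypothetical approximation.
u = lambda x: x * (1.0 - np.exp(50.0 * (x - 1.0)))
u_tilde = lambda x: x * (1.0 - x)

xs = np.linspace(0.0, 1.0, 10001)
err = np.max(np.abs(u(xs) - u_tilde(xs)))   # sampled sup-norm of the error
```

A uniform sample only bounds the sup-norm from below, so in practice the grid should be fine enough to resolve the boundary layer at $x = 1$.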
Example 3.
We consider the equation:
$$\frac{\partial V}{\partial t} - \sum_{k,l=1}^{2} \frac{\rho_{k,l}}{2} \, \sigma_k \sigma_l S_k S_l \, \frac{\partial^2 V}{\partial S_k \partial S_l} - r \sum_{k=1}^{2} S_k \frac{\partial V}{\partial S_k} + r V = f, \tag{90}$$

for $(S_1, S_2) \in \Omega := (0, 1)^2$ and $t \in (0, 1)$. We choose the parameters of the Black–Scholes operator as $\rho_{1,1} = \rho_{2,2} = 1$, $\rho_{1,2} = \rho_{2,1} = 0.88$, $\sigma_1(x) = 0.1 x^2 - 0.1 x + 0.66$, $\sigma_2(x) = 0.1 x^2 - 0.1 x + 0.97$, $r = 0.02$, and we set the right-hand side $f$ and the initial and boundary conditions such that the solution $V$ is given by:
$$V(S_1, S_2, t) = e^{-rt} S_1 S_2 \left( 1 - e^{20(S_1 - 1)} \right) \left( 1 - e^{20(S_2 - 1)} \right)$$
for $(S_1, S_2, t) \in \Omega \times (0, 1)$. We use the Crank–Nicolson scheme for the semi-discretization of Equation (90) in time. Let $M \in \mathbb{N}$, $\tau = 1/M$, $t_l = l \tau$ for $l = 0, \ldots, M$, and denote $V^l(S_1, S_2) = V(S_1, S_2, t_l)$ and $f^l(S_1, S_2) = f(S_1, S_2, t_l)$. The Crank–Nicolson scheme has the form:
$$\frac{V^{l+1} - V^l}{\tau} - \sum_{k,m=1}^{2} \frac{\rho_{k,m}}{4} \, \sigma_k \sigma_m S_k S_m \, \frac{\partial^2 (V^{l+1} + V^l)}{\partial S_k \partial S_m} - \frac{r}{2} \sum_{k=1}^{2} S_k \frac{\partial (V^{l+1} + V^l)}{\partial S_k} + r \, \frac{V^{l+1} + V^l}{2} = \frac{f^{l+1} + f^l}{2}. \tag{92}$$
In this scheme, the function $V^l$ is known from the equation on the previous time level, and the function $V^{l+1}$ is the unknown. Thus, for a given time level $t_l$, Equation (92) is of the form (1), and we can use the adaptive wavelet method to solve it. The approximate solution $V^1$ for $\tau = 1/365$, computed using 731 coefficients, is displayed in Figure 7.
It can be seen that the gradient of the solution $V^1$ attains its largest values near the point $(1, 1)$. Therefore, the largest wavelet coefficients correspond to wavelets supported near this point, while the coefficients of wavelets located away from it are small. Thus, many wavelet coefficients can be omitted, and the representation of the solution is sparse. The convergence history is shown in Figure 8.
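The time-stepping structure of the Crank–Nicolson scheme can be sketched as follows: each step solves one linear system for $V^{l+1}$ with the known $V^l$ on the right-hand side. The spatial operator `L` here is a simple stand-in (a 1D finite-difference Laplacian with homogeneous Dirichlet conditions), not the wavelet discretization of the Black–Scholes operator; all sizes are illustrative assumptions.

```python
import numpy as np

# Crank-Nicolson time stepping for a semi-discrete problem dV/dt + L V = f.
# L is a stand-in spatial operator: the 1D finite-difference Laplacian with
# homogeneous Dirichlet conditions on a uniform grid.
n, M = 50, 365
tau = 1.0 / M                  # time step; after M steps we reach t = 1
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

I_n = np.eye(n)
B_new = I_n / tau + L / 2.0    # acts on the unknown V^{l+1}
B_old = I_n / tau - L / 2.0    # acts on the known V^{l}

x = np.linspace(h, 1.0 - h, n)
V = np.sin(np.pi * x)          # initial condition
for l in range(M):
    f_mid = np.zeros(n)        # (f^{l+1} + f^l) / 2; zero right-hand side here
    V = np.linalg.solve(B_new, B_old @ V + f_mid)
```

In the paper, the system solved at each step is of the form (1), so the direct solver in the last line would be replaced by the adaptive wavelet method, and the sparse representation of $V^l$ can serve as the initial guess for $V^{l+1}$.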

7. Conclusions

In this paper, we constructed a new cubic spline multi-wavelet basis on the unit interval and the unit cube. The basis is adapted to homogeneous Dirichlet boundary conditions, and the wavelets have eight vanishing moments. The main advantage of this basis is that the matrices arising from a discretization of the differential Equation (1) with piecewise polynomial coefficients on uniform meshes, such that $\deg p_{k,l} \le 6$, $\deg q_k \le 5$ and $\deg p_0 \le 4$, are sparse and not only quasi-sparse. We proved that the constructed basis is indeed a wavelet basis, i.e., that the Riesz basis property (5) is satisfied. We performed extensive numerical experiments and presented the construction that leads to a wavelet basis that is well-conditioned with respect to the $L^2$-norm, as well as the $H^1$-norm.

Acknowledgments

This work was supported by Grant GA16-09541S of the Czech Science Foundation.

Author Contributions

The authors contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beylkin, G. Wavelets and fast numerical algorithms. In Different Perspectives on Wavelets; Daubechies, I., Ed.; Proceedings of Symposia in Applied Mathematics 47; American Mathematical Society: Providence, RI, USA, 1993; pp. 89–117.
  2. Shann, W.C.; Xu, J.C. Galerkin-wavelet methods for two-point boundary value problems. Numer. Math. 1992, 63, 123–142.
  3. Hariharan, G.; Kannan, K. Haar wavelet method for solving some nonlinear parabolic equations. J. Math. Chem. 2010, 48, 1044–1061.
  4. Lepik, Ü. Numerical solution of differential equations using Haar wavelets. Math. Comput. Simul. 2005, 68, 127–143.
  5. Lepik, Ü. Numerical solution of evolution equations by the Haar wavelet method. Appl. Math. Comput. 2007, 185, 695–704.
  6. Cattani, C. Haar wavelet spline. J. Interdiscip. Math. 2001, 4, 35–47.
  7. Cattani, C.; Bochicchio, I. Wavelet analysis of chaotic systems. J. Interdiscip. Math. 2006, 9, 445–458.
  8. Ciancio, A.; Cattani, C. Analysis of Singularities by Short Haar Wavelet Transform. In Lecture Notes in Computer Science; Gervasi, O., Kumar, V., Tan, C.J.K., Taniar, D., Laganá, A., Mun, Y., Choo, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 828–838.
  9. Kumar, V.; Mehra, M. Cubic spline adaptive wavelet scheme to solve singularly perturbed reaction diffusion problems. Int. J. Wavelets Multiresolut. Inf. Process. 2007, 15, 317–331.
  10. Jia, R.Q.; Zhao, W. Riesz bases of wavelets and applications to numerical solutions of elliptic equations. Math. Comput. 2011, 80, 1525–1556.
  11. Negarestani, H.; Pourakbari, F.; Tavakoli, A. Adaptive multiple knot B-spline wavelets for solving Saint-Venant equations. Int. J. Wavelets Multiresolut. Inf. Process. 2013, 11, 1–12.
  12. Mehra, M. Wavelets and differential equations—A short review. In AIP Conference Proceedings 1146; Siddiqi, A.H., Gupta, A.K., Brokate, M., Eds.; American Institute of Physics: New York, NY, USA, 2009; pp. 241–252.
  13. Dahmen, W. Multiscale and wavelet methods for operator equations. Lect. Notes Math. 2003, 1825, 31–96.
  14. Caflisch, R.E.; Gargano, F.; Sammartino, M.; Sciacca, V. Complex singularities and PDEs. Rivista Matematica Universita di Parma 2015, 6, 69–133.
  15. Gargano, F.; Sammartino, M.; Sciacca, V.; Cassel, K.V. Analysis of complex singularities in high-Reynolds-number Navier-Stokes solutions. J. Fluid Mech. 2014, 747, 381–421.
  16. Weideman, J.A.C. Computing the dynamics of complex singularities of nonlinear PDEs. SIAM J. Appl. Dyn. Syst. 2003, 2, 171–186.
  17. Cohen, A.; Dahmen, W.; DeVore, R. Adaptive wavelet schemes for elliptic operator equations—Convergence rates. Math. Comput. 2001, 70, 27–75.
  18. Cohen, A.; Dahmen, W.; DeVore, R. Adaptive wavelet methods II - beyond the elliptic case. Found. Comput. Math. 2002, 2, 203–245.
  19. Dijkema, T.J.; Schwab, C.; Stevenson, R. An adaptive wavelet method for solving high-dimensional elliptic PDEs. Constr. Approx. 2009, 30, 423–455.
  20. Hilber, N.; Reichmann, O.; Schwab, C.; Winter, C. Computational Methods for Quantitative Finance; Springer: Berlin, Germany, 2013.
  21. Stevenson, R. Adaptive solution of operator equations using wavelet frames. SIAM J. Numer. Anal. 2003, 41, 1074–1100.
  22. Dahmen, W.; Kunoth, A. Multilevel preconditioning. Numer. Math. 1992, 63, 315–344.
  23. Urban, K. Wavelet Methods for Elliptic Partial Differential Equations; Oxford University Press: Oxford, UK, 2009.
  24. Černá, D.; Finěk, V. Approximate multiplication in adaptive wavelet methods. Cent. Eur. J. Math. 2013, 11, 972–983.
  25. Dijkema, T.J.; Stevenson, R. A sparse Laplacian in tensor product wavelet coordinates. Numer. Math. 2010, 115, 433–449.
  26. Cvejnová, D.; Černá, D.; Finěk, V. Hermite cubic spline multi-wavelets on the cube. In AIP Conference Proceedings 1690; Pasheva, V., Popivanov, N., Venkov, G., Eds.; American Institute of Physics: New York, NY, USA, 2015; No. 030006.
  27. Černá, D.; Finěk, V. On a sparse representation of an n-dimensional Laplacian in wavelet coordinates. Results Math. 2016, 69, 225–243.
  28. Černá, D. Numerical solution of the Black–Scholes equation using cubic spline wavelets. In AIP Conference Proceedings 1789; Pasheva, V., Popivanov, N., Venkov, G., Eds.; American Institute of Physics: New York, NY, USA, 2016; No. 030001.
  29. Cvejnová, D.; Šimůnková, M. Comparison of multidimensional wavelet bases. AIP Conf. Proc. 2015, 1690, 030008.
  30. Schneider, A. Biorthogonal cubic Hermite spline multi-wavelets on the interval with complementary boundary conditions. Results Math. 2009, 53, 407–416.
  31. Dahmen, W.; Han, B.; Jia, R.Q.; Kunoth, A. Biorthogonal multi-wavelets on the interval: Cubic Hermite splines. Constr. Approx. 2000, 16, 221–259.
  32. Jia, R.Q.; Liu, S.T. Wavelet bases of Hermite cubic splines on the interval. Adv. Comput. Math. 2006, 25, 23–39.
  33. Shumilov, B.M. Multiwavelets of the third-degree Hermitian splines orthogonal to cubic polynomials. Math. Models Comput. Simul. 2013, 5, 511–519.
  34. Shumilov, B.M. Cubic multi-wavelets orthogonal to polynomials and a splitting algorithm. Numer. Anal. Appl. 2013, 6, 247–259.
  35. Xue, X.; Zhang, X.; Li, B.; Qiao, B.; Chen, X. Modified Hermitian cubic spline wavelet on interval finite element for wave propagation and load identification. Finite Elem. Anal. Des. 2014, 91, 48–58.
  36. Černá, D.; Finěk, V. Wavelet bases of cubic splines on the hypercube satisfying homogeneous boundary conditions. Int. J. Wavelets Multiresolut. Inf. Process. 2015, 13, 1550014.
  37. Zuhlsdorff, C. The pricing of derivatives on assets with quadratic volatility. Appl. Math. Financ. 2001, 8, 235–262.
  38. Dahmen, W. Stability of multiscale transformations. J. Fourier Anal. Appl. 1996, 4, 341–362.
  39. Johnson, C.R. A Gershgorin-type lower bound for the smallest singular value. Linear Algebra Appl. 1989, 112, 1–7.
  40. Černá, D.; Finěk, V. Quadratic spline wavelets with short support for fourth-order problems. Results Math. 2014, 66, 525–540.
  41. Černá, D.; Finěk, V. A diagonal preconditioner for singularly perturbed problems. Bound. Value Probl. 2017, 2017, 22.
Figure 1. Scaling functions on the level $j = 3$.
Figure 2. Wavelets $\psi_{3,1}, \ldots, \psi_{3,9}$.
Figure 3. The structure of the matrix $N_j$.
Figure 4. The sparsity pattern of the matrices arising from a discretization using a wavelet basis constructed in this paper (left) and a wavelet basis from [25] (right) for the Black–Scholes equation with quadratic volatilities.
Figure 5. The approximate solution (left) and the derivative of the approximate solution (right) for Example 2.
Figure 6. Convergence history for Example 2. The number of basis functions and the $L^\infty$-norm of the error are in logarithmic scaling.
Figure 7. Contour plot (left) and 3D plot (right) of the approximate solution $V^1$ for Example 3.
Figure 8. Convergence history for Example 3. The number of basis functions and the $L^\infty$-norm of the error are in logarithmic scaling.