Sparse Wavelet Representation of Differential Operators with Piecewise Polynomial Coefficients

We propose a construction of a Hermite cubic spline-wavelet basis on the interval and hypercube. The basis is adapted to homogeneous Dirichlet boundary conditions. The wavelets are orthogonal to piecewise polynomials of degree at most seven on a uniform grid. Therefore, the wavelets have eight vanishing moments, and the matrices arising from discretization of differential equations with coefficients that are piecewise polynomials of degree at most four on uniform grids are sparse. Numerical examples demonstrate the efficiency of an adaptive wavelet method with the constructed wavelet basis for solving the one-dimensional elliptic equation and the two-dimensional Black–Scholes equation with a quadratic volatility.


Introduction
Wavelets are a powerful and useful tool for analyzing signals, the detection of singularities, data compression, and the numerical solution of partial differential and integral equations. One of the most important properties of wavelets is that they have vanishing moments. Vanishing wavelet moments ensure the so-called compression property of wavelets. This means that a function f that is smooth except at some isolated singularities typically has a sparse representation in a wavelet basis, i.e., only a small number of wavelet coefficients carry most of the information on f. Similarly to functions, certain differential and integral operators also have a sparse or quasi-sparse representation in a wavelet basis. This compression property of wavelets has led to the design of many multiscale wavelet-based methods for the solution of differential equations. The first wavelet methods used orthogonal wavelets, e.g., Daubechies wavelets or coiflets [1,2]. Their disadvantage is that most orthogonal wavelets are not known in a closed form and that their smoothness typically depends on the length of the support. The orthogonal wavelets that are known in a closed form are the Haar wavelets. They were successfully used for solving differential equations, e.g., in [3][4][5]. Another useful tool is the short Haar wavelet transform, which was derived and used for solving differential equations in [6][7][8]. Since spline wavelets are known in a closed form, are smoother, and have more vanishing moments than orthogonal wavelets with the same support length, many wavelet methods using spline wavelets were proposed [9][10][11]. For a review of wavelet methods for solving differential equations, see also [12,13].
It is known that spectral methods can be used to study singularity formation for PDE solutions [14][15][16]. Due to their compression property, wavelets can also be used to study singularity formation for PDE solutions. The wavelet approach consists in analyzing the wavelet coefficients: they are large in regions where a singularity occurs and very small in regions where the function is smooth and its derivatives are relatively small. Many adaptive wavelet methods are based on this property [17,18].
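The compression property described above can be illustrated with the orthonormal Haar transform mentioned earlier, which is simple enough to implement in a few lines. The sketch below is a toy numpy illustration, not the Hermite spline-wavelet basis constructed in this paper: for a function with a single jump, only the detail coefficient whose support straddles the jump is nonzero.

```python
import numpy as np

def haar_step(v):
    # one level of the orthonormal Haar transform:
    # s = scaling (average) coefficients, d = wavelet (detail) coefficients
    s = (v[0::2] + v[1::2]) / np.sqrt(2.0)
    d = (v[0::2] - v[1::2]) / np.sqrt(2.0)
    return s, d

# a function that is smooth (constant) except for a single jump at x = 0.5
x = np.arange(1024) / 1024.0
f = (x > 0.5).astype(float)

s, details = f.copy(), []
for _ in range(5):                # five levels of decomposition
    s, d = haar_step(s)
    details.append(d)

# Haar wavelets annihilate constants, so at the finest level only the
# coefficient whose support straddles the jump is nonzero
fine = np.abs(details[0])
big = np.where(fine > 1e-10)[0]
print(big)                        # a single index, at the jump location
```

A smoother wavelet with more vanishing moments would compress piecewise polynomials of higher degree in the same way, which is exactly the property exploited by the basis constructed in this paper.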
We focus on an adaptive wavelet method that was originally designed in [17,18] and later modified in many papers [19][20][21], because it has the following advantages:

•
Optimality: For a large class of differential equations, both linear and nonlinear, it was shown that this method converges and is asymptotically optimal in the sense that the storage and the number of floating point operations needed to resolve the problem with the desired accuracy depend linearly on the number of parameters representing the solution, and the number of these parameters is small. Thus, the computational complexity of all steps of the algorithm is controlled.

•
High-order approximation: The method enables high-order approximation. The order of approximation depends on the order of the spline wavelet basis.

•
Sparsity: The solution and the right-hand side of the equation have sparse representations in a wavelet basis, i.e., they are represented by a small number of numerically significant parameters. In the beginning, the iterations start with a small vector of parameters, and the size of the vector increases successively until the required tolerance is reached. The differential operator is represented by a sparse or quasi-sparse matrix, and a procedure for computing the product of this matrix with a finite-length vector in linear complexity is known.

•
Preconditioning: For a large class of problems, the matrices arising from a discretization using wavelet bases can be simply preconditioned by a diagonal preconditioner, and the condition numbers of these preconditioned matrices are uniformly bounded. It is important that the preconditioner be simple, such as a diagonal preconditioner, because in some implementations only the nonzero elements in the columns of the matrices corresponding to significant coefficients of the solution are stored and used.
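The effect of diagonal preconditioning described in the last item can be sketched numerically. The matrix below is a toy stand-in for a wavelet discretization of an elliptic operator: its diagonal grows like 4^j with the level j, which ruins the condition number, while the coupling between levels is weak. All numbers here are hypothetical and chosen only for illustration.

```python
import numpy as np

# toy model of a wavelet-discretized elliptic operator: the diagonal grows
# like 4^j with the level j, while inter-level coupling is weak
rng = np.random.default_rng(0)
levels = np.repeat(np.arange(8), 4)          # 4 wavelets on each of 8 levels
d = 4.0 ** levels
n = len(d)
E = rng.standard_normal((n, n))
E = 0.05 * (E + E.T) / 2                     # small symmetric perturbation
A = np.diag(d) + np.sqrt(np.outer(d, d)) * E

# Jacobi (diagonal) preconditioning: D^{-1/2} A D^{-1/2}
Dis = np.diag(1.0 / np.sqrt(np.diag(A)))
Aprec = Dis @ A @ Dis

print(np.linalg.cond(A), np.linalg.cond(Aprec))
```

The unpreconditioned condition number grows with the number of levels, while the diagonally preconditioned one stays small, mirroring the uniform boundedness stated above.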
It should be noted that other spline wavelet methods also utilize some of these features, but to our knowledge, there are no wavelet methods other than the adaptive wavelet methods based on the ideas from [17,18] that have all of these properties. For more details about adaptive wavelet methods, see .
In this paper, we are concerned with the wavelet discretization of the partial differential equation: We assume that q_k(x) ≥ Q > 0, that the functions p_k,l, q_k, p_0 and f are sufficiently smooth and bounded on Ω, and that p_k,l satisfy the uniform ellipticity condition: where C > 0 is independent of x. The discretization matrix for wavelet bases is typically not sparse but only quasi-sparse, i.e., a matrix of size N × N has O(N log N) nonzero entries.
For the multiplication of this matrix with a vector, a routine called APPLY has to be used [18,19,24]. However, it was observed in several papers, e.g., in [25], that "quantitatively the application of the APPLY routine is very demanding, where this routine is also not easy to implement". Therefore, in [25] a wavelet basis was constructed with respect to which the discretization matrix is sparse, i.e., it has O(N) nonzero entries, for Equation (1) if the coefficients are constant. The construction from [25] was modified in [26,27] with the aim of improving the condition number of the discretization matrices. Some numerical experiments with these bases can be found in [28,29]. In this paper, our aim is to construct a wavelet basis such that the discretization matrix corresponding to (1) is sparse if the coefficients p_k,l, q_k and p_0 are piecewise polynomial functions of degree at most n on a uniform grid, where n = 6 for p_k,l, n = 5 for q_k and n = 4 for p_0. Our construction is based on Hermite cubic splines. Let us mention that cubic Hermite wavelets were also constructed in [25][26][27][30][31][32][33][34][35].
Example 1. We have recently implemented the adaptive wavelet method for solving the Black-Scholes equation: where (S_1, . . ., S_d, t) ∈ (0, S_1^max) × . . . × (0, S_d^max) × (0, T). We used the θ-scheme for the time discretization and tested the performance of the adaptive method with respect to the choice of a wavelet basis for d = 1, 2, 3. Some results can be found in [28]. In the case of cubic spline wavelets, the smallest number of iterations was required for the wavelet basis from [36]. The discretization matrix for most spline wavelet bases is not sparse but only quasi-sparse, and thus the above-mentioned routine APPLY has to be used. For the wavelet bases from [25][26][27], the discretization matrix corresponding to the Black-Scholes operator is sparse if the volatilities σ_i are constant. However, in more realistic models, volatilities are represented by non-constant functions, e.g., piecewise polynomial functions [37]. For the basis constructed in this paper, the discretization matrix is sparse also for the Black-Scholes equation with volatilities σ_i that are piecewise quadratic.

Wavelet Bases
In this section, we briefly review the concept of a wavelet basis in Sobolev spaces and introduce notation; for more details, see, e.g., [23]. Let H be a Hilbert space with the inner product ⟨·, ·⟩_H and the norm ‖·‖_H, and let ⟨·, ·⟩ denote the L2-inner product. Let J be an index set, and let each index λ ∈ J take the form λ = (j, k), where |λ| := j ∈ Z is a level. For v = {v_λ}_{λ∈J}, v_λ ∈ R, we define: Our aim is to construct a wavelet basis in the sense of the following definition.
Definition 1. A family Ψ := {ψ_λ, λ ∈ J} is called a wavelet basis of H if: (i) Ψ is a Riesz basis of H, i.e., the closure of the span of Ψ is H, and there exist constants c, C ∈ (0, ∞) such that: (ii) The functions are local in the sense that diam(supp ψ_λ) ≤ C 2^(−|λ|) for all λ ∈ J, and at a given level j, the supports of only finitely many wavelets overlap at any point x.
For two countable sets of functions Γ, Γ̃ ⊂ H, the symbol ⟨Γ, Γ̃⟩_H denotes the matrix: The constants c_Ψ := sup {c : c satisfies (5)} and C_Ψ := inf {C : C satisfies (5)} are called the Riesz bounds, and the number cond Ψ = C_Ψ/c_Ψ is called the condition number of Ψ. It is known that: where λ_min(⟨Ψ, Ψ⟩_H) and λ_max(⟨Ψ, Ψ⟩_H) are the smallest and largest eigenvalues of the matrix ⟨Ψ, Ψ⟩_H, respectively.
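The relation between the condition number of a set of functions and the extreme eigenvalues of its Gram matrix can be checked numerically for a small example. The sketch below uses three overlapping piecewise-linear hat functions (a hypothetical stand-in for any finite collection) and simple Riemann-sum quadrature.

```python
import numpy as np

# the Riesz bounds of a finite set of functions follow from the extreme
# eigenvalues of its Gram matrix: cond = C_Psi / c_Psi = sqrt(l_max / l_min)
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def hat(c, h):
    # piecewise-linear hat function centred at c with half-width h
    return np.maximum(0.0, 1.0 - np.abs(x - c) / h)

funcs = [hat(c, 0.25) for c in (0.25, 0.5, 0.75)]   # overlapping neighbours
G = np.array([[np.sum(u * v) * dx for v in funcs] for u in funcs])

lam = np.linalg.eigvalsh(G)                          # ascending eigenvalues
cond = np.sqrt(lam[-1] / lam[0])
print(cond)
```

For an orthonormal set, the Gram matrix would be the identity and the condition number exactly 1; the overlap of the hats pushes it above 1.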
Let M be a Lebesgue measurable subset of R^d. The space L2(M) is the space of all Lebesgue measurable functions on M such that the norm: (8) is finite. The space L2(M) is a Hilbert space with the inner product: The Sobolev space H^s(R^d) for s ≥ 0 is defined as the space of all functions f ∈ L2(R^d) such that the seminorm: is finite. The symbol f̂ denotes the Fourier transform of the function f defined by: The space H^s(R^d) is a Hilbert space with the inner product: and the norm: For an open set M ⊂ R^d, H^s(M) is the set of restrictions of functions from H^s(R^d) to M, equipped with the norm: Let C^∞_0(M) be the space of all infinitely differentiable functions with compact support in M. The space H^s_0(M) is defined as the closure of C^∞_0(M) in H^s(M), where: is the seminorm in H^1(M) and ∇f denotes the gradient of f.
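The Fourier-transform definition of the Sobolev seminorm can be verified discretely on a periodic grid, where the integral over frequencies becomes a sum over integer modes. The following sketch checks it for the single mode f(x) = sin(2πx), for which the H^1 seminorm is known in closed form.

```python
import numpy as np

# discrete check of the Fourier definition of the Sobolev seminorm:
# |f|_{H^s}^2 = integral of |xi|^(2s) |fhat(xi)|^2; on a periodic grid this
# becomes a sum over integer frequencies k with xi = 2*pi*k
n = 1024
x = np.arange(n) / n
f = np.sin(2 * np.pi * x)

c = np.fft.fft(f) / n                    # Fourier coefficients c_k
k = np.fft.fftfreq(n, d=1.0 / n)         # integer frequencies
xi = 2 * np.pi * k

s = 1.0
seminorm_sq = np.sum(np.abs(xi) ** (2 * s) * np.abs(c) ** 2)

# for f(x) = sin(2*pi*x): |f|_{H^1}^2 = integral of (f')^2 = (2*pi)^2 / 2
print(seminorm_sq, (2 * np.pi) ** 2 / 2)
```

Since the sampled sine is exactly band-limited, the discrete sum reproduces the continuous seminorm to machine precision.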
Then, the spaces V_j form a multiresolution analysis. We choose the dual space Ṽ_j as the set of all functions v ∈ L2(0, 1) such that v restricted to the interval ((k−1)/2^j, k/2^j) belongs to Π_8((k−1)/2^j, k/2^j), where Π_8(a, b) denotes the set of all polynomials on (a, b) of degree less than eight. Let: where Ṽ_j^⊥ is the orthogonal complement of Ṽ_j with respect to the L2-inner product. If a function g is a piecewise polynomial of degree n, we write deg g = n. Lemma 1. Let the spaces W_j, j ≥ 3, be defined as above. Then, all functions g ∈ W_i and h ∈ W_j satisfy: where a, b, c are piecewise polynomial functions such that a, b, c ∈ Ṽ_p, p ≤ max(i, j), deg a ≤ 4, deg b ≤ 5 and deg c ≤ 6.

Proof of Lemma 1. Let us assume that j ≥ i + 2. Then a ∈ Ṽ_j, deg a ≤ 4, and thus ag ∈ Ṽ_j. Since h ∈ W_j and W_j is orthogonal to Ṽ_j, we obtain ⟨ag, h⟩ = 0. Similarly, the relation ⟨bg′, h⟩ = 0 is a consequence of the fact that bg′ ∈ Ṽ_j and h ∈ W_j. Using integration by parts, we obtain: Since c′g′ + cg″ ∈ Ṽ_j and h ∈ W_j, we have ⟨cg′, h′⟩ = 0. The situation for j < i + 2 is similar.
Therefore, the discretization matrix for Equation (1) is sparse. Let Ψ_j be a basis of W_j. The proof that: is a Riesz basis of the space L2(0, 1), and that Ψ, when normalized with respect to the H1-norm, is a Riesz basis of the space H^1_0(0, 1), is based on the following theorem [25,38].
Theorem 2. Let J ∈ N, and let V_j and Ṽ_j, j ≥ J, be subspaces of L2(0, 1) such that: Let Φ_j be bases of V_j, Φ̃_j be bases of Ṽ_j, and Ψ_j be bases of Ṽ_j^⊥ ∩ V_{j+1}, such that the Riesz bounds with respect to the L2-norm of Φ_j, Φ̃_j and Ψ_j are uniformly bounded, and let Ψ be given by (24). Furthermore, let the matrix: be invertible, and let the spectral norm of G_j^{−1} be bounded independently of j. In addition, for some positive constants C, γ and d such that γ < d, let: and for 0 ≤ s < γ, let: and let estimates similar to (27) and (28) hold for γ̃ and d̃ on the dual side. Then, there exist constants k and K such that (29) holds for s ∈ (−γ̃, γ).
We focus on the spaces V_j and Ṽ_j defined by (19) and (20), respectively, and we show that they satisfy the assumptions of Theorem 2. Theorem 3. There exist uniform Riesz bases Φ̂_j of V_j and Φ̃_j of Ṽ_j such that the matrix: is invertible and the spectral norm of G_j^{−1} is bounded independently of j.
Proof of Theorem 3. Let Φ_j, V_j and Ṽ_j be defined as above. For i = 0, . . ., 7, we define: and: Then, the set Θ_j = {θ_{j,k}, k = 1, . . ., 2^{j+1}} is a basis of Ṽ_j, and the matrix A_j = ⟨Φ_j, Θ_j⟩, j ≥ 3, has the structure: where A is a matrix of size 10 × 8. Our aim is to apply several transforms to Φ_j and Θ_j such that the new bases Φ̂_j of V_j and Φ̃_j of Ṽ_j are local and the matrix G_j defined by (30) and its transpose G_j^T are both strictly diagonally dominant. First, we replace the functions θ_{j,k} by functions g_{j,k} in such a way that the matrix of L2-inner products of φ_{j,k} and g_{j,l} is tridiagonal. Therefore, we define: where the coefficients c^i_{j,l} are chosen such that: For m = 8k + i + 1, i = 0, . . ., 7, k = 1, . . ., 2^{j−2} − 2, we substitute (34) into (35), and using supp φ_{j,8k+l+1} ∩ supp g_{j,m} = ∅ for l = 0, . . ., 9, we obtain systems of eight linear algebraic equations for eight unknown coefficients; the system matrices A^i are submatrices of A containing all rows of A except the i-th and (i + 2)-th rows, and e^i are unit vectors such that e^i_l = δ_{i,l}. The symbol δ_{i,l} denotes the Kronecker delta. We computed all of the system matrices precisely using symbolic computations and verified that they are regular. Thus, the coefficients c^i_{j,l} exist and are unique. The matrix B_j defined by (B_j)_{k,l} = ⟨φ_{j,k}, g_{j,l}⟩, k, l = 1, . . ., 2^{j+1}, is tridiagonal and has the structure: where: and: The symbol B_M denotes the submatrix of the matrix B containing the rows of B with indices from M.
Since the functions φ̂_{j,k} are locally supported and there exists M independent of j and k such that ‖φ̂_{j,k}‖_{L2(0,1)} ≤ M, we have: and similarly for φ̃_{j,k}, we have: By the same argument as in the proof of Theorem 3.3 in [25], from (47), (48), the invertibility of G_j and (46), we can conclude that Φ̂_j and Φ̃_j are uniform Riesz bases of their spans.

Construction of Wavelets
Now, we construct a basis Ψ_j of the space W_j = Ṽ_j^⊥ ∩ V_{j+1} such that cond Ψ_j ≤ C, where C is a constant independent of j, and the functions from Ψ_j are translations and dilations of a few generators. We propose one boundary generator ψ_b and functions ψ_i, i = 1, . . ., 8, generating inner wavelets, such that the sets: contain functions ψ_{j,k} defined for x ∈ [0, 1] by: We denote the scaling functions on the level j = 1 by: For l = 1, . . ., 6, let the functions ψ_l have the form: and be such that ψ_1, ψ_2 and ψ_3 are antisymmetric and ψ_4, ψ_5 and ψ_6 are symmetric. Thus, supp ψ_l = [0, 4] for l = 1, . . ., 6. Let p_i be the polynomials defined by (31). It is clear that if: then ⟨ψ_l(2^j x − k), p_i(2^{j−2} x − m)⟩ = 0 for k, m ∈ 4Z, and thus ⟨ψ_{j,8i+l}, g⟩ = 0 for any g ∈ Ṽ_j, i = 0, . . ., 2^{j−2} − 1, and l = 1, . . ., 6. Substituting (52) into (53), we obtain a system of linear algebraic equations with the solution h^l = {h_{l,k}}_{k=1}^{14} of the form: and: where a_{l,k} and b_{l,k} are chosen real parameters and: For l ∈ {7, 8}, let the functions ψ_l have the form: These functions are uniquely determined by imposing that ψ_7 is symmetric, ψ_8 is antisymmetric, both ψ_7 and ψ_8 are L2-orthogonal to the functions ψ_l, l = 1, . . ., 6, they are normalized with respect to the L2-norm, and: It remains to construct the boundary function ψ_b. Let: Substituting (59) into: we obtain a system of eight equations for 15 unknown coefficients. The solution is a linear combination of the vectors w_i given by: w_1 = (150/83, 1429/1992, 4509/664, 74/249, 2839/166, −1897/1992, 6741/664, −53/249, 1, 0, 0, 0, 0, 0, 0), i.e., where d_i are chosen real parameters. Hence, the set Ψ_j depends on the choice of a_{k,l}, b_{k,l} and d_i. However, it is not true that cond Ψ_j ≤ C for all possible choices of these parameters. Moreover, for some choices, the condition numbers of Ψ_j are uniformly bounded, but the condition number of the resulting basis Ψ is large, e.g., 10^6.
Therefore, we optimize the construction to improve the condition number of Ψ. We choose a_{k,1}, and then we set a_{k,2} and a_{k,3} such that ⟨ψ_i, ψ_j⟩ = δ_{i,j} for i, j = 1, 2, 3; similarly, we choose b_{k,1} and then set b_{k,2} and b_{k,3} such that ⟨ψ_i, ψ_j⟩ = δ_{i,j} for i, j = 4, 5, 6. Moreover, the functions ψ_7 and ψ_8 are constructed such that they are orthogonal to ψ_i for i = 1, . . ., 6, and due to the symmetry and antisymmetry, we have ⟨ψ_i, ψ_j⟩ = 0 for i = 1, 2, 3 and j = 4, 5, 6, and ⟨ψ_7, ψ_8⟩ = 0. In summary, ψ_i is orthogonal to ψ_j with respect to the L2-norm for i, j = 1, . . ., 6, i ≠ j. To further improve the condition number, we orthogonalize the scaling functions on the coarsest level j = 3, i.e., we determine the set: and we redefine Φ_3 as Φ_3 := Φ_3^ort. Furthermore, we wrote a program that computes the condition number of the wavelet basis containing all wavelets up to level seven with respect to both the L2-norm and the H1-norm for given parameters a_{k,l}, b_{k,l} and d_i, and we performed extensive numerical experiments. In the following, we consider the parameters that lead to good results: After computing ψ_b and ψ_i, i = 1, . . ., 8, using these parameters, we normalize them with respect to the L2-norm, i.e., we redefine ψ_b := ψ_b/‖ψ_b‖ and ψ_i := ψ_i/‖ψ_i‖. The wavelets ψ_{3,1}, . . ., ψ_{3,9} that are dilations of ψ_b, ψ_1, . . ., ψ_8 are displayed in Figure 2. Theorem 4. The sets Ψ_j with the parameters given by (64) are uniform Riesz bases of W_j for j ≥ 3.
Proof. Due to Theorems 2, 3 and 4, the relation (29) holds both for s = 0 and for s = 1. Hence, Ψ is a Riesz basis of L2(0, 1), and: is a Riesz basis of H^1_0(0, 1). To show that: is also a Riesz basis of H^1_0(0, 1), we follow the proof of Theorem 2 in [40]. From (18) and (50), there exist nonzero constants C_1 and C_2 such that: and: We define: and b = {a_{3,k}, k = 1, . . ., 16} ∪ {b_{j,k}, j ≥ 3, k = 1, . . ., 2^{j+1}}. Then: Since the set (70) is a Riesz basis of H^1_0(0, 1), there exist constants C_3 and C_4 such that: Therefore: and similarly: The condition number of the resulting wavelet basis with wavelets up to level 10 with respect to the L2-norm is 17.2, and the condition number of this basis normalized with respect to the H1-norm is 6.0. The sparsity pattern of the matrix arising from a discretization using the wavelet basis constructed in this paper and the wavelet basis from [25] for the one-dimensional Black-Scholes equation with quadratic volatilities from Example 1 is displayed in Figure 4.
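The orthogonalization of the scaling functions on the coarsest level, used above to redefine Φ_3 := Φ_3^ort, can be performed in several ways. The paper does not specify the method, so the sketch below shows one common choice, symmetric (Löwdin) orthogonalization via G^{-1/2}, with random vectors standing in for the scaling functions.

```python
import numpy as np

# symmetric (Loewdin) orthogonalization: if G is the Gram matrix of a set of
# basis elements, applying G^{-1/2} yields an orthonormal set; random vectors
# stand in for the coarsest-level scaling functions here
rng = np.random.default_rng(1)
Phi = rng.standard_normal((5, 50))           # 5 "functions", 50 samples each

G = Phi @ Phi.T                              # Gram matrix of inner products
lam, Q = np.linalg.eigh(G)
G_inv_sqrt = Q @ np.diag(1.0 / np.sqrt(lam)) @ Q.T

Phi_ort = G_inv_sqrt @ Phi                   # orthogonalized set
err = np.max(np.abs(Phi_ort @ Phi_ort.T - np.eye(5)))
print(err)                                   # Gram matrix is now the identity
```

Unlike Gram-Schmidt, the symmetric variant treats all functions equally, which helps preserve symmetry properties of the original set.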

Wavelets on the Hypercube
We present a well-known construction of a multivariate wavelet basis on the unit hypercube Ω = (0, 1)^d; for more details, see, e.g., [23]. It is based on tensorizing univariate wavelet bases and preserves the Riesz basis property, the locality of wavelets, vanishing moments and polynomial exactness. This approach is known as the anisotropic approach.
For notational simplicity, we denote J_j = {1, . . ., 2^{j+1}} for j ≥ 3, and: Then, we can write: We use u ⊗ v to denote the tensor product of functions u and v, i.e., (u ⊗ v)(x_1, x_2) = u(x_1) v(x_2). We define multivariate basis functions as: Since Ψ is a Riesz basis of L2(0, 1) and Ψ normalized with respect to the H1-norm is a Riesz basis of H^1_0(0, 1), the set: is a Riesz basis of L2(Ω), and its normalization with respect to the H1-norm is a Riesz basis of H^1_0(Ω). Using the same argument as in the proof of Lemma 1, we conclude that for this basis, the discretization matrix is sparse for Equation (1) with piecewise polynomial coefficients on uniform meshes such that deg p_{k,l} ≤ 6, deg q_k ≤ 5 and deg p_0 ≤ 4.
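The tensorization above also explains why sparsity carries over to higher dimensions: for a Laplacian-type operator, the multivariate discretization matrix is built from Kronecker products of the (sparse) univariate stiffness and mass matrices. The sketch below uses toy tridiagonal matrices, not the actual wavelet Gram matrices of this paper.

```python
import numpy as np

# anisotropic tensor-product structure: with toy 1D stiffness and mass
# matrices A1 and M1, a 2D Laplacian-type discretization factorizes as
# A2 = kron(A1, M1) + kron(M1, A1), so 1D sparsity carries over to 2D
n = 6
A1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy tridiagonal stiffness
M1 = np.eye(n)                                          # toy (orthonormal) mass

A2 = np.kron(A1, M1) + np.kron(M1, A1)

nnz1 = np.count_nonzero(A1)      # O(n) entries in 1D
nnz2 = np.count_nonzero(A2)      # still O(n) entries per row in 2D
print(nnz1, nnz2)
```

Each row of the 2D matrix inherits the O(1) bandwidth of the 1D factors, which is the discrete counterpart of the sparsity statement above.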

Numerical Examples
In this section, we solve the elliptic Equation (1) and the equation with the Black-Scholes operator from Example 1 by the adaptive wavelet method with the basis constructed in this paper. We briefly describe the algorithm. While classical adaptive methods typically refine a mesh according to a posteriori local error estimates, the wavelet approach is different and comprises the following steps [13,17,18]:

1.
One starts with a variational formulation for a suitable wavelet basis, but instead of turning to a finite-dimensional approximation, the continuous problem is transformed into an infinite-dimensional l2-problem.

2.
Then, one proposes a convergent iteration for the l 2 -problem.

3.
Finally, one derives an implementable version of this idealized iteration, where all infinite-dimensional quantities are replaced by finitely supported ones.
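The three steps above can be sketched in a schematic (and much simplified) numpy loop: thresholding plays the role of the sparsification routines, and a damped Richardson iteration stands in for the inexact inner solver. All matrices, tolerances and the thresholding strategy here are hypothetical toy choices, not the actual routines of the paper.

```python
import numpy as np

# schematic adaptive loop: threshold the data, run a few inexact solver
# iterations (plain damped Richardson, standing in for GMRES), threshold
# the iterate, then tighten the tolerance
def coarse(v, eps):
    # keep only coefficients above the threshold (sparse representation)
    w = v.copy()
    w[np.abs(w) < eps] = 0.0
    return w

rng = np.random.default_rng(2)
n = 200
A = np.eye(n) + 0.001 * rng.standard_normal((n, n))     # well-conditioned toy
f = rng.standard_normal(n) * np.exp(-0.05 * np.arange(n))

v = np.zeros(n)
eps = 1e-1
for sweep in range(6):
    fj = coarse(f, eps)              # sparse representation of the data
    for _ in range(20):              # a few inexact solver steps
        v = v + 0.5 * (fj - A @ v)
    v = coarse(v, eps)               # sparse representation of the iterate
    eps *= 0.1                       # tighten the tolerance

residual = np.linalg.norm(f - A @ v)
print(residual, np.count_nonzero(v))
```

The iterate stays sparse while the residual is driven down as the tolerance shrinks, which is the essence of the adaptive scheme.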
To the left-hand side of Equation (1), we associate the following bilinear form: The weak formulation of (1) reads as follows: find u ∈ H^1_0(Ω) such that: Instead of turning to a finite-dimensional approximation, Equation (85) is reformulated as an equivalent bi-infinite matrix equation Au = f, where: for ψ_λ, ψ_µ ∈ Ψ, and Ψ is a wavelet basis of H^1_0(Ω). We use the standard Jacobi diagonal preconditioner D for preconditioning this equation, i.e., D_{λ,µ} = A_{λ,µ} δ_{λ,µ}. If the coefficients are constant, one can also use an efficient diagonal preconditioner from [41]. The algorithm for solving the l2-problem is the following:

1.
Compute a sparse representation f_j of the right-hand side f such that ‖f − f_j‖_2 is smaller than a given tolerance ε^1_j. The computation of a sparse representation amounts to thresholding: the smallest coefficients are discarded and only the largest ones are kept. We denote this routine by f_j := RHS[f, ε^1_j].

2.
Compute K steps of GMRES for solving the system Av = f_j with the initial vector v_j. Each iteration of GMRES requires a multiplication of the infinite-dimensional matrix with a finitely supported vector. Since the matrix is sparse for the wavelet basis constructed in this paper, this product can be computed exactly. Otherwise, it is computed approximately with a given tolerance ε^2_j by the method from [24]. We denote this routine by z := GMRES[A, f_j, v_j, K].

3.
Compute a sparse representation v_{j+1} of z with an error smaller than ε^2_j. We denote this routine by v_{j+1} := COARSE[z, ε^2_j]. It amounts to thresholding the coefficients.
We repeat Steps 1, 2 and 3 until the norm of the residual r_j = ‖f − A v_j‖_2 is smaller than the required tolerance ε̃. Since we work with sparse representations of the right-hand side and of the vector representing the solution, the method is adaptive. It is known that wavelet coefficients are small in regions where the function is smooth and large in regions where the function has a singularity. Therefore, singularities are detected automatically by this method. The sparsity patterns of the matrices arising from the discretization of Equation (87) using the wavelets constructed in this paper and the wavelets from [25] are the same as the sparsity patterns of the matrices for Example 1 displayed in Figure 4. The convergence history is displayed in Figure 6. The number of iterations equals the parameter j from Algorithm 1; the number of basis functions determining the approximate solution in the j-th iteration is the same as the number of nonzero entries of the vector v_j; and the L∞-norm of the error is given by: Example 3. We consider the equation: for (S_1, S_2) ∈ Ω := (0, 1)^2 and t ∈ (0, 1). We choose the parameters of the Black-Scholes operator as ρ_{1,1} = ρ_{2,2} = 1, ρ_{1,2} = ρ_{2,1} = 0.88, σ_1(x) = 0.1x^2 − 0.1x + 0.66, σ_2(x) = 0.1x^2 − 0.1x + 0.97, r = 0.02, and we set the right-hand side f and the initial and boundary conditions such that the solution V is given by: V(S_1, S_2, t) = e^(−rt) S_1 S_2 (1 − e^(20S_1−20)) (1 − e^(20S_2−20)) (91) for (S_1, S_2, t) ∈ Ω × (0, 1). We use the Crank-Nicolson scheme for the semi-discretization of Equation (90) in time. Let M ∈ N, τ = M^(−1), t_l = lτ, l = 0, . . ., M, and denote V_l(S_1, S_2) = V(S_1, S_2, t_l) and f_l(S_1, S_2) = f(S_1, S_2, t_l). The Crank-Nicolson scheme has the form: In this scheme, the function V_l is known from the equation on the previous time level, and the function V_{l+1} is the unknown solution. Thus, for the given time level t_l, Equation (92) is of the form (1), and we can use the adaptive wavelet method for solving it. The approximate solution V_1 for τ = 1/365, computed using 731 coefficients, is displayed in Figure 7. It can be seen that the gradient of the solution V_1 has the largest values near the point [1, 1]. Therefore, the largest wavelet coefficients correspond to wavelets with supports in regions near this point, and the wavelet coefficients are small for wavelets that are not located in these regions. Thus, many wavelet coefficients are omitted, and the representation of the solution is sparse. The convergence history is shown in Figure 8.
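The Crank-Nicolson step above can be written in matrix form and tested on a toy problem. In the sketch below, a 1D finite-difference stiffness matrix stands in for the discretized operator, so the scheme solves a plain heat equation rather than the Black-Scholes problem of Example 3; the time step τ = 1/365 is taken from the example.

```python
import numpy as np

# Crank-Nicolson semi-discretization in matrix form, as in Equation (92):
# (I + tau/2 A) V^{l+1} = (I - tau/2 A) V^l + tau/2 (f^l + f^{l+1});
# a toy 1D stiffness matrix stands in for the discretized operator
n = 50
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -d^2/dx^2
tau = 1.0 / 365.0
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
V = np.sin(np.pi * x)                       # initial condition
f = lambda t: np.zeros(n)                   # homogeneous forcing (toy choice)

I = np.eye(n)
for l in range(10):
    rhs = (I - 0.5 * tau * A) @ V + 0.5 * tau * (f(l * tau) + f((l + 1) * tau))
    V = np.linalg.solve(I + 0.5 * tau * A, rhs)

# the exact heat-equation solution damps the sine mode at rate exp(-pi^2 t)
t = 10 * tau
err = np.max(np.abs(V - np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x)))
print(err)
```

In the paper's setting, the dense solve would be replaced by the adaptive wavelet solver applied to the sparse discretization matrix at each time level.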

Conclusions
In this paper, we constructed a new cubic spline multi-wavelet basis on the unit interval and the unit cube. The basis is adapted to homogeneous Dirichlet boundary conditions, and the wavelets have eight vanishing moments. The main advantage of this basis is that the matrices arising from a discretization of the differential Equation (1) with piecewise polynomial coefficients on uniform meshes, such that deg p_{k,l} ≤ 6, deg q_k ≤ 5 and deg p_0 ≤ 4, are sparse and not only quasi-sparse. We proved that the constructed basis is indeed a wavelet basis, i.e., the Riesz basis property (5) is satisfied. We performed extensive numerical experiments and presented the construction that leads to a wavelet basis that is well conditioned with respect to both the L2-norm and the H1-norm.

Figure 3 .
Figure 3. The structure of the matrix N_j.

Figure 4 .
Figure 4. The sparsity pattern of the matrices arising from a discretization using a wavelet basis constructed in this paper (left) and a wavelet basis from [25] (right) for the Black-Scholes equation with quadratic volatilities.

Figure 5 .
Figure 5. The approximate solution (left) and the derivative of the approximate solution (right) for Example 1.

Figure 6 .
Figure 6. Convergence history for Example 1. The number of basis functions and the L∞-norm of the error are in logarithmic scaling.

Figure 7 .
Figure 7. Contour plot (left) and 3D plot (right) of the approximate solution V 1 for Example 2.

Figure 8 .
Figure 8. Convergence history for Example 2. The number of basis functions and the L ∞ -norm of the error are in logarithmic scaling.