Math. Comput. Appl. 2018, 23(4), 73; https://doi.org/10.3390/mca23040073

Article
Iterated Petrov–Galerkin Method with Regular Pairs for Solving Fredholm Integral Equations of the Second Kind
Departamento de Matemática, Facultad de Ingeniería, Universidad de Buenos Aires, C1063ACV Buenos Aires, Argentina
* Author to whom correspondence should be addressed.
Received: 22 October 2018 / Accepted: 12 November 2018 / Published: 13 November 2018

Abstract

In this work we obtain approximate solutions for Fredholm integral equations of the second kind by means of the Petrov–Galerkin method, choosing “regular pairs” of subspaces, $\{X_n, Y_n\}$, which are simply characterized by the positive definiteness of a correlation matrix. This choice guarantees the solvability and numerical stability of the approximation scheme in an easy way, and the selection of orthogonal bases for the subspaces makes the calculations quite simple. Afterwards, we explore an interesting phenomenon called “superconvergence”, observed in the 1970s by Sloan: once the approximations $u_n \in X_n$ to the solution of the operator equation $u - Ku = g$ are obtained, the convergence can be notably improved by means of an iteration of the method, $u_n^* = g + K u_n$. We illustrate both procedures of approximation by means of two numerical examples: one for a continuous kernel, and the other for a weakly singular one.
Keywords:
Fredholm integral equations; numerical solutions; Petrov–Galerkin method; regular pairs; iterated methods

1. Introduction

Fredholm equations of the second kind are integral equations of the form
$$u(t) - \int_a^b k(t,s)\, u(s)\, ds = g(t), \quad t \in [a,b] \quad (1)$$
with $u$ an unknown function in a Banach space $X$. The kernel $k : [a,b] \times [a,b] \to \mathbb{R}$ and the right-hand side $g : [a,b] \to \mathbb{R}$ are given functions.
They appear in different areas of applied mathematics, sometimes as equivalent formulations of boundary value problems for ordinary differential equations, and many problems of mathematical physics are modelled with Fredholm integral equations with different kernels (see, for example, [1,2,3]).
The equation may be written
$$u - Ku = g \quad (2)$$
by defining the operator $K : X \to X$, $K(u)(\cdot) = \int_a^b k(\cdot, s)\, u(s)\, ds$.
If for the kernel $k(t,s)$ the operator $K$ is bounded, a sufficient condition to guarantee the existence and uniqueness of a solution of Equation (2) is that $\|K\| < 1$ (see [4], Theorem 2.14, p. 23).
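To fix ideas, the action of $K$ can be evaluated numerically by quadrature. The following sketch is an illustration only (not part of the paper's method): the kernel $k(t,s) = \frac{1}{2}e^{st}$ is borrowed from Example 1 below, $u \equiv 1$ is an arbitrary test function, and the quadrature value is checked against the closed form $(e^t - 1)/(2t)$.

```python
import numpy as np

def apply_K(k, u, t, m=40):
    # (K u)(t) = ∫_0^1 k(t, s) u(s) ds via m-point Gauss–Legendre on [0, 1]
    x, w = np.polynomial.legendre.leggauss(m)
    s = 0.5 * (x + 1.0)
    w = 0.5 * w
    return float(np.sum(w * k(t, s) * u(s)))

# sample kernel (that of Example 1 below) and u ≡ 1, for which
# (K u)(t) = (e^t - 1) / (2 t)
k = lambda t, s: 0.5 * np.exp(s * t)
u_one = lambda s: np.ones_like(s)
val = apply_K(k, u_one, 1.0)
err = abs(val - (np.e - 1.0) / 2.0)
```

For a smooth kernel such as this one, a 40-point Gauss rule reproduces the integral to machine precision.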
Petrov–Galerkin is a projection method often proposed to find numerical approximate solutions to this type of integral equation. The idea is to choose appropriate sequences of finite-dimensional subspaces of $X$, $\{X_n\}_{n\in\mathbb{N}}$ and $\{Y_n\}_{n\in\mathbb{N}}$, the trial and test subspaces respectively, onto which the unknown $u$ and the data $g$ are projected.
In the case that $X$ is a Hilbert space with inner product $\langle \cdot, \cdot \rangle$, the Petrov–Galerkin method looks for $u_n \in X_n$ such that
$$\langle u_n - K u_n, v \rangle = \langle g, v \rangle \quad \forall v \in Y_n \quad (3)$$
and, as $X_n$ and $Y_n$ are subspaces of dimension $d_n < \infty$, solving Equation (3) reduces to solving a linear algebraic system of equations represented by a $d_n \times d_n$ matrix.
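For a concrete sense of how Equation (3) becomes a $d_n \times d_n$ system, here is a minimal Python sketch for a toy degenerate kernel $k(t,s) = ts$ with manufactured data $g(t) = 2t/3$, so that the exact solution $u(t) = t$ lies in the trial space. The monomial bases $\{1, t\}$, with $X_n = Y_n$ (the classical Galerkin special case of Petrov–Galerkin), are chosen here purely for illustration; they are not the paper's spaces.

```python
import numpy as np

# 20-point Gauss–Legendre rule mapped to [0, 1]
x, w = np.polynomial.legendre.leggauss(20)
s_q, w_q = 0.5 * (x + 1.0), 0.5 * w

k = lambda t, s: t * s                    # toy degenerate kernel (illustration)
g = lambda t: 2.0 * t / 3.0               # manufactured so that u(t) = t
trial_fns = [lambda t: np.ones_like(t), lambda t: t]   # basis of X_n
test_fns  = [lambda t: np.ones_like(t), lambda t: t]   # basis of Y_n (= X_n here)

def K(phi):
    # t ↦ ∫_0^1 k(t, σ) φ(σ) dσ by quadrature, evaluated pointwise
    return lambda t: np.array([np.sum(w_q * k(ti, s_q) * phi(s_q))
                               for ti in np.atleast_1d(t)])

def inner(f1, f2):
    # L2 inner product on [0, 1] by quadrature
    return float(np.sum(w_q * f1(s_q) * f2(s_q)))

d = len(trial_fns)
# A[i][j] = <φ_j - Kφ_j, ψ_i>,  b[i] = <g, ψ_i>   (Equation (3))
A = np.array([[inner(lambda t, j=j: trial_fns[j](t) - K(trial_fns[j])(t),
                     test_fns[i]) for j in range(d)] for i in range(d)])
b = np.array([inner(g, test_fns[i]) for i in range(d)])
c = np.linalg.solve(A, b)                 # coefficients of u_n in the trial basis

u_n = lambda t: sum(cj * phi(t) for cj, phi in zip(c, trial_fns))
ts = np.array([0.0, 0.3, 0.7, 1.0])
max_err = float(np.max(np.abs(u_n(ts) - ts)))   # u_n should recover u(t) = t
```

Since $u \in X_n$ here, the projection recovers the exact solution up to quadrature round-off: $c \approx (0, 1)$.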
In [5] it is proved that, if $K : X \to X$ is a compact linear operator not having 1 as an eigenvalue, and the pair $\{X_n, Y_n\}$ is a regular pair—a concept to be defined in the next section—Equation (3) has a unique solution $u_n$ which satisfies
$$\|u - u_n\|_X \le C \cdot \inf_{x \in X_n} \|x - u\|_X \quad (4)$$
where the constant $C$ does not depend on $n$.
Solvability and numerical stability of the approximation scheme are, in this way, assured, and the accuracy of the approximation $u_n$ to the unique solution $u$ of Equation (2) does not depend formally on $Y_n$, as can be noted in Equation (4). The goal is to choose test function subspaces $Y_n$ that are easy to handle, while the quality of convergence of the method is preserved.
In addition, the convergence can even be improved by means of an iteration of the method: once the approximations $u_n \in X_n$ are obtained, a new sequence of approximate solutions $u_n^* \in X$ can be built by means of a simple procedure (see [6,7,8]):
$$u_n^* = g + K u_n \quad (5)$$
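The effect of this iteration can be observed in a small experiment. The sketch below is our own illustration, not the paper's scheme: it uses the kernel $\frac{1}{2}e^{st}$ of Example 1 below, a manufactured solution $u(t) = e^t$, and a Galerkin special case ($X_n = Y_n$) with piecewise-constant functions on four equal subintervals; it then compares the $L^2$ errors of $u_n$ and of $u_n^* = g + K u_n$.

```python
import numpy as np

N = 4                                    # number of equal subintervals
h = 1.0 / N
k = lambda t, s: 0.5 * np.exp(s * t)     # sample kernel, ||K|| < 1
u_exact = lambda t: np.exp(t)            # manufactured exact solution

# 10-point Gauss rule per subinterval
x, w = np.polynomial.legendre.leggauss(10)

def nodes(a, b):
    return a + (b - a) * 0.5 * (x + 1.0), (b - a) * 0.5 * w

# composite quadrature rule on [0, 1]
sq = np.concatenate([nodes(i * h, (i + 1) * h)[0] for i in range(N)])
wq = np.concatenate([nodes(i * h, (i + 1) * h)[1] for i in range(N)])

# g = u - Ku, evaluated by quadrature
g = lambda t: u_exact(t) - np.array(
    [np.sum(wq * k(ti, sq) * u_exact(sq)) for ti in np.atleast_1d(t)])

# Galerkin system for the piecewise-constant coefficients of u_n
A = np.zeros((N, N)); b = np.zeros(N)
for i in range(N):
    ti, wi = nodes(i * h, (i + 1) * h)
    b[i] = np.sum(wi * g(ti))
    for j in range(N):
        sj, wj = nodes(j * h, (j + 1) * h)
        A[i, j] = (h if i == j else 0.0) - np.sum(
            wi[:, None] * wj[None, :] * k(ti[:, None], sj[None, :]))
c = np.linalg.solve(A, b)

u_n = lambda t: c[np.minimum((np.atleast_1d(t) / h).astype(int), N - 1)]
u_star = lambda t: g(t) + np.array(
    [np.sum(wq * k(ti, sq) * u_n(sq)) for ti in np.atleast_1d(t)])

err_n    = float(np.sqrt(np.sum(wq * (u_n(sq)    - u_exact(sq)) ** 2)))
err_star = float(np.sqrt(np.sum(wq * (u_star(sq) - u_exact(sq)) ** 2)))
```

Even on this coarse mesh, the single iteration reduces the $L^2$ error by more than the plain projection can, which is the superconvergence effect discussed below.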
In this work we choose pairs of simple subspaces $\{X_n, Y_n\}$ generated by Legendre polynomials and show the goodness of the approximations in two numerical examples with known solution, one of them having a singular kernel. We then improve the convergence by means of an iteration of the method and show why the approximation is better, even for small values of $n \in \mathbb{N}$.

2. Method

Let $(X, \langle \cdot, \cdot \rangle)$ be a Hilbert space, $\|\cdot\|$ the associated norm, and $K : X \to X$ a compact linear operator. It is shown in [4] that, if $\|K\| < 1$, there exists a solution $u \in X$ to Equation (2) for a given function $g \in X$, and it is unique. We are interested in looking for a good approximation to the $u \in X$ satisfying Equation (2).
For each $n \in \mathbb{N}_0$ let us consider subspaces $X_n \subset X$, $Y_n \subset X$, with $\dim(X_n) = \dim(Y_n) = d_n < \infty$. The Petrov–Galerkin method for Equation (2) is a numerical method to find $u_n \in X_n$ satisfying Equation (3).
For the method to be useful, it is necessary to establish conditions under which Equation (3) has a unique solution $u_n \in X_n$ and $\lim_{n\to\infty} \|u - u_n\|_X = 0$, where $u$ is the unique solution of Equation (2).
It is easy to show that the condition
$$X_n \cap Y_n^{\perp} = \{0\} \quad (6)$$
ensures the existence of a unique solution $u_n \in X_n$ for Equation (3). From [4] (p. 243), convergence can be expected only if
$$\forall x \in X, \ \exists \{x_n\}_{n\in\mathbb{N}} \subset X_n : \ \lim_{n\to\infty} x_n = x \quad (7)$$
so, from now on, the sequences of subspaces $\{X_n\}_{n\in\mathbb{N}}$ and $\{Y_n\}_{n\in\mathbb{N}}$ are both chosen so as to verify this denseness condition.
Following [5], and denoting by $\{X_n, Y_n\}$ the sequences of subspaces, the pair $\{X_n, Y_n\}$ is said to be “regular” if there exists a linear surjective operator $\Pi_n : X_n \to Y_n$ satisfying
  • (i) $\exists\, C_1 \in \mathbb{R} \ / \ \forall x \in X_n : \ \|x\|^2 \le C_1 \langle x, \Pi_n x \rangle$
  • (ii) $\exists\, C_2 \in \mathbb{R} \ / \ \forall x \in X_n : \ \|\Pi_n x\| \le C_2 \|x\|$
It is easy to show that the surjectivity of $\Pi_n$ and condition (i) assure the condition of Equation (6).
From [5] (p. 411), the following theorem summarizes the conditions for the existence and uniqueness of the solutions of Equation (3) and their convergence to the solution of Equation (2):
Theorem 1.
Let $X$ be a Hilbert space and $K : X \to X$ a compact linear operator not having 1 as an eigenvalue. Suppose $X_n$ and $Y_n$ are finite-dimensional subspaces of $X$, with $\dim(X_n) = \dim(Y_n)$, such that $\{X_n, Y_n\}$ is a regular pair and, for each $x \in X$, there exist sequences $\{x_n\}_{n\in\mathbb{N}} \subset X_n$ and $\{y_n\}_{n\in\mathbb{N}} \subset Y_n$ with $\lim_{n\to\infty} x_n = x$ and $\lim_{n\to\infty} y_n = x$. Then there exists $n_0 \in \mathbb{N}$ such that, for $n > n_0$, the equation $\langle u_n - K u_n, v \rangle = \langle g, v \rangle \ \forall v \in Y_n$ has a unique solution $u_n \in X_n$ for any given $g \in X$, which satisfies $\|u - u_n\| \le C \cdot \inf_{x \in X_n} \|x - u\|$, where $u \in X$ is the unique solution of $u - Ku = g$ and $C$ is a constant not depending on $n$.
From [9], the characterization of a regular pair is simple by means of the so-called “correlation matrix”. Let $\{\varphi_i^n, \ i = 1, \dots, d_n\}$ and $\{\psi_j^n, \ j = 1, \dots, d_n\}$ be bases of $X_n$ and $Y_n$, respectively, and define the $d_n \times d_n$ matrices $[G(X_n)]_{ij} \equiv \langle \varphi_i^n, \varphi_j^n \rangle$, $[G(Y_n)]_{ij} \equiv \langle \psi_i^n, \psi_j^n \rangle$, the correlation matrix $[G(X_n, Y_n)]_{ij} \equiv \langle \varphi_i^n, \psi_j^n \rangle$, and $[G^+(X_n, Y_n)]_{ij} = \frac{1}{2}\left([G(X_n, Y_n)]_{ij} + [G(X_n, Y_n)]_{ji}\right)$, for $i, j = 1, \dots, d_n$.
Note that, in the real case, $G(X_n)$ and $G(Y_n)$ are positive definite and $G^+(X_n, Y_n)$ is the symmetric part of the correlation matrix. We have proven (see [10]) the following
Proposition 1.
If $G^+(X_n, Y_n)$ is positive definite, then $\{X_n, Y_n\}$ is a regular pair.
For conciseness, we assume $[a,b] = [0,1]$ from here on, and consider $X = L^2([0,1])$.
For the interval $[0,1]$, $S_n^m$ is the subspace of polynomials of degree less than $m$ on each subinterval $I_{j,n} = \left(\frac{j}{2^n}, \frac{j+1}{2^n}\right)$, $j = 0, 1, \dots, 2^n - 1$; $\dim(S_n^m) = m\, 2^n$ and $S_0^m \subset S_1^m \subset \cdots$, with $\bigcup_{n=0}^{\infty} S_n^m \equiv S^m$. Moreover $\overline{S^m} = L^2([0,1])$, since every continuous function with compact support on $[0,1]$ can be approximated by step functions on subintervals of the form $\left(\frac{j}{2^n}, \frac{j+1}{2^n}\right)$, $j = 0, 1, \dots, 2^n - 1$, and such functions are dense in $L^2([0,1])$. The denseness condition of Equation (7) is thus satisfied.
As the basis of $S_n^m$ we choose Legendre polynomials of degree less than $m$, adapted to each of the subintervals $I_{j,n}$: $S_n^m = \mathrm{span}\{p_i^{j,n}, \ j = 0, 1, \dots, 2^n - 1, \ i = 0, 1, \dots, m-1\}$ with $p_i^{j,n}(x) = Q_i^{j,n}(x)/\|Q_i^{j,n}\|$, $Q_i^{j,n}(x) = L_i\!\left(2^n\left(2x - \frac{2j+1}{2^n}\right)\right) \cdot \chi_{I_{j,n}}(x)$, where $L_i$ is the Legendre polynomial of degree $i$ on $[-1,1]$ and $\chi_{I_{j,n}}$ is the characteristic function of the subinterval $I_{j,n}$.
We rename $q_0^{j,n} \equiv p_0^{j,n+1}$ to simplify the notation and choose the sequences of subspaces $X_n = S_n^2 = \mathrm{span}\{p_0^{0,n}, p_1^{0,n}, p_0^{1,n}, p_1^{1,n}, \dots, p_0^{2^n-1,n}, p_1^{2^n-1,n}\}$ and $Y_n = S_{n+1}^1 = \mathrm{span}\{q_0^{0,n}, q_0^{1,n}, \dots, q_0^{2^{n+1}-1,n}\}$, with $\dim(X_n) = 2 \cdot 2^n = 2^{n+1} = 1 \cdot 2^{n+1} = \dim(Y_n) = d_n$.
Note that the condition of Equation (6), $X_n \cap Y_n^{\perp} = \{0\}$, which assures the uniqueness of the solution of Equation (3) for each $n$, is fulfilled.
Indeed, suppose that some $q_0^{j,n} \in Y_n$, for $j$ between 0 and $2^{n+1}-1$, satisfies $q_0^{j,n} \perp p_0^{i,n}$ and $q_0^{j,n} \perp p_1^{i,n}$ for every $i = 0, \dots, 2^n - 1$; then $\int_{j/2^{n+1}}^{(j+1)/2^{n+1}} q_0^{j,n} \cdot p_0^{j/2,n}\, dx = \frac{q_0^{j,n} \cdot p_0^{j/2,n}}{2^{n+1}} = 0$ if $j$ is even, or $\int_{j/2^{n+1}}^{(j+1)/2^{n+1}} q_0^{j,n} \cdot p_0^{(j-1)/2,n}\, dx = \frac{q_0^{j,n} \cdot p_0^{(j-1)/2,n}}{2^{n+1}} = 0$ if $j$ is odd (the integrands being products of nonzero constants on the support of $q_0^{j,n}$), which is impossible, since $q_0^{j,n} \neq 0$ and $p_0^{i,n} \neq 0$ for every $j$ and every $i$.
Renaming the elements of the bases as $\varphi_i^n \equiv p_0^{(i-1)/2,n}$ for $i$ odd, $\varphi_i^n \equiv p_1^{(i-2)/2,n}$ for $i$ even, and $\psi_j^n \equiv q_0^{j-1,n}$, it is easy to show that $\{X_n, Y_n\}$ is a regular pair, since $G^+(X_n, Y_n)$ is a $2^{n+1} \times 2^{n+1}$ matrix with positive definite $2 \times 2$ blocks on its principal diagonal and zeros everywhere else (for details, see [10]).
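This block structure can be verified numerically. The sketch below rebuilds the two bases for a sample level ($n = 2$ is an arbitrary choice; the normalizations follow our reading of the Legendre construction above), assembles $G(X_n, Y_n)$ exactly with a midpoint rule per level-$(n+1)$ subinterval (exact here, since each integrand is at most linear on every such subinterval), and checks that $G^+(X_n, Y_n)$ is positive definite.

```python
import numpy as np

n = 2                                     # sample level (arbitrary)
dn = 2 ** (n + 1)                         # dim X_n = dim Y_n
hn, hn1 = 2.0 ** -n, 2.0 ** -(n + 1)

def phi(i, t):                            # basis of X_n = S_n^2, i = 1..dn
    j = (i - 1) // 2                      # dyadic subinterval of level n
    ind = ((t >= j * hn) & (t < (j + 1) * hn)).astype(float)
    if i % 2 == 1:                        # normalized constant piece on I_{j,n}
        return np.sqrt(2.0 ** n) * ind
    # normalized linear Legendre piece on I_{j,n}
    return np.sqrt(3.0 * 2.0 ** n) * (2.0 ** (n + 1) * t - (2 * j + 1)) * ind

def psi(j, t):                            # basis of Y_n = S_{n+1}^1, j = 1..dn
    ind = ((t >= (j - 1) * hn1) & (t < j * hn1)).astype(float)
    return np.sqrt(2.0 ** (n + 1)) * ind

# midpoint rule per level-(n+1) subinterval (exact for these integrands)
mid = (np.arange(2 ** (n + 1)) + 0.5) * hn1
G = np.array([[np.sum(hn1 * phi(i, mid) * psi(j, mid))
               for j in range(1, dn + 1)] for i in range(1, dn + 1)])
G_plus = 0.5 * (G + G.T)                  # symmetric part of correlation matrix
min_eig = float(np.min(np.linalg.eigvalsh(G_plus)))
```

A strictly positive smallest eigenvalue of $G^+$ confirms, via Proposition 1, that the pair is regular at this level.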
Once the approximations $u_n$ are obtained, an almost natural iteration procedure can be used to obtain new approximations of the true solution $u$. Since the equation being solved is $u - Ku = g$, or $u = g + Ku$, we can define $u_n^* = g + K u_n$. This first iteration, applied to the Galerkin method, has been studied since the 1970s because, under appropriate conditions on $K$ and $g$, it reveals an interesting phenomenon called “superconvergence” (see [6,11], for instance), as the order of convergence can be notably improved.
In [11] (p. 42), the existence of a unique solution $u_n^*$ of Equation (5) and the improvement of the order of convergence of the iterated approximation are guaranteed for any projection method.
In [5] (p. 419), the superconvergence of the Petrov–Galerkin scheme applied to Fredholm equations of the second kind is explained and, under the same conditions of the Theorem 1 we have just enunciated, a theorem establishes that $u_n^*$ satisfies
$$\|u - u_n^*\|_2 \le C \cdot \operatorname{ess\,sup}_{s \in [0,1]} \left[ \inf_{\psi \in Y_n} \|k(\cdot,s) - \psi\|_2 \right] \cdot \inf_{x \in X_n} \|x - u\|_2 \quad (8)$$
for $u$ the unique solution in $L^2([0,1])$ of Equation (1), showing that the improvement of the order of convergence under the iteration procedure is due to the approximation of the kernel $k$ by elements $\psi$ of the test subspace $Y_n$. In our work, the elements of the test subspaces $Y_n = S_{n+1}^1$ are piecewise-constant functions on the dyadic subintervals $I_{j,n+1} = \left(\frac{j}{2^{n+1}}, \frac{j+1}{2^{n+1}}\right)$, $j = 0, 1, \dots, 2^{n+1} - 1$.
We will now follow an idea from [6] (p. 67). For $f$ a Lipschitz function on an interval $I$ with Lipschitz constant $L$, let $\psi_{1/2}$ be the piecewise-constant function defined by $\psi_{1/2}(t) = f\!\left(t_i + \frac{h}{2}\right)$ for $t \in I_i = [t_i, t_i + h]$, with $I = \bigcup I_i$ a regular partition of $I$ of norm $h$.
For any $I_i$ and any $t \in I_i$: $|f(t) - \psi_{1/2}(t)| \le \frac{hL}{2}$, so $\|f - \psi_{1/2}\|_\infty \le \frac{hL}{2}$.
If the kernel $k$ satisfies that $k(\cdot,s) = k_s(\cdot)$ is a Lipschitz function with Lipschitz constant $L_s$ for each $s \in [0,1]$, then $\|k_s - \psi_{1/2}\|_2 \le \|k_s - \psi_{1/2}\|_\infty \le \frac{1}{2} \cdot \frac{1}{2^{n+1}} L_s$ and, then, $\inf_{\psi \in Y_n} \|k_s - \psi\| \le \frac{L_s}{2^{n+2}}$ for each $s \in [0,1]$ and, consequently, $\operatorname{ess\,sup}_{s \in [0,1]} \left[\inf_{\psi \in Y_n} \|k(\cdot,s) - \psi\|_2\right] \le \frac{1}{2^{n+2}} \operatorname{ess\,sup}_{s \in [0,1]} L_s$.
Moreover, if $\operatorname{ess\,sup}_{s \in [0,1]} L_s < \infty$, from Equation (8), $\|u - u_n^*\|_2 \le C \cdot \frac{1}{2^{n+2}} \operatorname{ess\,sup}_{s \in [0,1]} L_s \cdot \inf_{x \in X_n} \|x - u\|_2$, and the approximation is actually improved.
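The bound $\|f - \psi_{1/2}\|_\infty \le \frac{hL}{2}$ is easy to check numerically. In the sketch below, $f = \sin$ on $[0,1]$ (for which $L = 1$ works) and $m = 8$ subintervals are arbitrary sample choices, not taken from the paper.

```python
import numpy as np

f = np.sin                               # sample Lipschitz f; |f'| ≤ 1 on [0, 1]
L = 1.0
m = 8                                    # number of subintervals, h = 1/m
h = 1.0 / m

def psi_half(t):
    # piecewise-constant midpoint interpolant ψ_{1/2}
    i = np.minimum((t / h).astype(int), m - 1)
    return f((i + 0.5) * h)

t_grid = np.linspace(0.0, 1.0, 10001)
sup_err = float(np.max(np.abs(f(t_grid) - psi_half(t_grid))))
bound = h * L / 2.0                      # the bound hL/2 from the text
```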

3. Results

We will offer two numerical examples showing the goodness of the Petrov–Galerkin and iterated Petrov–Galerkin methods with regular pairs, applied to Fredholm integral equations of the second kind: one with a continuous kernel, and the other with a weakly singular kernel ([4], p. 29; [12], p. 7).
The kernel $k : [a,b] \times [a,b] \to \mathbb{R}$ is said to be weakly singular if it verifies
$$|k(s,t)| \le M\, |s-t|^{\alpha - 1} \quad \forall (s,t) \in [a,b] \times [a,b], \ s \neq t \quad (9)$$
with $0 < \alpha < 1$ and $M \in \mathbb{R}$.
Whether $k$ is a continuous kernel or a weakly singular one, $K : L^2([0,1]) \to L^2([0,1])$ is a compact operator (see [4], p. 28, Theorem 2.28; and [13], p. 582, Theorem 1, respectively).
We have chosen “regular pairs” of subspaces, and orthogonal bases for them, reducing the difficulty of the calculations.
We worked with $X_n = S_n^2 = \mathrm{span}\{\varphi_i^n, \ i = 1, \dots, 2^{n+1}\}$ and $Y_n = S_{n+1}^1 = \mathrm{span}\{\psi_j^n, \ j = 1, \dots, 2^{n+1}\}$, with $\varphi_i^n(x) = \sqrt{2^n}\, \chi_{I_{(i-1)/2,n}}(x)$ for $i$ odd, $\varphi_i^n(x) = \sqrt{3 \cdot 2^n}\, (2^{n+1}x - i + 1)\, \chi_{I_{(i-2)/2,n}}(x)$ for $i$ even, and $\psi_j^n(x) = \sqrt{2^{n+1}}\, \chi_{I_{j-1,n+1}}(x)$.
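A quick numerical check that these $\varphi_i^n$ are indeed orthonormal in $L^2([0,1])$ (sample level $n = 1$, an arbitrary choice; the 3-point Gauss rule per dyadic subinterval is exact for the piecewise-quadratic integrands):

```python
import numpy as np

n = 1                                     # sample level (arbitrary)
dn = 2 ** (n + 1)
hn = 2.0 ** -n
gx, gw = np.polynomial.legendre.leggauss(3)

# 3-point Gauss rule per dyadic subinterval of level n
tq, wq = [], []
for j in range(2 ** n):
    a = j * hn
    tq.append(a + hn * 0.5 * (gx + 1.0))
    wq.append(hn * 0.5 * gw)
tq, wq = np.concatenate(tq), np.concatenate(wq)

def phi(i, t):
    # trial basis as written in the text (normalized Legendre pieces)
    j = (i - 1) // 2
    ind = ((t >= j * hn) & (t < (j + 1) * hn)).astype(float)
    if i % 2 == 1:
        return np.sqrt(2.0 ** n) * ind
    return np.sqrt(3.0 * 2.0 ** n) * (2.0 ** (n + 1) * t - (2 * j + 1)) * ind

gram = np.array([[np.sum(wq * phi(i, tq) * phi(j, tq))
                  for j in range(1, dn + 1)] for i in range(1, dn + 1)])
dev = float(np.max(np.abs(gram - np.eye(dn))))   # distance from the identity
```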
Note that the trial space X n is generated by piecewise constant and piecewise linear orthogonal functions; in [14], only piecewise linear (not orthogonal) functions are used.

3.1. Numerical Examples

3.1.1. Example 1

The equation
$$u(t) - \frac{1}{2}\int_0^1 e^{st}\, u(s)\, ds = g(t), \quad t \in [0,1] \quad (10)$$
with $g(t) = t e^t - \dfrac{1 + t\, e^{t+1}}{2(1+t)^2}$, has the exact solution $u(t) = t e^t$.
The linear operator $K : L^2([0,1]) \to L^2([0,1])$, $K[u](t) = \frac{1}{2}\int_0^1 e^{st} u(s)\, ds$, is compact because the kernel $k(t,s) = \frac{1}{2}e^{st}$ is continuous, and $\|K\| \le \left(\int_0^1\!\!\int_0^1 k^2(t,s)\, ds\, dt\right)^{1/2} < 1$; thus 1 is not an eigenvalue of $K$ and convergence of the Petrov–Galerkin method to the (unique) exact solution is guaranteed.
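The Hilbert–Schmidt bound can be confirmed numerically. The following sketch evaluates $\left(\int_0^1\!\int_0^1 k^2\right)^{1/2}$ for $k(t,s) = \frac{1}{2}e^{st}$ by tensor-product Gauss quadrature (the value $\approx 0.68$ is our own computation, not a figure quoted from the paper).

```python
import numpy as np

x, w = np.polynomial.legendre.leggauss(30)
t, wt = 0.5 * (x + 1.0), 0.5 * w          # Gauss rule on [0, 1]
k_sq = (0.5 * np.exp(t[:, None] * t[None, :])) ** 2
hs2 = float(np.sum(wt[:, None] * wt[None, :] * k_sq))   # ∫∫ k² dt ds
hs_norm = float(np.sqrt(hs2))             # Hilbert–Schmidt norm, ≈ 0.68 < 1
```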
In Figure 1a we plot the exact solution together with the approximations $u_0$, $u_1$, $u_2$ and $u_3$. The quadratic errors with respect to the exact solution $u$, $\varepsilon_n = \|u - u_n\|_2$, are, respectively, $\varepsilon_0 \approx 0.160157$, $\varepsilon_1 \approx 0.043244$, $\varepsilon_2 \approx 0.011036$ and $\varepsilon_3 \approx 0.002773$.
In Figure 1b, the plots of the exact solution together with $u_1^*$, $u_2^*$ and $u_3^*$ are shown. The quadratic errors with respect to $t e^t$ are, in this case, $\varepsilon_1^* \approx 0.005657$, $\varepsilon_2^* \approx 0.000849$ and $\varepsilon_3^* \approx 0.000139$.
Note that the plots of the iterated approximations and the real solution are indistinguishable.
All the approximations were obtained by means of algorithms designed ad hoc and implemented in Wolfram Mathematica® 9.
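As an independent check, the scheme of Example 1 can be re-implemented in a few lines. The sketch below is our own reconstruction, not the authors' Mathematica code (the error figures quoted in the text are theirs): it assembles $\langle \varphi_j - K\varphi_j, \psi_i \rangle$ for the bases above, solves the system, and compares the plain and iterated $L^2$ errors for $n = 0, 1$.

```python
import numpy as np

k = lambda t, s: 0.5 * np.exp(s * t)
u_ex = lambda t: t * np.exp(t)
g = lambda t: t * np.exp(t) - (1.0 + t * np.exp(t + 1.0)) / (2.0 * (1.0 + t) ** 2)

gx, gw = np.polynomial.legendre.leggauss(12)

def panel_rule(m):
    # composite 12-point Gauss rule on m equal subintervals of [0, 1]
    h = 1.0 / m
    t = np.concatenate([j * h + h * 0.5 * (gx + 1.0) for j in range(m)])
    w = np.concatenate([h * 0.5 * gw for _ in range(m)])
    return t, w

def solve_pg(n):
    dn = 2 ** (n + 1)
    hn, hn1 = 2.0 ** -n, 2.0 ** -(n + 1)
    tq, wq = panel_rule(2 ** (n + 2))     # panels aligned with both dyadic levels

    def phi(i, t):                        # trial basis of X_n = S_n^2
        j = (i - 1) // 2
        ind = ((t >= j * hn) & (t < (j + 1) * hn)).astype(float)
        if i % 2 == 1:
            return np.sqrt(2.0 ** n) * ind
        return np.sqrt(3.0 * 2.0 ** n) * (2.0 ** (n + 1) * t - (2 * j + 1)) * ind

    def psi(j, t):                        # test basis of Y_n = S_{n+1}^1
        ind = ((t >= (j - 1) * hn1) & (t < j * hn1)).astype(float)
        return np.sqrt(2.0 ** (n + 1)) * ind

    def Kf(vals):                         # (K f) at tq, from f sampled at tq
        return np.array([np.sum(wq * k(ti, tq) * vals) for ti in tq])

    P = np.array([phi(i, tq) for i in range(1, dn + 1)])
    A = np.array([[np.sum(wq * (P[j] - Kf(P[j])) * psi(i, tq))
                   for j in range(dn)] for i in range(1, dn + 1)])
    b = np.array([np.sum(wq * g(tq) * psi(i, tq)) for i in range(1, dn + 1)])
    c = np.linalg.solve(A, b)
    un = c @ P                            # u_n at the quadrature nodes
    ustar = g(tq) + Kf(un)                # iterated approximation u_n^*
    err = float(np.sqrt(np.sum(wq * (un - u_ex(tq)) ** 2)))
    err_star = float(np.sqrt(np.sum(wq * (ustar - u_ex(tq)) ** 2)))
    return err, err_star

e0, e0s = solve_pg(0)                     # plain and iterated errors, n = 0
e1, e1s = solve_pg(1)                     # plain and iterated errors, n = 1
```

The errors should decrease with $n$, and the iterated errors should beat the plain ones at each level, in line with the superconvergence bound of Equation (8).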

3.1.2. Example 2

The equation
$$u(t) - \frac{1}{3}\int_0^1 \frac{u(s)}{\sqrt[4]{|t-s|}}\, ds = g(t), \quad t \in [0,1] \quad (11)$$
with
$$g(t) = t^2 - t^3 - \frac{4}{3 \cdot 231}\left(32\, t^{11/4} + (1-t)^{3/4}\,(21 + 24t + 32t^2) - \frac{128}{5}\, t^{15/4} - \frac{(1-t)^{3/4}}{5}\,(77 + 84t + 96t^2 + 128t^3)\right),$$
has the exact solution $u(t) = t^2 - t^3$.
The kernel $k(t,s) = \frac{1}{3}\, \frac{1}{\sqrt[4]{|t-s|}}$ is weakly singular, with $\alpha = \frac{3}{4}$, according to Equation (9).
Theorem 1 of [13] (p. 582) guarantees the compactness of the operator $K : L^2([0,1]) \to L^2([0,1])$, $K[u](t) = \frac{1}{3}\int_0^1 \frac{u(s)}{\sqrt[4]{|t-s|}}\, ds$, because the necessary and sufficient conditions are verified: $\sup_{t \in [0,1]} \|k(t,\cdot)\|_2 = \sup_{t \in [0,1]} \left(\frac{1}{9}\int_0^1 \frac{ds}{\sqrt{|t-s|}}\right)^{1/2} \le \frac{2}{3} < \infty$ and $\lim_{t \to \tau} \|k(t,\cdot) - k(\tau,\cdot)\|_2 = 0$ for $\tau \in [0,1]$.
Moreover, $\|K\| \le \left(\int_0^1\!\!\int_0^1 k^2(t,s)\, ds\, dt\right)^{1/2} < 1$; thus 1 is not an eigenvalue of $K$ and convergence of the method to the (unique) exact solution is guaranteed.
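Here the double integral is elementary, since $\int_0^1 |t-s|^{-1/2}\, ds = 2(\sqrt{t} + \sqrt{1-t})$, so that $\int_0^1\!\int_0^1 k^2 = \frac{1}{9}\int_0^1 2(\sqrt{t} + \sqrt{1-t})\, dt = \frac{8}{27}$ and $\|K\| \le \sqrt{8/27} \approx 0.544 < 1$ (this evaluation is our own computation, not quoted from the paper). A short numerical confirmation:

```python
import numpy as np

x, w = np.polynomial.legendre.leggauss(50)
t, wt = 0.5 * (x + 1.0), 0.5 * w                    # Gauss rule on [0, 1]
inner_int = 2.0 * (np.sqrt(t) + np.sqrt(1.0 - t))   # ∫_0^1 |t-s|^{-1/2} ds, exact
hs2 = float(np.sum(wt * inner_int) / 9.0)           # ∫∫ k², should be 8/27
hs_norm = float(np.sqrt(hs2))
```

Doing the inner integral in closed form sidesteps the singularity on the diagonal $t = s$, which is what makes a fully numerical double integral delicate here.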
In Figure 2a we plot the exact solution together with the approximations $u_0$, $u_1$, $u_2$, $u_3$ and $u_4$ obtained with Mathematica®, and in Figure 2b, the exact solution together with $u_0^*$, $u_1^*$ and $u_2^*$, the last one being practically indistinguishable from the exact solution. By comparing quadratic errors, the improvement of the approximation can be appreciated: for $n = 2$, $\varepsilon_2 = \|u - u_2\|_2 < 0.0046$ and $\varepsilon_2^* = \|u - u_2^*\|_2 < 0.00023$.

4. Discussion

The Petrov–Galerkin method is applied by choosing appropriate subspaces onto which to project. The choice of a “regular pair” of subspaces (easily characterized by the positive definiteness of a correlation matrix), and of orthogonal bases for them, reduces the difficulty of the calculations. Iteration is shown to be a very simple way of improving convergence remarkably, and better orders of convergence can be shown even for a weakly singular kernel. It must be said that, in this second numerical example, we had difficulties with a smooth implementation of the computational algorithms because of the improper integrals involved. However, not many computations were necessary, since with $n = 2$ we already obtained very good results. In [14,15], the authors propose discrete methods to face the numerical difficulties arising from the calculation of the improper integrals involved in the case of a weakly singular kernel.
It is appropriate to point out that, in recent papers, different discrete Galerkin approaches were proposed to solve integral equations. In particular, meshless discrete Galerkin methods were successfully developed for solving Fredholm and Hammerstein integral equations for various bases. See, for example, [16] for an effective and stable method to estimate the solution to Hammerstein integral equations with free shape parameter radial basis functions, constructed on scattered points; [17,18], for effective computational meshless methods for solving Fredholm integral equations of the second kind with logarithmic and weakly singular kernels, using radial basis functions, meshless product integration and collocation methods; and [19,20], for efficient meshless methods for solving non-linear weakly singular Fredholm integral equations, combining discrete collocation method with locally supported radial basis functions and thin-plate splines.
Finally, a plausible line for our future work could be to explore and take advantage of some of these discrete methods of approximation, in order to avoid the difficulties arising from the improper integrals when solving Fredholm integral equations of the second kind with weakly singular kernels.

Author Contributions

Conceptualization, M.I.T.; formal analysis, S.A.S. and M.I.T.; investigation, S.A.S. and M.I.T.; methodology, M.I.T.; project administration, M.I.T.; software, S.A.S. and M.I.T.; supervision, M.I.T.; validation, S.A.S. and M.I.T.; visualization, S.A.S.; writing—original draft, S.A.S.; writing—review and editing, S.A.S. and M.I.T.

Funding

This research was partially supported by Universidad de Buenos Aires, UBACyT 2018-2021, 20020170100350BA.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lonseth, A. Sources and Applications of Integral Equations. SIAM Rev. 1977, 19, 241–278. [Google Scholar] [CrossRef]
  2. Kovalenko, E.V. Some approximate methods of solving integral equations of mixed problems. J. Appl. Math. Mech. 1989, 53, 85–92. [Google Scholar] [CrossRef]
  3. Assari, P. Thin plate spline Galerkin scheme for numerically solving nonlinear weakly singular Fredholm integral equations. Appl. Anal. 2018, 1–21. [Google Scholar] [CrossRef]
  4. Kress, R. Linear Integral Equations, 3rd ed.; Springer: New York, NY, USA, 2014. [Google Scholar]
  5. Chen, Z.; Xu, Y. The Petrov–Galerkin and iterated Petrov–Galerkin methods for second-kind integral equations. SIAM J. Numer. Anal. 1998, 35, 406–434. [Google Scholar] [CrossRef]
  6. Chandler, G. Superconvergence of Numerical Solutions to Second Kind Integral Equations. Ph.D. Thesis, Australian National University, Canberra, Australia, September 1979. [Google Scholar]
  7. Sloan, I.H. Improvement by Iteration for Compact Operator Equations. Math. Comput. 1976, 30, 756–764. [Google Scholar] [CrossRef]
  8. Sloan, I.H. The iterated Galerkin method for integral equations of the second kind. In Miniconference on Operator Theory and Partial Differential Equations; Centre for Mathematics and its Applications, Mathematical Sciences Institute, The Australian National University: Canberra, Australia, 1984; pp. 153–161. [Google Scholar]
  9. Chen, Z.; Micchelli, C.A.; Xu, Y. The Petrov–Galerkin method for second kind integral equations II: Multiwavelet schemes. Adv. Comput. Math. 1997, 7, 199–233. [Google Scholar] [CrossRef]
  10. Orellana Castillo, A.; Seminara, S.; Troparevsky, M.I. Regular pairs for solving Fredholm integral equations of the second kind. Poincare J. Anal. Appl. 2018, accepted, in press. [Google Scholar]
  11. Sloan, I.H. Superconvergence. In Numerical Solution of Integral Equations; Goldberg, M., Ed.; Plenum Press: New York, NY, USA, 1990; pp. 35–70. [Google Scholar]
  12. Vainikko, G. Weakly Singular Integral Equations. Available online: http://math.tkk.fi/opetus/funasov/2006/WSIElectures.pdf (accessed on 12 November 2018).
  13. Graham, I.G.; Sloan, I.H. On the Compactness of Certain Integral Operators. J. Math. Anal. Appl. 1979, 68, 580–594. [Google Scholar] [CrossRef]
  14. Chen, Z.; Xu, Y.; Zhao, J. The discrete Petrov–Galerkin method for weakly singular integral equations. J. Integral Equ. Appl. 1999, 11, 1–35. [Google Scholar] [CrossRef]
  15. Chen, Z.; Micchelli, C.A.; Xu, Y. Discrete wavelet Petrov–Galerkin methods. Adv. Comput. Math. 2002, 16, 1–28. [Google Scholar] [CrossRef]
  16. Assari, P.; Dehghan, M. A Meshless Discrete Galerkin Method Based on the Free Shape Parameter Radial Basis Functions for Solving Hammerstein Integral Equations. Numer. Math. Theory Methods Appl. 2018, 11, 540–568. [Google Scholar] [CrossRef]
  17. Assari, P.; Adibi, H.; Dehghan, M. A meshless discrete Galerkin (MDG) method for the numerical solution of integral equations with logarithmic kernels. J. Comput. Appl. Math. 2014, 267, 160–181. [Google Scholar] [CrossRef]
  18. Assari, P.; Adibi, H.; Dehghan, M. The numerical solution of weakly singular integral equations based on the meshless product integration (MPI) method with error analysis. Appl. Numer. Math. 2014, 81, 76–93. [Google Scholar] [CrossRef]
  19. Assari, P.; Dehghan, M. The numerical solution of two-dimensional logarithmic integral equations on normal domains using radial basis functions with polynomial precision. Eng. Comput. 2017, 33, 853–870. [Google Scholar] [CrossRef]
  20. Assari, P.; Asadi-Mehregan, F.; Dehghan, M. On the numerical solution of Fredholm integral equations utilizing the local radial basis function method. Int. J. Comput. Math. 2018, 1–28. [Google Scholar] [CrossRef]
Figure 1. (a) Approximations to the exact solution of Equation (10) before iteration: purple for $n = 0$, blue for $n = 1$, green for $n = 2$, yellow for $n = 3$, and, dashed in red, the exact solution. The quadratic errors with respect to the exact solution are, respectively, $\varepsilon_0 \approx 0.160157$, $\varepsilon_1 \approx 0.043244$, $\varepsilon_2 \approx 0.011036$ and $\varepsilon_3 \approx 0.002773$; (b) the approximations for $n = 1, 2$ and 3 after iteration are graphically indistinguishable from the exact solution of Equation (10). The quadratic errors with respect to $t e^t$ are, in this case, $\varepsilon_1^* \approx 0.005657$, $\varepsilon_2^* \approx 0.000849$ and $\varepsilon_3^* \approx 0.000139$.
Figure 2. (a) Approximations to the exact solution of Equation (11) before iteration: purple for $n = 0$, blue for $n = 1$, green for $n = 2$, yellow for $n = 3$, orange for $n = 4$, and, dashed in red, the exact solution; (b) the same approximations after the iteration. For $n = 2$, the approximation is graphically indistinguishable from the exact solution of Equation (11), and the quadratic error is reduced from $\varepsilon_2 = \|u - u_2\|_2 < 0.0046$ to $\varepsilon_2^* = \|u - u_2^*\|_2 < 0.00023$.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).