Open Access Article

Solving a Quadratic Riccati Differential Equation, Multi-Pantograph Delay Differential Equations, and Optimal Control Systems with Pantograph Delays

1 Department of Mathematics, Kashmar Higher Education Institute, Kashmar 3619995161, Iran
2 Department of Mathematics, University of Venda, P Bag X5050, Thohoyandou 0950, South Africa
* Author to whom correspondence should be addressed.
Axioms 2020, 9(3), 82; https://doi.org/10.3390/axioms9030082
Received: 23 May 2020 / Revised: 13 June 2020 / Accepted: 22 June 2020 / Published: 18 July 2020
(This article belongs to the Special Issue Iterative Processes for Nonlinear Problems with Applications)

Abstract

An effective algorithm for solving a quadratic Riccati differential equation (QRDE), multi-pantograph delay differential equations (MPDDEs), and optimal control systems (OCSs) with pantograph delays is presented in this paper. The technique is based on Genocchi polynomials (GPs). The properties of Genocchi polynomials are stated, and the operational matrix of derivatives is constructed. A collocation method based on this operational matrix is then applied. The findings show that the technique is accurate and simple to use.
Keywords: Genocchi polynomials; operational matrix of derivatives

1. Introduction

Riccati differential equations (RDEs) play a significant role in many fields of applied science [1]; one example is the one-dimensional static Schrödinger equation [2,3,4]. Applications of this equation are found not only in random processes, optimal control, and diffusion problems [1], but also in stochastic realization theory, network synthesis, and financial mathematics. RDEs have therefore attracted much attention. Recently, various iterative methods have been employed for the numerical and analytical solution of functional equations, such as the Adomian decomposition method (ADM) (see [5,6]), the homotopy perturbation method (HPM) [7], the variational iteration method (VIM) [8], and the differential transform method (DTM) [9].
GPs are non-orthogonal polynomials that were first applied to solve a fractional calculus problem involving a differential equation [10]. Since then, GPs have been applied successfully to different kinds of problems in numerical analysis, such as systems of Volterra integro-differential equations [11] and the fractional Klein-Gordon equation [12]; they also appear in differential topology (differential structures on spheres) and in the theory of modular forms (Eisenstein series).
In this paper, a new operational matrix of derivatives based on Genocchi polynomials is introduced to provide approximate solutions of the QRDE. The method is easy to use and straightforward, and the obtained results are satisfactory (see the numerical results).
The outline of this paper is as follows. In Section 2, some basic preliminaries are stated. The problem is explained in Section 3. Some numerical results are provided in Section 4, together with a remark about MPDDEs and OCSs. Numerical applications for solving MPDDEs are presented in Section 5. Finally, Section 6 gives a brief conclusion.

2. Some Basic Preliminaries

Genocchi numbers G_n and Genocchi polynomials G_n(x) have been studied extensively in various papers (see [13]). The classical Genocchi polynomials G_n(x) are usually defined by the generating function
\[ \frac{2 t\, e^{x t}}{e^{t} + 1} = \sum_{n=0}^{\infty} G_n(x) \frac{t^n}{n!}, \qquad |t| < \pi, \]
where
\[ G_n(x) = \sum_{k=0}^{n} \binom{n}{k} G_k\, x^{n-k}, \]
where the Genocchi numbers are
\[ G_1 = 1, \quad G_2 = -1, \quad G_3 = 0, \quad G_4 = 1, \quad G_5 = 0, \quad G_6 = -3, \quad G_7 = 0, \quad G_8 = 17, \quad G_9 = 0, \quad G_{10} = -155, \quad G_{11} = 0, \quad G_{12} = 2073, \quad G_{2n+1} = 0 \ (n \in \mathbb{N}), \]
the first few polynomials are
\[ G_1(x) = 1, \quad G_2(x) = 2x - 1, \quad G_3(x) = 3x^2 - 3x, \quad G_4(x) = 4x^3 - 6x^2 + 1, \quad G_5(x) = 5x^4 - 10x^3 + 5x, \quad G_6(x) = 6x^5 - 15x^4 + 15x^2 - 3, \]
and the following identities hold:
\[ G_n(x+1) + G_n(x) = 2 n x^{n-1}, \quad n \ge 1, \]
\[ \frac{d G_n(x)}{dx} = n\, G_{n-1}(x), \quad n \ge 1, \]
\[ \int_a^b G_n(x)\, dx = \frac{G_{n+1}(b) - G_{n+1}(a)}{n+1}, \]
\[ \int_0^1 G_n(x)\, G_m(x)\, dx = \frac{2 (-1)^n\, n!\, m!}{(n+m)!}\, G_{n+m}, \quad m, n \ge 1; \]
from (2), we have
\[ G_n(x) = n \int_0^x G_{n-1}(t)\, dt + G_n, \quad n \ge 1; \]
also, we have
\[ e^{t x} = \frac{1}{2t} \left( \frac{2 t\, e^{(1+x) t}}{e^t + 1} + \frac{2 t\, e^{x t}}{e^t + 1} \right) = \frac{1}{2t} \sum_{n=0}^{\infty} \bigl( G_n(x+1) + G_n(x) \bigr) \frac{t^n}{n!}. \]
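The generating function above fully determines the polynomials, so they can be produced symbolically. The following sketch (an illustration, not part of the paper's Maple implementation) expands the generating function with SymPy and recovers the polynomials and their basic properties:

```python
import sympy as sp

x, t = sp.symbols('x t')

def genocchi_polys(N):
    """Return [G_1(x), ..., G_N(x)] by expanding the generating function
    2*t*exp(x*t)/(exp(t) + 1) = sum_n G_n(x) t^n / n!."""
    gen = 2 * t * sp.exp(x * t) / (sp.exp(t) + 1)
    ser = sp.series(gen, t, 0, N + 1).removeO()
    # G_n(x) = n! * (coefficient of t^n in the series)
    return [sp.expand(sp.factorial(n) * ser.coeff(t, n)) for n in range(1, N + 1)]

G = genocchi_polys(6)
# G_n(0) recovers the Genocchi numbers (e.g. G_6(0) = -3),
# and d/dx G_n(x) = n G_{n-1}(x) as listed above.
```

The same routine can seed the collocation scheme of Section 3 for any truncation order N.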

3. Explanation of the Problem

Firstly, the Riccati differential equation (RDE) is considered:
\[ y'(x) = p(x) + q(x)\, y(x) + r(x)\, y^2(x), \quad x_0 \le x \le x_f, \qquad y(x_0) = \alpha, \]
where p(x), q(x), and r(x) are continuous functions, x_0, x_f, and α are given constants, and y(x) is the unknown function.
Now the collocation method, based on the Genocchi operational matrix of derivatives, is presented for solving RDEs numerically. The strategy is to use Genocchi polynomials (GPs) to approximate the solution y(x) by y_N(x) as given below:
\[ y(x) \approx y_N(x) = \sum_{n=1}^{N} c_n G_n(x) = G(x)\, C, \]
where
\[ C^T = [c_1, c_2, \ldots, c_N], \qquad G(x) = [G_1(x), G_2(x), \ldots, G_N(x)], \]
and M is the N × N operational matrix
\[ M = \begin{pmatrix} 0 & 0 & \cdots & 0 & 0 \\ 2 & 0 & \cdots & 0 & 0 \\ 0 & 3 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & N & 0 \end{pmatrix}, \]
so that
\[ G'(x)^T = M\, G^T(x), \qquad G'(x) = G(x)\, M^T, \qquad G^{(k)}(x) = G(x)\, (M^T)^k; \]
then, the k-th derivative of y_N(x) can be stated as
\[ y_N^{(k)}(x) = G^{(k)}(x)\, C = G(x)\, (M^T)^k C; \]
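For concreteness, the operational matrix can be assembled and the identity G'(x) = G(x) M^T checked directly. The sketch below (illustrative; the paper itself uses Maple) hard-codes the first five Genocchi polynomials from Section 2:

```python
import sympy as sp

x = sp.symbols('x')
N = 5
# First N Genocchi polynomials, as listed in Section 2
G = [sp.Integer(1), 2*x - 1, 3*x**2 - 3*x,
     4*x**3 - 6*x**2 + 1, 5*x**4 - 10*x**3 + 5*x]

# Operational matrix of derivatives: the only nonzero entries sit on the
# subdiagonal, M[n][n-1] = n+1 (0-based rows), encoding d/dx G_n = n G_{n-1}
M = [[0] * N for _ in range(N)]
for n in range(1, N):
    M[n][n - 1] = n + 1

# Row-wise check of G'(x)^T = M G(x)^T
for n in range(N):
    lhs = sp.diff(G[n], x)
    rhs = sum(M[n][k] * G[k] for k in range(N))
    assert sp.expand(lhs - rhs) == 0
```

Because M is strictly lower triangular, (M^T)^k vanishes for k ≥ N, mirroring the fact that the N-th derivative of a degree-(N-1) polynomial is zero.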
by Equations (3) and (6), we have
\[ G(x)\, M^T C = p(x) + q(x)\, G(x)\, C + r(x)\, \bigl( G(x)\, C \bigr)^2; \]
to obtain y_N(x), one may use the collocation points x_j = \frac{j-1}{N}, j = 1, 2, \ldots, N-1.
The resulting nonlinear algebraic equations can be solved with Maple 15.
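The scheme can also be sketched end-to-end. The block below is an illustrative reconstruction in SymPy (not the authors' Maple code): it uses the hypothetical test problem y'(x) = 1 + y^2(x), y(0) = 0, whose exact solution is tan(x), and interior collocation points x_j = j/N, which are an assumed choice:

```python
import sympy as sp

x = sp.symbols('x')
N = 4
# First N Genocchi polynomials (Section 2)
G = [sp.Integer(1), 2*x - 1, 3*x**2 - 3*x, 4*x**3 - 6*x**2 + 1]
c = sp.symbols('c1:5')

# Truncated expansion y_N(x) = sum_n c_n G_n(x)
yN = sum(ci * Gi for ci, Gi in zip(c, G))

# Residual of the hypothetical Riccati problem y' = 1 + y^2
res = sp.diff(yN, x) - (1 + yN**2)

# Collocate at interior points and append the initial condition y(0) = 0
eqs = [res.subs(x, sp.Rational(j, N)) for j in range(1, N)]
eqs.append(yN.subs(x, 0))

# Newton iteration on the resulting nonlinear algebraic system
sol = sp.nsolve(eqs, c, [0] * N)
y_approx = sp.lambdify(x, yN.subs(dict(zip(c, sol))))
res_fun = sp.lambdify(x, res.subs(dict(zip(c, sol))))
```

By construction the residual vanishes at the collocation points and the initial condition holds; away from those points the cubic tracks tan(x) only approximately.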
Lemma 1.
If y(x) ∈ C^{n+1}[0,1] and U = span{G_1(x), G_2(x), \ldots, G_N(x)}, then G(x)C is the best approximation of y(x) out of U, with
\[ \| y(x) - G(x)\, C \| \le \frac{h^{\frac{2n+3}{2}}\, R}{(n+1)!\, \sqrt{2n+3}}, \quad x \in [x_i, x_{i+1}] \subseteq [0,1], \]
where R = \max_{x \in [x_i, x_{i+1}]} | y^{(n+1)}(x) | and h = x_{i+1} - x_i.
Proof. 
(The proof appears in [14]; we restate it here.) One may set
\[ y_1(t) = f(t_i) + f'(t_i)(t - t_i) + \frac{f''(t_i)(t - t_i)^2}{2!} + \cdots + \frac{f^{(n)}(t_i)(t - t_i)^n}{n!}; \]
from Taylor's expansion, we have
\[ | f(t) - y_1(t) | \le \frac{| f^{(n+1)}(\zeta_t) |\, (t - t_i)^{n+1}}{(n+1)!}, \quad \zeta_t \in [t_i, t_{i+1}]. \]
Since C^T G(t) is the best approximation of f(t) out of U and y_1(t) ∈ U, then
\[ \| f(t) - C^T G(t) \|_2^2 \le \| f(t) - y_1(t) \|_2^2 = \int_{t_i}^{t_{i+1}} | f(s) - y_1(s) |^2\, ds \le \int_{t_i}^{t_{i+1}} \left[ \frac{f^{(n+1)}(\zeta_s)\, (s - t_i)^{n+1}}{(n+1)!} \right]^2 ds \le \frac{h^{2n+3}\, R^2}{((n+1)!)^2 (2n+3)}; \]
therefore
\[ \| f(t) - C^T G(t) \| \le \frac{h^{\frac{2n+3}{2}}\, R}{(n+1)!\, \sqrt{2n+3}}. \]
 □

4. Numerical Applications

In this section, some results are given to demonstrate the quality of the stated technique in approximating the solutions of RDEs.
Example 1.
First, the following RDE is considered (see [15]):
\[ y'(x) = 1 + 2 y(x) - y^2(x), \quad 0 \le x \le 1, \qquad y(0) = 0, \]
\[ y_{\mathrm{exact}}(x) = 1 + \sqrt{2}\, \tanh\!\left( \sqrt{2}\, x + \frac{1}{2} \log \frac{\sqrt{2} - 1}{\sqrt{2} + 1} \right), \]
for which, numerically, y_exact(0) = 2 × 10^{-10} ≈ 0. With this technique and n = 4, one achieves y_approx(x) = 0.4836486196 + 1.959259361 x + 0.1873135074 x^2 - 0.5349351716 x^3. The approximate and exact solutions for y(x) are shown in Figure 1. Table 1 shows the absolute error of this technique.
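One can verify symbolically (a quick SymPy check, not from the original paper) that the stated closed form satisfies the Riccati equation of Example 1 and vanishes at x = 0:

```python
import sympy as sp

x = sp.symbols('x')
# Claimed exact solution of y' = 1 + 2y - y^2, y(0) = 0
y = 1 + sp.sqrt(2) * sp.tanh(sp.sqrt(2) * x
                             + sp.log((sp.sqrt(2) - 1) / (sp.sqrt(2) + 1)) / 2)

# Residual of the ODE; simplifies to zero identically
residual = sp.simplify(sp.diff(y, x) - (1 + 2 * y - y**2))

# The initial value is zero up to floating-point roundoff
y0 = float(y.subs(x, 0))
```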
Example 2.
Second, the following RDE is considered (see [15]):
\[ y'(x) = 16 x^2 - 5 + 8 x\, y(x) + y^2(x), \quad 0 \le x \le 1, \qquad y(0) = 1, \qquad y_{\mathrm{exact}}(x) = 1 - 4x. \]
With this method and n = 4, one obtains y_approx(x) = 1 - 4x - 2.864375404 × 10^{-14} x^2 + 1.687538998 × 10^{-14} x^3. The approximate and exact solutions for y(x) are shown in Figure 2. Table 2 compares the absolute error of this technique with that of the technique stated in [15].
Example 3.
Third, the following RDE is considered (see [15]):
\[ y'(x) = 16 x^2 - 5 + 8 x\, y(x) + y^2(x), \quad 0 \le x \le 1, \qquad y(0) = 1, \qquad y_{\mathrm{exact}}(x) = 1 - 4x. \]
With this method and n = 4, one obtains y_approx(x) = 0.9999999998 + 1.026769538 x + 0.3911797260 x^2 + 0.3003325646 x^3. The approximate and exact solutions for y(x) are shown in Figure 3. Table 3 shows the absolute error of this technique.
Remark 1.
Delay differential equations (DDEs) include distributed delay systems. DDEs are encountered in various practical systems, such as engineering and the modeling of feeding systems (see [16]). Many researchers have used various polynomials for solving DDEs. Orthogonal functions were used for solving OCSs with time delay [17]. Chebyshev polynomials (ChPs) were also used to solve time-varying systems with distributed time delay; the technique stated in [18] is based on expanding all time functions in terms of ChPs. The Bezier technique has been used for solving DDEs and switched systems (see [15]). Using Bessel polynomials, pantograph equations were solved in [19].
Here, the following system of MPDDEs is considered
\[
\begin{aligned}
u_1'(x) &= \beta_1 u_1(x) + f_1\bigl(x, u_1(\eta_{11} x), \ldots, u_m(\eta_{1m} x)\bigr), \\
u_2'(x) &= \beta_2 u_2(x) + f_2\bigl(x, u_1(\eta_{21} x), \ldots, u_m(\eta_{2m} x)\bigr), \\
&\;\;\vdots \\
u_m'(x) &= \beta_m u_m(x) + f_m\bigl(x, u_1(\eta_{m1} x), \ldots, u_m(\eta_{mm} x)\bigr),
\end{aligned}
\]
\[ u_i(x_0) = u_{i0}, \quad i = 1, 2, \ldots, m, \qquad x_0 \le x \le x_f, \quad x_0, x_f \in \mathbb{R}, \qquad 0 < \eta_{ij} \le 1, \quad i, j = 1, 2, \ldots, m, \]
where the u_{i0} are given constants, and the f_i and β_i (i = 1, 2, \ldots, m) are given continuous functions.
MPDDEs arise in various applications, such as astrophysics, number theory, nonlinear dynamical systems (NDSs), quantum mechanics, cell growth, and probability theory on algebraic structures. Properties of the analytic solutions of MPDDEs, as well as numerical techniques, have been studied by several researchers; for example, they are treated in [20].
In this sequel, an operational matrix of derivatives based on GPs is introduced to provide approximate solutions of MPDDEs and optimal control systems with pantograph delays.
The strategy is to use GPs to approximate each solution u_i(x) by u_{iN}(x) as given below:
\[ u_i(x) \approx u_{iN}(x) = \sum_{n=1}^{N} c_n^i G_n(x) = G(x)\, C_i, \]
where
\[ C_i^T = [c_1^i, c_2^i, \ldots, c_N^i]; \]
also, G(x) satisfies Equations (4) and (5). Then, the k-th derivative of u_{iN}(x) can be stated as
\[ u_{iN}^{(k)}(x) = G^{(k)}(x)\, C_i = G(x)\, (M^T)^k C_i; \]
by Equations (8) and (6), we have
\[
\begin{aligned}
G(x)\, M^T C_1 &= \beta_1 G(x)\, C_1 + f_1\bigl(x, G(\eta_{11} x)\, C_1, \ldots, G(\eta_{1m} x)\, C_m\bigr), \\
G(x)\, M^T C_2 &= \beta_2 G(x)\, C_2 + f_2\bigl(x, G(\eta_{21} x)\, C_1, \ldots, G(\eta_{2m} x)\, C_m\bigr), \\
&\;\;\vdots \\
G(x)\, M^T C_m &= \beta_m G(x)\, C_m + f_m\bigl(x, G(\eta_{m1} x)\, C_1, \ldots, G(\eta_{mm} x)\, C_m\bigr),
\end{aligned}
\]
together with the initial conditions
\[ G(x_0)\, C_i = u_{i0}, \quad i = 1, 2, \ldots, m, \qquad x_0 \le x \le x_f, \quad 0 < \eta_{ij} \le 1, \quad i, j = 1, 2, \ldots, m; \]
to obtain u_{iN}(x), one may use the collocation points x_j = \frac{j-1}{N}, j = 1, 2, \ldots, N-1.

5. Numerical Applications for Solving MPDDEs

In this section, some findings are given to demonstrate the quality of the stated technique in approximating the solutions of MPDDEs and optimal control systems with pantograph delays.
Example 4.
Consider the following time-varying system (see [21]):
\[ \frac{du}{dx} = -\frac{5}{4}\, e^{-\frac{1}{4} x}\, u\!\left( \frac{4}{5} x \right), \quad 0 \le x \le 1, \qquad u(0) = 1, \qquad u_{\mathrm{exact}}(x) = e^{-1.25 x}. \]
With this method and n = 5, one obtains
\[ u_{\mathrm{approx}}(x) = 1 - 1.25 x + 0.775524733283677 x^2 - 0.298896140751031 x^3 + 0.0598762043673534 x^4. \]
The approximate and exact solutions for u(x) are shown in Figure 4. Table 4 shows the absolute error of this technique.
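Since Example 4 is linear in the unknown coefficients, the collocated system can be solved directly. The sketch below (an illustration in SymPy rather than the authors' Maple code, with collocation points x_j = j/N as an assumed choice) reproduces the scheme and compares it against u_exact(x) = e^{-1.25x}:

```python
import sympy as sp

x = sp.symbols('x')
N = 5
# First N Genocchi polynomials (Section 2)
G = [sp.Integer(1), 2*x - 1, 3*x**2 - 3*x,
     4*x**3 - 6*x**2 + 1, 5*x**4 - 10*x**3 + 5*x]
c = sp.symbols('c1:6')
uN = sum(ci * Gi for ci, Gi in zip(c, G))

# Residual of u'(x) = -(5/4) e^{-x/4} u(4x/5); note the pantograph
# argument 4x/5 is substituted into the truncated expansion
res = sp.diff(uN, x) + sp.Rational(5, 4) * sp.exp(-x / 4) \
      * uN.subs(x, sp.Rational(4, 5) * x)

eqs = [res.subs(x, sp.Rational(j, N)) for j in range(1, N)]
eqs.append(uN.subs(x, 0) - 1)          # initial condition u(0) = 1

sol = sp.nsolve(eqs, c, [0] * N)       # linear in c, converges immediately
u_approx = sp.lambdify(x, uN.subs(dict(zip(c, sol))))
```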
Example 5.
Consider the time-varying system described by (see [21])
\[ u'(x) = -u(x) + 0.1\, u(0.8 x) + 0.5\, u'(0.8 x) + (0.32 x - 0.5)\, e^{-0.8 x} + e^{-x}, \qquad u(0) = 0, \qquad u_{\mathrm{exact}}(x) = x\, e^{-x}. \]
With this technique and n = 4, one obtains
\[ u_{\mathrm{approx}}(x) = 1.11022302462516 \times 10^{-16} + x - 0.941756802771441 x^2 + 0.309636243942884 x^3. \]
The approximate and exact u(x) are shown in Figure 5.
Example 6.
The following optimal control system with pantograph delay is considered (see [21]):
\[ \min I = \frac{1}{2} \int_0^1 \bigl( u_1^2(x) + u_2^2(x) \bigr)\, dx, \qquad \text{s.t.} \quad \frac{d u_1}{dx} = u_1(0.5 x) + 4 u_2(x), \qquad u_1(0) = 1. \]
With this technique and n = 5, one obtains
\[ u_{1,\mathrm{approx}}(x) = 1.11022302462516 \times 10^{-16} + x - 0.941756802771441 x^2 + 0.309636243942884 x^3. \]
The approximate u_2(x) is shown in Figure 6.
Example 7.
Finally, the following two-dimensional system of pantograph equations is considered (see [22]):
\[
\begin{aligned}
u_1'(x) - u_1(x) + u_2(x) - u_1\!\left(\tfrac{x}{2}\right) - f_1(x) &= 0, \\
u_2'(x) + u_1(x) + u_2(x) + u_1\!\left(\tfrac{x}{2}\right) - f_2(x) &= 0,
\end{aligned}
\]
\[ f_1(x) = e^{-x} - e^{\frac{x}{2}}, \qquad f_2(x) = e^{x} + e^{\frac{x}{2}}, \qquad u_1(0) = 1, \quad u_2(0) = 1, \]
\[ u_{1,\mathrm{exact}}(x) = e^{x}, \qquad u_{2,\mathrm{exact}}(x) = e^{-x}. \]
With this technique and n = 3, one achieves
\[ u_{1,\mathrm{approx}}(x) = 1 + 0.876603255540951 x + 0.841678572918099 x^2, \qquad u_{2,\mathrm{approx}}(x) = 1 - 0.941756802371442 x + 0.309636243542884 x^2. \]
The approximate and exact u_1(x) and u_2(x) are shown in Figure 7 and Figure 8. Table 5 shows the absolute error of this technique.
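As a sanity check (a SymPy sketch; the sign pattern of the system is an assumption here, reconstructed so that the stated exact solutions actually satisfy it), one can confirm that u_1 = e^x and u_2 = e^{-x} solve the system of Example 7:

```python
import sympy as sp

x = sp.symbols('x')
u1, u2 = sp.exp(x), sp.exp(-x)            # stated exact solutions
f1 = sp.exp(-x) - sp.exp(x / 2)           # forcing terms (signs assumed)
f2 = sp.exp(x) + sp.exp(x / 2)

# Residuals of the two pantograph equations; both vanish identically
r1 = sp.diff(u1, x) - u1 + u2 - u1.subs(x, x / 2) - f1
r2 = sp.diff(u2, x) + u1 + u2 + u1.subs(x, x / 2) - f2
```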

6. Conclusions

In this paper, GPs were applied to solve RDEs, MPDDEs, and optimal control systems with pantograph delays. The stated technique is computationally attractive, and several results were included to illustrate its validity. As the tables show, the approximate solutions are more accurate than those of the cited references, and high orders of convergence were obtained: the method achieved accurate solutions even for small values of n.

Author Contributions

Conceptualization, F.G.; methodology, S.S.; software, F.G.; validation, F.G. and S.S., formal analysis, F.G.; investigation, F.G.; resources, F.G. and S.S.; data curation, S.S.; writing—original draft preparation, F.G.; writing—review and editing, S.S.; visualization, F.G.; supervision, S.S.; project administration, F.G. and S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reid, W.T. Riccati Differential Equations; Academic Press: New York, NY, USA, 1972. [Google Scholar]
  2. Dehghan, M.; Taleei, A. A compact split-step finite difference method for solving the nonlinear Schrödinger equations with constant and variable coefficients. Comput. Phys. Commun. 2010, 181, 43–51. [Google Scholar] [CrossRef]
  3. Medina-Dorantes, F.I.; Villafuerte-Segura, R.; Aguirre-Hernández, B. Controller with time-delay to stabilize first-order processes with dead-time. J. Control Eng. Appl. Inform. 2018, 20, 42–50. [Google Scholar]
  4. Villafuerte-Segura, R.; Medina-Dorantes, F.I.; Vite-Hernandez, L.; Aguirre-Hernández, B. Tuning of a time-delayed controller for a general class of second-order LTI systems with dead-time. IET Control Theory Appl. 2018, 13, 451–457. [Google Scholar] [CrossRef]
  5. Bulut, H.; Evans, D.J. On the solution of the Riccati equation by the decomposition method. Int. J. Comput. Math. 2002, 79, 103–109. [Google Scholar] [CrossRef]
  6. El-Tawil, M.A.; Bahnasawi, A.A.; Abdel-Naby, A. Solving Riccati differential equation using Adomian’s decomposition method. Appl. Math. Comput. 2004, 157, 503–514. [Google Scholar]
  7. Abbasbandy, S. Homotopy perturbation method for quadratic Riccati differential equation and comparison with Adomian’s decomposition method. Appl. Math. Comput. 2006, 172, 485–490. [Google Scholar]
  8. Geng, F.; Lin, Y.; Cui, M. A piecewise variational iteration method for Riccati differential equations. Comput. Math. Appl. 2009, 58, 2518–2522. [Google Scholar] [CrossRef]
  9. Mukherjee, S.; Roy, B. Solution of Riccati equation with variable coefficient by differential transform method. Int. J. Nonlinear Sci. 2012, 14, 251–256. [Google Scholar]
  10. Isah, A.; Phang, C. Operational Matrix Based on Genocchi Polynomials for Solution of Delay Differential Equations. Ain Shams Eng. J. 2018, 9, 2123–2128. [Google Scholar] [CrossRef]
  11. Loh, J.R.; Phang, C. A New Numerical Scheme for Solving System of Volterra Integro-differential Equation. Alexandria Eng. J. 2018, 57, 1117–1124. [Google Scholar] [CrossRef]
  12. Afshan, K.; Phang, C.; Iqbal, U. Numerical Solution of Fractional Diffusion Wave Equation and Fractional Klein-Gordon Equation via Two-Dimensional Genocchi Polynomials with a Ritz-Galerkin Method. Computation 2018, 6, 40. [Google Scholar]
  13. Isah, A.; Phang, C. Operational matrix based on Genocchi polynomials for solution of delay differential equations. Ain Shams Eng. J. 2017. [Google Scholar] [CrossRef]
  14. Isah, A.; Phang, C. New operational matrix of derivative for solving non-linear fractional differential equations via Genocchi polynomials. J. King Saud Univ.-Sci. 2017. [Google Scholar] [CrossRef]
  15. Ghomanjani, F.; Khorram, E. Approximate solution for quadratic Riccati differential equation. J. Taibah Univ. Sci. 2017, 11, 246–250. [Google Scholar] [CrossRef]
  16. Chen, W.; Zheng, W.X. Delay-dependent robust stabilization for uncertain neutral systems with distributed delays. Automatica 2007, 43, 95–104. [Google Scholar] [CrossRef]
  17. Nazarzadeh, J. Finite Time Nonlinear Optimal Systems Solution by Spectral Methods. Ph.D. Thesis, Amirkabir University of Technology, Tehran, Iran, 1998. [Google Scholar]
  18. Esfanjani, R.M.; Nikravesh, S.K.Y. Predictive control for a class of distributed delay systems using Chebyshev polynomials. Int. J. Comput. Math. 2010, 87, 1591–1601. [Google Scholar] [CrossRef]
  19. Yüzbasi, S.; Sahin, N.; Sezer, M. A Bessel collocation method for numerical solution of generalized pantograph equations. Numer. Methods Partial Differ. Equ. 2012, 28, 1105–1123. [Google Scholar] [CrossRef]
  20. Derfel, G.A.; Vogl, F. On the asymptotics of solutions of a class of linear functional-differential equations. Eur. J. Appl. Math. 1996, 7, 511–518. [Google Scholar] [CrossRef]
  21. Ghomanjani, F.; Farahi, M.H.; Kamyad, A.V. Numerical solution of some linear optimal control systems with pantograph delays. IMA J. Math. Control Inf. 2015, 32, 225–243. [Google Scholar] [CrossRef]
  22. Komashynska, I.; Al-Smadi, M.; Al-Habahbeh, A.; Ateiwi, A. Analytical approximate solutions of systems of multi-pantograph delay differential equations using residual power-series method. Aust. J. Basic Appl. Sci. 2014, 8, 664–675. [Google Scholar]
Figure 1. The approximate and exact solution of y ( x ) for Example 1.
Figure 2. The approximate and exact solution of y ( x ) for Example 2.
Figure 3. The approximate and exact solution of y ( x ) for Example 3.
Figure 4. The approximate and exact solution of u ( x ) for Example 4.
Figure 5. The approximate and exact solution of u ( x ) for Example 5.
Figure 6. The approximate solution of u 2 ( x ) for Example 6.
Figure 7. The approximate and exact solutions of u_1(x) for Example 7.
Figure 8. The approximate and exact solutions of u_2(x) for Example 7.
Table 1. The absolute error of this method for Example 1.

x | Error of y
0.1 | 0.01576255020
0.2 | 0.01957152970
0.3 | 0.01520160000
0.4 | 0.007259874300
0.5 | 7.000000000 × 10^{-10}
0.6 | 0.003753795400
0.7 | 0.003263092300
0.8 | 3.000000000 × 10^{-10}
0.9 | 0.002664043000
1.0 | 0.0
Table 2. The absolute error of this method for Example 2.

x | Error of y for presented method | Error in [15]
0.1 | 2.695621504 × 10^{-16} | 0.000233600365141
0.2 | 1.010747042 × 10^{-15} |
0.3 | 2.122302334 × 10^{-15} | 0.00045422294912
0.4 | 3.502975687 × 10^{-15} |
0.5 | 5.051514762 × 10^{-15} | 9.375 × 10^{-11}
0.6 | 6.666667214 × 10^{-15} |
0.7 | 8.247180717 × 10^{-15} | 0.00045422275331
0.8 | 9.691802920 × 10^{-15} |
0.9 | 1.089928147 × 10^{-14} | 0.00023360043610
1.0 | 1.176836406 × 10^{-14} |
Table 3. The absolute error of this method for Example 3.

x | Error of y
0.1 | 0.001718166000
0.2 | 0.002000999000
0.3 | 0.001487207000
0.4 | 0.0006931570000
0.5 | 0.0
0.6 | 0.0003605420000
0.7 | 0.0003218950000
0.8 | 0.0
0.9 | 0.0002874910000
1.0 | 1.000000000 × 10^{-9}
Table 4. The absolute error of this method for Example 4.

x | Error of u(x)
0.1 | 0.3456378748 × 10^{-4}
0.2 | 0.00007516096767
0.3 | 0.00007725134937
0.4 | 0.4322455087 × 10^{-4}
0.5 | 5.551115123 × 10^{-16}
0.6 | 0.2074096591 × 10^{-4}
0.7 | 2.775557562 × 10^{-16}
0.8 | 0.00005314265411
0.9 | 0.00008794236230
1.0 | 3.885780586 × 10^{-16}
Table 5. The absolute error of this method for Example 7.

x | Error of u_1(x) | Error of u_2(x)
0.2 | 0.01241496400 | 0.005303336200
0.4 | 0.006514824000 | 0.002519032000
0.6 | 0.006847440000 | 0.002396669800
0.8 | 0.01441596300 | 0.004567210100
1.0 | 0.0 | 0.0