
On the Solution of Equations by Extended Discretization

1 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Karnataka 575025, India
* Author to whom correspondence should be addressed.
Computation 2020, 8(3), 69; https://doi.org/10.3390/computation8030069
Received: 4 July 2020 / Revised: 27 July 2020 / Accepted: 28 July 2020 / Published: 31 July 2020

Abstract

The method of discretization is used to solve nonlinear equations involving Banach-space-valued operators using Lipschitz or Hölder constants. However, these constants cannot always be found. That is why we present results using $\omega$-continuity conditions on the Fréchet derivative of the operator involved. In this way, we extend the applicability of the discretization technique. It turns out that when the $\omega$-continuity conditions are specialized to Lipschitz or Hölder continuity, our new results also improve those in the literature. Our analysis includes tighter upper error bounds on the distances involved.
Keywords: Banach space; Lipschitz condition; Hölder condition; Newton's method; discretization

1. Introduction

Let $X, Y$ stand for Banach spaces, let $D \subseteq X$ be a convex set, and let $L(X, Y)$ denote the space of bounded linear operators acting between $X$ and $Y$.
We are interested in generating a sequence approximating a solution $x^*$ of the equation
$$F(x) = 0. \qquad (1)$$
Here $F : D \subseteq X \to Y$ is Fréchet differentiable. We resort to iterative methods to approximate $x^*$, since closed-form solutions are found only in special cases.
The Newton–Kantorovich method (NKM), defined for $x_0 \in D$ and all $n = 0, 1, 2, \ldots$ by
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n),$$
is without a doubt the most popular iterative method for generating a sequence $\{x_n\}$ such that $\lim_{n \to \infty} x_n = x^*$. Local as well as semi-local convergence results on NKM can be found in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28] and the references therein. If the Banach space is infinite dimensional, then finding the solution $x^*$ is a great challenge even if NKM is used. However, in line with the first-optimize-then-discretize approach, Newton's method is often utilized in the context of optimal control of ODEs or PDE-constrained optimization; see, e.g., [17,19,26] and the references therein. That is why we first discretize Equation (1) and then solve the finite-dimensional problem. Therefore, we define the discretized NKM (DNKM) as follows: for $x^{(0)} \in D$ and $i = 0, 1, 2, \ldots$,
$$x_n^{(i+1)} = x_n^{(i)} - A_n(x_n^{(i)}) F(x_n^{(i)}),$$
where $A_n : D \to L(Y, X)$ and $A_n(x_n^{(i)})$ approximates $F'(x_n^{(i)})^{-1}$. There is a plethora of discretization studies based on Lipschitz or Hölder constants [1,2,10,11,14]. However, there are problems in the literature where these constants cannot be found (see Example 3). Hence, the applicability of the aforementioned results is limited. That is why we present results using $\omega$-continuity conditions on the Fréchet derivative of the operator involved. In this way, we extend the applicability of the discretization technique. It turns out that when the $\omega$-continuity conditions are specialized to Lipschitz or Hölder continuity, our new results also improve those in the literature. Our analysis includes tighter upper error bounds on the distances involved.
The rest of the study contains the convergence analysis of DNKM in Section 2 and numerical examples in Section 3.
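Before turning to the analysis, a toy finite-dimensional illustration may help fix ideas. The sketch below is not from the paper: the system $F$, the starting point, and the refresh schedule are hypothetical choices. The role of $A_n$ is played by an inverse Jacobian that is recomputed only every third step, i.e., an approximation of $F'(x)^{-1}$ rather than the exact inverse at every iterate.

```python
import numpy as np

# Toy DNKM-type iteration x^(i+1) = x^(i) - A(x^(i)) F(x^(i)),
# where A(x) approximates F'(x)^{-1}.  Here A is the inverse
# Jacobian, refreshed only every third step.

def F(w):
    x, y = w
    return np.array([x**2 + y - 2.0, x + y**2 - 2.0])

def Fprime(w):
    x, y = w
    return np.array([[2.0 * x, 1.0], [1.0, 2.0 * y]])

w = np.array([1.5, 0.5])               # starting point near the solution (1, 1)
for i in range(30):
    if i % 3 == 0:
        A = np.linalg.inv(Fprime(w))   # refresh the approximate inverse
    w = w - A @ F(w)                   # DNKM-type step with frozen A

# w is now (numerically) the solution (1, 1) of this toy system
```

The frozen inverse slows each individual step from quadratic to linear convergence, but the iteration still settles on the solution, which is exactly the situation the conditions (H) below are designed to quantify.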

2. Convergence of DNKM

It is convenient for the local convergence analysis of DNKM to develop some real functions and parameters. Let $T = [0, \infty)$, and let $\omega_0 : T \to T$ be a continuous and nondecreasing function.
Suppose that the equation
$$\omega_0(t) - 1 = 0$$
has a least positive solution, denoted by $r_0$. Set $T_0 = [0, r_0)$.
Let $f, \omega : T_0 \to T$ and $c_n : T_0 \times T_0 \to T$ be continuous and nondecreasing functions for all $n = 0, 1, 2, \ldots$.
Suppose that the equation
$$h_n(t) = 0$$
has a least zero $\bar e_n \in (0, r_0)$ for all $n = 0, 1, 2, \ldots$, where
$$h_n(t) = 3 c_n(t, t) + \frac{2 (1 + f(t))^2 \int_0^1 \omega((1 - \tau) t)\, d\tau}{1 - \omega_0(t)} - 1.$$
Notice that this hypothesis implies
$$0 \le c_n < \frac{1}{3} \quad \text{for all } n = 0, 1, 2, \ldots.$$
We denote by $U(x, a)$ and $\bar U(x, a)$ the open and closed balls in $X$ with center $x \in X$ and radius $a > 0$. Set $\rho = \sup\{t \ge 0 : \bar U(x^*, t) \subseteq D\}$.
The following conditions (H) shall be used in our local convergence analysis of DNKM:
(H1)
$F : D \to Y$ is differentiable, and there exists a simple solution $x^*$ of Equation (1).
(H2)
There exists a continuous and nondecreasing function $\omega_0 : T \to T$ such that for all $x \in D$
$$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le \omega_0(\|x - x^*\|).$$
Set $D_0 = D \cap U(x^*, r_0)$, provided $r_0$ exists and is given by (4).
(H3)
There exists a continuous and nondecreasing function $\omega : T_0 \to T$ such that for all $x, y \in D_0$
$$\|F'(x^*)^{-1}(F'(y) - F'(x))\| \le \omega(\|y - x\|).$$
(H4)
The least zeros $\bar e_n \in (0, r_0)$ of the functions $h_n$ given in (5) exist for all $n = 0, 1, 2, \ldots$.
(H5)
$\bar U(x^*, r) \subseteq D$, where $r = \min\{\rho, r_0\}$.
(H6)
$$\sup_{x, y \in U(x^*, r)} \|(I - A_n(x) F'(x))(I - A_n(y) F'(y))\| \le c_n(\|x - x^*\|, \|y - x^*\|).$$
(H7)
$$\sup_{x \in U(x^*, r)} \|I - A_n(x) F'(x)\| \le f(\|x - x^*\|),$$
where the functions $c_n$ and $f$ are as defined previously, and
(H8)
The initial guess $x_n^{(0)}$ chosen from the ball $\bar U(x^*, e_n)$ is such that the first iterate $x_n^{(1)} \in \bar U(x^*, e_n)$, where $e_n = \min\{r, \bar e_n\}$.
Next, we present the main result for DNKM. In particular, we show that DNKM converges to $x^*$ as long as the approximation $A_n$ fulfills the set of conditions (H).
Theorem 1.
Suppose that conditions (H) hold. Then, the sequence $\{x_n^{(i)}\}$ is well defined, remains in $U(x^*, e_n)$, and converges to $x^*$, so that
$$\|x_n^{(i)} - x^*\| \le e_n (1 - c_n)^i \to 0 \quad \text{as } i \to \infty,$$
where $c_n = c_n(r, r)$.
Proof. 
Let $x \in U(x^*, r)$. Using (H1), (H2), and the definition of $r_0$, we have
$$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le \omega_0(\|x - x^*\|) \le \omega_0(r) < 1,$$
so by the Banach lemma on invertible operators [29], $F'(x)^{-1} \in L(Y, X)$ and
$$\|F'(x)^{-1} F'(x^*)\| \le \frac{1}{1 - \omega_0(\|x - x^*\|)}.$$
We need some estimates. Let $B_n(x) = I - A_n(x) F'(x)$. In view of (H7) and (8), we get
$$\|A_n(x) F'(x^*)\| \le \frac{1 + f(\|x - x^*\|)}{1 - \omega_0(\|x - x^*\|)}.$$
Set
$$T_n(x_n^{(i)}) := A_n(x_n^{(i)}) \int_0^1 \big( F'(x_n^{(i)} + \tau(x^* - x_n^{(i)})) - F'(x_n^{(i)}) \big)\, d\tau.$$
Then, by DNKM, we can write
$$x_n^{(i+1)} - x^* = B_n(x_n^{(i)})(x_n^{(i)} - x^*) + T_n(x_n^{(i)})(x_n^{(i)} - x^*)$$
and
$$x_n^{(i)} - x^* = B_n(x_n^{(i-1)})(x_n^{(i-1)} - x^*) + T_n(x_n^{(i-1)})(x_n^{(i-1)} - x^*).$$
Hence, we obtain
$$x_n^{(i+1)} - x^* = B_n(x_n^{(i)}) \big( B_n(x_n^{(i-1)})(x_n^{(i-1)} - x^*) + T_n(x_n^{(i-1)})(x_n^{(i-1)} - x^*) \big) + T_n(x_n^{(i)})(x_n^{(i)} - x^*).$$
Suppose that $x_n^{(i-1)}, x_n^{(i)} \in U(x^*, e_n)$ for all $i = 1, 2, \ldots$. Then, by (H3) we have
$$\left\| \int_0^1 F'(x^*)^{-1} \big( F'(x_n^{(i)} + \tau(x^* - x_n^{(i)})) - F'(x_n^{(i)}) \big)\, d\tau\, (x^* - x_n^{(i)}) \right\| \le \omega(\|x_n^{(i)} - x^*\|)\, \|x_n^{(i)} - x^*\|.$$
Using (10), (11), and (H4)–(H8), we obtain in turn that
$$\begin{aligned}
\|x_n^{(i+1)} - x^*\| \le{}& \|B_n(x_n^{(i)}) B_n(x_n^{(i-1)})\|\, \|x_n^{(i-1)} - x^*\| + \|B_n(x_n^{(i)})\|\, \|T_n(x_n^{(i-1)})(x_n^{(i-1)} - x^*)\| + \|T_n(x_n^{(i)})(x_n^{(i)} - x^*)\| \\
\le{}& c_n(\|x_n^{(i)} - x^*\|, \|x_n^{(i-1)} - x^*\|)\, \|x_n^{(i-1)} - x^*\| + \frac{f(\|x_n^{(i)} - x^*\|)\,(1 + f(\|x_n^{(i-1)} - x^*\|))\, \omega(\|x_n^{(i-1)} - x^*\|)\, \|x_n^{(i-1)} - x^*\|}{1 - \omega_0(\|x_n^{(i-1)} - x^*\|)} \\
&+ \frac{(1 + f(\|x_n^{(i)} - x^*\|))\, \omega(\|x_n^{(i)} - x^*\|)\, \|x_n^{(i)} - x^*\|}{1 - \omega_0(\|x_n^{(i)} - x^*\|)} \\
\le{}& \left[ c_n + \frac{f(\|x_n^{(i)} - x^*\|)\,(1 + f(\|x_n^{(i-1)} - x^*\|))\, \omega(\|x_n^{(i-1)} - x^*\|)}{1 - \omega_0(\|x_n^{(i-1)} - x^*\|)} + \frac{(1 + f(\|x_n^{(i)} - x^*\|))\, \omega(\|x_n^{(i)} - x^*\|)}{1 - \omega_0(\|x_n^{(i)} - x^*\|)} \right] \big( \|x_n^{(i-1)} - x^*\| + \|x_n^{(i)} - x^*\| \big) \\
\le{}& \frac{1 - c_n}{2} \big( \|x_n^{(i-1)} - x^*\| + \|x_n^{(i)} - x^*\| \big) \le (1 - c_n)\, e_n
\end{aligned}$$
(by the definition of $h_n$). Then, by (6) and (12), $x_n^{(i+1)} \in U(x^*, e_n)$.  ☐
Remark 1.
We shall provide a condition under which $x_n^{(1)} \in U(x^*, e_n)$ provided that $x_n^{(0)} \in U(x^*, e_n)$, where
$$x_n^{(1)} = x_n^{(0)} - A_n(x_n^{(0)}) F(x_n^{(0)}).$$
Proposition 1.
Under the conditions (H), suppose further that for all $v \in D$
$$\|(A_n(x_n^{(0)}) - F'(x_n^{(0)})^{-1})\, v\| \to 0 \quad \text{as } n \to \infty.$$
Then, for sufficiently large $n$,
$$x_n^{(1)} \in U(x^*, e_n).$$
Proof. 
Set $\alpha = x^{(0)} - F'(x^{(0)})^{-1} F(x^{(0)})$. Then, we get in turn that
$$\alpha - x^* = x^{(0)} - x^* - F'(x^{(0)})^{-1}\big(F(x^{(0)}) - F(x^*)\big) = F'(x^{(0)})^{-1} \int_0^1 \big( F'(x^{(0)}) - F'(x^* + \tau(x^{(0)} - x^*)) \big)\, d\tau\, (x^{(0)} - x^*).$$
Hence, by the calculations for (12) and (14),
$$\|\alpha - x^*\| \le \frac{(1 + f(\|x^{(0)} - x^*\|))\, \omega(\|x^{(0)} - x^*\|)}{1 - \omega_0(\|x^{(0)} - x^*\|)}\, \|x^{(0)} - x^*\| \le (1 - c_n)\, e_n \le e_n,$$
so $\alpha \in U(x^*, e_n)$. Moreover, we can write
$$\alpha - x_n^{(1)} = \big(A_n(x^{(0)}) - F'(x^{(0)})^{-1}\big)\, F(x^{(0)}).$$
Then, the proof is completed by (13) and (16). ☐
Remark 2.
If we consider:
  • Lipschitz case: we choose $\omega_0(t) = \ell_0 t$ and $\omega(t) = \ell t$ with $\ell_0 > 0$, $\ell > 0$.
  • Hölder case: we set $\omega_0(t) = \ell_0 t^p$ and $\omega(t) = \ell t^p$ for $p \in (0, 1]$.
Moreover, in the Lipschitz case, if $A_n(x_n) = F'(x_n)^{-1}$, then $c_n = f = 0$, and (4) and (5) give $\ell_0 t - 1 = 0$ and $\frac{\ell t}{1 - \ell_0 t} - 1 = 0$, so $r_0 = \frac{1}{\ell_0}$ and $\bar e_n = \frac{1}{\ell_0 + \ell}$. In the old cases, $r_0^{old} = \frac{1}{\ell_1}$ and $\bar e_n^{old} = \frac{1}{2 \ell_1}$, where $\ell_1$ is the Lipschitz constant on $D$. But $\ell_0 \le \ell_1$ and $\ell \le \ell_1$, since $D_0 \subseteq D$. Then, we have
$$r_0^{old} \le r_0$$
and
$$\bar e_n^{old} \le \bar e_n.$$
Hence, the results are extended even in the Lipschitz case.
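The Lipschitz specialization can be checked numerically. In this minimal sketch the constants $\ell_0 = 1$, $\ell = 1.5$, $\ell_1 = 2$ are hypothetical, chosen only so that $\ell_0 \le \ell_1$ and $\ell \le \ell_1$; bisection on $\omega_0(t) - 1 = 0$ and on $h_n(t) = 0$ (with $c_n = f = 0$) recovers the closed-form radii $1/\ell_0$ and $1/(\ell_0 + \ell)$.

```python
import numpy as np

# Hypothetical Lipschitz constants with l0 <= l1 and l <= l1,
# as required since D_0 is contained in D.
l0, l, l1 = 1.0, 1.5, 2.0

omega0 = lambda t: l0 * t
omega = lambda t: l * t

def h(t):
    # h_n(t) with c_n = f = 0; midpoint rule for the integral of
    # omega((1 - tau) t) over [0, 1] (exact for linear omega)
    tau = (np.arange(10000) + 0.5) / 10000
    integral = omega((1.0 - tau) * t).mean()
    return 2.0 * integral / (1.0 - omega0(t)) - 1.0

def least_zero(g, a, b, tol=1e-12):
    # bisection for the least zero of g, assuming one sign change on [a, b]
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (a, m) if g(a) * g(m) <= 0 else (m, b)
    return 0.5 * (a + b)

r0 = least_zero(lambda t: omega0(t) - 1.0, 0.0, 10.0)   # recovers 1/l0
e_bar = least_zero(h, 0.0, 0.999 * r0)                  # recovers 1/(l0 + l)
assert 1.0 / (2.0 * l1) <= e_bar                        # old radius is smaller
```

With these constants, $r_0 = 1$ and $\bar e_n = 0.4$, while the old radius $1/(2\ell_1) = 0.25$ is indeed smaller, matching the comparison in the remark.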

3. Numerical Examples

Example 1.
Let us consider a system of differential equations governing the motion of an object and given by
$$F_1'(x) = e^x, \quad F_2'(y) = (e - 1) y + 1, \quad F_3'(z) = 1,$$
with initial conditions $F_1(0) = F_2(0) = F_3(0) = 0$. Let $F = (F_1, F_2, F_3)^T$, $X = Y = \mathbb{R}^3$, $D = \bar U(0, 1)$, and $x^* = (0, 0, 0)^T$. Define the function $F$ on $D$ for $w = (x, y, z)^T$ by
$$F(w) = \Big( e^x - 1,\ \frac{e - 1}{2} y^2 + y,\ z \Big)^T.$$
The Fréchet derivative is given by
$$F'(w) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & (e - 1) y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Notice that using the (H) conditions, we get $\omega_0(s) = (e - 1) s$ and $\omega(s) = e^{\frac{1}{e - 1}} s$.
Then, for $c_n = f = 0$, we have $r_0 = 0.5820$ and $e_n = 0.2851$; for $c_n = f = 10^{-4}$, we have $r_0 = 0.5820$ and $e_n = 0.2850$.
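These numbers can be reproduced directly. The sketch below (the Newton loop and the starting point inside the ball are illustrative choices) evaluates the closed-form radii for $c_n = f = 0$, namely $r_0 = 1/(e - 1)$ and $e_n = 1/(e^{1/(e-1)} + e - 1)$, and verifies convergence to $x^* = 0$.

```python
import numpy as np

E = np.e

def F(w):
    x, y, z = w
    return np.array([np.exp(x) - 1.0, 0.5 * (E - 1.0) * y**2 + y, z])

def Fprime(w):
    x, y, z = w
    return np.diag([np.exp(x), (E - 1.0) * y + 1.0, 1.0])

# Newton's method started inside the ball U(x*, e_n), e_n ~ 0.2851
w = np.array([0.28, 0.28, 0.28])
for _ in range(10):
    w = w - np.linalg.solve(Fprime(w), F(w))
# w converges to x* = (0, 0, 0)^T

# radii for omega0(s) = (e - 1) s, omega(s) = e^{1/(e-1)} s, c_n = f = 0
r0 = 1.0 / (E - 1.0)                                 # ~ 0.5820
e_n = 1.0 / (np.exp(1.0 / (E - 1.0)) + E - 1.0)      # ~ 0.2851
```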
Example 2.
Let $X = Y = C[0, 1]$, the space of continuous functions defined on $[0, 1]$, equipped with the max norm, and let $D = \bar U(0, 1)$. Define the function $F$ on $D$ by
$$F(\varphi)(x) = \varphi(x) - 5 \int_0^1 x \theta\, \varphi(\theta)^3\, d\theta.$$
We have that
$$F'(\varphi)(\xi)(x) = \xi(x) - 15 \int_0^1 x \theta\, \varphi(\theta)^2 \xi(\theta)\, d\theta \quad \text{for each } \xi \in D.$$
Then, we get $x^* = 0$, so $\omega_0(t) = 7.5 t$ and $\omega(t) = 15 t$. Then, for $c_n = f = 0$, we have $r_0 = 0.1333$ and $e_n = 0.0444$; for $c_n = f = 10^{-4}$, we have $r_0 = 0.1333$ and $e_n = 0.0444$.
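A short computation confirms both rows of figures. In this sketch, bisection is applied to $\omega_0(t) - 1 = 0$ and to $h_n(t) = 0$ with the stated $\omega_0, \omega$, once with $c_n = f = 0$ and once with $c_n = f = 10^{-4}$; the integral in $h_n$ is evaluated in closed form since $\omega$ is linear.

```python
# Radii of this example from omega0(t) = 7.5 t and omega(t) = 15 t.

omega0 = lambda t: 7.5 * t

def h(t, c, f):
    integral = 7.5 * t   # = \int_0^1 15 (1 - tau) t dtau, exact for linear omega
    return 3.0 * c + 2.0 * (1.0 + f) ** 2 * integral / (1.0 - omega0(t)) - 1.0

def least_zero(g, a, b, tol=1e-12):
    # bisection for the least zero of g, assuming one sign change on [a, b]
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (a, m) if g(a) * g(m) <= 0 else (m, b)
    return 0.5 * (a + b)

r0 = least_zero(lambda t: omega0(t) - 1.0, 0.0, 1.0)          # = 1/7.5
e0 = least_zero(lambda t: h(t, 0.0, 0.0), 0.0, 0.999 * r0)    # = 1/22.5
e1 = least_zero(lambda t: h(t, 1e-4, 1e-4), 0.0, 0.999 * r0)
print(round(r0, 4), round(e0, 4), round(e1, 4))  # 0.1333 0.0444 0.0444
```

The two choices of $c_n = f$ change $e_n$ only in the fifth decimal place, which is why both rows round to $0.0444$.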
Example 3.
Let $X = Y = C[0, 1]$, $D = \bar U(x^*, 1)$, and consider the nonlinear integral equation of mixed Hammerstein type defined by
$$x(s) = \int_0^1 G(s, t) \left( x(t)^{3/2} + \frac{x(t)^2}{2} \right) dt,$$
where the kernel $G$ is the Green's function defined on the interval $[0, 1] \times [0, 1]$ by
$$G(s, t) = \begin{cases} (1 - s)\, t, & t \le s, \\ s\, (1 - t), & s \le t. \end{cases}$$
The solution $x^*(s) = 0$ is the same as the solution of Equation (1), where $F : C[0, 1] \to C[0, 1]$ is defined by
$$F(x)(s) = x(s) - \int_0^1 G(s, t) \left( x(t)^{3/2} + \frac{x(t)^2}{2} \right) dt.$$
Notice that
$$\left\| \int_0^1 G(s, t)\, dt \right\| \le \frac{1}{8}.$$
Then, we have that
$$F'(x)\, y(s) = y(s) - \int_0^1 G(s, t) \left( \frac{3}{2} x(t)^{1/2} + x(t) \right) y(t)\, dt,$$
so, since $F'(x^*(s)) = I$,
$$\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le \frac{1}{8} \left( \frac{3}{2} \|x - y\|^{1/2} + \|x - y\| \right).$$
Then, we get $\omega_0(s) = \omega(s) = \frac{1}{8} \left( \frac{3}{2} \sqrt{s} + s \right)$. Notice that no Lipschitz or Hölder constant exists for $F'$ on $D$, so the earlier results do not apply, whereas the $\omega$-continuity conditions do.
Then, for $c_n = f = 0$, we have $r_0 = 1.4752$ and $e_n = 1.1773$; for $c_n = f = 10^{-4}$, we have $r_0 = 1.4752$ and $e_n = 1.1772$.
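Since this example lives in $C[0,1]$, it is a natural candidate for the discretization viewpoint of the paper. The sketch below uses a Nyström (quadrature) discretization of $F$ at Gauss–Legendre nodes and solves the finite-dimensional system by Newton's method; the grid size, starting function $x_0(s) = 0.5$, and the clamping safeguard are illustrative choices, not part of the paper.

```python
import numpy as np

# Nystrom discretization of the Hammerstein equation: collocate at
# Gauss-Legendre nodes on [0, 1], then solve the finite-dimensional
# system by Newton's method.

def G(s, t):
    return np.where(t <= s, (1.0 - s) * t, s * (1.0 - t))

m = 50
nodes, weights = np.polynomial.legendre.leggauss(m)
t = 0.5 * (nodes + 1.0)                      # map nodes from [-1, 1] to [0, 1]
w = 0.5 * weights
K = G(t[:, None], t[None, :]) * w[None, :]   # quadrature of the kernel

def F(x):
    return x - K @ (x**1.5 + 0.5 * x**2)

def Fprime(x):
    return np.eye(m) - K * (1.5 * np.sqrt(x) + x)[None, :]

x = 0.5 * np.ones(m)          # illustrative starting function x0(s) = 0.5
for _ in range(25):
    x = x - np.linalg.solve(Fprime(x), F(x))
    x = np.maximum(x, 0.0)    # keep iterates in the domain of x^{3/2}

# x converges to the solution x*(s) = 0
```

Note that the discretized Jacobian involves $\sqrt{x(t)}$, mirroring the fact that $F'$ is only $\omega$-continuous (not Lipschitz) near $x^* = 0$; the clamp keeps the fractional power well defined along the iteration.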

Author Contributions

Conceptualization, G.I.A., M.I.A., S.R., I.K.A. and S.G.; methodology, G.I.A., M.I.A., S.R., I.K.A. and S.G.; software, G.I.A., M.I.A., S.R., I.K.A. and S.G.; validation, G.I.A., M.I.A., S.R., I.K.A. and S.G.; formal analysis, G.I.A., M.I.A., S.R., I.K.A. and S.G.; investigation, G.I.A., M.I.A., S.R., I.K.A. and S.G.; resources, G.I.A., M.I.A., S.R., I.K.A. and S.G.; data curation, G.I.A., M.I.A., S.R., I.K.A. and S.G.; writing—original draft preparation, G.I.A., M.I.A., S.R., I.K.A. and S.G.; writing—review and editing, G.I.A., M.I.A., S.R., I.K.A. and S.G.; visualization, G.I.A., M.I.A., S.R., I.K.A. and S.G.; supervision, G.I.A., M.I.A., S.R., I.K.A. and S.G.; project administration, G.I.A., M.I.A., S.R., I.K.A. and S.G.; funding acquisition, G.I.A., M.I.A., S.R., I.K.A. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amorós, C.; Argyros, I.K.; Magreñán, A.A.; Regmi, S.; González, R.; Sicilia, J.A. Extending the Applicability of Stirling’s Method. Mathematics 2020, 8, 35. [Google Scholar] [CrossRef]
  2. Amat, S.; Busquier, S.; Gutiérrez, J.M. On the local convergence of secant-type methods. Intern. J. Comput. Math. 2004, 81, 1153–1161. [Google Scholar] [CrossRef]
  3. Argyros, I.K. On an extension of the mesh-independence principle for operator equations in Banach spaces. Appl. Math. Lett. 1996, 9, 1–7. [Google Scholar] [CrossRef]
  4. Argyros, I.K. Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Company: New York, NY, USA, 2007. [Google Scholar]
  5. Argyros, I.K. Convergence and Application of Newton-type Iterations; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  6. Argyros, I.K.; George, S. Local convergence for a Chebyshev-type method in Banach space free of derivatives. Adv. Theory Nonlinear Anal. Its Appl. 2018, 2, 62–69. [Google Scholar]
  7. Argyros, I.K.; George, S. Local comparison of two sixth-order solvers using only the first derivative. Adv. Theory Nonlinear Anal. Its Appl. 2019, 3, 220–230. [Google Scholar]
  8. Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume-IV; Nova Publishers: New York, NY, USA, 2020. [Google Scholar]
  9. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s method. J. Complex. 2020, 56, 101423. [Google Scholar] [CrossRef]
  10. Argyros, I.K.; González, D. Extending the applicability of Newton’s method by improving a local result due to Dennis and Schnabel. Sema J. 2014, 63, 53–63. [Google Scholar] [CrossRef]
  11. Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  12. Argyros, I.K.; Regmi, S. Ball Convergence Theorems Extending the Chen-Yamamoto Results for Nonlinear Equations. Panam. Math. J. 2019, 29, 97–104. [Google Scholar]
  13. Argyros, I.K.; Regmi, S. Extending the Applicability of a Theorem by Haßler for the Gauss-Newton Solve. Trans. Math. Program. Appl. 2019, 7, 57–62. [Google Scholar]
  14. Argyros, I.K.; Regmi, S. Majorizing Sequences for Single Step Iterative Processes and Restricted Convergence. Panam. Math. J. 2019, 28, 93–102. [Google Scholar]
  15. Argyros, I.K.; Regmi, S. Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces; Nova Science Publisher: New York, NY, USA, 2019. [Google Scholar]
  16. Argyros, I.K.; Szidarovszky, F. The Theory and Application of Iterative Methods; CRC Press: Boca Raton, FL, USA, 1993. [Google Scholar]
  17. Bonnans, F.J. Local analysis of Newton-type method for variational inequalities and nonlinear programming. Appl. Math. Optim. 1994, 29, 161–186. [Google Scholar] [CrossRef]
  18. Cátinas, E. Inexact perturbed Newton methods and applications to a class of Krylov solvers. J. Optim. Theory. Appl. 2001, 108, 543–570. [Google Scholar] [CrossRef]
  19. Cibulka, R.; Dontchev, A.L.; Kruger, A.Y. Strong metric subregularity of mapping in variational analysis and optimization. J. Math. Anal. Appl. 2017, 457, 1247–1282. [Google Scholar] [CrossRef]
  20. Deuflhard, P.; Potra, F.A. Asymptotic mesh independence of Newton–Galerkin methods and a refined Mysovskii theorem. SIAM J. Numer. Anal. 1992, 29, 1395–1412. [Google Scholar] [CrossRef]
  21. Ezquerro, J.A.; González, D.; Hernández, M.A. On the local convergence of Newton’s method under generalized conditions of Kantorovich. Appl. Math. Lett. 2013, 26, 566–570. [Google Scholar] [CrossRef]
  22. Ezquerro, J.A.; Hernández, M.A. Generalized differentiability conditions for Newton's method. IMA J. Numer. Anal. 2002, 22, 187–205. [Google Scholar] [CrossRef]
  23. Laumen, M. Newton’s mesh independence principle for a class of optimal design problems. SIAM J. Control. Optim. 1999, 37, 1070–1088. [Google Scholar] [CrossRef]
  24. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  25. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing Limited: London, UK, 1984. [Google Scholar]
  26. Preininger, J.; Scarinci, T.; Veliov, V.M. Metric regularity properties in Bang Bang type linear quadratic optimal control problems. Set Valued Var. Anal. 2019, 27, 381. [Google Scholar] [CrossRef]
  27. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. In Mathematical Models and Numerical Methods; Tikhonov, A.N., Ed.; Banach Center: Warsaw, Poland, 1977; pp. 129–142. [Google Scholar]
  28. Traub, J.F. Iterative Methods for Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  29. Allgower, E.L.; Böhmer, K.; Potra, F.A.; Rheinboldt, W.C. A mesh-independent principle for operator equations and their discretizations. SIAM J. Numer. Anal. 1986, 23, 160–169. [Google Scholar] [CrossRef]