# Local Convergence of a Family of Weighted-Newton Methods

by Ramandeep Behl 1,*, Ioannis K. Argyros 2, J. A. Tenreiro Machado 3 and Ali Saleh Alshomrani 1

1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Institute of Engineering, Polytechnic of Porto, Department of Electrical Engineering, 4200-072 Porto, Portugal
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(1), 103; https://doi.org/10.3390/sym11010103
Submission received: 9 December 2018 / Revised: 11 January 2019 / Accepted: 12 January 2019 / Published: 17 January 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

## Abstract

This article considers a fourth-order family of weighted-Newton methods and provides a range of initial guesses that ensures convergence. The analysis is given for Banach space-valued mappings, and the hypotheses involve only the derivative of order one. The convergence radius, error estimates, and uniqueness results also depend only on this derivative. This extends the scope of application of the method, since no derivatives of higher order are required, in contrast to previous works. Finally, we demonstrate the applicability of the proposed method to real-life problems and discuss a case where previous studies cannot be applied.

MSC: 65D10; 65D99; 65G99; 47J25; 47J05

## 1. Introduction

In this work, $B_1$ and $B_2$ denote Banach spaces, $A \subseteq B_1$ stands for a convex and open set, and $\varphi : A \to B_2$ is a differentiable mapping in the Fréchet sense. Many scientific problems can be converted, by means of mathematical modeling [1,2,3,4], into an equation of the form

$\varphi(x) = 0. \quad (1)$

This paper addresses the problem of obtaining an approximate solution $s^*$ of Equation (1). Finding a zero $s^*$ is a laborious task in general, since analytical or closed-form solutions are not available in most cases.
We analyze the local convergence of the two-step method, given as follows:

$y_j = x_j - \delta\,\varphi'(x_j)^{-1}\varphi(x_j), \qquad x_{j+1} = x_j - A_j^{-1}\bigl(c_1\varphi(x_j) + c_2\varphi(y_j)\bigr), \quad (2)$

where $x_0 \in A$ is a starting point,

$A_j = \alpha\,\varphi'(x_j) + \beta\,\varphi'\!\left(\frac{x_j + y_j}{2}\right) + \gamma\,\varphi'(y_j),$

and $\alpha, \beta, \gamma, \delta, c_1, c_2 \in S$, where $S = \mathbb{R}$ or $S = \mathbb{C}$. The values of the parameters $\alpha, \beta, \gamma$, and $c_1$ are given as follows:

$\alpha = -\frac{1}{3}c_2(3\delta^2 - 7\delta + 2), \quad \beta = -\frac{4}{3}c_2(2\delta - 1), \quad \gamma = \frac{1}{3}c_2(\delta - 2), \quad c_1 = -c_2(\delta^2 - \delta + 1), \quad \text{for } \delta \neq 0,\ c_2 \neq 0.$
Comparisons with other methods, proposed by Cordero et al. [5], Darvishi et al. [6], and Sharma [7], defined respectively as:

$w_j = x_j - \varphi'(x_j)^{-1}\varphi(x_j), \qquad x_{j+1} = w_j - B_j\,\varphi(w_j), \quad (3)$

$w_j = x_j - \varphi'(x_j)^{-1}\varphi(x_j), \qquad z_j = x_j - \varphi'(x_j)^{-1}\bigl(\varphi(x_j) + \varphi(w_j)\bigr), \qquad x_{j+1} = x_j - C_j^{-1}\varphi(x_j), \quad (4)$

$y_j = x_j - \frac{2}{3}\varphi'(x_j)^{-1}\varphi(x_j), \qquad x_{j+1} = x_j - \frac{1}{2}D_j\,\varphi'(x_j)^{-1}\varphi(x_j), \quad (5)$

where:

$B_j = 2\varphi'(x_j)^{-1} - \varphi'(x_j)^{-1}\varphi'(w_j)\varphi'(x_j)^{-1}, \qquad C_j = \frac{1}{6}\varphi'(x_j) + \frac{2}{3}\varphi'\!\left(\frac{x_j + w_j}{2}\right) + \frac{1}{6}\varphi'(z_j), \qquad D_j = -I + \frac{9}{4}\varphi'(y_j)^{-1}\varphi'(x_j) + \frac{3}{4}\varphi'(x_j)^{-1}\varphi'(y_j),$
were also reported in [8]. The local convergence of Method (2) was shown in [8] for $B_1 = B_2 = \mathbb{R}^m$ and $S = \mathbb{R}$, by using Taylor series and hypotheses reaching up to the fourth Fréchet derivative. However, hypotheses on the fourth derivative limit the applicability of Methods (2)–(5), even though only the derivative of order one actually appears in these methods. Let us start with a simple problem. Set $B_1 = B_2 = \mathbb{R}$ and $A = [-\frac{5}{2}, \frac{3}{2}]$. Consider the function $\varphi : A \to \mathbb{R}$ defined by:
$\varphi(x) = \begin{cases} 0, & x = 0, \\ x^3 \ln x^2 + x^5 - x^4, & x \neq 0, \end{cases}$

which yields:

$\varphi'(x) = 3x^2 \ln x^2 + 5x^4 - 4x^3 + 2x^2,$

$\varphi''(x) = 6x \ln x^2 + 20x^3 - 12x^2 + 10x,$

$\varphi'''(x) = 6 \ln x^2 + 60x^2 - 24x + 22,$
where the solution is $s^* = 1$. Obviously, the function $\varphi'''$ is unbounded on the domain $A$. Therefore, the results in [5,6,7,8,9] on Method (2), which require hypotheses on the third- or higher-order derivatives of $\varphi$, are not applicable to such problems or their special cases. Without a doubt, some of the iterative methods in Brent [10] and Petković et al. [4] are derivative-free and can be used to locate zeros of functions. However, there have been many developments since then: faster iterative methods have been developed whose convergence order is determined using Taylor series or with the technique introduced in our paper. In those references, the location of the initial points is a "shot in the dark"; no uniqueness results or estimates on $\|x_n - x^*\|$ are available, and methods on abstract spaces derived from the ones on the real line are not addressed either.
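A quick numerical check (ours, using the third derivative recomputed by hand from $\varphi$ above) confirms that $\varphi'''$ is unbounded as $x \to 0$, driven by the $6\ln x^2$ term:

```python
import math

def phi3(x):
    # Third derivative of phi(x) = x**3 * log(x**2) + x**5 - x**4, recomputed by hand
    return 6 * math.log(x**2) + 60 * x**2 - 24 * x + 22

# phi''' decreases without bound as x -> 0, so no Lipschitz-type bound on A covers it
samples = [phi3(10.0**-k) for k in (2, 5, 10)]
```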
These works do not provide a radius of convergence, estimates on $\|x_j - s^*\|$, or knowledge about the location of $s^*$. The novelty of this study is that it provides this information while requiring only the derivative of order one for Method (2). This expands the scope of utilization of (2) and similar methods. It is vital to note that local convergence results are very fruitful, since they give insight into the difficult operational task of choosing the starting points/guesses.
In contrast, the earlier approaches: (i) use Taylor series and high-order derivatives; (ii) give no clue about the choice of the starting point $x_0$; (iii) provide no advance estimate of the number of iterations needed to obtain a predetermined accuracy; and (iv) give no knowledge of the uniqueness of the solution.
The work is laid out as follows: the convergence of the iterative scheme (2), with the main Theorem 1, is given in Section 2. Six numerical problems are discussed in Section 3. The final conclusions are summarized in Section 4.

## 2. Convergence Study

This section starts by analyzing the convergence of Scheme (2). We assume that $L > 0$, $L_0 > 0$, $M \geq 1$ and $\alpha, \beta, \gamma, \delta, c_1, c_2 \in S$. We consider some auxiliary functions and constants. First, we define the functions $g_1$, $p$, and $h_p$ on the open interval $[0, \frac{1}{L_0})$ by:

$g_1(t) = \frac{Lt + 2M|1-\delta|}{2(1 - L_0 t)}, \qquad p(t) = \frac{L_0}{|\alpha+\beta+\gamma|}\left[|\alpha| + \frac{|\beta|}{2} + \left(\frac{|\beta|}{2} + |\gamma|\right)g_1(t)\right]t, \ \ \text{for } \alpha+\beta+\gamma \neq 0, \qquad h_p(t) = p(t) - 1,$

and the values of $r_1$ and $r_A$ by:

$r_1 = \frac{2\,(1 - M|1-\delta|)}{2L_0 + L}, \qquad r_A = \frac{2}{2L_0 + L}.$
Consider that:

$M|1-\delta| < 1. \quad (6)$

It is clear from the function $g_1$, the parameters $r_1$ and $r_A$, and Equation (6) that $0 < r_1 \leq r_A < \frac{1}{L_0}$, $g_1(r_1) = 1$, $0 \leq g_1(t) < 1$ for each $t \in [0, r_1)$, $h_p(0) = -1$, and $h_p(t) \to +\infty$ as $t \to \left(\frac{1}{L_0}\right)^-$. On the basis of the classical intermediate value theorem, the function $h_p$ has at least one zero in the open interval $\left(0, \frac{1}{L_0}\right)$. Let $r_p$ be the smallest such zero. We next define the functions $g_2$ and $h_2$ on the interval $[0, r_p)$ by:
$g_2(t) = \frac{1}{2(1 - L_0 t)}\left[Lt + \frac{2M^2\bigl(|\alpha - 1| + |\beta| + |\gamma|\bigr)\bigl(|1 - c_1| + |c_2|\,g_1(t)\bigr)}{|\alpha+\beta+\gamma|\,(1 - L_0 t)\,(1 - p(t))} + \frac{2M\bigl(|1 - c_1| + |c_2|\,g_1(t)\bigr)}{1 - L_0 t}\right]$

and:

$h_2(t) = g_2(t) - 1.$

Suppose that:

$M\bigl(|1 - c_1| + |c_2|\,M|1-\delta|\bigr)\left(1 + \frac{M\bigl(|\alpha - 1| + |\beta| + |\gamma|\bigr)}{|\alpha+\beta+\gamma|}\right) < 1. \quad (7)$
Then, we have by Equation (7) that $h_2(0) < 0$ and, by the definition of $r_p$, that $h_2(t) \to +\infty$ as $t \to r_p^-$. Let $r_2$ denote the least zero of $h_2$ on $(0, r_p)$.
Define:

$r = \min\{r_1, r_2\}. \quad (8)$

Then, notice that for all $t \in [0, r)$:

$0 < r \leq r_A, \quad (9)$

$0 \leq g_1(t) < 1, \quad (10)$

$0 \leq p(t) < 1, \quad (11)$

$0 \leq g_2(t) < 1. \quad (12)$

Let $Q(x, \rho) = \{ y \in B_1 : \|x - y\| < \rho \}$ denote the open ball with center $x$ and radius $\rho > 0$, and let $\bar{Q}(x, \rho)$ be its closure. We can now proceed with the local convergence study of (2), adopting the preceding notation.
Theorem 1.
Let $\varphi : A \subset B_1 \to B_2$ be a differentiable operator. In addition, assume that there exist $s^* \in A$, $L > 0$, $L_0 > 0$, $M \geq 1$ and parameters $\alpha, \beta, \gamma, \delta, c_1, c_2 \in S$, with $\alpha + \beta + \gamma \neq 0$, such that:

$\varphi(s^*) = 0, \qquad \varphi'(s^*)^{-1} \in L(B_2, B_1), \quad (13)$

$\|\varphi'(s^*)^{-1}\bigl(\varphi'(s^*) - \varphi'(x)\bigr)\| \leq L_0\|s^* - x\|, \quad \forall x \in A. \quad (14)$

Set $A_0 = A \cap Q\bigl(s^*, \frac{1}{L_0}\bigr)$ and suppose that:

$\|\varphi'(s^*)^{-1}\bigl(\varphi'(y) - \varphi'(x)\bigr)\| \leq L\|y - x\|, \quad \forall y, x \in A_0, \quad (15)$

$\|\varphi'(s^*)^{-1}\varphi'(x)\| \leq M, \quad \forall x \in A_0, \quad (16)$

that Conditions (6) and (7) are satisfied, and that:

$\bar{Q}(s^*, r) \subset A, \quad (17)$

where the convergence radius $r$ is provided by (8). Then, the sequence of iterates $\{x_j\}$ generated by (2) for $x_0 \in Q(s^*, r) \setminus \{s^*\}$ is well defined, remains in $Q(s^*, r)$ for every $j = 0, 1, 2, \ldots$, and converges to the root $s^*$. In addition:

$\|y_j - s^*\| \leq g_1(\|x_j - s^*\|)\,\|x_j - s^*\| \leq \|x_j - s^*\| < r, \quad (18)$

$\|x_{j+1} - s^*\| \leq g_2(\|x_j - s^*\|)\,\|x_j - s^*\| < \|x_j - s^*\|, \quad (19)$

where the functions $g_1$ and $g_2$ were defined previously. Moreover, the limit point $s^*$ of the sequence $\{x_j\}$ is the only root of $\varphi(x) = 0$ in $A_1 := \bar{Q}(s^*, T) \cap A$, where $T \in [r, \frac{2}{L_0})$.
Proof.
We prove the estimates (18) and (19) by mathematical induction. Adopting the hypothesis $x_0 \in Q(s^*, r) \setminus \{s^*\}$ and Equations (6) and (14), we get:

$\|\varphi'(s^*)^{-1}\bigl(\varphi'(x_0) - \varphi'(s^*)\bigr)\| \leq L_0\|x_0 - s^*\| < L_0 r < 1. \quad (20)$

Using Equation (20) and the Banach lemma on invertible operators [1,2,3], we conclude that $\varphi'(x_0)$ is invertible and:

$\|\varphi'(x_0)^{-1}\varphi'(s^*)\| \leq \frac{1}{1 - L_0\|x_0 - s^*\|}. \quad (21)$

Therefore, $y_0$ exists. Then, by using Equations (8), (10), (15), (16), and (21), we obtain:

$\begin{aligned} \|y_0 - s^*\| &= \|x_0 - s^* - \varphi'(x_0)^{-1}\varphi(x_0) + (1-\delta)\varphi'(x_0)^{-1}\varphi(x_0)\| \\ &\leq \|\varphi'(x_0)^{-1}\varphi'(s^*)\|\left\|\int_0^1 \varphi'(s^*)^{-1}\bigl[\varphi'\bigl(s^* + \theta(x_0 - s^*)\bigr) - \varphi'(x_0)\bigr](x_0 - s^*)\,d\theta\right\| \\ &\quad + |1-\delta|\,\|\varphi'(x_0)^{-1}\varphi'(s^*)\|\left\|\int_0^1 \varphi'(s^*)^{-1}\varphi'\bigl(s^* + \theta(x_0 - s^*)\bigr)(x_0 - s^*)\,d\theta\right\| \\ &\leq \frac{L\|x_0 - s^*\|^2}{2(1 - L_0\|x_0 - s^*\|)} + \frac{M|1-\delta|\,\|x_0 - s^*\|}{1 - L_0\|x_0 - s^*\|} \\ &= g_1(\|x_0 - s^*\|)\,\|x_0 - s^*\| < \|x_0 - s^*\| < r, \end{aligned} \quad (22)$

illustrating that $y_0 \in Q(s^*, r)$ and that Equation (18) holds for $j = 0$.
Now, we demonstrate that the linear operator $A_0$ is invertible. By Equations (8), (10), (14), and (22), we obtain:

$\begin{aligned} \bigl\|\bigl((\alpha+\beta+\gamma)\varphi'(s^*)\bigr)^{-1}\bigl(A_0 - (\alpha+\beta+\gamma)\varphi'(s^*)\bigr)\bigr\| &\leq \frac{L_0}{|\alpha+\beta+\gamma|}\left[|\alpha|\,\|x_0 - s^*\| + \frac{|\beta|}{2}\bigl(\|x_0 - s^*\| + \|y_0 - s^*\|\bigr) + |\gamma|\,\|y_0 - s^*\|\right] \\ &\leq \frac{L_0}{|\alpha+\beta+\gamma|}\left[|\alpha| + \frac{|\beta|}{2} + \left(\frac{|\beta|}{2} + |\gamma|\right)g_1(\|x_0 - s^*\|)\right]\|x_0 - s^*\| \\ &= p(\|x_0 - s^*\|) < p(r) < 1. \end{aligned} \quad (23)$

Hence, $A_0^{-1} \in L(B_2, B_1)$,

$\|A_0^{-1}\varphi'(s^*)\| \leq \frac{1}{|\alpha+\beta+\gamma|\bigl(1 - p(\|x_0 - s^*\|)\bigr)}, \quad (24)$

and $x_1$ exists. We shall use the identity:
$x_1 - s^* = \bigl(x_0 - s^* - \varphi'(x_0)^{-1}\varphi(x_0)\bigr) - \varphi'(x_0)^{-1}\bigl((1 - c_1)\varphi(x_0) + c_2\varphi(y_0)\bigr) + \varphi'(x_0)^{-1}\bigl(A_0 - \varphi'(x_0)\bigr)A_0^{-1}\bigl(c_1\varphi(x_0) + c_2\varphi(y_0)\bigr). \quad (25)$
Further, we have:

$\begin{aligned} \|x_1 - s^*\| &\leq \|x_0 - s^* - \varphi'(x_0)^{-1}\varphi(x_0)\| + \bigl\|\varphi'(x_0)^{-1}\bigl((1 - c_1)\varphi(x_0) + c_2\varphi(y_0)\bigr)\bigr\| \\ &\quad + \|\varphi'(x_0)^{-1}\varphi'(s^*)\|\,\bigl\|\varphi'(s^*)^{-1}\bigl(A_0 - \varphi'(x_0)\bigr)\bigr\|\,\|A_0^{-1}\varphi'(s^*)\|\,\bigl\|\varphi'(s^*)^{-1}\bigl(c_1\varphi(x_0) + c_2\varphi(y_0)\bigr)\bigr\| \\ &\leq \frac{L\|x_0 - s^*\|^2}{2\bigl(1 - L_0\|x_0 - s^*\|\bigr)} + \frac{M\bigl(|1 - c_1|\,\|x_0 - s^*\| + |c_2|\,\|y_0 - s^*\|\bigr)}{1 - L_0\|x_0 - s^*\|} \\ &\quad + \frac{M^2\bigl(|\alpha - 1| + |\beta| + |\gamma|\bigr)\bigl(|1 - c_1| + |c_2|\,g_1(\|x_0 - s^*\|)\bigr)\|x_0 - s^*\|}{|\alpha+\beta+\gamma|\bigl(1 - L_0\|x_0 - s^*\|\bigr)\bigl(1 - p(\|x_0 - s^*\|)\bigr)} \\ &\leq g_2(\|x_0 - s^*\|)\,\|x_0 - s^*\| < \|x_0 - s^*\| < r, \end{aligned} \quad (26)$

which demonstrates that $x_1 \in Q(s^*, r)$ and that (19) holds for $j = 0$; here, we used (15) and (21) for the derivation of the first fraction in the second inequality. By means of Equations (21) and (16), we have:
$\|\varphi'(s^*)^{-1}\varphi(x_0)\| = \bigl\|\varphi'(s^*)^{-1}\bigl(\varphi(x_0) - \varphi(s^*)\bigr)\bigr\| = \left\|\int_0^1 \varphi'(s^*)^{-1}\varphi'\bigl(s^* + \theta(x_0 - s^*)\bigr)(x_0 - s^*)\,d\theta\right\| \leq M\|x_0 - s^*\|.$
In a similar fashion, we obtain $\|\varphi'(s^*)^{-1}\varphi(y_0)\| \leq M\|y_0 - s^*\| \leq M g_1(\|x_0 - s^*\|)\,\|x_0 - s^*\|$ (by (22)), and we use the definition of $A_0$ to arrive at the second inequality. Estimates (18) and (19) follow for all $j$ by simply replacing $x_0$, $y_0$, and $x_1$ with $x_j$, $y_j$, and $x_{j+1}$, respectively. Adopting the estimate $\|x_{j+1} - s^*\| \leq q\|x_j - s^*\| < r$, where $q = g_2(\|x_0 - s^*\|) \in [0, 1)$, we conclude that $x_{j+1} \in Q(s^*, r)$ and $\lim_{j \to \infty} x_j = s^*$. To show uniqueness, assume that there exists $y^* \in A_1$ satisfying $\varphi(y^*) = 0$, and set $U = \int_0^1 \varphi'\bigl(y^* + \theta(s^* - y^*)\bigr)\,d\theta$. From Equation (14), we have:
$\|\varphi'(s^*)^{-1}\bigl(U - \varphi'(s^*)\bigr)\| \leq \int_0^1 L_0\|y^* + \theta(s^* - y^*) - s^*\|\,d\theta = \int_0^1 L_0(1 - \theta)\|y^* - s^*\|\,d\theta \leq \frac{L_0}{2}T < 1. \quad (27)$

It follows from Equation (27) that $U$ is invertible. Therefore, the identity $0 = \varphi(y^*) - \varphi(s^*) = U(y^* - s^*)$ leads to $y^* = s^*$. □
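The radius formulas above are easy to evaluate numerically. The sketch below (ours) computes $r_1$ and $r_A$ for the constants of Example 1 in Section 3 ($L_0 = e - 1$, $L = e^{1/L_0}$, $M = 2$, $\delta = 1$), reproducing the value $r_1 = 0.382692$ reported in Table 1:

```python
import math

# Constants taken from Example 1 below: L0 = e - 1, L = e**(1/L0), M = 2, delta = 1
L0 = math.e - 1
L = math.e ** (1 / L0)
M, delta = 2.0, 1.0

def g1(t):
    # g1 as defined at the start of Section 2
    return (L * t + 2 * M * abs(1 - delta)) / (2 * (1 - L0 * t))

# Closed-form radii, valid when M|1 - delta| < 1 (Condition (6)):
r1 = 2 * (1 - M * abs(1 - delta)) / (2 * L0 + L)   # solves g1(r1) = 1
rA = 2 / (2 * L0 + L)                              # radius for Newton's method
```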

## 3. Numerical Experiments

Herein, we illustrate the previous theoretical results by means of six examples. The first two are standard test problems. The third is a counter problem where we show that the previous results are not applicable. The remaining three examples are real-life problems considered in several disciplines of science.
Example 1.
We assume that $B_1 = B_2 = \mathbb{R}^3$ and $A = \bar{Q}(0, 1)$. Then, the function φ is defined on $A$, for $u = (x_1, x_2, x_3)^T$, as follows:

$\varphi(u) = \left(e^{x_1} - 1,\ \frac{e-1}{2}x_2^2 + x_2,\ x_3\right)^T. \quad (28)$

This yields the Fréchet derivative:

$\varphi'(u) = \begin{pmatrix} e^{x_1} & 0 & 0 \\ 0 & (e-1)x_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$

It is important to note that $s^* = (0, 0, 0)^T$, $L_0 = e - 1 < L = e^{1/L_0}$, $\delta = 1$, $M = 2$, $c_1 = 1$, and $\varphi'(s^*) = \varphi'(s^*)^{-1} = \operatorname{diag}(1, 1, 1)$. By considering the parameter values defined in Theorem 1, we get the different radii of convergence depicted in Table 1 and Table 2.
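Since the Jacobian of this φ is diagonal, the linear algebra in (2) reduces to componentwise divisions, which allows a compact, dependency-free illustration (our sketch; the choice $\delta = 1$, $c_2 = -1$ corresponds to case 1 of Table 1):

```python
import math

E = math.e

def phi(u):
    x1, x2, x3 = u
    return [math.exp(x1) - 1, (E - 1) / 2 * x2**2 + x2, x3]

def dphi(u):
    # Diagonal entries of the (diagonal) Jacobian
    x1, x2, x3 = u
    return [math.exp(x1), (E - 1) * x2 + 1, 1.0]

def step(u, delta=1.0, c2=-1.0):
    alpha = -c2 * (3 * delta**2 - 7 * delta + 2) / 3
    beta = -4 * c2 * (2 * delta - 1) / 3
    gamma = c2 * (delta - 2) / 3
    c1 = -c2 * (delta**2 - delta + 1)
    f, d = phi(u), dphi(u)
    y = [ui - delta * fi / di for ui, fi, di in zip(u, f, d)]
    dmid = dphi([(ui + yi) / 2 for ui, yi in zip(u, y)])
    dy, fy = dphi(y), phi(y)
    A = [alpha * a + beta * m + gamma * g for a, m, g in zip(d, dmid, dy)]
    return [ui - (c1 * fi + c2 * fyi) / ai
            for ui, fi, fyi, ai in zip(u, f, fy, A)]

u = [0.3, 0.3, 0.3]
for _ in range(10):
    u = step(u)   # converges to s* = (0, 0, 0)
```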
Example 2.
Let $B_1 = B_2 = C[0, 1]$, the space of continuous functions on $[0, 1]$ equipped with the max norm, and let $A = \bar{Q}(0, 1)$. We consider the following function φ on $A$:

$\varphi(\phi)(x) = \phi(x) - 5\int_0^1 x\tau\,\phi(\tau)^3\,d\tau,$

which yields:

$\varphi'(\phi)(\mu)(x) = \mu(x) - 15\int_0^1 x\tau\,\phi(\tau)^2\mu(\tau)\,d\tau, \quad \text{for each } \mu \in A.$

We have $s^* = 0$, $L = 15$, $L_0 = 7.5$, $M = 2$, $\delta = 1$, and $c_1 = 1$. We get different radii of convergence for distinct parametric values, as mentioned in Table 3 and Table 4.
Example 3.
Let us return to the problem from the Introduction. We have $s^* = 1$, $L = L_0 = 96.662907$, $M = 2$, $\delta = 1$, and $c_1 = 1$. By substituting different values of the parameters, we obtain the distinct radii of convergence listed in Table 5.
Example 4.
The chemical reaction [12] illustrated in this example shows how $W_1$ and $W_2$ are consumed, at rates $q^* - Q^*$ and $Q^*$, respectively, in a continuous stirred tank reactor (CSTR):

$W_2 + W_1 \to W_3, \qquad W_3 + W_1 \to W_4, \qquad W_4 + W_1 \to W_5, \qquad W_5 + W_1 \to W_6.$

Douglas [13] analyzed the CSTR problem when designing simple feedback control systems, adopting the following mathematical formulation:

$K_C\,\frac{2.98\,(x + 2.25)}{(x + 1.45)(x + 2.85)^2(x + 4.35)} = -1, \quad (29)$

where the parameter $K_C$ has a physical meaning described in [12,13]. For the particular choice $K_C = 0$, we obtain the corresponding equation:

$\varphi(x) = x^4 + 11.50x^3 + 47.49x^2 + 83.06325x + 51.23266875. \quad (30)$
The function φ has four zeros, $s^* = -1.45, -2.85, -2.85, -4.35$. The desired zero of Equation (30) is $s^* = -4.35$. Let us also consider $A = [-4.5, -4]$.
Then, we obtain:
$L 0 = 1.2547945 , L = 29.610958 , M = 2 , δ = 1 , c 1 = 1 .$
Now, with the help of different values of the parameters, we get different radii of convergence displayed in Table 6.
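The quartic in (30) is just the expanded denominator of the transfer function in (29), i.e., the product $(x + 1.45)(x + 2.85)^2(x + 4.35)$, which can be verified numerically (our check):

```python
def polymul(p, q):
    # Multiply two polynomials given as coefficient lists, highest degree first
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

poly = [1.0]
for root in (1.45, 2.85, 2.85, 4.35):   # the zeros are the negatives of these
    poly = polymul(poly, [1.0, root])
# poly now holds the coefficients of (30): [1, 11.50, 47.49, 83.06325, 51.23266875]
```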
Example 5.
Here, we consider one of the well-known Hammerstein integral equations (see [14], pp. 19–20), defined by:

$x(s) = 1 + \frac{1}{5}\int_0^1 F(s, t)\,x(t)^3\,dt, \qquad x \in C[0, 1],\ s, t \in [0, 1], \quad (31)$

where the kernel F is:

$F(s, t) = \begin{cases} s(1 - t), & s \leq t, \\ (1 - s)t, & t \leq s. \end{cases}$

We discretize (31) by using the Gauss–Legendre quadrature formula $\int_0^1 \phi(t)\,dt \simeq \sum_{k=1}^{8} w_k\phi(t_k)$, where $t_k$ and $w_k$ are the abscissas and weights, respectively. Denoting the approximation of $x(t_i)$ by $x_i$ $(i = 1, 2, \ldots, 8)$, we obtain the following $8 \times 8$ system of nonlinear equations:

$5x_i - 5 - \sum_{k=1}^{8} a_{ik}x_k^3 = 0, \quad i = 1, 2, \ldots, 8, \qquad a_{ik} = \begin{cases} w_k t_k(1 - t_i), & k \leq i, \\ w_k t_i(1 - t_k), & i < k. \end{cases}$
The values of $t_k$ and $w_k$ can be easily obtained from the eight-point Gauss–Legendre quadrature formula. The required approximate root is:
$s * = ( 1.002096 … , 1.009900 … , 1.019727 … , 1.026436 … , 1.026436 … , 1.019727 … , 1.009900 … , 1.002096 … ) T .$
Then, we have:

$L_0 = L = \frac{3}{40}, \quad M = 2, \quad \delta = 1, \quad c_1 = 1,$

and $A = Q(s^*, 0.11)$. By using different values of the considered parameters, we obtain the different radii of convergence displayed in Table 7.
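The discretization above is straightforward to reproduce (our sketch, using NumPy's Gauss–Legendre nodes mapped to $[0, 1]$); since the integral operator is a contraction here, plain fixed-point iteration already recovers the printed solution vector:

```python
import numpy as np

n = 8
g, wg = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
t, w = 0.5 * (g + 1.0), 0.5 * wg            # mapped to [0, 1]

# a_ik = w_k t_k (1 - t_i) for k <= i, and w_k t_i (1 - t_k) for i < k
a = np.empty((n, n))
for i in range(n):
    for k in range(n):
        a[i, k] = w[k] * t[k] * (1 - t[i]) if k <= i else w[k] * t[i] * (1 - t[k])

x = np.ones(n)
for _ in range(100):
    # fixed-point form of the system 5*x - 5 - a @ x**3 = 0
    x = 1.0 + a @ x**3 / 5.0
```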
Example 6.
Consider the boundary value problem from [14], given as:

$y'' = \frac{1}{2}y^3 + 3y' - \frac{3}{2 - x} + \frac{1}{2}, \qquad y(0) = 0, \quad y(1) = 1. \quad (32)$

We suppose the following partition of $[0, 1]$:

$x_0 = 0 < x_1 < x_2 < \cdots < x_j = 1, \qquad x_{i+1} = x_i + h, \quad h = \frac{1}{j}.$

In addition, we set $y_0 = y(x_0) = 0$, $y_1 = y(x_1), \ldots, y_{j-1} = y(x_{j-1})$, and $y_j = y(x_j) = 1$. Now, we discretize Problem (32) by relying on the standard central-difference approximations of the first- and second-order derivatives:

$y_k' = \frac{y_{k+1} - y_{k-1}}{2h}, \qquad y_k'' = \frac{y_{k-1} - 2y_k + y_{k+1}}{h^2}, \quad k = 1, 2, \ldots, j - 1.$

Hence, we find the following general $(j-1) \times (j-1)$ nonlinear system:

$y_{k+1} - 2y_k + y_{k-1} - \frac{h^2}{2}y_k^3 - \frac{3h}{2}\bigl(y_{k+1} - y_{k-1}\bigr) + \frac{3h^2}{2 - x_k} - \frac{h^2}{2} = 0, \quad k = 1, 2, \ldots, j - 1. \quad (33)$
We choose the particular value $j = 7$, which yields a $6 \times 6$ nonlinear system. The solution of this nonlinear system is $s^* = (0.07654393\ldots, 0.1658739\ldots, 0.2715210\ldots, 0.3984540\ldots, 0.5538864\ldots, 0.7486878\ldots)^T$.
Then, we get that:

$L_0 = 73, \quad L = 75, \quad M = 2, \quad \delta = 1, \quad c_1 = 1,$

and $A = Q(s^*, 0.15)$.
With the help of different values of the parameters, we have the different radii of convergence listed in Table 8.
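The system (33) can be solved directly with Newton's iteration on the 6 × 6 discrete equations (our sketch; one can verify that $y(x) = x/(2 - x)$ satisfies (32) exactly, so we use it as a sanity check on the discrete solution):

```python
import numpy as np

j = 7                       # j - 1 = 6 interior nodes
h = 1.0 / j
xk = np.arange(1, j) * h    # x_1, ..., x_{j-1}

def F(y):
    # Residual of (33); boundary values y_0 = 0, y_j = 1 are attached
    z = np.concatenate(([0.0], y, [1.0]))
    k = np.arange(1, j)
    return (z[k + 1] - 2 * z[k] + z[k - 1]
            - 0.5 * h * h * z[k] ** 3
            - 1.5 * h * (z[k + 1] - z[k - 1])
            + 3 * h * h / (2 - xk) - 0.5 * h * h)

def J(y):
    # Tridiagonal Jacobian of F
    z = np.concatenate(([0.0], y, [1.0]))
    m = np.zeros((j - 1, j - 1))
    for k in range(1, j):
        if k > 1:
            m[k - 1, k - 2] = 1 + 1.5 * h
        m[k - 1, k - 1] = -2 - 1.5 * h * h * z[k] ** 2
        if k < j - 1:
            m[k - 1, k] = 1 - 1.5 * h
    return m

y = xk.copy()               # linear initial guess
for _ in range(20):
    y -= np.linalg.solve(J(y), F(y))
```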
Remark 1.
It is important to note that, in some cases, the computed radii $r_i$ are larger than the radius of the domain $A$. Such behavior for Method (2) can be noticed in Table 7. In those cases, we have to choose $r_i = 0.11$, because Condition (17) must also be satisfied.

## 4. Concluding Remarks

The local convergence of the fourth-order scheme (2) was shown in earlier works [5,6,8,15] using Taylor series expansions. In this way, the hypotheses involve up to the fourth derivative of the function $\varphi$, in the particular case when $B_1 = B_2 = \mathbb{R}^m$ and $S = \mathbb{R}$. These hypotheses limit the applicability of methods such as (2). We analyzed the local convergence using only the first derivative, for Banach space-valued mappings. The convergence order can be determined using the computational order of convergence (COC) or the approximate computational order of convergence (ACOC) (Appendix A), avoiding the computation of higher-order derivatives. We also found computable radii and error bounds, not given before, in terms of Lipschitz constants, thereby expanding the applicability of the technique. Six numerical problems were presented to illustrate the feasibility of the new approach. Our technique can be used to study other iterative methods containing inverses of mappings, such as (3)–(5) (see also [1,2,3,4,5,6,7,8,9,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]), and to expand their applicability along the same lines.

## Author Contributions

All the authors contributed equally to this paper.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## Abbreviations

The following abbreviations are used in this manuscript:
COC Computational order of convergence
ACOC Approximate computational order of convergence

## Appendix A

#### Remark

(a)
The procedure for studying local convergence was already given in [1,2] for similar methods. Condition (16) can be replaced by the function $M(t) = M = 2$ or by $M(t) = 1 + L_0 t$, since $0 \leq t < \frac{1}{L_0}$. The convergence radius $r$ cannot be bigger than the radius $r_A$ of the Newton method given in this paper. These results can be used to solve autonomous differential equations, which play an important role in the study of network science, computer systems, social networking systems, and biochemical systems [46]. In fact, we refer the reader to [46], where a different technique is used involving discrete samples from the existence of solution spaces; the existence of intervals with common solutions, as well as disjoint intervals and the multiplicity of intervals with common solutions, is also shown. However, that work does not deal with spaces that are continuous and multidimensional.
(b)
It is important to note that Scheme (2) does not change if we adopt the hypotheses of Theorem 1 rather than the stronger ones required in [5,6,7,8,9]. In practice, for the error bounds, we adopt the following formulas [22] for the computational order of convergence (COC), when the required root is available, and the approximate computational order of convergence (ACOC), when the required root is not available in advance:

$\xi = \frac{\ln\bigl(\|x_{k+2} - s^*\| / \|x_{k+1} - s^*\|\bigr)}{\ln\bigl(\|x_{k+1} - s^*\| / \|x_k - s^*\|\bigr)}, \quad k = 0, 1, 2, 3, \ldots,$

$\xi^* = \frac{\ln\bigl(\|x_{k+2} - x_{k+1}\| / \|x_{k+1} - x_k\|\bigr)}{\ln\bigl(\|x_{k+1} - x_k\| / \|x_k - x_{k-1}\|\bigr)}, \quad k = 1, 2, 3, \ldots,$
respectively. By means of the above formulas, we can obtain the convergence order without using estimates on the high-order Fréchet derivative.
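As an illustration (ours, not from the paper), exact rational arithmetic makes it possible to observe $\xi \approx 4$ directly for the family (2): we iterate on the simple polynomial $\varphi(x) = x^3 + 2x^2 + x$ (simple root $s^* = 0$) with $\delta = 1$, $c_2 = -1$ and take the COC of the resulting error sequence:

```python
from fractions import Fraction
import math

def phi(x):
    return x**3 + 2 * x**2 + x          # simple root s* = 0, phi'(0) = 1

def dphi(x):
    return 3 * x**2 + 4 * x + 1

delta, c2 = Fraction(1), Fraction(-1)   # parameter choice as in case 1 of the tables
alpha = -Fraction(1, 3) * c2 * (3 * delta**2 - 7 * delta + 2)
beta = -Fraction(4, 3) * c2 * (2 * delta - 1)
gamma = Fraction(1, 3) * c2 * (delta - 2)
c1 = -c2 * (delta**2 - delta + 1)

x = Fraction(1, 10)
errs = []
for _ in range(4):
    y = x - delta * phi(x) / dphi(x)
    A = alpha * dphi(x) + beta * dphi((x + y) / 2) + gamma * dphi(y)
    x = x - (c1 * phi(x) + c2 * phi(y)) / A
    errs.append(abs(x))                 # exact error, since s* = 0

# COC estimate: the ratios are formed exactly before taking logarithms,
# avoiding double-precision underflow of the tiny errors themselves
xi = math.log(errs[2] / errs[1]) / math.log(errs[1] / errs[0])
```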

## References

1. Argyros, I.K. Convergence and Application of Newton-type Iterations; Springer: Berlin, Germany, 2008. [Google Scholar]
2. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hackensack, NJ, USA, 2013. [Google Scholar]
3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
4. Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
5. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551. [Google Scholar] [CrossRef] [Green Version]
6. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
7. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
8. Su, Q. A new family of weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput., to appear.
9. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef] [Green Version]
10. Brent, R.P. Algorithms for Finding Zeros and Extrema of Functions Without Calculating Derivatives; Report TR CS 198; DCS: Stanford, CA, USA, 1971. [Google Scholar]
11. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Polish Acad. Sci. Banach Cent. Publ. 1978, 3, 129–142. [Google Scholar] [CrossRef] [Green Version]
12. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
13. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2. [Google Scholar]
14. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
15. Chun, C. Some improvements of Jarratt’s method with sixth-order convergence. Appl. Math. Comput. 2007, 190, 1432–1437. [Google Scholar] [CrossRef]
16. Candela, V.; Marquina, A. Recurrence relations for rational cubic methods I: The Halley method. Computing 1990, 44, 169–184. [Google Scholar] [CrossRef]
17. Chicharro, F.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 780153. [Google Scholar] [CrossRef]
18. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848. [Google Scholar] [CrossRef] [Green Version]
19. Cordero, A.; Torregrosa, J.R.; Vindel, P. Dynamics of a family of Chebyshev-Halley type methods. Appl. Math. Comput. 2013, 219, 8568–8583. [Google Scholar] [CrossRef]
20. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
21. Ezquerro, J.A.; Hernández, A.M. On the R-order of the Halley method. J. Math. Anal. Appl. 2005, 303, 591–601. [Google Scholar] [CrossRef] [Green Version]
22. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
23. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. On some computational orders of convergence. Appl. Math. Lett. 2010, 23, 472–478. [Google Scholar] [CrossRef] [Green Version]
24. Gutiérrez, J.M.; Hernández, M.A. Recurrence relations for the super-Halley method. Comput. Math. Appl. 1998, 36, 1–8. [Google Scholar] [CrossRef] [Green Version]
25. Herceg, D.; Herceg, D. Sixth-order modifications of Newton’s method based on Stolarsky and Gini means. J. Comput. Appl. Math. 2014, 267, 244–253. [Google Scholar] [CrossRef]
26. Hernández, M.A. Chebyshev’s approximation algorithms and applications. Comput. Math. Appl. 2001, 41, 433–455. [Google Scholar] [CrossRef]
27. Hernández, M.A.; Salanova, M.A. Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces. Southwest J. Pure Appl. Math. 1999, 1, 29–40. [Google Scholar]
28. Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432. [Google Scholar] [CrossRef] [Green Version]
29. Jarratt, P. Some fourth order multipoint methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
30. Kou, J. On Chebyshev–Halley methods with sixth-order convergence for solving non-linear equations. Appl. Math. Comput. 2007, 190, 126–131. [Google Scholar] [CrossRef]
31. Kou, J.; Wang, X. Semilocal convergence of a modified multi-point Jarratt method in Banach spaces under general continuity conditions. Numer. Algorithms 2012, 60, 369–390. [Google Scholar]
32. Li, D.; Liu, P.; Kou, J. An improvement of the Chebyshev-Halley methods free from second derivative. Appl. Math. Comput. 2014, 235, 221–225. [Google Scholar] [CrossRef]
33. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
34. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef] [Green Version]
35. Neta, B. A sixth order family of methods for nonlinear equations. Int. J. Comput. Math. 1979, 7, 157–161. [Google Scholar] [CrossRef]
36. Ozban, A.Y. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef]
37. Parhi, S.K.; Gupta, D.K. Recurrence relations for a Newton-like method in Banach spaces. J. Comput. Appl. Math. 2007, 206, 873–887. [Google Scholar] [Green Version]
38. Parhi, S.K.; Gupta, D.K. A sixth order method for nonlinear equations. Appl. Math. Comput. 2008, 203, 50–55. [Google Scholar] [CrossRef]
39. Ren, H.; Wu, Q.; Bi, W. New variants of Jarratt’s method with sixth-order convergence. Numer. Algorithms 2009, 52, 585–603. [Google Scholar] [CrossRef]
40. Wang, X.; Kou, J.; Gu, C. Semilocal convergence of a sixth-order Jarratt method in Banach spaces. Numer. Algorithms 2011, 57, 441–456. [Google Scholar] [CrossRef]
41. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
42. Zhou, X. A class of Newton’s methods with third-order convergence. Appl. Math. Lett. 2007, 20, 1026–1030. [Google Scholar] [CrossRef] [Green Version]
43. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequ. Math. 2005, 69, 212–223. [Google Scholar] [CrossRef]
44. Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 24–32. [Google Scholar] [CrossRef] [Green Version]
45. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
46. Bagchi, S. Computational Analysis of Network ODE Systems in Metric Spaces: An Approach. J. Comput. Sci. 2017, 13, 1–10. [Google Scholar] [CrossRef]
Table 1. Radii of convergence for Example 1, where $L_0 < L$.

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 0.382692 | 0.0501111 | 0.0501111 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 0.382692 | 0.334008 | 0.334008 |
| 3 | 1 | 1 | 1 | 0 | 0.382692 | 0.382692 | 0.382692 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 0.382692 | 0.342325 | 0.342325 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 0.382692 | 0.325413 | 0.325413 |
Table 2. Radii of convergence for Example 1, where $L_0 = L = e$, by [3,11].

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 0.245253 | 0.0326582 | 0.0326582 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 0.245253 | 0.213826 | 0.213826 |
| 3 | 1 | 1 | 1 | 0 | 0.245253 | 0.245253 | 0.245253 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 0.245253 | 0.219107 | 0.219107 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 0.245253 | 0.208097 | 0.208097 |
Table 3. Radii of convergence for Example 2, where $L_0 < L$.

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 0.0666667 | 0.00680987 | 0.00680987 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 0.0666667 | 0.0594212 | 0.0594212 |
| 3 | 1 | 1 | 1 | 0 | 0.0666667 | 0.0666667 | 0.0666667 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 0.0666667 | 0.0609335 | 0.0609335 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 0.0666667 | 0.0588017 | 0.0588017 |
Table 4. Radii of convergence for Example 2, where $L_0 = L = 15$, by [3,11].

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 0.0444444 | 0.00591828 | 0.00591828 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 0.0444444 | 0.0387492 | 0.0387492 |
| 3 | 1 | 1 | 1 | 0 | 0.0444444 | 0.0444444 | 0.0444444 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 0.0444444 | 0.0397064 | 0.0397064 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 0.0444444 | 0.0377112 | 0.0377112 |
Table 5. Radii of convergence for Example 3.

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 0.00689682 | 0.000918389 | 0.000918389 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 0.00689682 | 0.00601304 | 0.00601304 |
| 3 | 1 | 1 | 1 | 0 | 0.00689682 | 0.00689682 | 0.00689682 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 0.00689682 | 0.00616157 | 0.00616157 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 0.00689682 | 0.0133132 | 0.00689682 |
Table 6. Radii of convergence for Example 4.

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 0.0622654 | 0.00406287 | 0.00406287 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 0.0622654 | 0.0582932 | 0.0582932 |
| 3 | 1 | 1 | 1 | 0 | 0.0622654 | 0.0622654 | 0.0622654 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 0.0622654 | 0.0592173 | 0.0592173 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 0.0622654 | 0.0585624 | 0.0585624 |
Table 7. Radii of convergence for Example 5.

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 8.88889 | 1.18366 | 1.18366 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 8.88889 | 7.74984 | 7.74984 |
| 3 | 1 | 1 | 1 | 0 | 8.88889 | 8.88889 | 8.88889 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 8.88889 | 7.94127 | 7.94127 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 8.88889 | 7.54223 | 7.54223 |
Table 8. Radii of convergence for Example 6.

| Cases | $\alpha$ | $\beta$ | $\gamma$ | $c_2$ | $r_1$ | $r_2$ | $r = \min\{r_1, r_2\}$ |
|---|---|---|---|---|---|---|---|
| 1 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $\frac{1}{3}$ | $-1$ | 0.00904977 | 0.00119169 | 0.00119169 |
| 2 | $-\frac{2}{3}$ | $\frac{4}{3}$ | $-100$ | $\frac{1}{100}$ | 0.00904977 | 0.00789567 | 0.00789567 |
| 3 | 1 | 1 | 1 | 0 | 0.00904977 | 0.00904977 | 0.00904977 |
| 4 | 1 | 1 | 1 | $\frac{1}{100}$ | 0.00904977 | 0.00809175 | 0.00809175 |
| 5 | 10 | $\frac{1}{10}$ | $\frac{1}{10}$ | $\frac{1}{100}$ | 0.00904977 | 0.00809175 | 0.00809175 |

## Share and Cite

Behl, R.; Argyros, I.K.; Machado, J.A.T.; Alshomrani, A.S. Local Convergence of a Family of Weighted-Newton Methods. Symmetry 2019, 11, 103. https://doi.org/10.3390/sym11010103