A Newton-like Midpoint Method for Solving Equations in Banach Space

Abstract: The present paper includes the local and semilocal convergence analysis of a fourth-order method based on the quadrature formula in Banach spaces. The weaker hypotheses used are based only on the first Fréchet derivative. The new approach provides the residual errors, number of iterations, convergence radii, expected order of convergence, and estimates of the uniqueness of the solution. Such estimates are not provided in approaches using Taylor expansions involving higher-order derivatives, which may not exist or may be very expensive or impossible to compute. Numerical examples, including a nonlinear integral equation and a partial differential equation, are provided to validate the theoretical results.


Introduction
In the field of numerical analysis, a significant role is played by numerical methods for solving nonlinear equations. Due to the lack of analytical methods, iterative techniques are required to approximate the solutions. One of the foremost reasons to use numerical methods for solving nonlinear transcendental equations is their ability to handle non-analytic and complex functions. Oftentimes, such equations arise in diverse disciplines of science and engineering [1][2][3][4]. For example, in physics, nonlinear equations often describe the behavior of systems with multiple interacting components, such as the Navier-Stokes equations in fluid dynamics. In engineering, nonlinear equations are used to model the behavior of materials under different loads and conditions. The ability to handle large and complex systems is another essential reason to use numerical methods. Nonlinear equations generally describe the behavior of systems with many interacting components, and solving them analytically can be extremely difficult, if not impossible. Numerical methods provide a way to break these large systems down into smaller, more manageable parts and find approximate solutions using iterative techniques.
A plethora of iterative methods are used for solving nonlinear transcendental equations, including fixed-point iteration, root-finding methods, and the Newton-Raphson method. Each method has its own strengths and limitations, and the selection of the method depends on the particular equation being solved and the pre-decided accuracy level. For instance, the bisection method is one of the simplest and most robust methods for finding the roots of an equation, but it has the disadvantage of slow (linear) convergence. The Newton-Raphson method, on the other hand, is faster and more accurate, but it requires the derivative of the function and may not converge for certain types of functions or starting points.
Moreover, the numerical method to be chosen depends on the specific equation being solved, the interval of the solutions, the number of solutions, and the desired accuracy level. For example, the bisection method is a good choice for isolating solutions in a given interval, while the Newton-Raphson method is better for refining a specific solution from an initial guess. In numerical optimization, root-finding methods are used to find the solutions of nonlinear equations that describe the behavior of the system, which enables the design of algorithms that are more efficient and more robust. Several root-finding methods for solving nonlinear transcendental equations appear in the literature. Some common methods include:
1. The bisection method: a simple yet robust method that involves repeatedly bisecting an interval and determining which subinterval a root lies in.
2. The Newton-Raphson method: this method uses an initial guess and an iterative process to converge on a root and requires the ability to compute the derivative of the function.
3. The secant method: this method is similar to the Newton-Raphson method but uses the slope of the secant line between two points rather than the derivative of the function.
4. Fixed-point iteration: this method involves finding the fixed point of a function using an iterative process. It requires the function to be in a specific form.
5. Muller's method: this method is an extension of the secant method and can locate complex roots.
6. Bairstow's method: this method is used for finding the roots of polynomials with real coefficients, in particular polynomials of degree greater than two.
7. Aitken's delta-squared method: this method is used for speeding up the convergence of the fixed-point iteration method.
8. The hybrid method: as the name suggests, this method combines two or more methods to find the root of the nonlinear equation.
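The trade-off between robustness and speed described above can be illustrated on the transcendental equation cos x = x. The following sketch is an illustrative addition, not part of the paper's experiments:

```python
import math

def bisection(f, a, b, tol=1e-12, max_iter=200):
    """Robust but linearly convergent: repeatedly halve the bracket [a, b]."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Quadratically convergent, but needs f' and a good starting point."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Sample transcendental equation: cos(x) = x
f = lambda x: math.cos(x) - x
df = lambda x: -math.sin(x) - 1
root_b = bisection(f, 0.0, 1.0)
root_n = newton(f, df, 0.5)
```

Bisection needs roughly forty halvings of the unit bracket to reach the 1e-12 tolerance, while Newton-Raphson reaches it in a handful of iterations, at the price of requiring the derivative.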
As a workaround, iterative methods have been developed to approximate solutions of nonlinear equations of the form

F(x) = 0, (1)

where F : D ⊂ B1 → B2 is a Fréchet differentiable operator, B1 and B2 are Banach spaces, and D is a convex and open subset of B1. The determination of a solution x* ∈ D of the equation, whose analytical form is rarely attainable, is very important in many disciplines [1][2][3][4]. This is the case since applications are formulated as an equation such as (1) using mathematical modeling [1][2][3][5]. This explains why iterative methods producing sequences approximating x* are introduced. There is extensive literature on the convergence of iterative methods motivated by algebraic or geometrical considerations [3][5][6][7][8].
A widely used method to solve (1) is Newton's method (NM), which is defined for each n = 0, 1, 2, . . . by

x_{n+1} = x_n − F′(x_n)^{−1} F(x_n). (2)

NM uses one function evaluation and one inverse per iteration. It is of convergence order two [5]. It is always important to develop iterative methods of a higher convergence order, as they provide an efficient approximation and more accuracy in finding the solution. There is a plethora of such methods (see [9][10][11][12][13][14] and the references therein) proposed by various researchers.
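As a reference point, NM for a finite-dimensional system F : R^m → R^m can be sketched as follows. This is an illustrative implementation with a toy system of our own; the linear solve plays the role of the inverse:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method x_{n+1} = x_n - F'(x_n)^{-1} F(x_n) for F: R^m -> R^m.
    One function evaluation and one linear solve (inverse) per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))  # solve F'(x) s = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy system: F(x, y) = (x^2 + y^2 - 1, x - y), a solution is (1/sqrt 2, 1/sqrt 2)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
sol = newton_system(F, J, [1.0, 0.5])
```

Solving the linear system F′(x) s = F(x) instead of forming the inverse explicitly is the standard numerically sound realization of the "one inverse per iteration" count.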
In particular, we investigate the convergence of the fourth-order method (3), defined for each n = 0, 1, 2, . . . in terms of an auxiliary iterate y_n and the operators F′((1 − a_j)x_n + a_j y_n), where {a_j} ⊂ [0, 1] and {b_j} with ∑_{j=1}^{k} b_j = 1 are sequences of parameters and k is a natural number. The authors in [8,9], motivated by the quadrature formula, studied the local convergence of this method utilizing the Taylor series expansion of the operator F in the special case when B1 = B2 = R^m, where m is a natural number. The benefits over other methods of the same convergence order were also explained in [8]. The convergence is established under differentiability assumptions on F^(λ), λ = 1, 2, 3, 4, 5. However, these results assure convergence only when the operator is five times differentiable, although the method itself may converge under weaker conditions. Let us look at a simple example in the case when D = [−0.5, 1.5] and

F(t) = t^3 ln t^2 + t^5 − t^4 if t ≠ 0, with F(0) = 0.

Then, one can clearly see that the results in [8,9] do not apply, since F^(3) is unbounded at t = 0. Other problems include:
(1) The uniqueness of the solution region is not provided.
(2) The choice of the starting point x_0 ∈ D is a "shot in the dark".
(3) There are no estimates on ||x_{n+1} − x_n|| or ||x* − x_n|| that can be computed in advance based on the properties of the operator F.
(4) The semilocal convergence of the method has not been studied.
(5) The derivatives of order higher than one used in the local convergence analysis do not appear in the method.
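The displayed formulas of the method (3) are not reproduced above, so the sketch below is only a plausible reading assumed from the surrounding description: a Newton first substep producing y_n, followed by a second substep that inverts the quadrature-weighted operator A_n = ∑_{j=1}^k b_j F′((1 − a_j)x_n + a_j y_n). The exact substeps should be taken from the original display, and the example system is our own:

```python
import numpy as np

def quadrature_newton(F, dF, x0, a, b, tol=1e-12, max_iter=50):
    """Sketch of a two-step quadrature-based scheme (assumed form, see text):
       y_n     = x_n - F'(x_n)^{-1} F(x_n)
       x_{n+1} = y_n - A_n^{-1} F(y_n),
       A_n     = sum_j b_j F'((1 - a_j) x_n + a_j y_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = x - np.linalg.solve(dF(x), F(x))
        A = sum(bj * dF((1.0 - aj) * x + aj * y) for aj, bj in zip(a, b))
        x_new = y - np.linalg.solve(A, F(y))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy system with solution (0, 0): F(v) = (e^{v1} - 1, v2^2 + v2)
F = lambda v: np.array([np.exp(v[0]) - 1.0, v[1]**2 + v[1]])
dF = lambda v: np.array([[np.exp(v[0]), 0.0], [0.0, 2*v[1] + 1.0]])
sol = quadrature_newton(F, dF, [1.0, 0.5], a=[0.5, 0.5], b=[0.5, 0.5])
```

With k = 2, a_1 = a_2 = 1/2, b_1 = b_2 = 1/2 (the parameter choice used in Section 4), A_n reduces to the single midpoint operator F′((x_n + y_n)/2).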
It is worth noticing that the aforementioned problems appear in numerous other methods. These problems motivate the writing of this paper. In particular, we positively address all of these problems utilizing only the operators appearing in the method and the very general ω-continuity conditions on the operator F′ [1,7]. In the case of the semilocal convergence, the concept of majorizing sequences is employed [1,6,7]. The idea of this paper can also be applied to other methods [6][15][16][17] analogously, since it only depends on the inverses of the operators involved and not on the method itself [12]. Moreover, see the related papers [18][19][20][21].
The paper is structured as follows: The local convergence in Section 2 is followed by the semilocal convergence in Section 3. The numerical applications and concluding remarks appearing in Sections 4 and 5, respectively, complete the paper.

Convergence I: Local
We denote the interval [0, ∞) by M for brevity. Suppose: there exists a nondecreasing and continuous function (NCF) w0 : M → R such that the function w0(t) − 1 has a smallest positive root, denoted by s; the function q(t) − 1 has a smallest positive root; and the functions g1(t) − 1 and g2(t) − 1 have smallest positive roots r1 and r2, respectively. Then, in Theorem 1, the parameter r given as r = min{r1, r2} is proven to be a radius of convergence for the method (3). The sets S(x*, µ) and S[x*, µ] denote, respectively, the open and closed balls in B1 with center x* ∈ B1 and radius µ > 0.
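Such smallest positive roots are straightforward to compute numerically. As an illustration, for the sample choice w0(t) = (e − 1)t appearing in the first example of Section 4, the root s of w0(t) − 1 = 0 equals 1/(e − 1); the following scan-and-bisect helper is our own, not from the paper:

```python
import math

def smallest_positive_root(phi, upper, samples=10000):
    """Scan (0, upper] for the first sign change of phi and refine it by
    bisection. Returns the smallest positive root, assuming phi is continuous."""
    h = upper / samples
    a = 1e-12
    fa = phi(a)
    for i in range(1, samples + 1):
        b = i * h
        fb = phi(b)
        if fa * fb <= 0:
            for _ in range(80):  # bisection refinement of the bracket [a, b]
                m = 0.5 * (a + b)
                if fa * phi(m) <= 0:
                    b = m
                else:
                    a, fa = m, phi(m)
            return 0.5 * (a + b)
        a, fa = b, fb
    raise ValueError("no sign change found in (0, upper]")

# Sample choice from Section 4: w0(t) = (e - 1) t, so s solves (e - 1) s = 1
w0 = lambda t: (math.e - 1.0) * t
s = smallest_positive_root(lambda t: w0(t) - 1.0, upper=2.0)
```

The same helper applies verbatim to g1(t) − 1 and g2(t) − 1 once those functions are assembled from w0 and w.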
The parameter r and the functions w0 and w are connected to the operator F as follows, provided that x* is a solution of Equation (1). The local convergence of the method (3) follows next, based on the preceding terminology and the conditions (E1)-(E3).
Proof. We shall establish, using induction, the assertions (9) and (10), with r, g1, and g2 as previously defined.
We first obtain, in turn, that the iterate x1 ∈ S(x*, r) and that the assertion (10) holds if n = 0.
By switching x0, y0, x1 with x_m, y_m, x_{m+1} in the previous calculations, the induction for the assertions (9) and (10) is completed. Therefore, the estimate ||x_{m+1} − x*|| ≤ c ||x_m − x*||, where c = g2(||x0 − x*||) ∈ [0, 1), gives lim_{m→∞} x_m = x*, and the iterate x_{m+1} ∈ S(x*, r).
The uniqueness of the solution region is determined in the next result.
Then, the equation (1) is uniquely solvable by x * in the region D 2 .
Proof. Let us define the linear operator T as indicated. It then follows by (1)-(3) that the assertion holds.

Convergence II: Semilocal
We still rely on the ω-continuity of F′, but a scalar majorizing sequence is also employed.
Let v0 : M → R and v : M1 → R be NCFs. If α0 = 0 and β0 ≥ 0, define the sequences {t_n}, {s_n} by the formula (20), where

q_n = ∑_{j=1}^{k} |b_j| v0(|1 − a_j| t_n + a_j s_n).

These scalar sequences are shown to be majorizing for the method (3). However, first, some general convergence conditions are needed for them.
Then, the sequences {t n }, {s n } given by the formula (20) are convergent to some d * ∈ [0, d].
Proof. The Formula (20) and Condition (21) imply t_n ≤ s_n ≤ t_{n+1} < d. Hence, the result follows. The functions v0, v and the parameter d* relate to the operators F and F′, provided x0 ∈ D is such that F′(x0)^{−1} ∈ L(B2, B1) and ||F′(x0)^{−1} F(x0)|| ≤ β0. Suppose the conditions (H1)-(H4) below. The semilocal convergence then follows for the method (3).

Theorem 2.
Suppose that the conditions (H1)-(H4) hold. Then, the sequence generated by the method (3) is convergent to some x* ∈ S[x0, d*] solving the equation F(x) = 0 and satisfying (22).

Proof. The assertions (23) and (24) are shown using induction. The assertion (23) holds if n = 0 by the choice of t0 and s0 and by the first substep of the method (3). It follows that the iterate y0 ∈ S(x0, d*). By switching x* with x0 and the conditions (E1)-(E3) with (H1)-(H4), we obtain the corresponding estimates. By the second substep of the method (3), we can then bound ||x_{m+1} − x_m||; hence, the iterate x_{m+1} ∈ S(x0, d*) and (23) holds. By the first substep of the method (3), we obtain (28) in turn. Consequently, the induction is completed and the iterate y_{m+1} ∈ S(x0, d*). It follows by Lemma 1 and the condition (H2) that the sequences {t_m}, {s_m} are Cauchy and, as such, convergent. Then, by (23) and (24), the sequences {x_m}, {y_m} are also Cauchy and, consequently, convergent to some x* ∈ S[x0, d*]. Moreover, by letting m → ∞ in (28) and using the continuity of the operator F, we deduce that F(x*) = 0. Furthermore, for an integer j ≥ 0, using the estimation (29), we conclude that (22) holds by letting j → +∞ in (29).
Next, the uniqueness region is provided.
(3) There exists d2 ≥ d1 such that the stated condition holds. Then, the equation F(x) = 0 is uniquely solvable by u* in the region D4.

Examples and Numerical Calculations
Numerical experiments are essential for validating and verifying theoretical results. This section comprises six numerical problems based on three applied science problems to check the theoretical results obtained in the preceding sections. Two types of convergence analysis are mainly in focus: semilocal and local.
In order to evaluate the effectiveness of the method (3), some applications are simulated and the results are analyzed. In particular, the residual errors, the number of iterations, the convergence radii, and the expected order of convergence are computed. The following formula is used for the computational order of convergence (COC):

ρ ≈ ln( ||x_{n+1} − x_n|| / ||x_n − x_{n−1}|| ) / ln( ||x_n − x_{n−1}|| / ||x_{n−1} − x_{n−2}|| ).

We observe that the iterations terminate when the error is sufficiently small, according to the stopping criterion ||x_{n+1} − x_n|| < ε. The first Fréchet derivative is given in each case. For the first example, we find that x* = (0, 0, 0), ω0(t) = (e − 1)t, and ω1(t) = et. Then, taking k = 2, a1 = a2 = 1/2, b1 = b2 = 1/2, the smallest positive roots of g_i(t) − 1 = 0 for i = 1, 2 are 0.324947 and 0.264229. Then, the radius of convergence is given as r = 0.264229.
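The COC values reported in the tables can be computed from the successive step norms d_n = ||x_{n+1} − x_n||; a minimal sketch of this computation (our own helper) is:

```python
import math

def computational_order(norm_diffs):
    """Computational order of convergence (COC) from step norms
    d_n = ||x_{n+1} - x_n||:  rho_n = ln(d_n/d_{n-1}) / ln(d_{n-1}/d_{n-2})."""
    d = norm_diffs
    return [math.log(d[n] / d[n-1]) / math.log(d[n-1] / d[n-2])
            for n in range(2, len(d))]

# Step norms behaving like d_{n+1} = d_n^4 should report a COC close to 4
steps = [1e-1]
for _ in range(3):
    steps.append(steps[-1] ** 4)
rhos = computational_order(steps)
```

Note that at least four consecutive iterates (three step norms) are needed before the first COC value is defined.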

Example 5.
Consider the following nonlinear partial differential equation, also known as the problem of molecular interaction, defined by (30) subject to the given boundary conditions. Discretizing the PDE (30) by applying central divided differences yields a system of nonlinear equations, where i = 1, 2, 3, . . . , l − 1 and j = 1, 2, 3, . . . , l − 1. For instance, for l = 6, we obtain a 5 × 5 system and a = 1/l. The COC, the number of iterations, the residual errors, the CPU timing, and the error difference between two iterations for Example 5 are reported in Table 1.
Example 6. Consider the nonlinear boundary value problem (31), which governs the flow of current in a vacuum tube, with the boundary conditions ν(0) = 0, ν(2) = 1. Further, we consider a uniform partition of the given interval [0, 2]. If we discretize the problem (31) by using second-order divided differences for the first and second derivatives, then we obtain an (n − 1) × (n − 1) system of nonlinear equations. Let us consider η = 1/2 and n = 8; so, we have a 7 × 7 system of nonlinear equations. The obtained results are depicted in Table 2.
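Since the displayed boundary value problem (31) is not reproduced above, the following sketch only illustrates the discretization recipe on a model problem ν'' = η ν² with the same boundary conditions and mesh (n = 8, η = 1/2); the nonlinearity is a placeholder of our own, not the paper's Equation (31):

```python
import numpy as np

n = 8
h = 2.0 / n                      # mesh width on [0, 2]
eta = 0.5
bc0, bc1 = 0.0, 1.0              # boundary conditions nu(0) = 0, nu(2) = 1

def residual(u):
    """Central-difference residual for the model BVP nu'' = eta * nu^2
    (placeholder nonlinearity).  u holds interior values nu_1, ..., nu_{n-1}."""
    v = np.concatenate(([bc0], u, [bc1]))
    F = np.empty(n - 1)
    for i in range(1, n):
        second = (v[i-1] - 2.0 * v[i] + v[i+1]) / h**2  # second divided diff.
        F[i-1] = second - eta * v[i]**2
    return F

def newton_fd(u0, tol=1e-12, max_iter=50):
    """Newton's method with a forward-difference Jacobian approximation."""
    u = u0.copy()
    for _ in range(max_iter):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((u.size, u.size))
        eps = 1e-7
        for j in range(u.size):
            up = u.copy()
            up[j] += eps
            J[:, j] = (residual(up) - F) / eps
        u = u - np.linalg.solve(J, F)
    return u

# 7 interior unknowns, started from the linear interpolant of the boundary data
u = newton_fd(np.linspace(bc0, bc1, n + 1)[1:-1])
```

The resulting 7 × 7 system is tridiagonal; exploiting that structure (or supplying the analytic Jacobian, as the method (3) does with F′) would make the solve considerably cheaper.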

Concluding Remarks
In the foregoing study, we have analyzed the local and the semilocal convergence of a fourth-order iterative method based on quadrature formulae in Banach spaces by using majorizing sequences. The local convergence analysis is based on very general ω-continuity conditions on the first Fréchet derivative, thereby extending the applicability and usage of the method. The theoretical results are applied to some numerical examples to demonstrate the efficiency of our convergence analysis. It can be observed that our theoretical conclusions work well in situations where earlier analyses based on the Lipschitz condition cannot be used. Future work involves other methods and applications to integral equations and to the solution of PDEs.