1. Introduction
Finding an approximate solution μ of the equation

F(x) = 0,    (1)

is one of the top priorities in the field of numerical analysis. We assume that F is a Fréchet-differentiable operator defined on a convex subset D of a Banach space and taking values in a Banach space. The set  is known as the set of bounded linear operators between these spaces.
The problem of finding an approximate, unique solution μ is very important, since many problems can be written in the form of Equation (1); see References [1,2,3,4,5,6,7,8]. However, it is not always possible to obtain the solution μ in explicit form, so most solvers are iterative in nature. The analysis of such solvers involves a local convergence study, which relies on information in a neighborhood of μ and ensures the convergence of the iteration procedure. One of the most significant tasks in the analysis of iterative procedures is to determine the convergence region; hence, it is essential to estimate the radius of convergence.
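To make the notion of an iterative solver concrete, the classical Newton iteration for a scalar equation F(x) = 0 (a special case of the class of methods discussed below, sketched here only for illustration) can be written as:

```python
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x_{n+1} = x_n - F(x_n)/F'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:       # stop once the update is below tolerance
            break
    return x

# Approximate the root of F(x) = x^2 - 2 starting from x0 = 1.5
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Whether the iterates converge at all depends on the starting guess x0, which is precisely the local convergence issue addressed in this paper.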
We redefine the iterative solver suggested in Reference [7], for all n ≥ 0, as

(2)

where x0 is a starting guess,  is a p-order iteration function (for ) and F′ stands for the first-order Fréchet derivative of F. The study of these methods is important for various reasons already stated in Reference [7]; for brevity, we refer the reader to Reference [7] and the references therein. In addition to those reasons, we also mention that method (2) generalizes existing, widely used Newton-type methods such as Newton's, Traub's and others, so it is important to study these methods under the same set of convergence criteria. Keeping the linear operator frozen is also a very cheap and efficient way of increasing the order of convergence. The convergence order of (2) was given in Reference [7], but using hypotheses involving up to the 7th-order derivative of the function F, even though only the 1st-order derivative appears in scheme (2). Such conditions hamper the applicability of solver (2). As a motivational example, consider a function F with  on  defined by

Using this definition, we get

and

It is clear from the above that the 3rd-order derivative of F is unbounded in D. There is a wealth of research articles on iterative solvers [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. The local convergence analysis of these solvers traditionally requires the use of Taylor expansions, and the operator involved must be sufficiently many times differentiable in a neighborhood of the solution μ. This way, the convergence order is established, but derivatives of order higher than one do not appear in these solvers; as we saw with the motivational example above, such assumptions restrict the applicability of the solvers. Another problem is that this approach does not provide error estimates on ‖xₙ − μ‖ that can be used to predetermine the number of steps required to attain a prescribed error tolerance. The uniqueness of the solution μ also cannot be established in any set containing it. Moreover, the choice of the starting guess is a shot in the dark. Therefore, it is important to find a technique other than the preceding one, and this is what we offer in this article. Furthermore, the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [27] are used to compute the convergence order (to be explained in Remark 1 (d)). These formulas do not require derivatives of order higher than one, and in the case of the ACOC, knowledge of μ is not needed. It is worth noting that the iterates are obtained by using (2), which involves the first derivative; hence, these iterates also depend on the first derivative (see Remark 1 (d)). Our techniques can be used on other solvers to extend their applicability in a similar fashion.
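The frozen-operator idea mentioned above, evaluating the linear operator F′ once per cycle and reusing it over several substeps, can be sketched as follows. This is a generic multi-step Newton variant for illustration only; the exact composition of method (2) is given in Reference [7] and is not reproduced here, so the number of substeps and the stopping rule below are assumptions.

```python
import numpy as np

def frozen_multistep_newton(F, J, x0, substeps=3, tol=1e-12, max_outer=50):
    """Multi-step Newton variant: the Jacobian J is evaluated once per
    outer iteration and reused ("frozen") for all inner substeps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        A = J(x)                                   # frozen linear operator
        y = x.copy()
        for _ in range(substeps):
            y = y - np.linalg.solve(A, F(y))       # reuse A; only F is re-evaluated
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# Example: F(x, y) = (x^2 + y^2 - 1, x - y), a solution is (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
sol = frozen_multistep_newton(F, J, np.array([1.0, 0.5]))
```

Each outer cycle costs one Jacobian evaluation and one matrix factorization but performs several cheap corrections, which is why freezing the operator is an inexpensive way to raise the convergence order.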
  2. Local Convergence
Here, we present a study of the local convergence of solver (2). For this, we consider a function  which is nondecreasing and continuous such that . We assume that

has a minimal positive solution .

Define functions  and  on the interval  by

where  and the functions  are also nondecreasing and continuous, satisfying . We have that  and ,  as . Then, by the intermediate value theorem, the functions  and  have solutions in the interval . Denote by  and  the smallest such solutions in  of the functions  and , respectively. Assume that  has a minimal positive solution . Consider the functions

These functions are defined on the interval , where . Consider the functions  on  defined by

where

Then,  and  as . Let  be the minimal solutions in  corresponding to these functions.
Set r as

(4)

Then, it follows that

and, for all ,

Let  and  be, respectively, the open and closed balls centered at μ and of radius r > 0. Next, the local convergence analysis of solver (2) follows.
Theorem 1. Let  be a differentiable operator. Let  and  be a nondecreasing continuous function such that . Let the parameter r be defined by (4). Suppose that there exists  such that

and

Moreover, suppose that, for all ,

and

Then, the sequence  generated for  by solver (2) is well defined, remains in  for all  and converges to μ, so that

and

Further, if

then μ is the only solution of the equation  in .

Proof. We use mathematical induction to show that expressions (18)–(21) are satisfied. Using hypotheses , (4), (5) and (13), we get

Therefore,  and  are well defined, and

By (2), (5), (7), (14) and (24), we have

showing (18) for  and .

By (2), (5), (8), (16) and (25), we obtain

showing (19) (for ) and . By (12), we can write

Then, from (15), (26) and (27), we obtain

We must show that . In view of (5), (9), (13) and (25), we get

so  exists,

and

Using (2), (5), (8), (9), (11) (for ), (28) and (31), we obtain

so (20) holds for  and . In an analogous way, we obtain for  that

which implies that (20) holds for  and . In view of solver (2), (5), (11) (for ) and the preceding estimates,

showing (21) (for ) with . Now, replace ,  and  by ,  and  in the preceding estimates; hence, we attain (18)–(21). From

we have  with . Finally, for the uniqueness of the solution, assume that there exists  satisfying . Set , so

Hence, Q is invertible, and

yields . ☐
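The radius comparison discussed in Remark 1 (c) below can be checked numerically. The closed forms used here, 2/(3L) for the Rheinboldt–Traub radius and 2/(2L0 + L) for the Argyros radius with a center-Lipschitz constant L0 ≤ L, are the classical expressions from the literature and are assumed rather than taken from the displayed formulas of this paper.

```python
def radius_rheinboldt_traub(L):
    """Classical convergence radius 2/(3L) for Newton's method
    under a Lipschitz constant L for F' on D."""
    return 2.0 / (3.0 * L)

def radius_argyros(L0, L):
    """Improved radius 2/(2*L0 + L), which additionally uses a
    center-Lipschitz constant L0 <= L."""
    return 2.0 / (2.0 * L0 + L)

# With L0 < L, the improved radius is strictly larger
L0, L = 0.5, 1.0
r_tr = radius_rheinboldt_traub(L)   # 2/3
r_a = radius_argyros(L0, L)         # 1.0
```

Since L0 ≤ L always holds, the second radius is never smaller than the first, which is the enlargement of the convergence region referred to in the remark.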
Remark 1.
- (a) It is clear from (13) that we can drop hypothesis (15) and choose
- (b) We can set

instead of (4), provided that the function  is strictly increasing.
- (c) If  are constant functions, then

and

where  is the convergence radius for Newton's solver [14]. Rheinboldt [26] and Traub [6] also provided a radius of convergence,

instead of the radius

given by Argyros [1,2], where  is a constant for (9) on D, so

so

and

- (d) By adopting conditions on up to the 7th-order derivative of the operator F, the order of convergence of solver (2) was given in Reference [7]. We assume hypotheses only on the 1st-order derivative of the operator F. To obtain the order of convergence, we adopted

ξ = ln( ‖x_{n+1} − μ‖ / ‖x_n − μ‖ ) / ln( ‖x_n − μ‖ / ‖x_{n−1} − μ‖ )

or

ln( ‖x_{n+1} − x_n‖ / ‖x_n − x_{n−1}‖ ) / ln( ‖x_n − x_{n−1}‖ / ‖x_{n−1} − x_{n−2}‖ ),

the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [28,29], respectively. These definitions can also be found in Reference [27]. They do not require derivatives of order higher than one. Indeed, notice that to generate the iterates  and therefore compute ξ and , we only need formula (2), which uses the first derivative. It is vital to note that the ACOC does not need prior information on the exact root μ.
- (e) Consider F satisfying the autonomous differential equation [1,2]

where P is a given continuous operator. Then, , so our results apply without knowledge of  and we can choose . Hence, we select .
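The COC and ACOC of Remark 1 (d) are straightforward to evaluate from stored iterates. A minimal sketch for scalar iterates, assuming the standard three-term quotient definitions from the literature, is:

```python
import math

def coc(xs, mu):
    """Computational order of convergence: needs the exact root mu
    and the last three iterates in xs."""
    e = [abs(x - mu) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """Approximate COC: uses the last three successive differences
    only, so no knowledge of the exact root is required."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# Newton iterates for x^2 - 2 = 0: both estimates should approach 2
xs = [1.5]
for _ in range(4):
    xs.append(xs[-1] - (xs[-1] ** 2 - 2.0) / (2.0 * xs[-1]))
```

For vector iterates, |·| would simply be replaced by a norm. Note that once an iterate reaches machine precision the quotients degenerate, so only the last few meaningful iterates should be used.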
  4. Application of Our Scheme to Large Systems of Nonlinear Equations
In Table 5, Table 6 and Table 7, we use , ,  and  to denote, respectively, the iteration index, the absolute residual error, the error between two consecutive iterations and the computational order of convergence. All the calculations were performed in the Mathematica software (Version 9, Wolfram Research, Champaign, IL, USA). We used at least 1000 digits of mantissa in order to minimize round-off errors. The notation  stands for .
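Examples 5 and 6 below reduce boundary value problems to nonlinear systems by finite differences. The exact discretization formulas of the paper are not reproduced here; the following generic sketch shows the standard central-difference residual for a model problem u″ = f(t, u) on [0, 1] with u(0) = u(1) = 0, which is the kind of system the solver is then applied to.

```python
import numpy as np

def bvp_residual(u, f, h):
    """Residual of the central-difference discretization of u'' = f(t, u)
    on the interior grid t_i = i*h with boundary values u(0) = u(1) = 0."""
    n = len(u)
    U = np.concatenate(([0.0], u, [0.0]))          # append boundary values
    t = h * np.arange(1, n + 1)                    # interior grid points
    return (U[2:] - 2.0 * U[1:-1] + U[:-2]) / h ** 2 - f(t, U[1:-1])

# Linear test problem u'' = -pi^2 sin(pi t), exact solution u = sin(pi t):
# the residual at the exact solution is only the O(h^2) truncation error.
n = 50
h = 1.0 / (n + 1)
t = h * np.arange(1, n + 1)
res = bvp_residual(np.sin(np.pi * t), lambda s, u: -np.pi ** 2 * np.sin(np.pi * s), h)
```

Setting this residual to zero for the unknown interior values u gives a system of n nonlinear equations, and refining the grid (larger n) produces the large systems treated in the examples.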
Example 5. We consider a boundary value problem [30], given by

Further, we chose a σ-point partition of  in the following way:

Furthermore, we assume that . By adopting the following technique for removing the derivatives in problem (61),

we obtain a system of nonlinear equations (SNE) of order . We choose the starting approximation . We solved the problem for a  SNE by choosing . We obtained the following solution:

The numerical outcomes are depicted in Table 5.

Example 6. We choose the prominent 2D Bratu problem [31,32], given by

Let us assume that  is a numerical result over the grid points of the mesh. In addition, we consider that  and  are the numbers of steps in the directions of x and t, respectively, and that h and k are the corresponding step sizes. In order to find the solution of PDE (62), we adopt the following approach,

which further yields the following SNE:

By choosing  and , we get a large SNE of order . The starting point is

and the results are depicted in Table 6.

Example 7. Finally, we deal with the following SNE:

In order to obtain a large system of nonlinear equations of order , we pick . In addition, we consider the following starting approximation for this problem:

which converges to . The attained computational outcomes are illustrated in Table 7.

  5. Concluding Remarks
Recently, there has been a surge in the development of multi-step solvers for nonlinear equations. In this article, we present a unifying local convergence analysis of solver (2), relying only on the first derivative. In this way, we expand the applicability of these solvers. Notice that the earlier studies of special cases of (2) use derivatives of order higher than one, even though such derivatives do not appear in the solver; moreover, they provide neither bounds on the distances ‖xₙ − μ‖ nor uniqueness theorems. In contrast, we provide computable bounds and uniqueness of solutions, and this is where the novelty of our article lies. Numerical examples and applications are also given to test the convergence conditions. In our applications, we solve the 2D Bratu problem and a boundary value problem, as well as a large system of nonlinear equations of size .