Some High-Order Iterative Methods for Nonlinear Models Originating from Real Life Problems

We develop a sixth-order Steffensen-type method with one parameter in order to solve systems of equations. Our study's novelty lies in the fact that two types of local convergence are established under weak conditions, including computable error bounds and uniqueness results. The performance of our methods is discussed and compared to that of other schemes using similar information. Finally, very large systems of equations (100 × 100 and 200 × 200) are solved in order to test the theoretical results and to compare them favorably to earlier works: the existing schemes consume at least double the CPU time compared to our methods PS1, PS2 and PS3. So, we conclude that our methods provide results faster than the other existing methods.

In particular, we propose the following new scheme, where $u_0 \in \Omega$ is an initial point and $\lambda \in \mathbb{R}$ is a free parameter. In addition, $[\cdot, \cdot; F] : \Omega \times \Omega \to L(B, B)$ is a divided difference of order one.
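The displayed scheme (2) is not reproduced here, but its building block is the classical Steffensen idea: replace the derivative $F'(u)$ by the divided difference $[u + F(u), u; F]$. The following is a minimal sketch of one such step for systems, assuming the standard componentwise divided difference (the paper only requires an abstract operator $[\cdot, \cdot; F]$; the function names are illustrative):

```python
import numpy as np

def divided_difference(F, u, y):
    """First-order divided difference [u, y; F] as a matrix.

    Uses the common componentwise definition in which column j is a
    difference quotient mixing the first j coordinates of u with the
    remaining coordinates of y. This is one concrete realization; the
    paper only assumes an abstract operator [., .; F].
    """
    n = len(u)
    D = np.zeros((n, n))
    for j in range(n):
        up = np.concatenate([u[:j + 1], y[j + 1:]])
        um = np.concatenate([u[:j], y[j:]])
        D[:, j] = (F(up) - F(um)) / (u[j] - y[j])
    return D

def steffensen_step(F, u):
    """One classical Steffensen step: u - [u + F(u), u; F]^{-1} F(u)."""
    Fu = F(u)
    A = divided_difference(F, u + Fu, u)
    return u - np.linalg.solve(A, Fu)
```

For a scalar equation such as $u^2 - 2 = 0$ (treated as a 1-dimensional system), iterating `steffensen_step` from $u_0 = 1.5$ converges rapidly to $\sqrt{2}$ without any derivative evaluations.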
We shall present two convergence analyses. Later, we present the advantages over other methods using similar information.

Local Convergence Analysis I
We assume that $B = \mathbb{R}$. We use method (2) with standard Taylor expansions [9] for studying local convergence.

Theorem 1. Suppose that the mapping F is sufficiently differentiable on Ω, with $u^* \in \Omega$ a simple zero of F. We also assume that $F'(u^*)^{-1} \in L(B, B)$. Then, $\lim_{p \to \infty} u_p = u^*$, provided that $u_0$ is close enough to $u^*$. Moreover, the convergence order is six.
Proof. Set $\epsilon_p = u_p - u^*$ and $Q_p = \frac{F^{(p)}(u^*)}{p!}$, where $(\epsilon_p)^{\gamma} = (\epsilon_1, \epsilon_2, \ldots, \epsilon_k)^{\gamma}$, $\epsilon_p \in \mathbb{R}^p$. We shall use some Taylor series expansions, first for $F(u_p)$ and $F(u_p + F(u_p))$, respectively. By using the expressions (3) and (4) in the first substep of scheme (2), we have, where. Secondly, we expand $F(y_p)$:
$$F(y_p) = Q_1 \epsilon_p + Q_2 \epsilon_p^2 + O(\epsilon_p^3). \qquad (6)$$
In view of (3)–(6), in the second substep of scheme (2) we get, where. Thirdly, we need the expansions for $F(z_p)$ and $F(z_p + F(z_p))$. Hence, by (5) and (8), we get, leading, together with the third substep of method (2), to, where.

According to Theorem 1, the applicability of method (2) is limited to mappings F with derivatives up to the seventh order. Now, we choose $B = \mathbb{R}$, $\Omega = [-\frac{3}{2}, \frac{1}{2}]$ and define a function f as follows: We have the following derivatives of the function f. However, $f'''(\xi)$ is not bounded on Ω, so the results of Section 2 cannot be applied. For this case, a more general alternative is given in the upcoming section.

Local Convergence Analysis II
has $\rho_1$ as the smallest positive zero. In addition, we assume that w is a continuous and increasing map with $w(0, 0) = 0$. Consider the functions $g_1$ and $h_1$ defined on the semi-open interval $[0, \rho_1)$ as follows: and $h_1(t) = g_1(t) - 1$.
By these definitions, we have $h_1(0) = -1$ and $h_1(t) \to \infty$ as $t \to \rho_1^-$. Consequently, the intermediate value theorem assures that the function $h_1$ has at least one zero in $(0, \rho_1)$. Let $r_1$ be the smallest such zero.
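In practice, the smallest zero $r_1$ can be located by a sign scan followed by bisection, exactly mirroring the intermediate value theorem argument above. A minimal sketch, where $g_1(t) = \frac{t}{1 - t}$ is a hypothetical stand-in for the paper's function (it satisfies $h_1(0) = -1$ and $h_1(t) \to \infty$ as $t \to 1^-$, with $\rho_1 = 1$):

```python
def smallest_zero(h, lo, hi, tol=1e-12):
    """Find the smallest zero of h on (lo, hi), assuming h(lo) < 0.

    First march rightwards in small steps until the sign of h flips,
    which brackets the first zero, then refine by bisection.
    """
    dt = (hi - lo) / 10_000
    a = lo
    while h(a + dt) < 0:
        a += dt
    b = a + dt
    while b - a > tol:
        m = 0.5 * (a + b)
        if h(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Hypothetical stand-in for the paper's g1 (not the actual function):
g1 = lambda t: t / (1.0 - t)
h1 = lambda t: g1(t) - 1.0
r1 = smallest_zero(h1, 0.0, 1.0 - 1e-9)  # smallest zero of h1 in (0, 1)
```

For this stand-in, $h_1(t) = \frac{t}{1-t} - 1$ vanishes at $t = \frac{1}{2}$, so the routine returns $r_1 \approx 0.5$.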
Accordingly, we have and, for all $t \in [0, r)$. $S(v, c)$ denotes the open ball centered at $v \in B$ with radius $c > 0$; by $\bar{S}(v, c)$, we denote the closure of $S(v, c)$. We use the following conditions (A) in order to study the local convergence:

(a$_1$) $F : \Omega \to B$ is a differentiable operator in the Fréchet sense, and $[\cdot, \cdot; F] : \Omega \times \Omega \to L(B, B)$ is a divided difference of order one. In addition, we assume that $u^* \in \Omega$ is a simple zero of F. Finally, $F'(u^*)^{-1} \in L(B, B)$.
there exist parameters $a \geq 0$ and $b > 0$ such that, for each $u, y \in \Omega$,

Theorem 2.
Under the hypotheses (A), further assume that $u_0 \in S(u^*, r) \setminus \{u^*\}$. Then, the following assertions hold and. Moreover, $u^*$ is the unique solution of $F(u) = 0$ in the set $\Omega_1$ mentioned in hypothesis (a$_5$).

Numerical Examples
Here, we check the convergence conditions on the three problems (1)–(3). In the examples, we choose the divided difference $[u, y; F] = \int_0^1 F'(y + \theta(u - y))\, d\theta$. The hypotheses of Theorem 2 can be verified for the given choices of the "w" functions and the parameters a and b.
We use (34), where t k and w k are the abscissas and weights, respectively.
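The integral form of the divided difference can be approximated by such a quadrature rule with abscissas $t_k$ and weights $w_k$. A minimal sketch using Gauss–Legendre nodes mapped to $[0, 1]$ (the function names are illustrative, not from the paper):

```python
import numpy as np

def dd_integral(Fprime, u, y, n_nodes=8):
    """Approximate [u, y; F] = \\int_0^1 F'(y + t (u - y)) dt
    by Gauss-Legendre quadrature.

    Fprime(v) must return the Jacobian matrix of F at v.
    """
    t, w = np.polynomial.legendre.leggauss(n_nodes)  # nodes on [-1, 1]
    t = 0.5 * (t + 1.0)  # map abscissas to [0, 1]
    w = 0.5 * w          # rescale weights accordingly
    return sum(wk * Fprime(y + tk * (u - y)) for tk, wk in zip(t, w))
```

As a sanity check, for the scalar map $F(u) = u^2$ one has $F'(v) = 2v$ and $\int_0^1 2(y + t(u - y))\,dt = u + y$, which the quadrature reproduces exactly since the integrand is linear.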
The radii for Example 3 are displayed in Tables 5 and 6, and the radii of method (2) for Example 4 are listed in Tables 7 and 8.
Here, $j$, $\|F(u_j)\|$, $\|u_{j+1} - u_j\|$, and
$$\rho^* \approx \frac{\log\big(\|u_{j+1} - u_j\| / \|u_j - u_{j-1}\|\big)}{\log\big(\|u_j - u_{j-1}\| / \|u_{j-1} - u_{j-2}\|\big)}$$
stand for the iteration index, the absolute residual error of the function F, the error between two successive iterations, and the computational convergence order, respectively. Their values are listed in Tables 9–11. Moreover, the quantity $\eta$ is the last computed value of $\frac{\|u_{j+1} - u_j\|}{\|u_j - u_{j-1}\|^6}$. All of the above quantities were computed with Mathematica 9. To minimize round-off errors, we chose multiple precision arithmetic with 1000 digits of mantissa. In all of the mentioned tables, $b_1(\pm b_2)$ stands for $b_1 \times 10^{\pm b_2}$. We adopted the command "AbsoluteTiming[]" in order to measure the CPU time; each program was run three times, and the average CPU times are reported in Table 12. There, one can also observe the time used by each iterative method; we point out that for large problems the method PS1 uses the minimum time, so it is very competitive. The configuration of the computer used is: Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz; Make: HP; RAM: 8.00 GB; System type: 64-bit operating system, x64-based processor.
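The computational convergence order $\rho^*$ above is evaluated from the last four iterates. A minimal sketch in double precision (the paper uses 1000-digit arithmetic, which this sketch does not reproduce):

```python
import math

def coc(iterates):
    """Computational order of convergence rho* from the last four
    scalar iterates, using the ratio-of-logs formula in the text."""
    e1 = abs(iterates[-1] - iterates[-2])  # ||u_{j+1} - u_j||
    e2 = abs(iterates[-2] - iterates[-3])  # ||u_j - u_{j-1}||
    e3 = abs(iterates[-3] - iterates[-4])  # ||u_{j-1} - u_{j-2}||
    return math.log(e1 / e2) / math.log(e2 / e3)
```

For instance, applied to Newton iterates for $u^2 - 2 = 0$ started at $u_0 = 1.5$, `coc` returns a value close to 2, the known order of Newton's method; for scheme (2) the analogous computation should approach 6.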

Example 5.
Here, we deal with a boundary value problem from Ortega and Rheinboldt [9], given by. We consider a partition of the interval [0, 1] with $y_0 = y(u_0) = 0$, $y_1 = y(u_1)$, ..., $y_{n-1} = y(u_{n-1})$, $y_p = y(u_p) = 1$. Now, we discretize expression (46) by adopting the following numerical formulas for the derivatives:
$$y'_j = \frac{y_{j+1} - y_{j-1}}{2h}, \qquad y''_j = \frac{y_{j-1} - 2y_j + y_{j+1}}{h^2}, \qquad j = 1, 2, \ldots, p - 1,$$
which leads to. The computational results are listed in Table 9, on the basis of the initial approximation $y^{(0)} = \big(\frac{3}{2}, \frac{3}{2}, \frac{3}{2}, \frac{3}{2}, \frac{3}{2}, \frac{3}{2}\big)^T$.

Example 6. The classical 2D Bratu problem [23,24] is given by. By adopting a finite difference discretization, we can reduce the above PDE (48) to a nonlinear system. For this purpose, we denote by $\Delta_{i,j} = u(\mu_i, \theta_j)$ the numerical solution at the grid points of the mesh. In addition, $M_1$ and $M_2$ stand for the numbers of steps in the directions of $\mu$ and $\theta$, respectively, and $h$ and $k$ are the corresponding step sizes. Applying the central difference formulas to $u_{\mu\mu}$ and $u_{\theta\theta}$ leads to. For obtaining a large $100 \times 100$ system, we choose $M_1 = M_2 = 11$, $C = 0.1$ and $h = \frac{1}{11}$. The numerical results are listed in Table 10, based on the initial guess $u_0 = \big(0.1 \sin(\pi h i) \sin(\pi h j)\big)^T$, $i = j = 10$.
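The Bratu discretization above can be assembled as a nonlinear residual map whose zero is the discrete solution. A minimal sketch of the residual, assuming the standard 5-point stencil, a square uniform grid ($h = k$), zero Dirichlet boundary values, and the usual form $u_{\mu\mu} + u_{\theta\theta} + C e^{u} = 0$ of the Bratu equation (the function name and array layout are illustrative):

```python
import numpy as np

def bratu_residual(U, C, h):
    """Residual of the discrete 2D Bratu problem on the interior of a
    uniform grid with zero Dirichlet boundary values.

    U is the (m x m) array of interior unknowns; the returned array has
    the same shape, and the discrete solution satisfies residual == 0.
    """
    m = U.shape[0]
    P = np.zeros((m + 2, m + 2))  # pad with the zero boundary values
    P[1:-1, 1:-1] = U
    # 5-point central difference approximation of the Laplacian
    lap = (P[:-2, 1:-1] + P[2:, 1:-1] +
           P[1:-1, :-2] + P[1:-1, 2:] - 4.0 * U) / h**2
    return lap + C * np.exp(U)
```

With $M_1 = M_2 = 11$ there are $10 \times 10 = 100$ interior unknowns, matching the $100 \times 100$ system in the text; flattening `U` with `ravel()` yields the vector form on which method (2) operates.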

Remark 3.
On the basis of Tables 9–11, we conclude that our methods PS1, PS2 and PS3 perform better than the existing schemes AS, HS and SM with respect to residual errors, errors between two consecutive iterations, and the asymptotic error constant. In addition, our methods demonstrate a stable computational order of convergence. Finally, we conclude that our methods not only outperform the existing methods in the numerical results, but also take about half of the CPU time of the other existing methods (see Table 12). According to the CPU times, method PS3 takes the lowest time for executing the results; all of the other schemes AS, HS and SM consume at least double the CPU time compared to our methods PS1, PS2 and PS3. So, we conclude that our methods provide results faster than the other existing methods.

Conclusions
We presented a new family of Steffensen-type methods with one parameter. The local convergence is studied in Section 2 using Taylor expansions and derivatives up to order seven, when $B = \mathbb{R}$. To extend the applicability of these iterative methods, in Section 3 we only use hypotheses on the first derivative and Banach space valued operators. In this way, we also obtain computable error bounds on $\|u_p - u^*\|$ as well as uniqueness results based on generalized Lipschitz-type real functions. Numerical examples and favorable comparisons to other methods can be found in Section 4.