A New Higher-Order Iterative Scheme for the Solutions of Nonlinear Systems

Abstract: Many real-life problems can be reduced to scalar and vectorial nonlinear equations through mathematical modeling. In this paper, we introduce a new sixth-order iterative family for systems of nonlinear equations. In addition, we present its convergence analysis, together with computable radii that guarantee convergence for Banach space valued operators and error bounds based on Lipschitz constants. Moreover, we show the applicability of the new methods to several real-life problems, such as kinematic synthesis, Bratu's, Fisher's, boundary value, and Hammerstein integral problems. We conclude with numerical experiments, in which the new schemes perform better than other competing schemes.


Introduction
The establishment of efficient higher-order iterative schemes for finding solutions of the nonlinear system

F(U) = 0,    (1)

where F : D ⊂ R^m → R^m is a differentiable mapping with open domain D, is one of the foremost tasks in numerical analysis and computational methods because of its wide applicability to real-life situations. Many real-life problems can be phrased as the nonlinear system (1) while sharing the same fundamental properties. For example, problems from transport theory, combustion, reactors, kinematic synthesis, steering, chemical equilibrium, neurophysiology, and economic modeling have been solved by formulating them as F(U) = 0; details can be found in the research articles [1][2][3][4][5].
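For concreteness, systems of the form F(U) = 0 are classically attacked with Newton-type iterations, U_{ζ+1} = U_ζ − [F'(U_ζ)]^{-1} F(U_ζ), which the higher-order schemes discussed later refine. A minimal sketch (the 2×2 test system and tolerances below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def newton_system(F, J, U0, tol=1e-12, max_iter=50):
    """Solve F(U) = 0 by Newton's method: U <- U - J(U)^{-1} F(U)."""
    U = np.asarray(U0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(U), F(U))  # solve J(U) step = F(U)
        U = U - step
        if np.linalg.norm(step) < tol:
            break
    return U

# Illustrative system: x^2 + y^2 - 4 = 0, x*y - 1 = 0
F = lambda U: np.array([U[0]**2 + U[1]**2 - 4.0, U[0] * U[1] - 1.0])
J = lambda U: np.array([[2.0 * U[0], 2.0 * U[1]], [U[1], U[0]]])

root = newton_system(F, J, [2.0, 0.5])
```

Each step costs one Jacobian evaluation and one linear solve; multi-point methods add evaluations per step in exchange for a higher convergence order.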
Analytical methods for these problems are rare. Therefore, many authors have developed schemes based on iteration procedures. These iterative methods depend on several factors, such as the initial guess(es), the problem under consideration, the structure of the proposed method, its efficiency, and so forth (for more details, see [6][7][8][9][10]). Some authors [11][12][13][14][15][16] have given special attention to the development of higher-order multi-point iterative methods. Faster convergence toward the required root, better efficiency, lower CPU time, and higher accuracy are among the main reasons for the importance of multi-point methods.
The inspiration behind this work was to suggest a new sixth-order iterative technique based on the weight-function approach with low computational cost for large nonlinear systems. The beauty of this approach is its flexibility: it produces new methods as well as special cases of earlier ones. A good variety of applied-science problems is considered in order to investigate the authenticity of the presented methods. Finally, numerical experiments demonstrate the superiority of our schemes over others in regard to computational cost, residual error, and CPU time. Moreover, the new schemes exhibit a stable computational order of convergence and smaller asymptotic error constants than existing iterative methods.

Multi-Dimensional Case
Consider the following new scheme (2). We demonstrate its sixth-order convergence in Theorem 1 by adopting the procedure suggested in [16]. Let F : D ⊆ R^m → R^m be sufficiently differentiable in D. The kth derivative of F at u ∈ R^m, k ≥ 1, is the k-linear function F^(k)(u) : R^m × ··· × R^m → R^m with F^(k)(u)(v_1, ..., v_k) ∈ R^m, satisfying

F^(k)(u)(v_σ(1), ..., v_σ(k)) = F^(k)(u)(v_1, ..., v_k)

for each permutation σ of {1, 2, ..., k}. For ξ + h ∈ R^m contained in a neighborhood of the required root ξ of F(x) = 0, Taylor expansion yields

F(ξ + h) = F'(ξ)[h + Σ_{k=2}^{p} C_k h^k] + O(h^{p+1}),

where C_k = (1/k!) [F'(ξ)]^{-1} F^(k)(ξ). We can also write

F'(ξ + h) = F'(ξ)[I + Σ_{k=2}^{p} k C_k h^{k−1}] + O(h^p),

I being the identity and k C_k h^{k−1} ∈ L(R^m). Let e_ζ = U_ζ − ξ denote the error at the ζth step. Then the expansion

e_{ζ+1} = M e_ζ^p + O(e_ζ^{p+1}),

where M is a p-linear function M ∈ L(R^m × ··· × R^m, R^m), is known as the error equation, and p is the convergence order. Observe that e_ζ^p stands for (e_ζ, e_ζ, ..., e_ζ).
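The error equation e_{ζ+1} = M e_ζ^p + O(e_ζ^{p+1}) can be observed numerically: for Newton's method on a scalar problem (p = 2), the ratio |e_{ζ+1}|/|e_ζ|^2 settles near the asymptotic error constant |C_2| = |f''(ξ)/(2 f'(ξ))|. A minimal scalar sketch (the test function f(x) = x^2 − 2 is an illustrative assumption):

```python
import math

# f(x) = x^2 - 2, root xi = sqrt(2); Newton has p = 2 with C_2 = f''(xi)/(2 f'(xi)) = 1/(2 xi)
f  = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
xi = math.sqrt(2.0)

x, ratios = 1.0, []
for _ in range(4):
    e_old = abs(x - xi)
    x = x - f(x) / df(x)                   # Newton step
    e_new = abs(x - xi)
    if e_new > 0 and e_old > 0:
        ratios.append(e_new / e_old ** 2)  # approaches |C_2| = 1/(2*sqrt(2)) ~ 0.3536
```

The stabilizing ratio is exactly the p-linear constant M specialized to the scalar quadratic case.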
Theorem 1. Suppose F : D ⊆ R^m → R^m is a sufficiently differentiable mapping with open domain D containing the required zero ξ. Further, assume that F'(x) is invertible and continuous around ξ, and that the starting guess U_0 is close enough to ξ to ensure convergence. Then, scheme (2) attains maximum sixth-order convergence, provided that the conditions on the weight function hold, where I is the identity matrix.
Proof. We can write F(U_ζ) and F'(U_ζ) as the Taylor expansions (7) and (8), where I is the identity matrix of size m × m and C_m = (1/m!) F'(ξ)^{-1} F^(m)(ξ), m = 2, 3, 4, 5, 6. By expressions (7) and (8), we get (9) and (10). Using expression (10) in (2), we obtain (11), which further produces (12) and (13). From expressions (10) and (13) we easily obtain (14), and we deduce from (14) that T(U_ζ) − I = O(e_ζ). Moreover, we can write (15), so (16) holds. By adopting expressions (10) and (16), we get (17). Then, using expression (17) in (2), we obtain (18), where the α_i, i = 1, 2, 3, depend on b and the C_j, 2 ≤ j ≤ 6. After some simple algebraic calculations, we finally arrive at the error equation, where α_1 is a function of only b, C_2, C_3, C_4.

Specializations
Some fruitful special cases are mentioned below:
(1) For b = 1, we assume a weight function that generates a new sixth-order, Jarratt-type scheme.
(2) For b = 0, we consider another weight function, giving a second new sixth-order scheme.
(3) For b = 0.5, we assume yet another weight function, which yields a third new sixth-order scheme.
In like manner, we can obtain many familiar and advanced sixth-order, Jarratt-type schemes by adopting different weight functions.
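The sixth-order weight functions themselves are given by the displayed formulas above; as a runnable reference point for Jarratt-type constructions, here is a sketch of the classical fourth-order Jarratt scheme for systems (this is a well-known predecessor, not the paper's sixth-order method, and the 2×2 test system is an illustrative assumption):

```python
import numpy as np

def jarratt4(F, J, U0, tol=1e-13, max_iter=30):
    """Classical fourth-order Jarratt scheme:
       Y = U - (2/3) J(U)^{-1} F(U)
       U <- U - (1/2) [3 J(Y) - J(U)]^{-1} [3 J(Y) + J(U)] J(U)^{-1} F(U)
    """
    U = np.asarray(U0, dtype=float)
    for _ in range(max_iter):
        JU = J(U)
        s = np.linalg.solve(JU, F(U))            # Newton correction J(U)^{-1} F(U)
        Y = U - (2.0 / 3.0) * s                  # auxiliary point
        JY = J(Y)
        step = 0.5 * np.linalg.solve(3.0 * JY - JU, (3.0 * JY + JU) @ s)
        U = U - step
        if np.linalg.norm(step) < tol:
            break
    return U

# Illustrative system: x^2 + y^2 - 4 = 0, x*y - 1 = 0
F = lambda U: np.array([U[0]**2 + U[1]**2 - 4.0, U[0] * U[1] - 1.0])
J = lambda U: np.array([[2.0 * U[0], 2.0 * U[1]], [U[1], U[0]]])

root = jarratt4(F, J, [2.0, 0.5])
```

Note the cost profile typical of the family: one F evaluation and two Jacobian evaluations per step, with two linear solves.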

Local Convergence Analysis
It is well known that iterative methods defined on the real line or on the m-dimensional Euclidean space constitute the motivation for extending these methods to more abstract spaces, such as Hilbert, Banach, or other spaces. We present the local convergence analysis of method (2) after restating it for Banach space valued operators, for all ζ = 0, 1, 2, 3, ..., as method (25), where E_1, E_2 are Banach spaces and Ω ⊆ E_1 is a nonempty, convex, and open subset of E_1. Then, under certain hypotheses given later, method (25) converges to a solution U* of the equation F(x) = 0, where F : Ω → E_2 is a continuously differentiable operator in the sense of Fréchet. For the convergence analysis, we first need to define some parameters and scalar functions. Let ψ_0 : T → T be a continuous and increasing function with ψ_0(0) = 0, where T = [0, +∞), and suppose that the equation ψ_0(t) = 1 has at least one positive zero. Denote by ρ_0 the smallest such solution, and set T_0 = [0, ρ_0).
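Once ψ_0 is fixed, the smallest positive zero ρ_0 of ψ_0(t) = 1 can be located numerically; for the common Lipschitz-type choice ψ_0(t) = L_0 t it is simply 1/L_0. A bisection sketch (the sample L_0 is an illustrative assumption; bisection suffices because ψ_0 is increasing):

```python
def smallest_positive_zero(g, hi=1.0e6, tol=1e-12):
    """Bisection for the first positive root of g(t) = 0 on (0, hi],
    assuming g(0) < 0 and g increases through its first root."""
    a, b = 0.0, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) < 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

L0 = 2.5                              # illustrative Lipschitz constant
psi0 = lambda t: L0 * t               # psi_0(t) = L0 * t, increasing with psi0(0) = 0
rho0 = smallest_positive_zero(lambda t: psi0(t) - 1.0)   # expect 1/L0 = 0.4
```

The same routine applies to the other radius-defining scalar equations of the analysis, since all the ψ functions involved are continuous and increasing.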
Set Ω 1 = Ω ∩K(U * , R * ). Next, we provide the local convergence analysis of method (25) using the hypotheses (H) and the previously developed notations.

Theorem 2. Assume that the hypotheses (H) hold and that the starting point satisfies U_0 ∈ K(U*, R) \ {U*}. Then, the sequence {U_ζ} generated by method (25) is well defined, remains in K(U*, R), and converges to U*.
Moreover, U * is the only solution of F(x) = 0 in the set Ω 1 given in (h 5 ).
Then, we replace U_0, V_0, W_0, U_1 by U_j, V_j, W_j, U_{j+1} in the preceding estimates to finish the induction for (40)-(42). In view of the estimate ||U_{j+1} − U*|| ≤ c ||U_j − U*|| with c ∈ [0, 1), we conclude that lim_{j→∞} U_j = U* and U_{j+1} ∈ K(U*, R). For uniqueness, consider V* ∈ Ω with F(V*) = 0, and set K_1 = ∫_0^1 F'(V* + θ(U* − V*)) dθ, so K_1^{-1} ∈ L(E_2, E_1). Therefore, by the identity 0 = F(U*) − F(V*) = K_1(U* − V*), we deduce that U* = V*.
Application 1: Let us see how the functions A and B can be chosen when P is given by (22) above. Carrying out the corresponding estimates, function A can be defined accordingly; similarly, we can define B.

Remark 1. The results in this section were obtained using hypotheses only on the first derivative, in contrast to Theorem 1, where hypotheses involving derivatives of F up to order seven were used to show convergence order six. Hence, we have extended the usage of method (25) to Banach space valued operators. Notice also that there are even simple functions defined on the real line for which the hypotheses of Theorem 1 do not hold, so method (2) may or may not converge. As a motivational and academic example, see Example 6 in the next section; there, the third derivative of F does not exist. Using the approach of Theorem 2, we bypass the computation of derivatives of order higher than one, assuming hypotheses only on the first-order derivative of the operator F. For estimating the order of convergence, we adopt, for each σ = 1, 2, 3, 4, ...,

ρ = ln(||u_{σ+1} − ξ|| / ||u_σ − ξ||) / ln(||u_σ − ξ|| / ||u_{σ−1} − ξ||),

the computational order of convergence (COC), and

ρ* = ln(||u_{σ+2} − u_{σ+1}|| / ||u_{σ+1} − u_σ||) / ln(||u_{σ+1} − u_σ|| / ||u_σ − u_{σ−1}||),

the approximate computational order of convergence (ACOC) [17,18], respectively. These definitions can also be found in [19]. They do not require derivatives of order higher than one. Indeed, notice that to generate the iterates u_σ, and therefore to compute ρ and ρ*, we only need formula (2) with first derivatives. It is vital to note that ACOC does not need prior information about the exact root ξ.
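ACOC is computed directly from consecutive differences of the iterates, with no knowledge of ξ. A minimal sketch (the Newton test sequence on f(x) = x^2 − 2 is an illustrative assumption; its ACOC should approach 2):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Newton iterates for f(x) = x^2 - 2 starting at x = 1 (quadratic convergence)
xs, x = [1.0], 1.0
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)
    xs.append(x)
rho_star = acoc(xs)          # close to 2 for a second-order sequence
```

COC is computed identically with the differences ||u_σ − ξ|| once the exact root is known.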

Numerical Experimentation
Here, we demonstrate the suitability of our iterative methods for real-life problems and validate the theoretical results presented in earlier sections. We consider six examples: four real-life problems (namely, the one-dimensional Bratu, Fisher's, kinematic synthesis, and Hammerstein integral problems), a standard academic problem, and a motivational problem. The corresponding starting approximations and zeros are depicted in Examples (1)-(6).
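For the one-dimensional Bratu problem u'' + λ e^u = 0, u(0) = u(1) = 0, a finite-difference discretization turns the BVP into a nonlinear system F(U) = 0 to which schemes of this kind apply. The sketch below uses plain Newton with an assumed grid size and λ = 1 (not necessarily the paper's exact setup):

```python
import numpy as np

def bratu_newton(lam=1.0, n=50, tol=1e-12, max_iter=30):
    """Solve u'' + lam*exp(u) = 0, u(0) = u(1) = 0 on n interior points via Newton."""
    h = 1.0 / (n + 1)
    U = np.zeros(n)                                  # initial guess u = 0
    # tridiagonal second-difference operator D2
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    for _ in range(max_iter):
        Fvec = D2 @ U + lam * np.exp(U)              # residual of the discrete BVP
        Jmat = D2 + lam * np.diag(np.exp(U))         # Jacobian: D2 + lam*diag(exp(U))
        step = np.linalg.solve(Jmat, Fvec)
        U = U - step
        if np.linalg.norm(step) < tol:
            break
    return U

U = bratu_newton()   # lower branch solution for lam = 1 (below the critical lambda)
```

Replacing the Newton update with a sixth-order step reduces the iteration count on the same discretized system at the cost of extra function and Jacobian work per step.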
In Tables 1, 2, 4, 6 and 7, we report the iteration index (n) and η, computed with Mathematica (Version 9) using multiple-precision arithmetic with at least 300 digits of mantissa to minimize rounding errors. The variable η is the last computed value of ||U_{ζ+1} − U_ζ|| / ||U_ζ − U_{ζ−1}||^6. Furthermore, the radii of convergence and the central processing unit (CPU) time consumed by the distinct schemes are depicted in Tables 8 and 9, respectively. The notation α(±β) stands for α × 10^{±β}.
The numerical outcomes are depicted in Table 2. Table 2 reports the conduct of the different techniques on Fisher's equation (Example 2). Next, we choose a remarkable kinematic synthesis problem related to steering, as mentioned in [4,5]. The values of ψ_i and φ_i (in radians) are depicted in Table 3, and the behavior of the methods in Table 4. We chose the starting approximation u_0 = (0.7, 0.7, 0.7)^T, which converges to ξ = (0.9051567..., 0.6977417..., 0.6508335...)^T. Table 4 reports the conduct of the different techniques on the kinematic synthesis problem (Example 3). The next example is the Hammerstein integral equation taken from [10], pp. 19-20.
Surely, F'''(x) is not bounded on Ω in a neighborhood of the point x = 0. This means that the study prior to Section 5 is not applicable: there, hypotheses on the seventh-order derivative of F, or even higher, are needed to demonstrate the convergence of the scheme proposed in Section 3. With the present section, we demand hypotheses only on the first-order derivative. Further, we have

H = (80 + 16π + (π + 12 log 2)π²) / (2π + 1), b = 1, ψ_0(t) = ψ(t) = H t, ψ_1(t) = 1 + H t,

and the functions A and B as given in Application 1. The desired solution of Example 6 is x* = 1/π. The distinct radii, U_0, COC (ρ), and CPU time are stated in Tables 8 and 9.

Concluding Remarks
In this paper, a new family of sixth-order schemes was introduced to produce sequences converging to a solution of a nonlinear equation. In addition, we presented their convergence analysis, together with computable radii guaranteeing convergence for Banach space valued operators and error bounds based on Lipschitz constants. These schemes turn out to be superior to existing ones that utilize similar information. Numerical experiments verify the convergence criteria and also demonstrate the superiority of the new schemes.