Perturbed Newton Methods for Solving Nonlinear Equations with Applications

Abstract: Symmetries play an important role in the study of a plethora of physical phenomena, including the study of microworlds. These phenomena reduce to solving nonlinear equations in abstract spaces. Therefore, it is important to design iterative methods for approximating the solutions, since closed-form solutions can be found only in special cases. Several iterative methods have been developed whose convergence was established under very general conditions. Numerous applications are also provided to solve systems of nonlinear equations and differential equations appearing in the aforementioned areas. The ball convergence analysis is developed for the King-like and Jarratt-like families of methods to solve equations under the same set of conditions. Earlier studies used conditions on derivatives up to the fifth order to show the fourth order of convergence. Moreover, no error distances or results on the uniqueness of the solution were given. In contrast, we provide such results involving only the derivative that actually appears in these methods. Hence, we have expanded the usage of these methods. In the case of the Jarratt-like family of methods, our results also hold for Banach space-valued equations. Moreover, we compare the convergence balls and the dynamical features of the methods, both theoretically and in numerical experiments.


Introduction
Let M = R or C and T ⊆ M be a non-empty, convex and open set. We denote by B̄(q*, µ) the closure of the open ball B(q*, µ) with radius µ > 0 and center q* ∈ M, and by L(M, M) the set of bounded linear operators from M into M. Consider the nonlinear equation

F(q) = 0, (1)

where F : T ⊆ M → M is differentiable. Nonlinear equations of the type (1) are often used in science and other practical domains to tackle a variety of very challenging problems from diverse disciplines. Notice that solving these equations is a difficult process; closed-form solutions have been found only in a small number of situations. As a result, iterative procedures are often utilized to solve these equations. Developing a successful iterative approach for tackling Equation (1) is a great challenge. Newton's method is the traditional technique most commonly used to solve this problem. More results on advanced forms, in terms of efficiency and convergence order, of popular methods such as Newton's, Jarratt's, King's and Chebyshev's methods are presented in [1][2][3][4][5][6][7][8][9]. Chun [10] developed fourth-order classes of new modifications of King's family of methods [11] for solving nonlinear equations. These methods involve two function evaluations and one first-derivative evaluation per iteration. Additionally, as a special variant of King's method, the classical Traub-Ostrowski method was derived. Wang et al. [12] presented a sixth-order variant of Jarratt's method, which requires the evaluation of the function at an additional point in the iteration procedure of Jarratt's method [13]. A new family of fourth-order methods independent of the second derivative was introduced by Ghanbari in [14]. This family includes King's family and some other well-known methods as special cases. Grau-Sánchez and Gutiérrez [15], by using Obreshkov-like techniques, described two families of zero-finding iterative approaches.
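As a small, self-contained illustration of the classical approach, the sketch below applies Newton's iteration q_{n+1} = q_n − F(q_n)/F'(q_n) to a sample scalar equation of the form (1); the test problem q³ − 2 = 0 is our own choice, not one from the paper.

```python
def newton(F, dF, q0, tol=1e-12, max_iter=100):
    """Newton's iteration q_{n+1} = q_n - F(q_n)/F'(q_n) for scalar equations."""
    q = q0
    for _ in range(max_iter):
        step = F(q) / dF(q)
        q -= step
        if abs(step) < tol:
            break
    return q

# Sample problem (our choice): solve q^3 - 2 = 0, whose zero is 2**(1/3)
root = newton(lambda q: q**3 - 2.0, lambda q: 3.0 * q**2, 1.0)
```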
An efficient family of nonlinear system solvers was suggested by Cordero et al. [16] using a reduced composition technique on Newton's and Jarratt's methods. Sharma et al. [17] composed two weighted-Newton steps to construct an efficient, fourth-order weighted-Newton method to solve nonlinear systems. Sharma and Arora [18] introduced iterative methods of fourth and sixth convergence order for solving nonlinear systems. Two biparametric fourth-order families of predictor-corrector iterative solvers are given in [19]. Solaiman et al. [20] proposed a modified class of fourth- and eighth-convergence-order iterative methods based on King's method for nonlinear equations. In each iteration, three function evaluations are required for the fourth-order methods, and the eighth-order methods require four evaluations. Other results related to the convergence and dynamics of iterative formulas can be found in [3,[21][22][23][24][25][26][27][28][29]. This paper deals with a comparison of the convergence balls and the complex dynamical features of the King-like and Jarratt-like families of methods. These methods are as follows. King-like family of methods (KLFM):

y_n = q_n − αF'(q_n)^{-1}F(q_n),
q_{n+1} = y_n − A_n^{-1}B_n F'(q_n)^{-1}F(y_n), (2)

and Jarratt-like family of methods (JLFM):

y_n = q_n − (2α/3)F'(q_n)^{-1}F(q_n),
q_{n+1} = q_n − C_n^{-1}(I + γH_n)F'(q_n)^{-1}F(q_n), (3)

where α, β, γ, δ ∈ M, A_n = F(q_n) + (δ − 2)F(y_n), B_n = F(q_n) + δF(y_n), C_n = I + βH_n and H_n = H(y_n, q_n) = F'(q_n)^{-1}(F'(y_n) − F'(q_n)).
If α = 1, β = 3/2, γ = 3/4 and δ ∈ [0, 2], methods (2) and (3) reduce to the ones studied in [2,11,13], where it was shown, using the fifth derivative and Taylor expansions, that they are of fourth order. As such results require derivatives of higher orders, these methods are hard to apply, since their scope of application becomes small. Notice, however, that derivatives of order higher than one do not appear in these methods. Hence, the earlier results limit the applicability of these methods to equations containing functions that are at least five times differentiable, although the methods may converge anyway. To support our argument, we consider the following motivational function, where M = R and F is defined on T = [−1/2, 3/2] by

F(q) = q³ ln q² + q⁵ − q⁴ if q ≠ 0, and F(0) = 0.

Then, it is extremely important to emphasize that F''' is unbounded on T. As a consequence, the previous convergence findings for KLFM and JLFM, which are based on F⁽⁵⁾, are invalid in this case. Additionally, these convergence results supply negligible information regarding the bounds on the error |q_n − q*|, the convergence domain and the whereabouts of the solution q*. We need the ball analysis of iterative methods for determining the convergence radii, establishing error bounds and identifying the region in which q* is unique. The most significant benefit of the ball analysis is that it simplifies the very demanding task of selecting a starting point. With this perspective, we are encouraged to analyze and compare the balls of convergence of KLFM and JLFM under the same set of assumptions based on just the first derivative of F, which is the only derivative appearing in these methods. In addition to providing the error estimates |q_n − q*| and convergence radii, the convergence theorems that we established also offer a precise location of the solution q*. Notice also that local convergence results are important, since they demonstrate the degree of difficulty in choosing the initial points.
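The unboundedness can be checked numerically. Assuming the standard motivational function F(q) = q³ ln q² + q⁵ − q⁴ for q ≠ 0 with F(0) = 0 (a common choice in related work), a direct computation (ours) gives F'''(q) = 6 ln q² + 60q² − 24q + 22, which blows up as q → 0:

```python
import math

# Third derivative of F(q) = q^3 ln(q^2) + q^5 - q^4 (assumed form);
# the 6*ln(q^2) term diverges as q -> 0, so F''' is unbounded on T.
def d3F(q):
    return 6.0 * math.log(q**2) + 60.0 * q**2 - 24.0 * q + 22.0

# |F'''| grows without bound along q = 10^-1, 10^-2, ..., 10^-5
values = [abs(d3F(10.0 ** (-k))) for k in range(1, 6)]
```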
The dynamic comparison between these methods is also presented.
It is worth noticing that methods (2) and (3) are explicit. We refer the reader to [30][31][32] for important implicit methods. This type of method is out of the scope of this paper. However, we plan to study such methods along the same lines in our future research, since they provide better stability during data processing.
The rest of this paper is organized as follows: Section 2 discusses the key convergence theorems on the ball analysis of KLFM and JLFM. The comparison of attraction basins for these methods is the main content of Section 3. Section 4 is devoted to numerical applications of various kinds. Section 5 contains the final remarks of this research.

Ball Comparison
We first present the ball convergence analysis for KLFM. Let S = [0, ∞). Suppose the function ω₀(t) − 1 has a minimal zero ρ₀ ∈ S \ {0}, where ω₀ : S → S is non-decreasing and continuous. Set S₀ = [0, ρ₀). Suppose further that the associated functions have minimal zeros ρ₁ and ρ_p ∈ S₀ \ {0}, respectively, where ω₁ : S₀ → S is non-decreasing and continuous and p : S₀ → S. Then, the parameter d is defined, which shall be shown to be a convergence radius for KLFM. Notice that it is implied by (5) that the required estimates are satisfied if t ∈ S₂. The developed conditions (C) play a role in the ball convergence analysis of KLFM, where the functions ω₀, ω and ω₁ are as given previously and q* is a simple zero of F. Next, the main ball convergence result for KLFM is given utilizing the conditions (C).
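The minimal positive zeros appearing above (for example, ρ₀ as the minimal zero of ω₀(t) − 1) can be computed by simple bisection. The sketch below uses ω₀(t) = (e − 1)t, one of the choices made in the numerical examples later in the paper, purely for illustration:

```python
import math

def min_positive_zero(g, hi, tol=1e-12):
    """Bisection for the minimal zero of g on (0, hi], assuming g(0) < 0 < g(hi)
    and a single sign change (true for the non-decreasing functions used here)."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# rho_0: minimal zero of omega_0(t) - 1 with omega_0(t) = (e - 1) t
rho0 = min_positive_zero(lambda t: (math.e - 1.0) * t - 1.0, 2.0)
```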

Theorem 1.
Under the conditions (C) with the radius d defined above, choose a starting point q₀ ∈ B(q*, d) \ {q*}. Then, lim_{n→∞} q_n = q*, and q* is the only zero of F in the domain T₁ given in (C₄).
Next, we present the ball convergence analysis of JLFM in a similar way. This time, the corresponding functions h̄_k are defined analogously, and the convergence radius is given by d̄ = min{d̄_k}, where d̄_k are the minimal zeros of the functions h̄_k(t) − 1, respectively. The functions h̄_k are motivated by estimates obtained under the conditions (C) with d̄ in place of d. Hence, we arrive at the corresponding ball convergence result for JLFM:

Theorem 2.
Under the conditions (C) with the radius d̄, choose a starting point q₀ ∈ B(q*, d̄) \ {q*}. Then, the conclusions of Theorem 1 hold for JLFM, with d̄ and h̄_k replacing d and h_k, respectively.

Comparison of Attraction Basins
The dynamical qualities of KLFM and JLFM were compared by analyzing the attraction basins of these methods. For generating the basins, the methods were applied to various complex polynomials W_k(z), k = 1, 2, ..., 10, of degree greater than or equal to two. The region Z = [−4, 4] × [−4, 4] on C was selected with a grid of 400 × 400 points. Then, the methods were applied to find solutions of the considered polynomials W_k(z), with each point z₀ ∈ Z used as a starting point. If z₀ belonged to the set {z₀ ∈ C : z_j → z* as j → ∞}, then it lay in the basin of a zero z* of the considered polynomial, and we represented this z₀ with a distinct color related to z*. Light to dark shades were assigned to each z₀ according to the number of iterations required. Non-convergence zones are displayed in black. The process was terminated when the accuracy |z_j − z*| < 10⁻⁶ was reached or after 100 iterations. The diagrams were designed in MATLAB 2019a.
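The procedure above can be sketched in plain Python. The paper's experiments used MATLAB 2019a with a 400 × 400 grid and the KLFM/JLFM iterations; the illustration below instead uses a coarser grid and Newton's iteration on the stand-in polynomial z³ − 1 (both our own choices, since the specific polynomials W_k(z) are not reproduced here).

```python
def basins(f, df, region=(-2.0, 2.0, -2.0, 2.0), n=50, max_iter=100, tol=1e-6):
    """Label each grid point by the root its iteration converges to (-1 = no convergence)."""
    xmin, xmax, ymin, ymax = region
    roots, grid = [], {}
    for i in range(n):
        for j in range(n):
            z = complex(xmin + (xmax - xmin) * i / (n - 1),
                        ymin + (ymax - ymin) * j / (n - 1))
            label = -1  # black zone: no convergence within max_iter
            for _ in range(max_iter):
                dz = df(z)
                if dz == 0:
                    break
                z_next = z - f(z) / dz
                if abs(z_next - z) < tol:  # converged: match against known roots
                    for k, r in enumerate(roots):
                        if abs(z_next - r) < 1e-3:
                            label = k
                            break
                    else:
                        roots.append(z_next)
                        label = len(roots) - 1
                    break
                z = z_next
            grid[(i, j)] = label
    return grid, roots

# Newton's iteration on z^3 - 1: three basins, one per cube root of unity
grid, roots = basins(lambda z: z**3 - 1.0, lambda z: 3.0 * z**2)
```

Coloring the grid by label (light to dark by iteration count, black for −1) produces the kind of pictures compared in Figures 1-10.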
From Figures 1-10, we can deduce that KLFM has wider basins than JLFM. The black zones that appear in Figures 1, 5 and 8 occur only for JLFM, not for KLFM. Furthermore, KLFM behaves less chaotically than JLFM: its basins are bigger, and there are fewer changes of basin in each figure, which means that the fractal dimension of the basin boundaries is smaller for KLFM. Hence, the overall conclusion of this comparison is that the numerical stability of KLFM is higher than that of JLFM, which makes KLFM the preferred option for solving real problems. Moreover, regarding the patterns that appear in the basins of attraction, KLFM resembles third-order methods such as Halley's or Chebyshev's method: the immediate basin of attraction is big, and black zones are avoided. In JLFM, on the other hand, different structures appear; for example, in Figure 9 the roots are bounded by a small basin and then a really big one in red, while in Figures 1, 5 and 8 zones with no convergence appear, especially in Figure 5, where almost half of the plane is black. Finally, in Figures 4, 6, 7 and 9, compactification seems to appear around the roots, but one of the basins is much bigger than the rest; this behavior is really interesting and could be considered in future work.

Numerical Examples
A comparison of the radii of convergence balls is presented in this section. By applying the newly suggested theorems, the radii of KLFM and JLFM were obtained and compared for three numerical problems.

Example 1.
Consider the function F(q) = e^q − 1. Using this definition, we have F'(q) = e^q and the solution q* = 0. In order to verify the conditions (C), we note that F'(q*) = 1. Hence, we can choose ω₀(t) = (e − 1)t, ω(t) = e^{1/(e−1)}t and ω₁(t) = 2.
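The choice ω₀(t) = (e − 1)t can be sanity-checked numerically: for F(q) = e^q − 1 with q* = 0 and F'(q*) = 1, the center-Lipschitz condition |F'(q) − F'(q*)| ≤ ω₀(|q − q*|) should hold for all q with |q| ≤ 1. A quick grid check (our own test, not part of the paper):

```python
import math

# Center-Lipschitz check for F(q) = e^q - 1, q* = 0, on |q| <= 1:
# |F'(q) - F'(0)| = |e^q - 1| should be bounded by omega_0(|q|) = (e - 1)|q|
omega0 = lambda t: (math.e - 1.0) * t
holds = all(
    abs(math.exp(q) - 1.0) <= omega0(abs(q)) + 1e-12
    for q in (i / 100.0 for i in range(-100, 101))
)
```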
Using Theorems 1 and 2, the values of d (for δ = 2) and d̄ were calculated and are presented in Table 1.

Example 3.
Finally, we address the motivational problem given in the first section. Here, we have q* = 1. Additionally, ω₀(t) = ω(t) = 96.662907t and ω₁(t) = 2. We applied Theorems 1 and 2 to compute the values of d (for δ = 2) and d̄. These values are presented in Table 3.
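As a final check, Newton's iteration applied to the (assumed) motivational function F(q) = q³ ln q² + q⁵ − q⁴, F(0) = 0, converges to q* = 1 from a starting point inside the ball, even though F''' is unbounded on T:

```python
import math

# Motivational function (assumed form) and its first derivative
def F(q):
    return 0.0 if q == 0.0 else q**3 * math.log(q**2) + q**5 - q**4

def dF(q):
    if q == 0.0:
        return 0.0
    return 3.0 * q**2 * math.log(q**2) + 2.0 * q**2 + 5.0 * q**4 - 4.0 * q**3

q = 1.2  # starting point near the simple zero q* = 1
for _ in range(50):
    q -= F(q) / dF(q)
```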

Conclusions
We provided the ball analysis results for the KLFM and JLFM under the same set of conditions. To establish these results, only the first derivative and generalized Lipschitz conditions were employed. In this way, the usefulness of these methods was improved. In addition, the convergence ball and dynamics comparisons between these methods were presented. Based on the comparison results, we deduced that the stability of the KLFM is higher, and it is a much better method than the JLFM in terms of both convergence ball and dynamical quality. Notice that although method (3) (JLFM) was studied on M, the same proofs can be given for F : D ⊆ B₁ → B₂, where B₁ and B₂ are Banach spaces and D ≠ ∅ is open and convex. Hence, the earlier result also extends to Banach space-valued equations. Notice also that our methodology does not depend on the particular methods; therefore, it can be used to extend the usage of other methods involving inverses, including single-step and multistep methods. Our future research will include the study of implicit methods along the same lines, such as the ones in [30][31][32], since they provide better stability during data processing.