
# A Class of Sixth-Order Iterative Methods for Solving Nonlinear Systems: The Convergence and Fractals of Attractive Basins

by Xiaofeng Wang * and Wenshuo Li
School of Mathematical Sciences, Bohai University, Jinzhou 121000, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2024, 8(3), 133; https://doi.org/10.3390/fractalfract8030133
Submission received: 4 January 2024 / Revised: 12 February 2024 / Accepted: 22 February 2024 / Published: 26 February 2024

## Abstract

In this paper, a Newton-type iterative scheme for solving nonlinear systems is designed. In the process of proving the convergence order, we use the higher derivatives of the function and show that the convergence order of this iterative method is six. In order to avoid the influence of the existence of higher derivatives on the proof of convergence, we mainly discuss the convergence of this iterative method under weak conditions. In Banach space, the local convergence of the iterative scheme is established by using the $ω$-continuity condition of the first-order Fréchet derivative, and the application range of the iterative method is extended. In addition, we also give the radius of a convergence sphere and the uniqueness of its solution. Finally, the superiority of the new iterative method is illustrated by drawing attractive basins and comparing them with the average iterative times of other same-order iterative methods. Additionally, we utilize this iterative method to solve both nonlinear systems and nonlinear matrix sign functions. The applicability of this study is demonstrated by solving practical chemical problems.
MSC:
65H05; 65B99; 65D99; 90C30

## 1. Introduction

Nonlinear problems are pervasive in scientific and engineering computations; they encompass classical nonlinear finite element problems [1] and nonlinear programming problems in economics [2], as well as fundamental problems in physics, chemistry, and fluid mechanics [3,4,5]. Consequently, solving nonlinear systems of the form
$F ( s ) = 0$
has emerged as a pivotal aspect of tackling scientific computing challenges. However, owing to the intricacy inherent in nonlinear systems, it is often difficult to obtain analytical solutions directly, which makes numerical methods the key to solving such problems.
The iterative method stands out as the most frequently employed numerical approach for solving nonlinear systems. The Newton iterative method is the most classical iterative method, which has the following form [6]:
$x^{(k+1)} = x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),$
where $F'(x^{(k)})$ is the Jacobian matrix of the function $F$ at the kth iterate, and $F'(x^{(k)})^{-1}$ is the inverse of $F'(x^{(k)})$. Newton’s method can be used to find an approximate solution of the nonlinear system $F(s) = 0$ in both the real and complex fields. When the initial value $x^{(0)}$ is sufficiently close to the root of the function $F(s)$, Newton’s method exhibits a convergence order of at least 2. However, Newton’s method is categorized as a single-point iterative approach. To circumvent the sluggish convergence associated with single-point iterative methods when tackling complex nonlinear problems, researchers have shifted their focus toward multi-point iterative methods, which are characterized by enhanced computational efficiency and higher convergence orders.
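The Newton step above can be sketched in a few lines of code; the example system, initial guess, and tolerances below are illustrative choices, not taken from the paper:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: solve J(x) dx = F(x) rather than forming the inverse."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J(x), fx)
    return x

# Illustrative system: x^2 + y^2 = 1, x = y, with root (sqrt(2)/2, sqrt(2)/2)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```

Solving the linear system with the Jacobian at each step is the standard way to realize $F'(x^{(k)})^{-1}F(x^{(k)})$ in practice.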
The concept of multi-point iterative methods was first introduced by Traub in 1964 [7]. Since then, numerous scholars have dedicated their efforts to formulating iterative methods of varying orders and conducting convergence analyses. Cordero et al. introduced a class of optimal fourth-order iterative methods with weight functions and conducted a dynamic analysis of one of the iterative methods [8]. Argyros et al. presented the following sixth-order iterative method ($M 1$) [9]:
Recognizing that the format of the iterative method $M 1$ (1) is complex, this paper introduces a Newton-type iterative method with two free parameters. The format of the proposed method is as follows:
where $t = − F ′ ( x ( k ) ) − 1 · [ x ( k ) , y ( k ) ; F ] + I$ and $[ x , y ; F ] ( x − y ) = F ( x ) − F ( y ) , x , y ∈ R n$. When $P = 2$ and $Q = 1$, Iterative Method (2) can reach the sixth order. This will be proved in Theorem 1.
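A minimal scalar sketch of this scheme with $P = 2$ and $Q = 1$, where the divided difference reduces to $[x, y; f] = (f(x) - f(y))/(x - y)$ and $t$ becomes the scalar $1 - [x, y; f]/f'(x)$; the test function $f(x) = x^3 - 2$ is an illustrative assumption:

```python
def m4_step(f, df, x, P=2.0, Q=1.0):
    """One step of the proposed scheme in scalar form; P = 2, Q = 1 gives sixth order."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                # Newton predictor
    fy = f(y)
    dd = (fx - fy) / (x - y) if x != y else dfx     # divided difference [x, y; f]
    w = P * (1.0 - dd / dfx) + Q                    # scalar analogue of (P t + Q I)
    z = y - w * fy / dfx
    return z - w * f(z) / dfx

# Illustrative test: f(x) = x^3 - 2 with root 2^(1/3)
f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
x = 1.5
for _ in range(3):
    x = m4_step(f, df, x)
```

A few steps from $x^{(0)} = 1.5$ drive the residual to machine precision, consistent with a high order of convergence.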
To provide a more intuitive demonstration of the convergence of the proposed iterative method, fractal graphs were generated under various nonlinear functions. This approach has been employed in several studies. For instance, leveraging fractal theory, Sabban scrutinized the stability of a proposed iterative method through dynamic plane visualization [10]. Additionally, Wang et al. explored the parameters that ensure the stability of an iterative method by studying its fractal graphs under varying parameters [11]. Wang et al. also utilized dynamical plane plots to study a conformable vectorial Traub method [12].
The contributions of this paper are summarized as follows: (1) The proposal of a sixth-order Newton-type iterative method (30) for solving nonlinear systems accompanied by a proof of its convergence. (2) A discussion of the local convergence of the proposed sixth-order iterative method (30) in Banach space is provided, in which scenarios where equations in nonlinear systems may lack higher-order derivatives are considered. (3) An illustration of the advantages and applicability of the new iterative method (30) is delivered through comparisons of convergence rates and average iterative numbers with other iterative methods of the same order. This is achieved using fractal graphs and conducting numerical experiments.
The rest of this paper is arranged as follows. In Section 2, we outline the preparations for analyzing the convergence of Iterative Method (30). In Section 3, the conditions that need to be satisfied for the convergence order of Iterative Method (2) to reach six are given, and the local convergence of Iterative Method (30) is established in Banach space by using the $ω$-continuity condition on the first-order Fréchet derivative. The proposed analysis helps to avoid the requirement that higher derivatives of the function exist, and it extends the applicability of Iterative Method (30). In addition, we also give the distance information between the initial point and the exact solution that ensures the convergence of the iterative sequence, as well as the uniqueness of the solution. In Section 4, we plot the fractal graphs of Iterative Method (30) under nonlinear polynomials and compare the average number of iterations with other iterative methods of the same order. In Section 5, we employ Iterative Method (30) to solve nonlinear systems and nonlinear matrix sign functions. In addition, we apply the analysis in Section 3 to solve practical chemical problems to demonstrate its validity (see Section 5). Finally, we provide a concise summary and highlight our future research directions.

## 2. Preparation for Convergence Analysis

The convergence proof of the iterative method is provided by Theorem 1. The proof of Theorem 1 (refer to Section 3) reveals a requirement for the higher derivatives of the function, thereby implying a constraint on convergence. Specifically, if the higher derivatives of the function do not exist, the iterative method becomes inapplicable. Additionally, in establishing the convergence of iterative sequences, it is customary to assume that the initial point $s^{(0)}$ is sufficiently proximate to the exact solution $α^*$. However, determining the precise proximity required remains uncertain. To address this, numerous scholars have undertaken research on local convergence [13,14,15]. Similar to the "problem" functions in this literature, consider a function $F$ defined on $[-\frac{1}{2}, \frac{3}{2}]$ given as follows:
In examining this function, it was observed that its third derivative is unbounded within its domain. Consequently, Iterative Method (2) becomes unsuitable for solving this equation. To address this limitation, a local convergence analysis was conducted, whereby the aim was to circumvent the reliance on higher derivatives in the convergence study. This approach broadened the applicability of Iterative Method (2).
We will discuss the local convergence of the proposed iterative method in Banach spaces. Let $F : U ⊂ X 1 → X 2$ be a continuous Fréchet differentiable operator, where $X 1$ and $X 2$ are both Banach spaces, $U$ is an open set on $X 1$, and $U$ is convex. First, we constructed the space in which the conditions exist. For any point $α ∈ X 1$ and a given distance $ρ > 0$, let us say
$B ( α , ρ ) = { β ∈ X 1 : ∥ α − β ∥ < ρ } ,$
$B ¯ ( α , ρ ) = { β ∈ X 1 : ∥ α − β ∥ ≤ ρ } ,$
$L ( X 1 , X 2 ) = { G : X 1 → X 2 } ,$
where $G$ is a bounded and linear operator. Before giving the local convergence theorem, we need to assume that the following non-decreasing continuous functions exist:
On the interval $I 1 = [ 0 , ∞ )$, the function $D 1 : I 1 → I 1$ exists and satisfies condition $D 1 ( 0 ) = 0$.
Let $s m i n$ exist, where $s m i n$ is the least positive solution satisfying $D 1 ( s ) = 1$.
On the interval $I 2 = [ 0 , s m i n )$, the function $D 2 : I 2 → I 1$ exists and satisfies condition $D 2 ( 0 ) = 0$.
On the interval $I 2 = [ 0 , s m i n )$, the function $D 3 : I 2 → I 1$ exists and satisfies condition $D 3 ( 0 ) < 1$.
Given the existence of these three functions, for the sake of simplifying the expression in the proof process, the following functions were constructed:
We then defined the following functions on interval $I 2$:
$G_1(s) = \frac{\int_0^1 D_2((1-\theta)s)\,d\theta}{1 - D_1(s)},$
$M = \int_0^1 D_3(\theta \cdot G_1(\| x_0 - \alpha^* \|) \cdot \| x_0 - \alpha^* \|)\,d\theta,$
$G_2(s) = G_1(s)\left(1 + \left(2 + \frac{2M \cdot G_1(s)}{\int_0^1 D_3(\theta s)\,d\theta}\right) \cdot \frac{M}{1 - D_1(s)}\right),$
$N = \int_0^1 D_3(\theta \cdot G_2(\| x_0 - \alpha^* \|) \cdot \| x_0 - \alpha^* \|)\,d\theta,$
$G_3(s) = G_2(s)\left(1 + \left(2 + \frac{2M \cdot G_1(s)}{\int_0^1 D_3(\theta s)\,d\theta}\right) \cdot \frac{N}{1 - D_1(s)}\right),$
$U 1 ( s ) = G 1 ( s ) − 1 ,$
$U 2 ( s ) = G 2 ( s ) − 1 ,$
$U 3 ( s ) = G 3 ( s ) − 1 .$
It is easy to show that $U_i(0) < 0$ and that, as s approaches $s_{min}$, $U_i(s), i \in \{1, 2, 3\}$, approaches $+\infty$. Hence, the smallest zero $s_i$ of $U_i(s)$ exists and lies in $(0, s_{min})$; this follows from the intermediate value theorem. Let us say $r = \min\{s_1, s_2, s_3\}$; then, for any $s \in [0, r)$, we have
$0 ≤ G 1 ( s ) < 1 ,$
$0 ≤ G 2 ( s ) < 1 ,$
$0 ≤ G 3 ( s ) < 1 .$
Under these assumptions, the local convergence proof of the proposed iterative method is presented in Theorem 2.

## 3. Analysis of Convergence

In this section, we will explore the conditions under which the free parameters $P$ and $Q$ in Iterative Method (2) satisfy the convergence requirements, whereby it is ensured that the convergence order of Iterative Method (2) can attain six.
Theorem 1.
Consider function $F : D \subseteq R^n \to R^n$, which is sufficiently Fréchet differentiable in a neighborhood D of $\alpha^*$, with $F(\alpha^*) = 0$. Suppose that the Jacobian $F'(x)$ is continuous and non-singular at $\alpha^*$, and that $P = 2$ and $Q = 1$. Then, when the initial estimate $x^{(0)}$ is close enough to $\alpha^*$, the iterative sequence $\{x^{(k)}\}$ generated by (2) converges to $\alpha^*$, and the error equation is as follows:
$x ( k + 1 ) − α ∗ = ( 30 A 2 5 − 11 A 2 3 A 3 + A 2 A 3 2 ) ( e ( k ) ) 6 + O ( ( e ( k ) ) 7 ) ,$
where $e ( k ) = x ( k ) − α ∗$ and $A j = F ′ ( α ∗ ) − 1 F ( j ) ( α ∗ ) j ! ∈ L j ( R n × R n ) , j = 2 , 3 , ⋯$.
Proof.
In Iterative Method (2), the first-order divided difference operator appears. We can consider it a mapping $[ · , · ; F ] : D × D ⊂ R n × R n → L ( R n )$, where
$[ x + d , x ; F ] = ∫ 0 1 F ′ ( x + ρ d ) d ρ , ∀ ( x , d ) ∈ R n × R n .$
By expanding $F ′ ( x + ρ d )$ at x by a Taylor series, we obtain
$∫ 0 1 F ′ ( x + ρ d ) d ρ = F ′ ( x ) + 1 2 F ″ ( x ) d + 1 6 F ‴ ( x ) d 2 + O ( d 3 ) .$
Let $α ∗$ be the root of the nonlinear system $F ( s ) = 0$. If $F ( x ( k ) )$ is expanded by a Taylor series at $α ∗$, then
$F ( x ( k ) ) = F ′ ( α ∗ ) [ e ( k ) + A 2 ( e ( k ) ) 2 + A 3 ( e ( k ) ) 3 + A 4 ( e ( k ) ) 4 + A 5 ( e ( k ) ) 5 + O ( ( e ( k ) ) 6 ) ] ,$
where $e ( k ) = x ( k ) − α ∗$ and $A j = F ′ ( α ∗ ) − 1 F ( j ) ( α ∗ ) j ! ∈ L j ( R n × R n ) , j = 2 , 3 , ⋯$.
By differentiating Equation (14), we can obtain
$F ′ ( x ( k ) ) = F ′ ( α ∗ ) [ I + 2 A 2 e ( k ) + 3 A 3 ( e ( k ) ) 2 + 4 A 4 ( e ( k ) ) 3 + 5 A 5 ( e ( k ) ) 4 + O ( ( e ( k ) ) 5 ) ] ,$
$F ″ ( x ( k ) ) = F ′ ( α ∗ ) [ 2 A 2 + 6 A 3 e ( k ) + 12 A 4 ( e ( k ) ) 2 + 20 A 5 ( e ( k ) ) 3 + O ( ( e ( k ) ) 4 ) ] ,$
$F ‴ ( x ( k ) ) = F ′ ( α ∗ ) [ 6 A 3 + 24 A 4 e ( k ) + 60 A 5 ( e ( k ) ) 2 + O ( ( e ( k ) ) 3 ) ] .$
Considering that $F'(\alpha^*)$ is invertible, we can write $\Gamma^{-1} = F'(\alpha^*)^{-1}$. Then, we have
$F ′ ( x ( k ) ) − 1 = [ I + D 2 e ( k ) + D 3 ( e ( k ) ) 2 + D 4 ( e ( k ) ) 3 + D 5 ( e ( k ) ) 4 ] Γ − 1 + O ( ( e ( k ) ) 5 ) .$
According to $F ′ ( x ( k ) ) − 1 F ′ ( x ( k ) ) = I$, we can determine $D 2 − D 5$ as follows:
$D 2 = − 2 A 2 ;$
$D 3 = 4 A 2 2 − 3 A 3 ;$
$D 4 = − 8 A 2 3 + 12 A 2 A 3 − 4 A 4$;
$D 5 = 16 A 2 4 + 9 A 3 2 + 16 A 2 A 4 − 36 A 2 2 A 3 − 5 A 5 .$
Therefore,
$F'(x^{(k)})^{-1}F(x^{(k)}) = e^{(k)} - A_2(e^{(k)})^2 + (2A_2^2 - 2A_3)(e^{(k)})^3 + (-4A_2^3 + 7A_2A_3 - 3A_4)(e^{(k)})^4 + O((e^{(k)})^5).$
From the first step in Iterative Method (2), the following equation is established:
$y^{(k)} - \alpha^* = x^{(k)} - \alpha^* - F'(x^{(k)})^{-1}F(x^{(k)}) = A_2(e^{(k)})^2 + (2A_3 - 2A_2^2)(e^{(k)})^3 + (4A_2^3 - 7A_2A_3 + 3A_4)(e^{(k)})^4 + O((e^{(k)})^5),$
and
$F(y^{(k)}) = F'(\alpha^*)[A_2(e^{(k)})^2 - 2(A_2^2 - A_3)(e^{(k)})^3 + (5A_2^3 - 7A_2A_3 + 3A_4)(e^{(k)})^4] + O((e^{(k)})^5).$
Therefore, by combining Expression (18) and Expression (21), we can obtain
$F ′ ( x ( k ) ) − 1 F ( y ( k ) ) = A 2 ( e ( k ) ) 2 + ( − 4 A 2 2 + 2 A 3 ) ( e ( k ) ) 3 + ( 13 A 2 3 − 14 A 2 A 3 + 3 A 4 ) ( e ( k ) ) 4 + O ( ( e ( k ) ) 5 ) .$
By $x + d = y$ and $d = y - x = -F'(x^{(k)})^{-1}F(x^{(k)})$, we then have
$[ x ( k ) , y ( k ) ; F ] = F ′ ( α ∗ ) [ I + A 2 e ( k ) + ( A 2 2 + A 3 ) ( e ( k ) ) 2 + ( A 4 + 3 A 2 A 3 − 2 A 2 3 ) ( e ( k ) ) 3 + ( 4 A 2 4 − 8 A 2 2 A 3 + 4 A 2 A 4 + 2 A 3 2 + A 5 ) ( e ( k ) ) 4 + O ( ( e ( k ) ) 5 ) ] .$
Using the result of Equation (18), we have
$F ′ ( x ( k ) ) − 1 · [ x ( k ) , y ( k ) ; F ] = I − A 2 e ( k ) + ( 3 A 2 2 − 2 A 3 ) ( e ( k ) ) 2 + ( − 8 A 2 3 + 10 A 2 A 3 − 3 A 4 ) ( e ( k ) ) 3 + ( 20 A 2 4 − 37 A 2 2 A 3 + 8 A 3 2 + 14 A 2 A 4 − 4 A 5 ) ( e ( k ) ) 4 + O ( ( e ( k ) ) 5 ) .$
Next, let us consider the concrete form of t. By Equations (18) and (23), we have
$t = − F ′ ( x ( k ) ) − 1 · [ x ( k ) , y ( k ) ; F ] + I = A 2 e ( k ) + ( 2 A 3 − 3 A 2 2 ) ( e ( k ) ) 2 + ( 8 A 2 3 − 10 A 2 A 3 + 3 A 4 ) ( e ( k ) ) 3 + ( 4 A 5 − 14 A 2 A 4 − 8 A 3 2 + 37 A 2 2 A 3 − 20 A 2 4 ) ( e ( k ) ) 4 + O ( ( e ( k ) ) 5 ) .$
As such, we also have
$z^{(k)} - \alpha^* = y^{(k)} - \alpha^* - (Pt + QI)\cdot F'(x^{(k)})^{-1}F(y^{(k)}) = (A_2 - A_2Q)(e^{(k)})^2 + (-2A_3(-1+Q) + A_2^2(-2+4Q-P))(e^{(k)})^3 + (-3A_4(-1+Q) + A_2A_3(-7+14Q-4P) + A_2^3(4-13Q+7P))(e^{(k)})^4 + O((e^{(k)})^5),$
and
$F(z^{(k)}) = F'(\alpha^*)[(A_2 - A_2Q)(e^{(k)})^2 + (-2A_3(-1+Q) + A_2^2(-2+4Q-P))(e^{(k)})^3 + (-3A_4(-1+Q) + A_2A_3(-7+14Q-4P) + A_2^3(5-15Q+Q^2+7P))(e^{(k)})^4] + O((e^{(k)})^5).$
Combined with Equation (18), the following formula is established:
$F'(x^{(k)})^{-1}F(z^{(k)}) = (A_2 - A_2Q)(e^{(k)})^2 + (-4A_2^2 + 2A_3 + 6A_2^2Q - 2A_3Q - A_2^2P)(e^{(k)})^3 + (13A_2^3 - 14A_2A_3 + 3A_4 - 27A_2^3Q + 21A_2A_3Q - 3A_4Q + A_2^3Q^2 + 9A_2^3P - 4A_2A_3P)(e^{(k)})^4 + O((e^{(k)})^5).$
At last, we have
$x^{(k+1)} - \alpha^* = z^{(k)} - \alpha^* - (Pt + QI)\cdot F'(x^{(k)})^{-1}F(z^{(k)}) = A_2(-1+Q)^2(e^{(k)})^2 + 2(-1+Q)(A_3(-1+Q) + A_2^2(1-3Q+P))(e^{(k)})^3 + E_4(e^{(k)})^4 + E_5(e^{(k)})^5 + E_6(e^{(k)})^6 + O((e^{(k)})^7),$
where
$E_4 = 3A_4(-1+Q)^2 - A_2A_3(-1+Q)(-7+21Q-8P) + A_2^3(4 + 27Q^2 - Q^3 + 14P + P^2 - 2Q(13+9P))$,
$E_5 = -2(-1+Q)\big(-2A_5(-1+Q) + A_3^2(-3+9Q-4P) - 2A_2A_4(-1+Q)(-5+15Q-6P) + 2A_2^2A_3(10 + 66Q^2 - 2Q^3 + 38P + 3P^2 - Q(64+9P)) + A_2^4(10Q^3 - Q^2(104+3P) + 2Q(38+53P) - 2(4+33P+6P^2))\big)$ and
$E_6 = (-1+Q)\big(5A_6(-1+Q) + A_3A_4(17 - 5(Q+24P)) + A_2^2A_4(28 + 186Q^2 - 6Q^3 + 110P + 9P^2 - 2Q(90+71P)) + A_2^3A_3(-52 + 52Q^3 + Q^4 - 450P - 89P^2 - 18P^2(36+P) + Q(480+724P)) + A_2^5(16 - 62Q^3 + 258P + 88P^2 + Q^2(362+39P) - Q(208+506P+3P^2)) - A_2A_5(-1+Q)(-13+39Q-16P) + A_3^2(33 + 210Q^2 - 4Q^3 + 136P + 12P^2 - 2Q(103+88P))\big)$.
If we choose $P = 2$ and $Q = 1$, then we have
$x ( k + 1 ) − α ∗ = ( 30 A 2 5 − 11 A 2 3 A 3 + A 2 A 3 2 ) ( e ( k ) ) 6 + O ( ( e ( k ) ) 7 ) .$
This indicates that the iterative method in this format achieves sixth-order convergence; we denote it as Iterative Method (30). □
Subsequently, we will conduct an analysis of its convergence and delve into the approximate problem of its locally unique solution.
Theorem 2.
Consider the Fréchet differentiable operator $F : U \subset X_1 \to X_2$ between Banach spaces. Let $\alpha^*$ be a root of $F(s) = 0$ at which $F'(\alpha^*)$ is invertible. Suppose the following conditions apply to $F$:
$F ( α ∗ ) = 0 , F ′ ( α ∗ ) − 1 ∈ L ( X 2 , X 1 ) .$
$∥ F ′ ( α ∗ ) − 1 ( F ′ ( u ) − F ′ ( α ∗ ) ) ∥ ≤ D 1 ( ∥ u − α ∗ ∥ ) , ∀ u ∈ U .$
$\| F'(\alpha^*)^{-1}(F'(u) - F'(v)) \| \le D_2(\| u - v \|), \; \forall u, v \in U_0 := U \cap B(\alpha^*, s_{min}).$
$\| F'(\alpha^*)^{-1}F'(u) \| \le D_3(\| u - \alpha^* \|), \; \forall u \in U_0.$
$B ¯ ( α ∗ , r ) ⊆ U .$
Based on the above five conditions, $\forall s_0 \in B(\alpha^*, r)$, the iterative sequence $\{s_n\}_{n \ge 0}$ generated by Iterative Method (30) remains in $B(\alpha^*, r)$. As n approaches $+\infty$, the distance between $s_n$ and $\alpha^*$ approaches 0; that is, $\{s_n\}_{n \ge 0}$ is a convergent sequence. In addition, for $n \ge 0$, the following formulas are also true:
$∥ y n − α ∗ ∥ ≤ G 1 ( ∥ x n − α ∗ ∥ ) · ∥ x n − α ∗ ∥ ≤ ∥ x n − α ∗ ∥ < r ,$
$∥ z n − α ∗ ∥ ≤ G 2 ( ∥ x n − α ∗ ∥ ) · ∥ x n − α ∗ ∥ ≤ ∥ x n − α ∗ ∥ < r ,$
$∥ x n + 1 − α ∗ ∥ ≤ G 3 ( ∥ x n − α ∗ ∥ ) · ∥ x n − α ∗ ∥ ≤ ∥ x n − α ∗ ∥ < r .$
Finally, if there exists an $E ≥ r$ satisfying $∫ 0 1 D 1 ( θ · E ) d θ < 1$, then the root on $U ′ = U ⋂ B ¯ ( α ∗ , E )$ satisfying $F ( s ) = 0$ is unique.
Proof.
Let $η ∈ B ( α ∗ , r )$, then use Equation (32) to obtain
$∥ F ′ ( α ∗ ) − 1 ( F ′ ( η ) − F ′ ( α ∗ ) ) ∥ ≤ D 1 ( ∥ η − α ∗ ∥ ) < D 1 ( r ) < 1 .$
Through the simple transformation of the above equation, we can directly obtain
$\| F'(\eta)^{-1}F'(\alpha^*) \| \le \frac{1}{1 - D_1(\| \eta - \alpha^* \|)} < \frac{1}{1 - D_1(r)}.$
When $n = 0$ in Iterative Method (30), it is obtained by the first step in (30) as follows:
$y 0 − α ∗ = x 0 − α ∗ − F ′ ( x 0 ) − 1 F ( x 0 ) = − [ F ′ ( x 0 ) − 1 F ′ ( α ∗ ) ] [ ∫ 0 1 F ′ ( α ∗ ) − 1 ( F ′ ( α ∗ + θ ( x 0 − α ∗ ) ) − F ′ ( x 0 ) ) ( x 0 − α ∗ ) d θ ] .$
Its norm can be obtained as follows:
$\| y_0 - \alpha^* \| \le \| F'(x_0)^{-1}F'(\alpha^*) \| \cdot \left\| \int_0^1 F'(\alpha^*)^{-1}\big(F'(\alpha^* + \theta(x_0 - \alpha^*)) - F'(x_0)\big)(x_0 - \alpha^*)\,d\theta \right\| \le \frac{\int_0^1 D_2((1-\theta) \cdot \| x_0 - \alpha^* \|)\,d\theta \cdot \| x_0 - \alpha^* \|}{1 - D_1(\| x_0 - \alpha^* \|)} = G_1(\| x_0 - \alpha^* \|) \cdot \| x_0 - \alpha^* \| < \| x_0 - \alpha^* \| < r.$
Notice that
$∥ F ′ ( x 0 ) − 1 F ( y 0 ) ∥ ≤ ∥ F ′ ( α ∗ ) − 1 F ( y 0 ) ∥ · ∥ F ′ ( x 0 ) − 1 F ′ ( α ∗ ) ∥ ,$
and
$∥ F ′ ( α ∗ ) − 1 F ( y 0 ) ∥ = ∥ F ′ ( α ∗ ) − 1 ∫ 0 1 F ′ ( α ∗ + θ ( y 0 − α ∗ ) ) ( y 0 − α ∗ ) d θ ∥ ≤ ∫ 0 1 D 3 ( θ · ∥ y 0 − α ∗ ∥ ) d θ · ∥ y 0 − α ∗ ∥ ≤ ∫ 0 1 D 3 ( θ · G 1 ( ∥ x 0 − α ∗ ∥ ) · ∥ x 0 − α ∗ ∥ ) d θ · ∥ y 0 − α ∗ ∥ = M · ∥ y 0 − α ∗ ∥ .$
Combined with the result of Equation (37), the following can be obtained:
$\| F'(x_0)^{-1}F(y_0) \| \le \frac{M \cdot \| y_0 - \alpha^* \|}{1 - D_1(\| x_0 - \alpha^* \|)}.$
According to
$[ x 0 , y 0 ; F ] = F ( y 0 ) − F ( x 0 ) y 0 − x 0 ,$
we can see
$F ′ ( x 0 ) − 1 [ x 0 , y 0 ; F ] = F ′ ( x 0 ) − 1 ( F ( y 0 ) − F ( x 0 ) ) y 0 − x 0 = F ′ ( x 0 ) − 1 F ( y 0 ) − F ′ ( x 0 ) − 1 F ( x 0 ) y 0 − x 0 .$
Further, we can obtain the following:
$2F'(x_0)^{-1}[x_0, y_0; F] - 3I = 2\,\frac{F'(x_0)^{-1}F(y_0) - F'(x_0)^{-1}F(x_0)}{y_0 - x_0} - 3I.$
Through the second step of Iteration Method (30), we can know
$z 0 − α ∗ = y 0 − α ∗ − ( − 2 F ′ ( x 0 ) − 1 [ x 0 , y 0 ; F ] + 3 I ) F ′ ( x 0 ) − 1 F ( y 0 ) = y 0 − α ∗ + ( 2 F ′ ( x 0 ) − 1 [ x 0 , y 0 ; F ] − 3 I ) F ′ ( x 0 ) − 1 F ( y 0 ) .$
Then,
$\| z_0 - \alpha^* \| \le \| y_0 - \alpha^* \| + \| 2F'(x_0)^{-1}[x_0, y_0; F] - 3I \| \cdot \| F'(x_0)^{-1}F(y_0) \| \le \| y_0 - \alpha^* \| + \frac{2(\| F'(x_0)^{-1}F(y_0) \| + \| F'(x_0)^{-1}F(x_0) \|)}{\| y_0 - x_0 \|} \cdot \| F'(x_0)^{-1}F(y_0) \| = \| y_0 - \alpha^* \| + \frac{2(\| F'(x_0)^{-1}F(y_0) \| + \| F'(x_0)^{-1}F(x_0) \|)}{\| F'(x_0)^{-1}F(x_0) \|} \cdot \| F'(x_0)^{-1}F(y_0) \| \le \| y_0 - \alpha^* \| + \left( \frac{2M \cdot \| y_0 - \alpha^* \|}{\int_0^1 D_3(\theta \cdot \| x_0 - \alpha^* \|)\,d\theta \cdot \| x_0 - \alpha^* \|} + 2 \right) \cdot \frac{M \cdot \| y_0 - \alpha^* \|}{1 - D_1(\| x_0 - \alpha^* \|)} \le \left[ 1 + \left( 2 + \frac{2M \cdot G_1(\| x_0 - \alpha^* \|)}{\int_0^1 D_3(\theta \cdot \| x_0 - \alpha^* \|)\,d\theta} \right) \cdot \frac{M}{1 - D_1(\| x_0 - \alpha^* \|)} \right] \cdot G_1(\| x_0 - \alpha^* \|) \cdot \| x_0 - \alpha^* \| = G_2(\| x_0 - \alpha^* \|) \cdot \| x_0 - \alpha^* \| < \| x_0 - \alpha^* \| < r.$
Similar to Equation (40), we have the following:
$\| F'(x_0)^{-1}F(z_0) \| \le \| F'(\alpha^*)^{-1}F(z_0) \| \cdot \| F'(x_0)^{-1}F'(\alpha^*) \|,$
and
$∥ F ′ ( α ∗ ) − 1 F ( z 0 ) ∥ = ∥ F ′ ( α ∗ ) − 1 ∫ 0 1 F ′ ( α ∗ + θ ( z 0 − α ∗ ) ) ( z 0 − α ∗ ) d θ ∥ ≤ ∫ 0 1 D 3 ( θ · ∥ z 0 − α ∗ ∥ ) d θ · ∥ z 0 − α ∗ ∥ ≤ ∫ 0 1 D 3 ( θ · G 2 ( ∥ x 0 − α ∗ ∥ ) · ∥ x 0 − α ∗ ∥ ) d θ · ∥ z 0 − α ∗ ∥ = N · ∥ z 0 − α ∗ ∥ .$
As such, we have
$\| F'(x_0)^{-1}F(z_0) \| \le \frac{N \cdot \| z_0 - \alpha^* \|}{1 - D_1(\| x_0 - \alpha^* \|)}.$
For the third step in Iterative Method (30), the following can be obtained:
$x_1 - \alpha^* = z_0 - \alpha^* + \big(2F'(x_0)^{-1}[x_0, y_0; F] - 3I\big)F'(x_0)^{-1}F(z_0),$
and
$\| x_1 - \alpha^* \| \le \| z_0 - \alpha^* \| + \| 2F'(x_0)^{-1}[x_0, y_0; F] - 3I \| \cdot \| F'(x_0)^{-1}F(z_0) \| \le \| z_0 - \alpha^* \| + \frac{2(\| F'(x_0)^{-1}F(y_0) \| + \| F'(x_0)^{-1}F(x_0) \|)}{\| F'(x_0)^{-1}F(x_0) \|} \cdot \| F'(x_0)^{-1}F(z_0) \| \le \| z_0 - \alpha^* \| + \left( \frac{2M \cdot \| y_0 - \alpha^* \|}{\int_0^1 D_3(\theta \cdot \| x_0 - \alpha^* \|)\,d\theta \cdot \| x_0 - \alpha^* \|} + 2 \right) \cdot \frac{N \cdot \| z_0 - \alpha^* \|}{1 - D_1(\| x_0 - \alpha^* \|)} \le \left[ 1 + \left( 2 + \frac{2M \cdot G_1(\| x_0 - \alpha^* \|)}{\int_0^1 D_3(\theta \cdot \| x_0 - \alpha^* \|)\,d\theta} \right) \cdot \frac{N}{1 - D_1(\| x_0 - \alpha^* \|)} \right] \cdot G_2(\| x_0 - \alpha^* \|) \cdot \| x_0 - \alpha^* \| = G_3(\| x_0 - \alpha^* \|) \cdot \| x_0 - \alpha^* \| < \| x_0 - \alpha^* \| < r.$
As such, this establishes the case $n = 0$. By applying mathematical induction, we can prove that $\| x_{n+1} - \alpha^* \| \le G_3(r) \cdot \| x_n - \alpha^* \| < r$, so $\{s_n\}_{n \ge 0} \in B(\alpha^*, r)$. We can also prove that, as n approaches $+\infty$, the distance between $s_n$ and $\alpha^*$ approaches 0, so $\{s_n\}_{n \ge 0}$ is a convergent sequence.
Suppose that there is a point $ξ ∈ U ′$ and $ξ ≠ α ∗$ that satisfies $F ( ξ ) = 0$, then we can construct the function $T = ∫ 0 1 F ′ ( α ∗ + θ ( ξ − α ∗ ) ) d θ$. Through Equation (32) and $∫ 0 1 D 1 ( θ · E ) d θ < 1$, we can see that the following formula is true:
$∥ F ′ ( α ∗ ) − 1 ( T − F ′ ( α ∗ ) ) ∥ ≤ ∫ 0 1 D 1 ( θ · ∥ ξ − α ∗ ∥ ) d θ ≤ ∫ 0 1 D 1 ( θ · E ) d θ < 1 .$
That means that $T^{-1} \in L(X_2, X_1)$. As such, from $0 = F(\alpha^*) - F(\xi) = T(\alpha^* - \xi)$, we can obtain $\alpha^* = \xi$. Thus, uniqueness is obtained. □

## 4. Fractals of Attractive Basins

In this section, we will generate fractal plots for Iterative Method (30) under various nonlinear functions to visually illustrate its convergence. Additionally, we will depict the fractal graphs of three other sixth-order iterative methods for solving nonlinear equations. The average numbers of iterations over the five experiments were calculated and are presented in Figure 1 and Figure 2, as well as in Table 1. The different colors in Figure 1 and Figure 2 denote the attractive basins of the different roots. The maximum number of iterations per initial point was set to 25. If the number of iterations exceeded 25 or the iteration sequence failed to converge, the point is colored black.
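As a sketch of how such basin data can be computed, the snippet below assumes the illustrative polynomial $p(z) = z^3 - 1$ (whose roots are the cube roots of unity), the 25-iteration cap mentioned above, and the scalar form of the proposed scheme with $P = 2$, $Q = 1$; a plotting routine would simply color each grid point by the returned root index:

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of unity
p = lambda z: z**3 - 1
dp = lambda z: 3 * z**2

def m4_step(z):
    """Scalar form of the proposed scheme with P = 2, Q = 1."""
    fz, dz = p(z), dp(z)
    y = z - fz / dz
    fy = p(y)
    dd = (fz - fy) / (z - y) if z != y else dz   # divided difference [z, y; p]
    w = 2 * (1 - dd / dz) + 1                    # (P t + Q I) in scalar form
    znext = y - w * fy / dz
    return znext - w * p(znext) / dz

def basin(z, max_iter=25, tol=1e-8):
    """Return (root index, iterations used); (-1, ...) marks non-convergence (black)."""
    for n in range(max_iter):
        for i, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return i, n
        try:
            z = m4_step(z)
        except ZeroDivisionError:
            return -1, n
    return -1, max_iter
```

Averaging the second component of `basin` over a grid of starting points yields the kind of mean iteration counts reported in Table 1.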
Let us consider the following sixth-order iterative methods: Iterative Method $M 2$, which was proposed by Wang [16]; and $M 3$, which was proposed by Behl et al. [17]. As for Iterative Method (30), we will label it as $M 4$. Specifically, $M 2$ and $M 3$ take the following forms, respectively:
$M 2$ [16]:
$M 3$ [17]:
Upon examining Figure 1 and Figure 2, it is evident that the convergence of Iterative Methods $M 2$ and $M 4$ surpasses that of Iterative Methods $M 1$ and $M 3$. The data presented in Table 1 further support this observation, with Iterative Method $M 4$ demonstrating the lowest average number of iterations over the five iterations.
Building on this conclusion, the subsequent section will involve numerical experiments to compare the performance of Iterative Method $M 2$ (52) and Iterative Method $M 4$ (30).

## 5. Numerical Experiments and Practical Applications

In this section, we will utilize Iterative Method (30) to solve nonlinear systems and matrix sign functions. We will then compare its performance with that of Iterative Method $M 2$ (52), whereby the advantages of Iterative Method (30) are emphasized. In order to demonstrate the advantages of Iterative Method (30) in terms of computational accuracy, we will also compare it with the following three other sixth-order iterative methods: $M 5$ (54) [18], $M 6$ (55) [19], and $M 7$ (56) [20]. Method $M 5$ is as follows:
Method $M 6$ is as follows:
Method $M 7$ is as follows:
Additionally, we will apply Iterative Method (30) to address practical chemistry problems, thus showcasing its applicability.

#### 5.1. Solving Nonlinear Systems

We will address the following three nonlinear systems (where k represents the number of iterations; the computations were carried out with 2048 digits of precision, and the experimental results are presented in Table 2, Table 3 and Table 4):
Problem 1.
During the iteration, we chose the initial value to be $x ( 0 ) = ( 1.5 , 1.5 , 1.5 , 1.5 ) T$. The solution to the system is $( 2.576 × 10 − 1 , 2.576 × 10 − 1 , 2.576 × 10 − 1 , 2.576 × 10 − 1 ) T$. The stop criterion is $∥ x ( k ) − x ( k − 1 ) ∥ < 10 − 100$.
Problem 2.
During the iteration, we chose the initial value to be $x ( 0 ) = ( 1.1 , 1.1 ) T$. The solution to the system is $( 9.286 × 10 − 1 , 9.286 × 10 − 1 ) T$. The stop criterion is $∥ x ( k ) − x ( k − 1 ) ∥ < 10 − 100$.
Problem 3.
Here, m is the number of equations. During the iteration, we chose the initial value to be $x ( 0 ) = ( 0.1 , ⋯ , 0.1 ) T$. When $m = 6$, the solution to the system is $( 3.131 × 10 − 1 , ⋯ , 3.131 × 10 − 1 ) T$. The stop criterion is $∥ x ( k ) − x ( k − 1 ) ∥ < 10 − 100$.
The experimental results from Table 2, Table 3 and Table 4 show that the convergence accuracy of Iterative Method $M 7$ is inferior to that of the other four iterative methods. Therefore, we will contrast Iterative Methods $M 2, M 4, M 5,$ and $M 6$ in the next subsection by solving a nonlinear matrix sign function.

#### 5.2. Solving the Matrix Sign Function

In this section, we will, respectively, apply Iterative Methods $M 2, M 4, M 5,$ and $M 6$ to solve the nonlinear matrix sign function $X^2 - I = 0$, where I represents the identity matrix. When solving this function, the corresponding iterative formats are
$X_{n+1} = 8192X_n^{19}[-I + 4X_n^2 - 27X_n^4 + 120X_n^6 - 306X_n^8 - 2174X_n^{12} + 4104X_n^{14} - 7421X_n^{16} + 11068X_n^{18} + 1737X_n^{20}]^{-1}$,
$X n + 1 = 1024 X n 13 [ I − 13 X n 2 + 85 X n 4 − 305 X n 6 + 659 X n 8 − 951 X n 10 + 1303 X n 12 + 245 X n 14 ] − 1$,
$X n + 1 = − ( I + X n 2 ) 3 [ 2 X n 3 ( 3 + 8 X n 2 + 9 X n 4 ) ] − 1$,
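For orientation, the classical (quadratically convergent) Newton iteration for the matrix sign function, $X_{k+1} = \frac{1}{2}(X_k + X_k^{-1})$, can be sketched for $2 \times 2$ matrices as follows; this is a baseline illustration, not one of the sixth-order schemes above, and the test matrix is hypothetical:

```python
def inv2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sign_newton(a, iters=60):
    """Matrix sign via the classical Newton iteration X <- (X + X^{-1}) / 2."""
    x = [row[:] for row in a]
    for _ in range(iters):
        xi = inv2(x)
        x = [[(x[i][j] + xi[i][j]) / 2.0 for j in range(2)] for i in range(2)]
    return x

# Hypothetical test matrix with eigenvalues 4 and -9, so sign(A) has eigenvalues 1 and -1
S = sign_newton([[4.0, 1.0], [0.0, -9.0]])
S2 = [[sum(S[i][k] * S[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
# S2 should be numerically the identity, i.e., S solves X^2 - I = 0
```

The sixth-order iterations above play the same role but take far fewer steps per digit of accuracy.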
The experimental results are displayed in Table 5, where n represents the number of iterations, t represents the CPU running time, and $n c$ indicates no convergence. The termination criterion was set to $∥ X n 2 − I ∥ 2 ⩽ 10 − 100$.
According to the experimental results in Table 5, it can be seen that Iterative Method $M 4$ is superior to $M 2, M 5,$ and $M 6$ in solving the nonlinear matrix sign function.

#### 5.3. Practical Applications

The gas equation-of-state problem stands out as one of the most crucial challenges in practical chemical computation. In this context, we will apply Theorem 2 from Section 3 to this problem. To begin with, let us consider the following van der Waals equation:
$F ( V ) = ( p + a n 2 V 2 ) ( V − n b ) − n R T = 0 ,$
where a = 4.17 atm·L²/mol² and b = 0.0371 L/mol. The volume of the container is then sought for a pressure of 945.36 kPa (9.33 atm) and a temperature of 300.2 K with 2 mol of nitrogen. Finally, by substituting the data into (60), we obtain
$F(V) = 9.33V^3 - 49.9611V^2 + 16.68V - 1.23766 = 0.$
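As a quick check, the cubic can be rebuilt from the stated data and solved with Newton's method; the gas constant $R = 0.08206$ L·atm/(mol·K) is an assumption of this sketch:

```python
# Physical data from the text: p = 9.33 atm, T = 300.2 K, n = 2 mol,
# a = 4.17, b = 0.0371; R = 0.08206 L*atm/(mol*K) is assumed.
p, T, n = 9.33, 300.2, 2.0
a, b, R = 4.17, 0.0371, 0.08206

# Multiply (p + a n^2 / V^2)(V - n b) - n R T = 0 through by V^2 to get a cubic in V.
c3 = p
c2 = -(p * n * b + n * R * T)
c1 = a * n**2
c0 = -a * n**3 * b
f = lambda V: ((c3 * V + c2) * V + c1) * V + c0
fp = lambda V: (3.0 * c3 * V + 2.0 * c2) * V + c1

V = 0.1  # initial guess near the physically relevant root
for _ in range(50):
    V = V - f(V) / fp(V)
```

With these values, the quadratic coefficient evaluates to about $-49.9611$ and the iteration settles at $V \approx 0.109171$ L.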
In the context of a practical problem, the solution to this nonlinear equation can only be found on $R$. If we further qualify $U = [ 0 , 0.2 ]$, then $α ∗ = 0.109171$, where $α ∗$ is the result of preserving 6 significant digits for the exact solution. As such, by using Theorem 2, we find
$D 1 = 15.8667 s , D 2 = 16.3673 s , D 3 = 2 .$
According to Theorem 2, we can finally obtain
$s m i n = 0.0630251 , s 1 = 0.0415794 , s 2 = 0.0138653 , s 3 = 0.00391481 .$
As such, $r = \min\{s_1, s_2, s_3\} = s_3 = 0.00391481$.
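These radii can be reproduced numerically from the definitions of $G_1$, $G_2$, and $G_3$ in Section 2; the bisection sketch below uses the constants stated above, with $M = N = 2$ because $D_3$ is constant:

```python
def bisect(g, a, b, steps=200):
    """Bisection for g(a) < 0 < g(b)."""
    ga = g(a)
    for _ in range(steps):
        m = 0.5 * (a + b)
        gm = g(m)
        if ga * gm <= 0.0:
            b = m
        else:
            a, ga = m, gm
    return 0.5 * (a + b)

L1, L2 = 15.8667, 16.3673          # D1(s) = L1*s, D2(s) = L2*s, D3(s) = 2
M = N = 2.0                        # D3 is constant, so M = N = 2
iD2 = lambda s: L2 * s / 2.0       # integral of D2((1 - theta) s) over [0, 1]
iD3 = lambda s: 2.0                # integral of D3(theta s) over [0, 1]
G1 = lambda s: iD2(s) / (1.0 - L1 * s)
G2 = lambda s: G1(s) * (1.0 + (2.0 + 2.0 * M * G1(s) / iD3(s)) * M / (1.0 - L1 * s))
G3 = lambda s: G2(s) * (1.0 + (2.0 + 2.0 * M * G1(s) / iD3(s)) * N / (1.0 - L1 * s))

s_min = 1.0 / L1
hi = s_min * (1.0 - 1e-12)
s1 = bisect(lambda s: G1(s) - 1.0, 1e-12, hi)
s2 = bisect(lambda s: G2(s) - 1.0, 1e-12, hi)
s3 = bisect(lambda s: G3(s) - 1.0, 1e-12, hi)
r = min(s1, s2, s3)
```

Each $G_i - 1$ is negative at 0 and blows up as $s \to s_{min}$, so bisection recovers the smallest zeros $s_1 > s_2 > s_3$ and hence $r = s_3$.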
In the chemical production process of converting nitrogen–hydrogen feed into ammonia, if the air pressure is 250 atm and the temperature is 500 degrees Celsius, then the following equation can be derived:
$F ( s ) = s 4 − 7.79075 s 3 + 14.7445 s 2 + 2.511 s − 1.674 .$
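A quick bisection check of where this quartic vanishes on the physically meaningful interval $[0, 1]$ (the bracketing endpoints give $F(0) < 0 < F(1)$):

```python
def bisect(g, a, b, steps=100):
    """Bisection for g(a) < 0 < g(b)."""
    ga = g(a)
    for _ in range(steps):
        m = 0.5 * (a + b)
        gm = g(m)
        if ga * gm <= 0.0:
            b = m
        else:
            a, ga = m, gm
    return 0.5 * (a + b)

# Quartic from the ammonia problem, evaluated by Horner's scheme
f = lambda s: (((s - 7.79075) * s + 14.7445) * s + 2.511) * s - 1.674
root = bisect(f, 0.0, 1.0)   # f(0) = -1.674 < 0 < f(1) = 8.79075
```

The quartic changes sign exactly once on $[0, 1]$, so bisection isolates that root to machine precision.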
In a practical context, by limiting the range of solutions to $[0, 1]$, we know that the root of Equation (61) is $\alpha^* \approx 0.27776$. Then, we have
$D 1 = 2.59403 s , D 2 = 3.28225 s , D 3 = 2 .$
According to Theorem 2, we can finally obtain
$s m i n = 0.385501 , s 1 = 0.236119 , s 2 = 0.0737151 , s 3 = 0.02006 .$
As such, $r = m i n { s 1 , s 2 , s 3 } = 0.02006$.

## 6. Conclusions and Discussion

In this paper, a class of iterative methods for solving nonlinear systems (1) was presented. Through the proof results in Theorem 1, we established that, when $P = 2$ and $Q = 1$, Iterative Method (1) can reach a sixth-order convergence, which is Iterative Method (30).
During the proof of Theorem 1, we noticed that this convergence process has a high limitation on the existence of the higher derivatives of functions. But not all functions have higher derivatives. Therefore, we discussed the local convergence of Iterative Method (30) in Section 3. By using the $ω$-continuity condition on the first-order Fréchet derivative in Banach space, we established the conditions for a local convergence of Iterative Method (30), thus avoiding the discussion of the higher-order derivative of the function.
Finally, we drew the attractive basins of Iterative Method (30) and compared its average number of iterations over the five experiments with those of the known sixth-order iterative methods $M 1$, $M 2$, and $M 3$ for solving nonlinear systems; this showed that the new Iterative Method $M 4$ (30) is superior to the other three iterative methods in terms of convergence and average number of iterations. In the experiments where nonlinear systems were solved, it was also shown that the convergence accuracy of Iterative Method (30) is better than that of Iterative Method $M 7$. Furthermore, when employing Iterative Methods $M 2, M 4, M 5,$ and $M 6$ to solve the nonlinear matrix sign function, it is evident that $M 4$ exhibits broader applicability. Through leveraging the local convergence established in Theorem 2 for Iterative Method $M 4$, we proceeded to address practical chemical problems. Through these experiments, we have objectively demonstrated the plausibility of our proposed iterative method.
Building upon the foundation laid in this paper, our future work will focus on proposing iterative methods of diverse forms with higher convergence orders, analyzing their local and semi-local convergence, and employing fractal theory to study their stability.

## Author Contributions

Conceptualization, X.W. and W.L.; methodology, X.W.; software, X.W.; validation, X.W. and W.L.; formal analysis, X.W.; investigation, W.L.; resources, W.L.; data curation, W.L.; writing—original draft preparation, W.L.; writing—review and editing, W.L.; visualization, W.L.; supervision, W.L.; project administration, X.W.; funding acquisition, X.W. All authors have read and agreed to the published version of the manuscript.

## Funding

This research was supported by the National Natural Science Foundation of China (no. 61976027), the National Natural Science Foundation of Liaoning Province (nos. 2022-MS-371 and 2023-MS-296), the Educational Commission Foundation of Liaoning Province of China (nos. LJKMZ20221492 and LJKMZ20221498), the Key Project of Bohai University (no. 0522xn078), and the Graduate Student Innovation Foundation Project of Bohai University (no. YJC2023-010).



## Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

## Acknowledgments

The authors are most grateful to the anonymous referees for their constructive comments and helpful suggestions, which have greatly improved the quality of this paper.

## Conflicts of Interest

The authors declare no conflicts of interest.

## References

1. Roman, J.G.; Adam, W.; Karol, M. An adaptive Dynamic Relaxation method for solving nonlinear finite element problems. Application to brain shift estimation. Int. J. Numer. Methods Biomed. 2011, 27, 173–185. [Google Scholar]
2. Zhang, Y.; Kou, X.; Song, Z.; Fan, Y.; Mohammed, U.; Vishal, J. Research on logistics management layout optimization and real-time application based on nonlinear programming. Nonlinear Dyn. 2021, 10, 526–534. [Google Scholar] [CrossRef]
3. Hajime, K.; Hajime, K. Estimation method for inverse problems with linear forward operator and its application to magnetization estimation from magnetic force microscopy images using deep learning. Inverse Probl. Sci. Eng. 2021, 29, 2131–2164. [Google Scholar]
4. Muhammad, A.; Iitaf, H.; Masood, A. A finite-difference and Haar wavelets hybrid collocation technique for non-linear inverse Cauchy problems. Inverse Probl. Sci. Eng. 2022, 30, 121–140. [Google Scholar]
5. Anakhaev, K.N. The Problem of Nonlinear Cantilever Bending in Elementary Functions. Mech. Solids 2022, 57, 997–1005. [Google Scholar] [CrossRef]
6. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
7. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
8. Cordero, A.; Miguel, A.L.S.; Torregrosa, J.R. Dynamics and stability on a family of fourth-order optimal iterative methods. Algorithms 2022, 15, 387. [Google Scholar] [CrossRef]
9. Argyros, I.K.; Regmi, S.; John, J.A.; Jayaraman, J. Extended Convergence for Two Sixth Order Methods under the Same Weak Conditions. Foundations 2023, 3, 127–139. [Google Scholar] [CrossRef]
10. Sabban, A. Novel Meta-Fractal Wearable Sensors and Antennas for Medical, Communication, 5G, and IoT Applications. Fractal Fract. 2024, 8, 100. [Google Scholar] [CrossRef]
11. Wang, X.; Chen, X.; Li, W. Dynamical Behavior Analysis of an Eighth-Order Sharma’s Method. Int. J. Biomath. 2023, 2350068. [Google Scholar] [CrossRef]
12. Wang, X.; Xu, J. Conformable vector Traub’s method for solving nonlinear systems. Numer. Algorithms 2024. [Google Scholar] [CrossRef]
13. Argyros, I.K.; Magreñán, Á.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 2016, 71, 1–23. [Google Scholar] [CrossRef]
14. Saeed, K.M.; Remesh, K.; George, S.; Padikkal, J.; Argyros, I.K. Local Convergence of Traub’s Method and Its Extensions. Fractal Fract. 2023, 7, 98. [Google Scholar] [CrossRef]
15. Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, Á.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorithms 2017, 74, 371–391. [Google Scholar] [CrossRef]
16. Wang, X.; Li, Y. An Efficient Sixth-Order Newton-Type Method for Solving Nonlinear Systems. Algorithms 2017, 10, 45. [Google Scholar] [CrossRef]
17. Behl, R.; Argyros, I.K.; Machado, J.A.T. Ball Comparison between Three Sixth Order Methods for Banach Space Valued Operators. Mathematics 2020, 8, 667. [Google Scholar] [CrossRef]
18. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Pseudocomposition: A technique to design predictor-corrector methods for systems of nonlinear equations. Appl. Math. Comput. 2012, 218, 11496–11504. [Google Scholar] [CrossRef]
19. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: New York, NY, USA, 2013. [Google Scholar]
20. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 263–272. [Google Scholar] [CrossRef]
Figure 1. The attractive basins of Iterative Methods $M_1$–$M_4$ under the nonlinear function $f(x) = x^2 - 1$.
Figure 2. The attractive basins of Iterative Methods $M_1$–$M_4$ under the nonlinear function $f(x) = x^3 - 1$.
Table 1. Comparison of the average number of iterations after five iterations of Iterative Methods $M_1$–$M_4$.

| | $M_1$ | $M_2$ | $M_3$ | $M_4$ |
|---|---|---|---|---|
| $f(x) = x^2 - 1$ | 5.4303 | 3.9349 | 13.243 | 3.9349 |
| $f(x) = x^3 - 1$ | 11.827 | 7.3072 | 10.359 | 7.2266 |
Table 2. Experimental results of Problem 1.

| Iterative Method | k | $\|x^{(k)} - x^{(k-1)}\|$ | $\|F(x^{(k)})\|$ |
|---|---|---|---|
| $M_2$ | 4 | $9.193 \times 10^{-158}$ | $7.704 \times 10^{-946}$ |
| $M_4$ | 4 | $8.793 \times 10^{-240}$ | $9.423 \times 10^{-1439}$ |
| $M_5$ | 4 | $1.350 \times 10^{-260}$ | $1.409 \times 10^{-1239}$ |
| $M_6$ | 4 | $2.427 \times 10^{-232}$ | $6.265 \times 10^{-1394}$ |
| $M_7$ | 5 | $7.121 \times 10^{-144}$ | $9.423 \times 10^{-575}$ |
Table 3. Experimental results of Problem 2.

| Iterative Method | k | $\|x^{(k)} - x^{(k-1)}\|$ | $\|F(x^{(k)})\|$ |
|---|---|---|---|
| $M_2$ | 5 | $3.985 \times 10^{-562}$ | $1.000 \times 10^{-2048}$ |
| $M_4$ | 5 | $1.018 \times 10^{-568}$ | $1.000 \times 10^{-2048}$ |
| $M_5$ | 4 | $5.015 \times 10^{-138}$ | $1.409 \times 10^{-823}$ |
| $M_6$ | 5 | $9.737 \times 10^{-569}$ | $1.000 \times 10^{-2048}$ |
| $M_7$ | 4 | $1.483 \times 10^{-107}$ | $2.358 \times 10^{-639}$ |
Table 4. Experimental results of Problem 3.

| Iterative Method | k | $\|x^{(k)} - x^{(k-1)}\|$ | $\|F(x^{(k)})\|$ |
|---|---|---|---|
| $M_2$ | 5 | $2.417 \times 10^{-297}$ | $1.111 \times 10^{-1779}$ |
| $M_4$ | 5 | $6.170 \times 10^{-464}$ | $6.000 \times 10^{-2048}$ |
| $M_5$ | 5 | $4.248 \times 10^{-465}$ | $3.000 \times 10^{-2048}$ |
| $M_6$ | 6 | $9.128 \times 10^{-409}$ | $3.000 \times 10^{-2048}$ |
| $M_7$ | 5 | $8.615 \times 10^{-290}$ | $9.599 \times 10^{-1734}$ |
Table 5. Experimental results of solving the matrix sign function (nc denotes no convergence).

| Iterative Method | Matrices | n | t |
|---|---|---|---|
| $M_2$ | 1 | nc | nc |
| | 4 | nc | nc |
| | 10 | nc | nc |
| | 15 | nc | nc |
| $M_4$ | 1 | 1 | 0.008228 |
| | 4 | 18 | 0.040754 |
| | 10 | 22 | 0.051711 |
| | 15 | 36 | 0.089662 |
| $M_5$ | 1 | nc | nc |
| | 4 | nc | nc |
| | 10 | nc | nc |
| | 15 | nc | nc |
| $M_6$ | 1 | 1 | 0.035088 |
| | 4 | nc | nc |
| | 10 | nc | nc |
| | 15 | nc | nc |
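The matrix sign experiments in Table 5 can be reproduced in spirit with the classical (second-order) Newton iteration for $\mathrm{sign}(A)$, $X_{k+1} = (X_k + X_k^{-1})/2$; this is a sketch of the experimental setup only, not the sixth-order schemes $M_2$, $M_4$, $M_5$, $M_6$ from the paper, and the test matrix below is an arbitrary example.

```python
import numpy as np

def matrix_sign_newton(A, tol=1e-12, max_iter=100):
    """Approximate sign(A) by the classical Newton iteration
    X_{k+1} = (X_k + X_k^{-1}) / 2, starting from X_0 = A.

    Returns (S, n), where n is the iteration count (mimicking the n column
    of Table 5); raises if the iteration does not converge ('nc').
    """
    X = np.asarray(A, dtype=float)
    for n in range(1, max_iter + 1):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, ord="fro") < tol:
            return X_new, n
        X = X_new
    raise RuntimeError("nc: no convergence within max_iter iterations")

# A diagonal matrix with known sign pattern, chosen only for illustration
S, n = matrix_sign_newton(np.diag([3.0, -2.0, 5.0]))
```

For a diagonal matrix the iteration acts entrywise, so the result is the diagonal matrix of the signs of the eigenvalues, reached in a handful of steps.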
