Article

New Iterative Schemes to Solve Nonlinear Systems with Symmetric Basins of Attraction

1 Multidisciplinary Mathematics Institute, Universitat Politècnica de València, Camino de Vera s/n, 46022 València, Spain
2 Centre for Advanced Studies in Pure and Applied Mathematics, Bahauddin Zakariya University, Multan 60800, Pakistan
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(8), 1742; https://doi.org/10.3390/sym14081742
Submission received: 31 May 2022 / Revised: 29 July 2022 / Accepted: 16 August 2022 / Published: 22 August 2022

Abstract:
We present a new Jarratt-type family of optimal fourth- and sixth-order iterative methods for solving nonlinear equations, along with their convergence properties. The schemes are extended to systems of nonlinear equations with the same order of convergence. The stability of the vectorial schemes is analyzed, showing their wide and symmetric sets of converging initial guesses. To illustrate the applicability of our methods in the multidimensional case, we choose some real-world problems, such as kinematic synthesis, boundary value problems, and Fisher's and Hammerstein's integral equations. Numerical comparisons are given to show the performance of our schemes compared with existing efficient methods.

1. Introduction

Constructing a family of iterative schemes to solve nonlinear problems is a task widely required in science and engineering, and it has led to the design of many numerical procedures. In this manuscript, we analyze the problem of solving a system of nonlinear equations
$$F(X) = 0,$$
defined by $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$.
Our interest is to estimate a zero of a multivariate vector-valued function, that is, an approximate value of a root $X$ of the nonlinear system $F(X) = 0$. Such problems are frequently called root-finding problems. One of the best-known schemes to solve nonlinear systems is the second-order Newton–Raphson method, expressed as
$$X^{(n+1)} = X^{(n)} - [F'(X^{(n)})]^{-1} F(X^{(n)}),$$
where $F'(X^{(n)})$ is the Jacobian matrix of $F$ evaluated at $X^{(n)}$. In terms of computational cost, Newton's scheme needs, per iteration, one functional evaluation of $F$ and one evaluation of the Jacobian $F'$ for finding the solution of a nonlinear system.
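As a baseline for the methods discussed in this manuscript, Newton's scheme for systems can be sketched in a few lines; the test system, tolerances and starting point below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson for F(X) = 0: X <- X - [F'(X)]^{-1} F(X).
    The linear system is solved instead of inverting the Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J(x), fx)
    return x

# Hypothetical 2x2 system: x1^2 + x2^2 - 4 = 0, x1*x2 - 1 = 0
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0]*x[1] - 1.0])
J = lambda x: np.array([[2*x[0], 2*x[1]], [x[1], x[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

Note that each iteration uses exactly one evaluation of $F$ and one of the Jacobian, as stated above.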
With the aim of improving the convergence order of Newton's scheme, various third- and fourth-order methods have been proposed. Over the past years, researchers have offered point-to-point and multipoint techniques for solving scalar nonlinear equations, but not every scheme for scalar equations is extendable to systems of nonlinear equations. This demands extra effort to design schemes that can be applied to scalar problems as well as to systems of nonlinear equations. In 1960, Ostrowski [1] presented a fourth-order Steffensen-type method for systems of equations. In 1966, Jarratt [2] gave a derivative-based fourth-order multipoint numerical method for solving scalar nonlinear equations. In 2007, Chun [3] developed a fourth-order method as a new variant of King's fourth-order family of schemes for the scalar and multidimensional cases. Similarly, in 2012, a pseudo-composition technique was introduced in [4]; Soleymani et al. [5], Junjua et al. [6] and Xiao [7] presented new families of Jarratt-type fourth-order schemes. Matrix weight functions were used by Artidiello et al. in [8] to design efficient classes of procedures to solve nonlinear systems. Recently, Behl et al. [9] developed sixth-order iterative methods for solving nonlinear systems. As there are few high-order methods for solving systems of nonlinear equations [10,11,12,13,14], in comparison with their scalar partners, there is a need to develop methods that are extendable to the vectorial case and are simple, cost-effective and of high order.

Qualitative Study of Vectorial Iterative Methods

The stability of a scalar iterative method is usually studied through the rational function obtained when the method is applied to a polynomial [15,16,17,18]. Often, vectorial schemes are reduced to the scalar case in order to study their dependence on the initial estimations. However, by using the procedure defined in [19], and afterwards used in [20], any multidimensional method can be formulated as a discrete real vectorial dynamical system, and therefore its qualitative performance can be studied.
From a rational function $T : \mathbb{R}^n \to \mathbb{R}^n$, obtained by applying the vectorial iterative method to a polynomial system $s(x) = 0$, a discrete vectorial dynamical system is defined in $\mathbb{R}^n$. In what follows, we introduce some dynamical concepts that can be considered a direct extension of those used in complex dynamics [21].
Let us consider a fixed point $\bar{x} \in \mathbb{R}^n$ of the operator $T$; if $s(\bar{x}) \neq 0$, it is named a strange fixed point. Moreover, the orbit of $\bar{x} \in \mathbb{R}^n$ can be defined as $O(\bar{x}) = \{\bar{x}, T(\bar{x}), \ldots, T^m(\bar{x}), \ldots\}$. A point $x^* \in \mathbb{R}^n$ is a $k$-periodic point if $T^k(x^*) = x^*$ and $T^p(x^*) \neq x^*$ for $p = 1, 2, \ldots, k-1$.
Moreover, the stability of a point of $\mathbb{R}^n$ can be studied by using the result appearing in [22]. It establishes that a fixed or periodic point is attracting if all the eigenvalues of the Jacobian matrix evaluated at the point are lower than one in absolute value. If all these eigenvalues are greater than one in absolute value, the point is classified as repelling; in other cases, it is unstable.
Moreover, a fixed point $x^*$ is said to be hyperbolic if all the eigenvalues $\lambda_j$ of $T'(x^*)$ satisfy $|\lambda_j| \neq 1$. Specifically, if there exists an eigenvalue $\lambda_i$ such that $|\lambda_i| < 1$ and another one $\lambda_j$ satisfying $|\lambda_j| > 1$, then it is called a saddle point.
Let us consider a periodic or fixed point x * that is an attractor of operator T; its basin of attraction B ( x * ) is defined as
$$\mathcal{B}(x^*) = \left\{x_0 \in \mathbb{R}^n : T^m(x_0) \to x^*, \ m \to \infty\right\}.$$
Critical points of a rational function $T$ are also an important tool in the study of an iterative scheme: they are defined as the points satisfying $\det(T'(x)) = 0$. Moreover, a critical point $c$ such that $s(c) \neq 0$ is called a free critical point. This information can also be obtained from the points making the eigenvalues of $T'$ null, or even those making the Jacobian matrix null. If a fixed point is also critical, it is superattracting; critical points that are not related to the roots of $s(x)$ are the free critical points. Julia and Fatou [21] stated that there is at least one critical point embedded in each basin of attraction. Then, by calculating the orbits of the free critical points, all the attracting elements can be found.
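The eigenvalue criterion above can be checked numerically: evaluate the Jacobian of the vectorial operator $T$ at a fixed point and inspect the moduli of its eigenvalues. The sketch below uses a forward-difference Jacobian and, as a sample operator, Newton's map on the separated-variable system $s(x) = (x_1^2 - 1, x_2^2 - 1)$ used later in Section 4 (the helper names and step size are our own choices):

```python
import numpy as np

def numerical_jacobian(T, x, h=1e-7):
    """Forward-difference approximation of T'(x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    Tx = np.asarray(T(x))
    Jm = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        Jm[:, j] = (np.asarray(T(x + e)) - Tx) / h
    return Jm

def classify_fixed_point(T, x):
    """Attracting if all |lambda| < 1, repelling if all |lambda| > 1,
    saddle/unstable otherwise."""
    mods = np.abs(np.linalg.eigvals(numerical_jacobian(T, x)))
    if np.all(mods < 1):
        return "attracting"
    if np.all(mods > 1):
        return "repelling"
    return "saddle/unstable"

# Newton's operator on s(x) = (x1^2 - 1, x2^2 - 1), componentwise x -> (x^2 + 1)/(2x)
T = lambda x: np.array([(x[0]**2 + 1)/(2*x[0]), (x[1]**2 + 1)/(2*x[1])])
kind = classify_fixed_point(T, np.array([1.0, 1.0]))
```

At the root $(1,1)$ both eigenvalues vanish, so the point is classified as attracting (it is, in fact, superattracting).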
In this paper, we give a family of fourth-order methods for solving systems of equations that is optimal in the scalar case. We further extend this family to a sixth-order scheme. The rest of the manuscript is organized as follows. Section 2 is devoted to the design of the optimal fourth-order scheme and its convergence analysis; some special cases of the family are also defined. In Section 3, a sixth-order scheme is constructed, its convergence order is proven, and some special cases of the family are presented. In Section 4, we perform a stability analysis of the particular cases extracted from the previous classes, focused on the fixed and critical points of the related vectorial rational functions. In Section 5, we analyze numerical examples confirming the theoretical results and comparing the numerical performance of the new methods with existing ones. In Section 6, some concluding remarks are given.

2. Development of Fourth-Order Scheme

Let us first define a new family of optimal fourth-order schemes for solving nonlinear equations $f(x) = 0$, where $f : I \subseteq \mathbb{R} \to \mathbb{R}$ is defined on an open interval $I$. The iterative expression of this class is given by:
$$y_n = x_n - \alpha \frac{f(x_n)}{f'(x_n)}, \quad n \geq 0, \qquad x_{n+1} = y_n - H(u_n)\,\frac{f(x_n)}{f'(y_n)},$$
where the weight function $H : \mathbb{R} \to \mathbb{R}$ is continuous and differentiable in a neighborhood of 1, $u_n = \frac{f'(y_n)}{f'(x_n)}$, and $\alpha$ is a free disposable parameter. It is noteworthy that this scheme is a two-point optimal (in the sense of the Kung–Traub conjecture) fourth-order family for computing simple roots of scalar problems.
Let us extend this class of iterative schemes to estimate the solution of a system of nonlinear equations $F(X) = 0$ defined by $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$. For this purpose, we write (1) in vectorial form as:
$$Y^{(n)} = X^{(n)} - \alpha\,[F'(X^{(n)})]^{-1} F(X^{(n)}), \qquad X^{(n+1)} = Y^{(n)} - H(U_n)\,[F'(Y^{(n)})]^{-1} F(X^{(n)}),$$
where $H : M_{n \times n}(\mathbb{R}) \to \mathcal{L}(\mathbb{R}^n)$ and
$$U_n = [F'(X^{(n)})]^{-1} F'(Y^{(n)}).$$
Since the variable of the weight function is a matrix, some notation must be introduced in order to state the conditions that guarantee the order of convergence. In this context, the weight function $H$ satisfies:
(i) $H'(u)(v) = H_1 u v$, where $H' : M_{n \times n}(\mathbb{R}) \to \mathcal{L}(M_{n \times n}(\mathbb{R}))$ is the first derivative of $H$, $\mathcal{L}(M_{n \times n}(\mathbb{R}))$ denotes the space of linear mappings from $M_{n \times n}(\mathbb{R})$ to itself, and $H_1 \in \mathbb{R}$;
(ii) $H''(u, v)(w) = H_2 u v w$, where $H'' : M_{n \times n}(\mathbb{R}) \times M_{n \times n}(\mathbb{R}) \to \mathcal{L}(M_{n \times n}(\mathbb{R}))$ is the second derivative of $H$ and $H_2 \in \mathbb{R}$.
Then, the Taylor expansion of $H$ around the identity matrix $I$, up to second order, gives
$$H(U_n) \approx H(I) + H_1 (U_n - I) + \frac{1}{2} H_2 (U_n - I)^2.$$
To analyze the convergence of class (2), we state and prove the next result.
Theorem 1.
Let $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a sufficiently differentiable function on a closed neighborhood $D$ of its root $W$. Let $F'(X)$ be continuous and non-singular at $W$. In addition, let the initial guess $X^{(0)}$ be close enough to $W$. Then, for $\alpha = \frac{2}{3}$, class (2) has fourth-order convergence under the conditions:
$$H(I) = \frac{1}{3} I, \quad H_1 = -\frac{5}{12}, \quad H_2 = \frac{3}{4}, \quad |H_3| < \infty.$$
Proof. 
We denote by
$$E_n = X^{(n)} - W$$
the error at the $n$-th step, where $W$ is the exact root. By using the Taylor expansions of $F(X^{(n)})$ and $F'(X^{(n)})$ around $W$, we have:
$$F(X^{(n)}) = F'(W)\left(E_n + C_2 E_n^2 + C_3 E_n^3 + C_4 E_n^4\right) + O(E_n^5)$$
and
$$F'(X^{(n)}) = F'(W)\left(I + 2C_2 E_n + 3C_3 E_n^2 + 4C_4 E_n^3 + 5C_5 E_n^4\right) + O(E_n^5),$$
respectively, where
$$C_j = \frac{1}{j!}\,[F'(W)]^{-1} F^{(j)}(W), \quad j = 2, 3, \ldots$$
By using Equations (3) and (4) in scheme (2), we have:
$$Y^{(n)} - W = \frac{1}{3}E_n + \frac{2}{3}C_2 E_n^2 + \left(\frac{4}{3}C_3 - \frac{4}{3}C_2^2\right)E_n^3 + \left(2C_4 - \frac{14}{3}C_2 C_3 + \frac{8}{3}C_2^3\right)E_n^4 + O(E_n^5).$$
In a similar way, by using the Taylor expansions of $F(Y^{(n)})$ and $F'(Y^{(n)})$ about $W$, the matrix $U_n = [F'(X^{(n)})]^{-1} F'(Y^{(n)})$ is given by:
$$U_n = I - \frac{4}{3}C_2 E_n + \left(4C_2^2 - \frac{8}{3}C_3\right)E_n^2 + \left(\frac{40}{3}C_2 C_3 - \frac{32}{3}C_2^3 - \frac{104}{27}C_4\right)E_n^3 + \left(\frac{484}{27}C_2 C_4 - \frac{148}{3}C_3 C_2^2 + \frac{80}{3}C_2^4 + \frac{32}{3}C_3^2 - \frac{400}{81}C_5\right)E_n^4 + O(E_n^5),$$
and,
$$H(U_n) = H(I) - \frac{4}{3}H_1 C_2 E_n + \left(4H_1 C_2^2 - \frac{8}{3}H_1 C_3 + \frac{8}{9}H_2 C_2^2\right)E_n^2 + \left(\frac{40}{3}H_1 C_2 C_3 - \frac{32}{3}H_1 C_2^3 - \frac{104}{27}H_1 C_4 + \frac{32}{9}H_2 C_2 C_3 - \frac{16}{3}H_2 C_2^3 - \frac{32}{81}H_3 C_2^3\right)E_n^3 + \left(\frac{484}{27}H_1 C_2 C_4 - \frac{148}{3}H_1 C_3 C_2^2 + \frac{80}{3}H_1 C_2^4 + \frac{32}{3}H_1 C_3^2 - \frac{400}{81}H_1 C_5 + \frac{416}{81}H_2 C_2 C_4 - \frac{256}{9}H_2 C_3 C_2^2 + \frac{200}{9}H_2 C_2^4 + \frac{32}{9}H_2 C_3^2 + \frac{32}{9}H_3 C_2^4 - \frac{64}{27}H_3 C_2^2 C_3 + \frac{32}{243}H_4 C_2^4\right)E_n^4 + O(E_n^5).$$
Finally, using Equations (3), (5) and (6) in scheme (2), and taking $H(I) = \frac{1}{3}I$, $H_1 = -\frac{5}{12}$ and $H_2 = \frac{3}{4}$, the error equation is given as:
$$E_{n+1} = \left(\frac{1}{9}C_4 - C_2 C_3 + \frac{7}{3}C_2^3 + \frac{32}{81}H_3 C_2^3\right)E_n^4 + O(E_n^5).$$
This proves the fourth order of convergence.  □
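The conditions of Theorem 1 can be verified numerically in the scalar case. The sketch below implements the scalar member of (2) with $\alpha = 2/3$ and the polynomial weight $H(u) = \frac{9}{8} - \frac{7}{6}u + \frac{3}{8}u^2$ (which satisfies $H(1) = \frac{1}{3}$, $H'(1) = -\frac{5}{12}$, $H''(1) = \frac{3}{4}$), and estimates the computational order of convergence; the test equation $x^3 - 2 = 0$ and the precision settings are our own choices:

```python
from mpmath import mp, mpf, log

mp.dps = 300  # high precision so several fourth-order steps fit before round-off

f  = lambda x: x**3 - 2
df = lambda x: 3*x**2
H  = lambda u: mpf(9)/8 - mpf(7)/6*u + mpf(3)/8*u**2  # H(1)=1/3, H'(1)=-5/12, H''(1)=3/4

x = mpf('1.5')
incr = []
for _ in range(5):
    y = x - mpf(2)/3 * f(x)/df(x)
    u = df(y)/df(x)           # scalar analogue of U_n
    x_new = y - H(u)*f(x)/df(y)
    incr.append(abs(x_new - x))
    x = x_new

# approximated computational order of convergence from the last three increments
p = log(incr[-1]/incr[-2]) / log(incr[-2]/incr[-3])
```

The estimated order $p$ comes out close to 4, in agreement with the theorem.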

A Particular Case of Weight Function for the Fourth Order Scheme

Let us take a particular case of our proposed class (2), obtained by a specific choice of the weight function $H(U_n)$.
Case 1.
We take the weight function $H(U_n)$ in the form of a matrix polynomial of degree three:
$$H(U_n) = a_0 I + a_1 U_n + a_2 U_n^2 + a_3 U_n^3,$$
with
$$a_0 = \frac{9}{8} - a_3, \quad a_1 = -\frac{7}{6} + 3a_3, \quad a_2 = \frac{3}{8} - 3a_3.$$
In case we choose $a_3 = 0$, we have:
$$a_0 = \frac{9}{8}, \quad a_1 = -\frac{7}{6}, \quad a_2 = \frac{3}{8}.$$
Then, we obtain a fourth-order scheme, namely SF1, by using the above weight function in the new scheme (2), i.e.,
$$Y_n = X_n - \frac{2}{3}[F'(X_n)]^{-1}F(X_n), \quad n \geq 0, \qquad X_{n+1} = Y_n - \left(\frac{9}{8}I - \frac{7}{6}U_n + \frac{3}{8}U_n^2\right)[F'(Y_n)]^{-1}F(X_n).$$
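A direct translation of SF1 to code can be sketched as follows; the $2 \times 2$ test system is the polynomial system $s(x)$ of Section 4, and the starting point is our own choice:

```python
import numpy as np

def sf1_step(F, J, x):
    """One SF1 iteration: two linear solves with F'(x), one with F'(y)."""
    Jx = J(x)
    y = x - (2.0/3.0) * np.linalg.solve(Jx, F(x))
    U = np.linalg.solve(Jx, J(y))              # U_n = [F'(x)]^{-1} F'(y)
    W = 9.0/8.0*np.eye(len(x)) - 7.0/6.0*U + 3.0/8.0*(U @ U)
    return y - W @ np.linalg.solve(J(y), F(x))

# s(x) = (x1^2 - 1, x2^2 - 1), with diagonal Jacobian
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 1.0])
J = lambda x: np.diag([2*x[0], 2*x[1]])

x = np.array([2.0, -1.7])
for _ in range(10):
    x = sf1_step(F, J, x)
```

Starting from $(2, -1.7)$, the iterates converge rapidly to the root $(1, -1)$.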

3. Development of Sixth-Order Schemes

Let us further increase the order of convergence of the class (2) by adding one more step. We propose the following three-step family of iterative schemes:
$$Y^{(n)} = X^{(n)} - \frac{2}{3}[F'(X^{(n)})]^{-1}F(X^{(n)}), \quad Z^{(n)} = Y^{(n)} - H(U_n)[F'(Y^{(n)})]^{-1}F(X^{(n)}), \quad X^{(n+1)} = Z^{(n)} - K(U_n)[F'(X^{(n)})]^{-1}F(Z^{(n)}),$$
where $H, K : M_{n \times n}(\mathbb{R}) \to \mathcal{L}(\mathbb{R}^n)$, with
$$U_n = [F'(X^{(n)})]^{-1} F'(Y^{(n)}).$$
Let us remark that this class needs one more functional evaluation of $F$ than the fourth-order family (2). By doing so, our aim is to increase both the convergence order and the efficiency of the method. The class has sixth order of convergence under some conditions, as proven below.
Theorem 2.
Let $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a sufficiently differentiable function in a closed neighborhood $D$ of its root $W$. We assume that $F'(X)$ is continuous and non-singular at $W$. Moreover, we consider an initial guess $X^{(0)}$ close enough to $W$. Then, the convergence is guaranteed and the class of numerical schemes (7) has sixth order of convergence under the conditions:
$$H(I) = \frac{1}{3}I, \quad H_1 = -\frac{5}{12}, \quad H_2 = \frac{3}{4}, \quad |H_3| < \infty$$
and,
$$K(I) = I, \quad K_1 = -\frac{3}{2}, \quad |K_2| < \infty.$$
Proof. 
Again, by using the Taylor expansions of $F(X^{(n)})$ and $F'(X^{(n)})$ about $W$, we have:
$$F(X^{(n)}) = F'(W)\left(E_n + C_2 E_n^2 + C_3 E_n^3 + C_4 E_n^4 + C_5 E_n^5 + C_6 E_n^6\right) + O(E_n^7)$$
and,
$$F'(X^{(n)}) = F'(W)\left(I + 2C_2 E_n + 3C_3 E_n^2 + 4C_4 E_n^3 + 5C_5 E_n^4 + 6C_6 E_n^5\right) + O(E_n^6),$$
respectively.
By using (8) and (9) in class (7), we have:
$$Y^{(n)} - W = \frac{1}{3}E_n + \frac{2}{3}C_2 E_n^2 + \left(\frac{4}{3}C_3 - \frac{4}{3}C_2^2\right)E_n^3 + \left(2C_4 - \frac{14}{3}C_2 C_3 + \frac{8}{3}C_2^3\right)E_n^4 + \left(\frac{8}{3}C_5 - \frac{20}{3}C_2 C_4 - 4C_3^2 + \frac{40}{3}C_3 C_2^2 - \frac{16}{3}C_2^4\right)E_n^5 + \left(\frac{10}{3}C_6 - \frac{26}{3}C_2 C_5 - \frac{34}{3}C_3 C_4 + \frac{56}{3}C_4 C_2^2 + 22 C_2 C_3^2 - \frac{104}{3}C_3 C_2^3 + \frac{32}{3}C_2^5\right)E_n^6 + O(E_n^7).$$
Consequently, from the Taylor series expansions of $F(Y^{(n)})$ and $F'(Y^{(n)})$ about $W$, the matrix $U_n = [F'(X^{(n)})]^{-1} F'(Y^{(n)})$ is expanded as:
$$U_n = I - \frac{4}{3}C_2 E_n + \left(4C_2^2 - \frac{8}{3}C_3\right)E_n^2 + \left(\frac{40}{3}C_2 C_3 - \frac{32}{3}C_2^3 - \frac{104}{27}C_4\right)E_n^3 + \left(\frac{484}{27}C_2 C_4 - \frac{148}{3}C_3 C_2^2 + \frac{80}{3}C_2^4 + \frac{32}{3}C_3^2 - \frac{400}{81}C_5\right)E_n^4 + \cdots + O(E_n^7),$$
and
$$H(U_n) = H(I) - \frac{4}{3}H_1 C_2 E_n + \left(4H_1 C_2^2 - \frac{8}{3}H_1 C_3 + \frac{8}{9}H_2 C_2^2\right)E_n^2 + \left(\frac{40}{3}H_1 C_2 C_3 - \frac{32}{3}H_1 C_2^3 - \frac{104}{27}H_1 C_4 + \frac{32}{9}H_2 C_2 C_3 - \frac{16}{3}H_2 C_2^3 - \frac{32}{81}H_3 C_2^3\right)E_n^3 + \left(\frac{484}{27}H_1 C_2 C_4 - \frac{148}{3}H_1 C_3 C_2^2 + \frac{80}{3}H_1 C_2^4 + \frac{32}{3}H_1 C_3^2 - \frac{400}{81}H_1 C_5 + \frac{416}{81}H_2 C_2 C_4 - \frac{256}{9}H_2 C_3 C_2^2 + \frac{200}{9}H_2 C_2^4 + \frac{32}{9}H_2 C_3^2 + \frac{32}{9}H_3 C_2^4 - \frac{64}{27}H_3 C_2^2 C_3 + \frac{32}{243}H_4 C_2^4\right)E_n^4 + \cdots + O(E_n^7).$$
Using Equations (8)–(11) in class (7), we have:
$$Z^{(n)} - W = \left(\frac{1}{9}C_4 - C_2 C_3 + \frac{7}{3}C_2^3 + \frac{32}{81}H_3 C_2^3\right)E_n^4 + O(E_n^5).$$
Similarly, we expand $F(Z^{(n)})$, $F'(Z^{(n)})$ and $K(U_n)$ to obtain the final error equation. Choosing $K(I) = I$ and $K_1 = -\frac{3}{2}$, all terms up to order five cancel and the error equation takes the form
$$E_{n+1} = M\left(C_2, C_3, C_4, C_5, H_3, K_2\right) E_n^6 + O(E_n^7),$$
where $M$ denotes a polynomial expression with fixed rational coefficients in the Taylor coefficients $C_j$ and the parameters $H_3$ and $K_2$.
This shows that the proposed class has sixth order of convergence.  □

Some Special Cases of Weight Functions for Sixth Order Scheme

Let us give a few particular cases of our suggested scheme (7) by using several operators H ( U n ) and K ( U n ) . These special cases are given as follows:
Case 2.
Choosing $H(U_n)$ and $K(U_n)$ as polynomial operators of degree 2,
$$H(U_n) = a_0 I + a_1 U_n + a_2 U_n^2, \qquad K(U_n) = b_0 I + b_1 U_n + b_2 U_n^2,$$
with
$$a_0 = \frac{9}{8}, \quad a_1 = -\frac{7}{6}, \quad a_2 = \frac{3}{8},$$
$$b_0 = \frac{5}{2} + b_2, \quad b_1 = -\frac{3}{2} - 2b_2,$$
which for $b_2 = \frac{27}{8}$ gives
$$b_0 = \frac{47}{8}, \quad b_1 = -\frac{33}{4}.$$
Then, we acquire another sixth-order scheme, namely SF2, by using the above choices of weight functions in the proposed class (7), as:
$$Y_n = X_n - \frac{2}{3}[F'(X_n)]^{-1}F(X_n), \quad Z_n = Y_n - \left(\frac{9}{8}I - \frac{7}{6}U_n + \frac{3}{8}U_n^2\right)[F'(Y_n)]^{-1}F(X_n), \quad X_{n+1} = Z_n - \left(\frac{47}{8}I - \frac{33}{4}U_n + \frac{27}{8}U_n^2\right)[F'(X_n)]^{-1}F(Z_n).$$
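In the scalar case, the sixth order of SF2 can be checked numerically along the lines of the earlier verification; the test equation $x^3 - 2 = 0$ and the precision settings are, again, our own choices:

```python
from mpmath import mp, mpf, log

mp.dps = 600

f, df = (lambda x: x**3 - 2), (lambda x: 3*x**2)
H = lambda u: mpf(9)/8 - mpf(7)/6*u + mpf(3)/8*u**2     # fourth-order weight
K = lambda u: mpf(47)/8 - mpf(33)/4*u + mpf(27)/8*u**2  # K(1)=1, K'(1)=-3/2

x, incr = mpf('1.5'), []
for _ in range(4):
    y = x - mpf(2)/3*f(x)/df(x)
    u = df(y)/df(x)
    z = y - H(u)*f(x)/df(y)
    x_new = z - K(u)*f(z)/df(x)
    incr.append(abs(x_new - x))
    x = x_new

p = log(incr[-1]/incr[-2]) / log(incr[-2]/incr[-3])
```

The estimated computational order $p$ comes out close to 6.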
Case 3.
If we choose two rational weight functions $H(U_n)$ and $K(U_n)$:
$$H(U_n) = \left(t_0 I + t_1 U_n + t_2 U_n^2\right)^{-1}, \qquad K(U_n) = \left(s_0 I + s_1 U_n + s_2 U_n^2\right)^{-1},$$
with
$$t_0 = \frac{9}{16}, \quad t_1 = \frac{9}{8}, \quad t_2 = \frac{21}{16},$$
$$s_0 = -\frac{1}{2} + s_2, \quad s_1 = \frac{3}{2} - 2s_2.$$
For $s_2 = -\frac{9}{8}$,
$$s_0 = -\frac{13}{8}, \quad s_1 = \frac{15}{4}.$$
Then, we obtain another sixth-order scheme, namely SF3, by using the above weight functions in the suggested class (7), which is given by:
$$Y_n = X_n - \frac{2}{3}[F'(X_n)]^{-1}F(X_n), \quad Z_n = Y_n - \frac{16}{3}\left(3I + 6U_n + 7U_n^2\right)^{-1}[F'(Y_n)]^{-1}F(X_n), \quad X_{n+1} = Z_n - 8\left(-13I + 30U_n - 9U_n^2\right)^{-1}[F'(X_n)]^{-1}F(Z_n).$$

4. Dynamical Analysis of Proposed Classes

In this section, the dynamical concepts defined in the introductory section are used to analyze the qualitative behavior of the proposed schemes SF1, SF2 and SF3, independently of their order of convergence. To achieve this aim, they are applied to the polynomial system with separated variables $s(x) = 0$, where
$$s(x) = \left(x_1^2 - 1,\; x_2^2 - 1\right),$$
and the corresponding vectorial rational functions are studied.

4.1. Stability of SF1

By applying SF1 to the polynomial system $s(x) = 0$, we obtain its rational vectorial operator $R(x) = \left(r_1(x), r_2(x)\right)^T$, with $j$-th coordinate
$$r_j(x) = \frac{8a_3\left(x_j^2 - 1\right)^4 + 9\left(9x_j^8 + 39x_j^6 - x_j^4 + x_j^2\right)}{144\, x_j^5\left(2x_j^2 + 1\right)}, \quad j = 1, 2.$$
Then, the following result is proven.
Theorem 3.
The rational operator $R(x)$ related to the class of iterative schemes SF1 has $(1,1)$, $(1,-1)$, $(-1,1)$, $(-1,-1)$, the roots of $s(x)$, as fixed points for any value of $a_3$. Moreover, they are superattracting. Indeed, there may exist other fixed points of $R(x)$, whose number depends on the value of the parameter $a_3$ as follows:
• If $a_3 < 0$ or $a_3 > \frac{207}{8}$, then each component of the fixed points can be $\pm 1$ or any of the two real roots of the polynomial $p_1(t) = -8a_3 + \left(24a_3 - 9\right)t^2 - 24a_3 t^4 + \left(8a_3 - 207\right)t^6$.
• If $0 \leq a_3 \leq \frac{207}{8}$, the only fixed points are the roots of system $s(x)$.
Proof. 
From the expression of the rational function $R(x)$ and the separated variables of the system, we deduce that a fixed point $x = (x_1, x_2)$ must satisfy $r_j(x) = x_j$, $j = 1, 2$. So,
$$\left(x_j^2 - 1\right)\left(8a_3\left(x_j^2 - 1\right)^3 - 9\left(x_j^2 + 23x_j^6\right)\right) = 0, \quad j = 1, 2.$$
Then, the values $x_j = \pm 1$ satisfy this expression, and the roots of $s(x)$, i.e., $(1,1)$, $(1,-1)$, $(-1,1)$ and $(-1,-1)$, are fixed points of the rational operator $R(x)$. To analyze their stability, we calculate $R'(x)$:
$$R'(x) = \begin{pmatrix} \dfrac{\left(x_1^2 - 1\right)^3\left(8a_3\left(2x_1^4 + 17x_1^2 + 5\right) + 27\left(6x_1^4 + x_1^2\right)\right)}{144\, x_1^6\left(2x_1^2 + 1\right)^2} & 0 \\ 0 & \dfrac{\left(x_2^2 - 1\right)^3\left(8a_3\left(2x_2^4 + 17x_2^2 + 5\right) + 27\left(6x_2^4 + x_2^2\right)\right)}{144\, x_2^6\left(2x_2^2 + 1\right)^2} \end{pmatrix}.$$
It is clear that $R'(\pm 1, \pm 1)$ is the null matrix; therefore, the fixed points are superattracting. Moreover, there are (strange) fixed points different from the roots of $s(x)$, satisfying $8a_3(x_j^2 - 1)^3 - 9(x_j^2 + 23x_j^6) = 0$. That is, the roots of $p_1(t)$, when they are real, are components of the strange fixed points of $R(x)$. It can be checked that, when $0 \leq a_3 \leq \frac{207}{8}$, all the roots of $p_1(t)$ are complex; otherwise, there are two real roots of $p_1(t)$ that, by themselves or combined with $\pm 1$, form the strange fixed points of $R(x)$.  □
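The factorization used in the proof can be checked numerically: for a value of the parameter with strange fixed points, any real root of $p_1$ is indeed fixed by the coordinate map $r_j$. A sketch, with $a_3 = -5$ as our own illustrative choice:

```python
import numpy as np

a3 = -5.0

def r(t, a3):
    """j-th coordinate of the SF1 rational operator on s(x)."""
    num = 8*a3*(t**2 - 1)**4 + 9*(9*t**8 + 39*t**6 - t**4 + t**2)
    den = 144*t**5*(2*t**2 + 1)
    return num/den

# p1 written in u = t^2: (8a3 - 207)u^3 - 24a3 u^2 + (24a3 - 9)u - 8a3
roots_u = np.roots([8*a3 - 207, -24*a3, 24*a3 - 9, -8*a3])
u_star = max(z.real for z in roots_u if abs(z.imag) < 1e-10 and z.real > 0)
t_star = np.sqrt(u_star)   # component of a strange fixed point
```

By construction, the roots of $s(x)$ stay fixed for every $a_3$, while $t^*$ is an extra (strange) fixed component for this parameter value.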
In addition to the calculation of the fixed points of $R(x)$, it is possible that other attracting elements exist that must be avoided. To detect them, if they exist, we calculate the free critical points. The proof of the following result is straightforward from the eigenvalues of $R'(x)$.
Theorem 4.
The rational operator $R(x)$ related to the iterative method SF1 has the points $(1,1)$, $(1,-1)$, $(-1,1)$, $(-1,-1)$ as critical points. However, depending on the value of $a_3$, the number of critical points of $R(x)$ can increase as follows:
• If $a_3 \in \left(-\frac{81}{8}, 0\right)$, then there exist free critical points whose components are the different combinations among the two real roots of the polynomial $p_2(t) = 40a_3 + \left(27 + 136a_3\right)t^2 + \left(162 + 16a_3\right)t^4$ and $\pm 1$.
• In other case, that is, for $a_3 \in \left(-\infty, -\frac{81}{8}\right] \cup [0, +\infty)$, there are no free critical points.
Proof. 
Critical points of $R(x)$ are found as those cancelling the eigenvalues of the associated Jacobian matrix $R'(x)$. Due to the separated variables of $s(x)$, these eigenvalues are the diagonal components of $R'(x)$. So, the components of the critical points are $\pm 1$ and the real roots of $p_2(t)$. It can be checked that $p_2(t)$ has no real roots if $a_3 \notin \left(-\frac{81}{8}, 0\right)$, and two real roots otherwise. □
From Theorem 4, we state that the convergence of SF1 on $s(x)$ is global if $a_3 \notin \left(-\frac{81}{8}, 0\right)$, as no behavior other than convergence to the roots is possible. In Figure 1, some dynamical planes can be seen. We use a mesh $M$ of $400 \times 400$ points, and every initial guess in this mesh is iterated a maximum of 80 times, with a tolerance of $10^{-3}$ for the estimated error.
Each point of $M$ is colored depending on the root (presented as white stars) it converges to; this color is brighter when the number of iterations needed is lower. If the maximum number of iterations is reached without convergence to any of the roots, the point is colored in black. In Figure 1a, the dynamical plane of the SF1 method acting on $s(x)$ is presented. Let us notice that, for $a_3 = 0$, it has the same performance as Newton's method but with fourth order of convergence (see Figure 1a); it is still very stable for $a_3 = 5$, where the basins of attraction of the roots have infinitely many components (Figure 1b); and, in the case of $a_3 = 15$, black regions of no convergence appear, although in this case they correspond to slow convergence (Figure 1c).
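The dynamical planes described above can be reproduced with a few lines of code. The sketch below computes, for SF1 with $a_3 = 0$, the index of the root to which each initial guess of a mesh converges; for brevity it uses a coarse $40 \times 40$ mesh instead of $400 \times 400$ and omits the plotting step:

```python
import numpy as np

def r0(t):
    """Coordinate map of SF1 on s(x) for a3 = 0."""
    return 9*(9*t**8 + 39*t**6 - t**4 + t**2) / (144*t**5*(2*t**2 + 1))

roots = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def basin_index(x0, max_iter=80, tol=1e-3):
    """Index of the root reached from x0, or -1 (black) if none is reached."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x = np.array([r0(x[0]), r0(x[1])])
        for k, w in enumerate(roots):
            if np.linalg.norm(x - w) < tol:
                return k
    return -1

xs = np.linspace(-3, 3, 40)   # 40 points; none falls on the axes x_j = 0
grid = [[basin_index((a, b)) for a in xs] for b in xs]
```

Coloring `grid` by value (and black for −1) reproduces a coarse version of the dynamical planes of Figure 1.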
In order to find undesirable values of the parameter, we look for those members of the family that are able to converge to an attracting element different from the roots of $s(x)$. To this end, we use the parameter line. To generate it, a free critical point depending on $a_3$ is used as the initial guess for each value of $a_3$ (see Figure 2). In the interval $a_3 \in \left(-\frac{81}{8}, 0\right)$, where there exist free critical points, each point of a mesh of 500 points is painted in red if the critical point converges to one of the roots, and in black otherwise.
In Figure 3, different unstable cases can be found: for $a_3 = -9.4$, several black areas of no convergence to the roots appear, corresponding to four periodic orbits of period 4 (see Figure 3a):
$$\{(1.0000, 11.2191), (1.0000, 0.4249), (1.0000, -11.2191), (1.0000, -0.4249)\},$$
$$\{(11.2191, 1.0000), (0.4249, 1.0000), (-11.2191, 1.0000), (-0.4249, 1.0000)\},$$
$$\{(-1.0000, 11.2191), (-1.0000, 0.4249), (-1.0000, -11.2191), (-1.0000, -0.4249)\},$$
$$\{(11.2191, -1.0000), (0.4249, -1.0000), (-11.2191, -1.0000), (-0.4249, -1.0000)\}.$$
A similar performance can be found in Figure 3b, where the black areas correspond to periodic orbits of higher period. So, the only values of $a_3$ where convergence to attracting periodic orbits is found are those in black in the parameter line; for the rest of the real values of the parameter, the only possible behavior is convergence to the roots of $s(x)$.

4.2. Stability of SF2

Now, let us apply the SF2 method to $s(x) = 0$, obtaining its rational vectorial operator $S(x) = \left(s_1(x), s_2(x)\right)^T$, whose $j$-th coordinate is
$$s_j(x) = \frac{9N_j(x_j) - 2b_2\left(x_j^2 - 1\right)^6\left(81x_j^4 + 2x_j^2 + 1\right)}{9216\, x_j^{11}\left(2x_j^2 + 1\right)^2}, \quad j = 1, 2,$$
where $N_j(x_j) = 909x_j^{16} + 6615x_j^{14} + 609x_j^{12} + 1491x_j^{10} - 489x_j^8 + 85x_j^6 - 5x_j^4 + x_j^2$.
Therefore, we analyze the fixed points of S ( x ) in the next result. The proof is analogous to that of SF1, so it is omitted.
Theorem 5.
The rational operator $S(x)$ related to the class of iterative methods SF2 has the roots of $s(x)$, $(1,1)$, $(1,-1)$, $(-1,1)$, $(-1,-1)$, as superattracting fixed points, but some strange fixed points also exist, whose components are the real roots of the polynomial $q_1(t) = \left(162b_2 + 28683\right)t^{14} + \left(6012 - 806b_2\right)t^{12} + \left(1602b_2 + 9747\right)t^{10} - \left(1590b_2 + 3672\right)t^8 + \left(790b_2 + 729\right)t^6 - \left(162b_2 + 36\right)t^4 + \left(6b_2 + 9\right)t^2 - 2b_2$, or the combination of one of these roots with $+1$ or $-1$. The number of real roots of $q_1(t)$ depends on the value of $b_2$:
• If $b_2 < -\frac{3187}{18}$ or $b_2 > 0$, $q_1(t)$ has only two real roots and the strange fixed points are defined by combining them with themselves or with $\pm 1$.
• If $-\frac{3187}{18} \leq b_2 \leq 0$, $q_1(t)$ has no real roots and there are no strange fixed points.
Now, it is important to study whether there exist free critical points, as stated in the next result. Let us notice that, also in this case, there exist free critical points, depending on the value of $b_2$, that can give rise to their own basins of attraction.
Theorem 6.
The rational operator $S(x)$ related to the iterative method SF2 has the points $(\pm 1, \pm 1)$ as critical points. Moreover, the following cases can be described depending on the value of $b_2$:
• If $b_2 \leq 0$, $b_2 = \frac{27}{8}$ or $b_2 \geq \frac{101}{2}$, then there are no free critical points.
• In other case, that is, if $0 < b_2 < \frac{27}{8}$ or $\frac{27}{8} < b_2 < \frac{101}{2}$, the components of the free critical points are any of the two real roots of the polynomial $q_2(t) = -22b_2 + \left(81 + 98b_2\right)t^2 + \left(324 + 1238b_2\right)t^4 + \left(3645 + 4366b_2\right)t^6 + \left(16362 + 324b_2\right)t^8$, or any of them combined with $\pm 1$.
From Theorem 6, we deduce that those elements of class SF2 with values of $b_2$ satisfying $b_2 \leq 0$, $b_2 = \frac{27}{8}$ or $b_2 \geq \frac{101}{2}$ can only converge to the roots of the system, achieving the most stable performance. In other cases, the parameter line helps us to deduce their behavior. In Figure 4a, the line shows convergence to the roots for both free critical points when $0 < b_2 < \frac{27}{8}$; however, in Figure 4b, two black areas of the parameter line show the values where convergence to the roots is not guaranteed when $\frac{27}{8} < b_2 < \frac{101}{2}$. In this case, the same performance also appears for any free critical point in this interval.
So, we can isolate values of $b_2$ in these areas of the parameter line when looking for unstable behavior, and choose any other value of the parameter for stable behavior. In Figure 5, we observe the dynamical planes corresponding to the SF2 method acting on $s(x)$ (Figure 5a). We observe that, in both cases, the complexity of the boundary between the basins of convergence is higher than for stable elements of SF2 and, in the case of Figure 5b, a black area of slower convergence (or even divergence) appears. The first case, for $b_2 = 45.5$, shows in yellow a periodic orbit of period 4 close to one of the roots; specifically, this orbit is $\{(1.0000, 0.4919), (1.0000, 11.7098), (1.0000, -0.4919), (1.0000, -11.7098)\}$, and there exist three other symmetric ones for this value of $b_2$. In Figure 5b, a similar performance is observed, with black lines corresponding to the basin of attraction of the period-6 orbit
$$\{(18.7737, 1.0), (0.3175, 1.0), (2338.7, 1.0), (20.5559, 1.0), (0.3201, 1.0), (2136.1, 1.0)\}$$
and other symmetric ones in the vertical lines containing the roots.
Nevertheless, the most common performance of this class of iterative methods is stable, with convergence to the roots as the only possible behavior. Some examples of this behavior can be observed in Figure 6 for different values of $b_2$ with no free critical points (Figure 6a,b,d) or in the red areas of the parameter lines (Figure 6c).

4.3. Stability of SF3

Finally, let us apply the SF3 method to $s(x) = 0$, obtaining its rational multidimensional operator $T(x) = \left(t_1(x), t_2(x)\right)^T$, whose $j$-th coordinate is
$$t_j(x) = \frac{2\left(s_2\left(x_j^2 - 1\right)^2 m_1(x_j) + 3m_2(x_j)\, x_j^2\right)}{3x_j\left(2x_j^2 + 1\right)^2\left(91x_j^4 + 46x_j^2 + 7\right)^2\left(2s_2\left(x_j^2 - 1\right)^2 + 9\left(x_j^4 + x_j^2\right)\right)}, \quad j = 1, 2,$$
where $m_1(x_j) = 26936x_j^{14} + 166132x_j^{12} + 203838x_j^{10} + 115793x_j^8 + 38344x_j^6 + 7842x_j^4 + 938x_j^2 + 49$ and $m_2(x_j) = 29452x_j^{16} + 325588x_j^{14} + 518017x_j^{12} + 485983x_j^{10} + 238015x_j^8 + 69238x_j^6 + 12091x_j^4 + 1183x_j^2 + 49$. Therefore, the results giving information about the stability of $T(x)$ appear below. The proofs are omitted as they are similar to the previous ones.
Theorem 7.
The rational operator $T(x)$ related to the iterative method SF3 has only the roots of $s(x)$, $(1,1)$, $(1,-1)$, $(-1,1)$ and $(-1,-1)$, as fixed points (being superattracting) if $s_2 \geq 0$. However, for negative values of $s_2$, there also exist strange fixed points:
• If $s_2 < -\frac{179409}{36218}$, the entries of the strange fixed points are the four real roots of the polynomial $m_3(t) = \left(144872s_2 + 717636\right)t^{16} + \left(1456980 - 77464s_2\right)t^{14} + \left(1641285 - 143110s_2\right)t^{12} + \left(880986 - 8836s_2\right)t^{10} + \left(280986 - 44822s_2\right)t^8 + \left(54612 - 29072s_2\right)t^6 + \left(6069 - 9062s_2\right)t^4 + \left(294 - 1484s_2\right)t^2 + 98s_2$, combined with $\pm 1$.
• If $-\frac{179409}{36218} \leq s_2 < 0$, the entries of the strange fixed points are the two real roots of $m_3(t)$, combined with $\pm 1$.
Now, it is important to study whether there exist free critical points, as stated in the next result. In this case, there also exist points, different from the roots of $s(x)$, that make null both eigenvalues of $T'(x)$.
Theorem 8.
The rational operator $T(x)$ related to the iterative method SF3 has the points $(\pm 1, \pm 1)$ as its only critical points if $s_2 \geq 0$. In the other case, the number of free critical points depends on the roots of polynomials of eighteenth degree.
In Figure 7, the dynamical planes corresponding to the SF3 method acting on $s(x)$ can be observed. Stable behavior corresponds to positive or null values of $s_2$ (Figure 7a,b), while stable or unstable performance is found for some negative values of $s_2$, as can be seen in Figure 7c,d. Several attracting strange fixed points have been found for $s_2 = -5.5$, whose components are $\pm 5$ and/or $\pm 1$, and whose basins of attraction appear in black.

5. Numerical Results of Fourth Order Schemes

In this section, we work on some numerical examples. We consider four mathematical problems for the multidimensional case: the first is a kinematic synthesis problem [23,24], the second is a boundary value problem, the third is the 2D Bratu problem and the fourth is a typical nonlinear problem [25]. We compare the behavior of SF1 with that of the scheme proposed by Soleymani et al. [5], denoted by FS1:
$$Y_n = X_n - \frac{2}{3}[F'(X_n)]^{-1}F(X_n), \qquad X_{n+1} = X_n - 2\,G(\eta_n)H(\lambda_n)\left[F'(X_n) + F'(Y_n)\right]^{-1}F(X_n),$$
where $\eta_n = [F'(X_n)]^{-1}F(X_n)$ and $\lambda_n = [F'(X_n)]^{-1}F'(Y_n)$, with
$$G(\eta_n) = I$$
and
$$H(\lambda_n) = I + \frac{1}{4}\left(\lambda_n - I\right) + \frac{3}{4}\left(\lambda_n - I\right)^2.$$
We have included the iteration index $n$, the approximated computational order of convergence
$$p_n = \frac{\ln\left(incr_{n+1}/incr_n\right)}{\ln\left(incr_n/incr_{n-1}\right)},$$
the residual error of the corresponding function $\|F(X^{(n+1)})\|$, the error between two consecutive iterates $incr_{n+1} = \|X^{(n+1)} - X^{(n)}\|$ at each step, the numerical estimation of the asymptotic error constant (AEC) and its last value, $\eta$. All the computations have been done by using the software Maple 16.
Example 1.
We consider the kinematic synthesis for steering, which is described as the system of nonlinear equations
$$\left[E_j\left(x_2\sin\Psi_j - x_3\right) - F_j\left(x_2\sin\Phi_j - x_3\right)\right]^2 + \left[F_j\left(x_2\cos\Phi_j + 1\right) - E_j\left(x_2\cos\Psi_j - 1\right)\right]^2 - \left[x_1\left(x_2\sin\Phi_j - x_3\right)\left(x_2\cos\Phi_j + 1\right) - x_1\left(x_2\cos\Psi_j - x_3\right)\left(x_2\sin\Phi_j - x_3\right)\right]^2 = 0,
$$
where
$$E_j = x_3 x_2\left(\sin\Phi_j - \sin\Phi_0\right) - x_1\left(x_2\sin\Phi_j - x_3\right) + x_2\left(\cos\Phi_j - \cos\Phi_0\right),$$
for $j = 1, 2, 3$, and
$$F_j = -x_3 x_2\sin\Psi_j - x_2\cos\Psi_j + \left(x_3 - x_1\right)x_2\sin\Psi_0 + x_2\cos\Psi_0 + x_1 - x_3,$$
$j = 1, 2, 3$. We present the values of $\Psi_j$ and $\Phi_j$ in Table 1. We take the seed $X^{(0)} = (0.7, 0.7, 0.7)^T$ to approximate the solution $W = (0.9051567, 0.6977417, 0.6508335)^T$. The numerical results for this example are shown in Table 2.
Example 2.
Let us consider the boundary value problem
$$y'' = \frac{1}{2}y^3 + 3y' - \frac{3}{2}x + \frac{1}{2}, \qquad y(0) = 0, \quad y(1) = 1.$$
In addition, we assume the following partition of $[0,1]$:
$$x_0 = 0 < x_1 < x_2 < x_3 < \cdots < x_m = 1,$$
where $x_{i+1} = x_i + h$ and $h = 1/m$, so that
$$y_0 = y(x_0) = 0, \quad y_1 = y(x_1), \quad \ldots, \quad y_m = y(x_m) = 1.$$
Now, we discretize the above problem, approximating the derivatives by first- and second-order central differences:
$$y'_k = \frac{y_{k+1} - y_{k-1}}{2h}, \qquad y''_k = \frac{y_{k-1} - 2y_k + y_{k+1}}{h^2}, \qquad k = 1, 2, 3, \ldots, m-1.$$
In this way, we produce the $(m-1)\times(m-1)$ system of nonlinear equations
$$\left(1 + \frac{3h}{2}\right)y_{k-1} - 2y_k + \left(1 - \frac{3h}{2}\right)y_{k+1} - \frac{h^2}{2}y_k^3 = \left(\frac{1}{2} - \frac{3}{2}x_k\right)h^2, \qquad k = 1, 2, 3, \ldots, m-1.$$
Let us assume the initial guess
$$y_k^{(0)} = \left(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}\right)^T.$$
In particular, we solve this problem for $m = 7$, so that we obtain a $6\times 6$ system of nonlinear equations. The solution of this problem is given by:
W = ( 0.07654393 , 0.1658739 , 0.2715210 , 0.3984540 , 0.5538864 , 0.7486878 ) T .
The numerical results are shown in Table 3.
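The discrete system is tridiagonal, so a plain Newton iteration with the Thomas algorithm suffices to check the discretization. The sketch below follows our reading of the equations above (the boundary values $y_0 = 0$, $y_m = 1$ are substituted into the first and last equations); it is an illustrative solver, not the paper's SF1 scheme:

```python
def solve_bvp_newton(m=7, iters=20):
    """Newton's method for the finite-difference system of the BVP
    y'' = y^3/2 + 3y' - 3x/2 + 1/2, y(0) = 0, y(1) = 1."""
    h = 1.0 / m
    x = [k * h for k in range(m + 1)]
    y = [0.5] * (m - 1)                       # interior unknowns y_1..y_{m-1}

    def residual(y):
        F = []
        for k in range(1, m):
            ym1 = 0.0 if k == 1 else y[k - 2]      # y_0 = 0
            yp1 = 1.0 if k == m - 1 else y[k]      # y_m = 1
            yk = y[k - 1]
            F.append((1 + 1.5 * h) * ym1 - 2 * yk + (1 - 1.5 * h) * yp1
                     - 0.5 * h * h * yk ** 3 - (0.5 - 1.5 * x[k]) * h * h)
        return F

    n = m - 1
    for _ in range(iters):
        F = residual(y)
        # Tridiagonal Jacobian of the discrete equations.
        a = [1 + 1.5 * h] * n                          # sub-diagonal
        b = [-2 - 1.5 * h * h * yk ** 2 for yk in y]   # diagonal
        c = [1 - 1.5 * h] * n                          # super-diagonal
        # Thomas algorithm for J * delta = -F.
        d = [-v for v in F]
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            piv = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / piv
            dp[i] = (d[i] - a[i] * dp[i - 1]) / piv
        delta = [0.0] * n
        delta[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            delta[i] = dp[i] - cp[i] * delta[i + 1]
        y = [yi + di for yi, di in zip(y, delta)]
    return y, max(abs(v) for v in residual(y))

y, res = solve_bvp_newton()
```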
Example 3.
The 2D Bratu problem is described as
$$u_{xx} + u_{tt} + C e^{u} = 0, \qquad (x,t) \in \Omega,$$
with
Ω = [ 0 , 1 ] × [ 0 , 1 ] ,
and boundary condition $u = 0$ on $\partial\Omega$. The solution of this nonlinear partial differential equation can be estimated by means of a finite difference discretization, which reduces the problem to solving a nonlinear system. Let us assume that $w_{i,j} = u(x_i, t_j)$ is the approximate solution at the grid points, that $m$ and $l$ are the numbers of steps in the $x$ and $t$ directions, and that $h$ and $k$ are the corresponding step sizes. To solve the given PDE, we apply central differences to $u_{xx}$ and $u_{tt}$, respectively:
$$u_{xx} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}, \qquad u_{tt} = \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{k^2}.$$
Then, taking $h = k$, we have the equation
$$u_{i+1,j} + u_{i-1,j} - 4u_{i,j} + u_{i,j+1} + u_{i,j-1} + h^2 C e^{u_{i,j}} = 0,$$
and, with $C = 0.1$ and $t \in [0,1]$, we have the system
$$u_{i+1,j} + u_{i-1,j} - 4u_{i,j} + u_{i,j+1} + u_{i,j-1} + 0.1\,h^2 e^{u_{i,j}} = 0.$$
We have assumed that $m = l = 4$, with the initial vector of size 9,
$$X^{(0)} = 0.1\left(\sin(\pi h)\sin(\pi k), \sin(2\pi h)\sin(2\pi k), \ldots, \sin(10\pi h)\sin(10\pi k)\right)^T.$$
Numerical results are shown in Table 4.
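Because $C$ is small, the discrete Bratu equations can be checked with a simple nonlinear Gauss–Seidel sweep (a plain fixed-point iteration, used here only to verify the discrete system; the paper solves it with the proposed Newton-type schemes). The grid setup below matches $m = l = 4$, $C = 0.1$:

```python
import math

# 2D Bratu discretization on a uniform (m+1) x (m+1) grid of [0,1]^2,
# with u = 0 on the boundary, C = 0.1, and m = l = 4, so h = k = 1/4
# and the interior grid is 3 x 3 (9 unknowns).
m, C = 4, 0.1
h = 1.0 / m
u = [[0.0] * (m + 1) for _ in range(m + 1)]   # includes boundary zeros

for _ in range(200):                          # Gauss-Seidel sweeps
    for i in range(1, m):
        for j in range(1, m):
            nb = u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
            u[i][j] = (nb + C * h * h * math.exp(u[i][j])) / 4.0

# Maximum residual of the discrete equations over the interior nodes.
res = max(abs(u[i + 1][j] + u[i - 1][j] - 4 * u[i][j] + u[i][j + 1]
              + u[i][j - 1] + C * h * h * math.exp(u[i][j]))
          for i in range(1, m) for j in range(1, m))
```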
Example 4.
Let us assume the nonlinear Fisher equation with homogeneous boundary conditions of Neumann type and diffusion coefficient $D$:
$$u_t = D u_{xx} + u(1-u), \qquad u(x,0) = 1.5 + 0.5\cos(\pi x), \quad 0 \le x \le 1, \qquad u_x(0,t) = 0, \quad u_x(1,t) = 0, \quad t \ge 0.$$
Now, we use a finite difference discretization to reduce it to a system of nonlinear equations. Let us assume that $w_{i,j} = u(x_i,t_j)$ is the approximate solution at the grid points of the mesh, $m$ and $l$ are the numbers of steps in the $x$ and $t$ directions, and $h$ and $k$ are the step sizes. Let us apply the central difference formula to $u_{xx}(x_i,t_j)$, the backward difference to $u_t(x_i,t_j)$, and the forward difference to $u_x(x_i,t_j)$, where $t \in [0,1]$. Then,
$$u_{i,j}^2 - 8D\left(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}\right) = u_{i,j-1}.$$
In this example, $D = 1$, so the problem becomes
$$u_{i,j}^2 - 8\left(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}\right) = u_{i,j-1}.$$
For the solution of the system, we have considered $m = l = 4$, which gives a nonlinear system of size 9, with $x_i^{(0)} = \frac{i}{(m-1)^2}$, $i = 1, 2, \ldots, m-1$. Numerical results are shown in Table 5.
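Whatever the exact scaling of the displayed formula, each implicit (backward-difference) time level of the semi-discretized Fisher equation leads to a nonlinear system. A sketch solving one such level by pointwise Newton updates inside Gauss–Seidel sweeps, under our own choice $h = k = 1/4$ and $D = 1$ (the mirrored-neighbor treatment of the Neumann conditions and all names are ours):

```python
import math

D, h, k = 1.0, 0.25, 0.25                 # diffusion, space and time steps
r = k * D / (h * h)

# Initial profile u(x, 0) = 1.5 + 0.5*cos(pi*x) on x_i = i*h, i = 0..4.
u_old = [1.5 + 0.5 * math.cos(math.pi * i * h) for i in range(5)]
u = u_old[:]                              # initial guess for the new time level

def F(i, u):
    """Backward-Euler residual at node i; Neumann BCs via mirrored neighbors."""
    left = u[i - 1] if i > 0 else u[1]
    right = u[i + 1] if i < 4 else u[3]
    lap = left - 2.0 * u[i] + right
    return u[i] - u_old[i] - r * lap - k * u[i] * (1.0 - u[i])

for _ in range(300):                      # nonlinear Gauss-Seidel sweeps
    for i in range(5):
        # One scalar Newton update per node: dF/du_i = 1 + 2r - k(1 - 2u_i).
        u[i] -= F(i, u) / (1.0 + 2.0 * r - k * (1.0 - 2.0 * u[i]))

res = max(abs(F(i, u)) for i in range(5))
```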

6. Numerical Comparison of Sixth-Order Schemes

Now, we compare the proposed schemes with the one designed by Alzahrani et al. [26], denoted as JR1,
$$Y_n = X_n - \frac{2}{3}\,[F'(X_n)]^{-1}F(X_n), \qquad T_n = X_n - Q(\Gamma_n)\,[F'(X_n)]^{-1}F(X_n), \qquad X_{n+1} = T_n - \left[-2\,[F'(X_n)]^{-1} + 6\left(F'(X_n) + F'(Y_n)\right)^{-1}\right]F(T_n),$$
where the weight function $Q$, continuous and differentiable in a neighborhood of $W$, is given by $Q(\Gamma_n) = 12\Gamma_n^2 - 9\Gamma_n + \frac{5}{2}$, with $\Gamma_n = \left(F'(X_n) + F'(Y_n)\right)^{-1}F'(X_n)$.
Furthermore, we have the scheme by Behl et al. [9], denoted by JR2,
$$Y_n = X_n - \frac{2}{3}\,[F'(X_n)]^{-1}F(X_n), \qquad T_n = X_n - \left[a_1 I + a_2\left([F'(Y_n)]^{-1}F'(X_n)\right)^2\right][F'(X_n)]^{-1}F(X_n), \qquad X_{n+1} = T_n + \left(b_2 F'(X_n) + b_3 F'(Y_n)\right)^{-1}\left[F'(X_n) + b_1 F'(Y_n)\right][F'(X_n)]^{-1}F(T_n),$$
where $a_1 = \frac{5}{8}$, $a_2 = \frac{3}{8}$, $b_1 = 2.6$, $b_2 = 4.4$, and $b_3 = -8$.
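The design principle behind these families, gaining order by appending one substep that reuses already-computed derivatives, can be checked numerically in the scalar case. The sketch below appends the frozen-derivative correction $z - f(z)/f'(y)$ to the classical fourth-order Jarratt step; this illustrative corrector yields fifth order, not the sixth order of SF2/SF3 or JR1/JR2, whose weights are given above. `Decimal` arithmetic keeps rounding from spoiling the order estimate:

```python
from decimal import Decimal, getcontext

getcontext().prec = 300                    # enough digits for three fast steps
two, three = Decimal(2), Decimal(3)
root = two.sqrt()

f = lambda x: x * x - two                  # f(x) = x^2 - 2
df = lambda x: two * x

x, errors = Decimal("1.5"), []
for _ in range(3):
    u = f(x) / df(x)
    y = x - two / three * u                                        # predictor
    z = x - (three * df(y) + df(x)) / (two * (three * df(y) - df(x))) * u
    x = z - f(z) / df(y)          # extra substep: one new evaluation of f only
    errors.append(abs(x - root))

# Estimated convergence order from the last three errors.
e1, e2, e3 = errors
p = float((e3.ln() - e2.ln()) / (e2.ln() - e1.ln()))
```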
All computations have been done using the software Maple 13, and we have included in Table 6, Table 7, Table 8 and Table 9 the same elements as for the fourth-order schemes.

7. Conclusions

We have developed an optimal fourth-order Jarratt-type family for solving nonlinear equations. High-order schemes with a simple algorithmic structure for solving systems of nonlinear equations are of great importance and in much demand. We have extended our optimal fourth-order Jarratt-type family to a sixth-order family by adding only one substep and one function evaluation, and we have extended both schemes to systems of nonlinear equations. We have chosen members of the family in terms of simplicity, and a vectorial stability analysis has been made, showing their good properties. The performance of these schemes has been checked on different test problems, concluding that they work better than several existing ones in the scalar case. In addition, we have also tested them in the multidimensional case on a few real-life problems. The scheme SF1 works best in all cases.

Author Contributions

Conceptualization, F.Z. and S.I.; software, A.C. and S.I.; validation, J.R.T.; formal analysis, A.C.; investigation, F.Z.; writing—original draft preparation, F.Z. and S.I.; writing—review and editing, J.R.T. All authors have read and agreed to the published version of the manuscript.

Funding

Research partially supported by Grant PGC2018-095896-B-C22 funded by MCIN/AEI/10.13039/5011000113033 by “ERDF A way to making Europe”, European Union.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the help of the reviewers. Their suggestions and comments have improved this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  2. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  3. Chun, C. Some variants of King’s fourth order family of methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 57–62. [Google Scholar] [CrossRef]
  4. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Pseudocomposition: A technique to design predictor-corrector methods for systems of nonlinear equations. Appl. Math. Comput. 2012, 218, 11496–11504. [Google Scholar] [CrossRef]
  5. Soleymani, F.; Khattri, S.; Vanani, S.K. The new classes of optimal Jarratt type fourth order methods. Appl. Math. Lett. 2012, 25, 847–853. [Google Scholar] [CrossRef] [Green Version]
  6. Junjua, M.; Akram, S.; Yasmin, N.; Zafar, F. A new Jarratt type fourth order method for solving system of nonlinear equations and applications. Appl. Math. 2015, 2015, 805278. [Google Scholar] [CrossRef]
  7. Xiao, X.Y. New techniques to develop higher order iterative methods for systems of nonlinear equations. Comput. Appl. Math. 2022, 41, 243. [Google Scholar] [CrossRef]
  8. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Design of High-Order Iterative Methods for Nonlinear Systems by Using Weight Function Procedure. Abstr. Appl. Anal. 2015, 2015, 289029. [Google Scholar] [CrossRef] [Green Version]
  9. Behl, R.; Sarria, I.; Gonzalez, R.; Magreñán, Á.A. Highly efficient family of iterative methods for solving nonlinear models. Comput. Appl. Math. 2019, 346, 110–132. [Google Scholar] [CrossRef]
  10. Behl, R.; Arora, H. CMMSE: A novel scheme having seventh-order convergence for nonlinear systems. Comput. Appl. Math. 2022, 404, 113301. [Google Scholar] [CrossRef]
  11. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
  12. Sharma, J.R.; Arora, H. Efficient higher order derivative-free multipoint methods with and without memory for systems of nonlinear equations. Int. J. Comput. Math. 2018, 95, 920–938. [Google Scholar] [CrossRef]
  13. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar] [CrossRef]
  14. Wang, X. Fixed-Point Iterative Method with Eighth-Order Constructed by Undetermined Parameter Technique for Solving Nonlinear Systems. Symmetry 2021, 13, 863. [Google Scholar] [CrossRef]
  15. Argyros, I.K.; Sharma, D.; Parhi, S.K.; Sunanda, S.K. On the convergence, dynamics and applications of a new class of nonlinear system solvers. Int. J. Appl. Comput. Math. 2020, 6, 142. [Google Scholar] [CrossRef]
  16. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. Comput. Appl. Math. 2022, 404, 113249. [Google Scholar] [CrossRef]
  17. Howk, C.L.; Hueso, J.L.; Martínez, E.; Teruel, C. A class of efficient high-order iterative methods with memory for nonlinear equations and their dynamics. Math. Meth. Appl. Sci. 2018, 41, 7263–7282. [Google Scholar] [CrossRef]
  18. Zhanlav, T.; Chun, C.; Otgondorj, K. Construction and dynamics of efficient high-order methods for nonlinear systems. Int. J. Comput. Meth. 2022. [Google Scholar] [CrossRef]
  19. Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412. [Google Scholar] [CrossRef]
  20. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the effect of the multidimensional weight functions on the stability of iterative processes. Comput. Appl. Math. 2022, 405, 113052. [Google Scholar] [CrossRef]
  21. Blanchard, P. Complex Analytic Dynamics on the Riemann Sphere. Bull. AMS 1984, 11, 85–141. [Google Scholar] [CrossRef] [Green Version]
  22. Robinson, R.C. An Introduction to Dynamical Systems, Continuous and Discrete; American Mathematical Society: Providence, RI, USA, 2012. [Google Scholar]
  23. Balaji, G.V.; Seader, J.D. Application of interval Newton’s method to chemical engineering problems. Reliab. Comput. 1995, 1, 215–223. [Google Scholar] [CrossRef]
  24. Shacham, M. An improved memory method for the solution of a nonlinear equation. Chem. Eng. Sci. 1989, 44, 1495–1501. [Google Scholar] [CrossRef]
  25. Edelstein–Keshet, L. Differential Calculus for the Life Sciences; University of British Columbia: Vancouver, BC, Canada, 2017. [Google Scholar]
  26. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar]
Figure 1. Dynamical planes of SF1 on $s(x)$. Blue, orange, green, and brown colors correspond to the basins of attraction of the four fixed points $(\pm 1, \pm 1)$. These points appear marked with white asterisks. (a) $a_3 = 0$. (b) $a_3 = 5$. (c) $a_3 = 15$.
Figure 2. Parameter line of SF1 on $s(x)$ for $a_3 \in \left[-\frac{81}{8}, 0\right]$. Red and black colors correspond to values of $a_3$ for which the scheme converges to the roots or not, respectively.
Figure 3. Dynamical planes of unstable elements of SF1 on $s(x)$. Blue, orange, green, and brown colors correspond to the basins of attraction of the four fixed points $(\pm 1, \pm 1)$. These points appear marked with white asterisks. (a) $a_3 = -9.4$. (b) $a_3 = -10.1$.
Figure 4. Parameter line of SF2 on $s(x)$ for (a) $0 < b_2 < \frac{27}{8}$ or (b) $\frac{27}{8} < b_2 < \frac{101}{2}$.
Figure 5. Dynamical planes of SF2 on $s(x)$ for unstable values of $b_2$. Blue, orange, green, and brown colors correspond to the basins of attraction of the four fixed points $(\pm 1, \pm 1)$. These points appear marked with white asterisks. (a) $b_2 = 45.5$. (b) $b_2 = 48.5$.
Figure 6. Dynamical planes of SF2 on $s(x)$ for stable values of $b_2$. Blue, orange, green, and brown colors correspond to the basins of attraction of the four fixed points $(\pm 1, \pm 1)$. These points appear marked with white asterisks. (a) $b_2 = 0$. (b) $b_2 = \frac{27}{8}$. (c) $b_2 = 20$. (d) $b_2 = 60$.
Figure 7. Dynamical planes of SF3 on $s(x)$ for stable and unstable cases. Blue, orange, green, and brown colors correspond to the basins of attraction of the four fixed points $(\pm 1, \pm 1)$. These points appear marked with white asterisks. Black regions are the basins of attraction of attracting strange fixed points. (a) $s_2 = 0$. (b) $s_2 = 10$. (c) $s_2 = -10$. (d) $s_2 = -5.5$.
Table 1. Values of $\Psi_j$ and $\Phi_j$ for (18).

j    $\Psi_j$              $\Phi_j$
0    1.39541700417470      1.74617564941508
1    1.74448285457357      2.03646911277919
2    2.06562343694053      2.23909778682659
3    2.46006784789125      2.46006784098093
Table 2. Comparison of fourth-order methods for Example 1.

Case  n  $X^{(n)}$  $\|F_1(X^{(n)})\|$   $\|X^{(n+1)} - X^{(n)}\|$  $p_n$     AEC
SF1   1  0.920218   5.40036 × 10^{-4}    2.20218 × 10^{-1}
SF1   2  0.905232   3.81192 × 10^{-6}    1.49859 × 10^{-3}          4.54336   5326.035
SF1   3  0.905158   1.20000 × 10^{-10}   1.192189 × 10^{-4}         1.720052  5326.0353
FS1   1  0.914870   8.82812 × 10^{-5}    2.148701 × 10^{-1}
FS1   2  0.905186   9.51730 × 10^{-7}    9.68459 × 10^{-3}          4.5433    5325.93
FS1   3  0.905158   1.10000 × 10^{-10}   4.68410 × 10^{-3}          1.720058  5325.93
Table 3. Comparison of fourth-order methods for Example 2.

Case  n  $X^{(n)}$  $\|F_2(X^{(n)})\|$   $\|X^{(n+1)} - X^{(n)}\|$  $p_n$      AEC
SF1   1  0.748682   4.32119 × 10^{-6}    4.23462 × 10^{-1}
SF1   2  0.748688   3.77602 × 10^{-24}   1.06282 × 10^{-5}          0.0003117  0.0015813
SF1   3  0.748688   6.47000 × 10^{-50}   1.40987 × 10^{-23}         3.8478801  0.0015813
FS1   1  0.748682   4.32119 × 10^{-6}    4.23462 × 10^{-1}
FS1   2  0.748688   3.77602 × 10^{-24}   1.062820 × 10^{-5}         0.0003117  0.0015813
FS1   3  0.748688   3.6400 × 10^{-50}    1.40987 × 10^{-23}         3.8478800  0.0015813
Table 4. Comparison of fourth-order methods for Example 3.

Case  n  $X^{(n)}$  $\|F_3(X^{(n)})\|$   $\|X^{(n+1)} - X^{(n)}\|$  $p_n$             AEC
SF1   1  0.003526   2.07661 × 10^{-9}    9.65854 × 10^{-2}
SF1   2  0.003526   9.81449 × 10^{-42}   8.12024 × 10^{-10}         9.0723 × 10^{-6}  8.2 × 10^{-6}
SF1   3  0.003526   5.00000 × 10^{-52}   3.58063 × 10^{-42}         4.005219          8.2329 × 10^{-6}
FS1   1  0.003414   2.07596 × 10^{-9}    9.72579 × 10^{-2}
FS1   2  0.003526   9.79852 × 10^{-10}   8.11738 × 10^{-10}         9.0722 × 10^{-6}  8.4 × 10^{-6}
FS1   3  0.003526   7.0000 × 10^{-52}    3.57364 × 10^{-42}         4.005217          8.4233 × 10^{-6}
Table 5. Comparison of fourth-order methods for Example 4.

Case  n  $X^{(n)}$  $\|F_4(X^{(n)})\|$    $\|X^{(n+1)} - X^{(n)}\|$  COC        AEC
SF1   1  88.809964  20066.06 × 10^{-1}    887.09 × 10^{-1}
SF1   2  26.893604  1917.5 × 10^{-1}      619.16 × 10^{-1}           0.00075    0.00043
SF1   3  61.916359  182.88 × 10^{-1}      191.75 × 10^{-1}           5.9091609  0.00043
FS1   1  10.041642  272.96817 × 10^{-1}   99.41225 × 10^{-1}
FS1   2  2.619390   20.18571 × 10^{-1}    74.22055 × 10^{-1}         0.00075    0.00043
FS1   3  1.696888   2.04109 × 10^{-2}     13.19775 × 10^{-1}         5.9091764  0.00043
Table 6. Comparison of sixth-order methods for Example 1.

Case  n  $X^{(n)}$  $\|F_1(X^{(n)})\|$   $\|X^{(n+1)} - X^{(n)}\|$  $p_n$        AEC
SF2   1  0.908308   1.157258 × 10^{-4}   2.08308 × 10^{-1}
SF2   2  0.905158   3.11000 × 10^{-10}   3.21515 × 10^{-3}          1.90372      91.1777
SF2   3  0.905158   3.10000 × 10^{-10}   2.40000 × 10^{-9}          3.783584     91.1777
SF3   1  0.908833   5.57417 × 10^{-5}    2.08832 × 10^{-1}
SF3   2  0.905158   4.40000 × 10^{-10}   3.76250 × 10^{-3}          1.97828      26.9455
SF3   3  0.905158   3.20000 × 10^{-10}   5.40000 × 10^{-9}          3.3497758    26.9455
JR1   1  0.906593   3.83351 × 10^{-4}    2.06592 × 10^{-1}
JR1   2  0.905158   8.90000 × 10^{-10}   4.06564 × 10^{-3}          2.23190      41.6145
JR1   3  0.905158   2.20000 × 10^{-10}   1.13700 × 10^{-8}          3.2552278    41.6145
JR2   1  0.908646   5.63716 × 10^{-5}    2.086463 × 10^{-1}
JR2   2  0.905158   1.80000 × 10^{-10}   3.64084 × 10^{-3}          1.92113      5.69106
JR2   3  0.905158   3.20000 × 10^{-10}   1.0000 × 10^{-9}           3.731752734  5.69106
Table 7. Comparison of sixth-order methods for Example 2.

Case  n  $X^{(n)}$  $\|F_2(X^{(n)})\|$   $\|X^{(n+1)} - X^{(n)}\|$  COC        AEC
SF2   1  0.748688   1.12585 × 10^{-8}    4.23456 × 10^{-1}
SF2   2  0.748688   4.00000 × 10^{-50}   2.90685 × 10^{-8}          0.5283261  0.000187
SF2   3  0.748688   6.58000 × 10^{-50}   4.0000 × 10^{-50}          5.8211310  0.0001870
SF3   1  0.748688   1.15735 × 10^{-8}    4.23456 × 10^{-1}
SF3   2  0.748688   6.00000 × 10^{-50}   2.76367 × 10^{-8}          0.0000052  0.000117
SF3   3  0.748688   4.00000 × 10^{-50}   7.00000 × 10^{-50}         5.8117811  0.0001176
JR1   1  0.748688   1.00615 × 10^{-8}    4.23456 × 10^{-1}
JR1   2  0.748688   8.00000 × 10^{-50}   2.35777 × 10^{-8}          0.0000041  0.000810
JR1   3  0.748688   4.00000 × 10^{-50}   1.20000 × 10^{-49}         5.6851001  0.0008101
JR2   1  0.748606   4.06025 × 10^{-3}    4.21306 × 10^{-1}
JR2   2  0.748688   3.55800 × 10^{-7}    2.62289 × 10^{-3}          0.0000054  0.000941
JR2   3  0.748688   6.31861 × 10^{-15}   3.50043 × 10^{-7}          5.82719    0.0009412
Table 8. Comparison of sixth-order methods for Example 3.

Case  n  $X^{(n)}$  $\|F_3(X^{(n)})\|$  $\|X^{(n+1)} - X^{(n)}\|$  $p_n$              AEC
SF2   1  0.003526   6.20 × 10^{-15}     9.66 × 10^{-2}
SF2   2  0.003526   8.0 × 10^{-52}      2.45 × 10^{-15}            2.73965 × 10^{-11}  2.7696
SF2   3  0.003527   6.0 × 10^{-52}      2.0 × 10^{-52}             2.7495174           2.76962
SF3   1  0.003526   6.20 × 10^{-15}     2.77 × 10^{-2}
SF3   2  0.003526   5.0 × 10^{-51}      2.45 × 10^{-15}            2.73939 × 10^{-11}  8.3119
SF3   3  0.003526   5.0 × 10^{-52}      3.0 × 10^{-52}             2.7144199           8.31199
JR1   1  0.003526   6.29 × 10^{-15}     9.73 × 10^{-2}
JR1   2  0.003526   5.0 × 10^{-52}      2.49 × 10^{-15}            2.78588 × 10^{-11}  7.7709
JR1   3  0.003526   3.0 × 10^{-52}      3.0 × 10^{-52}             2.7164170           7.77098
JR2   1  0.003526   6.20 × 10^{-15}     9.73 × 10^{-2}
JR2   2  0.003526   7.0 × 10^{-52}      2.45 × 10^{-15}            2.74005 × 10^{-11}  2.7680
JR2   3  0.003526   3.0 × 10^{-52}      1.0 × 10^{-42}             2.7495349           2.76800
Table 9. Comparison of sixth-order methods for Example 4.

Case  n  $X^{(n)}$  $\|F_4(X^{(n)})\|$    $\|X^{(n+1)} - X^{(n)}\|$  $p_n$           AEC
SF2   1  75875.046  234927.08 × 10^{-1}   758751.4 × 10^{-1}
SF2   2  16789.904  3430609.5 × 10^{-1}   590861.4 × 10^{-1}         6.3 × 10^{-10}  4.3 × 10^{-10}
SF2   3  3716.106   167959.14 × 10^{-1}   130737.9 × 10^{-1}         5.63157         4.3 × 10^{-10}
SF3   1  205.297    107037.10 × 10^{-1}   2052.97 × 10^{-1}
SF3   2  34.135     3215.6015 × 10^{-1}   1711.64 × 10^{-1}          9.6 × 10^{-8}   3.5 × 10^{-8}
SF3   3  3.862      110.1805 × 10^{-1}    302.72 × 10^{-1}           9.52711         3.5 × 10^{-8}
JR1   1  44.785     5208.084 × 10^{-1}    447.85 × 10^{-1}
JR1   2  9.297      258.2679 × 10^{-1}    354.88 × 10^{-1}           8.0 × 10^{-5}   4.0 × 10^{-5}
JR1   3  1.855      8.724381 × 10^{-1}    7.4423 × 10^{-1}           6.71356         4.0 × 10^{-5}
JR2   1  1.686      15.85787 × 10^{-1}    15.857 × 10^{-1}
JR2   2  1.697      3.119860 × 10^{-9}    2.2004 × 10^{-5}           1.16981         7.3 × 10^{-8}
JR2   3  1.697      3.1545310 × 10^{-33}  1.8000 × 10^{-17}          8.26277         7.3 × 10^{-8}
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
