Article

Visual Analysis of Dynamics Behaviour of an Iterative Method Depending on Selected Parameters and Modifications

by Ireneusz Gościniak * and Krzysztof Gdawiec
Institute of Computer Science, University of Silesia, Bȩdzińska 39, 41-200 Sosnowiec, Poland
* Author to whom correspondence should be addressed.
Entropy 2020, 22(7), 734; https://doi.org/10.3390/e22070734
Submission received: 5 May 2020 / Revised: 8 June 2020 / Accepted: 29 June 2020 / Published: 2 July 2020

Abstract: There is a huge group of algorithms described in the literature that iteratively find solutions of a given equation, and most of them require tuning. This article presents root-finding algorithms based on the Newton–Raphson method, which iteratively finds the solutions and requires tuning. The proposed modification of the algorithm implements the best position of a particle, similarly to the particle swarm optimisation algorithms. The proposed approach allows visualising the impact of the algorithm's elements on its complex behaviour. Moreover, instead of the standard Picard iteration, various feedback iteration processes are used in this research. The presented examples and the conducted discussion of the algorithm's operation make it possible to understand the influence of the proposed modifications on the algorithm's behaviour. Understanding the impact of the proposed modification on the algorithm's operation can be helpful in applying it to other algorithms. The obtained images also have potential artistic applications.

1. Introduction

Most of the algorithms determining the local extremes, and especially the minimum, of a function $F$ are based on determining the value of the function or its gradient. In such an algorithm, the new position of the particle is calculated based on its previous position and the gradient, which determines the direction of the movement [1]. An example of this approach is the gradient descent method:

$$z_i' = z_i - \gamma \nabla F(z_i), \tag{1}$$

where $-\nabla F$ is the negative gradient, $\gamma$ is the step size, $z_i'$ is the current position of the $i$th particle in a $D$-dimensional environment, and $z_i$ is the previous position of the $i$th particle. There are many algorithms implementing modifications of this method, and its operation is well described in the literature.
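To fix ideas, below is a minimal C++ sketch of the update (1); the test function $F(x, y) = x^2 + y^2$, the step size, and the iteration count are illustrative assumptions, not values taken from the article.

```cpp
// Gradient descent (1) on the assumed test function F(x, y) = x^2 + y^2.
#include <array>
#include <cstdio>

using Vec2 = std::array<double, 2>;

Vec2 gradF(const Vec2& z) {          // gradient of F(x, y) = x^2 + y^2
    return { 2.0 * z[0], 2.0 * z[1] };
}

int main() {
    Vec2 z = { 1.0, -2.0 };          // starting position z_0
    const double gamma = 0.1;        // step size
    for (int n = 0; n < 50; ++n) {   // z' = z - gamma * grad F(z)
        Vec2 g = gradF(z);
        z = { z[0] - gamma * g[0], z[1] - gamma * g[1] };
    }
    std::printf("z = (%f, %f)\n", z[0], z[1]);  // approaches the minimum (0, 0)
}
```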
Evolutionary algorithms are another group of algorithms used to solve optimisation problems, including minimisation. As shown in [2], the analysis of particle behaviour in the evolutionary algorithms is a very complex problem. In practice, many algorithms similar to the evolutionary algorithms are used [3]. Particularly noteworthy is the group of particle swarm optimisation algorithms (PSO) [4]. The complex nature of the interaction between particles does not allow us to determine their impact on the operation of the algorithm in a simple way. The behaviour of particles in the PSO algorithm can be described by the following equation:
$$z_i' = z_i + v_i', \tag{2}$$

where $z_i'$ is the current position of the $i$th particle in a $D$-dimensional environment, $z_i$ is the previous position of the $i$th particle, and $v_i'$ is the current velocity of the $i$th particle in a $D$-dimensional environment, given by the following formula:

$$v_i' = \omega v_i + \eta_1 r_1 (z_{pbest_i} - z_i) + \eta_2 r_2 (z_{gbest} - z_i), \tag{3}$$

where $v_i$ is the previous velocity of the $i$th particle, $\omega$ is the inertia weight ($\omega \in [0, 1]$), $\eta_1, \eta_2$ are acceleration constants ($\eta_1, \eta_2 \in (0, 1]$), $r_1, r_2$ are random numbers generated uniformly in the $[0, 1]$ interval, $z_{pbest_i}$ is the best position of the $i$th particle, and $z_{gbest}$ is the global best position of the particles. The best position of the particle and the best global position are calculated in each iteration. They have a significant impact on the behaviour of not only the particle, but the entire swarm, directing them towards the best solution. The particle behaviour is determined by three parameters: the inertia weight ($\omega$) and two acceleration constants ($\eta_1$, $\eta_2$). The PSO algorithm belongs to the group of stochastic algorithms for which convergence can be proved [5]. The $\omega$ parameter determines the influence of the particle velocity at the moment $t - 1$ on its velocity at the moment $t$. On the other hand, the $\eta$ parameters affect the changes in the particle acceleration at the time $t$. For $\omega < 1$ the particles slow down, eventually reaching zero velocity. For a value of $\omega$ close to 0.7 and values of $\eta_1$ and $\eta_2$ close to 1.5, the PSO algorithm has good convergence. The cooperation of particles releases an adaptive mechanism that drives the particle and determines its dynamics. The selection of proper values of the inertia weight and the acceleration constants is a very important problem for the algorithm [6]—the algorithm tuning is intuitive and based on the developer's experience.
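The following is a minimal C++ sketch of the updates (2)–(3) for a single particle; in a full PSO, $z_{pbest}$ and $z_{gbest}$ would be re-evaluated by the whole swarm in every iteration, so the fixed values used here are purely illustrative assumptions.

```cpp
// One particle performing the PSO updates (2)-(3); pbest and gbest are
// assumed constants here, whereas a real swarm would keep updating them.
#include <array>
#include <random>

using Vec2 = std::array<double, 2>;

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> rnd(0.0, 1.0);

    const double omega = 0.7, eta1 = 1.5, eta2 = 1.5;  // tuning named in the text
    Vec2 z = { 1.0, 1.0 }, v = { 0.0, 0.0 };
    Vec2 pbest = { 0.5, 0.5 }, gbest = { 0.0, 0.0 };   // assumed best positions

    for (int n = 0; n < 100; ++n) {
        for (int d = 0; d < 2; ++d) {
            double r1 = rnd(gen), r2 = rnd(gen);       // fresh r1, r2 per step
            v[d] = omega * v[d] + eta1 * r1 * (pbest[d] - z[d])
                                + eta2 * r2 * (gbest[d] - z[d]);  // (3)
            z[d] += v[d];                                         // (2)
        }
    }
}
```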
Another important problem that arises in practice is finding the solutions of a system of non-linear equations. Due to the many difficulties in solving such systems analytically, many numerical methods have been developed. The most famous method of this type is Newton's method [7]. In recent years, methods based on different approaches have been proposed. In [8], Goh et al., inspired by the simplex algorithm, proposed a partial Newton's method. Although this method is based on Newton's method, it has slower convergence. Another method based on Newton's method was proposed in [9]. In this method, Saheya et al. revise the Jacobian matrix by a rank-one matrix in each iteration. Sharma et al. in [10] introduced a three-step iterative scheme that is based on the Newton–Traub method. A method based on a Jarratt-like method was proposed by Abbasbandy et al. in [11]. This method uses only one LU factorisation, which reduces the computational cost. An acceleration technique that can be used in many iterative methods was proposed in [12]. Moreover, the authors introduced a new family of two-step third-order methods for solving systems of non-linear equations. Recently, Alqahtani et al. proposed a three-step family of iterative methods [13]. This family is a generalisation of the fourth-order King's family to the multidimensional case. Awwal et al. in [14] proposed a two-step iterative algorithm that is based on a projection technique and solves systems of monotone non-linear equations with convex constraints.
In addition to the methods based on classical root-finding methods (Newton, Traub, Jarratt, etc.), in the literature we can find approaches based on the PSO algorithm and other metaheuristic algorithms. In [15], Wang et al. proposed a modification of the PSO algorithm in which the search dynamics are controlled by a traditional controller, such as the PID (Proportional–Integral–Derivative) controller. A combination of the Nelder–Mead simplex method and the PSO algorithm was proposed in [16]. To overcome the problems of being trapped in local minima and of slow convergence, Jaberipour et al. proposed a new way of updating the particle's position [17]. Another modification of the PSO algorithm for solving systems of non-linear equations was proposed by Li et al. in [18]. The authors proposed: (1) a new way of inertia weight selection, (2) dynamic conditions of stopping the iterations, (3) calculation of the standardised number of restarting times based on reliability theory. Ibrahim and Tawhid in [19] introduced a hybridisation of cuckoo search and PSO for solving systems of non-linear equations, and in [20] a hybridisation of differential evolution and the monarch butterfly optimisation. Recently, Liao et al. in [21] introduced a decomposition-based differential evolution with re-initialisation to locate multiple roots of non-linear equations systems. A study on different metaheuristic methods, namely PSO, the firefly algorithm and the cuckoo search algorithm, applied to the task of solving systems of non-linear equations was conducted in [22]. The study showed that the PSO algorithm obtains better results than the other two algorithms.
Dynamic analysis is a separate field of research using numerous investigation methods [23]. Among them, there is the graphical visualisation of dynamics [24]. In the analysis of the complex polynomial root-finding process, polynomiography is a very popular method of visualising the dynamics [25,26,27]. In this method, the number of iterations required to obtain the solution is visualised. The generated images show the dynamics of the root-finding process using some colour map. Today, the study of the dynamics of root-finding methods with polynomiography is an essential part of the modern analysis of the quality of these methods [28]. It turns out that this form of dynamics study is a powerful tool for selecting the optimal parameters that appear in the iterative methods for solving non-linear equations and for comparing these methods [29,30]. The visualisation methods proposed in this work will also be called polynomiography because of their similarity to the original polynomiography, even though we use arbitrary functions, not just polynomials.
This article proposes modifications of a root-finding algorithm based on Newton's method, obtained by using the inertia weight and the acceleration constant, as well as the best position of the particle and the implementation of various iteration methods. The conducted research serves to visualise the impact of the proposed modifications on the behaviour of the particle. The visualisations allow not only discussing the behaviour of a particle, but also presenting images of aesthetic character and artistic meaning [31,32].
Nammanee et al. [33] introduced a strong convergence theorem for the modified Noor iterations in the framework of uniformly smooth Banach spaces. Their results suggest that the proposed modifications using the best particle position will be an effective way to increase the effectiveness of the algorithm's operation.
The rest of the paper is organised as follows. Section 2 introduces a root-finding algorithm that is based on the Newton–Raphson method. This method is used to illustrate the influence of tuning on its behaviour. Next, Section 3 introduces iteration processes known in the literature. Moreover, based on these iterations, we propose iterations that use the best position of the particle. Section 4 presents the algorithms for creating polynomiographs. Then, in Section 5, a discussion of the research results, illustrated by the obtained polynomiographs, is carried out. Finally, Section 6 gives short concluding remarks.

2. The Algorithm

Numerous methods can be used to solve a system of $D$ non-linear equations with $D$ variables. One of the best known and intensively studied methods is the Newton–Raphson method [7]. Let $f_1, f_2, \ldots, f_D : \mathbb{R}^D \to \mathbb{R}$ and let

$$F(z^1, z^2, \ldots, z^D) = \begin{bmatrix} f_1(z^1, z^2, \ldots, z^D) \\ f_2(z^1, z^2, \ldots, z^D) \\ \vdots \\ f_D(z^1, z^2, \ldots, z^D) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \mathbf{0}. \tag{4}$$

Let $F : \mathbb{R}^D \to \mathbb{R}^D$ be a continuous function which has continuous first partial derivatives. Now, to solve the equation $F(z) = \mathbf{0}$, where $z = [z^1, z^2, \ldots, z^D]$, by using the Newton–Raphson method, a starting point $z_0 = [z_0^1, z_0^2, \ldots, z_0^D]$ is selected and then the following iterative formula is used:

$$z_{n+1} = z_n - J^{-1}(z_n) F(z_n), \quad n = 0, 1, 2, \ldots, \tag{5}$$

where

$$J(z) = \begin{bmatrix} \frac{\partial f_1}{\partial z^1}(z) & \frac{\partial f_1}{\partial z^2}(z) & \cdots & \frac{\partial f_1}{\partial z^D}(z) \\ \frac{\partial f_2}{\partial z^1}(z) & \frac{\partial f_2}{\partial z^2}(z) & \cdots & \frac{\partial f_2}{\partial z^D}(z) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_D}{\partial z^1}(z) & \frac{\partial f_D}{\partial z^2}(z) & \cdots & \frac{\partial f_D}{\partial z^D}(z) \end{bmatrix} \tag{6}$$

is the Jacobian matrix of $F$ and $J^{-1}$ is its inverse.
By taking $N(z) = -J^{-1}(z) F(z)$, the Newton–Raphson method can be described as follows:

$$z_{n+1} = z_n + N(z_n), \quad n = 0, 1, 2, \ldots. \tag{7}$$
To solve (4), the following algorithm can be used:

$$z_{n+1} = z_n + v_{n+1}, \tag{8}$$

where $z_0 \in \mathbb{R}^D$ is a starting position, $v_0 = [0, 0, \ldots, 0]$ is a starting velocity, $v_{n+1} = [v_{n+1}^1, v_{n+1}^2, \ldots, v_{n+1}^D]$ is the current velocity of the particle, and $z_n = [z_n^1, z_n^2, \ldots, z_n^D]$ is the previous position of the particle. The algorithm sums the position of the particle $z_n$ with its current velocity $v_{n+1}$. The current velocity of the particle is determined by the inertia weight and the acceleration constant:

$$v_{n+1} = \omega v_n + \eta N(z_n), \tag{9}$$

where $v_n$ is the previous velocity of the particle, $\omega \in [0, 1)$ is the inertia weight, and $\eta \in (0, 1]$ is the acceleration constant. Let us notice that (8) reduces to the classical Newton's method [25] if $\omega = 0$ and $\eta = 1$, and to the relaxed Newton's method [34] if $\omega = 0$ and $\eta \neq 1$. Moreover, we can observe that the proposed method joins the PSO algorithm and Newton's method. The acceleration term in the PSO algorithm is responsible for pointing the particle in the direction of the best solution. In our method, as the acceleration term we use the direction used in Newton's method, which is responsible for moving the current point in the direction of the solution. Thus, in each iteration we are moving towards the solution, and eventually we will stop at the solution, because for $\omega < 1$ the particle slows down and $N(z_n) = \mathbf{0}$ when we reach the solution.
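A minimal C++ sketch of the update (8)–(9) is given below, assuming the system $f_1 = x^3 - 3xy^2 - 1$, $f_2 = 3x^2y - y^3$ used later in Section 5; the tuning values $\omega = 0.3$, $\eta = 0.7$ and the starting point are illustrative only.

```cpp
// The update (8)-(9) for f1 = x^3 - 3xy^2 - 1, f2 = 3x^2y - y^3.
#include <array>
#include <cmath>
#include <cstdio>

using Vec2 = std::array<double, 2>;

Vec2 N(const Vec2& z) {                         // N(z) = -J^{-1}(z) F(z)
    double x = z[0], y = z[1];
    double f1 = x*x*x - 3.0*x*y*y - 1.0;
    double f2 = 3.0*x*x*y - y*y*y;
    double a = 3.0*x*x - 3.0*y*y, b = -6.0*x*y; // J = [a b; -b a] for this F
    double det = a*a + b*b;                     // may vanish at critical points;
                                                // a full implementation would guard this
    return { -(a*f1 - b*f2) / det, -(b*f1 + a*f2) / det };
}

int main() {
    const double omega = 0.3, eta = 0.7;        // illustrative tuning
    Vec2 z = { 0.8, 0.4 }, v = { 0.0, 0.0 };
    for (int n = 0; n < 256; ++n) {
        Vec2 Nz = N(z);
        v = { omega*v[0] + eta*Nz[0], omega*v[1] + eta*Nz[1] };  // (9)
        z = { z[0] + v[0], z[1] + v[1] };                        // (8)
        if (std::hypot(v[0], v[1]) < 0.01) break;  // ||z_{n+1} - z_n|| < eps
    }
    std::printf("approximate root: (%f, %f)\n", z[0], z[1]);
}
```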
The implementation of the inertia weight ($\omega$) and the acceleration constant ($\eta$) allows controlling the particle dynamics over a wider range [4]. The selection of the values of these parameters is not deterministic. It depends on the problem being solved and the selected iteration method. These values are selected by tuning—it is a kind of art. The relationships between the $\omega$ and $\eta$ parameters have a complex nature, and a change of these parameters influences the particle's dynamics, which can be visualised using the algorithms described in Section 4.
A very important feature of numerical methods for solving systems of equations is the order of convergence of the method. For instance, the order of convergence of Newton's method is 2 [25]. In the case of the methods that are based on the Newton, Traub, Jarratt, etc. methods, the order of convergence is proved (see for instance [8,9,10,11,12,13,14]), but in the case of the methods that are based on metaheuristic algorithms, like the method presented in this paper, the order of convergence is not derived (see for instance [15,16,17,18,19,20,21,22]). This is due to the fact that this order varies greatly for different values of the parameters used in the metaheuristic algorithms. We will show this on a very simple example. In this example, to approximate the order of convergence we will use the computational order of convergence introduced in [35], i.e.,

$$p \approx \frac{\ln \left( \| z_{n+1} - z_n \| / \| z_n - z_{n-1} \| \right)}{\ln \left( \| z_n - z_{n-1} \| / \| z_{n-1} - z_{n-2} \| \right)}. \tag{10}$$
Let us consider $f_1(x, y) = x^3 - 3xy^2 - 1$, $f_2(x, y) = 3x^2y - y^3$. We will solve system (4) by using (8) with various values of $\omega$ and $\eta$ in (9) and three different starting points $z_0$. The obtained results are presented in Table 1. For $\omega = 0$ and $\eta = 1$ (the classic Newton's method) we see that the method obtains the second order of convergence, which is in accordance with the result proved in the literature. When we take a non-zero inertia, then we see that for a fixed starting point and varying $\omega$ and $\eta$ we obtain very different values of the order. The same situation happens when we fix $\omega$ and $\eta$ and change the starting points.
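As an aside, a minimal C++ sketch of how (10) can be evaluated from four consecutive iterates is shown below; the Euclidean distance and the 2D setting match the example above, and no guard against degenerate (equal) iterates is included.

```cpp
// Computational order of convergence (10) from four consecutive iterates.
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

double dist(const Vec2& a, const Vec2& b) {
    return std::hypot(a[0] - b[0], a[1] - b[1]);
}

// p ~ ln(||z3 - z2|| / ||z2 - z1||) / ln(||z2 - z1|| / ||z1 - z0||)
double coc(const Vec2& z0, const Vec2& z1, const Vec2& z2, const Vec2& z3) {
    return std::log(dist(z3, z2) / dist(z2, z1))
         / std::log(dist(z2, z1) / dist(z1, z0));
}
```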
Another problem studied in the context of solving systems of non-linear equations with iterative methods is the problem of strange (extraneous) fixed points, i.e., fixed points which are found by the method and are not the roots of the considered system [36]. Let us assume that (8) has converged to a solution. In such a case, $v_n$ in (9) is equal to $\mathbf{0}$. Therefore, method (8) behaves like the relaxed Newton's method. Because the relaxed Newton's method has no strange fixed points [25], the proposed method also has no strange fixed points.

3. Iteration Processes

Picard’s iteration [37] is widely used in many iterative algorithms. This iteration can be described as follows:
z n + 1 = T ( z n ) .
Notice that (8) uses the Picard iteration, where T : R D R D takes the form
T ( z n ) = z n + v n + 1 .
In the literature, many other iteration processes can be found. The three most basic and most commonly implemented algorithms for the approximate finding of fixed points of mappings are the following:
  • The Mann iteration [38]:

    $$z_{n+1} = (1 - \alpha_n) z_n + \alpha_n T(z_n), \quad n = 0, 1, 2, \ldots, \tag{13}$$

    where $\alpha_n \in (0, 1]$ for all $n \in \mathbb{N}$; for $\alpha_n = 1$ the Mann iteration reduces to the Picard iteration.
  • The Ishikawa iteration [39]:

    $$\begin{aligned} z_{n+1} &= (1 - \alpha_n) z_n + \alpha_n T(u_n), \\ u_n &= (1 - \beta_n) z_n + \beta_n T(z_n), \end{aligned} \quad n = 0, 1, 2, \ldots, \tag{14}$$

    where $\alpha_n \in (0, 1]$ and $\beta_n \in [0, 1]$ for all $n \in \mathbb{N}$; the Ishikawa iteration reduces to the Mann iteration when $\beta_n = 0$, and to the Picard iteration when $\alpha_n = 1$ and $\beta_n = 0$.
  • The Agarwal iteration [40] (S-iteration):

    $$\begin{aligned} z_{n+1} &= (1 - \alpha_n) T(z_n) + \alpha_n T(u_n), \\ u_n &= (1 - \beta_n) z_n + \beta_n T(z_n), \end{aligned} \quad n = 0, 1, 2, \ldots, \tag{15}$$

    where $\alpha_n \in [0, 1]$ and $\beta_n \in [0, 1]$ for all $n \in \mathbb{N}$; let us notice that the S-iteration reduces to the Picard iteration when $\alpha_n = 0$, or when $\alpha_n = 1$ and $\beta_n = 0$.
A survey of various iteration processes and their dependencies can be found in [41].
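As a concrete reading of (13)–(15), the following C++ sketch implements one step of each iteration for a generic mapping $T : \mathbb{R}^2 \to \mathbb{R}^2$; treating $T$ as a `std::function` argument is our own illustrative choice, not the article's implementation.

```cpp
// One step of the Mann (13), Ishikawa (14) and Agarwal (15) iterations
// for a generic mapping T, e.g., the mapping (12).
#include <array>
#include <functional>

using Vec2 = std::array<double, 2>;
using Map  = std::function<Vec2(const Vec2&)>;

Vec2 mix(double a, const Vec2& p, const Vec2& q) {   // (1 - a)*p + a*q
    return { (1.0 - a) * p[0] + a * q[0], (1.0 - a) * p[1] + a * q[1] };
}

Vec2 mann(const Map& T, const Vec2& z, double alpha) {
    return mix(alpha, z, T(z));                      // (13)
}

Vec2 ishikawa(const Map& T, const Vec2& z, double alpha, double beta) {
    Vec2 u = mix(beta, z, T(z));                     // reference point u_n
    return mix(alpha, z, T(u));                      // (14)
}

Vec2 agarwal(const Map& T, const Vec2& z, double alpha, double beta) {
    Vec2 u = mix(beta, z, T(z));                     // reference point u_n
    return mix(alpha, T(z), T(u));                   // (15)
}
```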
An additional particle (a sample in the solution space) $u_n$, acting as a reference point, is used in the Ishikawa and Agarwal iterations. In this case, each of these iterations performs two steps. The reference point $u_n$ is set in the first step, and then a new particle position $z_{n+1}$ is computed using the reference point. This allows better control of the particle's motion. In this paper, a modification of the above-discussed iterations is proposed by applying the best position of the reference point and the best position of the particle—similarly to the PSO algorithm.
The positions of the reference point $u_n$ and the particle $z_n$ are evaluated and compared to $u_{best}$ and $z_{best}$. In the solution space, a lower value of $\|F(q)\|$ determines the better position of the reference point or the particle ($q$). Then, $u_{best}$ and $z_{best}$ are updated if they are worse than $u_n$ and $z_n$, respectively. The best position of the reference point $u_{best}$ or the position of the reference point $u_n$, and the best position of the particle $z_{best}$ or the position of the particle $z_n$ (depending on the result of the evaluation), are used to determine the new position of the particle $z_{n+1}$.
As a result of this modification, the next position of the particle $z_{n+1}$ in Equations (13)–(15) is determined by the following formulas (a sketch of the best-position selection is given after the list):
  • The modified Mann iteration:

    $$z_{n+1} = (1 - \alpha_n) z_{\{n|best\}} + \alpha_n T(z_n), \tag{16}$$

  • The modified Ishikawa iteration:

    $$z_{n+1} = (1 - \alpha_n) z_{\{n|best\}} + \alpha_n T(u_{\{n|best\}}), \tag{17}$$

  • The modified Agarwal iteration:

    $$z_{n+1} = (1 - \alpha_n) T(z_{\{n|best\}}) + \alpha_n T(u_{\{n|best\}}). \tag{18}$$
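Below is a minimal C++ sketch of this best-position bookkeeping for $z_{\{n|best\}}$ (the same scheme applies to $u_{\{n|best\}}$); evaluating positions through $\|F\|$ of the $c^3 - 1$ system is our assumption for illustration.

```cpp
// Selection of z_{n|best} used in (16)-(18): the stored best position is
// updated when the current one evaluates better under ||F||.
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

double normF(const Vec2& z) {  // ||F(z)|| for f1 = x^3 - 3xy^2 - 1, f2 = 3x^2y - y^3
    double x = z[0], y = z[1];
    double f1 = x*x*x - 3.0*x*y*y - 1.0;
    double f2 = 3.0*x*x*y - y*y*y;
    return std::hypot(f1, f2);
}

// Returns the position to use in (16)-(18) and refreshes the stored best.
Vec2 selectBest(const Vec2& zn, Vec2& zbest) {
    if (normF(zn) < normF(zbest))
        zbest = zn;            // the new position is better: remember it
    return zbest;              // equals the better of z_n and the old z_best
}
```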
All the iterations presented so far use only one mapping, but in fixed point theory there exist iterations that use several mappings and are applied to find common fixed points of those mappings. Examples of this type of iteration are the following:
  • The Das–Debata iteration [42]:

    $$\begin{aligned} z_{n+1} &= (1 - \alpha_n) z_n + \alpha_n T_2(u_n), \\ u_n &= (1 - \beta_n) z_n + \beta_n T_1(z_n), \end{aligned} \quad n = 0, 1, 2, \ldots, \tag{19}$$

    where $\alpha_n \in (0, 1]$ and $\beta_n \in [0, 1]$ for all $n \in \mathbb{N}$; for $T_1 = T_2$ the Das–Debata iteration reduces to the Ishikawa iteration.
  • The Khan–Cho–Abbas iteration [43]:

    $$\begin{aligned} z_{n+1} &= (1 - \alpha_n) T_1(z_n) + \alpha_n T_2(u_n), \\ u_n &= (1 - \beta_n) z_n + \beta_n T_1(z_n), \end{aligned} \quad n = 0, 1, 2, \ldots, \tag{20}$$

    where $\alpha_n \in (0, 1]$ and $\beta_n \in [0, 1]$ for all $n \in \mathbb{N}$; let us notice that the Khan–Cho–Abbas iteration reduces to the Agarwal iteration when $T_1 = T_2$.
  • The generalised Agarwal iteration [43]:

    $$\begin{aligned} z_{n+1} &= (1 - \alpha_n) T_3(z_n) + \alpha_n T_2(u_n), \\ u_n &= (1 - \beta_n) z_n + \beta_n T_1(z_n), \end{aligned} \quad n = 0, 1, 2, \ldots, \tag{21}$$

    where $\alpha_n \in (0, 1]$ and $\beta_n \in [0, 1]$ for all $n \in \mathbb{N}$; moreover, the generalised Agarwal iteration reduces to the Khan–Cho–Abbas iteration when $T_1 = T_3$, and to the Agarwal iteration when $T_1 = T_2 = T_3$.
Iterations described by Equations (19)–(21) can also be modified by introducing the best position of the reference point and the best position of the particle. As a result of this modification, the equations describing $z_{n+1}$ take the following form:
  • The modified Das–Debata iteration:

    $$z_{n+1} = (1 - \alpha_n) z_{\{n|best\}} + \alpha_n T_2(u_{\{n|best\}}), \tag{22}$$

  • The modified Khan–Cho–Abbas iteration:

    $$z_{n+1} = (1 - \alpha_n) T_1(z_{\{n|best\}}) + \alpha_n T_2(u_{\{n|best\}}), \tag{23}$$

  • The modified generalised Agarwal iteration:

    $$z_{n+1} = (1 - \alpha_n) T_3(z_{\{n|best\}}) + \alpha_n T_2(u_{\{n|best\}}). \tag{24}$$
For all iterations, $z_n$ is used in the base type of iteration (Algorithm 1), $u_{best}$ is used in the proposed algorithm with the best reference point (Algorithm 2), $z_{best}$ is used in the proposed algorithm with the best particle position (Algorithm 3), and both $u_{best}$ and $z_{best}$ are used in the proposed Algorithm 4. A variety of different iteration processes is implemented in our algorithms. In the iterations, we use (12) as the mapping, with different values of the $\omega$ and $\eta$ parameters.
Algorithm 1: Visualisation of the dynamics—the base algorithm ($I$)
Algorithm 2: Visualisation of the dynamics with the best reference point ($II$)
Algorithm 3: Visualisation of the dynamics with the best particle ($III$)
Algorithm 4: Visualisation of the dynamics with both the best reference point and particle ($IV$)

4. Visualisation of the Dynamics

A method very similar to polynomiography [44] is used to visualise the dynamics of the proposed algorithms. The iteration methods presented in Section 3 are implemented in the algorithms, and proper values of the parameters are selected: $\omega$, $\eta$ for a single mapping $T$, or $\omega_1, \omega_2, \omega_3$ and $\eta_1, \eta_2, \eta_3$ for $T_1, T_2, T_3$ (depending on the chosen iteration). The maximum number of iterations $m$ which the algorithm should make, the accuracy of the computations $\varepsilon$, and a colouring function $C : \mathbb{N} \to \{0, 1, \ldots, 255\}^3$ are set. Then, each $z_0$ in the solution space $A$ is used in the algorithm. The iterations of the algorithm proceed until the convergence criterion

$$\| z_{n+1} - z_n \| < \varepsilon \tag{25}$$

is satisfied or the maximum number of iterations is reached. A colour corresponding to the performed number of iterations is assigned to $z_0$ using the colouring function $C$. The iterations described in Section 3 and their modifications using the best particle position and the best reference point position allow proposing five algorithms. Algorithm 1 (denoted in the text by $I$) is the base algorithm, without modification. Algorithm 2 (denoted in the article by $II$) performs the selection of the best position of the reference point. Algorithm 3 (denoted by $III$) performs the selection of the best particle. Algorithm 4 (denoted by $IV$) combines the modifications introduced in Algorithms 2 and 3. Algorithm 5 is a modification of Algorithm 3, needed because the Mann iteration does not use a reference point; it is denoted by $III_M$.
Algorithm 5: Visualisation of the dynamics with the best particle for the Mann iteration ($III_M$)
The iteration method for a given algorithm is denoted by $I_q^a$, where $q$ is a vector of the parameters of the iteration and $a$ is one of the algorithms $\{I, II, III, III_M, IV\}$ or $r$—the part of the iteration determining the position of the reference point.
The domain (solution space) $A$ is defined in a $D$-dimensional space, thus the algorithms return polynomiographs in this space. For $D = 2$ a single image is obtained, and for $D > 2$ a two-dimensional cross-section of $A$ can be selected for visualisation.
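The following C++ sketch renders the iteration-count data behind such a polynomiograph for $D = 2$, using the base method (8)–(9) on $p(c) = c^3 - 1$ over $A = [-2, 2]^2$; the tuning values are illustrative assumptions, and mapping the returned counts to colours via the colouring function $C$ is left out.

```cpp
// Visualisation loop of Section 4: every pixel is a starting point z_0,
// and the stored value is the number of performed iterations.
#include <array>
#include <cmath>
#include <vector>

using Vec2 = std::array<double, 2>;

// One step of (8)-(9) for f1 = x^3 - 3xy^2 - 1, f2 = 3x^2y - y^3.
Vec2 step(const Vec2& z, Vec2& v, double omega, double eta) {
    double x = z[0], y = z[1];
    double f1 = x*x*x - 3.0*x*y*y - 1.0, f2 = 3.0*x*x*y - y*y*y;
    double a = 3.0*x*x - 3.0*y*y, b = -6.0*x*y;      // J = [a b; -b a]
    double det = a*a + b*b;
    Vec2 N = { -(a*f1 - b*f2) / det, -(b*f1 + a*f2) / det };
    v = { omega*v[0] + eta*N[0], omega*v[1] + eta*N[1] };        // (9)
    return { z[0] + v[0], z[1] + v[1] };                         // (8)
}

std::vector<int> render(int width, int height, int m, double eps) {
    std::vector<int> iters(width * height);
    for (int j = 0; j < height; ++j)
        for (int i = 0; i < width; ++i) {
            // map pixel (i, j) to a starting point z_0 in A = [-2, 2]^2
            Vec2 z = { -2.0 + 4.0 * i / (width - 1),
                       -2.0 + 4.0 * j / (height - 1) };
            Vec2 v = { 0.0, 0.0 };
            int n = 0;
            for (; n < m; ++n) {
                Vec2 zn = step(z, v, 0.3, 0.7);      // illustrative tuning
                bool done = std::hypot(zn[0] - z[0], zn[1] - z[1]) < eps; // (25)
                z = zn;
                if (done) break;
            }
            iters[j * width + i] = n;                // colour via C(n) afterwards
        }
    return iters;
}

int main() {
    auto img = render(800, 800, 256, 0.01);          // parameters from Section 5
    (void)img;
}
```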

5. Discussion on the Research Results

In this section, we present and discuss the obtained results of visualising the dynamics of the method introduced in Section 2, together with the various iteration methods presented in Section 3, using the algorithms described in Section 4. Let $\mathbb{C}$ be the field of complex numbers with a complex number $c = x + iy$, where $i = \sqrt{-1}$ and $x, y \in \mathbb{R}$. In the experiments, we want to solve the following non-linear equation:

$$p(c) = 0, \tag{26}$$

where $p(c) = c^3 - 1$.
This equation can be written in the following form:

$$0 = c^3 - 1 = (x + iy)^3 - 1 = x^3 - 3xy^2 - 1 + (3x^2y - y^3)i. \tag{27}$$

Now, Equation (27) can be transformed into a system of two equations with two variables, i.e.,

$$F(x, y) = \begin{bmatrix} f_1(x, y) \\ f_2(x, y) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \mathbf{0}, \tag{28}$$

where $f_1(x, y) = x^3 - 3xy^2 - 1$, $f_2(x, y) = 3x^2y - y^3$. The set of solutions of this system is the following: $[1, 0]$, $[-0.5, 0.866025]$, $[-0.5, -0.866025]$, and $A = [-2.0, 2.0]^2$ ($[-0.5, 0.5]^2$ is used for the magnification of the central parts of the polynomiographs).
Moreover, several additional test functions are used:
$$0 = c^4 - 10c^2 + 9 = (x + iy)^4 - 10(x + iy)^2 + 9 = x^4 - 6x^2y^2 + y^4 - 10x^2 + 10y^2 + 9 + (4x^3y - 4xy^3 - 20xy)i, \tag{29}$$

where $f_1(x, y) = x^4 - 6x^2y^2 + y^4 - 10x^2 + 10y^2 + 9$, $f_2(x, y) = 4x^3y - 4xy^3 - 20xy$, and the set of solutions of this system is the following: $[-3.0, 0.0]$, $[-1.0, 0.0]$, $[1.0, 0.0]$, $[3.0, 0.0]$ and $A = [-4.0, 4.0] \times [-2.0, 2.0]$;

$$0 = c^5 - c = (x + iy)^5 - (x + iy) = x^5 - 10x^3y^2 + 5xy^4 - x + (5x^4y - 10x^2y^3 + y^5 - y)i, \tag{30}$$

where $f_1(x, y) = x^5 - 10x^3y^2 + 5xy^4 - x$, $f_2(x, y) = 5x^4y - 10x^2y^3 + y^5 - y$, and the set of solutions of this system is the following: $[-1.0, 0.0]$, $[0.0, -1.0]$, $[0.0, 0.0]$, $[0.0, 1.0]$, $[1.0, 0.0]$ and $A = [-2.0, 2.0]^2$;

$$0 = c^6 + 10c^3 - 8 = (x + iy)^6 + 10(x + iy)^3 - 8 = x^6 - 15x^4y^2 + 15x^2y^4 - y^6 + 10x^3 - 30xy^2 - 8 + (6x^5y - 20x^3y^3 + 6xy^5 + 30x^2y - 10y^3)i, \tag{31}$$

where $f_1(x, y) = x^6 - 15x^4y^2 + 15x^2y^4 - y^6 + 10x^3 - 30xy^2 - 8$, $f_2(x, y) = 6x^5y - 20x^3y^3 + 6xy^5 + 30x^2y - 10y^3$, and the set of solutions of this system is the following (approximately): $[-2.207, 0]$, $[-0.453, 0.785]$, $[-0.453, -0.785]$, $[0.906, 0]$, $[1.103, 1.911]$, $[1.103, -1.911]$ and $A = [-2.3, 1.7] \times [-2.0, 2.0]$.
In the experiments, to solve (28) we use the method introduced in Section 2. The same colour map (Figure 1) was used to colour all the obtained polynomiographs. Furthermore, in every experiment we used the following common parameters: m = 256 , ε = 0.01 , image resolution 800 × 800 pixels.
The algorithms used in the experiments were implemented in the C++ programming language. The experiments were conducted on a computer with the Intel Core i7 processor, 8 GB RAM and Linux Ubuntu 18.04 LTS.

5.1. The Picard Iteration

The motion of particles depends on the acceleration constant ($\eta$) and the inertia weight ($\omega$). These parameters control the dynamics of the particle's behaviour. The polynomiograph visualises the particle dynamics and allows analysing the impact of the parameters on the algorithm's operation. The polynomiographs are created using Algorithm 1. The dynamics visualisations for the Picard iteration using $\omega = 0.0$ and varying $\eta$ are presented in Figure 2. Earlier, in Section 2, we noticed that the proposed method reduces to the relaxed Newton's method for $\omega = 0$ and $\eta \neq 1$ and to Newton's method for $\omega = 0$ and $\eta = 1$, so Figure 2a–e present polynomiographs for the relaxed Newton's method, whereas Figure 2f presents one for the classical Newton's method. The polynomiograph generation times are also given in Figure 2. The time decreases as the dynamics controlled by the acceleration constant increase.
Polynomiographs of the Picard iteration for a low value of the acceleration constant ($\eta = 0.1$) and varying $\omega$ are presented in Figure 3. The increase in the inertia weight results in an increase in the particle dynamics. Both too small and too high particle dynamics cause an increase in the polynomiograph creation time. The shortest time was obtained for Figure 3b. Every change of dynamics creates a new image.
The acceleration constant is equal to 0.3 for the images in Figure 4. The change in the particle's dynamics is realised by increasing the inertia weight. As in the previous example, the proper selection of the particle's dynamics minimises the creation time of the polynomiograph (see Figure 4b). The increase in the particle dynamics is expressively illustrated by the polynomiographs.
In Figure 5, we see polynomiographs generated using the same values of the inertia weight as in Figure 4, but with a higher value of the acceleration constant, $\eta = 0.5$. The increase in the inertia weight causes an increase in the image creation time. This is the consequence of the excessive growth of the particle's dynamics.
Polynomiographs of the Picard iteration for a constant value of the inertia weight ($\omega = 0.7$) and varying acceleration constant are presented in Figure 6. For the images in Figure 6c,d the dynamics presented by the polynomiographs are high (parts of the image are blurred—there are non-contrast patterns created by the points). Moreover, let us notice that the generation time for Figure 6a is lower than the time obtained for Figure 2a (0.62 s—the Picard iteration with $\omega = 0$, $\eta = 0.2$). Thus, by increasing the inertia, we were able to obtain a shorter time than in the case of the relaxed Newton's method with the same value of acceleration.
The magnifications of the central part of selected polynomiographs of the Picard iteration are presented in Figure 7. Changes in the particle dynamics paint beautiful patterns. They can be an inspiration for creating mosaics.

5.2. The Mann Iteration

Su and Qin in [45] discussed the strong convergence of iterations in the case of a fixed reference point. We start the examples for the Mann iteration with an example showing the behaviour of the Mann iteration implementing a fixed reference point. The results are presented in Figure 8. For the polynomiograph presented in Figure 8a, the fixed reference point is in its centre, $[0, 0]$, whereas in Figure 8b this point is in the position $[-1, 0]$—the behaviour of the algorithm is similar in both cases. For the polynomiograph in Figure 8c, the fixed reference point is placed in the root position $[1, 0]$ (i.e., in the best position). This causes a significant improvement in the algorithm's operation near the root position. For the polynomiograph presented in Figure 8d, the fixed reference point is the starting point—this results in greater efficiency of the algorithm's operation near the positions of the roots. A better position of the reference point can significantly improve the algorithm's operation—however, it is impossible to determine its best position in advance. In the proposed methods, instead of using a fixed reference point, we use the locally best position of the reference point. This position is modified when the new position of the reference point is better than the previous one.
The Mann iteration allows comparing the operation of Algorithm $I$ with Algorithm $III_M$. The choice of the best position of the particle causes a change in the dynamics, which can result in a better convergence of the particle—this mechanism is used in many algorithms. Visualisation is a great tool for presenting this mechanism of the algorithm's operation.
Polynomiographs of the Mann iteration for $\omega = 0.0$, $\eta = 0.6$ and varying $\alpha$ are presented in Figure 9. A small value of $\alpha$ causes a reduction in the particle's dynamics. When Algorithm $III_M$ is used, the areas with higher dynamics (areas of dynamically changing patterns) are narrowed (compare Figure 9a with Figure 9c).
The particle dynamics for the polynomiographs in Figure 10 are greater than in Figure 9. The high value of the $\alpha$ coefficient limits the effect of selecting the best particle position in Algorithm $III_M$. Nevertheless, in Figure 10c we observe a narrowing of the areas of high dynamics and small changes in the shape of the neighbouring areas. Limiting the particle's dynamics by decreasing the value of the $\alpha$ coefficient affects the visualisation of the influence of the best particle position on the operation of Algorithm $III_M$ (compare Figure 10b and Figure 10d)—similarly to Figure 9d.
In the examples presented in Figure 11, the increase in the particle dynamics is obtained by including inertia. The increase in the particle dynamics is evident in the polynomiographs in Figure 11a,c. As in the previous cases, Algorithm $III_M$ makes changes visible only in the areas with higher dynamics.
A further increase in the inertia weight ($\omega = 0.7$) causes the increase in dynamics shown in the images in Figure 12a,b. The changes in dynamics caused by the operation of Algorithm $III_M$ are visible in the whole area of the polynomiograph (see Figure 12c,d). The creation time of the polynomiograph in Figure 12d, when compared to Figure 12b, is shorter. The proposed modification of the algorithm can accelerate finding the solution.
In the polynomiographs in Figure 13, the particle dynamics are limited by the small value of $\eta$ ($\eta = 0.3$). The changes in dynamics caused by Algorithm $III_M$ occur in the areas of high dynamics, in the neighbourhood of these areas, and in places where the particle gains a high velocity (at a long distance from the centre of the image). This is clearly visible in the images in Figure 13c,d.
The polynomiographs presented in Figure 14 show the increase in dynamics due to the increase in the acceleration constant ($\eta = 0.7$). The dynamics are high for the image in Figure 14a, whereas in Figure 14b the dynamics are limited by the small value of $\alpha$. Algorithm $III_M$ clearly influences the change in the particle's dynamics (Figure 14c). The dynamics caused by the operation of Algorithm $III_M$ are much smaller in Figure 14d, because they are limited by the small value of $\alpha$.
The inertia weight has a much greater effect on the particle dynamics than the acceleration constant. The dynamics presented in Figure 15 are high due to the high values of $\omega$ and $\eta$. The changes in dynamics caused by the best position of the particle are clearly visible.
As in the example presented in Figure 15, the dynamics in Figure 16 are shaped by the high values of $\omega$ and $\eta$ ($\omega = 0.6$, $\eta = 0.8$). Also in this case, the dynamics paint some interesting images.
The magnifications of the central part of selected polynomiographs of the Mann iteration and its modifications are presented in Figure 17. Changes in the particle dynamics are visualised by colourful mosaics, which can be used in design to create ornaments.

5.3. The Ishikawa and the Das–Debata Iterations

The Ishikawa and Das–Debata iterations use a reference point to determine a new particle position. In the presented analysis, the base Algorithm $I$ and its three modifications are used: Algorithm $II$, using the best position of the reference point; Algorithm $III$, using the best position of the particle; and Algorithm $IV$, combining both of these modifications.
Figure 18 shows polynomiographs of all four algorithms for two values of $\alpha$ ($\alpha = 0.4$ and $\alpha = 0.8$). The low $\alpha$ value limits the influence of the reference point, so the images in Figure 18a and Figure 18c are very similar. Algorithm $III$, using the best particle position, has a significant influence on the form of the polynomiograph. This results in a strong similarity between the images in Figure 18e and Figure 18g. The dynamics shown in Figure 18b are high. The influence of the best reference point is shown in Figure 18d because of the large value of $\alpha$. Changes in dynamics are also visible in Figure 18f, created by the algorithm using the best position of the particle. Each of the algorithms has a different influence on the dynamics of the particle. These features are combined in Algorithm $IV$—this is clearly visible in the image in Figure 18h.
A low value of $\beta$ reduces the impact of the transformation operator on the dynamics of the reference point creation. It has the same effect as a low value of $\alpha$—for this reason, Figure 19a is similar to Figure 19c,e–g. The increase in the $\beta$ coefficient results in different features of the images in Figure 19b,d,f,h—the image in Figure 19h combines the features of the images in Figure 19d,f.
The dynamics presented by the polynomiographs depend on the dynamics of the creation of the reference point and of its processing. The parameters $\omega_1$ and $\eta_1$ are responsible for the dynamics of the reference point creation, and the parameters $\omega_2$ and $\eta_2$ are responsible for its processing. Changing these parameters allows obtaining the effects described above.
Polynomiographs of the Das–Debata iteration for varying $\omega_1$ are presented in Figure 20, and for varying $\eta_1$ in Figure 21. The polynomiographs created by Algorithms $I$ and $II$ are similar—this means that the influence of choosing the best reference point is small for the selected parameters of the algorithm. The best position of the particle has a significant effect on the form of the polynomiographs created by Algorithm $III$—these polynomiographs have many features in common with the polynomiographs created by Algorithm $IV$ (the influence of Algorithm $II$ is small).
The parameters $\omega_2$ and $\eta_2$ influence the processing of the reference point. In Figure 22, polynomiographs for varying $\omega_2$, and in Figure 23 for varying $\eta_2$, are shown. For small values of $\omega_2$, the polynomiographs presented in Figure 22a,c,e,g are similar—this means that the operation of all the algorithms is similar. A significant increase in the value of $\omega_2$ reveals a significant influence of the reference point on the polynomiograph's creation—Algorithms $II$ and $IV$ create polynomiographs in a similar way.
All the algorithms introduce characteristic changes in the polynomiographs shown in Figure 23. For a high value of $\eta_2$, the polynomiographs created by Algorithms $II$ and $IV$ are similar (see Figure 23d).
The magnifications of the central part of selected polynomiographs obtained with the Ishikawa and the Das–Debata iterations are presented in Figure 24. Similarly to the previous examples, these images have artistic features and can be used, for instance, in ornamentation.

5.4. The Agarwal and the Khan–Cho–Abbas Iterations

The generalised Agarwal iteration introduces three operators whose parameters can be specified independently. This allows for a wide range of changes in the particle's dynamics. In addition, the $\alpha$ and $\beta$ parameters allow determining the impact of these operators on the algorithm's work. In Figure 25, polynomiographs of the Agarwal iteration for varying $\alpha$, and in Figure 26 for varying $\beta$, are presented. An incorrect selection of these parameters can cause the algorithm not to reach a solution within the given number of iterations—this results in an extension of the algorithm's operation time (see Figure 25e). Nevertheless, the increased particle dynamics give the opportunity to obtain interesting patterns. Poorly selected particle dynamics can cause a significant increase in the creation time of a polynomiograph. This case is visible in the polynomiographs in Figure 27 and Figure 28 for Algorithms $III$ and $IV$.
Depending on the choice of parameters, cases that have already been discussed for the previous iterations can be observed. A strong similarity of the polynomiographs created by Algorithms $I$ and $II$, as well as $III$ and $IV$, is observed in Figure 29 and Figure 30 ($\omega_2 = 0.1$). The polynomiographs are similar for all the algorithms when the particle dynamics are low—this is observed in Figure 31. A strong similarity of the polynomiographs is shown in Figure 32 and Figure 33 for Algorithms $II$ and $IV$—it indicates a strong influence of the reference point on the polynomiographs' creation. Next, a strong similarity of the polynomiographs is observed in Figure 34 for Algorithms $I$ and $III$—it results from the strong influence of the best particle position on the polynomiographs' creation.
The magnifications of the central part of selected polynomiographs of the Agarwal and Khan–Cho–Abbas iterations are presented in Figure 35. High dynamics can create interesting patterns, especially for the Agarwal iteration. These patterns can be an artistic inspiration.

5.5. Algorithms Operation in the Selected Test Environments

The discussion conducted in the previous sections presented in detail the impact of individual factors on the operation of the proposed algorithm. The authors want to supplement this discussion with a presentation of the algorithms' operation for the selected test environments, which can be found in the literature and were presented at the beginning of Section 5. Figure 36, Figure 37, Figure 38, Figure 39, Figure 40, Figure 41 and Figure 42 present polynomiographs for three test environments and the iterations discussed in the article. They allow generalising the observations discussed in the previous sections.
In Figure 36 the Picard iteration, in Figure 37a,c,e the Mann iteration, and in sub-figures (a), (e), (i) of Figure 38, Figure 39, Figure 40, Figure 41 and Figure 42 polynomiographs realised by the base algorithm are presented. These polynomiographs are characterised by smooth boundaries of the areas with different particle dynamics. The implementation of the algorithm's modification, consisting in the selection of the position of the best reference point and particle, causes blurring and ragging of the smooth edges of the areas seen in the polynomiographs obtained for the base algorithm. The selection of the best position of the reference point implemented in Algorithm ($II$) results in a significant reduction of the small elements (details) of the patterns visible in the polynomiograph. It is a significant limitation of the particle dynamics, which can be seen in the (b) sub-figures of Figure 38, Figure 39, Figure 40, Figure 41 and Figure 42. Figure 37b,d,f and the (c) sub-figures of Figure 38, Figure 39, Figure 40, Figure 41 and Figure 42 present polynomiographs of the algorithms ($III_M$ and $III$) selecting the best particle position. The operation of these algorithms results in a much better presentation of small details in the polynomiograph—these are the areas in which the particle would otherwise move to a worse position. Polynomiographs obtained by using the algorithm denoted by ($IV$), i.e., with the selection of both the best reference point and the best particle position, are shown in the (d) sub-figures of Figure 38, Figure 39, Figure 40, Figure 41 and Figure 42. We can observe features of the polynomiographs from both the (b) and (c) sub-figures of Figure 38, Figure 39, Figure 40, Figure 41 and Figure 42. However, the features related to choosing the best particle position have a dominant influence.
The analysis of the changes in the particle dynamics resulting from the applied algorithm can also be performed by analysing the simulation time. The best position of the reference point has a significant impact on the reduction of the particle dynamics. These observations can be completed with a remark regarding the test environments: in the environments $p(c) = c^3 - 1$, $p(c) = c^4 - 10c^2 + 9$ and $p(c) = c^6 + 10c^3 - 8$ the particle obtains similar ranges of dynamics, whereas in the $p(c) = c^5 - c$ environment the particle achieves larger dynamics ranges.

6. Conclusions

The paper proposes a modification of Newton's method and introduces algorithms based on it. Similar approaches are present in many optimisation algorithms which require parameter tuning, e.g., the PSO algorithm. The discussion presented in the article shows the impact of the inertia weight and the acceleration constant on the operation of the proposed algorithms. Both too large and too small particle dynamics can have an adverse influence on the algorithm's operation. Polynomiography is a tool for illustrating the behaviour of particles. It also creates artistic mosaics and allows obtaining images that can be regarded as art.

Author Contributions

All the authors have contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Polak, E. Optimization Algorithms and Consistent Approximations; Springer: New York, NY, USA, 1997.
  2. Gosciniak, I. Discussion on semi-immune algorithm behaviour based on fractal analysis. Soft Comput. 2017, 21, 3945–3956.
  3. Weise, T. Global Optimization Algorithms—Theory and Application, 2nd ed.; Self-Published, 2009. Available online: http://www.it-weise.de/projects/book.pdf (accessed on 1 May 2020).
  4. Zhang, W.; Ma, D.; Wei, J.; Liang, H. A Parameter Selection Strategy for Particle Swarm Optimization Based on Particle Positions. Expert Syst. Appl. 2014, 41, 3576–3584.
  5. Van den Bergh, F.; Engelbrecht, A.P. A Convergence Proof for the Particle Swarm Optimiser. Fundam. Inf. 2010, 105, 341–374.
  6. Freitas, D.; Lopes, L.; Morgado-Dias, F. Particle Swarm Optimisation: A Historical Review Up to the Current Developments. Entropy 2020, 22, 362.
  7. Cheney, W.; Kincaid, D. Numerical Mathematics and Computing, 6th ed.; Brooks/Cole: Pacific Grove, CA, USA, 2007.
  8. Goh, B.; Leong, W.; Siri, Z. Partial Newton Methods for a System of Equations. Numer. Algebr. Control Optim. 2013, 3, 463–469.
  9. Saheya, B.; Chen, G.Q.; Sui, Y.K.; Wu, C.Y. A New Newton-like Method for Solving Nonlinear Equations. SpringerPlus 2016, 5, 1269.
  10. Sharma, J.; Sharma, R.; Bahl, A. An Improved Newton–Traub Composition for Solving Systems of Nonlinear Equations. Appl. Math. Comput. 2016, 290, 98–110.
  11. Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.; Lotfi, T. New Efficient Methods for Solving Nonlinear Systems of Equations with Arbitrary Even Order. Appl. Math. Comput. 2016, 287–288, 94–103.
  12. Xiao, X.Y.; Yin, H.W. Accelerating the Convergence Speed of Iterative Methods for Solving Nonlinear Systems. Appl. Math. Comput. 2018, 333, 8–19.
  13. Alqahtani, H.; Behl, R.; Kansal, M. Higher-Order Iteration Schemes for Solving Nonlinear Systems of Equations. Mathematics 2019, 7, 937.
  14. Awwal, A.; Wang, L.; Kumam, P.; Mohammad, H. A Two-Step Spectral Gradient Projection Method for System of Nonlinear Monotone Equations and Image Deblurring Problems. Symmetry 2020, 12, 874.
  15. Wang, Q.; Zeng, J.; Jie, J. Modified Particle Swarm Optimization for Solving Systems of Equations. In Advanced Intelligent Computing Theories and Applications; Communications in Computer and Information Science, Volume 2; Huang, D., Heutte, L., Loog, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 361–369.
  16. Ouyang, A.; Zhou, Y.; Luo, Q. Hybrid Particle Swarm Optimization Algorithm for Solving Systems of Nonlinear Equations. In Proceedings of the 2009 IEEE International Conference on Granular Computing, Nanchang, China, 17–19 August 2009; pp. 460–465.
  17. Jaberipour, M.; Khorram, E.; Karimi, B. Particle Swarm Algorithm for Solving Systems of Nonlinear Equations. Comput. Math. Appl. 2011, 62, 566–576.
  18. Li, Y.; Wei, Y.; Chu, Y. Research on Solving Systems of Nonlinear Equations Based on Improved PSO. Math. Probl. Eng. 2015, 2015, 727218.
  19. Ibrahim, A.; Tawhid, M. A Hybridization of Cuckoo Search and Particle Swarm Optimization for Solving Nonlinear Systems. Evol. Intell. 2019, 12, 541–561.
  20. Ibrahim, A.; Tawhid, M. A Hybridization of Differential Evolution and Monarch Butterfly Optimization for Solving Systems of Nonlinear Equations. J. Comput. Des. Eng. 2019, 6, 354–367.
  21. Liao, Z.; Gong, W.; Wang, L.; Yan, X.; Hu, C. A Decomposition-based Differential Evolution with Reinitialization for Nonlinear Equations Systems. Knowl. Based Syst. 2020, 191, 105312.
  22. Kamsyakawuni, A.; Sari, M.; Riski, A.; Santoso, K. Metaheuristic Algorithm Approach to Solve Non-linear Equations System with Complex Roots. J. Phys. Conf. Ser. 2020, 1494, 012001.
  23. Broer, H.; Takens, F. Dynamical Systems and Chaos; Springer: New York, NY, USA, 2011.
  24. Boeing, G. Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction. Systems 2016, 4, 37.
  25. Kalantari, B. Polynomial Root-Finding and Polynomiography; World Scientific: Singapore, 2009.
  26. Gościniak, I.; Gdawiec, K. Control of Dynamics of the Modified Newton–Raphson Algorithm. Commun. Nonlinear Sci. Numer. Simul. 2019, 67, 76–99.
  27. Ardelean, G. A Comparison Between Iterative Methods by Using the Basins of Attraction. Appl. Math. Comput. 2011, 218, 88–95.
  28. Petković, I.; Rančić, L. Computational Geometry as a Tool for Studying Root-finding Methods. Filomat 2019, 33, 1019–1027.
  29. Chun, C.; Neta, B. Comparison of Several Families of Optimal Eighth Order Methods. Appl. Math. Comput. 2016, 274, 762–773.
  30. Chun, C.; Neta, B. Comparative Study of Methods of Various Orders for Finding Repeated Roots of Nonlinear Equations. J. Comput. Appl. Math. 2018, 340, 11–42.
  31. Kalantari, B. Polynomiography: From the Fundamental Theorem of Algebra to Art. Leonardo 2005, 38, 233–238.
  32. Gościniak, I.; Gdawiec, K. One More Look on Visualization of Operation of a Root-finding Algorithm. Soft Comput. 2020, in press.
  33. Nammanee, K.; Noor, M.; Suantai, S. Convergence Criteria of Modified Noor Iterations with Errors for Asymptotically Nonexpansive Mappings. J. Math. Anal. Appl. 2006, 314, 320–334.
  34. Gilbert, W. Generalizations of Newton's Method. Fractals 2001, 9, 251–262.
  35. Cordero, A.; Torregrosa, J. Variants of Newton's Method using Fifth-order Quadrature Formulas. Appl. Math. Comput. 2007, 190, 686–698.
  36. Magreñán, A.; Argyros, I. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: San Diego, CA, USA, 2018.
  37. Picard, E. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 1890, 6, 145–210.
  38. Mann, W. Mean Value Methods in Iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  39. Ishikawa, S. Fixed Points by a New Iteration Method. Proc. Am. Math. Soc. 1974, 44, 147–150.
  40. Agarwal, R.; O'Regan, D.; Sahu, D. Iterative Construction of Fixed Points of Nearly Asymptotically Nonexpansive Mappings. J. Nonlinear Convex Anal. 2007, 8, 61–79.
  41. Gdawiec, K.; Kotarski, W. Polynomiography for the Polynomial Infinity Norm via Kalantari's Formula and Nonstandard Iterations. Appl. Math. Comput. 2017, 307, 17–30.
  42. Das, G.; Debata, J. Fixed Points of Quasinonexpansive Mappings. Indian J. Pure Appl. Math. 1986, 17, 1263–1269.
  43. Khan, S.; Cho, Y.; Abbas, M. Convergence to Common Fixed Points by a Modified Iteration Process. J. Appl. Math. Comput. 2011, 35, 607–616.
  44. Gdawiec, K. Fractal Patterns from the Dynamics of Combined Polynomial Root Finding Methods. Nonlinear Dyn. 2017, 90, 2457–2479.
  45. Su, Y.; Qin, X. Strong Convergence of Modified Noor Iterations. Int. J. Math. Math. Sci. 2006, 2006, 21073.
Figure 1. Colour map used in the experiments.
Figure 1. Colour map used in the experiments.
Entropy 22 00734 g001
Figure 2. Polynomiographs of the Picard iteration for ω = 0.0 and varying η .
Figure 2. Polynomiographs of the Picard iteration for ω = 0.0 and varying η .
Entropy 22 00734 g002
Figure 3. Polynomiographs of the Picard iteration for η = 0.1 and varying ω .
Figure 3. Polynomiographs of the Picard iteration for η = 0.1 and varying ω .
Entropy 22 00734 g003
Figure 4. Polynomiographs of the Picard iteration for η = 0.3 and varying ω .
Figure 4. Polynomiographs of the Picard iteration for η = 0.3 and varying ω .
Entropy 22 00734 g004
Figure 5. Polynomiographs of the Picard iteration for η = 0.5 and varying ω .
Figure 5. Polynomiographs of the Picard iteration for η = 0.5 and varying ω .
Entropy 22 00734 g005
Figure 6. Polynomiographs of the Picard iteration for ω = 0.7 and varying η .
Figure 6. Polynomiographs of the Picard iteration for ω = 0.7 and varying η .
Entropy 22 00734 g006
Figure 7. Magnification of the central part of selected polynomiographs of the Picard iteration.
Figure 7. Magnification of the central part of selected polynomiographs of the Picard iteration.
Entropy 22 00734 g007
Figure 8. Polynomiographs of the Mann iteration with the fixed reference point for α = 0.9 , ω = 0.3 , η = 0.5 . The positions of the reference point: (a) [ 0 , 0 ] , (b) [ 1 , 0 ] , (c) [ 1 , 0 ] (one of the roots), (d) the starting point.
Figure 8. Polynomiographs of the Mann iteration with the fixed reference point for α = 0.9 , ω = 0.3 , η = 0.5 . The positions of the reference point: (a) [ 0 , 0 ] , (b) [ 1 , 0 ] , (c) [ 1 , 0 ] (one of the roots), (d) the starting point.
Entropy 22 00734 g008
Figure 9. Polynomiographs of the Mann iteration for ω = 0.0 and η = 0.6 varying α .
Figure 9. Polynomiographs of the Mann iteration for ω = 0.0 and η = 0.6 varying α .
Entropy 22 00734 g009
Figure 10. Polynomiographs of the Mann iteration for ω = 0.0 , η = 0.9 and varying α .
Figure 10. Polynomiographs of the Mann iteration for ω = 0.0 , η = 0.9 and varying α .
Entropy 22 00734 g010
Figure 11. Polynomiographs of the Mann iteration for ω = 0.3 , η = 0.5 and varying α .
Figure 11. Polynomiographs of the Mann iteration for ω = 0.3 , η = 0.5 and varying α .
Entropy 22 00734 g011
Figure 12. Polynomiographs of the Mann iteration for ω = 0.7 , η = 0.5 and varying α .
Figure 12. Polynomiographs of the Mann iteration for ω = 0.7 , η = 0.5 and varying α .
Entropy 22 00734 g012
Figure 13. Polynomiographs of the Mann iteration for ω = 0.5 , η = 0.3 and varying α .
Figure 13. Polynomiographs of the Mann iteration for ω = 0.5 , η = 0.3 and varying α .
Entropy 22 00734 g013
Figure 14. Polynomiographs of the Mann iteration for ω = 0.5 , η = 0.7 and varying α .
Figure 14. Polynomiographs of the Mann iteration for ω = 0.5 , η = 0.7 and varying α .
Entropy 22 00734 g014
Figure 15. Polynomiographs of the Mann iteration for ω = 0.7 , η = 0.6 and varying α .
Figure 15. Polynomiographs of the Mann iteration for ω = 0.7 , η = 0.6 and varying α .
Entropy 22 00734 g015
Figure 16. Polynomiographs of the Mann iteration for ω = 0.6 , η = 0.8 and varying α .
Figure 16. Polynomiographs of the Mann iteration for ω = 0.6 , η = 0.8 and varying α .
Entropy 22 00734 g016
Figure 17. Magnification of the central part of selected polynomiographs of the Mann iteration.
Figure 17. Magnification of the central part of selected polynomiographs of the Mann iteration.
Entropy 22 00734 g017
Figure 18. Polynomiographs of the Ishikawa iteration for β = 0.4 , ω 1 = 0.7 , η 1 = 0.3 , ω 2 = 0.7 , η 2 = 0.3 and varying α .
Figure 18. Polynomiographs of the Ishikawa iteration for β = 0.4 , ω 1 = 0.7 , η 1 = 0.3 , ω 2 = 0.7 , η 2 = 0.3 and varying α .
Entropy 22 00734 g018
Figure 19. Polynomiographs of the Ishikawa iteration for α = 0.7 , ω 1 = 0.4 , η 1 = 0.8 , ω 2 = 0.4 , η 2 = 0.8 and varying β .
Figure 19. Polynomiographs of the Ishikawa iteration for α = 0.7 , ω 1 = 0.4 , η 1 = 0.8 , ω 2 = 0.4 , η 2 = 0.8 and varying β .
Entropy 22 00734 g019
Figure 20. Polynomiographs of the Das–Debata iteration for α = 0.6 , β = 0.7 , η 1 = 0.4 , ω 2 = 0.5 , η 2 = 0.5 and varying ω 1 .
Figure 20. Polynomiographs of the Das–Debata iteration for α = 0.6 , β = 0.7 , η 1 = 0.4 , ω 2 = 0.5 , η 2 = 0.5 and varying ω 1 .
Entropy 22 00734 g020
Figure 21. Polynomiographs of the Das–Debata iteration for α = 0.5 , β = 0.6 , ω 1 = 0.3 , ω 2 = 0.6 , η 2 = 0.4 and varying η 1 .
Figure 21. Polynomiographs of the Das–Debata iteration for α = 0.5 , β = 0.6 , ω 1 = 0.3 , ω 2 = 0.6 , η 2 = 0.4 and varying η 1 .
Entropy 22 00734 g021
Figure 22. Polynomiographs of the Das–Debata iteration for α = 0.9, β = 0.7, ω₁ = 0.6, η₁ = 0.2, η₂ = 0.5 and varying ω₂.
Figure 23. Polynomiographs of the Das–Debata iteration for α = 0.9, β = 0.7, ω₁ = 0.4, η₁ = 0.2, ω₂ = 0.5 and varying η₂.
Figure 24. Magnification of the central part of selected polynomiographs of Ishikawa and Das–Debata iterations.
Figure 25. Polynomiographs of the Agarwal iteration for β = 0.5, ω₁ = 0.4, η₁ = 0.3, ω₂ = 0.4, η₂ = 0.3, ω₃ = 0.4, η₃ = 0.3 and varying α.
Figure 26. Polynomiographs of the Agarwal iteration for α = 0.5, ω₁ = 0.5, η₁ = 0.5, ω₂ = 0.5, η₂ = 0.5, ω₃ = 0.5, η₃ = 0.5 and varying β.
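Figures 25–34 move to the Agarwal (S-)iteration family, whose outer combination mixes T(z_n) with T(u_n) instead of z_n with T(u_n); the generalised Agarwal and Khan–Cho–Abbas variants admit up to three operators, matching the (ω₁, η₁), (ω₂, η₂), (ω₃, η₃) triples in the captions. A sketch under the same conventions as above, where the assignment of T₁, T₂, T₃ to the three positions is an assumption:

```python
def agarwal_step(z, alpha, beta, T):
    """Agarwal (S-)iteration: u_n = (1 - beta)*z_n + beta*T(z_n),
    z_{n+1} = (1 - alpha)*T(z_n) + alpha*T(u_n)."""
    u = (1 - beta) * z + beta * T(z)
    return (1 - alpha) * T(z) + alpha * T(u)

def generalised_agarwal_step(z, alpha, beta, T1, T2, T3):
    """Assumed three-operator placement; setting T2 = T3 gives the
    cases marked (T2 = T3) in Figures 29 and 31."""
    u = (1 - beta) * z + beta * T1(z)
    return (1 - alpha) * T2(z) + alpha * T3(u)
```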
Figure 27. Polynomiographs of the generalised Agarwal iteration for α = 0.5, ω₂ = 0.6, η₂ = 0.7, ω₃ = 0.9.
Figure 28. Polynomiographs of the generalised Agarwal iteration for ω₁ = 0.3, η₁ = 0.4, ω₂ = 0.6, η₂ = 0.7, ω₃ = 0.9, η₃ = 0.5.
Figure 29. Polynomiographs of the generalised Agarwal iteration for α = 0.5, β = 0.5, ω₁ = 0.6, ω₂ = 0.6, η₂ = 0.3, ω₃ = 0.6, η₃ = 0.3 (T₂ = T₃) and varying η₁.
Figure 30. Polynomiographs of the Khan–Cho–Abbas iteration for α = 0.5, β = 0.5, ω₁ = 0.7, η₁ = 0.2, η₂ = 0.5, ω₃ = 0.7, η₃ = 0.2 and varying ω₂.
Figure 31. Polynomiographs of the generalised Agarwal iteration for α = 0.5, β = 0.5, η₁ = 0.6, ω₂ = 0.4, η₂ = 0.6, ω₃ = 0.4, η₃ = 0.6 (T₂ = T₃) and varying ω₁.
Figure 32. Polynomiographs of the Khan–Cho–Abbas iteration for α = 0.5, β = 0.5, ω₁ = 0.3, η₁ = 0.7, ω₂ = 0.6, ω₃ = 0.3, η₃ = 0.7 and varying η₂.
Figure 33. Polynomiographs of the generalised Agarwal iteration for α = 0.8, β = 0.4, ω₁ = 0.2, η₁ = 0.4, ω₂ = 0.7, η₂ = 0.7, ω₃ = 0.5 and varying η₃.
Figure 34. Polynomiographs of the generalised Agarwal iteration for α = 0.5, β = 0.3, ω₁ = 0.4, η₁ = 0.6, ω₂ = 0.6, η₂ = 0.7, η₃ = 0.5 and varying ω₃.
Figure 35. Magnification of the central part of selected polynomiographs of Agarwal and Khan–Cho–Abbas iterations.
Figure 36. Polynomiographs of the Picard iteration for the polynomiograph settings from Figure 4d.
Figure 37. Polynomiographs of the Mann iteration for the polynomiograph settings from Figure 15b.
Figure 38. Polynomiographs of the Ishikawa iteration for the polynomiograph settings from Figure 18b.
Figure 39. Polynomiographs of the Das–Debata iteration for the polynomiograph settings from Figure 21a.
Figure 40. Polynomiographs of the Agarwal iteration for the polynomiograph settings from Figure 26a.
Figure 41. Polynomiographs of the generalised Agarwal iteration for the polynomiograph settings from Figure 29a.
Figure 42. Polynomiographs of the Khan–Cho–Abbas iteration for the polynomiograph settings from Figure 30a.
Table 1. Computational order of convergence for the system given by f₁(x, y) = x³ − 3xy² − 1, f₂(x, y) = 3x²y − y³ (the real and imaginary parts of z³ − 1) solved by (8).
Parameters (ω, η)    z₀ = [−2.5, −2.5]    z₀ = [2, 2]    z₀ = [2.5, 2.5]
(0.0, 1.0)           2.06                 2.01           2.03
(0.1, 0.9)           1.15                 1.93           0.91
(0.1, 0.8)           1.10                 0.61           0.31
(0.1, 0.7)           0.95                 1.18           2.43
(0.1, 0.6)           1.28                 1.30           1.34
(0.2, 0.7)           1.67                 2.47           2.79
(0.2, 0.6)           0.79                 1.96           1.18
(0.25, 0.6)          1.52                 2.01           0.94
(0.3, 0.6)           0.54                 1.68           2.82
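The entries of Table 1 are computational orders of convergence (COC); a standard way to approximate them from successive differences is ρ ≈ ln(‖z_{n+1} − z_n‖ / ‖z_n − z_{n−1}‖) / ln(‖z_n − z_{n−1}‖ / ‖z_{n−1} − z_{n−2}‖). The sketch below computes this estimate under the same assumed damped-Newton-with-inertia form of (8) used in the earlier sketches; the function names and iteration budget are illustrative. Note that the first row, (ω, η) = (0.0, 1.0), reduces to the classic Newton method, consistent with the reported COC of about 2.

```python
# A sketch of estimating the computational order of convergence (COC)
# from successive iterates, under the same ASSUMED damped-Newton form
# of (8) as in the earlier sketches.
import numpy as np

def F(p):                     # f1, f2 from the caption of Table 1
    x, y = p
    return np.array([x**3 - 3*x*y**2 - 1, 3*x**2*y - y**3])

def J(p):                     # Jacobian of (f1, f2)
    x, y = p
    return np.array([[3*x**2 - 3*y**2, -6*x*y],
                     [6*x*y,            3*x**2 - 3*y**2]])

def iterate(z0, omega, eta, n_iter=30):
    z, v = np.asarray(z0, dtype=float), np.zeros(2)
    history = [z.copy()]
    for _ in range(n_iter):
        v = omega * v + eta * np.linalg.solve(J(z), -F(z))
        z = z + v
        history.append(z.copy())
    return history

def coc(history, floor=1e-14):
    """Approximated COC from the last three step lengths above floor."""
    d = [np.linalg.norm(b - a) for a, b in zip(history, history[1:])]
    d = [s for s in d if s > floor][-3:]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])

# pure Newton (omega = 0, eta = 1) should give a COC of about 2
print(coc(iterate([2.5, 2.5], omega=0.0, eta=1.0)))
```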
