Visual Analysis of Dynamics Behaviour of an Iterative Method Depending on Selected Parameters and Modifications

There is a huge group of algorithms described in the literature that iteratively find solutions of a given equation. Most of them require tuning. The article presents root-finding algorithms that are based on the Newton–Raphson method which iteratively finds the solutions, and require tuning. The modification of the algorithm implements the best position of particle similarly to the particle swarm optimisation algorithms. The proposed approach allows visualising the impact of the algorithm’s elements on the complex behaviour of the algorithm. Moreover, instead of the standard Picard iteration, various feedback iteration processes are used in this research. Presented examples and the conducted discussion on the algorithm’s operation allow to understand the influence of the proposed modifications on the algorithm’s behaviour. Understanding the impact of the proposed modification on the algorithm’s operation can be helpful in using it in other algorithms. The obtained images also have potential artistic applications.


Introduction
Most of the algorithms determining the local extremes and especially the minimum of the function F are based on determining the value of the function or its gradient. In the case of such an algorithm, the new position of the particle is calculated basing on its previous position and the gradient which determine the direction of the movement [1]-an example of this approach is the gradient descent method: where: −∇F-negative gradient, γ-step size, z i -the current position of the ith particle in a D dimensional environment, z i -the previous position of the ith particle. There are many algorithms implementing modifications of this method, and its operation is well described in the literature. Evolutionary algorithms are another group of algorithms used to solve optimisation problems, including minimisation. As shown in [2], the analysis of particle behaviour in the evolutionary algorithms is a very complex problem. In practice, many algorithms similar to the evolutionary algorithms are used [3]. Particularly noteworthy is the group of particle swarm optimisation algorithms (PSO) [4]. The complex nature of the interaction between particles does not allow us to determine their impact on the operation of the algorithm in a simple way. The behaviour of particles in the PSO algorithm can be described by the following equation: where: z i -the current position of the ith particle in a D dimensional environment, z i -the previous position of the ith particle, v i -the current velocity of the ith particle in a D dimensional environment that is given by the following formula: where: v i -the previous velocity of the ith particle, ω-inertia weight (ω ∈ [0, 1]), η 1 , η 2 -acceleration constants (η 1 , η 2 ∈ (0, 1]), r 1 , r 2 -random numbers generated uniformly in the [0, 1] interval, z pbest i -the best position of the ith particle, z gbest -the global best position of the particles. 
The best position of the particle and the best global position are calculated in each iteration. The position of the particle and the best global position have a significant impact on the behaviour of not only the particle, but the entire swarm, directing them towards the best solution. The particle behaviour is determined by three parameters: inertia weight (ω) and two acceleration constants (η 1 , η 2 ). The PSO algorithm belongs to the group of stochastic algorithms for which the convergence can be proved [5].
The ω parameter informs about the influence of the particle velocity at the moment t − 1 on its velocity at the moment t. On the other hand, the η parameter affects the changes in the particle acceleration at the time t. For ω < 1 the particles slow down to reach the velocity of zero. For the value of ω close to 0.7 and the value of η 1 and η 2 close to 1.5 the PSO algorithm has good convergence. The cooperation of particles releases an adaptive mechanism that drives the particle and determines its dynamics. The selection of the proper values of inertia weight and acceleration constants is a very important problem for the algorithm [6]-the algorithm tuning is intuitive and based on developer's experience.
Another important problem that arises in practice is finding the solutions of a system of non-linear equations. Due to many difficulties in solving such systems by using analytical methods many numerical methods were developed. The most famous method of this type is the Newton's method [7]. In recent years methods with different approaches were proposed. In [8] Goh et al. inspired by the simplex algorithm, proposed a partial Newton's method. Despite this method is based on Newton's method, it has slower convergence. Another method based on Newton's method was proposed in [9]. In this method Saheya et al. revised the Jacobian matrix by a rank one matrix each iteration. Sharma et al. in [10] introduced a three-step iterative scheme that is based on Newton-Traub's method. A method that is based on a Jarratt-like method was proposed by Abbasbandy et al. in [11]. This method uses only one LU factorisation which preserves and reduces computational complexities. An acceleration technique that can be used in many iterative methods was proposed in [12]. Moreover, the authors introduced a new family of two-step third order methods for solving systems of non-linear equations. Recently, Alqahtani et el. proposed a three-step family of iterative methods [13]. This family is a generalisation of the fourth-order King's family to the multidimensional case. Awwal et al. in [14] proposed a two-step iterative algorithm that is based on projection technique and solves system of monotone non-linear equations with convex constraints.
In addition to the methods based on classical root finding methods (Newton, Traub, Jarratt, etc.), in the literature we can find approaches based on the PSO algorithm and other metaheuristic algorithms. In [15] Wang et al. proposed a modification of the PSO algorithm in which the search dynamics is controlled by a traditional controller such as the PID (Proportional-Integral-Derivative) controller. A combination of Nelder-Mead simplex method and PSO algorithm was proposed in [16]. To overcome problems of being trapped in local minima and slow convergence, Jaberipour et al. proposed a new way of updating particle's position [17]. Another modification of the PSO algorithm for solving systems of non-linear equations was proposed by Li et al. in [18]. The authors proposed: (1) a new way of inertia weight selection, (2) dynamics conditions of stopping iterating, (3) calculation of the standardised number of restarting times based on reliability theory. Ibrahim and Tawhid in [19] introduced a hybridisation of cuckoo search and PSO for solving system of non-linear equations, and in [20] a hybridisation of differential evolution and the monarch butterfly optimisation. Recently, Liao et al. in [21] introduced a decomposition re-initialisation-based differential evolution to locate multiple roots of non-linear equations system. A study on different metaheuristic methods, namely PSO, firefly algorithm and cuckoo search algorithm, in the solving of system of non-linear equation task was conducted in [22]. The study showed that the PSO algorithm obtains better results than the other two algorithms.
Dynamic analysis is a separate field of research using numerous investigation methods [23]. Among them, there is a graphical visualisation of dynamics [24]. In the analysis of the complex polynomial root finding process the polynomiography is a very popular method of visualising the dynamics [25][26][27]. In this method the number of iterations required to obtain the solution is visualised. The generated images show the dynamics of the root-finding process using some colour maps. Today, dynamics study of root-finding methods with polynomiography is an essential part of modern analysis of the quality of these methods [28]. It turns out that this form of dynamics study is a powerful tool in selecting optimal parameters that appear in the iterative methods for solving non-linear equations and to compare these methods [29,30]. The visualisation methods proposed in the work will also be called polynomiography because of their similarity to the primary polynomiography, despite the fact that we use any functions, not just polynomials.
This article proposes modifications of a root finding algorithm, that is based on the Newton's method, by using the inertia weight and the acceleration constant as well as the best position of the particle and the implementation of various iteration methods. The conducted research serves to visualise the impact of the proposed algorithm modifications on the behaviour of the particle. They allow conducting not only the discussion on the behaviour of a particle, but also to present images of aesthetic character and artistic meaning [31,32].
Nammanee et al. [33] introduced a strong convergence theorem for the modified Noor iterations in the framework of uniformly smooth Banach spaces. Their results allow to suppose that the proposed modifications using the best particle position will be an effective way to increase the effectiveness of algorithm's operation.
The rest of the paper is organised as follows. Section 2 introduces root finding algorithm that is based on the Newton-Raphson method. This method is used to illustrate the tuning influence on its behaviour. Next, Section 3 introduces iteration processes known in literature. Moreover, basing on these iterations we propose iterations that use the best position of the particle. Section 4 presents algorithms for creating polynomiographs. Then, in Section 5 a discussion on the research results illustrated by the obtained polynomiographs is made. Finally, Section 6 gives short concluding remarks.

The Algorithm
Numerous methods can be used to solve a system of D non-linear equations with D variables. One of the best known and intensively studied methods is the Newton-Raphson method [7]. Let f 1 , f 2 , . . . , f D : R D → R and let . . .
Let F : R D → R D be a continuous function which has continuous first partial derivatives. Now, to solve the equation F(z) = 0, where z = [z 1 , z 2 , . . . , z D ], by using the Newton-Raphson method, a starting point is selected z 0 = [z 1 0 , z 2 0 , . . . , z D 0 ] and then the iterative formula is used as follows: where: is the Jacobian matrix of F and J −1 is its inverse. By taking N(z) = −J −1 (z)F(z) the Newton-Raphson method can be described as follows: To solve (4), the following algorithm can be used: where: z 0 ∈ R D is a starting position, v 0 = [0, 0, . . . , 0] is a starting velocity, v n+1 is the current velocity , z n is the previous position of particle (z n = [z 1 n , z 2 n , . . . , z D n ]). The algorithm sums the position of the particle z n with its current velocity v n+1 . The current velocity of the particle is determined by the inertia weight and the acceleration constants: where: v n -the previous velocity of particle, ω ∈ [0, 1)-inertia weight, η ∈ (0, 1]-acceleration constant. Let us notice that (8) reduces to the classical Newton's method [25] if ω = 0 and η = 1, and to the relaxed Newton's method [34] if ω = 0 and η = 1. Moreover, we can observe that the proposed method joins the PSO algorithm and the Newton's method. The acceleration term in the PSO algorithm is responsible for pointing the particle in the direction of the best solution. In our method as the acceleration term we use the direction used in the Newton's method, which is responsible for moving the current point into direction of the solution. Thus, in each iteration we are moving towards the solution and eventually we will stop in the solution, because for ω < 1 the particle slows down and N(z n ) = 0, when we reach the solution.
The implementation of inertia weight (ω) and the acceleration constant (η) allows controlling particle dynamics in a wider range [4]. The selection of the value of these parameters is not deterministic. It depends on the problem being solved and the selected iteration method. These values are selected by tuning-it is a kind of art. The relationships between the ω and η parameters show a complex nature and the change of these parameters influences particle's dynamics, which can be visualised using algorithms described in Section 4.
A very important feature in numerical methods for solving systems of equations is the order of convergence of the method. For instance, the order of convergence of Newton's method is 2 [25]. In the case of the methods that are based on Newton's, Traub, Jarratt, etc. methods the order of convergence is proved (see for instance [8][9][10][11][12][13][14]), but in case of the methods that are based on metaheuristic algorithms, like the method presented in this paper, the order of convergence is not derived (see for instance [15][16][17][18][19][20][21][22]). This is due the fact that this order greatly varies for different values of the parameters used in the metaheuristic algorithms. We will show this on a very simple example. In this example to approximate the order of convergence we will use the computational order convergence introduced in [35], i.e., Let us consider f 1 (x, y) = x 3 − 3xy 2 − 1, f 2 (x, y) = 3x 2 y − y 3 . We will solve system (4) by using (8) with various values of ω and η in (9) and three different starting points z 0 . The obtained results are presented in Table 1. For ω = 0 and η = 1 (classic Newton's method) we see that the method obtains second order of convergence, which is in accordance with the result proved in the literature. When we take a non-zero inertia, then we see that for a fixed starting point and varying ω and η we obtain very different values of the order. The same situation happens when we fix ω and η and change the starting points. Table 1. Computational order of convergence for system given by f 1 (x, y) = x 3 − 3xy 2 − 1, f 2 (x, y) = 3x 2 y − y 3 solved by (8). Another problem studied in the context of solving systems of non-linear equation with iterative methods is the problem of strange (extraneous) fixed points, i.e., fixed points which are found by the method and are not the roots of the considered system [36]. Let us assume that (8) has converged to a solution. In such case the v n in (9) is equal to 0. 
Therefore, method (8) behaves like relaxed Newton's method. As a result that relaxed Newton's method has no strange fixed point [25], the proposed method also has no strange fixed points.

Iteration Processes
Picard's iteration [37] is widely used in many iterative algorithms. This iteration can be described as follows: Notice that (8) uses the Picard iteration, where T : R D → R D takes the form In the literature many other iteration processes can be found. Three most basic and commonly implemented algorithms of the approximate finding of fixed points of mappings are as follows: 1. The Mann iteration [38]: where: α n ∈ (0, 1] for all n ∈ N, and the Mann iteration for α n = 1 reduces to the Picard iteration. 2. The Ishikawa iteration [39]: where: α n ∈ (0, 1] and β n ∈ [0, 1] for all n ∈ N, the Ishikawa iteration reduces to the Mann iteration when β n = 0, and to the Picard iteration when α n = 1 and β n = 0. 3. The Agarwal iteration [40] (S-iteration): where α n ∈ [0, 1] and β n ∈ [0, 1] for all n ∈ N, let us notice that the S-iteration reduces to the Picard iteration when α n = 0, or α n = 1 and β n = 0.
A survey of various iteration processes and their dependencies can be found in [41]. An additional particle (a sample in the solution space) u n (acting as a reference point) is applied in the Ishikawa and Agarwal iterations. In this case, each of these iterations performs two steps. The reference point u n is set in the first step, and next a new particle position z n+1 is computed using the reference point. It allows better control of the particle's motion. In the paper a modification of the above-discussed iterations is proposed by applying the best position of the reference point and the best position of the particle-similar to the PSO algorithm.
The positions of the reference point u n and the particle z n are evaluated and compared to u best and z best . In solutions space, the lower value of F(q) determines the better position of the reference point or the particle (q). The next u best and z best are updated if they are worse than u n and z n . The best position of the reference point u best or position of the reference point u n and the best position of the particle z best or the position of the particle z n (depending on the result of the evaluation) are used to determine the new position of the particle z n+1 .
As a result of this modification, the next position of the particle z n+1 in the Equations (13)-(15) is determined by the following formulas:

The modified Mann iteration
2. The modified Ishikawa iteration 3. The modified Agarwal iteration All the presented iterations used only one mapping, but in the fixed point theory exist iterations that use several mappings and are applied to find common fixed points of the mappings. Examples of this type of iterations are the following: 1. The Das-Debata iteration [42]: where α n ∈ (0, 1] and β n ∈ [0, 1] for all n ∈ N, the Das-Debata iteration for T 1 = T 2 reduces to the Ishikawa iteration. 2. The Khan-Cho-Abbas iteration [43]: where α n ∈ (0, 1] and β n ∈ [0, 1] for all n ∈ N, let us notice that the Khan-Cho-Abbas iteration reduces to the Agarwal iteration when T 1 = T 2 .
3. The generalised Agarwal's iteration [43]: where α n ∈ (0, 1] and β n ∈ [0, 1] for all n ∈ N, moreover the generalised Agarwal iteration reduces to the Khan-Cho-Abbas iteration when T 1 = T 3 and to the Agarwal iteration when Iterations described by Equations (19)- (21) can be also modified by introducing the best position of the reference point and the best position of the particle. As a result of this modification, equations describing z n+1 take the following form: 1. The modified Das-Debata iteration 2. The modified Khan-Cho-Abbas iteration 3. The modified generalised Agarwal iteration For all iterations z n is used for the base type of iteration (Algorithm 1), u best is used for the proposed algorithm with the best reference point (Algorithm 2), z best is used for the proposed algorithm with the best particle position (Algorithm 3) and both u best and z best are used for the proposed Algorithm 4. The variety of different iteration processes is implemented in our algorithms. In the iterations we use (12) as the mapping with different values of ω and η parameters. 
I I q -iteration method, q ∈ [0, 1] N -parameters of the iteration I q , ω, ω 1 , ω 2 , ω 3 , η, η 1 , η 1 , η 2 , η 3 -parameters defining functions T, T 1 , T 2 , T 3 , C-colouring function, ε-accuracy Output: visualisation of the dynamics Algorithm 2: Visualisation of the dynamics with the best reference point (I I) Input: F-function, A ⊂ R D -solution space, m-the maximum number of iterations, I r q -part of the iteration method computing the reference point u n , I I I q -part of the iteration method computing the particle z n+1 using the best position of the reference point u best , q ∈ [0, 1] N -parameters of the iteration, ω, ω 1 , ω 2 , ω 3 , η, η 1 , η 1 , η 2 , η 3 -parameters defining functions T, T 1 , T 2 , T 3 , C-colouring function, ε-accuracy Output: visualisation of the dynamics Algorithm 3: Visualisation of the dynamics with the best particle (I I I) Input: F-function, A ⊂ R D -solution space, m-the maximum number of iterations, I r q -part of the iteration method computing the reference point u n , I I I I q -part of the iteration method computing the particle z n+1 using the best position of the particle z best , q ∈ [0, 1] N -parameters of the iteration, ω, ω 1 , ω 2 , ω 3 , η, η 1 , η 1 , η 2 , η 3 -parameters defining functions T, T 1 , T 2 , T 3 , C-colouring function, ε-accuracy Output: visualisation of the dynamics Algorithm 4: Visualisation of the dynamics with both the best reference point and particle (IV) Input: F-function, A ⊂ R D -solution space, m-the maximum number of iterations, I IV q -part of the iteration method computing the particle z n+1 using the best position of the reference point u best and the best position of the particle z best , q ∈ [0, 1] N -parameters of the iteration, ω, ω 1 , ω 2 , ω 3 , η, η 1 , η 1 , η 2 , η 3 -parameters defining functions T, T 1 , T 2 , T 3 , C-colouring function, ε-accuracy Output: visualisation of the dynamics

Visualisation of the Dynamics
A very similar method to the polynomiography [44] is used to visualise the dynamics of the proposed algorithms. The iteration methods presented in Section 3 are implemented in the algorithms and proper values of parameters are selected. ω, η for a single mapping T or ω 1 , ω 2 , ω 3 and η 1 , η 2 , η 3 for T 1 , T 2 , T 3 (depending on the chosen iteration). The maximum number of iterations m which algorithm should make, accuracy of the computations ε and a colouring function C : N → {0, 1, . . . , 255} 3 are set. Then, each z 0 in the solution space A is used in the algorithm. The iterations of the algorithm proceed till the convergence criterion: or the maximum number of iterations is reached. A colour corresponding to the performed number of iterations is assigned to z 0 using colouring function C. Iterations described in Section 3 and their modifications using the best particle position and the best reference point position allow proposing five algorithms. Algorithm 1 (denoted in the text by I) is the base algorithm, without modification. -part of the iteration method computing the particle z n+1 using the best position of the particle z best , q ∈ [0, 1] N -parameters of the iteration, ω, ω 1 , ω 2 , ω 3 , η, η 1 , η 1 , η 2 , η 3 -parameters defining functions T, T 1 , T 2 , T 3 , C-colouring function, ε-accuracy Output: visualisation of the dynamics The iteration method is denoted by I a q for a given algorithm, where q is a vector of parameters of the iteration and a is one of the algorithms {I|I I|I I I|I I I M|IV} or r-it is the part of iteration determining position of the reference point.
The domain (solution space) A is defined in a D-dimensional space, thus the algorithms return polynomiographs in this space. For D = 2 a single image is obtained or for D > 2 a two-dimensional cross-section of A can be selected for visualisation.

Discussion on the Research Results
In this section we present and discuss the obtained results of visualising the dynamics of the method introduced in Section 2 together with the various iteration methods presented in Section 3 using algorithms described in Section 4. Let C be the field of complex numbers with a complex number c = x + iy where i = √ −1 and x, y ∈ R. In the experiments we want to solve the following non-linear equation where p(c) = c 3 − 1. This equation can be written in the following form: Now, Equation (27) can be transformed into a system of two equations with two variables, i.e., where f 1 (x, y) = x 3 − 3xy 2 − 1, f 2 (x, y) = 3x 2 y − y 3 . The set of solutions of this system is the following: where: f 1 (x, y) = x 4 − 6x 2 y 2 + y 4 − 10x 2 + 10y 2 + 9, f 2 (x, y) = 4x 3 y − 4xy 3 where: where: In the experiments, to solve (28) we use the method introduced in Section 2. The same colour map (Figure 1) was used to colour all the obtained polynomiographs. Furthermore, in every experiment we used the following common parameters: m = 256, ε = 0.01, image resolution 800 × 800 pixels.

The Picard Iteration
The motion of particles depends on the acceleration constant (η) and inertia weight (ω). These parameters control dynamics of a particle behaviour. The polynomiograph visualises particle dynamics and allows analysing parameters impact on the algorithm's operation. Polynomiographs are created using Algorithm 1. The dynamics visualisations for the Picard iteration using ω = 0.0 and varying η are presented in Figure 2. Earlier, in Section 2, we noticed that the proposed method reduces to the relaxed Newton's method for ω = 0 and η = 1 and to the Newton's method for ω = 0 and η = 1, so Figure 2a-e present polynomiographs for the relaxed Newton's method, whereas Figure 2f for the classical Newton's method. The polynomiograph generation times are also given in Figure 2. The time decreases as the dynamics controlled by the acceleration constant increases.
Polynomiographs of the Picard iteration for low value of acceleration constant (η = 0.1) and varying ω are presented in Figure 3. The increase in the inertia weight results in the increase in particle dynamics. Both too small and too high particle dynamics cause the increase in the polynomiographs creation time. The shortest time was obtained for Figure 3b. Every change of dynamics creates a new image.
The acceleration constant is equal to 0.3 for images in Figure 4. Change in particle's dynamics is realised by the increase of the inertia weight. As in the previous example, the proper selection of particle's dynamics minimises the creation time of the polynomiograph (see Figure 4b). The increase in the particle dynamics is expressively illustrated by polynomiographs.     In Figure 5 we see polynomiographs generated using the same values of the inertia weight as in Figure 4, but with higher value of the acceleration constant-η = 0.5. The increase in the inertia weight causes the increase of the time of image creation. It is the consequence of the excessive growth of particle's dynamics.

The Mann Iteration
Su and Qin in [45] discussed a strong convergence of iterations in the case of a fixed reference point. We start the examples for the Mann iteration with an example showing the behaviour of the Mann iteration implementing a fixed reference point. The results are presented in Figure 8.  [1,0] (in the best position). It causes a significant improvement in the algorithm operation near the root position. For polynomiograph presented in Figure 8d, a fixed reference point is the starting point-it results in greater efficiency in the algorithm operation near positions of the roots. A better position of the reference point can significantly improve the algorithm operation-however, it is impossible to determine its best position. In the proposed methods, instead of using a fixed reference point we use the locally best position of the reference point. This position is modified when the new position of the reference point is better than the previous one. The Mann iteration allows to compare the operation of algorithm I with the Algorithm I I I M. The choice of the best position of the particle will cause the change in the dynamics, it can result in a better convergence of the particle-this mechanics is used in many algorithms. Visualisation is a great tool for presenting this mechanics of algorithm operation.
Polynomiographs of the Mann iteration for ω = 0.0, η = 0.6 and varying α are presented in Figure 9. A small value of α causes a reduction in particle's dynamics. When Algorithm I I I M is used, the areas with the higher dynamics (areas of dynamically changing pattern) are narrowed (compare Figure 9a with Figure 9c). The particle dynamics for polynomiographs in Figure 10 is greater than in Figure 9. The high value of the α coefficient limits the effect of selecting the best particle's position in Algorithm I I I M. Nevertheless, in Figure 10c we observe a narrowing of areas of high dynamics and small changes in the shape of neighbouring areas. Limiting the particle's dynamics by decreasing in the value of the α coefficient affects the visualisation of the influence of the best particle position on the Algorithm I I I M operation (compare Figure 10b and Figure 10d)-similarly to Figure 9d. In the examples presented in Figure 11, the increase in particle dynamics is obtained by including inertia. The increase in particle dynamics is evident in polynomiographs in Figure 11a  In polynomiographs in Figure 13, the particle dynamics are limited by the small value of η (η = 0.3). The changes in dynamics caused by the Algorithm I I I M are in areas of high dynamics, the neighbourhood of these areas and places where the particle gets high velocity (the long distance from the centre of the image). This is clearly visible in images in Figure 13c,d. Polynomiographs presented in Figure 14 show the increase in dynamics due to the increase in the acceleration constant (η = 0.7). The dynamics is high for image in Figure 14a, and in Figure 14b   The inertia weight has much greater effect on particle dynamics than the acceleration constant. The dynamics presented in Figure 15 is high due to the high value of ω and η. The dynamics changes, caused by the best position of the particle, are clearly visible. 
Similarly, as in the example presented in Figure 15, the dynamics in Figure 16 are shaped by the high value of ω and η (ω = 0.6, η = 0.8). Additionally, in this case the dynamics paints some interesting images. The magnifications of the central part of selected polynomiographs of the Mann iteration and its modifications are presented in Figure 17. Changes in the particle dynamics are visualised by colourful mosaics, which can be used in design to create ornaments.

The Ishikawa and the Das-Debata Iterations
Ishikawa and Das-Debata iterations use a reference point to determine a new particle position. In the presented analysis, base Algorithm I and its three modifications are used: Algorithm I I using the best position of the reference point, the Algorithm I I I using the best position of the particle, and both these modifications in the Algorithm IV. Figure 18 shows polynomiographs of all four algorithms for two values of α (α = 0.4 and α = 0.8). The low α value limits the influence of the reference point, so images in Figure 18a and Figure 18c are very similar. Algorithm I I I using the best particle position has a significant influence on the form of the polynomiograph. It results in a strong similarity between images in Figure 18e and Figure 18g. The dynamics shown in Figure 18b is high. The influence of the best reference point is shown in Figure 18d because of the large value of α. Changes in dynamics are also visible in Figure 18f created by the algorithm using the best position of particle. Each of the algorithms has a different influence on the dynamics of the particle. These features are combined in Algorithm IV-it clearly visualises the image in Figure 18h.
A low value of β reduces the impact of the transformation operator on the dynamics of reference point creation. It has the same effect as the low value of α-for this reason Figure 19a is similar to Figure 19c,e-g. The increase in the β coefficient results in obtaining different features of the images in Figure 19b,d,f,h-the image in Figure 19h combines the features of the images in Figure 19d,f.
The dynamics presented by the polynomiographs depends on the dynamics of the creation of the reference point and its processing. The parameters ω 1 and η 1 are responsible for the dynamics of the reference point creation and the parameters ω 2 and η 2 are responsible for its processing. The change in these parameters allows to obtain the effects described above.  Polynomiographs of the Das-Debata iteration for varying ω 1 are presented in Figure 20 and for varying η 1 in Figure 21. Polynomiographs created by Algorithms I and I I are similar-it means that the influence of choosing the best reference point is small for such selected parameters of the algorithm. The influence of the best position of particle has a significant effect on the form of the polynomiographs created by the Algorithm I I I-these polynomiographs have many common features with the polynomiographs created by the Algorithm IV (the influence of Algorithm I I is small).
(a) ω 1 = 0.2, t = 0.50s, (I) The parameters ω 2 and η 2 influence the processing of the reference point. In Figure 22 polynomioographs for varying ω 2 and in Figure 23 for varying η 2 are shown. For small values of ω 2 , polynomiographs presented in Figure 22a,c,e,g are similar-it means, that the operation of all the algorithms is similar. Significant increase in the value of ω 2 shows a significant influence of the reference point on the polynomiograph's creation-Algorithms I I and IV similarly create polynomiographs.
All the algorithms introduce characteristic changes in the polynomiographs shown in Figure 23. For a high value of η2, the polynomiographs created by Algorithms II and IV are similar (see Figure 23d).

The Agarwal and the Khan-Cho-Abbas Iterations
The generalised Agarwal iteration introduces three operators whose parameters can be specified independently. This allows for a wide range of changes in the particle's dynamics. In addition, the α and β parameters allow one to determine the impact of these operators on the algorithm's work. Polynomiographs of the Agarwal iteration for varying α are presented in Figure 25 and for varying β in Figure 26. An incorrect selection of these parameters can prevent the algorithm from reaching a solution in the given number of iterations, which extends the algorithm's operation time (see Figure 25e). Nevertheless, the increased particle dynamics gives the opportunity to obtain interesting patterns. Poorly selected particle dynamics can significantly increase the creation time of a polynomiograph; this case is visible in the polynomiographs in Figures 27 and 28 for Algorithms III and IV.
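The classical Agarwal (S-type) scheme is a two-step iteration with convex weights α and β; the generalised variant described above lets three independently parameterised operators enter the scheme. A minimal sketch, with T1, T2, T3 as hypothetical stand-ins for the article's three operators:

```python
def agarwal_step(z, T1, T2, T3, alpha, beta):
    """One step of a generalised Agarwal (S-type) iteration:
        y      = (1 - beta)  * z     + beta  * T1(z)
        z_next = (1 - alpha) * T2(z) + alpha * T3(y)
    With T1 = T2 = T3 this reduces to the classical Agarwal iteration."""
    y = (1 - beta) * z + beta * T1(z)
    return (1 - alpha) * T2(z) + alpha * T3(y)
```

For example, using the Newton operator of p(c) = c³ − 1 for all three operators and iterating from a point near a root drives |p(z)| towards zero.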
Depending on the choice of parameters, cases that have already been discussed for the previous iterations can be observed. A strong similarity of the polynomiographs created by Algorithms I and II, as well as III and IV, is observed in Figures 29 and 30 (ω2 = 0.1). The polynomiographs are similar for all the algorithms when the particle dynamics is low; this is observed in Figure 31. A strong similarity of the polynomiographs created by Algorithms II and IV is shown in Figures 32 and 33, which indicates a strong influence of the reference point on the polynomiographs' creation. Finally, a strong similarity of the polynomiographs created by Algorithms I and III is observed in Figure 34; it results from the strong influence of the best particle position on the polynomiographs' creation.

Operation of the Algorithms in the Selected Test Environments
The discussion conducted in the previous sections presented in detail the impact of individual factors on the operation of the proposed algorithm. The authors supplement this discussion with a presentation of the algorithms' operation for the selected test environments, which can be found in the literature and were presented at the beginning of Section 5. Figures 36-42 present polynomiographs for three test environments and the iterations discussed in the article; they allow the observations from the previous sections to be generalised. The Picard iteration is presented in Figure 36, the Mann iteration in Figure 37a,c,e, and polynomiographs realised by the base algorithm in sub-figures (a), (e), (i) of Figures 38-42. These polynomiographs are characterised by smooth boundaries of the areas with different particle dynamics. The modification of the algorithm consisting in the selection of the best position of the reference point and of the particle blurs and rags the smooth edges of the areas visible in the polynomiographs obtained for the base algorithm. The selection of the best position of the reference point implemented in Algorithm II results in a significant reduction of the small elements (details) of the patterns visible in the polynomiograph. This is a significant limitation of particle dynamics, which can be seen in the (b) sub-figures of Figures 38-42. Figure 37b,d,f and the (c) sub-figures of Figures 38-42 present polynomiographs of the algorithms selecting the best particle position (III M and III). Their operation results in a much better presentation of small details in the polynomiograph; these are the areas in which the particle would otherwise move to a worse position. Polynomiographs obtained by using Algorithm IV, i.e., with the selection of both the best reference point and the best particle position, are shown in the (d) sub-figures of Figures 38-42.
We can observe features of the polynomiographs from both the (b) and (c) sub-figures of Figures 38-42; however, the features resulting from choosing the best particle position have a dominant influence. The analysis of the changes in particle dynamics resulting from the applied algorithm can also be performed by analysing the simulation time. The best position of the reference point has a significant impact on the reduction of particle dynamics. It remains to complete the observations regarding the test environments. In the environments p(c) = c³ − 1, p(c) = c⁴ − 10c² + 9 and p(c) = c⁶ + 10c³ − 8, the particle obtains similar ranges of dynamics, whereas for the p(c) = c⁵ − c environment the particle achieves larger dynamics ranges.
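A polynomiograph over one of these test environments is obtained by mapping each point of a rectangle in the complex plane to the number of iterations needed to reach |p(z)| < ε, and then colouring by that count. The following minimal sketch uses the base Picard/Newton iteration only (the article's algorithms additionally apply the iteration schemes and best-position modifications discussed above); the grid size and bounds are illustrative:

```python
def newton_iterations(z, p, dp, max_iter=40, eps=1e-6):
    """Number of Newton (Picard) iterations needed to reach |p(z)| < eps;
    this count is what a polynomiograph maps to a colour."""
    for n in range(max_iter):
        if abs(p(z)) < eps:
            return n
        d = dp(z)
        if d == 0:
            return max_iter            # critical point: treat as non-convergent
        z = z - p(z) / d
    return max_iter

def polynomiograph(p, dp, re=(-2, 2), im=(-2, 2), width=60, height=40):
    """Grid of iteration counts over a rectangle of the complex plane."""
    img = []
    for j in range(height):
        row = []
        for i in range(width):
            z = complex(re[0] + (re[1] - re[0]) * i / (width - 1),
                        im[0] + (im[1] - im[0]) * j / (height - 1))
            row.append(newton_iterations(z, p, dp))
        img.append(row)
    return img
```

For p(c) = c³ − 1 this yields the familiar three-fold Newton basin picture once the counts are rendered with a colour map.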

Conclusions
The paper proposes a modification of Newton's method and introduces algorithms based on it. Similar approaches are present in many optimisation algorithms that require parameter tuning, e.g., the PSO algorithm. The discussion presented in the article shows the impact of the inertia weight and the acceleration constants on the operation of the proposed algorithms. Both too high and too low particle dynamics can adversely affect the algorithm's operation. Polynomiography is a tool illustrating the behaviour of the particles. It also creates artistic mosaics and allows obtaining images that can be regarded as art.
Author Contributions: All the authors have contributed equally. All authors have read and agreed to the published version of the manuscript.