Article

A Robust and Optimal Iterative Algorithm Employing a Weight Function for Solving Nonlinear Equations with Dynamics and Applications

by Shahid Abdullah 1,*,†, Neha Choubey 1,†, Suresh Dara 1,†, Moin-ud-Din Junjua 2,*,† and Tawseef Abdullah 3,†
1 School of Advanced Sciences and Languages, VIT Bhopal University, Kothri-Kalan, Sehore 466114, India
2 School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China
3 Department of Mechanical Engineering, National Institute of Technology, Hazratbal, Srinagar 190006, India
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2024, 13(10), 675; https://doi.org/10.3390/axioms13100675
Submission received: 3 September 2024 / Revised: 24 September 2024 / Accepted: 26 September 2024 / Published: 30 September 2024
(This article belongs to the Special Issue The Numerical Analysis and Its Application)

Abstract: This study introduces a novel iterative algorithm that achieves fourth-order convergence for solving nonlinear equations. Satisfying the Kung–Traub conjecture, the proposed technique achieves an optimal order of four with an efficiency index (I) of 1.587, requiring three function evaluations. An analysis of convergence is presented to show the optimal fourth-order convergence. To verify the theoretical results, in-depth numerical comparisons are presented for both real and complex domains. The proposed algorithm is specifically examined on a variety of polynomial functions, and it is shown by the efficient and accurate results that it outperforms many existing algorithms in terms of speed and accuracy. The study not only explores the proposed method's convergence properties, computational efficiency, and stability but also introduces a novel perspective by considering the count of black points as an indicator of a method's divergence. By analyzing the mean number of iterations necessary for methods to converge within a cycle and measuring CPU time in seconds, this research provides a holistic assessment of both the efficiency and speed of iterative methods. Notably, the analysis of basins of attraction illustrates that our proposed method has larger sets of initial points that yield convergence.

1. Introduction

In all fields of science and engineering, numerical methods have great significance for solving nonlinear equations. Implicit (nonlinear transcendental) equations generally lack analytical solutions and therefore require iterative techniques to approximate their roots. The fact that numerical approaches can handle complex and non-analytic functions is a major justification for using them to solve these equations. In general, models containing nonlinear equations are used to describe nonlinear phenomena, and they are not limited to academia but are found in a variety of fields including problems of steering and reactors [1,2], transport theory [3], economics modeling problems, neuro-physiology [4], social sciences, engineering, and business. Fluid flow, combustion, the triangulation of GPS signals, economic modeling, and kinematics [5,6,7,8] are further complex problems that lead to nonlinear equations. As a result, numerical techniques with higher orders of convergence and lower computational expense must be used to solve them. The dynamics of systems with multiple interacting components are often described by nonlinear equations, which present significant challenges for analytical solution. Iterative techniques for approximating solutions can be derived by breaking down these complicated structures into smaller, more manageable components using numerical methods. In this context, Newton's method [9] is one of the oldest methods with quadratic convergence, and it is optimal. According to the conjecture of Kung and Traub [10], the order of convergence of an iterative method without memory that uses $n$ function evaluations per iteration is at most $2^{n-1}$; methods attaining this bound are called optimal. A number of researchers have used different techniques to develop optimal iterative methods that reach this bound. For example, Kou et al. [11], Sharma et al. [12], Nadeem et al. [13], Panday et al. [14], Kansal et al. [15], Qureshi et al. [16], Jaiswal et al. [17], and Choubey et al. [18] have employed combination techniques to derive optimal higher-order iterative schemes for solving nonlinear equations.
The objective of recent studies has been either to propose modifications of existing methods or to develop new ones; in both cases, iterative techniques are designed to approximate solutions of nonlinear equations of the form
$$g(u) = 0, \tag{1}$$
where $g : D \subseteq \mathbb{R} \rightarrow \mathbb{R}$ is a continuously differentiable nonlinear function and $D$ is an open interval. As exact solutions are rarely available for such equations, iterative approaches may suffer from issues including divergence, non-convergence, slow convergence, inefficiency, and outright failure. As a result, researchers worldwide are committed to improving and developing higher-order multi-point iterative techniques that can overcome the above problems. Multi-point techniques theoretically outperform one-point techniques in terms of effectiveness and system compatibility.
In this paper, a two-step, fourth-order, optimal iterative method is developed for solving nonlinear equations. To achieve the optimal convergence order, the proposed method requires just one function evaluation and two evaluations of the first derivative per iteration. The method also has a relatively straightforward structure. Furthermore, we state a main theorem that establishes fourth-order convergence for simple roots. Additionally, we demonstrate the numerical effectiveness of our approach on real-world examples such as a quarter car suspension model, a hanging object, a series circuit analogue problem, a vertical deflection (civil engineering) problem, a chemical engineering problem and a blood rheology model. We provide numerical and graphical examples to further highlight how our approach behaves in contrast to the existing methods.
The rest of the paper is structured as follows: After the introduction, Section 2 presents a concise overview of various existing methods documented in the literature. Section 3 details the development and convergence analysis of our proposed fourth-order method. Section 4 delves into the stability analysis of specific methods using the basins of attraction technique. In Section 5, we present numerical results for certain methods in the real domain, along with comparative analyses. Finally, Section 6 concludes the paper.

2. Brief Overview of Some Existing Methods

Before proceeding with the discussion and derivation of the proposed method, we will first review some of the frequently used methods found in the literature. These established methods will later serve as benchmarks to compare against the results obtained from our proposed method. By doing so, we aim to highlight the effectiveness and potential advantages of our new approach. Additionally, we will examine their respective performance metrics and computational costs to provide a comprehensive evaluation. In a nutshell, we use the following fourth-order optimal methods.
King [19,20] proposed the following two-point iterative technique with optimal fourth-order convergence, requiring only three function evaluations per iteration:
$$v_n = u_n - \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = v_n - \frac{g(v_n)}{g'(u_n)} \cdot \frac{g(u_n) + \beta\, g(v_n)}{g(u_n) + (\beta - 2)\, g(v_n)}. \tag{2}$$
Kou et al. [11] presented a fourth-order iterative method obtained by composing the Newton–Steffensen method [21] and the Potra–Pták method [22]. Their method, denoted JH4, is given by
$$v_n = u_n - \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \theta\, \frac{g(u_n) + g(v_n)}{g'(u_n)} - (1 - \theta)\, \frac{g(u_n)^2}{g'(u_n)\left(g(u_n) - g(v_n)\right)}. \tag{3}$$
In 2012, Chun et al. [23] developed a fourth-order optimal iterative method, denoted CH4:
$$v_n = u_n - \frac{2\, g(u_n)}{3\, g'(u_n)}, \qquad u_{n+1} = u_n - \frac{16\, g(u_n)\, g'(u_n)}{-5\, g'(u_n)^2 + 30\, g'(u_n)\, g'(v_n) - 9\, g'(v_n)^2}. \tag{4}$$
In 2015, Junjua et al. [24] proposed the following fourth-order optimal iterative algorithm, denoted JT4:
$$v_n = u_n - \frac{2}{3}\, \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \frac{g(u_n)}{g'(v_n)} \left[ 1 + \frac{1}{4}\left(\frac{g'(v_n)}{g'(u_n)} - 1\right) + \frac{3}{8}\left(\frac{g'(v_n)}{g'(u_n)} - 1\right)^{2} \right]. \tag{5}$$
In 2019, Chicharro et al. [25] devised a fourth-order optimal iterative method as follows:
$$v_n = u_n - \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \frac{g(u_n)^2 + g(u_n)\, g(v_n) + 2\, g(v_n)^2}{g(u_n)\, g'(u_n)}. \tag{6}$$
We denote scheme (6) by F C 4 .
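To make the structure of these two-step schemes concrete, the following short Python sketch applies scheme (6) (FC4) to the cubic $z^3 - 1$. It is an illustrative reimplementation based on the formula as reconstructed above, not code taken from [25]; the helper name, starting value and iteration count are our own assumptions.

```python
# Minimal illustrative sketch of one FC4 step (scheme (6)).
def fc4_step(g, dg, u):
    gu, dgu = g(u), dg(u)
    v = u - gu / dgu                                   # Newton predictor
    gv = g(v)
    return u - (gu**2 + gu * gv + 2 * gv**2) / (gu * dgu)

u = 1.5                                                # illustrative starting guess
for _ in range(5):
    u = fc4_step(lambda x: x**3 - 1, lambda x: 3 * x**2, u)
print(u)                                               # approaches the real root 1 of z^3 - 1
```

Each step evaluates only $g(u_n)$, $g(v_n)$ and $g'(u_n)$, i.e., three function evaluations, which is what makes such schemes optimal in the Kung–Traub sense.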
In 2020, Sharma et al. [26] developed a fourth-order optimal iterative method, denoted EK4:
v n = u n 2 3 g ( u n ) g ( u n ) , u n + 1 = u n 4 g ( u n ) g ( u n ) + g ( v n ) 1 + g ( u n ) g ( u n ) 3 9 16 ψ g ( u n ) 2 g ( u n ) g ( u n ) 3 ,
where ψ = g ( u n ) g ( u n ) g ( v n ) g ( u n ) .
In 2023, Qureshi et al. [16] developed an optimal fourth-order method with the help of a combination technique, denoted SQ4:
$$v_n = u_n - \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \frac{g(u_n)\left(g(u_n) - 2\, g(v_n)\right) - g(v_n)^2}{g'(u_n)\left(g(u_n) - 3\, g(v_n)\right)}. \tag{8}$$
In 2023, Panday et al. [14] proposed a fourth-order optimal iterative algorithm obtained by taking a linear combination of two existing third-order methods, denoted SP4:
$$v_n = u_n - \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \frac{g(u_n)\left(g(u_n)^2 - 3\, g(u_n)\, g(v_n) + g(v_n)^2\right)}{g'(u_n)\left(g(u_n) - 3\, g(v_n)\right)\left(g(u_n) - g(v_n)\right)}. \tag{9}$$
In 2024, Abdullah et al. [27] developed a new fourth-order root-finding algorithm satisfying the Kung–Traub conjecture, denoted SN4:
v n = u n g ( u n ) g ( u n ) , u n + 1 = u n g ( u n ) ( g ( u n ) 3 3 g ( u n ) g ( v n ) + g ( v n ) 3 ) g ( u n ) ( g ( u n ) g ( v n ) ) ( g ( u n ) 2 g ( v n ) 2 ) .
Again in 2024, Qureshi et al. [28] developed an optimal fourth-order method by merging two existing third-order algorithms with the aid of a linear combination technique, denoted QS4:
$$v_n = u_n - \frac{2}{3}\, \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \frac{3\, g(u_n)}{4}\left(\frac{1}{g'(u_n)} + \frac{3}{g'(v_n)}\right) + \frac{8\, g(u_n)}{g'(u_n) + 3\, g'(v_n)}. \tag{11}$$

3. Development and Convergence Analysis of the Proposed Method

We introduce a new fourth-order optimal method in this section. We consider an existing third-order method discussed in [29] and incorporate a weight function S ( ω n ) in the second step. The following is the formulation of the iterative expression:
$$v_n = u_n - \frac{2}{3}\, \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \frac{4\, g(u_n)}{g'(u_n) + 3\, g'(v_n)}\, S(\omega_n), \tag{12}$$
where $S(\omega_n)$ represents a weight function defined as $S(\omega_n) = \dfrac{P\,\omega_n^2 + Q\,\omega_n}{\omega_n + R}$, with $\omega_n = \dfrac{g'(v_n)}{g'(u_n)}$, and the constants $P$, $Q$ and $R$ are to be determined such that the proposed scheme (12) achieves optimal fourth-order convergence.
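Before the formal analysis, it may help to note a standard shortcut: for Jarratt-type schemes of the form (12) with $v_n = u_n - \tfrac{2}{3}\, g(u_n)/g'(u_n)$, a sufficient condition for fourth-order convergence is that the overall weight $H(\omega) = \tfrac{4}{1 + 3\omega}\, S(\omega)$ satisfies $H(1) = 1$, $H'(1) = -\tfrac{3}{4}$ and $H''(1) = \tfrac{9}{4}$. The following SymPy sketch is our own illustrative check of these conditions (not part of the paper, whose derivation proceeds by Taylor expansion in the error):

```python
# Illustrative SymPy check (not the authors' code): solve the fourth-order
# conditions for the weight S(w) = (P*w**2 + Q*w)/(w + R), w = g'(v)/g'(u).
import sympy as sp

P, Q, R, w = sp.symbols('P Q R w')
S = (P*w**2 + Q*w) / (w + R)
H = 4*S / (1 + 3*w)                                     # overall weight of the second step of (12)
conditions = [
    sp.Eq(H.subs(w, 1), 1),                             # H(1) = 1
    sp.Eq(sp.diff(H, w).subs(w, 1), sp.Rational(-3, 4)),    # H'(1) = -3/4
    sp.Eq(sp.diff(H, w, 2).subs(w, 1), sp.Rational(9, 4)),  # H''(1) = 9/4
]
print(sp.solve(conditions, [P, Q, R], dict=True))
# -> [{P: 9/25, Q: 7/25, R: -9/25}], the values used in Theorem 1 below
```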

Analysis of Convergence

The convergence analysis for the proposed iterative method is presented in the next theorem, where we have utilized the programming package Mathematica 11 to demonstrate that the convergence order of the iterative scheme (12) is four.
Theorem 1. 
Suppose β is a simple root of the nonlinear equation $g(u) = 0$, and let $g : D \subseteq \mathbb{R} \rightarrow \mathbb{R}$ be a sufficiently differentiable real function defined on an open interval $D$ containing β. If $u_0$ is sufficiently close to β and $P = \tfrac{9}{25}$, $Q = \tfrac{7}{25}$ and $R = -\tfrac{9}{25}$, then the iterative method (12) converges to β with convergence order four, and its error equation is
$$e_{n+1} = \left( \frac{11\, d_2^3}{12} - d_2\, d_3 + \frac{d_4}{9} \right) e_n^4 + O(e_n^5). \tag{13}$$
Proof. 
Let the error at the $n$th iteration be $e_n = u_n - \beta$. Expanding $g(u_n)$ and $g'(u_n)$ about β using Taylor's series yields
$$g(u_n) = g'(\beta)\left[ e_n + d_2 e_n^2 + d_3 e_n^3 + d_4 e_n^4 + O(e_n^5) \right], \tag{14}$$
and
$$g'(u_n) = g'(\beta)\left[ 1 + 2 d_2 e_n + 3 d_3 e_n^2 + 4 d_4 e_n^3 + O(e_n^4) \right], \tag{15}$$
where $d_j = \dfrac{g^{(j)}(\beta)}{j!\, g'(\beta)}$ for $j \in \mathbb{N}$. Utilizing (14) and (15), we have
$$\frac{g(u_n)}{g'(u_n)} = e_n - d_2 e_n^2 + \left(2 d_2^2 - 2 d_3\right) e_n^3 + \left(-4 d_2^3 + 7 d_2 d_3 - 3 d_4\right) e_n^4 + O(e_n^5). \tag{16}$$
Now, using (16) in the first step of (12), we obtain
$$v_n - \beta = e_n - \frac{2}{3}\, \frac{g(u_n)}{g'(u_n)} = \frac{1}{3} e_n + \frac{2}{3} d_2 e_n^2 - \frac{4}{3}\left(d_2^2 - d_3\right) e_n^3 + \frac{2}{3}\left(4 d_2^3 - 7 d_2 d_3 + 3 d_4\right) e_n^4 + O(e_n^5). \tag{17}$$
Similarly, expanding $g'(v_n)$ about β, we obtain
g ( v n ) = g ( β ) [ 1 + 2 3 d 2 e n 4 3 ( d 2 2 + d 3 ) e n 2 + ( 8 3 d 2 ( d 2 2 d 3 ) + 4 3 d 2 d 3 + 4 27 ) d 4 e n 3 ( ( 4 3 d 2 2 8 3 ( d 2 2 d 3 ) ) d 3 + 43 d 2 ( 3 d 4 + 4 d 2 3 7 d 2 d 3 ) + 5 81 d 5 ) e n 4 + O ( e n 5 ) ] ,
The expansion of the variable ω n in the weight function is as follows:
ω n = g ( v n ) g ( u n ) = 1 4 3 d 2 e n + ( 12 3 d 2 2 8 d 3 ) e n 2 ( 16 d 2 4 + ( 4 d 2 2 3 d 3 ) ( 4 3 d 2 2 + d 3 3 ) 36 d 3 d 2 2 + 3 ( 4 9 d 2 2 8 9 ( d 2 2 d 3 ) ) d 3 + 9 d 3 2 + 2 3 d 2 ( 8 d 2 2 + 12 d 2 d 3 4 d 4 ) 2 d 2 ( 8 3 d 2 ( d 2 2 d 3 ) + 4 3 d 2 d 3 + 4 27 d 4 + 15 9 2 d 2 d 4 + 4 3 d 2 ( 4 d 2 3 7 d 2 d 3 + 3 d 4 ) 400 d 5 81 ) e n 4 ,
Also, expanding the weight function $S(\omega_n)$ in powers of $e_n$ gives
S ( ω n ) = P + Q 1 + R 4 ( ( P + 2 P R + Q R ) d 2 ) 3 ( 1 + R 2 ) e n + 4 9 ( 1 + R 3 ) ( ( Q R ( 5 + 9 R ) + P ( 9 + R ( 27 + 22 R ) ) ) d 2 2 6 ( 1 + R ) ( P + 2 P R + Q R ) c 3 ) e n 2 + + 1 81 ( 1 + R ) 5 4 ( 4 ( Q R ( 2 + 3 R ) ( 1 + 15 R ( 2 + 3 R ) ) + P ( 135 + R ( 675 + R ( 1348 + 99 R ( 13 + 5 R ) ) ) ) ) d 2 4 9 ( 1 + R ) ( Q R ( 15 + R ( 94 + 111 R ) ) + P ( 111 + R ( 444 + 7 R ( 93 + 50 R ) ) ) ) d 2 2 d 3 + ( 1 + R ) 2 ( Q R ( 155 + 363 R ) + P ( 363 + R ( 1089 + 934 R ) ) ) d 2 d 4 + 4 ( 1 + R ) 2 ( 18 ( Q R ( 1 + 3 R ) + P ( 3 + R ( 9 + 8 R ) ) ) d 3 2 25 ( 1 + R ) ( P + 2 P R + Q R ) d 5 ) ) e n 4 + O ( e n 5 ) .
From (14), (15) and (18), we have
$$\frac{4\, g(u_n)}{g'(u_n) + 3\, g'(v_n)} = e_n - d_2^2 e_n^3 + \left(3 d_2^3 - 3 d_2 d_3 - \frac{1}{9} d_4\right) e_n^4 + O(e_n^5). \tag{21}$$
Now, by substituting (20) and (21) in the second step of (12), we obtain
e n + 1 = 1 P + Q 1 + R e n + 4 3 ( Q + P R + 2 Q R ) d 2 e n 2 ( 1 + R ) 2 1 9 1 ( 1 + R ) 3 ( ( ( P ( 9 + R ( 2 + 27 R ) ) + Q ( 27 + R ( 90 + 79 R ) ) ) d 2 2 ) + 24 ( 1 + R ) ( Q + P R + 2 Q R ) d 3 ) e n 3 + 1 27 ( 1 + R 4 ) ( ( P ( 81 + R ( 215 + 9 R ( 3 + 19 R ) ) ) + Q ( 171 + R ( 765 + R ( 1241 + 711 R ) ) ) ) d 2 3 + 3 ( 1 + R ) ( P ( 27 + R ( 2 + 93 R ) ) + Q ( 93 + R ( 306 + 277 R ) ) ) d 2 d 3 ) + ( 1 + R ) 2 ( P ( 3 + 107 R ) + Q ( 107 + 211 R ) d 4 ) e n 4 .
To eliminate the terms involving $e_n$, $e_n^2$ and $e_n^3$ in (22), we have to solve the following system of equations:
( 1 + R ) ( P + Q ) = 0 4 ( Q + P R + 2 Q R ) = 0 ( P ( 9 + R ( 2 + 27 R ) ) + Q ( 27 + R ( 90 + 79 R ) ) ) = 0
By solving the system above, we obtain the values $P = \tfrac{9}{25}$, $Q = \tfrac{7}{25}$ and $R = -\tfrac{9}{25}$. Substituting these values into Equation (22), we obtain the required error Equation (13) for the iterative scheme (12). □
Now, by substituting the values of the constants into the weight function and performing some simplifications on (12), we obtain the following optimal fourth-order iterative scheme:
$$v_n = u_n - \frac{2}{3}\, \frac{g(u_n)}{g'(u_n)}, \qquad u_{n+1} = u_n - \frac{4\, g(u_n)\, g'(v_n)\left(7\, g'(u_n) + 9\, g'(v_n)\right)}{g'(u_n)\left(25\, g'(v_n) - 9\, g'(u_n)\right)\left(g'(u_n) + 3\, g'(v_n)\right)}. \tag{24}$$
We refer to our new technique (24) as SH4, keeping in mind that each iteration requires three function evaluations.
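For readers who want to experiment with the new scheme, the following is a minimal Python sketch of SH4 based on formula (24) as written above. The helper name, stopping rule and tolerance are our own illustrative choices; the computations reported in Section 5 were instead carried out in Mathematica 11 with multi-precision arithmetic.

```python
# Minimal illustrative sketch of the proposed SH4 iteration (scheme (24)).
def sh4(g, dg, u0, tol=1e-12, max_iter=50):
    """Iterate scheme (24): v_n = u_n - (2/3) g/g', then the SH4 corrector."""
    u = u0
    for n in range(max_iter):
        gu, dgu = g(u), dg(u)
        if abs(gu) < tol:                        # stopping rule (an assumption)
            return u, n
        v = u - (2.0 / 3.0) * gu / dgu           # first (Jarratt-type) step
        dgv = dg(v)
        num = 4.0 * gu * dgv * (7.0 * dgu + 9.0 * dgv)
        den = dgu * (25.0 * dgv - 9.0 * dgu) * (dgu + 3.0 * dgv)
        u = u - num / den                        # second step of (24)
    return u, max_iter

# Example: the cubic g_5(u) = u^3 + 3.6 u^2 - 36.4 from Problem 5 (sign conventions as in (40))
root, iters = sh4(lambda u: u**3 + 3.6 * u**2 - 36.4,
                  lambda u: 3 * u**2 + 7.2 * u, 5.2)
print(root, iters)                               # converges to the real root near 2.4
```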

4. Basins of Attraction

In this section, we investigate the dynamical properties of several fourth-order iterative techniques for solving the nonlinear equation $g(z) = 0$, where $g : \mathbb{C} \rightarrow \mathbb{C}$ is defined on the complex plane. In 1879, Cayley initiated this line of exploration by studying Newton's technique applied to a quadratic polynomial $g(z)$. Stewart [30] later presented a study of basins of attraction for the visual comparison of different iteration techniques. He displayed the attraction basins of the roots of a polynomial obtained by different iterative procedures and compared Newton's scheme with numerous other methods of different orders, such as Laguerre's method, Popovski's method and Halley's method.
The basins of attraction technique is a way to illustrate how different initial points impact the iterative method's behavior. An iterative method can converge to the required root, regardless of its order of convergence, provided an appropriate initial estimate is given. Therefore, evaluating an iterative method's stability using one or a few initial estimates on different test functions does not yield a complete picture of its stability. The basins of attraction of the roots allow us to compare the convergence regions of different iterative techniques. An iterative technique which produces fewer divergent (black) points and wider regions of convergence is regarded as stable and reliable.
In order to plot the basins of attraction, we select a square region $[-2, 2] \times [-2, 2] \subset \mathbb{C}$ in the complex plane, labeled as I, that encompasses all of the roots of the complex polynomial $g(z)$ under consideration. For a given initial point $z_0 \in I$, distinct colors are allocated to the different roots to which an iterative method converges. The presence of black-colored dots in the images indicates that the method did not succeed in finding a solution within the specified conditions, namely the maximum number of iterations and the predetermined error threshold. The color brightness varies according to the number of iterations needed for convergence; a darker color indicates that more iterations are needed for the iterative technique to converge, while a brighter color indicates that fewer iterations are needed. We assume a maximum of 15 iterations and a tolerance of $10^{-3}$ in this section. The number of black points reported in column 3 of Table 1, Table 2, Table 3 and Table 4 shows that non-convergent points have a significant impact on the average number of iterations per point, because they always require the maximum number of iterations allowed. Column 4 displays the average number of iterations per point, while column 5 indicates the time taken to arrive at the solution.
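The procedure just described translates directly into code. The sketch below is an illustrative NumPy/Matplotlib implementation of ours, not the authors' program: it colours each starting point of a uniform grid on $[-2, 2] \times [-2, 2]$ by the root to which SH4 converges, counts the black (non-convergent) points, and reports the mean number of iterations per point. The grid resolution and the use of scheme (24) for the iteration are assumptions.

```python
# Illustrative basin-of-attraction sketch for the proposed SH4 scheme (24).
import numpy as np
import matplotlib.pyplot as plt

def sh4_step(z, g, dg):
    """One step of scheme (24), applied elementwise to a complex grid."""
    gu, dgu = g(z), dg(z)
    v = z - (2.0 / 3.0) * gu / dgu
    dgv = dg(v)
    num = 4.0 * gu * dgv * (7.0 * dgu + 9.0 * dgv)
    den = dgu * (25.0 * dgv - 9.0 * dgu) * (dgu + 3.0 * dgv)
    return z - num / den

def basin(g, dg, roots, n=400, box=2.0, max_iter=15, tol=1e-3):
    """Root index per grid point (-1 = black / non-convergent) and iteration counts."""
    x = np.linspace(-box, box, n)
    z = x[None, :] + 1j * x[:, None]
    idx = -np.ones(z.shape, dtype=int)
    iters = np.full(z.shape, max_iter)
    for k in range(max_iter):
        z = sh4_step(z, g, dg)
        for r, root in enumerate(roots):
            hit = (np.abs(z - root) < tol) & (idx < 0)
            idx[hit] = r
            iters[hit] = k + 1
    return idx, iters

# Example 1: p1(z) = z^3 - 1 (the 400 x 400 grid size is an assumption)
g, dg = lambda z: z**3 - 1, lambda z: 3 * z**2
roots = [1.0, np.exp(2j * np.pi / 3), np.exp(4j * np.pi / 3)]
idx, iters = basin(g, dg, roots)
print("black points:", int((idx < 0).sum()), " mean iterations:", iters.mean())
plt.imshow(idx, extent=[-2, 2, -2, 2], origin="lower"); plt.show()
```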
Example 1. 
Consider the nonlinear function
$$p_1(z) = z^3 - 1, \tag{25}$$
whose roots are the three cube roots of unity, $1$ and $\tfrac{-1 \pm \sqrt{3}\,\iota}{2}$. The basins are depicted in Figure 1. The methods QS4, SQ4, SP4, SN4, CH4, EK4, FC4, JH4 and JT4 of the same order were compared with the new iterative approach SH4. One can observe from Table 1 that our proposed method produces fewer black points as compared to the others. Furthermore, the mean number of iterations is lower for SH4 than for the rest of the iterative techniques.
Example 2. 
Consider the nonlinear function:
$$p_2(z) = z^4 - z + \iota. \tag{26}$$
The basins are depicted in Figure 2. The methods QS4, SQ4, SP4, SN4, CH4, EK4, FC4, JH4 and JT4 of the same order were compared with the new iterative approach SH4. One can observe from Table 2 that the basins of attraction for the proposed method SH4 contain fewer black points as compared to the others. Furthermore, the mean number of iterations is lower for SH4 than for the rest of the techniques.
Example 3. 
Consider the nonlinear function:
$$p_3(z) = z^6 - 10 z^3 - 8, \tag{27}$$
having as roots the three cube roots of $5 - \sqrt{33}$ and the three cube roots of $5 + \sqrt{33}$, namely $(5 - \sqrt{33})^{1/3}$, $(-1)^{2/3}(5 - \sqrt{33})^{1/3}$, $-(-1)^{1/3}(5 - \sqrt{33})^{1/3}$, $(5 + \sqrt{33})^{1/3}$, $(-1)^{2/3}(5 + \sqrt{33})^{1/3}$ and $-(-1)^{1/3}(5 + \sqrt{33})^{1/3}$. The basins are depicted in Figure 3. The methods QS4, SQ4, SP4, SN4, CH4, EK4, FC4, JH4 and JT4 of the same order were compared with the new iterative approach SH4. One can observe from Table 3 that the basins of attraction for the proposed method SH4 contain fewer black points as compared to the others. Furthermore, the mean number of iterations is lower for SH4 than for the rest of the techniques.
Example 4. 
Consider the nonlinear function:
$$p_4(z) = (z^5 + 10)(10 z^5 - 1), \tag{28}$$
having roots $1.28221 \pm 0.931577\,\iota$, $-0.489759 \pm 1.50732\,\iota$, $-1.58489$, $0.630957$, $0.194977 \pm 0.600076\,\iota$ and $-0.510455 \pm 0.370867\,\iota$. The basins are depicted in Figure 4. The methods QS4, SQ4, SP4, SN4, CH4, EK4, FC4, JH4 and JT4 of the fourth order are compared with the new iterative approach SH4. One can observe from Table 4 that the basins of attraction for the proposed method SH4 have fewer black points as compared to the others. Furthermore, the mean number of iterations is lower for SH4 than for the rest of the techniques.
Moreover, Figure 5 and Figure 6 depict the graphical comparison among various iterative methods based on the number of black points and the mean number of iterations required for convergence, respectively.
From Table 1, Table 2, Table 3 and Table 4 and Figure 1, Figure 2, Figure 3 and Figure 4, it is evident that our proposed method SH4 excels by demonstrating larger basins of attraction and fewer divergent points compared to the other methods. Furthermore, it is observed from Table 3 and Table 4 and Figure 3 and Figure 4 that, for Examples 3 and 4, most of the recently proposed optimal iterative algorithms struggle to converge on these more complicated polynomials, producing a large number of black dots and a small area of convergence, while the presented method remains effective in these circumstances.

5. Numerical Experiments

In this section, we present a numerical comparison of several iterative methods for solving some application-oriented nonlinear equations. To evaluate the performance of our recently developed iterative approach and to demonstrate its applications, some real-life application-based problems and their numerical solutions obtained by applying several iterative schemes are presented.
The programming package Mathematica 11 was used for all numerical computations, and the results were computed with 2000 significant digits to obtain accurate values. Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 show the absolute residual error $|g(u_n)|$, the absolute error in the approximation of the root $(|u_n - \beta|)$ for the initial three iterations, the computational order of convergence (COC) and the CPU time in seconds for the different methods. The CPU time (measured in seconds) required by each method is computed with respect to the stopping criterion $|g(u_n)| \leq 10^{-1000}$. The computational order of convergence (COC) is estimated by the formula [19]:
$$\mathrm{COC} \approx \frac{\ln\left|g(u_n)/g(u_{n-1})\right|}{\ln\left|g(u_{n-1})/g(u_{n-2})\right|}. \tag{29}$$
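As a small illustration (our own sketch, not the authors' Mathematica code), the COC of Equation (29) can be estimated from three consecutive residuals as follows:

```python
import math

def coc(g_n, g_nm1, g_nm2):
    """Computational order of convergence from |g(u_n)|, |g(u_{n-1})|, |g(u_{n-2})|, as in Eq. (29)."""
    return math.log(abs(g_n / g_nm1)) / math.log(abs(g_nm1 / g_nm2))

# Residuals shrinking with fourth-order behaviour (illustrative values only)
print(coc(1e-64, 1e-16, 1e-4))   # prints approximately 4.0
```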
All of the comparison results are shown in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10.
Problem 1: Quarter car suspension model [31]. The shock absorber (also called a damper) is an important component of the suspension system that controls the dynamic response of the vehicle's mass as well as the mass of the suspension system (Konieczny [32] and Pulvirenti [33]). The shock absorber's nonlinear properties make it one of the suspension system's most intricate parts. An asymmetric nonlinear hysteresis loop [34] characterizes the damping force generated by the damper. The vehicle's features are simulated in this study using a quarter-car model with two degrees of freedom. Comparing the effects of linear and nonlinear damping properties enables an investigation of the damper's impact; simple linear models are insufficient to adequately characterize the behavior of the damper. The mass motion equations are as follows [35]:
$$m_p \ddot{v}_p + k_p (v_p - v_u) + F = 0, \qquad m_u \ddot{v}_u - k_p (v_p - v_u) - k_\delta (v_p - v_u) - F = 0. \tag{30}$$
The spring stiffness and the tire stiffness coefficients are represented by the symbols $k_p$ and $k_\delta$, respectively; $m_p$ and $m_u$ stand for the masses above and below the springs, and $v_p$ and $v_u$ for their respective displacements. According to [31], the following polynomial is used to estimate the damping force coefficient F in Equation (30),
$$g_1(u) = 77.14\, u^4 + 23.14\, u^3 - 342.7\, u^2 + 956.7\, u - 124.5, \tag{31}$$
obtained from measurements of the acceleration, velocity, and displacement of the mass over time (refer to Figure 7), where $\beta_1 = -3.090556803$, $\beta_2 = 1.326919946 + 1.434668028\,\iota$, $\beta_3 = 0.1367428388$ and $\beta_4 = 1.326919946 - 1.434668028\,\iota$ are the exact roots of (31) up to ten decimal places.
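As an illustration of the experimental setup, the following Python sketch runs SH4 on (31) from the initial guess 0.3 used in Table 5. It relies on the mpmath library for multi-precision arithmetic; the working precision, the loop length and the coefficient signs (as reconstructed in (31)) are our own assumptions, since the paper's computations were performed in Mathematica 11.

```python
# Illustrative multi-precision run of SH4 (scheme (24)) on g_1 from (31).
from mpmath import mp, mpf

mp.dps = 300                      # working precision in decimal digits (assumption)
g  = lambda u: mpf('77.14')*u**4 + mpf('23.14')*u**3 - mpf('342.7')*u**2 + mpf('956.7')*u - mpf('124.5')
dg = lambda u: 4*mpf('77.14')*u**3 + 3*mpf('23.14')*u**2 - 2*mpf('342.7')*u + mpf('956.7')

u = mpf('0.3')                    # initial guess from Table 5
for n in range(1, 5):
    gu, dgu = g(u), dg(u)
    v = u - 2*gu/(3*dgu)
    dgv = dg(v)
    u -= 4*gu*dgv*(7*dgu + 9*dgv) / (dgu*(25*dgv - 9*dgu)*(dgu + 3*dgv))
    print(n, mp.nstr(abs(g(u)), 5))   # residual |g(u_n)| should drop with order four
```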
Table 5 provides an in-depth comparison of various fourth-order optimal algorithms applied to the quarter car suspension model described in (31). Our proposed method, SH4, demonstrates outstanding performance by achieving the smallest absolute error after the third iteration ($1.5210 \times 10^{-59}$) and the lowest final function value ($4.7069 \times 10^{-235}$), indicating remarkable accuracy in the shortest CPU time (0.611 s), showcasing its efficiency in comparison with the other methods QS4, SP4, CH4, FC4, JH4, JT4 and SN4.
Problem 2: Hanging Object. The following nonlinear initial value problem originates from a situation in which a chain tied to an object on the ground is pushed upwards vertically by constant forces opposing gravity [36]:
d 2 v d u 2 d v d u 2 + 13 u + 1 = 0 , v ( 0 ) = 0 , v ( 0 ) = e 2 1 e 2 + 1 .
By utilizing the method discussed in [37], a polynomial is employed to replicate Equation (32) as follows:
$$g_2(u) = -0.02590111180\, u^4 - 0.1066166681\, u^3 - 0.2099871708\, u^2 + 0.7615941560\, u, \tag{33}$$
where $\beta_1 = 0$, $\beta_2 = 1.6609$, $\beta_3 = -2.8886 + 3.0592\,\iota$ and $\beta_4 = -2.8886 - 3.0592\,\iota$ denote the exact solutions of (33) up to four decimal places.
Table 6 provides the comparison of various fourth-order optimal algorithms applied to the hanging object problem given by (33). The proposed method, SH4, shows better performance by achieving the smallest absolute error after the third iteration ($1.0139 \times 10^{-63}$) and the lowest final function value ($4.3490 \times 10^{-254}$) in the shortest CPU time (0.657 s), showcasing its efficiency.
Problem 3: Series circuit analogue. Suppose we have a flexible spring with a mass m hanging from its free end, suspended vertically from a solid support. The mass attached to the spring controls how much it stretches or elongates; varied masses will result in different spring lengths. In accordance with Hooke's law, a spring acts in the direction opposite to the elongation and produces a restoring force F that is exactly proportional to the amount of elongation, in short $F = ks$, where the spring constant k serves as the proportionality constant. The differential equation for the undamped spring–mass system is mathematically represented as per [37]:
$$\frac{d^2 v}{d u^2} + v^3 = 0, \qquad v(0) = 1, \quad v'(0) = 1. \tag{34}$$
By utilizing the method discussed in [37], the polynomial is employed to replicate Equation (34) as follows:
$$g_3(u) = -0.5\, u^3 - 0.5\, u^2 + u + 1. \tag{35}$$
$\beta_1 = -1.0$, $\beta_2 = 1.414213562$ and $\beta_3 = -1.414213562$ denote the exact solutions of (35) up to nine decimal places.
Table 7 provides a comparison of various fourth-order optimal algorithms applied to the series circuit analogue problem described in (35). The proposed method SH4 demonstrates outstanding performance in comparison with the other methods, QS4, SP4, CH4, FC4, JH4, JT4 and SN4, by achieving the smallest absolute error after the third iteration ($2.1625 \times 10^{-87}$) and the lowest final function value ($2.2800 \times 10^{-349}$), indicating remarkable accuracy. Moreover, SH4 provides these results in the shortest CPU time (0.625 s), showcasing its efficiency.
Problem 4: Civil Engineering Problem. A bookshelf is to be designed for books ranging in height from 8.5 inches to 11 inches, with a total shelf length of 29 inches. The shelf is made of wood with a Young's modulus of 3.66 Msi, a width of 12 inches and a thickness of 3/8 inches. The task is to determine the maximum vertical deflection of the shelf.
The mathematical equation for the deflection of the shelf can be expressed as
$$z(u) = 0.42493 \times 10^{-4}\, u^3 - 0.13533 \times 10^{-8}\, u^5 - 0.66722 \times 10^{-6}\, u^4 - 0.018507\, u. \tag{36}$$
In this example, u represents the position along the length of the shelf. Therefore, to determine the extreme deflection, we need to find $g_4(u) = \frac{dz}{du} = 0$ and perform the second derivative test; thus,
$$g_4(u) = 0.12748 \times 10^{-3}\, u^2 - 0.67665 \times 10^{-8}\, u^4 - 0.26689 \times 10^{-5}\, u^3 - 0.018507. \tag{37}$$
The critical point of the deflection z(u) occurs at the position $u = 14.5724522206$ along the length of the shelf. To classify it, we apply the second derivative test from single-variable calculus:
$$\frac{d}{du}\left(\frac{dz}{du}\right) = \frac{d^2 z}{d u^2} = 2.5496 \times 10^{-4}\, u - 8.0067 \times 10^{-6}\, u^2 - 2.7066 \times 10^{-8}\, u^3. \tag{38}$$
Now, $\left.\dfrac{d^2 z}{d u^2}\right|_{u = 14.5724522206} = 0.00193136156 > 0$.
Hence, the minimum (i.e., maximum downward) deflection of the shelf is $z(14.5724522206) = -0.16917328872$.
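The arithmetic of the second derivative test above can be verified with a few lines of Python (an illustrative check of ours, using the coefficients of (36) and (38) as reconstructed):

```python
# Illustrative check of the second-derivative test for the shelf deflection.
u_star = 14.5724522206
z   = lambda u: 0.42493e-4*u**3 - 0.13533e-8*u**5 - 0.66722e-6*u**4 - 0.018507*u
d2z = lambda u: 2.5496e-4*u - 8.0067e-6*u**2 - 2.7066e-8*u**3

print(d2z(u_star))   # ~ 0.0019314 > 0, so u_star is a local minimum of z(u)
print(z(u_star))     # ~ -0.169173, the extreme (downward) deflection of the shelf
```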
Table 8 provides an in-depth comparison of various fourth-order optimal algorithms applied to the civil engineering problem described in (37). The performance of our proposed method, SH4, is better than that of the other methods QS4, SP4, CH4, FC4, JH4, JT4 and SN4, as it produces the smallest absolute error after the third iteration ($5.5022 \times 10^{-195}$) and the lowest final function value ($2.4842 \times 10^{-786}$), indicating remarkable accuracy in the shortest CPU time (0.525 s), showcasing its efficiency.
Problem 5: Chemical Engineering Problem [27]. The concentration of $[\mathrm{H_3O^+}]$ determines the acidity of a solution containing MgOH in HCl:
$$\frac{3.64 \times 10^{-11}}{[\mathrm{H_3O^+}]^2} = [\mathrm{H_3O^+}] + 3.6 \times 10^{-4}. \tag{39}$$
The following nonlinear model is obtained by defining u as $10^4$ times the hydronium ion concentration $[\mathrm{H_3O^+}]$:
$$g_5(u) = u^3 + 3.6\, u^2 - 36.4, \tag{40}$$
where $\beta_1 = -3.0 + 2.3\,\iota$, $\beta_2 = -3.0 - 2.3\,\iota$ and $\beta_3 = 2.4$ are (approximately) the desired roots of Equation (40).
Table 9 provides the numerical comparison of various fourth-order optimal algorithms applied to the chemical engineering problem described in (40). Our proposed method, SH4, demonstrates outstanding performance by achieving the smallest absolute error after the third iteration ($4.4077 \times 10^{-90}$) and the lowest final function value ($2.2345 \times 10^{-357}$), indicating remarkable accuracy and showcasing its efficiency in comparison with the other methods considered.
Problem 6: Blood rheology model [38]. Blood rheology is the field of medical science that studies the physical and flow characteristics of blood. Blood's non-Newtonian behavior often leads to its description as a Casson fluid. A simple fluid, such as blood or water, flowing through a tube produces a velocity gradient along the tube walls, while the core of the fluid moves as a cohesive plug with little deformation, in accordance with the flow characteristics of a Casson fluid. Upon investigating the plug flow of a Casson fluid, particular attention is directed towards the following nonlinear equation:
$$g_6(u) = \frac{u^8}{441} - \frac{8 u^5}{63} - 0.05714285714\, u^4 + \frac{16 u^2}{9} - 3.624489796\, u + 0.36, \tag{41}$$
where u denotes the plug flow of the Casson fluid. One root of Equation (41) is $\beta_1 = 0.1046986\ldots$
Table 10 provides an in-depth comparison of various fourth-order optimal algorithms applied to the blood rheology model described in (41). The proposed method, SH4, demonstrates efficient performance, achieving an absolute error after the third iteration of $7.2771 \times 10^{-28}$ and a final function value of $1.2569 \times 10^{-109}$, second in accuracy only to SP4 ($1.2928 \times 10^{-30}$ and $5.7621 \times 10^{-122}$, respectively). Moreover, SH4 delivers these results in the shortest CPU time (0.625 s), showcasing its efficiency in comparison with the other methods SP4, CH4, FC4, JH4, JT4 and SN4.
Furthermore, Figure 5, Figure 6 and Figure 8 illustrate the graphical comparison of the different fourth-order optimal methods based on the percentage of black points, the mean number of iterations for the methods to converge, and the absolute error $|u_n - \beta|$ after the first three iterations, respectively. Figure 5, Figure 6 and Figure 8 show that the proposed method SH4 is highly competitive, demonstrating fast convergence towards the root in less CPU time, and possesses superior accuracy compared to other well-known methods.

6. Conclusions

We employed the weight function technique to devise a two-step, fourth-order optimal iterative method aimed at solving nonlinear equations. We presented a thorough convergence analysis and employed basins of attraction to evaluate the stability of the techniques. Our proposed algorithm performs better in terms of the errors in consecutive iterations, residual errors and CPU time measured in seconds. Additionally, the proposed method exhibits the lowest percentage of black points and requires the fewest iterations on average to converge to the required solution. Analysis of basins of attraction in the complex plane, along with numerical results in the real domain, demonstrates that the newly devised method showcases superior stability and accuracy compared to other methods of the same order.

Author Contributions

Conceptualization, S.A. and N.C.; Methodology, S.A. and N.C.; Validation, S.D., M.-u.-D.J. and N.C.; Formal Analysis, M.-u.-D.J. and T.A.; Investigation, S.D. and M.-u.-D.J.; Writing−Original Draft Preparation, S.A., N.C. and M.-u.-D.J.; Writing−Review & Editing, S.D., M.-u.-D.J. and T.A.; Visualization, S.A. and M.-u.-D.J.; Supervision, N.C. and S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Enquiries about data availability should be directed to the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tsoulos, I.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
  2. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algorithms 2010, 54, 395–409. [Google Scholar] [CrossRef]
  3. Lin, Y.; Bao, L.; Jia, X. Convergence analysis of a variant of the Newton method for solving nonlinear equations. Comput. Math. Appl. 2010, 59, 2121–2127. [Google Scholar] [CrossRef]
  4. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybern.-Part A Syst. Humans 2008, 38, 698–714. [Google Scholar] [CrossRef]
  5. Kaplan, E.D.; Hegarty, C. Understanding GPS/GNSS: Principles and Applications; Artech House: London, UK, 2017. [Google Scholar]
  6. Rouzbar, R.; Eyi, S. Reacting flow analysis of a cavity-based scramjet combustor using a Jacobian-free Newton–Krylov method. Aeronaut. J. 2018, 122, 1884–1915. [Google Scholar] [CrossRef]
  7. Nourgaliev, R.; Greene, P.; Weston, B.; Barney, R.; Anderson, A.; Khairallah, S.; Delplanque, J.P. High-order fully implicit solver for all-speed fluid dynamics: AUSM ride from nearly incompressible variable-density flows to shock dynamics. Shock Waves 2019, 29, 651–689. [Google Scholar] [CrossRef]
  8. Zhang, H.; Guo, J.; Lu, J.; Niu, J.; Li, F.; Xu, Y. The comparison between nonlinear and linear preconditioning JFNK method for transient neutronics/thermal-hydraulics coupling problem. Ann. Nucl. Energy 2019, 132, 357–368. [Google Scholar] [CrossRef]
  9. Ortega, J.M. Numerical Analysis: A Second Course; SIAM: Philadelphia, PA, USA, 1990. [Google Scholar]
  10. Kung, H.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
  11. Kou, J.; Li, Y.; Wang, X. A composite fourth-order iterative method for solving non-linear equations. Appl. Math. Comput. 2007, 184, 471–475. [Google Scholar] [CrossRef]
  12. Sharma, H.; Kansal, M.; Behl, R. An Efficient Optimal Derivative-Free Fourth-Order Method and Its Memory Variant for Non-Linear Models and Their Dynamics. Math. Comput. Appl. 2023, 28, 48. [Google Scholar] [CrossRef]
  13. Nadeem, A.; Ali, F.; He, J.H. New optimal fourth-order iterative method based on linear combination technique. Hacet. J. Math. Stat. 2021, 50, 1692–1708. [Google Scholar] [CrossRef]
  14. Panday, S.; Sharma, A.; Thangkhenpau, G. Optimal fourth and eighth-order iterative methods for non-linear equations. J. Appl. Math. Comput. 2023, 69, 953–971. [Google Scholar] [CrossRef]
  15. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. New fourth-and sixth-order classes of iterative methods for solving systems of nonlinear equations and their stability analysis. Numer. Algorithms 2021, 87, 1017–1060. [Google Scholar] [CrossRef]
  16. Qureshi, S.; Argyros, I.K.; Soomro, A.; Gdawiec, K.; Shaikh, A.A.; Hincal, E. A new optimal root-finding iterative algorithm: Local and semilocal analysis with polynomiography. Numer. Algorithms 2024, 95, 1715–1745. [Google Scholar] [CrossRef]
  17. Jaiswal, J.; Choubey, N. A new efficient optimal eighth-order iterative method for solving nonlinear equations. arXiv 2013, arXiv:1304.4702. [Google Scholar]
  18. Choubey, N.; Jaiswal, J.P. An improved optimal eighth-order iterative scheme with its dynamical behaviour. Int. J. Comput. Sci. Math. 2016, 7, 361–370. [Google Scholar] [CrossRef]
  19. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef]
  20. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  21. Sharma, J. A composite third order Newton–Steffensen method for solving nonlinear equations. Appl. Math. Comput. 2005, 169, 242–246. [Google Scholar] [CrossRef]
  22. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman Advanced Pub.: Boston, MA, USA, 1984; Volume 103. [Google Scholar]
  23. Chun, C.; Lee, M.Y.; Neta, B.; Džunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438. [Google Scholar] [CrossRef]
  24. Junjua, M.u.D.; Akram, S.; Yasmin, N.; Zafar, F. A New Jarratt-Type Fourth-Order Method for Solving System of Nonlinear Equations and Applications. J. Appl. Math. 2015, 2015, 805278. [Google Scholar] [CrossRef]
  25. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Wide stability in a new family of optimal fourth-order iterative methods. Comput. Math. Methods 2019, 1, e1023. [Google Scholar] [CrossRef]
  26. Sharma, E.; Panday, S.; Dwivedi, M. New optimal fourth order iterative method for solving nonlinear equations. Int. J. Emerg. Technol. 2020, 11, 755–758. [Google Scholar]
  27. Abdullah, S.; Choubey, N.; Dara, S. Optimal fourth-and eighth-order iterative methods for solving nonlinear equations with basins of attraction. J. Appl. Math. Comput. 2024, 70, 3477–3507. [Google Scholar] [CrossRef]
  28. Qureshi, S.; Chicharro, F.I.; Argyros, I.K.; Soomro, A.; Alahmadi, J.; Hincal, E. A New Optimal Numerical Root-Solver for Solving Systems of Nonlinear Equations Using Local, Semi-Local, and Stability Analysis. Axioms 2024, 13, 341. [Google Scholar] [CrossRef]
  29. Singh, A.; Jaiswal, J. Several new third-order and fourth-order iterative methods for solving nonlinear equations. Int. J. Eng. Math 2014, 2014, 828409. [Google Scholar] [CrossRef]
  30. Stewart, B.D. Attractor Basins of Various Root-Finding Methods. Ph.D. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2001. [Google Scholar]
  31. Barethiye, V.; Pohit, G.; Mitra, A. Analysis of a quarter car suspension system based on nonlinear shock absorber damping models. Int. J. Automot. Mech. Eng. 2017, 14, 4401–4418. [Google Scholar] [CrossRef]
  32. Konieczny, Ł. Analysis of simplifications applied in vibration damping modelling for a passive car shock absorber. Shock Vib. 2016, 2016, 6182847. [Google Scholar] [CrossRef]
  33. Pulvirenti, G.; Faria, C. Influence of Housing Wall Compliance on Shock Absorbers in the Context of Vehicle Dynamics. Proc. IOP Conf. Ser. Mater. Sci. Eng. 2017, 252, 012026. [Google Scholar] [CrossRef]
  34. Liu, Y.; Zhang, J. Nonlinear dynamic responses of twin-tube hydraulic shock absorber. Mech. Res. Commun. 2002, 29, 359–365. [Google Scholar] [CrossRef]
  35. Shams, M.; Carpentieri, B. Efficient Inverse Fractional Neural Network-Based Simultaneous Schemes for Nonlinear Engineering Applications. Fractal Fract. 2023, 7, 849. [Google Scholar] [CrossRef]
  36. Shams, M.; Carpentieri, B. On highly efficient fractional numerical method for solving nonlinear engineering models. Mathematics 2023, 11, 4914. [Google Scholar] [CrossRef]
  37. Shams, M.; Kausar, N.; Yaqoob, N.; Arif, N.; Addis, G.M. Techniques for finding analytical solution of generalized fuzzy differential equations with applications. Complexity 2023, 2023, 3000653. [Google Scholar] [CrossRef]
  38. Qureshi, S.; Soomro, A.; Shaikh, A.A.; Hincal, E.; Gokbulut, N. A Novel Multistep Iterative Technique for Models in Medical Sciences with Complex Dynamics. Comput. Math. Methods Med. 2022, 2022, 7656451. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Basins of attraction for optimal fourth-order iterative methods Q S 4 , S Q 4 , S P 4 , S N 4 , C H 4 , E K 4 , F C 4 , J H 4 , J T 4 , S H 4 generated for p 1 ( z ) .
Figure 2. Basins of attraction for optimal fourth-order iterative methods Q S 4 , S Q 4 , S P 4 , S N 4 , C H 4 , E K 4 , F C 4 , J H 4 , J T 4 , S H 4 for p 2 ( z ) .
Figure 3. Basins of attraction for optimal fourth-order iterative methods Q S 4 , S Q 4 , S P 4 , S N 4 , C H 4 , E K 4 , F C 4 , J H 4 , J T 4 , S H 4 for p 3 ( z ) .
Figure 4. Basins of attraction for optimal fourth-order iterative methods Q S 4 , S Q 4 , S P 4 , S N 4 , C H 4 , E K 4 , F C 4 , J H 4 , J T 4 , S H 4 for p 4 ( z ) .
Figure 5. Graphical comparison of optimal fourth-order methods based on the percentage of black points for p 1 ( z ) , p 2 ( z ) , p 3 ( z ) and p 4 ( z ) .
Figure 6. Graphical comparison of optimal fourth-order methods based on the mean number of iterations to converge for p 1 ( z ) , p 2 ( z ) , p 3 ( z ) and p 4 ( z ) .
Figure 7. Quarter car suspension model.
Figure 8. Comparison of the optimal fourth-order methods based on the absolute error, | u n β | , after the first three full iterations for g 1 ( u ) , g 2 ( u ) , g 3 ( u ) , g 4 ( u ) , g 5 ( u ) and g 6 ( u ) respectively.
Table 1. Comparison of optimal fourth-order Methods in terms of the number of black points, average number of iterations, and CPU time for p 1 ( z ) .
Function   Method   Count of Black Points   Average No. of Iterations   CPU Time
p 1 Q S 4 3 3.7542 27.5938
S Q 4 346 4.2615 25.5000
S P 4 1 2.7666 21.7188
S N 4 41 4.1664 27.0469
C H 4 5931 3.0868 34.5938
E K 4 13,218 2.7552 70.8750
F C 4 4058 4.7356 32.3438
J H 4 1111 4.4957 23.8750
J T 4 2514 5.1265 30.2500
S H 4 28 3.0779 23.0469
Table 2. Comparison of optimal fourth-order methods in terms of number of black points, average number of iterations and CPU time for p 2 ( z ) .
Function   Method   Count of Black Points   Average No. of Iterations   CPU Time
p 2 Q S 4 27 4.02595 36.2188
S Q 4 403 4.6777 35.0156
S P 4 0 3.7250 40.6094
S N 4 262 4.5576 35.7188
C H 4 11,210 3.8171 55.4531
E K 4 6515 3.9982 73.1875
F C 4 4869 4.7555 40.8125
J H 4 1832 4.5303 32.7031
J T 4 3957 5.1100 42.3438
S H 4 0 3.33324 32.3906
Table 3. Comparison of optimal fourth-order methods based on number of black points, average iterations, and CPU time for p 3 ( z ) .
Function   Method   Count of Black Points   Average No. of Iterations   CPU Time
p 3 Q S 4 857 4.32199 47.7500
S Q 4 34,386 1.7196 76.9375
S P 4 26,562 2.0068 71.1250
S N 4 2256 4.8844 48.8438
C H 4 14,133 3.8749 72.0313
E K 4 10,122 3.6570 103.016
F C 4 15,312 3.7646 56.7188
J H 4 9557 4.0719 46.9531
J T 4 14,512 4.1450 59.2813
S H 4 30 3.29484 40.671
Table 4. Comparison of optimal fourth-order methods based on number of black points, average iterations, and CPU time for p 4 ( z ) .
Function   Method   Count of Black Points   Average No. of Iterations   CPU Time
p 4 Q S 4 2814 6.1930 116.531
S Q 4 56,478 0.5676 203.094
S P 4 52,549 0.7642 220.672
S N 4 5701 6.5614 111.359
C H 4 17,358 4.1651 140.625
E K 4 13,256 5.5663 216.594
F C 4 16,627 5.2942 128.875
J H 4 11,668 5.5922 110.438
J T 4 16,423 5.6325 121.750
S H 4 341 4.8714 90.3594
Table 5. Numerical comparisons of several optimal fourth-order iterative methods for g 1 ( u ) .
Fun.   Method   Guess   |u_1 − β|   |u_2 − β|   |u_3 − β|   |g(u_n)|   COC   CPU
g_1   QS4   0.3   1.5589 × 10^{-3}   2.9424 × 10^{-13}   3.7308 × 10^{-52}   8.3415 × 10^{-205}   4.0000   0.689
SQ4   7.7159 × 10^{-4}   2.8797 × 10^{-14}   5.5805 × 10^{-56}   6.8079 × 10^{-220}   4.0000   0.611
SP4   1.0153 × 10^{-4}   3.0097 × 10^{-14}   2.3248 × 10^{-57}   7.1605 × 10^{-224}   4.0000   0.719
SN4   2.2088 × 10^{-3}   1.8420 × 10^{-12}   8.8962 × 10^{-49}   4.1874 × 10^{-191}   4.0000   0.735
CH4   1.7343 × 10^{-3}   6.0948 × 10^{-13}   9.2971 × 10^{-51}   4.3546 × 10^{-199}   4.0000   0.657
EK4   2.0701 × 10^{-2}   1.5548 × 10^{-7}   4.8329 × 10^{-28}   3.9028 × 10^{-107}   4.0000   0.750
FC4   3.8868 × 10^{-3}   5.3512 × 10^{-11}   1.9328 × 10^{-42}   2.8454 × 10^{-165}   4.0000   0.687
JH4   2.7209 × 10^{-3}   7.1168 × 10^{-12}   3.3372 × 10^{-46}   1.3959 × 10^{-180}   4.0000   0.766
JT4   3.48201 × 10^{-3}   1.8262 × 10^{-10}   3.7962 × 10^{-40}   6.1324 × 10^{-156}   4.0000   0.625
SH4   8.8305 × 10^{-4}   6.2194 × 10^{-15}   1.5210 × 10^{-59}   4.7069 × 10^{-235}   4.0000   0.611
Table 6. Numerical comparisons of several optimal fourth-order iterative methods for g 2 ( u ) .
Fun.   Method   Guess   |u_1 − β|   |u_2 − β|   |u_3 − β|   |g(u_n)|   COC   CPU
g_2   QS4   0.7   2.9079 × 10^{-4}   4.9839 × 10^{-16}   4.3037 × 10^{-63}   1.8224 × 10^{-251}   4.0000   0.743
SQ4   1.0780 × 10^{-3}   2.3905 × 10^{-14}   5.7600 × 10^{-57}   1.4786 × 10^{-227}   4.0000   0.703
SP4   1.0814 × 10^{-3}   5.2722 × 10^{-14}   2.9821 × 10^{-55}   2.3248 × 10^{-220}   4.0000   0.765
SN4   1.0883 × 10^{-3}   1.1262 × 10^{-13}   1.2953 × 10^{-53}   1.7260 × 10^{-213}   4.0000   0.719
CH4   2.9206 × 10^{-5}   5.5786 × 10^{-16}   7.4326 × 10^{-63}   1.7837 × 10^{-250}   4.0000   0.735
EK4   2.266 × 10^{-1}   2.6063 × 10^{-3}   1.6740 × 10^{-11}   1.8443 × 10^{-42}   4.0000   0.780
FC4   1.0978 × 10^{-3}   2.0734 × 10^{-13}   2.6504 × 10^{-52}   5.3895 × 10^{-208}   4.0000   0.735
JH4   1.0914 × 10^{-3}   1.4347 × 10^{-13}   4.2995 × 10^{-53}   2.6410 × 10^{-211}   4.0000   0.735
JT4   1.1039 × 10^{-3}   2.7385 × 10^{-13}   1.0423 × 10^{-51}   1.6660 × 10^{-205}   4.0000   0.733
SH4   2.8773 × 10^{-4}   3.7011 × 10^{-16}   1.0139 × 10^{-63}   4.3490 × 10^{-254}   4.0000   0.657
Table 7. Numerical comparisons of several optimal fourth-order iterative methods for g 3 ( u ) .
Fun.   Method   Guess   |u_1 − β|   |u_2 − β|   |u_3 − β|   |g(u_n)|   COC   CPU
g_3   QS4   0.2   2.6043 × 10^{-5}   7.5678 × 10^{-21}   5.3958 × 10^{-83}   1.5440 × 10^{-331}   4.0000   0.689
SQ4   1.4496 × 10^{-4}   3.8147 × 10^{-14}   1.8296 × 10^{-67}   1.0718 × 10^{-268}   4.0000   0.656
SP4   8.9439 × 10^{-5}   3.2859 × 10^{-18}   5.9872 × 10^{-72}   7.3071 × 10^{-287}   4.0000   0.797
SN4   3.4913 × 10^{-5}   2.7818 × 10^{-20}   1.1210 × 10^{-80}   3.2735 × 10^{-322}   4.0000   0.672
CH4   4.0307 × 10^{-5}   7.4251 × 10^{-20}   8.5503 × 10^{-79}   1.6647 × 10^{-314}   4.0000   0.640
EK4   1.1932 × 10^{-3}   2.2240 × 10^{-12}   2.6867 × 10^{-47}   6.3359 × 10^{-187}   4.0000   0.641
FC4   1.6751 × 10^{-4}   9.7517 × 10^{-17}   1.1199 × 10^{-65}   2.1570 × 10^{-261}   4.0000   0.750
JH4   7.7136 × 10^{-5}   1.9033 × 10^{-18}   7.0550 × 10^{-73}   1.4746 × 10^{-290}   4.0000   0.750
JT4   2.4733 × 10^{-4}   7.2565 × 10^{-16}   5.3770 × 10^{-62}   1.7948 × 10^{-246}   4.0000   0.679
SH4   1.6247 × 10^{-5}   6.8488 × 10^{-22}   2.1625 × 10^{-87}   2.3800 × 10^{-349}   4.0000   0.625
Table 8. Numerical comparisons of several optimal fourth-order iterative methods for g 4 ( u ) .
Fun.   Method   Guess   |u_1 − β|   |u_2 − β|   |u_3 − β|   |g(u_n)|   COC   CPU
g_4   QS4   14.5   4.8783 × 10^{-11}   7.9538 × 10^{-48}   5.6209 × 10^{-195}   2.7077 × 10^{-786}   4.0000   0.626
SQ4   5.9327 × 10^{-11}   2.2174 × 10^{-47}   4.3271 × 10^{-193}   1.2119 × 10^{-778}   4.0000   0.546
SP4   5.9396 × 10^{-11}   2.2296 × 10^{-47}   4.4264 × 10^{-193}   1.3282 × 10^{-778}   4.0000   0.656
SN4   5.9535 × 10^{-11}   2.2541 × 10^{-47}   4.6316 × 10^{-193}   1.5947 × 10^{-778}   4.0000   0.516
CH4   4.8804 × 10^{-11}   7.9704 × 10^{-48}   5.6700 × 10^{-195}   2.8045 × 10^{-786}   4.0000   0.611
EK4   2.7563 × 10^{-5}   5.7721 × 10^{-19}   1.1101 × 10^{-73}   2.9326 × 10^{-295}   4.0000   0.625
FC4   5.9743 × 10^{-11}   2.2912 × 10^{-47}   4.9565 × 10^{-193}   2.0964 × 10^{-778}   4.0000   0.594
JH4   5.9604 × 10^{-11}   2.2664 × 10^{-47}   4.7376 × 10^{-193}   1.7471 × 10^{-778}   4.0000   0.578
JT4   3.9864 × 10^{-11}   2.3160 × 10^{-47}   5.1832 × 10^{-193}   2.5111 × 10^{-778}   4.0000   0.562
SH4   4.8729 × 10^{-11}   7.9130 × 10^{-48}   5.5022 × 10^{-195}   2.4842 × 10^{-786}   4.0000   0.525
Table 9. Numerical comparisons of several optimal fourth-order iterative methods for g 5 ( u ) .
Fun.   Method   Guess   |u_1 − β|   |u_2 − β|   |u_3 − β|   |g(u_n)|   COC   CPU
g_5   QS4   5.2   2.0770 × 10^{-5}   6.1360 × 10^{-21}   4.6742 × 10^{-83}   6.2668 × 10^{-329}   4.0000   0.765
SQ4   7.7652 × 10^{-3}   1.8090 × 10^{-18}   5.3280 × 10^{-81}   4.0092 × 10^{-331}   4.0000   0.781
SP4   1.9704 × 10^{-3}   1.7339 × 10^{-20}   1.0397 × 10^{-89}   1.3442 × 10^{-382}   4.0000   0.797
SN4   2.6583 × 10^{-5}   2.0485 × 10^{-20}   7.2237 × 10^{-81}   4.4472 × 10^{-320}   4.0000   0.781
CH4   2.4864 × 10^{-5}   1.5679 × 10^{-20}   2.4787 × 10^{-81}   6.1653 × 10^{-322}   4.0000   0.733
EK4   5.8552 × 10^{-4}   1.0709 × 10^{-13}   1.1976 × 10^{-52}   7.4579 × 10^{-206}   4.0000   0.750
FC4   6.4103 × 10^{-5}   1.9154 × 10^{-18}   1.5268 × 10^{-72}   2.4541 × 10^{-286}   4.0000   0.766
JH4   3.8715 × 10^{-4}   1.4638 × 10^{-19}   2.9915 × 10^{-77}   2.0777 × 10^{-305}   4.0000   0.719
JT4   8.7417 × 10^{-5}   9.4423 × 10^{-18}   1.2855 × 10^{-69}   1.7584 × 10^{-274}   4.0000   0.751
SH4   9.6922 × 10^{-6}   1.3121 × 10^{-22}   4.4077 × 10^{-90}   2.2345 × 10^{-357}   4.0000   0.75
Table 10. Numerical comparisons of several optimal fourth-order iterative methods for g 6 ( u ) .
Fun.   Method   Guess   |u_1 − β|   |u_2 − β|   |u_3 − β|   |g(u_n)|   COC   CPU
g_6   QS4   1.1   6.6556 × 10^{-2}   4.3773 × 10^{-6}   9.5161 × 10^{-23}   1.0318 × 10^{-75}   4.0000   0.854
SQ4   1.0840 × 10^{-1}   3.3362 × 10^{-5}   2.0840 × 10^{-19}   1.0318 × 10^{-75}   4.0000   0.734
SP4   5.9738 × 10^{-2}   1.1949 × 10^{-7}   1.2928 × 10^{-30}   5.7621 × 10^{-122}   4.0000   0.844
SN4   7.6080 × 10^{-2}   9.0982 × 10^{-6}   2.1746 × 10^{-21}   2.3083 × 10^{-83}   4.0000   0.891
CH4   6.9293 × 10^{-2}   5.9909 × 10^{-6}   4.0338 × 10^{-22}   2.6966 × 10^{-86}   4.0000   0.86
EK4   6.6262 × 10^{-1}   1.1012 × 10^{-1}   1.8903 × 10^{-4}   6.7953 × 10^{-15}   4.0000   0.813
FC4   1.1441 × 10^{-1}   9.1386 × 10^{-5}   5.5981 × 10^{-17}   2.5649 × 10^{-65}   4.0000   0.750
JH4   8.6253 × 10^{-2}   2.0472 × 10^{-5}   8.4168 × 10^{-20}   7.8225 × 10^{-77}   4.0000   0.767
JT4   1.3178 × 10^{-1}   1.9959 × 10^{-4}   1.7863 × 10^{-15}   3.7315 × 10^{-59}   4.0000   0.811
SH4   3.8007 × 10^{-2}   2.6957 × 10^{-7}   7.2771 × 10^{-28}   1.2569 × 10^{-109}   4.0000   0.625
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
