Abstract
In this research study, we propose and analyze a fractional simultaneous technique for approximating all the roots of nonlinear equations. The order of convergence of the newly developed fractional Caputo-type simultaneous scheme is established through convergence analysis. Engineering-related numerical test problems are considered to demonstrate the efficiency and stability of the fractional numerical scheme in comparison with previously published iterative methods. The newly developed fractional simultaneous approach converges from randomly chosen starting guess values, demonstrating its global convergence behavior; it converges globally when all starting guess values are distinct and diverges otherwise. The total computational time, number of iterations, error graphs and maximum residual errors clearly illustrate the stability and consistency of the developed scheme. The rate of convergence increases as the fractional parameter value rises from 0.1 to 1.0.
Keywords:
computational efficiency; error graph; optimal order; simultaneous methods; computer algorithm
MSC:
65H04; 65H05; 65H17
1. Introduction
When analytical approaches are not available, iterative schemes are the only viable strategy for approximating, in a numerically stable manner, the roots of nonlinear equations of the form
$$ f(x) = 0. \qquad (1) $$
They start from an initial approximation and iteratively refine it until a satisfactory approximation of a root is obtained; the process is repeated until every root is identified. There are two types of iterative root-finding schemes: simultaneous techniques, which approximate all roots at once, and methods which approximate one root at a time (see, for example, Traub’s method [1], Jarratt’s method [2], King’s method [3], Ostrowski’s method [4], Chun et al.’s method [5], and many others). In recent years, simultaneous techniques have grown in popularity as a result of their global convergence and inherent parallelism (see, for example, the works by Weierstrass [6], Kanno [7], Proinov [8], Mir [9], Farmer [10], Nourein [11], Aberth [12], and Cholakov [13] and the references therein). On the other hand, because of intrinsic difficulties such as non-linearity and non-locality, standard analytical, semi-analytical, and classical numerical approaches are often ineffective for these equations.
In order to decrease the overall computational time, parallel numerical schemes utilize parallel computing [14] to solve nonlinear equations. This is achieved by decomposing the problem into smaller tasks, which can be executed simultaneously on multiple processors or cores. Therefore, these schemes are particularly useful when dealing with large-scale or computationally intensive engineering problems [15]. A comprehensive understanding of parallel programming techniques, algorithms, and the specific characteristics of the problem at hand is necessary for the effective implementation of parallel numerical schemes. Furthermore, the selection of a parallel scheme is often influenced by the nature of the nonlinear equations being solved, the hardware at hand, and the size of the problem. An overview of parallel numerical methods for solving nonlinear equations can be found in [16,17,18].
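As an illustration of this idea (not the authors' implementation), the short Python sketch below dispatches the mutually independent per-root corrections of one sweep of a Weierstrass-type simultaneous scheme, recalled later in Section 3, to separate processes. The test polynomial, the starting values and the choice of correction are assumptions made only for the example.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

COEFFS = [1.0, 0.0, -7.0, 6.0]          # p(x) = x^3 - 7x + 6 (illustrative polynomial)

def weierstrass_correction(args):
    """Per-root task: W_i = p(x_i) / prod_{j != i} (x_i - x_j), independent of the other W_j."""
    i, x = args
    return np.polyval(COEFFS, x[i]) / np.prod(x[i] - np.delete(x, i))

if __name__ == "__main__":
    x = np.array([0.4 + 0.9j, 1.3 - 0.6j, -2.1 + 0.5j])   # distinct approximations, one per root
    with ProcessPoolExecutor() as pool:                    # corrections evaluated in parallel
        w = list(pool.map(weierstrass_correction, [(i, x) for i in range(len(x))]))
    print(x - np.array(w))                                 # one simultaneous sweep of all roots
```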
The performance of simultaneous root-finding algorithms varies depending on the initial guess and the problem at hand, and convergence is not always guaranteed [19,20,21]. As a result, efforts have been made to develop more robust and efficient procedures. In this research, we propose highly efficient fractional numerical techniques for simultaneously approximating all the roots of nonlinear equations. Fractional simultaneous methods utilize fractional-order derivatives of the function to solve (1). Fractional calculus, which is concerned with non-integer-order derivatives and integrals, is used in many areas, including physics, engineering, and finance [22,23,24]. A comprehensive analysis of the convergence and of the computational complexity of our method is derived. The performance and global convergence behavior of the algorithm is assessed for solving some practical engineering applications by considering various factors, including CPU time, maximum computational time on random initial guess values, maximum residual error, and local computational order of convergence.
The structure of the paper is outlined as follows. After the introduction, we discuss some basic definitions in Section 2. In Section 3, parallel computing schemes are developed and analyzed to solve (1). Section 4 compares the computational aspects of newly proposed simultaneous techniques to existing methods in the literature. In Section 5, we discuss the numerical results of the newly developed scheme. The conclusion of the paper is in Section 6.
2. Some Preliminaries
In this section, we review some fundamental aspects of fractional calculus as well as the fractional iterative approach to solving nonlinear equations using Caputo-type derivatives. We note that, apart from the Caputo derivative, not all fractional-type derivatives fulfill the criteria of a fractional calculus when the order of differentiation is not a natural number.
The gamma function is described as follows [25]:
$$ \Gamma(x) = \int_0^{\infty} \tau^{\,x-1} e^{-\tau}\, d\tau, \qquad x > 0, $$
where $\Gamma$ is a generalization of the factorial function, since $\Gamma(x+1) = x\,\Gamma(x)$ and $\Gamma(1) = 1$, so that $\Gamma(n+1) = n!$ for every natural number $n$.
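A quick numerical check of these two properties (illustrative only, using Python's standard library):

```python
import math

x = 4.7
print(math.isclose(math.gamma(x + 1.0), x * math.gamma(x)))   # Gamma(x+1) = x*Gamma(x) -> True
print(math.isclose(math.gamma(6.0), math.factorial(5)))       # Gamma(6) = 5! = 120    -> True
```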
The Caputo fractional derivative of order $\alpha$ [26,27,28,29], with $n-1 < \alpha \le n$, $n \in \mathbb{N}$, is stated as:
$$ {}^{C}D_a^{\alpha} f(x) = \frac{1}{\Gamma(n-\alpha)} \int_a^{x} \frac{f^{(n)}(t)}{(x-t)^{\alpha-n+1}}\, dt, \qquad x > a, $$
where $\Gamma(\cdot)$ is the gamma function defined above.
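Since the methods considered below apply the Caputo derivative to polynomials, the following minimal sketch evaluates it term-wise, assuming a lower terminal $a = 0$ and $0 < \alpha \le 1$, using the standard rule $D^{\alpha} x^{m} = \Gamma(m+1)/\Gamma(m-\alpha+1)\, x^{m-\alpha}$ for $m \ge 1$ (constants are annihilated). The sample polynomial is illustrative.

```python
from math import gamma

def caputo_derivative_poly(coeffs, alpha, x):
    """Caputo derivative (lower terminal 0, 0 < alpha <= 1) of a polynomial at x > 0.
    Coefficients are ordered from the highest degree down to the constant term."""
    n = len(coeffs) - 1
    total = 0.0
    for k, c in enumerate(coeffs):
        m = n - k                      # exponent of this term
        if m == 0:
            continue                   # the Caputo derivative of a constant is zero
        total += c * gamma(m + 1) / gamma(m - alpha + 1) * x ** (m - alpha)
    return total

# With alpha = 1 this reduces to the ordinary derivative of x^3 - 7x + 6:
print(caputo_derivative_poly([1.0, 0.0, -7.0, 6.0], 1.0, 2.0))   # 3*2^2 - 7 = 5
print(caputo_derivative_poly([1.0, 0.0, -7.0, 6.0], 0.5, 2.0))   # fractional case
```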
Theorem 1
(Generalized Taylor formula [30,31]). Suppose that $D_a^{j\alpha} f \in C(a,b]$ for $j = 0, 1, \dots, n+1$, where $0 < \alpha \le 1$; then
$$ f(x) = \sum_{i=0}^{n} \frac{(x-a)^{i\alpha}}{\Gamma(i\alpha + 1)}\, D_a^{i\alpha} f(a) \;+\; \frac{(x-a)^{(n+1)\alpha}}{\Gamma\bigl((n+1)\alpha + 1\bigr)}\, D_a^{(n+1)\alpha} f(\xi), $$
with $a \le \xi \le x$ for all $x \in (a,b]$, where $D_a^{n\alpha} = D_a^{\alpha}\, D_a^{\alpha} \cdots D_a^{\alpha}$ ($n$ times).
Using the Caputo-type Taylor development of f around the sought root and taking the common factor out, we obtain an expansion whose coefficients are expressed through the gamma function. The corresponding Caputo-type derivative of f around the root is obtained in the same manner.
The classic Newton–Raphson technique is the most widely used method for locating a single root:
$$ x^{(k+1)} = x^{(k)} - \frac{f\bigl(x^{(k)}\bigr)}{f'\bigl(x^{(k)}\bigr)}, \qquad k = 0, 1, 2, \dots $$
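For reference, a minimal implementation of this classical single-root iteration (the example function, starting guess and tolerances are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Classical Newton-Raphson iteration for a single simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the positive root of x^2 - 2 (i.e., sqrt(2)) starting from x0 = 1.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))   # ~1.4142135623730951
```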
Akgül et al. [31], Torres-Hernandez et al. [32], Cajori [33] and Kumar et al. [34] discuss the fractional Newton method with different types of fractional derivatives. For the Caputo-type version of the classical Newton method, Candelario et al. [35] propose the following fractional scheme:
The order of convergence of this fractional Newton method, together with its error equation expressed in terms of the error of the iterates and the fractional Taylor coefficients, is established in [35].
Candelario et al. [35] also present a multipoint fractional numerical scheme for computing simple roots of nonlinear equations. The order of convergence of this scheme is 2α + 1, and the corresponding error equation is derived in [35].
3. Construction of Fractional Parallel Computing Scheme for Estimating All Distinct and Multiple Roots
The Weierstrass–Dochev method [18] is the following locally quadratically convergent scheme:
$$ x_i^{(k+1)} = x_i^{(k)} - W\bigl(x_i^{(k)}\bigr), \qquad i = 1, \dots, n, \quad k = 0, 1, 2, \dots, $$
where
$$ W\bigl(x_i^{(k)}\bigr) = \frac{f\bigl(x_i^{(k)}\bigr)}{\prod_{j \ne i} \bigl(x_i^{(k)} - x_j^{(k)}\bigr)} $$
is Weierstrass’ correction.
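A self-contained sketch of this classical Weierstrass (Durand-Kerner) iteration is given below; it is the quadratically convergent reference scheme recalled above, not the fractional SFM method, and the test polynomial and distinct complex starting guesses are illustrative.

```python
import numpy as np

def weierstrass(coeffs, x0, tol=1e-12, max_iter=100):
    """All roots of the monic polynomial `coeffs` (highest degree first) via the Weierstrass method."""
    x = np.array(x0, dtype=complex)
    for _ in range(max_iter):
        w = np.array([np.polyval(coeffs, x[i]) / np.prod(x[i] - np.delete(x, i))
                      for i in range(len(x))])
        x = x - w
        if np.max(np.abs(w)) < tol:
            break
    return x

# p(x) = x^3 - 7x + 6 = (x - 1)(x - 2)(x + 3); the starting guesses must be distinct.
print(weierstrass([1.0, 0.0, -7.0, 6.0], [0.4 + 0.9j, 1.3 - 0.6j, -2.1 + 0.5j]))
```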
In order to construct an iterative process for approximating all the multiple roots of a polynomial, let us assume a monic polynomial of degree n with roots having known multiplicities, such that
Consider the Newton correction and
This implies that
where is the exact root and is its approximation. This gives
Substituting the roots by their approximations in (19), we obtain the third-order convergent Ehrlich–Aberth method [36] for roots with known multiplicities:
where the left-hand side denotes the new approximation to the root. Instead of the simple approximation, we can apply some better approximation in order to accelerate the process and improve convergence. This can be achieved by choosing Newton’s approximation in place of the plain approximation in (22).
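For comparison with the derivation that follows, the sketch below implements the classical (integer-order) Ehrlich-Aberth iteration for simple roots in the usual Newton-correction form $x_i \leftarrow x_i - N_i / (1 - N_i \sum_{j \ne i} 1/(x_i - x_j))$; the multiplicities and the fractional SFM acceleration discussed in this section are not included, and the test data are illustrative.

```python
import numpy as np

def ehrlich_aberth(coeffs, x0, tol=1e-12, max_iter=100):
    """Classical Ehrlich-Aberth iteration for the simple roots of a polynomial."""
    dcoeffs = np.polyder(coeffs)                                   # coefficients of p'(x)
    x = np.array(x0, dtype=complex)
    for _ in range(max_iter):
        newton_corr = np.polyval(coeffs, x) / np.polyval(dcoeffs, x)   # N_i = p(x_i)/p'(x_i)
        corr = np.empty_like(x)
        for i in range(len(x)):
            s = np.sum(1.0 / (x[i] - np.delete(x, i)))
            corr[i] = newton_corr[i] / (1.0 - newton_corr[i] * s)
        x = x - corr
        if np.max(np.abs(corr)) < tol:
            break
    return x

print(ehrlich_aberth([1.0, 0.0, -7.0, 6.0], [0.4 + 0.9j, 1.3 - 0.6j, -2.1 + 0.5j]))
```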
Now, we derive a new higher-order method for the determination of all the roots of (1). Let the current values be reasonably close approximations to the respective roots of the polynomial, which means that the corresponding errors are sufficiently small quantities. Let us return to relation (19), replacing the exact roots by these approximations. We have:
Assuming that the relevant quantity is small enough, we use the development into a geometric series and obtain:
Neglecting terms of a higher order in the last relations, we obtain:
We refer to the method introduced in (30) as the SFM method. We now calculate the convergence order of the SFM method. First, we introduce some notation:
Now, we suppose the condition
where the conditions hold for each index.
Convergence Analysis: Here, we prove the following lemma:
Lemma 1.
Let the current values be reasonably close approximations of the respective roots, and let the new approximations be produced by one iterative step of the SFM method. If (32) is satisfied, then the following estimates are also true:
- (i)
- (ii)
Proof.
Using (34), we obtain:
Now, we introduce some new notations:
As
As
From (34), we have . Therefore,
and therefore,
Using Newton’s correction, we obtain:
where,
From (55), we get:
Therefore, where
and therefore,
Since the sequence is monotonically decreasing, let us estimate the corresponding absolute values from above. We have:
Since and for all
Hence, we have proved Lemma 1 (i). Now, from Equation (86), we have
which completes the proof of Lemma 1 (ii). □
Let the initial guesses be sufficiently good approximations to the roots of an algebraic polynomial f, and suppose that the approximations at each iterative step are obtained by the simultaneous SFM method. Using the conditions of Lemma 1, we now state the main convergence theorem for our SFM method.
Theorem 2.
Under the following assumptions,
the iterative SFM formula is convergent, with the order of convergence stated above.
Proof.
In Lemma 1 (i), we established estimate (86) under assumptions (32). Using the same arguments under condition (88) of the theorem, we obtain from (86):
So according to Lemma 1 (ii), we have:
We prove the theorem by mathematical induction; condition (88) implies
for every index; using this, (91) becomes
From assumptions (93), it follows that the required bounds hold for each index. Therefore, from (93), we obtain:
which shows that the quantity on the left converges to zero. Consequently, the corresponding error sequence also converges to zero for all i as the iteration index increases. Finally, from (94), it can be concluded that the SFM method has the stated order of convergence. □
4. Computational Analysis of Simultaneous Methods
The computational complexity of the simultaneous technique is dominated by its global convergence behavior when compared with a single-root-finding computer algorithm, and this determines the overall complexity of the parallel technique for solving (1). As presented in [37], the computational efficiency of an iterative method can be estimated using the efficiency index
$$ E = p^{1/\theta}, $$
where p is the order of convergence and θ is the computational cost (number of operations) per cycle, as listed in Table 1.
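A hedged sketch of how such efficiency comparisons can be computed is shown below. The percentage-ratio convention ρ(M1, M2) = (log E(M1)/log E(M2) − 1) × 100 is an assumption borrowed from common usage in the simultaneous-methods literature, and the orders and per-cycle costs are placeholders rather than the entries of Table 1.

```python
from math import log

def efficiency_index(order, cost):
    """Ostrowski-type efficiency index E = p**(1/theta)."""
    return order ** (1.0 / cost)

def percentage_ratio(order1, cost1, order2, cost2):
    """Assumed convention: rho(M1, M2) = (log E(M1) / log E(M2) - 1) * 100."""
    return (log(efficiency_index(order1, cost1)) /
            log(efficiency_index(order2, cost2)) - 1.0) * 100.0

# Placeholder orders/costs (NOT the entries of Table 1), purely to show the computation.
print(percentage_ratio(3.0, 20, 2.0, 15))
```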
Table 1.
Operations per cycle.
Figure 1a–e graphically illustrate these percentage ratios.
Figure 1.
(a–e) The computational efficiency ratios of the fractional simultaneous schemes with respect to each other for different fractional parameter values; each of the panels (a)–(e) shows the percentage efficiency ratio of one scheme with respect to another.
Here, SFM denotes the simultaneous method with fractional parameter value equal to one.
5. Numerical Outcomes
To compare the variants of our recently developed simultaneous method SFM for different fractional parameter values, we look at a few numerical test examples in this section. All calculations were completed in Maple 18 using 64-digit floating-point arithmetic. The parallel computer algorithm was terminated when the absolute error of consecutive iterations fell below a prescribed tolerance. In Tables 2–21, the numerical schemes for the fractional parameter values 0.1, 0.3, 0.5, 0.8 and 1.0 are denoted by SFM with the corresponding parameter value, and D** denotes the number of digits used in the floating-point arithmetic. In all tables, we use the following computer terminating criteria (Algorithm 1).
| Algorithm 1: The fractional numerical scheme SFM |
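Because Algorithm 1 appears only as an image in the source, the sketch below shows a generic driver implementing the stopping rule described above (a tolerance on the absolute error of consecutive iterations, plus an assumed cap on the number of iterations). The per-root update is passed in as a function, and the classical Weierstrass correction is used as a stand-in, since the exact SFM update formula is not reproduced in this text.

```python
import numpy as np

def simultaneous_solver(update, x0, tol=1e-12, max_iter=100):
    """Iterate x <- x - update(x) until max_i |correction_i| < tol or max_iter is reached.
    The paper works in 64-digit arithmetic with a much tighter tolerance; double precision
    and these defaults are assumptions made only for this sketch."""
    x = np.array(x0, dtype=complex)
    err = np.inf
    for k in range(1, max_iter + 1):
        corr = update(x)
        x = x - corr
        err = np.max(np.abs(corr))          # absolute error of consecutive iterations
        if err < tol:
            break
    return x, k, err

def weierstrass_update(coeffs):
    """Stand-in per-root update (Weierstrass correction), NOT the fractional SFM formula."""
    def update(x):
        return np.array([np.polyval(coeffs, x[i]) / np.prod(x[i] - np.delete(x, i))
                         for i in range(len(x))])
    return update

roots, iters, err = simultaneous_solver(weierstrass_update([1.0, 0.0, -7.0, 6.0]),
                                        [0.4 + 0.9j, 1.3 - 0.6j, -2.1 + 0.5j])
print(roots, iters, err)
```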
Engineering Applications
This section presents several engineering problems whose solutions are approximated by our newly developed parallel scheme SFM for different values of the fractional parameter.
Engineering Application 1: Emden–Fowler equation
The Emden–Fowler second-order nonlinear differential equation arises in various fields of physics and engineering, including fluid dynamics, heat transfer and, in particular, astrophysics, where it is used to model the structure of self-gravitating, spherically symmetric objects such as stars. The equation is named in honor of the astrophysicists Robert Emden and Ralph H. Fowler, who made significant contributions to its formulation. The general form of the Emden–Fowler equation is given by [38,39]:
Because of its nonlinearity, solving the Emden–Fowler equation is often difficult, and closed-form solutions exist only in specific cases. For a particular choice of the parameters in (97), we obtain the following nonlinear initial value problem:
Using the procedure described in [40], the numerical solution of (98) can be obtained by solving the following polynomial equation:
The Caputo-type derivative of (99) is given as:
The exact solution of (99) up to four decimal places is:
In order to determine the global convergence behavior of the parallel scheme, we generate random initial guesses using Matlab, as listed in Appendix A, Table A1. According to the results presented in Table 2, when an arbitrary starting vector is used, the SFM scheme converges to the exact zeros after 19, 17, 13, 10 and 10 iterations for the fractional parameters 0.1, 0.3, 0.5, 0.8 and 1.0, respectively. The corresponding CPU times are 2.1254, 1.0874, 1.0078, 0.0784 and 0.0078 s, as shown in Table 3. The acceleration of the convergence rate of the SFM scheme as the fractional parameter value increases from 0.1 to 1.0 can be clearly seen in Table 4. Global convergence is demonstrated by the fact that the newly developed method converges to the exact roots for randomly generated initial guess values.
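The random starting vectors are generated with Matlab in the paper (Appendix A); an equivalent NumPy sketch is shown below, where the sampling interval [−1, 1] is an assumption based on the tabulated entries and the vector length 18 simply matches the number of entries per row in Table A1.

```python
import numpy as np

rng = np.random.default_rng(seed=0)          # fixed seed only so the example is reproducible
n_roots = 18                                 # matches the 18 entries per vector in Table A1
initial_vector = rng.uniform(-1.0, 1.0, size=n_roots)   # assumed sampling interval [-1, 1]
print(np.round(initial_vector, 3))
```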
Table 2.
Experiments using random initial approximation for finding all polynomial roots simultaneously.
Table 3.
CPU-Time using random initial approximation for finding all polynomial roots.
Table 4.
Local computational order of convergence using random initial approximation.
Table 2 shows the number of iterations of the fractional simultaneous scheme SFM for the different choices of the random initial vector given in Appendix A, Table A1. Table 2 clearly shows that the number of iterations decreases as the fractional parameter value increases from 0.1 to 1.0.
Table 5 shows the maximum error (Max-Err) computed by the fractional simultaneous scheme SFM for different selections of the random initial vector given in Appendix A, Table A1 when approximating all roots of the polynomial equation used in application 1. Table 5 clearly demonstrates that, as the fractional parameter value increases from 0.1 to 1.0, the accuracy achieved by the simultaneous scheme increases significantly (Figure 2).
Table 5.
Maximum error using random initial approximation for finding all polynomial roots.
Figure 2.
Residual error of the SFM scheme for approximating all roots of the polynomial equation used in engineering application 1 for various fractional parameter values, namely 0.1, 0.3, 0.5, 0.8 and 1.0.
Table 4 shows the approximate local computational order of convergence. The approximate local computational order of convergence increases as the fractional parameter values increase from 0.1 to 1.0.
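For readers who wish to reproduce such estimates, the following sketch computes a standard three-point approximation of the computational order of convergence from consecutive errors; the exact formula used by the authors is not reproduced in this text, so this is only the usual textbook estimate with illustrative numbers.

```python
import math

def computational_order(x_prev2, x_prev1, x_curr, root):
    """Three-point estimate log(e_k/e_{k-1}) / log(e_{k-1}/e_{k-2}) of the order of convergence."""
    e2, e1, e0 = abs(x_prev2 - root), abs(x_prev1 - root), abs(x_curr - root)
    return math.log(e0 / e1) / math.log(e1 / e2)

# Errors shrinking roughly like a second-order method (illustrative numbers only):
print(computational_order(1.1, 1.01, 1.0001, 1.0))   # close to 2
```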
Table 3 shows the computational CPU time in seconds to approximate all roots of the polynomial equation used in application 1 employing the fractional simultaneous scheme.
The rate of convergence increases further when the initial guess values are chosen sufficiently close to the exact roots of (99), namely:
If we start with initial guess values that are close to the exact roots, Table 6 demonstrates that the accuracy and convergence order of the fractional simultaneous scheme improve. The accuracy of the residual error calculated by the numerical scheme also increases as the fractional parameter value is raised from 0.1 to 1.0.
Table 6.
Computation of all polynomial equation roots.
Engineering Application 2: Under Conservative Force—Mass Spring System
Let us now examine an external force acting on a vibrating mass on a spring, for instance a driving force that causes the spring support to oscillate vertically. If the mechanical system is conservative, the following nonlinear equation arises [41,42]:
The Caputo-type derivative of (102) is given as:
The exact solution up to four decimal places is written as:
To examine the global convergence behavior of the parallel scheme, we use Matlab to generate random initial guess values, as specified in Appendix A, Table A2. With an arbitrary starting vector, the SFM scheme converges to the exact zeros after 19, 16, 14, 10 and 10 iterations, as indicated in Table 7, for the fractional parameter values 0.1, 0.3, 0.5, 0.8 and 1.0, respectively. As described in Table 8, the corresponding CPU times are 3.1254, 1.0729, 1.0137, 0.0881 and 0.0141 s, respectively. Table 9 clearly illustrates how the rate of convergence of the SFM scheme accelerates as the value of the fractional parameter increases from 0.1 to 1.0. The newly developed method converges to the exact roots for randomly generated initial guess values, demonstrating its global convergence.
Table 7.
Iteration numbers using random initial approximation for finding all polynomial roots.
Table 8.
CPU-Time using random initial approximation for finding all polynomial roots.
Table 9.
Maximum error using random initial approximation for finding all polynomial roots.
Table 7 shows the number of iterations of the fractional simultaneous scheme SFM for the different random initial vectors given in Appendix A, Table A2. Table 7 clearly shows that the number of iterations decreases as the fractional parameter value increases from 0.1 to 1.0.
Table 9 shows the maximum error (Max-Err) computed by the fractional simultaneous scheme SFM for different random initial vectors given in Appendix A, Table A2 when approximating all the roots of the polynomial equation used in application 2. Table 9 clearly demonstrates that, as the fractional parameter value increases from 0.1 to 1.0, the accuracy achieved by the simultaneous scheme increases significantly (Figure 3).
Figure 3.
The residual error of the SFM scheme for approximating all roots of the polynomial equation used in engineering application 2 for various fractional parameter values, namely 0.1, 0.3, 0.5, 0.8 and 1.0.
The approximate local computational order of convergence is shown in Table 10. As the fractional parameter value increases from 0.1 to 1.0, the approximate local computational order of convergence increases.
Table 10.
Local computational order of convergence using random initial approximation.
Table 10 displays the local computational order of convergence of the fractional simultaneous scheme for approximating all roots of the polynomial equation used in application 2. The convergence rate increases further when the following values, sufficiently close to the exact roots of engineering application 2,
are chosen as the initial guess values.
Table 11 shows that the convergence order and accuracy of the fractional simultaneous scheme increase if we take the initial guess values close to the exact roots. The accuracy of the residual error computed by the numerical scheme also increases as the fractional parameter value is raised from 0.1 to 1.0.
Table 11.
Determination of all polynomial equation roots.
Engineering Application 3: Series Circuit Analogue
Consider a flexible spring hanging vertically from a rigid support with a mass m attached to its free end. The amount by which the spring elongates or stretches naturally depends on the mass; different masses will stretch the spring by different amounts. Hooke’s law states that the spring itself generates a restoring force F opposite to the direction of elongation and proportional to the amount of elongation s; that is, F = ks, where k is the spring constant. The undamped spring/mass system is mathematically modeled by the following differential equation [40,42]:
The Caputo-type derivative of (105) is given as:
The exact solution of (105) up to four decimal places is written as follows:
To examine the global convergence behavior of the parallel scheme, we use Matlab to generate random initial guess values, as specified in Appendix A, Table A3. With an arbitrary starting vector, the SFM scheme converges to the exact zeros after 19, 16, 13, 8 and 8 iterations, as indicated in Table 12, for the fractional parameter values 0.1, 0.3, 0.5, 0.8 and 1.0, respectively. As described in Table 13, the corresponding CPU times are 3.1364, 1.0701, 1.0078, 0.0874 and 0.0975 s, respectively. Table 14 clearly illustrates how the rate of convergence of the SFM scheme accelerates as the value of the fractional parameter increases from 0.1 to 1.0. The newly developed method converges to the exact roots for randomly generated initial guess values, demonstrating its global convergence.
Table 12.
Using random initial approximation for finding all polynomial roots simultaneously.
Table 13.
CPU-Time using random initial values for finding all polynomial roots.
Table 14.
Maximum error using random initial approximation for finding all polynomial roots.
Table 12 shows the number of iterations of the fractional simultaneous scheme SFM for the different random initial vectors given in Appendix A, Table A3. Table 12 clearly shows that the number of iterations decreases as the fractional parameter value increases from 0.1 to 1.0.
Table 14 shows the maximum error (Max-Err) computed by the fractional simultaneous scheme SFM for different random initial vectors given in Appendix A, Table A3 when approximating all roots of the polynomial equation used in application 3. Table 14 clearly demonstrates that, as the fractional parameter value increases from 0.1 to 1.0, the accuracy achieved by the simultaneous scheme increases significantly (Figure 4).
Figure 4.
The residual error of the SFM scheme for approximating all roots of the polynomial equation used in engineering application 3 for various fractional parameter values, namely 0.1, 0.3, 0.5, 0.8 and 1.0.
The approximate local computational order of convergence is shown in Table 15. As the fractional parameter values increase from 0.1 to 1.0, the approximate local computational order of convergence increases.
Table 15.
Local computational order of convergence using random initial approximation.
Table 13 displays the computational CPU time in seconds required to approximate all roots of the polynomial equation used in application 3 using the fractional simultaneous scheme.
The convergence rate increases further when the following values, sufficiently close to the exact roots of engineering application 3,
are chosen as the initial guess values.
Table 16 shows how the convergence order and accuracy of the fractional simultaneous scheme increase when we use initial guess values close to the exact roots. The accuracy of the residual error computed by the numerical scheme also increases as the fractional parameter value is raised from 0.1 to 1.0.
Table 16.
Determination of all polynomial equation roots.
Engineering Application 4: Hanging Object
A chain attached to an object on the ground is pulled vertically upward by constant forces against gravity, causing the following nonlinear initial value problem:
The Caputo-type derivative of (108) is given as:
The exact solution of (109) up to 4 decimal places is written as follows:
To examine the global convergence behavior of the parallel scheme, we use Matlab to generate random initial guess values, as specified in Appendix A, Table A4. With an arbitrary starting vector, the SFM scheme converges to the exact zeros after 19, 16, 14, 8 and 8 iterations, as indicated in Table 17, for the fractional parameter values 0.1, 0.3, 0.5, 0.8 and 1.0, respectively. The newly developed method converges to the exact roots for randomly generated initial guess values, demonstrating its global convergence.
Table 17.
Using random initial approximation for finding all polynomial roots simultaneously.
Table 17 shows the number of iterations of the fractional simultaneous scheme SFM for the different random initial vectors given in Appendix A, Table A4. Table 18 shows the maximum error (Max-Err) computed by the fractional simultaneous scheme SFM for different random initial vectors given in Appendix A, Table A4 when approximating all roots of the polynomial equation used in application 4. Table 18 clearly demonstrates that, as the fractional parameter value increases from 0.1 to 1.0, the accuracy achieved by the simultaneous scheme increases significantly (Figure 5). This indicates the global convergence behavior of our recently developed simultaneous scheme. Table 19 clearly illustrates how the computational order of convergence of the SFM scheme increases as the value of the fractional parameter increases from 0.1 to 1.0. As described in Table 20, the corresponding CPU times consumed are 2.1254, 1.0874, 1.0078, 0.0874 and 0.0078 s, respectively.
Table 18.
Maximum error using random initial approximation for finding all polynomial roots.
Figure 5.
The residual error of the SFM scheme for approximating all roots of the polynomial equation used in engineering application 4 for various fractional parameter values, namely 0.1, 0.3, 0.5, 0.8 and 1.0.
Table 19.
Local computational order of convergence using random initial approximation.
Table 20.
CPU-Time using random initial values for finding all polynomial roots.
The approximate local computational order of convergence is shown in Table 19. As the fractional parameter values increase from 0.1 to 1.0, the approximate local computational order of convergence increases.
Table 20 displays the computational CPU time in seconds required to approximate all roots of the polynomial equation used in application 4 using the fractional simultaneous scheme.
The convergence rate increases further when the following values, sufficiently close to the exact roots of engineering application 4,
are chosen as the initial guess values.
Table 21 shows how the convergence order and accuracy of the fractional simultaneous scheme increase when we use initial guess values close to the exact roots. The accuracy of the residual error computed by the numerical scheme also increases as the fractional parameter value is raised from 0.1 to 1.0.
Table 21.
Determination of all polynomial equation roots.
6. Conclusions
- In order to approximate all roots of nonlinear equations, a new fractional parallel approach is presented and its order of convergence is established. The global convergence behavior of the fractional parallel scheme is demonstrated using a variety of random starting estimates for the different fractional parameter values of SFM.
- The numerical results of the engineering applications reported in Tables 1–21 and Figures 1–5 clearly show the efficiency of the newly developed method in terms of CPU time, computational error, maximum residual error and local computational order of convergence (LCOC). An acceleration of the convergence rate is observed when initial approximations close to the exact roots are selected, as shown in Tables 6, 11, 16 and 21.
- In the future, higher-order parallel iterative approaches for solving (1) will be developed to handle more difficult engineering problems using fractional derivatives of Riemann–Liouville and Grünwald–Letnikov types.
Author Contributions
Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.
Funding
The work is supported by the Provincia Autonoma di Bolzano/Alto Adige – Ripartizione Innovazione, Ricerca, Università e Musei (contract nr. 19/34). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2022.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
Abbreviations
The following abbreviations are used in this article:
| SFM | Fractional parallel scheme (with the indicated fractional parameter value) |
| it | Iteration number |
| Error | Absolute error of consecutive iterations |
| Ex-Time | Computer CPU time in seconds |
| LCOC | Local computational order of convergence |
| Per-E | Percentage effectiveness |
| Ini-V | Initial vector |
| D** | Digits of floating-point arithmetic |
| CPU-Time | Computational time in seconds |
Appendix A
Table A1.
Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 1.
| [−0.160, 0.643, 0.967, 0.085, 0.967, 0.881, 0.760, 0.643, 0.874, 0.475, 0.876, −0.153, 0.392, 0.615, 0.171, 0.743, 0.643, 0.967] | |
| [0.743, 0.392, 0.655, 0.171, 0.743, 0.392, 0.855, 0.071, 0.145, 0.874, 0.775, 0.076, 0.643, 0.967, 0.085, 0.967, 0.881, 0.076] | |
| [−0.145, 0.874, 0.475, 0.876, −0.153, 0.392, 0.615, 0.171, 0.743, 0.775, 0.076, 0.3456, 0.74125, 0.643, 0.967, 0.874, 0.473, 0.145] | |
| ⋮ | ⋮ |
Table A2.
Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 2.
| [−0.760, 0.643,0.967, 0.881, 0.760, 0.643, 0.967, 0.085, 0.01451, 0.1452] | |
| [−0.153, 0.392, 0.615, 0.171, 0.743, 0.392, 0.855, 0.071, 0.4512, 0.5641] | |
| [−0.905, 0.874, 0.473, 0.076, 0.145, 0.874, 0.775, 0.076, 0.3456, 0.74125] | |
| ⋮ | ⋮ |
Table A3.
Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 3.
| [−0.760, 0.643,0.967] | |
| [0.743, 0.392, 0.855] | |
| [0.076, 0.145, 0.874] | |
| ⋮ | ⋮ |
Table A4.
Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 4.
| [−0.160, 0.643, 0.967, 0.085] | |
| [0.743, 0.392, 0.655, 0.171] | |
| [−0.145, 0.874, 0.475, 0.876] | |
| ⋮ | ⋮ |
References
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Jarratt, P. Some efficient fourth order multipoint methods for solving equations. BIT 1969, 9, 119–124. [Google Scholar] [CrossRef]
- King, R. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Ostrowski, A.M. Solution of Equation in Euclidean and Banach Space, 3rd ed.; Academic Press: New York, NY, USA, 1973. [Google Scholar]
- Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Lett. 2008, 195, 454–456. [Google Scholar] [CrossRef]
- Weierstrass, K. Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin 1891, 2, 1085–1101. [Google Scholar]
- Kanno, S.; Kjurkchiev, N.V.; Yamamoto, T. On some methods for the simultaneous determination of polynomial zeros. Japan J. Appl. Math. 1995, 13, 267–288. [Google Scholar] [CrossRef]
- Proinov, P.D.; Cholakov, S.I. Semilocal convergence of Chebyshev-like root-finding method for simultaneous approximation of polynomial zeros. Appl. Math. Comput. 2014, 236, 669–682. [Google Scholar] [CrossRef]
- Mir, N.A.; Muneer, R.; Jabeen, I. Some families of two-step simultaneous methods for determining zeros of nonlinear equations. ISRN Appl. Math. 2011, 2011, 817174. [Google Scholar] [CrossRef]
- Farmer, M.R. Computing the Zeros of Polynomials Using the Divide and Conquer Approach; Department of Computer Science and Information Systems; Birkbeck: London, UK, 2014. [Google Scholar]
- Nourein, A.W. An improvement on Nourein’s method for the simultaneous determination of the zeroes of a polynomial (an algorithm). J. Comput. Appl. Math. 1977, 3, 109–112. [Google Scholar] [CrossRef]
- Aberth, O. Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comput. 1973, 27, 339–344. [Google Scholar] [CrossRef]
- Cholakov, S.I.; Vasileva, M.T. A convergence analysis of a fourth-order method for computing all zeros of a polynomial simultaneously. J. Comput. Appl. Math. 2017, 321, 270–283. [Google Scholar] [CrossRef]
- Cosnard, M.; Fraigniaud, P. Finding the roots of a polynomial on an MIMD multicomputer. Parallel Comput. 1990, 15, 75–85. [Google Scholar] [CrossRef]
- Petković, M.S.; Petković, L.D.; Džunić, J. On an efficient method for the simultaneous approximation of polynomial multiple roots. Appl. Anal. Disc. Math. 2014, 8, 73–94. [Google Scholar] [CrossRef]
- Rafiq, N.; Shams, M.; Mir, N.A.; Gaba, Y.U. A highly efficient computer method for solving polynomial equations appearing in Engineering Problems. Math. Probl. Eng. 2023, 2021, 9826693. [Google Scholar] [CrossRef]
- Shams, M.; Rafiq, N.; Kausar, N.; Agarwal, P.; Park, C.; Mir, N.A. On iterative techniques for estimating all roots of nonlinear equation and its system with application in differential equation. Adv. Differ. Equ. 2021, 2021, 480. [Google Scholar] [CrossRef]
- Kyncheva, V.K.; Yotov, V.V.; Ivanov, S.I. Convergence of Newton, Halley and Chebyshev iterative methods as methods for simultaneous determination of multiple polynomial zeros. Appl. Numer. Math. 2017, 112, 146–154. [Google Scholar] [CrossRef]
- Nedzhibov, H. Iterative methods for simultaneous computing arbitrary number of multiple zeros of nonlinear equations. Int. J. Comp. Math. 2013, 90, 994–1007. [Google Scholar] [CrossRef]
- Sendov, B.L.; Andreev, A.; Kjurkchiev, N. Numerical solution of polynomial equations. Handb. Numer. Anal. 1994, 3, 625–778. [Google Scholar]
- Kyurkchiev, N.; Iliev, A. A general approach to methods with a sparse Jacobian for solving nonlinear systems of equations. Serdica Math. J. 2007, 33, 433–448. [Google Scholar]
- Shams, M.; Kausar, N.; Agarwal, P.; Oros, G.I. On Efficient Fractional Caputo-type Simultaneous Scheme for Finding all Roots of polynomial equations. Fractals 2023, 6, 2340075. [Google Scholar] [CrossRef]
- Dimitrov, Y.; Georgiev, S.; Todorov, V. Approximation of Caputo Fractional Derivative and Numerical Solutions of Fractional Differential Equations. Fractal Fract. 2023, 7, 750. [Google Scholar] [CrossRef]
- Shams, M.; Kausar, N.; Agarwal, P.; Shah, M.A. On family of Caputo-Type fractional numerical scheme for solving polynomial. Appl. Math. Sci. Eng. 2023, 31, 2181959. [Google Scholar] [CrossRef]
- Oliveira, D.E.C.; Tenreiro Machado, J.A. A review of definitions for fractional derivatives and integral. Math. Probl. Eng. 2014, 2014, 238459. [Google Scholar] [CrossRef]
- Oldham, K.; Spanier, J. The Fractional Calculus Theory and Applications of Differentiation and Integration to Arbitrary Order; Elsevier: Amsterdam, The Netherlands, 1974. [Google Scholar]
- Kukushkin, M.V. Abstract fractional calculus for m-accretive operators. arXiv 2019, arXiv:1901.06118. [Google Scholar] [CrossRef]
- Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach Science Publishers: Philadelphia, PA, USA, 1993. [Google Scholar]
- Shams, M.; Carpentieri, B. Efficient Inverse Fractional Neural Network-Based Simultaneous Schemes for Nonlinear Engineering Applications. Fractal. Fract. 2023, 7, 849. [Google Scholar] [CrossRef]
- Odibat, Z.M.; Shawagfeh, N.T. Generalized Taylor’s formula. Appl. Math. Comput. 2007, 186, 286–293. [Google Scholar] [CrossRef]
- Akgül, A.; Cordero, A.; Torregrosa, J.R. A fractional Newton method with 2αth-order of convergence and its stability. Appl. Math. Lett. 2019, 98, 344–351. [Google Scholar] [CrossRef]
- Torres-Hernandez, A.; Brambila-Paz, F. Sets of fractional operators and numerical estimation of the order of convergence of a family of fractional fixed-point methods. Fractal Fract. 2021, 4, 240. [Google Scholar] [CrossRef]
- Cajori, F. Historical note on the Newton-Raphson method of approximation. Am. Math. Mon. 1911, 18, 29–32. [Google Scholar] [CrossRef]
- Kumar, P.; Agrawal, O.P. An approximate method for numerical solution of fractional differential equations. Signal Process. 2006, 86, 2602–2610. [Google Scholar] [CrossRef]
- Candelario, G.; Cordero, A.; Torregrosa, J.R. Multipoint fractional iterative methods with (2α+1)th-order of convergence for solving nonlinear problems. Mathematics 2020, 8, 452. [Google Scholar] [CrossRef]
- Proinov, P.D.; Vasileva, M.T. On the convergence of high-order Ehrlich-type iterative methods for approximating all zeros of a polynomial simultaneously. J. Ineq. Appl. 2015, 2015, 336. [Google Scholar] [CrossRef]
- Chu, Y.; Rafiq, N.; Shams, M.; Akram, S.; Mir, N.A.; Kalsoom, H. Computer methodologies for the comparison of some efficient derivative free simultaneous iterative methods for finding roots of non-linear equations. Comput. Mater. Cont. 2020, 66, 275–290. [Google Scholar] [CrossRef]
- Naseem, A.; Rehman, M.A.; Abdeljawad, T. Computational methods for non-linear equations with some real-world applications and their graphical analysis. Intell. Autom. Soft Comput. 2021, 30, 1–14. [Google Scholar] [CrossRef]
- Akin-Bohner, E.; Hoffacker, J. Oscillation properties of an Emden-Fowler type equation on discrete time scales. J. Differ. Equ. Appl. 2003, 9, 603–612. [Google Scholar] [CrossRef]
- Shams, M.; Kausar, N.; Yaqoob, N.; Arif, N.; Addis, G.M. Techniques for finding analytical solution of generalized fuzzy differential equations with applications. Complexity 2023, 2023, 3000653. [Google Scholar] [CrossRef]
- Zill, D.G. Differential Equations with Boundary-Value Problems; Cengage Learning: Boston, MA, USA, 2016. [Google Scholar]
- Chapra, S. EBOOK: Applied Numerical Methods with MATLAB for Engineers and Scientists; McGraw Hill: New York, NY, USA, 2011. [Google Scholar]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
