1. Introduction
Solving nonlinear equations is a fundamental problem in science and engineering, with a history dating back to the early days of modern mathematics. These equations, characterized by non-trivial relationships between variables, are crucial for simulating and understanding complex natural phenomena such as biological interactions, turbulent fluid dynamics, and chaotic systems [1,2,3]. The importance of solving nonlinear equations lies in their ability to provide precise descriptions and predictions of these systems, thereby leading to significant advances across various scientific and engineering disciplines [4]. Recent developments in computational methods and the increasing complexity of modern engineering problems have heightened the need for efficient and accurate solutions to nonlinear equations. For instance, in physics, solving Maxwell's equations for electromagnetism [5] and the Navier–Stokes equations for fluid dynamics [6] is essential for understanding and predicting electromagnetic wave propagation [7] and turbulent flows [8]. These solutions are critical for designing advanced technologies in telecommunications, aerospace, and renewable energy [9,10]. In engineering, nonlinear equations are used to develop control systems that optimize performance and ensure stability in sectors such as aerospace, automotive, and manufacturing. The design of structures to withstand dynamic loads and the creation of sophisticated algorithms for digital signal processing also rely heavily on these equations [11]. In biology and medicine, nonlinear models help replicate the behavior of complex biological systems [12], enhance our understanding of brain networks [13], and improve disease prediction models [14]. Despite their significance, solving nonlinear equations remains a difficult task due to the equations' inherent complexity and the high computational resources they demand. Recent progress in numerical techniques, such as adaptive methods, machine learning algorithms, and parallel computing, offers promising routes for tackling these challenges.
In this paper, we address the solution of fractional differential equations of the form (1), where ψ is a free parameter.
Differential equations of both integer and fractional orders [15] are crucial for simulating phenomena in physical science and engineering that require precise solutions [16]. Fractional-order differential equations, for example, effectively describe the memory and hereditary characteristics of viscoelastic materials and anomalous diffusion processes [17]. Accurate solutions to these equations are critical for understanding and designing systems with complex behaviors. Solving fractional nonlinear problems requires advanced numerical iterative methods to obtain approximate solutions; see, e.g., [18,19,20]. The intrinsic non-locality of these types of models, where the derivative at a point depends on the entire history of the function, makes them notoriously challenging to solve both analytically and numerically. Exact techniques [21], analytical techniques [22,23,24], and direct numerical methods, such as explicit single-step methods [25], multi-step methods [26], and hybrid block methods [27], have significant limitations, including high computational costs, stability issues, and sensitivity to small errors.
Numerical techniques for solving such equations can be classified into two groups: those that find a single solution at a time and those that find all solutions simultaneously. Well-known methods for finding simple roots include the Newton method [28], the Halley method [29], the Chun method [30], the Ostrowski method [31], the King method [32], the Shams method [33], the Cordero method [34], the Mir method [35], the Samima method [36], and the Neta method [37]. For multi-step methods, see, for example, Refs. [38,39] and the references therein. Recent studies by Torres-Hernandez et al. [40], Akgül et al. [41], Gajori et al. [42], and Kumar et al. [43] describe fractional versions of single root-finding techniques with various fractional derivatives. These techniques are versatile and often straightforward to implement but have several significant drawbacks. While they converge rapidly near good initial guesses, they can diverge if the guess is far from the solution or if the function is complicated. They are sensitive to initial assumptions, requiring precise estimates for each root, which makes them time-consuming and computationally intensive. Evaluating both the function and its derivative increases computational costs, especially for complex functions. Additionally, distinguishing between real and complex roots can be challenging without modifications. In contrast, parallel root-finding methods offer greater stability, consistency, and global convergence compared to single root-finding techniques. They can be implemented on parallel computer architectures, utilizing multiple processes to approximate all solutions to (2) simultaneously.
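To fix ideas, the following minimal Python sketch implements the classical Newton iteration, the prototype of the single-root methods listed above; the test function and starting value are illustrative choices, not taken from the paper.

```python
def newton(g, dg, x0, tol=1e-12, maxit=50):
    """Classical Newton iteration x_{k+1} = x_k - g(x_k)/g'(x_k).

    Converges quadratically to a simple root when x0 is close enough,
    but may diverge for poor starting values -- the drawback noted above.
    """
    x = x0
    for _ in range(maxit):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative example: the positive root of g(x) = x^2 - 2.
print(newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0))  # ~1.41421356
```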
Among parallel numerical schemes, the Weierstrass–Durand–Kerner approach [44] is particularly attractive from a computational standpoint. This method is given by
$$x_i^{(k+1)} = x_i^{(k)} - w\big(x_i^{(k)}\big), \quad i = 1, \ldots, n, \qquad (3)$$
where
$$w\big(x_i^{(k)}\big) = \frac{g\big(x_i^{(k)}\big)}{\prod_{j \neq i} \big(x_i^{(k)} - x_j^{(k)}\big)}$$
is Weierstrass' correction. Method (3) has local quadratic convergence.
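As an illustration, here is a minimal Python sketch of one Weierstrass–Durand–Kerner sweep; the cubic test problem and the starting values are illustrative assumptions.

```python
import numpy as np

def weierstrass_step(x, g):
    """One Weierstrass-Durand-Kerner sweep: x_i <- x_i - w(x_i), where
    w(x_i) = g(x_i) / prod_{j != i} (x_i - x_j) is Weierstrass' correction."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        denom = np.prod([x[i] - x[j] for j in range(len(x)) if j != i])
        x_new[i] = x[i] - g(x[i]) / denom
    return x_new

# Illustrative example: approximate all three roots of g(x) = x^3 - 1 at once.
g = lambda z: z**3 - 1
x = np.array([0.4 + 0.9j, -1.0 + 0.1j, 0.8 - 0.6j])  # distinct complex guesses
for _ in range(20):
    x = weierstrass_step(x, g)
print(x)  # approximately the three cube roots of unity
```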
Nedzibov et al. [45] presented the modified Weierstrass method, also known as the inverse Weierstrass method, which likewise has quadratic convergence. Inverse parallel schemes outperform classical simultaneous methods because they handle nonlinear equations efficiently, exploiting parallel processing to accelerate convergence; they reduce computing time and improve accuracy by adapting dynamically to the specific properties of the problem. This strategy is especially useful in large-scale or complicated systems where conventional methods may be too slow or ineffective. Shams et al. [46] presented the inverse parallel scheme (6), which has local quadratic convergence.
In 1967, Ehrlich [47] introduced a third-order convergent simultaneous method, given by
$$x_i^{(k+1)} = x_i^{(k)} - \frac{1}{\dfrac{g'\big(x_i^{(k)}\big)}{g\big(x_i^{(k)}\big)} - \sum\limits_{j \neq i} \dfrac{1}{x_i^{(k)} - x_j^{(k)}}}, \quad i = 1, \ldots, n.$$
Using an appropriate correction term in (4), Petkovic et al. [48] accelerated the convergence order from three to six.
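A minimal Python sketch of one Ehrlich sweep follows; the derivative-based form matches the display above, and the test polynomial is an illustrative assumption.

```python
import numpy as np

def ehrlich_step(x, g, dg):
    """One sweep of Ehrlich's third-order simultaneous method:
    x_i <- x_i - 1 / (g'(x_i)/g(x_i) - sum_{j != i} 1/(x_i - x_j))."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        s = sum(1.0 / (x[i] - x[j]) for j in range(len(x)) if j != i)
        x_new[i] = x[i] - 1.0 / (dg(x[i]) / g(x[i]) - s)
    return x_new

# Illustrative example: g(x) = x^3 - x, with roots -1, 0, and 1.
g  = lambda z: z**3 - z
dg = lambda z: 3*z**2 - 1
x = np.array([-1.3 + 0.2j, 0.1 + 0.4j, 1.2 - 0.3j])
for _ in range(10):
    x = ehrlich_step(x, g, dg)
print(x)  # approximately [-1, 0, 1]
```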
Except for the Caputo derivative, fractional-type derivatives fail to satisfy $\mathcal{D}^{\psi}(c) = 0$ for a constant $c$ when ψ is not a natural number. Therefore, we will cover some basic ideas in fractional calculus, as well as the fractional iterative approach for solving nonlinear equations using Caputo-type derivatives.
Definition 1 (Gamma Function). The Gamma function, also known as the generalized factorial function, is defined as follows [49]:
$$\Gamma(x) = \int_{0}^{\infty} u^{x-1} e^{-u}\, du, \quad x > 0,$$
where $\Gamma(x+1) = x\,\Gamma(x)$, $\Gamma(1) = 1$, and $\Gamma(n+1) = n!$ for $n \in \mathbb{N}$.

Definition 2 (Caputo Fractional Derivative). For $m - 1 < \psi \le m$, $m \in \mathbb{N}$, the Caputo fractional derivative [50] of order ψ is defined as
$$\mathcal{D}^{\psi} g(x) = \frac{1}{\Gamma(m - \psi)} \int_{a}^{x} \frac{g^{(m)}(t)}{(x - t)^{\psi - m + 1}}\, dt,$$
where $\Gamma(\cdot)$ is the Gamma function; for the case $\psi \in (0, 1]$ considered here, $m = 1$.

Theorem 1. Suppose $\mathcal{D}^{j\psi} g \in C(a, b]$ for $j = 0, 1, \ldots, m + 1$, where $\psi \in (0, 1]$. Then, the Generalized Taylor Formula [51] is given by
$$g(x) = \sum_{j=0}^{m} \frac{\mathcal{D}^{j\psi} g(a)}{\Gamma(j\psi + 1)}\, (x - a)^{j\psi} + \frac{\mathcal{D}^{(m+1)\psi} g(\eta)}{\Gamma((m+1)\psi + 1)}\, (x - a)^{(m+1)\psi},$$
where $a \le \eta \le x$ for all $x \in (a, b]$, and $\mathcal{D}^{j\psi} = \mathcal{D}^{\psi} \cdot \mathcal{D}^{\psi} \cdots \mathcal{D}^{\psi}$ ($j$ times).

Consider the Caputo-type Taylor expansion of $g(x_k)$ near ξ. Taking $\frac{\mathcal{D}^{\psi}_{\xi} g(\xi)}{\Gamma(\psi + 1)}$ as a common factor, we have
$$g(x_k) = \frac{\mathcal{D}^{\psi}_{\xi} g(\xi)}{\Gamma(\psi + 1)} \Big[ (x_k - \xi)^{\psi} + C_2\, (x_k - \xi)^{2\psi} + C_3\, (x_k - \xi)^{3\psi} \Big] + O\big((x_k - \xi)^{4\psi}\big),$$
where
$$C_j = \frac{\Gamma(\psi + 1)}{\Gamma(j\psi + 1)} \frac{\mathcal{D}^{j\psi}_{\xi} g(\xi)}{\mathcal{D}^{\psi}_{\xi} g(\xi)}, \quad j = 2, 3, \ldots$$
The corresponding Caputo-type derivative of $g$ around ξ is
$$\mathcal{D}^{\psi}_{\xi} g(x_k) = \frac{\mathcal{D}^{\psi}_{\xi} g(\xi)}{\Gamma(\psi + 1)} \Big[ \Gamma(\psi + 1) + \frac{\Gamma(2\psi + 1)}{\Gamma(\psi + 1)}\, C_2\, (x_k - \xi)^{\psi} + \frac{\Gamma(3\psi + 1)}{\Gamma(2\psi + 1)}\, C_3\, (x_k - \xi)^{2\psi} \Big] + O\big((x_k - \xi)^{3\psi}\big).$$
These expansions are used in the convergence analysis of the proposed method.
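As a quick numerical illustration of Definition 2, the sketch below evaluates the Caputo derivative of a power function through the closed-form rule $\mathcal{D}^{\psi} x^{p} = \frac{\Gamma(p+1)}{\Gamma(p-\psi+1)}\, x^{p-\psi}$ (base point 0); the sample inputs are illustrative.

```python
from math import gamma

def caputo_power(p, psi, x):
    """Caputo derivative (base point 0) of x**p for p >= 1 and 0 < psi <= 1:
    D^psi x^p = Gamma(p+1) / Gamma(p - psi + 1) * x**(p - psi)."""
    return gamma(p + 1) / gamma(p - psi + 1) * x**(p - psi)

print(caputo_power(2, 1.0, 3.0))  # 6.0, agrees with d/dx x^2 = 2x at x = 3
print(caputo_power(2, 0.5, 3.0))  # ~7.82, the half-order derivative at x = 3
```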
Using the Caputo-type fractional derivative, Candelario et al. [52] presented the following fractional variant of the classical Newton's method:
$$x_{k+1} = x_k - \left( \Gamma(\psi + 1)\, \frac{g(x_k)}{\mathcal{D}^{\psi} g(x_k)} \right)^{1/\psi},$$
for any $\psi \in (0, 1]$. The order of convergence of this fractional Newton method is $\psi + 1$, and its error equation is
$$e_{k+1} = \frac{\Gamma(2\psi + 1) - \Gamma^{2}(\psi + 1)}{\psi\, \Gamma^{2}(\psi + 1)}\, C_2\, e_k^{\psi + 1} + O\big(e_k^{2\psi + 1}\big),$$
where $C_2 = \frac{\Gamma(\psi + 1)}{\Gamma(2\psi + 1)} \frac{\mathcal{D}^{2\psi}_{\xi} g(\xi)}{\mathcal{D}^{\psi}_{\xi} g(\xi)}$, $e_k = x_k - \xi$, and ξ is the sought root.
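The following Python sketch implements this fractional Newton iteration for a polynomial, using the term-wise Caputo power rule from the previous snippet; the test polynomial, starting value, and choice of the principal branch for the $1/\psi$ power are illustrative assumptions.

```python
from math import gamma

def caputo_poly(coeffs, psi, x):
    """Caputo derivative (base point 0, 0 < psi <= 1) of sum_k c_k x^k,
    applying D^psi x^k = Gamma(k+1)/Gamma(k-psi+1) x^(k-psi) term-wise;
    the constant term drops out since D^psi(c) = 0 for Caputo."""
    return sum(c * gamma(k + 1) / gamma(k - psi + 1) * x**(k - psi)
               for k, c in enumerate(coeffs) if k >= 1)

def fractional_newton(coeffs, x0, psi, tol=1e-10, maxit=100):
    """Caputo-type fractional Newton sketch:
    x_{k+1} = x_k - (Gamma(psi+1) g(x_k) / D^psi g(x_k))**(1/psi).
    Complex arithmetic selects the principal branch of the 1/psi power."""
    g = lambda x: sum(c * x**k for k, c in enumerate(coeffs))
    x = complex(x0)
    for _ in range(maxit):
        x -= (gamma(psi + 1) * g(x) / caputo_poly(coeffs, psi, x)) ** (1 / psi)
        if abs(g(x)) < tol:
            break
    return x

# Illustrative example: g(x) = x^2 - x (roots 0 and 1), psi = 0.9.
print(fractional_newton([0.0, -1.0, 1.0], x0=1.2, psi=0.9))  # ~1
```

For ψ = 1, the step reduces exactly to the classical Newton correction, which is a convenient sanity check on the implementation.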
The rest of the study is organized as follows: following the introduction, Section 2 investigates the construction, convergence, and stability analysis of fractional-order schemes for solving (2). Section 3 presents the development and analysis of a simultaneous method for determining all solutions to nonlinear equations. Section 4 evaluates the efficiency and stability of the proposed approach through numerical results and compares it with existing methods. Finally, Section 5 concludes the paper with a summary of findings and suggestions for future research.
2. Fractional Scheme Construction and Analysis
The fractional-order iterative method is a powerful tool for solving nonlinear equations, offering faster and more accurate convergence compared to classical algorithms. Shams et al. [53] proposed the single-step fractional iterative method given in (19); its order of convergence and the corresponding error equation (20) are established in [53]. The Caputo-type fractional version of (20) was proposed in [54] as the scheme (21); its order of convergence and error equation (22) are derived in [54].
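Since the cited results are stated through orders of convergence and error equations, a standard way to verify such orders numerically is the computational order of convergence (COC); the sketch below is a generic utility, not taken from [53,54].

```python
import numpy as np

def coc(errors):
    """Computational order of convergence from successive absolute errors
    e_k = |x_k - xi|:  rho_k = log(e_{k+1}/e_k) / log(e_k/e_{k-1})."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# Illustrative example: errors from a quadratically convergent iteration.
print(coc([1e-1, 1e-2, 1e-4, 1e-8]))  # -> approximately [2.0, 2.0]
```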
In this paper, we focus on the technique described in [55], which offers faster convergence, higher accuracy, better processing efficiency, and greater robustness than other single root-finding methods. We extend this method to handle fractional derivatives, enabling more precise modeling of systems with memory and non-local effects. The original method is given in (23). By incorporating the Caputo-type fractional derivative into (23), we propose a fractional version of the single root-finding method, in which the classical derivative is replaced by the Caputo derivative $\mathcal{D}^{\psi}$ of order ψ.
2.1. Convergence Analysis
For the iterative scheme (23), we prove the following theorem to establish its order of convergence.

Theorem 2. Let $g$ be a continuous function with fractional derivatives of order $j\psi$, $j \in \mathbb{N}$, for any $\psi \in (0, 1]$, on an open interval $\mathcal{I}$ containing the exact root ξ of $g(x) = 0$. Suppose that $\mathcal{D}^{\psi} g$ is continuous and does not vanish at ξ. Then, for a sufficiently close starting value $x_0$, the Caputo-type fractional iterative scheme converges to ξ, with the order of convergence and error equation derived in the proof below, expressed in terms of the constants $C_j = \frac{\Gamma(\psi + 1)}{\Gamma(j\psi + 1)} \frac{\mathcal{D}^{j\psi}_{\xi} g(\xi)}{\mathcal{D}^{\psi}_{\xi} g(\xi)}$ and the error $e_k = x_k - \xi$.
Proof. Let ξ be a root of g and $e_k = x_k - \xi$. By the Caputo-type Taylor expansions of $g(x_k)$ and $\mathcal{D}^{\psi} g(x_k)$ around ξ, we obtain the series (28) and (29). Dividing (28) by (29) gives the expansion (30) of the quotient $g(x_k)/\mathcal{D}^{\psi} g(x_k)$ in powers of $e_k^{\psi}$. Using the generalized binomial theorem, we expand the $1/\psi$ power of this quotient around ξ, which yields the error expansion of the first step of the scheme. Therefore, using this expression as the correction in the second step and collecting terms in powers of $e_k$, we arrive at the stated error equation. Hence, the theorem is proven. □
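The generalized binomial expansion invoked in the proof can be reproduced symbolically; the snippet below is a generic illustration of the expansion of a $1/\psi$ power, with sympy as an assumed dependency.

```python
import sympy as sp

t, psi = sp.symbols('t psi', positive=True)
# Generalized binomial theorem used in the proof:
# (1 + t)^(1/psi) = 1 + t/psi + (1/psi)(1/psi - 1) t^2 / 2 + O(t^3)
expansion = sp.series((1 + t)**(1 / psi), t, 0, 3)
print(sp.simplify(expansion))
```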
2.2. Stability Analysis of the Proposed Scheme
The stability of single root-finding methods for nonlinear equations is crucial for ensuring the reliability and robustness of the iterative solution process [56]. Stability, in this context, refers to a method's ability to converge to a real root from an initial guess, even when minor perturbations or errors occur in the calculations. Single root-finding approaches exhibit local convergence around the root, making them effective when the initial guess is sufficiently close to the exact root. However, their stability is influenced by the nature of the function and the initial estimate. If the function is poorly behaved or the initial estimate is far from the root, single root-finding methods may diverge or converge to extraneous fixed points unrelated to the actual roots of the nonlinear equations [57]. The stability of single root-finding methods can be evaluated using concepts from complex dynamical systems, which measure the sensitivity of the root to changes in the input, and convergence criteria, which assess how rapidly the method approaches the root. To minimize the impact of computational errors and ensure consistent and reliable root-finding performance, stability is often achieved by balancing the method's inherent convergence properties with the careful selection of the initial guess, function parameters, and stopping criteria [58,59]. The rational map associated with the scheme is obtained by applying it to a generic quadratic polynomial $p(z) = (z - a)(z - b)$; the resulting operator depends on the parameters a and b, the fractional order ψ, and the variable z. Using the Möbius transformation $h(z) = \frac{z - a}{z - b}$, which sends a to 0 and b to ∞, we see that this operator is conjugate to a rational operator $O(z; \psi)$ that is independent of a and b, and which has interesting properties [60].
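To illustrate the conjugation idea in the classical case, the following Python sketch checks numerically that Newton's operator on $p(z) = (z - a)(z - b)$ is Möbius-conjugate to the parameter-free map $z \mapsto z^2$ (Cayley's classical result); this is a model for the ψ = 1 situation, not the paper's fractional operator.

```python
# Numerical check of Moebius conjugation for classical Newton on a quadratic:
# with h(z) = (z - a)/(z - b), one has h(N(z)) = h(z)**2 for all z.
a, b = 1.0 + 0.0j, -2.0 + 1.0j            # arbitrary distinct roots
p  = lambda z: (z - a) * (z - b)
dp = lambda z: 2 * z - (a + b)
N  = lambda z: z - p(z) / dp(z)            # Newton operator on p
h  = lambda z: (z - a) / (z - b)           # Moebius map: a -> 0, b -> infinity

for z in (0.3 + 0.7j, -1.5 + 2.0j, 2.2 - 0.4j):
    print(abs(h(N(z)) - h(z) ** 2))        # ~0 up to rounding error
```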
The next proposition examines the fixed points of the rational map, which are essential for understanding the behavior and convergence properties of these schemes.
Proposition 1. The fixed points of the rational operator $O(z; \psi)$ are as follows:
$z = 0$ and $z = \infty$ are superattracting fixed points.
$z = 1$ is a repelling fixed point.
The critical points are 0 and 1, which are superattracting and repelling points, respectively, for $\psi \in (0, 1]$.
Proof. The fixed points of $O(z; \psi)$ are determined by solving $O(z; \psi) = z$. This gives $z = 0$ as a fixed point, and solving the remaining factor yields the other fixed points. Furthermore, since the point at infinity is mapped to itself, $z = \infty$ is also a fixed point. The critical points follow from the derivative $O'(z; \psi)$: they are $z = 0$ and $z = 1$. Evaluating the derivative at these points, $|O'(0; \psi)| = 0$ indicates that 0 is a superattracting point, and $|O'(1; \psi)| > 1$ indicates that 1 is a repelling point. □
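The same classification can be checked symbolically for the model operator $S(z) = z^2$ from the previous snippet (again the classical ψ = 1 case, used here only to illustrate the procedure):

```python
import sympy as sp

z = sp.symbols('z')
S = z**2                                  # model conjugated operator
fixed = sp.solve(sp.Eq(S, z), z)          # finite fixed points: [0, 1]
dS = sp.diff(S, z)                        # multiplier S'(z) = 2z
print(fixed, [abs(dS.subs(z, f)) for f in fixed])
# |S'(0)| = 0 -> z = 0 is superattracting; |S'(1)| = 2 > 1 -> z = 1 is repelling
```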
The dynamical planes of iterative methods are essential for solving nonlinear equations because they provide visual insight into the behavior and stability of the iterative processes. By investigating the convergence and divergence patterns within these planes, fixed points, attractors, and chaotic zones can be identified, allowing the iterative process to be optimized for improved accuracy and efficiency. The stability of the single root-finding method for different values of the fractional parameter ψ is examined using dynamical planes (see Figure 1 and Figure 2). In these figures, the orange color represents the basin of attraction of the root mapped to 0; if the root is mapped to infinity, the point is marked in blue, and if the map diverges, it is marked in black. Strange fixed points are depicted by white circles, free critical points by white squares, and fixed points by white squares with a star. The dynamical planes are generated by taking starting values from a square region of the complex plane.
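A minimal Python sketch of how such dynamical planes are typically generated is given below, using the model operator $z \mapsto z^2$ in place of the fractional operator $O(z; \psi)$; grid size, iteration budget, and thresholds are illustrative assumptions.

```python
import numpy as np

def dynamical_plane(op, box=2.0, n=400, iters=40, esc=1e6, tol=1e-8):
    """Classify each starting point of an n-by-n grid on [-box, box]^2:
    0 -> orbit tends to 0 (orange basin), 1 -> orbit tends to infinity
    (blue basin), 2 -> undecided/divergent within the budget (black)."""
    xs = np.linspace(-box, box, n)
    img = np.full((n, n), 2, dtype=int)
    for i, re in enumerate(xs):
        for j, im in enumerate(xs):
            w = complex(re, im)
            for _ in range(iters):
                w = op(w)
                if abs(w) < tol:
                    img[j, i] = 0
                    break
                if abs(w) > esc:
                    img[j, i] = 1
                    break
    return img

img = dynamical_plane(lambda w: w**2)  # model operator; plot img with imshow
```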
In Figure 1, the dynamical planes show large basins in which the rational map converges to 0 or infinity. In Figure 2a–e, the basins of attraction shrink as the fractional parameter decreases from 1 to 0.5, and the map diverges at 0. This indicates that the single-step method is more stable when the fractional parameter is close to 1 and becomes unstable as it approaches 0. Using the newly developed stable fractional-order single root-finding method
as a correction factor in (23), we propose a novel inverse fractional scheme for analyzing (1) in the following section.