Article

An Efficient and Stable Caputo-Type Inverse Fractional Parallel Scheme for Solving Nonlinear Equations

by Mudassir Shams 1,2 and Bruno Carpentieri 1,*
1 Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bozen-Bolzano, Italy
2 Department of Mathematics and Statistics, Riphah International University, I-14, Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Axioms 2024, 13(10), 671; https://doi.org/10.3390/axioms13100671
Submission received: 9 August 2024 / Revised: 18 September 2024 / Accepted: 20 September 2024 / Published: 27 September 2024
(This article belongs to the Special Issue Fractional Calculus and the Applied Analysis)

Abstract:
Nonlinear problems, which often arise in various scientific and engineering disciplines, typically involve nonlinear equations or functions with multiple solutions. Analytical solutions to these problems are often impossible to obtain, necessitating the use of numerical techniques. This research proposes an efficient and stable Caputo-type inverse numerical fractional scheme for simultaneously approximating all roots of nonlinear equations, with a convergence order of 2 ψ + 2 . The scheme is applied to various nonlinear problems, utilizing dynamical analysis to determine efficient initial values for a single root-finding Caputo-type fractional scheme, which is further employed in inverse fractional parallel schemes to accelerate convergence rates. Several sets of random initial vectors demonstrate the global convergence behavior of the proposed method. The newly developed scheme outperforms existing methods in terms of accuracy, consistency, validation, computational CPU time, residual error, and stability.

1. Introduction

Solving nonlinear equations is a fundamental problem in science and engineering, with a history dating back to the early days of modern mathematics. These equations, characterized by non-trivial relationships between variables, are crucial for simulating and understanding complex natural phenomena such as biological interactions, turbulent fluid dynamics, and chaotic systems [1,2,3]. The importance of solving nonlinear equations lies in their ability to provide precise descriptions and predictions of these systems, thereby leading to significant advances across various scientific and engineering disciplines [4]. Recent developments in computational methods and the increasing complexity of modern engineering problems have heightened the need for efficient and accurate solutions to nonlinear equations. For instance, in physics, solving Maxwell’s equations for electromagnetism [5] and the Navier–Stokes equations for fluid dynamics [6] is essential for understanding and predicting electromagnetic wave propagation [7] and turbulent flows [8]. These solutions are critical for designing advanced technologies in telecommunications, aerospace, and renewable energy [9,10]. In engineering, nonlinear equations are used to develop control systems that optimize performance and ensure stability in sectors such as aerospace, automotive, and manufacturing. The design of structures to withstand dynamic loads and the creation of sophisticated algorithms for digital signal processing also rely heavily on these equations [11]. In biology and medicine, nonlinear models help replicate the behavior of complex biological systems [12], enhance our understanding of brain networks [13], and improve disease prediction models [14]. Despite their significance, solving nonlinear equations remains a difficult task due to the equations’ inherent complexity and the need for high computational resources. 
Recent progress in numerical techniques, such as adaptive methods, machine learning algorithms, and parallel computing, offers promising routes for tackling these challenges.
In this paper, we address the solution of fractional differential equations of the form
$$ {}^{\check{c}}_{\psi_1}\mathcal{D}^{n\psi} g(x) + \vartheta^{[\,]} g(x) = f\big(x, g(x)\big); \quad x \in [x_0, x_1], \qquad g(x_0) = \theta_0^{[\,]}, \; g^{(\psi)}(x_0) = \theta_1^{[\,]}, \; \ldots, \; g^{((n-1)\psi)}(x_0) = \theta_{n-1}^{[\,]}, \tag{1} $$
where $\vartheta^{[\,]}$ is a free parameter and $\theta_0^{[\,]}, \theta_1^{[\,]}, \ldots, \theta_{n-1}^{[\,]} \in \mathbb{R}$. Differential equations of both integer and fractional orders [15] are crucial for simulating phenomena in physical science and engineering that require precise solutions [16]. Fractional-order differential equations, for example, effectively describe the memory and hereditary characteristics of viscoelastic materials and anomalous diffusion processes [17]. Accurate solutions to these equations are critical for understanding and designing systems with complex behaviors. Solving fractional nonlinear problems
$$ g(x) = 0, \tag{2} $$
requires advanced numerical iterative methods to obtain approximate solutions; see, e.g., [18,19,20]. The intrinsic non-locality of these types of models, where the derivative at a point depends on the entire history of the function, makes them notoriously challenging to solve both analytically and numerically. Exact techniques [21], analytical techniques [22,23,24], and direct numerical methods, such as explicit single-step methods [25], multi-step methods [26], and hybrid block methods [27], have significant limitations, including high computational costs, stability issues, and sensitivity to small errors.
Numerical techniques for solving such equations can be classified into two groups: those that find a single solution at a time and those that find all solutions simultaneously. Well-known methods for finding simple roots include the Newton method [28], the Halley method [29], the Chun method [30], the Ostrowski method [31], the King method [32], the Shams method [33], the Cordero method [34], the Mir method [35], the Samima method [36], and the Neta method [37]. For multi-step methods, see, for example, Refs. [38,39] and references therein. Recent studies by Torres-Hernandez et al. [40], Akgül et al. [41], Gajori et al. [42], and Kumar et al. [43] describe fractional versions of single root-finding techniques with various fractional derivatives. These techniques are versatile and often straightforward to implement but have several significant drawbacks. While these methods converge rapidly near initial guesses, they can diverge if the guess is far from the solution or if the function is complex. They are sensitive to initial assumptions, requiring precise estimations for each root, making them time-consuming and computationally intensive. Evaluating both the function and its derivative increases computational costs, especially for complex functions. Additionally, distinguishing between real and complex roots can be challenging without modifications. In contrast, parallel root-finding methods offer greater stability, consistency, and global convergence compared to single root-finding techniques. They can be implemented on parallel computer architectures, utilizing multiple processes to approximate all solutions to (2) simultaneously.
Among parallel numerical schemes, the Weierstrass–Durand–Kerner approach [44] is particularly attractive from a computational standpoint. This method is given by
$$ y_i^{[l]} = x_i^{[l]} - w\big(x_i^{[l]}\big), \tag{3} $$
where
$$ w\big(x_i^{[l]}\big) = \frac{g\big(x_i^{[l]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[l]} - x_j^{[l]}\big)}, \quad (i, j = 1, \ldots, n), \tag{4} $$
is Weierstrass’ correction. Method (3) has local quadratic convergence. Nedzibov et al. [45] present the modified Weierstrass method,
$$ y_i^{[l]} = \frac{\big(x_i^{[l]}\big)^2 \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[l]} - x_j^{[l]}\big)}{x_i^{[l]} \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[l]} - x_j^{[l]}\big) + g\big(x_i^{[l]}\big)}, \tag{5} $$
also known as the inverse Weierstrass method, which has quadratic convergence. The inverse parallel schemes outperform the classical simultaneous methods: they handle nonlinear equations efficiently, exploit parallel processing to accelerate convergence, and reduce computing time while improving accuracy by adapting dynamically to the properties of the problem. This strategy is especially useful in large-scale or complicated systems where conventional methods may be too slow or ineffectual. Shams et al. [46] presented the following inverse parallel scheme:
$$ y_i^{[l]} = x_i^{[l]} - \frac{w\big(x_i^{[l]}\big)\Big(1 + g\big(x_i^{[l]}\big)\Big)}{1 + \big(1 - \alpha\big)\, g\big(x_i^{[l]}\big) + \dfrac{w\big(x_i^{[l]}\big)}{x_i^{[l]}}\Big(1 + g\big(x_i^{[l]}\big)\Big)}. \tag{6} $$
Inverse parallel scheme (6) has local quadratic convergence.
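The classical Weierstrass–Durand–Kerner sweep (3)–(4) and the inverse variant (5) can be sketched in a few lines of Python; the cubic test polynomial and starting points below are illustrative assumptions, not data from the paper:

```python
def weierstrass_correction(g, xs, i):
    """w(x_i) = g(x_i) / prod_{j != i} (x_i - x_j)."""
    den = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            den *= xs[i] - xj
    return g(xs[i]) / den

def durand_kerner(g, xs, sweeps=60):
    """Classical Weierstrass-Durand-Kerner sweep: x_i <- x_i - w(x_i)."""
    xs = list(xs)
    for _ in range(sweeps):
        ws = [weierstrass_correction(g, xs, i) for i in range(len(xs))]
        xs = [x - w for x, w in zip(xs, ws)]
    return xs

def inverse_weierstrass(g, xs, sweeps=60):
    """Nedzibov's inverse variant: x_i <- x_i^2 / (x_i + w(x_i))."""
    xs = list(xs)
    for _ in range(sweeps):
        ws = [weierstrass_correction(g, xs, i) for i in range(len(xs))]
        xs = [x * x / (x + w) for x, w in zip(xs, ws)]
    return xs

g = lambda x: (x - 1.0) * (x - 2.0) * (x - 3.0)   # monic cubic with roots 1, 2, 3
start = [0.6 + 0.4j, 1.7 - 0.3j, 3.2 + 0.5j]      # distinct complex starting values
print(sorted(round(r.real, 6) for r in durand_kerner(g, start)))
print(sorted(round(r.real, 6) for r in inverse_weierstrass(g, start)))
```

Both sweeps update all approximations simultaneously from the same set of corrections, which is what makes the schemes naturally parallel.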
In 1967, Ehrlich [47] introduced a third-order convergent simultaneous method given by
$$ y_i^{[l]} = x_i^{[l]} - \frac{1}{\dfrac{1}{N_i\big(x_i^{[l]}\big)} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{1}{x_i^{[l]} - x_j^{[l]}}}, \tag{7} $$
where $N_i\big(x_i^{[l]}\big) = \frac{g(x_i^{[l]})}{g'(x_i^{[l]})}$. Using $u_j^{[l]}$ in place of $x_j^{[l]}$ as a correction, Petkovic et al. [48] accelerated the convergence order from three to six:
$$ y_i^{[l]} = x_i^{[l]} - \frac{1}{\dfrac{1}{N_i\big(x_i^{[l]}\big)} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{1}{x_i^{[l]} - u_j^{[l]}}}, \tag{8} $$
where $u_j^{[l]} = s_j^{[l]} - \frac{g(s_j^{[l]})}{g'(x_j^{[l]})} \cdot \frac{g(x_j^{[l]})}{g(x_j^{[l]}) - 2 g(s_j^{[l]})}$ is Ostrowski's fourth-order correction and $s_j^{[l]} = x_j^{[l]} - \frac{g(x_j^{[l]})}{g'(x_j^{[l]})}$.
Note that, except for the Caputo derivative, most fractional-type derivatives fail to annihilate constants, i.e., $\mathcal{D}^{\psi}(1) \neq 0$ when $\psi$ is not a natural number. Therefore, we will cover some basic ideas in fractional calculus, as well as the fractional iterative approach for solving nonlinear equations using Caputo-type derivatives.
Definition 1
(Gamma Function). The Gamma function, also known as the generalized factorial function, is defined as follows [49]:
$$ \Gamma(x) = \int_0^{+\infty} u^{x-1} e^{-u} \, du, \tag{9} $$
where $x > 0$, $\Gamma(1) = 1$, and $\Gamma(n + 1) = n!$ for $n \in \mathbb{N}$.
Definition 2
(Caputo Fractional Derivative). For $g : \mathbb{R} \to \mathbb{R}$ with $g \in C^{m}[\psi_1, x]$, $-\infty < \psi_1 < x < +\infty$, $\psi \geq 0$, and $m = \lfloor \psi \rfloor + 1$, the Caputo fractional derivative [50] of order $\psi$ is defined as
$$ {}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x) = \begin{cases} \dfrac{1}{\Gamma(m - \psi)} \displaystyle\int_{\psi_1}^{x} \dfrac{g^{(m)}(t)}{(x - t)^{\psi - m + 1}} \, dt, & \psi \notin \mathbb{N}, \\[1.5ex] \dfrac{d^{\,m-1}}{dt^{\,m-1}}\, g(x), & \psi = m - 1 \in \mathbb{N}_0, \end{cases} \tag{10} $$
where $\Gamma(x)$ is the Gamma function with $x > 0$.
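For power functions with base point $\psi_1 = 0$, Definition 2 yields the closed form ${}^{\check{c}}_{0}\mathcal{D}^{\psi} x^{p} = \frac{\Gamma(p+1)}{\Gamma(p+1-\psi)} x^{p-\psi}$ for $p > 0$, while constants are annihilated. A small Python sketch of this rule (the sample polynomial is an illustrative assumption):

```python
from math import gamma

def caputo_power(p, psi, x):
    """Caputo derivative (base point 0) of t -> t**p evaluated at x > 0, p > 0."""
    return gamma(p + 1) / gamma(p + 1 - psi) * x ** (p - psi)

def caputo_poly(coeffs, psi, x):
    """Caputo derivative of sum_k coeffs[k]*t**k; the constant term drops out."""
    return sum(c * caputo_power(k, psi, x)
               for k, c in enumerate(coeffs) if k > 0 and c != 0.0)

# For psi = 1 the rule reduces to the ordinary derivative:
# d/dx (x^3 + 2x - 7) at x = 2 equals 3*4 + 2 = 14.
print(caputo_poly([-7.0, 2.0, 0.0, 1.0], 1.0, 2.0))
```

This term-wise rule is all that is needed to evaluate the Caputo derivatives of the polynomial test problems used later in the paper.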
Theorem  1.
Suppose ${}^{\check{c}}_{\psi_1}\mathcal{D}^{\gamma\psi} g(x) \in C(\psi_1, \psi_2]$ for $\gamma = 1, \ldots, n + 1$, where $\psi \in (0, 1]$. Then, the Generalized Taylor Formula [51] is given by
$$ g(x) = \sum_{i=0}^{n} {}^{\check{c}}_{\psi_1}\mathcal{D}^{i\psi} g(\psi_1)\, \frac{(x - \psi_1)^{i\psi}}{\Gamma(i\psi + 1)} + {}^{\check{c}}_{\psi_1}\mathcal{D}^{(n+1)\psi} g(\xi)\, \frac{(x - \psi_1)^{(n+1)\psi}}{\Gamma\big((n + 1)\psi + 1\big)}, $$
where $\psi_1 \leq \xi \leq x$ for all $x \in (\psi_1, \psi_2]$, and
$$ {}^{\check{c}}_{\psi_1}\mathcal{D}^{n\psi} = \underbrace{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} \cdot {}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} \cdots {}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi}}_{n \ \text{times}}. $$
Consider the Caputo-type Taylor expansion of $g(x)$ about $\psi_1 = \xi$:
$$ g(x) = \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}{\Gamma(\psi + 1)}\, (x - \xi)^{\psi} + \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{2\psi} g(\xi)}{\Gamma(2\psi + 1)}\, (x - \xi)^{2\psi} + O\big((x - \xi)^{3\psi}\big). $$
Taking $\frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}{\Gamma(\psi + 1)}$ as a common factor, we have
$$ g(x) = \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}{\Gamma(\psi + 1)} \Big[ (x - \xi)^{\psi} + \check{c}_2\, (x - \xi)^{2\psi} + O\big((x - \xi)^{3\psi}\big) \Big], $$
where
$$ \check{c}_{\gamma} = \frac{\Gamma(\psi + 1)}{\Gamma(\gamma\psi + 1)}\, \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\gamma\psi} g(\xi)}{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}, \quad \gamma = 2, 3, \ldots $$
The corresponding Caputo-type derivative of $g(x)$ around $\xi$ is
$$ {}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(x) = \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}{\Gamma(\psi + 1)} \Big[ \Gamma(\psi + 1) + \frac{\Gamma(2\psi + 1)}{\Gamma(\psi + 1)}\, \check{c}_2\, (x - \xi)^{\psi} + O\big((x - \xi)^{2\psi}\big) \Big]. $$
These expansions are used in the convergence analysis of the proposed method.
Using the Caputo-type fractional version of the classical Newton method, Candelario et al. [52] presented the following variant:
$$ y^{[l]} = x^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi}, $$
where ${}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]}) \neq 0$ and ${}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]}) \to {}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)$ as $x^{[l]} \to \xi$, for any $\psi \in \mathbb{R}^{+}$. The fractional Newton method converges with order $\psi + 1$ and satisfies the error equation
$$ e^{[l+1]} = \frac{\Gamma(2\psi + 1) - \Gamma^2(\psi + 1)}{\psi\, \Gamma^2(\psi + 1)}\, \check{c}_2\, e_l^{\psi + 1} + O\big(e_l^{2\psi + 1}\big), $$
where $e^{[l+1]} = y^{[l]} - \xi$, $e_l = x^{[l]} - \xi$, and $\check{c}_{\gamma} = \frac{\Gamma(\psi + 1)}{\Gamma(\gamma\psi + 1)} \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\gamma\psi} g(\xi)}{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}$, $\gamma = 2, 3, \ldots$ (For $\psi = 1$ the leading constant equals $1$, recovering the classical Newton error equation $e^{[l+1]} = c_2 e_l^2 + O(e_l^3)$.)
The rest of the study is organized as follows: following the introduction, Section 2 investigates the construction, convergence, and stability analysis of fractional-order schemes for solving (2). Section 3 presents the development and analysis of a simultaneous method for determining all solutions to nonlinear equations. Section 4 evaluates the efficiency and stability of the proposed approach through numerical results and compares it with existing methods. Finally, Section 5 concludes the paper with a summary of findings and suggestions for future research.

2. Fractional Scheme Construction and Analysis

The fractional-order iterative method is a powerful tool for solving nonlinear equations, offering faster and more accurate convergence compared to classical algorithms. Shams et al. [53] proposed the following single-step fractional iterative method as
$$ y^{[l]} = x^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \left[ 1 - \frac{1}{\alpha}\, \frac{g(x^{[l]})}{1 + g(x^{[l]})} \right] \right)^{1/\psi}. \tag{19} $$
The order of convergence of the method in (19) is up to $2\psi + 1$; it satisfies the following error equation, whose leading term vanishes for a suitable choice of $\alpha$:
$$ e^{[l+1]} = \left( \frac{1}{\alpha} + \frac{\Gamma^2(\psi + 1) - \Gamma(2\psi + 1)}{\psi\, \Gamma^2(\psi + 1)}\, \check{c}_2 \right) e_l^{\psi + 1} + O\big(e_l^{2\psi + 1}\big), $$
where $e^{[l+1]} = y^{[l]} - \xi$, $e_l = x^{[l]} - \xi$, and $\check{c}_{\gamma}$ is as defined above. A two-step Caputo-type fractional variant was proposed in [54] as
$$ v^{[l]} = y^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(y^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi}, \tag{21} $$
where $y^{[l]} = x^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi}$ and ${}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]}) \neq 0$ for any $\psi \in \mathbb{R}^{+}$. The order of convergence of the technique (21) is $2\psi + 1$, and it satisfies the following error equation:
$$ e^{[l+1]} = \frac{\Gamma(2\psi + 1)\big(\Gamma(2\psi + 1) - \Gamma^2(\psi + 1)\big)}{\psi^2\, \Gamma^4(\psi + 1)}\, \check{c}_2^{\,2}\, e_l^{2\psi + 1} + O\big(e_l^{3\psi + 1}\big), $$
where $e^{[l+1]} = v^{[l]} - \xi$, $e_l = x^{[l]} - \xi$, and $\check{c}_{\gamma}$ is as defined above. (At $\psi = 1$ this reduces to $e^{[l+1]} = 2 c_2^2 e_l^3$, the error equation of the classical two-step Newton method.)
In this paper, we focus on the technique described in [55], which offers faster convergence speed, higher accuracy, better processing efficiency, and more robustness compared to other single root-finding methods. We extend this method to handle fractional derivatives, enabling the more precise modeling of systems with memory and non-local effects. The original method is given by
$$ u^{[l]} = x^{[l]} - \frac{1}{2} \left[ 3 - \frac{g'(\varsigma^{[l]})}{g'(x^{[l]})} \right] \frac{g(x^{[l]})}{g'(x^{[l]})}, \tag{23} $$
where $\varsigma^{[l]} = x^{[l]} - \frac{g(x^{[l]})}{g'(x^{[l]})}$. By incorporating the Caputo-type fractional derivative into (23), we propose the following fractional version of the single root-finding method:
$$ u^{[l]} = x^{[l]} - \left( \frac{1}{2} \left[ 3 - \frac{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right] \Gamma(\psi + 1)\, \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi}, \tag{24} $$
where $\varsigma^{[l]} = x^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi}$. We abbreviate this method as SCM$_{\psi}$.
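A minimal sketch of one SCM$_{\psi}$ iteration for a polynomial with base point $0$ follows; the helper names and the $x^2 - 2$ test problem are assumptions for illustration, and for $\psi = 1$ the step reduces to the third-order method (23):

```python
from math import gamma, sqrt

def poly(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

def caputo_d(c, psi, x):
    """Caputo derivative (base 0) of a polynomial, 0 < psi <= 1, x > 0."""
    return sum(ck * gamma(k + 1) / gamma(k + 1 - psi) * x ** (k - psi)
               for k, ck in enumerate(c) if k > 0 and ck != 0.0)

def scm_step(c, psi, x):
    """One SCM_psi step: fractional-Newton predictor, then weighted corrector."""
    gx, dx = poly(c, x), caputo_d(c, psi, x)
    sigma = x - (gamma(psi + 1) * gx / dx) ** (1.0 / psi)
    weight = 0.5 * (3.0 - caputo_d(c, psi, sigma) / dx)
    return x - (weight * gamma(psi + 1) * gx / dx) ** (1.0 / psi)

c = [-2.0, 0.0, 1.0]          # g(x) = x^2 - 2
x = 1.5
for _ in range(6):            # psi = 1: classical variant of (23)
    x = scm_step(c, 1.0, x)
print(x)                      # ~ sqrt(2)
```

The sketch assumes the bracketed quantity stays positive along the iteration so that the real $1/\psi$ power is defined; for $\psi < 1$ a single step already reduces the residual from the same starting point.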

2.1. Convergence Analysis

For the fractional iterative scheme SCM$_{\psi}$, we prove the following theorem to establish its order of convergence.
Theorem  2.
Let $g : \mathbb{R} \to \mathbb{R}$ be a continuous function possessing fractional derivatives of order $l\psi$, for any $l \geq 0$ and $\psi \in (0, 1]$, on an open interval $\mathcal{I}$ containing the exact root $\xi$ of $g(x) = 0$. Suppose that ${}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x)$ is continuous and non-null at $\xi$. Then, for a sufficiently close starting value $x^{[0]}$, the convergence order of the Caputo-type fractional iterative scheme
$$ u^{[l]} = x^{[l]} - \left( \frac{1}{2}\, \Gamma(\psi + 1) \left[ 3 - \frac{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right] \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi} $$
is at least $2\psi + 1$, and the error equation is
$$ e^{[l+1]} = \big( \Theta_1^{[\,]} + \Theta_2^{[\,]} + \Theta_3^{[\,]} \big)\, e_l^{2\psi + 1} + O\big(e_l^{3\psi + 1}\big), $$
where the constants $\Theta_1^{[\,]}, \Theta_2^{[\,]}, \Theta_3^{[\,]}$ depend only on $\psi$ and on the coefficients $\check{c}_2, \check{c}_3$ (their explicit forms are given at the end of the proof), $\check{c}_{\gamma} = \frac{\Gamma(\psi + 1)}{\Gamma(\gamma\psi + 1)} \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\gamma\psi} g(\xi)}{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}$ for $\gamma = 2, 3, \ldots$, and $\Gamma^{n}(\cdot)$ denotes $\big(\Gamma(\cdot)\big)^{n}$.
Proof. 
Let $\xi$ be a root of $g$ and $x^{[l]} = \xi + e_l$. Expanding $g(x^{[l]})$ and ${}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})$ by the fractional Taylor series around $x = \xi$ and using $g(\xi) = 0$, we get
$$ g(x^{[l]}) = \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}{\Gamma(\psi + 1)} \Big[ e_l^{\psi} + \check{c}_2\, e_l^{2\psi} + \check{c}_3\, e_l^{3\psi} + O\big(e_l^{4\psi}\big) \Big], \tag{28} $$
and
$$ {}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]}) = \frac{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)}{\Gamma(\psi + 1)} \Big[ \Gamma(\psi + 1) + \frac{\Gamma(2\psi + 1)}{\Gamma(\psi + 1)}\, \check{c}_2\, e_l^{\psi} + \frac{\Gamma(3\psi + 1)}{\Gamma(2\psi + 1)}\, \check{c}_3\, e_l^{2\psi} + O\big(e_l^{3\psi}\big) \Big]. \tag{29} $$
Inverting the series (29) gives
$$ \Big[ {}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]}) \Big]^{-1} = \frac{1}{{}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi)} \Big[ 1 - \Delta^{\{1\}} e_l^{\psi} + \Delta^{\{2\}} e_l^{2\psi} - \Delta^{\{3\}} e_l^{3\psi} + O\big(e_l^{4\psi}\big) \Big], $$
where
$$ \Delta^{\{1\}} = \frac{\Gamma(2\psi + 1)}{\Gamma^2(\psi + 1)}\, \check{c}_2, \qquad \Delta^{\{2\}} = \frac{\Gamma^2(2\psi + 1)}{\Gamma^4(\psi + 1)}\, \check{c}_2^{\,2} - \frac{\Gamma(3\psi + 1)}{\Gamma(\psi + 1)\, \Gamma(2\psi + 1)}\, \check{c}_3, \qquad \Delta^{\{3\}} = \frac{\Gamma^3(2\psi + 1)}{\Gamma^6(\psi + 1)}\, \check{c}_2^{\,3} - \frac{2\, \Gamma(3\psi + 1)}{\Gamma^3(\psi + 1)}\, \check{c}_2 \check{c}_3 + \cdots $$
Dividing (28) by (29), we have
$$ \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} = \frac{1}{\psi\, \Gamma(\psi)}\, e_l^{\psi} + \Delta^{\{4\}} e_l^{2\psi} + \Delta^{\{5\}} e_l^{3\psi} + O\big(e_l^{4\psi}\big), $$
where
$$ \Delta^{\{4\}} = \frac{\check{c}_2}{\psi\, \Gamma(\psi)} - \frac{2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})}{\psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}}\, \check{c}_2, $$
$$ \Delta^{\{5\}} = \frac{\check{c}_3}{\psi\, \Gamma(\psi)} - \frac{3^{3\psi + \frac{1}{2}}\, \Gamma(\psi + \tfrac{1}{3})\, \Gamma(\psi + \tfrac{2}{3})}{2\, \psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})}\, \check{c}_3 + \frac{2^{4\psi}\, \Gamma^2(\psi + \tfrac{1}{2})}{\psi^3\, \Gamma^3(\psi)\, \pi}\, \check{c}_2^{\,2} - \frac{2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})}{\psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}}\, \check{c}_2^{\,2}. $$
Here we used $\Gamma(\psi + 1) = \psi\, \Gamma(\psi)$ together with the Legendre duplication formula $\Gamma(2\psi + 1) = \frac{2^{2\psi}}{\sqrt{\pi}}\, \psi\, \Gamma(\psi)\, \Gamma(\psi + \tfrac{1}{2})$ and the Gauss triplication formula for $\Gamma(3\psi + 1)$, which explain the factors $\sqrt{\pi}$, $\Gamma(\psi + \tfrac{1}{2})$, $\Gamma(\psi + \tfrac{1}{3})$, and $\Gamma(\psi + \tfrac{2}{3})$ appearing below.
Next,
$$ \varsigma^{[l]} - \xi = x^{[l]} - \xi - \left( \Gamma(\psi + 1)\, \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi}, $$
and, expanding the $1/\psi$ power,
$$ \varsigma^{[l]} - \xi = \Delta^{\{6\}}\, e_l^{\psi + 1} + \big( \Delta^{\{7\}} + \Delta^{\{8\}} \big)\, e_l^{2\psi + 1} + O\big(e_l^{3\psi + 1}\big), $$
where
$$ \Delta^{\{6\}} = -\frac{\check{c}_2}{\psi} + \frac{2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})}{\psi^2\, \Gamma(\psi)\, \sqrt{\pi}}\, \check{c}_2 = \left( \frac{\Gamma(2\psi + 1)}{\psi\, \Gamma^2(\psi + 1)} - \frac{1}{\psi} \right) \check{c}_2, \qquad \Delta^{\{7\}} = -\frac{\Gamma(\psi + 1)}{\psi}\, \Delta^{\{5\}}, \qquad \Delta^{\{8\}} = -\frac{1 - \psi}{2}\, \big( \Delta^{\{6\}} \big)^2. $$
Using the generalized binomial theorem $(x + y)^{t} = \sum_{i=0}^{+\infty} \binom{t}{i}\, x^{t-i} y^{i}$, where $\binom{t}{i} = \frac{\Gamma(t + 1)}{i!\, \Gamma(t - i + 1)}$, and expanding ${}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma^{[l]})$ around $\xi$, we have
$$ {}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma^{[l]}) = {}^{\check{c}}_{\xi}\mathcal{D}^{\psi} g(\xi) \Big[ 1 + 2\, \check{c}_2\, \Delta^{\{6\}}\, e_l^{\psi + 1} + 2\, \Delta^{\{9\}}\, e_l^{3\psi + 1} + \cdots \Big], $$
where
$$ \Delta^{\{9\}} = \check{c}_3 + \frac{2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})\, \check{c}_2}{\psi\, \Gamma(\psi)\, \sqrt{\pi}} + \frac{3^{3\psi + \frac{1}{2}}\, \Gamma(\psi + \tfrac{1}{3})\, \Gamma(\psi + \tfrac{2}{3})\, \check{c}_3}{2\, \psi\, \Gamma(\psi)\, \sqrt{\pi}\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})} - \frac{2^{4\psi}\, \Gamma^2(\psi + \tfrac{1}{2})\, \check{c}_2^{\,2}}{\psi^2\, \Gamma^3(\psi)\, \pi}. $$
Then,
$$ \frac{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} = 1 - \psi\, \Gamma(\psi) \Big[ \Delta^{\{10\}}\, e_l^{\psi} + \Delta^{\{11\}}\, e_l^{\psi + 1} + \cdots \Big], $$
where
$$ \Delta^{\{10\}} = \frac{2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})\, \check{c}_2}{\psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}}, \qquad \Delta^{\{11\}} = \frac{3^{3\psi + \frac{1}{2}}\, \Gamma(\psi + \tfrac{1}{3})\, \Gamma(\psi + \tfrac{2}{3})\, \check{c}_3}{2\, \psi\, \Gamma(\psi)\, \sqrt{\pi}\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})} - \frac{2\, \check{c}_2^{\,2}}{\psi\, \Gamma(\psi)} + \frac{2^{4\psi}\, \Gamma^2(\psi + \tfrac{1}{2})\, \check{c}_2^{\,2}}{\psi^3\, \Gamma^3(\psi)\, \pi} + \frac{2 \cdot 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})\, \check{c}_2^{\,2}}{\psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}}. $$
Therefore, using this ratio in the second step, we have
$$ \frac{1}{2} \left[ 3 - \frac{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right] = 1 + \psi\, \Gamma(\psi) \Big[ \Delta^{\{12\}}\, e_l^{\psi} + \Delta^{\{13\}}\, e_l^{\psi + 1} + \cdots \Big], $$
where $\Delta^{\{12\}} = \tfrac{1}{2}\, \Delta^{\{10\}}$ and $\Delta^{\{13\}} = \tfrac{1}{2}\, \Delta^{\{11\}}$.
Thus,
$$ u^{[l]} - \xi = x^{[l]} - \xi - \left( \frac{1}{2}\, \Gamma(\psi + 1) \left[ 3 - \frac{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right] \frac{g(x^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x^{[l]})} \right)^{1/\psi}, $$
and, after expanding and collecting terms, the contributions of order $e_l^{\psi + 1}$ cancel, leaving
$$ e^{[l+1]} = \big( \Theta_1^{[\,]} + \Theta_2^{[\,]} + \Theta_3^{[\,]} \big)\, e_l^{2\psi + 1} + \big( \Theta_4^{[\,]} + \Theta_5^{[\,]} + \Theta_6^{[\,]} \big)\, e_l^{3\psi + 1} + \cdots, $$
where
$$ \Theta_1^{[\,]} = \frac{\check{c}_2^{\,2}}{\psi^2\, \Gamma^2(\psi)} + \frac{3\, \check{c}_3}{2\, \psi\, \Gamma(\psi)} - \frac{\check{c}_3}{2\, \psi^2\, \Gamma^2(\psi)} - \frac{3\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})\, \check{c}_2}{2\, \psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}}, $$
$$ \Theta_2^{[\,]} = \frac{3\; 2^{4\psi}\, \Gamma^2(\psi + \tfrac{1}{2})\, \check{c}_2^{\,2}}{2\, \psi^3\, \Gamma^3(\psi)\, \pi} - \frac{3\; 2^{4\psi}\, \Gamma^2(\psi + \tfrac{1}{2})\, \check{c}_2^{\,2}}{2\, \psi^4\, \Gamma^4(\psi)\, \pi}, $$
$$ \Theta_3^{[\,]} = \frac{3\; 3^{3\psi + \frac{1}{2}}\, \Gamma(\psi + \tfrac{1}{3})\, \Gamma(\psi + \tfrac{2}{3})\, \check{c}_2 \check{c}_3}{4\, \psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})} + \frac{3^{3\psi + \frac{1}{2}}\, \Gamma(\psi + \tfrac{1}{3})\, \Gamma(\psi + \tfrac{2}{3})\, \check{c}_2 \check{c}_3}{2\, \psi^3\, \Gamma^3(\psi)\, \sqrt{\pi}\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})}, $$
$$ \Theta_4^{[\,]} = \frac{3^{3\psi + \frac{1}{2}}\, \Gamma(\psi + \tfrac{1}{3})\, \Gamma(\psi + \tfrac{2}{3})\, \check{c}_2 \check{c}_3}{\psi^4\, \Gamma^4(\psi)\, \sqrt{\pi}} - \frac{3^{3\psi + \frac{1}{2}}\, \Gamma(\psi + \tfrac{1}{3})\, \Gamma(\psi + \tfrac{2}{3})\, \check{c}_2 \check{c}_3}{4\, \psi^3\, \Gamma^3(\psi)\, \sqrt{\pi}\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})}, $$
$$ \Theta_5^{[\,]} = \frac{\check{c}_2 \check{c}_3}{\psi^2\, \Gamma^2(\psi)} + \frac{2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})\, \check{c}_2 \check{c}_3}{2\, \psi^3\, \Gamma^3(\psi)\, \sqrt{\pi}} + \frac{\check{c}_2^{\,3}}{\psi^2\, \Gamma^2(\psi)\, \sqrt{\pi}}, $$
$$ \Theta_6^{[\,]} = \frac{3\; 2^{6\psi}\, \Gamma^3(\psi + \tfrac{1}{2})\, \check{c}_2^{\,3}}{2\, \psi^5\, \Gamma^5(\psi)\, \pi^{3/2}} - \frac{3\; 2^{2\psi}\, \Gamma(\psi + \tfrac{1}{2})\, \check{c}_2^{\,3}}{4\, \psi^3\, \Gamma^3(\psi)\, \sqrt{\pi}} + \frac{2\; 2^{4\psi}\, \Gamma^2(\psi + \tfrac{1}{2})\, \check{c}_2^{\,3}}{\psi^4\, \Gamma^4(\psi)\, \pi}. $$
Thus,
$$ e^{[l+1]} = \big( \Theta_1^{[\,]} + \Theta_2^{[\,]} + \Theta_3^{[\,]} \big)\, e_l^{2\psi + 1} + O\big(e_l^{3\psi + 1}\big). $$
Hence, the theorem is proven.    □

2.2. Stability Analysis of the SCM ψ -Scheme

The stability of single root-finding methods for nonlinear equations is crucial for ensuring the reliability and robustness of the iterative solution process [56]. Stability, in this context, refers to a method's ability to converge to a real root from an initial guess, even when minor perturbations or errors occur in the calculations. Single root-finding approaches exhibit local convergence around the root, making them effective when the initial guess is sufficiently close to the exact root. However, their stability is influenced by the nature of the function and the initial estimate. If the function is poorly behaved or the initial estimate is far from the root, single root-finding methods may diverge or converge to extraneous fixed points unrelated to the actual roots of the nonlinear equations [57]. The stability of single root-finding methods can be evaluated using concepts from complex dynamical systems, which measure the sensitivity of the root to changes in the input, and convergence criteria, which assess how rapidly the method approaches the root. To minimize the impact of computational errors and ensure consistent and reliable root-finding performance, stability is often achieved by balancing the method's inherent convergence properties with the careful selection of the initial guess, function parameters, and stopping criteria [58,59]. Applying the scheme SCM$_{\psi}$ to the generic quadratic polynomial $g(x) = (x - a)(x - b)$ yields the rational map
$$ R(x) = x - \frac{\hat{\vartheta}_1\, \hat{\vartheta}_{11}^{\,1 - \psi} + 2\, \hat{\vartheta}_{11}^{\,2 - \psi}\, \hat{\vartheta}_4 + a b\, \hat{\vartheta}_5\, \hat{\vartheta}_{11}^{\,1 - \psi}}{\hat{\vartheta}_6\, \hat{\vartheta}_7 + \Gamma(2 - \psi)\, \hat{\vartheta}_8}, $$
where
$$ \hat{\vartheta}_1 = \Gamma(3 - \psi)\, \Gamma(1 - \psi)\, (a + b), \qquad \hat{\vartheta}_2 = (x - a)(x - b)\, \Gamma(\psi + 1), $$
$$ \hat{\vartheta}_3 = \frac{2\, x^{2 - \psi}}{\Gamma(3 - \psi)} - \frac{(a + b)\, x^{1 - \psi}}{\Gamma(2 - \psi)} + \frac{a b\, x^{-\psi}}{\Gamma(1 - \psi)}, \qquad \hat{\vartheta}_4 = \Gamma(2 - \psi)\, \Gamma(1 - \psi), $$
$$ \hat{\vartheta}_5 = \Gamma(3 - \psi)\, \Gamma(2 - \psi) + 3\, \hat{\vartheta}_1\, x^{1 - \psi} - 3\, \Gamma(2 - \psi) + \hat{\vartheta}_8, \qquad \hat{\vartheta}_6 = \Gamma(3 - \psi)\, \Gamma(2 - \psi)\, \Gamma(1 - \psi)\, \Gamma(1 + \psi)\, (x - a)(x - b), $$
$$ \hat{\vartheta}_7 = (a + b)\, x^{1 - \psi}\, \Gamma(3 - \psi)\, \Gamma(1 - \psi), \qquad \hat{\vartheta}_8 = a b\, \Gamma(3 - \psi)\, x^{-\psi}\, (\psi + 2) - \Gamma(1 - \psi)\, x^{2 - \psi}, $$
$$ \hat{\vartheta}_{11} = x - \big( \hat{\vartheta}_2 / \hat{\vartheta}_3 \big)^{1/\psi}. $$
For $\psi \to 1$, we have
$$ R(x) = x - \frac{(x - a)(x - b)\, \big[ 5x^2 - 5(a + b)x + a^2 + 3ab + b^2 \big]}{\big( 2x - a - b \big)^3}, $$
where $a, b \in \mathbb{C}$. Thus, $R(x)$ depends on $a$, $b$, and the variable. Using the Möbius transformation $M(x) = \frac{x - a}{x - b}$, we see that $R(x)$ is conjugate to the operator
$$ O(x) = \big( M \circ R \circ M^{-1} \big)(x) = x^3\, \frac{x + 2}{2x + 1} $$
for $\psi \to 1$, which is independent of $a$ and $b$. Therefore, the operator exactly fits the form
$$ R^{[\,]}(x) = \frac{\sum_{i=0}^{n} a_i x^i}{\sum_{i=0}^{n} a_{n-i} x^i}, \qquad \{a_i\}_{i=0}^{n} \subset \mathbb{R}, $$
which has interesting properties [60].
The next proposition examines the fixed points of the rational map, which are essential for understanding the behavior and convergence properties of these schemes.
Proposition  1.
The fixed points of $O(\varepsilon)$ are as follows:
  • $\varepsilon_0 = 0$ and $\varepsilon_{\infty} = \infty$ are superattracting fixed points.
  • $\varepsilon_1 = 1$ is a repelling fixed point.
  • The critical points are $0$ and $-1$, which are, respectively, a superattracting fixed point and a free critical point, for $\psi \to 1$.
Proof. 
The fixed points of $O(x)$ are determined by solving $O(x) = x$:
$$ x^3\, \frac{x + 2}{2x + 1} = x \;\Longleftrightarrow\; x \left[ \frac{x^2 (x + 2)}{2x + 1} - 1 \right] = 0 \;\Longleftrightarrow\; x\, \frac{x^3 + 2x^2 - 2x - 1}{2x + 1} = 0. $$
Therefore, $0$ is a fixed point. Further solving
$$ x^3 + 2x^2 - 2x - 1 = (x - 1)\big(x^2 + 3x + 1\big) = 0 \;\Longrightarrow\; x = 1, \quad x_{3,4} = -\frac{3}{2} \pm \frac{\sqrt{5}}{2} $$
gives the remaining finite fixed points. Furthermore, since $1/O(1/x) = x^3\, \frac{2 + x}{1 + 2x} \to 0$ as $x \to 0$, the point $x = \infty$ is also a (superattracting) fixed point. The derivative of $O(x)$ is
$$ O'(x) = \frac{6x^2\, (x + 1)^2}{(2x + 1)^2} = 0 \;\Longrightarrow\; x = 0, -1. $$
Thus, the critical points are $x = 0$ and $x = -1$. Evaluating the derivative at the fixed points, $O'(0) = 0$ indicates that $0$ is a superattracting point, while $|O'(1)| = 8/3 > 1$ indicates that $1$ is a repelling point.    □
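Proposition 1 can be verified numerically; the script below is an independent check (not part of the original paper) that evaluates $O$ and its derivative at the fixed and critical points:

```python
from math import sqrt, isclose

def O(x):
    return x ** 3 * (x + 2.0) / (2.0 * x + 1.0)

def O_prime(x):
    # O'(x) simplifies to 6 x^2 (x + 1)^2 / (2x + 1)^2
    return 6.0 * x ** 2 * (x + 1.0) ** 2 / (2.0 * x + 1.0) ** 2

# finite fixed points: 0, 1 and the roots of x^2 + 3x + 1
fixed = [0.0, 1.0, (-3.0 + sqrt(5.0)) / 2.0, (-3.0 - sqrt(5.0)) / 2.0]
for p in fixed:
    assert isclose(O(p), p, rel_tol=0.0, abs_tol=1e-12)

print(O_prime(0.0), O_prime(-1.0))   # both 0: the critical points
print(O_prime(1.0))                  # 8/3 > 1: the fixed point 1 is repelling
```

The check confirms that $0$ is superattracting, $-1$ is a critical point, and $1$ is repelling, in line with the basins shown in Figures 1 and 2.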
The dynamical planes in iterative methods are essential for solving nonlinear equations because they provide visual insights into the behavior and stability of iterative processes. By investigating the convergence and divergence patterns within these planes, fixed points, attractors, and chaotic zones can be identified, allowing the iterative process to be optimized for improved accuracy and efficiency. The stability of the single root-finding method for different fractional parameter values $\psi \in (0, 1]$ is examined using dynamical planes (see Figure 1 and Figure 2). In these figures, the orange color represents the basins of attraction of $O(x)$ when mapped to 0. If the root of $O(x)$ maps to infinity, it is marked in blue. If the map diverges, it is marked in black. Strange fixed points are depicted by white circles, free critical points by white squares, and fixed points by white squares with a star. The dynamical planes are generated by taking starting values from the square $[-1.3, 1.3] \times [-1.3, 1.3]$. In Figure 1, the dynamical planes show large basins where the rational map converges to 0 or infinity. In Figure 2a–e, the region of the basins of attraction decreases as the fractional parameter value decreases from 1 to 0.5 and diverges at 0. This indicates that the single-step method is more stable when the fractional parameter values are close to 1 and becomes unstable as the fractional parameter values approach 0. Using the newly developed stable fractional-order single root-finding method SCM$_{\psi}$ as a correction, we propose a novel inverse fractional parallel scheme for computing all solutions of (2) in the following section.

3. Development and Analysis of Inverse Fractional Parallel Scheme

The inverse fractional parallel iterative approach is useful for determining the roots of nonlinear equations because it efficiently finds all roots of (2) simultaneously. This iterative technique converges quickly from almost any initial value, exhibits global convergence behavior, and computes both real and complex roots at once. The inverse fractional schemes provide greater stability and consistency, and are well suited for parallel computing. Shams et al. [61] presented the following fractional parallel approach with convergence order 2 for solving (2):
$$ x_i^{[l+1]} = \frac{\big(x_i^{[l]}\big)^2 \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[l]} - u_j^{[l]}\big)}{x_i^{[l]} \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[l]} - u_j^{[l]}\big) + g\big(x_i^{[l]}\big)}, $$
where $u_j^{[l]} = x_j^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(x_j^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x_j^{[l]})} \right)^{1/\psi}$. Using the Weierstrass-type correction (4) together with the SCM$_{\psi}$ correction in (23), we propose the following inverse fractional parallel scheme (CSM$^{[\psi]}$):
$$ x_i^{[l+1]} = x_i^{[l]} - \frac{\widetilde{w}^{[\,]}\big(x_i^{[l]}\big)}{1 + \widetilde{w}^{[\,]}\big(x_i^{[l]}\big) / x_i^{[l]}}, $$
where
$$ \widetilde{w}^{[\,]}\big(x_i^{[l]}\big) = \frac{1}{2} \left[ 3 - \frac{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(\varsigma_i^{[l]} - \varsigma_j^{[l]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[l]} - u_j^{[l]}\big)} \right] w^{[\,]}\big(x_i^{[l]}\big), \qquad w^{[\,]}\big(x_i^{[l]}\big) = \frac{g\big(x_i^{[l]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[l]} - u_j^{[l]}\big)}, $$
$$ \varsigma_i^{[l]} = x_i^{[l]} - \frac{w^{[\,]}\big(x_i^{[l]}\big)}{1 + w^{[\,]}\big(x_i^{[l]}\big) / x_i^{[l]}}, $$
and
$$ \varsigma_j^{[l]} = x_j^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(x_j^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x_j^{[l]})} \right)^{1/\psi}, \qquad u_j^{[l]} = x_j^{[l]} - \left( \frac{1}{2}\, \Gamma(\psi + 1) \left[ 3 - \frac{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma_j^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x_j^{[l]})} \right] \frac{g(x_j^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x_j^{[l]})} \right)^{1/\psi}. $$
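To make the construction concrete, here is a simplified Python sketch of one sweep of the scheme at $\psi = 1$ (ordinary derivatives), pairing the SCM correction $u_j$ with the inverse update $x_i \leftarrow x_i - w_i/(1 + w_i/x_i)$. The cubic test problem and starting values are illustrative assumptions, and the second-step weighting is omitted for brevity:

```python
def poly(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

def dpoly(c, x):
    return sum(k * ck * x ** (k - 1) for k, ck in enumerate(c) if k > 0)

def scm_u(c, x):
    """SCM correction at psi = 1: u = x - 0.5*(3 - g'(s)/g'(x)) * g(x)/g'(x)."""
    gx, dx = poly(c, x), dpoly(c, x)
    s = x - gx / dx
    return x - 0.5 * (3.0 - dpoly(c, s) / dx) * gx / dx

def csm_sweep(c, xs):
    """One inverse-parallel sweep built from SCM corrections."""
    us = [scm_u(c, x) for x in xs]
    out = []
    for i, xi in enumerate(xs):
        prod = 1.0
        for j, uj in enumerate(us):
            if j != i:
                prod *= xi - uj
        w = poly(c, xi) / prod            # Weierstrass-type correction w_i
        out.append(xi - w / (1.0 + w / xi))
    return out

c = [-6.0, 11.0, -6.0, 1.0]               # (x - 1)(x - 2)(x - 3)
xs = [0.7, 2.4, 3.4]
for _ in range(16):
    xs = csm_sweep(c, xs)
print([round(x, 8) for x in xs])
```

Note the division by $x_i$ in the inverse update: as in (5), the scheme assumes the sought roots are nonzero.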
The following theorem presents the local order of convergence of the inverse fractional scheme.
Theorem  3.
Let $\zeta_1, \ldots, \zeta_n$ be the simple zeros of the nonlinear equation (2). For sufficiently close distinct initial estimates $x_1^{[0]}, \ldots, x_n^{[0]}$ of the respective roots, CSM$^{[\psi]}$ has a convergence order of $2\psi + 2$.
Proof. 
Let $e_i = x_i^{[l]} - \zeta_i$, $e_{\varsigma} = \varsigma_i^{[l]} - \zeta_i$, and $e_i^{[\ast]} = x_i^{[l+1]} - \zeta_i$ denote the errors in $x_i^{[l]}$, $\varsigma_i^{[l]}$, and $x_i^{[l+1]}$, respectively. From the first step of CSM$^{[\psi]}$, we have
$$ \varsigma_i^{[l]} - \zeta_i = x_i^{[l]} - \zeta_i - \frac{\dfrac{g(x_i^{[l]})}{\prod_{j \neq i} \big(x_i^{[l]} - u_j^{[l]}\big)}}{1 + w^{[\,]}(x_i^{[l]}) / x_i^{[l]}}. $$
Since $g(x_i^{[l]}) = \big(x_i^{[l]} - \zeta_i\big) \prod_{j \neq i} \big(x_i^{[l]} - \zeta_j\big)$, this becomes
$$ e_{\varsigma} = e_i - \frac{e_i \prod_{j \neq i} \dfrac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - u_j^{[l]}}}{1 + w^{[\,]}(x_i^{[l]}) / x_i^{[l]}} = e_i\, \frac{1 - \prod_{j \neq i} \dfrac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - u_j^{[l]}} + \dfrac{w^{[\,]}(x_i^{[l]})}{x_i^{[l]}}}{1 + w^{[\,]}(x_i^{[l]}) / x_i^{[l]}}. $$
Using the identity
$$ \prod_{j \neq i} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - u_j^{[l]}} - 1 = \sum_{k \neq i} \frac{u_k^{[l]} - \zeta_k}{x_i^{[l]} - u_k^{[l]}} \prod_{\substack{j \neq i \\ j < k}} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - u_j^{[l]}}, $$
together with the error equation of Theorem 2, which gives $u_j^{[l]} - \zeta_j = O\big(e_j^{2\psi + 1}\big)$, and assuming errors of the same order, $e_i \sim e_j = e$, the numerator above is of order $e^{2\psi + 1}$, so that
$$ e_{\varsigma} = e_i \cdot O\big(e^{2\psi + 1}\big) = O\big(e_i^{2\psi + 2}\big). $$
Considering the second step, we have
$$ x_i^{[l+1]} - \zeta_i = x_i^{[l]} - \zeta_i - \frac{\widetilde{w}^{[\,]}(x_i^{[l]})}{1 + \widetilde{w}^{[\,]}(x_i^{[l]}) / x_i^{[l]}}, \qquad e_i^{[\ast]} = e_i - \frac{\widetilde{w}^{[\,]}(x_i^{[l]})}{1 + \widetilde{w}^{[\,]}(x_i^{[l]}) / x_i^{[l]}}, $$
where $\widetilde{w}^{[\,]}(x_i^{[l]}) = \frac{1}{2} \Big[ 3 - \prod_{j \neq i} \big(\varsigma_i^{[l]} - \varsigma_j^{[l]}\big) \big/ \prod_{j \neq i} \big(x_i^{[l]} - u_j^{[l]}\big) \Big]\, w^{[\,]}(x_i^{[l]})$ and the bracketed factor tends to $1$ as the approximations converge. Repeating the argument used for the first step with $\widetilde{w}^{[\,]}$ in place of $w^{[\,]}$ then yields
$$ e_i^{[\ast]} = O\big(e_i^{2\psi + 2}\big). $$
Hence, the theorem is proven.    □

4. Numerical Results

To examine the effectiveness and stability of our proposed method, several engineering applications are analyzed in this section. In our experiments, we use the following termination criteria:
$$ (i) \quad e_i^{[l]} = \big\| x_i^{[l+1]} - x_i^{[l]} \big\|_2 < 10^{-32}, $$
where $e_i^{[l]}$ represents the residual error in the 2-norm. Additionally, for the percentage convergence, we use
$$ \text{Per-C} = \frac{\big\| x_i^{[l+1]} - x_i^{[l]} \big\|}{\big\| x_i^{[l]} \big\|} \times 100 < 10^{-32}. $$
Additionally, we measure the CPU execution time using an Intel(R) Core(TM) i7-4330m CPU running at 8.2 GHz with a 64-bit operating system. All computations are carried out in Maple 20 and C++ to determine the more realistic run time of the numerical approach for comparison. Further, we compare our newly developed method with the Nourein method [62] (NSM$^{[4]}$), which has a convergence order of four:
$$ x_i^{[l+1]} = x_i^{[l]} - \frac{w\big(x_i^{[l]}\big)}{1 + \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{w\big(x_j^{[l]}\big)}{x_i^{[l]} - x_j^{[l]} + w\big(x_j^{[l]}\big)}}\,; $$
and the Zhang et al. method [63] (ZSM$^{[5]}$):
$$ x_i^{[l+1]} = x_i^{[l]} - \frac{2\, w\big(x_i^{[l]}\big)}{1 + \Delta^{[\,]}\big(x_i^{[l]}\big) + \sqrt{\Big(1 + \Delta^{[\,]}\big(x_i^{[l]}\big)\Big)^2 + 4\, w\big(x_i^{[l]}\big) \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{w\big(x_j^{[l]}\big)}{\big(x_i^{[l]} - x_j^{[l]}\big)\big(x_i^{[l]} - w(x_i^{[l]}) - x_j^{[l]}\big)}}}\,, $$
where $\Delta^{[\,]}\big(x_i^{[l]}\big) = \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{w\big(x_j^{[l]}\big)}{x_i^{[l]} - x_j^{[l]}}$.
To compute all the roots of (2), we used Algorithm 1; the block diagram in Figure 3 depicts the flow chart of the inverse parallel scheme CSM$^{[\psi]}$.
Algorithm 1: Fractional numerical scheme CSM$^{[\psi]}$
Step 1: Choose initial values $x_i^{[0]}$ ($i = 1, \ldots, n$) and a tolerance $\epsilon > 0$; set $l = 0$.
Step 2: Compute the corrections
$$ \varsigma_j^{[l]} = x_j^{[l]} - \left( \Gamma(\psi + 1)\, \frac{g(x_j^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x_j^{[l]})} \right)^{1/\psi}, \qquad u_j^{[l]} = x_j^{[l]} - \left( \frac{1}{2}\, \Gamma(\psi + 1) \left[ 3 - \frac{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(\varsigma_j^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x_j^{[l]})} \right] \frac{g(x_j^{[l]})}{{}^{\check{c}}_{\psi_1}\mathcal{D}^{\psi} g(x_j^{[l]})} \right)^{1/\psi}. $$
Step 3: Update, for $i = 1, \ldots, n$ (with $w^{[\,]}$ and $\widetilde{w}^{[\,]}$ as defined in Section 3),
$$ \varsigma_i^{[l]} = x_i^{[l]} - \frac{w^{[\,]}(x_i^{[l]})}{1 + w^{[\,]}(x_i^{[l]}) / x_i^{[l]}}, \qquad x_i^{[l+1]} = x_i^{[l]} - \frac{\widetilde{w}^{[\,]}(x_i^{[l]})}{1 + \widetilde{w}^{[\,]}(x_i^{[l]}) / x_i^{[l]}}. $$
Step 4: If $e_i^{[l]} = \big\| x_i^{[l+1]} - x_i^{[l]} \big\| < 10^{-32}$ for all $i$, or the maximum number of iterations is reached, stop.
Step 5: Set $l = l + 1$ and go to Step 2.

4.1. Example 1: Fractional Relaxation–Oscillation Equation [64]

A fundamental tool for forecasting the dynamic behavior of systems exhibiting both relaxation and oscillatory properties is the fractional relaxation–oscillation equation. This equation has applications in materials science, engineering, and biology, as it incorporates fractional derivatives that account for the system’s memory and hereditary effects, extending traditional relaxation–oscillation models. The fractional relaxation–oscillation equation is given by
$$ {}^{\check{c}}_{\psi_1}\mathcal{D}^{n\psi} g(x) + \vartheta^{[\,]} g(x) = f(x); \quad \epsilon_0 \leq x \leq \epsilon_n, \qquad g^{((n-1)\psi)}(\epsilon_0) = \theta_{n-1}^{[\,]}, \; \ldots, \; g(\epsilon_0) = \theta_0^{[\,]}. \tag{57} $$
In viscoelastic materials with both elastic and viscous behavior, the fractional order ψ explains how the material’s stress response is influenced by its current state and deformation history.
The numerical solution of (57) can be obtained by solving the following polynomial, using the approach described in [65]. For this example, we choose n = 2 , ϑ [ ] = 0 , θ 0 [ ] = 0 , θ n 1 [ ] = 1 , and f ( x ) = g 2 x + 1 :
g x = x ψ Γ ( 1 + ψ ) + x 2 ψ Γ ( 1 + 2 ψ ) + 2 x 4 ψ Γ ( 1 + 4 ψ ) + 6 x 5 ψ Γ ( 1 + 5 ψ ) + 6 x 6 ψ Γ ( 1 + 6 ψ ) .
For ψ = 1 , this simplifies to
g x = x Γ ( 2 ) + x 2 Γ ( 3 ) + 2 x 4 Γ ( 5 ) + 6 x 5 Γ ( 6 ) + 6 x 6 Γ ( 7 ) ,
The Caputo-type derivative of Equation (58) is given by
ᶜD^ψ g(x) = (6/Γ(7 − ψ)) x^{6−ψ} + (6/Γ(6 − ψ)) x^{5−ψ} + (2/Γ(5 − ψ)) x^{4−ψ} + (1/Γ(3 − ψ)) x^{2−ψ} + (1/Γ(2 − ψ)) x^{1−ψ}.
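Both the polynomial and its Caputo-type derivative can be evaluated term by term with the gamma function, using the power rule ᶜD^ψ x^k = Γ(k + 1)/Γ(k + 1 − ψ) · x^{k−ψ}. The short Python check below is illustrative (it is not the paper's code); it confirms that at ψ = 1 the fractional derivative of the integer-order polynomial reduces to the classical derivative g′(x) = 1 + x + x³/3 + x⁴/4 + x⁵/20.

```python
from math import gamma

def g1(x):
    """Example 1 polynomial at psi = 1: x + x^2/2 + x^4/12 + x^5/20 + x^6/120."""
    return x + x**2 / 2 + x**4 / 12 + x**5 / 20 + x**6 / 120

def caputo_dg1(x, psi):
    """Caputo-type derivative of g1, assembled term by term from the power rule
    D^psi x^k = Gamma(k+1) / Gamma(k+1-psi) * x^(k-psi)."""
    return (6 / gamma(7 - psi) * x**(6 - psi)
            + 6 / gamma(6 - psi) * x**(5 - psi)
            + 2 / gamma(5 - psi) * x**(4 - psi)
            + 1 / gamma(3 - psi) * x**(2 - psi)
            + 1 / gamma(2 - psi) * x**(1 - psi))

x = 0.7
classical = 1 + x + x**3 / 3 + x**4 / 4 + x**5 / 20  # g1'(x) for psi = 1
```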
The non-linear Equation (58) has the following exact roots, accurate up to five decimal places:
ζ_{1,2} = −3.35615 ± 1.79321 i, ζ_3 = −1.76807, ζ_4 = 0.0, ζ_{5,6} = 1.24019 ± 1.77462 i.
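These values are easy to verify numerically (note that ζ_{1,2} and ζ_3 lie in the left half-plane, so their real parts are negative; Vieta's formulas for the ψ = 1 polynomial confirm the signs). A minimal illustrative Python check, where the residuals reflect only the five-decimal rounding of the quoted roots:

```python
# Residuals of the quoted roots in the psi = 1 form of Eq. (58):
# g(x) = x + x^2/2 + x^4/12 + x^5/20 + x^6/120.
def g1(x):
    return x + x**2 / 2 + x**4 / 12 + x**5 / 20 + x**6 / 120

quoted_roots = [-3.35615 + 1.79321j, -3.35615 - 1.79321j,
                -1.76807, 0.0,
                1.24019 + 1.77462j, 1.24019 - 1.77462j]
residuals = [abs(g1(z)) for z in quoted_roots]
```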
With initial guesses close to the exact solution, the convergence rate of the simultaneous method increases, and the method reaches the exact roots in fewer iterations (see Table 1).
Table 1 clearly indicates that when ψ = 1.0, the residual error for each root and the order and rate of convergence are superior to those of the existing methods, demonstrating better efficiency and stability. To observe the global convergence behavior, we generated random initial guesses using the “rand()” command in Matlab; these random initial vectors are presented in Table 2.
Root trajectories for roots 1–6 were determined by selecting random initial starting points using the Aberth initial approximation, with a maximum of 7, 8, 12, 10, and 12 iterations for ψ = 1.0 to converge to the exact roots. In Figure 4, the black solid circles represent the initial starting points from the Aberth approximation, the empty circles mark the intermediate iterates on the way to the precise roots, and the red crosses indicate the positions of the exact roots in the complex plane. The numerical technique converges to the exact roots for every set of initial test problems, displaying global convergence characteristics, as demonstrated in Figure 4.
Figure 5 clearly demonstrates that, starting from ψ = 0.1 and moving closer to 1, the number of iterations decreases and the accuracy increases, indicating a convergence rate that reaches its maximum as ψ approaches 1; this is because the fractional derivative equals the ordinary derivative at ψ = 1. The newly developed approach therefore stabilizes as ψ → 1.0. The scheme exhibits consistent behavior across all initial approximations, demonstrating global convergence and outperforming ZSM [5] and NSM [4].
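The dependence of the iteration count on ψ can be reproduced with the one-point Caputo-type fractional Newton step x_{k+1} = x_k − (Γ(ψ + 1) g(x_k)/ᶜD^ψ g(x_k))^{1/ψ}. The sketch below is illustrative: it applies the step to the simple hypothetical test equation x² − 2 = 0 (not the paper's examples), where the Caputo derivative of x² is Γ(3)/Γ(3 − ψ) x^{2−ψ} and the derivative of the constant vanishes. Starting from x₀ = 1.5 the iterates stay above √2, so the 1/ψ power always acts on a positive number.

```python
from math import gamma

def fractional_newton_sqrt2(psi, x0=1.5, tol=1e-8, max_iter=200):
    """Caputo-type fractional Newton iteration for f(x) = x^2 - 2, x > 0.

    Caputo derivative of x^2 is Gamma(3)/Gamma(3-psi) * x^(2-psi);
    the Caputo derivative of the constant term is zero.
    Returns the approximation and the number of iterations used.
    """
    x = x0
    for k in range(1, max_iter + 1):
        f = x * x - 2.0
        df = gamma(3) / gamma(3 - psi) * x ** (2 - psi)
        x -= (gamma(psi + 1) * f / df) ** (1 / psi)
        if abs(x * x - 2.0) < tol:
            return x, k
    return x, max_iter

x09, it09 = fractional_newton_sqrt2(0.9)   # fractional case
x10, it10 = fractional_newton_sqrt2(1.0)   # reduces to classical Newton
```

Running both cases shows the iteration count dropping sharply as ψ → 1, where the scheme coincides with Newton's method, in line with the trend reported in Figure 5.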
The global convergence behavior of the numerical scheme for different sets of random starting vectors is investigated in Table 2, Table 3 and Table 4. Random sets of starting values, generated with the Aberth initial approximation, are presented in Table 2 for finding all solutions of the considered application; the last column compares the starting vectors with the exact roots up to four decimal places. Table 3 presents the numerical outputs for fractional parameter values ψ ranging from 0.1 to 0.9 when these three initial vectors are used to find all solutions. As shown in Table 3, the accuracy increases with ψ and reaches its maximum as ψ → 1.0. The number of iterations required to reach this accuracy for the test vectors T_1^[ψ]–T_3^[ψ] is reported in Table 4, which also shows that it decreases as ψ increases from 0.1 to 0.9. Table 5 summarizes the scheme's overall behavior for the different sets of starting vectors.
Table 5 assesses the method's consistency against the existing methods ZSM [5] and NSM [4]. In Table 5, σ_i^[n−1] denotes the computational order of convergence, CPU the average CPU time over the initial vectors T_1^[ψ]–T_3^[ψ], Av-it the average number of iterations, and Per-C the percentage average. Comparing the results for ψ = 1.0 with the outcomes presented in Table 1 clearly shows that the method is more consistent than the existing methods and demonstrates the improved convergence behavior of CSM[ψ] compared to ZSM [5] and NSM [4].
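The computational order of convergence reported in the tables can be estimated from three consecutive errors via σ ≈ ln(e_{k+1}/e_k)/ln(e_k/e_{k−1}). A small illustrative Python helper (the error sequence below is hypothetical, chosen to behave quadratically, and is not data from the tables):

```python
from math import log

def coc(e_prev, e_curr, e_next):
    """Computational order of convergence from three consecutive errors."""
    return log(e_next / e_curr) / log(e_curr / e_prev)

# Hypothetical errors behaving like e_{k+1} = e_k^2 (quadratic convergence).
errors = [1e-2, 1e-4, 1e-8]
sigma = coc(*errors)
```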

4.2. Example 2: Civil Engineering Application [66]

Consider the fractional differential equation
ᶜD^ψ g(x) + e^{−2g(x)} = 0, 1 < ψ ≤ 2, with g(ϵ_0) = θ_0, …, g^{(n−1)}(ϵ_0) = θ_{n−1}.  (61)
The numerical solution of (61) can be obtained by solving the following polynomial using the approach described in [67] by choosing ϑ [ ] = 0 and θ n 1 [ ] = 1 :
g(x) = x − (1/Γ(ψ)) ∫_0^x (x − s)^{ψ−1} ds + (2/Γ(ψ)) ∫_0^x (x − s)^{ψ−1} s ds.
For ψ = 1.98 , we have
g(x) = x − 0.5092731905 x^{99/50} + 0.3417940876 x^{149/50},
and for ψ = 2 , we have
g(x) = (1/3) x³ − (1/2) x² + x.  (63)
The Caputo-type derivative of (63) is
ᶜD^ψ g(x) = (1/3) (Γ(4)/Γ(4 − ψ)) x^{3−ψ} − (1/2) (Γ(3)/Γ(3 − ψ)) x^{2−ψ} + (Γ(2)/Γ(2 − ψ)) x^{1−ψ}.
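The decimal coefficients in the ψ = 1.98 polynomial come from evaluating the two integrals in closed form: ∫_0^x (x − s)^{ψ−1} ds /Γ(ψ) = x^ψ/Γ(ψ + 1) and 2∫_0^x (x − s)^{ψ−1} s ds /Γ(ψ) = 2 x^{ψ+1}/Γ(ψ + 2), with x^{99/50} = x^ψ and x^{149/50} = x^{ψ+1}. A quick illustrative Python check of the coefficients:

```python
from math import gamma

psi = 1.98
c1 = 1 / gamma(psi + 1)   # coefficient of x^(99/50) = x^psi
c2 = 2 / gamma(psi + 2)   # coefficient of x^(149/50) = x^(psi + 1)
```

At ψ = 2 the same expressions give 1/Γ(3) = 1/2 and 2/Γ(4) = 1/3, recovering the cubic polynomial used below.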
The non-linear equation (63) has the following exact roots, accurate up to five decimal places:
ζ 1 , 2 = 0.75 ± 1.56124 i , ζ 3 = 0 .
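These roots follow from the factorization g(x) = x (x²/3 − x/2 + 1) (with the signs that reproduce the stated complex pair): the nonzero roots solve x² − (3/2)x + 3 = 0, and the quadratic formula recovers the quoted values. An illustrative Python check:

```python
import cmath

# Nonzero roots of g(x) = (1/3)x^3 - (1/2)x^2 + x solve x^2 - 1.5x + 3 = 0.
disc = cmath.sqrt(1.5**2 - 4 * 3)   # sqrt(-9.75), purely imaginary
r1 = (1.5 + disc) / 2
r2 = (1.5 - disc) / 2

g = lambda x: x**3 / 3 - x**2 / 2 + x
```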
With initial guesses close to the exact solution, the convergence rate of the simultaneous method increases, and the method reaches the exact roots in fewer iterations, as shown in Table 6.
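For this small example, the simultaneous root-finding idea can be illustrated with the classical (ψ = 1) Weierstrass–Durand–Kerner iteration applied to the monic multiple 3g(x) = x³ − (3/2)x² + 3x. This Python sketch is only the integer-order analogue of the parallel step, not the CSM[ψ] scheme, and the starting circle is an arbitrary Aberth-style choice:

```python
import cmath
from math import pi

def durand_kerner(p, z, tol=1e-12, max_iter=200):
    """Classical Weierstrass-Durand-Kerner iteration for a monic polynomial."""
    n = len(z)
    for _ in range(max_iter):
        z_new = []
        for i in range(n):
            denom = 1.0
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            z_new.append(z[i] - p(z[i]) / denom)  # Weierstrass correction
        shift = max(abs(a - b) for a, b in zip(z_new, z))
        z = z_new
        if shift < tol:
            break
    return z

p = lambda x: x**3 - 1.5 * x**2 + 3 * x   # monic multiple of g
# Aberth-style starting points on a circle around the root centroid 0.5.
z0 = [0.5 + 2 * cmath.exp(1j * (2 * pi * k / 3 + 0.4)) for k in range(3)]
roots = durand_kerner(p, z0)
```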
The root trajectories for roots 1 to 3 are determined by randomly selecting the initial starting values for each root using the Aberth approximation; the scheme requires at most 12 iterations (see Table 9) for the considered values of ψ to converge to the precise roots. The global convergence behavior of the numerical scheme is demonstrated by its consistent convergence to the exact roots for any set of starting test vectors T_1^[ψ]–T_3^[ψ], as depicted in Figure 6.
Figure 7 clearly shows that, starting from ψ = 0.1 and moving closer to 1, the number of iterations decreases: the convergence rate increases and reaches its maximum as ψ approaches 1, since the fractional derivative equals the ordinary derivative at that value. The scheme exhibits consistent behavior across all initial approximations, demonstrating global convergence and outperforming the current schemes ZSM [5] and NSM [4].
The global convergence behavior of the numerical technique is examined in Table 7, Table 8 and Table 9 for various sets of random initial vectors. Using the Aberth initial approximation, random sets of starting values are generated to find all solutions of the specified application; these sets are displayed in Table 7, whose last column compares the initial vectors with the exact roots up to four decimal places. Table 8 presents the numerical outputs for fractional parameter values ranging from 0.1 to 0.9 when these three initial vectors are used to find all possible solutions. As ψ increases from 0.1 to 0.9, Table 8 demonstrates that the accuracy rises and reaches its maximum as ψ → 1.0. The number of iterations required to achieve this accuracy for the different test vectors is reported in Table 9 and decreases as ψ grows from 0.1 to 0.9. Table 10 shows the scheme's overall behavior with the different sets of starting vectors.
It is evident from Table 10 that the method is more consistent than the other approaches: comparing the results for ψ = 1.0 with those shown in Table 6, the convergence behavior of CSM[ψ] is superior to that of ZSM [5] and NSM [4].

5. Conclusions

We developed a new Caputo-type fractional scheme with a convergence order of 2 ψ + 1 and transformed it into an inverse fractional parallel scheme to find all solutions to nonlinear fractional problems. Convergence analysis reveals that the parallel schemes have a convergence order of 2 ψ + 2. To enhance the convergence rate of fractional schemes, dynamical planes are utilized to select the optimal initial guessed values for convergence to exact solutions. Several nonlinear problems were considered to evaluate the stability and consistency of CSM[ψ] in comparison to ZSM [5] and NSM [4]. The numerical results demonstrate that the CSM[ψ] method is more stable and consistent in terms of residual error, CPU time, and error graphs for varied values of ψ than ZSM [5] and NSM [4]. The global convergence behavior was further examined using three initial test vectors, T_1^[ψ]–T_3^[ψ]. In the future, we plan to develop higher-order inverse parallel schemes using other fractional derivative definitions to address more complex problems in biomedical engineering and epidemic modeling.

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the Free University of Bozen-Bolzano (IN200Z SmartPrint) and by Provincia Autonoma di Bolzano/Alto Adige—Ripartizione Innovazione, Ricerca, Università e Musei (CUP codex I53C22002100003 PREDICT). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM) and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2024.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Abbreviations

In this article, the following abbreviations are used:
CSM[ψ] — newly developed scheme
n — number of iterations
CPU time — computational time in seconds
e — residual error (reported as a power of 10)
ρ_ς^[k−1] — computational local convergence order

References

  1. Liu, Y.; Yang, Q. Dynamics of a new Lorenz-like chaotic system. Nonlinear Anal. Real World Appl. 2010, 11, 2563–2572. [Google Scholar] [CrossRef]
  2. Liu, P.; Zhang, Y.; Mohammed, K.J.; Lopes, A.M.; Saberi-Nik, H. The global dynamics of a new fractional-order chaotic system. Chaos Solitons Fractals 2023, 175, 114006. [Google Scholar] [CrossRef]
  3. Ye, X.; Wang, X. Hidden oscillation and chaotic sea in a novel 3d chaotic system with exponential function. Nonlinear Dyn. 2023, 111, 15477–15486. [Google Scholar] [CrossRef]
  4. Venkateshan, S.P.; Swaminathan, P. Computational Methods in Engineering; Academic Press: Cambridge, MA, USA, 2014; pp. 317–373. [Google Scholar]
  5. Hiptmair, R. Finite elements in computational electromagnetism. Acta Numer. 2002, 11, 237–339. [Google Scholar] [CrossRef]
  6. Lomax, H.; Pulliam, T.H.; Zingg, D.W.; Kowalewski, T.A. Fundamentals of computational fluid dynamics. Appl. Mech. Rev. 2002, 55, B61. [Google Scholar] [CrossRef]
  7. Warren, C.; Giannopoulos, A.; Giannakis, I. gprMax: Open source software to simulate electromagnetic wave propagation for Ground Penetrating Radar. Comput. Phys. Commun. 2016, 209, 163–170. [Google Scholar] [CrossRef]
  8. Cantwell, B.J. Organized motion in turbulent flow. Annu. Rev. Fluid Mech. 1981, 13, 457–515. [Google Scholar] [CrossRef]
  9. Peters, S.; Lanza, G.; Jun, N.; Xiaoning, J.; Pei Yun, Y.; Colledani, M. Automotive manufacturing technologies—An international viewpoint. Manuf. Rev. 2014, 1, 1–12. [Google Scholar] [CrossRef]
  10. Singh, J.; Singh, H. Application of lean manufacturing in automotive manufacturing unit. Int. J. Lean Six Sigma 2020, 11, 171–210. [Google Scholar] [CrossRef]
  11. Ma, C.Y.; Shiri, B.; Wu, G.C.; Baleanu, D. New fractional signal smoothing equations with short memory and variable order. Optik 2020, 218, 164507. [Google Scholar] [CrossRef]
  12. Tolstoguzov, V. Phase behaviour of macromolecular components in biological and food systems. Food/Nahrung 2000, 44, 299–308. [Google Scholar] [CrossRef] [PubMed]
  13. Bullmore, E.; Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. 2012, 13, 336–349. [Google Scholar] [CrossRef] [PubMed]
  14. Arino, J.; Van den Driessche, P. Disease spread in metapopulations. Fields Inst. Commun. 2006, 4, 1–13. [Google Scholar]
  15. Baleanu, D. Fractional Calculus: Models and Numerical Methods; World Scientific: Singapore, 2012; Volume 3. [Google Scholar]
  16. Polyanin, A.D.; Zaitsev, V.F. Handbook of Ordinary Differential Equations: Exact Solutions, Methods, and Problems; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017. [Google Scholar]
  17. Diethelm, K.; Ford, N.J. Analysis of fractional differential equations. J. Math. Anal. Appl. 2002, 265, 229–248. [Google Scholar] [CrossRef]
  18. Gu, X.M.; Huang, T.-Z.; Zhao, Y.L.; Carpentieri, B. A fast implicit difference scheme for solving the generalized time–space fractional diffusion equations with variable coefficients. Numer. Methods Partial. Differ. Equ. 2021, 37, 1136–1162. [Google Scholar] [CrossRef]
  19. Gu, X.M.; Huang, T.Z.; Ji, C.C.; Carpentieri, B.; Alikhanov, A.A. Fast iterative method with a second-order implicit difference scheme for time-space fractional convection–diffusion equation. J. Sci. Comput. 2017, 72, 957–985. [Google Scholar] [CrossRef]
  20. Huang, Y.Y.; Gu, X.M.; Gong, Y.; Li, H.; Zhao, Y.L.; Carpentieri, B. A fast preconditioned semi-implicit difference scheme for strongly nonlinear space-fractional diffusion equations. Fractal Fract. 2021, 5, 230. [Google Scholar] [CrossRef]
  21. Karaagac, B. New exact solutions for some fractional order differential equations via improved sub-equation method. Discret. Contin. Dyn. Syst.-S 2019, 12, 447–454. [Google Scholar] [CrossRef]
  22. Manafian, J.; Allahverdiyeva, N. An analytical analysis to solve the fractional differential equations. Adv. Math. Models Appl. 2021, 6, 128–161. [Google Scholar]
  23. Qazza, A.; Saadeh, R. On the analytical solution of fractional SIR epidemic model. Appl. Comput. Intell. Soft Comput. 2023, 2023, 6973734. [Google Scholar] [CrossRef]
  24. Yépez-Martínez, H.; Rezazadeh, H.; Inc, M.; Akinlar, M.A.; Gomez-Aguilar, J.F. Analytical solutions to the fractional Lakshmanan–Porsezian–Daniel model. Opt. Quantum Electron. 2022, 54, 32. [Google Scholar] [CrossRef]
  25. Reynolds, D.R.; Gardner, D.J.; Woodward, C.S.; Chinomona, R. ARKODE: A flexible IVP solver infrastructure for one-step methods. ACM Trans. Math. Soft. 2023, 49, 1–26. [Google Scholar] [CrossRef]
  26. Ikhile, M.N.O. Coefficients for studying one-step rational schemes for IVPs in ODEs: III. Extrapolation methods. Comput. Math. Appl. 2004, 47, 1463–1475. [Google Scholar] [CrossRef]
  27. Rufai, M.A.; Ramos, H. A variable step-size fourth-derivative hybrid block strategy for integrating third-order IVPs, with applications. Int. J. Comput. Math. 2022, 99, 292–308. [Google Scholar] [CrossRef]
  28. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef]
  29. Gutierrez, J.M.; Hernández, M.A. An acceleration of Newton’s method: Super-Halley method. Appl Math Comput. 2001, 117, 223–239. [Google Scholar] [CrossRef]
  30. Chun, C. A new iterative method for solving nonlinear equations. Appl. Math. Comput. 2006, 178, 415–422. [Google Scholar] [CrossRef]
  31. Sharma, J.R.; Guha, R.K. A family of modified Ostrowski methods with accelerated sixth order convergence. Appl. Math. Comput. 2007, 190, 111–115. [Google Scholar] [CrossRef]
  32. King, P.R. The use of field theoretic methods for the study of flow in a heterogeneous porous medium. J. Phys. A Math. Gen. 1987, 20, 3935. [Google Scholar] [CrossRef]
  33. Shams, M.; Carpentieri, B. On highly efficient fractional numerical method for solving nonlinear engineering models. Mathematics 2023, 11, 4914. [Google Scholar] [CrossRef]
  34. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
  35. Mir, N.A.; Anwar, M.; Shams, M.; Rafiq, N.; Akram, S. On numerical schemes for determination of all roots simultaneously of non-linear equation. Mehran Univ. Res. J. Eng. Technol. 2022, 41, 208–218. [Google Scholar] [CrossRef]
  36. Akram, S.; Shams, M.; Rafiq, N.; Mir, N.A. On the stability of Weierstrass type method with King’s correction for finding all roots of non-linear function with engineering application. Appl. Math. Sci. 2020, 14, 461–473. [Google Scholar] [CrossRef]
  37. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170. [Google Scholar] [CrossRef]
  38. Amat, S.; Busquier, S.; Gutiérrez, J.M. Third-order iterative methods with applications to Hammerstein equations: A unified approach. J. Comput. Appl. Math. 2011, 235, 2936–2943. [Google Scholar] [CrossRef]
  39. Liu, Z.; Zheng, Q.; Zhao, P. A variant of Steffensen’s method of fourth-order convergence and its applications. Appl. Math. Comput. 2010, 216, 1978–1983. [Google Scholar] [CrossRef]
  40. Torres-Hernandez, A.; Brambila-Paz, F. Sets of fractional operators and numerical estimation of the order of convergence of a family of fractional fixed-point methods. Fractal Fract. 2021, 4, 240. [Google Scholar] [CrossRef]
  41. Akgül, A.; Cordero, A.; Torregrosa, J.R. A fractional Newton method with 2th-order of convergence and its stability. Appl. Math. Lett. 2019, 98, 344–351. [Google Scholar] [CrossRef]
  42. Cajori, F. Historical note on the Newton-Raphson method of approximation. Am. Math. Mon. 1911, 18, 29–32. [Google Scholar] [CrossRef]
  43. Kumar, P.; Agrawal, O.P. An approximate method for numerical solution of fractional differential equations. Signal Process 2006, 86, 2602–2610. [Google Scholar] [CrossRef]
  44. Falcão, M.I.; Miranda, F.; Severino, R.; Soares, M.J. Weierstrass method for quaternionic polynomial root-finding. Math. Methods Appl. Sci. 2018, 41, 423–437. [Google Scholar] [CrossRef]
  45. Nedzhibov, G.H. Inverse Weierstrass-Durand-Kerner Iterative Method. Int. J. Appl. Math. 2013, 28, 1258–1264. [Google Scholar]
  46. Shams, M.; Rafiq, N.; Ahmad, B.; Mir, N.A. Inverse numerical iterative technique for finding all roots of nonlinear equations with engineering applications. J. Math. 2021, 2021, 6643514. [Google Scholar] [CrossRef]
  47. Iliev, A.I. A generalization of Obreshkoff-Ehrlich method for multiple roots of polynomial equations. arXiv 2001, arXiv:math/0104239. [Google Scholar]
  48. Petković, M.S.; Petković, L.D.; Džunić, J. On an efficient method for the simultaneous approximation of polynomial multiple roots. Appl. Anal. Discret. Math. 2014, 1, 73–94. [Google Scholar] [CrossRef]
  49. Sebah, P.; Gourdon, X. Introduction to the gamma function. Am. J. Sci. Res. 2002, 1, 2–18. [Google Scholar]
  50. Almeida, R. A Caputo fractional derivative of a function with respect to another function. Commun. Nonlinear Sci. Numer. Simul. 2017, 44, 460–481. [Google Scholar] [CrossRef]
  51. Odibat, Z.M.; Shawagfeh, N.T. Generalized Taylor’s formula. Appl. Math. Comput. 2007, 186, 286–293. [Google Scholar] [CrossRef]
  52. Candelario, G.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. An optimal and low computational cost fractional Newton-type method for solving nonlinear equations. Appl. Math. Lett. 2022, 124, 107650. [Google Scholar] [CrossRef]
  53. Shams, M.; Kausar, N.; Agarwal, P.; Jain, S.; Salman, M.A.; Shah, M.A. On family of the Caputo-type fractional numerical scheme for solving polynomial equations. Appl. Math. Sci. Eng. 2023, 31, 2181959. [Google Scholar] [CrossRef]
  54. Candelario, G.; Cordero, A.; Torregrosa, J.R. Multipoint fractional iterative methods with (2α+1) th-order of convergence for solving nonlinear problems. Mathematics 2020, 8, 452. [Google Scholar] [CrossRef]
  55. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  56. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Stability and applicability of iterative methods with memory. J. Math. Chem. 2019, 57, 1282–1300. [Google Scholar] [CrossRef]
  57. Cordero, A.; Leonardo Sepúlveda, M.A.; Torregrosa, J.R. Dynamics and stability on a family of optimal fourth-order iterative methods. Algorithms 2022, 15, 387. [Google Scholar] [CrossRef]
  58. Cordero, A.; Lotfi, T.; Khoshandi, A.; Torregrosa, J.R. An efficient Steffensen-like iterative method with memory. Bull. Math. Soc. Sci. Math. Roum. 2015, 1, 49–58. [Google Scholar]
  59. Shams, M.; Ahmad Mir, N.; Rafiq, N.; Almatroud, A.O.; Akram, S. On dynamics of iterative techniques for nonlinear equation with applications in engineering. Math. Probl. Eng. 2020, 2020, 5853296. [Google Scholar] [CrossRef]
  60. Campos, B.; Canela, J.; Vindel, P. Dynamics of Newton-like root finding methods. Numer. Alg. 2023, 93, 1453–1480. [Google Scholar] [CrossRef]
  61. Shams, M.; Carpentieri, B. Efficient Inverse Fractional Neural Network-Based Simultaneous Schemes for Nonlinear Engineering Applications. Fractal Fract. 2023, 7, 849. [Google Scholar] [CrossRef]
  62. Anourein, A.W.M. An improvement on two iteration methods for simultaneous determination of the zeros of a polynomial. Inter. J. Comput. Math. 1977, 6, 241–252. [Google Scholar] [CrossRef]
  63. Zhang, X.; Peng, H.; Hu, G. A high order iteration formula for the simultaneous inclusion of polynomial zeros. Appl. Math. Comput. 2006, 179, 545–552. [Google Scholar] [CrossRef]
  64. Wu, G.C. Adomian decomposition method for non-smooth initial value problems. Math. Comput. Model. 2018, 54, 2104–2108. [Google Scholar] [CrossRef]
  65. Az-Zo’bi, E.A.; Qousini, M.M. Modified Adomian-Rach decomposition method for solving nonlinear time-dependent IVPs. Appl. Math. Sci. 2017, 11, 387–395. [Google Scholar] [CrossRef]
  66. Khodabakhshi, N.; Mansour Vaezpour, S.; Baleanu, D. Numerical solutions of the initial value problem for fractional differential equations by modification of the Adomian decomposition method. Fract. Calc. Appl. 2014, 17, 382–400. [Google Scholar] [CrossRef]
  67. Wazwaz, A.M.; Rach, R.; Bougoffa, L. Dual solutions for nonlinear boundary value problems by the Adomian decomposition method. Int. J. Numer. Meth. Heat. Fluid Flow 2016, 26, 2393–2409. [Google Scholar]
Figure 1. Dynamical planes of the rational map O_ε for ψ = 1.
Figure 2. (a–e) Dynamical planes of the rational map O_ε for various parameter values: (a) ψ = 0.9; (b) ψ = 0.8; (c) ψ = 0.7; (d) ψ = 0.6; (e) ψ = 0.5.
Figure 3. Flow chart of the inverse parallel scheme CSM [ ψ ] for solving (2).
Figure 4. Root trajectories for all roots using three sets of test vectors.
Figure 5. Residual error of the numerical technique for different fractional parameter values and computational orders of convergence.
Figure 6. Root trajectories for all roots using three sets of test vectors.
Figure 7. Residual error of the numerical scheme for various fractional parameter values and computational orders of convergence.
Table 1. Numerical results for Example 1.
Err ZSM [ 5 ] NSM [ 4 ] CSM [ ψ ]
e 1 [ 4 ] 9.31 × 10 45 9.92 × 10 99 1.1 × 10 104
e 2 [ 4 ] 3.69 × 10 55 4.72 × 10 76 0.0
e 3 [ 4 ] 2.77 × 10 76 5.12 × 10 85 0.0
e 4 [ 4 ] 4.47 × 10 39 6.27 × 10 101 4.9 × 10 165
e 5 [ 4 ] 2.77 × 10 76 5.12 × 10 85 0.0
e 6 [ 4 ] 4.47 × 10 39 6.27 × 10 101 4.9 × 10 165
σ i [ n 1 ] 0.052310.037940.01533
Table 2. Random initial test vectors T 1 [ ] T 3 [ ] for scheme CSM [ ψ ] .
Err T 1 [ ] T 2 [ ] T 3 [ ] Ex-Sol n
x 1 [ 0 ] 1.005 + 0.0078 i 1.063 + 0.062 i 1.013 + 0.0551 i 3.3652 1.7932 i
x 2 [ 0 ] 2.001 + 1.342 i 2.03 + 1.054 i 0.027 + 0.0139 i 3.3652 + 1.7932 i
x 3 [ 0 ] 1.105 + 0.0078 i 3.083 + 4.287 i 5.013 + 0.432 i 1.76807
x 4 [ 0 ] 0.093 + 0.0018 i 0.006 + 0.008 i 8.019 + 0.051 i 0.0
x 5 [ 0 ] 4.005 + 0.0044 i 1.044 + 0.098 i 4.013 + 0.091 i 1.24019 + 1.77462 i
x 6 [ 0 ] 2.012 + 0.0033 i 2.063 + 0.106 i 0.010 + 0.0331 i 1.24019 1.77462 i
Table 3. Residual error on random initial vectors for solving FDE [ 1 ] using fractional CSM [ ψ ] for different values of ψ .
ψ e 1 [ i ] e 2 [ i ] e 3 [ i ] e 4 [ i ] e 5 [ i ] e 6 [ i ]
0.1 0.2 × 10 5 0.2 × 10 5 0.2 × 10 5 0.2 × 10 5 0.2 × 10 5 0.2 × 10 5
0.2 1.4 × 10 6 1.4 × 10 6 1.4 × 10 6 1.4 × 10 6 1.4 × 10 6 1.4 × 10 6
0.3 0.7 × 10 6 0.7 × 10 6 0.7 × 10 6 0.7 × 10 6 0.7 × 10 6 0.7 × 10 6
0.4 0.5 × 10 4 0.5 × 10 4 0.5 × 10 4 0.5 × 10 4 0.5 × 10 4 0.5 × 10 4
0.5 5.4 × 10 6 5.4 × 10 6 5.4 × 10 6 5.4 × 10 6 5.4 × 10 6 5.4 × 10 6
0.6 1.3 × 10 7 1.3 × 10 7 1.3 × 10 7 1.3 × 10 7 1.3 × 10 7 1.3 × 10 7
0.7 0.6 × 10 9 0.6 × 10 9 0.6 × 10 9 0.6 × 10 9 0.6 × 10 9 0.6 × 10 9
0.8 0.3 × 10 11 0.3 × 10 11 0.3 × 10 11 0.3 × 10 11 0.3 × 10 11 0.3 × 10 11
0.9 2.6 × 10 16 2.6 × 10 16 2.6 × 10 16 2.6 × 10 16 2.6 × 10 16 2.6 × 10 16
Table 4. Number of iterations for random initial test vectors for solving engineering application 1 using CSM [ ψ ] .
ψ e 1 [ i ] e 2 [ i ] e 3 [ i ] e 4 [ i ] e 5 [ i ] e 6 [ i ]
0.1  7  8  12  11  10  12
0.2  8  8  11  7  7  11
0.3  8  7  8  6  8  10
0.4  7  8  8  7  7  9
0.5  6  7  9  6  6  9
0.6  9  7  7  8  6  7
0.7  7  5  7  9  6  8
0.8  6  8  7  7  7  6
0.9  5  5  5  7  7  5
Table 5. Consistency analysis for different ψ values for solving engineering application 1 as described in Example 1.
ψ σ i [ n − 1 ] Per-C Av-it CPU
0.1 1.1222423 31.001 05 0.09378
0.2 1.5234264 37.0234 05 0.08637
0.3 1.9874325 38.0283 06 0.08386
0.4 2.4574334 39.0082 06 0.07647
0.5 2.8796536 45.1093 05 0.06435
0.6 2.9994356 55.0072 06 0.06343
0.7 3.0034287 59.0041 06 0.05535
0.8 3.6785857 60.5632 05 0.02844
0.9 4.0123332 89.9823 05 0.00138
Table 6. Numerical results for Example 2.
Err ZSM [ 5 ] NSM [ 4 ] CSM [ ψ ]
e 1 [ 4 ] 0.01 × 10 55 9.92 × 10 109 0.0
e 2 [ 4 ] 1.09 × 10 25 4.72 × 10 86 0.0
e 3 [ 4 ] 7.00 × 10 76 5.12 × 10 95 5.12 × 10 195
σ i [ n 1 ] 0.07140.05180.0301
Table 7. Random initial test vectors T 1 [ ψ ] T 3 [ ψ ] for scheme CSM [ ψ ] .
Err T 1 [ ] T 2 [ ] T 3 [ ] Ex-Sol n
x 1 [ 0 ] 0.115 + 8.0078 i 0.963 + 0.231 i 4.015 + 6.0551 i 0.75 + 1.56124 i
x 2 [ 0 ] 6.005 + 3.1078 i 5.051 + 0.002 i 0.001 + 2.0431 i 0.0
x 6 [ 0 ] 1.05 + 0.0558 i 0.081 + 6.012 i 8.003 + 1.0051 i 0.75 1.56124 i
Table 8. Residual error on random initial vectors for solving FDE [ 1 ] using CSM [ ψ ] .
ψ e 1 [ i ] e 2 [ i ] e 3 [ i ]
0.1 0.2 × 10 5 0.2 × 10 5 0.2 × 10 5
0.2 1.4 × 10 6 1.4 × 10 6 1.4 × 10 6
0.3 0.7 × 10 6 0.7 × 10 6 0.7 × 10 6
0.4 0.5 × 10 4 0.5 × 10 4 0.5 × 10 4
0.5 5.4 × 10 6 5.4 × 10 6 5.4 × 10 6
0.6 1.3 × 10 7 1.3 × 10 7 1.3 × 10 7
0.7 0.6 × 10 9 0.6 × 10 9 0.6 × 10 9
0.8 0.3 × 10 11 0.3 × 10 11 0.3 × 10 11
0.9 2.6 × 10 16 2.6 × 10 16 2.6 × 10 16
Table 9. Number of iterations for random initial test vectors for solving engineering application 2.
ψ e 1 [ i ] e 2 [ i ] e 3 [ i ] e 4 [ i ] e 5 [ i ] e 6 [ i ]
0.1  7  8  12  11  10  12
0.2  8  8  11  7  7  11
0.3  8  7  8  6  8  10
0.4  7  8  8  7  7  9
0.5  6  7  9  6  6  9
0.6  9  7  7  8  6  7
0.7  7  5  7  9  6  8
0.8  6  8  7  7  7  6
0.9  5  5  5  7  7  5
Table 10. Consistency analysis for different ψ values for solving engineering application 2 using CSM [ ψ ] .
ψ σ i [ n − 1 ] Per-C Av-it CPU
0.1  1.1222423  31.001  05  0.19378
0.2  1.5234264  37.0234  05  0.08937
0.3  1.9874325  38.0283  06  0.08086
0.4  2.4574334  39.0082  06  0.08665
0.5  2.8796536  45.1093  05  0.05535
0.6  2.9994356  55.0072  06  0.04443
0.7  3.0034287  59.0041  06  0.03535
0.8  3.6785857  60.5632  05  0.02844
0.9  4.0123332  89.9823  05  0.00138
Shams, M.; Carpentieri, B. An Efficient and Stable Caputo-Type Inverse Fractional Parallel Scheme for Solving Nonlinear Equations. Axioms 2024, 13, 671. https://doi.org/10.3390/axioms13100671