Article

A High-Order Fractional Parallel Scheme for Efficient Eigenvalue Computation

by Mudassir Shams 1,2,3 and Bruno Carpentieri 1,*
1 Faculty of Engineering, Free University of Bozen-Bolzano, 39100 Bolzano, Italy
2 Department of Mathematics, Faculty of Arts and Science, Balikesir University, 10145 Balikesir, Turkey
3 Department of Mathematics and Statistics, Riphah International University I-14, Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(5), 313; https://doi.org/10.3390/fractalfract9050313
Submission received: 20 March 2025 / Revised: 6 May 2025 / Accepted: 8 May 2025 / Published: 13 May 2025

Abstract

Eigenvalue problems play a fundamental role in many scientific and engineering disciplines, including structural mechanics, quantum physics, and control theory. In this paper, we propose a fast and stable fractional-order parallel algorithm for solving eigenvalue problems. The method is implemented within a parallel computing framework, allowing simultaneous computations across multiple processors to improve both efficiency and reliability. A theoretical convergence analysis shows that the scheme achieves a local convergence order of $6\kappa+4$, where $\kappa \in (0,1]$ denotes the Caputo fractional order prescribing the memory depth of the derivative term. Comparative evaluations based on memory utilization, residual error, CPU time, and iteration count demonstrate that the proposed parallel scheme outperforms existing methods in our test cases, exhibiting faster convergence and greater efficiency. These results highlight the method's robustness and scalability for large-scale eigenvalue computations.

1. Introduction

Eigenvalue computation is a fundamental topic in mathematics and science, with applications in structural mechanics; vibration analysis [1,2,3]; and, more recently, in machine learning, signal processing, quantum physics, and stability analysis [4]. Efficient numerical methods are essential for handling large-scale problems in these fields. Analytical approaches based on solving characteristic polynomials are inherently limited. In most eigenvalue problems, the characteristic equation is a polynomial of degree five or higher. The Abel–Ruffini impossibility theorem [5] states that general polynomials of degree five or higher cannot be solved exactly by radicals, making closed-form analytical factorization impossible. Therefore, numerical methods must be used to compute eigenvalues accurately. Existing numerical techniques, while effective in many cases, can face limitations in computational efficiency, stability, and convergence speed, particularly for large-scale problems or when dealing with clustered or ill-conditioned eigenvalue distributions. These considerations motivate the development of fractional-order parallel algorithms, which leverage fractional calculus and parallel computing to improve accuracy, accelerate convergence, and reduce computational costs.
Classical iterative methods, such as the Power Method and Inverse Iteration, compute one eigenvalue at a time but suffer from several drawbacks:
  • Local convergence depends on an appropriate initial guess.
  • Performance deteriorates when eigenvalues are closely spaced.
  • Convergence slows or stalls when the derivative of the iterative function approaches zero.
  • Sequential computation is inefficient for large-scale problems.
On the other hand, parallel algorithms, including both integer- and fractional-order approaches, offer a promising alternative by computing multiple eigenvalues simultaneously while optimizing memory and processing resources. Several parallel techniques have been developed to enhance eigenvalue computation efficiency. The Jacobi and Divide-and-Conquer methods are widely used for symmetric eigenvalue problems. The Divide-and-Conquer algorithm ($DC^{[*]}$) [6] recursively decomposes the problem into smaller subproblems, facilitating efficient computation on modern multi-core systems, while the Jacobi method [7] applies concurrent rotations for matrix diagonalization. Additionally, the QR and generalized Schur decomposition ($QZ^{[*]}$) algorithms [8] have been parallelized using block matrix operations to accelerate convergence [9,10]. Important advantages of parallel and fractional-order approaches include the following:
  • Fractional-order methods incorporate historical information, enhancing convergence speed and accuracy.
  • Modern high-performance architectures reduce computation time.
  • Improved numerical stability for complex eigenvalue problems.
  • Higher precision without sacrificing efficiency.
  • Faster convergence with fewer iterations, reducing computational costs.
This evolution in numerical methods continues to drive progress in large-scale eigenvalue computations, enabling more efficient solutions across scientific and engineering disciplines. For large-scale eigenvalue problems, Krylov subspace-based algorithms such as the Parallel Lanczos [11] and Arnoldi methods [12] provide effective solutions by computing multiple eigenvalues within high-performance computing environments. More recent advancements, including Ehrlich techniques [13] and Weierstrass parallel methods [14], distribute the computational load across multiple processors to enhance efficiency. However, when applied to non-Hermitian or large sparse matrices, these approaches often face scaling limitations, numerical instability, and increased memory demands.
This study aims to develop a high-order, fractional Caputo-based parallel scheme that significantly enhances computational efficiency and accuracy. The proposed fractional-order parallel algorithm achieves an optimal convergence order, outperforming conventional techniques in both precision and performance. By leveraging the memory-preserving properties of fractional calculus, the method improves numerical stability while reducing computational overhead. Unlike traditional parallel schemes, the approach minimizes the number of required iterations, optimizing the use of computational resources. In real-world applications, many stability issues are fundamentally linked to eigenvalue problems. For example, in car dynamics, the linearized eigenvalues of the suspension-steering system determine pitch and roll stability, as well as ride comfort; a positive real part indicates a loss of control under disturbances. In aeroelasticity, aircraft wing flutter is diagnosed by monitoring the eigenvalues of the coupled structural–aerodynamic model, with flutter onset identified by the crossing of the imaginary axis by a complex conjugate eigenvalue pair. Since virtually all engineering stability analyses require eigenvalues to be computed with high precision, efficient iterative techniques for nonlinear eigenvalue problems are essential. The proposed scheme is validated on eigenvalue problems related to car stability and aircraft wing flutter, demonstrating its practical relevance. In addition, we introduce the fundamental concepts of fractional calculus that are critical for constructing and analyzing the parallel schemes. In particular, the Caputo derivative is one of the few fractional derivatives that satisfy the condition $D^{\kappa}(C) = 0$ for any constant $C$ [15]. The Caputo derivative of order $\kappa \in (0,1]$ incorporates a weighted history integral that smooths oscillations in the iterative map and prevents the derivative term from vanishing when the ordinary first derivative approaches zero.
This feature helps reduce divergence around flat regions or multiple roots, enhancing the robustness of the iterative process.
Definition 1
(Caputo fractional derivative). Let $f : \mathbb{R} \to \mathbb{R}$ with $f \in C^{m}[\kappa_1, x]$, $-\infty < \kappa_1 < x < +\infty$, $\kappa \geq 0$,
where $m = [\kappa] + 1$ and $[\kappa]$ is the integer part of $\kappa$. The Caputo fractional derivative [16,17] of order $\kappa$ is defined as
$$ D_{\kappa_1}^{\kappa} f(x) = \begin{cases} \dfrac{1}{\Gamma(m-\kappa)} \displaystyle\int_{\kappa_1}^{x} \dfrac{f^{(m)}(t)}{(x-t)^{\kappa-m+1}}\, dt, & \kappa \notin \mathbb{N}, \\[2ex] \dfrac{d^{m-1}}{dt^{m-1}} f(x), & \kappa = m-1 \in \mathbb{N}_0. \end{cases} $$
Here, Γ ( x ) denotes the Gamma function, given by
$$ \Gamma(x) = \int_{0}^{+\infty} u^{x-1} e^{-u}\, du, \qquad x > 0. $$
Remark 1.
The notation f C m ( [ κ 1 , x ] ) indicates that f is m-times continuously differentiable on the closed interval [ κ 1 , x ] . This regularity is necessary to ensure that the m-th derivative appearing in (2) exists and is continuous.
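For readers who wish to experiment with this definition numerically, the weak singularity of the kernel can be removed with the substitution $u = (x-t)^{1-\kappa}$, after which an ordinary trapezoid rule suffices. The sketch below assumes $0 < \kappa < 1$ and a known first derivative; the function name `caputo` is ours, not the paper's.

```python
import math
import numpy as np

def caputo(fprime, kappa, a, x, n=2000):
    """Caputo derivative of order kappa in (0, 1) with base point a,
    evaluated at x, for a function whose first derivative fprime is
    known. The substitution u = (x - t)**(1 - kappa) removes the weak
    singularity of the kernel (x - t)**(-kappa) at t = x, so a plain
    trapezoid rule converges quickly."""
    if not 0.0 < kappa < 1.0:
        raise ValueError("this sketch assumes 0 < kappa < 1")
    p = 1.0 - kappa
    u = np.linspace(0.0, (x - a) ** p, n)   # transformed variable
    t = x - u ** (1.0 / p)                  # back to the original variable
    y = fprime(t)                           # integrand is now smooth
    h = u[1] - u[0]
    integral = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
    return integral / (math.gamma(1.0 - kappa) * p)

# check against the closed form D^k t = x**(1-k) / Gamma(2-k)
approx = caputo(lambda t: np.ones_like(t), 0.5, 0.0, 1.0)
exact = 1.0 / math.gamma(1.5)
```

The check uses the standard identity for the Caputo derivative of a monomial, which also illustrates the property $D^{\kappa}(C)=0$: the constant part of a function contributes nothing to the integral.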
The Caputo derivative is adopted in this study because it offers several practical advantages over other fractional derivative definitions:
  • The Caputo derivative of a constant is zero, preserving the classical interpretation of steady-state solutions. In contrast, the Grünwald–Letnikov (GL) derivative yields nonzero values, complicating model interpretation.
  • Unlike the GL definition, the Caputo derivative requires only integer-order initial conditions. This allows empirically observed values to be directly incorporated without introducing additional fractional history terms.
  • Caputo operators improve the stability and convergence of higher-order iterative solvers by ensuring more consistent kernel behavior. Due to these advantages, the Caputo derivative is almost universally preferred in dynamical models and is particularly well suited for parallel fractional framework schemes.
The Taylor series theorem (Appendix A.1) provides a local approximation of a function near its root and serves as the foundation for iterative methods used to solve nonlinear equations. Expanding a function into its Taylor series, including higher-order terms, leads to more accurate and efficient iterative schemes. This approach underpins both function linearization and acceleration towards the root. Each of these classical methods forms the basic foundation for our novel family of fractional schemes. After developing the fractional versions, we incorporate them as correction steps within a parallel architecture capable of approximating all eigenvalues simultaneously in a single iteration step.
Using a Caputo-type fractional variant of Newton’s method ( FM 1 [ * ] ), Candelario et al. [18] introduced the following iteration scheme:
$$ y^{[\phi]} = x^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}, $$
where $D_{\kappa_1}^{\kappa} f(x_i) \approx D_{\xi}^{\kappa} f(\xi)$ near the root. For any $\kappa \in \mathbb{R}$, the order of convergence of this method is $\kappa+1$, with the error given by the following:
$$ e^{[\phi+1]} = \frac{\Gamma(2\kappa+1) - \Gamma^{2}(\kappa+1)}{\kappa\, \Gamma^{2}(\kappa+1)}\, b_2\, \big(e^{[\phi]}\big)^{\kappa+1} + O\!\big( (e^{[\phi]})^{2\kappa+1} \big), $$
where $e^{[\phi+1]} = y^{[\phi]} - \xi$ and $e^{[\phi]} = x^{[\phi]} - \xi$, with
$$ b_{\gamma} = \frac{\Gamma(\kappa+1)}{\Gamma(\gamma\kappa+1)}\, \frac{D_{\xi}^{\gamma\kappa} f(\xi)}{D_{\xi}^{\kappa} f(\xi)}, \qquad \gamma \geq 2. $$
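The one-step fractional Newton scheme above is easy to try on a concrete problem. The sketch below applies it to $f(x) = x^2 - 4$ with base point $\kappa_1 = 0$, where the Caputo derivative is available in closed form ($D^{\kappa}(x^2) = \frac{\Gamma(3)}{\Gamma(3-\kappa)} x^{2-\kappa}$, and the constant term drops out); the function name is ours.

```python
import math

def frac_newton(x, kappa, iters):
    """Caputo-type fractional Newton iteration for f(x) = x**2 - 4,
    with base point 0, where the Caputo derivative is known in closed
    form: D^k(x**2) = Gamma(3)/Gamma(3 - k) * x**(2 - k), and the
    derivative of the constant -4 vanishes."""
    g = math.gamma
    for _ in range(iters):
        fx = x * x - 4.0
        if fx == 0.0:               # landed exactly on the root
            break
        d = g(3.0) / g(3.0 - kappa) * x ** (2.0 - kappa)
        x = x - (g(kappa + 1.0) * fx / d) ** (1.0 / kappa)
    return x

r1 = frac_newton(3.0, 1.0, 12)    # kappa = 1 recovers classical Newton
r2 = frac_newton(3.0, 0.8, 100)   # fractional order: slower on smooth f
```

For $\kappa = 1$ the step reduces to the classical Newton update; for $\kappa < 1$ the iteration still converges from this starting point, though on a smooth integer-order function the observed rate is modest, which is consistent with the theory being stated for fractional Taylor expansions about the root.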
Shams et al. [19] proposed a one-step fractional iterative method with order κ + 1 , given by
$$ y^{[\phi]} = x^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \cdot \frac{1}{1 - \dfrac{\alpha f(x^{[\phi]})}{1 + f(x^{[\phi]})}} \right)^{1/\kappa}, $$
which satisfies the error relation
$$ e^{[\phi+1]} = \frac{(\alpha + b_2)\,\Gamma^{2}(\kappa+1) - b_2\,\Gamma(2\kappa+1)}{\kappa\,\Gamma(\kappa+1)}\, b_2\, \big(e^{[\phi]}\big)^{\kappa+1} + O\!\big( (e^{[\phi]})^{2\kappa+1} \big), $$
where α R .
Ali et al. [20] later proposed the following modified two-step scheme:
$$ z^{[\phi]} = y^{[\phi]} - \frac{\Gamma(\kappa+1)\, f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]}) - \alpha f(y^{[\phi]})}, $$
where
$$ y^{[\phi]} = x^{[\phi]} - \frac{\Gamma(\kappa+1)\, f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]}) - \alpha f(x^{[\phi]})}, $$
for $\alpha \in \mathbb{R}$, satisfying the following error equation:
$$ e^{[\phi+1]} = \frac{b_2}{\kappa\,\Gamma(\kappa+1)}\, \big(e^{[\phi]}\big)^{2\kappa}\, \frac{2 + \alpha\,\Gamma(\kappa+1)}{\kappa\,\Gamma(\kappa+1)} + O\!\big( (e^{[\phi]})^{3\kappa} \big). $$
In [21], a two-step fractional Newton-type method with convergence order $2\kappa+1$ was introduced:
$$ v^{[\phi]} = y^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}, $$
where
$$ y^{[\phi]} = x^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}, $$
satisfying the error relation
$$ e^{*} = \frac{\Gamma(2\kappa+1)}{\kappa^{2}\,\Gamma^{2}(\kappa+1)}\, Q\, b_2^{2}\, \big(e^{[\phi]}\big)^{2\kappa+1} + O\!\big( (e^{[\phi]})^{\kappa^{2}+2\kappa+1} \big), $$
where $Q = \dfrac{\Gamma^{2}(\kappa+1) - \Gamma(2\kappa+1)}{\Gamma^{2}(\kappa+1)}$, $e^{[\phi+1]} = v^{[\phi]} - \xi$, $e^{[\phi]} = x^{[\phi]} - \xi$, and $b_{\gamma} = \dfrac{\Gamma(\kappa+1)}{\Gamma(\gamma\kappa+1)}\, \dfrac{D_{\xi}^{\gamma\kappa} f(\xi)}{D_{\xi}^{\kappa} f(\xi)}$, $\gamma \geq 2$. The propagation of the local error $e^{[\phi]}$ from one iteration to the next is described by Equations (5), (7), (9) and (11). A complete derivation is not included here, as it follows the classical fractional Taylor expansion and perturbation arguments provided in [18,19,20,21]. Substituting the series expansion of $f$ about the exact root into the fractional correction step produces a power-law term in $e^{[\phi]}$, indicating a convergence order that explicitly depends on the fractional parameter.
The rest of this manuscript is structured as follows. Section 2 introduces and analyzes the proposed fractional-order scheme. Section 3 examines the region of percentage convergence using basins of attraction. Section 4 explores engineering applications, evaluating the proposed method in terms of efficiency, stability, and consistency against existing methods. Finally, Section 5 presents conclusions and potential future research directions.

2. Development and Analysis of the Caputo-Type Fractional Schemes

Classical numerical iterative schemes suffer from several drawbacks, such as the omission of memory effects, divergence of higher-order derivatives, slow convergence, and the limitation to computing only real roots in almost all situations. These issues reduce the precision of models for complex systems with long-range dependencies. Fractional-order approaches address these issues by leveraging historical data, enhancing stability, and simulating nonlocal processes. Therefore, they are more suitable for procedures that require higher model accuracy. The memory- and nonlocal-effect approach can be effectively modeled using Caputo fractional derivatives, providing a more realistic simulation of dynamic systems. The weight function μ ( ϑ ) dynamically modifies the scheme based on the ratio of function values between current and previous iterations. This dynamic adjustment aids in stabilizing and accelerating convergence, particularly under conditions where classical methods may struggle. To improve the convergence order of the existing technique (10), we propose a modified version that incorporates the weight function into the second step of (10). The updated formulation ( MFM 1 [ * * ] ) is given by the following:
$$ z^{[\phi]} = y^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})}\, \big(1 + \mu(\vartheta)\big) \right)^{1/\kappa}, $$
where
$$ y^{[\phi]} = x^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}, $$
and the weight function $\mu(\vartheta)$, which depends on the parameter
$$ \vartheta = \frac{f(y^{[\phi]})}{f(x^{[\phi]})}, $$
is specifically designed to guarantee a convergence order of $3\kappa+1$ (fourth order for $\kappa = 1$), as demonstrated in Theorem 1.
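A compact way to see the two-step weighted scheme above in action is to run it on $f(x) = x^2 - 4$ with Caputo base point $0$, where the fractional derivative has a closed form, and with the weight choice $\mu(\vartheta) = 2\vartheta/(1+\vartheta^2)$ (one of the admissible weights discussed later). The function name `mfm_step` is ours.

```python
import math

def mfm_step(x, kappa):
    """One pass of the two-step weighted scheme applied to
    f(x) = x**2 - 4, with weight mu(theta) = 2*theta/(1 + theta**2).
    The Caputo derivative uses base point 0, so the closed form
    D^k f(x) = Gamma(3)/Gamma(3 - k) * x**(2 - k) holds (the constant
    term of f has zero Caputo derivative)."""
    g = math.gamma
    f = lambda v: v * v - 4.0
    fx = f(x)
    if fx == 0.0:                  # already at the root
        return x
    d = g(3.0) / g(3.0 - kappa) * x ** (2.0 - kappa)
    y = x - (g(kappa + 1.0) * fx / d) ** (1.0 / kappa)
    theta = f(y) / fx
    mu = 2.0 * theta / (1.0 + theta * theta)
    return y - (g(kappa + 1.0) * f(y) / d * (1.0 + mu)) ** (1.0 / kappa)

x = 3.0
for _ in range(6):
    x = mfm_step(x, 1.0)   # kappa = 1: rapid convergence to the root 2
```

With $\kappa = 1$ the iteration reaches machine accuracy in a handful of steps; fractional orders $\kappa < 1$ converge more slowly on this smooth test function, in line with the $3\kappa+1$ order statement.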

2.1. Convergence Analysis

Convergence analysis is essential for assessing the reliability and efficiency of numerical methods used to solve nonlinear equations. It establishes the conditions under which an iterative approach converges to the desired solution. The local convergence properties of the proposed method are examined in Theorem 1.
Theorem 1.
Let $f : \Gamma \subseteq \mathbb{R} \to \mathbb{R}$ be a continuous function whose Caputo derivatives $D_{\kappa_1}^{m\kappa} f(x)$ exist for every positive integer $m$ and $\kappa \in (0,1]$ on the open interval $\Gamma$, and let $\xi$ denote the exact root of $f(x)$.
If the initial value $x_0$ is sufficiently close to $\xi$, and the function $\mu$ satisfies
$$ \mu(0) = 0, \qquad \mu'(0) = 2, \qquad |\mu''(0)| < \infty, $$
then the iterative scheme
$$ z^{[\phi]} = y^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})}\, \big(1 + \mu(\vartheta)\big) \right)^{1/\kappa} $$
has a convergence order of at least $3\kappa+1$, with the associated error given by
$$ e_{z}^{[\phi]} = \left( \frac{b_2^{3}}{\Gamma(\kappa)\,\kappa} - \frac{4\, b_2 b_3}{\Gamma(\kappa)\,\kappa} + \tau^{[*]} \right) \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big), $$
where
$$ b_{\gamma} = \frac{\Gamma(\kappa+1)}{\Gamma(\gamma\kappa+1)}\, \frac{D_{\xi}^{\gamma\kappa} f(\xi)}{D_{\xi}^{\kappa} f(\xi)}, \qquad \gamma \geq 2, $$
and $\tau^{[*]}$ is defined in Appendix A.2.
Proof. 
Let  ξ be a root of f ( x ) , and define x [ ϕ ] = ξ + e [ ϕ ] , y [ ϕ ] = ξ + e y [ ϕ ] , and z [ ϕ ] = ξ + e z [ ϕ ] . We assume that the function f ( x ϕ ) is smooth in a neighborhood of the root ξ , with no discontinuities or singularities in the region under consideration. In particular, f ( x ϕ ) should be continuously differentiable up to the required order, and the fractional derivative D κ 1 κ f ( x ϕ ) should also be free from singularities in this region. Expanding f ( x ϕ ) and D κ 1 κ f ( x ϕ ) using their Taylor series around x = ξ and noting that f ( ξ ) = 0 , we obtain.
$$ f(x^{[\phi]}) = \frac{D_{\xi}^{\kappa} f(\xi)}{\Gamma(\kappa+1)} \left[ \big(e^{[\phi]}\big)^{\kappa} + b_2 \big(e^{[\phi]}\big)^{2\kappa} + b_3 \big(e^{[\phi]}\big)^{3\kappa} + O\!\big( (e^{[\phi]})^{4\kappa} \big) \right]. $$
Taking the fractional derivative of (15) yields
$$ D_{\kappa_1}^{\kappa} f(x^{[\phi]}) = \frac{D_{\xi}^{\kappa} f(\xi)}{\Gamma(\kappa+1)} \left[ \Gamma(\kappa+1) + \frac{\Gamma(2\kappa+1)}{\Gamma(\kappa+1)}\, b_2 \big(e^{[\phi]}\big)^{\kappa} + \frac{\Gamma(3\kappa+1)}{\Gamma(2\kappa+1)}\, b_3 \big(e^{[\phi]}\big)^{2\kappa} + O\!\big( (e^{[\phi]})^{3\kappa} \big) \right]. $$
Dividing (15) by (16), we obtain
$$ \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} = \frac{\big(e^{[\phi]}\big)^{\kappa}}{\Gamma(\kappa)\,\kappa} + \sigma_1 \big(e^{[\phi]}\big)^{2\kappa} + \sigma_2 \big(e^{[\phi]}\big)^{3\kappa} + O\!\big( (e^{[\phi]})^{4\kappa} \big). $$
Thus,
$$ \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa} = \left[ \big(e^{[\phi]}\big)^{\kappa} + \left( b_2 - \frac{2^{2\kappa}\,\Gamma(\kappa+\tfrac12)\, b_2}{\Gamma(\kappa)\,\kappa\,\sqrt{\pi}} \right) \big(e^{[\phi]}\big)^{2\kappa} + O\!\big( (e^{[\phi]})^{3\kappa} \big) \right]^{1/\kappa} = e^{[\phi]} + \frac{1}{\kappa}\left( b_2 - \frac{2^{2\kappa}\,\Gamma(\kappa+\tfrac12)\, b_2}{\Gamma(\kappa)\,\kappa\,\sqrt{\pi}} \right) \big(e^{[\phi]}\big)^{\kappa+1} + O\!\big( (e^{[\phi]})^{2\kappa+1} \big). $$
Using (18) in the first step of (12), we obtain
$$ y^{[\phi]} - \xi = x^{[\phi]} - \xi - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}, $$
$$ y^{[\phi]} - \xi = \frac{1}{\kappa}\left( -b_2 + \frac{2^{2\kappa}\,\Gamma(\kappa+\tfrac12)\, b_2}{\Gamma(\kappa)\,\kappa\,\sqrt{\pi}} \right) \big(e^{[\phi]}\big)^{\kappa+1} + \sigma_3 \big(e^{[\phi]}\big)^{2\kappa+1} + O\!\big( (e^{[\phi]})^{3\kappa+1} \big), $$
that is,
$$ e_{y}^{[\phi]} = \frac{1}{\kappa}\left( -b_2 + \frac{2^{2\kappa}\,\Gamma(\kappa+\tfrac12)\, b_2}{\Gamma(\kappa)\,\kappa\,\sqrt{\pi}} \right) \big(e^{[\phi]}\big)^{\kappa+1} + \sigma_3 \big(e^{[\phi]}\big)^{2\kappa+1} + O\!\big( (e^{[\phi]})^{3\kappa+1} \big). $$
Expanding f y ϕ around y = ξ using a Taylor series, we obtain
$$ f(y^{[\phi]}) = \frac{D_{\xi}^{\kappa} f(\xi)}{\Gamma(\kappa+1)} \left[ \sigma_7 \big(e_{y}^{[\phi]}\big)^{\kappa+1} + \sigma_6 \big(e_{y}^{[\phi]}\big)^{2\kappa+1} + \sigma_5 \big(e_{y}^{[\phi]}\big)^{3\kappa+1} + \sigma_4 \big(e_{y}^{[\phi]}\big)^{4\kappa+1} + O\!\big( (e_{y}^{[\phi]})^{5\kappa+1} \big) \right]. $$
Multiplying (22) by the inverse of (15), we obtain
$$ \frac{f(y^{[\phi]})}{f(x^{[\phi]})} = \sigma_8\, e^{[\phi]} + \sigma_9 \big(e^{[\phi]}\big)^{\kappa+1} + \sigma_{10} \big(e^{[\phi]}\big)^{2\kappa+1} + \sigma_{11} \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big), $$
$$ \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} = \sigma_{12} \big(e^{[\phi]}\big)^{\kappa+1} + \sigma_{13} \big(e^{[\phi]}\big)^{2\kappa+1} + \sigma_{14} \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big). $$
Expanding μ ( ϑ ) as a Taylor series, we obtain
$$ \mu(\vartheta) = \mu(0) + \mu'(0)\,\vartheta + \frac{\mu''(0)}{2!}\,\vartheta^{2} + \cdots, $$
where $\vartheta = \dfrac{f(y^{[\phi]})}{f(x^{[\phi]})}$. Thus,
$$ \mu(\vartheta) = \mu(0) + \sigma_{15}\, e^{[\phi]} + \sigma_{16} \big(e^{[\phi]}\big)^{\kappa+1} + \sigma_{17} \big(e^{[\phi]}\big)^{2\kappa+1} + O\!\big( (e^{[\phi]})^{3\kappa+1} \big). $$
Therefore,
$$ \left( \Gamma(\kappa+1)\, \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})}\, \big(1+\mu(\vartheta)\big) \right)^{1/\kappa} = \left( \frac{2^{2\kappa}\, b_2\,\mu'(0)\,\Gamma(\kappa+\tfrac12)}{2\,\Gamma(\kappa)\,\kappa\,\sqrt{\pi}} - b_2 - b_2\,\mu'(0) + \frac{2^{2\kappa}\, b_2\,\Gamma(\kappa+\tfrac12)}{\Gamma(\kappa)\,\kappa\,\sqrt{\pi}} \right) \big(e^{[\phi]}\big)^{\kappa+1} + \sigma_{18} \big(e^{[\phi]}\big)^{2\kappa+1} + O\!\big( (e^{[\phi]})^{3\kappa+1} \big). $$
The expression for σ 1 σ 18 is provided in Appendix A.2. Substituting (27) into (12), we obtain
z ϕ ξ = y ϕ ξ Γ ( κ + 1 ) f y ϕ D κ 1 κ f x ϕ 1 + μ ϑ 1 / κ .
This simplifies to
$$ e_{z}^{[\phi]} = \sigma_{19} \big(e^{[\phi]}\big)^{\kappa+1} + \sigma_{20} \big(e^{[\phi]}\big)^{2\kappa+1} + \sigma_{21} \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big). $$
The coefficients $\sigma_{19}$ and $\sigma_{20}$ are rational expressions in $b_2$, $b_3$, $\mu(0)$, $\mu'(0)$, and Gamma-function factors; their fully expanded forms are lengthy, and the expression for $\sigma_{21}$ is provided in Appendix A.2. Choosing $\mu(0) = 0$ and $\mu'(0) = 2$ makes both $\sigma_{19}$ and $\sigma_{20}$ vanish, so that
$$ e_{z}^{[\phi]} = \left( \frac{b_2^{3}}{\Gamma(\kappa)\,\kappa} - \frac{4\, b_2 b_3}{\Gamma(\kappa)\,\kappa} + \tau^{[*]} \right) \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big). $$
Thus, the theorem is proven.    □

2.2. Some Special Cases of the Proposed Fractional Schemes

Weight functions play a crucial role in iterative methods for solving nonlinear equations by enhancing convergence speed and stability. They help regulate the influence of previous iterations, improving accuracy while reducing computational costs. A well-designed weight function adaptively balances efficiency and robustness, leading to more reliable numerical solutions. Building on the conditions established in Theorem 1, we introduce several distinct weight function formulations to optimize iterative algorithms, as outlined below.
Fractional Scheme $FSB^{[\vartheta_1]}$: By selecting the weight function
$$ \mu(\vartheta) = \frac{2\vartheta}{1 + \alpha\vartheta^{2}}, $$
which satisfies the conditions of Theorem 1, we derive the following fractional scheme:
$$ z_{1}^{[\phi]} = y^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \left( 1 + \frac{2\, \frac{f(y^{[\phi]})}{f(x^{[\phi]})}}{1 + \alpha \big( \frac{f(y^{[\phi]})}{f(x^{[\phi]})} \big)^{2}} \right) \right)^{1/\kappa}, $$
where $\alpha \in \mathbb{R}$. Here, $y^{[\phi]}$ is defined as
$$ y^{[\phi]} = x^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}. $$
The associated error equation is given by
$$ e^{[\phi+1]} = \left( \frac{b_2^{3}}{\Gamma(\kappa)\,\kappa} - \frac{4\, b_2 b_3}{\Gamma(\kappa)\,\kappa} + \tau^{[*]} \right) \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big). $$
Fractional Scheme $FSB^{[\vartheta_2]}$: By selecting the weight function
$$ \mu(\vartheta) = \frac{2\vartheta}{1 + \vartheta^{2}}, $$
which satisfies the conditions of Theorem 1, we derive the following fractional scheme:
$$ z_{2}^{[\phi]} = y^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \left( 1 + \frac{2\, \frac{f(y^{[\phi]})}{f(x^{[\phi]})}}{1 + \big( \frac{f(y^{[\phi]})}{f(x^{[\phi]})} \big)^{2}} \right) \right)^{1/\kappa}. $$
Here, $y^{[\phi]}$ is defined as
$$ y^{[\phi]} = x^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}. $$
The associated error equation is given by
$$ e^{[\phi+1]} = \left( \frac{b_2^{3}}{\Gamma(\kappa)\,\kappa} - \frac{4\, b_2 b_3}{\Gamma(\kappa)\,\kappa} + \tau^{[*]} \right) \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big). $$
Fractional Scheme $FSB^{[\vartheta_3]}$: By selecting the weight function
$$ \mu(\vartheta) = \vartheta + \frac{\vartheta}{1 + \alpha\vartheta^{2}}, $$
which satisfies the conditions of Theorem 1, we derive the following fractional scheme:
$$ z_{3}^{[\phi]} = y^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(y^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \left( 1 + \frac{f(y^{[\phi]})}{f(x^{[\phi]})} + \frac{\frac{f(y^{[\phi]})}{f(x^{[\phi]})}}{1 + \alpha \big( \frac{f(y^{[\phi]})}{f(x^{[\phi]})} \big)^{2}} \right) \right)^{1/\kappa}, $$
where $\alpha \in \mathbb{R}$. Here, $y^{[\phi]}$ is defined as
$$ y^{[\phi]} = x^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f(x^{[\phi]})}{D_{\kappa_1}^{\kappa} f(x^{[\phi]})} \right)^{1/\kappa}. $$
The associated error equation is given by
$$ e^{[\phi+1]} = \left( \frac{b_2^{3}}{\Gamma(\kappa)\,\kappa} - \frac{4\, b_2 b_3}{\Gamma(\kappa)\,\kappa} + \tau^{[*]} \right) \big(e^{[\phi]}\big)^{3\kappa+1} + O\!\big( (e^{[\phi]})^{4\kappa+1} \big). $$
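The three weight functions above can be checked against the conditions of Theorem 1 ($\mu(0)=0$, $\mu'(0)=2$) with a few lines of code; the derivative is approximated by a central finite difference, and the value of $\alpha$ is arbitrary.

```python
# finite-difference check of the Theorem 1 conditions mu(0) = 0 and
# mu'(0) = 2 for the three weight functions (alpha is arbitrary here)
alpha = 0.7
mus = [
    lambda t: 2 * t / (1 + alpha * t ** 2),        # FSB[theta_1]
    lambda t: 2 * t / (1 + t ** 2),                # FSB[theta_2]
    lambda t: t + t / (1 + alpha * t ** 2),        # FSB[theta_3]
]
h = 1e-6
values = [mu(0.0) for mu in mus]                   # should all be 0
derivs = [(mu(h) - mu(-h)) / (2 * h) for mu in mus]  # should all be ~2
```

All three choices pass, which is exactly why each of them yields the $3\kappa+1$ error equation of Theorem 1.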
In the following section, we demonstrate how transforming single-eigenvalue methods into parallel fractional approaches enhances computational efficiency by solving for all eigenvalues simultaneously, thereby reducing the number of iterations. This method improves stability and convergence by leveraging interdependencies between eigenvalues, effectively minimizing errors. Moreover, fractional parallel techniques offer a more precise and reliable solution for eigenvalue problems involving clustered or multiple eigenvalues.

3. Parallel Computing Scheme Construction and Theoretical Convergence

Among parallel numerical schemes, the Weierstrass–Durand–Kerner approach [22] is particularly appealing from a computational perspective. In this technique, each eigenvalue contributes to the approximation of the others, allowing for efficient parallel implementation. Its structure allows simultaneous updates, making it ideal for parallel computing architectures. This method is defined as
$$ y_{i}^{[\phi]} = x_{i}^{[\phi]} - \omega\big(x_{i}^{[\phi]}\big), $$
where
$$ \omega\big(x_{i}^{[\phi]}\big) = \frac{f\big(x_{i}^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - x_{j}^{[\phi]}\big)}, \qquad i, j = 1, \ldots, n, $$
represents Weierstrass’s correction. The method in (37) exhibits local quadratic convergence.
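A minimal sketch of the Weierstrass–Durand–Kerner iteration, applied to a small characteristic polynomial with known roots, looks as follows (the function name and iteration count are our choices):

```python
import numpy as np

def weierstrass(coeffs, iters=60):
    """Weierstrass-Durand-Kerner iteration: every approximation is
    corrected by w_i = f(x_i) / prod_{j != i} (x_i - x_j), so all roots
    are refined simultaneously -- each correction within one sweep can
    be computed independently, which is what makes the scheme easy to
    distribute across processors."""
    n = len(coeffs) - 1
    # the customary distinct complex starting values (0.4 + 0.9i)^k
    x = (0.4 + 0.9j) ** np.arange(1, n + 1)
    for _ in range(iters):
        w = np.empty(n, dtype=complex)
        for i in range(n):
            prod = np.prod(x[i] - np.delete(x, i))
            w[i] = np.polyval(coeffs, x[i]) / prod   # Weierstrass correction
        x = x - w
    return x

# characteristic polynomial (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
roots = np.sort(weierstrass([1.0, -6.0, 11.0, -6.0]).real)
```

Starting from generic complex values, all three roots are recovered at once; this simultaneous structure is the template on which the fractional corrections of this section are grafted.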
Nedzibov et al. [23] proposed a modified Weierstrass method:
$$ y_{i}^{[\phi]} = \frac{\big(x_{i}^{[\phi]}\big)^{2} \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - x_{j}^{[\phi]}\big)}{x_{i}^{[\phi]} \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - x_{j}^{[\phi]}\big) + f\big(x_{i}^{[\phi]}\big)}. $$
This scheme, also known as the inverse Weierstrass method, likewise exhibits local quadratic convergence.
In 1977, Ehrlich [24] introduced a third-order simultaneous iterative method defined as
$$ x_{i}^{[\phi+1]} = x_{i}^{[\phi]} - \frac{1}{\dfrac{1}{N_{i}\big(x_{i}^{[\phi]}\big)} - \displaystyle\sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{x_{i}^{[\phi]} - x_{j}^{[\phi]}}}, $$
where $N_{i}\big(x_{i}^{[\phi]}\big) = f\big(x_{i}^{[\phi]}\big)/f'\big(x_{i}^{[\phi]}\big)$ denotes Newton's correction.
Zhang et al. [25] proposed an iterative method with convergence order five, given by
$$ x_{i}^{[\phi+1]} = x_{i}^{[\phi]} - \frac{2\,\omega\big(x_{i}^{[\phi]}\big)}{1 + \Delta^{[*]}\big(x_{i}^{[\phi]}\big) + \sqrt{\Big(1 + \Delta^{[*]}\big(x_{i}^{[\phi]}\big)\Big)^{2} + 4\,\omega\big(x_{i}^{[\phi]}\big) \displaystyle\sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{\omega\big(x_{j}^{[\phi]}\big)}{\big(x_{i}^{[\phi]} - x_{j}^{[\phi]}\big)\big(x_{i}^{[\phi]} - \omega(x_{i}^{[\phi]}) - x_{j}^{[\phi]}\big)}}}. $$
Here, the term $\Delta^{[*]}\big(x_{i}^{[\phi]}\big)$ is defined as
$$ \Delta^{[*]}\big(x_{i}^{[\phi]}\big) = \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{\omega\big(x_{j}^{[\phi]}\big)}{x_{i}^{[\phi]} - x_{j}^{[\phi]}}. $$
Using x j [ ϕ ] = u j [ ϕ ] as a correction in (40), Petkovic et al. [26] improved the convergence order from three to six, resulting in the following scheme:
$$ y_{i}^{[\phi]} = x_{i}^{[\phi]} - \frac{1}{\dfrac{1}{N_{i}\big(x_{i}^{[\phi]}\big)} - \displaystyle\sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{x_{i}^{[\phi]} - u_{j}^{[\phi]}}}. $$
Here, $u_{j}^{[\phi]}$ is given by
$$ u_{j}^{[\phi]} = x_{j}^{[\phi]} - \frac{f\big(s_{j}^{[\phi]}\big) - f\big(x_{j}^{[\phi]}\big)}{2 f\big(s_{j}^{[\phi]}\big) - f\big(x_{j}^{[\phi]}\big)} \cdot \frac{f\big(x_{j}^{[\phi]}\big)}{f'\big(x_{j}^{[\phi]}\big)}, $$
where
$$ s_{j}^{[\phi]} = x_{j}^{[\phi]} - \frac{f\big(x_{j}^{[\phi]}\big)}{f'\big(x_{j}^{[\phi]}\big)}. $$
This technique is also applied in biomedical engineering to solve nonlinear equations, enabling the analysis of various problems and treatment methodologies. Here, we consider the following iterative scheme [27] for finding simple roots of nonlinear equations.
The method proposed in [28] has been shown to achieve higher accuracy, faster convergence, robustness, and improved computational efficiency compared to conventional single root-finding techniques. The iterative scheme is defined as
$$ x^{[\phi+1]} = y^{[\phi]} - \frac{5 f'\big(x^{[\phi]}\big) + 7 f'\big(y^{[\phi]}\big)}{3 f'\big(x^{[\phi]}\big) + f'\big(y^{[\phi]}\big)} \cdot \frac{f\big(y^{[\phi]}\big)}{4 f'\big(y^{[\phi]}\big) - f'\big(x^{[\phi]}\big)}, $$
where
$$ y^{[\phi]} = x^{[\phi]} - \frac{f\big(x^{[\phi]}\big)}{f'\big(x^{[\phi]}\big)}. $$
The scheme has a convergence order of 6 and satisfies the following local error relation:
$$ e_{i}^{[\phi+1]} = \frac{1}{3}\, \Big( 42\, b_2^{2} \big( b_2^{2} - b_3 \big) \Big) \big(e^{[\phi]}\big)^{5} + O\!\big( (e^{[\phi]})^{6} \big), $$
where
$$ e_{i}^{[\phi+1]} = x^{[\phi+1]} - \xi, \qquad e^{[\phi]} = x^{[\phi]} - \xi, $$
and
$$ b_{\gamma} = \frac{1}{\gamma!}\, \frac{f^{(\gamma)}(\xi)}{f'(\xi)}, \qquad \gamma \geq 2. $$
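The two-step scheme displayed above is easy to test numerically. Note that the 5/7/3/1/4 coefficient pattern below follows our reading of a damaged source formula, so treat the sketch as illustrative rather than as the authors' exact method; the consistency check is that the combined factor reduces to the Newton correction $f(y)/f'$ at the root.

```python
def two_step(x, f, fp, iters=4):
    """Two-step iteration: a Newton predictor followed by a corrector
    weighted by f'(x) and f'(y). The 5/7/3/1/4 coefficient pattern is
    our reconstruction of the garbled source formula (at the root it
    collapses to 12/(4*3) = 1 times the Newton step on y)."""
    for _ in range(iters):
        fx, dx = f(x), fp(x)
        y = x - fx / dx                      # Newton predictor
        fy, dy = f(y), fp(y)
        x = y - (5 * dx + 7 * dy) / (3 * dx + dy) * fy / (4 * dy - dx)
    return x

root = two_step(3.0, lambda v: v * v - 4.0, lambda v: 2.0 * v)
```

On $f(x)=x^2-4$ the iteration reaches machine accuracy within a few passes, consistent with a high local convergence order.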
By replacing x j with z t j ϕ in (38) and applying Weierstrass’s correction in (43), we derive a new parallel scheme for simultaneously computing all eigenvalues:
$$ x_{i}^{[\phi+1]} = y_{i}^{[\phi]} - \frac{5 \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - x_{j}^{[\phi]}\big) + 7 \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_{i}^{[\phi]} - y_{j}^{[\phi]}\big)}{3 \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - x_{j}^{[\phi]}\big) + \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_{i}^{[\phi]} - y_{j}^{[\phi]}\big)} \cdot \frac{f\big(y_{i}^{[\phi]}\big)}{4 \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_{i}^{[\phi]} - y_{j}^{[\phi]}\big) - \prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - x_{j}^{[\phi]}\big)}. $$
Here, the intermediate values are defined as
$$ y_{i}^{[\phi]} = x_{i}^{[\phi]} - \frac{f\big(x_{i}^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - z_{tj}^{[\phi]}\big)}, \qquad t = 1, 2, 3. $$
Additionally, the correction terms for different schemes are given by
$$ z_{1j}^{[\phi]} = y_{j}^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f\big(y_{j}^{[\phi]}\big)}{D_{\kappa_1}^{\kappa} f\big(x_{j}^{[\phi]}\big)} \left( 1 + \frac{2\, \frac{f(y_{j}^{[\phi]})}{f(x_{j}^{[\phi]})}}{1 + \alpha \big( \frac{f(y_{j}^{[\phi]})}{f(x_{j}^{[\phi]})} \big)^{2}} \right) \right)^{1/\kappa}, $$
$$ z_{2j}^{[\phi]} = y_{j}^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f\big(y_{j}^{[\phi]}\big)}{D_{\kappa_1}^{\kappa} f\big(x_{j}^{[\phi]}\big)} \left( 1 + \frac{2\, \frac{f(y_{j}^{[\phi]})}{f(x_{j}^{[\phi]})}}{1 + \big( \frac{f(y_{j}^{[\phi]})}{f(x_{j}^{[\phi]})} \big)^{2}} \right) \right)^{1/\kappa}, $$
$$ z_{3j}^{[\phi]} = y_{j}^{[\phi]} - \left( \Gamma(\kappa+1)\, \frac{f\big(y_{j}^{[\phi]}\big)}{D_{\kappa_1}^{\kappa} f\big(x_{j}^{[\phi]}\big)} \left( 1 + \frac{f(y_{j}^{[\phi]})}{f(x_{j}^{[\phi]})} + \frac{\frac{f(y_{j}^{[\phi]})}{f(x_{j}^{[\phi]})}}{1 + \alpha \big( \frac{f(y_{j}^{[\phi]})}{f(x_{j}^{[\phi]})} \big)^{2}} \right) \right)^{1/\kappa}. $$
Fractional-order methods are employed as correction terms within the parallel architecture to enhance its reliability. These schemes improve both the convergence and stability of the parallel approach. In particular, when roots are closely spaced, fractional derivatives help prevent the divergence that often occurs in traditional parallel structures. By incorporating these techniques, the parallel root-finding scheme becomes more efficient and yields more accurate and stable solutions to eigenvalue problems. The resulting parallel schemes are denoted as $PFSB^{[\vartheta_1]}$–$PFSB^{[\vartheta_3]}$. The methods can be rewritten as
$$ x_{i}^{[\phi+1]} = y_{i}^{[\phi]} - \frac{5 + 7 \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{y_{i}^{[\phi]} - y_{j}^{[\phi]}}{x_{i}^{[\phi]} - x_{j}^{[\phi]}}}{3 + \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{y_{i}^{[\phi]} - y_{j}^{[\phi]}}{x_{i}^{[\phi]} - x_{j}^{[\phi]}}} \cdot \frac{f\big(y_{i}^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_{i}^{[\phi]} - y_{j}^{[\phi]}\big) \left( 4 - \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_{i}^{[\phi]} - x_{j}^{[\phi]}}{y_{i}^{[\phi]} - y_{j}^{[\phi]}} \right)}, $$
where
$$ y_{i}^{[\phi]} = x_{i}^{[\phi]} - \frac{f\big(x_{i}^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - z_{tj}^{[\phi]}\big)}. $$
For multiple roots, the method is extended as
$$ x_{i}^{[\phi+1]} = y_{i}^{[\phi]} - \frac{5 + 7 \prod_{\substack{j=1 \\ j \neq i}}^{n} \left( \frac{y_{i}^{[\phi]} - y_{j}^{[\phi]}}{x_{i}^{[\phi]} - x_{j}^{[\phi]}} \right)^{\sigma_j}}{3 + \prod_{\substack{j=1 \\ j \neq i}}^{n} \left( \frac{y_{i}^{[\phi]} - y_{j}^{[\phi]}}{x_{i}^{[\phi]} - x_{j}^{[\phi]}} \right)^{\sigma_j}} \cdot \frac{f\big(y_{i}^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_{i}^{[\phi]} - y_{j}^{[\phi]}\big)^{\sigma_j} \left( 4 - \prod_{\substack{j=1 \\ j \neq i}}^{n} \left( \frac{x_{i}^{[\phi]} - x_{j}^{[\phi]}}{y_{i}^{[\phi]} - y_{j}^{[\phi]}} \right)^{\sigma_j} \right)}, $$
where
$$ y_{i}^{[\phi]} = x_{i}^{[\phi]} - \frac{f\big(x_{i}^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_{i}^{[\phi]} - z_{tj}^{[\phi]}\big)^{\sigma_j}}. $$
The following theorem establishes the local order of convergence of the proposed parallel fractional schemes.
Theorem 2.
Let $\zeta_1, \ldots, \zeta_n$ be simple roots of a nonlinear equation, and assume that the initial distinct approximations $x_1^{[0]}, \ldots, x_n^{[0]}$ are sufficiently close to the exact roots. Then, the $PFSB^{[\vartheta_1]}$–$PFSB^{[\vartheta_3]}$ methods achieve a convergence order of $6\kappa+4$.
Proof. 
Let $e_i = x_i^{[\phi]} - \zeta_i$, $e_i' = y_i^{[\phi]} - \zeta_i$, and $e_i^{[*]} = x_i^{[\phi+1]} - \zeta_i$ represent the errors in $x_i^{[\phi]}$, $y_i^{[\phi]}$, and $x_i^{[\phi+1]}$, respectively. From the first step of the $PFSB^{[\vartheta_1]}$–$PFSB^{[\vartheta_3]}$ methods, we obtain
$$ y_i^{[\phi]} - \zeta_i = x_i^{[\phi]} - \zeta_i - \frac{f\big(x_i^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(x_i^{[\phi]} - z_{tj}^{[\phi]}\big)}. $$
Rewriting in terms of error terms, we obtain
$$ e_i' = e_i - e_i \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_i^{[\phi]} - \zeta_j}{x_i^{[\phi]} - z_{tj}^{[\phi]}}, $$
which simplifies to
$$ e_i' = e_i \left( 1 - \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_i^{[\phi]} - \zeta_j}{x_i^{[\phi]} - z_{tj}^{[\phi]}} \right). $$
From the Taylor series expansion, we approximate
$$ \frac{x_i^{[\phi]} - \zeta_j}{x_i^{[\phi]} - z_{tj}^{[\phi]}} = 1 + \frac{z_{tj}^{[\phi]} - \zeta_j}{x_i^{[\phi]} - z_{tj}^{[\phi]}}. $$
Since $z_{tj}^{[\phi]} - \zeta_j = O\big( |e_i^{[\phi]}|^{3\kappa+1} \big)$, we obtain
$$ \frac{x_i^{[\phi]} - \zeta_j}{x_i^{[\phi]} - z_{tj}^{[\phi]}} = 1 + \frac{z_{tj}^{[\phi]} - \zeta_j}{x_i^{[\phi]} - z_{tj}^{[\phi]}} = 1 + O\big( |e_i^{[\phi]}|^{3\kappa+1} \big). $$
Thus, we derive
$$ e_i' = e_i \cdot O\big( |e_i^{[\phi]}|^{3\kappa+1} \big) = O\big( |e_i^{[\phi]}|^{3\kappa+2} \big). $$
For the second step,
$$ x_i^{[\phi+1]} - \zeta_i = y_i^{[\phi]} - \zeta_i - \frac{5 + 7 \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{y_i^{[\phi]} - y_j^{[\phi]}}{x_i^{[\phi]} - x_j^{[\phi]}}}{3 + \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{y_i^{[\phi]} - y_j^{[\phi]}}{x_i^{[\phi]} - x_j^{[\phi]}}} \cdot \frac{f\big(y_i^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_i^{[\phi]} - y_j^{[\phi]}\big) \left( 4 - \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_i^{[\phi]} - x_j^{[\phi]}}{y_i^{[\phi]} - y_j^{[\phi]}} \right)}. $$
Rewriting this in terms of the error terms, we obtain
$$ e_i^{[*]} = e_i' - \frac{5 + 7 \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{y_i^{[\phi]} - y_j^{[\phi]}}{x_i^{[\phi]} - x_j^{[\phi]}}}{3 + \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{y_i^{[\phi]} - y_j^{[\phi]}}{x_i^{[\phi]} - x_j^{[\phi]}}} \cdot \frac{f\big(y_i^{[\phi]}\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_i^{[\phi]} - y_j^{[\phi]}\big) \left( 4 - \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_i^{[\phi]} - x_j^{[\phi]}}{y_i^{[\phi]} - y_j^{[\phi]}} \right)}. $$
Since, to leading order,
$$ \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{y_i^{[\phi]} - y_j^{[\phi]}}{x_i^{[\phi]} - x_j^{[\phi]}} = 1 + O(e_i), $$
we obtain the following using the error propagation assumption:
$$ \frac{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_i^{[\phi]} - \zeta_j\big)}{\prod_{\substack{j=1 \\ j \neq i}}^{n} \big(y_i^{[\phi]} - y_j^{[\phi]}\big)} = 1 + O(e_i). $$
Assuming $e_i$, $e_j$, and $e_i'$ are of the same order of magnitude, we obtain
$$ e_i^{[*]} = O\big( e_i'^{\,2} \big). $$
Since $e_i' = O\big( |e_i|^{3\kappa+2} \big)$, we conclude
$$ e_i^{[*]} = O\big( |e_i|^{6\kappa+4} \big). $$
This completes the proof.    □

4. Computational Efficiency and Numerical Results

Computational efficiency in parallel iterative methods is crucial for solving complex problems while minimizing computational costs. Analyzing efficiency allows us to identify numerical schemes that require fewer arithmetic operations while maintaining accuracy, stability, and consistency in solving eigenvalue problems. Efficient algorithms reduce execution time, making them particularly well suited for large-scale simulations and real-time applications. Comparing computational performance helps in selecting methods that maximize efficiency without compromising precision. Optimized numerical schemes are especially valuable in high-performance computing and engineering contexts, where computational resources are limited and reliability is essential. The computational efficiency of a parallel scheme is an important metric for evaluating how effectively it solves a problem, especially in terms of resource usage such as time, memory and the number of operations. To quantify this efficiency, we use the expression
$$ E = \frac{\log r}{D^{[*]}}, $$
where $E$ denotes the computing efficiency, $r$ is the convergence order of the fractional-order parallel scheme, and $D^{[*]}$ represents the total computational cost of the algorithm. The cost $D^{[*]}$ is computed as the weighted sum of all computational operations performed at each step [29]:
$$ D^{[*]} = \vartheta_1\, AS_n + \vartheta_2\, M_n + \vartheta_3\, D_n. $$
Here, we have the following:
  • A S n represents the number of additions and subtractions.
  • M n is the number of multiplications.
  • D n denotes the number of divisions required for a polynomial of degree n.
The constants $\vartheta_1$, $\vartheta_2$, and $\vartheta_3$ are weights that reflect the relative computational cost of each operation type (addition/subtraction, multiplication, and division) in each iteration. These weights are empirically determined based on the performance characteristics of the computing platform or hardware in use. These considerations are critical for analyzing large-scale systems and improving the overall accuracy and robustness of solutions. Based on the data in Table 1, the percentage computational efficiency (65) is calculated using
$$ \psi\big( \mathrm{PFSB}^{[\vartheta_1]}, \psi^{[*]} \big) = \left( \frac{E\big( \mathrm{PFSB}^{[\vartheta_1]} \big)}{E\big( \psi^{[*]} \big)} - 1 \right) \times 100, $$
which can be used to assess the performance of newly developed parallel schemes in comparison to existing methods in the literature. In terms of consistency, stability, and percentage computational efficiency, the following numerical schemes—classified by their convergence order—are analyzed:
  • Petković et al.'s method ($PMM^{[*]}$) [30] is as follows:
    $$ x_i^{[\phi+1]} = x_i^{[\phi]} - \frac{1}{\dfrac{f'\big(x_i^{[\phi]}\big)}{f\big(x_i^{[\phi]}\big)} - \displaystyle\sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{x_i^{[\phi]} - z_j^{[\phi]}}}, $$
    where
    $$ s_j^{[\phi]} = x_j^{[\phi]} - \frac{f\big(x_j^{[\phi]}\big)}{f'\big(x_j^{[\phi]}\big)}, \qquad v_j^{[\phi]} = s_j^{[\phi]} - \frac{f\big(x_j^{[\phi]}\big)\, f\big(s_j^{[\phi]}\big)}{f'\big(x_j^{[\phi]}\big) \big( f\big(x_j^{[\phi]}\big) - 2 f\big(s_j^{[\phi]}\big) \big)}, \qquad z_j^{[\phi]} = v_j^{[\phi]} - \frac{f\big(v_j^{[\phi]}\big)\, f\big(x_j^{[\phi]}\big)}{f'\big(x_j^{[\phi]}\big)} \cdot \frac{f\big(x_j^{[\phi]}\big) - f\big(v_j^{[\phi]}\big)}{\big( 2 f\big(s_j^{[\phi]}\big) + f\big(x_j^{[\phi]}\big) \big) \big( 2 f\big(s_j^{[\phi]}\big) - f\big(v_j^{[\phi]}\big) \big)}. $$
  • Wang–Wu’s parallel scheme ( NNM [ * ] ) [31] is as follows:
    x i ϕ + 1 = x i ϕ 1 r i ϕ f x j ϕ f x j ϕ 2 R 1 , i [ ϕ ] 2 + R 2 , i [ ϕ ] 1
  • The Farmer–Loizou-like technique ($NFM^{[*]}$) [32] is combined with Newton's method at the first step:
    $$ x_i^{[\phi+1]} = x_i^{[\phi]} - \frac{f\big(x_i^{[\phi]}\big)}{f'\big(x_i^{[\phi]}\big)} \left[ 1 - \frac{f\big(x_i^{[\phi]}\big)}{f'\big(x_i^{[\phi]}\big)}\, E_{2,i}^{[*]} + \frac{\big(u_i^{[\phi]}\big)^{2}}{2} \Big( 2 \big( E_{2,i}^{[*]} \big)^{2} - R_{2,i}^{[\phi]} \Big) \right]^{-1}, $$
    where
    $$ r_i^{[\phi]} = \frac{f\big(x_i^{[\phi]}\big)\, f''\big(x_i^{[\phi]}\big)}{\big( f'\big(x_i^{[\phi]}\big) \big)^{2}}, \qquad R_{k,i}^{[\phi]} = \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{\left( x_i^{[\phi]} - x_j^{[\phi]} + \frac{f\big(x_j^{[\phi]}\big)}{f'\big(x_j^{[\phi]}\big)} \right)^{k}}, \qquad E_{2,i}^{[*]} = \frac{f''\big(x_i^{[\phi]}\big)}{2 f'\big(x_i^{[\phi]}\big)}, \qquad k = 1, 2. $$
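The efficiency measure above is simple enough to evaluate directly. In the sketch below, both the operation counts and the weights are illustrative placeholders (the paper's Table 1 values are not reproduced here), so only the mechanics of the comparison should be taken from it.

```python
import math

def efficiency(r, add_sub, mults, divs, w=(1.0, 2.5, 4.0)):
    """Computing efficiency E = log(r) / D*, with D* the weighted count
    of additions/subtractions, multiplications, and divisions. The
    weights w and all operation counts below are illustrative guesses,
    not measured values from the paper."""
    d_star = w[0] * add_sub + w[1] * mults + w[2] * divs
    return math.log(r) / d_star

# hypothetical comparison: an order-10 scheme (6k+4 at k = 1) versus an
# order-6 scheme that needs more arithmetic per iteration
e_new = efficiency(10, add_sub=120, mults=90, divs=30)
e_old = efficiency(6, add_sub=150, mults=140, divs=45)
ratio_pct = (e_new / e_old - 1.0) * 100.0    # percentage gain
```

A higher convergence order combined with a lower weighted operation count yields a positive percentage gain, which is the quantity tabulated when comparing the new schemes against existing ones.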
Table 2 and Figure 1 present the computing efficiency percentage of the newly developed methodology compared to the previous approach. The results indicate that the proposed method requires fewer basic mathematical operations per iteration and consumes less memory compared to other existing schemes.
Using the information from Table 2 and the corresponding stopping criterion
$$ \epsilon_i^{[\phi]} = \big| x_i^{[\phi+1]} - x_i^{[\phi]} \big| < 10^{-32}, $$
the effectiveness, stability, and consistency of the newly developed method, in relation to existing strategies for the engineering applications considered in this study, are analyzed in the following sections. Here, $\epsilon_i^{[\phi]}$ represents the absolute error, or the 2-norm measure of the error.
The eigenvalue problems selected for the numerical experiments, such as automotive suspension systems and aircraft wing models, were chosen for their real-world engineering importance and their inherent mathematical complexity. They give rise to higher-degree characteristic polynomials, making them well suited for validating numerical eigenvalue techniques. In automotive systems, eigenvalues directly determine the natural frequencies and damping ratios, which in turn affect ride stability and comfort. In aircraft structures, eigenvalue-based flutter analysis predicts critical speeds and the structural response to aerodynamic loads. The selection criteria are practical applicability, computational complexity, and generalizability to other advanced engineering systems. To ensure clarity and reproducibility, Algorithm 1 and the flowchart in Figure 2 present a step-by-step procedure for approximating all eigenvalues simultaneously.
Algorithm 1: Fractional-order parallel scheme for finding all eigenvalues simultaneously.
1: Input: nonlinear function $f(x)$, initial approximations $\{x_i^{[\phi]}\}_{i=1}^{n}$, fractional order $\kappa$, tolerance $\epsilon$
2: Output: approximated roots $\{x_i^{[\phi+1]}\}_{i=1}^{n}$
3: while not converged do
4:  for $i = 1$ to $n$ do
5:   Compute $y_i^{[\phi]} = x_i^{[\phi]} - \dfrac{f\left(x_i^{[\phi]}\right)}{\prod_{j=1, j\neq i}^{n}\left(x_i^{[\phi]} - z_{tj}^{[\phi]}\right)}$ for $t = 1, 2, 3$
6:   for $j = 1$ to $n$ do
7:    Compute $\vartheta_j = \dfrac{f\left(y_j^{[\phi]}\right)}{f\left(x_j^{[\phi]}\right)}$
8:    Compute the weight function $\mu(\vartheta_j)$ depending on the chosen scheme:
  • Scheme $\mathrm{FSB}^{[\vartheta_1]}$: $\mu(\vartheta_j) = \dfrac{2\vartheta_j}{1 + \alpha\vartheta_j^{2}}$
  • Scheme $\mathrm{FSB}^{[\vartheta_2]}$: $\mu(\vartheta_j) = \dfrac{2\vartheta_j}{1 + \vartheta_j^{2}}$
  • Scheme $\mathrm{FSB}^{[\vartheta_3]}$: $\mu(\vartheta_j) = \vartheta_j + \dfrac{\vartheta_j}{1 + \alpha\vartheta_j^{2}}$
9:    Compute the fractional correction:
$z_{tj}^{[\phi]} = y_j^{[\phi]} - \left(\Gamma(\kappa + 1)\,\dfrac{f\left(y_j^{[\phi]}\right)}{D_{\kappa_1}^{\kappa}f\left(x_j^{[\phi]}\right)}\left(1 + \mu\left(\vartheta_j\right)\right)\right)^{1/\kappa}$
10:   end for
11:  end for
12:  for $i = 1$ to $n$ do
13:   Compute:
$x_i^{[\phi+1]} = y_i^{[\phi]} - \dfrac{5\prod_{j=1, j\neq i}^{n}\left(x_i^{[\phi]} - x_j^{[\phi]}\right) + 7\prod_{j=1, j\neq i}^{n}\left(y_i^{[\phi]} - y_j^{[\phi]}\right)}{3\prod_{j=1, j\neq i}^{n}\left(x_i^{[\phi]} - x_j^{[\phi]}\right) + \prod_{j=1, j\neq i}^{n}\left(y_i^{[\phi]} - y_j^{[\phi]}\right)}\cdot\dfrac{f\left(y_i^{[\phi]}\right)}{\prod_{j=1, j\neq i}^{n}\left(y_i^{[\phi]} - y_j^{[\phi]}\right)}\left(4 - \prod_{j=1, j\neq i}^{n}\dfrac{y_i^{[\phi]} - y_j^{[\phi]}}{x_i^{[\phi]} - x_j^{[\phi]}}\right)$
14:  end for
15:  Check convergence: if $\left\|x_i^{[\phi+1]} - x_i^{[\phi]}\right\| < 10^{-32}$, stop
16:  Update: $x_i^{[\phi]} \leftarrow x_i^{[\phi+1]}$ for all $i$
17: end while
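For intuition, the parallel loop structure of Algorithm 1 can be sketched in its classical limit ($\kappa = 1$) as a Weierstrass-type simultaneous iteration; the fractional correction and the weight function are omitted in this simplified sketch, and all function and variable names are ours.

```python
import numpy as np

def simultaneous_roots(coeffs, x0, tol=1e-12, max_it=500):
    """Weierstrass (Durand-Kerner) simultaneous iteration: every approximation
    is corrected in parallel using the product of mutual distances, mirroring
    the loop structure of Algorithm 1 in its classical (kappa = 1) limit."""
    x = np.array(x0, dtype=complex)
    n = len(x)
    an = coeffs[0]                              # leading coefficient
    for _ in range(max_it):
        w = np.empty_like(x)
        for i in range(n):
            prod = an * np.prod([x[i] - x[j] for j in range(n) if j != i])
            w[i] = np.polyval(coeffs, x[i]) / prod
        x_new = x - w
        if np.max(np.abs(x_new - x)) < tol:     # stopping criterion, as above
            return x_new
        x = x_new
    return x

# Cubic with exact roots 1, 2, 3
roots = simultaneous_roots([1, -6, 11, -6], [0.4 + 0.9j, 2.2 - 0.5j, 3.5 + 0.3j])
print(np.sort_complex(roots))
```

All approximations are updated from the same previous iterate, so the inner loop can be distributed across processors with one synchronization per sweep.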

4.1. Mechanical Engineering Model for Automotive Suspension [33]

To solve a mechanical system with multiple degrees of freedom (MDOF), consider the example of an automotive suspension system. In such systems, the governing equations of motion can be formulated and analyzed using eigenvalue techniques. The objective is to derive the governing equations, obtain the characteristic polynomial, and determine the eigenvalues of a polynomial of degree greater than six.
Consider a simplified model of an automotive suspension system, as illustrated in Figure 3a,b [34]. The system consists of a car body (m) mounted on four springs (stiffness k) and dampers (damping coefficient c). While the analysis will be extended to a higher-degree-of-freedom (DOF) system, the initial representation can be approximated as a two-degree-of-freedom (2DOF) system.

4.1.1. Model Assumptions

  • The system is linear.
  • The suspension consists of multiple springs and dampers, each representing a distinct suspension component.

4.1.2. Governing Equations

The equations of motion for a system with multiple degrees of freedom can be expressed in matrix form as follows:
M d 2 x d t 2 + C d x d t + Kx = F ,
where the following is the case:
  • x is the displacement vector of the masses.
  • M is the mass matrix.
  • C is the damping matrix.
  • K is the stiffness matrix.
  • F is the external force vector.
For a system with n degrees of freedom, the mass ( M ), damping ( C ), and stiffness ( K ) matrices take the following forms:
$\mathbf{M} = \begin{pmatrix} m_1 & 0 & 0\\ 0 & m_2 & 0\\ 0 & 0 & m_3 \end{pmatrix}, \quad \mathbf{C} = \begin{pmatrix} c_1 & c_{12} & 0\\ c_{21} & c_2 & c_{23}\\ 0 & c_{32} & c_3 \end{pmatrix}, \quad \mathbf{K} = \begin{pmatrix} k_1 & k_{12} & 0\\ k_{21} & k_2 & k_{23}\\ 0 & k_{32} & k_3 \end{pmatrix}.$
Assume a harmonic solution of the form
$\mathbf{x}(t) = \bar{\mathbf{x}}\exp(i\lambda t).$
Substituting Equation (73) into Equation (72), we obtain
$-\mathbf{M}\lambda^{2}\bar{\mathbf{x}} + i\mathbf{C}\lambda\bar{\mathbf{x}} + \mathbf{K}\bar{\mathbf{x}} = \mathbf{F}.$
The eigenvalues of the system are obtained by setting F = 0 , leading to the following characteristic equation:
$\det\left(-\mathbf{M}\lambda^{2} + i\mathbf{C}\lambda + \mathbf{K}\right) = 0.$
For a system with n degrees of freedom, the characteristic polynomial of degree n is given by
$P(\lambda) = a_{n}\lambda^{n} + a_{n-1}\lambda^{n-1} + \cdots + a_{0}.$
The above model can be solved for specific cases, such as n = 2 and n = 4 degrees of freedom.
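The reduction above turns the eigenvalue computation into a polynomial root-finding problem. For a 2×2 system, the determinant $\det(-\lambda^{2}\mathbf{M} + i\lambda\mathbf{C} + \mathbf{K})$ can be expanded with elementary polynomial arithmetic, as the following sketch shows; the matrices used here are illustrative, not the paper's.

```python
import numpy as np

def quad_entry(m, c, k):
    """Coefficients (highest power first) of the entry -m*l^2 + i*c*l + k."""
    return np.array([-m, 1j * c, k], dtype=complex)

def char_poly_2x2(M, C, K):
    """Characteristic polynomial det(-l^2 M + i l C + K) of a 2x2 system,
    expanded with polynomial arithmetic (np.polymul / np.polysub)."""
    p = [[quad_entry(M[a][b], C[a][b], K[a][b]) for b in range(2)]
         for a in range(2)]
    return np.polysub(np.polymul(p[0][0], p[1][1]),
                      np.polymul(p[0][1], p[1][0]))

# Illustrative undamped 2DOF data (hypothetical values):
M = [[2, 0], [0, 1]]
C = [[0, 0], [0, 0]]
K = [[3, -1], [-1, 1]]
p = char_poly_2x2(M, C, K)
print(p.real)   # -> [ 2.  0. -5.  0.  2.]
```

For this data the quartic $2\lambda^{4} - 5\lambda^{2} + 2$ is recovered, whose roots $\lambda^{2} = 2$ and $\lambda^{2} = 1/2$ agree with the eigenvalues of $\mathbf{M}^{-1}\mathbf{K}$.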

4.1.3. Case 1: Two-Degree-of-Freedom (2DOF) System

A two-degree-of-freedom system is represented as
x = x 1 x 2 ,
where the mass, damping, and stiffness matrices for specific parameter values are given as follows:
$\mathbf{M} = \begin{pmatrix} m_1 & 0\\ 0 & m_2 \end{pmatrix} = \begin{pmatrix} 2000 & 0\\ 0 & 500 \end{pmatrix},$
$\mathbf{C} = \begin{pmatrix} c_1 + c_2 & -c_2\\ -c_2 & c_2 \end{pmatrix} = \begin{pmatrix} 4000 + 3000 & -3000\\ -3000 & 3000 \end{pmatrix} = \begin{pmatrix} 7000 & -3000\\ -3000 & 3000 \end{pmatrix},$
$\mathbf{K} = \begin{pmatrix} k_1 + k_2 & -k_2\\ -k_2 & k_2 \end{pmatrix} = \begin{pmatrix} 80000 + 60000 & -60000\\ -60000 & 60000 \end{pmatrix} = \begin{pmatrix} 140000 & -60000\\ -60000 & 60000 \end{pmatrix}.$
Substituting Equations (78)–(82) into Equation (75), we obtain
$\det\left(-\lambda^{2}\begin{pmatrix} 2000 & 0\\ 0 & 500 \end{pmatrix} + i\lambda\begin{pmatrix} 7000 & -3000\\ -3000 & 3000 \end{pmatrix} + \begin{pmatrix} 140000 & -60000\\ -60000 & 60000 \end{pmatrix}\right) = 0.$
Thus, the determinant simplifies to
$\det\begin{pmatrix} 140000 - 2000\lambda^{2} + 7000i\lambda & -60000 - 3000i\lambda\\ -60000 - 3000i\lambda & 60000 - 500\lambda^{2} + 3000i\lambda \end{pmatrix} = 0.$
The following polynomial is derived after performing determinant calculations and simplifications:
$f(\lambda) = 10\lambda^{4} - 95i\lambda^{3} - 880\lambda^{2} + 810i\lambda - 3180.$
The exact eigenvalues of Equation (83), accurate up to ten decimal places, are given as
$\zeta_{1,2} = \pm 9.46598080270024 + 3.3068620634614i, \quad \zeta_{3} = -4.363049172i, \quad \zeta_{4} = 7.249325046i.$
The initial eigenvalue approximations, close to the exact solutions, are given by
$\lambda_{1,2}^{[0]} = \pm 10 + 4i, \quad \lambda_{3}^{[0]} = -5i, \quad \lambda_{4}^{[0]} = 7i.$
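Before running the iterative schemes, the quartic and its spectrum can be cross-checked with a reference root finder. The coefficient signs used below are an assumption, reconstructed under the convention $\det(-\lambda^{2}\mathbf{M} + i\lambda\mathbf{C} + \mathbf{K})$, since minus signs are easily lost in typesetting.

```python
import numpy as np

# 2DOF characteristic quartic; the signs of the coefficients are assumed here.
coeffs = np.array([10, -95j, -880, 810j, -3180])

roots = np.roots(coeffs)
residuals = np.abs(np.polyval(coeffs, roots))
print("roots:", np.round(roots, 6))
print("max residual:", residuals.max())
```

The residuals $|f(\zeta)|$ certify the reference roots, and the root sum equals $-a_{3}/a_{4} = 9.5i$, a quick Vieta consistency check on the coefficients.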
The sparsity pattern, symmetry, and conditioning are key structural characteristics of the 2DOF system matrix, as detailed in Table 3. Table 4 shows the parameter α and κ values in the proposed scheme for solving Equation (74), derived from the dynamical plane analysis, indicating a wider and faster convergence region.
The dynamical analysis shown in Figure 4, with fractals representing the basins of attraction, is also important in determining the method’s convergence behavior. In these basins of attraction, smoothness shows strong global stability, where the majority of the initial points converge to the exact roots with less sensitivity, whereas the fractal or complex boundary reveals the sensitivity to initial conditions. Small perturbations can produce irregular convergence behavior, and the scatter area exhibits chaotic or unstable behavior with slow convergence. This approach helps determine the most appropriate starting values, resulting in more efficient and stable convergence by identifying stable and unstable regions from the initial values. The fractal images clearly depict how initial assumptions can considerably alter the rate of convergence, providing useful information for improving the efficiency of the iterative process. The procedure aids not only in the actual selection of initial values but also in refining knowledge about the method’s behavior in various parts of the solution space. The fractal analysis of the classical and fractional schemes for solving (74) is presented in Table 5.
The enhanced convergence behavior of the classical and fractional parallel schemes is demonstrated through dynamical analysis using fractals, as shown in Table 5 and Figure 4. Fractals of the fractional parallel schemes are generated for κ ≤ 1. In terms of iterations and percentage convergence, the parallel schemes analyzed via fractals exhibit superior convergence behavior compared to existing approaches. Furthermore, fractal analysis plays a crucial role in selecting optimal initial values, allowing the simultaneous computation of all eigenvalues of the problem.
The numerical results of the proposed scheme for solving the eigenvalue problem (74), using starting values close to the exact eigenvalues, are presented in Table 6. This table clearly demonstrates that the residual error of the scheme is lower compared to the previous method. To evaluate the global convergence behavior of the parallel schemes, we use randomly generated initial vectors produced by MATLAB’s “rand()” function (see Appendix B Table A1). The corresponding numerical results are summarized in Table 7 and Table 8.
The numerical experiments show that the newly proposed technique consistently achieves superior accuracy, numerical stability, and faster performance, especially for large-scale, sparse, or ill-conditioned matrices. In contrast to previous methods, which may suffer from convergence failures or sensitivity when solving ill-conditioned systems, the novel technique preserves accuracy and stability. Overall, the results show that our method is a more stable and efficient solution to complex eigenvalue problems than the other methods considered.
Table 7 and Table 8 demonstrate that the newly developed fractional schemes outperform existing methods when using random initial guesses. This improvement is evident in computational time (C-Time), memory usage (Mem), maximum error (Max-Err), maximum iterations (Max-It), percentage divergence, and the total number of function and derivative evaluations across all iteration steps for solving Equation (74). The approximated roots are computed using 64-digit floating-point arithmetic, ensuring a high degree of precision in simulating the problem. Figure 5 illustrates the residual error for random initial values, highlighting the superior convergence of the proposed approach compared to existing methods.
Table 9 provides a detailed theoretical comparison between the newly proposed algorithm and several well-known analytical methods for solving eigenvalue problems, including the QR algorithm ( QR [ * ] ), the Divide-and-Conquer method ( DC [ * ] ), the Generalized Schur Decomposition ( QZ [ * ] ), and MATLAB’s built-in function eig() ( MT [ * ] ).

4.1.4. Analysis of Eigenvalues

  • Real eigenvalues: Real eigenvalues indicate stable oscillatory behavior at specific frequencies.
  • Complex eigenvalues: The presence of complex eigenvalues suggests damped oscillations, where disturbances gradually decay over time.

4.1.5. Physical Behavioral Interpretation of the Two-Degree-of-Freedom System

  • Stable modes: The natural frequencies determine the preferred oscillation modes of the suspension system when disturbed. These modes are crucial for ensuring stability in vehicle dynamics during operation.
  • Damping effects: The presence of complex eigenvalues signifies that disturbances will decay over time, leading to a gradual return to equilibrium. This effect is essential for ride comfort, preventing excessive oscillations caused by road irregularities.
  • High-frequency response: The highest natural frequency (10.0 Hz) may correspond to the first mode of vibration of the suspension system. This mode is particularly critical for maintaining control and stability at high speeds or during rapid maneuvers.

4.1.6. Case 2: Four-Degree-of-Freedom (4DOF) System

The automotive suspension system is modeled as a four-degree-of-freedom (4DOF) system, representing a more complex vehicle suspension, such as a multi-link suspension. The system consists of two masses representing the car body and two wheel assemblies.
System Parameters
The 4DOF system includes the following mass, damping, and stiffness parameters:
Masses
  • m 1 = 2000 kg (car body);
  • m 2 = 500 kg (front-left wheel assembly);
  • m 3 = 500 kg (front-right wheel assembly);
  • m 4 = 500 kg (rear wheel assembly).
Spring Constants
  • 80,000 N/m (spring between car body and front axle);
  • 60,000 N/m (spring between front axle and ground);
  • 60,000 N/m (spring between rear axle and ground).
Damping Coefficients
  • 4000 Ns/m (damper between car body and front axle);
  • 3000 Ns/m (damper between front axle and ground);
  • 3000 Ns/m (damper between rear axle and ground).
Governing Equations
The system’s dynamics are governed by the equation of motion:
M d 2 x d t 2 + C d x d t + Kx = F .
where x represents the displacement vector
x = x 1 x 2 x 3 x 4 .
The mass, damping, and stiffness matrices for the 4DOF system are given as
Mass Matrix
$\mathbf{M} = \begin{pmatrix} m_1 & 0 & 0 & 0\\ 0 & m_2 & 0 & 0\\ 0 & 0 & m_3 & 0\\ 0 & 0 & 0 & m_4 \end{pmatrix} = \begin{pmatrix} 2000 & 0 & 0 & 0\\ 0 & 500 & 0 & 0\\ 0 & 0 & 500 & 0\\ 0 & 0 & 0 & 500 \end{pmatrix}.$
Damping Matrix
$\mathbf{C} = \begin{pmatrix} c_1 + c_2 & -c_2 & 0 & 0\\ -c_2 & c_2 & 0 & 0\\ 0 & 0 & c_3 & -c_3\\ 0 & 0 & -c_3 & c_3 \end{pmatrix} = \begin{pmatrix} 4000 + 3000 & -3000 & 0 & 0\\ -3000 & 3000 & 0 & 0\\ 0 & 0 & 3000 & -3000\\ 0 & 0 & -3000 & 3000 \end{pmatrix} = \begin{pmatrix} 7000 & -3000 & 0 & 0\\ -3000 & 3000 & 0 & 0\\ 0 & 0 & 3000 & -3000\\ 0 & 0 & -3000 & 3000 \end{pmatrix}.$
Stiffness Matrix
$\mathbf{K} = \begin{pmatrix} k_1 + k_2 & -k_2 & 0 & 0\\ -k_2 & k_2 & 0 & 0\\ 0 & 0 & k_3 & 0\\ 0 & 0 & 0 & k_3 \end{pmatrix} = \begin{pmatrix} 80000 + 60000 & -60000 & 0 & 0\\ -60000 & 60000 & 0 & 0\\ 0 & 0 & 60000 & 0\\ 0 & 0 & 0 & 60000 \end{pmatrix} = \begin{pmatrix} 140000 & -60000 & 0 & 0\\ -60000 & 60000 & 0 & 0\\ 0 & 0 & 60000 & 0\\ 0 & 0 & 0 & 60000 \end{pmatrix}.$
To determine the eigenvalues for $\mathbf{F} = \mathbf{0}$, we compute the determinant
$\det\left(-\mathbf{M}\lambda^{2} + i\mathbf{C}\lambda + \mathbf{K}\right) = 0.$
Expanding the determinant, we obtain
$\det\begin{pmatrix} C_1^{[\ast]} & -60000 - 3000i\lambda & 0 & 0\\ -60000 - 3000i\lambda & C_2^{[\ast]} & 0 & 0\\ 0 & 0 & C_2^{[\ast]} & -3000i\lambda\\ 0 & 0 & -3000i\lambda & C_2^{[\ast]} \end{pmatrix} = 0,$
where
$C_1^{[\ast]} = 140000 + 7000i\lambda - 2000\lambda^{2}, \qquad C_2^{[\ast]} = 60000 + 3000i\lambda - 500\lambda^{2}.$
The characteristic polynomial is obtained after determinant calculations and simplifications:
$f(\lambda) = 2.5\times10^{11}\lambda^{8} - 5.375\times10^{12}i\lambda^{7} - 1.39\times10^{14}\lambda^{6} + 1.656\times10^{15}i\lambda^{5} + 2.178\times10^{16}\lambda^{4} - 1.4952\times10^{17}i\lambda^{3} - 1.188\times10^{18}\lambda^{2} + 3.456\times10^{18}i\lambda + 1.728\times10^{19}.$
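Because the determinant decouples into two 2×2 blocks, the degree-8 expansion can be reproduced from $C_1^{[\ast]}$ and $C_2^{[\ast]}$ alone; the sketch below does exactly that, with the block entries written as polynomial coefficient arrays.

```python
import numpy as np

# Block entries of the 4DOF determinant (C1*, C2* as defined above):
C1 = np.array([-2000, 7000j, 140000], dtype=complex)   # -2000 l^2 + 7000 i l + 140000
C2 = np.array([-500, 3000j, 60000], dtype=complex)     # -500 l^2 + 3000 i l + 60000
off1 = np.array([-3000j, -60000], dtype=complex)       # -60000 - 3000 i l
off2 = np.array([-3000j, 0], dtype=complex)            # -3000 i l

det_b1 = np.polysub(np.polymul(C1, C2), np.polymul(off1, off1))
det_b2 = np.polysub(np.polymul(C2, C2), np.polymul(off2, off2))
f = np.polymul(det_b1, det_b2)    # degree-8 characteristic polynomial
print(f)
```

The expansion reproduces the leading coefficient $2.5\times10^{11}$ and the constant term $1.728\times10^{19}$, confirming the block structure of the determinant.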
The exact eigenvalues of Equation (92), accurate to ten decimal places, are as follows:
$\zeta_{1,2} = \pm 12.0 + 4.0i, \quad \zeta_{3} = 10.95445115, \quad \zeta_{4,5} = \pm 9.16515139 + 6i, \quad \zeta_{6,7} = \pm 5.425633604 + 0.75i, \quad \zeta_{8} = -10.95445115.$
The sparsity pattern, symmetry, and conditioning are key structural characteristics of the 4DOF system matrix, as detailed in Table 10.
The initial approximations of eigenvalues, close to the exact solutions, are
$\lambda_{1,2}^{[0]} = \pm 10 + 4i, \quad \lambda_{3}^{[0]} = 11, \quad \lambda_{4,5}^{[0]} = \pm 10 + 5i, \quad \lambda_{6,7}^{[0]} = \pm 6.0 + 0.8i, \quad \lambda_{8}^{[0]} = -10.$
Dynamical analysis through fractals, as presented in Table 11 and Figure 6, illustrates the improved convergence behavior of both fractional and classical parallel schemes. The fractals of the fractional parallel schemes are generated for κ ≤ 1. In terms of iteration count and percentage convergence, the fractal analysis demonstrates that the parallel schemes exhibit superior convergence behavior compared to existing approaches. Moreover, fractal analysis aids in selecting optimal initial values, enabling the simultaneous determination of all eigenvalues in the problem.
The numerical results of the proposed scheme for solving the eigenvalue problem (92) using initial values close to the exact eigenvalues are presented in Table 12. The data in Table 12 clearly demonstrate that the residual error of the proposed scheme is lower than that of previous methods. To evaluate the global convergence behavior of the parallel schemes, we generate multiple sets of initial vectors using MATLAB’s rand() function (see Appendix B Table A2). The corresponding numerical results, including convergence rates and performance comparisons, are provided in Table 13 and Table 14.
In terms of computational time (in seconds), memory usage, maximum error (see Figure 7), maximum iterations, percentage divergence, and the total number of function and derivative evaluations across all iteration steps for solving (92), Table 13 and Table 14 demonstrate that the newly developed fractional schemes outperform existing methods when using randomly generated initial values. The approximate roots are computed using 64-digit floating-point arithmetic, ensuring a high degree of precision for simulating the problem.
A detailed theoretical comparison between the newly proposed algorithm and several widely used analytical techniques for solving eigenvalue problems—including the QR algorithm, the Divide-and-Conquer strategy, the Generalized Schur Decomposition, and MATLAB’s built-in function eig() —is presented in Table 15.
Simulations demonstrate that the proposed technique consistently achieves higher accuracy, improved numerical stability, and faster performance than $\mathrm{QR}^{[\ast]}$, $\mathrm{DC}^{[\ast]}$, $\mathrm{QZ}^{[\ast]}$, and $\mathrm{MT}^{[\ast]}$, particularly for large-scale, sparse, or ill-conditioned matrices.

4.1.7. Analysis of Eigenvalues for a Four-Degree-of-Freedom Mechanical System

  • Stable modes: Real eigenvalues indicate stable modes of oscillation. The frequencies determine how the system naturally oscillates when subjected to disturbances.
  • Damped oscillations: Complex eigenvalues signify oscillations that decay over time, representing the damping characteristics of the suspension system. This is crucial for maintaining comfort and ride quality in vehicles.

4.1.8. Physical Interpretation of Eigenvalues in a Four-Degree-of-Freedom System

  • Stable modes: The real eigenvalues, of magnitude approximately 10.95, correspond to natural frequencies at which the system exhibits stable oscillations. These modes are critical for ensuring vehicle stability under various loading conditions, particularly when driving over uneven surfaces.
  • Damping effects: The presence of complex eigenvalues indicates that the suspension system gradually dissipates energy in response to disturbances, ensuring a smooth return to equilibrium. This characteristic is essential for preventing excessive oscillations that could compromise passenger comfort and vehicle handling.
  • Frequency range: The identified natural frequencies define the range of vibrations the suspension system will experience. High-frequency responses (e.g., 12.0 Hz) correspond to rapid load changes, such as those encountered during cornering or sudden braking.

4.2. Aeroelasticity and Flutter Analysis: Detailed Model and Solution Behavior [35]

Aeroelasticity describes the interaction between aerodynamic forces and structural dynamics, particularly in flexible structures such as aircraft wings, bridges, wind turbine blades, and high-rise buildings. One of the key phenomena studied in aeroelasticity is flutter, a potentially catastrophic oscillatory instability that arises when aerodynamic forces excite the structural modes of a system.
In flutter analysis, the objective is to determine the conditions—such as airspeed or structural stiffness—under which dynamic instability occurs. This is accomplished by solving a coupled system of differential equations that govern the interaction between aerodynamic loads and structural motion. The solution of this system typically reduces to an eigenvalue problem, where the behavior of the eigenvalues indicates whether the system remains stable or becomes unstable.

4.2.1. Flutter Model Overview

Flutter can be modeled using coupled differential equations that describe the interaction between structural and aerodynamic forces acting on a flexible body, such as an aircraft wing. For instance, an airfoil experiencing both bending and torsional motion can be analyzed by formulating equations of motion for structural dynamics and aerodynamic forces governed by potential flow theory.

4.2.2. Equations of Motion

The general form of the structural equations governing the bending and torsional motion of an airfoil is given by
$m\ddot{h}(t) + c_{h}\dot{h}(t) + k_{h}h(t) = L(t), \qquad I_{\alpha}\ddot{\alpha}(t) + c_{\alpha}\dot{\alpha}(t) + k_{\alpha}\alpha(t) = M(t),$
where the following is the case:
  • h ( t ) —vertical displacement of the airfoil (bending);
  • α ( t ) —angular displacement (torsion);
  • m and I α —mass and moment of inertia of the structure;
  • c h and c α —damping coefficients;
  • k h and k α —stiffness coefficients;
  • L ( t ) —aerodynamic lift force;
  • M ( t ) —aerodynamic pitching moment.

4.2.3. Aerodynamic Forces

The aerodynamic forces, L ( t ) and M ( t ) , depend on the airspeed and the structural motion. For small disturbances, they can be approximated using linearized potential flow theory, leading to the following expressions:
$L(t) = \rho U^{2}S\,\alpha(t); \qquad M(t) = \rho U^{2}S\,h(t),$
where ρ is the air density, and U is the free-stream airspeed. The aerodynamic forces are coupled to the structural equations, resulting in an interaction between the two modes: bending and torsion.
By selecting specific parameter values, we define the following:
  • m = 50 kg—mass per unit length of the wing;
  • I α = 10 kg·m2—moment of inertia of the wing;
  • c h = 0.1 Ns/m—damping coefficient for bending;
  • c α = 0.05 Ns·m/rad—damping coefficient for torsion;
  • k h = 2000 N/m—bending stiffness;
  • k α = 1000 Nm/rad—torsional stiffness.
The aerodynamic forces acting on the wing are modeled as linearized functions of the displacements and are computed for the following conditions:
  • ρ = 1.225 kg/m3—air density;
  • S = 10 m2—wing surface area.
Using the parameter values in Equation (95), we obtain the following system of coupled differential equations:
$50\ddot{h}(t) + 0.1\dot{h}(t) + 2000h(t) = 1.225\times10\,U^{2}\alpha(t), \qquad 10\ddot{\alpha}(t) + 0.05\dot{\alpha}(t) + 1000\alpha(t) = 1.225\times10\,U^{2}h(t).$
These equations describe the bending displacement h ( t ) and torsional rotation α ( t ) . The aerodynamic coupling terms, U 2 α ( t ) and U 2 h ( t ) , establish the interaction between the bending and torsional motions. This coupling highlights the crucial role of the airspeed U in determining the stability of the system.

4.2.4. Coupled System and Eigenvalue Problem

The governing equation of motion for the aeroelastic system is given by
$\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}(U)\mathbf{x}(t) = \mathbf{0},$
where the state vector is defined as
x ( t ) = h ( t ) α ( t ) .
The mass matrix M , damping matrix C , and stiffness matrix K ( U ) are given by
$\mathbf{M} = \begin{pmatrix} 50 & 0\\ 0 & 10 \end{pmatrix}, \qquad \mathbf{C} = \begin{pmatrix} 0.1 & 0\\ 0 & 0.05 \end{pmatrix},$
$\mathbf{K}(U) = \begin{pmatrix} 2000 & -1.225\times10\,U^{2}\\ -1.225\times10\,U^{2} & 1000 \end{pmatrix}.$
To determine the flutter speed and assess the stability of the system, we assume a solution of the form
x ( t ) = ϕ exp ( i λ t ) ,
where λ represents the oscillation frequency. Substituting this assumption into Equation (98) leads to the following eigenvalue problem:
$\left(\mathbf{K}(U) - \lambda^{2}\mathbf{M} + i\lambda\mathbf{C}\right)\boldsymbol{\phi} = \mathbf{0}.$
The characteristic equation for the eigenvalues λ is obtained by computing the determinant:
$\det\left(\mathbf{K}(U) - \lambda^{2}\mathbf{M} + i\lambda\mathbf{C}\right) = 0.$
This equation governs the natural frequencies and damping characteristics of the system as a function of the airspeed U, providing insights into the system’s stability and the onset of flutter.
Next, we solve the system at a low airspeed, specifically $U = 50$ m/s. Substituting this value into Equation (103) yields
$\det\begin{pmatrix} 50\lambda^{2} + 0.1i\lambda + 2000 & -1225.0\\ -1225.0 & 10\lambda^{2} + 0.05i\lambda + 1000 \end{pmatrix} = 0.$
The corresponding characteristic polynomial is given by
$500\lambda^{4} + 3.5i\lambda^{3} + 69999.995\lambda^{2} + 200i\lambda + 499375 = 0.$
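This quartic can be reproduced directly from the determinant entries; note that the torsional damping $c_{\alpha} = 0.05$ contributes the $0.05i\lambda$ term in the second diagonal entry.

```python
import numpy as np

# Determinant entries at U = 50 m/s:
d1 = np.array([50, 0.1j, 2000], dtype=complex)    # 50 l^2 + 0.1 i l + 2000
d2 = np.array([10, 0.05j, 1000], dtype=complex)   # 10 l^2 + 0.05 i l + 1000
off = np.array([1225.0], dtype=complex)           # aerodynamic coupling (squared)

p = np.polysub(np.polymul(d1, d2), np.polymul(off, off))
roots = np.roots(p)
print("characteristic polynomial:", p)
print("roots:", np.round(roots, 6))
```

All four roots come out (numerically) purely imaginary, with magnitudes of about 11.51 and 2.75, in line with the exact eigenvalues reported below.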
The sparsity pattern, symmetry, and conditioning are key structural characteristics of the 2DOF system matrix, as detailed in Table 16. Table 17 shows the parameter α and κ values in the proposed scheme for solving Equation (105), derived from the dynamical plane analysis, indicating a wider and faster convergence region.
The exact eigenvalues, computed up to ten decimal places, are
ζ 1 , 2 = ± 11.5112367 i ,   ζ 3 , 4 = ± 2.747298478 i .
The initial eigenvalues, chosen close to the exact roots, are
λ 1 [ 0 ] = 10.5 i ,   λ 2 [ 0 ] = 10.5 i ,   λ 3 [ 0 ] = 3.1 i ,   λ 4 [ 0 ] = 3.1 i .
The dynamical analysis and fractal plots, as presented in Table 18 and Figure 8, illustrate the improved convergence behavior of both fractional and classical parallel schemes. The fractal patterns of the fractional parallel schemes are generated for κ ≤ 1, highlighting their enhanced stability and efficiency. In terms of iteration count and percentage convergence, the fractal analysis demonstrates that the parallel schemes outperform existing methods, providing superior convergence behavior. Additionally, fractal analysis plays a crucial role in selecting optimal initial values, facilitating the simultaneous computation of all eigenvalues in the system.
The numerical results of the proposed scheme for solving the eigenvalue problem (105) using initial values close to the exact eigenvalues are presented in Table 19. As illustrated in Table 19 and Figure 9, the scheme exhibits a lower residual error compared to previous methods. MATLAB's rand() function is used to generate a set of random initial vectors (see Appendix B Table A3), which are then used to compute the rate of convergence and assess the global convergence behavior of the parallel schemes. The corresponding numerical results are summarized in Table 20 and Table 21.
The newly developed fractional schemes demonstrate superior performance compared to existing methods when using random initial guesses, as shown in Table 20 and Table 21. This improvement is evident when evaluating key computational metrics such as execution time (in seconds), memory usage, maximum error, maximum iterations, percentage divergence, and the number of function and derivative evaluations across all iteration steps required to solve (105). The approximated roots are computed using 64-digit floating-point arithmetic, ensuring a high degree of precision in simulating the problem.
The theoretical comparison between the newly developed parallel approach and several analytical techniques—including the QR algorithm, the Divide-and-Conquer strategy, the Generalized Schur Decomposition, and MATLAB’s eig() function—is presented in Table 22.
Our proposed approach outperforms conventional techniques in terms of accuracy, stability, and the handling of sparse or ill-conditioned matrices, as demonstrated by the numerical results. These findings highlight the effectiveness and reliability of the newly developed strategy for solving difficult eigenvalue problems.

4.2.5. Solution Behavior and Physical Interpretation

  • Stable regime (below flutter speed): For U < 150 m / s , the system remains stable. The real parts of all eigenvalues are negative, indicating that oscillations decay over time due to damping. The system naturally returns to equilibrium after disturbances, ensuring structural stability.
  • Flutter point (at critical speed): At U = 150 m / s , the system reaches a marginally stable state. The real part of one pair of eigenvalues approaches zero, meaning that the damping effect vanishes. Oscillations neither grow nor decay, marking the flutter onset speed. Beyond this point, the wing loses its ability to return to a stable state once disturbed.
  • Unstable regime (above flutter speed): For U > 150 m / s , the real part of at least one pair of eigenvalues becomes positive, leading to exponentially growing oscillations. This marks the onset of the flutter regime, where both bending and torsional oscillations increase uncontrollably, potentially causing structural failure unless corrective action, such as reducing airspeed, is taken.
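The three regimes above can be explored with a standard first-order (state-space) reformulation of Equation (98); the symmetric coupling $-\rho S U^{2}$ in $\mathbf{K}(U)$ is an assumption based on Equation (99), so the critical speed of this simplified sketch need not reproduce the 150 m/s reported for the full model.

```python
import numpy as np

def max_growth_rate(U, m=50.0, Ia=10.0, ch=0.1, ca=0.05,
                    kh=2000.0, ka=1000.0, rho=1.225, S=10.0):
    """Largest real part of the state-space eigenvalues s (x' = A x) of the
    2DOF aeroelastic model; a positive value signals the onset of instability.
    The off-diagonal coupling -rho*S*U^2 in K(U) is an assumption."""
    M = np.diag([m, Ia])
    C = np.diag([ch, ca])
    q = rho * S * U**2
    K = np.array([[kh, -q], [-q, ka]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    return np.linalg.eigvals(A).real.max()

for U in (0.0, 200.0):
    print(U, max_growth_rate(U))
```

At $U = 0$ all eigenvalues have negative real parts (damped, stable oscillations), while at a sufficiently large airspeed a positive real part appears, marking the loss of stability described above.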

4.3. Limitations of Fractional-Order Parallel Schemes for Eigenvalue Problems

The proposed fractional-order parallel techniques offer a strong framework for tackling complex eigenvalue problems while improving convergence behavior. By integrating fractional dynamics and parallel iteration, these methods estimate all eigenvalues with high accuracy. Their efficiency is particularly apparent in high-degree and complex eigenvalue problems, where existing schemes may perform poorly. The proposed fractional-order parallel scheme nevertheless has some limitations:
  • The techniques can perform worse on matrices with clustered or closely spaced eigenvalues, resulting in slower convergence or instability.
  • Optimal convergence and higher accuracy require reasonable initial approximations, which are difficult to produce for nonsymmetric or ill-conditioned matrices.
  • The inclusion of fractional-order terms increases computing time, especially for sparse or nontrivially large matrix problems.
  • Optimal initialization using fractal analysis is problem-specific and does not provide generic automation.
  • Implementation and visualization may require significant computational power and expertise.
To improve the proposed fractional-order parallel algorithms, future studies should focus on memory-reducing fractional operators, GPU/multicore parallelization for large-scale problems, and adaptive parameter tuning using machine learning or optimization techniques. Effective initialization strategies, as well as hybrid methods using error estimators, will improve accuracy and reliability. In addition, theoretical extensions to nonlinear systems and fractional PDEs will broaden the technique's applicability and scope.

5. Conclusions

In this work, we developed novel fractional-order numerical methods for computing single eigenvalues and extended them into parallel schemes capable of determining all eigenvalues simultaneously. The proposed PFSB [ ϑ 1 ] PFSB [ ϑ 3 ] schemes achieve a theoretical convergence order of 6 κ + 4 , demonstrating their efficiency and accuracy in solving eigenvalue problems. Numerical experiments confirm that the proposed fractional schemes outperform existing methods, including PMM [ * ] , NNM [ * ] , and NFM [ * ] , particularly when initialized with accurate eigenvalue estimates. The computational efficiency of these fractional parallel schemes is evident in their reduced number of arithmetic operations, making them more suitable for large-scale applications. Dynamical analysis highlights that the percentage of convergence zones in fractal structures is significantly improved, effectively identifying optimal initial eigenvalue estimates for parallel computations. Furthermore, the fractional methods demonstrate strong global convergence properties, exhibiting robustness even for random initial values. In terms of computational cost and accuracy, the fractional methods outperform existing numerical solvers across multiple metrics, including memory usage, residual error, function and derivative evaluations, iteration count, and convergence–divergence behavior. These findings establish fractional-order parallel schemes as highly efficient and reliable alternatives for solving eigenvalue problems in scientific computing, engineering, and applied mathematics.
Future research may focus on optimizing parameter selection, extending the approach to generalized eigenvalue problems, and investigating real-world applications such as quantum mechanics, stability analysis, and computational fluid dynamics. The strong numerical performance of fractional-order methods suggests significant potential for further advancements in large-scale numerical computations.

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

Bruno Carpentieri’s work is supported by the European Regional Development and Cohesion Funds (ERDF) 2021–2027 under Project AI4AM—EFRE1052. He is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under the Progetti di Ricerca 2024 program.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Notations

CPU-time: computational time (in seconds)
Max-It: maximum iterations
Max-Err: maximum error
Per-Convergence: percentage convergence
$\mathrm{PMM}^{[\ast]}$, $\mathrm{NNM}^{[\ast]}$, $\mathrm{NFM}^{[\ast]}$: existing parallel methods
$\epsilon_{i}^{[\phi]}$: residual error
$\mathrm{PFSB}^{[\vartheta_1]}$–$\mathrm{PFSB}^{[\vartheta_3]}$: newly developed methods

Appendix A

Appendix A.1

Theorem A1.
Suppose that $D_{\kappa_1}^{\gamma\kappa}f(x) \in C(\kappa_1, \kappa_2]$ for $\gamma = 1, \ldots, n+1$, where $\kappa \in (0, 1]$. Then the Generalized Taylor Formula [36] is given by
$f(x) = \sum_{i=0}^{n}\frac{D_{\kappa_1}^{i\kappa}f(\kappa_1)}{\Gamma(i\kappa + 1)}\left(x - \kappa_1\right)^{i\kappa} + \frac{D_{\kappa_1}^{(n+1)\kappa}f(\xi)}{\Gamma\left((n+1)\kappa + 1\right)}\left(x - \kappa_1\right)^{(n+1)\kappa},$
where
$\kappa_1 \le \xi \le x, \quad x \in (\kappa_1, \kappa_2],$
and
$D_{\kappa_1}^{n\kappa} = \underbrace{D_{\kappa_1}^{\kappa}\cdot D_{\kappa_1}^{\kappa}\cdots D_{\kappa_1}^{\kappa}}_{n\ \text{times}}.$
  • Consider the fractional Taylor expansion of $f(x)$ around $\xi$ (with $f(\xi) = 0$, so the constant term vanishes):
    $$f(x) = \frac{D_{\xi}^{\kappa} f(\xi)}{\Gamma(\kappa + 1)}\,(x - \xi)^{\kappa} + \frac{D_{\xi}^{2\kappa} f(\xi)}{\Gamma(2\kappa + 1)}\,(x - \xi)^{2\kappa} + O\big((x - \xi)^{3\kappa}\big).$$
  • Factoring out $\frac{D_{\xi}^{\kappa} f(\xi)}{\Gamma(\kappa + 1)}$, we obtain
    $$f(x) = \frac{D_{\xi}^{\kappa} f(\xi)}{\Gamma(\kappa + 1)} \Big[ (x - \xi)^{\kappa} + b_2\,(x - \xi)^{2\kappa} + O\big((x - \xi)^{3\kappa}\big) \Big],$$
    where
    $$b_{\gamma} = \frac{\Gamma(\kappa + 1)}{\Gamma(\gamma\kappa + 1)}\,\frac{D_{\xi}^{\gamma\kappa} f(\xi)}{D_{\xi}^{\kappa} f(\xi)}, \qquad \gamma = 2, 3, \ldots$$
  • Around $\xi$, the Caputo-type derivative of $f(x)$ is given by
    $$D_{\xi}^{\kappa} f(x) = \frac{D_{\xi}^{\kappa} f(\xi)}{\Gamma(\kappa + 1)} \Big[ \Gamma(\kappa + 1) + \frac{\Gamma(2\kappa + 1)}{\Gamma(\kappa + 1)}\,b_2\,(x - \xi)^{\kappa} + O\big((x - \xi)^{2\kappa}\big) \Big].$$
  • These expansions are used to analyze the convergence of the proposed methods.
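As a quick sanity check of these expansions, the coefficients $b_{\gamma}$ can be evaluated in the classical limit $\kappa = 1$, where they reduce to the normalized Taylor coefficients $f^{(\gamma)}(\xi)/(\gamma!\, f'(\xi))$. The test function $f(x) = x^3 - 1$ with root $\xi = 1$ is an illustrative choice, not one taken from the paper:

```python
from math import gamma

# Classical limit kappa = 1 of the coefficients
#   b_g = Gamma(kappa+1)/Gamma(g*kappa+1) * D^{g*kappa}f(xi) / D^{kappa}f(xi).
# Illustrative test function (not from the paper): f(x) = x^3 - 1, root xi = 1.
kappa = 1.0
xi = 1.0
d1, d2, d3 = 3 * xi**2, 6 * xi, 6.0          # f'(xi), f''(xi), f'''(xi)

b2 = gamma(kappa + 1) / gamma(2 * kappa + 1) * d2 / d1   # = 1
b3 = gamma(kappa + 1) / gamma(3 * kappa + 1) * d3 / d1   # = 1/3

# Factored expansion: f(x) ~ D^k f(xi)/Gamma(k+1) * [ (x-xi)^k + b2 (x-xi)^{2k} + b3 (x-xi)^{3k} ];
# for a cubic and kappa = 1 this truncation is exact.
x = 1.1
approx = d1 / gamma(kappa + 1) * (
    (x - xi) ** kappa + b2 * (x - xi) ** (2 * kappa) + b3 * (x - xi) ** (3 * kappa)
)
exact = x**3 - 1
print(b2, b3, abs(approx - exact))
```

For a cubic polynomial the three-term expansion reproduces $f$ exactly, so the residual is at machine-precision level.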

Appendix A.2

The following coefficients of the error term e ϕ κ are used in the proof of Theorem 2.
τ [ * ] = 3 2 κ 2 Γ κ + 1 / 2 b 2 b 3 Γ κ 2 κ 2 π + 3 b 3 3 κ 3 3 b 2 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 b 3 3 κ 3 3 b 2 Γ κ 3 κ 3 π Γ κ + 1 3 Γ κ + 2 3 + τ 1 + . . . + τ 7 4 b 2 3 2 κ 6 Γ κ + 1 / 2 3 Γ κ 4 κ 4 π 3 / 2 9 2 κ 4 Γ κ + 1 / 2 2 b 2 3 Γ κ 3 κ 3 π + 4 b 2 3 2 κ 2 Γ κ + 1 / 2 Γ κ 2 κ 2 π
τ 1 = 2 κ 4 b 2 3 b 3 Γ κ 2 κ 2 π Γ κ + 1 2 2 + 6 b 2 5 2 κ 8 Γ κ + 1 / 2 4 κ 4 π 2 Γ κ 4 + 6 b 2 5 2 κ 4 Γ κ + 1 / 2 2 Γ κ 2 κ 2 π ,
τ 2 = 2 κ 4 b 2 3 b 3 Γ κ 2 κ 2 π Γ κ + 1 2 2 + 6 b 2 5 2 κ 8 Γ κ + 1 / 2 4 κ 4 π 2 Γ κ 4 + 6 b 2 5 2 κ 4 Γ κ + 1 / 2 2 Γ κ 2 κ 2 π ,
τ 3 = 12 b 2 5 2 κ 6 Γ κ + 1 / 2 3 Γ κ 3 κ 3 π 3 / 2 + 3 b 3 b 2 3 2 κ 6 Γ κ + 1 / 2 3 Γ κ 3 κ 3 π 3 / 2 ,
τ 4 = 6 b 3 2 κ 2 Γ κ + 1 / 2 b 2 3 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 Γ κ 3 κ 3 π 3 / 2 + 8 b 3 b 2 3 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 Γ κ 2 κ 2 π ,
τ 5 = 2 b 3 b 2 3 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 Γ κ κ π 2 κ 2 Γ κ + 1 / 2 2 b 2 b 3 2 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 Γ κ κ π 2 κ 2 Γ κ + 1 / 2 ,
τ 6 = 4 2 κ 2 Γ κ + 1 / 2 b 2 2 b 4 Γ κ κ π + 4 b 2 2 b 4 + 2 b 2 b 3 2 ,
τ 7 = + b 3 2 κ 2 b 2 3 Γ κ κ π Γ κ + 1 2 3 b 3 b 2 3 + 3 b 2 b 3 2 3 κ 6 2 Γ κ 2 κ 2 π 2 κ 4 Γ κ + 1 3 2 Γ κ + 2 3 2 Γ κ + 1 2 2 ,
σ 1 = b 2 Γ κ κ 2 κ 2 Γ κ + 1 / 2 b 2 κ 2 π Γ κ 2 ,
σ 2 = b 3 Γ κ κ 2 κ 2 Γ κ + 1 / 2 b 2 2 κ 2 π Γ κ 2 1 2 b 3 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 κ 2 π Γ κ 2 2 κ 2 Γ κ + 1 / 2 + 2 κ 4 Γ κ + 1 / 2 2 b 2 2 Γ κ 3 κ 3 π ,
σ 3 = b 3 + 2 κ 2 Γ κ + 1 / 2 b 2 2 Γ κ κ π + 1 2 b 3 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 Γ κ κ π 2 κ 2 Γ κ + 1 / 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 Γ κ 2 κ 2 π ,
σ 4 = 2 b 3 Γ κ 2 κ 2 π 3 / 2 2 κ 2 Γ κ + 1 / 2 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 Γ κ κ π b 3 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 Γ κ κ π + 2 2 κ 6 Γ κ + 1 / 2 3 b 2 2 π ,
σ 5 = b 2 2 Γ κ κ π 2 κ 2 Γ κ + 1 / 2 Λ { 1 } Γ κ 3 κ 3 π 2 2 κ 2 Γ κ + 1 / 2 ,
σ 6 = b 2 3 Γ κ κ π 2 κ 2 Γ κ + 1 / 2 2 Γ κ 2 κ 2 π ,
σ 7 = 2 b 3 Γ κ 2 κ 2 π 3 / 2 2 κ 2 Γ κ + 1 / 2 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 Γ κ κ π b 3 3 κ 3 3 Γ κ + 1 / 3 Γ κ + 2 / 3 Γ κ κ π + 2 2 κ 6 Γ κ + 1 / 2 3 b 2 2 π ,
σ 8 = Γ κ 2 κ 2 π 3 / 2 2 κ 2 Γ κ + 1 / 2 , a { 6 } = a { 4 } a { 5 } ,
σ 9 = b 2 Γ κ κ π 2 κ 2 Γ κ + 1 / 2 Γ κ κ π ,
σ 10 = b 2 + b 2 2 κ 2 Γ κ κ π Γ κ + 1 2 ,
σ 11 = b 3 + b 2 2 + b 3 3 κ 3 3 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 1 2 κ 4 b 2 2 Γ κ 2 κ 2 π Γ κ + 1 2 2 ,
σ 12 = 2 b 2 b 3 b 3 3 κ 3 3 b 2 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 1 2 b 2 3 2 κ 2 Γ κ + 1 / 2 Γ κ κ π + 2 b 2 3 2 κ 4 Γ κ + 1 / 2 2 Γ κ 2 κ 2 π b 2 2 κ 2 b 3 Γ κ κ π Γ κ + 1 2 ,
σ 13 = b 2 2 b 3 + b 2 b 4 b 2 2 b 3 3 κ 3 3 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 + b 3 2 b 3 2 3 κ 3 3 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 1 + b 2 2 b 3 3 κ 3 3 Γ κ 2 κ 2 π Γ κ + 1 3 Γ κ + 2 3 + 2 b 2 4 2 κ 4 Γ κ + 1 / 2 2 Γ κ 2 κ 2 π 2 b 2 4 2 κ 6 Γ κ + 1 / 2 3 Γ κ 3 κ 3 π 3 / 2 b 2 2 2 κ 2 b 3 Γ κ κ π Γ κ + 1 2 + 2 κ 4 b 2 2 b 3 Γ κ 2 κ 2 π Γ κ + 1 2 2 b 2 2 κ 2 b 4 Γ κ κ π Γ κ + 1 2 .
σ 14 = b 2 Γ κ κ + b 2 2 κ 2 Γ κ 2 κ 2 π Γ κ + 1 2
σ 15 = b 3 3 κ 3 3 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 b 3 Γ κ κ + 2 2 κ 2 Γ κ + 1 / 2 b 2 2 Γ κ 2 κ 2 π 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 Γ κ 3 κ 3 π
σ 16 = b 2 2 κ 2 b 3 Γ κ 2 κ 2 π Γ κ + 1 2 + b 3 3 κ 3 3 b 2 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 + b 2 3 Γ κ κ b 3 3 κ 3 3 b 2 Γ κ 3 κ 3 π Γ κ + 1 3 Γ κ + 2 3 + 2 b 2 3 2 κ 6 Γ κ + 1 / 2 3 Γ κ 4 κ 4 π 3 / 2 2 b 2 3 2 κ 2 Γ κ + 1 / 2 Γ κ 2 κ 2 π b 2 3 2 κ 4 Γ κ 3 κ 3 π Γ κ + 1 2 2
σ 17 = μ 0 b 2 2 κ 2 Γ κ κ π Γ κ + 1 2 μ 0 + σ 1 [ * ] σ 5 [ * ] b 2
σ 1 [ * ] = μ 0 b 3 3 κ 3 3 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 1 μ 0 2 κ 2 b 2 2 Γ κ κ π Γ κ + 1 2 + μ 0 2 κ 4 b 2 2 2 Γ κ 2 κ 2 π Γ κ + 1 2 2 μ 0 2 κ 4 b 2 2 Γ κ 2 κ 2 π Γ κ + 1 2 2 + μ 0 b 2 2 2 μ 0 b 3 + μ 0 b 2 2
σ 2 [ * ] = 2 μ 0 b 2 b 3 + μ 0 b 2 b 3 μ 0 b 3 3 κ 3 3 b 2 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 μ 0 b 3 3 κ 3 3 b 2 2 Γ κ κ π 2 κ 2 Γ κ + 2 3 Γ κ + 1 3 Γ κ + 1 2 μ 0 b 2 2 κ 2 b 3 Γ κ κ π Γ κ + 1 2 μ 0 b 2 2 κ 2 b 3 Γ κ κ π Γ κ + 1 2 + μ 0 b 3 3 κ 3 3 b 2 2 Γ κ 2 κ 2 π Γ κ + 1 3 Γ κ + 2 3 + μ 0 b 2 3 2 κ 2 Γ κ κ π Γ κ + 1 2 + μ 0 b 2 3 2 κ 4 Γ κ 2 κ 2 π Γ κ + 1 2 2 μ 0 2 κ 6 b 2 3 Γ κ 3 κ 3 π 3 2 Γ κ + 1 2 3 2 μ 0 b 2 3 2 κ 2 Γ κ + 1 / 2 Γ κ κ π + 2 μ 0 b 2 3 2 κ 4 Γ κ + 1 / 2 2 Γ κ 2 κ 2 π μ 0 b 2 3
σ 3 [ * ] = 2 κ 2 b 2 H 0 Γ κ 2 κ 2 π Γ κ + 1 2 + b 2 Γ κ κ + b 2 H 0 Γ κ κ b 2 + 2 κ 2 b 2 Γ κ κ π Γ κ + 1 2 2 κ 2 b 2 Γ κ 2 κ 2 π Γ κ + 1 2
σ 4 [ * ] = 2 2 κ 2 Γ κ + 1 / 2 b 2 2 H 0 Γ κ 2 κ 2 π + 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 H 0 Γ κ 3 κ 3 π + 2 DH 0 b 2 2 2 κ 2 Γ κ + 1 / 2 Γ κ 2 κ 2 π 2 κ 4 b 2 2 DH 0 Γ κ 3 κ 3 π Γ κ + 1 2 2 b 3 3 κ 3 3 H 0 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 1 + b 3 3 κ 3 3 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 1 b 3 3 κ 3 3 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 1 + b 3 Γ κ κ + b 3 H 0 Γ κ κ DH 0 b 2 2 Γ κ κ b 3 2 2 κ 2 Γ κ + 1 / 2 b 2 2 Γ κ 2 κ 2 π + 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 Γ κ 3 κ 3 π + 2 κ 2 b 2 2 Γ κ κ π Γ κ + 1 2 2 κ 4 b 2 2 Γ κ 2 κ 2 π Γ κ + 1 2 2 + σ 5 [ * ]
σ 5 [ * ] = σ 1 * + σ 2 * + σ 3 * + σ 4 * + σ 5 * + σ 6 * + σ 17
σ 1 * = 2 b 2 3 2 κ 6 Γ κ + 1 / 2 3 Γ κ 4 κ 4 π 3 / 2 + 2 κ 4 b 2 3 Γ κ 3 κ 3 π Γ κ + 1 2 2 + 2 b 2 3 2 κ 2 Γ κ + 1 / 2 Γ κ 2 κ 2 π + b 3 3 κ 3 3 μ 0 b 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2
σ 2 * = b 3 3 κ 3 3 b 2 H 0 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 + b 3 3 κ 3 3 b 2 Γ κ 3 κ 3 π Γ κ + 1 3 Γ κ + 2 3
σ 3 * = b 3 3 κ 3 3 b 2 2 Γ κ 2 κ 2 π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 b 3 3 κ 3 3 μ 0 b 2 Γ κ 3 κ 3 π Γ κ + 1 3 Γ κ + 2 3
σ 4 * = b 3 3 κ 3 3 b 2 H 0 Γ κ 3 κ 3 π Γ κ + 1 3 Γ κ + 2 3 + 2 κ 4 b 2 3 H 0 Γ κ 3 κ 3 π Γ κ + 1 2 2 2 b 2 3 2 κ 6 Γ κ + 1 / 2 3 H 0 Γ κ 4 κ 4 π 3 / 2
σ 5 * = 5 2 κ 4 Γ κ + 1 / 2 2 b 2 3 μ 0 Γ κ 3 κ 3 π + 3 b 2 3 2 κ 6 Γ κ + 1 / 2 3 μ 0 Γ κ 4 κ 4 π 3 / 2 + 3 μ 0 b 2 3 2 κ 4 2 Γ κ 3 κ 3 π Γ κ + 1 2 2 b 2 3 2 κ 6 μ 0 2 Γ κ 4 κ 4 π 3 / 2 Γ κ + 1 2 3
σ 6 * = 2 b 2 3 2 κ 2 Γ κ + 1 / 2 H 0 Γ κ 2 κ 2 π + b 2 3 2 κ 2 μ 0 Γ κ 2 κ 2 π Γ κ + 1 2 3 μ 0 b 2 3 2 κ 2 2 Γ κ 2 κ 2 π Γ κ + 1 2 2 μ 0 b 2 b 3 Γ κ κ b 2 3 H 0 Γ κ κ
σ 7 * = b 2 3 μ 0 Γ κ κ + μ 0 b 2 3 2 Γ κ κ + 2 μ 0 b 3 2 κ 2 Γ κ + 1 / 2 b 2 Γ κ 2 κ 2 π 2 κ 2 b 2 b 3 H 0 Γ κ 2 κ 2 π Γ κ + 1 2 2 κ 2 b 2 b 3 Γ κ 2 κ 2 π Γ κ + 1 2 b 2 3 Γ κ κ
σ 18 = μ ( 0 ) 2 κ 4 b 2 2 Γ κ + 1 2 2 Γ κ 2 π κ 2 + 2 2 α 2 Γ κ + 1 / 2 b 2 2 μ ( 0 ) Γ κ κ π 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 μ ( 0 ) Γ κ 2 κ 2 π 2 b 2 2 μ ( 0 ) 2 κ 2 Γ κ + 1 / 2 Γ κ κ π + b 3 3 κ 3 3 μ ( 0 ) 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 + b 3 3 κ 3 3 2 Γ κ κ π 2 κ 2 Γ κ + 1 3 Γ κ + 2 3 Γ κ + 1 2 b 3 b 3 μ ( 0 ) + b 2 2 μ ( 0 ) + 2 2 κ 2 Γ κ + 1 / 2 b 2 2 Γ κ κ π 2 2 κ 4 Γ κ + 1 / 2 2 b 2 2 κ 2 Γ κ 2 π
σ 21 = 9 2 α 4 Γ α + 1 / 2 2 c 2 3 Γ α 3 α 3 π + 4 c 2 3 2 α 6 Γ α + 1 / 2 3 Γ α 4 α 4 π 3 / 2 + 3 c 3 3 α 3 3 b 2 2 Γ α 2 α 2 π 2 α 2 + Γ α + 1 3 Γ α + 2 3 Γ α + 1 2 4 b 2 b 3 Γ α α b 3 3 α 3 3 b 2 Γ α 3 α 3 π Γ α + 1 3 Γ α + 2 3 + b 2 3 Γ α α + μ 0 0 b 2 3 2 Γ α α + 3 2 α 2 Γ α + 1 / 2 b 2 b 3 Γ α 2 α 2 π + 4 b 2 3 2 α 2 Γ α + 1 / 2 Γ α 2 α 2 π 3 μ 0 b 2 3 2 α 2 2 Γ α 2 α 2 π Γ α + 1 2 + 4 b 2 3 2 α 2 Γ α + 1 / 2 Γ α 2 α 2 π 3 μ 0 b 2 3 2 α 2 2 Γ α 2 α 2 π Γ α + 1 2 + 3 μ 0 b 2 3 2 α 4 2 Γ α 3 α 3 π Γ α + 1 2 2 b 2 3 2 α 6 μ 0 2 Γ α 4 α 4 π 3 / 2 Γ α + 1 2 3

Appendix B

Random initial starting vectors are used in Eigenvalue Problem 1 to demonstrate the global convergence of the numerical techniques; they are listed in Table A1.
Table A1. Random starting vectors in parallel schemes.
Ran-Test [ λ 1 [ 0 ] , λ 2 [ 0 ] , λ 3 [ 0 ] , λ 4 [ 0 ] ]
1 [ 0.2722 , 0.4191 , 0.1691 , 0.0497 ]
2 [ 0.0231 , 0.0498 , 0.0598 , 0.6004 ]
3 [ 0.0744 , 0.0663 , 0.0663 , 0.4153 ]
5 [ 0.4001 , 0.2553 , 0.2453 , 0.4053 ]
Random initial starting vectors are likewise used in Eigenvalue Problem 1 to demonstrate the global convergence of the fractional and classical schemes; they are listed in Table A2.
Table A2. Random initial vectors for fractional and classical schemes.
Ran-Test [ λ 1 [ 0 ] , λ 2 [ 0 ] , λ 3 [ 0 ] λ 4 [ 0 ] λ 5 [ 0 ] , λ 6 [ 0 ] λ 7 [ 0 ] λ 8 [ 0 ] ]
1 [ 0.732 , 0.091 , 0.007 , 0.181 , 0.091 , 0.007 , 0.181 , 0.097 ]
2 [ 0.822 , 0.048 , 0.131 , 0.104 , 0.048 , 0.131 , 0.104 , 0.634 ]
3 [ 0.083 , 0.243 , 0.224 , 0.590 , 0.243 , 0.224 , 0.590 , 0.823 ]
5 [ 0.011 , 0.283 , 0.057 , 0.703 , 0.283 , 0.057 , 0.703 , 0.253 ]
Random initial starting vectors are used in Eigenvalue Problem 1 to demonstrate the global convergence of the numerical techniques; they are listed in Table A3.
Table A3. Random starting vectors in parallel schemes.
Ran-Test [ λ 1 [ 0 ] , λ 2 [ 0 ] , λ 3 [ 0 ] λ 8 [ 0 ] ]
1 [ 0.9572 , 0.0041 , 0.007 , 0.0546 ]
2 [ 0.3872 , 0.0451 , 0.021 , 0.0034 ]
3 [ 0.0591 , 0.2483 , 0.017 , 0.3453 ]
[ 0.0491 , 0.2853 , 0.017 , 0.276 ]
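The tables above list fixed random starts. For reproducibility, such vectors can be generated with a seeded pseudo-random generator; the helper below is an illustrative sketch (its name, seed handling, and rounding are assumptions, not taken from the paper):

```python
import random

# Hypothetical helper: reproducible random initial eigenvalue guesses in [0, 1),
# in the spirit of the starting vectors of Tables A1-A3 (the values there differ).
def random_start(n, seed=0):
    rng = random.Random(seed)              # fixed seed -> reproducible test runs
    return [round(rng.random(), 4) for _ in range(n)]

print(random_start(4))                     # a 4-component start, as in Table A1
print(random_start(8, seed=1))             # an 8-component start, as in Table A2
```

Seeding makes divergence statistics such as those in Tables 8 and 14 repeatable across runs.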

References

  1. Guglielmi, N.; De Marinis, A.; Savostianov, A.; Tudisco, F. Contractivity of neural ODEs: An eigenvalue optimization problem. Math. Comput. 2025. [Google Scholar]
  2. Meng, F.; Wang, Z.; Zhang, M. Pissa: Principal singular values and singular vectors adaptation of large language models. Adv. Neural Inf. Process. Syst. 2024, 37, 121038–121072. [Google Scholar]
  3. Ciftci, H.; Hall, R.L.; Saad, N. Construction of exact solutions to eigenvalue problems by the asymptotic iteration method. J. Phys. A Math. Gen. 2005, 38, 1147. [Google Scholar]
  4. Zhang, Y.; Chen, C.; Ding, T.; Li, Z.; Sun, R.; Luo, Z. Why transformers need adam: A hessian perspective. Adv. Neural Inf. Process. Syst. 2024, 37, 131786–131823. [Google Scholar]
  5. Ramond, P. The Abel–Ruffini theorem: Complex but not complicated. Am. Math. Mon. 2022, 129, 231–245. [Google Scholar]
  6. Gidiotis, A.; Tsoumakas, G. A divide-and-conquer approach to the summarization of long documents. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 28, 3029–3040. [Google Scholar] [CrossRef]
  7. Goldstine, H.H.; Murray, F.J.; Von Neumann, J. The Jacobi method for real symmetric matrices. J. ACM (JACM) 1959, 6, 59–96. [Google Scholar]
  8. Fokkema, D.R.; Sleijpen, G.L.; Van der Vorst, H.A. Jacobi–Davidson style QR and QZ algorithms for the reduction of matrix pencils. SIAM J. Sci. Comput. 1998, 20, 94–125. [Google Scholar]
  9. Lu, Z.; Niu, Y.; Liu, W. Efficient block algorithms for parallel sparse triangular solve. In Proceedings of the 49th International Conference on Parallel Processing, Edmonton, AB, Canada, 17–20 August 2020; pp. 1–11. [Google Scholar]
  10. Zhang, Z.; Wang, H.; Han, S.; Dally, W.J. Sparch: Efficient architecture for sparse matrix multiplication. In Proceedings of the 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), San Diego, CA, USA, 22–26 February 2020; pp. 261–274. [Google Scholar]
  11. Bašić, J.; Blagojević, B.; Bašić, M.; Sikora, M. Parallelism and Iterative bi-Lanczos Solvers. In Proceedings of the 2021 6th International Conference on Smart and Sustainable Technologies (SpliTech), Split & Bol, Croatia, 8–11 September 2021; pp. 1–6. [Google Scholar]
  12. Wu, G.; Wei, Y. A Power–Arnoldi algorithm for computing PageRank. Numer. Linear Algebra Appl. 2007, 14, 521–546. [Google Scholar]
  13. Nourein, A.W. An improvement on Nourein’s method for the simultaneous determination of the zeroes of a polynomial (an algorithm). J. Comput. Appl. Math. 1977, 3, 109–112. [Google Scholar]
  14. Falcão, M.I.; Miranda, F.; Severino, R.; Soares, M.J. Weierstrass method for quaternionic polynomial root-finding. Math. Methods Appl. Sci. 2018, 41, 423–437. [Google Scholar]
  15. Kipouros, T.; Jaeggi, D.; Dawes, B.; Parks, G.; Savill, M.A.; Clarkson, P.J. Insight into high-quality aerodynamic design spaces through multi-objective optimization. CMES Comput. Model. Eng. Sci. 2008, 37, 1–23. [Google Scholar]
  16. Landman, D.; Simpson, J.; Vicroy, D.; Parker, P. Response surface methods for efficient complex aircraft configuration aerodynamic characterization. J. Aircraf. 2007, 44, 1189–1195. [Google Scholar]
  17. Liu, Y. Bibliometric Analysis of Weather Radar Research from 1945 to 2024: Formations, Developments and Trends. Sensors 2024, 24, 3531. [Google Scholar]
  18. Jajarmi, A.; Baleanu, D. A new iterative method for the numerical solution of high-order non-linear fractional boundary value problems. Front. Phys. 2020, 8, 220. [Google Scholar]
  19. Sazaklioglu, A.U. An iterative numerical method for an inverse source problem for a multidimensional nonlinear parabolic equation. Appl. Numer. Math. 2024, 198, 428–447. [Google Scholar]
  20. Akgül, A.; Cordero, A.; Torregrosa, J.R. A fractional Newton method with 2αth-order of convergence and its stability. Appl. Math. Lett. 2019, 98, 344–351. [Google Scholar]
  21. Batiha, B. Innovative Solutions for the Kadomtsev–Petviashvili Equation via the New Iterative Method. Math. Prob. Eng. 2024, 2024, 5541845. [Google Scholar]
  22. Weierstrass, K. Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin 1891, 2, 1085–1101. [Google Scholar]
  23. Nedzhibov, G.H. Inverse Weierstrass–Durand–Kerner Iterative Method. Int. J. Appl. Math. 2013, 28, 1258–1264. [Google Scholar]
  24. Nourein, A.W.M. An improvement on two iteration methods for simultaneous determinationof the zeros of a polynomial. Int. J. Comput. Math. 1977, 6, 241–252. [Google Scholar]
  25. Zhang, X.; Peng, H.; Hu, G. A high order iteration formula for the simultaneous inclusion of polynomial zeros. Appl. Math. Comput. 2006, 179, 545–552. [Google Scholar]
  26. Petkovic, M. Iterative Methods for Simultaneous Inclusion of Polynomial Zeros; Springer: Berlin/Heidelberg, Germany, 2006; Volume 1387. [Google Scholar]
  27. Zein, A. A general family of fifth-order iterative methods for solving nonlinear equations. Eur. J. Pure Appl. Math. 2023, 16, 2323–2347. [Google Scholar]
  28. Halilu, A.S.; Waziri, M.Y. A transformed double step length method for solving large-scale systems of nonlinear equations. J. Numer. Math. Stochastics 2021, 9, 20–23. [Google Scholar]
  29. Rafiq, N.; Akram, S.; Shams, M.; Mir, N.A. Computer geometries for finding all real zeros of polynomial equations simultaneously. Comput. Mater. Contin. 2021, 69, 2636–2651. [Google Scholar]
  30. Petković, M.S.; Petković, L.D.; Džunić, J. On an efficient simultaneous method for finding polynomial zeros. Appl. Math. Lett. 2014, 28, 60–65. [Google Scholar]
  31. Wang, D.R.; Wu, Y.J. Some modifications of the parallel Halley iteration method and their convergence. Computing 1987, 38, 75–87. [Google Scholar]
  32. Petković, M.S.; Rančić, L.; Milošević, M. The improved Farmer–Loizou method for finding polynomial zeros. Int. J. Comput. Math. 2012, 89, 499–509. [Google Scholar]
  33. Gillespie, T. Fundamentals of Vehicle Dynamics; SAE: Warrendale, PA, USA, 2000. [Google Scholar]
  34. Reimpell, J.; Stoll, H.; Betzler, J. The Automotive Chassis: Engineering Principles; Elsevier: Amsterdam, The Netherlands, 2001. [Google Scholar]
  35. Prohl, M.A. A general method for calculating critical speeds of flexible rotors. J. Appl. Mech. 1945, 12, A142–A148. [Google Scholar]
  36. Shams, M.; Carpentieri, B. On highly efficient fractional numerical method for solving nonlinear engineering models. Mathematics 2023, 11, 4914. [Google Scholar] [CrossRef]
Figure 1. Computational efficiency comparison of the PMM [ * ] , NNM [ * ] , NFM [ * ] , and PFSB [ ϑ 1 ] - PFSB [ ϑ 3 ] parallel architectures.
Figure 2. Flowchart of the fractional-order parallel schemes for simultaneously finding all eigenvalues.
Figure 3. (a,b) Stability analysis of the car and its corresponding suspension system. (a) Car suspension system; (b) detailed view of the car suspension system.
Figure 4. Dynamical planes of the parallel schemes for solving eigenvalues of Equation (74).
Figure 5. Error graph of the parallel schemes using random initial guesses for solving (74).
Figure 6. Dynamical planes of the parallel schemes for solving eigenvalues of (92).
Figure 7. Error graph of the parallel schemes using random initial guesses for solving (92).
Figure 8. Dynamical planes of the parallel schemes for solving the eigenvalues of Equation (105).
Figure 9. Error graph of the parallel schemes using random initial guesses for solving Equation (105).
Table 1. Numerical results of parallel schemes for solving eigenvalue problems.
Scheme PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
+/− 22.0 κ [ * * ] 29.0 κ [ * * ] 27.0 κ [ * * ] 11.0 κ [ * * ] 13.0 κ [ * * ] 16.0 κ [ * * ]
× 18.0 κ [ * * ] 26.0 κ [ * * ] 26.0 κ [ * * ] 15.0 κ [ * * ] 16.0 κ [ * * ] 15.0 κ [ * * ]
where κ [ * * ] = n 2 + O ( n ) represents the computational complexity, and the number of divisions for all methods is the same, i.e., 2 κ [ * * ] .
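The relative saving implied by such operation counts can be expressed as a percentage. The sketch below uses an assumed formula (saved operations over the reference count); it illustrates the comparison only and does not reproduce the measured percentages of Table 2:

```python
# Assumed illustration: percentage saving of a new scheme relative to an existing
# one, from per-iteration operation counts in units of kappa[**] = n^2 + O(n).
def efficiency_gain(ops_old, ops_new):
    """Percentage of operations saved relative to the reference scheme."""
    return 100.0 * (ops_old - ops_new) / ops_old

# Additions/subtractions from Table 1: PMM[*] needs 22 units, PFSB[theta_1] needs 11.
print(efficiency_gain(22.0, 11.0))  # 50.0
```

Because every count is a multiple of the same $n^2 + O(n)$ unit, such ratios are independent of the problem size $n$ to leading order.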
Table 2. Percentage computational efficiency and memory utilization of parallel schemes.
ψ [ * ] PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
% Efficiency47.1236.3429.3368.8956.4168.99
% Memory Usage68.6553.9863.0425.6731.9828.32
Table 3. Characteristics of the eigenvalue problem from a 2DOF mechanical system.
Property Description
Matrix Size  2 × 2 (mass, damping, and stiffness matrices)
Polynomial Degree 4 (due to transformation)
Sparsity M: sparse (only diagonal entries are nonzero); C: sparse (few nonzeros); K: sparse (few nonzeros, mostly diagonal with a few off-diagonal entries)
Symmetry Yes
Condition Number Well conditioned
Eigenvalue Type Complex
Table 4. Values of α and κ in the proposed methods for solving Equations (74) and (92), respectively.
ParameterRangeEffect on AlgorithmSelection Guidance
κ κ ∈ ( 0 , 1 ] Controls the memory effect and nonlocality; moderate values improve the convergence region and stability. Choose κ close to 1.0 for faster convergence; lower values of κ increase memory depth and smoothness, influencing stability.
α α ∈ R Figure 4, Figure 5 and Figure 6 illustrate the technique’s enhanced convergence for α ∈ [ − 1.7 , 0.5 ] when compared to existing techniques. For general problems, we use α = 1.03 ; for ill-conditioned or stiff eigenvalue problems, we use α = 1.5 for improved robustness.
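To illustrate how the fractional order κ enters an iteration of this kind, the sketch below implements a generic one-point Caputo-type fractional Newton update in the spirit of [20]; it is not the PFSB scheme of this paper, and the test polynomial and update form are assumptions for illustration only:

```python
from math import gamma

def f(x):
    # Illustrative scalar test function (not from the paper): f(x) = x^2 - 4.
    return x * x - 4.0

def caputo_f(x, k):
    # Caputo derivative with base point 0, via the closed form
    # D^k x^p = Gamma(p+1)/Gamma(p-k+1) * x^(p-k), with D^k(const) = 0.
    return gamma(3.0) / gamma(3.0 - k) * x ** (2.0 - k)

def fractional_newton(x0, k, iters=50):
    # Generic Caputo-type fractional Newton variant; k = 1 recovers classical Newton.
    x = x0
    for _ in range(iters):
        x = x - gamma(k + 1.0) * f(x) / caputo_f(x, k)
    return x

root = fractional_newton(3.0, 0.9)
print(root)  # approaches 2, the positive root of x^2 - 4
```

Setting κ = 1 turns `caputo_f` into the ordinary derivative $2x$ and the loop into classical Newton iteration, which matches the table's guidance that κ near 1 gives the fastest convergence.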
Table 5. Fractal analysis of the classical and fractional schemes for solving Equation (74).
Scheme PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
Iterations272829292123
Per-Convergence45.346%36.465%65.51%75.01%65.768%77.453%
Table 6. Numerical results of parallel schemes using Equation (74).
Method ϵ 1 [ ϕ ] ϵ 2 [ ϕ ] ϵ 3 [ ϕ ] ϵ 4 [ ϕ ]
PMM [ * ] 9.16 × 10 58 4.8 × 10 45 3.8 × 10 33 9.53 × 10 87
NNM [ * ] 8.36 × 10 67 0.0 0.0 3.65 × 10 69
NFM [ * ] 8.65 × 10 45 4.84 × 10 57 0.0 2.15 × 10 71
PFSB [ ϑ 1 ] 9.93 × 10 86 0.0 8.36 × 10 102 9.51 × 10 96
PFSB [ ϑ 2 ] 1.2 × 10 98 0.0 0.0 0.0
PFSB [ ϑ 3 ] 0.0 0.0 8.55 × 10 21 5.8 × 10 105
Table 7. Memory usage and computation time (C-Time) using random initial values.
Scheme PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
C-Time (s)4.54645.43643.54651.23541.34351.6343
Memory Usage (%)47.87628.12321.00212.43216.65717.456
Table 8. Consistency analysis using random initial values for solving Equation (74).
SchemeMax-ErrMax-It [ ( x [ ϕ ] ) , ( x [ ϕ ] ) ] Per-Divergence
PMM [ * ] 4.11 × 10 13 2242.066.1232%
NNM [ * ] 8.634 × 10 12 2147.063.3453%
NFM [ * ] 9.134 × 10 10 2056.034.4564%
PFSB [ ϑ 1 ] 7.653 × 10 12 1621.029.4334%
PFSB [ ϑ 2 ] 6.435 × 10 10 1126.023.5346%
PFSB [ ϑ 3 ] 9.513 × 10 15 833.013.3435%
Table 9. Comparative analysis of eigenvalue computation (74) using analytical and parallel methods.
Scheme QR [ * ] DC [ * ] QZ [ * ] MT [ * ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
CPU-time (s) 8.1208 8.8764 7.0043 5.7554 1.6454
Max-Err 0.36 × 10 7 4.97 × 10 8 5.76 × 10 7 7.54 × 10 5 9.76 × 10 44
Table 10. Characteristics of the eigenvalue problem from a 4DOF mechanical system.
Property Description
Matrix Size 4 × 4 (mass, damping, and stiffness matrices)
Polynomial Degree 8 (due to transformation)
Sparsity M: sparse (the mass matrix is diagonal); C: sparse (damping matrices are banded); K: sparse (stiffness matrices are banded)
SymmetryYes
Condition Number ≈ 10^4 (moderately ill-conditioned)
Eigenvalue TypeReal and complex conjugate pairs (due to damping)
Table 11. Fractal analysis of classical and fractional schemes for solving (92).
Scheme PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
Iterations262730282021
Percentage Convergence37.546%26.654%45.771%65.432%55.004%67.987%
Table 12. Numerical results of parallel schemes using (92).
Method ϵ 1 [ ϕ ] ϵ 2 [ ϕ ] ϵ 3 [ ϕ ] ϵ 4 [ ϕ ]
PMM [ * ] 7.16 × 10 47 0.0 5.86 × 10 63 9.53 × 10 37
NNM [ * ] 0.97 × 10 68 0.0 0.0 0.65 × 10 56
NFM [ * ] 0.0 4.84 × 10 57 0.0 0.15 × 10 67
PFSB [ ϑ 1 ] 0.0 0.0 9.76 × 10 102 0.0
PFSB [ ϑ 2 ] 5.76 × 10 97 0.0 0.0 0.0
PFSB [ ϑ 3 ] 0.0 0.0 8.88 × 10 21 9.99 × 10 98
Table 13. Memory usage and computational time (C-Time) using random initial values for solving (92).
Scheme PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
C-Time (s)5.77766.88656.75552.36463.64532.2436
Memory Usage (%) 77.436 68.998 51.565 48.002 47.070 39.986
Table 14. Consistency analysis using random initial values for solving (92).
SchemeMax-ErrMax-It [ ( x [ ϕ ] ) , ( x [ ϕ ] ) ] Per-Divergence (%)
PMM [ * ] 7.347 × 10 11 2244.058.1232
NNM [ * ] 9.980 × 10 12 2148.055.7777
NFM [ * ] 0.765 × 10 10 2046.047.9860
PFSB [ ϑ 1 ] 0.246 × 10 12 1626.048.2265
PFSB [ ϑ 2 ] 1.754 × 10 10 1136.033.0005
PFSB [ ϑ 3 ] 0.654 × 10 15 835.023.1164
Table 15. Comparative analysis of eigenvalue computation in Equation (92) using analytical and parallel methods.
Scheme QR [ * ] DC [ * ] QZ [ * ] MT [ * ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
CPU-time (s) 8.1208 8.8764 7.0043 5.7554 1.6454
Max-Err 8.56 × 10 5 6.87 × 10 7 0.06 × 10 6 0.64 × 10 5 1.14 × 10 39
Table 16. Characteristics of the eigenvalue problem from a 2DOF mechanical system.
Property Description
Matrix Size  2 × 2
Polynomial Degree 4 (due to transformation)
Sparsity M: sparse (only diagonal entries are nonzero); C: sparse (few nonzeros); K: sparse (few nonzeros, mostly diagonal with a few off-diagonal entries)
Symmetry Yes
Condition Number Well conditioned
Eigenvalue Type Complex
Table 17. Values of α and κ in the proposed methods for solving Equation (105).
ParameterRangeEffect on AlgorithmSelection Guidance
κ κ ∈ ( 0 , 1 ] Controls the memory effect and nonlocality; moderate values improve the convergence region and stability. Choose κ close to 1.0 for faster convergence; lower values of κ increase memory depth and smoothness, influencing stability.
α α ∈ R Figure 8 illustrates the technique’s enhanced convergence for α ∈ [ − 2.0 , 0.0 ] when compared to existing techniques. For general problems, we use α = 0.05 ; for ill-conditioned or stiff eigenvalue problems, we use α = 1.5 for improved robustness.
Table 18. Fractal analysis of classical and fractional schemes for solving Equation (105).
Scheme PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
Iterations262730282021
Percentage Convergence37.546%26.654%45.771%65.432%55.004%67.987%
Table 19. Numerical results of parallel schemes using Equation (105).
Method ϵ 1 [ ϕ ] ϵ 2 [ ϕ ] ϵ 3 [ ϕ ] ϵ 4 [ ϕ ]
PMM [ * ] 7.16 × 10 47 0.0 5.86 × 10 63 9.53 × 10 37
NNM [ * ] 0.97 × 10 68 0.0 0.0 0.65 × 10 56
NFM [ * ] 0.0 4.84 × 10 57 0.0 0.15 × 10 67
PFSB [ ϑ 1 ] 0.0 0.0 9.76 × 10 102 0.0
PFSB [ ϑ 2 ] 5.76 × 10 97 0.0 0.0 0.0
PFSB [ ϑ 3 ] 0.0 0.0 8.88 × 10 21 9.99 × 10 98
Table 20. Memory usage and computational time (C-Time) for different schemes using random initial values for solving Equation (105).
Scheme PMM [ * ] NNM [ * ] NFM [ * ] PFSB [ ϑ 1 ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
C-Time (s)5.77766.88656.75552.36463.64542.2436
Memory Usage (%)77.43668.99851.56548.00247.07039.986
Table 21. Consistency analysis using random initial values for solving Equation (105).
SchemeMax-ErrMax-It ( x [ ϕ ] ) , ( x [ ϕ ] ) Per-Divergence (%)
PMM [ * ] 7.347 × 10 14 2244.058.1265
NNM [ * ] 9.110 × 10 17 2148.055.0003
NFM [ * ] 6.54 × 10 16 2046.047.9860
PFSB [ ϑ 1 ] 7.003 × 10 21 1626.048.5555
PFSB [ ϑ 2 ] 5.004 × 10 20 1136.033.0045
PFSB [ ϑ 3 ] 6.54 × 10 19 835.023.1100
Table 22. Comparative analysis of eigenvalue computation in Equation (105) using analytical and parallel methods.
Scheme QR [ * ] DC [ * ] QZ [ * ] MT [ * ] PFSB [ ϑ 2 ] PFSB [ ϑ 3 ]
CPU-time (s) 8.1208 8.8764 7.0043 5.7554 1.6454
Max-Err 5.40 × 10 4 0.07 × 10 5 5.74 × 10 6 7.77 × 10 4 5.03 × 10 34
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
