Efficient Inverse Fractional Neural Network-Based Simultaneous Schemes for Nonlinear Engineering Applications

Abstract: Finding all the roots of a nonlinear equation is an important and difficult task that arises naturally in numerous scientific and engineering applications. Sequential iterative algorithms frequently use a deflating strategy to compute all the roots of the nonlinear equation, as rounding errors have the potential to produce inaccurate results. On the other hand, simultaneous iterative parallel techniques require an accurate initial estimation of the roots to converge effectively. In this paper, we propose a new class of global neural network-based root-finding algorithms for locating real and complex polynomial roots, which exploits the ability of machine learning techniques to learn from data and make accurate predictions. The approximations computed by the neural network are used to initialize two efficient fractional Caputo-inverse simultaneous algorithms of convergence orders ς + 2 and 2ς + 4, respectively. The results of our numerical experiments on selected engineering applications show that the new inverse parallel fractional schemes have the potential to outperform other state-of-the-art nonlinear root-finding methods in terms of both accuracy and elapsed solution time.


Introduction
Determining the roots of nonlinear equations of the form f(υ) = 0 is among the oldest problems in science and engineering, dating back to at least 2000 BC, when the Babylonians discovered a general solution to quadratic equations. In 1079, Omar Khayyam developed a geometric method for solving cubic equations. In 1545, in his book Ars Magna, Girolamo Cardano published a universal solution to a cubic polynomial equation. Cardano was one of the first authors to use complex numbers, but only to derive real solutions to generic polynomials. In 1824, Niels Henrik Abel [1] proved Abel's impossibility theorem: "There is no solution in radicals to a general polynomial with arbitrary coefficients of degree five or higher". The fact, established in the 17th century, that every generic polynomial equation of positive degree has a solution, possibly non-real, was completely demonstrated at the beginning of the 19th century as the "Fundamental theorem of algebra" [2].
From the beginning of the 16th century to the end of the 19th century, one of the main problems in algebra was to find a formula that computed the solution of a generic polynomial with arbitrary coefficients of degree equal to or greater than five. Because there are no analytical or implicit methods for solving this problem, we must rely on numerical techniques to approximate the roots of higher-degree polynomials. These numerical algorithms can be further divided into two categories: those that estimate one polynomial root at a time and those that approximate all polynomial roots simultaneously. Work in this area began in 1970, with the primary goal of developing numerical iterative techniques that could locate polynomial roots on contemporary, state-of-the-art computers with optimal speed and efficiency [3]. Iterative approaches are currently used to find polynomial roots in a variety of disciplines. In signal processing, polynomial roots are the frequencies of signals representing sounds, images, and movies. In control theory, they characterize the behavior of control systems and can be utilized to enhance system stability or performance. The prices of options and futures contracts are determined by estimating the roots of (1); therefore, it is critical to compute them accurately so that they can be priced precisely.
In recent years, many iterative techniques for estimating the roots of nonlinear equations have been proposed. These methods use several quadrature rules, interpolation techniques, error analysis, and other tools to improve the convergence order of previously known single-root-finding procedures [4-6]. In this paper, we focus on iterative approaches for solving single-variable nonlinear equations one root at a time, and we then generalize these methods to locate all distinct roots of nonlinear equations simultaneously. A sequential approach for finding all the zeros of a polynomial necessitates repeated deflation, which can produce significantly inaccurate results due to rounding errors propagating in finite-precision floating-point arithmetic. As a result, we employ more precise, efficient, and stable simultaneous approaches. The literature is vast and dates back to 1891, when Weierstrass introduced the single-step derivative-free simultaneous method [7] for finding all polynomial roots, which was later rediscovered by Kerner [8], Durand [9], Dochev [10], and Presic [11]. Gauss-Seidel-type iterations [12] and Petkovic et al. [13] provided second-order methods for approximating all roots simultaneously; Börsch-Supan [14] and Mir [15] presented third-order methods; Proinov et al. [16] introduced a fourth-order method; and Zhang et al. [17] a fifth-order method. Additional enhancements in efficiency were demonstrated by Ehrlich in [18] and Milovanovic et al. in [19]; Nourein proposed a fourth-order method in [20]; and Petkovic et al. a sixth-order simultaneous method with derivatives in [21]. The percentage efficiency of simultaneous methods was introduced in [22] in 2014. Later, in 2015, Proinov et al. [23] presented a general convergence theorem for simultaneous methods and a description of the application of the Weierstrass root-approximating methodology. In 2016, Nedzhibov [24] developed a modified version of the Weierstrass method, and in 2020, Marcheva et al. [25] presented a local convergence theorem. Shams et al. [26,27] presented computational efficiency ratios and initial vectors for the simultaneous approach to locate all polynomial roots in 2020, as well as global convergence results in 2022 [28]. Additional advancements in the field can be found in [29-33] and the references therein.
The primary goal of this study is to develop Caputo-type fractional inverse simultaneous schemes that are more robust, stable, computationally inexpensive, and CPU-efficient compared to existing methods. The theory and analysis of inverse fractional parallel numerical Caputo-type schemes, as well as their practical implementation utilizing artificial neural networks (ANNs) for approximating all roots of (1), are also thoroughly examined. Simultaneous schemes, Caputo-type fractional inverse simultaneous schemes, and simultaneous schemes based on artificial neural networks are all discussed and compared in depth. The main contributions of this study are as follows:
• Two novel fractional inverse simultaneous Caputo-type methods are introduced in order to locate all the roots of (1).
• A local convergence analysis is presented for the proposed parallel fractional inverse numerical schemes.
• A rigorous complexity analysis is provided to demonstrate the increased efficiency of the new methods.
• The Levenberg-Marquardt Algorithm is utilized to compute all of the roots using ANNs.
• The global convergence behavior of the proposed inverse fractional parallel root-finding method with random initial estimate values is illustrated.
• The efficiency and stability of the new methods are numerically assessed using dynamical planes.
• The general applicability of the methods to various nonlinear engineering problems is thoroughly studied using different stopping criteria and random initial guesses.
To the best of our knowledge, this contribution is novel. A review of the existing body of literature indicates that research on fractional parallel numerical methods for simultaneously locating all roots of (1) is extremely limited. This paper is organized as follows. In Section 2, a number of fundamental definitions are given. The construction, analysis, and assessment of inverse fractional algorithms are covered in Section 3. A comparison is made between the artificial neural network aspects of recently developed parallel methods and those of existing schemes in Section 4. A cost analysis of classical and fractional parallel schemes is detailed in Section 5. A dynamical analysis of global convergence is presented in Section 6. In order to evaluate the newly developed techniques in comparison to the parallel computer algorithms that are currently documented in the literature, Section 7 solves a number of nonlinear engineering applications and reports on the numerical results. The global convergence behavior of the inverse parallel scheme is also compared to that of a simultaneous neural network-based method in this section. Finally, some conclusions arising from this work are drawn in Section 8.

Preliminaries
With the exception of the Caputo derivative, none of the fractional-type derivatives satisfy the basic condition of fractional calculus D^ς(1) = 0 when ς is not a natural number. In this section, we therefore discuss some basic concepts of fractional calculus, as well as the fractional iterative scheme for solving (1) using Caputo-type derivatives.
Theorem 1 (Generalized Taylor Formula [36]). The generalized Taylor theorem of fractional order is a powerful mathematical tool that extends the applicability of Taylor series approximations to fractional-order functions that model and describe a wide range of complex phenomena in scientific and engineering domains, including signal processing, control theory, biomedical engineering, image processing, chaos theory, and economic and financial modeling. Suppose that D^(kς) f ∈ C(a, b] for k = 0, 1, . . ., m + 1, where 0 < ς ≤ 1; then, for υ0 ∈ (a, b], we have:

f(υ) = Σ_{k=0}^{m} ((υ − υ0)^(kς) / Γ(kς + 1)) D^(kς) f(υ0) + ((υ − υ0)^((m+1)ς) / Γ((m+1)ς + 1)) D^((m+1)ς) f(ξ),

where a < ξ ≤ υ and D^(kς) denotes the k-fold application of the Caputo derivative D^ς. The corresponding Caputo-type fractional derivative of f(υ) of order ς > 0 is defined as:

D^ς f(υ) = (1/Γ(n − ς)) ∫_a^υ f^(n)(t) (υ − t)^(n−ς−1) dt, n − 1 < ς ≤ n, n ∈ N.
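Since (1) is a polynomial in our applications, the Caputo derivative can be evaluated term by term using D^ς υ^k = Γ(k + 1)/Γ(k + 1 − ς) υ^(k−ς), with the constant term mapped to zero. A minimal Python sketch (the paper's computations use MATLAB and Maple; the function name here is ours):

```python
from math import gamma

def caputo_derivative_poly(coeffs, x, sigma):
    """Caputo fractional derivative of order sigma (0 < sigma <= 1) of the
    polynomial sum_k coeffs[k] * x**k, evaluated at x > 0.
    Term-wise rule: D^s x^k = Gamma(k+1)/Gamma(k+1-s) * x**(k-s);
    the constant term maps to zero (the defining Caputo property)."""
    total = 0.0
    for k, c in enumerate(coeffs):
        if k == 0:
            continue  # Caputo derivative of a constant is zero
        total += c * gamma(k + 1) / gamma(k + 1 - sigma) * x ** (k - sigma)
    return total
```

For ς = 1 this reduces to the ordinary first derivative, which serves as a convenient sanity check.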

Construction of Inverse Fractional Parallel Schemes
Numerical methods for solving nonlinear equations are essential tools for a wide range of problems. They do, however, involve trade-offs, such as the need for initial guesses, convergence issues, and parameter sensitivity. To produce accurate and efficient answers, the method used should be carefully assessed depending on the specific characteristics of the problem. The Newton-Raphson method is a widely used algorithm for locating a single root of (1). If f′(υ^(σ)) → 0, the method becomes unstable. As a result, we consider an alternative technique based on fractional-order iterative algorithms in this paper. The fractional Newton approach using different fractional derivatives is discussed by Akgül et al. [37], Torres-Hernandez et al. [38], Gajori et al. [39], and Kumar et al. [40]. Candelario et al. [41] proposed the following Caputo-type fractional variant of the classical Newton method (FNN):

υ^(σ+1) = υ^(σ) − (Γ(ς + 1) f(υ^(σ)) / D^ς f(υ^(σ)))^(1/ς), σ = 0, 1, 2, . . .,

where D^ς f denotes the Caputo-type fractional derivative of f of order ς ∈ (0, 1]. The order of convergence of this fractional Newton method is ς + 1. Candelario et al. [41] also proposed a second fractional numerical scheme for calculating simple roots of (1), with convergence order 2ς + 1. There have recently been numerous studies on iterative root-finding algorithms that can precisely approximate one root of (1) at a time [42-45]. The class of fractional numerical schemes is particularly sensitive to the choice of the initial guess; if we do not choose a suitable initial guess sufficiently close to a root of (1), the method becomes unstable and may not converge. As a result, we explore numerical schemes with global convergence properties, i.e., parallel numerical schemes for simultaneously finding all roots of (1).
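As an illustration, the fractional Newton step can be sketched in Python, assuming the commonly reported form υ^(σ+1) = υ^(σ) − (Γ(ς + 1) f(υ^(σ))/D^ς f(υ^(σ)))^(1/ς) of the Caputo-type scheme; the function names are ours, and complex arithmetic is used because the fractional power of a negative value is complex:

```python
from math import gamma

def poly(coeffs, x):
    """Evaluate sum_k coeffs[k] * x**k."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

def caputo_dpoly(coeffs, x, s):
    """Term-wise Caputo derivative of order s of sum_k coeffs[k] * x**k."""
    return sum(c * gamma(k + 1) / gamma(k + 1 - s) * x ** (k - s)
               for k, c in enumerate(coeffs) if k > 0)

def fractional_newton(coeffs, x0, s=0.9, tol=1e-12, max_iter=200):
    """One-root fractional Newton iteration (a sketch of the Caputo-type
    scheme of order s + 1 reported by Candelario et al.):
        x_{k+1} = x_k - (Gamma(s+1) * f(x_k) / D^s f(x_k)) ** (1/s)."""
    x = complex(x0)
    for _ in range(max_iter):
        fx = poly(coeffs, x)
        if abs(fx) < tol:
            break
        x = x - (gamma(s + 1) * fx / caputo_dpoly(coeffs, x, s)) ** (1.0 / s)
    return x
```

With s = 1 the step reduces to the classical Newton iteration, which is the easiest case to verify.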

Construction of Inverse Fractional Parallel Scheme of Order ς + 2
The German mathematician Karl Weierstrass (1815-1897) developed the Weierstrass method for finding all roots of (1), which is based on the following quadratically convergent iterative scheme [46]:

υ_i^(σ+1) = υ_i^(σ) − f(υ_i^(σ)) / Π_{j≠i} (υ_i^(σ) − υ_j^(σ)), i, j = 1, 2, . . ., n.

In order to reduce the computational costs and enhance the convergence rates, we investigate inverse simultaneous methods [47]. The inverse simultaneous scheme applied to (1) is given in (13) [48], which can also be expressed as (14) by replacing υ_j^(σ) with y_j^{*(σ)}. As a result, our new inverse fractional simultaneous scheme (FINS ς) is established in (16), which can also be written as (17). The newly developed inverse fractional-order parallel schemes outperform other current approaches in the literature in terms of convergence order, as proven by the following convergence analysis.
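The classical Weierstrass (Durand-Kerner) correction above can be sketched as follows. This is the plain quadratically convergent scheme, not the inverse fractional variant, and the spiral of distinct complex starting points is a standard heuristic, not the initialization used in the paper:

```python
import numpy as np

def weierstrass_dk(coeffs, tol=1e-12, max_iter=100):
    """Weierstrass (Durand-Kerner) simultaneous iteration for all roots of a
    monic polynomial p(x) = sum_k coeffs[k] * x**k with coeffs[-1] == 1.
    Each approximation is corrected by the Weierstrass correction
        W_i = p(x_i) / prod_{j != i} (x_i - x_j)."""
    n = len(coeffs) - 1
    # Distinct complex starting points on a spiral (standard heuristic)
    x = np.array([(0.4 + 0.9j) ** k for k in range(n)], dtype=complex)
    p = np.polynomial.polynomial.Polynomial(coeffs)
    for _ in range(max_iter):
        w = np.empty(n, dtype=complex)
        for i in range(n):
            w[i] = p(x[i]) / np.prod(x[i] - np.delete(x, i))
        x = x - w
        if np.max(np.abs(w)) < tol:
            break
    return x
```

For example, for p(x) = (x − 1)(x − 2)(x − 3) = x³ − 6x² + 11x − 6, all three roots are recovered simultaneously from the generic starting points.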

Convergence Framework
The following theorem examines the convergence order of FINS ς .
Theorem 2. Let ζ_1, . . ., ζ_n be simple zeros of (1); then, for sufficiently close distinct initial estimates of the roots, FINS ς has a convergence order of ς + 2. To prove this, let e_i and u_i denote the errors in the approximations and corrections, respectively; from the first step of the scheme we obtain (19). Using the expression from [49] in (19) and assuming that all errors are of the same order, the asserted order ς + 2 follows. Hence, the theorem is proved.

Construction of Inverse Fractional Parallel Scheme of Order 2ς + 4
Consider the two-step Weierstrass method [50] with fourth-order convergence, and the two-step inverse Weierstrass method (IWDKI) [51], also with fourth-order convergence in its local form; these serve as the basis for our second inverse fractional simultaneous scheme, FINS ς *.

Convergence Framework
The following theorem examines the convergence order of FINS ς * .
Theorem 3. Let ζ 1 , . . ., ζ n be simple zeros of (1); then, for sufficiently close distinct initial estimates of the roots, FINS ς * has a convergence order of 2ς + 4.
Proof. Let e_i = υ_i − ζ_i, e_i′ = u_i − ζ_i, and e_i″ = z_i − ζ_i be the errors in υ_i, u_i, and z_i, respectively. From the first step of FINS ς *, we obtain (27). Using the expression from [49] in (27) and assuming that all errors are of the same order, the first step has convergence order ς + 2. Taking the second step of FINS ς * and applying the same argument to (31), again assuming that all errors are of the same order, the overall convergence order is 2ς + 4.
Hence, the theorem is proved.
In order to achieve a higher order of convergence with simultaneous methods, it is necessary to compute higher derivatives. Moreover, in finite-precision arithmetic, repeated deflation and the segregation of initial approximations may produce inaccurate results due to the accumulation of rounding errors. This study therefore investigates the efficacy and accuracy of a neural network-based algorithm in locating the real and complex roots of (1). This is feasible because conventional ANNs are widely recognized for their ability to identify intricate nonlinear input-output mappings.
Several researchers have used ANNs to approximate the roots of polynomial equations. Hormis and colleagues [52] published the first paper in 1995 on the use of ANN-based methods to locate the roots of a given polynomial. Huang and Chi [53,54] used the ANN framework in 2001 to find the real and complex roots of a polynomial and enhanced the training algorithm with prior knowledge of root-coefficient relationships. A dilatation method for locating close arbitrary roots of polynomials was introduced in 2003 [55]; by increasing the distance between close roots, it improves an ANN's ability to separate them. In contrast, Huang et al. [56] included Newton identities in the ANN training algorithm. In this study, we compare ANNs to the inverse simultaneous technique in order to rapidly and precisely approximate all roots of (1) arising in a variety of engineering problems.

Artificial Neural Network-Based Inverse Parallel Schemes
Artificial neural networks (ANNs) are capable of solving nonlinear equations and other related problems, and they are relevant in this context for a variety of reasons:
• Versatility: Because they are flexible function approximators, ANNs can express complex and nonlinear interactions between inputs and outputs. Due to this versatility, they can be used to solve a wide range of nonlinear equations in physics, engineering, finance, and optimization, among other domains.
• Data-driven solutions: ANNs are capable of learning from data. They can be trained on pre-existing data to provide solutions or approximations for nonlinear equations that are difficult to analyze or solve numerically. This data-driven methodology proves particularly advantageous in domains where empirical data are easily accessible.
• Inverse problems: In various real-world scenarios involving data and the need to determine which variables or inputs best explain them, inverse modeling is used to solve the resulting inverse problems. ANNs are capable of solving inverse problems by finding the mapping between unknown parameters and data.
• Complex systems: ANNs can be used to describe the overall behavior of complex systems in which nonlinear equations are coupled and difficult to solve separately. Engineers and scientists can use this methodology to gain knowledge, make predictions, or improve system performance.
• Automation: Once trained, ANNs can provide automatic solutions to nonlinear problems that require less manual input and specialized mathematical knowledge.
Although ANNs have a number of advantages for dealing with nonlinear equations and related problems, they are not always the best option, depending on factors such as data availability, problem characteristics, and the particular objectives of the analysis. In certain situations, symbolic mathematics or conventional numerical methods remain more favorable. Nevertheless, ANNs have proven to be valuable tools for dealing with difficult nonlinear problems across numerous disciplines.
In this research paper, we propose a neural network-based methodology for locating real and complex polynomial roots, and we evaluate its computational performance and accuracy. The approximations obtained by the ANNs are used to build the initialization scheme for the inverse fractional parallel approach. We trained a neural network with three layers (input, hidden, and output) using the well-known Levenberg-Marquardt Algorithm (LMA) [57,58]. The network's input was a collection of real coefficients of n-degree polynomials, and its output was the set of their roots. Figure 1 depicts a schematic representation of a neural network that can approximate the roots of an n-th degree polynomial. Data Set: The tables in Appendices A and B present the first rows of the data sets utilized in the ANN to estimate the real and complex roots of (1) in some engineering applications. These sets consist of 10,000 records. In the second data set, in Appendix B, the real and imaginary parts of the roots are presented in the odd and even columns, respectively. Random polynomial coefficients in the range [0, 1] were generated in MATLAB using the Symbolic Math Toolbox, and the exact real or complex roots of the polynomials were determined. The coefficients and roots were computed using double-precision arithmetic, although only four decimal digits are displayed. It should be noted that the ANN algorithm cannot distinguish between complex and real roots. The ANNs were trained using 70% of the samples from these data sets; the remaining 30% of the data was used to evaluate the generalization capabilities of the ANNs. In order to compute the real and imaginary parts of the n roots of each polynomial of degree n, the n + 1 coefficients were used as the input to the ANNs.
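The data-set pipeline described above can be sketched as follows. This is a hedged reconstruction: the paper uses MATLAB's Symbolic Math Toolbox to compute exact roots, whereas numpy.roots is substituted here, and the function name and seed are ours:

```python
import numpy as np

def make_root_dataset(n_samples=10000, degree=4, seed=0):
    """Random coefficients in [0, 1] for degree-n polynomials, with the
    complex roots as targets. Inputs: n + 1 coefficients. Targets: real and
    imaginary parts of the n roots, interleaved (real parts in odd columns,
    imaginary parts in even columns, as in Appendix B)."""
    rng = np.random.default_rng(seed)
    X = np.empty((n_samples, degree + 1))
    Y = np.empty((n_samples, 2 * degree))
    for i in range(n_samples):
        c = rng.uniform(0.0, 1.0, degree + 1)  # highest-degree first, as np.roots expects
        r = np.sort_complex(np.roots(c))
        X[i] = c
        Y[i, 0::2] = r.real
        Y[i, 1::2] = r.imag
    cut = int(0.7 * n_samples)  # 70/30 train/test split, as in the text
    return (X[:cut], Y[:cut]), (X[cut:], Y[cut:])
```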
Training Algorithm: The ANNs were trained using the well-known LMA method [57,58], as previously mentioned. The LMA integrates the gradient descent method and the Gauss-Newton method, and is regarded as a highly effective approach for training ANNs, especially medium-sized networks, owing to its fast convergence and effectiveness. We refer the reader to [59,60] for a comprehensive presentation of the LMA. The method depends on a positive parameter μ that is modified adaptively during each iteration in order to achieve a balance between the effects of the two optimization techniques [61]. The weights of the neural connections are modified based on the discrepancy between the predicted and computed values; the weight update is computed as follows:

Δ = −(J^T J + μI)^(−1) J^T e,

where I is the identity matrix and e is the error vector. Finally, J represents the Jacobian matrix with elements J_{i,j} = ∂e_i/∂Δ_j.
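The Levenberg-Marquardt update can be sketched as a single function, where `jac`, `err`, and `mu` stand for the Jacobian, the error vector, and the adaptive parameter (the function name is ours):

```python
import numpy as np

def lma_step(jac, err, mu):
    """One Levenberg-Marquardt weight update:
        dw = -(J^T J + mu * I)^(-1) J^T e.
    Large mu behaves like gradient descent with a small step;
    small mu approaches the Gauss-Newton step."""
    n = jac.shape[1]
    A = jac.T @ jac + mu * np.eye(n)
    return -np.linalg.solve(A, jac.T @ err)
```

For a linear residual e(w) = Jw − y with invertible J and mu → 0, a single step lands on the exact least-squares solution, which makes the rule easy to check.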
The LMA was used in a batch learning strategy, which means that the network's weights and biases were updated after all of the training set samples had been presented to it. The strategy may be viewed as an exact method, since it employs derivative information to adjust the ANN's weights in order to reduce the error between the exact target values and the predicted values. The results are presented for polynomials with real and complex roots, as well as comparisons of the accuracy measures and execution times of the approximations produced by the FINS ς1 * -FINS ς5 * methods. The mean squared error (MSE) was employed as the error metric:

MSE = (1/n) Σ_{i=1}^{n} (ϑ_i − Ň_i)^2,

where ϑ_i denotes the exact ith root in the test data set, and Ň_i is the corresponding estimate obtained using the FINS ς1 * -FINS ς5 * methods or the proposed ANN strategy.

Computational Analysis of Inverse Fractional Parallel Schemes
This section discusses the algorithmic complexity and convergence characteristics of our methods. The convergence is influenced by the initial guess of the roots: when the initial estimate is closer to the roots of (1), the method converges more quickly. In comparison to a single-root-finding algorithm, the higher per-iteration computational cost of the simultaneous technique is offset by its global convergence behavior. The total complexity of the simultaneous technique is O(m^2), where m is the degree of the polynomial. In this section, we compare the computational efficiency of the FINS ς1 * -FINS ς5 * algorithms as the parameter values change.
The computational efficiency of an iterative method of convergence order r can be estimated as [62-64]:

E = r^(1/D),

where D is the computational cost per iteration, defined as the weighted number of additions/subtractions, multiplications, and divisions. The percentage efficiency ratio of two methods is then:

ρ(FINS ς *, IWDKI) = (E(FINS ς *)/E(IWDKI) − 1) × 100,

where the acronym IWDKI represents the inverse Weierstrass method (23) for simultaneously locating all roots of nonlinear equations. Using these expressions and the data in Table 1, we compute the efficiency ratios.
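The comparison can be sketched numerically, assuming the standard Ostrowski-type efficiency index E = r^(1/D) and a percentage ratio of the form (E1/E2 − 1) × 100; the operation-count weights that determine D come from Table 1 and are not reproduced here, so the values below are purely illustrative:

```python
def efficiency_index(r, D):
    """Ostrowski-type efficiency index E = r**(1/D), with r the convergence
    order and D the per-iteration computational cost (weighted op count)."""
    return r ** (1.0 / D)

def percentage_ratio(r1, D1, r2, D2):
    """Percentage efficiency of method 1 over method 2:
        rho = (E1 / E2 - 1) * 100.
    Positive values mean method 1 is the more efficient of the two."""
    return (efficiency_index(r1, D1) / efficiency_index(r2, D2) - 1.0) * 100.0
```

For equal costs, a higher convergence order yields a positive ratio, matching the qualitative conclusion drawn from Figure 2a-e.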

Table 1. Per-iteration operation counts (additions and subtractions, multiplications, divisions) for each method.
These percentage ratios are graphically illustrated in Figure 2a-e.It is evident that the new inverse fractional simultaneous techniques are more efficient compared to the IWDKI method [66,67].

Dynamical Analysis of Inverse Fractional Parallel Schemes
In order to solve a polynomial equation using the inverse fractional simultaneous iterative method, it is often useful to examine the basins of attraction [68,69] of the equation's roots. Within a basin of attraction in the complex plane, the inverse fractional simultaneous scheme eventually converges to a particular polynomial root. To identify the basins of attraction for (1) using the inverse fractional simultaneous scheme, we use a grid of 800 × 800 points in the domain [−2, 2] × [−2, 2] of the complex plane encompassing the region of interest. We show the basins of attraction for a polynomial equation whose Caputo-type derivative is available in closed form, applying the inverse numerical scheme starting from each point of that grid. We observe the behavior of the iterations for each point until it converges to one of the roots of the polynomial equation within a tolerance of 10^−3 on the error, or until a predetermined number of iteration steps has been performed. Each grid point is colored or shaded based on the polynomial root to which it converges, providing a visual representation of the basins of attraction of (1). It is important to note that the inverse simultaneous scheme converges to a root even for initial points that are far from any of the roots or that lie in a region with complex dynamics, which demonstrates the global convergence behavior of the numerical scheme.
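The grid computation described above can be sketched as follows; for brevity this simplified version colors points by the root reached under classical Newton iteration rather than the inverse fractional simultaneous scheme, and the function name is ours:

```python
import numpy as np

def basins_of_attraction(coeffs, n=200, box=2.0, tol=1e-3, max_iter=50):
    """Color an n x n grid of starting points in [-box, box]^2 by the index
    of the root that Newton iteration reaches within tol, or -1 if the point
    fails to converge within max_iter steps."""
    p = np.polynomial.polynomial.Polynomial(coeffs)
    dp = p.deriv()
    roots = p.roots()
    xs = np.linspace(-box, box, n)
    grid = np.full((n, n), -1, dtype=int)
    for i, yv in enumerate(xs):
        for j, xv in enumerate(xs):
            z = complex(xv, yv)
            for _ in range(max_iter):
                d = dp(z)
                if d == 0:
                    break  # derivative vanished; mark as divergent
                z = z - p(z) / d
                k = int(np.argmin(np.abs(roots - z)))
                if abs(roots[k] - z) < tol:
                    grid[i, j] = k
                    break
    return grid
```

For z³ − 1, nearly every starting point on the grid is captured by one of the three cube roots of unity, with only the fractal boundary left unresolved.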
In Tables 2 and 3, E-Time denotes the elapsed time in seconds, It-N indicates the number of iterations, TT-Points represents the total number of grid points, C-Points denotes the number of convergent points, D-Points refers to the number of divergent points, and Per-Convergence and Per-Divergence are the percentage convergence and divergence of the numerical scheme used to generate the basins of attraction for various functional parameters. Figure 3a,b and Tables 2 and 3 clearly demonstrate that the rate of convergence increases as ς grows from 0.1 to 1.0, showing the global convergence of FINS ς 1 * and FINS ς , respectively.

Analysis of Numerical Results
In this section, we present a few numerical experiments to compare the performance of our proposed simultaneous methods, FINS ς1 -FINS ς5 and FINS ς1 * -FINS ς5 *, to that of the ANN in some real-world applications. The calculations were carried out in quadruple-precision (128-bit) floating-point arithmetic using Maple 18. The algorithms were terminated when the absolute error e fell below a tolerance or after a maximum number of iterations; the stopping criteria for both the fractional inverse numerical simultaneous methods and the ANN training were 5000 iterations and e = 10^−18. The elapsed times were obtained using a laptop equipped with a third-generation Intel Core i3 CPU and 4 GB of RAM. In our experiments, we compare the results of the newly developed fractional numerical schemes FINS ς1 -FINS ς5 and FINS ς1 * -FINS ς5 * to the Weierstrass method (WDKM), the convergent method by Zhang et al. (ZPHM), and the Petkovic method (MPM). We generate random starting guess values using Algorithms 1 and 2, as shown in Tables A1-A6. The parameter values utilized in the numerical results are reported below.
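The driver loop implied by the stopping rule above can be sketched generically; `step` stands for one pass of any simultaneous scheme, and the function name is ours (note that the paper's tolerance of 10^−18 is only meaningful in the quadruple precision it uses; double precision needs a looser tolerance):

```python
import numpy as np

def run_until_converged(step, x0, eps=1e-18, max_iter=5000):
    """Iterate x <- step(x) until the largest componentwise update falls
    below eps, or max_iter iterations are reached (the paper's 5000-iteration
    cap). Returns the final iterate and the iteration count."""
    x = np.asarray(x0, dtype=complex)
    for it in range(1, max_iter + 1):
        x_new = step(x)
        if np.max(np.abs(x_new - x)) < eps:
            return x_new, it
        x = x_new
    return x, max_iter
```

As a quick check, driving the Babylonian square-root map with this loop converges to √2 in a handful of iterations.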

Real-World Applications
In this section, we apply our new inverse methods FINS ς1 -FINS ς5 and FINS ς1 * -FINS ς5 * to solve some real-world applications.
Example 1 (Shock Absorber Problem). The shock absorber, or damper, is a component of the suspension system that is used to control the transient behavior of the vehicle mass and the suspension mass (see Pulvirenti [70] and Konieczny [71]). Because of its nonlinear behavior, it is one of the most complicated suspension system components. The damping force of the damper is characterized by an asymmetric nonlinear hysteresis loop (Liu [72]). In this example, the vehicle's characteristics are simulated using a quarter-car model with two degrees of freedom, and the damper effect is investigated using linear and nonlinear damping characteristics. Simpler models, such as purely linear ones, fall short of explaining the damper's behavior. The mass motion equations are given in (43), where k s and k σ are the spring stiffness and tire stiffness coefficients; m s and m u are the sprung and unsprung masses; and υ s and υ u are the displacements of the sprung and unsprung masses. The coefficient of damping force F in (43) is approximated by the polynomial (44) [73], using measurements of the displacement, velocity, and acceleration of the mass over time. Figure 4 illustrates how the model can be used to develop and optimize vehicle systems for a range of driving situations, including ride comfort, handling, and stability. The Caputo-type derivative of (44) is available in closed form, and the exact roots of Equation (44) are known. Next, the convergence rate and computational order of the numerical schemes FINS ς1 * -FINS ς5 * are examined. In order to quantify the global convergence rate of the inverse parallel fractional scheme, a random initial guess value v = [0.213, 0.124, 1.02, 1425] is generated using the built-in MATLAB rand() function. With this random initial estimate, FINS ς1 * converges to the exact roots after 9, 8, 7, and 7 iterations and requires, respectively, 0.04564, 0.07144, 0.07514, 0.01247, and 0.045451 s to converge for the fractional parameters 0.1, 0.3, 0.5, 0.8, and 1.0. The results in Table 4 clearly indicate that as the value of ς grows from 0.1 to 1.0, the rate of convergence of FINS ς1 * increases. Unlike ANNs, our newly developed algorithm converges to the exact roots for a range of random initial guesses, confirming a global convergence behavior.
When the initial approximation is close to the exact roots, the rate of convergence increases significantly, as illustrated in Tables 5 and 6, where the listed initial estimate values increase the convergence rates. The outcomes of the ANN-based inverse simultaneous schemes (ANNFN ς1 * -ANNFN ς5 *) are shown in Table 7. The real coefficients of the nonlinear equation used in engineering application 1 were fed into the ANNs, and the output was the exact roots of the relevant nonlinear equation, as shown in Figure 1. The head of the data set utilized in the ANNs is shown in Tables A1 and A5, which provide the approximate roots of the nonlinear equation used in engineering application 1 as the output. To generate the data sets, polynomial coefficients were produced at random in the interval [0, 1], and the exact roots were calculated using MATLAB. As noted for Appendix A Table A1, the ANNs are unaware of which roots are real and which are complex; the ANNs were trained using 70% of the samples from these data sets. The remaining 30% was utilized to assess the ANNs' generalization skills by computing a performance metric on the samples that were not used for training. For a polynomial of degree 4, the ANN required 5 input data points, two hidden layers, and 10 output data points (the real and imaginary parts of the calculated roots). In order to represent all the roots of engineering application 1, Figures 5a-9a display the error histogram (EPH), mean square error (MSE), regression plot (RP), transition statistics (TS), and fitness overlapping graphs of the target and outcomes of the LMA-ANN for each instance's training, testing, and validation. Table 7 provides a summary of the performance of ANNFN ς1 * -ANNFN ς5 * in terms of the mean square error (MSE), percentage effectiveness (Per-E), execution time in seconds (Ex-time), and iteration number (Error-it).
The numerical results of the simultaneous schemes with initial guess values close to the exact roots are shown in Table 8. In terms of the residual error, CPU time, and maximum error (Max-Error), our new methods exhibit better results than the existing methods after the same number of iterations. The proposed root-finding method, represented in Figure 1, is based on the Levenberg-Marquardt technique for artificial neural networks. The "nftool" fitting tool, which is included in the ANN toolkit in MATLAB, is used to approximate the roots of polynomials with randomly generated coefficients.
For training, testing, and validation, the input and output results of the LMA-ANNs' fitness overlapping graphs are shown in Figure 5a. According to the histogram, the error is 6.2 × 10^−2, demonstrating the consistency of the suggested solver. For engineering application 1, the MSE of the LMA-ANNs when comparing the expected outcome to the target solution is 6.6649 × 10^−9 at epoch 112, as shown in Figure 6a. The expected and actual results of the LMA-ANNs are linearly related, as shown in Figure 7a. Figure 8a illustrates the efficiency, consistency, and reliability of the engineering application 1 simulation, where Mu is the adaptation parameter of the algorithm that trained the LMA-ANNs. Mu is kept in the range [0, 1] and directly affects the error convergence. For engineering application 1, the gradient value is 9.9314 × 10^−6 with a Mu parameter of 1.0 × 10^−4. Figure 8a shows how the results for the minimal Mu and gradient converge as the network becomes more efficient at training and testing. In turn, the fitness curve simulations and regression analysis simulations are displayed in Figure 9a. When R is near 1, the correlation is strong; however, it becomes unreliable when R approaches 0. A reduced MSE leads to a decreased response time. Figure 10a-e depict the root trajectories for various initial estimate values.

Example 2 (Blood Rheology Model [75]).
Nanofluids are synthetic fluids consisting of nanoparticles, typically less than 100 nanometers in size, dispersed in a base liquid such as water or oil. These nanoparticles can be used to improve the heat-transfer capabilities or other properties of the base fluid and are frequently chosen for their special thermal, electrical, or optical characteristics. Casson nanofluid, like other nanofluids, can be used in a variety of contexts, such as heat-transfer systems, the cooling of electronics, and even medical applications. The introduction of nanoparticles into a fluid can enhance its thermal conductivity and other characteristics, potentially leading to enhanced heat exchange or other intended results in specific applications. According to the Casson fluid model, a basic fluid such as water or plasma flows in a tube so that its center core travels as a plug with very little deflection and a velocity variance toward the tube wall. In our experiment, the plug flow of Casson fluids was described as: Using G = 0.40 in Equation (45), we have: The Caputo-type derivative of (46) is given as: The exact roots of Equation (46) are: In order to quantify the global convergence rate of the inverse parallel fractional schemes FINS ς 1 * -FINS ς 5 * , a random initial guess value v = [12.01, 14.56, 4.01, 45.5, 3.45, 78.9, 14.56, 47.89] is generated by the built-in MATLAB rand() function. With this random initial estimate, FINS ς 1 * converges to the exact roots after 9, 8, 7, 6, and 5 iterations and requires, respectively, 0.04564, 0.07144, 0.07514, 0.01247, and 0.045451 s to converge for different fractional parameters, namely 0.1, 0.3, 0.5, 0.8, and 1.0. The results in Table 9 clearly indicate that as the value of ς grows from 0.1 to 1.0, the rate of convergence of FINS ς 1 * increases. Unlike ANNs, our newly developed algorithm converges to the exact roots for a range of random initial guesses, confirming a global convergence behavior. Figure 11a-e depict the root trajectories for various initial estimate values. When the initial approximation is close to the exact roots, the rate of convergence increases significantly, as illustrated in Tables 10 and 11. The following initial estimate value results in an increase in the convergence rates: υ 8 = 1.5 + 0.9i.
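The simultaneous-iteration idea behind the FINS schemes can be prototyped with the classical Weierstrass (Durand-Kerner) method, whose inverse variant (IWDKI) serves as our benchmark throughout. The following Python sketch is illustrative only: it is not the fractional FINS ς 1 * -FINS ς 5 * iteration, and the test polynomial and starting values are assumptions chosen for demonstration.

```python
def durand_kerner(p, degree, tol=1e-12, max_iter=200):
    """Approximate all roots of a monic degree-n polynomial p simultaneously.

    Classical Weierstrass (Durand-Kerner) iteration: each estimate z_i is
    corrected by p(z_i) divided by the product of its distances to the
    other current estimates.
    """
    # Standard starting values: powers of a complex number that is
    # neither real nor a root of unity.
    z = [(0.4 + 0.9j) ** k for k in range(degree)]
    for _ in range(max_iter):
        new = []
        for i, zi in enumerate(z):
            denom = 1.0
            for j, zj in enumerate(z):
                if j != i:
                    denom *= zi - zj
            new.append(zi - p(zi) / denom)
        if max(abs(a - b) for a, b in zip(new, z)) < tol:
            return new
        z = new
    return z

# Demonstration polynomial (monic): z^3 - 1, whose roots are the cube roots of unity.
p = lambda z: z ** 3 - 1
roots = durand_kerner(p, 3)
```

Like the FINS schemes, this iteration updates all root estimates at once rather than deflating one root at a time, which avoids the accumulation of rounding errors discussed in the introduction.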
Table 12 displays the results of inverse simultaneous methods based on artificial neural networks. The ANNs were trained using 70% of the data set samples, with the remaining 30% used to assess their ability to generalize using a performance metric. For a polynomial of degree 8, the ANN required 9 input data points, two hidden layers, and 18 output data points. In order to represent all the roots of engineering application 2, Figures 5b-9b display the EPH, MSE, RP, TS, and fitness overlapping graphs of the target and outcomes of the LMA-ANN algorithm for the training, testing, and validation of each instance. Table 12 provides a summary of the performance of ANNFN ς 1 * -ANNFN ς 5 * in terms of the MSE, Per-E, Ex-time, and Error-it. For training, testing, and validation, the input and output results of the LMA-ANNs' fitness overlapping graphs are shown in Figure 5b. According to the histogram, the error is 0.51, demonstrating the consistency of the suggested solver. For engineering application 2, the MSE of the LMA-ANNs compares the expected outcomes to the target solution, as shown in Figure 6b. The MSE for example 2 is 1.1914 × 10 −6 at epoch 49. The expected and actual results of the LMA-ANNs are linearly related, as shown in Figure 7b. Figure 8b illustrates the efficiency, consistency, and reliability of the engineering application 2 simulation. For engineering application 2, the gradient value is 9.8016 × 10 −6 with a Mu parameter of 1.0 × 10 −5 . Figure 8b shows how the results for the minimal Mu and gradient converge closer as the network becomes more efficient at training and testing. The fitness curve and regression analysis results are displayed in Figure 9b. When R is near 1, the correlation between the predicted and target outputs is strong; the fit becomes unreliable when R is near 0. A reduced MSE is associated with a decreased response time. The ANNs for various values of the fractional parameter, namely 0.1, 0.3, 0.5, 0.7, 0.8, and 1.0, are shown in Table 12 as ANNFN ς 1 * through ANNFN ς 5 * .
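A training set of the kind described above can be assembled by sampling roots, expanding them into polynomial coefficients, and splitting the samples 70/30. The sketch below is a minimal illustration under simplifying assumptions (real roots, a monic polynomial, and arbitrary sample counts and ranges); it does not reproduce the paper's actual data set or the LMA-ANN itself.

```python
import numpy as np

rng = np.random.default_rng(0)
degree, n_samples = 8, 1000

# Sample real roots, then expand each set into monic-polynomial coefficients.
roots = rng.uniform(-2.0, 2.0, size=(n_samples, degree))
X = np.array([np.polynomial.polynomial.polyfromroots(r) for r in roots])
# X: inputs, the degree + 1 coefficients (lowest power first).
y = np.sort(roots, axis=1)  # targets: the roots, in a fixed (sorted) order

# 70% of the samples for training, the remaining 30% for testing.
n_train = int(0.7 * n_samples)
X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]
```

With degree 8 this yields 9 coefficients per sample, consistent with the 9 input data points quoted above for application 2.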
The numerical results of the simultaneous schemes with initial guess values chosen close to the exact roots are shown in Table 13. In terms of the residual error, CPU time, and maximum error (Max-Error), our newly developed strategies surpass the existing methods for the same number of iterations.

Example 3 (Hydrogen atom's Schrödinger wave equation [76]).
The Schrödinger wave equation is a fundamental equation in quantum mechanics that was formulated in 1925 by the Austrian physicist Erwin Schrödinger and specifies how the quantum state of a physical system changes over time. It is used to predict the behavior of particles, such as electrons in atomic and molecular systems. The equation is defined for a single particle of mass m moving in a central potential as follows: where r is the distance of the electron from the core and ϵ is the energy. In spherical coordinates, (47) has the following form: The general solution can be obtained by decomposing the final equation into angular and radial components. The angular component can be further reduced into two equations (see, e.g., [77]), one of which leads to the Legendre equation: In the case of azimuthal symmetry, m = 0, the solution of (49) can be expressed using Legendre polynomials. In our example, we computed the zeros of the members of the aforementioned family of polynomials (49) all at once. Specifically, we used The Caputo-type derivative of (50) is given as: In order to quantify the global convergence rate of the inverse parallel fractional schemes FINS ς 1 * -FINS ς 5 * , a random initial guess value v = [2.32, 5.12, 2.65, 4.56, 2.55, 2.36, 9.35, 5.12, 5.23, 4.12] is generated by the built-in MATLAB rand() function. With a random initial estimate, FINS ς 1 * converges to the exact roots after 9, 8, 7, 7, and 6 iterations and requires, respectively, 0.04564, 0.07144, 0.07514, 0.01247, and 0.045451 s to converge for different fractional parameters, namely 0.1, 0.3, 0.5, 0.8, and 1.0. The results in Table 14 clearly indicate that as the value of ς grows from 0.1 to 1.0, the rate of convergence of FINS ς 1 * increases. Unlike ANNs, our newly developed algorithm converges to the exact roots for a range of random initial guesses, confirming a global convergence behavior. Figure 12a-e depict the root trajectories for various initial estimate values. When the initial approximation is close to the exact roots, the rate of convergence increases significantly, as illustrated in Tables 15 and 16. The following initial guess value results in an increase in the convergence rates:
Table 17 displays the results of the inverse simultaneous methods based on artificial neural networks. The ANNs were trained using 70% of the data set samples, with the remaining 30% used to assess their ability to generalize using a performance metric. For a polynomial of degree 10, the ANN required 11 input data points, two hidden layers, and 22 output data points. In order to represent all the roots of engineering application 3, Figures 5c-9c display the EPH, MSE, RP, TS, and fitness overlapping graphs of the target and outcomes of the LMA-ANN algorithm for the training, testing, and validation. Table 17 provides a summary of the performance of ANNFN ς 1 * -ANNFN ς 5 * in terms of the MSE, Per-E, Ex-time, and Error-it.
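The zeros of the Legendre polynomial targeted in this example can be verified independently of the simultaneous schemes. The sketch below uses NumPy's Legendre module with degree 10, matching the degree-10 polynomial of application 3; the degree is the only assumption carried over from the example.

```python
import numpy as np
from numpy.polynomial import legendre

deg = 10
# P_10 expressed in the Legendre basis: a single unit coefficient.
coeffs = np.zeros(deg + 1)
coeffs[-1] = 1.0
zeros = legendre.legroots(coeffs)

# Cross-check: the zeros of P_n are also the Gauss-Legendre quadrature nodes.
nodes, _ = legendre.leggauss(deg)
```

All ten zeros are real, simple, symmetric about the origin, and lie in (−1, 1), which makes them a well-conditioned target for simultaneous root-finders.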
For training, testing, and validation, the input and output results of the LMA-ANNs' fitness overlapping graphs are shown in Figure 5c. According to the histogram, the error is 6.43 × 10 −3 , demonstrating the consistency of the suggested solver. For engineering application 3, the MSE of the LMA-ANNs compares the expected outcomes to the target solution, as shown in Figure 6c. The MSE for example 3 is 6.6649 × 10 −9 at epoch 112. The expected and actual results of the LMA-ANNs are linearly related, as shown in Figure 7c. Figure 8c illustrates the efficiency, consistency, and reliability of the engineering application 3 simulation. The gradient value is 9.9911 × 10 −6 with a Mu parameter of 1.0 × 10 −4 . Figure 8c shows how the results for the minimal Mu and gradient converge closer as the network becomes more efficient in training and testing. The fitness curve and regression analysis results are displayed in Figure 9c. When R is near 1, the correlation between the predicted and target outputs is strong; the fit becomes unreliable when R is near 0. A reduced MSE is associated with a decreased response time. The ANNs for various values of the fractional parameter, namely 0.1, 0.3, 0.5, 0.7, 0.8, and 1.0, are shown in Table 17 as ANNFN ς 1 * through ANNFN ς 5 * .
The numerical results of the simultaneous schemes with initial guess values chosen close to the exact roots are shown in Table 18. In terms of the residual error, CPU time, and maximum error (Max-Error), our newly developed strategies surpass the existing methods for the same number of iterations.

Example 4 (Mechanical Engineering Application).
Mechanical engineering, like most other sciences, makes extensive use of thermodynamics [78]. The temperature of dry air is related to its zero-pressure specific heat, denoted as C ρ , through the following polynomial: To calculate the temperature at which a heat capacity of, say, 1.2 kJ/(kg K) occurs, we substitute C ρ = 1.2 in the equation above and obtain the following polynomial: The Caputo-type derivative of (52) is given as: In order to quantify the global convergence rate of the inverse parallel fractional schemes FINS ς 1 * -FINS ς 5 * , a random initial guess value v = [0.24, 0.124, 1.23, 1.45, 2.35] is generated by the built-in MATLAB rand() function. With a random initial estimate, FINS ς 1 * converges to the exact roots after 9, 8, 7, 5, and 4 iterations and requires, respectively, 0.04164, 0.07144, 0.02514, 0.012017, and 0.015251 s to converge for different fractional parameters, namely 0.1, 0.3, 0.5, 0.8, and 1.0. The results in Table 19 clearly indicate that as the value of ς grows from 0.1 to 1.0, the rate of convergence of FINS ς 1 * increases. Unlike ANNs, our newly developed algorithm converges to the exact roots for a range of random initial guesses, confirming a global convergence behavior. Figure 13a-e depict the root trajectories for various initial estimate values. When the initial approximation is close to the exact roots, the rate of convergence increases significantly, as illustrated in Tables 20 and 21. The following initial estimates of (40) result in an increase in the convergence rates: υ 4 (0) = 2536 − 910i.
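Each example relies on the Caputo-type derivative of a polynomial. For a power term x^k and order 0 < ς ≤ 1, the Caputo derivative has the closed form D^ς x^k = Γ(k + 1)/Γ(k + 1 − ς) x^(k−ς), while constant terms are annihilated. The sketch below applies this formula term by term; the sample polynomial and evaluation point are illustrative assumptions, not Equation (52) itself.

```python
import math

def caputo_poly(coeffs, sigma, x):
    """Caputo derivative of order 0 < sigma <= 1 of sum_k coeffs[k] * x**k at x > 0.

    Uses the closed form D^sigma x^k = Gamma(k+1)/Gamma(k+1-sigma) * x**(k-sigma);
    the k = 0 (constant) term is annihilated by the Caputo derivative.
    """
    return sum(
        a * math.gamma(k + 1) / math.gamma(k + 1 - sigma) * x ** (k - sigma)
        for k, a in enumerate(coeffs)
        if k >= 1
    )

# Example: D^0.5 of x**2 + 3*x at x = 1 (coefficients listed lowest power first).
value = caputo_poly([0.0, 3.0, 1.0], 0.5, 1.0)
```

Setting ς = 1 recovers the classical first derivative, so the fractional schemes reduce to their integer-order counterparts at ς = 1.0.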
Table 22 displays the results of the inverse simultaneous methods based on artificial neural networks. The ANNs were trained using 70% of the data set samples, with the remaining 30% used to assess their ability to generalize using a performance metric. For a polynomial of degree 4, the ANN required 5 input data points, two hidden layers, and 10 output data points. In order to represent all the roots of engineering application 4, Figures 5d-9d display the EPH, MSE, RP, TS, and fitness overlapping graphs of the target and outcomes of the LMA-ANN algorithm for the training, testing, and validation. Table 22 provides a summary of the performance of ANNFN ς 1 * -ANNFN ς 5 * in terms of the MSE, Per-E, Ex-time, and Error-it.
For training, testing, and validation, the input and output results of the LMA-ANNs' fitness overlapping graphs are shown in Figure 5d. According to the histogram, the error is 1.08 × 10 −6 , demonstrating the consistency of the suggested solver. For engineering application 4, the MSE of the LMA-ANNs compares the expected outcomes to the target solution, as shown in Figure 6d. The MSE for example 4 is 6.0469 × 10 −9 at epoch 49. The expected and actual results of the LMA-ANNs are linearly related, as shown in Figure 7d. Figure 8d illustrates the efficiency, consistency, and reliability of the engineering application 4 simulation. The gradient value is 3.0416 × 10 −4 with a Mu parameter of 1.0 × 10 −6 . Figure 8d shows how the results for the minimal Mu and gradient converge closer as the network becomes more efficient in training and testing. The fitness curve and regression analysis results are displayed in Figure 9d. When R is near 1, the correlation between the predicted and target outputs is strong; the fit becomes unreliable when R is near 0. A reduced MSE is associated with a decreased response time.
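The MSE and regression parameter R quoted for each application are the standard definitions, computed between the predicted and target roots. A minimal sketch of both metrics (the sample values in the test are illustrative, not the paper's data):

```python
import math

def mse(pred, target):
    """Mean squared error between predicted and target values."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def regression_r(pred, target):
    """Pearson correlation R between predictions and targets.

    R = 1 indicates a perfect linear relationship; values near 0 indicate
    that the fit is unreliable.
    """
    n = len(pred)
    mp, mt = sum(pred) / n, sum(target) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, target))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in target))
    return cov / (sp * st)
```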
The ANNs for various values of the fractional parameter, namely 0.1, 0.3, 0.5, 0.7, 0.8, and 1.0, are shown in Table 22 as ANNFN ς 1 * through ANNFN ς 5 * . Table 23 presents the numerical results of the simultaneous schemes when the initial guess values approach the exact roots. For the same number of iterations, our newly devised strategies outperform the existing methods in terms of the residual error, CPU time, and maximum error (Max-Error).
The root trajectories of the nonlinear equations arising from engineering applications 1-4 clearly demonstrate that our FINS ς 1 * -FINS ς 5 * schemes converge to the exact roots starting from random initial guesses, and the rate of convergence increases as the value of ς increases from 0.1 to 1.0.

Appendix B
Using Algorithm 2 in CAS-MATLAB@2012, the ANN is trained on the heads of the input and output data sets to locate the real and imaginary roots of the polynomial equations in engineering applications 1 to 4.

Figure 1 .
Figure 1. A schematic representation of the process of feeding the coefficients of a polynomial into an ANN, which then yields an approximation for each root of (1).

Figure 2 .
Figure 2. (a-e) Computational efficiency of FINS ς1 * -FINS ς5 * in comparison to the IWDKI method. (a) Computational efficiency of FINS ς1 * in comparison to the IWDKI method. (b) Computational efficiency of FINS ς2 * in comparison to the IWDKI method. (c) Computational efficiency of FINS ς3 * in comparison to the IWDKI method. (d) Computational efficiency of FINS ς4 * in comparison to the IWDKI method. (e) Computational efficiency of FINS ς5 * in comparison to the IWDKI method.

Figure 4 .
Figure 4. Model of a quarter car.

Figure 7 .
Figure 7. (a-d) Regression plots (RPs) for the LMA-ANN methods utilized to approximate the roots of the polynomial equations. Regression diagrams show the linear relationship between the expected and actual outcomes. The visualizations include all of the training, validation, and test data. The data exhibit the highest degree of correlation with a curve or line when the regression value is R = 1 [74].

Table 2 .
Results of percentage convergence and divergence in dynamical analysis.

Table 3 .
Results of percentage convergence and divergence in dynamical analysis.

Table 6 .
Simultaneous approximation of all polynomial equation roots.

Table 7 .
Numerical results using an artificial neural network on application 1.
Figure 5. (a-d) Error histograms (EPHs) produced by the LMA-ANNs used to approximate the roots of the polynomial equations in engineering applications 1-4. According to the histograms, the errors are approximately 6.2 × 10 −2 , 0.51, 6.43 × 10 −3 , and 1.08 × 10 −6 , respectively. These graphs demonstrate the consistency of the proposed solver.

Table 8 .
A comparison of the simultaneous schemes' numerical results utilizing initial guess values that are close to the exact roots.
Figure 9. (a-d) Fitness curves (FCs) for the LMA-ANN methods utilized to approximate the roots of Equation (1). The way the fitness curves overlap demonstrates the accuracy and stability of the methods.

Table 11 .
Approximation of all polynomial equation roots.

Table 12 .
Numerical results using artificial neural networks.

Table 13 .
A comparison of simultaneous schemes' numerical results utilizing initial guess values that are close to the exact roots.

Table 18 .
A comparison of the simultaneous schemes' numerical results utilizing initial guess values that are close to the exact roots.

Table 20 .
Approximation of all polynomial equation roots.

Table 21 .
Approximation of all polynomial equation roots.

Table A1 .
The ANN is trained using the head of the input data set to locate the real and imaginary roots of polynomial equations in engineering application 1.

Table A2 .
The ANN is trained using the head of the input data set to locate the real and imaginary roots of polynomial equations in engineering application 2.

Table A3 .
The ANN is trained using the head of the input data set to locate the real and imaginary roots of polynomial equations in engineering application 3.

Table A4 .
The ANN is trained using the head of the input data set to locate the real and imaginary roots of polynomial equations in engineering application 4.

Table A5 .
The ANN is trained using the head of the output data set to locate the real and imaginary roots of polynomial equations in engineering application 1.

Table A6 .
The ANN is trained using the head of the output data set to locate the real and imaginary roots of polynomial equations in engineering application 4.