Article

Q-Analogues of Parallel Numerical Scheme Based on Neural Networks and Their Engineering Applications

by Mudassir Shams 1,2 and Bruno Carpentieri 1,*
1 Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bolzano, Italy
2 Department of Mathematics and Statistics, Riphah International University I-14, Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(4), 1540; https://doi.org/10.3390/app14041540
Submission received: 29 December 2023 / Revised: 3 February 2024 / Accepted: 10 February 2024 / Published: 14 February 2024

Abstract:
Quantum calculus can provide new insights into the nonlinear behaviour of functions and equations, addressing problems that may be difficult to tackle by classical calculus due to high nonlinearity. Iterative methods for solving nonlinear equations can benefit greatly from the mathematical theory and tools provided by quantum calculus, e.g., using the concept of q-derivatives, which extends beyond classical derivatives. In this paper, we develop parallel numerical root-finding algorithms that approximate all distinct roots of nonlinear equations by utilizing q-analogies of the function derivative. Furthermore, we utilize neural networks to accelerate the convergence rate by providing accurate initial guesses for our parallel schemes. The global convergence of the q-parallel numerical techniques is demonstrated using random initial approximations on selected biomedical applications, and the efficiency, stability, and consistency of the proposed hybrid numerical schemes are analyzed.

1. Introduction

Nonlinear equations are widely used in computational science and engineering modeling because of their ability to accurately represent the complexities of real-world phenomena, resulting in more precise predictions, optimizations, and insights into system behaviors in a wide range of scientific and engineering disciplines [1,2,3,4,5], including fluid dynamics [6], quantum mechanics [7], electromagnetism, and computational biology processes [8], to name only a few. They are especially important in chaos theory and complexity, where they can model systems with great sensitivity to initial conditions, e.g., in meteorology, population dynamics, and the financial sector [9,10]. According to Abel's impossibility theorem [11], there is no algebraic solution to polynomials of degree greater than four expressed in terms of a finite number of additions, subtractions, multiplications, divisions, and root extractions; thus, we need to turn to numerical iterative methods such as Newton's method and fixed-point iteration [12,13,14] to approximate the roots of a general Equation (1) one at a time. Single root-finding methods are highly sensitive to the initial guess values used, though, and their local convergence behavior deteriorates, and may diverge, as the derivative f′(x) approaches zero. As a result, in this study we investigate parallel numerical algorithms that exhibit global convergence behavior and are more efficient and more stable than single root-finding algorithms [15]. Parallel numerical schemes use simple arithmetic operations to simultaneously and independently update the estimates for each root in each iteration, making them well suited for efficient parallel implementation. These techniques are also well known for their robust convergence properties [16]. They commonly converge to the roots of nonlinear equations even when starting with random initial guesses, which may be very advantageous in a parallel setting [17]. For these reasons, a significant amount of work is being devoted to the development of parallel simultaneous root-finding schemes for solving nonlinear equations. These schemes employ a variety of techniques, each with its own convergence order; see, e.g., Weierstrass [18], Kerner [19], Dochev [20], Nedzhibov [21], Marcheva et al. [22], Shams et al. [23], Alefeld et al. [24], Nourein [25], and the references cited therein.
The design of iterative methods for solving nonlinear equations can benefit greatly from recent developments in quantum calculus theory, a prominent area of mathematics that is constantly evolving [26,27,28,29], to tackle problems with high nonlinearity that may be difficult to address by conventional calculus. The concept of q-derivatives, which is introduced in quantum calculus and extends beyond classical derivatives, can provide new insights into the nonlinear behavior of functions and equations.
One way neural networks can assist root finding is through regression, i.e., by learning an approximation of the mapping from a problem's data to its solutions.
The main goal of this research is to develop parallel numerical schemes that approximate all distinct roots of the nonlinear Equation (1) by utilizing q-analogues of the function derivative. Neural networks are utilized to accelerate the convergence rate of the new class of parallel numerical schemes introduced in this work. A neural network can be trained to approximate the polynomial's function before numerical approaches are used to determine its roots. However, it is crucial to note that, while neural networks excel at approximating complex functions, their ability to precisely locate roots varies with the nature of the problem and the network's architecture. The hybrid neural network-based q-version of the parallel numerical schemes uses the neural network's outputs as initial guesses and refines them to the desired accuracy, outperforming classical parallel schemes and speeding up convergence. The numerical scheme that is derived from parallel neural networks (PNNS) and implemented in the q-calculus framework demonstrates global convergence, as illustrated by our numerical experiments.
In this section, we review some fundamental results of q-calculus theory that are utilized in the design of our q-version of parallel numerical schemes for locating all distinct roots of the nonlinear equation
$$f(x) = 0. \tag{1}$$
Given $q \in (0, 1)$, the q-integer is defined as
$$[m;q] = 1 + q + q^{2} + \cdots + q^{m-1} = \frac{1 - q^{m}}{1 - q}, \quad \text{for } m = 1, 2, 3, \ldots,$$
whereas
$$[m;q] = m, \quad \text{for } q = 1.$$
The q-binomial is defined, for $0 < j < m$, as
$$\binom{m}{j}_{q} = \frac{[m;q]!}{[j;q]!\,[m-j;q]!}.$$
Finally, the q-factorial $[m;q]!$ is defined as
$$[m;q]! = [m;q]\,[m-1;q]\cdots[3;q]\,[2;q]\,[1;q],$$
and $[0;q]! = 1$.
Definition 1
([30,31]). The q-derivative of $f(x)$ is defined as
$$\partial_q f(x) = \frac{d}{d_q x} f(x) = \frac{f(qx) - f(x)}{qx - x}, \quad q \neq 1.$$
For $q \rightarrow 1$, we have
$$\partial f(x) = f'(x) = \frac{d}{dx} f(x),$$
which is the classical derivative. The q-derivative $\partial_q f(x)$ is known as the Jackson derivative [32]. We define higher q-derivatives as
$$\partial_q^{0} f = f, \quad \partial_q^{n} f = \partial_q\!\left(\partial_q^{\,n-1} f\right), \quad \text{for } n = 1, 2, 3, \ldots$$
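A minimal numerical sketch of the Jackson derivative may help fix ideas; the sample function, the evaluation point, and the value of q below are illustrative choices, not taken from the paper:

% Jackson q-derivative of f at x (Definition 1); valid for q ~= 1 and x ~= 0.
qderiv = @(f, x, q) (f(q*x) - f(x)) ./ (q*x - x);
f  = @(x) x.^3 - 2*x;            % example function (assumption)
dq = qderiv(f, 1.5, 0.9);        % q-derivative at x = 1.5 with q = 0.9
dc = 3*1.5^2 - 2;                % classical derivative, recovered as q -> 1
fprintf('q-derivative: %.6f, classical: %.6f\n', dq, dc);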
Definition 2.
The q-derivatives of the product and quotient of functions $f(x)$ and $g(x)$ are defined, respectively, as [33]
$$\partial_q\!\left[f(x)g(x)\right] = g(x)\,\partial_q f(x) + f(qx)\,\partial_q g(x) = g(qx)\,\partial_q f(x) + f(x)\,\partial_q g(x),$$
and
$$\partial_q\!\left[\frac{f(x)}{g(x)}\right] = \frac{g(x)\,\partial_q f(x) - f(x)\,\partial_q g(x)}{g(qx)\,g(x)}, \quad g(qx)\,g(x) \neq 0.$$
Definition 3.
The q-Taylor formula [34] for $f(x)$ is given as
$$f(x) = f(C) + \frac{(x - C)^{1}}{[1;q]}\,\partial_q f(C) + \frac{(x - C)^{2}}{[2;q]!}\,\partial_q^{2} f(C) + \cdots + \frac{(x - C)^{n}}{[n;q]!}\,\partial_q^{n} f(C) + R_n, \quad x \in (a, b),$$
that is,
$$f(x) = \sum_{j=0}^{n} \frac{(x - C)^{j}}{[j;q]!}\,\partial_q^{j} f(C) + R_n,$$
where $(x - C)^{0} = 1$, $(x - C)^{i} = \prod_{k=0}^{i-1}\left(x - Cq^{k}\right)$ for $i \in \mathbb{N}$, $0 < q < 1$, $R_n = \frac{(x - C)^{n+1}}{[n+1;q]!}\,\partial_q^{n+1} f(\zeta)$, and $\partial_q, \partial_q^{2}, \ldots$ are the q-derivatives defined above.
The remainder of the article is structured as follows. In Section 2, we present and analyze a parallel numerical scheme for solving (1) formulated in the framework of quantum calculus. Section 3 discusses the neural network implementation of the proposed q-version of the parallel solver. We address the numerical solution of two nonlinear engineering problems in Section 4. Finally, the paper concludes in Section 5 with some remarks arising from this study.

2. Construction of Parallel Numerical Scheme Using Q-Calculus

In this section, we propose q-analogues of a novel class of single root-finding methods for (1). Then, we generalize them into a parallel numerical scheme that finds all distinct roots of (1). The starting point of our development is the following numerical scheme:
$$v^{[\sigma]} = x^{[\sigma]} - \frac{f(x^{[\sigma]})}{\partial_q f(x^{[\sigma]})}\left[1 - \frac{\alpha_1 f(x^{[\sigma]})}{1 + \alpha_2 f(x^{[\sigma]})}\right]^{-1}, \quad z^{[\sigma]} = v^{[\sigma]} - \frac{f(v^{[\sigma]})}{\partial_q f(x^{[\sigma]})}, \quad \sigma = 0, 1, \ldots, \tag{13}$$
where $\partial_q f(x^{[\sigma]}) = \frac{f(qx^{[\sigma]}) - f(x^{[\sigma]})}{qx^{[\sigma]} - x^{[\sigma]}}$ and $q, \alpha_1, \alpha_2 \in \mathbb{R}$.
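Before generalizing (13) to the parallel setting, a minimal single-root sketch may clarify how the two steps interact; the test equation, the starting point, and the values of q, α1, α2 are illustrative assumptions, not the paper's experimental settings:

% Hedged sketch of the two-step scheme (13) for one root; all constants
% below are illustrative choices.
f  = @(x) x^3 - 2*x - 5;                  % example equation (assumption)
q  = 0.99; a1 = 0.1; a2 = 0.1;            % illustrative parameters
x  = 2.5;                                 % initial guess
for s = 1:100
    Dq = (f(q*x) - f(x)) / (q*x - x);     % Jackson derivative at x^[sigma]
    v  = x - (f(x)/Dq) / (1 - a1*f(x)/(1 + a2*f(x)));   % first step of (13)
    xn = v - f(v)/Dq;                     % second step reuses Dq at x^[sigma]
    if abs(xn - x) < 1e-12, x = xn; break; end
    x = xn;
end
fprintf('approximate root: %.12f after %d iterations\n', x, s);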

2.1. Convergence Analysis

The order of convergence of the numerical scheme (13) is established by the following theorem.
Theorem 1.
Let $\zeta \in I$ be a simple root of a sufficiently differentiable function $f : I \subseteq \mathbb{R} \rightarrow \mathbb{R}$ in an open interval $I$. If $x^{[0]}$ is sufficiently close to ζ, then (13) has $[3;q]$-order convergence (in terms of quantum calculus) with the following error equation:
$$\vartheta^{[\sigma+1]} = \left(\alpha_1\,\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)} + \left(\frac{\partial_q^2 f(\zeta)}{2\,\partial_q f(\zeta)}\right)^{2}\right)\left(\vartheta^{[\sigma]}\right)^{[3;q]} + O\!\left(\left(\vartheta^{[\sigma]}\right)^{[4;q]}\right). \tag{14}$$
The proof of Theorem 1 can be found in Appendix A.

2.2. q-Analogues of the Parallel Numerical Scheme of Convergence Order $[\epsilon^{3}; q]$

Next, we introduce and analyze a new family of single-step and two-step q-analogues of parallel numerical schemes for computing all distinct roots of (1), based on the numerical scheme (13). We begin with the well-known Weierstrass method [35], which approximates all roots of (1) as
$$x_i^{[\sigma+1]} = x_i^{[\sigma]} - \vartheta_i\!\left(x_i^{[\sigma]}\right), \tag{15}$$
where $\vartheta_i\!\left(x_i^{[\sigma]}\right) = \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j \neq i}^{n}\left(x_i^{[\sigma]} - x_j^{[\sigma]}\right)}$ is the so-called Weierstrass correction. The order of convergence of the numerical scheme (15) is 2. By applying the q-analogue of the single root-finding method (13) as a correction, specifically substituting $v_j^{[\sigma]}$ for $x_j^{[\sigma]}$ in Equation (15), we obtain the following new family of q-analogues of parallel numerical algorithms, denoted as Q-MMσ1, designed to approximate all distinct roots of Equation (1):
$$x_i^{[\sigma+1]} = x_i^{[\sigma]} - \vartheta_i^{*}\!\left(x_i^{[\sigma]}\right), \tag{16}$$
where $\vartheta_i^{*}\!\left(x_i^{[\sigma]}\right) = \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j \neq i}^{n}\left(x_i^{[\sigma]} - v_j^{[\sigma]}\right)}$ and $v_j^{[\sigma]} = x_j^{[\sigma]} - \frac{f\left(x_j^{[\sigma]}\right)}{\partial_q f\left(x_j^{[\sigma]}\right)}\left[1 - \frac{\alpha_1 f\left(x_j^{[\sigma]}\right)}{1 + \alpha_2 f\left(x_j^{[\sigma]}\right)}\right]^{-1}$. Hence, method Q-MMσ1 can be written as:
$$x_i^{[\sigma+1]} = x_i^{[\sigma]} - \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j \neq i}^{n}\left(x_i^{[\sigma]} - v_j^{[\sigma]}\right)}, \tag{17}$$
where $\alpha_1, \alpha_2 \in \mathbb{R}$.
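A hedged sketch of one possible implementation of Q-MMσ1 (17) follows; the cubic test polynomial, the complex starting vector, and the values of q, α1, α2 are illustrative assumptions rather than the paper's configuration:

% One possible realization of the Q-MM1 sweep (17) on a monic cubic.
f  = @(x) x.^3 - 6*x.^2 + 11*x - 6;        % roots 1, 2, 3 (assumption)
q  = 0.99; a1 = 0.1; a2 = 0.1;
x  = [0.5+0.3i, 1.7-0.2i, 3.4+0.1i];       % distinct initial guesses
n  = numel(x);
for s = 1:100
    Dq = (f(q*x) - f(x)) ./ (q*x - x);     % Jackson derivative at each x_j
    v  = x - (f(x)./Dq) ./ (1 - a1*f(x)./(1 + a2*f(x)));  % corrections v_j
    xn = x;
    for i = 1:n
        idx   = [1:i-1, i+1:n];
        xn(i) = x(i) - f(x(i)) / prod(x(i) - v(idx));     % update (17)
    end
    if max(abs(xn - x)) < 1e-12, x = xn; break; end
    x = xn;
end
disp(x.')   % simultaneous approximations of all distinct roots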

2.3. Convergence Analysis

In the following theorem we establish the convergence order of Q - M M σ 1 .
Theorem 2.
Let $\zeta_1, \ldots, \zeta_n$ be simple zeros of the nonlinear Equation (1). For distinct initial values $x_1^{[0]}, \ldots, x_n^{[0]}$ that are sufficiently close to the exact roots, Q-MMσ1 has convergence order $[\epsilon^{3}; q]$.
Proof. 
Let $\epsilon_i = x_i^{[\sigma]} - \zeta_i$ and $\epsilon_i' = x_i^{[\sigma+1]} - \zeta_i$ be the errors in $x_i^{[\sigma]}$ and $x_i^{[\sigma+1]}$, respectively. From Q-MMσ1, we have:
$$x_i^{[\sigma+1]} - \zeta_i = x_i^{[\sigma]} - \zeta_i - \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(x_i^{[\sigma]} - v_j^{[\sigma]}\right)},$$
$$\epsilon_i' = \epsilon_i - \vartheta_i^{*}\!\left(x_i^{[\sigma]}\right) = \epsilon_i\left(1 - \frac{\vartheta_i^{*}\left(x_i^{[\sigma]}\right)}{\epsilon_i}\right) = \epsilon_i\left(1 - Q_i^{[*]}\right),$$
where
$$Q_i^{[*]} = \frac{\vartheta_i^{*}\left(x_i^{[\sigma]}\right)}{\epsilon_i} = \prod_{j=1, j\neq i}^{n}\frac{x_i^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - v_j^{[\sigma]}}$$
and $v_j^{[\sigma]} = x_j^{[\sigma]} - \frac{f\left(x_j^{[\sigma]}\right)}{\partial_q f\left(x_j^{[\sigma]}\right)}\left[1 - \frac{\alpha_1 f\left(x_j^{[\sigma]}\right)}{1 + \alpha_2 f\left(x_j^{[\sigma]}\right)}\right]^{-1}$. Since $v_j^{[\sigma]} - \zeta_j = O\!\left(\epsilon_j^{[2;q]}\right)$ by the first step of (13), we can write
$$\frac{x_i^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - v_j^{[\sigma]}} = 1 + \frac{v_j^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - v_j^{[\sigma]}} = 1 + O\!\left(\epsilon_j^{[2;q]}\right),$$
and, consequently,
$$Q_i^{[*]} = \prod_{j=1, j\neq i}^{n}\left(1 + \frac{v_j^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - v_j^{[\sigma]}}\right) = \left(1 + O(\epsilon_j^{2})\right)^{n-1} = 1 + (n-1)\,O(\epsilon_j^{2}) = 1 + O(\epsilon_j^{2}),$$
so that
$$Q_i^{[*]} - 1 = O\!\left(\epsilon_j^{[2;q]}\right).$$
We conclude that
$$\epsilon_i' = \epsilon_i\,O\!\left(\epsilon_j^{[2;q]}\right).$$
If $\epsilon_i$ and $\epsilon_j$ have the same order, then we have
$$\epsilon_i' = O\!\left(\epsilon^{[3;q]}\right).$$
This proves the theorem.    □

2.4. q-Analogues of the Parallel Numerical Scheme of Convergence Order $[\epsilon^{8}; q]$

At this stage, we consider the well-known two-step Weierstrass method [36] for approximating all roots of Equation (1):
$$x_i^{[\sigma+1]} = y_i^{[\sigma]} - \vartheta_i\!\left(y_i^{[\sigma]}\right) = y_i^{[\sigma]} - \frac{f\left(y_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(y_i^{[\sigma]} - y_j^{[\sigma]}\right)}, \tag{26}$$
where $y_i^{[\sigma]} = x_i^{[\sigma]} - \vartheta_i\!\left(x_i^{[\sigma]}\right) = x_i^{[\sigma]} - \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(x_i^{[\sigma]} - x_j^{[\sigma]}\right)}$. This numerical scheme has fourth-order convergence. The following novel q-analogue of the double-Weierstrass method (abbreviated as Q-MMσ2) is obtained by substituting $z_j^{[\sigma]}$ for $x_j^{[\sigma]}$ in the first step:
$$x_i^{[\sigma+1]} = y_i^{[\sigma]} - \vartheta_i\!\left(y_i^{[\sigma]}\right) = y_i^{[\sigma]} - \frac{f\left(y_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(y_i^{[\sigma]} - y_j^{[\sigma]}\right)}, \tag{27}$$
where $y_i^{[\sigma]} = x_i^{[\sigma]} - \vartheta_i^{**}\!\left(x_i^{[\sigma]}\right) = x_i^{[\sigma]} - \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(x_i^{[\sigma]} - z_j^{[\sigma]}\right)}$. Method (27) can also be written in the form:
$$y_i^{[\sigma]} = x_i^{[\sigma]} - \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(x_i^{[\sigma]} - v_j^{[\sigma]} + \frac{f\left(v_j^{[\sigma]}\right)}{\partial_q f\left(v_j^{[\sigma]}\right)}\right)}, \quad x_i^{[\sigma+1]} = y_i^{[\sigma]} - \frac{f\left(y_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(y_i^{[\sigma]} - y_j^{[\sigma]}\right)}, \tag{28}$$
where $z_j^{[\sigma]} = v_j^{[\sigma]} - \frac{f\left(v_j^{[\sigma]}\right)}{\partial_q f\left(v_j^{[\sigma]}\right)}$, $v_j^{[\sigma]} = x_j^{[\sigma]} - \frac{f\left(x_j^{[\sigma]}\right)}{\partial_q f\left(x_j^{[\sigma]}\right)}\left[1 - \frac{\alpha_1 f\left(x_j^{[\sigma]}\right)}{1 + \alpha_2 f\left(x_j^{[\sigma]}\right)}\right]^{-1}$, and $\alpha_1, \alpha_2 \in \mathbb{R}$.
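The two-step structure of Q-MMσ2 (28) can be sketched in the same hedged style; again, the polynomial, the starting vector, and q, α1, α2 are illustrative assumptions:

% One possible realization of a Q-MM2 sweep (28); illustrative settings only.
f  = @(x) x.^3 - 6*x.^2 + 11*x - 6;        % roots 1, 2, 3 (assumption)
q  = 0.99; a1 = 0.1; a2 = 0.1;
x  = [0.5+0.3i, 1.7-0.2i, 3.4+0.1i];       % distinct initial guesses
n  = numel(x);
Dq = @(u) (f(q*u) - f(u)) ./ (q*u - u);    % Jackson derivative
for s = 1:100
    v = x - (f(x)./Dq(x)) ./ (1 - a1*f(x)./(1 + a2*f(x)));
    z = v - f(v)./Dq(v);                   % corrected points z_j
    y = x;
    for i = 1:n
        idx  = [1:i-1, i+1:n];
        y(i) = x(i) - f(x(i)) / prod(x(i) - z(idx));   % first step of (28)
    end
    xn = y;
    for i = 1:n
        idx   = [1:i-1, i+1:n];
        xn(i) = y(i) - f(y(i)) / prod(y(i) - y(idx));  % Weierstrass step at y
    end
    if max(abs(xn - x)) < 1e-12, x = xn; break; end
    x = xn;
end
disp(x.')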

2.5. Convergence Analysis

In the following theorem, we prove the convergence order of Q - M M σ 2 .
Theorem 3.
Let $\zeta_1, \ldots, \zeta_n$ be simple zeros of the nonlinear Equation (1). For distinct initial values $x_1^{[0]}, \ldots, x_n^{[0]}$ that are sufficiently close to the exact roots, the Q-MMσ2 method has convergence order $[\epsilon^{8}; q]$.
Proof. 
Let $\epsilon_i = x_i^{[\sigma]} - \zeta_i$, $\hat{\epsilon}_i = y_i^{[\sigma]} - \zeta_i$, and $\epsilon_i' = x_i^{[\sigma+1]} - \zeta_i$ be the errors in $x_i^{[\sigma]}$, $y_i^{[\sigma]}$, and $x_i^{[\sigma+1]}$, respectively. From the first step of Q-MMσ2, we have:
$$y_i^{[\sigma]} - \zeta_i = x_i^{[\sigma]} - \zeta_i - \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(x_i^{[\sigma]} - z_j^{[\sigma]}\right)},$$
$$\hat{\epsilon}_i = \epsilon_i - \vartheta_i^{**}\!\left(x_i^{[\sigma]}\right) = \epsilon_i\left(1 - \frac{\vartheta_i^{**}\left(x_i^{[\sigma]}\right)}{\epsilon_i}\right) = \epsilon_i\left(1 - Q_i^{[**]}\right),$$
where
$$Q_i^{[**]} = \frac{\vartheta_i^{**}\left(x_i^{[\sigma]}\right)}{\epsilon_i} = \prod_{j=1, j\neq i}^{n}\frac{x_i^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - z_j^{[\sigma]}},$$
and $z_j^{[\sigma]} = v_j^{[\sigma]} - \frac{f\left(v_j^{[\sigma]}\right)}{\partial_q f\left(v_j^{[\sigma]}\right)}$, $v_j^{[\sigma]} = x_j^{[\sigma]} - \frac{f\left(x_j^{[\sigma]}\right)}{\partial_q f\left(x_j^{[\sigma]}\right)}\left[1 - \frac{\alpha_1 f\left(x_j^{[\sigma]}\right)}{1 + \alpha_2 f\left(x_j^{[\sigma]}\right)}\right]^{-1}$. Since $z_j^{[\sigma]} - \zeta_j = O\!\left(\epsilon_j^{[3;q]}\right)$ by Theorem 1, we can write
$$\frac{x_i^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - z_j^{[\sigma]}} = 1 + \frac{z_j^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - z_j^{[\sigma]}} = 1 + O\!\left(\epsilon_j^{[3;q]}\right),$$
and, consequently,
$$Q_i^{[**]} = \prod_{j=1, j\neq i}^{n}\left(1 + \frac{z_j^{[\sigma]} - \zeta_j}{x_i^{[\sigma]} - z_j^{[\sigma]}}\right) = \left(1 + O(\epsilon_j^{3})\right)^{n-1} = 1 + (n-1)\,O(\epsilon_j^{3}) = 1 + O(\epsilon_j^{3}),$$
so that
$$Q_i^{[**]} - 1 = O\!\left(\epsilon_j^{[3;q]}\right).$$
If $\epsilon_i$ and $\epsilon_j$ have the same order, then we have
$$\hat{\epsilon}_i = \epsilon_i\,O\!\left(\epsilon_j^{[3;q]}\right) = O\!\left(\epsilon^{[4;q]}\right).$$
Using the second step of Q-MMσ2, we get:
$$x_i^{[\sigma+1]} - \zeta_i = y_i^{[\sigma]} - \zeta_i - \frac{f\left(y_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(y_i^{[\sigma]} - y_j^{[\sigma]}\right)},$$
$$\epsilon_i' = \hat{\epsilon}_i - \vartheta_i\!\left(y_i^{[\sigma]}\right) = \hat{\epsilon}_i\left(1 - \frac{\vartheta_i\left(y_i^{[\sigma]}\right)}{\hat{\epsilon}_i}\right) = \hat{\epsilon}_i\left(1 - E_i^{[*]}\right),$$
where
$$E_i^{[*]} = \frac{\vartheta_i\left(y_i^{[\sigma]}\right)}{\hat{\epsilon}_i} = \prod_{j=1, j\neq i}^{n}\frac{y_i^{[\sigma]} - \zeta_j}{y_i^{[\sigma]} - y_j^{[\sigma]}}.$$
Since
$$\frac{y_i^{[\sigma]} - \zeta_j}{y_i^{[\sigma]} - y_j^{[\sigma]}} = 1 + \frac{y_j^{[\sigma]} - \zeta_j}{y_i^{[\sigma]} - y_j^{[\sigma]}} = 1 + O\!\left(\hat{\epsilon}_j; q\right),$$
we obtain
$$E_i^{[*]} = \left(1 + O(\hat{\epsilon}_j)\right)^{n-1} = 1 + (n-1)\,O(\hat{\epsilon}_j) = 1 + O(\hat{\epsilon}_j),$$
$$E_i^{[*]} - 1 = O\!\left(\hat{\epsilon}_j; q\right),$$
$$\epsilon_i' = \hat{\epsilon}_i\,O\!\left(\hat{\epsilon}_j; q\right).$$
If $\hat{\epsilon}_i$ and $\hat{\epsilon}_j$ have the same order, then we have
$$\epsilon_i' = O\!\left(\hat{\epsilon}^{2}; q\right) = O\!\left(\left(\epsilon^{[4;q]}\right)^{2}\right) = O\!\left(\epsilon^{[8;q]}\right).$$
This proves that the convergence order of Q-MMσ2 is $O\!\left(\epsilon^{[8;q]}\right)$.    □

3. Neural Network-Based Q-Analogies of the Numerical Scheme

Neural networks have proven to be extremely versatile computational techniques for the solution of numerous complex engineering problems, due to their ability to represent complex relationships and to learn from data [37,38,39]. These features make them effective modelling tools for a wide range of applications, including the development of iterative root-finding procedures. See, for example, Daws et al. [40], Mourrain et al. [41], Huang et al. [42], Freitas et al. [43], Shams et al. [44], and the references cited therein. Utilizing neural network techniques to improve the efficiency and accuracy of the q-analogues of our parallel numerical root-finding schemes involves a two-step approach. First, a neural network is trained to approximate the roots of a polynomial using the polynomial's coefficients. The network learns the complex relationship between the input coefficients and the corresponding roots during this phase. Once trained, the neural network produces preliminary estimates for the polynomial roots. These estimates are then used as starting values for the parallel scheme and refined iteratively until convergence. This hybrid technique has the potential to accelerate the convergence process by leveraging the neural network's ability to capture the intricate mapping between the polynomial's coefficients and roots, resulting in improved initial estimates [45]. However, the success of this strategy depends on the complexity of the polynomial and the accuracy of the neural network's approximation.
To estimate all roots of nonlinear equations using the neural network-based q-analogues of our parallel scheme, the following steps were required (a minimal training sketch in MATLAB follows this list):
  • Data representation. Prepare the data sets by feeding the neural network with the coefficients of nonlinear functions, particularly higher-degree polynomials. The table in Appendix B presents the head of the data set used by the PNNS to approximate all roots of (1). The data set contains 5000 records. Random polynomial coefficients in the range [0, 1] were generated symbolically in Maple (see Appendix B and Appendix C). The PNNS was trained on 70% of these data, and the remaining 30% were used to assess the generalization capability of the PNNS.
  • Architecture. The neural network architecture must be capable of approximating the roots of nonlinear equations. The size of the input layer should match the number of polynomial coefficients, and the size of the output layer should correspond to the set of real or complex polynomial roots, as indicated in Figure 1 and Figure 2.
  • Training. The neural network is trained with an input layer, two hidden layers, and an output layer to approximate all the roots of nonlinear polynomial equations using the well-known Levenberg-Marquardt Algorithm (LMA). The weights of the PNNS connections are modified based on the discrepancy between the predicted and computed values; the update is
    $$\vartheta_{\sigma+1} = \vartheta_{\sigma} - \left(\hat{\jmath}_{\sigma}^{\,t}\,\hat{\jmath}_{\sigma} + \lambda_{\sigma} I\right)^{-1}\hat{\jmath}_{\sigma}^{\,t}\,e_{\sigma},$$
    where $\hat{\jmath}_{\sigma} = \frac{\partial e_{\sigma}}{\partial \vartheta_{\sigma}}$ and $I$ is the identity matrix. The mean square error (MSE) is computed as
    $$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(\gamma_{11} - \gamma_{12}\right)^{2},$$
    where $\gamma_{11}$ is the exact $i$-th root in the data set and $\gamma_{12}$ is the approximate value computed using Q-MMσ2.
  • Enhancement of neural network accuracy. To increase efficiency and accuracy, we refine the neural network's outputs using the q-analogues of the parallel numerical algorithms.
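A minimal sketch of the training step, assuming MATLAB's Deep Learning Toolbox, is given below; the hidden-layer sizes, the synthetic cubic data set, the use of real parts only, and the exact data split are illustrative assumptions, not the paper's precise configuration:

n = 5000;                                  % records, as in the data set
C = rand(4, n);                            % random cubic coefficients in [0, 1]
R = zeros(3, n);                           % target roots (real parts only here)
for k = 1:n
    R(:, k) = sort(real(roots(C(:, k))));  % roots of each random cubic
end
net = feedforwardnet([20 20], 'trainlm');  % two hidden layers, LMA training
net.divideParam.trainRatio = 0.70;         % 70% of the data for training
net.divideParam.valRatio   = 0.00;
net.divideParam.testRatio  = 0.30;         % 30% held out for generalization
net   = train(net, C, R);
guess = net(C(:, 1));                      % initial guesses for the parallel scheme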
Algorithm 1, described below, utilizes a neural network together with the q-analogues of the parallel numerical scheme introduced in this paper to estimate all roots of nonlinear equations in MATLAB.
Algorithm 1: Neural network with q-analogies of the parallel numerical scheme in MATLAB.

4. Numerical Results

Some non-linear problems from biomedical engineering and applied sciences are considered to illustrate the performance and efficiency of Q - M M σ 1 and Q - M M σ 2 . The experiments are performed using CAS Maple 18 with 64 digits floating point arithmetic and the following stopping criterion:
$$e_i^{(\sigma)} = \left|x_i^{[\sigma+1]} - \zeta\right| < \epsilon,$$
where $e_i^{(\sigma)}$ represents the absolute error. We set $\epsilon = 10^{-30}$ as the tolerance used in the stopping criterion. In Tables 1–22, $\rho_i$ represents the local convergence order of our iterative schemes. Here, we compare our newly developed methods Q-MMσ1 and Q-MMσ2 with the Börsch-Supan method (abbreviated as BSMC3) [46] of convergence order 3, defined as:
$$x_i^{(\sigma+1)} = x_i^{(\sigma)} - \frac{\vartheta_i\!\left(x_i^{[\sigma]}\right)}{1 + \sum_{j=1, j\neq i}^{n}\frac{\vartheta_j\left(x_j^{[\sigma]}\right)}{x_i^{(\sigma)} - x_j^{(\sigma)}}},$$
where $\vartheta_i\!\left(x_i^{[\sigma]}\right) = \frac{f\left(x_i^{[\sigma]}\right)}{\prod_{j=1, j\neq i}^{n}\left(x_i^{[\sigma]} - x_j^{[\sigma]}\right)}$. Rafiq et al. [47] proposed the following two-step derivative-free simultaneous method (abbreviated as NAMC3) of convergence order 3:
$$y_i^{(\sigma)} = x_i^{(\sigma)} - \vartheta_i\!\left(x_i^{[\sigma]}\right), \quad x_i^{(\sigma+1)} = x_i^{(\sigma)} - \vartheta_i\!\left(x_i^{[\sigma]}\right)\,\frac{f\left(x_i^{(\sigma)}\right) + f\left(y_i^{(\sigma)}\right)}{f\left(x_i^{(\sigma)}\right)}.$$
Nedzhibov et al. [48] introduced the following two-step derivative-free simultaneous method (abbreviated as NDMC3) of convergence order 3:
$$y_i^{(\sigma)} = x_i^{(\sigma)} - \vartheta_i\!\left(x_i^{[\sigma]}\right), \quad x_i^{(\sigma+1)} = x_i^{(\sigma)} - \vartheta_i\!\left(x_i^{[\sigma]}\right)\left(1 + \frac{f\left(y_i^{(\sigma)}\right)}{f\left(x_i^{(\sigma)}\right) - 2\lambda f\left(y_i^{(\sigma)}\right)}\right), \quad \lambda \in \mathbb{R}.$$
Mir et al. [49] proposed a method (abbreviated as NAMC8) of convergence order 8, defined as:
$$z_i^{[\sigma]} = x_i^{(\sigma)} - \frac{f\left(x_i^{(\sigma)}\right)}{f'\left(x_i^{(\sigma)}\right)}, \quad y_i^{(\sigma)} = x_i^{(\sigma)} - \frac{1}{\frac{1}{N_i\left(x_i^{(\sigma)}\right)} - \sum_{j=1, j\neq i}^{n}\frac{1}{\left(x_i^{(\sigma)} - z_j^{(\sigma)}\right)^{\alpha}}}, \quad x_i^{(\sigma+1)} = x_i^{(\sigma)} - \frac{1}{\frac{1}{N_i\left(x_i^{(\sigma)}\right)} - \sum_{j=1, j\neq i}^{n}\frac{1}{x_i^{(\sigma)} - z_j^{(\sigma)}}},$$
where $N_i\left(x_i^{(\sigma)}\right) = \frac{f\left(x_i^{(\sigma)}\right)}{f'\left(x_i^{(\sigma)}\right)}$ and $\alpha \in \mathbb{R}$. Thangavel et al. [50] proposed a neutral-type switched neural network (abbreviated as TAMC*) to solve nonlinear problems. We consider all these methods in the comparative performance analysis of our new parallel schemes. To determine all the roots of nonlinear equations, we utilized Algorithms 2 and 3.
Algorithm 2: For q-Numerical scheme Q - M M σ 1 .
Algorithm 3: For q-Numerical scheme Q - M M σ 2 .
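Since the tables report the local computational order of convergence ρ_i, it may help to recall the standard three-point estimate ρ ≈ log(e^{(σ+1)}/e^{(σ)}) / log(e^{(σ)}/e^{(σ−1)}); the error sequence in the sketch below is illustrative:

e   = [1.0e-2, 3.0e-6, 8.1e-17];           % illustrative successive errors
rho = log(e(3)/e(2)) / log(e(2)/e(1));     % close to 3 for a third-order scheme
fprintf('estimated local order of convergence: %.2f\n', rho);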
Some biomedical engineering examples [51,52,53,54] are presented in this section to assess the effectiveness of the newly developed q-analogues of the parallel methods for locating all zeros of (1) simultaneously.

4.1. Engineering Application 1: Osteoporosis in Chinese Women

Wu et al. [52] investigated age-related changes in tibial sound speed and osteoporosis prevalence among Chinese women. The nonlinear relationship between sound speed and age was found to be:
$$f(x) = 0.0039x^{3} - 0.78x^{2} + 39.9x - 467. \tag{53}$$
The exact solution of (53) up to four decimal places is
$$\zeta_1 = 16.7023, \quad \zeta_2 = 56.5740, \quad \zeta_3 = 126.7235.$$
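As a quick sanity check (ours, not part of the original experiments), these values can be reproduced with MATLAB's built-in companion-matrix solver:

r = roots([0.0039, -0.78, 39.9, -467]);    % coefficients of (53)
disp(sort(r))                              % approx. 16.7023, 56.5740, 126.7235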
To analyze global convergence, as in [53], we use the randomly generated initial guesses X 1 [ 0 ] = X 11 [ 0 ] , X 12 [ 0 ] , X 13 [ 0 ] , shown in Appendix B, Table A1.
In Tables 1–11, we evaluate the numerical results produced with our parallel numerical scheme for various values of q. In Table 1, we report the error and approximate solution up to two decimal places, whereas the approximated root is computed in 64-digit floating-point arithmetic using the stopping criterion $\Lambda_i^{[*]} = \left|x_i^{[\sigma+1]} - x_i^{[\sigma]}\right|$. The maximum error, number of iterations, and elapsed CPU time for the Q-MMσ1 method for various values of q are shown in Tables 2–4. On the other hand, Tables 5–9 display the numerical results of Q-MMσ2 in terms of approximated values, residual errors, maximum errors, number of iterations, and elapsed CPU time for finding all roots of (53) using different values of q. The rate of convergence of Q-MMσ1 and Q-MMσ2 increases significantly as q increases from 0.2 to 1.1 on random initial values, demonstrating global convergence behavior.
Figure 3a–e illustrates the neural network implementation and its results using Algorithm 1. The error histogram curve of the neural network demonstrates the consistency of the proposed scheme; the transition statistics curve reflects the effective convergence rate of the neural network; the fitness curve shows accuracy and stability; the regression curve illustrates the linear relationship between the expected and actual outcomes; and the mean square error demonstrates how well the target solution and expected outcomes matched. Figure 3 displays the neural network’s outputs in the following ways: (a) the error histogram; (b) the transition statistics curve; (c) the mean square error; (d) the regression curve; and (e) the fitness curve. As illustrated in Figure 3a–e and Table 9, neural network simulations for this engineering application produce reliable and consistent results. Figure 4a,b depicts the residual error and approximate order of convergence, whereas Figure 5a–f depicts the local computational order of convergence of the neural network-based parallel root finding scheme.
The following values are chosen as initial guesses:
$$x_1^{(0)} = 20.1, \quad x_2^{(0)} = 50.8, \quad x_3^{(0)} = 125.5.$$
As shown in Table 10, the accuracy of the proposed numerical schemes improves when the outputs of the neural networks are used as initial guess values in Q-MMσ1, TAMC*, BSMC3, NAMC3, NDMC3, NAMC8, and Q-MMσ2. Table 11 presents the overall convergence behavior of the q-analogue-based schemes started from the neural network outputs.
In Table 11, Max-Err$_{\Lambda_1^{[*]}}$ represents the maximum error; Max-it$_{\Lambda_1^{[*]}}$ represents the maximum number of iterations; Max-CPU$_{\Lambda_1^{[*]}}$ represents the maximum elapsed CPU time; Max-COC$_{\Lambda_1^{[*]}}$ represents the maximum order of convergence achieved; and $\rho_i^{[\sigma-1]}$ is the local order of convergence of the q-analogue-based neural network parallel numerical scheme on this application, using $\Lambda_i^{[*]} = \left|x_i^{[\sigma+1]} - x_i^{[\sigma]}\right|$ as the stopping criterion.

4.2. Engineering Application 2: Blood Rheology Model

Blood is a non-Newtonian fluid modeled as a "Casson fluid". According to the Casson fluid model, simple fluids such as water and blood flow through a tube in such a way that the center core of the fluid travels as a plug with little deformation, while a velocity gradient occurs near the wall [52,54]. We used the following nonlinear polynomial equation to describe the plug flow of Casson fluids:
$$G = 1 - \frac{16}{7}\sqrt{x} + \frac{4}{3}x - \frac{1}{21}x^{4}, \tag{54}$$
where G represents the reduction in flow rate. Using G = 0.40 in (54), we have:
$$f_4(x) = \frac{1}{441}x^{8} - \frac{8}{63}x^{5} - 0.05714285714\,x^{4} + \frac{16}{9}x^{2} - 3.624489796\,x + 0.36. \tag{55}$$
The exact roots of (55) are:
$$\zeta_1 = 0.1046986515, \quad \zeta_2 = 3.822389235, \quad \zeta_3 = 1.553919850 + 0.9404149899i, \quad \zeta_4 = -1.238769105 + 3.408523568i,$$
$$\zeta_5 = -2.278694688 + 1.987476450i, \quad \zeta_6 = -2.278694688 - 1.987476450i, \quad \zeta_7 = -1.238769105 - 3.408523568i, \quad \zeta_8 = 1.553919850 - 0.9404149899i.$$
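Again as an independent sanity check (not part of the original experiments), the eight roots of (55) can be reproduced numerically:

c = [1/441, 0, 0, -8/63, -0.05714285714, 0, 16/9, -3.624489796, 0.36];
disp(roots(c))   % two real roots and three complex-conjugate pairs, as listed above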
To analyze convergence, we use the randomly generated starting guesses $X_2^{[0]} = \left(X_{21}^{[0]}, X_{22}^{[0]}, X_{23}^{[0]}\right)$, shown in Appendix B, Table A2. In Tables 12–22, we examine the numerical results of the proposed parallel numerical scheme for this engineering application at different values of q. In Table 12, the error and estimated solution are given up to two decimal places, whereas the approximated root is computed in 64-digit floating-point arithmetic using $\Lambda_i^{[*]} = \left|x_i^{[\sigma+1]} - x_i^{[\sigma]}\right|$ as the stopping criterion. Tables 13–15 report the maximum error, number of iterations, and elapsed CPU time for the Q-MMσ1 method using different values of q. On the other hand, Tables 16–19 display the numerical results of Q-MMσ2 in terms of approximated value, residual error, maximum error, number of iterations, and elapsed CPU time for finding all roots of Equation (55) using different values of q. The rate of convergence of Q-MMσ1 and Q-MMσ2 increases significantly as q increases from 0.2 to 1.1 on random initial values, demonstrating global convergence behavior.
The implementation of the neural network and its results using Algorithm 1 are shown in Figure 6a–e. The neural network's outputs are displayed in Figure 6 as follows: (a) the error histogram; (b) the transition statistics curve; (c) the mean square error; (d) the regression curve; and (e) the fitness curve. Neural network simulations of this engineering application provide reliable and consistent results, as shown in Figure 6a–e and Table 20. Figure 7a,b represents the residual error and approximate order of convergence, whereas the local computational order of convergence of the neural network-based parallel numerical schemes is depicted in Figure 8a–f.
The following values are chosen as initial guesses:
$$x_1^{(0)} = 0.1, \quad x_2^{(0)} = 3.8, \quad x_3^{(0)} = 1.5 + 0.9i, \quad x_4^{(0)} = -1.2 + 3.4i, \quad x_5^{(0)} = -2.2 + 1.9i, \quad x_6^{(0)} = -2.2 - 1.9i, \quad x_7^{(0)} = -1.2 - 3.4i, \quad x_8^{(0)} = 1.5 - 0.9i.$$
As shown in Table 21, the accuracy of the proposed numerical schemes improves when the outputs of the neural networks (see Appendix C) are used as initial guess values in Q-MMσ1, TAMC*, BSMC3, NAMC3, NDMC3, NAMC8, and Q-MMσ2. Table 22 presents the overall convergence behavior of the q-analogue-based schemes started from the neural network outputs.
In Table 22, Max-Err$_{\Lambda_1^{[*]}}$ represents the maximum error; Max-it$_{\Lambda_1^{[*]}}$ represents the maximum number of iterations; Max-CPU$_{\Lambda_1^{[*]}}$ represents the maximum elapsed CPU time; Max-COC$_{\Lambda_1^{[*]}}$ represents the maximum order of convergence achieved; and $\rho_i^{[\sigma-1]}$ is the local order of convergence of the q-analogue-based neural network parallel numerical scheme for solving this engineering application, using $\Lambda_i^{[*]} = \left|x_i^{[\sigma+1]} - x_i^{[\sigma]}\right|$ as the stopping criterion.

5. Conclusions

In this paper, we introduced two new families of q-type parallel numerical methods for finding all distinct roots of the nonlinear Equation (1). Furthermore, a new hybrid numerical scheme combining neural network techniques and q-analogues was thoroughly examined. In order to analyze the global convergence behavior of the proposed numerical methods, random starting values for the iterations were used. The numerical outcomes of Q-MMσ1 and Q-MMσ2 on random initial approximations clearly demonstrate that the newly created methods are more efficient than the existing methods, as shown in Tables 1–8 and 12–19 and Figure 5a–f and Figure 8a–f. Analyzing the overall convergence behaviour in Table 11 and Table 22 reveals that the newly created techniques Q-MMσ1 and Q-MMσ2 are more stable and consistent than the existing methods BSMC3, TAMC*, NAMC3, NDMC3, and NAMC8. The results of our experiments illustrate that convergence is significantly improved when the neural network outcomes are utilized as input for the q-analogues of our parallel schemes, as shown in Tables 1–22 and Figures 2–7. The numerical experiments on two biomedical engineering applications reveal that the new Q-MMσ1 and Q-MMσ2 methods can outperform the BSMC3, TAMC*, NAMC3, NDMC3, and NAMC8 methods in terms of errors from random initial guesses, CPU time, residual error, computational and local computational order of convergence, and number of iterations. The next step of this research will involve the development and analysis of higher-order q-analogues of inverse numerical schemes based on neural networks [55,56,57].

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the Provincia Autonoma di Bolzano/Alto Adige - Ripartizione Innovazione, Ricerca, Università e Musei (contract nr. 19/34). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The work is supported by Provincia autonoma di Bolzano/Alto Adige-Ripartizione Innovazione, Ricerca, Università e Musei (contract nr. 19/34). The work of Bruno Carpentieri is also supported by the Free University of Bozen-Bolzano (IN200Z SmartPrint). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM) and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2022.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 1.
Let ζ be a simple root of f, and write $x^{[\sigma]} = \zeta + \vartheta^{[\sigma]}$ and $z^{[\sigma]} = \zeta + \vartheta^{[\sigma+1]}$. Expanding $f(x^{[\sigma]})$ and $\partial_q f(x^{[\sigma]})$ in q-Taylor series around $x^{[\sigma]} = \zeta$, and taking $f(\zeta) = 0$, we get
$$f(x^{[\sigma]}) = \partial_q f(\zeta)\,\vartheta^{[\sigma]} + \frac{\partial_q^2 f(\zeta)}{[2;q]!}\,\vartheta^{[\sigma]\,2} + \frac{\partial_q^3 f(\zeta)}{[3;q]!}\,\vartheta^{[\sigma]\,3} + \cdots, \tag{A1}$$
so that
$$\frac{f(x^{[\sigma]})}{\partial_q f(\zeta)} = \vartheta^{[\sigma]} + \frac{\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\,\vartheta^{[\sigma]\,2} + \frac{\partial_q^3 f(\zeta)}{[3;q]!\,\partial_q f(\zeta)}\,\vartheta^{[\sigma]\,3} + \cdots, \tag{A2}$$
and
$$\partial_q f(x^{[\sigma]}) = \partial_q f(\zeta)\left(1 + \frac{2\,\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\,\vartheta^{[\sigma]} + \frac{3\,\partial_q^3 f(\zeta)}{[3;q]!\,\partial_q f(\zeta)}\,\vartheta^{[\sigma]\,2} + \cdots\right). \tag{A3}$$
Dividing (A1) by (A3), we have:
$$\frac{f(x^{[\sigma]})}{\partial_q f(x^{[\sigma]})} = \vartheta^{[\sigma]} - \frac{\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\,\vartheta^{[\sigma]\,2} - \left(\frac{2\,\partial_q^3 f(\zeta)}{[3;q]!\,\partial_q f(\zeta)} - 2\left(\frac{\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\right)^{2}\right)\vartheta^{[\sigma]\,3} + \cdots. \tag{A4}$$
Furthermore,
$$1 + \alpha_2 f(x^{[\sigma]}) = 1 + \alpha_2\,\vartheta^{[\sigma]} + \alpha_2\,\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)}\,\vartheta^{[\sigma]\,2} + \alpha_2\,\frac{\partial_q^3 f(\zeta)}{\partial_q f(\zeta)}\,\vartheta^{[\sigma]\,3} + \cdots, \tag{A5}$$
$$\frac{\alpha_1 f(x^{[\sigma]})}{1 + \alpha_2 f(x^{[\sigma]})} = \alpha_1\,\vartheta^{[\sigma]} + \left(\alpha_1\,\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)} - \alpha_1\alpha_2\right)\vartheta^{[\sigma]\,2} + O\!\left(\vartheta^{[\sigma]\,3}\right), \tag{A6}$$
and therefore
$$1 - \frac{\alpha_1 f(x^{[\sigma]})}{1 + \alpha_2 f(x^{[\sigma]})} = 1 - \alpha_1\,\vartheta^{[\sigma]} - \left(\alpha_1\,\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)} - \alpha_1\alpha_2\right)\vartheta^{[\sigma]\,2} + O\!\left(\vartheta^{[\sigma]\,3}\right). \tag{A7}$$
Taking the inverse of (A7):
$$\left[1 - \frac{\alpha_1 f(x^{[\sigma]})}{1 + \alpha_2 f(x^{[\sigma]})}\right]^{-1} = 1 + \alpha_1\,\vartheta^{[\sigma]} + \left(\alpha_1\,\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)} - \alpha_1\alpha_2 + \alpha_1^{2}\right)\vartheta^{[\sigma]\,2} + \cdots. \tag{A8}$$
Thus,
$$\frac{f(x^{[\sigma]})}{\partial_q f(x^{[\sigma]})}\left[1 - \frac{\alpha_1 f(x^{[\sigma]})}{1 + \alpha_2 f(x^{[\sigma]})}\right]^{-1} = \vartheta^{[\sigma]} - \left(\alpha_1 + \frac{\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\right)\vartheta^{[\sigma]\,2} + \left(\alpha_1^{2} - \alpha_1\alpha_2 + \left(\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)}\right)^{2} - \frac{2\,\partial_q^3 f(\zeta)}{\partial_q f(\zeta)}\right)\vartheta^{[\sigma]\,3} + \cdots. \tag{A9}$$
Using this expression in the first step of (13), we have:
$$v^{[\sigma]} - \zeta = x^{[\sigma]} - \zeta - \frac{f(x^{[\sigma]})}{\partial_q f(x^{[\sigma]})}\left[1 - \frac{\alpha_1 f(x^{[\sigma]})}{1 + \alpha_2 f(x^{[\sigma]})}\right]^{-1}, \tag{A10}$$
$$v^{[\sigma]} - \zeta = \left(\alpha_1 + \frac{\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\right)\vartheta^{[\sigma]\,2} + O\!\left(\vartheta^{[\sigma]\,3}\right). \tag{A11}$$
Expanding $f(v^{[\sigma]})$ in a q-Taylor series around ζ and retaining the leading terms gives
$$f(v^{[\sigma]}) = \partial_q f(\zeta)\left(\alpha_1 + \frac{\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\right)\vartheta^{[\sigma]\,2} + O\!\left(\vartheta^{[\sigma]\,3}\right). \tag{A12}$$
Dividing (A12) by (A3) and collecting terms, we have:
$$\frac{f(v^{[\sigma]})}{\partial_q f(x^{[\sigma]})} = \left(\alpha_1 + \frac{\partial_q^2 f(\zeta)}{[2;q]!\,\partial_q f(\zeta)}\right)\vartheta^{[\sigma]\,2} - \left(\alpha_1\,\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)} + \left(\frac{\partial_q^2 f(\zeta)}{2\,\partial_q f(\zeta)}\right)^{2}\right)\vartheta^{[\sigma]\,3} + O\!\left(\vartheta^{[\sigma]\,4}\right). \tag{A13}$$
Using (A13) in the second step of (13), we have:
$$z^{[\sigma]} - \zeta = v^{[\sigma]} - \zeta - \frac{f(v^{[\sigma]})}{\partial_q f(x^{[\sigma]})}, \tag{A14}$$
$$\vartheta^{[\sigma+1]} = \left(\alpha_1\,\frac{\partial_q^2 f(\zeta)}{\partial_q f(\zeta)} + \left(\frac{\partial_q^2 f(\zeta)}{2\,\partial_q f(\zeta)}\right)^{2}\right)\left(\vartheta^{[\sigma]}\right)^{[3;q]} + O\!\left(\left(\vartheta^{[\sigma]}\right)^{[4;q]}\right). \tag{A15}$$
This proves the third-order convergence of our numerical scheme (13). □

Appendix B

In biomedical engineering application 1, the ANN is trained on the head of the input data set to find all roots of (53); the randomly generated initial guesses are reported below.
Table A1. Randomly generated initial guesses $X_1^{[0]}$.
X^{[0]}          x_1^{(0)}   x_2^{(0)}   x_3^{(0)}   x_4^{(0)}
X_{11}^{[0]}     0.101       −0.070      0.503       0.137
X_{12}^{[0]}     0.654       0.645       0.552       0.721
X_{13}^{[0]}     0.709       −0.111      0.652       0.345
In biomedical engineering application 2, the ANN is trained on the head of the input data set to find all roots of (55); the randomly generated initial guesses are reported below.
Table A2. Randomly generated initial guesses $X_2^{[0]}$.
X_2^{[0]}        x_1^{(0)}   x_2^{(0)}   x_3^{(0)}   x_4^{(0)}   x_5^{(0)}   x_6^{(0)}   x_7^{(0)}   x_8^{(0)}
X_{21}^{[0]}     1.000       −0.631      0.994       0.967       0.111       0.909       0.643       0.967
X_{22}^{[0]}     0.007       −0.013      0.011       0.850       0.615       0.003       0.398       0.190
X_{23}^{[0]}     0.433       −0.882      0.544       0.167       0.076       0.113       0.943       0.700

Appendix C

Using Algorithm 2 in MATLAB R2012b, the ANN is trained on the head of the output data set to find the real and imaginary roots of the polynomial equations in engineering applications 1 and 2.
Table A3. The head of the output data set used to find the real and imaginary roots of biomedical engineering applications 1 and 2.
ζ 0 ζ 1 ζ 2 ζ 3 ζ 4
Re( ζ 0 )Im( ζ 0 )Re( ζ 1 )Im( ζ 1 )Re( ζ 2 )Im( ζ 2 )Re( ζ 3 )Im( ζ 3 )Re( ζ 4 )Im( ζ 4 )
0.0040.3270.53430.31434.9250.1568.3436.6559.3138.443
0.0754.1874.2324.3150.0140.0158.7799.1155.3439.876
ζ 6 ζ 7 ζ 8
Re( ζ 6 )Im( ζ 6 )Re( ζ 7 )Im( ζ 7 )Re( ζ 8 )Im( ζ 8 )
0.32130.1100.5472.0989.2328.5443
0.00450.5434.0989.1120.99591458

References

  1. Akbari, M.R.; Ganji, D.D.; Nimafar, M.; Ahmadi, A.R. Significant progress in solution of nonlinear equations at displacement of structure and heat transfer extended surface by new AGM approach. Front. Mech. Eng. 2014, 9, 390–401. [Google Scholar] [CrossRef]
  2. Akbari, M.R. Akbari-Ganji's method (AGM) to chemical reactor design for non-isothermal and non-adiabatic mixed flow reactors. J. Chem. Eng. Mater. Sci. 2020, 11, 1–9. [Google Scholar]
  3. Von-Karman, T. The engineer grapples with nonlinear problems. Bull. Am. Math. Soc. 1940, 46, 615–683. [Google Scholar] [CrossRef]
  4. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Iterative schemes for finding all roots simultaneously of nonlinear equations. Appl. Math. Lett. 2022, 134, 108325. [Google Scholar] [CrossRef]
  5. Fredlund, D.G. Unsaturated soil mechanics in engineering practice. J. Geotech. Geoenviron. Eng.-ASCE 2006, 132, 286–321. [Google Scholar] [CrossRef]
  6. Baranovskii, E.S.; Artemov, M.A. Optimal control for a nonlocal model of non-Newtonian fluid flows. Mathematics 2021, 9, 275. [Google Scholar] [CrossRef]
  7. Marchildon, L. Quantum Mechanics: From Basic Principles to Numerical Methods and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  8. Johnson, C.R. Computational and numerical methods for bioelectric field problems. Crit. Rev. Biomed. Eng. 1997, 25, 1–10. [Google Scholar] [CrossRef]
  9. Mao, X.; Szpruch, L. Strong convergence and stability of implicit numerical methods for stochastic differential equations with non-globally Lipschitz continuous coefficients. J. Comput. Appl. Math. 2013, 238, 14–28. [Google Scholar] [CrossRef]
  10. Lux, T. Estimation of an agent-based model of investor sentiment formation in financial markets. J. Econ. Dyn. Cont. 2012, 36, 1284–1302. [Google Scholar] [CrossRef]
  11. Alekseev, V.B. Abel’s Theorem in Problems and Solutions: Based on the Lectures of Professor VI Arnold; Springer: Dordrecht, The Netherlands, 2004. [Google Scholar]
  12. Cordero, A.; Neta, B.; Torregrosa, J.R. Memorizing Schröder’s method as an efficient strategy for estimating roots of unknown multiplicity. Mathematics 2021, 9, 2570. [Google Scholar] [CrossRef]
  13. Akram, S.; Akram, F.; Junjua, M.U.D.; Arshad, M.; Afzal, T. A family of optimal eighth order iteration functions for multiple roots and its dynamics. J. Math. 2021, 2021, 5597186. [Google Scholar] [CrossRef]
  14. Erfanifar, R.; Hajarian, M. A new multi-step method for solving nonlinear systems with high efficiency indices. Numer. Algorithms 2024, 1–26. [Google Scholar] [CrossRef]
  15. Sugiura, H.; Hasegawa, T. On the global convergence of Schröder’s iterative formulae for real roots of algebraic equations. J. Comput. Appl. Math. 2018, 344, 313–322. [Google Scholar] [CrossRef]
  16. Proinov, P.D.; Vasileva, M.T. Local and semilocal convergence of Nourein’s iterative method for finding all zeros of a polynomial simultaneously. Symmetry 2020, 12, 1801. [Google Scholar] [CrossRef]
  17. Ivanov, S.I. A unified semilocal convergence analysis of a family of iterative algorithms for computing all zeros of a polynomial simultaneously. Numer. Algorithms 2017, 75, 1193–1204. [Google Scholar] [CrossRef]
  18. Weierstrass, K. Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen. Sitzungsberichte Königlich Preuss. Akad. Wiss. Berl. 1891, 2, 1085–1101. [Google Scholar]
  19. Kerner, I.O. Ein gesamtschrittverfahren zur berechnung der nullstellen von polynomen. Numer. Math. 1966, 8, 290–294. [Google Scholar] [CrossRef]
  20. Dochev, M. Modified Newton method for the simultaneous computation of all roots of a given algebraic equation. Phys. Math. J. Bulg. Acad. Sci. 1962, 5, 136–139. [Google Scholar]
  21. Nedzhibov, G.H. Improved local convergence analysis of the Inverse Weierstrass method for simultaneous approximation of polynomial zeros. In Proceedings of the MATTEX 2018 Conference, Targovishte, Bulgaria, 16–17 October 2018; Volume 1, pp. 66–73. [Google Scholar]
  22. Marcheva, P.I.; Ivanov, S.I. Convergence analysis of a modified Weierstrass method for the simultaneous determination of polynomial zeros. Symmetry 2020, 12, 1408. [Google Scholar] [CrossRef]
  23. Shams, M.; Rafiq, N.; Kausar, N.; Agarwal, P.; Park, C.; Mir, N.A. On iterative techniques for estimating all roots of nonlinear equation and its system with application in differential equation. Adv. Differ. Equ. 2021, 2021, 480. [Google Scholar] [CrossRef]
  24. Alefeld, G.; Herzberger, J. On the convergence speed of some algorithms for the simultaneous approximation of polynomial roots. SIAM J. Numer. Anal. 1974, 11, 237–243. [Google Scholar] [CrossRef]
  25. Nourein, A.W. An improvement on Nourein’s method for the simultaneous determination of the zeroes of a polynomial (an algorithm). J. Comput. Appl. Math. 1977, 3, 109–112. [Google Scholar] [CrossRef]
  26. Nemri, A.; Soltani, F. Analytical approximation formulas in quantum calculus. Math. Mech. Solid. 2017, 22, 2075–2090. [Google Scholar] [CrossRef]
  27. Noeiaghdam, Z.; Rahmani, M.; Allahviranloo, T. Introduction of the numerical methods in quantum calculus with uncertainty. J. Math. Model. 2021, 9, 303–322. [Google Scholar]
  28. Al-Salih, R. Dynamic network flows in quantum calculus. J. Anal. Appl. 2020, 18, 53–66. [Google Scholar]
  29. Sinha, A.K.; Panda, S. Shehu Transform in Quantum Calculus and Its Applications. Int. J. Appl. Comput. Math. 2022, 8, 1–19. [Google Scholar] [CrossRef]
  30. Alhindi, K.R. Convex Families of q-Derivative Meromorphic Functions Involving the Polylogarithm Function. Symmetry 2023, 15, 1388. [Google Scholar] [CrossRef]
  31. Akça, H.; Benbourenane, J.; Eleuch, H. The q-derivative and differential equation. J. Phys. Conf. Ser. 2019, 1411, 12002. [Google Scholar] [CrossRef]
  32. Sana, G.; Mohammed, P.O.; Shin, D.Y.; Noor, M.A.; Oudat, M.S. On iterative methods for solving nonlinear equations in quantum calculus. Fractal Fract. 2021, 5, 60. [Google Scholar] [CrossRef]
  33. Georgiev, S.G.; Tikare, S. Taylor’s formula for general quantum calculus. J. Math. Model. 2023, 11, 491–505. [Google Scholar]
  34. Sheng, Y.; Zhang, T. Some Results on the q-Calculus and Fractional q-Differential Equations. Mathematics 2022, 10, 64. [Google Scholar] [CrossRef]
  35. Proinov, P.D. General convergence theorems for iterative processes and applications to the Weierstrass root-finding method. J. Complex. 2016, 33, 118–144. [Google Scholar] [CrossRef]
  36. Proinov, P.D.; Petkova, M.D. Convergence of the two-point Weierstrass root-finding method. Jpn. J. Ind. Appl. Math. 2014, 31, 279–292. [Google Scholar] [CrossRef]
  37. Huang, D.S.; Chi, Z. Finding complex roots of polynomials by feed forward neural networks. In Proceedings of the IJCNN’01, International Joint Conference on Neural Networks, Cat. No. 01CH37222, Washington, DC, USA, 15–19 July 2001; Volumes 15–19, p. A13. [Google Scholar]
  38. Huang, D.S.; Chi, Z. Neural networks with problem decomposition for finding real roots of polynomials. In Proceedings of the IJCNN’01, International Joint Conference on Neural Networks, 01CH37222, Washington, DC, USA, 15–19 July 2001; p. A25. [Google Scholar]
  39. Huang, D.S.; Ip, H.H.; Chi, Z.; Wong, H.S. Dilation method for finding close roots of polynomials based on constrained learning neural networks. Phys. Lett. A 2003, 309, 443–451. [Google Scholar] [CrossRef]
  40. Daws, J., Jr.; Webster, C.G. A polynomial-based approach for architectural design and learning with deep neural networks. arXiv 2019, arXiv:1905.10457. [Google Scholar]
  41. Mourrain, B.; Pavlidis, N.G.; Tasoulis, D.K.; Vrahatis, M.N. Determining the number of real roots of polynomials through neural networks. Comput. Math. Appl. 2006, 51, 527–536. [Google Scholar] [CrossRef]
  42. Huang, D.; Chi, Z. Finding roots of arbitrary high order polynomials based on neural network recursive partitioning method. Sci. China Inf. Sci. 2004, 47, 232–245. [Google Scholar] [CrossRef]
  43. Freitas, D.; Guerreiro Lopes, L.; Morgado-Dias, F. A Neural network-based approach for approximating arbitrary roots of polynomials. Mathematics 2021, 9, 317. [Google Scholar] [CrossRef]
  44. Shams, M.; Carpentieri, B. On highly efficient fractional numerical method for solving nonlinear engineering models. Mathematics 2023, 11, 4914. [Google Scholar] [CrossRef]
  45. Huang, D.S. A constructive approach for finding arbitrary roots of polynomials by neural networks. IEEE Trans. Neural Netw. 2004, 15, 477–491. [Google Scholar] [CrossRef] [PubMed]
  46. Börsch-Supan, W. Residuenabschätzung für Polynom-Nullstellen mittels Lagrange-Interpolation. Numer. Math. 1970, 14, 287–296. [Google Scholar] [CrossRef]
  47. Rafiq, N.; Mir, N.A.; Yasmin, N. Some two-step simultaneous methods for determining all the roots of a non-linear equation. Life Sci. J. 2013, 10, 54–59. [Google Scholar]
  48. Nedzhibov, G.H.; Petkov, M.G. On a family of iterative methods for simultaneous extraction of all roots of algebric polynomial. Appl. Math. Comput. 2005, 162, 427–433. [Google Scholar]
  49. Mir, N.A.; Shams, M.; Rafiq, N.; Akram, S.; Ahmed, R. On family of simultaneous method for finding distinct as well as multiple roots of non-linear equation. Punjab Univ. J. Math. 2020, 52, 1–10. [Google Scholar]
  50. Saravanakumar, T.; Nirmala, V.J.; Raja, R.; Cao, J.; Lu, G. Finite-time reliable dissipative control of neutral-type switched artificial neural networks with non-linear fault inputs and randomly occurring uncertainties. Asian J. Cont. 2020, 22, 2487–2499. [Google Scholar] [CrossRef]
  51. Polyanin, A.D.; Manzhirov, A.V. Handbook of Mathematics for Engineers and Scientists; CRC Press: Boca Raton, FL, USA, 2006. [Google Scholar]
  52. Fournier, R.L. Basic Transport Phenomena in Biomedical Engineering; Taylor & Francis: New York, NY, USA, 2007. [Google Scholar]
  53. Petković, M.S.; Tričković, S.; Herceg, D. On Euler-like methods for the simultaneous approximation of polynomial zeros. Jpn. J. Indust. Appl. Math. 1998, 15, 295–315. [Google Scholar] [CrossRef]
  54. Neumaier, A.; Schäfer, A. Divided differences, shift transformations and Larkin’s root finding method. Math. Comput. 1985, 45, 181–196. [Google Scholar]
  55. Argyros, I.K.; Magreñán, Á.A.; Orcos, L. Local convergence and a chemical application of derivative free root finding methods with one parameter based on interpolation. J. Math. Chem. 2016, 54, 1404–1416. [Google Scholar] [CrossRef]
  56. Wu, X.; Shao, R.; Zhu, Y. Jacobi-free and complex-free method for finding simultaneously all zeros of polynomials having only real zeros. Comput. Math. Appl. 2003, 46, 1387–1395. [Google Scholar] [CrossRef]
  57. Proinov, P.D. On the local convergence of Ehrlich method for numerical computation of polynomial zeros. Calcolo 2016, 53, 413–426. [Google Scholar] [CrossRef]
Figure 1. A schematic representation of the process of feeding the coefficients of a polynomial into an Artificial Neural Network (ANN), which then yields an approximation for each root of Equation (1).
Figure 2. A schematic representation of the process of feeding the coefficients of a polynomial into a Parallel Neural Network (PNNS), which then yields an approximation for each root of Equation (1).
Figure 3. Neural network results for engineering application 1: (a) the error histogram; (b) the transition statistics curve; (c) the mean square error curve; (d) the regression curve; (e) the fitness curve.
Figure 4. Neural network results for engineering application 1: (a) the convergence path of the approximated roots; (b) the neural network-based computational order of convergence of the approximated roots.
Figure 5. Local computational order of convergence for solving biomedical engineering application 1, using the neural network's output as input: (a) Q-MMσ1 (24 iterations to converge); (b) BSMC3 (30 iterations); (c) NAMC3 (25 iterations); (d) NDMC3 (22 iterations); (e) NAMC8 (19 iterations); (f) Q-MMσ2 (16 iterations).
Figure 6. Neural network results for engineering application 2: (a) the error histogram; (b) the transition statistics curve; (c) the mean square error curve; (d) the regression curve; (e) the fitness curve.
Figure 7. Neural network results for engineering application 2: (a) the convergence path of the approximated roots; (b) the neural network-based computational order of convergence of the approximated roots.
Figure 8. (a–f) Local computational order of convergence for engineering application 2, using the neural network’s output as the initial guess: (a) Q-MM σ1 required 80 iterations to converge; (b) BSM C3 required 90 iterations; (c) NAM C3 required 119 iterations; (d) NDM C3 required 19 iterations; (e) NAM C8 required 115 iterations; (f) Q-MM σ2 required 80 iterations.
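For readers who wish to reproduce the convergence-order curves in Figures 5 and 8, the local computational order of convergence can be estimated from three consecutive error norms via the standard approximation ρ ≈ ln(e^[σ+1]/e^[σ]) / ln(e^[σ]/e^[σ−1]). The following is a minimal Python sketch of that formula; the error sequence and function name are hypothetical, as the paper’s exact instrumentation is not shown.

```python
import math

def computational_order(errors):
    """Estimate the local computational order of convergence (COC)
    from successive error norms, using
    rho_k ~= ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    orders = []
    for k in range(1, len(errors) - 1):
        num = math.log(errors[k + 1] / errors[k])
        den = math.log(errors[k] / errors[k - 1])
        if den != 0.0:
            orders.append(num / den)
    return orders

# Hypothetical error sequence from a cubically convergent scheme.
errs = [1e-1, 1e-3, 1e-9, 1e-27]
print(computational_order(errs))  # both estimates close to 3.0
```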
Table 1. Error analysis and roots approximation using the Q-MM σ1 method with X_1^[0] on engineering application 1.
X_1^[0] = [X_11^[0], X_12^[0], X_13^[0], …]
q | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.1
x_1^[0] | 0.03−3.8i | 2.0×10^−5−3.7i | 2.0−1.2×10^−49 i | 2.0×10^−5−3.7i | 3.0+2.6×10^−38 i | 3.0−8.1×10^−34 i | 2.0−1.4×10^−59 i | 3.0−7.5×10^−96 i | 3.0−7.5×10^−96 i
x_2^[0] | 4.3−3.5i | 3.0+4.6×10^−30 i | 3.0+3.5×10^−5 i | 3.0+4.6×10^−30 i | 2−1.8×10^−35 i | 1.5×10^−51−3.8i | 3.0+2.6×10^−56 i | 2.0−1.2×10^−94 i | 2.0−1.2×10^−94 i
x_3^[0] | 1.9+1.6i | 2.0−7.7×10^−38 i | 2.1×10^−63−3.8i | 2.0−7.7×10^−38 i | 7.8×10^−50−3.8i | 2.0+1.2×10^−30 i | 1.3×10^−62+38i | 5.2−7.7×10^−38 i | 5.2−7.7×10^−38 i
x_4^[0] | 0.04+3.8i | 4.0×10^−51+3.8i | 2.4×10^−64+3.8i | 4.0×10^−51+3.8i | 1.9×10^−49+3.8i | 6.0×10^−49+3.8i | 4.0×10^−53−3.8i | 2.4×10^−63+3.8i | 2.4×10^−64+3.8i
Λ_1^[*] | 0.11 | 4.3×10^−30 | 8.8×10^−25 | 4.3×10^−30 | 2.9×10^−20 | 3.3×10^−18 | 6.4×10^−27 | 2.8×10^−47 | 1.8×10^−49
Λ_2^[*] | 8.16 | 1.3×10^−20 | 2.0×10^−26 | 1.3×10^−20 | 1.3×10^−17 | 3.8×10^−33 | 8.3×10^−29 | 4.7×10^−46 | 0.7×10^−47
Λ_3^[*] | 1.66 | 5.3×10^−11 | 1.0×10^−37 | 5.3×10^−11 | 1.3×10^−29 | 4.7×10^−15 | 1.7×10^−33 | 6.3×10^−50 | 9.0×10^−50
Λ_4^[*] | 0.08 | 8.1×10^−31 | 2.6×10^−37 | 8.1×10^−29 | 2.6×10^−30 | 2.0×10^−32 | 8.1×10^−33 | 0.1×10^−51 | 0.1×10^−55
E_max time | 6.10241 | 4.12154 | 3.14561 | 3.51423 | 4.31414 | 3.12478 | 2.01245 | 1.012451 | 1.12431
Table 2. Max-Error for the Q-MM σ1 method using X_1^[0].
Q-MM σ1; q | e_1^(3) | e_2^(3) | e_3^(3)
Q-MM σ1; 1.1 | 4.0×10^−49 | 3.6×10^−47 | 9.9×10^−50
Q-MM σ1; 0.9 | 4.33×10^−47 | 6.3×10^−46 | 3.5×10^−50
Q-MM σ1; 0.8 | 4.3×10^−27 | 3.6×10^−29 | 4.5×10^−33
Q-MM σ1; 0.7 | 5.3×10^−18 | 6.5×10^−33 | 9.9×10^−15
Q-MM σ1; 0.6 | 3.1×10^−20 | 6.5×10^−17 | 6.3×10^−29
Q-MM σ1; 0.5 | 3.2×10^−26 | 9.3×10^−36 | 5.6×10^−29
Q-MM σ1; 0.4 | 4.6×10^−26 | 3.9×10^−36 | 1.4×10^−35
Q-MM σ1; 0.3 | 1.2×10^−30 | 6.3×10^−20 | 7.1×10^−19
Q-MM σ1; 0.2 | 2.0×10^−2 | 3.1×10^−2 | 1.2×10^−3
Table 3. Number of iterations for Q-MM σ1 using X_1^[0].
Q-MM σ1; q | It-e_1^[σ] | It-e_2^[σ] | It-e_3^[σ]
Q-MM σ1; 1.1 | 16 | 16 | 16
Q-MM σ1; 0.9 | 16 | 16 | 16
Q-MM σ1; 0.8 | 18 | 18 | 18
Q-MM σ1; 0.7 | 20 | 20 | 20
Q-MM σ1; 0.6 | 26 | 26 | 26
Q-MM σ1; 0.5 | 27 | 27 | 27
Q-MM σ1; 0.4 | 28 | 28 | 28
Q-MM σ1; 0.3 | 85 | 85 | 85
Q-MM σ1; 0.2 | 100 | 100 | 100
Table 4. CPU-time for the Q-MM σ1 method using X_1^[0].
Q-MM σ1; q | CT-e_1^(σ) | CT-e_2^(σ) | CT-e_3^(σ)
Q-MM σ1; 1.1 | 1.94215 | 1.9415 | 1.8451
Q-MM σ1; 0.9 | 2.14561 | 2.54165 | 2.14554
Q-MM σ1; 0.8 | 3.0124 | 3.00124 | 3.01245
Q-MM σ1; 0.7 | 3.12415 | 3.24156 | 3.21451
Q-MM σ1; 0.6 | 3.54126 | 3.84561 | 3.98451
Q-MM σ1; 0.5 | 4.00121 | 4.00125 | 4.00165
Q-MM σ1; 0.4 | 4.01234 | 4.12013 | 4.18745
Q-MM σ1; 0.3 | 5.01242 | 5.12415 | 5.1421
Q-MM σ1; 0.2 | 5.12455 | 5.14215 | 6.1425
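The Max-Error, iteration-count, and CPU-time entries in Tables 2–4 come from the usual instrumentation of a simultaneous iteration: update all root estimates at once, measure the largest per-root change, and stop once it falls below a tolerance. A minimal sketch of such a driver follows, using a classical Weierstrass (Durand–Kerner) step rather than the proposed Q-MM schemes; the polynomial, starting points, and tolerance are illustrative assumptions only.

```python
import time

def weierstrass_step(p, xs):
    """One simultaneous Weierstrass (Durand-Kerner) correction:
    x_i <- x_i - p(x_i) / prod_{j != i} (x_i - x_j)."""
    updated = []
    for i, xi in enumerate(xs):
        denom = 1.0 + 0j
        for j, xj in enumerate(xs):
            if i != j:
                denom *= xi - xj
        updated.append(xi - p(xi) / denom)
    return updated

def run(p, xs, tol=1e-12, max_it=100):
    """Iterate until the largest per-root update (the 'max-error')
    drops below tol; report roots, max-error, iterations, CPU time."""
    t0 = time.perf_counter()
    max_err = float("inf")
    for it in range(1, max_it + 1):
        nxt = weierstrass_step(p, xs)
        max_err = max(abs(a - b) for a, b in zip(nxt, xs))
        xs = nxt
        if max_err < tol:
            break
    return xs, max_err, it, time.perf_counter() - t0

# Hypothetical monic quartic with roots 2, 3, -2, -3; the starting
# points are arbitrary distinct complex guesses (illustrative only).
p = lambda x: (x - 2) * (x - 3) * (x + 2) * (x + 3)
roots, err, its, cpu = run(p, [0.5 + 0.5j, 1.5 - 0.3j, -0.7 + 1.0j, -2.5 - 0.2j])
print(roots, err, its, cpu)
```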
Table 5. Error analysis and roots approximation using the Q-MM σ2 method with X_1^[0] on engineering application 1.
X_1^[0] = [X_11^[0], X_12^[0], X_13^[0], …]
q | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.1
x_1^[0] | 0.03−3.8i | 2.0×10^−50−3.7i | 2.0−1.2×10^−49 i | 2.0×10^−50−3.7i | 3.0+2.6×10^−38 i | 3.0−8.1×10^−34 i | 2.0−1.4×10^−59 i | 3.0−7.5×10^−96 i | 3.0−7.5×10^−96 i
x_2^[0] | 4.3−3.5i | 3.0+4.6×10^−30 i | 3.0+3.5×10^−50 i | 3.0+4.6×10^−30 i | 2−1.8×10^−35 i | 1.5×10^−51−3.8i | 3.0+2.6×10^−56 i | 2.0−1.2×10^−94 i | 2.0−1.2×10^−94 i
x_3^[0] | 1.9+1.6i | 2.0−7.7×10^−38 i | 2.1×10^−63−3.8i | 2.0−7.7×10^−38 i | 7.8×10^−50−3.8i | 2.0+1.2×10^−30 i | 1.3×10^−52+38i | 5.2×10^−64−7.7i | 5.2×10^−65−7.7i
x_4^[0] | 0.04+3.8i | 4.0×10^−51+3.8i | 2.4×10^−64+3.8i | 4.0×10^−51+3.8i | 1.9×10^−49+3.8i | 6.0×10^−49+3.8i | 4.0×10^−53−3.8i | 2.4×10^−63+3.8i | 2.4×10^−64+3.8i
Λ_1^[*] | 5.1×10^−55 | 1.3×10^−64 | 4.8×10^−63 | 4.3×10^−52 | 2.9×10^−45 | 6.3×10^−18 | 6.4×10^−55 | 4.1×10^−64 | 1.1×10^−64
Λ_2^[*] | 2.0×10^−58 | 0.3×10^−64 | 0.0 | 0.7×10^−53 | 6.3×10^−45 | 9.8×10^−33 | 8.3×10^−55 | 2.2×10^−74 | 9.7×10^−65
Λ_3^[*] | 5.1×10^−55 | 5.9×10^−86 | 1.5×10^−86 | 7.3×10^−53 | 6.0×10^−46 | 9.7×10^−15 | 6.7×10^−45 | 0.3×10^−64 | 9.9×10^−70
Λ_4^[*] | 2.3×10^−57 | 8.4×10^−86 | 5.6×10^−86 | 7.1×10^−54 | 8.6×10^−46 | 0.1×10^−32 | 6.1×10^−56 | 8.8×10^−75 | 6.0×10^−71
E_max time | 6.10241 | 4.12154 | 3.14561 | 3.51423 | 4.31414 | 3.12478 | 2.01245 | 1.012451 | 1.12431
Table 6. Max-Error for the Q-MM σ2 method using X_1^[0].
Q-MM σ2; q | e_1^(3) | e_2^(3) | e_3^(3)
Q-MM σ2; 1.1 | 1.4×10^−19 | 1.7×10^−15 | 1.3×10^−14
Q-MM σ2; 0.99 | 2.2×10^−21 | 1.1×10^−18 | 3.2×10^−21
Q-MM σ2; 0.9 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15
Q-MM σ2; 0.8 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25
Q-MM σ2; 0.7 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15
Q-MM σ2; 0.6 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25
Q-MM σ2; 0.5 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15
Q-MM σ2; 0.4 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25
Q-MM σ2; 0.3 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25
Table 7. Number of iterations of Q-MM σ2 using X_1^[0].
Q-MM σ2; q | It-e_1^[σ] | It-e_2^[σ] | It-e_3^[σ] | It-e_4^[σ]
Q-MM σ2; 1.1 | 5 | 5 | 5 | 5
Q-MM σ2; 0.9 | 5 | 5 | 5 | 5
Q-MM σ2; 0.8 | 7 | 7 | 7 | 7
Q-MM σ2; 0.7 | 8 | 8 | 8 | 8
Q-MM σ2; 0.6 | 8 | 8 | 8 | 8
Q-MM σ2; 0.5 | 11 | 11 | 11 | 11
Q-MM σ2; 0.4 | 14 | 14 | 14 | 14
Q-MM σ2; 0.3 | 17 | 17 | 17 | 17
Q-MM σ2; 0.2 | 22 | 22 | 22 | 22
Table 8. CPU-time for the Q-MM σ2 method using X_1^[0].
Q-MM σ2; q | CT-e_1^(3) | CT-e_2^(3) | CT-e_3^(3)
Q-MM σ2; 1.1 | 0.04215 | 0.0415 | 0.0451
Q-MM σ2; 1.0 | 0.14061 | 0.54105 | 0.14004
Q-MM σ2; 0.9 | 2.0124 | 2.00124 | 2.01245
Q-MM σ2; 0.8 | 2.12415 | 2.3056 | 2.21451
Q-MM σ2; 0.7 | 3.54126 | 3.84561 | 3.98451
Q-MM σ2; 0.6 | 3.10111 | 3.00125 | 3.4126
Q-MM σ2; 0.5 | 4.21234 | 4.15123 | 4.18745
Q-MM σ2; 0.4 | 4.01242 | 4.12425 | 4.64201
Q-MM σ2; 0.3 | 4.12455 | 4.13125 | 4.14112
Table 9. Error outcomes using neural networks on application 1.
Method | e_1^(σ) | e_2^(σ) | e_3^(σ) | ρ_i^[σ−1]
Q-MM σ1; 1.0 | 4.2×10^−47 | 3.1×10^−46 | 4.2×10^−50 | 3.4714
TAM C* | 6.9×10^−25 | 0.4×10^−25 | 8.5×10^−31 | 3.0032
BSM C3 | 2.1×10^−25 | 2.7×10^−24 | 2.1×10^−25 | 2.7451
NAM C3 | 4.2×10^−25 | 3.1×10^−13 | 4.2×10^−30 | 2.1452
NDM C3 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 3.0145
NAM C8 | 4.2×10^−57 | 3.1×10^−56 | 4.2×10^−60 | 7.61452
Q-MM σ2; 1.0 | 2.1×10^−65 | 2.7×10^−64 | 2.1×10^−73 | 8.0124
Table 10. Improvement in convergence rate using neural network outcomes as input for the parallel root-finding scheme.
Method | e_1^(3) | e_2^(3) | e_3^(3) | ρ_i^[σ−1]
Q-MM σ1; 1.0 | 1.4×10^−19 | 1.7×10^−15 | 1.3×10^−14 | 3.1024
TAM C* | 6.1×10^−25 | 7.7×10^−19 | 7.8×10^−17 | 3.1332
BSM C3 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 3.0612
NAM C3 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.0071
NDM C3 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.9981
NAM C8 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 7.4751
Q-MM σ2; 1.0 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 8.0182
Table 11. Overall results of q-analogies-based neural network outcomes for accurate initial guesses.
Method | Max-Err Λ_1^[*] | Max-it Λ_1^[*] | Max-CPU Λ_1^[*] | Max-COC Λ_1^[*] | ρ_i^[σ−1]
Q-MM σ1; 1.0 | 1.4×10^−19 | 1.3×10^−14 | 2.1×10^−15 | 3.314246 | 3.4219
TAM C* | 0.7×10^−27 | 0.1×10^−25 | 5.1×10^−29 | 2.013325 | 3.0106
BSM C3 | 2.2×10^−21 | 3.2×10^−21 | 6.1×10^−30 | 2.981454 | 3.1452
NAM C3 | 2.1×10^−15 | 2.1×10^−15 | 2.7×10^−14 | 3.001245 | 2.9874
NDM C3 | 4.2×10^−25 | 4.2×10^−25 | 3.1×10^−31 | 3.012445 | 3.0145
NAM C8 | 2.1×10^−51 | 2.1×10^−15 | 2.7×10^−14 | 7.954154 | 7.6845
Q-MM σ2; 1.0 | 4.2×10^−25 | 4.2×10^−25 | 3.1×10^−31 | 8.012416 | 8.3541
Table 12. Error analysis and roots approximation using the Q-MM σ1 method with X_2^[0] on engineering application 2.
X_2^[0] = [X_21^[0], X_22^[0], X_23^[0], …]
q | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.1
x_1^[σ] | 0.1−1.0×10^−64 i (identical across all q)
x_2^[σ] | 3.8−0.5×10^−63 i (identical across all q)
x_3^[σ] | −1.531+0.124i (identical across all q)
x_4^[σ] | −1.2+3.41i (identical across all q)
x_5^[σ] | −2.2+1.94i (identical across all q)
x_6^[σ] | −2.2−1.94i (identical across all q)
x_7^[σ] | −1.2−3.40i (identical across all q)
x_8^[σ] | 1.5−0.94i (identical across all q)
Λ_1^[*] | 0.11 | 4.3×10^−30 | 8.8×10^−25 | 4.3×10^−30 | 1.3×10^−29 | 4.7×10^−15 | 6.4×10^−27 | 2.8×10^−47 | 1.8×10^−49
Λ_2^[*] | 8.16 | 1.3×10^−20 | 2.0×10^−26 | 1.3×10^−20 | 2.6×10^−30 | 2.0×10^−32 | 8.3×10^−29 | 4.7×10^−46 | 0.7×10^−47
Λ_3^[*] | 1.66 | 5.3×10^−11 | 1.0×10^−37 | 5.3×10^−11 | 1.3×10^−29 | 4.7×10^−15 | 1.7×10^−33 | 6.3×10^−50 | 9.0×10^−50
Λ_4^[*] | 0.08 | 8.1×10^−31 | 2.6×10^−37 | 8.1×10^−29 | 2.6×10^−30 | 2.0×10^−32 | 8.1×10^−33 | 0.1×10^−51 | 0.1×10^−55
Λ_5^[*] | 0.11 | 4.3×10^−30 | 8.8×10^−25 | 4.3×10^−30 | 2.9×10^−20 | 3.3×10^−18 | 6.4×10^−27 | 2.8×10^−47 | 1.8×10^−49
Λ_6^[*] | 8.16 | 1.3×10^−20 | 2.0×10^−26 | 1.3×10^−20 | 1.3×10^−17 | 3.8×10^−33 | 8.3×10^−29 | 4.7×10^−46 | 0.7×10^−27
Λ_7^[*] | 1.66 | 5.3×10^−11 | 1.0×10^−37 | 5.3×10^−11 | 1.3×10^−29 | 4.7×10^−15 | 1.7×10^−33 | 6.3×10^−50 | 9.0×10^−50
Λ_8^[*] | 0.08 | 8.1×10^−31 | 2.6×10^−37 | 8.1×10^−29 | 2.6×10^−30 | 2.0×10^−32 | 8.1×10^−33 | 0.1×10^−51 | 0.1×10^−55
E_max time | 8.84512 | 9.1241 | 9.2145 | 10.254 | 9.4151 | 9.1245 | 9.1245 | 7.1425 | 6.3251
Table 13. Max-Error for the Q-MM σ1 method using X_2^[0] on engineering application 2.
Q-MM σ1; q | e_1^(σ) | e_2^(σ) | e_3^(σ) | e_4^(σ) | e_5^(σ) | e_6^(σ) | e_7^(σ) | e_8^(σ)
Q-MM σ1; 1.1 | 0.2×10^−29 | 0.2×10^−65 | 3.5×10^−64 | 88×10^−30 | 0.3×10^−35 | 1.0×10^−35 | 9.7×10^−45 | 5.7×10^−41
Q-MM σ1; 1.0 | 2.2×10^−29 | 1.1×10^−65 | 3.2×10^−64 | 6.1×10^−30 | 7.3×10^−35 | 3.5×10^−35 | 4.2×10^−45 | 3.1×10^−41
Q-MM σ1; 0.9 | 2.1×10^−25 | 2.7×10^−24 | 2.1×10^−25 | 2.7×10^−24 | 2.1×10^−25 | 2.7×10^−24 | 2.1×10^−25 | 2.7×10^−20
Q-MM σ1; 0.8 | 4.2×10^−25 | 3.1×10^−20 | 4.2×10^−22 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−21 | 4.2×10^−25 | 3.1×10^−18
Q-MM σ1; 0.7 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14
Q-MM σ1; 0.6 | 4.2×10^−20 | 8.1×10^−10 | 9.2×10^−15 | 3.1×10^−11 | 4.2×10^−15 | 3.1×10^−11 | 4.2×10^−15 | 3.1×10^−10
Q-MM σ1; 0.5 | 2.1×10^−5 | 6.7×10^−4 | 2.1×10^−5 | 2.1×10^−4 | 2.1×10^−5 | 2.7×10^−4 | 2.1×10^−5 | 2.7×10^−4
Q-MM σ1; 0.4 | 0.2×10^−5 | 1.1×10^−3 | 4.2×10^−5 | 3.1×10^−3 | 4.1×10^−2 | 3.0×10^−3 | 3.2×10^−5 | 3.1×10^−4
Q-MM σ1; 0.3 | 1.2×10^−3 | 0.1×10^−3 | 4.2×10^−2 | 3.1×10^−3 | 4.2×10^−2 | 3.1×10^−3 | 4.2×10^−2 | 3.1×10^−3
Table 14. Number of iterations for the Q-MM σ1 method using X_2^[0].
Q-MM σ1; q | It-e_1^[σ] | It-e_2^[σ] | It-e_3^[σ] | It-e_4^[σ] | It-e_5^[σ] | It-e_6^[σ] | It-e_7^[σ] | It-e_8^[σ]
Q-MM σ1; 1.1 | 63 | 63 | 63 | 63 | 63 | 63 | 63 | 63
Q-MM σ1; 0.9 | 63 | 63 | 63 | 63 | 63 | 63 | 63 | 63
Q-MM σ1; 0.8 | 77 | 77 | 77 | 77 | 77 | 77 | 77 | 77
Q-MM σ1; 0.7 | 79 | 79 | 79 | 79 | 79 | 79 | 79 | 79
Q-MM σ1; 0.6 | 85 | 85 | 85 | 85 | 85 | 85 | 85 | 85
Q-MM σ1; 0.5 | 87 | 87 | 87 | 87 | 87 | 87 | 87 | 87
Q-MM σ1; 0.4 | 91 | 91 | 91 | 91 | 91 | 91 | 91 | 91
Q-MM σ1; 0.3 | 97 | 97 | 97 | 97 | 97 | 97 | 97 | 97
Q-MM σ1; 0.2 | 99 | 99 | 99 | 99 | 99 | 99 | 99 | 99
Table 15. CPU-time for the Q-MM σ1 method using X_2^[0].
Q-MM σ1; q | CT-e_1^(3) | CT-e_2^(3) | CT-e_3^(3) | CT-e_4^(3) | CT-e_5^(3) | CT-e_6^(3) | CT-e_7^(3) | CT-e_8^(3)
Q-MM σ1; 1.1 | 6.3325 | 6.1424 | 7.1241 | 7.2148 | 6.2145 | 6.3251 | 6.2145 | 6.2525
Q-MM σ1; 0.9 | 6.3214 | 6.2145 | 6.9856 | 6.3298 | 6.8547 | 6.3214 | 7.2145 | 6.3214
Q-MM σ1; 0.8 | 7.3652 | 7.1456 | 7.32145 | 7.3214 | 7.1452 | 7.3652 | 7.1245 | 7.1426
Q-MM σ1; 0.7 | 7.2145 | 7.3652 | 7.1245 | 7.3652 | 7.1254 | 7.6521 | 7.3256 | 7.1254
Q-MM σ1; 0.6 | 7.36541 | 7.98654 | 7.6935 | 7.96523 | 7.3214 | 7.69321 | 7.8546 | 8.2156
Q-MM σ1; 0.5 | 8.36521 | 8.96542 | 8.32154 | 8.36542 | 8.36954 | 8.21453 | 8.6352 | 8.9658
Q-MM σ1; 0.4 | 8.65418 | 8.6954 | 8.6598 | 8.74566 | 8.3654 | 9.6532 | 8.3214 | 9.3654
Q-MM σ1; 0.3 | 9.45178 | 9.1024 | 10.2541 | 8.7454 | 9.6542 | 9.8745 | 9.4157 | 9.6542
Q-MM σ1; 0.2 | 9.1241 | 9.84512 | 9.41524 | 8.5412 | 9.35412 | 9.8754 | 9.4157 | 9.4571
Table 16. Error analysis and roots approximation using the Q-MM σ2 method with X_2^[0] on engineering application 2.
X_2^[0] = [X_21^[0], X_22^[0], X_23^[0], …]
q | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.1
x_1^[σ] | 0.1−1.0×10^−64 i (identical across all q)
x_2^[σ] | 3.8−0.5×10^−63 i (identical across all q)
x_3^[σ] | −1.531+0.124i (identical across all q)
x_4^[σ] | −1.2+3.41i (identical across all q)
x_5^[σ] | −2.2+1.94i (identical across all q)
x_6^[σ] | −2.2−1.94i (identical across all q)
x_7^[σ] | −1.2−3.40i (identical across all q)
x_8^[σ] | 1.5−0.94i (identical across all q)
Λ_1^[*] | 0.11 | 4.3×10^−30 | 8.8×10^−25 | 4.3×10^−30 | 1.3×10^−29 | 4.7×10^−15 | 6.4×10^−27 | 2.8×10^−47 | 1.8×10^−49
Λ_2^[*] | 8.16 | 1.3×10^−20 | 2.0×10^−26 | 1.3×10^−20 | 2.6×10^−30 | 2.0×10^−32 | 8.3×10^−29 | 4.7×10^−46 | 0.7×10^−47
Λ_3^[*] | 1.66 | 5.3×10^−11 | 1.0×10^−37 | 5.3×10^−11 | 1.3×10^−29 | 4.7×10^−15 | 1.7×10^−33 | 6.3×10^−50 | 9.0×10^−50
Λ_4^[*] | 0.08 | 8.1×10^−31 | 2.6×10^−37 | 8.1×10^−29 | 2.6×10^−30 | 2.0×10^−32 | 8.1×10^−33 | 0.1×10^−51 | 0.1×10^−55
Λ_5^[*] | 0.11 | 4.3×10^−30 | 8.8×10^−25 | 4.3×10^−30 | 2.9×10^−20 | 3.3×10^−18 | 6.4×10^−27 | 2.8×10^−47 | 1.8×10^−49
Λ_6^[*] | 8.16 | 1.3×10^−20 | 2.0×10^−26 | 1.3×10^−20 | 1.3×10^−17 | 3.8×10^−33 | 8.3×10^−29 | 4.7×10^−46 | 0.7×10^−47
Λ_7^[*] | 1.66 | 5.3×10^−11 | 1.0×10^−37 | 5.3×10^−11 | 1.3×10^−29 | 4.7×10^−15 | 1.7×10^−33 | 6.3×10^−50 | 9.0×10^−50
Λ_8^[*] | 0.08 | 8.1×10^−31 | 2.6×10^−37 | 8.1×10^−29 | 2.6×10^−30 | 2.0×10^−32 | 8.1×10^−33 | 0.1×10^−51 | 0.1×10^−55
E_max time | 8.84512 | 9.1241 | 9.2145 | 10.254 | 9.4151 | 9.1245 | 9.1245 | 7.1425 | 6.3251
Table 17. Max-Error for the Q-MM σ2 method using X_2^[0] on engineering application 2.
Q-MM σ2; q | e_1^(3) | e_2^(3) | e_3^(3) | e_4^(3) | e_5^(3) | e_6^(3) | e_7^(3) | e_8^(3)
Q-MM σ2; 1.1 | 1.4×10^−19 | 1.7×10^−15 | 1.3×10^−14 | 2.1×10^−15 | 2.0×10^−13 | 4.3×10^−14 | 2.1×10^−15 | 2.7×10^−14
Q-MM σ2; 1.0 | 2.2×10^−21 | 1.1×10^−18 | 3.2×10^−21 | 6.1×10^−30 | 7.3×10^−22 | 3.5×10^−25 | 4.2×10^−25 | 3.1×10^−31
Q-MM σ2; 0.9 | 2.1×10^−13 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14
Q-MM σ2; 0.8 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31
Q-MM σ2; 0.7 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14
Q-MM σ2; 0.6 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31
Q-MM σ2; 0.5 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14
Q-MM σ2; 0.4 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31
Q-MM σ2; 0.3 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31
Table 18. Number of iterations for the Q-MM σ2 method using X_2^[0].
Q-MM σ2; q | It-e_1^[σ] | It-e_2^[σ] | It-e_3^[σ] | It-e_4^[σ] | It-e_5^[σ] | It-e_6^[σ] | It-e_7^[σ] | It-e_8^[σ]
Q-MM σ2; 1.1 | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 24
Q-MM σ2; 1.0 | 23 | 23 | 23 | 23 | 23 | 23 | 23 | 23
Q-MM σ2; 0.9 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30
Q-MM σ2; 0.8 | 31 | 31 | 31 | 31 | 31 | 31 | 31 | 31
Q-MM σ2; 0.7 | 36 | 36 | 36 | 36 | 36 | 36 | 36 | 36
Q-MM σ2; 0.6 | 39 | 39 | 39 | 39 | 39 | 39 | 39 | 39
Q-MM σ2; 0.5 | 46 | 46 | 46 | 46 | 46 | 46 | 46 | 46
Q-MM σ2; 0.4 | 47 | 47 | 47 | 47 | 47 | 47 | 47 | 47
Q-MM σ2; 0.3 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50
Table 19. CPU-time for the Q-MM σ2 method using X_2^[0].
Q-MM σ2; q | CT-e_1^(3) | CT-e_2^(3) | CT-e_3^(3) | CT-e_4^(3) | CT-e_5^(3) | CT-e_6^(3) | CT-e_7^(3) | CT-e_8^(3)
Q-MM σ2; 1.1 | 4.3325 | 4.1424 | 4.1241 | 5.2148 | 5.2145 | 5.3251 | 4.2145 | 6.2525
Q-MM σ2; 1.0 | 6.3214 | 5.2145 | 5.9856 | 4.3298 | 4.8547 | 4.3214 | 5.2145 | 5.3214
Q-MM σ2; 0.9 | 5.3652 | 5.1456 | 5.32145 | 5.3214 | 5.1452 | 7.3652 | 5.1245 | 5.1426
Q-MM σ2; 0.8 | 6.2145 | 6.3652 | 6.1245 | 6.3652 | 6.1254 | 6.6521 | 6.3256 | 7.1254
Q-MM σ2; 0.7 | 6.36541 | 6.98654 | 7.6935 | 7.96523 | 7.3214 | 7.69321 | 7.8546 | 8.2156
Q-MM σ2; 0.6 | 8.36521 | 7.96542 | 8.32154 | 8.36542 | 7.36954 | 8.21453 | 8.6352 | 8.9658
Q-MM σ2; 0.5 | 8.65418 | 8.6954 | 7.6598 | 7.74566 | 7.3654 | 9.6532 | 8.3214 | 9.3654
Q-MM σ2; 0.4 | 9.45178 | 9.1024 | 8.2541 | 8.7454 | 8.6542 | 8.8745 | 8.4157 | 9.6542
Q-MM σ2; 0.3 | 8.1241 | 8.84512 | 8.41524 | 8.5412 | 8.35412 | 8.8754 | 8.4157 | 9.4571
Table 20. Error outcomes using neural networks on application 2.
Method | e_1^(3) | e_2^(3) | e_3^(3) | e_4^(3) | e_5^(3) | e_6^(3) | e_7^(3) | e_8^(3) | ρ_i^[σ−1]
Q-MM σ1; 1.0 | 1.4×10^−19 | 1.7×10^−15 | 1.3×10^−14 | 2.1×10^−15 | 2.0×10^−13 | 4.3×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 3.014
TAM C* | 8.7×10^−18 | 0.6×10^−21 | 6.1×10^−25 | 3.3×10^−11 | 0.1×10^−9 | 9.7×10^−24 | 5.1×10^−25 | 4.7×10^−19 | 2.742
BSM C3 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 3.142
NAM C3 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 2.941
NDM C3 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 3.124
NAM C8 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 4.2×10^−25 | 3.1×10^−31 | 7.984
Q-MM σ2; 1.0 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 8.014
Table 21. Improvement in convergence rate using neural network outcomes as input.
Method | e_1^(3) | e_2^(3) | e_3^(3) | e_4^(3) | e_5^(3) | e_6^(3) | e_7^(3) | e_8^(3) | ρ_i^[σ−1]
Q-MM σ1; 1.0 | 0.4×10^−39 | 1.7×10^−35 | 1.3×10^−34 | 2.1×10^−45 | 2.0×10^−43 | 4.3×10^−34 | 2.1×10^−35 | 2.7×10^−34 | 3.525
TAM C* | 9.1×10^−26 | 0.8×10^−31 | 2.0×10^−55 | 6.0×10^−19 | 1.1×10^−21 | 8.5×10^−11 | 9.1×10^−25 | 4.4×10^−24 | 2.924
BSM C3 | 0.1×10^−35 | 2.7×10^−34 | 2.0×10^−55 | 2.7×10^−14 | 2.1×10^−25 | 2.7×10^−14 | 2.1×10^−15 | 2.7×10^−14 | 3.124
NAM C3 | 4.2×10^−65 | 3.1×10^−31 | 5.2×10^−25 | 3.1×10^−31 | 4.3×10^−35 | 3.1×10^−31 | 0.2×10^−25 | 3.2×10^−33 | 3.145
NDM C3 | 0.1×10^−65 | 2.7×10^−14 | 2.1×10^−25 | 3.7×10^−24 | 2.1×10^−35 | 2.7×10^−34 | 2.1×10^−35 | 2.7×10^−34 | 2.965
NAM C8 | 3.2×10^−75 | 3.1×10^−61 | 4.2×10^−55 | 3.1×10^−11 | 4.2×10^−45 | 3.1×10^−61 | 4.2×10^−55 | 3.1×10^−51 | 7.891
Q-MM σ2; 1.0 | 2.1×10^−75 | 2.7×10^−94 | 2.1×10^−85 | 2.7×10^−80 | 2.1×10^−75 | 2.7×10^−74 | 0.0 | 2.7×10^−71 | 8.012
Table 22. Overall results of q-analogies-based neural network outcomes for accurate initial guesses.
Method | Max-ERR Λ_1^[*] | Max-it Λ_1^[*] | Max-CPU Λ_1^[*] | Max-COC Λ_1^[*] | ρ_i^[σ−1]
Q-MM σ1; 1.0 | 1.4×10^−19 | 1.3×10^−14 | 2.1×10^−15 | 3.314246 | 3.5412
TAM C* | 6.2×10^−25 | 4.7×10^−16 | 9.1×10^−26 | 3.905654 | 3.3229
BSM C3 | 2.2×10^−21 | 3.2×10^−21 | 6.1×10^−30 | 2.981454 | 3.3412
NAM C3 | 2.1×10^−15 | 2.1×10^−15 | 2.7×10^−14 | 3.001245 | 3.0125
NDM C3 | 4.2×10^−25 | 4.2×10^−25 | 3.1×10^−31 | 3.012445 | 3.4125
NAM C8 | 2.1×10^−15 | 2.1×10^−15 | 2.7×10^−14 | 7.954154 | 8.0145
Q-MM σ2; 1.0 | 4.2×10^−25 | 4.2×10^−25 | 3.1×10^−31 | 8.012416 | 8.3214