A High-Order Numerical Scheme for Efficiently Solving Nonlinear Vectorial Problems in Engineering Applications

by Mudassir Shams 1,2 and Bruno Carpentieri 1,*
1 Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bolzano, Italy
2 Department of Mathematics and Statistics, Riphah International University, I-14, Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2357; https://doi.org/10.3390/math12152357
Submission received: 17 June 2024 / Revised: 18 July 2024 / Accepted: 24 July 2024 / Published: 28 July 2024
(This article belongs to the Section E: Applied Mathematics)

Abstract

In scientific and engineering disciplines, vectorial problems involving systems of equations or functions with multiple variables frequently arise, often defying analytical solutions and necessitating numerical techniques. This research introduces an efficient numerical scheme capable of simultaneously approximating all roots of nonlinear equations with a convergence order of ten, specifically designed for vectorial problems. Random initial vectors are employed to assess the global convergence behavior of the proposed scheme. The newly developed method surpasses methods in the existing literature in terms of accuracy, consistency, computational CPU time, residual error, and stability. This superiority is demonstrated through numerical experiments tackling engineering problems and solving heat equations under various diffusibility parameters and boundary conditions. The findings underscore the efficacy of the proposed approach in addressing complex nonlinear systems encountered in diverse applied scenarios.

1. Introduction

Scalar nonlinear equations of a single variable γ,
f(γ) = 0, (1)
play a pivotal role in advancing scientific understanding and engineering [1,2]. Various scientific disciplines, including physics, chemistry, biology, and economics, utilize nonlinear equations to describe complex correlations and interactions between variables. These equations enable scientists to more accurately characterize chaotic systems, fluid dynamics, and population dynamics compared to linear models. In engineering, nonlinear equations are crucial in areas such as control systems [3], structural analysis [4], and electrical circuits [5]. Engineers use these equations to model and predict real-world behaviors by accounting for nonlinearities in materials and systems. Nonlinear optimization techniques are essential for solving engineering problems including parameter estimation [6], optimal control [7], and system design [8]. The significance of nonlinear equations extends to emerging fields like artificial intelligence and machine learning [9], where they are used for complex data processing and pattern recognition [10]. Overall, nonlinear equations and their associated systems are indispensable tools for scientists and engineers striving to understand and manage complex systems, thereby fostering the advancement of knowledge and technology.
Solving nonlinear equations analytically can be challenging, and often impossible, due to the intrinsic complexity of nonlinear interactions. Nonlinear equations include terms that are not simply proportional to the variable of interest, and their solutions may not be expressible in closed-form expressions or simple algebraic equations [11,12,13,14]. Therefore, we turn to numerical iterative schemes. Iterative numerical methods are effective in solving nonlinear equations and systems, making them invaluable tools for researchers across various fields [15,16,17,18]. These numerical iterative techniques are classified into three types: single root-finding schemes with local convergence behavior, simultaneous methods for finding all roots of (1) with global convergence behavior, and schemes that find all solutions to nonlinear systems of equations (i.e., vectorial problems). Iterative techniques for solving nonlinear systems of equations, such as gradient descent [19] or evolutionary algorithms that search for roots simultaneously across multiple dimensions in the solution space [20], exhibit local convergence behavior.
The simplest and most efficient method is the classical Newton’s method [21] for solving (1), given by
σ[k] = γ[k] − f(γ[k]) / f′(γ[k]), k = 0, 1, …, (2)
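As a minimal illustration (our own Python sketch, not the authors' code), Newton's method (2) repeats the correction step until it falls below a tolerance:

```python
def newton(f, fprime, gamma0, tol=1e-12, max_iter=100):
    """Classical Newton iteration: gamma <- gamma - f(gamma)/f'(gamma)."""
    gamma = gamma0
    for _ in range(max_iter):
        step = f(gamma) / fprime(gamma)
        gamma -= step
        if abs(step) < tol:  # correction small enough: accept the root
            break
    return gamma

# Example: the positive root of f(x) = x^2 - 2, starting from 1.5
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

For this quadratic the iteration converges to √2 in a handful of steps, consistent with the local quadratic convergence of (2).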
The method (2) exhibits local quadratic convergence. To reduce the computational cost of (2), Steffensen [22] proposed the following modified version:
σ[k] = γ[k] − f(γ[k]) / [θ11[k], γ[k]; f], (3)
where θ11[k] = γ[k] + f(γ[k]), and [θ11[k], γ[k]; f] : Δ ⊂ R → R is the first-order forward divided difference on Δ, i.e., [θ11[k], γ[k]; f] = (f(θ11[k]) − f(γ[k])) / (θ11[k] − γ[k]) [23]. In high-precision computing, the divided difference is replaced with a first-order central difference on Δ as follows:
σ[k] = γ[k] − f(γ[k]) / [θ11[k], θ12[k]; f], (4)
where θ12[k] = γ[k] − f(γ[k]) and [θ11[k], θ12[k]; f] is the first-order central divided difference operator. The two-step modified Newton's method [24] with third-order convergence has the form
ς[k] = τ[k] − f(τ[k]) / f′(γ[k]), (5)
where τ[k] = γ[k] − f(γ[k]) / f′(γ[k]).
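The derivative-free Steffensen-type variant with the central divided difference, θ11 = γ + f(γ) and θ12 = γ − f(γ), can be sketched as follows (an illustrative Python sketch with hypothetical function names):

```python
def steffensen_central(f, gamma0, tol=1e-12, max_iter=100):
    """Derivative-free iteration using the first-order central divided
    difference [theta11, theta12; f] with theta11 = g + f(g), theta12 = g - f(g)."""
    gamma = gamma0
    for _ in range(max_iter):
        fg = f(gamma)
        t1, t2 = gamma + fg, gamma - fg
        dd = (f(t1) - f(t2)) / (t1 - t2)  # central divided difference
        step = fg / dd
        gamma -= step
        if abs(step) < tol:
            break
    return gamma

# Same test problem as before: f(x) = x^2 - 2 from 1.5
root = steffensen_central(lambda x: x * x - 2.0, 1.5)
```

No derivative is supplied: the divided difference built from the two auxiliary points plays the role of f′.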
Higher-order schemes offer considerable advantages over lower-order schemes for solving nonlinear equations due to improved accuracy and efficiency. They achieve higher accuracy per iteration step, reducing truncation errors and requiring fewer iterations to reach the desired precision. The order of convergence can be raised to three, four, and beyond, as in the well-known fourth-order Ostrowski method [25]:
τ[k] = γ[k] − f(γ[k]) / f′(γ[k]),
ς[k] = τ[k] − (f(γ[k]) / (f(γ[k]) − 2f(τ[k]))) · (f(τ[k]) / f′(γ[k])). (6)
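Ostrowski's two-step scheme, a Newton predictor followed by a weighted corrector, can be sketched as (our own Python sketch, assuming a simple-root setting):

```python
def ostrowski(f, fprime, gamma0, tol=1e-12, max_iter=50):
    """Fourth-order Ostrowski iteration: Newton predictor tau, then a
    corrector weighted by f(gamma) / (f(gamma) - 2 f(tau))."""
    gamma = gamma0
    for _ in range(max_iter):
        fg, dfg = f(gamma), fprime(gamma)
        if fg == 0.0:  # already at a root
            return gamma
        tau = gamma - fg / dfg
        ftau = f(tau)
        new = tau - (fg / (fg - 2.0 * ftau)) * (ftau / dfg)
        if abs(new - gamma) < tol:
            return new
        gamma = new
    return gamma

root = ostrowski(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Only one derivative evaluation per iteration is needed, which is why Ostrowski-type correctors are popular building blocks for the higher-order schemes below.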
Similarly, Kou et al. [26] developed sixth-order methods using the weight function technique, written as
τ[k] = γ[k] − f(γ[k]) / f′(γ[k]),
z[k] = τ[k] − (f(γ[k]) / (f(γ[k]) − 2f(τ[k]))) · (f(τ[k]) / f′(γ[k])),
γ[k+1] = z[k] − (f(z[k]) / f′(γ[k])) · [ ((f(γ[k]) − f(τ[k])) / (f(γ[k]) − 2f(τ[k])))² + f(z[k]) / (f(τ[k]) − α f(z[k])) ]. (7)
Liu et al. [27] presented the following eighth-order methods using the weight function technique:
τ[k] = γ[k] − f(γ[k]) / f′(γ[k]),
z[k] = τ[k] − (f(γ[k]) / (f(γ[k]) − 2f(τ[k]))) · (f(τ[k]) / f′(γ[k])),
γ[k+1] = z[k] − (f(z[k]) / f′(γ[k])) · [ ((f(γ[k]) − f(τ[k])) / (f(γ[k]) − 2f(τ[k])))² + f(z[k]) / (f(τ[k]) − α1 f(z[k])) + 4f(z[k]) / (f(τ[k]) − α2 f(z[k])) ], (8)
where α 1 , α 2 R .
Numerous single- and multi-step methods exist for solving (1), and some of these methods can be applied to solving systems of nonlinear equations with local convergence behavior [28]. Noor et al. [29], Darvishi et al. [30], Babajee et al. [31], Ortega et al. [32], and others (see e.g., [33,34] and references therein) have employed (2) as a predictor step to construct multi-step approaches for solving nonlinear equation systems. Iterative methods for finding a single root of nonlinear equations, though widely used, have certain inherent limitations that researchers must consider. One primary concern is convergence; these methods may fail to find a solution if the initial guess is not close enough to a root or if the function has abrupt changes. The dependence on initial guesses poses a significant challenge, as inaccurate or poorly chosen starting points can result in slow convergence or divergence [35,36]. Furthermore, iterative methods usually provide only local solutions, with no guarantee of accurately identifying all roots, particularly multiple roots. The computational cost can be significant, especially for complex functions or high-dimensional systems, and ill-conditioned problems can lead to numerical instability. Additionally, these methods generally provide root values without information about their multiplicity, and their applicability may be limited in the presence of discontinuities or non-smooth features [37]. Due to these limitations, alternative approaches may be needed depending on the specific characteristics of the nonlinear equations under investigation. Therefore, we turn to simultaneous methods, which are more stable, consistent, and reliable, and can also be applied in parallel computing (see e.g., [38,39]).
In 1891, Weierstrass [40] introduced the generalized form of (2) by incorporating the Weierstrass correction, which was later explored by Presic [41], Durand [42], Dochev [43], and Kerner [44]. In 2015, Proinov et al. [45] proposed the local convergence theorem for the double Weierstrass technique. In 2016, Nedzhibov constructed a modified version of the Weierstrass technique [46] and presented its local convergence analysis [47] in 2018. In 1973, Aberth [48] developed a third-order convergent simultaneous method with derivatives, which was then accelerated by Nourein [49] to the fourth order in 1977, by Petković [50] to the sixth order in 2020, and by Mir et al. [51] to the tenth order using various single root-finding methods as corrections. Cholakov [52,53] and Marcheva et al. [54] proposed the local convergence of multi-step simultaneous methods for determining all roots of (1). In 2023, Shams et al. [55,56] described the global convergence behavior of simultaneous algorithms using random initial guesses, along with contributions from many others.
Among derivative-free simultaneous methods, the Weierstrass–Dochev method [57] (abbreviated as BM) is the most attractive. It is given by
γt[k+1] = γt[k] − Δt(γt[k]), (9)
where
Δt(γt[k]) = f(γt[k]) / ∏_{j=1, j≠t}^{m} (γt[k] − γj[k]), (t, j = 1, 2, …, m), (10)
is the Weierstrass correction. The method (10) has local quadratic convergence.
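The Weierstrass correction updates all estimates at once from the current set. A compact sketch (our own NumPy implementation of the classical total-step Weierstrass/Durand–Kerner iteration; the starting values are the customary complex powers, an assumption rather than the paper's choice):

```python
import numpy as np

def weierstrass_dk(coeffs, m, tol=1e-12, max_iter=200):
    """Total-step Weierstrass iteration: each estimate gamma_t is corrected
    by w_t = f(gamma_t) / prod_{j != t} (gamma_t - gamma_j)."""
    p = np.poly1d(coeffs)                 # monic polynomial expected
    gamma = (0.4 + 0.9j) ** np.arange(m)  # standard complex starting values
    for _ in range(max_iter):
        w = np.array([p(gamma[t]) / np.prod(gamma[t] - np.delete(gamma, t))
                      for t in range(m)])
        gamma = gamma - w
        if np.max(np.abs(w)) < tol:       # all corrections below tolerance
            break
    return gamma

# All roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = weierstrass_dk([1.0, -6.0, 11.0, -6.0], 3)
```

Working in complex arithmetic lets the same loop recover real and complex roots simultaneously, which is the main practical appeal of this family.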
In 1977, Ehrlich [58] presented the following convergent simultaneous method (abbreviated as EM) of the third order:
γt[k+1] = γt[k] − 1 / ( 1/Nt(γt[k]) − Σ_{j=1, j≠t}^{m} 1/(γt[k] − γj[k]) ), (11)
where Nt(γt[k]) = f(γt[k])/f′(γt[k]). Petković et al. [59] accelerated the convergence order of (11) from three to six by replacing γj[k] with the correction uj[k]:
γt[k+1] = γt[k] − 1 / ( 1/Nt(γt[k]) − Σ_{j=1, j≠t}^{m} 1/(γt[k] − uj[k]) ), (12)
where uj[k] = vj[k] − (f(γj[k]) f(vj[k])) / ( f′(γj[k]) (f(γj[k]) − 2f(vj[k])) ) and vj[k] = γj[k] − f(γj[k]) / f′(γj[k]).
Petkovic et al. [60] accelerated the convergence order of (11) from three to ten, as shown in the following method (abbreviated as PMϵ):
γt[k+1] = γt[k] − 1 / ( 1/Nt(γt[k]) − Σ_{j=1, j≠t}^{m} 1/(γt[k] − zj[k]) ), (13)
where zj[k] = uj[k] − (τj[k] − uj[k]) · (f(uj[k]) / f(γj[k])) · (f(γj[k]) / (f(γj[k]) − f(uj[k]) − 2f(τj[k]))) · (f(γj[k]) / (f(τj[k]) − f(uj[k]))), uj[k] = τj[k] − (f(γj[k]) f(τj[k])) / ( f′(γj[k]) (f(γj[k]) − 2f(τj[k])) ), and τj[k] = γj[k] − f(γj[k]) / f′(γj[k]).
Shams et al. [61] proposed a three-step simultaneous scheme for finding all polynomial roots (abbreviated as MMϵ):
γt[k+1] = zt[k] − f(zt[k]) / ∏_{j=1, j≠t}^{m} (zt[k] − zj[k]), (14)
where zt[k] = τt[k] − f(τt[k]) / ∏_{j=1, j≠t}^{m} (τt[k] − τj[k]), τt[k] = γt[k] − f(γt[k]) / ∏_{j=1, j≠t}^{m} (γt[k] − vj[k]), and vj[k] = γj[k] − α f(γj[k])² / ( f(γj[k] + α f(γj[k])) − f(γj[k]) ), α ∈ R. The numerical scheme (14) exhibits a convergence order of twelve.
A review of the existing literature reveals the following:
  • Most iterative methods used for solving nonlinear equations and systems are highly effective at converging to solutions when the initial guess is close to a root.
  • These iterative techniques are particularly sensitive to initial guesses and may fail to converge if the initial values are not chosen precisely.
  • Local convergence algorithms may lack stability and consistency in many cases.
  • Iterative methods are susceptible to rounding errors and may fail to converge when the problem is poorly conditioned.
  • Nonlinear equations and systems can have multiple solutions, and achieving convergence to the desired solution based on initial estimates can be challenging.
Hristov et al. [62] proposed the generalized Weierstrass method for solving systems of nonlinear equations (abbreviated as BMϵ):
γ_{t,r}[k+1] = γ_{t,r}[k] − f_t(γ_{t,r}[k]) / ∏_{l≠s}^{m} (γ_{t,l}[k] − γ_{t,s}[k]). (15)
Chinesta et al. [63] proposed the generalized method (11) for nonlinear systems (abbreviated as EMϵ):
γ_{t,r}[k+1] = γ_{t,r}[k] − f_t(γ_{t,r}[k]) / ( [θ31[k], γ[k]; f_t] − f_t(γ_{t,r}[k]) Σ_{l≠s}^{m} 1/(γ_{t,l}[k] − γ_{t,s}[k]) ), (16)
where θ31[k] = γ[k] + β f(γ[k]), and β ∈ R. The order of convergence of EMϵ is two. Motivated by prior work, the main objective of this study is to develop a novel family of efficient, higher-order simultaneous schemes. These schemes aim not only to compute all roots of nonlinear equations simultaneously but also to solve nonlinear systems of equations comprehensively, thereby addressing the limitations outlined earlier. The structure of the paper is as follows: after the introduction, Section 2 introduces and analyzes a new family of two-step vectorial simultaneous algorithms. Section 3 is dedicated to discussing computational efficiencies, while in Section 4, we present and discuss the numerical results obtained from our proposed schemes. Finally, the concluding section summarizes the key findings, contributions of this research, and directions for future work.

2. Constructing a Family of Simultaneous Methods for Distinct Roots

Consider the two-step Newton’s method [64] for finding the simple root of (1):
τ[k] = γ[k] − f(γ[k]) / f′(γ[k]), ϑ[k] = τ[k] − f(τ[k]) / f′(τ[k]). (17)
The method (17) exhibits fourth-order convergence if ζ is a simple root of (1) and ϵ = γ[k] − ζ; the error equation is
ϑ[k] − ζ ≈ (γ[k] − ζ)⁴ ( C₂²C₃ − C₂⁵ + 2C₃²C₂ − C₃C₄C₂² ), (18)
with
C_ω = f^(ω)(ζ) / (ω! f′(ζ)), ω = 2, 3, …, (19)
or
ϑ[k] − ζ = O(ϵ⁴). (20)
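The two-step Newton iteration (17), a Newton predictor followed by a full Newton corrector at the predicted point, can be sketched as (an illustrative Python sketch, not the authors' implementation):

```python
def two_step_newton(f, fprime, gamma0, tol=1e-12, max_iter=50):
    """Two-step Newton: predictor tau = g - f(g)/f'(g), then corrector
    evaluated at tau with its own derivative."""
    gamma = gamma0
    for _ in range(max_iter):
        tau = gamma - f(gamma) / fprime(gamma)
        new = tau - f(tau) / fprime(tau)
        if abs(new - gamma) < tol:
            return new
        gamma = new
    return gamma

root = two_step_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Each outer iteration performs two Newton sub-steps, which is what produces the O(ϵ⁴) error behavior quoted above.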
Suppose (1) has m distinct roots. Then, f ( γ ) and f ( γ ) can be approximated as
f(γ) = ∏_{j=1}^{m} (γ − γj) and f′(γ) = Σ_{t=1}^{m} ∏_{j=1, j≠t}^{m} (γ − γj). (21)
This implies that
f′(γ) / f(γ) = Σ_{j=1}^{m} 1/(γ − γj), so that f(γ) / f′(γ) = 1 / ( 1/(γ − γt) + Σ_{j=1, j≠t}^{m} 1/(γ − γj) ). (22)
By using (22) in (11), we developed a new simultaneous scheme (abbreviated as BDϵ):
ςt[k] = τt[k] − 1 / ( 1/N(τt[k]) − Σ_{j=1, j≠t}^{m} 1/(τt[k] − τj[k]) ), (23)
where τt[k] = γt[k] − 1 / ( 1/N(γt[k]) − Σ_{j=1, j≠t}^{m} 1/(γt[k] − ϑj[k]) ), with N(γt[k]) = f(γt[k]) / f′(γt[k]) and ϑj[k] = γj[k] − f(γj[k]) (bj[k] − γj[k]) / ( f(bj[k]) − f(γj[k]) ), where bj[k] = γj[k] − f(γj[k]) (aj[k] − γj[k]) / ( 2f(aj[k]) − f(γj[k]) ) and aj[k] = γj[k] − f(γj[k]) / [γj[k], γj[k] + f(γj[k]); f]. The method (23) for multiple roots can also be expressed as (abbreviated as BMϵ)
ςt[k] = τt[k] − ℘t / ( 1/N(τt[k]) − Σ_{j=1, j≠t}^{m} ℘j/(τt[k] − τj[k]) ), (24)
where τt[k] = γt[k] − ℘t / ( 1/N(γt[k]) − Σ_{j=1, j≠t}^{m} ℘j/(γt[k] − ϑj[k]) ) and ℘t denotes the multiplicity of the t-th root. To develop a derivative-free approach, we replace f′(γt[k]) with the central difference operator [θ11[k], θ12[k]; f], resulting in (abbreviated as DFϵ)
φt[k] = ςt[k] − ℘t / ( [θ21[k], θ22[k]; f] / f(ςt[k]) − Σ_{j=1, j≠t}^{m} ℘j/(ςt[k] − ςj[k]) ), (25)
where ςt[k] = τt[k] − ℘t / ( [θ11[k], θ12[k]; f] / f(τt[k]) − Σ_{j=1, j≠t}^{m} ℘j/(τt[k] − τj[k]) ), θ11[k] = τt[k] + f(τt[k]), θ12[k] = τt[k] − f(τt[k]), θ21[k] = ςt[k] + f(ςt[k]), and θ22[k] = ςt[k] − f(ςt[k]).
Assume the system
F(γ) = 0, (26)
has m solutions ξt = (ξt1, ξt2, …, ξtn) for t = 1, …, m, where F : Δ ⊆ R^m → R^m is defined over an open convex domain Δ ⊆ R^m. The primary goal of this research is to develop a numerical scheme that finds all solutions of the nonlinear system (26) simultaneously. To this end, we assume a set of m initial guesses γt[0] = (γt1[0], …, γtn[0]), for t = 1, …, m, and define ⊖t^sum(γ[k]) = (⊖_{t,1}^sum(γ[k]), …, ⊖_{t,n}^sum(γ[k])) [65],
where
⊖_{t,r}^sum(τ[k]) = Σ_{j=1, j≠t}^{m} 1/(τ_{t,r}[k] − τ_{j,r}[k]), (27)
and
⊖_{t,r}^sum(ς[k]) = Σ_{j=1, j≠t}^{m} 1/(ς_{t,r}[k] − ς_{j,r}[k]). (28)
Using (27) and (28) in (25), we develop a new simultaneous scheme (abbreviated as DSϵ) as follows:
φ_{t,r}[k] = ς_{t,r}[k] − F(ς_{t,r}[k]) / ( [Θ21[k], Θ22[k]; F] − F(ς_{t,r}[k]) ⊖_{t,r}^sum(ς[k]) ), (29)
where ς_{t,r}[k] = τ_{t,r}[k] − F(τ_{t,r}[k]) / ( [Θ11[k], Θ12[k]; F] − F(τ_{t,r}[k]) ⊖_{t,r}^sum(τ[k]) ), Θ11[k] = τ_{t,r}[k] + F(τ_{t,r}[k]), Θ12[k] = τ_{t,r}[k] − F(τ_{t,r}[k]), Θ21[k] = ς_{t,r}[k] + F(ς_{t,r}[k]), Θ22[k] = ς_{t,r}[k] − F(ς_{t,r}[k]), and [Θ11[k], Θ12[k]; F] : Δ ⊆ R^m → R^m is the first-order central divided difference on Δ.
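The divided-difference bracket plays the role of the Jacobian in the vectorial scheme. As a loose illustration only (our own sketch; the paper's operator uses symmetric points built from F itself, and the step choice here is an assumption), a component-wise first-order divided-difference approximation of the Jacobian looks like this:

```python
import numpy as np

def divided_difference_jacobian(F, gamma):
    """First-order divided-difference approximation of the Jacobian of F
    at gamma, stepping each coordinate i by h_i = F_i(gamma) in the spirit
    of Steffensen-type operators (illustrative, not the paper's bracket)."""
    gamma = np.asarray(gamma, dtype=float)
    Fg = np.asarray(F(gamma), dtype=float)
    n = gamma.size
    J = np.empty((n, n))
    for i in range(n):
        h = Fg[i] if Fg[i] != 0.0 else 1e-8  # fall back to a tiny step
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (np.asarray(F(gamma + e)) - Fg) / h  # column i
    return J

# For an affine map the divided difference reproduces the Jacobian exactly
J = divided_difference_jacobian(
    lambda x: np.array([2 * x[0] + x[1] - 1.0, x[0] - x[1]]), [1.0, 1.0])
```

For nonlinear F the approximation error is first order in the step, which is why the scheme pairs it with the symmetric (central) points Θ11, Θ12.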
The theoretical order of convergence of the parallel scheme BDϵ − BMϵ for approximating the roots of nonlinear equations is demonstrated in Theorem 1. For the case of multiplicity unity, we observe the convergence of BDϵ.
Theorem 1. 
Let ξ1, …, ξm be roots of (1) with respective multiplicities ℘t. If the initial guesses γ1[0], …, γm[0] are sufficiently close to the actual roots, then the BMϵ method has a convergence order of ten.
Proof. 
Let ϵt = γt[k] − ξt, ϵt′ = τt[k] − ξt and ϵt″ = ςt[k] − ξt represent the errors in γt[k], τt[k], and ςt[k], respectively. Considering the first step of BMϵ, we have
τ t [ k ] = γ t [ k ] 1 1 N ( γ t [ k ] ) j t j = 1 m 1 ( γ t [ k ] ϑ j [ k ] ) ,
where N ( γ t [ k ] ) = f ( γ t [ k ] ) f ( γ t [ k ] ) . For distinct roots, we have
1 / N(γt[k]) = f′(γt[k]) / f(γt[k]) = Σ_{j=1}^{m} 1/(γt[k] − ξj)
= 1/(γt[k] − ξt) + Σ_{j=1, j≠t}^{m} 1/(γt[k] − ξj).
For multiple roots, the BDϵ method can be expressed as
τ t [ k ] = γ t [ k ] k k ( γ t [ k ] ξ t ) + j t j = 1 n j ( γ t [ k ] ξ j ) j t j = 1 m j ( γ t [ k ] ϑ j [ k ] ) ,
τ t [ k ] ξ t = γ t [ k ] ξ t σ k σ t ( γ t [ k ] ξ t ) + j t j = 1 m j γ t [ k ] ) ϑ j [ k ] γ t [ k ] + ξ j γ t [ k ] ξ j γ k [ k ] ϑ j [ k ] ,
ϵ t = ϵ t t t ϵ t + j t j = 1 m j ϑ j [ k ] ξ j γ t [ k ] ξ j γ t [ k ] ϑ j [ k ] ,
ϵ t = ϵ t t ϵ t t + ϵ t j t j = 1 m j ϑ j [ k ] ξ j γ t [ k ] ξ j γ t [ k ] ϑ j [ k ] ,
= ϵ t t ϵ t t + ϵ t j t j = 1 m U t ϵ j 4 ,
where ς j [ k ] ξ j = ϵ j 3 as per [24]. Using Equation (37) and U t = j γ t [ k ] ξ j γ t [ k ] ϑ j [ k ] , we obtain
ϵ k = ϵ t 2 j t j = 1 m U t ϵ j 3 σ t + ϵ t j t j = 1 m U t ϵ j 3 .
Assuming ϵ t = ϵ j = O ϵ , from Equation (38), we have
ϵ t = O ( ϵ t ) 5 .
Now, consider the second step of BMϵ:
ς t [ k ] = τ t [ k ] t t τ t [ k ] ξ k + j t j = 1 m j τ t [ k ] ξ j j t j = 1 n j τ t [ k ] τ j [ k ] ,
ς t [ k ] ξ t = τ t [ k ] ξ t t t τ t [ k ] ξ t + j t j = 1 n j γ t [ k ] τ j [ k ] τ t [ k ] + ξ j τ t [ k ] ξ j τ t [ k ] τ j [ k ] ,
ϵ t = ϵ t t t ϵ t + j t j = 1 m j τ j [ k ] ξ j τ t [ k ] ξ j τ t [ k ] τ j [ k ] ,
ϵ t = ϵ t k σ t ϵ t + j t t = 1 m j τ j [ k ] ξ j τ t [ k ] ξ j τ t [ k ] τ j [ k ] ,
= ϵ t k t ϵ t + j t j = 1 n U t ϵ j ,
Using Equation (44) and the definition of U t = j τ j [ k ] ξ j τ t [ k ] ξ j τ t [ k ] τ j [ k ] , we have
ϵ t = ϵ t j t j = 1 m ϵ j U t t + ϵ t j t j = 1 m U t ϵ j .
Assuming ϵ t = ϵ j = O ϵ , from Equation (45), we obtain
ϵ t = O ( ϵ ) 2 = O ( ϵ 5 ) 2 = O ( ϵ ) 10 .
Hence, the theorem is proved.    □
The theoretical order of convergence of the derivative-free parallel scheme DFϵ to approximate all the roots of nonlinear equations is demonstrated in Theorem 2.
Theorem 2. 
Let ξ 1 , . . . , ξ m be the simple roots of Equation (1). If the initial guesses γ 1 [ k ] , , γ m [ k ] are sufficiently close to these roots, then the DFϵ scheme achieves eighth-order convergence.
Proof. 
Let ϵτ = τt[k] − ξt, ϵς = ςt[k] − ξt, and ϵφ = φt[k] − ξt be the errors in τt[k], ςt[k], and φt[k], respectively. Expanding f(τt[k]) in a Taylor series around ξ, we have
f(τt[k]) = f′(ξ) ( ϵτ + c2 ϵτ² + c3 ϵτ³ + ⋯ ),
where ci = f^(i)(ξ) / (i! f′(ξ)) for i ≥ 2. Thus,
[θ11[k], θ12[k]; f] = f′(ξ) ( 1 + 2c2 ϵτ + (3c3 + c3 f′(ξ)²) ϵτ² + ⋯ ).
Considering the first step of DFϵ, we have
ς t [ k ] = τ t [ k ] f ( τ t [ k ] ) θ 11 [ k ] , θ 12 [ k ] ; f f ( τ t [ k ] ) j t j = 1 m 1 τ t [ k ] τ j [ k ] ,
ς t [ k ] ξ t = τ t [ k ] ξ t f ( τ t [ k ] ) θ 11 [ k ] , θ 12 [ k ] ; f f ( τ t [ k ] ) j t j = 1 m 1 τ t [ k ] τ j [ k ] ,
ϵ ς = ϵ τ ϵ τ + c 2 ϵ τ 2 + c 3 ϵ τ 3 + . . . 1 + 2 c 2 ϵ τ + 3 c 3 + c 3 f ξ 2 ϵ τ 2 . . . ϵ τ j t j = 1 m 1 τ t [ k ] τ j [ k ] + . . . ,
ϵ ς = ϵ τ ϵ τ + c 2 ϵ τ 2 + c 3 ϵ τ 3 + 1 + 2 c 2 j t j = 1 m 1 τ t [ k ] τ j [ k ] ϵ τ + O ϵ τ 2 ,
ϵ ς = ϵ τ + 2 c 2 j t t = 1 m 1 τ t [ k ] τ j [ k ] ϵ τ 2 + ϵ τ c 2 ϵ τ 2 c 3 ϵ τ 3 1 + 2 c 2 j t j = 1 m 1 τ t [ k ] τ j [ k ] ϵ τ + O ϵ τ 2 ,
ϵ ς = 2 c 2 c 2 j t j = 1 m 1 τ t [ k ] τ j [ k ] ϵ τ 2 + . . . 1 + 2 c 2 j t j = 1 m 1 τ t [ k ] τ j [ k ] ϵ τ + O ϵ τ 2
ϵ ς = O ϵ τ 2 .
Now, considering the second step of DFϵ, we have
φ t [ k ] = ς t [ k ] f ( ς t [ k ] ) θ 11 [ k ] , θ 12 [ k ] ; f f ( ς t [ k ] ) j t t = 1 m 1 ς t [ k ] ς j [ k ] ,
φ t [ k ] ξ t = ς t [ k ] ξ t f ( ς t [ k ] ) θ 11 [ k ] , θ 12 [ k ] ; f f ( ς t [ k ] ) j t t = 1 m 1 ς t [ k ] ς j [ k ] ,
ϵ φ = ϵ ς ϵ ς + c 2 ϵ ς 2 + c 3 ϵ ς 3 + 1 + 2 c 2 ϵ ς + 3 c 3 + c 3 f ξ 2 ϵ ς 2 ϵ ς j t j = 1 m 1 ς t [ k ] ς j [ k ] + ,
ϵ φ = ϵ ς ϵ ς + c 2 ϵ ς 2 + c 3 ϵ ς 3 + . . . 1 + 2 c 2 j t j = 1 m 1 ς t , r [ k ] ς j , r [ k ] ϵ τ + O ϵ τ 2 ,
ϵ φ = ϵ ς + 2 c 2 j t j = 1 m 1 ς t [ k ] ς j [ k ] ϵ ς 2 + . . . ϵ ς c 2 ϵ ς 2 c 3 ϵ ς 3 . . . 1 + 2 c 2 j t j = 1 m 1 ς t [ k ] ς j [ k ] ϵ ς + O ϵ ς 2 ,
ϵ φ = 2 c 2 c 2 j t j = 1 m 1 ς t [ k ] ς j [ k ] ϵ ς 2 + . . . 1 + 2 c 2 j t j = 1 m 1 ς t [ k ] ς j [ k ] ϵ ς + O ϵ ς 2 ,
ϵ φ = O ϵ ς 2 .
Since ϵτ = O(ϵ²) (as shown in [66]), we have ϵς = O(ϵτ²) = O((ϵ²)²) = O(ϵ⁴), and
ϵφ = O((ϵ⁴)²) = O(ϵ⁸).
Hence, the theorem is proved.    □
The theoretical order of convergence of the derivative-free parallel scheme DSϵ to approximate all solutions of the vectorial problems simultaneously is demonstrated in Theorem 3.
Theorem 3. 
Consider a sufficiently differentiable function F : R^n → R^n defined on a convex neighborhood Δ ⊆ R^n of the solutions ξt, satisfying F(ξt) = 0 for t = 1, …, m. If F′(ξt) is nonsingular, then there exist initial guesses γt[0] = (γt1[0], …, γtn[0]) ∈ R^n close enough to ξt such that the iterative sequence DSϵ converges to the exact solutions of (26) with order-eight convergence.
Proof. 
Let
ϵ t , k j 1 [ τ ] = τ t j 1 k ξ t j 1 , ϵ t , k j 1 [ ς ] = ς t j 1 k ξ t j 1
and ϵ t , k j 1 [ φ ] = φ t j 1 k ξ t j 1 be the errors in the estimates τ k , r k , ς k , r k , and φ k , r k , respectively. Consider F = F 1 , , F d , where F p : R n R and the coordinate of F p τ t k is p = 1 , , d . Using Taylor series expansion around ξ t , we have
F p τ t k = j 1 = 1 n F p ξ t γ j 1 ϵ k , k j 1 γ k k + j 1 = 1 n j 2 = 1 n 2 F p ξ t γ j 1 γ j 2 ϵ t , k j 1 γ t k ϵ t , k j 2 γ t k + O 3 ϵ t , k ,
where ϵ t , k j 1 [ τ ] = τ k j 1 k ξ t for all t = 1 , , m and j 1 = 1 , , n . The residual term O 3 ϵ t , k contains the higher-order terms of the Taylor series, where the sum of exponents of ϵ t , k j 1 q γ k k for j 1 1 , , n satisfies q 3 . We have
F p τ t k t , r sum τ k = j 1 = 1 n F p ξ t τ j 1 ϵ t , k j 1 t , r sum τ k + O 2 ϵ t , k .
Then
F p τ t , t k k , r sum τ k = G p , r ϵ t , k [ τ ] + O 2 ϵ t , k [ τ ] , and
F p ς t , t k k , r sum ς k = G p , r ϵ t , k [ ς ] + O 2 ϵ t , k [ ς ] ,
for p = 1 , , d and k = 1 , , n . Therefore, F p τ k , t k k , r sum τ t , r k is replaced by A ϵ t , k [ τ ] + O 2 ϵ t , k [ τ ] and F p ς t , r k k , r sum ς t , r k by A [ ] ϵ t , k [ ς ] + O 2 ϵ t , k [ ς ] . Now, expanding F γ t k , F γ t k , and F γ t , r k around ξ k , t , we have
F τ t , r k = F ξ t , k j 1 ϵ t , k j 1 [ τ ] + C 2 ϵ t , k j 1 [ τ ] 2 + O 3 ϵ t , k j 1 ,
F τ t , r k = F ξ t , k j 1 I + 2 C 2 ϵ t , k j 1 [ τ ] + O 2 ϵ t , k j 1 ,
F τ t , r k = F ξ t , k j 1 2 C 2 + O 1 ϵ t , k j 1 .
Therefore,
21 [ k ] , 22 [ k ] ; F = F ξ t , k j 1 1 + 2 C 2 ϵ t , k j 1 [ τ ] + 3 C 3 + C 3 F ξ 2 ϵ t , k j 1 [ τ ] 2 + . . . O 3 ϵ t , k j 1 ,
where C ω ( ς ) = F ( ω ) ( ξ t , k j 1 ) ω ! F ( ξ t , k j 1 ) , ω 2 . Considering the first step of DSϵ, we have
ς t , r [ k ] = τ t , r [ k ] F ( τ t , r [ k ] ) 11 [ k ] , 12 [ k ] ; F F ( τ t , r [ k ] ) j t j = 1 m 1 τ t [ k ] τ j [ k ] ,
ς t , r [ k ] ξ t , r = τ t , r [ k ] ξ t , r F τ t , r [ k ] ) 11 [ k ] , 12 [ k ] ; F F ( τ t , r [ k ] ) j t j = 1 m 1 τ t , r [ k ] τ j , r [ k ] ,
ϵ t , k j 1 [ ς ] = ϵ t , k j 1 [ τ ] ϵ t , k j 1 [ τ ] + C 2 ϵ t , k j 1 [ τ ] 2 + C 3 ϵ t , k j 1 [ τ ] 3 + . . . 1 + 2 c 2 C t , k j 1 [ τ ] + 3 C 3 + C 3 F ξ 2 ϵ t , k j 1 [ τ ] 2 . . . ϵ τ j t j = 1 m 1 τ t , r [ k ] τ j , r [ k ] + . . . ,
ϵ t , k j 1 [ ς ] = ϵ t , k j 1 [ τ ] ϵ t , k j 1 [ τ ] + C 2 ϵ t , k j 1 [ τ ] 2 + C 3 ϵ t , k j 1 [ τ ] 3 + . . . 1 + 2 C 2 j t j = 1 m 1 τ t , r [ k ] τ j , r [ k ] ϵ t , k j 1 [ τ ] + O ϵ t , k j 1 [ τ ] 2 ,
ϵ t , k j 1 [ ς ] = ϵ t , k j 1 [ τ ] + 2 C 2 j t t = 1 m 1 τ t , r [ k ] τ j , r [ k ] ϵ t , k j 1 [ τ ] 2 + . . . ϵ t , k j 1 [ τ ] C 2 ϵ t , k j 1 [ τ ] 2 C 3 ϵ t , k j 1 [ τ ] 3 . . . 1 + 2 C 2 j t t = 1 m 1 τ t , r [ k ] τ j , r [ k ] ϵ t , k j 1 [ τ ] + O ϵ t , k j 1 [ τ ] 2 ,
ϵ t , k j 1 [ ς ] = 2 C 2 C 2 j t t = 1 m 1 τ t , r [ k ] τ j , r [ k ] ϵ t , k j 1 [ τ ] 2 + . . . 1 + 2 C 2 j t j = 1 m 1 τ t , r [ k ] τ j , r [ k ] ϵ t , k j 1 [ τ ] + O ϵ t , k j 1 [ τ ] 2 ,
ϵ t , k j 1 [ ς ] = O ϵ t , k j 1 [ τ ] 2 .
Now, considering the second step of DSϵ, we have
φ t , r [ k ] = ς t , r [ k ] F ( ς t , r [ k ] ) 21 [ k ] , 22 [ k ] ; f F ( ς t , r [ k ] ) j t t = 1 m 1 ς t , r [ k ] ς j , r [ k ] ,
φ t , r [ k ] ξ t , r = ς t , r [ k ] ξ t , r F ( ς t , r [ k ] ) 21 [ k ] , 22 [ k ] ; F F ( ς t , r [ k ] ) j t t = 1 m 1 ς t , r [ k ] ς j , r [ k ] ,
ϵ t , k j 1 [ φ ] = ϵ t , k j 1 [ ς ] ϵ t , k j 1 [ ς ] + C 2 ϵ t , k j 1 [ ς ] 2 + c 3 ϵ t , k j 1 [ ς ] 3 + . . . 1 + 2 C 2 ϵ t , k j 1 [ ς ] + 3 C 3 + C 3 F ξ 2 ϵ t , k j 1 [ ς ] 2 ϵ t , k j 1 [ ς ] j t t = 1 m 1 ς t , r [ k ] ς j , r [ k ] + ,
ϵ t , k j 1 [ φ ] = ϵ t , k j 1 [ ς ] ϵ t , k j 1 [ ς ] + C 2 ϵ t , k j 1 [ ς ] 2 + C 3 ϵ t , k j 1 [ ς ] 3 + 1 + 2 C 2 j t j = 1 m 1 ς t , r [ k ] ς j , r [ k ] ϵ t , k j 1 [ ς ] + O ϵ t , k j 1 [ ς ] 2 ,
ϵ t , k j 1 [ φ ] = ϵ t , k j 1 [ φ ] + 2 C 2 j t t = 1 m 1 ς t , r [ k ] ς j , r [ k ] ϵ t , k j 1 [ φ ] 2 + ϵ t , k j 1 [ φ ] C 2 ϵ t , k j 1 [ φ ] 2 C 3 ϵ t , k j 1 [ φ ] 3 1 + 2 C 2 j t j = 1 m 1 ς t , r [ k ] ς j , r [ k ] ϵ t , k j 1 [ ς ] + O ϵ t , k j 1 [ ς ] 2 ,
ϵ t , k j 1 [ φ ] = 2 C 2 C 2 j t t = 1 m 1 ς t , r [ k ] ς j , r [ k ] ϵ t , k j 1 [ φ ] 2 + . . . 1 + 2 C 2 j t t = 1 m 1 ς t , r [ k ] ς j , r [ k ] ϵ t , k j 1 [ ς ] + O ϵ t , k j 1 [ ς ] 2 ,
ϵ t , k j 1 [ φ ] = O ϵ t , k j 1 [ ς ] 2 .
As ϵ t , k j 1 [ τ ] = O ϵ t , k j 1 2  [67], we have ϵ t , k j 1 [ ς ] = O ϵ t , k j 1 [ τ ] 2 = O ϵ t , k j 1 2 2 = O ϵ t , k j 1 4 , and
ϵ t , k j 1 [ φ ] = O ϵ t , k j 1 [ ς ] 2 = O ϵ t , k j 1 4 2 = O ϵ t , k j 1 8 .
Hence the theorem is proved.    □

3. Computational Analysis of Simultaneous Methods

The computational efficiency of iterative methods for finding all roots of nonlinear equations is a topic of significant importance in numerical analysis. Iterative methods, both sequential and parallel numerical schemes, often demonstrate favorable computational efficiency due to their ability to iteratively refine estimates for each root. The iterative process of root-finding methods allows them to adapt effectively to complex and nonlinear functions. However, their efficiency can be influenced by factors such as the choice of initial guesses, the characteristics of the function being solved, and potential convergence issues. In cases of rapid convergence, iterative methods offer a computationally efficient approach to simultaneously find all roots. Conversely, slow convergence, particularly with high-degree polynomials or ill-conditioned problems, can negatively impact computational efficiency. Evaluating the computational efficiency of iterative methods involves assessing their convergence rates, sensitivity to initial conditions, and suitability for various types of nonlinear equations, contributing to a nuanced understanding of their performance across different scenarios. For further details on computational efficiency, refer to [68].
Figure 1a–d presents the percentage computational efficiency ratios of the proposed methods with respect to PMϵ and MMϵ, respectively. Meanwhile, Figure 2a–e illustrates the computational efficiency of MMϵ, BDϵ, EMϵ, BMϵ, and DFϵ relative to the PMϵ technique. The computational efficiency curves clearly demonstrate that the newly developed method is more efficient and consistent than the PMϵ method. The efficiency index of a method m is defined as
E[m] = log r / ( ℵ11 + ℵ12 + ℵ13 ), (30)
where ℵ11 = w_as AS_m, ℵ12 = w_m M_m, and ℵ13 = w_d D_m, with AS_m, M_m, and D_m denoting the numbers of additions/subtractions, multiplications, and divisions per iteration, and w_as, w_m, w_d the corresponding weights [69]. Using the data provided in Table 1, we have
ρ[m, n] = ( (E[m] − E[n]) / E[n] ) × 100. (31)
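The efficiency index above, the logarithm of the convergence order divided by a weighted count of arithmetic operations per iteration, together with the percentage ratio between two methods, can be sketched as (our own Python sketch; the operation counts and unit weights below are illustrative, not the values of Table 1):

```python
import math

def efficiency(r, n_addsub, n_mul, n_div, w_as=1.0, w_m=1.0, w_d=1.0):
    """Efficiency index: log(r) / (w_as*AS + w_m*M + w_d*D), where r is the
    convergence order and AS, M, D are per-iteration operation counts."""
    return math.log(r) / (w_as * n_addsub + w_m * n_mul + w_d * n_div)

def percentage_ratio(e_m, e_n):
    """Percentage efficiency ratio of method m over method n."""
    return (e_m - e_n) / e_n * 100.0

# Hypothetical counts: a tenth-order method that is cheaper per iteration
ratio = percentage_ratio(efficiency(10, 4, 2, 2), efficiency(10, 8, 6, 4))
```

A positive ratio indicates that the first method extracts more order per arithmetic operation than the second.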
Remark 1. 
Figure 1a–e and Figure 2a–e graphically illustrate these percentage ratios. It is evident from Figure 1a,b that the newly constructed simultaneous methods BDϵ, DFϵ are more efficient compared to PMϵ and MMϵ, respectively.

4. Numerical Outcomes

To evaluate the performance and efficiency of the newly designed vectorial scheme, we solved nonlinear vectorial problems arising in science and engineering. We terminated the computer program using the following criteria:
e_t[k] = ‖γt[k+1] − γt[k]‖₂ ≤ 10⁻¹⁵, (32)
where e_t[k] represents the absolute error measured in the Euclidean 2-norm ‖·‖₂ [70,71]. In the numerical calculations, we utilized vectors v1–v3 from Appendix A Table A1, Table A2 and Table A3 for (29), abbreviated as DSϵ1, DSϵ2, and DSϵ3, respectively. The numerical outcomes considered the following points to analyze the simultaneous schemes PMϵ, MMϵ, BDϵ, BMϵ, DFϵ, and DSϵ:
  • Computational CPU time (CPU-time);
  • Residual error computed for all roots using Algorithms 1–3;
  • Efficiency of the simultaneous schemes PMϵ, MMϵ, BDϵ, BMϵ, DFϵ, and DSϵ;
  • Consistency and stability analysis;
  • Global convergence of the simultaneous schemes analyzed using a random set of initial vectors provided in Appendix A Table A1, Table A2, Table A3, Table A4 and Table A5.
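The stopping criterion above can be implemented directly (a minimal NumPy sketch of the 2-norm test, with an illustrative function name):

```python
import numpy as np

def stopped(gamma_next, gamma_prev, tol=1e-15):
    """Stopping test: Euclidean 2-norm of the difference of successive
    iterate vectors compared against the tolerance."""
    e = np.linalg.norm(np.asarray(gamma_next) - np.asarray(gamma_prev), 2)
    return e <= tol

converged = stopped([1.0, 2.0], [1.0, 2.0])  # identical iterates: stop
```

The same test applies componentwise-stacked root vectors for the simultaneous schemes, since all roots are updated in one iterate vector.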
Algorithm 1: Method for finding all distinct and multiple roots of (1).
Step 1: Choose initial estimates γt[0] (t = 1, …, n), a tolerance ϵ > 0, and set k = 0.
Step 2: Compute aj[k] = γj[k] − f(γj[k]) / [γj[k], γj[k] + f(γj[k]); f], bj[k] = γj[k] − f(γj[k]) (aj[k] − γj[k]) / ( 2f(aj[k]) − f(γj[k]) ), and ϑj[k] = γj[k] − f(γj[k]) (bj[k] − γj[k]) / ( f(bj[k]) − f(γj[k]) ).
Step 3: Update τt[k] = γt[k] − 1 / ( 1/N(γt[k]) − Σ_{j=1, j≠t}^{m} 1/(γt[k] − ϑj[k]) ) and ςt[k] = τt[k] − 1 / ( 1/N(τt[k]) − Σ_{j=1, j≠t}^{m} 1/(τt[k] − τj[k]) ), where N(γt[k]) = f(γt[k]) / f′(γt[k]).
Step 4: Set γt[k+1] = ςt[k] (t = 1, …, n).
Step 5: If e_t[k] = |γt[k+1] − γt[k]| ≤ 10⁻³⁰ or the maximum number of iterations is exceeded, stop; otherwise set k = k + 1 and return to Step 2.
Algorithm 2: Derivative-free method for finding all roots of (1).
Step 1: Choose initial estimates γt[0] (t = 1, …, n), a tolerance ϵ > 0, and set k = 0.
Step 2: Compute τt[k] = γt[k] − f(γt[k]) / [θ11[k], γt[k]; f].
Step 3: Update ςt[k] = τt[k] − ℘t / ( [θ11[k], θ12[k]; f] / f(τt[k]) − Σ_{j=1, j≠t}^{m} ℘j/(τt[k] − τj[k]) ) and φt[k] = ςt[k] − ℘t / ( [θ21[k], θ22[k]; f] / f(ςt[k]) − Σ_{j=1, j≠t}^{m} ℘j/(ςt[k] − ςj[k]) ).
Step 4: Set γt[k+1] = φt[k] (t = 1, …, n).
Step 5: If e_t[k] = |γt[k+1] − γt[k]| ≤ 10⁻³⁰ or the maximum number of iterations is exceeded, stop; otherwise set k = k + 1 and return to Step 2.
Algorithm 3: Method for finding all solutions of (26).
Step 1: Choose initial approximations γ_{t,r}^[0] (t = 1, …, n), fix a tolerance ϵ > 0, and set k = 0.
Step 2: For each t, calculate
τ_{t,r}^[k] = γ_{t,r}^[k] − F(γ_{t,r}^[k]) / [θ_11^[k], γ_{t,r}^[k]; F].
Step 3: Update
ς_{t,r}^[k] = τ_{t,r}^[k] − F(τ_{t,r}^[k]) / ( [θ_11^[k], θ_12^[k]; F] − F(τ_{t,r}^[k]) Σ_{t,r}^[k] ),
φ_{t,r}^[k] = ς_{t,r}^[k] − F(ς_{t,r}^[k]) / ( [θ_21^[k], θ_22^[k]; F] − F(ς_{t,r}^[k]) Σ′_{t,r}^[k] ),
where Σ_{t,r}^[k] = Σ_{j=1, j≠t}^{n} 1/(τ_{t,r}^[k] − τ_{j,r}^[k]) and Σ′_{t,r}^[k] = Σ_{j=1, j≠t}^{n} 1/(ς_{t,r}^[k] − ς_{j,r}^[k]), and set γ_{t,r}^[k+1] = φ_{t,r}^[k] (t = 1, …, n).
Step 4: If e_t^[k] = |γ_t^[k+1] − γ_t^[k]| ≤ 10^(−30) for all t, or the maximum number of iterations σ is exceeded, stop; otherwise set k = k + 1 and return to Step 2.
Example 1: Quarter car suspension model
The shock absorber in the suspension system regulates the transient behavior of both the vehicle and suspension mass [72,73]. Its nonlinear behavior makes the shock absorber one of the most complex components of the suspension system; in particular, the damping force of the dampers is characterized by an asymmetric nonlinear hysteresis loop. Automobile engineers use the quarter car suspension model, a simplified representation, to examine the vertical dynamics of a single wheel of a vehicle and its interaction with the road. This model is a component of the broader field of vehicle dynamics and suspension design. The quarter car model divides the vehicle into two primary parts: the sprung mass and the unsprung mass.
Automobile Structure Sprung Weight: The vehicle body mass includes the chassis, occupants, and other components directly supported by the suspension. The majority of the sprung mass is typically concentrated around the vehicle’s center of gravity.
The Unsprung Weight and Suspension of the Wheels: The unsprung mass includes the wheel, tire, and any suspension components directly linked to the wheel. These components are not supported by the suspension springs.
The suspension system, comprising a spring and a damper, regulates the interaction between sprung and unsprung masses. The spring represents the suspension’s elasticity, while the damper simulates the shock absorber’s damping effect. Using the quarter car suspension model, engineers can analyze how a vehicle responds to potholes and other road irregularities. This model allows for the calculation of dynamic quantities such as suspension deflection, wheel displacement, and vehicle body forces. Understanding these fundamental dynamics and characteristics of suspension systems aids in designing and optimizing suspension systems for improved ride comfort, handling, and stability. Despite the availability of more advanced models, such as half-car or full-car models, the quarter car model remains a critical tool in vehicle dynamics studies. The equations for mass motion are as follows:
m_s γ̈_s + k_s (γ_s − γ_u) + F = 0,
m_u γ̈_u − k_s (γ_s − γ_u) − k_σ (γ_r − γ_u) − F = 0,
where m s represents the mass above the spring, m u denotes the mass below the spring, γ s signifies the displacement of the sprung mass, γ u indicates the displacement of the unsprung mass, γ r represents disturbances from road bumps, k s corresponds to the spring stiffness, and k σ pertains to the tire stiffness. To accurately model the damper force F, one can use the polynomial presented by Barethiye [74] in Equation (64):
f ( γ ) = 77.14 γ 4 + 23.14 γ 3 + 342.7 γ 2 + 956.7 γ + 124.5 .
The exact roots of Equation (65) are
ζ 1 = 3.090556803 , ζ 2 = 1.326919946 + 1.434668028 i , ζ 3 = 0.1367428388 , ζ 4 = 1.326919946 1.434668028 i .
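Taking the printed coefficients of the damper polynomial at face value, the roots can be cross-checked with any general-purpose solver; here we use numpy.roots. Since the signs of the tabulated coefficients and roots are hard to recover from the typeset source, we assert only that each computed root annihilates the polynomial, rather than asserting specific values:

```python
import numpy as np

# Damper polynomial coefficients as printed:
# 77.14 γ^4 + 23.14 γ^3 + 342.7 γ^2 + 956.7 γ + 124.5
c = [77.14, 23.14, 342.7, 956.7, 124.5]
roots = np.roots(c)

print(len(roots))                        # a quartic has 4 roots
for r in roots:
    # each computed root should make the polynomial (numerically) vanish
    assert abs(np.polyval(c, r)) < 1e-6
```

This kind of residual check, |f(ζ_i)| ≈ 0 for every approximated root, is exactly the "residual error" criterion reported in the tables below.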
Initial guesses: γ_1^[0] = 0.1, γ_2^[0] = 1 + 1i, γ_3^[0] = −0.1, γ_4^[0] = 1 − 1i. The numerical outcomes for these initial guesses, which are sufficiently close to the exact values, are presented in Table 2.
The results of Table 2 clearly show that DFϵ and BDϵ are superior to PMϵ, MMϵ, BMϵ, and EMϵ in terms of computational order of convergence, CPU time, residual error, and error per iteration (Error it) for solving (85).
The initial vectors provided in Table A1 [75] are used to verify the global convergence of PMϵ, MMϵ, BMϵ, EMϵ, DFϵ, and BDϵ.
The numerical outcomes of the simultaneous vectorial methods for solving (85), in terms of residual error, CPU time, local computational order of convergence, and iterations, are shown in Table 3, Table 4 and Table 5 and Figure 3. Table 3, Table 4 and Table 5 clearly illustrate that DFϵ outperforms MMϵ, BMϵ, EMϵ, PMϵ, and BDϵ in terms of global convergence, as it converges faster and utilizes less CPU time and fewer iterations than the other methods.
The numerical results from iterative methods using random initial vectors, as presented in Table 3, Table 4, Table 5 and Table 6, demonstrate that the newly developed schemes BDϵ and DFϵ outperform existing methods such as PMϵ, MMϵ, BMϵ, and EMϵ, achieving a significantly higher accuracy with maximum errors of 0.11 × 10⁻⁵⁷, 0.98 × 10⁻⁵⁴, and 7.98 × 10⁻⁵⁴ for the three sets of initial vectors v1–v3 (see Table 3 and Figure 3). These techniques also exhibit superior performance compared to DM, DM1, and DM3 in terms of average CPU time (Avg-CPU) and average number of iterations (Avg-Iterations). Table 6 provides an overall assessment of the simultaneous schemes, confirming that DFϵ shows greater stability and consistency compared to MMϵ, BDϵ, BMϵ, PMϵ, and DSϵ.
Example 2: Solving a non-differentiable system [76]
Consider the non-differentiable system:
|γ_1| γ_2 − γ_1 = 0,
γ_1 |γ_2| − γ_2 = 0.
The exact solutions to γ = 0 are ξ_1 = (ξ_{1,1}, ξ_{1,2}) = (1, 1) and ξ_2 = (ξ_{2,1}, ξ_{2,2}) = (−1, −1), together with the trivial solution (0, 0). We chose the following initial guesses: γ_1^[0] = (γ_{1,1}^[0], γ_{1,2}^[0]) = (2.5, 3.1) and γ_2^[0] = (γ_{2,1}^[0], γ_{2,2}^[0]) = (−2.5, −3.1). The numerical results are presented in Table 7.
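Assuming the absolute-value reading of the system above (our reconstruction of the typeset equations), the two nontrivial solutions can be reproduced with a plain divided-difference Newton iteration, which, like the paper's schemes, requires no analytic derivatives. This is only a baseline sketch, not the simultaneous method of the paper:

```python
import numpy as np

def F(v):
    # Absolute-value reading of the non-differentiable system (our assumption):
    # |γ1| γ2 - γ1 = 0,  γ1 |γ2| - γ2 = 0, with solutions (1, 1), (-1, -1), (0, 0).
    x, y = v
    return np.array([abs(x) * y - x, x * abs(y) - y])

def dd_jacobian(F, x, h=1e-7):
    """Forward divided-difference Jacobian: no analytic derivatives required."""
    n = len(x)
    J = np.empty((n, n))
    fx = F(x)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (F(x + e) - fx) / h
    return J

def newton_dd(F, x0, tol=1e-12, kmax=50):
    """Plain divided-difference Newton iteration (baseline, not the paper's scheme)."""
    x = np.array(x0, dtype=float)
    for _ in range(kmax):
        if np.linalg.norm(F(x)) < tol:
            break
        x = x - np.linalg.solve(dd_jacobian(F, x), F(x))
    return x

print(np.round(newton_dd(F, [2.5, 3.1]), 6))    # converges to (1, 1)
print(np.round(newton_dd(F, [-2.5, -3.1]), 6))  # converges to (-1, -1)
```

The divided-difference Jacobian sidesteps the kinks of |·| as long as the iterates stay away from the axes, which is the case for these starting points.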
The results of Table 7 clearly demonstrate that DSϵ3 outperforms DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ in terms of computational order of convergence, CPU time, and residual error for solving (86).
To check the global convergence behavior, we utilized the following starting set of vectors presented in Appendix A Table A2.
In terms of residual error, CPU time, local computational order of convergence, and iterations, the numerical results of the simultaneous vectorial method for solving (86) are presented in Table 8, Table 9 and Table 10 and Figure 4. These tables clearly illustrate that DSϵ3 outperforms DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ in terms of global convergence, converging faster and utilizing less CPU time and fewer iterations than the other methods.
The numerical results of the iterative methods using random initial vectors presented in Table 8, Table 9 and Table 10 show that the newly developed schemes DSϵ1–DSϵ3 are more efficient than the existing methods MMϵ, EMϵ, and BMϵ, achieving a much higher accuracy, with errors of 4.8756 × 10⁻¹⁸, 7.651 × 10⁻²⁹, and 7.654 × 10⁻²⁹ for the three sets of initial vectors v1–v3, respectively. Additionally, they consume less CPU time and require fewer iterations. Table 11 depicts the overall behavior of the simultaneous schemes and demonstrates that DSϵ3 is more stable and consistent than DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ.
Example 3: Computing the steady state of the epidemic model [77]
Consider the following system of nonlinear equations:
γ_1 − (1 − R) (D/10) (1 + β_1 γ_1) e^(10γ_1/(1 + 10γ_1/ω)) = 0,
γ_1 (1 − β_2 γ_2) + (1 − R) (D/10) β_1 γ_1 (1 + β_1 γ_2) e^(10γ_2/(1 + 10γ_2/ω)) = 0,
where D = 22, β_1 = β_2 = 2, R = 0.935, and ω = 1000, although different values of R may be considered. The exact solutions of γ = 0 are ξ_1 = (ξ_{1,1}, ξ_{1,2}) = (0.00752614, 0.0589059) and ξ_2 = (ξ_{2,1}, ξ_{2,2}) = (1, 1). We choose the initial guesses γ_1^[0] = (γ_{1,1}^[0], γ_{1,2}^[0]) = (2.5, 3.1) and γ_2^[0] = (γ_{2,1}^[0], γ_{2,2}^[0]) = (−2.5, −3.1). The numerical results are presented in Table 12.
The results of Table 12 clearly show that DSϵ3 outperforms DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ in terms of computational order of convergence, CPU time, and residual error for solving (87).
To evaluate the global convergence behavior, we utilize the initial vector set presented in Appendix A Table A3.
In terms of residual error, CPU time, local computational order of convergence, and iterations, the numerical outcomes of the simultaneous vectorial method for solving (87) are shown in Table 13, Table 14 and Table 15 and Figure 5. Table 13, Table 14 and Table 15 clearly illustrate that DSϵ3 outperforms DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ in terms of global convergence, converging faster, consuming less CPU time, and requiring fewer iterations than the other methods.
With errors of 8.75 × 10⁻¹⁸, 1.124 × 10⁻²⁴, and 1.121 × 10⁻²⁴ for the three sets of initial vectors v1–v3, the newly developed schemes DSϵ1–DSϵ3 achieved a significantly higher accuracy compared to the existing methods MMϵ, EMϵ, and BMϵ. They also consumed less CPU time and required fewer iterations, as evidenced by the numerical results of the iterative method using random initial vectors presented in Table 13, Table 14 and Table 15. Table 16 depicts the overall behavior of the simultaneous schemes and demonstrates that DSϵ3 is more stable and consistent than DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ.
Example 4: Searching for the equilibrium point of the N-Body system [78]
Consider the nonlinear system of equations describing how to find the equilibrium solution in an N-body system as
Ņ(γ):
−3 γ_1 γ_2 (1 − 1/(γ_1² + γ_2²)^(3/2)) + 3 Λ_1 (γ_1 − 1)(γ_2 − 1) (1 − 1/((γ_1 − 1)² + γ_2²)^(3/2)) = 0,
−2 γ_2 (1 − 1/(γ_1² + γ_2²)^(3/2)) + 3 Λ_2 (γ_1 − 1)(γ_2 − 1) (1 − 1/((γ_1 − 1/2)² + (γ_2 − √3/2)²)^(3/2)) = 0.
The nonlinear system of equations Ņ(γ) has more than one solution depending on the parameter values. For instance, if we choose Λ_1 = 0.3 and Λ_2 = 0.4, Ņ(γ) has five solutions. The initial starting values are chosen as γ_1^[0] = (γ_{1,1}^[0], γ_{1,2}^[0]) = (2.5, 3.1) and γ_2^[0] = (γ_{2,1}^[0], γ_{2,2}^[0]) = (−2.5, −3.1). The numerical outcomes are presented in Table 17.
The results in Table 17 clearly demonstrate that DSϵ3 outperforms DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ in terms of computational order of convergence, CPU time, and residual error for solving (88).
To assess the global convergence behavior, we utilize the following initial set of vectors presented in Appendix A Table A4.
In terms of residual error, CPU time, local computational order of convergence, and iterations, the numerical outcomes of the simultaneous vectorial method for solving (88) are shown in Table 18, Table 19 and Table 20 and Figure 6. These tables clearly illustrate that DSϵ3 outperforms DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ in terms of global convergence, again converging faster and requiring less CPU time and fewer iterations than the other methods.
The newly developed schemes DSϵ1–DSϵ3 achieved a substantially higher accuracy than the existing methods, with errors of 4.876 × 10⁻¹⁸, 4.875 × 10⁻¹⁸, and 4.34 × 10⁻¹⁸ for the three sets of initial vectors v1–v3. The numerical results of the iterative method using random initial vectors reported in Table 18, Table 19 and Table 20 also indicate that DSϵ1–DSϵ3 required less CPU time and fewer iterations. Table 21 presents the overall behavior of the simultaneous schemes and demonstrates that DSϵ3 is more stable and consistent than DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ.
Example 5: Solving the diffusion equation [79]
Consider the heat diffusion equation with various boundary conditions as follows:
∂ϕ/∂t = α ∂²ϕ/∂γ², 0 ≤ γ ≤ L, t ≥ 0,
ϕ(γ, 0) = ϕ_0(γ); ϕ(0, t) = σ_1, ϕ(L, t) = σ_2, σ_1, σ_2 ∈ ℝ.
To find the solution of (89), we set the diffusivity parameter to α = 1/4 and ensure the stability of the implicit finite difference scheme with ϑ = αΔt/(Δγ)² < 1:
∂ϕ/∂t = (ϕ_{i,j+1} − ϕ_{i,j})/Δt + O(Δt); ∂²ϕ/∂γ² = (ϕ_{i+1,j+1} − 2ϕ_{i,j+1} + ϕ_{i−1,j+1})/(Δγ)² + O((Δγ)²).
Applying approximations to (89), we derive the tridiagonal system of equations:
ϕ_{i,j} = −ϑ ϕ_{i−1,j+1} + (1 + 2ϑ) ϕ_{i,j+1} − ϑ ϕ_{i+1,j+1}.
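The scheme couples each unknown at time level j + 1 to its two neighbors, so one implicit time step amounts to solving a tridiagonal linear system A ϕ_{·,j+1} = ϕ_{·,j} with stencil (−ϑ, 1 + 2ϑ, −ϑ). A minimal dense-matrix sketch for homogeneous boundary values (the grid sizes and the sample initial profile are our choices, not the paper's):

```python
import numpy as np

L_dom, alpha = 1.0, 0.25          # domain length and diffusivity α = 1/4
n = 49                            # number of interior grid points
dg = L_dom / (n + 1)              # Δγ
dt = 3e-4                         # Δt chosen so that ϑ = αΔt/(Δγ)² < 1
theta = alpha * dt / dg**2

# Tridiagonal matrix of the implicit scheme: stencil (-ϑ, 1 + 2ϑ, -ϑ).
A = (np.diag(np.full(n, 1 + 2 * theta))
     + np.diag(np.full(n - 1, -theta), 1)
     + np.diag(np.full(n - 1, -theta), -1))

grid = np.linspace(dg, L_dom - dg, n)
u = np.sin(np.pi * grid)          # sample initial profile with σ1 = σ2 = 0
u_next = np.linalg.solve(A, u)    # one implicit step: A ϕ_{·,j+1} = ϕ_{·,j}

# The discrete sine mode is an eigenvector of A, so it decays by a known factor.
decay = 1 + 4 * theta * np.sin(np.pi * dg / 2) ** 2
print(np.allclose(u_next, u / decay))  # True
```

In the paper's setting the right-hand side additionally depends on the nonlinear initial data, which is what turns the time-stepping into the nonlinear systems solved by the simultaneous schemes.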
For different initial and boundary conditions, we obtain an additional set of partial differential equations:
∂ϕ/∂t = α ∂²ϕ/∂γ², 0 ≤ γ ≤ L, t ≥ 0,
ϕ(γ, 0) = sin⁻¹(γ³) + 0.3 tan(γ); ϕ(0, t) = 0, ϕ(L, t) = 0.
Using (90) in (92), we derive a nonlinear system of equations similar to (91) after incorporating the initial and boundary conditions. The exact and approximate solutions obtained by DSϵ1–DSϵ3, MMϵ, EMϵ, and BMϵ are presented in Figure 7.
∂ϕ/∂t = α ∂²ϕ/∂γ², 0 ≤ γ ≤ L, t ≥ 0,
ϕ(γ, 0) = sin⁻¹(γ); ϕ(0, t) = 0, ϕ(L, t) = 0.
Using (90) in (93), we obtain another nonlinear system of equations similar to (91) after incorporating the respective initial and boundary conditions. The exact and approximate solutions obtained by DSϵ1–DSϵ3, MMϵ, EMϵ, and BMϵ are presented in Figure 8.
∂ϕ/∂t = α ∂²ϕ/∂γ², 0 ≤ γ ≤ L, t ≥ 0,
ϕ(γ, 0) = e^γ − 5γ² + 1 + tan(γ); ϕ(0, t) = 0.5 e^(−t), ϕ(L, t) = cos(t)/(t + 1).
Using a random set of initial vectors γ_1^[0] = (γ_{1,1}^[0], γ_{1,2}^[0]) from Appendix A Table A5, the numerical results in Table 22, Table 23 and Table 24 and Figure 7, Figure 8 and Figure 9 clearly show that the scheme DSϵ3 is more stable and consistent than DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ when solving (89) with different boundary and initial conditions.
The schemes DSϵ1–DSϵ3 outperformed MMϵ, EMϵ, and BMϵ, requiring less CPU time and fewer iterations, as evidenced by the numerical results of the iterative technique under various initial conditions shown in Table 22, Table 23 and Table 24. These tables depict the overall behavior of the simultaneous schemes and demonstrate that DSϵ3 is more stable and consistent than DSϵ1, DSϵ2, MMϵ, EMϵ, and BMϵ.

5. Conclusions

In this study, novel parallel numerical schemes are developed for solving nonlinear equations and their systems, including vectorial problems. A convergence analysis reveals that these parallel vectorial schemes achieve a high order of convergence, up to ten. Engineering applications with various random initial approximations were employed to assess the efficiency of MMϵ, PMϵ, BDϵ, BMϵ, DFϵ, and DSϵ. Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17, Table 18, Table 19, Table 20, Table 21, Table 22, Table 23 and Table 24 and Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 present the numerical outcomes from these experiments. The results clearly demonstrate that the newly developed schemes exhibit greater stability and consistency, with global convergence, compared to previous methods documented in the literature. Table 2, Table 7, Table 12, Table 18 and Table 22 illustrate that utilizing initial approximations close to the exact solutions enhances the convergence rate of DSϵ1–DSϵ3, MMϵ, EMϵ, and BMϵ. Future research will focus on developing similar higher-order simultaneous iterative methods to tackle more intricate engineering challenges involving nonlinear equations and associated systems [80,81].

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the Provincia autonoma di Bolzano/Alto Adige – Ripartizione Innovazione, Ricerca, Università e Musei (contract nr. 19/34). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2022.

Data Availability Statement

Data will be made available on request.

Acknowledgments

The work was supported by the Free University of Bozen-Bolzano (IN200Z SmartPrint). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2024.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Abbreviations

In this study’s article, the following abbreviations are used:
BDϵ, BMϵ, DFϵNewly developed Schemes
Error itIterations
CPU timeComputational Time in Seconds
e- 10 ( )
ρ ς [ k 1 ] Computational local convergence order
β [ ] , α , α 1 , α 2 Parameters
ρ ς [ k 1 ] Computational local convergence order

Appendix A. Random Set of Initial Vectors Used in Numerical Experiments

Table A1. Random set of initial vectors used in Example 2 for solving nonlinear systems of equations.
ζ ζ 1 , ζ 2 , ζ 3 , ζ 4
v ζ 1 ζ 2 ζ 3 ζ 4
v1 = v 1 0.32 0.12 0.12 0.91
v2 = v 2 0.11 0.65 0.41 0.12
v3 = v 3 0.02 0.93 0.14 0.31
⋮,⋮,⋮,
Table A2. Random set of initial vectors used in Example 3 for solving nonlinear systems of equations.
ζ ζ 1 , ζ 2 , ζ 3 , ζ 4
v ( ζ_{1,1}, ζ_{1,2} ) ( ζ_{2,1}, ζ_{2,2} ) ( ζ_{3,1}, ζ_{3,2} ) ( ζ_{4,1}, ζ_{4,2} )
v1 = ( v_{1,1}, v_{1,2} ) 0.12, 0.72 0.14, 0.13 0.32, 0.36 0.01, 0.65
v2 = ( v_{2,1}, v_{2,2} ) 0.61, 0.72 0.43, 0.54 0.01, 0.42 0.92, 0.45
v3 = ( v_{3,1}, v_{3,2} ) 0.52, 0.32 0.13, 0.23 0.14, 0.11 0.35, 0.55
⋮,⋮,⋮,
Table A3. Random set of initial vectors used in Example 4 for solving nonlinear systems of equations.
ζ ζ 1 , ζ 2 , ζ 3 , ζ 4
v ( ζ_{1,1}, ζ_{1,2} ) ( ζ_{2,1}, ζ_{2,2} ) ( ζ_{3,1}, ζ_{3,2} ) ( ζ_{4,1}, ζ_{4,2} )
v1 = ( v_{1,1}, v_{1,2} ) 0.21, 0.12 0.13, 0.54 0.01, 0.13 0.02, 0.05
v2 = ( v_{2,1}, v_{2,2} ) 0.02, 0.32 0.12, 0.13 0.02, 0.76 0.01, 0.60
v3 = ( v_{3,1}, v_{3,2} ) 0.01, 0.12 0.54, 0.94 0.01, 0.43 0.10, 0.40
⋮,⋮,⋮,
Table A4. Random set of initial vectors used in Example 5 for solving nonlinear systems of equations.
ζ ζ 1 , ζ 2 , ζ 3 , ζ 4
v ( ζ_{1,1}, ζ_{1,2} ) ( ζ_{2,1}, ζ_{2,2} ) ( ζ_{3,1}, ζ_{3,2} ) ( ζ_{4,1}, ζ_{4,2} )
v1 = ( v_{1,1}, v_{1,2} ) 0.02, 0.32 0.21, 0.13 0.19, 0.99 0.91, 0.65
v2 = ( v_{2,1}, v_{2,2} ) 0.02, 0.32 0.93, 0.03 0.04, 0.11 0.31, 0.02
v3 = ( v_{3,1}, v_{3,2} ) 0.82, 0.32 0.93, 0.03 0.14, 0.01 0.31, 0.22
⋮,⋮,⋮,
Table A5. Random set of initial vectors used in Example 6 for solving nonlinear systems of equations.
ζ ζ 1 , ζ 2 , ζ 3 , ζ 4 ζ n
v ( ζ_{1,1}, …, ζ_{1,2} ) ( ζ_{2,1}, …, ζ_{2,2} ) ( ζ_{3,1}, …, ζ_{3,2} ) ( ζ_{4,1}, …, ζ_{4,2} ) ( ζ_{n,1}, …, ζ_{n,2} )
v1 = ( v_{1,1}, …, v_{1,2} ) 0.02, …, 0.42 0.02, …, 0.03 0.12, …, 0.46 0.91, …, 0.65 0.01, …, 0.43
v2 = ( v_{2,1}, …, v_{2,2} ) 0.01, …, 0.62 0.05, …, 0.04 0.41, …, 0.43 0.12, …, 0.45 0.04, …, 0.02
v3 = ( v_{3,1}, …, v_{3,2} ) 0.02, …, 0.32 0.03, …, 0.03 0.14, …, 0.11 0.31, …, 0.22 0.05, …, 0.09
⋮,⋮,⋮,⋮,

References

  1. Bielik, T.; Fonio, E.; Feinerman, O.; Duncan, R.G.; Levy, S.T. Working together: Integrating computational modeling approaches to investigate complex phenomena. J. Sci. Educ. Technol. 2021, 30, 40–57. [Google Scholar] [CrossRef]
  2. Chen, L.; He, A.; Zhao, J.; Kang, Q.; Li, Z.Y.; Carmeliet, J.; Tao, W.Q. Pore-scale modeling of complex transport phenomena in porous media. Prog. Energy Combust. Sci. 2022, 88, 100968. [Google Scholar] [CrossRef]
  3. Kumar, A. Control of Nonlinear Differential Algebraic Equation Systems with Applications to Chemical Processes; Chapman and Hall/CRC: Boca Raton, FL, USA, 2020. [Google Scholar]
  4. Chichurin, A.; Filipuk, G. The properties of certain linear and nonlinear differential equations of the fourth order arising in beam models. J. Phys. Conf. Ser. 2019, 1425, 012107. [Google Scholar] [CrossRef]
  5. Zein, D.A. Solution of a set of nonlinear algebraic equations for general-purpose CAD programs. IEEE Circuits Devices Mag. 1985, 1, 7–20. [Google Scholar] [CrossRef]
  6. Moles, C.G.; Mendes, P.; Banga, J.R. Parameter estimation in biochemical pathways: A comparison of global optimization methods. Genome Res. 2003, 13, 2467–2474. [Google Scholar] [CrossRef] [PubMed]
  7. Andersson, J.A.; Gillis, J.; Horn, G.; Rawlings, J.B.; Diehl, M. CasADi: A software framework for nonlinear optimization and optimal control. Math. Program. Comput. 2019, 11, 1–36. [Google Scholar] [CrossRef]
  8. Ni, P.; Li, J.; Hao, H.; Yan, W.; Du, X.; Zhou, H. Reliability analysis and design optimization of nonlinear structures. Reliab. Eng. Syst. Saf. 2020, 198, 106860. [Google Scholar] [CrossRef]
  9. Raja, M.A.Z.; Khan, J.A.; Chaudhary, N.I.; Shivanian, E. Reliable numerical treatment of nonlinear singular Flierl–Petviashivili equations for unbounded domain using ANN, GAs, and SQP. Appl. Soft. Comp. 2016, 38, 617–636. [Google Scholar] [CrossRef]
  10. Jeswal, S.K.; Chakraverty, S. Solving transcendental equation using artificial neural network. Appl. Soft. Comp. 2018, 73, 562–571. [Google Scholar] [CrossRef]
  11. Lai, Y.C. Finding nonlinear system equations and complex network structures from data: A sparse optimization approach. Chaos Interdiscip. J. Nonlinear Sci. 2021, 31, 1–10. [Google Scholar] [CrossRef]
  12. He, J.H. Homotopy perturbation method: A new nonlinear analytical technique. Appl. Math. Comp. 2003, 135, 73–79. [Google Scholar] [CrossRef]
  13. Liao, S. On the homotopy analysis method for nonlinear problems. Appl. Math. Comp. 2004, 147, 499–513. [Google Scholar] [CrossRef]
  14. Berger, M.S. Nonlinearity and Functional Analysis: Lectures on Nonlinear Problems in Mathematical Analysis; Academic Press: Cambridge, MA, USA, 1977; Volume 74. [Google Scholar]
  15. Liu, C.S.; Atluri, S.N. A novel time integration method for solving a large system of non-linear algebraic equations. CMES Comp. Model. Eng. Sci. 2008, 31, 71–83. [Google Scholar]
  16. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1996. [Google Scholar]
  17. Eichfelder, G.; Jahn, J. Vector optimization problems and their solution concepts. In Recent Developments in Vector Optimization; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–27. [Google Scholar]
  18. Budzko, D.; Cordero, A.; Torregrosa, J.R. A new family of iterative methods widening areas of convergence. Appl. Math. Comput. 2015, 252, 405–417. [Google Scholar] [CrossRef]
  19. Drummond, L.G.; Iusem, A.N. A projected gradient method for vector optimization problems. Comput. Optimiz. Appl. 2004, 28, 5–29. [Google Scholar] [CrossRef]
  20. Yun, B.I. A non-iterative method for solving non-linear equations. Appl. Math. Comput. 2008, 198, 691–699. [Google Scholar]
  21. Kelley, C.T. Solving Nonlinear Equations with Newton's Method; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  22. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  23. Wang, X.; Fan, X. Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems. Algorithms 2016, 9, 14. [Google Scholar] [CrossRef]
  24. Amat, S.; Busquier, S.; Gutiérrez, J.M. Third-order iterative methods with applications to Hammerstein equations: A unified approach. J. Comput. Appl. Math. 2011, 235, 2936–2943. [Google Scholar] [CrossRef]
  25. Ostrowski, A.M. Solution of equations in Euclidean and Banach spaces. SIAM Rev. 1974, 16, 1–25. [Google Scholar]
  26. Kou, J.; Li, Y.; Wang, X. Some variants of Ostrowski’s method with seventh-order convergence. J. Computat. Appl. Math. 2007, 209, 153–159. [Google Scholar] [CrossRef]
  27. Liu, L.; Wang, X. Eighth-order methods with high efficiency index for solving nonlinear equations. Appl. Math. Comput. 2010, 215, 3449–3454. [Google Scholar] [CrossRef]
  28. Liu, T.; Qin, X.; Wang, P. Local convergence of a family of iterative methods with sixth and seventh order convergence under weak conditions. Inter. J. Comput. Meth. 2019, 16, 1850120. [Google Scholar] [CrossRef]
  29. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
  30. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
  31. Golbabai, A.; Javidi, M. A new family of iterative methods for solving system of nonlinear algebric equations. Appl. Math. Comput. 2007, 190, 1717–1722. [Google Scholar] [CrossRef]
  32. Ortega, J.M. Matrix Theory: A Second Course; Springer Science & Business: Berlin, Germany, 2013. [Google Scholar]
  33. Shah, F.A.; Noor, M.A.; Batool, M. Derivative-free iterative methods for solving nonlinear equations. Appl. Math. Inf. Sci. 2014, 8, 2189. [Google Scholar] [CrossRef]
  34. Thangkhenpau, G.; Panday, S.; Panday, B.; Stoenoiu, C.E.; Jäntschi, L. Generalized high-order iterative methods for solutions of nonlinear systems and their applications. AIMS Math. 2024, 9, 6161–6182. [Google Scholar] [CrossRef]
  35. Heath, M.T.; Ng, E.; Peyton, B.W. Parallel algorithms for sparse linear systems. SIAM Rev. 1991, 33, 420–460. [Google Scholar] [CrossRef]
  36. Pelinovsky, D.E.; Stepanyants, Y.A. Convergence of Petviashvili’s iteration method for numerical approximation of stationary solutions of nonlinear wave equations. SIAM J. Numer. Anal. 2004, 42, 1110–1127. [Google Scholar] [CrossRef]
  37. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–47. [Google Scholar] [CrossRef]
  38. Werner, W. On the simultaneous determination of polynomial roots. In Iterative Solution of Nonlinear Systems of Equations: Proceedings of a Meeting Held at Oberwolfach, Germany; Springer: Berlin/Heidelberg, Germany, 1982; Volume 5, pp. 188–202. [Google Scholar]
  39. Batiha, B. Innovative Solutions for the Kadomtsev–Petviashvili Equation via the New Iterative Method. Math. Probl. Eng. 2024, 1, 5541845. [Google Scholar] [CrossRef]
  40. Falcão, M.I.; Miranda, F.; Severino, R.; Soares, M.J. Weierstrass method for quaternionic polynomial root-finding. Math. Methods Appl. Sci. 2018, 41, 423–437. [Google Scholar] [CrossRef]
  41. Presic, S. Un procédé itératif pour la factorisation des polynômes. CR Acad. Sci. Paris 1966, 262, 862–863. [Google Scholar]
  42. Terui, A.; Sasaki, T. Durand-Kerner method for the real roots. Jpn. J. Ind. Appl. Math. 2002, 19, 19–38. [Google Scholar] [CrossRef]
  43. Dochev, K. Modified Newton method for the simultaneous computation of all roots of a given algebraic equation. Bulg. Phys. Math. J. Bulg. Acad. Sci. 1962, 5, 136–139. [Google Scholar]
  44. Kerner, I.O. Ein gesamtschrittverfahren zur berechnung der nullstellen von polynomen. Numer. Math. 1966, 8, 290–294. [Google Scholar] [CrossRef]
  45. Proinov, P.D.; Cholakov, S.I. Semilocal convergence of Chebyshev-like root-finding method for simultaneous approximation of polynomial zeros. Appl. Math. Comput. 2014, 236, 669–682. [Google Scholar] [CrossRef]
  46. Nedzhibov, G.H. Convergence of the modified inverse Weierstrass method for simultaneous approximation of polynomial zeros. Commun. Numer. Anal. 2016, 16, 74–80. [Google Scholar] [CrossRef]
  47. Nedzhibov, G.H. Improved local convergence analysis of the Inverse Weierstrass method for simultaneous approximation of polynomial zeros. In Proceedings of the MATTEX 2018 Conference, Targovishte, Bulgaria, 16–17 November 2018; Volume 1, pp. 66–73. [Google Scholar]
  48. Aberth, O. Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comput. 1973, 27, 339–344. [Google Scholar] [CrossRef]
  49. Nourein, A.W.M. An improvement on two iteration methods for simultaneous determination of the zeros of a polynomial. Int. J. Comp. Math. 1977, 6, 241–252. [Google Scholar] [CrossRef]
  50. Petković, M.S. On a general class of multipoint root-finding methods of high computational efficiency. SIAM J. Numer. Anal. 2010, 47, 4402–4414. [Google Scholar] [CrossRef]
  51. Mir, N.A.; Shams, M.; Rafiq, N.; Akram, S.; Rizwan, M. Derivative free iterative simultaneous method for finding distinct roots of polynomial equation. Alex. Eng. J. 2020, 59, 1629–1636. [Google Scholar] [CrossRef]
  52. Cholakov, S.I.; Vasileva, M.T. A convergence analysis of a fourth-order method for computing all zeros of a polynomial simultaneously. J.Comput. Appl. Math. 2017, 321, 270–283. [Google Scholar] [CrossRef]
  53. Cholakov, S.I. Local and semilocal convergence of Wang-Zheng’s method for simultaneous finding polynomial zeros. Symmetry 2019, 11, 736. [Google Scholar] [CrossRef]
  54. Marcheva, P.I.; Ivanov, S.I. Convergence analysis of a modified Weierstrass method for the simultaneous determination of polynomial zeros. Symmetry 2020, 12, 1408. [Google Scholar] [CrossRef]
  55. Shams, M.; Carpentieri, B. On highly efficient fractional numerical method for solving nonlinear engineering models. Mathematics 2023, 11, 4914. [Google Scholar] [CrossRef]
  56. Shams, M.; Carpentieri, B. Efficient inverse fractional neural network-based simultaneous schemes for nonlinear engineering applications. Fractal Fract. 2023, 7, 849. [Google Scholar] [CrossRef]
  57. Weierstraß, K. Neuer beweis des satzes, dass jede ganze rationale funktion einer veranderlichen dargestellt werden kann als ein product aus linearen funktionen derstelben veranderlichen. Ges. Werke 1903, 3, 251–269. [Google Scholar]
Figure 1. (a–e) Computational efficiency of MMϵ compared with other schemes: (a) MMϵ with respect to PMϵ; (b) with respect to DBϵ; (c) with respect to DFϵ; (d) with respect to EMϵ; (e) with respect to BMϵ4.
Figure 2. (a–e) Computational efficiency of MMϵ1 and BMϵ1 − BMϵ4 with respect to PMϵ1.
Figure 3. Computational local order of convergence of simultaneous schemes PMϵ, MMϵ, BM, EM, DFϵ, and BDϵ using a random initial vector array (Table A1) to solve (85).
Figure 4. Computational local order of convergence of simultaneous schemes MMϵ, EMϵ, BMϵ, DSϵ1 − DSϵ3, using a random initial vector array (Appendix A Table A2) to solve (86).
Figure 5. Computational local order of convergence of simultaneous schemes MMϵ, EMϵ, BMϵ, DSϵ1 − DSϵ3, using a random initial vector array (Appendix A Table A3) to solve (87).
Figure 6. Computational local order of convergence of simultaneous schemes MMϵ, EMϵ, BMϵ, DSϵ1 − DSϵ3, using a random initial vector array (Appendix A Table A4) to solve (88).
Figure 7. Approximate and exact solutions to the heat Equation (92).
Figure 8. Approximate and exact solutions to the heat Equation (93).
Figure 9. Approximate and exact solutions to the heat Equation (94).
Table 1. Computational cost and basic operations for simultaneous schemes.
| Methods | PMϵ | MMϵ | BDϵ, EMϵ, BMϵ, DFϵ |
|---|---|---|---|
| ASm | 22 × m² + O[m] | 17 × m² + O[m] | 19 × m² + O[m] |
| MMm | 12 × m² + O[m] | 6 × m² + O[m] | 6 × m² + O[m] |
| DDm | 2 × m² + O[m] | 2 × m² + O[m] | 2 × m² + O[m] |
Table 2. Residual error with initial values near exact root.
| Methods | PMϵ | MMϵ | BM | EM | DFϵ | BDϵ |
|---|---|---|---|---|---|---|
| Iterations | 05 | 05 | 05 | 04 | 05 | 05 |
| CPU | 0.015231 | 0.013467 | 0.01235 | 0.001245 | 0.00123 | 0.00123 |
| e1[k] | 9.6754 × 10−40 | 9.3215 × 10−65 | 7.6543 × 10−65 | 6.5634 × 10−40 | 0.0 | 0.0 |
| e2[k] | 7.4532 × 10−43 | 0.8753 × 10−64 | 9.7865 × 10−67 | 8.7453 × 10−95 | 2.4642 × 10−71 | 1.0042 × 10−91 |
| e3[k] | 6.8750 × 10−43 | 6.9873 × 10−85 | 3.4534 × 10−80 | 6.4534 × 10−86 | 7.3432 × 10−87 | 7.545 × 10−107 |
| e4[k] | 8.9765 × 10−43 | 3.3421 × 10−95 | 7.3453 × 10−73 | 0.0 | 0.0 | 0.0 |
| ρ (COC) | 9.1221512 | 10.031452 | 10.914556 | 10.2323 | 9.9654 | 9.9865 |
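The last row above reports the computational order of convergence ρ. The paper does not reproduce its exact estimator in this excerpt; the following minimal sketch uses the standard three-point formula on successive error norms, which yields values such as those tabulated:

```python
import math

def coc(res):
    """Three-point estimate of the computational order of convergence from
    successive error/residual norms: rho ~ log(e3/e2) / log(e2/e1)."""
    e1, e2, e3 = res[-3:]
    return math.log(e3 / e2) / math.log(e2 / e1)

# Quadratically shrinking errors (e_{k+1} ~ e_k^2) give rho ~ 2
errs = [1e-2, 1e-4, 1e-8, 1e-16]
print(round(coc(errs), 2))  # -> 2.0
```

A tenth-order scheme fed its last three error norms would, in the same way, return a value close to the ρ ≈ 10 entries in the table.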
Table 3. Residual error using a random set of initial-guess vectors.
Error on random initial-guess vector v1:

| Methods | e1[k] | e2[k] | e3[k] | e4[k] | ρ (COC) |
|---|---|---|---|---|---|
| PMϵ | 2.12 × 10−12 | 9.73 × 10−23 | 2.34 × 10−45 | 7.34 × 10−34 | 7.63563774 |
| MMϵ | 3.22 × 10−32 | 0.56 × 10−43 | 8.76 × 10−34 | 0.98 × 10−54 | 9.43565765 |
| BM | 4.33 × 10−12 | 0.76 × 10−23 | 1.54 × 10−54 | 9.99 × 10−34 | 9.13456683 |
| EM | 9.10 × 10−12 | 3.43 × 10−44 | 9.76 × 10−34 | 0.23 × 10−24 | 8.56575532 |
| DFϵ | 0.12 × 10−31 | 8.65 × 10−33 | 0.98 × 10−54 | 8.72 × 10−34 | 8.13424556 |
| BDϵ | 0.16 × 10−20 | 0.05 × 10−32 | 0.11 × 10−57 | 8.02 × 10−32 | 9.13424556 |

Error on random initial-guess vector v2:

| Methods | e1[k] | e2[k] | e3[k] | e4[k] | ρ (COC) |
|---|---|---|---|---|---|
| PMϵ | 9.23 × 10−23 | 8.65 × 10−43 | 3.43 × 10−23 | 1.34 × 10−26 | 7.63563774 |
| MMϵ | 8.23 × 10−12 | 7.65 × 10−45 | 7.64 × 10−34 | 6.43 × 10−36 | 9.43565765 |
| BM | 2.34 × 10−34 | 0.86 × 10−34 | 8.97 × 10−46 | 7.89 × 10−35 | 9.13456683 |
| EM | 4.54 × 10−21 | 0.04 × 10−23 | 7.34 × 10−36 | 9.80 × 10−46 | 8.56575532 |
| DFϵ | 0.12 × 10−31 | 8.65 × 10−33 | 0.98 × 10−54 | 8.72 × 10−34 | 8.13424556 |
| BDϵ | 5.43 × 10−15 | 0.04 × 10−26 | 2.23 × 10−26 | 0.98 × 10−36 | 8.13424556 |

Error on random initial-guess vector v3:

| Methods | e1[k] | e2[k] | e3[k] | e4[k] | ρ (COC) |
|---|---|---|---|---|---|
| PMϵ | 5.65 × 10−16 | 3.21 × 10−23 | 7.56 × 10−37 | 8.65 × 10−35 | 7.63563774 |
| MMϵ | 6.54 × 10−27 | 5.32 × 10−34 | 1.23 × 10−36 | 8.65 × 10−34 | 9.43565765 |
| BM | 7.54 × 10−36 | 5.32 × 10−45 | 8.23 × 10−38 | 4.66 × 10−46 | 9.13456683 |
| EM | 7.34 × 10−35 | 4.23 × 10−23 | 6.23 × 10−28 | 0.36 × 10−26 | 8.56575532 |
| DFϵ | 0.72 × 10−31 | 1.65 × 10−33 | 7.98 × 10−54 | 8.79 × 10−34 | 7.13424556 |
| BDϵ | 8.94 × 10−18 | 6.43 × 10−44 | 4.32 × 10−47 | 0.08 × 10−27 | 8.13424556 |
Table 4. CPU time using a random set of starting vectors v1–v3 taken from Appendix A Table A1.
| Methods | e1[k] | e2[k] | e3[k] | e4[k] |
|---|---|---|---|---|
| PMϵ | 2.5487 | 3.4533 | 2.4533 | 0.3543 |
| MMϵ | 1.0576 | 0.7657 | 0.6754 | 0.3421 |
| BM | 0.9342 | 3.6598 | 1.2343 | 0.7342 |
| EM | 0.7654 | 1.8765 | 1.9342 | 0.0142 |
| DFϵ | 0.4354 | 1.5005 | 1.7302 | 0.0142 |
| BDϵ | 0.0042 | 0.9842 | 0.3234 | 0.0342 |
Table 5. Iterations using random initial guess vectors v1–v3 from Appendix A Table A1.
Random initial guess vectors v1–v3 taken from Appendix A Table A1:

| Methods | e1[k] | e2[k] | e3[k] | e4[k] | ρ (COC) |
|---|---|---|---|---|---|
| PMϵ | 08 | 08 | 08 | 08 | 07 |
| MMϵ | 08 | 08 | 07 | 07 | 08 |
| BM | 05 | 05 | 05 | 05 | 02 |
| EM | 06 | 06 | 06 | 07 | 02 |
| DFϵ | 04 | 04 | 05 | 07 | 07 |
| BDϵ | 03 | 03 | 03 | 03 | 08 |
Table 6. Consistency of the numerical scheme for solving (85).
Random initial guess vectors v1–v3 taken from Appendix A Table A1:

| Methods | Max-ei[k] | Avg-CPU | Avg-Iterations | Avg-COC |
|---|---|---|---|---|
| PMϵ | 0.2144 × 10−10 | 1.9453 | 03 | 7.3454 |
| MMϵ | 4.874 × 10−20 | 0.8664 | 03 | 9.2215 |
| BM | 2.984 × 10−21 | 0.5616 | 05 | 1.9432 |
| EM | 2.983 × 10−21 | 0.5615 | 04 | 1.9987 |
| DFϵ | 2.982 × 10−21 | 0.5617 | 03 | 7.8758 |
| BDϵ | 3.126 × 10−21 | 0.1146 | 03 | 6.8796 |
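Tables 2–6 count iterations and CPU time for simultaneous schemes started from random initial vectors. As a hedged illustration of that experimental setup (not the paper's tenth-order MMϵ method), the classical second-order Weierstrass/Durand–Kerner simultaneous iteration can be driven the same way:

```python
import random

def horner(coeffs, x):
    """Evaluate a monic polynomial given its coefficients (highest first)."""
    r = 0j
    for c in coeffs:
        r = r * x + c
    return r

def durand_kerner(coeffs, z, tol=1e-10, max_it=500):
    """Weierstrass/Durand-Kerner simultaneous iteration (order 2).
    Approximates all roots at once; returns the final vector and the
    number of iterations used, as counted in the tables above."""
    n = len(z)
    for it in range(1, max_it + 1):
        w = []
        for i in range(n):
            denom = 1.0 + 0j
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            w.append(horner(coeffs, z[i]) / denom)  # Weierstrass correction
        z = [zi - wi for zi, wi in zip(z, w)]
        if max(abs(wi) for wi in w) < tol:
            return z, it
    return z, max_it

# p(x) = x^3 - 1 with a random complex starting vector (cf. Tables 2-5)
random.seed(0)
z0 = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(3)]
roots, its = durand_kerner([1, 0, 0, -1], z0)
print(its, sorted(round(abs(r), 6) for r in roots))
```

Higher-order schemes such as those compared here differ only in the correction term; the bookkeeping of iterations, residuals, and CPU time is identical.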
Table 7. Simultaneous determination of ξ 1 = ξ 1 , 1 , ξ 1 , 2 of (86) using parallel schemes.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| CPU | 0.0321 | 0.0321 | 0.0212 | 0.0213 | 0.0232 | 0.0342 |
| e1,1[k] | 0.103 × 10−34 | 0.123 × 10−34 | 0.433 × 10−45 | 0.343 × 10−34 | 9.876 × 10−32 | 0.876 × 10−54 |
| e1,2[k] | 4.325 × 10−33 | 1.325 × 10−33 | 0.643 × 10−41 | 0.875 × 10−23 | 0.875 × 10−44 | 7.876 × 10−47 |
| ρ (COC) | 8.9765 | 8.9765 | 9.9678 | 7.9854 | 9.9876 | 7.0987 |

Simultaneous determination of ξ2 = (ξ2,1, ξ2,2) of (86) using parallel schemes:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| CPU | 0.08201 | 0.093 | 0.07542 | 0.093 | 0.0232 | 0.0002 |
| e1,1[k] | 5.435 × 10−37 | 0.324 × 10−34 | 6.543 × 10−65 | 4.324 × 10−34 | 8.654 × 10−43 | 2.324 × 10−51 |
| e1,2[k] | 7.534 × 10−45 | 5.004 × 10−34 | 7.654 × 10−34 | 5.424 × 10−34 | 0.985 × 10−43 | 6.534 × 10−54 |
| ρ (COC) | 8.4365 | 7.9984 | 9.0978 | 7.9984 | 9.9006 | 7.0007 |
Table 8. Residual error using a random set of initial-guess vectors.
Error on random initial-guess vector v1:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 0.12 × 10−12 | 0.705 × 10−13 | 4.212 × 10−7 | 9.872 × 10−1 | 0.246 × 10−13 | 4.076 × 10−17 |
| e1,2[k] | 9.123 × 10−13 | 1.300 × 10−13 | 0.765 × 10−13 | 9.8721 × 10−23 | 7.531 × 10−14 | 2.098 × 10−16 |
| e2,1[k] | 8.313 × 10−13 | 5.432 × 10−14 | 1.324 × 10−11 | 0.872 × 10−17 | 9.736 × 10−15 | 4.875 × 10−18 |
| e2,2[k] | 0.983 × 10−14 | 9.762 × 10−15 | 8.312 × 10−13 | 1.323 × 10−12 | 0.943 × 10−16 | 8.765 × 10−18 |

Error on random initial-guess vectors v2–v3:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 0.765 × 10−13 | 0.765 × 10−13 | 8.762 × 10−24 | 9.8762 × 10−14 | 0.002 × 10−29 | 0.125 × 10−19 |
| e1,2[k] | 1.334 × 10−12 | 3.334 × 10−11 | 0.762 × 10−13 | 0.9872 × 10−25 | 0.139 × 10−28 | 7.654 × 10−29 |
| e2,1[k] | 7.312 × 10−13 | 8.382 × 10−10 | 3.312 × 10−16 | 0.987 × 10−17 | 0.983 × 10−16 | 3.654 × 10−27 |
| e2,2[k] | 8.762 × 10−14 | 6.762 × 10−17 | 9.723 × 10−25 | 0.9872 × 10−17 | 0.765 × 10−25 | 0.965 × 10−15 |
Table 9. CPU time using a random set of starting vectors v1–v3 from Appendix A Table A2.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 0.3423 | 3.2817 | 2.4693 | 1.0982 | 3.231 | 2.3215 |
| e1,2[k] | 2.4324 | 9.3213 | 2.4365 | 3.2981 | 3.4321 | 4.3093 |
| e2,1[k] | 4.3253 | 2.4365 | 3.3132 | 3.2817 | 2.4365 | 1.3241 |
| e2,2[k] | 4.3432 | 3.231 | 3.2462 | 9.3213 | 3.2314 | 1.3424 |
Table 10. Iterations using a random set of starting vectors v1–v3 from Appendix A Table A2.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 06 | 04 | 04 | 05 | 06 | 03 |
| e1,2[k] | 06 | 04 | 04 | 05 | 06 | 03 |
| e2,1[k] | 06 | 04 | 04 | 05 | 06 | 03 |
| e2,2[k] | 06 | 04 | 04 | 05 | 06 | 03 |
Table 11. Consistency of the numerical scheme for solving (86) using vectors v1–v3 taken from Appendix A Table A2.
| Methods | Max-ei[k] | Avg-CPU | Avg-It | Avg-COC |
|---|---|---|---|---|
| MMϵ | 7.876 × 10−31 | 2.2343 | 04 | 6.765 |
| EMϵ | 4.435 × 10−26 | 2.742 | 05 | 6.465 |
| BMϵ | 8.675 × 10−23 | 2.1234 | 05 | 5.897 |
| DSϵ1 | 7.654 × 10−22 | 2.7654 | 05 | 4.765 |
| DSϵ2 | 3.124 × 10−36 | 3.3427 | 04 | 4.765 |
| DSϵ3 | 4.435 × 10−26 | 2.7654 | 03 | 3.765 |
Table 12. Simultaneous finding of ξ1 = (ξ1,1, ξ1,2) of (87) using parallel schemes.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| CPU-time | 0.0321 | 0.0232 | 0.0212 | 0.0213 | 0.0232 | 0.0342 |
| e1,1[k] | 0.873 × 10−24 | 7.298 × 10−14 | 0.561 × 10−45 | 0.343 × 10−44 | 9.298 × 10−24 | 0.346 × 10−42 |
| e1,2[k] | 1.365 × 10−23 | 1.103 × 10−44 | 0.343 × 10−41 | 0.875 × 10−33 | 0.113 × 10−34 | 1.056 × 10−13 |
| ρ (COC) | 8.9765 | 9.0076 | 9.8768 | 7.9854 | 9.0076 | 7.0987 |

Simultaneous finding of ξ2 = (ξ2,1, ξ2,2) of (87) using parallel schemes:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| CPU-time | 0.08451 | 0.093 | 0.07542 | 0.093 | 0.0201 | 0.0232 |
| e1,1[k] | 6.345 × 10−14 | 1.134 × 10−24 | 6.554 × 10−55 | 4.334 × 10−24 | 8.994 × 10−33 | 8.345 × 10−42 |
| e1,2[k] | 9.834 × 10−15 | 5.004 × 10−27 | 7.674 × 10−34 | 5.884 × 10−24 | 0.115 × 10−34 | 3.534 × 10−47 |
| ρ (COC) | 8.1165 | 8.9984 | 8.0458 | 8.9984 | 9.8306 | 8.0237 |
Table 13. Residual error using a random set of starting vectors.
Error on random initial-guess vector v1:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 7.12 × 10−12 | 9.072 × 10−1 | 3.871 × 10−7 | 9.072 × 10−1 | 0.126 × 10−13 | 4.002 × 10−17 |
| e1,2[k] | 1.653 × 10−23 | 7.232 × 10−23 | 1.465 × 10−13 | 7.212 × 10−23 | 7.531 × 10−14 | 2.098 × 10−16 |
| e2,1[k] | 4.093 × 10−23 | 3.832 × 10−17 | 8.624 × 10−11 | 0.872 × 10−17 | 9.736 × 10−15 | 4.875 × 10−18 |
| e2,2[k] | 0.983 × 10−34 | 1.333 × 10−12 | 8.312 × 10−13 | 1.323 × 10−12 | 0.943 × 10−16 | 8.765 × 10−18 |

Error on random initial-guess vectors v2–v3:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 2.453 × 10−13 | 9.072 × 10−11 | 1.542 × 10−24 | 0.1242 × 10−13 | 9.802 × 10−26 | 2.135 × 10−12 |
| e1,2[k] | 1.334 × 10−12 | 4.212 × 10−23 | 0.762 × 10−13 | 0.9872 × 10−24 | 0.139 × 10−25 | 2.354 × 10−23 |
| e2,1[k] | 7.312 × 10−13 | 0.822 × 10−17 | 3.312 × 10−16 | 0.9873 × 10−15 | 0.983 × 10−14 | 1.124 × 10−26 |
| e2,2[k] | 8.762 × 10−14 | 2.223 × 10−12 | 9.723 × 10−25 | 0.9872 × 10−16 | 0.765 × 10−22 | 0.345 × 10−14 |
Table 14. CPU time using a random set of starting vectors v1–v3 taken from Appendix A Table A3.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 0.7765 | 0.0573 | 0.7833 | 0.0573 | 0.7634 | 0.0573 |
| e1,2[k] | 0.0573 | 0.7642 | 0.0742 | 0.9342 | 0.0742 | 0.7642 |
| e2,1[k] | 0.9342 | 0.9342 | 0.2342 | 0.7765 | 0.0042 | 0.4211 |
| e2,2[k] | 0.7765 | 0.7765 | 0.0042 | 0.9842 | 0.3234 | 0.0342 |
Table 15. Iterations using a random set of starting vectors v1–v3 from Appendix A Table A3.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 05 | 04 | 04 | 05 | 06 | 03 |
| e1,2[k] | 05 | 04 | 04 | 05 | 06 | 03 |
| e2,1[k] | 05 | 04 | 04 | 05 | 06 | 03 |
| e2,2[k] | 05 | 04 | 04 | 05 | 06 | 03 |
Table 16. Consistency of the numerical scheme for solving (87) using vectors v1–v3 from Appendix A Table A3.
| Methods | Max-ei[k] | Avg-CPU | Avg-It | Avg-COC |
|---|---|---|---|---|
| MMϵ | 3.622 × 10−20 | 1.1133 | 05 | 7.435 |
| EMϵ | 2.656 × 10−21 | 1.5433 | 04 | 7.435 |
| BMϵ | 6.87 × 10−33 | 2.4534 | 06 | 5.917 |
| DSϵ1 | 0.854 × 10−32 | 2.7764 | 03 | 2.795 |
| DSϵ2 | 6.54 × 10−36 | 3.3548 | 04 | 5.705 |
| DSϵ3 | 4.435 × 10−26 | 2.7223 | 03 | 2.975 |
Table 17. Determination of ξ 1 = ξ 1 , 1 , ξ 1 , 2 of (88) using parallel schemes.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| CPU | 0.0321 | 0.0223 | 0.0212 | 0.0213 | 0.0232 | 0.0342 |
| e1,1[k] | 0.003 × 10−34 | 0.343 × 10−14 | 0.411 × 10−45 | 0.343 × 10−14 | 5.876 × 10−44 | 0.876 × 10−42 |
| e1,2[k] | 1.325 × 10−33 | 0.115 × 10−23 | 0.512 × 10−41 | 0.115 × 10−23 | 0.875 × 10−34 | 7.876 × 10−43 |
| ρ (COC) | 8.9765 | 7.9874 | 8.3248 | 7.9874 | 9.0076 | 7.0181 |

Simultaneous determination of ξ2 = (ξ2,1, ξ2,2) of (88) using parallel schemes:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| CPU | 0.08201 | 0.0234 | 0.07542 | 0.093 | 0.0232 | 0.0002 |
| e1,1[k] | 5.435 × 10−54 | 0.343 × 10−14 | 6.040 × 10−55 | 7.324 × 10−12 | 0.654 × 10−34 | 2.324 × 10−43 |
| e1,2[k] | 7.534 × 10−45 | 0.115 × 10−23 | 7.654 × 10−34 | 3.424 × 10−54 | 4.3475 × 10−24 | 6.534 × 10−44 |
| ρ (COC) | 8.423455 | 7.9874 | 9.0324 | 7.9654 | 9.4306 | 7.0127 |
Table 18. Residual errors using a random set of starting vectors.
Error using random initial-guess vector v1:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 0.12 × 10−12 | 0.12 × 10−12 | 0.765 × 10−25 | 0.943 × 10−16 | 0.872 × 10−17 | 9.736 × 10−15 |
| e1,2[k] | 2.122 × 10−20 | 0.139 × 10−28 | 0.765 × 10−23 | 9.8721 × 10−23 | 7.531 × 10−14 | 2.098 × 10−16 |
| e2,1[k] | 1.003 × 10−11 | 0.983 × 10−16 | 1.324 × 10−21 | 0.872 × 10−17 | 9.736 × 10−15 | 4.875 × 10−18 |
| e2,2[k] | 3.765 × 10−21 | 0.765 × 10−25 | 8.312 × 10−23 | 1.323 × 10−12 | 0.943 × 10−16 | 8.765 × 10−18 |

Error using random initial-guess vectors v2–v3:

| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 0.765 × 10−13 | 0.12 × 10−12 | 1.324 × 10−21 | 0.872 × 10−17 | 9.736 × 10−15 | 0.125 × 10−19 |
| e1,2[k] | 1.404 × 10−11 | 0.139 × 10−28 | 8.312 × 10−23 | 1.323 × 10−12 | 0.943 × 10−16 | 2.098 × 10−16 |
| e2,1[k] | 4.343 × 10−10 | 0.983 × 10−16 | 0.943 × 10−16 | 0.872 × 10−17 | 9.736 × 10−15 | 4.875 × 10−18 |
| e2,2[k] | 8.762 × 10−14 | 0.765 × 10−25 | 9.723 × 10−25 | 0.9872 × 10−17 | 0.765 × 10−25 | 0.965 × 10−15 |
Table 19. CPU time using a random set of starting vectors v1–v3 taken from Appendix A Table A4.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 4.3077 | 4.2324 | 1.7611 | 3.2987 | 2.0011 | 4.2324 |
| e1,2[k] | 4.4234 | 3.2987 | 2.3432 | 2.4276 | 0.2024 | 2.4276 |
| e2,1[k] | 2.0011 | 2.4276 | 1.3424 | 2.3432 | 4.3077 | 1.7611 |
| e2,2[k] | 0.2024 | 2.3432 | 4.3077 | 1.7611 | 1.6524 | 4.3232 |
Table 20. Iterations using a random set of starting vectors v1–v3 taken from Appendix A Table A4.
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| e1,1[k] | 06 | 04 | 04 | 05 | 06 | 03 |
| e1,2[k] | 06 | 04 | 04 | 05 | 06 | 03 |
| e2,1[k] | 06 | 04 | 04 | 05 | 06 | 03 |
| e2,2[k] | 06 | 04 | 04 | 05 | 06 | 03 |
Table 21. Consistency of the numerical scheme for solving (88) using vectors v1–v3 from Appendix A Table A4.
| Methods | Max-ei[k] | Avg-CPU | Avg-It | Avg-COC |
|---|---|---|---|---|
| MMϵ | 7.676 × 10−15 | 0.2343 | 06 | 7.535 |
| EMϵ | 7.676 × 10−21 | 0.2343 | 04 | 6.535 |
| BMϵ | 8.675 × 10−33 | 0.1234 | 05 | 5.007 |
| DSϵ1 | 9.654 × 10−42 | 1.1234 | 05 | 4.431 |
| DSϵ2 | 3.004 × 10−36 | 2.7654 | 04 | 4.555 |
| DSϵ3 | 0.095 × 10−46 | 2.1454 | 03 | 3.765 |
Table 22. Numerical results of iterative techniques for solving (92).
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| Iteration | 103 | 115 | 123 | 87 | 66 | 65 |
| Max-Err | 0.13 × 10−20 | 4.3 × 10−17 | 4.3 × 10−10 | 0.13 × 10−20 | 4.31 × 10−33 | 4.2 × 10−49 |
| Max-time | 2.1234 | 4.1552 | 6.1872 | 5.2312 | 3.1313 | 3.1321 |
| COC | 7.2312 | 3.134 | 1.634 | 5.132 | 6.1325 | 4.4313 |
Table 23. Numerical results of iterative techniques for solving (93).
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| Iteration | 85 | 90 | 97 | 58 | 63 | 51 |
| Max-Err | 0.152 × 10−22 | 4.3 × 10−9 | 4.3 × 10−8 | 0.13 × 10−24 | 4.31 × 10−43 | 4.2 × 10−29 |
| Max-time | 2.5424 | 6.1638 | 7.4332 | 5.9872 | 3.0013 | 3.1321 |
| COC | 7.2312 | 3.984 | 1.934 | 5.132 | 6.1325 | 4.4313 |
Table 24. Numerical results of iterative techniques for solving (94).
| Methods | MMϵ | EMϵ | BMϵ | DSϵ1 | DSϵ2 | DSϵ3 |
|---|---|---|---|---|---|---|
| Iteration | 97 | 115 | 53 | 82 | 63 | 45 |
| Max-Err | 1.43 × 10−20 | 4.3 × 10−13 | 0.66 × 10−23 | 0.54 × 10−20 | 4.31 × 10−33 | 0.2 × 10−19 |
| Max-time | 2.11234 | 4.1222 | 5.1003 | 5.1232 | 3.1645 | 3.5432 |
| COC | 7.2532 | 3.934 | 2.0004 | 5.1332 | 6.1875 | 4.4313 |
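Tables 22–24 report results for the heat Equations (92)–(94), which are not reproduced in this excerpt. The sketch below solves a standard model problem u_t = α u_xx on [0, 1] with homogeneous boundary conditions by an explicit finite-difference scheme, only to illustrate how the approximate and exact curves of Figures 7–9 can be compared; the grid sizes and the diffusivity α are illustrative assumptions, not the paper's settings.

```python
import math

# Model problem: u_t = alpha * u_xx, u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x),
# whose exact solution is u(x,t) = exp(-alpha*pi^2*t) * sin(pi x).
alpha, nx, nt, T = 1.0, 20, 400, 0.1
dx, dt = 1.0 / nx, T / nt
r = alpha * dt / dx**2          # explicit scheme is stable for r <= 0.5

# March the explicit stencil u_i^{n+1} = u_i + r*(u_{i+1} - 2u_i + u_{i-1})
u = [math.sin(math.pi * i * dx) for i in range(nx + 1)]
for _ in range(nt):
    u = [0.0] + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                 for i in range(1, nx)] + [0.0]

# Compare with the exact solution at t = T, as in Figures 7-9
exact = [math.exp(-alpha * math.pi**2 * T) * math.sin(math.pi * i * dx)
         for i in range(nx + 1)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)
```

In the paper's setting, discretizing in space instead yields a nonlinear algebraic system whose components can be resolved simultaneously by the parallel schemes compared in the tables above.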