1. Introduction
Scalar nonlinear equations of a single variable,

$$f(x) = 0, \qquad (1)$$

play a pivotal role in advancing scientific understanding and engineering [1,2]. Various scientific disciplines, including physics, chemistry, biology, and economics, utilize nonlinear equations to describe complex correlations and interactions between variables. These equations enable scientists to characterize chaotic systems, fluid dynamics, and population dynamics more accurately than linear models. In engineering, nonlinear equations are crucial in areas such as control systems [3], structural analysis [4], and electrical circuits [5]. Engineers use these equations to model and predict real-world behaviors by accounting for nonlinearities in materials and systems. Nonlinear optimization techniques are essential for solving engineering problems including parameter estimation [6], optimal control [7], and system design [8]. The significance of nonlinear equations extends to emerging fields like artificial intelligence and machine learning [9], where they are used for complex data processing and pattern recognition [10]. Overall, nonlinear equations and their associated systems are indispensable tools for scientists and engineers striving to understand and manage complex systems, thereby fostering the advancement of knowledge and technology.
Solving nonlinear equations analytically can be challenging, and often impossible, due to the intrinsic complexity of nonlinear interactions. Nonlinear equations contain terms that are not simply proportional to the variable of interest, and their solutions may not be expressible in closed form or as simple algebraic expressions [11,12,13,14]. Therefore, we turn to numerical iterative schemes. Iterative numerical methods are effective in solving nonlinear equations and systems, making them invaluable tools for researchers across various fields [15,16,17,18]. These numerical iterative techniques are classified into three types: single root-finding schemes with local convergence behavior, simultaneous methods for finding all roots of (1) with global convergence behavior, and schemes that find all solutions of nonlinear systems of equations (i.e., vectorial problems). Iterative techniques for solving nonlinear systems of equations, such as gradient descent [19] or evolutionary algorithms that search for roots simultaneously across multiple dimensions of the solution space [20], exhibit local convergence behavior.
The simplest and most efficient method is the classical Newton's method [21] for solving (1), given by

$$x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{f'(x^{(k)})}, \quad k = 0, 1, 2, \ldots \qquad (2)$$

The method (2) exhibits local quadratic convergence. To reduce the computational cost of (2), Steffensen [22] proposed the following derivative-free modification:

$$x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{f[x^{(k)}, w^{(k)}]}, \qquad (3)$$

where $w^{(k)} = x^{(k)} + f(x^{(k)})$, and $f[x^{(k)}, w^{(k)}] = \frac{f(w^{(k)}) - f(x^{(k)})}{w^{(k)} - x^{(k)}}$ is the first-order forward divided difference of $f$ [23]. In high-precision computing, the divided difference is replaced with a first-order central difference of $f$ as follows:

$$x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{f[v^{(k)}, w^{(k)}]}, \qquad (4)$$

where $v^{(k)} = x^{(k)} - f(x^{(k)})$, $w^{(k)} = x^{(k)} + f(x^{(k)})$, and $f[v^{(k)}, w^{(k)}] = \frac{f(w^{(k)}) - f(v^{(k)})}{w^{(k)} - v^{(k)}}$ is the first-order central difference operator. The two-step modified Newton's method [24] of third-order convergence has the form

$$x^{(k+1)} = y^{(k)} - \frac{f(y^{(k)})}{f'(x^{(k)})}, \qquad (5)$$

where

$$y^{(k)} = x^{(k)} - \frac{f(x^{(k)})}{f'(x^{(k)})}. \qquad (6)$$
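As an illustration, the following is a minimal Python sketch of the iterations (2)–(6), consistent with the forms given above; the test function, starting point, and tolerance are placeholder choices for this example, not values taken from [21,22,24].

```python
def newton(f, df, x0, tol=1e-12, kmax=100):
    """Classical Newton iteration (2): local quadratic convergence."""
    x = x0
    for _ in range(kmax):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def steffensen(f, x0, tol=1e-12, kmax=100, central=False):
    """Derivative-free Steffensen iteration (3); central=True replaces the
    forward divided difference with the central one, as in (4)."""
    x = x0
    for _ in range(kmax):
        fx = f(x)
        if fx == 0.0:                 # exact root reached
            break
        if central:                   # f[v, w] with v = x - f(x), w = x + f(x)
            dd = (f(x + fx) - f(x - fx)) / (2.0 * fx)
        else:                         # f[x, w] with w = x + f(x)
            dd = (f(x + fx) - fx) / fx
        step = fx / dd
        x -= step
        if abs(step) < tol:
            break
    return x

def two_step_newton(f, df, x0, tol=1e-12, kmax=100):
    """Two-step modified Newton method (5)-(6): third-order convergence."""
    x = x0
    for _ in range(kmax):
        y = x - f(x) / df(x)          # predictor step, Eq. (6)
        x_new = y - f(y) / df(x)      # corrector with frozen derivative, Eq. (5)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Placeholder problem: f(x) = x^3 - 2, with real root 2^(1/3)
f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
print(newton(f, df, 1.0))                  # ~1.259921049894873
print(steffensen(f, 1.0, central=True))    # same root, derivative-free
print(two_step_newton(f, df, 1.0))         # same root, third-order
```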
Higher-order schemes offer considerable advantages over lower-order schemes for solving nonlinear equations due to improved accuracy and efficiency. They achieve higher accuracy per iteration step, reducing truncation errors and requiring fewer iterations to reach the desired precision. The order of convergence can increase to three, four, and beyond, as in the well-known Ostrowski's method [25]. Similarly, Kou et al. [26] developed sixth-order methods using the weight function technique, and Liu et al. [27] presented eighth-order methods based on the same weight function approach.
Numerous single- and multi-step methods exist for solving (1), and some of these methods can be applied to systems of nonlinear equations with local convergence behavior [28]. Noor et al. [29], Darvishi et al. [30], Babajee et al. [31], Ortega et al. [32], and others (see, e.g., [33,34] and the references therein) have employed (2) as a predictor step to construct multi-step approaches for solving systems of nonlinear equations. Iterative methods for finding a single root of nonlinear equations, though widely used, have certain inherent limitations that researchers must consider. One primary concern is convergence; these methods may fail to find a solution if the initial guess is not close enough to a root or if the function has abrupt changes. The dependence on initial guesses poses a significant challenge, as inaccurate or poorly chosen starting points can result in slow convergence or divergence [35,36]. Furthermore, iterative methods usually provide only local solutions, with no guarantee of identifying all roots, particularly multiple roots. The computational cost can be significant, especially for complex functions or high-dimensional systems, and ill-conditioned problems can lead to numerical instability. Additionally, these methods generally return root values without information about their multiplicity, and their applicability may be limited in the presence of discontinuities or non-smooth features [37]. Due to these limitations, we may need to investigate alternative approaches based on the specific characteristics of the nonlinear equations under investigation. Therefore, we turn to simultaneous methods, which are more stable, consistent, and reliable, and which also lend themselves to parallel computing (see, e.g., [38,39]).
In 1891, Weierstrass [40] introduced the generalized form of (2) by incorporating the Weierstrass correction, which was later explored by Presic [41], Durand [42], Dochev [43], and Kerner [44]. In 2015, Proinov et al. [45] proposed the local convergence theorem for the double Weierstrass technique. In 2016, Nedzhibov created a modified version of the Weierstrass technique [46] and presented its local convergence analysis [47] in 2018. In 1973, Aberth [48] developed a third-order convergent simultaneous method with derivatives, which was subsequently accelerated by Nourein [49] to fourth order in 1977, by Petković [50] to sixth order in 2020, and by Mir et al. [51] to tenth order using various single root-finding methods as corrections. Cholakov [52,53] and Marcheva et al. [54] established the local convergence of multi-step simultaneous methods for determining all roots of (1). In 2023, Shams et al. [55,56] described the global convergence behavior of simultaneous algorithms using random initial guess values, along with contributions from many others.
Among derivative-free simultaneous methods, the Weierstrass–Dochev method [57] (abbreviated as BM) is the most attractive. It is given by

$$x_i^{(k+1)} = x_i^{(k)} - W(x_i^{(k)}), \quad i = 1, \ldots, n, \qquad (10)$$

where

$$W(x_i^{(k)}) = \frac{f(x_i^{(k)})}{\prod_{j \neq i} (x_i^{(k)} - x_j^{(k)})}$$

is the Weierstrass correction, $f$ being a monic polynomial of degree $n$. The method (10) has local quadratic convergence.
In 1967, Ehrlich [58] presented the following third-order convergent simultaneous method (abbreviated as EM):

$$x_i^{(k+1)} = x_i^{(k)} - \frac{N(x_i^{(k)})}{1 - N(x_i^{(k)}) \sum_{j \neq i} \frac{1}{x_i^{(k)} - x_j^{(k)}}}, \quad i = 1, \ldots, n, \qquad (11)$$

where $N(x_i^{(k)}) = \frac{f(x_i^{(k)})}{f'(x_i^{(k)})}$ is used as the Newton correction. Petković et al. [59] accelerated the convergence order of (11) from three to six, and later Petković et al. [60] accelerated it from three to ten, yielding the method abbreviated as PMϵ. Shams et al. [61] proposed a three-step simultaneous scheme for finding all polynomial roots (abbreviated as MMϵ), which exhibits a convergence order of twelve.
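For comparison with the Weierstrass sketch above, a minimal Python sketch of the Ehrlich iteration (11) follows; again, the polynomial and starting approximations are placeholder choices for illustration.

```python
import numpy as np

def ehrlich(coeffs, z, tol=1e-12, kmax=100):
    """Ehrlich iteration (11): third-order simultaneous root finding."""
    z = np.asarray(z, dtype=complex)
    dcoeffs = np.polyder(coeffs)              # coefficients of f'
    for _ in range(kmax):
        z_new = np.empty_like(z)
        for i, zi in enumerate(z):
            fz = np.polyval(coeffs, zi)
            if fz == 0:                       # z_i is already a root
                z_new[i] = zi
                continue
            nc = fz / np.polyval(dcoeffs, zi) # Newton correction N(z_i)
            s = sum(1.0 / (zi - zj) for j, zj in enumerate(z) if j != i)
            z_new[i] = zi - nc / (1.0 - nc * s)
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z

# Same placeholder polynomial as above, with roots 1, 2, 3
coeffs = [1.0, -6.0, 11.0, -6.0]
z0 = np.array([0.4 + 0.9j, 2.5 - 0.7j, 3.8 + 0.3j])
print(np.sort_complex(ehrlich(coeffs, z0)))
```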
A review of the existing literature reveals the following:
Most iterative methods used for solving nonlinear equations and systems are highly effective at converging to solutions when the initial guess is close to a root.
These iterative techniques are particularly sensitive to initial guesses and may fail to converge if the initial values are not chosen precisely.
Local convergence algorithms may lack stability and consistency in many cases.
Iterative methods are susceptible to rounding errors and may fail to converge when the problem is poorly conditioned.
Nonlinear equations and systems can have multiple solutions, and achieving convergence to the desired solution based on initial estimates can be challenging.
Hirstov et al. [62] proposed the generalized Weierstrass method for solving systems of nonlinear equations (abbreviated as BMϵ), and Chinesta et al. [63] proposed the generalization of (11) to nonlinear systems (abbreviated as EMϵ); the order of convergence of EMϵ is two.
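For orientation, a minimal Python sketch of the classical Newton iteration for systems, the local-convergence baseline that such generalized simultaneous schemes aim to improve upon, is given below; the test system, starting vector, and tolerance are illustrative placeholders, and this is not the BMϵ or EMϵ update formula.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, kmax=100):
    """Newton's method for F(x) = 0: solve J(x) s = F(x), then x <- x - s."""
    x = np.asarray(x0, dtype=float)
    for _ in range(kmax):
        s = np.linalg.solve(J(x), F(x))
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x

# Placeholder system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))   # one of the four real solutions
```

Such a scheme returns a single solution per run and depends strongly on the starting vector, which is precisely the limitation that simultaneous vectorial schemes address.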
Motivated by prior work, the main objective of this study is to develop a novel family of efficient, higher-order simultaneous schemes. These schemes aim not only to compute all roots of nonlinear equations simultaneously but also to solve nonlinear systems of equations, thereby addressing the limitations outlined earlier. The structure of the paper is as follows: after the introduction,
Section 2 introduces and analyzes a new family of two-step vectorial simultaneous algorithms.
Section 3 is dedicated to discussing computational efficiencies, while in
Section 4, we present and discuss the numerical results obtained from our proposed schemes. Finally, the concluding section summarizes the key findings, contributions of this research, and directions for future work.
4. Numerical Outcomes
To evaluate the performance and efficiency of the newly designed vectorial scheme, we solved nonlinear vectorial problems arising in science and engineering. We terminated the computer program using the following criterion:

$$e^{(k)} = \left\| \mathbf{x}^{(k+1)} - \mathbf{x}^{(k)} \right\|_2 < \epsilon,$$

where $e^{(k)}$ represents the absolute error in the Euclidean 2-norm [70,71]. In the numerical calculations, we utilized the vectors from Appendix A Table A1, Table A2 and Table A3 for (29), abbreviated as v1, v2, and v3, respectively. The numerical outcomes were analyzed for the simultaneous schemes PMϵ, MMϵ, BDϵ, BMϵ, DFϵ, and DSϵ with respect to the following points:
Computational CPU time (CPU-time);
Residual error computed for all roots using Algorithms 1–3;
Efficiency of the simultaneous schemes PMϵ, MMϵ, BDϵ, BMϵ, DFϵ, and DSϵ;
Consistency and stability analysis;
Algorithm 1: Method for finding all distinct and multiple roots of (1).
Algorithm 2: Derivative-free method for finding all roots of (1).
Algorithm 3: Method for finding all solutions of (26).
Example 1: Quarter car suspension model
The shock absorber in the suspension system regulates the transient behavior of both the vehicle and suspension mass [72,73]. Its nonlinear behavior makes the shock absorber one of the most complex components of the suspension system; in particular, the damping force of the dampers is characterized by an asymmetric nonlinear hysteresis loop. Automobile engineers utilize a quarter car suspension model, a simplified representation, to examine the vertical dynamics of a single wheel and its interaction with the road. This model is a component of the broader field of vehicle dynamics and suspension design. The quarter car model divides the vehicle into two primary parts: the sprung mass and the unsprung mass.
Sprung mass: the vehicle body mass, including the chassis, occupants, and other components directly supported by the suspension. The majority of the sprung mass is typically concentrated around the vehicle's center of gravity.
Unsprung mass: the wheel, tire, and any suspension components directly linked to the wheel; these components are not supported by the suspension springs.
The suspension system, comprising a spring and a damper, regulates the interaction between sprung and unsprung masses. The spring represents the suspension’s elasticity, while the damper simulates the shock absorber’s damping effect. Using the quarter car suspension model, engineers can analyze how a vehicle responds to potholes and other road irregularities. This model allows for the calculation of dynamic quantities such as suspension deflection, wheel displacement, and vehicle body forces. Understanding these fundamental dynamics and characteristics of suspension systems aids in designing and optimizing suspension systems for improved ride comfort, handling, and stability. Despite the availability of more advanced models, such as half-car or full-car models, the quarter car model remains a critical tool in vehicle dynamics studies. The equations for mass motion are as follows:
$$m_s \ddot{x}_s + k_s (x_s - x_u) + F = 0,$$
$$m_u \ddot{x}_u - k_s (x_s - x_u) - F + k_t (x_u - r) = 0,$$

where $m_s$ represents the mass above the spring, $m_u$ denotes the mass below the spring, $x_s$ signifies the displacement of the sprung mass, $x_u$ indicates the displacement of the unsprung mass, $r$ represents disturbances from road bumps, $k_s$ corresponds to the spring stiffness, and $k_t$ pertains to the tire stiffness. To accurately model the damper force $F$, one can use the polynomial presented by Barethiye [74] in Equation (64).
The exact roots of Equation (65), together with initial guesses chosen sufficiently close to them, are used in the computations; the numerical outcomes for these initial guesses are presented in Table 2.
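To make the workflow concrete before examining Table 2, the Weierstrass iteration (10) can be applied directly to a damper-force polynomial; the cubic below uses illustrative placeholder coefficients, not Barethiye's fitted values from Equation (64).

```python
import numpy as np

# Hypothetical cubic damper characteristic (placeholder coefficients,
# not Barethiye's Equation (64)): F(v) = v^3 - 5v^2 + 2v + 8 = (v+1)(v-2)(v-4)
coeffs = [1.0, -5.0, 2.0, 8.0]
z = (0.4 + 0.9j) ** np.arange(1, 4)      # distinct complex starting guesses

for _ in range(100):                      # Weierstrass iteration (10)
    w = np.array([np.polyval(coeffs, zi) /
                  np.prod([zi - zj for j, zj in enumerate(z) if j != i])
                  for i, zi in enumerate(z)])
    z = z - w
    if np.max(np.abs(w)) < 1e-12:
        break

print(np.sort(z.real))                    # expected: [-1.  2.  4.]
```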
The results of Table 2 clearly show that DFϵ and BDϵ are superior to PMϵ, MMϵ, BM, and EM in terms of computational order of convergence, CPU-time, residual error, and error per iteration (Error it) for solving (85).
The initial vectors provided in Table A1 [75] are used to verify the global convergence of PMϵ, MMϵ, BM, EM, DFϵ, and BDϵ.
The numerical outcomes of the simultaneous vectorial method for solving (85) in terms of residual error, CPU time, local computational order of convergence, and iterations are shown in Table 3, Table 4 and Table 5 and Figure 3. Table 3, Table 4 and Table 5 clearly illustrate that DFϵ outperforms MMϵ, BM, EM, PMϵ, and BDϵ in terms of global convergence, as it converges faster and utilizes less CPU time and fewer iterations than the other methods.
The numerical results from iterative methods using random initial vectors, as presented in Table 3, Table 4, Table 5 and Table 6, demonstrate that the newly developed schemes BDϵ and DFϵ outperform existing methods such as PMϵ, MMϵ, BM, and EM, achieving significantly higher accuracy with maximum errors of 0.11 × 10−57, 0.98 × 10−54, and 7.98 × 10−54 for the three sets of initial vectors v1–v3 (see Table 3 and Figure 3). These techniques also exhibit superior performance compared to DM, DM1, and DM3 in terms of average CPU time (Avg-CPU) and average number of iterations (Avg-Iterations).
Table 6 provides an overall assessment of the simultaneous schemes, confirming that DFϵ shows greater stability and consistency than MMϵ, BDϵ, BMϵ, PMϵ, and DSϵ.
Example 2: Solving a non-differentiable system [76]
Consider the non-differentiable system (86). The exact solutions of (86), including the trivial one, are known [76], and we chose initial guesses sufficiently close to the nontrivial solutions. The numerical results are presented in Table 7.
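Because system (86) is non-differentiable, derivative-free iterations are the natural choice. The following is a minimal Python sketch of a Steffensen-type (divided-difference) iteration for systems; the non-smooth test system, starting vector, and step-size rule are illustrative placeholders, not system (86) or the proposed schemes.

```python
import numpy as np

def dd_jacobian(F, x, h):
    """First-order forward divided-difference approximation of the Jacobian."""
    n = x.size
    Fx = F(x)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

def steffensen_system(F, x0, tol=1e-12, kmax=200):
    """Steffensen-type step: Newton update with a divided-difference Jacobian,
    using step size h = ||F(x)||_2, so no derivatives are required."""
    x = np.asarray(x0, dtype=float)
    for _ in range(kmax):
        Fx = F(x)
        h = max(np.linalg.norm(Fx), 1e-12)
        s = np.linalg.solve(dd_jacobian(F, x, h), Fx)
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x

# Placeholder non-smooth system: x^2 + |y| = 5 and x + y^2 = 3,
# with a solution at (2, 1), where |y| is locally smooth
F = lambda v: np.array([v[0]**2 + abs(v[1]) - 5.0, v[0] + v[1]**2 - 3.0])
print(steffensen_system(F, [2.2, 1.3]))   # ~[2. 1.]
```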
The results of Table 7 clearly demonstrate that DSϵ3 outperforms DSϵ1–DSϵ2, MMϵ, EMϵ, and BM in terms of computational order of convergence, CPU-time, and residual error for solving (86).
To check the global convergence behavior, we utilized the starting set of vectors presented in Appendix A Table A2.
In terms of residual error, CPU time, local computational order of convergence, and iterations, the numerical results of the simultaneous vectorial method for solving (86) are presented in Table 8, Table 9 and Table 10 and Figure 4. These tables clearly illustrate that DSϵ3 outperforms DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ in terms of global convergence, converging faster and utilizing less CPU time and fewer iterations than the other methods.
The numerical results of the iterative method using random initial vectors presented in Table 8, Table 9 and Table 10 show that the newly developed schemes DSϵ1–DSϵ3 are more efficient than the existing methods MMϵ, EMϵ, and BMϵ, achieving much higher accuracy, with errors of 4.8756 × 10−18, 7.651 × 10−29, and 7.654 × 10−29 for the three sets of initial vectors v1–v3, respectively. Additionally, they consume less CPU time and require fewer iterations.
Table 11 depicts the overall behavior of the simultaneous schemes and demonstrates that DSϵ3 is more stable and consistent than DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ.
Example 3: Computing the steady state of the epidemic model [77]
Consider the system of nonlinear equations (87), in which the parameter values are fixed as in [77], although different values of R may be considered. The exact solutions of (87) are known, and we chose initial guesses sufficiently close to them. The numerical results are presented in Table 12.
The results of Table 12 clearly show that DSϵ3 outperforms DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ in terms of computational order of convergence, CPU-time, and residual error for solving (87).
To evaluate the global convergence behavior, we utilize the initial vector set presented in Appendix A Table A3.
In terms of residual error, CPU time, local computational order of convergence, and iterations, the numerical outcomes of the simultaneous vectorial method for solving (87) are shown in Table 13, Table 14 and Table 15 and Figure 5. Table 13, Table 14 and Table 15 clearly illustrate that DSϵ3 outperforms DSϵ1–DSϵ2, MMϵ, EMϵ, and BM in terms of global convergence, converging faster, consuming less CPU time, and requiring fewer iterations than the other methods.
With errors of 8.75 × 10−18, 1.124 × 10−24, and 1.121 × 10−24 for the three sets of initial vectors v1–v3, the newly developed schemes DSϵ1–DSϵ3 achieved significantly higher accuracy than the existing methods MMϵ, EMϵ, and BMϵ. They also consumed less CPU time and required fewer iterations, as evidenced by the numerical results of the iterative method using random initial vectors presented in Table 13, Table 14, Table 15 and Table 16.
Table 16 depicts the overall behavior of the simultaneous schemes and demonstrates that DSϵ3 is more stable and consistent than DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ.
Example 4: Searching for the equilibrium point of the N-body system [78]
Consider the nonlinear system of equations (88), which describes how to find the equilibrium solution in an N-body system. The system (88) has more than one solution depending on the parameter values; for the parameter values chosen here, it has five solutions. The initial starting values are chosen sufficiently close to the solutions, and the numerical outcomes are presented in Table 17.
The results in Table 17 clearly demonstrate that DSϵ3 outperforms DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ in terms of computational order of convergence, CPU-time, and residual error for solving (88).
To assess the global convergence behavior, we utilize the initial set of vectors presented in Appendix A Table A4.
In terms of residual error, CPU time, local computational order of convergence, and iterations, the numerical outcomes of the simultaneous vectorial method for solving (88) are shown in Table 18, Table 19 and Table 20 and Figure 6. These tables clearly illustrate that DSϵ3 outperforms DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ in terms of global convergence, again converging faster and requiring less CPU time and fewer iterations than the other methods.
The newly developed schemes DSϵ1–DSϵ3 achieved substantially higher accuracy than the existing methods, with errors of 4.876 × 10−18, 4.875 × 10−18, and 4.34 × 10−18 for the three sets of initial vectors v1–v3. The numerical results of the iterative method using random initial vectors reported in Table 18, Table 19 and Table 20 also indicate that DSϵ1–DSϵ3 required less CPU time and fewer iterations.
Table 21 presents the overall behavior of the simultaneous schemes and demonstrates that DSϵ3 is more stable and consistent than DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ.
Example 5: Solving the diffusion equation [79]
Consider the heat diffusion equation (89) with various boundary conditions. To find the solution of (89), we fix the diffusivity parameter and choose step sizes that ensure the stability of the implicit finite difference scheme (90). Applying the approximations (90) to (89), we derive the tridiagonal system of equations (91).
For different initial and boundary conditions, we obtain additional sets of partial differential equations, (92) and (93). Using (90) in (92), we derive a nonlinear system of equations similar to (91) after incorporating the initial and boundary conditions. The exact and approximate solutions obtained by DSϵ1–DSϵ3, MMϵ, EMϵ, and BM are presented in Figure 7.
Using (90) in (93), we obtain another nonlinear system of equations similar to (91) after incorporating the respective initial and boundary conditions. The exact and approximate solutions obtained by DSϵ1–DSϵ3, MMϵ, EMϵ, and BM are presented in Figure 8.
Using a random set of initial vectors from Appendix A Table A5, the numerical results in Table 22, Table 23 and Table 24 and Figure 7, Figure 8 and Figure 9 clearly show that the scheme DSϵ3 is more stable and consistent than DSϵ1–DSϵ2, MMϵ, EMϵ, and BM when solving (89) with different boundary and initial conditions.
The schemes DSϵ1–DSϵ3 outperformed MMϵ, EMϵ, and BMϵ, requiring less CPU time and fewer iterations, as evidenced by the numerical results of the iterative technique under various initial conditions shown in Table 22, Table 23 and Table 24. These tables depict the overall behavior of the simultaneous schemes and confirm that DSϵ3 is more stable and consistent than DSϵ1–DSϵ2, MMϵ, EMϵ, and BMϵ.