1. Introduction
In recent years, researchers have concentrated on addressing non-linear equations that incorporate complex functions such as exponential, logarithmic, and trigonometric ones, which are prevalent across various disciplines, including science and engineering. However, obtaining analytical solutions for these equations remains a significant challenge. Generally, in a non-linear system $F(x) = 0$, where $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$, we are interested in finding a vector $x^* \in \mathbb{R}^n$ such that $F(x^*) = 0$, where $F$ is a differentiable function and $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$. In order to tackle such problems, there are various approaches in the literature. One of them is metaheuristic techniques.
Metaheuristic algorithms draw their inspiration from diverse natural phenomena, animal behaviors and strategies, principles from biological sciences and genetics, concepts from physical sciences, human activities, game rules, and evolutionary processes [1]. From the perspective of their primary inspirational source, these algorithms are classified into five distinct categories, highlighted in Figure 1.
Metaheuristic algorithms begin by randomly creating several initial solutions within the problem’s search domain. These solutions are iteratively improved based on the algorithm’s performance, and the best solution found during this process is returned as the final solution. However, due to the randomness in the search process, metaheuristic algorithms cannot ensure the discovery of the global optimum. Their ability to produce better solutions depends on their capacity for exploration and exploitation. Exploration entails broadly searching different areas of the problem space to pinpoint promising regions, while exploitation involves fine-tuning solutions locally within these regions to approach the global optimum. Achieving an effective balance between exploration and exploitation is essential for the success of metaheuristic algorithms.
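To make this loop concrete, the following is a minimal sketch of a generic population-based search (Python is used here purely for illustration; the paper’s computations are in MATLAB). It is not the update rule of any algorithm discussed in this paper: the shrinking random step simply stands in for the gradual shift from exploration to exploitation.

```python
import numpy as np

def generic_metaheuristic(f, lb, ub, n_agents=100, n_iter=50, seed=None):
    """Illustrative explore/exploit loop: random initial solutions,
    iterative improvement, best solution returned at the end."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # Exploration: scatter agents randomly over the search domain.
    pop = rng.uniform(lb, ub, size=(n_agents, lb.size))
    fit = np.apply_along_axis(f, 1, pop)
    for t in range(n_iter):
        # Broad random moves early (exploration) shrink over time (exploitation).
        scale = (1.0 - t / n_iter) * (ub - lb)
        cand = np.clip(pop + rng.normal(0.0, 0.1, pop.shape) * scale, lb, ub)
        cand_fit = np.apply_along_axis(f, 1, cand)
        better = cand_fit < fit                  # greedy replacement
        pop[better], fit[better] = cand[better], cand_fit[better]
    best = np.argmin(fit)
    return pop[best], fit[best]
```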
On the other hand, mathematically, non-linear systems have been solved by iterative methods. The most prominent and most widely used technique is the Newton–Raphson method [2], which is given below in Equation (1):

$$x^{(k+1)} = x^{(k)} - \left[F'\left(x^{(k)}\right)\right]^{-1} F\left(x^{(k)}\right), \quad k = 0, 1, 2, \ldots, \tag{1}$$

where $F'\left(x^{(k)}\right)$ denotes the Jacobian of $F$.
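As a concrete illustration of Equation (1), here is a minimal sketch of the Newton–Raphson iteration for a system, using a forward-difference approximation of the Jacobian; the two-equation test system at the end is a hypothetical example, not one of the paper’s benchmarks.

```python
import numpy as np

def newton_system(F, x0, tol=1e-10, max_iter=100, h=1e-8):
    """Newton-Raphson for F(x) = 0 with a numerical Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):            # forward-difference Jacobian, column by column
            e = np.zeros(n); e[j] = h
            J[:, j] = (F(x + e) - Fx) / h
        # x_{k+1} = x_k - J(x_k)^{-1} F(x_k), computed by solving a linear system
        x = x - np.linalg.solve(J, Fx)
    return x

# Example: x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0, with root (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 5])
print(newton_system(F, [1.0, 1.0]))
```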
However, Newton’s method has some limitations, such as the selection of the initial guess and the case where the derivative of the function $F$ is zero or nearly zero. Over time, many researchers such as Halley [3], Householder [4], Ostrowski [5], Traub [6], Jarratt [7], and Behl and Kanwar [8] have constructed various higher-order iterative techniques to solve non-linear systems of equations.
In recent years, various hybrid algorithms, involving either the hybridization of two optimization algorithms or the hybridization of an optimization algorithm with an iterative technique, have been developed. This line of work started in 1988, when Karr et al. [9] proposed the first hybrid approach combining the genetic algorithm and Newton’s method, in which the genetic algorithm determines the initial guess for Newton’s method. Although the genetic algorithm was the first and oldest evolutionary algorithm, its hybridization with iterative methods did not gain much attention at the time due to its inefficient results. Later, Luo et al. [10] in 2008 solved some complex non-linear equations by combining Quasi-Newton and Chaos Optimization algorithms. However, at that time the mathematical convergence of the Chaos Optimization algorithm had not been proved, so the convergence analysis of the developed method was not discussed. In 2021, Sihwail et al. [11] constructed a hybrid algorithm called NHHO to solve an arbitrary system of non-linear equations by fusing Newton’s method with the Harris Hawks Optimization technique. Sihwail et al. in 2022 [12] constructed a hybrid approach by integrating Jarratt’s iterative scheme with the Butterfly Optimization Algorithm for solving non-linear systems. In 2023, Solaiman et al. [13] developed a modified hybrid algorithm of Newton’s method with the Sperm Swarm Optimization Algorithm. A hybrid algorithm can overcome the shortcomings of one approach while utilizing the advantages of the other; some examples are [9,10,11,12,13,14,15]. Optimization algorithms are widely applied across diverse domains, but the No Free Lunch (NFL) theorem [16] highlights that no single algorithm excels at all problems. When addressing non-linear systems of equations, these algorithms may face challenges such as divergence or getting stuck in local optima. Moreover, the iterative methods for non-linear equations often converge slowly, affected by the properties and derivatives of the function. They struggle with costly or undefined derivatives and rely heavily on accurate initial guesses; a poor initial guess can lead to divergence or an incorrect root. Consequently, integrating optimization techniques with iterative methods is crucial for effectively solving such systems. The hybridization of iterative techniques with optimization algorithms represents a robust approach to tackling complex optimization problems. By combining the local precision of iterative methods with the global search capabilities of optimization algorithms, hybrid approaches have demonstrated superior performance in various applications, from machine learning to engineering design.
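This generic pattern can be sketched in a few lines, reusing the illustrative routines from above: the metaheuristic’s global search produces the candidate that seeds the locally precise iterative refinement. This is a hedged sketch of the general hybridization idea, not the authors’ exact HWOA coupling.

```python
import numpy as np

def hybrid_solve(F, lb, ub):
    """Generic hybrid: a global metaheuristic search seeds local Newton refinement."""
    fitness = lambda x: np.linalg.norm(F(x))              # functional error
    x_global, _ = generic_metaheuristic(fitness, lb, ub)  # global phase (exploration)
    return newton_system(F, x_global)                     # local phase (exploitation)
```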
Keeping these facts in mind, we constructed a Hybrid Walrus Optimization Algorithm (HWOA), which is discussed in Section 2.
3. Numerical Results
In this section, the performance of the proposed algorithms is evaluated using various non-linear systems of equations. Additionally, real-world applications are selected to demonstrate the capabilities of these methods. These are well-known problems and have been widely used in numerous research studies. The results are compared with optimization algorithms such as POA [19], HSA [20], SMA [21], ECO [22], WOA [17], NHHO [11], and JBOA [12]. The parameters for the proposed methods include an initial population of 100 search agents and 50 iterations. The number of iterations plays a significant role in the robustness of an algorithm. Algorithm stability is assessed by considering the best fitness value. The proposed methods are also compared based on fitness value, number of iterations, CPU time (in seconds), and optimal roots. The optimal roots are the roots (solutions) that give the smallest fitness value (functional error). The fitness value [11] is computed as the Euclidean norm of the vector $F(x)$, given as

$$\text{Fitness} = \|F(x)\|_2 = \sqrt{f_1^2(x) + f_2^2(x) + \cdots + f_n^2(x)}.$$
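In code, this fitness measure is a one-line norm computation; a minimal sketch:

```python
import numpy as np

def fitness(F, x):
    # Euclidean norm of the residual vector: sqrt(f1(x)^2 + ... + fn(x)^2)
    return np.linalg.norm(F(x))
```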
The parametric values used for the different methods are listed in Table 1. The parameters of the metaheuristic techniques are generated randomly within the search space (dimension) of the problem, specified in the form of lower and upper bounds. In the proposed method, HWOA, the parameters for the Walrus Optimization Algorithm (WOA) are generated randomly within the search space for each test case, while the value of the adjustable parameter b in the iterative scheme is set to 1 for all the test cases.
All computations are performed using MATLAB R2024b on a Windows 11 system with an Intel i5 processor running at 1.60 GHz and 2.11 GHz, with 8 GB of RAM. Due to space constraints, results in the tables are rounded to ten decimal places. For validation of the proposed approach, four benchmark functions taken from the Estonian test collection [23] and listed in Table 2 have been tested.
Table 3 clearly illustrates that, for the test functions, the proposed method HWOA provided the best fitness with a lower number of iterations and less computational time. Moreover, NHHO and JBOA provide better fitness than the traditional optimization approaches but do not attain better results than the proposed HWOA method, even though JBOA is likewise a hybridization of Jarratt’s fourth-order iterative scheme with the Butterfly Optimization Algorithm. Due to the poor exploration capabilities of the Butterfly Optimization Algorithm, however, it does not provide efficient results in fewer iterations.
Table 4 presents the results for different initial points. Since iterative methods depend heavily on the choice of the initial guess, a poor choice of initial point can cause the scheme to diverge or fail to converge to an optimal solution. Table 4 shows that the iterative scheme diverges when the initial point is not near the root and does not provide an accurate solution to the test function; even when it converges to a solution, the accuracy is much lower. On the other hand, the proposed scheme HWOA does not require an initial guess, as this is supplied by the metaheuristic component of the proposed method. In addition, the iterative scheme can minimize the functional error to significant levels provided we increase the precision of the results, but that increases the computational cost of the scheme. The results for all the problems indicate the efficiency of the proposed method HWOA over the traditional metaheuristic approaches as well as the standalone iterative technique, as the proposed method requires a lower number of iterations and less CPU time to produce the best optimal roots.
5. Application to Non-Linear Ordinary Differential Equation
In this application, a non-linear boundary value problem [25] is considered to check the efficiency of the proposed method.
The given non-linear boundary value problem can be transformed into a system of non-linear equations by discretizing the domain of the differential equation, $[2, 3]$, into 100 equal subintervals. This is achieved using the finite difference method, with a step size of $h = \frac{3-2}{100} = 0.01$ and $t_i = 2 + ih$ for all values of $i$, where $i = 1, 2, \ldots, 99$. After discretization, the second derivative $y''(t_i)$ and first derivative $y'(t_i)$ are approximated using the central difference formulas, as described in [25], resulting in the following equations:

$$y''(t_i) \approx \frac{y_{i+1} - 2y_i + y_{i-1}}{h^2}, \qquad y'(t_i) \approx \frac{y_{i+1} - y_{i-1}}{2h}, \quad i = 1, 2, \ldots, 99.$$

Putting the values of $y''(t_i)$ and $y'(t_i)$ in (12), a system of 99 non-linear equations in the unknowns $y_1, y_2, \ldots, y_{99}$ is generated.
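A minimal sketch of this discretization is given below; the right-hand side f and the boundary values ya, yb are placeholders, since the specific form of Equation (12) is assumed rather than restated here.

```python
import numpy as np

def bvp_residuals(y_inner, f, ya, yb, a=2.0, b=3.0, n=100):
    """Residuals of a generic BVP y'' = f(t, y, y') on [a, b] with
    Dirichlet values y(a) = ya, y(b) = yb, discretized by central differences."""
    h = (b - a) / n                        # step size, 0.01 for n = 100
    t = a + h * np.arange(1, n)            # interior nodes t_1, ..., t_{n-1}
    y = np.concatenate(([ya], y_inner, [yb]))
    # central differences for y'' and y' at the interior nodes
    ypp = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2
    yp = (y[2:] - y[:-2]) / (2 * h)
    # one residual equation per interior node: 99 non-linear equations
    return ypp - f(t, y[1:-1], yp)
```

Driving these 99 residuals to zero, for example with the hybrid solver, recovers the approximate solution at the interior nodes.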
The approximated roots for the above problem are obtained from the proposed scheme. The results in Table 8 illustrate that the proposed method provided a better fitness value in fewer iterations and less CPU time compared to the other methods.
From the results in Table 11 and Table 12, we can see that the proposed method HWOA outperforms all the other methods, as well as the iterative scheme, in terms of CPU time, fitness, iteration count, and solution accuracy. NHHO and JBOA provide approximately the same fitness as HWOA, but their computational cost and number of iterations are higher. From these results, HWOA proves to be a robust and effective method for solving large non-linear systems arising from non-linear boundary value problems.
6. Convergence Analysis
Convergence speed is a critical metric when evaluating and comparing algorithms, as it reflects how quickly an algorithm can arrive at a solution, especially for non-linear equation systems. The convergence of the proposed method HWOA is assessed based on the fitness value in each iteration. A higher convergence speed indicates that the algorithm can achieve a precise solution with fewer iterations or in less time, which is critical for real-world applications. The convergence chart visually represents this by illustrating both the speed at which the algorithm approaches the solution and the accuracy of the results, providing a clear comparison of different algorithms’ effectiveness in tackling complex problems.
Figure 3 presents the convergence analysis for the test functions. Across all the test cases, HWOA attains the best fitness within 10–50 iterations. The convergence graphs of the traditional optimization and hybrid approaches, such as SMA, POA, HSA, ECO, WOA, NHHO, and JBOA, show local optima entrapment and lower consistency in the fitness value. In contrast, the convergence graph of HWOA shows that it attains a consistent fitness value in a lower number of iterations without any local optima entrapment. This is possible due to the balance of the exploration and exploitation phases in the proposed method: the exploration and exploitation phases are performed by the Walrus Optimization Algorithm, and the best candidate solution is then further refined in the exploitation phase by the iterative scheme, which accelerates the convergence of HWOA. In terms of efficiency, HWOA requires much less computational time and fewer iterations to reach the optimal solution, performing 60–80% better than the traditional metaheuristic and iterative techniques. These observations make HWOA a robust, stable, and efficient method for solving systems of non-linear equations.
Remark 1. To find the optimal solution, the proposed method is run until a prescribed stopping criterion is met (a tolerance on the fitness value or the maximum number of iterations). The fitness value (functional error) is determined by the Euclidean norm $\|F(x)\|_2$ defined in Section 3.
Stability and Consistency
To demonstrate the stability and efficiency of the proposed algorithm HWOA, the average (mean) and standard deviation are the key parameters. The statistical results have been observed over 30 independent runs for all the algorithms; they also show the consistency of the methods. True consistency is achieved when the fitness value remains the same, or approximately the same, in every run, while the standard deviation measures the algorithm’s stability: the lower the standard deviation, the more reliable and robust the algorithm.
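As a small illustration of how these statistics are gathered (the names here are placeholders, not the paper’s code), each algorithm is run independently 30 times and the best fitness of each run is aggregated:

```python
import numpy as np

def run_statistics(solve_once, n_runs=30):
    # solve_once(seed) -> best fitness of one independent run (placeholder hook)
    best_fits = np.array([solve_once(seed) for seed in range(n_runs)])
    return best_fits.mean(), best_fits.std()   # mean and standard deviation
```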
Table 13 and Table 14 show the mean and standard deviation results of the different methods for the test functions. From the results, it is clear that the proposed method gives the same, or approximately the same, fitness value across 30 independent runs, which makes the algorithm more stable, efficient, and robust.
Figure 4 presents the graphical analysis of different problems over multiple runs. The graphs show that the fitness of the traditional approaches fluctuates from run to run, while the proposed method HWOA gives consistent fitness in each run, making it more robust and stable. In Figure 5, the graphical analysis of convergence, iterations, and 30-run convergence results of the different methods for the ordinary differential equation application is presented.
7. Conclusions
The proposed hybrid algorithm, integrating a fourth-order iterative scheme with the Walrus Optimization Algorithm (WOA), has demonstrated superior performance in solving complex systems of non-linear equations, as evidenced by its application to benchmark problems. Comparative analysis with established metaheuristic and hybrid algorithms, including the Pelican Optimization Algorithm (POA), Walrus Optimization Algorithm (WOA), Slime Mould Algorithm (SMA), Newton–Harris Hawks Optimization Algorithm (NHHO), and Jarratt–Butterfly Optimization Algorithm (JBOA), reveals that the hybrid approach HWOA consistently outperforms these methods in terms of iteration count, computational time, and fitness value. By leveraging the rapid convergence properties of the fourth-order iterative scheme and the robust global search capabilities of the WOA, the hybrid algorithm achieves a remarkable balance between exploration and exploitation, leading to fewer iterations and reduced computational time while maintaining high accuracy in fitness values. These results underscore the hybrid algorithm’s efficacy in addressing real-world applications, such as hydrocarbon combustion, circuit design, and a non-linear ordinary differential equation, where computational efficiency and precision are paramount. This work establishes the hybrid algorithm as a powerful tool for tackling non-linear systems, offering significant advantages over traditional and contemporary optimization techniques. Future research can focus on enhancing the hybrid algorithm by incorporating adaptive parameter tuning to further optimize its performance across diverse problem domains. Exploring its application to large-scale, real-world problems, such as multi-dimensional optimization in machine learning or dynamic systems in engineering, could expand its practical utility.