Article

A Hybrid Walrus Optimization-Based Fourth-Order Method for Solving Non-Linear Problems

1
Department of Mathematics, Chandigarh University, Gharuan, Mohali 140413, India
2
Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, 46002 València, Spain
3
Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
4
Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
*
Author to whom correspondence should be addressed.
Submission received: 9 October 2025 / Revised: 16 December 2025 / Accepted: 17 December 2025 / Published: 23 December 2025
(This article belongs to the Special Issue Advances in Classical and Applied Mathematics, 2nd Edition)

Abstract

Non-linear systems of equations play a fundamental role in various engineering and data science models, where accurate solutions are essential for both theoretical research and practical applications. However, solving such systems is highly challenging due to their inherent non-linearity and computational complexity. This study proposes a novel hybrid iterative method with fourth-order convergence. The foundation of the proposed scheme combines the Walrus Optimization Algorithm and a fourth-order iterative technique. The objective of this hybrid approach is to enhance global search capability, reduce the likelihood of convergence to local optima, accelerate convergence, and improve solution accuracy in solving non-linear problems. The effectiveness of the proposed method is checked on standard benchmark problems and two real-world case studies, hydrocarbon combustion and electronic circuit design, and one non-linear boundary value problem. In addition, a comparative analysis is conducted with several well-established optimization algorithms, based on the optimal solution, average fitness value, and convergence rate. Furthermore, the proposed scheme effectively addresses key limitations of traditional iterative techniques, such as sensitivity to initial point selection, divergence issues, and premature convergence. These findings demonstrate that the proposed hybrid method is a robust and efficient approach for solving non-linear problems.

1. Introduction

In recent years, researchers have concentrated on addressing non-linear equations that incorporate complex functions such as exponential, logarithmic, and trigonometric ones, which are prevalent across various disciplines, including science and engineering. However, obtaining analytical solutions for these equations remains a significant challenge. Generally, for a non-linear system F(X) = 0, where F : D ⊆ ℝⁿ → ℝⁿ, we are interested in finding a vector X* = (x_1*, x_2*, …, x_n*)ᵀ such that F(X*) = 0, where F(X) = (f_1(X), f_2(X), …, f_n(X))ᵀ is a differentiable function and X = (x_1, x_2, …, x_n)ᵀ ∈ ℝⁿ. There are various approaches in the literature for tackling such problems; one of them is metaheuristic techniques.
Metaheuristic algorithms draw their inspiration from diverse natural phenomena, animal behaviors and strategies, principles from biological sciences and genetics, concepts from physical sciences, human activities, game rules, and evolutionary processes [1]. From the perspective of their primary inspirational source, these algorithms are classified into five distinct categories highlighted in Figure 1.
Metaheuristic algorithms begin by randomly creating several initial solutions within the problem's search domain. These solutions are iteratively improved based on the algorithm's performance. The best solution found during this process is offered as the final solution. However, due to the randomness in the search process, metaheuristic algorithms cannot ensure the discovery of the global optimum. Their ability to produce better solutions depends on their capacity for exploration and exploitation. Exploration entails broadly searching different areas of the problem space to pinpoint promising regions, while exploitation involves fine-tuning solutions locally within these regions to approach the global optimum. Achieving an effective balance between exploration and exploitation is essential for the success of metaheuristic algorithms.
On the other hand, mathematically, non-linear systems have been solved by iterative methods. The most prominent and most used technique is the Newton–Raphson method [2], which is given below in Equation (1):
X_{n+1} = X_n − [F′(X_n)]^{−1} F(X_n),
where F′(X_n) denotes the Jacobian matrix of F evaluated at X_n.
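As a reference point for the hybrid scheme discussed later, the Newton–Raphson iteration (1) can be sketched in a few lines of NumPy; the test system and starting point below are illustrative only, not taken from the paper.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson for systems: X_{n+1} = X_n - J(X_n)^{-1} F(X_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve J(x) * step = F(x) instead of forming the inverse explicitly
        step = np.linalg.solve(J(x), F(x))
        x = x - step
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Illustrative system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0]*v[1] - 1])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

As the section notes, the iteration is sensitive to the starting point: from an initial guess far from a root, or near a singular Jacobian, the same loop can diverge.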
However, Newton's method has some limitations, such as the selection of the initial guess and the case where the derivative of the function F(X) is zero or nearly zero. Over time, many researchers, such as Halley [3], Householder [4], Ostrowski [5], Traub [6], Jarratt [7], and Behl and Kanwar [8], have constructed various higher-order iterative techniques to solve non-linear systems of equations.
In recent years, various hybrid algorithms have been developed that combine two optimization algorithms, or an optimization algorithm with an iterative technique. This line of work began in 1998, when Karr et al. [9] proposed the first hybrid approach combining the genetic algorithm with Newton's method, in which the genetic algorithm determines the initial guess for Newton's method. As the genetic algorithm was the first and oldest evolutionary algorithm, the hybridization of the genetic algorithm with iterative methods did not gain much attention at that time due to its inefficient results. Later, in 2008, Luo et al. [10] solved some complex non-linear equations by combining Quasi-Newton and Chaos Optimization algorithms; however, since the mathematical convergence of the Chaos Optimization algorithm had not been proved at that time, the convergence analysis of the developed method was not discussed. In 2021, Sihwail et al. [11] constructed a hybrid algorithm called NHHO to solve arbitrary systems of non-linear equations by fusing Newton's method with the Harris Hawks Optimization technique. In 2022, Sihwail et al. [12] constructed a hybrid approach integrating Jarratt's iterative scheme with the Butterfly Optimization Algorithm for solving non-linear systems. In 2023, Solaiman et al. [13] developed a modified hybrid algorithm combining Newton's method with the Sperm Swarm Optimization Algorithm. A hybrid algorithm can overcome the shortcomings of one approach while utilizing the advantages of the other; representative examples are [9,10,11,12,13,14,15]. Optimization algorithms are widely applied across diverse domains, but the No Free Lunch (NFL) theorem [16] highlights that no single algorithm excels at all problems. When addressing non-linear systems of equations, these algorithms may face challenges such as divergence or getting stuck in local optima.
Moreover, the iterative methods for non-linear equations often converge slowly, affected by the properties and derivatives of the function. They struggle with costly or undefined derivatives and rely heavily on accurate initial guesses. A poor initial guess can lead to divergence or an incorrect root. Consequently, integrating optimization techniques with iterative methods is crucial for effectively solving such systems. The hybridization of iterative techniques with optimization algorithms represents a robust approach to tackling complex optimization problems. By combining the local precision of iterative methods with the global search capabilities of optimization algorithms, hybrid approaches have demonstrated superior performance in various applications, from machine learning to engineering design.
Keeping these facts in mind, we constructed a Hybrid Walrus Optimization Algorithm (HWOA), which is discussed in Section 2.

2. Development of a New Iterative Method

The proposed method is the combination of the Walrus Optimization Algorithm and a fourth-order iterative scheme. So, the next subsections give a brief explanation of the schemes utilized for the construction of the proposed method.

2.1. Walrus Optimization Algorithm and Its Mechanism

In 2023, Trojovský et al. [17] introduced the Walrus Optimization Algorithm (WOA), a bio-inspired metaheuristic algorithm inspired by walrus behaviors, including foraging, migration, and predator evasion. Walruses, large marine mammals inhabiting the Arctic, are characterized by their prominent tusks and social behaviors. These features guide the WOA’s design to balance the exploration and exploitation in solving optimization problems. As a population-based algorithm, the WOA represents candidate solutions as walruses, with each walrus being a vector of decision variables in the search space.
The WOA operates in three phases: Initialization, Exploration (feeding strategy), and Exploitation (migration and predator evasion).
Initialization phase: The population is initialized randomly as a matrix, where each walrus’s position is evaluated using the problem’s objective function.
Exploration phase: In the feeding strategy, the walrus with the longest tusks (best objective function value) guides the group, enhancing global search. The position update is modeled in (2) and (3):
x_{i,j}^{P1} = x_{i,j} + rand_{i,j} · (SW_j − I_{i,j} · x_{i,j}),
X_i = X_i^{P1} if F_i^{P1} < F_i, and X_i otherwise,
where X_i^{P1} is the new position of the i-th walrus, x_{i,j}^{P1} is its j-th dimension, F_i^{P1} is its objective function value, rand_{i,j} ∈ [0, 1], I_{i,j} ∈ {1, 2}, and SW is the best solution.
Exploitation phase: In the migration phase, walruses move toward randomly selected positions, improving exploration, and the mathematical expressions for this phase are as in (4) and (5):
x_{i,j}^{P2} = x_{i,j} + rand_{i,j} · (x_{k,j} − I_{i,j} · x_{i,j}) if F_k < F_i, and x_{i,j}^{P2} = x_{i,j} + rand_{i,j} · (x_{i,j} − x_{k,j}) otherwise,
X_i = X_i^{P2} if F_i^{P2} < F_i, and X_i otherwise,
where X_k (k ≠ i, k ∈ {1, 2, …, N}) is a randomly selected walrus.
The predator evasion phase focuses on local search with shrinking bounds, and the mathematical modeling of this phase is represented by (6)–(8):
x_{i,j}^{P3} = x_{i,j} + (lb_{local,j}^t + (ub_{local,j}^t − rand · lb_{local,j}^t)),
lb_{local,j}^t = lb_j / t,  ub_{local,j}^t = ub_j / t,
X_i = X_i^{P3} if F_i^{P3} < F_i, and X_i otherwise,
where lb_{local,j}^t and ub_{local,j}^t are the dynamic local lower and upper bounds, the superscripts P_1, P_2, P_3 denote the candidate positions produced in the three phases, and t is the iteration count. These phases enable the WOA to effectively balance exploration and exploitation, demonstrating superior performance on benchmark functions and real-world applications [17]. It is known for its adaptability and ease of use, making it highly versatile across various types of problems. Its key strengths include simplicity, flexibility, and the capacity to handle intricate search spaces effectively. Despite these advantages, the global search ability of the WOA faces issues such as premature convergence and an exploration–exploitation imbalance in high-dimensional spaces. The success of such an algorithm hinges on precise parameter tuning; poor settings result in slow or sub-optimal solutions.
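The three phases can be summarized in a short sketch. This is a simplified, illustrative implementation rather than the reference WOA code: greedy selection stands in for the acceptance rules (3), (5), and (8), and the predator-evasion move uses a symmetric perturbation drawn from the shrinking local bounds [lb_j/t, ub_j/t].

```python
import numpy as np

def _accept(f, x, fx, cand, lb, ub):
    """Greedy selection: keep the candidate only if it improves the fitness."""
    cand = np.clip(cand, lb, ub)
    fc = f(cand)
    return (cand, fc) if fc < fx else (x, fx)

def woa_minimize(f, lb, ub, n_walrus=30, n_iter=100, seed=0):
    """Simplified sketch of the WOA's three phases (illustrative only)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = lb + rng.random((n_walrus, dim)) * (ub - lb)
    fit = np.array([f(x) for x in X])
    for t in range(1, n_iter + 1):
        sw = X[np.argmin(fit)].copy()            # strongest walrus (best so far)
        for i in range(n_walrus):
            # Phase 1: feeding strategy (exploration), move toward SW
            I = rng.integers(1, 3, dim)          # I_{i,j} in {1, 2}
            cand = X[i] + rng.random(dim) * (sw - I * X[i])
            X[i], fit[i] = _accept(f, X[i], fit[i], cand, lb, ub)
            # Phase 2: migration toward a randomly chosen walrus
            k = rng.choice([j for j in range(n_walrus) if j != i])
            if fit[k] < fit[i]:
                cand = X[i] + rng.random(dim) * (X[k] - I * X[i])
            else:
                cand = X[i] + rng.random(dim) * (X[i] - X[k])
            X[i], fit[i] = _accept(f, X[i], fit[i], cand, lb, ub)
            # Phase 3: predator evasion -- local move inside the shrinking
            # bounds [lb/t, ub/t] (symmetric variant of (6)-(7))
            lb_t, ub_t = lb / t, ub / t
            cand = X[i] + lb_t + rng.random(dim) * (ub_t - lb_t)
            X[i], fit[i] = _accept(f, X[i], fit[i], cand, lb, ub)
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

On a simple sphere function the sketch contracts steadily toward the optimum, since each phase only ever accepts improving moves.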

2.2. Iterative Technique

Based on various iterative schemes proposed in the literature for solving systems of non-linear equations, Kansal et al. [18] in 2020 enhanced the fourth-order Chebyshev–Halley-type methods with two points, which were originally introduced by Behl and Kanwar [8] in 2014. The iterative scheme is defined in Equation (9):
Y_n = X_n − (2/3) φ(X_n),
X_{n+1} = X_n − [η(X_n)]^{−1} μ(X_n) φ(X_n),
where
φ(X_n) = [F′(X_n)]^{−1} F(X_n),
μ(X_n) = A_1 I + 3A_2 Ω(X_n) + 9A_3 Ω(X_n)² − 27A_4 Ω(X_n)³,
η(X_n) = 2(A_5 I − 3A_6 Ω(X_n) + 9A_7 Ω(X_n)² − 27A_8 Ω(X_n)³),
Ω(X_n) = [F′(X_n)]^{−1} F′(Y_n),
and the coefficients are given by
A_1 = 54b³ − 135b² + 102b − 23,  A_2 = −54b³ + 135b² − 112b + 29,
A_3 = 18b³ − 45b² + 38b − 11,  A_4 = (b − 1)²(2b − 1),
A_5 = 27b³ − 54b² + 33b − 4,  A_6 = 27b³ − 54b² + 35b − 6,
A_7 = 9b³ − 18b² + 11b − 2,  A_8 = b(b − 1)².
where b is a freely adjustable parameter, F′(X_n) represents the Jacobian matrix of F evaluated at the iterate X_n, and I denotes the identity matrix of order n.
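One step of scheme (9) can be sketched directly from these formulas. The signs of the A-coefficients are an assumption recovered from the garbled source; a useful sanity check is that with this sign choice and b = 1 the step reduces to Jarratt's classical fourth-order method, which the short run below exercises.

```python
import numpy as np

def fourth_order_step(F, J, x, b=1.0):
    """One step of the two-point fourth-order scheme (9).
    Coefficient signs are as reconstructed here (an assumption); for b = 1
    the step coincides with Jarratt's classical fourth-order method."""
    A1 = 54*b**3 - 135*b**2 + 102*b - 23
    A2 = -54*b**3 + 135*b**2 - 112*b + 29
    A3 = 18*b**3 - 45*b**2 + 38*b - 11
    A4 = (b - 1)**2 * (2*b - 1)
    A5 = 27*b**3 - 54*b**2 + 33*b - 4
    A6 = 27*b**3 - 54*b**2 + 35*b - 6
    A7 = 9*b**3 - 18*b**2 + 11*b - 2
    A8 = b * (b - 1)**2
    Jx = J(x)
    phi = np.linalg.solve(Jx, F(x))              # phi = J(x)^{-1} F(x)
    y = x - (2.0 / 3.0) * phi                    # two-thirds predictor step
    Om = np.linalg.solve(Jx, J(y))               # Omega = J(x)^{-1} J(y)
    Om2, Om3 = Om @ Om, Om @ Om @ Om
    I = np.eye(len(x))
    mu = A1*I + 3*A2*Om + 9*A3*Om2 - 27*A4*Om3
    eta = 2*(A5*I - 3*A6*Om + 9*A7*Om2 - 27*A8*Om3)
    return x - np.linalg.solve(eta, mu @ phi)

# Illustrative system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0]*v[1] - 1])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
x = np.array([2.0, 0.5])
for _ in range(3):
    x = fourth_order_step(F, J, x)
```

With fourth-order convergence, a handful of steps from a reasonable starting point drives the residual to machine precision.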

2.3. Hybrid Walrus Optimization Algorithm (HWOA)

The hybrid approach combining a fourth-order iterative method with the Walrus Optimization Algorithm (WOA) is utilized to solve non-linear systems and real-world applications. By integrating the rapid local convergence of fourth-order methods with the WOA’s global search capabilities, this hybrid method effectively navigates complex, high-dimensional search spaces while avoiding local optima traps.
The pseudocode of the proposed method HWOA (Algorithm 1) works as follows. The method starts by initializing parameters such as the population size (N), the maximum number of iterations (T), and the lower and upper bounds for a randomly generated population. It initializes a random population and the objective function value F(x). Walruses are generated randomly as the search agents, and the position update in the exploration phase is calculated using Equations (2) and (3). Then the exploitation phase is performed using Equations (4)–(8). The process iterates over all i and j, yielding the updated walrus position from the WOA. After that, the iterative technique (9) is applied at the end of each iteration. The fitness value is calculated for the new walrus position provided by the iterative method and compared with the fitness provided by the WOA; the position with the better fitness is selected. This loop continues until the stopping criterion for the proposed scheme (1 × 10^{−80}) is met, after which the best optimal solution from the Hybrid Walrus Optimization Algorithm (HWOA) is obtained, ending the process. The iterative method ensures precise local refinement, while the WOA's bio-inspired strategies, mimicking walrus behaviors such as migration and foraging, enhance global exploration. This hybridization yields superior performance, achieving high accuracy and computational efficiency in tackling non-linear equations and optimization challenges.
The generalized pseudocode of the proposed HWOA is shown in Algorithm 1.
Algorithm 1 Hybrid Walrus Optimization Algorithm (HWOA)
BEGIN
 1: Input all optimization parameters.
 2: Set the number of walruses (N) and the total number of iterations (T).
 3: Initialize the walrus positions.
 4: for t = 1:T
 5:   Update the strongest walrus based on the objective function value.
 6:   for i = 1:N
 7:     Phase 1: Feeding strategy (exploration).
 8:     Calculate the new position of the i-th walrus using (2).
 9:     Update the i-th walrus position using (3).
10:     Phase 2: Migration.
11:     Choose a migration destination for the i-th walrus.
12:     Calculate the new position of the i-th walrus using (4).
13:     Update the i-th walrus position using (5).
14:     Phase 3: Escaping and fighting against predators (exploitation).
15:     Calculate a new position in the neighborhood of the i-th walrus using (6) and (7).
16:     Update the i-th walrus position using (8).
17:   end
18:   Output the updated walrus position obtained by the WOA for the given problem.
19:   Update the iterative method's position X_{n+1} in (9) using the updated walrus position obtained by the WOA.
20:   Calculate the fitness of X_{n+1} and compare it with the fitness of the WOA's updated walrus position.
21:   if fitness(X_{n+1}) < the WOA's fitness, update fitness = fitness(X_{n+1}).
22:   if the termination criterion is not met, return to Step 4.
23: end
Return: Best individual and best fitness value.
END
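The overall loop of Algorithm 1 can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the WOA part is reduced to the feeding move, and a plain Newton step stands in for the fourth-order refinement (9); the greedy fitness comparison mirrors Steps 19–22.

```python
import numpy as np

def hwoa_sketch(F, J, lb, ub, n_walrus=20, n_iter=100, tol=1e-12, seed=1):
    """Minimal sketch of the HWOA loop: a WOA-style global move, then a
    local refinement of the best walrus each iteration.  A Newton step is
    used here as a stand-in for the fourth-order scheme (9)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    fitness = lambda x: float(np.linalg.norm(F(x)))   # ||F(x)||_2
    X = lb + rng.random((n_walrus, dim)) * (ub - lb)
    fit = np.array([fitness(x) for x in X])
    for t in range(n_iter):
        sw = X[np.argmin(fit)].copy()
        # WOA-style feeding move toward the strongest walrus (global search)
        for i in range(n_walrus):
            I = rng.integers(1, 3, dim)
            cand = np.clip(X[i] + rng.random(dim) * (sw - I * X[i]), lb, ub)
            fc = fitness(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
        # Iterative refinement of the current best position (local search);
        # keep the refined point only if its fitness improves
        b = int(np.argmin(fit))
        try:
            refined = X[b] - np.linalg.solve(J(X[b]), F(X[b]))
            fr = fitness(refined)
            if fr < fit[b]:
                X[b], fit[b] = refined, fr
        except np.linalg.LinAlgError:
            pass                                      # singular Jacobian: skip
        if fit.min() < tol:
            break
    b = int(np.argmin(fit))
    return X[b], fit[b]
```

Because the metaheuristic supplies the starting point and the iterative step is accepted only when it improves the fitness, no hand-chosen initial guess is needed, which is the key behavior of the hybrid design.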

3. Numerical Results

In this section, the performance of the proposed algorithm is evaluated using various non-linear systems of equations. Additionally, real-world applications are selected to demonstrate the capabilities of these methods. These are well-known problems that have been widely used in numerous research studies. The results are compared with optimization algorithms such as POA [19], HSA [20], SMA [21], ECO [22], WOA [17], NHHO [11], and JBOA [12]. The parameters for the proposed method include an initial population of 100 search agents and 50 iterations. The number of iterations plays a significant role in the robustness of an algorithm. Algorithm stability is assessed by considering the best fitness value. The proposed methods are also compared based on fitness value, number of iterations, CPU time (in seconds), and optimal roots. The optimal roots are the roots (solutions) that give the smallest fitness value (functional error). The fitness value [11] is computed as the Euclidean norm of the vector F(X_n) = (f_1, f_2, …, f_n), given as Fitness = ||F(X_n)||_2 = √(f_1² + f_2² + ⋯ + f_n²).
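As a concrete illustration of this fitness measure, NumPy's Euclidean norm computes exactly this quantity; the residual vector below is hypothetical.

```python
import numpy as np

# Fitness = ||F(X_n)||_2 = sqrt(f1^2 + ... + fn^2)
residuals = np.array([3e-6, 4e-6])     # hypothetical residual vector F(X_n)
fitness = np.linalg.norm(residuals)    # = 5e-6
```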
The parametric values used for the different methods are listed in Table 1. The parameters of the metaheuristic techniques are generated randomly within the search space (dimension) of the problem, specified in the form of lower and upper bounds. In the proposed method HWOA, the parameters for the Walrus Optimization Algorithm (WOA) are generated randomly within the search space for each test case, while the value of the adjustable parameter b in the iterative scheme is set to 1 for all test cases.
All computations are performed using MATLAB R2024b on a Windows 11 system with an Intel i5 processor running at 1.60 GHz and 2.11 GHz, with 8 GB of RAM. Due to space constraints, results in the tables are rounded to ten decimal places. To validate the proposed approach, four benchmark functions taken from the Estonian test collection [23], listed in Table 2, have been tested.
Table 3 clearly illustrates that, for the functions F_1, F_2, F_3, F_4, the proposed method HWOA provides the best fitness with a lower number of iterations and less computational time. Moreover, the NHHO and JBOA provide better fitness than the traditional optimization approaches but do not attain better results than the proposed HWOA method. This holds even though the JBOA is likewise a hybridization of Jarratt's fourth-order iterative scheme with the Butterfly Optimization Algorithm; owing to the poor exploration capabilities of the Butterfly Optimization Algorithm, it does not provide efficient results in fewer iterations. Table 4 presents the results of F_1, F_2, F_3, F_4 for different initial points. Since iterative methods depend heavily on the choice of the initial guess, a poor choice of initial point can cause the scheme to diverge or fail to converge to an optimal solution. Table 4 shows that the iterative scheme diverges when the initial point is not near the root and does not provide an accurate solution to the test function; even when it converges to a solution, the fitness achieved is much worse. The proposed HWOA scheme, on the other hand, does not require an initial guess to be supplied: this is handled by the metaheuristic component employed in developing the proposed method. In addition, the iterative scheme can reduce the functional error to significant levels provided the precision of the results is increased, but that increases the computational cost of the scheme. The results for all the problems indicate the efficiency of the proposed method HWOA over the traditional metaheuristic approaches as well as the standalone iterative technique, as the proposed method requires fewer iterations and less CPU time to produce the best optimal roots.

4. Real-Life Application

In this section, two real-life applications are tested to check the stability and robustness of the proposed algorithm.

4.1. Hydrocarbon Combustion

Hydrocarbon combustion [24] is a fundamental process in engines, power plants, and heating systems and can be modeled as a system of non-linear equations due to the complex chemical reactions involved. When hydrocarbons (e.g., methane, CH_4) react with oxygen (O_2), they produce carbon dioxide (CO_2), water (H_2O), and heat. The reaction kinetics, including reaction rates and equilibrium conditions, lead to non-linear relationships governed by factors such as temperature, pressure, and the concentrations of reactants and products. This combustion process can be modeled as the system of non-linear equations F_5, in which x_1, x_2, x_3, x_4, x_5 represent variables such as chemical species concentrations and C, C_5, C_6, …, C_{10} are the equilibrium parameters used in the system. The values of the equilibrium parameters are given in Table 5.
F_5(x):
x_1 x_2 + x_1 − 3x_5 = 0,
2x_1 x_2 + x_1 + 3C_{10} x_2² + x_2 x_3² + C_7 x_2 x_3 + C_9 x_2 x_4 + C_5 x_2 − C x_5 = 0,
2x_2 x_3² + C_7 x_2 x_3 + 2C_5 x_3² + C_6 x_3 − 8x_5 = 0,
C_9 x_2 x_4 + 2x_4² − 4C x_5 = 0,
x_1 x_2 + x_1 + C_{10} x_2² + x_2 x_3² + C_7 x_2 x_3 + C_9 x_2 x_4 + C_8 x_5 + C_5 x_2 + C_6 x_3 + x_3² − 1 = 0.
Table 6 indicates that the proposed HWOA method significantly outperforms the other metaheuristic techniques, as well as the NHHO and JBOA, in terms of convergence speed, number of iterations, computational efficiency, and solution accuracy for the hydrocarbon combustion problem. It also provides precise optimal roots for the problem.
Table 7 clearly indicates that the iterative scheme does not provide the optimal solution for the first two initial points. Moreover, for the third initial point, the scheme may reach a solution, but the CPU time utilized is high and the fitness achieved is much worse than that of the proposed scheme. This highlights that the HWOA is a robust and efficient optimizer for solving complex engineering problems.

4.2. Circuit Design Problem

Circuit design [24], particularly for analog and mixed-signal circuits, often involves solving systems of non-linear equations to model the behavior of components like diodes, transistors, and operational amplifiers. These components exhibit non-linear relationships between voltage, current, and resistance, governed by equations such as Kirchhoff’s laws, Ohm’s law, etc. Figure 2 is the diagrammatical representation of a simple circuit design.
The following system of non-linear equations, F_6, has been formulated to account for the extraordinary sensitivities that occur due to small perturbations during circuit design. In the system, x_1, x_2, …, x_9 are the variables, representing the voltages or currents at specific nodes, and f_{ik} represent circuit parameters (e.g., resistances or gains) for different operating conditions, whose values are given in Table 8.
F_6(x):
(1 − x_1 x_2) x_3 [exp(x_5 (f_{1k} − f_{3k} x_7 · 10⁻³ − f_{5k} x_8 · 10⁻³)) − 1] − f_{5k} + f_{4k} x_2 = 0,
(1 − x_1 x_2) x_4 [exp(x_6 (f_{1k} − f_{2k} − f_{3k} x_7 · 10⁻³ + f_{4k} x_9 · 10⁻³)) − 1] − f_{5k} x_1 + f_{4k} = 0,
x_1 x_3 − x_2 x_4 = 0,
where k = 1, 2, 3, 4.
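A direct transcription of system F_6 into code makes the indexing over k explicit (two exponential equations for each of the four operating conditions plus one coupling equation, giving nine equations in nine unknowns). The parameter matrix `fk` is a placeholder for the Table 8 values, which are not reproduced in this excerpt.

```python
import numpy as np

def circuit_residual(x, fk):
    """Residuals of the circuit-design system F6 (9 equations, 9 unknowns).
    fk is a 5x4 array whose rows hold f_{1k}..f_{5k} for k = 1..4; the
    actual Table 8 values are not reproduced here."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9 = x
    res = []
    for k in range(4):
        f1, f2, f3, f4, f5 = fk[:, k]
        res.append((1 - x1*x2) * x3
                   * (np.exp(x5 * (f1 - f3*x7*1e-3 - f5*x8*1e-3)) - 1)
                   - f5 + f4*x2)
        res.append((1 - x1*x2) * x4
                   * (np.exp(x6 * (f1 - f2 - f3*x7*1e-3 + f4*x9*1e-3)) - 1)
                   - f5*x1 + f4)
    res.append(x1*x3 - x2*x4)          # coupling equation, independent of k
    return np.array(res)

# Sanity check with placeholder parameters (fk = 1 everywhere)
demo = circuit_residual(np.zeros(9), np.ones((5, 4)))
```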
From Table 9, it is concluded that the traditional methods SMA, HSA, POA, ECO, and WOA converge poorly to the optimal solution, while the hybrid NHHO and JBOA provide better fitness in comparison. On the other hand, the proposed method HWOA performs better in terms of CPU time, iteration count, fitness value, and optimal roots. This indicates the applicability of the proposed method to a variety of applied science problems.
Table 10 shows the results of the circuit design problem for different initial points. It is clearly shown that the iterative scheme yields a much worse fitness and requires more computational time than the proposed HWOA scheme. Moreover, it diverges for the second initial guess, which lies far from the root.

5. Application to Non-Linear Ordinary Differential Equation

In this application, a non-linear boundary value problem [25] is considered to check the efficiency of the proposed method.
y″ = y′ + 2(y − log x)³ − 1/x,  2 ≤ x ≤ 3,  y(2) = 1/2 + log 2,  y(3) = 1/3 + log 3.
The given non-linear boundary value problem can be transformed into a system of non-linear equations by discretizing the domain [2, 3] of the differential equation into 100 equal subintervals. This is achieved using the finite difference method, with step size h = 0.01 and nodes x_i = a + ih, i = 0, 1, …, 100, where a = 2. After discretization, the second derivative y″(x_i) and first derivative y′(x_i) are approximated using the central difference formulas, as described in [25], resulting in the following equations:
y″(x_i) = (y(x_{i+1}) − 2y(x_i) + y(x_{i−1})) / h²,
y′(x_i) = (y(x_{i+1}) − y(x_{i−1})) / (2h).
Substituting the values of y″(x_i) and y′(x_i) into (12), a system of 99 equations in 99 unknowns is generated.
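The discretized system can be assembled as a residual function. This sketch assumes the problem reads y″ = y′ + 2(y − log x)³ − 1/x with the stated boundary values; under that reading, y = 1/x + log x satisfies both the equation and the boundary conditions, which gives a convenient consistency check (the residual at the exact solution should be on the order of the O(h²) truncation error).

```python
import numpy as np

def bvp_residual(y_inner, a=2.0, b=3.0, n=100):
    """Residuals of the 99 finite-difference equations for the boundary
    value problem, assuming y'' = y' + 2*(y - log x)^3 - 1/x with
    y(2) = 1/2 + log 2 and y(3) = 1/3 + log 3."""
    h = (b - a) / n
    x = a + h * np.arange(n + 1)                   # grid x_i = a + i*h
    y = np.empty(n + 1)
    y[0] = 0.5 + np.log(2.0)                       # left boundary value y(2)
    y[-1] = 1.0 / 3.0 + np.log(3.0)                # right boundary value y(3)
    y[1:-1] = y_inner                              # the 99 unknowns
    ypp = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2  # central second difference
    yp = (y[2:] - y[:-2]) / (2.0 * h)              # central first difference
    xm = x[1:-1]
    return ypp - (yp + 2.0 * (y[1:-1] - np.log(xm))**3 - 1.0 / xm)
```

Solving bvp_residual(y) = 0 with any of the methods above then recovers the discrete solution on the interior grid points.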
The approximate roots for the above problem, obtained from the proposed scheme, are (1.9914432165, 1.9832654371, 1.97540043256, …, 2.9136763490, 2.9559999999, 2.0161906542)ᵀ. The results in Table 11 illustrate that the proposed method provides a better fitness value in fewer iterations and less CPU time than the other methods.
From the results in Table 11 and Table 12, we can see that the proposed method HWOA outperforms all the other methods, as well as the iterative scheme, in terms of CPU time, fitness, iteration count, and solution accuracy. The NHHO and JBOA reach approximately the same fitness as the HWOA, but their computational cost and number of iterations are higher. These results show that the HWOA is a robust and effective method for solving large non-linear systems arising from non-linear boundary value problems.

6. Convergence Analysis

Convergence speed is a critical metric when evaluating and comparing algorithms, as it reflects how quickly an algorithm can arrive at a solution. This attribute is crucial, as it shows how effectively an algorithm can solve problems, especially non-linear equation systems. The convergence of the proposed method HWOA is calculated based on the fitness value in each iteration. A higher convergence speed indicates that the algorithm can achieve a precise solution with fewer iterations or in less time, which is critical for real-world applications. The convergence chart visually represents this by illustrating both the speed at which the algorithm approaches the solution and the accuracy of the results, providing a clear comparison of different algorithms’ effectiveness in tackling complex problems.
Figure 3 represents the convergence analysis of F 1 F 6 . Across all the test cases, the HWOA gives the best fitness in a maximum of 10–50 iterations. The convergence graphs of the traditional optimization and hybrid approaches, such as SMA, POA, HSA, ECO, WOA, NHHO, and JBOA, show their local optimal entrapment and lower consistency in the fitness value. But the convergence graph of the HWOA shows that it gives a consistent fitness value in a lower number of iterations without any local optima entrapment. This is possible due to the balance of exploration and exploitation phases in the proposed method. The exploration and exploitation phases are performed by the Walrus Optimization Algorithm, and then the best candidate solution is again refined in the exploitation phase by the iterative scheme, which helps in the faster convergence of the HWOA. In terms of efficiency, the HWOA requires much less computational time and fewer iterations to reach the optimal solution, which is 60–80% better than the traditional metaheuristic and iterative techniques. All these parametric conclusions make the HWOA a robust, stable, and efficient method for solving systems of non-linear equations.
Remark 1.
To find the optimal solution, the stopping criterion for the proposed method is 1 × 10^{−80}. The fitness (functional) value is determined as Fitness = ||F(X_n)||_2 = √(f_1² + f_2² + ⋯ + f_n²).

Stability and Consistency

To demonstrate the stability and efficiency of the proposed algorithm HWOA, the average (mean) and standard deviation are the key indicators. The statistical results have been observed over 30 independent runs for all the algorithms; they also show the consistency of the methods. True consistency is achieved when the fitness value remains the same, or approximately the same, in every run, while the standard deviation measures the algorithm's stability: the lower the standard deviation, the more reliable and robust an algorithm is. Table 13 and Table 14 show the mean and standard deviation results of the different methods for the functions F_1–F_7. The results make it clear that the proposed method gives the same, or approximately the same, fitness value across 30 independent runs, which makes the algorithm stable, efficient, and robust. Figure 4 presents the graphical analysis of the different problems over multiple runs; the graphs show that the fitness of the traditional approaches fluctuates from run to run, while the proposed method HWOA gives a consistent fitness in each run. In Figure 5, the graphical analysis of convergence, iterations, and the 30-run convergence results of the different methods for the ordinary differential equation application is presented.

7. Conclusions

The proposed hybrid algorithm, integrating a fourth-order iterative scheme with the Walrus Optimization Algorithm (WOA), has demonstrated superior performance in solving complex systems of non-linear equations, as evidenced by its application to benchmark problems. Comparative analysis with established metaheuristic and hybrid algorithms, including the Pelican Optimization Algorithm (POA), Walrus Optimization Algorithm (WOA), Slime Mould Algorithm (SMA), Newton–Harris Hawk Optimization Algorithm (NHHO), and Jarratt–Butterfly Optimization Algorithm (JBOA), reveals that the hybrid approach HWOA consistently outperforms these methods in terms of iteration count, computational time, and fitness value. By leveraging the rapid convergence properties of the fourth-order iterative scheme and the robust global search capabilities of the WOA, the hybrid algorithm achieves a remarkable balance between exploration and exploitation, leading to fewer iterations and reduced computational time while maintaining high accuracy in fitness values. These results underscore the hybrid algorithm’s efficacy in addressing real-world applications, such as hydrocarbon combustion and circuit design, and an ordinary differential equation, where computational efficiency and precision are paramount. This work establishes the hybrid algorithm as a powerful tool for tackling non-linear systems, offering significant advantages over traditional and contemporary optimization techniques. Future research can focus on enhancing the hybrid algorithm by incorporating adaptive parameter tuning to further optimize its performance across diverse problem domains. Exploring its application to large-scale, real-world problems, such as multi-dimensional optimization in machine learning or dynamic systems in engineering, could expand its practical utility.

Author Contributions

Conceptualization, A.C., E.M., R.B. and S.B.; methodology, A.C., E.M., R.B. and S.B.; software, A.C., E.M., R.B. and S.B.; validation, A.C., E.M., R.B. and S.B.; formal analysis, A.C., E.M., R.B. and S.B.; investigation, A.C., R.B. and S.B.; resources, A.C., R.B. and S.B.; data curation, A.C., R.B. and S.B.; writing—original draft preparation, A.C., R.B. and S.B.; writing—review and editing, A.C., E.M., R.B., S.B. and S.A.; visualization, A.C., E.M., R.B., S.B. and S.A.; supervision, R.B. and S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by funding from Prince Sattam bin Abdulaziz University, project number PSAU/2025/01/33287.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article.

Acknowledgments

The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2025/01/33287).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Blum, C.; Roli, A.; Alba, E. An introduction to metaheuristic techniques. In Parallel Metaheuristics; John Wiley & Sons: Hoboken, NJ, USA, 2005; pp. 3–42. [Google Scholar]
  2. Epperson, J.F. An Introduction to Numerical Methods and Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  3. Chen, D.; Argyros, I.; Qian, Q. A note on the Halley method in Banach spaces. Appl. Math. Comput. 1993, 58, 215–224. [Google Scholar] [CrossRef]
  4. Householder, A.S. The Numerical Treatment of a Single Nonlinear Equation; McGraw Hill Text: New York, NY, USA, 1970. [Google Scholar]
  5. Ostrowski, A. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  6. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Englewood Cliffs, NJ, USA, 1982; Volume 312. [Google Scholar]
  7. Jarratt, P. Some fourth-order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  8. Behl, R.; Kanwar, V. Highly efficient classes of Chebyshev–Halley type methods free from second-order derivative. In Proceedings of the Recent Advances in Engineering and Computational Sciences, Chandigarh, India, 6–8 March 2014; pp. 1–6. [Google Scholar]
  9. Karr, C.; Weck, B.; Freeman, L. Solutions to systems of nonlinear equations via a genetic algorithm. Eng. Appl. Artif. Intell. 1998, 11, 369–375. [Google Scholar] [CrossRef]
  10. Luo, Y.-Z.; Tang, G.-J.; Zhou, L.-N. Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method. Appl. Soft Comput. 2008, 8, 1068–1073. [Google Scholar] [CrossRef]
  11. Sihwail, R.; Solaiman, O.S.; Omar, K.; Ariffin, K.A.Z.; Alswaitti, M.; Hashim, I. A hybrid approach for solving systems of nonlinear equations using Harris Hawks optimization and Newton's method. IEEE Access 2021, 9, 95791–95807. [Google Scholar] [CrossRef]
  12. Sihwail, R.; Solaiman, O.S.; Ariffin, K.A.Z. New robust hybrid Jarratt–Butterfly optimization algorithm for nonlinear models. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 8207–8220. [Google Scholar] [CrossRef]
  13. Solaiman, O.S.; Sihwail, R.; Shehadeh, H.; Hashim, I.; Alieyan, K. Hybrid Newton–Sperm Swarm optimization algorithm for nonlinear systems. Mathematics 2023, 11, 1473. [Google Scholar] [CrossRef]
  14. Zukhri, Z.; Paputungan, I.V. A hybrid optimization algorithm based on genetic algorithm and ant colony optimization. Int. J. Artif. Intell. Appl. 2013, 4, 63–75. [Google Scholar] [CrossRef]
  15. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305. [Google Scholar] [CrossRef]
  16. Adam, S.P.; Alexandropoulos, S.-A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization: Algorithms, Complexity, and Applications; Springer: Cham, Switzerland, 2019; pp. 57–82. [Google Scholar]
  17. Trojovský, P.; Dehghani, M. Walrus optimization algorithm: A new bio-inspired metaheuristic algorithm. Expert Syst. Appl. 2022. [Google Scholar] [CrossRef]
  18. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. New fourth-and sixth-order classes of iterative methods for solving systems of nonlinear equations and their stability analysis. Numer. Algorithms 2021, 87, 1017–1060. [Google Scholar] [CrossRef]
  19. Trojovský, P.; Dehghani, M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef] [PubMed]
  20. Mehta, P.; Yildiz, B.S.; Sait, S.M.; Yildiz, A.R. Hunger games search algorithm for global optimization of engineering design problems. Mater. Test. 2022, 64, 524–532. [Google Scholar] [CrossRef]
  21. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  22. Lian, J.; Zhu, T.; Ma, L.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H.; Hui, G. The educational competition optimizer. Int. J. Syst. Sci. 2024, 55, 3185–3222. [Google Scholar] [CrossRef]
  23. Roose, A. Test Examples of Systems of Nonlinear Equations: Version 3-90; Estonian Software and Computer Service Company: Tallinn, Estonia, 1990. [Google Scholar]
  24. Maranas, C.D.; Floudas, C.A. Finding all solutions of nonlinearly constrained systems of equations. J. Glob. Optim. 1995, 7, 143–182. [Google Scholar] [CrossRef]
  25. Faires, J.; Burden, R. Numerical Methods, 4th ed.; Cengage Learning: Pacific Grove, CA, USA, 2012. [Google Scholar]
Figure 1. Different metaheuristic techniques.
Figure 2. Simple circuit design.
Figure 3. (a–f) Convergence results based on 50 iterations for the functions F1–F6.
Figure 4. (a–f) Convergence results based on 30 runs for the functions F1–F6.
Figure 5. (a,b) Convergence results based on 50 iterations and on 30 runs, respectively, for the ordinary differential equation.
Table 1. Parameter values for the competitor algorithms.

Algorithm | Parameter | Value
SMA  | z | 0.03
     | r, rand | random numbers in [0, 1]
HSA  | Variation control parameter VC2 | 0.03
POA  | I | randomly equal to 1 or 2
     | R (constant) | 0.2
ECO  | Fixed motivation value | 1
     | Judgemental threshold value (H) | 0.5
     | R1, R2 | random numbers in [0, 1]
WOA  | rand | random number in [0, 1]
     | I | I ∈ {1, 2}
     | SW | the best solution
NHHO | β | 1.5
     | E0 | random number in [−1, 1]
JBOA | p (probability switch) | 0.8
     | Power exponent | 0.1
     | Sensory modality | 0.01
HWOA | b | 1
     | rand | random number in [0, 1]
     | I | I ∈ {1, 2}
     | SW | the best solution
Table 2. Benchmark non-linear functions.

Example | Function Name | System of Non-Linear Equations | Dimension
F1 | Bertsekas | f1(x) = 2 − 2x2 − 10^−15 (exp(40(13x2 − x1 − 12)) − 1); f2(x) = 3 − 2x2 − 0.5x1 − 10^−15 (exp(40(x1 − 1)) − 1) | 2
F2 | Weber | f1(x) = x1^2 − 2x1 + x2^3/3 + 2/3; f2(x) = x1^3 − x1x2 − 2x1 + 0.5x2^2 + 1.5 | 2
F3 | Carhanan | f1(x) = 0.5 sin(x1x2) − x2/(4π) − x1/2; f2(x) = (1 − 1/(4π))(exp(2x1) − e) + e·x2/π − 2e·x1 | 2
F4 | Yamamura | f1(x) = −x5 − x1^2 + 3; f2(x) = x1 + x2 − 2; f3(x) = exp(−x3 + 2) − x1 + 0.1; f4(x) = −x3 + x4 − 0.3; f5(x) = x1x3 − x2x4x5 | 5
Table 3. Comparison of results of proposed method with metaheuristic techniques for F1–F4.

Methods | Iteration | Time | Fitness | Optimal Roots

F1
SMA  | 36 | 8.792590 | 1.87948 × 10^−1 | (1.0488901900, 1.0488901900)
HSA  | 40 | 0.049135 | 2.1206 × 10^−1 | (1.2615280500, 0.9581941880)
POA  | 48 | 0.011122 | 2.2625 × 10^−7 | (1.8052050650, 1.0002028772)
ECO  | 48 | 0.029825 | 3.3519 × 10^−7 | (1.8053584547, 0.9999095158)
WOA  | 50 | 0.055237 | 1.74061 × 10^−24 | (1.8052409298, 0.9999999999)
NHHO | 25 | 0.072142 | 3.7824 × 10^−26 | (1.8052409298, 1.0000000000)
JBOA | 20 | 0.048176 | 1.5172 × 10^−32 | (1.8052409298, 1.0000000000)
HWOA |  5 | 0.000651 | 2.5275 × 10^−42 | (1.8052409298, 1.0000000000)

F2
SMA  | 49 | 7.540561 | 4.8737 × 10^−10 | (1.0027102546, 0.9999928277)
HSA  | 49 | 0.062006 | 2.8249 × 10^−24 | (1.0000007484, 0.9999999999)
POA  | 46 | 0.025997 | 1.2355 × 10^−13 | (0.9999660570, 0.9999996473)
ECO  | 48 | 0.002142 | 2.2586 × 10^−10 | (1.0016123565, 0.9999847545)
WOA  | 44 | 0.104289 | 1.2325 × 10^−32 | (1.0000000000, 0.9999999999)
NHHO | 27 | 0.074741 | 3.4635 × 10^−24 | (1.587859041, 2.6726569638)
JBOA | 24 | 0.048468 | 4.5606 × 10^−32 | (0.9999999999, 1.0000000000)
HWOA |  5 | 0.001436 | 6.4301 × 10^−56 | (1.0000000000, 1.0000000000)

F3
SMA  | 50 | 7.274036 | 3.7905 × 10^−8 | (0.2604589880, 0.6232985745)
HSA  | 50 | 0.046914 | 9.4904 × 10^−13 | (0.2994507085, 2.8369326400)
POA  | 48 | 0.027222 | 1.3449 × 10^−8 | (0.5000381423, 3.1414781821)
ECO  | 48 | 0.003783 | 8.8102 × 10^−7 | (0.5014180140, 3.1428316783)
WOA  | 50 | 0.095421 | 8.1325 × 10^−23 | (0.2605992900, 0.6225308965)
NHHO | 25 | 0.071887 | 4.9303 × 10^−24 | (0.2605992900, 0.6225308966)
JBOA | 22 | 0.230190 | 1.2325 × 10^−32 | (0.2605992900, 0.6225308966)
HWOA |  4 | 0.003214 | 1.4327 × 10^−52 | (0.5000000000, 3.1415925432)

F4
SMA  | 50 | 8.997483 | 3.3698 × 10^−4 | (1.250989314, 0.7494006651, 1.8450978400, 2.1489968190, 1.437298277)
HSA  | 50 | 0.053360 | 8.21505 × 10^−5 | (1.2533485300, 0.7552331614, 1.8560851500, 2.1570386800, 1.4271854424)
POA  | 50 | 0.047195 | 9.6454 × 10^−2 | (1.1750171378, 0.6126915170, 1.7998505484, 2.0926208904, 1.7433298682)
ECO  | 49 | 0.001493 | 1.5110 × 10^−22 | (1.2652542711, 0.7440795151, 1.8266111820, 2.2168204708, 1.4417401940)
WOA  | 50 | 0.102124 | 3.3528 × 10^−11 | (1.2504180384, 0.7495795117, 1.859871437, 2.1598693410, 1.4364565401)
NHHO | 23 | 0.010996 | 9.6296 × 10^−22 | (1.2504184697, 0.7495815302, 1.854874327, 2.1598742371, 1.4364536505)
JBOA | 20 | 0.050610 | 3.4666 × 10^−30 | (1.2504184697, 0.7495815302, 1.8598742378, 2.1598742371, 1.4364536505)
HWOA |  4 | 0.001003 | 2.7655 × 10^−32 | (1.2504184697, 0.7495815302, 1.8598742378, 2.1598742371, 1.4364536505)
Table 4. Results of fourth-order iterative scheme for F1–F4 on different initial points.

Initial Points | Iterations | Fitness | CPU Time | Optimal Roots

F1
x0 = (0.50, 0.60) |  8 | 8.1264 × 10^−16 | 1.235163 | (1.8052…, 1.0000…)
x0 = (0.50, 1.50) |  1 | diverges | NaN |
x0 = (1.00, 0.50) |  7 | 3.1243 × 10^−17 | 1.390796 | (1.8052, 1.0000)

F2
x0 = (0.80, 0.95) | 12 | 3.4773 × 10^−15 | 1.247189 | (0.9999, 1.0000)
x0 = (0.50, 1.25) | 10 | 4.0029 × 10^−16 | 1.346229 | (1.0000, 1.0000)
x0 = (0.90, 0.90) |  9 | 1.1102 × 10^−16 | 1.279682 | (1.0000, 1.0000)

F3
x0 = (0.23, 2.50) |  4 | 2.2377 × 10^−16 | 1.248250 | (0.2994, 2.8369)
x0 = (0.35, 3.50) |  7 | 4.4408 × 10^−16 | 1.271054 | (0.5000, 3.1415)
x0 = (0.40, 3.20) |  7 | 3.8735 × 10^−18 | 2.345386 | (0.5000, 3.1415)

F4
x0 = (0.5, 0.1, 1, 1.8, 1)        |  6 | 2.8977 × 10^−16 | 1.418280 | (1.2504, 0.7485, 1.8598, 2.1598, 1.4364)
x0 = (0.5, 0.17, 0.2, 1.5, 0.1)   |  5 | 1.8619 × 10^−18 | 1.258548 | (1.2347, 0.6340, 1.8590, 2.0096, 1.4043)
x0 = (2.5, 1.5, 2.7, 2.5, 2.7)    | 10 | 4.8154 × 10^−14 | 1.377646 | (1.0045, 0.6543, 1.8765, 2.0000, 1.6545)
Table 5. Parametric values for the hydrocarbon combustion problem.

C  | C5 | C6 | C7 | C8 | C9 | C10
10 | 1.930 × 10^−1 | 4.106 × 10^−4 | 5.451 × 10^−4 | 4.497 × 10^−7 | 3.407 × 10^−5 | 9.615 × 10^−7
Table 6. Comparison of results of proposed methods with metaheuristic techniques for hydrocarbon combustion.

Methods | Iteration | Time | Fitness | Optimal Roots

F5
SMA  | 49 | 0.072365 | 2.5110 × 10^−3 | (0.0838240122, 0.7515165416, 0.3938795065, 0.8366942052, 0.0350507767)
HSA  | 50 | 0.066188 | 1.0223 × 10^−2 | (0.1577092757, 0.1832080791, 0.6132323936, 0.8141647954, 0.0330185043)
POA  | 49 | 0.027889 | 4.6962 × 10^−3 | (0.0996126435, 0.5587589718, 0.4213919960, 0.8244278851, 0.0338937239)
ECO  | 49 | 0.052379 | 4.9047 × 10^−2 | (0.1011182377, 0.0922942511, 0.6709302748, 0.8746764666, 0.0376941116)
WOA  | 50 | 0.105180 | 8.5662 × 10^−4 | (0.0498250918, 1.6546566388, 0.2839121146, 0.8510741750, 0.0362146081)
NHHO | 26 | 0.088917 | 3.2741 × 10^−30 | (0.0034301771, 31.3269988688, 0.0683498782, 0.85952290850, 0.0369624446)
JBOA | 21 | 0.068898 | 5.2577 × 10^−37 | (0.0034301771, 31.3269988680, 0.0683498782, 0.8595290580, 0.0369624446)
HWOA |  6 | 0.005439 | 1.0893 × 10^−76 | (0.0031017643, 34.7891496543, 0.0651807643, 0.8599116543, 0.0369978742)
Table 7. Results of fourth-order iterative scheme for hydrocarbon combustion on different initial points.

Initial Points | Iterations | Fitness | CPU Time | Optimal Roots

F5
x0 = (1.10, 10.35, 0.10, 1.25, 0.20)   | 6 | diverges | NaN |
x0 = (0.01, 30.10, 0.05, 0.54, 0.01)   | 7 | diverges | NaN |
x0 = (0.002, 33.15, 0.06, 0.85, 0.03)  | 6 | 3.2214 × 10^−16 | 1.481034 | (0.003, 34.068, 0.056, 0.864, 0.036)
Table 8. Parametric values for circuit design problem.

k | f1k | f2k | f3k | f4k | f5k
1 | 0.4850 | 0.3690 |  5.2095 |  23.3037 |  28.5132
2 | 0.7520 | 1.2540 | 10.0677 | 101.7790 | 111.8467
3 | 0.8690 | 0.7030 | 22.9274 | 111.4610 | 134.3884
4 | 0.9820 | 1.4550 | 20.2153 | 191.2670 | 211.4823
Table 9. Comparison of results of proposed methods with metaheuristic techniques for circuit design.

Methods | Iteration | Time | Fitness | Optimal Roots

F6
SMA  | 20 | 0.857072 | 7.8597462123 | (0.9000000000, 0.9000000000, 0.9580669234, 2.6414185134, 8.0000000000, 7.9557601854, 2.0000000000, 1.2644478791, 2.2294398821)
HSA  | 32 | 0.080901 | 7.0408662854 | (0.9000000000, 0.8865572298, 0.9000000000, 2.6414185134, 8.0000000000, 7.9557601854, 2.0000000000, 1.2644478791, 2.2294398821)
POA  | 47 | 0.054716 | 5.5269924312 | (0.9000000000, 0.9000000000, 1.0143436430, 1.9886794021, 8.0000000000, 8.9432132880, 2.0000000000, 1.3060663928, 2.3421345674)
ECO  | 38 | 0.054324 | 4.5883574187 | (0.8994942596, 0.8538569294, 0.9000000000, 1.9113586386, 8.4238067254, 8.99114165490, 2.0000000000, 1.4203709935, 2.4270129531)
WOA  | 50 | 0.050925 | 5.1927605431 | (0.9000000000, 0.8942605680, 1.0225517761, 1.9544350658, 8.0006050561, 8.9999513748, 2.0000000000, 1.3144356581, 2.1663982240)
NHHO | 26 | 0.065764 | 1.7652 × 10^−24 | (0.89999999734, 0.4499874671, 1.0000000006, 2.0000000045, 7.9999654871, 8.0000000059, 5.0000675345, 0.9999995432, 2.0000000056)
JBOA | 19 | 0.542390 | 3.7320 × 10^−27 | (0.8999999952, 0.4499874719, 1.0000000648, 2.0000685416, 7.9999714405, 7.9996926842, 5.0000312759, 0.9999877234, 2.0000524834)
HWOA |  5 | 0.009832 | 5.4534 × 10^−72 | (0.9000000000, 0.4499873810, 1.0000068541, 2.0000696743, 7.9999714405, 7.9996937654, 5.0000312759, 0.9999877234, 2.0000524834)
Table 10. Results of fourth-order iterative scheme for circuit design on different initial points.

Initial Points | Iterations | Fitness | CPU Time | Optimal Roots

F6
x0 = (0.8, 0.4, 1.0, 2.5, 8.0, 8.0, 4.9, 1.0, 1.9) | 8 | 8.1264 × 10^−16 | 1.235163 | (0.90, 0.44, 1.00, 1.99, 7.89, 7.99, 4.99, 1.00, 1.99)
x0 = (0.5, 0.2, 0.9, 3.2, 7.9, 7.5, 5, 4.67, 2.6)  | 3 | diverges | NaN |
x0 = (0.9, 0.4, 1.0, 2.5, 8, 8, 5, 1.0, 2.0)       | 7 | 3.1243 × 10^−17 | 1.390796 | (0.90, 0.44, 1.00, 1.99, 7.89, 7.99, 4.99, 1.00, 1.99)
Table 11. Comparison of results of proposed methods with metaheuristic techniques for application of differential equations.

Methods | Iteration | Time | Fitness
SMA  | 48 | 8.665423 | 3.7218 × 10^−1
HSA  | 46 | 0.391553 | 3.7629 × 10^−1
POA  | 49 | 0.308421 | 8.3046 × 10^−1
ECO  | 50 | 0.013101 | 6.0331 × 10^−1
WOA  | 50 | 0.753174 | 1.6325 × 10^−1
NHHO | 14 | 0.724151 | 3.2778 × 10^−5
JBOA | 10 | 0.542764 | 3.7643 × 10^−6
HWOA |  3 | 0.005436 | 1.6543 × 10^−7
Table 12. Results of fourth-order iterative scheme for application of differential equations on different initial points.

Initial Points | Iterations | Fitness | CPU Time | Optimal Roots
x0 = (1.5, 1.5, 1.5, …) | 12 | 3.3215 × 10^−4 | 1.757782 | (1.99, 1.98, 1.97, …, 2.93, 2.94, 2.95)
x0 = (0.2, 0.2, 0.2, …) |  5 | diverges | NaN |
x0 = (1, 1, 1, …)       | 10 | 2.9715 × 10^−3 | 1.722630 | (1.99, 1.98, 1.97, …, 2.93, 2.94, 2.95)
Table 13. Average (mean) of F1–F7 in 30 runs.

Methods | F1 | F2 | F3 | F4 | F5 | F6 | F7
SMA  | 0.43235 | 9.2712 × 10^−10 | 1.9098 × 10^−7 | 0.02687 | 0.10302 | 11.74216 | 0.41405
HSA  | 1.20139 | 1.5299 × 10^−26 | 8.2187 × 10^−4 | 0.55241 | 0.02034 | 6.68844 | 0.45330
POA  | 1.3703 × 10^−7 | 3.7933 × 10^−12 | 1.0251 × 10^−8 | 0.03967 | 0.03267 | 5.55985 | 0.84383
ECO  | 7.2948 × 10^−6 | 4.3668 × 10^−6 | 1.1988 × 10^−4 | 0.77826 | 0.71197 | 41.73655 | 6.49791
WOA  | 1.9863 × 10^−19 | 1.2286 × 10^−26 | 1.3610 × 10^−12 | 0.00656 | 0.01428 | 6.37899 | 0.22963
NHHO | 2.3432 × 10^−24 | 4.6534 × 10^−22 | 1.5654 × 10^−24 | 1.3423 × 10^−24 | 2.7654 × 10^−32 | 1.8745 × 10^−24 | 1.3467 × 10^−6
JBOA | 7.5434 × 10^−30 | 2.6754 × 10^−32 | 3.7655 × 10^−32 | 1.7654 × 10^−30 | 1.5434 × 10^−34 | 3.8777 × 10^−30 | 1.6666 × 10^−6
HWOA | 1.4560 × 10^−41 | 1.7656 × 10^−54 | 1.6754 × 10^−52 | 1.5654 × 10^−32 | 2.6565 × 10^−76 | 1.3422 × 10^−72 | 1.688 × 10^−7
Table 14. Standard deviation of F1–F7 in 30 runs.

Methods | F1 | F2 | F3 | F4 | F5 | F6 | F7
SMA  | 0.4588 | 1.4331 × 10^−9 | 4.6562 × 10^−7 | 0.12703 | 0.01360 | 5.42167 | 0.06945
HSA  | 1.14605 | 8.3669 × 10^−26 | 3.1320 × 10^−3 | 1.15932 | 0.03220 | 2.92626 | 0.06865
POA  | 2.3286 × 10^−7 | 7.2393 × 10^−12 | 1.8416 × 10^−8 | 0.05219 | 0.04096 | 0.29112 | 0.03779
ECO  | 2.7846 × 10^−5 | 1.8563 × 10^−5 | 6.4065 × 10^−4 | 1.48158 | 2.26831 | 29.1613 | 5.76165
WOA  | 5.5787 × 10^−19 | 4.6002 × 10^−26 | 5.0599 × 10^−12 | 0.01674 | 0.02842 | 3.51521 | 0.05372
NHHO | 1.0076 × 10^−22 | 1.5564 × 10^−23 | 1.0013 × 10^−24 | 3.6522 × 10^−24 | 1.5434 × 10^−32 | 8.6545 × 10^−24 | 1.8876 × 10^−6
JBOA | 7.1123 × 10^−30 | 1.4543 × 10^−32 | 1.4454 × 10^−32 | 1.6503 × 10^−28 | 1.5404 × 10^−34 | 1.7766 × 10^−30 | 2.6086 × 10^−6
HWOA | 2.1760 × 10^−41 | 2.3883 × 10^−54 | 2.3434 × 10^−52 | 2.4826 × 10^−32 | 2.7656 × 10^−73 | 1.5647 × 10^−71 | 2.7656 × 10^−7
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chandel, A.; Martínez, E.; Bhalla, S.; Alharbi, S.; Behl, R. A Hybrid Walrus Optimization-Based Fourth-Order Method for Solving Non-Linear Problems. Axioms 2026, 15, 6. https://doi.org/10.3390/axioms15010006
