The performances of ten algorithms were investigated for optimal path finding of vehicles: the Gravitational Search Algorithm (GSA), the Electromagnetism-Like Algorithm (EMLA), the Fluid Search Algorithm (FSA), the hybrids GSA + EMLA, EMLA + GSA, GSA + FSA, FSA + GSA, EMLA + FSA, and FSA + EMLA, and A*. The nine metaheuristic algorithms use stochastic techniques to find optimal or feasible paths, whereas brute-force and exhaustive methods are computationally expensive for path-finding problems on large grids.
To establish a baseline against which to compare the metaheuristic algorithms, we ran four widely used heuristic path-planning algorithms, A*, Theta*, RRT*, and D* Lite, on a 100 × 100 grid with 15% obstacles, with 30 replications each. In
Table 2, the values 143.26, 0.0127, and 145.12 are the averages of distance (D), time (T), and energy (E), respectively, obtained from 30 replications (trials) of the A* algorithm. We did the same for the other heuristic algorithms on the same data, and then computed normalized values for each algorithm across the metrics. For example, the minimum average distance across all algorithms is 140.8 and the maximum is 173.78. The normalized distances for A*, D* Lite, Theta*, and RRT* are therefore computed as (143.26 − 140.8)/(173.78 − 140.8) = 0.075, (143.26 − 140.8)/32.98 = 0.075, (140.8 − 140.8)/32.98 = 0, and (173.78 − 140.8)/32.98 = 1, respectively.
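The Min-Max normalization used throughout can be sketched in Python. This is a minimal illustration using the distance averages quoted above; the function name is ours, not from the paper's code:

```python
def min_max(values):
    """Min-Max normalize a list: best (smallest) -> 0, worst (largest) -> 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Average distances from Table 2 (A*, D* Lite, Theta*, RRT*)
avg_d = [143.26, 143.26, 140.8, 173.78]
norm_d = [round(n, 3) for n in min_max(avg_d)]
print(norm_d)  # [0.075, 0.075, 0.0, 1.0]
```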
Table 2 shows the results, where the weighted cost C (a multi-objective function) was computed by Equation (12) with weights 0.3, 0.4, 0.2, and 0.1 for D, T, E, and the number of encountered obstacles (O), respectively. The normalization in each table is performed across all algorithms for each metric, on the averages of the n replications, per scenario.
Each algorithm's code was initialized with the population size, number of iterations, obstacle density, grid size, start point, goal point, and the weights of D, T, E, and O used to compute the cost C. The codes are written in Python. The output includes the values of the five metrics and the optimal paths.
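As a sketch of these inputs, a configuration bundle along the following lines could drive each run; the field names are illustrative assumptions, not the paper's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class RunConfig:
    # Hypothetical run parameters mirroring the initialization listed above.
    grid_size: int = 100
    obstacle_density: float = 0.15          # fraction of cells occupied
    population: int = 30
    iterations: int = 50
    start: tuple = (0, 0)
    goal: tuple = (99, 99)
    # Weights for D, T, E, O in the weighted cost C (should sum to 1).
    weights: dict = field(default_factory=lambda: {"D": 0.3, "T": 0.4, "E": 0.2, "O": 0.1})

cfg = RunConfig()
assert abs(sum(cfg.weights.values()) - 1.0) < 1e-9
```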
4.1. Verification and Validation
For verification and validation, we implemented the ten algorithms on a 100 × 100 grid with 10 trials, where the weighted cost C = 0.3Nor(D) + 0.4Nor(T) + 0.2Nor(E) + 0.1Nor(O), and obtained the results shown in
Table 3.
Table 3 shows the average results of running the ten algorithms for 10 trials on the 100 × 100 grid. A* outperforms the other algorithms in D, T, and O on this small problem, primarily due to the small grid size and the weights of D (30%) and T (40%). The table demonstrates that the metaheuristic results are valid when compared against the A* algorithm. The rank runs from the lowest C (best = 1) to the highest C (worst = 10). Since A* is a deterministic shortest-path algorithm, its distance is the smallest, with the hybrid FSA + GSA coming next. The average C for A* is 0.08 using Min-Max normalization. The smallest value of C = 0.3 × (141.5 − 140.2)/(150 − 140.2) + 0.4 × (151.2 − 150)/(164 − 150) + 0.2 × (1755 − 1755)/(1870 − 1755) + 0.1 × (0 − 0)/(0.2 − 0) = 0.074, rounded to three digits. The last row in
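The worked computation above can be reproduced directly. This is a sketch in which Equation (12) is taken to be the weighted sum of the Min-Max normalized metrics, with the raw minima and maxima quoted for Table 3:

```python
def weighted_cost(raw, mins, maxs, weights):
    """Weighted multi-objective cost: sum of weight * Min-Max normalized metric."""
    c = 0.0
    for k, w in weights.items():
        span = maxs[k] - mins[k]
        c += w * ((raw[k] - mins[k]) / span if span else 0.0)
    return c

weights = {"D": 0.3, "T": 0.4, "E": 0.2, "O": 0.1}
raw  = {"D": 141.5, "T": 151.2, "E": 1755, "O": 0}
mins = {"D": 140.2, "T": 150.0, "E": 1755, "O": 0}
maxs = {"D": 150.0, "T": 164.0, "E": 1870, "O": 0.2}
print(round(weighted_cost(raw, mins, maxs, weights), 3))  # 0.074
```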
Table 3 shows the relative differences between the lowest average values, produced by A*, and the highest average of each metric produced by the other metaheuristic algorithms. For example, the relative difference in Av D = 0.0699 was computed as (150 − 140.2)/140.2. The largest relative differences in distance, time, and energy, produced by the hybrid GSA + FSA compared with A*, are 0.067, 0.093, and 0.038, respectively. These are the largest deviations from the best measures, which indicates that the metric values produced by the metaheuristic algorithms are acceptable. The averages and standard deviations of each column are shown in the table; these also support accepting the results of the metaheuristic algorithms relative to the baseline A*.
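The relative differences in the last row follow the usual definition, sketched below (the function name is ours):

```python
def relative_difference(value, baseline):
    """Relative deviation of a metric value from the A* baseline value."""
    return (value - baseline) / baseline

# Av D example from Table 3: worst average distance 150 vs. A* baseline 140.2
print(round(relative_difference(150, 140.2), 4))  # 0.0699
```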
The hybrid GSA + FSA had the largest values in all metrics. FSA + GSA has the best rank in C, and GSA + FSA the worst, across all grid sizes. Although
Table 4 shows results only for the 100 × 100 and 500 × 500 grids, the hybrid GSA + FSA shows the highest values of D, T, E, O, and C in both. We tested the algorithms on five grids but, due to space limitations, present only the smallest and largest. In Scenario (1), distance, time, and energy generally increase with grid size, and E scales directly with D. When compared with the metaheuristic algorithms in multi-objective problems, A* tends to obtain the lowest C, especially on small or medium grids. The metaheuristics FSA, GSA, EMLA, and their hybrids are stochastic and better at handling multi-objective trade-offs in complex scenarios. As the grid size or obstacle density increases, the metaheuristics may find more energy-efficient paths.
We ran the algorithms on a 100 × 100 grid with 15% obstacles, with 5 replications (trials) each.
Table 4 and
Table 5 show the results from each replication. The scenario C = 0.3Nor(D) + 0.4Nor(T) + 0.2Nor(E) + 0.1Nor(O) was assumed to obtain the results in
Table 4 and
Table 5. For example, in trial 1 (Rep 1), A* produced the minimum distance 128 and FSA produced the maximum distance 210. Thus Nor(D) = (128 − 128)/(210 − 128) = 0 for A*, Nor(D) = (210 − 128)/(210 − 128) = 1 for FSA, and Nor(D) = (153 − 128)/(210 − 128) ≈ 0.30 for FSA + GSA. The C is computed by Equation (12). In all replications, A* has the smallest values for the four metrics because the grid is small; this is not the case for larger grids.
Table 5 shows the results for replications 4 and 5 on the 100 × 100 grid. The normalized values are rounded to two digits in both
Table 4 and
Table 5. The performance of the hybrid algorithms dominates that of the single metaheuristic algorithms. Hybrids combining EMLA with the other metaheuristics performed better than the remaining metaheuristic algorithms across the 5 replications.
Additionally, we ran FSA and obtained results as shown in
Figure 12, where the starting point is (0, 0) and the goal point is (99, 99).
Figure 12 illustrates sample optimal-path visualizations from the FSA optimization algorithm on both synthetic and real terrain (Jibal Sharat, Jordan), based on a weighted multi-objective fitness function with a 100 × 100 grid and the specifications in
Table 2.
4.2. Experiment Results and Discussion
The ten algorithms were implemented in Python and run on randomly generated grids of sizes 100 × 100, 200 × 200, …, 500 × 500.
Table 6 shows results based on Scenario (1), where the weighted cost C = 0.3Nor(D) + 0.4Nor(T) + 0.2Nor(E) + 0.1Nor(O), on the 100 × 100 grid, using Min-Max normalization, with population = 30, number of iterations = 50, and the number of obstacles set to 15% of the grid size. The minimum, maximum, standard deviation, and average over the replications were recorded; only the averages are reported in the following tables and figures.
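A random test grid of the kind described, with obstacles occupying 15% of the cells and the start and goal kept free, could be generated as follows. This is a sketch of the experimental setup, not the paper's actual generator:

```python
import random

def make_obstacles(n, density, seed=None):
    """Randomly mark density * n * n cells as obstacles, keeping start/goal free."""
    rng = random.Random(seed)
    keep_free = {(0, 0), (n - 1, n - 1)}   # start and goal cells stay open
    target = int(density * n * n)
    obstacles = set()
    while len(obstacles) < target:
        cell = (rng.randrange(n), rng.randrange(n))
        if cell not in keep_free:
            obstacles.add(cell)
    return obstacles

obs = make_obstacles(100, 0.15, seed=42)
assert len(obs) == 1500 and (0, 0) not in obs and (99, 99) not in obs
```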
Table 7 shows the results for grids 100 × 100, 200 × 200, …, 500 × 500, where C = 0.2Nor(D) + 0.2Nor(T) + 0.4Nor(E) + 0.2Nor(O), Scenario (2). In this case, the hybrid FSA + GSA consistently ranks best across all grid sizes compared with the other metaheuristic algorithms, achieving the lowest C because energy efficiency now dominates the cost. All the metaheuristic algorithms outperformed A* in average weighted cost. FSA + GSA ranked 1, while GSA + FSA ranked 9 (worst) among the hybrids in C. The ranks remain consistent across grids, and the metaheuristics scale better for energy-focused multi-objective optimization. For a balanced scenario where C = 0.3D + 0.3T + 0.3E + 0.1O, the hybrids are consistently best and provide the multi-objective trade-off. The hybrids adaptively optimize energy and obstacle avoidance, so for large grids with more complex paths they outperform A* when E or O carries a higher weight. As grid size increases, the differences between the metaheuristic algorithms and A* become more pronounced. Since larger grids imply longer distances, more time, and more energy expenditure, the weighted cost increases with grid size for all algorithms. These results indicate that a hybrid metaheuristic is preferred when energy efficiency, obstacle avoidance, and scalability matter in practical vehicle pathfinding.
Since the weight cost depends on the importance of each metric, we computed the rank of each algorithm for each metric across all grid sizes and then ranked each algorithm by the average rank.
Table 8 shows the results when, instead of computing C, we computed the ranks of the algorithms and their averages on grids 200 to 500. The rank for each metric is displayed, where the small value 1 indicates the least cost. R-Met is the average of the ranks over the four metrics for each of the nine metaheuristic algorithms.
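The per-metric ranking (1 = least cost) and the R-Met average can be sketched as follows; the input costs below are hypothetical, chosen only to illustrate the procedure:

```python
def rank(values):
    """Rank values ascending: the smallest value gets rank 1 (least cost is best)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def r_met(metric_ranks):
    """Average an algorithm's ranks over the four metrics (D, T, E, O)."""
    return sum(metric_ranks) / len(metric_ranks)

# Hypothetical average costs for three algorithms on one metric
print(rank([0.42, 0.17, 0.80]))  # [2, 1, 3]
print(r_met([1, 2, 1, 4]))       # 2.0
```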
Table 8 also gives the average ranks for all metrics on grids 200, 300, 400, and 500. FSA + EMLA achieved the best performance among the nine algorithms, with EMLA + FSA next. EMLA outperformed FSA and GSA. The hybrids of EMLA with the other algorithms outperformed GSA + FSA, FSA + GSA, FSA, and GSA. Thus FSA + EMLA, EMLA + FSA, and EMLA + GSA performed better than the other six algorithms. The results in
Table 6 agree with these rank-based results in that the hybrids FSA + EMLA, EMLA + FSA, and EMLA + GSA, all of which involve EMLA, outperformed the other algorithms.
Figure 13 shows the heat map of average metric ranks across grids 100, 200, …, 500; a lower rank is better (the color runs from white (lower) to dark blue (higher)). FSA + EMLA, EMLA + FSA, EMLA + GSA, EMLA, and GSA + EMLA have the lowest average ranks, in that order. Thus, the hybrids with EMLA outperformed the other hybrids, FSA, GSA, and A*. FSA + EMLA is best when D matters most, while EMLA + GSA is best when T matters and also when E and O matter. These results indicate which algorithms suit cars, robots, or drones.
The path weighted cost C increases with grid size for all algorithms. In larger grids, the metaheuristics are preferable because there are more path choices, which allows optimization over multiple objectives; E penalties accumulate, favoring more efficient paths, and obstacles occur more frequently and must be avoided. When E and O avoidance are important, the metaheuristic hybrids leverage stochastic search to explore alternative paths, balancing D, T, E, and O more effectively.
Table 9 shows the synthetic results when A* and the nine metaheuristic algorithms were tested on a 500 × 500 grid with 5%, 15%, and 30% obstacles. It was assumed that C = 0.2Nor(D) + 0.3Nor(T) + 0.3Nor(E) + 0.2Nor(O). Each algorithm was run 10 times and the average was recorded. The hybrids containing EMLA are always the best under these assumptions, and A* is worse than all the physics-based methods. The table shows the average cost and rank for the three obstacle levels; the four EMLA hybrids had ranks 1–4. C increases in the more difficult environments compared with 5% obstacles, and all algorithms show increased cost as the obstacle density rises from 5% to 30%.
It is clear from
Table 10 that the execution time grows with grid size, increasing nonlinearly from the 100 × 100 to the 500 × 500 grid. Since the nine metaheuristic algorithms are population-based, they took longer than the deterministic A*. The hybrids took more time than the single metaheuristics because they combine more rules, as in GSA + FSA. FSA + EMLA has the lowest running time because it balances exploration and refinement, reduces the number of iterations, and requires fewer candidate evaluations. The hybrid GSA + FSA is the slowest in both convergence and runtime.
The convergence speed is the number of iterations an algorithm needs to reach a stable best solution.
Table 11 shows the average convergence speeds (10 trials each). The number of iterations increases with grid size because the search space is larger, there are more cells to explore, and more solutions must be evaluated. The hybrids FSA + EMLA and GSA + EMLA converge faster than single FSA or pure GSA because hybrids combine exploration abilities with exploitation strengths and avoid becoming stuck in local minima. A* requires fewer iterations because it is deterministic and expands each node only once, but it runs slower on large grids.
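One way to measure convergence speed from a run's best-cost history is to record the last iteration at which the best cost still improved. This is a sketch under that assumption; the paper's exact stopping criterion is not reproduced here:

```python
def iterations_to_converge(best_costs, tol=1e-9):
    """Return the 1-based iteration at which the best cost last improved."""
    best = float("inf")
    last_improvement = 0
    for i, cost in enumerate(best_costs, start=1):
        if cost < best - tol:   # a genuine improvement, beyond the tolerance
            best = cost
            last_improvement = i
    return last_improvement

# Example: the cost stops improving after the 4th iteration
print(iterations_to_converge([5.0, 4.2, 4.2, 3.9, 3.9, 3.9]))  # 4
```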
Based on observation of the results and previous research, the hybrid FSA + GSA consistently performs well in scenarios that prioritize E or that weight D, T, and E equally, when the rank is based on weighted cost. For large grids with more complex paths, the metaheuristic algorithms outperform A* when E or O has a higher weight, because they adapt energy use and obstacle avoidance. A* remains excellent in the distance and time metrics, but it does not optimize energy or dynamic obstacle negotiation; thus A* is limited in multi-objective scenarios and does not always achieve the lowest normalized C. Physics-based metaheuristic algorithms can explore energy-optimal and obstacle-avoiding paths, which is why they obtain a lower C than A* under some metric-weight scenarios. The C formula influences the ranking: if the weights of D and T are high, A* is the better choice, and if the weights of E and O are high, the metaheuristics are better.
Inspired by fluid dynamics, FSA has consistently shown better computational efficiency, with the lowest energy consumption and execution times in most cases. This is consistent with its nature as a fast, fluid search process that moves rapidly across solution spaces, converging to local optima, which can lead to somewhat lower fitness ratings. GSA, which is based on gravitational forces, performed more steadily and conservatively, offering consistent but unexceptional results on all metrics. EMLA, which is based on electromagnetic attraction and repulsion, demonstrated remarkable adaptive capabilities, especially in hybrid setups where it successfully balanced the exploration and exploitation phases.
The hybrid approaches GSA + EMLA and EMLA + GSA showed interesting synergies between the physical laws of gravitation and electromagnetism. GSA + EMLA showed superior performance, especially in fitness-oriented scenarios, where good solutions were obtained via gravitational convergence followed by electromagnetic refinement. In large and complicated terrains, EMLA + GSA performed strongly, indicating that gravitational acceleration after electromagnetic search responds well to uncertain environments. However, both hybrid schemes also registered increased variability on certain measures, an indicator of sensitivity to parameter settings and of the need for careful tuning to stabilize their performance across problem domains.
The results illustrate the importance of adapting algorithms based on application requirements. The FSA is the preferred option for tasks where speed and energy efficiency are top priorities. The GSA + EMLA is better for balanced optimization, particularly when fitness is important. In large, erratic terrains, the EMLA + GSA works especially well. It should be noted that energy values were unusually low in terrain scenarios, potentially due to oversimplified simulation assumptions. Furthermore, hybrid algorithms exhibited increased variability in a few metrics, indicating that parameter tuning is necessary.
If the rank is based on the weighted cost, then when D and T are priorities, GSA + EMLA and FSA + GSA perform well on the multi-objective function. If E is the priority, FSA performs better for energy-limited vehicles or battery-powered robots and drones. If O is the priority, EMLA and the EMLA-based hybrids excel in dense or dynamic environments.
The execution times in
Table 10 demonstrate a clear relationship between theoretical complexity and empirical runtime. All algorithms exhibit increasing execution time as the grid size grows from 100 to 500, but the growth rate differs substantially among them. Algorithms with quadratic complexity, such as GSA (1.65 s at grid 500), EMLA + GSA (1.60 s), and GSA + FSA (1.90 s), show the steepest increases in computational cost. These results align with their quadratic scaling, in which pairwise interactions dominate the runtime as the search space expands. On the other hand, algorithms based on FSA, or incorporating substantial FSA components, demonstrate markedly lower execution times and slower growth: FSA (1.30 s), FSA + EMLA (1.00 s), and FSA + GSA (1.15 s) grow more gently with grid size, consistent with the linear complexity of the underlying fluid-dynamics mechanism. The empirical data therefore confirm the theoretical predictions. Overall, hybrids that incorporate GSA or EMLA inherit their quadratic costs, while hybrids centered on FSA retain favorable linear-time scaling. These results validate that theoretical complexity directly explains and predicts the empirical runtime differences among the algorithms.
Note that we presented some formal statistics, such as the minimum, maximum, averages, standard deviations, and relative differences. In this study, we focused primarily on evaluating trends in performance, convergence behavior, and robustness across multiple replications. Incorporating formal statistical significance testing is an important direction for future research and would further strengthen the empirical validity of the findings.