Review

Optimization Algorithms: Comprehensive Classification, Principles, and Scientometric Trends

by Khadija Abouhssous 1, Rasha Hasan 2,*, Asmaa Zugari 3 and Alia Zakriti 1

1 Laboratory of Science and Advanced Technology, Department of Civil and Industrial Sciences and Technologies, National School of Applied Sciences, Abdelmalek Essaadi University, Tetouan 93000, Morocco
2 College of Engineering and Computing, Liwa University, Abu Dhabi P.O. Box 41009, United Arab Emirates
3 Systems Design Laboratory (ISD), Electronics & Smart Systems, Faculty of Sciences, Abdelmalek Essaadi University, Tetouan 93000, Morocco
* Author to whom correspondence should be addressed.
Algorithms 2026, 19(4), 258; https://doi.org/10.3390/a19040258
Submission received: 31 January 2026 / Revised: 5 March 2026 / Accepted: 17 March 2026 / Published: 27 March 2026

Abstract

In recent years, optimization algorithms have emerged as powerful computational tools for addressing complex and dynamic challenges across diverse domains. These domains include engineering, technology, management, and decision-making. Their growing importance is motivated by (a) the increasing complexity of modern systems, (b) the need for efficient resource utilization, and (c) the demand for scalable algorithmic solutions. These algorithms enable the systematic and computational exploration of large solution spaces, supporting decision-making and design under uncertainty, large-scale data, and evolving requirements. This study provides a structured review and comparative scientometric analysis of optimization algorithms, covering: (a) exact methods, (b) approximation techniques, (c) metaheuristics, and (d) emerging physics-informed frameworks. The analysis highlights algorithmic trends, performance-oriented research directions, and the increasing integration of mathematical programming, machine learning, and numerical methods. The results show a renewed focus on classical algorithmic paradigms. Moreover, rapid growth in hybrid and physics-informed optimization approaches is observed. These findings confirm the central role of optimization algorithms in modern algorithm engineering and interdisciplinary computational research.

1. Introduction

Optimization algorithms constitute a core area of algorithm design and analysis, providing mathematical and computational frameworks for solving constrained and unconstrained problems characterized by high dimensionality, multiple objectives, and complex search spaces. Optimization algorithms provide valuable solutions to challenges in many fields. For example, in engineering, these algorithms can be used to design more efficient systems and structures [1], optimize resource utilization for sustainable solutions [2], and minimize production costs [3]. They allow engineers to solve complex problems such as optimizing product design, facility layout, and supply chain management. In electronics and telecommunications, optimization methods are used to reduce the size of circuits [4] and improve their performance [5] in terms of speed, power consumption, and reliability, besides enhancing security [6] and end-to-end services [7]. These algorithms are essential for finding the optimal combinations of components [8], connections, and parameters to maximize the performance of electronic systems. In the financial sector [9], optimization algorithms play a key role in optimizing investment portfolios [10], managing risk [11], and improving trading strategies [12]. They make it possible to identify the best combinations of assets, manage risk optimally, and improve the profitability of investments. Optimization methods also find applications in production engineering [13], where they are used to reduce production costs, improve the efficiency of manufacturing processes, and optimize resource utilization. They can be used to optimize operations planning, inventory management, machine layout, and logistics processes. In fact, optimization methods are used in a multitude of fields, including logistics [14], energy [15], the social sciences [16], and the medical sector [17]. They offer powerful tools for solving complex problems and making informed decisions in a variety of contexts.
Over the years, numerous optimization techniques and their variants have been developed to meet the specific needs of each field. With the advancement of technology and the growing availability of computing power, optimization algorithms continue to evolve and adapt to meet the growing challenges of complex problems. They play an essential role in improving performance [18], reducing costs, maximizing resources, and making strategic decisions in a variety of fields. From an algorithmic perspective, these challenges are characterized by (a) combinatorial explosion, (b) nonlinearity, and (c) scalability constraints, which call for advanced algorithmic strategies such as approximation algorithms, metaheuristics, and hybrid numerical methods. Interest in optimization becomes greatest when problems reach a significant level of complexity, at which point the use of optimization tools becomes not only necessary but often unavoidable. Traditional methods are generally insufficient to solve these complex problems effectively, as they are unable to exhaustively explore all possible solutions or to consider all constraints and objectives simultaneously. Optimization tools, on the other hand, offer more advanced and methodical approaches, identifying the most efficient solutions while respecting constraints. They are capable of handling complex problems involving a large number of variables, multiple constraints, multiple criteria, and complex interactions between the various components of the problem, and they provide structured methods for mathematically formulating these problems and solving them efficiently and systematically. Thanks to these tools, it is possible to find optimal or near-optimal solutions that meet the specific requirements of complex problems.
By using optimization tools, practitioners and researchers can achieve better results, more efficient designs, more cost-effective production plans, more optimized distribution schemes, and many other benefits.
In this context, this paper presents a unified and up-to-date review of optimization algorithms, providing a comprehensive overview of the current state of the field. It offers detailed comparative analyses of the different methods, including their principles, computational complexity, performance, scalability, and application domains. In addition, a quantitative scientometric study based on the Scopus database is included to highlight emerging trends, and directions for future research. This approach aims to provide a structured and current resource for the scientific community, integrating both historical developments and recent advances in the field. This review is intended to benefit a broad audience, including engineers, algorithm designers, and academic researchers, by providing actionable insights and practical guidance for selecting and applying optimization methods in real-world scenarios.
The main contributions of this paper are as follows:
  • A clear and structured synthesis of optimization methods, highlighting their strengths, limitations, and areas of application.
  • An in-depth scientometric analysis that identifies research trends, maturity phases, and emerging paradigms in optimization.
  • The identification of research gaps and opportunities, particularly at the interface of theoretical developments and practical applications, as well as in emerging hybrid approaches.
  • A forward-looking discussion on the evolution of optimization paradigms, emphasizing potential directions for integration and hybridization of methods.

2. Overview of Optimization Methods

The process of finding a set of optimal solutions, among all possible solutions for a given problem, is called optimization. Optimization problems (OPs) are classified according to different criteria, as shown in Figure 1. An optimization problem is said to be linear if all objective and constraint functions are linear; otherwise, it is said to be nonlinear. Likewise, an OP is classified according to the type of variables as continuous if its variables are continuous, discrete if the variables are of the integer type, or mixed when both types of variables are involved. Optimization problems with only one objective are said to be single-objective. If there are several objectives to optimize, the problem is said to be multi-objective, while multi-criteria decision-making (MCDM) approaches address problems involving multiple evaluation criteria without explicitly aggregating them into a single objective function. Optimization problems can also be classified according to the presence of constraints (constrained or unconstrained formulations) and the level of uncertainty, distinguishing deterministic from stochastic problems. According to the number of variables, we distinguish between small-scale and large-scale OPs: while small-scale OPs are tractable and allow optimal solutions to be guaranteed, large-scale OPs are complex and may require heuristics or approximations.
In addition, Figure 1 highlights the main families of solution methods, including exact algorithms, approximation algorithms, heuristics, and metaheuristics. Finally, optimization strategies are distinguished between derivative-based approaches, such as gradient and Newton methods, and derivative-free approaches, including direct search techniques and metaheuristic frameworks.
The basic method to optimize a device is the trial-and-error method: it involves testing a number of potential solutions until an adequate solution is obtained. Generally speaking, the main stages of optimization are analysis, synthesis, evaluation, and validation, as shown in Figure 2.
The analysis step is used to understand the problem, objectives, constraints, and variables. This step includes making choices regarding:
  • Problem variables: What are the important parameters to be varied?
  • Research space: Within which limits should these parameters be varied?
  • Objective function: What are the objectives to be reached? How can we express them mathematically?
  • Optimization method: Which method do we choose?
Following this analysis phase, the selected method synthesizes potential solutions that are evaluated and possibly eliminated until an acceptable solution is obtained. The evaluation stage involves assessing the performance of each candidate solution through the computation of the objective function and the measurement of its performance according to the defined criteria.
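The analysis-synthesis-evaluation-validation loop described above can be sketched as a simple trial-and-error (random search) procedure. The quadratic objective and the bounds below are illustrative assumptions, not taken from any specific application in this review:

```python
import random

def random_search(objective, bounds, n_iter=1000, seed=0):
    """Trial-and-error optimization: sample candidates, keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        # Synthesis: generate a candidate within the search space.
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        # Evaluation: compute the objective function.
        f = objective(x)
        # Validation: keep the candidate if it improves on the best so far.
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Illustrative objective: minimize (x1 - 1)^2 + (x2 + 2)^2.
obj = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
x, f = random_search(obj, bounds=[(-5, 5), (-5, 5)])
print(x, f)  # a point near [1, -2], with objective close to 0
```

In practice, the dedicated methods reviewed in Section 3 replace this blind sampling with structured search, but the surrounding analyze-evaluate-validate loop is the same.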
Optimization techniques are, therefore, used to determine a set of design parameters (the problem variables), $x = \{x_1, x_2, \ldots, x_n\}$. These variables can be of a very different nature. For example, for a microwave device, they can be its shape, its geometric dimensions, the materials used, the polarization conditions, etc. The number of these parameters is directly related to the number of degrees of freedom of the algorithm to discover new solutions. However, a priori knowledge can allow us to limit the number of variables to the essential. Moreover, the search space can be infinite or finite. In most cases, optimization algorithms require finite search spaces, but this is not problematic. The definition intervals of the variables are usually naturally limited, and orders of magnitude are known. For each variable, bounds are, therefore, introduced, such as
$$x_i^{\min} \le x_i \le x_i^{\max}, \quad i = 1, \ldots, n.$$
A function that reflects the relevance of the potential solutions must then be defined from the quantities to be optimized. This function is noted as $f(x)$ and is called the objective function (or cost function, or adaptation function, with the term “adaptation” being used when several objective functions are combined in the case of multiple objectives). This function is usually minimized (or, depending on the sign, maximized) according to given constraints, which can be equality constraints of the type $G_i(x) = 0$ $(i = 1, \ldots, m_e)$ and/or inequality constraints such as $G_i(x) \le 0$ $(i = m_e + 1, \ldots, m)$. The general form of an optimization problem is as follows:
$$\min_{x} f(x)$$
under the constraints
$$G_i(x) = 0, \quad i = 1, \ldots, m_e,$$
$$G_i(x) \le 0, \quad i = m_e + 1, \ldots, m.$$
The constraint vector of length $m$ contains the values of the equality and inequality constraints evaluated at $x$. These constraints depend on the nature of the problem to be solved.
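As a concrete (and deliberately tiny, assumed) instance of this general form, the sketch below encodes an objective f(x) = x1^2 + x2^2, one equality constraint, and two inequality constraints, and checks feasibility by evaluating the constraint vector as described above:

```python
def f(x):
    """Objective: f(x) = x1^2 + x2^2."""
    return x[0] ** 2 + x[1] ** 2

def G(x):
    """Constraint vector: equality constraints G_i(x) = 0 for i < M_E,
    then inequality constraints G_i(x) <= 0 for the remaining entries."""
    return [
        x[0] + x[1] - 1.0,   # equality:   x1 + x2 = 1
        -x[0],               # inequality: x1 >= 0
        -x[1],               # inequality: x2 >= 0
    ]

M_E = 1  # number of equality constraints

def is_feasible(x, tol=1e-9):
    g = G(x)
    eq_ok = all(abs(v) <= tol for v in g[:M_E])
    ineq_ok = all(v <= tol for v in g[M_E:])
    return eq_ok and ineq_ok

# For this toy problem, the minimizer is x = (0.5, 0.5) with f = 0.5.
x_star = (0.5, 0.5)
print(is_feasible(x_star), f(x_star))  # True 0.5
```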
For complex or computationally expensive optimization problems, the objective function, $f(x)$, and the constraint functions, $G_i(x)$, may not be directly computable analytically. Instead, they are often evaluated via time-consuming “black-box” numerical simulations, denoted as $S(x)$. In such cases, the general problem formulation (Equations (2)–(4)) evolves into a simulation-based optimization (SBO) problem.
To mitigate the computational burden, the expensive true functions are replaced by approximations called metamodels (or surrogate models), denoted as $\hat{f}(x)$ and $\hat{G}_i(x)$. These metamodels are trained using modern machine learning techniques to emulate the simulation outputs. Consequently, the SBO formulation can be expressed as
$$\min_{x} \hat{f}(x)$$
subject to
$$\hat{G}_i(x) \le 0, \quad i = 1, \ldots, m,$$
where
$$\hat{f}(x) \approx f(S(x)) \quad \text{and} \quad \hat{G}_i(x) \approx G_i(S(x)).$$
This learning model perspective enables rapid evaluation of candidate solutions while maintaining acceptable accuracy. It allows for efficient exploration of high-dimensional design spaces, after which the final candidate solution is validated using the exact simulator.
This formulation bridges classical analytical optimization with modern data-driven, simulation-driven, and machine learning-assisted optimization paradigms increasingly adopted in engineering design, digital twins, aerospace systems, energy optimization, and industrial decision-making environments.
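A minimal sketch of this surrogate-based loop, assuming a one-dimensional toy "simulator" and an exactly fitted quadratic metamodel (real SBO pipelines would use richer surrogates such as Gaussian processes and iterate the sampling):

```python
def expensive_sim(x):
    """Stand-in for a costly black-box simulation S(x)."""
    return (x - 2.0) ** 2

def fit_quadratic(xs, ys):
    """Fit y = a*x^2 + b*x + c exactly through three points
    using Newton divided differences."""
    x0, x1, x2 = xs
    d01 = (ys[1] - ys[0]) / (x1 - x0)
    d12 = (ys[2] - ys[1]) / (x2 - x1)
    a = (d12 - d01) / (x2 - x0)
    b = d01 - a * (x0 + x1)
    c = ys[0] - d01 * x0 + a * x0 * x1
    return a, b, c

# Sample the expensive simulator at a few design points.
xs = [0.0, 1.0, 4.0]
ys = [expensive_sim(x) for x in xs]

# Train the surrogate f_hat and minimize it analytically (vertex of parabola).
a, b, c = fit_quadratic(xs, ys)
x_candidate = -b / (2 * a)           # minimizer of the surrogate
f_true = expensive_sim(x_candidate)  # final validation with the exact simulator
print(x_candidate, f_true)  # 2.0 0.0
```

The key pattern is that only four calls to `expensive_sim` are made in total: three to train the metamodel and one to validate the final candidate.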
Following the formulation of both analytical and simulation-based optimization frameworks, the optimization workflow proceeds with evaluation, verification, and iterative refinement of candidate solutions. Once candidate solutions have been evaluated, a verification and validation stage is required to determine whether objectives are satisfied and constraints are respected. This stage includes feasibility checking, constraint satisfaction analysis, and convergence assessment. If requirements are not met, an iterative refinement process is performed, which may involve updating decision variables, tuning algorithm parameters, redefining the search space, or improving model formulation. The process continues until predefined stopping criteria are reached.
After defining the optimization problem, the final step consists of selecting an appropriate optimization method. Given the wide diversity of available techniques, establishing a structured classification of optimization methods becomes essential for guiding methodological selection and comparative analysis.

3. Optimization Methods Classification

Combinatorial optimization methods can be divided into two main categories: exact methods [19] and approximate methods [20]. Figure 3 gives an overview of the classification of these methods.

3.1. Exact Optimization Methods

Exact (also called complete) methods produce an optimal solution for a given optimization problem instance. They are generally based on tree search and partial enumeration of the solution space. They are used to find at least one optimal solution to a problem. A variety of exact algorithms and modeling frameworks are detailed in the following subsections, ranging from classical mathematical programming to advanced decomposition techniques, as illustrated in Figure 3.

3.1.1. Mixed-Integer Programming (MIP)

Mixed-integer programming (MIP) [21] is a mathematical optimization method that extends linear programming by allowing certain decision variables to take integer values, while others remain continuous. It is based on the formulation of a mathematical model comprising a linear objective function and linear constraints, accompanied by integrality conditions that allow discrete or logical decisions to be represented within the framework of the problem being studied. These modeling frameworks are widely used to solve combinatorial optimization problems, particularly in the areas of scheduling, production planning, facility location, transportation, and network design.
MIP is a powerful and flexible modeling approach that can accurately represent many complex real-world problems. It guarantees an optimal solution once the solution process terminates and benefits from the advanced exact algorithms built into modern solvers, such as Branch and Bound strategies, cutting plane methods, and Branch and Cut strategies [22].
Nevertheless, mixed-integer programming can present significant computational challenges for large-scale problems involving a large number of integer variables. The combinatorial growth of the search space often results in long computation times. As a result, decomposition techniques such as Benders decomposition and column generation are frequently used to improve the scalability and efficiency of solutions.
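The following toy model (an assumed example, not drawn from the text) illustrates the "mixed" character of MIP: the two integer variables are enumerated, and for each fixing the single continuous variable is set to its closed-form optimum, a stand-in for the LP solve a real solver would perform at each node:

```python
# Toy MIP: maximize 3x + 2y + z
#   subject to x + y + z <= 4, 0 <= z <= 1, x and y nonnegative integers.
best = None
for x in range(5):
    for y in range(5):
        slack = 4.0 - x - y
        if slack < 0:
            continue  # this integer assignment is infeasible
        # Given (x, y), the best continuous choice is the largest z allowed.
        z = min(1.0, slack)
        value = 3 * x + 2 * y + z
        if best is None or value > best[0]:
            best = (value, x, y, z)

print(best)  # optimal value 12.0 at x = 4, y = 0, z = 0.0
```

Real solvers replace this exhaustive enumeration with Branch and Bound over the integer variables, but the decomposition into "enumerate discrete decisions, optimize the continuous part" is the same.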

3.1.2. Branch and Bound Algorithm (B&B)

The Branch and Bound method [23] is based on a tree search that seeks an optimal solution through successive separations and evaluations, as shown in Figure 4. This figure illustrates the Branch and Bound method as a tree search process: the problem is divided into sub-problems (branching), bounds are calculated, non-promising branches are eliminated (pruning), and only feasible nodes are explored to reach the optimal solution.
Branch and bound is based on three main axes: separation, evaluation and path strategy:
Separation: Separation involves dividing the problem into sub-problems. Thus, by solving all the sub-problems and keeping the best solution found, we are sure to have solved the initial problem. This is equivalent to building a tree to enumerate all the solutions. The set of nodes of the tree that are still to be traversed as being likely to contain an optimal solution, still to be divided, is called the set of active nodes.
Evaluation: The evaluation allows us to reduce the search space by eliminating some subsets that do not contain the optimal solution. The objective is to try to evaluate the interest of exploring a subset of the tree. Branch and Bound uses branch elimination in the search tree in the following way: the search for a minimum cost solution consists of storing the lowest cost solution encountered during the exploration and comparing the cost of each node traversed to that of the best solution. If the cost of the considered node is higher than the best cost, we stop the exploration of the branch, and all the solutions of this branch will necessarily be of higher cost than the best solution already found.
The path strategy:
Breadth first: This strategy favors the vertices closest to the root by making fewer separations from the initial problem. It is generally less efficient than the other two strategies presented.
Depth first: This strategy favors the vertices farthest from the root (of greater depth) by applying more separations to the initial problem. This traversal quickly reaches a first complete solution while saving memory.
Best first: This strategy consists of exploring the sub-problems with the best bound. It thus avoids exploring sub-problems whose evaluation is poor with respect to the optimal value.
The Branch and Bound algorithm has a number of advantages and disadvantages. Among the advantages is the guarantee of obtaining an optimal solution for combinatorial optimization problems, which is extremely valuable in many fields. In addition, thanks to its ability to selectively eliminate branches of the search tree that cannot lead to an optimal solution, it reduces computation time by concentrating on the most promising parts of the search space. The flexibility of the algorithm is also a major advantage, as it can be adapted to the specific characteristics of the problem through different strategies for node selection, sub-problem generation, and lower-bound calculation. However, the use of Branch and Bound also has its drawbacks. Firstly, the algorithm suffers from exponential worst-case complexity, meaning that its computation time can become excessively high, or even impractical, for problems with a large number of possible combinations. Furthermore, the efficiency of the algorithm is highly dependent on the quality of the lower bounds used to evaluate the sub-problems. If the bounds are weak, the algorithm will have to explore more branches, which can lead to considerable computation times. Sensitivity to heuristic choices is also a challenge, as sub-optimal decisions in node selection and sub-problem generation can lead to inefficient exploration of the search space.
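A compact sketch of these ideas on the classic 0/1 knapsack problem (the instance is an assumed textbook example), using a depth-first path strategy and an LP-relaxation (fractional) bound for pruning:

```python
def knapsack_bb(values, weights, capacity):
    """Branch and Bound for the 0/1 knapsack (maximization).
    The bound at each node is the greedy fractional (LP) relaxation."""
    n = len(values)
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, value, room):
        # Evaluation: optimistic completion using fractional items.
        b = value
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                b += values[i]
            else:
                return b + values[i] * room / weights[i]
        return b

    def branch(k, value, room):
        nonlocal best
        best = max(best, value)
        if k == n or bound(k, value, room) <= best:
            return  # leaf reached, or node pruned by its bound
        i = order[k]
        if weights[i] <= room:          # Separation: take item i...
            branch(k + 1, value + values[i], room - weights[i])
        branch(k + 1, value, room)      # ...or skip item i.

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

Note how the depth-first traversal quickly establishes an incumbent solution, which the bound then uses to prune whole subtrees.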

3.1.3. Linear Programming (LP)

Linear programming is a method of mathematical optimization that aims to maximize or minimize a linear function while respecting linear constraints. It is based on the formulation of a mathematical model with a linear objective function and linear constraints representing the limits and requirements of the problem. The objective is to find the values of the decision variables that optimize the objective function while satisfying the given constraints. Applications of linear programming can be found in a variety of fields, including operations management, finance, logistics, production, and economics. It can be used to solve problems such as planning, resource allocation, inventory optimization, cost minimization, and many others. The benefits of linear programming are manifold. It guarantees optimal solutions, i.e., the best possible solutions according to the defined criteria. It also offers flexible modeling to take account of different constraints and objectives. In addition, linear programming benefits from efficient solution methods, such as the simplex method, which enable results to be obtained quickly and sensitivity analyses to be carried out to assess the impact of parameter changes. However, linear programming has its limits. It is suited to problems that can be modeled linearly, and nonlinear problems require other approaches. Solving large problems can be complex and require significant computing resources. In addition, linear programming can be sensitive to input data, meaning that small variations can have an impact on results.
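For intuition, the sketch below solves a small two-variable LP (an assumed textbook-style instance) by enumerating intersections of constraint pairs. This exploits the same geometric fact the simplex method relies on, namely that the optimum of a bounded feasible LP lies at a vertex of the feasible polygon; it is a pedagogical stand-in, not a practical LP solver:

```python
from itertools import combinations

def lp2_vertex_solve(c, A, b):
    """Solve max c.x subject to A x <= b for two variables by enumerating
    vertices (intersections of constraint pairs). Assumes the feasible
    region is bounded and nonempty; returns (value, x, y)."""
    eps = 1e-9
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a1, b1), (a2, b2) = A[i], A[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue  # parallel constraint lines: no intersection vertex
        x = (b[i] * b2 - b[j] * b1) / det
        y = (a1 * b[j] - a2 * b[i]) / det
        # Keep the vertex only if it satisfies every constraint.
        if all(A[k][0] * x + A[k][1] * y <= b[k] + eps for k in range(len(A))):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

# Toy LP: max 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0.
A = [[1, 0], [0, 2], [3, 2], [-1, 0], [0, -1]]
b = [4, 12, 18, 0, 0]
sol = lp2_vertex_solve([3, 5], A, b)
print(sol)  # (36.0, 2.0, 6.0)
```

Whereas this brute force inspects every vertex, the simplex method walks from vertex to adjacent vertex, improving the objective at each step, which is what makes it efficient in practice.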

3.1.4. Dynamic Programming (DP)

Dynamic programming involves embedding the problem in a family of problems of the same nature but of different sizes and finding a recurrence relation linking the optimal solutions of these problems. Dynamic programming has proved to be a powerful method for solving a variety of optimization problems in different domains, such as planning, partitioning, resource allocation, common-sequence search, and optimal path problems. It offers significant advantages in solving optimization problems: it ensures that optimal solutions are obtained by reusing the results of sub-problems that have already been solved, and by storing solutions in a table, it avoids redundant calculations, which saves computing time. In addition, thanks to efficient management of overlapping sub-problems, it avoids unnecessary repetition of calculations. However, implementing this technique can be complex, as it requires a thorough understanding of the problem and the recursive relationships between the sub-problems. In addition, it can require significant memory usage and is sensitive to the quality of the recurrence functions used. Despite these limitations, dynamic programming remains a powerful method for efficiently solving optimization problems, providing optimal solutions while reducing the computation time required.
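As an illustration of the table-based reuse of sub-problems, the sketch below computes the length of a longest common subsequence, an instance of the common-sequence search mentioned above; the two strings are an assumed example:

```python
def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming.
    dp[i][j] = LCS length of a[:i] and b[:j]; each cell reuses
    previously solved sub-problems instead of recomputing them."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Recurrence: extend the LCS of the two shorter prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Recurrence: drop the last symbol of one string.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 (a common subsequence is "GTAB")
```

The naive recursive formulation of the same recurrence is exponential; the table reduces it to O(mn) time at the cost of O(mn) memory, the time/memory trade-off discussed above.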

3.1.5. Cutting Plane Method (CPM)

The cutting plane method is a technique used in linear programming to strengthen linear relaxations and improve lower bounds in solving optimization problems. It involves adding additional constraints, called cutting planes, to reduce the space of feasible solutions. This approach is applicable to a variety of problems, such as the knapsack problem, production planning, and vehicle routing. The advantages of cutting planes lie in their ability to improve lower bounds, restrict the space of feasible solutions, and adapt to the specific characteristics of the problem. However, the generation of cutting planes can be complex, which can have an impact on solution performance, and their effectiveness can vary depending on the specific characteristics of the problem under consideration.
Figure 5 shows a 2D geometric illustration of the cutting plane method. The figure shows the following: the feasible region of the relaxed linear problem (a polygon), an optimal fractional solution, one or more cutting planes (linear constraints) added as straight lines, the feasible region shrinking after each cut, and the final integer optimal solution highlighted. This visually conveys how cuts iteratively eliminate infeasible fractional solutions while preserving integer ones.

3.1.6. Branch and Cut Method (B&C)

The Branch and Cut method is a sophisticated optimization technique that integrates the principles of the Branch and Bound algorithm and linear programming. It is specifically used to solve integer linear programming (ILP) problems, which involve finding an optimal solution for a linear objective function under linear constraints, while requiring decision variables to take integer values. The Branch and Cut method works by iterating over fractional solutions obtained from a linear relaxation of the ILP. At each stage, the linear relaxation provides a bound on the optimal value; if the relaxed solution happens to satisfy the integrality constraints, it solves the current sub-problem. If not, the method identifies a violated valid inequality, known as a cutting plane, which is added to the linear relaxation to strengthen it and tighten the bound. The branching process involves choosing a variable from the fractional solution and creating two branches with different constraints on this variable. This leads to the construction of a tree structure where each node represents a sub-problem with its own linear relaxation. The method then continues by solving the linear relaxation at each node and branching again until a feasible integer solution is found or the impossibility of finding such a solution is proven. The Branch and Cut method benefits from the combined strengths of Branch and Bound and linear programming. By adding cutting planes and performing iterative branching, it efficiently explores the solution space to converge on an optimal integer solution. It is particularly well suited to solving large-scale mixed-integer linear programs containing both continuous and discrete variables. However, the success of the Branch and Cut method depends on several factors. The efficiency of the cutting planes in improving the bound is essential, and designing relevant algorithms to generate these planes can be a challenge.
Furthermore, the branching strategy plays a crucial role in the efficiency of the method, and it is important to choose variables and constraints wisely to achieve good performance. Finally, solving linear relaxations at each node can become complex and require significant computational resources for large-scale problems.

3.1.7. Column Generation Method (CGM)

The principle of column generation is based on the fact that not all the variables in a linear program will necessarily be used to reach the optimal solution. The objective of this method is to solve a reduced problem with a limited set of variables. The initial problem is called the master problem, and the reduced problem is called the restricted problem. The restricted problem is simpler to solve, but its set of variables may not contain those that yield the optimal solution of the master problem; in that case, variables that can improve the solution must be added to the restricted problem. The problem of finding the best variable to add to the restricted problem is called the sub-problem associated with the master problem (or oracle). Its objective is to find the variable (or column) of minimum reduced cost (i.e., the most promising for improving the solution). The reduced cost of the variables is calculated using the dual variables obtained after solving the restricted problem. The point of the dual variable used in the sub-problem is called the separation point. It is often an optimal solution to the dual of the restricted problem.
Column generation reduces the complexity of the model and improves the efficiency of the solution. The advantages of this method are its ability to handle large problems by generating only the relevant columns, which reduces the time taken to solve the problem. However, its implementation can be complex and requires the expertise of a specialist.

3.1.8. Benders Decomposition (BD)

Benders decomposition (BD) [24] is an exact optimization method particularly suited to solving large mixed linear problems involving both integer and continuous variables. This approach is based on the idea of dividing the initial problem into two complementary sub-problems in order to reduce the complexity of the solution. More specifically, the problem is divided into a main problem, which deals with integer variables, and a sub-problem, which deals with continuous variables.
Benders decomposition works iteratively. Initially, the main problem is solved by considering only the integer variables and a limited set of constraints. Based on this solution, the sub-problem is then solved to determine the optimal values of the continuous variables. If the general solution obtained does not satisfy all constraints or can be improved, Benders cuts (additional constraints) are generated and added to the main problem. This process is repeated until no new cuts can improve the solution, thus ensuring global optimality [25].
Benders decomposition is widely used in various fields requiring the resolution of complex combinatorial problems. It has applications in production planning and supply chain management, network design and optimization (energy, transportation, and telecommunications), and facility location problems combining strategic and operational decisions [26]. This method is also effective in stochastic contexts where certain variables are subject to uncertainty or variation. One of the main advantages of Benders decomposition is its ability to effectively separate the integer and continuous components of a problem, which helps reduce the complexity of the search and, in many cases, significantly decrease computation time. In addition, this approach is flexible enough to adapt to different problem structures and types of constraints. However, it also has some limitations. Convergence to the optimal solution can be slow when the generated cuts provide little information, and the correct formulation of sub-problems and cuts requires specific expertise. Furthermore, for very large or poorly structured problems, the number of iterations required can be high, which can increase the overall cost of the calculation.

3.2. Approximate Optimization Methods

Approximate optimization methods, which include approximation algorithms, heuristics, and metaheuristics, are techniques used to obtain near-optimal solutions to complex optimization problems. Unlike exact methods, which aim to find the global optimal solution, approximate methods focus on the efficient exploration of the search space in order to find solutions of acceptable quality within a reasonable time. These methods are detailed in the following subsections, including approximation algorithms with formal performance guarantees and various classes of heuristics and metaheuristics.

3.2.1. Approximation Algorithms

Approximation algorithms are a class of optimization methods suited to complex combinatorial problems for which exact methods become impractical due to their computational cost. Unlike exact methods, which aim to find the global optimal solution, approximation algorithms seek to produce solutions close to the optimum within a reasonable time frame while guaranteeing, in some cases, a maximum deviation from the optimal solution. These methods are particularly useful for NP-hard problems where exhaustive search or exact mathematical formulations lead to exponential complexity.
Approximation algorithms work by constructing solutions whose objective value is bounded relative to the optimum according to a predefined approximation ratio. This ratio bounds the maximum possible deviation from the optimal solution and allows the performance of the algorithm to be evaluated a priori. This characteristic distinguishes approximation algorithms from classical heuristics, which can provide good empirical results but without any formal guarantee of quality.
These techniques often rely on greedy strategies, problem relaxations, or combinatorial constructions. For example, greedy algorithms are frequently used for covering or knapsack problems because of their simplicity and speed, while linear programming relaxations followed by rounding procedures can be used to obtain integer solutions close to the optimum. These approaches have found numerous applications in network design, planning, scheduling, and resource allocation.
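As a concrete illustration of the greedy strategy for covering problems, the sketch below implements the classical greedy set-cover algorithm, which repeatedly picks the subset covering the most uncovered elements and is known to achieve a logarithmic approximation ratio; the instance data are hypothetical.

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset covering the most
    uncovered elements. Achieves an H(n) ~ ln(n) approximation ratio."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= best
    return chosen

# Hypothetical instance for illustration
universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = greedy_set_cover(universe, subsets)
print(len(cover))  # 2 sets suffice here: {1,2,3} and {4,5}
```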
One of the main advantages of approximation algorithms is their ability to balance computational efficiency and solution reliability. They allow large-scale problems to be handled while maintaining predictable performance. However, their effectiveness depends heavily on the structure of the problem, and for some problems, no effective approximation scheme exists unless widely believed complexity assumptions (such as P ≠ NP) fail. Furthermore, although the approximation ratio is theoretically bounded, the solutions obtained may still deviate noticeably from the optimum in highly constrained or atypical cases.
Approximation algorithms thus form an intermediate approach between exact methods and metaheuristics. They offer an attractive compromise: faster computation than exact methods, together with worst-case quality guarantees that simple heuristics lack, making them particularly well suited to large-scale optimization problems or those subject to time constraints.

3.2.2. Heuristics Concepts

The word heuristic derives from the Greek verb heuriskein, which means “to find.” A heuristic is an algorithm that finds, in polynomial time, a feasible solution to a difficult optimization problem with respect to a given objective function; the solution is not necessarily optimal. This type of method encodes a strategy (a way of thinking) based on knowledge of the problem, and heuristics are used to find good decisions for the design or management of a wide range of complex systems. The following section describes a variety of (meta)heuristic search methods.

3.2.3. Metaheuristics

The word metaheuristic combines two Greek elements: meta and heuristic. Meta is a prefix meaning “beyond,” i.e., at a higher level. Metaheuristics are methods generally inspired by nature. Unlike heuristics, they can be applied to many problems of different natures. For this reason, they can be regarded as higher-level, modern heuristics dedicated particularly to the solution of optimization problems. Their goal is to reach a global optimum while avoiding local optima. Metaheuristics can be divided into two classes:
  • Single-solution metaheuristics: These methods process one solution at a time in order to find the optimal solution.
  • Population-based metaheuristics: These methods evolve a population of solutions at each iteration until a stopping criterion is met, after which the best solution found is returned.
Single-solution metaheuristics
Single-solution metaheuristics have several names: they can be called local search methods or trajectory methods. Their mechanism involves iteratively evolving a solution in the search space in order to move towards the global optimum. Figure 6 presents the most well-known methods, which include: the descent method, simulated annealing, tabu search, the GRASP method, etc.
The first category, Basic Local Search, encompasses fundamental methods that explore the immediate neighborhood of a solution to find local improvements:
Descent Algorithm (DS): The DS may be the most intuitive and simple method in the field of optimization; it is also called “hill climbing” in maximization problems. The principle is to start from a solution, x0, and to choose a solution, x, in the neighborhood of x0 such that x improves the objective value. The descent terminates when all candidate neighbors are worse than the current solution; a local optimum is then reached. Several strategies exist for choosing the next solution in the neighborhood: the algorithm can choose the neighbor with the best fitness among all solutions in the neighborhood, the first neighbor that improves the fitness, the neighbor that improves the fitness the least (“the least good solution”), or even a random neighbor. The main weakness of this descent method is that it is easily trapped in a local optimum. A standard improvement is to launch several restarts from new randomly generated solutions whenever a local optimum is found; this is called hill climbing with random restarts.
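The descent mechanism and its random-restart remedy can be sketched as follows; the one-dimensional landscape, with a local peak at x = 10 and the global peak at x = 50, is invented purely for illustration.

```python
import random

def hill_climb(f, x0, neighbors, max_iter=1000):
    """Basic ascent: move to the best improving neighbor; stop at a local optimum."""
    x = x0
    for _ in range(max_iter):
        best = max(neighbors(x), key=f, default=x)
        if f(best) <= f(x):
            return x  # no neighbor improves: local optimum reached
        x = best
    return x

def hill_climb_restarts(f, sample, neighbors, restarts=30, seed=0):
    """Hill climbing with random restarts to escape local optima."""
    rng = random.Random(seed)
    return max((hill_climb(f, sample(rng), neighbors) for _ in range(restarts)),
               key=f)

# Toy landscape: local peak at x = 10 (value 10), global peak at x = 50 (value 20)
f = lambda x: max(10 - abs(x - 10), 20 - abs(x - 50))
neighbors = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 60]
best = hill_climb_restarts(f, lambda rng: rng.randint(0, 60), neighbors)
print(best, f(best))  # the restarts reliably locate the global peak
```

A single climb started near 0 would stall on the local peak at 10; the restarts give the search repeated chances to land in the basin of the global peak.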
Simulated Annealing (SA): SA derives its principle from metallurgy: a material undergoes cycles of slow cooling and reheating in order to minimize its energy, following Boltzmann's laws of thermodynamics. S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi in 1983 [27], and later V. Černý in 1985 [28], adapted this process to optimization: the objective function, f, corresponds to the energy of the material, and the parameters of the algorithm are the stopping criterion, the initial temperature T0, the criterion for changing the temperature step, and the temperature decay function. During the search for the optimum, the temperature decreases: the algorithm starts as a random walk; then, worsening solutions are accepted less and less often, and the search converges towards a solution in the search space. A compromise must therefore be found when tuning the decay: if the temperature decreases too quickly, the algorithm risks being trapped in a local optimum. Moreover, several temperature decay laws exist. Simulated annealing was adapted to continuous optimization problems by Patrick Siarry [29], and it has been widely used in various fields of application.
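A minimal simulated-annealing sketch with a geometric cooling schedule (one of the several decay laws mentioned above) is given below; the multimodal objective and all parameter values are illustrative choices, not recommendations.

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, alpha=0.95, steps=2000, seed=1):
    """Minimize f with the Metropolis acceptance rule and a geometric
    cooling schedule t <- alpha * t (one common decay law among several)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0, 1)                  # neighbor move
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                        # accept (always if improving)
            if fx < fbest:
                best, fbest = x, fx
        t = max(alpha * t, 1e-6)                 # cool down
    return best, fbest

# Toy multimodal objective (illustrative): minima near x = -2 and x = 2
f = lambda x: (x * x - 4) ** 2 + 0.3 * math.sin(5 * x)
x, fx = simulated_annealing(f, x0=-5.0)
print(round(x, 2), round(fx, 3))
```

At high temperature the search behaves like a random walk; as t shrinks, the exponential acceptance probability for worsening moves vanishes and the chain settles into one of the deep wells near |x| = 2.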
Tabu Search (TS): The TS method was proposed by Fred Glover in 1986. It maintains a tabu list containing solutions that have already been visited and to which the search is forbidden to return. The objective is to prevent the algorithm from being trapped in a local optimum, since the method is allowed to move through solutions that deteriorate the fitness. The size of the memory (the number of memorized elements) is a parameter of the algorithm that governs the diversification and intensification equilibrium mentioned previously. Indeed, if the memory is small, intensification is favored because only a restricted number of solutions are forbidden; as the memory size increases, diversification is favored because the algorithm revisits previous regions less and less, as they are likely to be close to the current solution. The procedure starts with a randomly generated initial solution; the method then iteratively selects the best solution in the neighborhood and memorizes the previous solution, which avoids so-called cycling phenomena (the algorithm looping over the same solutions). There are also methods with an adaptive memory [30], in which the size of the memory can be adapted to the search context and even used to create new solutions.
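The following sketch shows the core tabu mechanism on a toy landscape with a local peak at x = 3 and the global peak at x = 9; the short-term memory forbids recently visited solutions, forcing the search through the worsening region between the two peaks. All data are hypothetical.

```python
from collections import deque

def tabu_search(f, x0, neighbors, tenure=5, max_iter=100):
    """Tabu search: always move to the best non-tabu neighbor, even if it
    worsens the objective, while keeping recently visited solutions tabu."""
    x = best = x0
    tabu = deque([x0], maxlen=tenure)      # short-term memory
    for _ in range(max_iter):
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:
            break
        x = max(candidates, key=f)         # may deteriorate the fitness
        tabu.append(x)
        if f(x) > f(best):
            best = x
    return best

# Toy landscape: local peak at x = 3 (value 5), global peak at x = 9 (value 9)
values = [0, 2, 4, 5, 3, 1, 2, 5, 7, 9, 6]
f = lambda x: values[x]
neighbors = lambda x: [y for y in (x - 1, x + 1) if 0 <= y < len(values)]
result = tabu_search(f, 0, neighbors)
print(result)  # escapes x = 3 and reaches x = 9
```

Plain descent would stop at x = 3; because x = 2 is tabu when the search sits there, the only admissible move is the worsening step to x = 4, from which the search proceeds to the global peak.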
GRASP: A greedy algorithm follows a very simple principle: it constructs a solution step by step, making the locally best choice at each step, and therefore acts locally in a specific area. The Greedy Randomized Adaptive Search Procedure (GRASP) was proposed by Feo and Resende. This method alternates construction and improvement phases until the stopping criterion is reached. The GRASP construction step is a randomized greedy procedure: it generates a feasible solution from a list of potential choices called the restricted candidate list (RCL). This list is sorted, which constitutes the greedy component of the algorithm, and the next element is chosen randomly from it, which constitutes the randomized component. The improvement step uses the solution generated in the construction phase as the initial solution of a local search, which can be a descent, a tabu search, or any other heuristic.
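A compact GRASP sketch for the 0/1 knapsack problem is given below: the construction phase picks randomly from a restricted candidate list sorted by value/weight ratio, and the improvement phase performs a simple swap-based local search. The item data and the alpha parameter are hypothetical.

```python
import random

def grasp_knapsack(items, capacity, alpha=0.3, iters=30, seed=0):
    """GRASP sketch for 0/1 knapsack: greedy randomized construction with a
    restricted candidate list (RCL), followed by a simple improvement phase."""
    rng = random.Random(seed)
    best_val = -1
    for _ in range(iters):
        # -- construction: pick randomly among the best remaining candidates
        remaining, sol, weight = list(items), [], 0
        while True:
            feasible = [i for i in remaining if weight + i[1] <= capacity]
            if not feasible:
                break
            feasible.sort(key=lambda i: i[0] / i[1], reverse=True)  # value/weight
            rcl = feasible[:max(1, int(alpha * len(feasible)))]
            pick = rng.choice(rcl)
            sol.append(pick); weight += pick[1]; remaining.remove(pick)
        # -- improvement: try swapping one item in for one item out
        improved = True
        while improved:
            improved = False
            for out in sol:
                for inn in [i for i in items if i not in sol]:
                    if weight - out[1] + inn[1] <= capacity and inn[0] > out[0]:
                        sol.remove(out); sol.append(inn)
                        weight += inn[1] - out[1]
                        improved = True
                        break
                if improved:
                    break
        best_val = max(best_val, sum(v for v, w in sol))
    return best_val

# Hypothetical (value, weight) items
items = [(10, 5), (8, 4), (7, 3), (6, 3), (4, 2)]
best = grasp_knapsack(items, capacity=10)
print(best)
```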
To overcome the limitations of basic local search and improve the exploration of the search space, a second family of methods, known as advanced, iterative, and large-scale methods, has been developed. These approaches build on local search by using iterative strategies, modifying neighborhoods, or exploring large portions of the solution space in order to escape local optima and explore the search space more efficiently:
Variable Neighborhood Search (VNS): VNS systematically explores several neighborhood structures to avoid stagnation in local optima. Starting from an initial solution, a local search is performed in a given neighborhood; if no improvement is found, the method moves on to another, often larger, neighborhood and repeats the local search. The key idea is that a local optimum relative to one neighborhood may not be an optimum relative to another, which increases the chances of finding better solutions while maintaining search efficiency. VNS is widely used in planning, routing, and network design problems, and continues to be the subject of recent research [31].
Iterative Local Search (ILS): ILS extends basic local searches by iterating from modified solutions. Once a local optimum has been reached, the current solution is perturbed, for example, by larger movements or random modifications in order to escape the local optimum basin. A further local search is then performed from this perturbed solution, and the process is repeated until the stopping criterion is satisfied. By combining exploration and exploitation, this approach allows different regions of the search space to be visited [32].
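The perturb-then-research cycle of ILS can be sketched as follows on a simple bimodal integer landscape (peaks at x = 10 and x = 50) whose data are purely illustrative; the perturbation is a bounded random jump chosen large enough to be able to leave the current basin of attraction.

```python
import random

def local_search(f, x, neighbors):
    """Greedy improvement until a local optimum is reached."""
    while True:
        best = max(neighbors(x), key=f, default=x)
        if f(best) <= f(x):
            return x
        x = best

def iterated_local_search(f, x0, neighbors, perturb, iters=100, seed=0):
    """ILS: perturb the current local optimum, restart local search from the
    perturbed point, and keep the best solution found."""
    rng = random.Random(seed)
    x = best = local_search(f, x0, neighbors)
    for _ in range(iters):
        y = local_search(f, perturb(x, rng), neighbors)
        if f(y) >= f(x):          # simple "accept if not worse" criterion
            x = y
        if f(x) > f(best):
            best = x
    return best

# Bimodal toy landscape: local peak at 10 (value 10), global peak at 50 (value 20)
f = lambda x: max(10 - abs(x - 10), 20 - abs(x - 50))
neighbors = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 60]
perturb = lambda x, rng: min(60, max(0, x + rng.randint(-25, 25)))
result = iterated_local_search(f, 0, neighbors, perturb)
print(result, f(result))
```

If the perturbation radius were smaller than the width of the valley between the peaks, the search would never escape the first basin; choosing the perturbation strength is the key design decision in ILS.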
Large Neighborhood Search (LNS): LNS modifies large parts of a solution at each iteration, rather than a few variables at a time. A part of the solution is deleted (destroyed) and then rebuilt (repaired) using a heuristic adapted to the problem. This method takes account of much larger neighborhoods than in conventional local search, allowing it to escape deep local optima and explore promising regions of the search space. LNS is particularly effective for large-scale problems and has recently been studied in hybrid frameworks incorporating learning mechanisms or dynamic neighborhood selection [33].
Adaptive Large Neighborhood Search (ALNS): ALNS extends LNS by maintaining a set of destruction and repair operators and selecting them adaptively based on their past performance. At each iteration, a destruction operator and a repair operator are chosen based on adaptive probabilities reflecting their past effectiveness and then applied to the current solution to generate a new candidate solution. This adaptive selection allows for a better balance between diversification and intensification, thereby improving the quality and robustness of the solutions. ALNS is widely used in modern combinatorial optimization contexts, such as vehicle routing, planning, and mixed integer programming problems, and it remains an active area of research [34].
Population-based metaheuristics
Population search methods work on a population of solutions rather than on a single solution. The general principle of all these methods is to combine solutions to form new ones, trying to inherit the “good” characteristics of the parent solutions. The process is repeated until a stopping criterion is satisfied (maximum number of generations, number of generations without improvement, time limit reached, etc.). Population-based methods are classified into several categories, as shown in Figure 7.
Evolutionary Algorithms: Evolutionary algorithms were among the first biologically inspired algorithms used to solve optimization problems. They emulate the mechanisms of biological evolution, in which successive generations each contain a set of individuals called a population [35]. Evolutionary algorithms are based on three main elements: the population, the fitness function, and the evolution mechanism. The individuals of the population are characterized by a chromosome representing a solution in the decision space. The fitness value measures the quality of an individual in the objective space, while the evolution mechanism eliminates individuals with low fitness using the selection operator and generates new individuals using the mutation and crossover operators. Evolutionary algorithms include several commonly used variants, such as genetic algorithms (GAs), which apply crossover and mutation operations to individuals; evolution strategies (ESs), which work with real-valued vectors for continuous optimization; and genetic programming (GP), which evolves computer programs in the form of syntax trees. Each of these algorithms offers a different approach to solving specific problems, demonstrating the diversity and adaptability of the evolutionary paradigm. Evolutionary algorithms are widely used to solve a variety of complex problems in different fields, such as optimization, design, and planning. Their strength lies in their ability to explore vast search spaces and adapt to changing conditions. They generate new candidate solutions using selection, recombination, and mutation mechanisms. Moreover, their robustness to noise and incomplete data makes them reliable tools for finding quality solutions even with limited information. However, evolutionary algorithms have a few limitations. Computation time can be a challenge, especially for complex problems requiring many iterations.
In addition, there is a risk of convergence towards local optima rather than the optimal global solution. However, diversification strategies can be implemented to overcome this problem. Finally, the adjustment of parameters, such as population size or mutation rates, may require expertise to achieve optimal performance. By understanding these aspects, evolutionary algorithms can be used effectively to find high-quality solutions in a variety of application domains.
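To ground the description of the three elements (population, fitness, evolution mechanism), here is a minimal GA sketch with tournament selection, one-point crossover, and bit-flip mutation on the standard OneMax toy benchmark (maximize the number of ones in a bitstring); the population size, generation count, and mutation rate are arbitrary illustrative settings.

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, gens=60, p_mut=0.02, seed=0):
    """Minimal GA sketch: tournament selection, one-point crossover,
    bit-flip mutation, applied to bitstring chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def select():                            # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt                                # next generation
    return max(pop, key=fitness)

# OneMax: maximize the number of ones (a standard toy benchmark)
best = genetic_algorithm(fitness=sum, length=20)
print(sum(best))
```

Selection pressure drives the count of ones upward over the generations, while mutation maintains the diversity needed to fix the last remaining bits.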
Swarm-Based Algorithms: The second category includes algorithms inspired by natural swarms. Unlike the first category, which contains only algorithms of the old generation, this one contains algorithms of both generations. Algorithms of the old generation include the following: particle swarm optimization (PSO), which updates the positions of a set of particles represented by vectors in the decision space [36]; ant colony optimization (ACO), inspired by the foraging behavior observed in ants [37]; and artificial immune system (AIS) optimization, emulating the protective reaction of the human immune system against intruders [38]. The new generation includes a variety of algorithms, such as bacterial foraging optimization (BFO), inspired by the foraging behavior of bacteria [39]; artificial bee colony (ABC) optimization, inspired by the foraging behavior of bees [40]; biogeography-based optimization (BBO), inspired by the migration of species from one island or habitat to another and the appearance and extinction of species [41]; the dragonfly algorithm (DA), inspired by the social interaction of dragonflies during navigation and foraging [42]; cuckoo search (CS), emulating the aggressive reproduction of some cuckoo species that lay their eggs in the nests of other species [43]; grey wolf optimization (GWO), inspired by the dominant and strict hierarchy observed in grey wolves [44]; and the salp swarm algorithm (SSA), inspired by the foraging behavior of salps in the ocean [45]. Swarm-based algorithms are applied to a variety of problems, such as continuous optimization and planning. They are particularly well suited to continuous optimization, finding optimal values for real-valued variables, and they can also be used to solve planning problems by finding efficient solutions for resource management, task scheduling, or route planning.
The advantages of swarm-based algorithms lie in their ability to efficiently explore the search space and adapt to changes in the environment. They rapidly converge on promising regions and offer dynamic adaptability. However, swarm-based algorithms can have limitations, including convergence to local optima instead of the global optimum and parameter sensitivity, which requires appropriate tuning. Despite these limitations, they remain powerful tools for solving a wide range of optimization and planning problems.
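The canonical PSO velocity-and-position update mentioned above can be sketched as follows, applied to the sphere benchmark; the inertia weight and acceleration coefficients are common textbook values, used here purely for illustration.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Textbook PSO sketch: each velocity combines inertia, a cognitive pull
    toward the particle's personal best, and a social pull toward the
    swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                        # personal bests
    g = min(P, key=f)[:]                         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

# Sphere function: global minimum 0 at the origin
sphere = lambda x: sum(v * v for v in x)
g = pso(sphere, dim=2, bounds=(-5, 5))
print(round(sphere(g), 6))
```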
Physics-Based Algorithms: The third category includes physics-based algorithms, for example, the gravitational search algorithm (GSA), emulating the gravitational forces between particles [46]; the charged system search (CSS) algorithm, in which individuals are treated as charged particles that apply electric forces to each other, resulting in particle movements with a certain speed and acceleration [47]; and black hole-based optimization (BH), whose principle is to guide the individuals towards the best individuals of the current iteration, called black holes [48]; individuals that enter the black hole fields are subsequently replaced by new individuals. The galaxy-based search algorithm (GbSA) mimics the spiral motion of galaxies, escaping local optima through chaos (randomness) [49]. Physics-based algorithms are used to solve a variety of problems, such as physical simulation and the optimization of physical systems. They offer significant advantages, including high accuracy and realism, as well as adaptability to different situations. These algorithms can accurately model the physical behavior of systems and take various constraints into account. However, they can be computationally expensive and may require simplified models, which can limit their applicability in certain contexts.
Algorithms Based on Human Society: A relatively new category includes algorithms based on human society. For example, the decomposition-based brainstorming optimization (DBSO) algorithm simulates brainstorming, which is a collective behavior observed in humans [50]. It is a simple algorithm based on the principle of dividing solutions into several clusters. New solutions are generated by applying genetic operators to one or more individuals. The harmony search (HS) algorithm imitates musicians’ improvisation [51]. The algorithm uses stochastic search, harmony memory, and a pitch adjustment rate to find the perfect harmony state and subsequently find the optimal solutions. Teaching–learning-based optimization (TLBO) is inspired by the traditional learning process [52]. Algorithms based on human society offer significant advantages in various fields of application, such as resource allocation, social planning, modeling crowd behavior and multi-agent collaboration. They stand out for their ability to realistically model social interactions, adapt to changing environments, and foster the emergence of collective solutions. However, they also have limitations, such as modeling complexity, parameter sensitivity and scalability. Despite these limitations, algorithms based on human society offer promising approaches for solving complex problems by drawing inspiration from social behavior and taking advantage of interactions between individuals or agents.
Plant-Based Algorithms: Still other categories are emerging, such as plant-based algorithms [53], including the flower pollination algorithm (FPA) [54]. The tree physiology optimization (TPO) algorithm is inspired by the plant development system [55]; it is mainly based on the root-to-sprout ratio, which governs the developmental regulation and functional balance between the upper and lower parts of the plant. Another example is invasive weed optimization (IWO), inspired by weed ecology and biology [56]. Plant-based algorithms offer significant advantages in a variety of application areas, such as the optimization of energy distribution networks, antenna placement, the design of sensor networks, and route planning. Their adaptability to the environment, their ability to search efficiently for solutions, and their resilience make them promising tools. However, the complexity of modeling, the interpretation of results, and adaptation to other problems can pose challenges. Despite these limitations, plant-based algorithms offer interesting approaches to optimization by drawing inspiration from plant biological mechanisms.
Hybrid and Adaptive Algorithms: Hybrid algorithms typically integrate two or more algorithms in order to exploit the complementary advantages of each method. For example, a combination of particle swarm optimization (PSO) and differential evolution (DE) can increase the overall efficiency of the search by improving both exploration and solution diversity. A recent study proposes a hybrid algorithm called MDE-DPSO, which combines dynamic inertia weighting strategies and the DE mutation–crossover operator applied to PSO to improve the solution outcome and help particles escape local optima. Comparisons on several standard test suites show that this method is highly competitive compared to other advanced metaheuristics [57]. Adaptive algorithms, on the other hand, automatically modify their parameters, operators, or strategies based on feedback from the search process. A recent example is adaptive optimization using particle swarm optimization with landscape learning, where fitness landscape analysis is used to dynamically adjust PSO parameters during optimization, thereby improving performance on test functions and feature selection problems [58]. Other work proposes adaptive variants of PSO that incorporate learning strategies based on the state of the system, adaptive hybrid methods, or techniques inspired by group theory to improve the balance between exploration and exploitation [59].
Hybrid and adaptive algorithms offer significant advantages for solving complex optimization problems, particularly those that are dynamic, multimodal, or involve conflicting objectives. By leveraging the strengths of multiple methods and adapting to the context of the problem, these algorithms efficiently explore the search space and produce high-quality solutions. Recent theoretical reviews also address the robustness of hybrid metaheuristics with respect to transformations of objective functions, confirming their effectiveness in different optimization contexts [60]. However, these approaches can be more computationally expensive and require careful design to ensure effective coordination between the combined algorithms or relevant parameter tuning. Despite these challenges, hybrid and adaptive algorithms have proven particularly effective in areas such as engineering optimization, planning, resource allocation, and feature selection in machine learning.

3.2.4. Simulation-Based Optimization (SBO) and Machine Learning Metamodels

As introduced in the SBO formulation (Section 2), many contemporary optimization problems rely on highly complex simulators, such as Computational Fluid Dynamics (CFD) or Finite Element Analysis (FEA), where a single evaluation can be extremely computationally expensive. In these cases, the general problem formulation (Equations (2)–(4)) evolves into a simulation-based optimization (SBO) problem, in which the true objective and constraint functions are replaced by approximations known as metamodels or surrogate models. These models are trained on simulation outputs, often using advanced machine learning techniques, to rapidly predict the performance of candidate solutions without requiring execution of the full simulator at every step. Traditionally, SBO relied on simple polynomial response surfaces. However, modern approaches increasingly leverage sophisticated machine learning and deep learning models. These surrogate models efficiently guide exact, heuristic, or metaheuristic search algorithms by providing quick estimates of the fitness landscape, significantly reducing the computational cost associated with evaluating candidate solutions. Table 1 provides a comparative overview of the main machine learning models used as metamodels in SBO, highlighting their characteristics, strengths, limitations, and typical applications.
These surrogate models enable rapid exploration of high-dimensional design spaces while maintaining acceptable accuracy. Final candidate solutions can then be validated and, if necessary, refined using the exact simulator. The SBO approach, combined with machine learning, substantially reduces computational cost for problems involving expensive simulations and enables effective optimization under complex constraints.
The effectiveness of SBO depends on the quality of the training data and the appropriate selection of metamodels. More complex models, such as deep neural networks or PINNs, require careful hyperparameter tuning and may be prone to overfitting or poor extrapolation outside the sampled design space. Nonetheless, these approaches are widely applied across diverse domains, including aerodynamic and structural optimization, energy planning, robotics, and industrial design with digital twins.
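A minimal metamodel-based loop can be sketched in one dimension: fit a quadratic surrogate to three samples of an expensive function, minimize the surrogate analytically, and spend true evaluations only on the surrogate's candidates. The stand-in objective and the three-point initial design are hypothetical; real SBO would use the richer surrogates discussed above (Kriging, neural networks) in place of the quadratic.

```python
def expensive_simulation(x):
    """Stand-in for a costly simulator call (hypothetical objective,
    minimum at x = 1.7)."""
    return (x - 1.7) ** 4

def surrogate_optimize(f, samples, iters=10):
    """SBO sketch: fit an interpolating quadratic metamodel to three samples,
    minimize the metamodel analytically, evaluate the true function only at
    that candidate, and keep the three best samples."""
    pts = [(x, f(x)) for x in samples]               # initial design
    for _ in range(iters):
        (x1, y1), (x2, y2), (x3, y3) = sorted(pts)   # order by x
        d1 = (y2 - y1) / (x2 - x1)                   # Newton divided differences
        d2 = ((y3 - y2) / (x3 - x2) - d1) / (x3 - x1)
        if d2 <= 0:                                  # surrogate not convex: stop
            break
        x_new = (x1 + x2) / 2 - d1 / (2 * d2)        # minimizer of the quadratic
        if any(abs(x_new - x) < 1e-9 for x, _ in pts):
            break                                    # converged to a sampled point
        pts.append((x_new, f(x_new)))                # one true (expensive) evaluation
        pts = sorted(pts, key=lambda p: p[1])[:3]    # retain the best samples
    return min(pts, key=lambda p: p[1])

x_best, f_best = surrogate_optimize(expensive_simulation, [-2.0, 0.0, 3.0])
print(round(x_best, 3), f_best)
```

Each iteration costs exactly one call to the expensive function, which is the defining economy of the surrogate approach; the candidate proposed by the cheap metamodel is then validated against the true simulator, mirroring the validation step described above.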

3.3. Discussion and Comparative Synthesis

Following the detailed presentation of exact, heuristic, metaheuristic, and hybrid optimization methods, it is essential to contextualize their relative advantages, limitations, and applicability. This discussion aims to guide methodological choices based on problem characteristics, instance size, computational constraints, and performance objectives.
Exact methods, such as mixed-integer programming (MIP) [61], Benders decomposition [62,63], and Branch and Cut, guarantee globally optimal solutions and are particularly well suited for small- to medium-scale, well-structured problems. However, their computational costs escalate rapidly with problem size, rendering them less practical for large-scale or highly combinatorial instances. In contrast, single-solution heuristics, including simulated annealing (SA) and tabu search (TS) [64], offer a balanced trade-off between solution quality and computational efficiency, leveraging mechanisms that escape local optima while maintaining exploration–exploitation balance.
Population-based metaheuristics, such as genetic algorithms (GAs) and particle swarm optimization (PSO), enable more effective exploration of complex solution spaces but require careful parameter tuning and greater computational resources. Hybrid and adaptive methods, which combine multiple strategies, further enhance exploration and exploitation capabilities, providing robust and high-performing solutions, particularly for dynamic or multi-objective problems. Nonetheless, their implementation is more complex and demands higher computational investment.
Simulation-based optimization (SBO), particularly when integrated with machine learning surrogate models, represents a modern paradigm that bridges classical analytical optimization and heuristic/metaheuristic approaches. SBO is highly advantageous for complex or computationally expensive problems, where direct evaluation of the objective and constraint functions is prohibitive. By employing surrogate models to approximate these expensive simulations, SBO enables rapid exploration of high-dimensional design spaces while maintaining acceptable solution quality. Compared to exact methods, SBO does not always guarantee global optimality, but it provides substantial computational efficiency. When combined with metaheuristic search algorithms, SBO offers a flexible and powerful framework capable of addressing multi-objective, stochastic, or dynamic optimization problems.
In summary, the problem structure and instance size determine the choice of method: exact methods are most appropriate for small, well-structured instances; heuristics and metaheuristics provide a satisfactory trade-off for large-scale or uncertain problems; hybrid approaches deliver optimal performance in complex contexts; and SBO methods provide an efficient and scalable solution strategy for computationally expensive and high-dimensional problems, complementing the traditional optimization families. Table 2 provides a concise comparison of the main method families, highlighting their preferred application domains, strengths, limitations, and typical performance.

4. Comparative Study and Scientometric Framework

Table 3 presents a structured comparison of three major optimization paradigms: exact optimization methods, approximate optimization methods, and simulation-based optimization. This comparative framework highlights their fundamental differences in terms of principles, computational complexity, scalability, performance guarantees, and applicability domains.
Exact optimization methods are grounded in rigorous mathematical formulations and aim to identify provably optimal solutions. They rely on exhaustive or systematically pruned search strategies, ensuring convergence and optimality guarantees. However, their computational complexity typically grows exponentially with problem size, limiting their scalability to small or well-structured problems where precision is critical.
Approximate optimization methods, in contrast, prioritize computational efficiency and flexibility over strict optimality guarantees. Based on heuristics, metaheuristics, evolutionary strategies, or stochastic search mechanisms, these approaches enable efficient exploration of large, high-dimensional, and non-convex search spaces. Although they do not guarantee global optimality, they provide high-quality solutions within reasonable computational budgets, making them suitable for large-scale and complex problems.
Simulation-based optimization (SBO) introduces a hybrid paradigm in which expensive objective or constraint functions are approximated using surrogate (metamodel) techniques. By integrating machine learning models or statistical approximations with optimization algorithms, SBO reduces computational costs while maintaining acceptable solution accuracy. This approach is particularly effective in engineering, physics-driven systems, and computationally intensive design environments.
From a comparative perspective, the three paradigms represent different trade-offs between:
Optimality guarantees;
Computational complexity;
Scalability;
Resource requirements;
Implementation complexity.
Rather than forming a strict hierarchy, these paradigms address distinct classes of optimization problems. The selection of a suitable paradigm depends on problem size, structural properties, available computational resources, and acceptable trade-offs between accuracy and efficiency.
This comparative framework, initially applied to distinguish broad classes of optimization paradigms (exact, approximate, and simulation-based methods), provides the conceptual basis for the subsequent scientometric investigation. In the following sections, this framework is extended and operationalized through a rigorous quantitative analysis of six major optimization paradigms, enabling a detailed examination of publication trends, disciplinary diffusion, and epistemological evolution.

4.1. Scientometric Methodology

To ensure rigor, reproducibility, and scientific relevance, a structured scientometric methodology was adopted to analyze optimization paradigms. This approach goes beyond a purely descriptive overview, providing a quantitative framework to compare publication trends, disciplinary diffusion, and epistemological evolution.
The complete workflow of the study is illustrated in Figure 8, which synthesizes all steps from data collection to the analysis of the studied paradigms. The figure’s highlights are listed below.
Data Sources and Retrieval: Bibliographic data were extracted from Scopus, selected for its multidisciplinary coverage and standardized indexing. Although the use of a single database may influence absolute publication counts, the large corpus analyzed ensures that the relative structural patterns identified in this study remain statistically robust. Structured Boolean queries targeting the TITLE-ABS-KEY fields (title, abstract, and keywords) ensured comprehensive retrieval.
Corpus Selection: Publications were filtered according to language (English), document type (journal articles and conference papers), and relevant scientific disciplines (computer science, engineering, mathematics, physics, energy, decision sciences). Screening was applied to exclude irrelevant records, such as purely biological uses of terms like “evolutionary algorithm.”
Data Cleaning and Standardization: The dataset was refined through synonym merging, keyword standardization, duplicate removal, and anomaly verification, ensuring high-quality and consistent data.
Analytical Methods: Analyses were performed along three complementary dimensions: publication volume, disciplinary distribution, and document type. In addition, a Correspondence Analysis (CA) was conducted to visualize structural associations between optimization paradigms and scientific disciplines in a reduced factorial space, enabling the identification of clusters, trends, and epistemological shifts.
Paradigms Studied: The final dataset encompasses six major optimization paradigms: evolutionary algorithms, swarm intelligence, tabu search, linear programming, descent algorithms combined with statistics, and physics-based optimization (including Physics-Informed Neural Networks, PINNs).
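The cleaning and standardization steps described above can be sketched in a few lines of Python. The synonym map and records below are invented examples for illustration, not the study's actual Scopus data:

```python
# Illustrative sketch of the corpus-cleaning step: synonym merging,
# keyword standardization, and duplicate removal. The synonym map and
# records are hypothetical examples, not the actual Scopus dataset.

SYNONYMS = {
    "genetic algorithms": "evolutionary algorithm",
    "genetic algorithm": "evolutionary algorithm",
    "pso": "particle swarm optimization",
}

def standardize_keywords(keywords):
    """Lower-case, map synonyms, and drop duplicates while keeping order."""
    seen, result = set(), []
    for kw in keywords:
        kw = SYNONYMS.get(kw.strip().lower(), kw.strip().lower())
        if kw not in seen:
            seen.add(kw)
            result.append(kw)
    return result

def deduplicate_records(records):
    """Remove duplicate publications, keyed on (normalized title, year)."""
    seen, unique = set(), []
    for rec in records:
        key = (rec["title"].strip().lower(), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

In a real pipeline, the synonym table would be built iteratively from keyword frequency lists rather than fixed in advance.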

4.2. Comparative Scientometric Analysis of Optimization Paradigms

The corpus constructed in Section 4.1 is analyzed across six primary optimization paradigms and three complementary dimensions: temporal dynamics, disciplinary distribution, and document type. The derived scientometric indicators enable a comparative assessment of the paradigms, highlighting their growth trajectories, scientific maturity, and disciplinary anchoring.

4.2.1. Scientometric Analysis of Evolutionary Algorithms

The temporal dynamics of scientific publication related to evolutionary algorithms (EAs), as illustrated in Figure 9a, show long-term steady growth over the last few decades, followed by a recent stabilization of scientific output at a very high level. The number of publications, which reached 7814 documents in 2024 [65] and is projected to be around 7695 in 2025 [66,67], should not be interpreted as a decline in academic interest. Rather, it reflects a phase of knowledge consolidation. This stabilization is typical of fields that have passed their exponential growth phase and are now focusing on methodological rigor and the diversification of applications. It may therefore be regarded as a turning point marking the transition of EAs into a “maturity” phase.
Concerning the disciplines (Figure 9b), computer science [68,69] still represents the most important theoretical base (32.8%); however, the weight of engineering (20.5%) [70,71] and mathematics (18.2%) [72] reflects a balanced transdisciplinary approach between applied optimization and formal methodological development. Furthermore, the rise of physics and astronomy [73] (3.6%) and biochemistry (3.0%) [74,75] suggests the increasing use of EAs as “black boxes” for solving complex problems. Their ability to handle non-differentiable, noisy, or multimodal objective functions is a distinctive advantage that makes their integration with deep learning architectures highly promising. The editorial organization of the field (Figure 9c), split between scientific journals (52.3%) [76,77] and conferences (42.4%) [78,79], reflects diverse dissemination practices within the evolutionary algorithms community. This dual publication channel supports both rapid communication of emerging ideas and more comprehensive methodological validation. At the same time, it may indicate variability in evaluation approaches and reporting depth across venues.
From an open innovation perspective, these scientometric patterns highlight the broad diffusion of evolutionary algorithms across interconnected research ecosystems, fostering cross-domain knowledge exchange and methodological hybridization rather than implying uniform consolidation processes.

4.2.2. Scientometric Dynamics of Swarm Intelligence Algorithms

Figure 10 illustrates a scientometric study of the Scopus dataset on swarm intelligence (SI), which comprises 2598 documents in total. The temporal evolution analysis (Figure 10a) exhibits a growth pattern broadly comparable to that observed for EAs, with two distinct periods: a long latency phase, followed by a marked surge starting in the early 2010s [80]. This evolution is characterized by initial slow growth, followed by accelerated expansion and subsequent stabilization, peaking at 292 papers in 2023 [81] before settling at a plateau of 279 papers in 2024 [82,83], with 275 papers predicted for 2025 [84,85]. Rather than indicating a fundamentally distinct dynamic from EAs, this trend mainly reflects a lower overall publication volume for SI. The observed differences should, therefore, be interpreted primarily as quantitative rather than structural.
Regarding the disciplinary distribution (Figure 10b), computer science still carries the most weight (34.8%) [86]. However, the contribution of the engineering sciences is particularly substantial (24.1%) [87,88]. This high share reflects the strong operational presence of swarm intelligence, used extensively in energy networks, collective robotics, multi-agent systems, and adaptive logistics. Meanwhile, mathematics (15.3%) [89] still provides the analytical foundation for the study of convergence dynamics. Furthermore, the pervasiveness of swarm intelligence in areas such as physics and decision sciences underscores its interdisciplinary nature and highlights its role as a preferred tool for distributed or noisy optimization problems.
The comparison of document types (Figure 10c) indicates that the majority of works are journal articles (57.5%) [90], with conference proceedings accounting for a further 35.5% [91]. This editorial focus emphasizes rigor and in-depth theoretical validation. Since journal articles provide sufficient space to detail complex analyses, they are particularly well suited to reporting the experimental variability and parametric sensitivity inherent in these algorithms.
In conclusion, swarm intelligence can now be considered a mature field in terms of application, yet it remains engaged in a process of epistemological standardization. The community’s ability to establish unified experimental protocols and generalize large-scale evaluations, thereby relying on advanced computing infrastructures, may play a critical role in guiding the future development of the field.

4.2.3. Scientometric Analysis of the Intersection Between Descent Algorithms and Statistics

Figure 11 illustrates the scientific corpus produced from the intersection between the terms “descent algorithms” and “statistics,” totaling 1042 documents. This corpus is quantitatively less extensive than the one related to evolutionary algorithms and swarm intelligence, pointing to a fundamentally different research orientation. Indeed, the area under consideration is mainly a theoretical and mathematical one, lying at the heart of numerical optimization and statistical learning.
The temporal evolution (Figure 11a) shows a relatively late start followed by rapid expansion after the 2010s, largely a consequence of the proliferation of machine learning and the widespread use of stochastic gradient descent (SGD) and its numerous variants. Unlike the stabilization observed in the two domains analyzed earlier, this domain shows considerable recent fluctuations. The number of publications is projected to drop from 89 in 2024 to 72 in 2025, suggesting that the domain has not yet reached a plateau and is structurally undergoing a phase of alternating theoretical innovation and rigorous validation.
The distribution of disciplines (Figure 11b) points to a balanced triad consisting of computer science (32.8%) [92,93], engineering (18.8%) [94,95], and mathematics (17.4%) [96]. The contribution of mathematics appears relatively prominent, highlighting its central role in convergence analysis, error bounds, and the statistical characterization of optimization trajectories. In addition, the significant contributions from physics and astronomy (4.8%) and decision sciences (4.2%) indicate that descent-based methods are used to model complex systems and minimize energy-like objective functions.
In terms of publications (Figure 11c), journal articles represent the majority of the work reflected in the corpus, accounting for 60% of total output, a higher percentage than in the other two fields. This editorial preference aligns with a highly analytical research culture that prioritizes rigor through formal mathematical treatment, which is better suited to extended journal formats than to more restrictive conference proceedings.
Descent algorithms have been closely linked to statistics, forming the very core of modern optimization research, characterized by high mathematical requirements and a strong emphasis on demonstrable guarantees. The main challenges currently facing this field are extending theoretical results to non-convex cases, reducing dependence on hyperparameters, and creating common benchmarks to allow reliable and systematic comparisons between the rapidly increasing number of descent optimization methods.

4.2.4. Scientometric Dynamics of Tabu Search

Figure 12 illustrates the scientometric review of the tabu search literature, spanning 13,495 documents since 1972 [97]. The sheer size of this dataset places the heuristic among the most prominent of the founding single-solution-based metaheuristics of contemporary operations research. The corpus makes clear that tabu search is a highly mature method with deep roots in operations research (OR) and decision-making applications.
The temporal analysis (Figure 12a) reveals a dynamic of “cyclic resilience.” Following foundational work in late 1972, scientific production experienced sustained growth, reaching a first plateau in the early 2010s. Rather than the gradual decline often observed for historical methods, there is a recent resurgence in activity: production rose from 495 documents in 2023 [98,99] to 569 in 2024 [100,101], close to historical peaks. Preliminary data for 2025 (535 documents) [102,103] confirm continued high stability. This renewed activity is explained by the shift in tabu search usage from a standalone method to an intensification component in hybrid algorithms (memetic metaheuristics, multi-level methods, and AI hybridizations).
The disciplinary distribution (Figure 12b) highlights a distinctive characteristic compared to other analyzed domains: a stronger anchorage in management and decision sciences. While computer science (30.1%) [104,105] and engineering (23.7%) [106] remain dominant, the contribution of decision sciences (10%) [107] and business/management (4.2%) [108] is significantly higher than for EAs or swarm methods. The reason for this is the extensive application of tabu search to real-world combinatorial problems: vehicle routing, industrial scheduling, resource planning, and network management. The explicit memory logic inherent to the method responds particularly well to the operational constraints in these sectors.
The analysis of publication types (Figure 12c) reflects the maturity of this heuristic. Journal articles account for 60.8% of the tabu search literature [110], against 34.2% for conference papers [109], the highest journal proportion among the domains analyzed so far. This suggests work that is directly applicable and methodologically thorough, requiring extensive experiments, rather than novel concepts disseminated rapidly through conferences. This editorial pattern reinforces the strong operations research orientation of the field, where validation on complex industrial cases has always been a key factor.
Tabu search’s trajectory does not resemble that of recent AI algorithms; rather, it behaves like a foundational technology fully embedded in combinatorial optimization. Its recent transformation has been into a metaheuristic component, serving as a key local optimization element in hybrid high-performance system architectures.

4.2.5. Scientometric Dynamics of Linear Programming

Figure 13 depicts the analysis of the linear programming (LP) corpus, a domain one to two orders of magnitude larger than the other topics discussed. The corpus contains 193,432 documents dating back to 1918, roughly fourteen times the size of the tabu search corpus and more than seventy times that of swarm intelligence. This mass of data places linear programming not simply as a method but as the fundamental paradigm and epistemological bedrock of contemporary mathematical optimization.
The chronological study (Figure 13a) challenges the misconception portraying LP as an old-fashioned field, gradually losing steam. After a steady linear climb in production throughout the last century, scientific output has undergone a phenomenal exponential growth since the turn of the millennium. The upsurge in output in the last few years is even more striking: the number of documents published annually was 10,045 in 2023 [111] and 12,342 in 2024 [112,113]. Projected figures for 2025 (12,106 documents) [114] confirm that the trend is still on the rise. This “rebirth” is explained by a number of structural factors:
  • LP serves as a core subroutine within mixed-integer programming (MIP) solvers;
  • LP is increasingly becoming a part of AI and machine learning pipelines (convex relaxations, dual bounds, and model calibration);
  • The advent of Big Data applications entails solving huge instances with millions of variables;
  • Parallel computing and tailor-made architectures have been significantly developed.
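To make the first point concrete, the defining property that LP solvers exploit — the optimum of a linear program lies at a vertex of the feasible polytope — can be illustrated on a toy two-variable instance. The instance below is invented for illustration; real pipelines would call a dedicated LP solver rather than enumerate vertices:

```python
from itertools import combinations

# Toy LP: maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# Because the optimum of a linear program lies at a vertex of the feasible
# polytope, we can find it by enumerating intersections of constraint
# boundaries -- the geometric fact simplex-type solvers exploit.

A = [(1.0, 1.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]  # rows of A in A @ x <= b
b = [4.0, 2.0, 0.0, 0.0]
c = (3.0, 2.0)  # objective coefficients (maximize c @ x)

def solve_2x2(r1, r2, b1, b2):
    """Intersection of the two constraint boundaries, or None if parallel."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None
    x = (b1 * r2[1] - r1[1] * b2) / det
    y = (r1[0] * b2 - b1 * r2[0]) / det
    return (x, y)

def feasible(pt):
    return all(ai[0] * pt[0] + ai[1] * pt[1] <= bi + 1e-9 for ai, bi in zip(A, b))

# Enumerate candidate vertices and keep the best feasible one.
best_point, best_value = None, float("-inf")
for (r1, b1), (r2, b2) in combinations(zip(A, b), 2):
    pt = solve_2x2(r1, r2, b1, b2)
    if pt and feasible(pt):
        value = c[0] * pt[0] + c[1] * pt[1]
        if value > best_value:
            best_point, best_value = pt, value

print(best_point, best_value)  # optimum at vertex (2, 2), objective 10
```

Vertex enumeration scales combinatorially, which is precisely why simplex and interior-point methods exist; the sketch only illustrates the underlying geometry.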
The cross-disciplinary spread (Figure 13b) reveals an exceptionally wide application range. Whereas metaheuristics are concentrated in computer science, linear programming shows near parity between computer science (25.4%) [115] and engineering (24.9%) [116]. Mathematics (16.3%) [117] remains the main pillar, chiefly through theoretical work on interior-point methods and complexity analysis. The energy sector contribution (5.0%) [118] is striking, well above the corresponding figures for the other methods, showing that LP has established itself as an indispensable instrument for smart-grid management, energy-market optimization, and the integration of renewables in the context of the global energy transition.
The analysis of publication types (Figure 13c) confirms the picture of a firmly established discipline. Journal articles account for 62.8% of the LP corpus [119], against 33.7% for conference papers [120], the highest journal publication rate of any method discussed in this chapter. This proportion indicates research centered on theoretical, applied, and certified work. In engineering, economics, and logistics, expressing a problem in linear form is often synonymous with considering it solvable, hence the dominance of specialized journals in disseminating the seminal works.
To conclude, the scientometric analysis makes clear that linear programming is not an outdated tool but a contemporary source of progress and innovation. Far from being replaced by new AI methods, LP collaborates with them, supplying the optimality bounds, convex relaxations, and theoretical guarantees needed to evaluate, bound, and hybridize modern stochastic algorithms.

4.2.6. Scientometric Dynamics of Physics-Based Algorithms

Figure 14 presents the analysis of the corpus dedicated to physics-based algorithms, totaling 5489 indexed documents. Although this volume is modest compared to linear programming, its recent growth highlights a major technological breakthrough. This field embodies the convergence between artificial intelligence and scientific modeling, marking the transition from purely numerical optimization to optimization informed by the conservation laws.
The temporal analysis (Figure 14a) highlights a “hockey-stick” trajectory, characteristic of emerging paradigms. After a long latency period (1980–2015) during which production remained marginal, the corpus has experienced recent exponential growth. Annual production increased from 598 documents in 2023 [121,122] to 635 in 2024 [123], and preliminary data for 2025 already reach 776 documents [124,125], an unprecedented record. This rapid acceleration is not due to the organic growth of an established field but to the emergence of a new paradigm: the massive adoption of physics-informed machine learning, in particular Physics-Informed Neural Networks (PINNs), which integrate a priori knowledge (conservation laws, physical symmetries) into neural architectures to solve complex partial differential equations.
The disciplinary distribution (Figure 14b) reveals a notable inversion compared to classical AI standards. Engineering takes the top spot (26.2%) [126], relegating computer science to 21.3% [127]. The strong representation of physics/astronomy (8.5%) and energy (6.5%) [128] underscores the pragmatic applications of these algorithms, such as fluid mechanics, heat transfer, or industrial digital twins. Mathematics (9.5%) provides theoretical support, particularly for studying the convergence and stability of these hybrid models.
The analysis of publication types (Figure 14c) shows a balanced distribution between journal articles (53.9%) [129] and conference papers (39.7%) [130]. The relatively high share of conference publications reflects the accelerated pace of this emerging field, aligned with that of AI research. Nevertheless, the relative majority of journal articles indicates that the community is now striving to rigorously validate these approaches on benchmarks, demonstrating their superiority over traditional numerical solvers.
In summary, physics-based algorithms constitute the pioneering front of modern optimization. Unlike the mature domains analyzed previously, this field is in a full exploratory expansion phase. It represents an effort to reconcile “black-box” approaches (data) and “white-box” approaches (fundamental laws), offering promising perspectives for problems where data is limited but underlying physical laws are well known. This domain thus exemplifies the fusion of computational and physical paradigms, opening new directions for both scientific and applied optimization.

4.2.7. Comparative Synthesis of Scientometric Dynamics

This section presents a comparative overview of the six optimization domains analyzed previously: evolutionary algorithms, swarm intelligence, descent and statistics, tabu search, linear programming, and physics-based algorithms. Although each field has its own growth dynamics, disciplinary foundations, and publication patterns, a synthesis allows us to identify general trends, epistemological phases, and the main drivers of innovation. Table 4 below summarizes the quantitative and qualitative differences, highlighting contrasts in terms of volume, growth trajectories, disciplinary orientation, journal publication ratios, and epistemological status. This comparative perspective provides an immediate visualization of the spectrum ranging from mature, established paradigms to emerging “white-box” frameworks.
This comparative overview reveals structured heterogeneity within the optimization landscape. Linear programming undeniably dominates in terms of volume, serving as the quantitative basis for the field. Evolutionary algorithms, swarm intelligence, and tabu search represent mature or consolidating areas, characterized by great methodological rigor and a trend toward hybridization. In contrast, physics-based algorithms represent a new frontier, showing explosive “hockey-stick” growth driven by the need to integrate physical laws into AI models. Finally, the field of descent and statistics occupies a unique theoretical niche, marked by fluctuating results and a strong emphasis on mathematical validation rather than pure volume. The variation in ratios between journals and conferences further highlights the gap between fields that favor rigorous theoretical consolidation (e.g., tabu and descent) and those that balance validation with the rapid dissemination of emerging paradigms (e.g., physics-based methods). Based on the comparative insights presented in Table 1 and Table 2, this review provides practical guidance for selecting optimization methods tailored to problem size, computational complexity, and available resources. Practitioners and researchers can leverage these analyses to make informed decisions when choosing appropriate algorithms for specific applications, bridging the gap between theoretical developments and real-world implementation. These results also highlight gaps between theoretical advances and practical applications, particularly in areas such as hybrid methods, large-scale industrial deployment, and physics-informed optimization. This suggests clear directions for future research, encouraging the development of methods that not only advance theoretical understanding but are also readily deployable in complex real-world scenarios. 
A more detailed analysis by period indicates that, between 2010 and 2015, evolutionary algorithms and swarm intelligence experienced rapid growth, while descent-and-statistics methods started expanding. From 2015 to 2020, classical paradigms consolidated, and physics-informed AI algorithms emerged. Since 2020, PINNs have shown explosive growth, contrasting with the stabilization of mature paradigms. These analyses also reveal that descent-and-statistics and hybrid methods remain less explored, highlighting potential gaps for future research.

4.2.8. Multivariate Statistical Validation of Paradigm–Discipline Associations

To rigorously assess the structural relationships between optimization paradigms and scientific disciplines, we applied a combined approach based on the Chi-square independence test and Correspondence Analysis (CA). This methodology goes beyond descriptive statistics, providing quantitative and structural evidence of significant associations.
A contingency matrix was constructed using publications indexed in Scopus, including six optimization paradigms—evolutionary algorithms (EA), swarm intelligence, descent and statistics, tabu search, linear programming (LP), and physics-based methods—and nine scientific disciplines: computer science, engineering, mathematics, physics, decision sciences, energy, materials science, environmental science, and social science. Each cell represents the number of publications for the corresponding paradigm–discipline pair. For clarity, only the most represented disciplines for each paradigm are discussed in the text, although the complete matrix was used in all statistical analyses. This selection ensures relevance and representativeness.
The Chi-square test was applied to evaluate the dependence between paradigms and disciplines. The expected frequency for each cell under the null hypothesis of independence is defined as
$$E_{ij} = \frac{(\text{total publications for paradigm } i) \times (\text{total publications for discipline } j)}{N}$$
where N is the total number of publications. The contribution of each cell to the overall Chi-square statistic is
$$\chi^2 = \sum_{i,j} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}$$
The total Chi-square value obtained is 18,927.71 with 40 degrees of freedom and p-value < 0.001, confirming a highly significant dependence between paradigms and disciplines. Figure 15 illustrates individual contributions to the Chi-square statistic, color-coded according to magnitude.
The largest contributions are observed for evolutionary algorithms–computer science and linear programming–mathematics/engineering, while secondary significant associations include tabu search–decision sciences, physics-based methods–physics/materials science, and swarm intelligence–computer science. Low-contribution cells, displayed in lighter shades, correspond to marginal associations.
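The expected-frequency and Chi-square computations defined above can be sketched in a few lines; the contingency matrix below is a small invented example, not the study's 6 × 9 paradigm-by-discipline matrix:

```python
# Minimal sketch of the Chi-square computation from the formulas above,
# applied to a small hypothetical paradigm-by-discipline contingency
# matrix (counts invented for illustration).

observed = [
    [120, 60, 30],   # e.g. evolutionary algorithms
    [40, 80, 20],    # e.g. linear programming
    [30, 20, 50],    # e.g. physics-based methods
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected count under independence: E_ij = (row_i total * col_j total) / N
expected = [[ri * cj / n for cj in col_totals] for ri in row_totals]

# Chi-square statistic: sum of (O_ij - E_ij)^2 / E_ij over all cells
chi2 = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(len(observed))
    for j in range(len(observed[0]))
)

dof = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi2 = {chi2:.2f}, dof = {dof}")
```

The per-cell terms of the sum are exactly the individual contributions visualized in Figure 15.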
To further explore these associations, a Correspondence Analysis (CA) was conducted. The row and column coordinates in the reduced factorial space, representing paradigms and disciplines, are given by
$$F_{\text{rows}} = D_r^{-1/2}\, U \Sigma$$
$$F_{\text{cols}} = D_c^{-1/2}\, V \Sigma$$
where $U \Sigma V^{T}$ is the singular value decomposition of the standardized residual matrix, and $D_r$ and $D_c$ are diagonal matrices of row and column totals. The first two factorial axes explain 73.13% and 19.92% of the total inertia, respectively, accounting for 93.05% of the cumulative variance.
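The CA coordinates can be sketched as follows, again on a small invented contingency table. This is an illustrative implementation of standard Correspondence Analysis, not the exact pipeline used in the study:

```python
import numpy as np

# Sketch of Correspondence Analysis: SVD of the standardized residual
# matrix, then principal row/column coordinates. The contingency table
# is a small invented example.

N = np.array([[120., 60., 30.],
              [40., 80., 20.],
              [30., 20., 50.]])

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses
Dr_inv_sqrt = np.diag(1.0 / np.sqrt(r))
Dc_inv_sqrt = np.diag(1.0 / np.sqrt(c))

# Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
S = Dr_inv_sqrt @ (P - np.outer(r, c)) @ Dc_inv_sqrt

U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

F_rows = Dr_inv_sqrt @ U * sigma     # principal row coordinates
F_cols = Dc_inv_sqrt @ Vt.T * sigma  # principal column coordinates

inertia = sigma ** 2                 # axis inertias (total = chi2 / N)
explained = inertia / inertia.sum()  # share of inertia per factorial axis
print("share of inertia per axis:", np.round(explained, 3))
```

Plotting the first two columns of `F_rows` and `F_cols` on the same axes yields a factorial map analogous to Figure 16.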
The resulting factorial map (Figure 16) reveals a clear disciplinary specialization of optimization paradigms along two main dimensions. The first axis captures an application versus theory gradient, where the evolutionary algorithms, swarm intelligence, and descent and statistics cluster near computer science and mathematics, reflecting their computational and methodological orientation. In contrast, linear programming and tabu search are positioned closer to decision sciences, social sciences, and energy, confirming their operational and applied focus. The second axis represents a physical modeling gradient, with physics-based methods occupying a distinct position strongly associated with physics and materials science, indicating a specialized niche in physically informed optimization.
Overall, this multivariate analysis provides quantitative and structural evidence of disciplinary differentiation among optimization paradigms. It complements the descriptive statistics presented in previous sections and strengthens the scientific contribution of the study by addressing methodological rigor and analytical depth.

5. Practical Decision Framework for Optimization Algorithm Selection

The scientometric and methodological analyses presented in the preceding sections underscore a rapid expansion within the optimization landscape, marked by a burgeoning diversification of algorithmic strategies and the rising prominence of metaheuristics and hybrid techniques. While this evolution signifies robust methodological innovation, it simultaneously presents a significant challenge for practitioners: the sheer proliferation of available algorithms complicates the selection process.
Consequently, this section seeks to operationalize the findings of the scientometric analysis into an action-oriented decision-making framework. This framework is specifically designed to assist academic researchers in benchmarking new models and industrial engineers in navigating the trade-offs between computational overhead and solution quality. Rather than advocating for any specific class of algorithms, the objective is to offer a structured methodology that aligns the mathematical attributes of a problem with the most appropriate optimization strategies. By providing a critical synthesis of current trends, this guide helps practitioners avoid the common pitfall of selecting overly complex metaheuristics when simpler, exact methods offer global optimality guarantees.
In addition, the rapid growth of simulation-driven engineering and data-intensive applications has introduced new optimization scenarios in which objective functions are evaluated through computationally expensive simulators rather than explicit analytical models. These emerging contexts motivate the inclusion of simulation-based optimization strategies within the proposed decision framework.
By bridging theoretical insights, research trends, and practical application, this section addresses a fundamental question in computational engineering: how can one rationally select an optimization algorithm tailored to the specific characteristics of a problem?

5.1. Problem Characterization and Algorithm Selection

Effective algorithm selection begins with a structured characterization of the optimization problem. Beyond mere “algorithmic hype,” the following dimensions represent the critical criteria derived from our longitudinal study of the field. These dimensions should be analyzed prior to choosing a method:
Variable type: Continuous, discrete/combinatorial, or mixed;
Mathematical structure: Convex or non-convex, linear or nonlinear, smooth or non-differentiable;
Problem scale: Small (tens of variables), medium (hundreds), or large-scale (thousands or more);
Constraint complexity: Weakly constrained, strongly constrained, or equality-dominated;
Objective structure: Single-objective or multi-objective;
Uncertainty level: Deterministic or stochastic formulation;
Computational requirements: Offline optimization or real-time/near–real-time execution;
Evaluation cost: Analytical evaluation versus computationally expensive simulation or black-box model.
The inclusion of evaluation cost as a decision criterion reflects the growing importance of simulation-based optimization, where the dominant challenge is no longer mathematical solvability but the computational expense associated with objective function evaluations.
Based on these dimensions, Table 5 establishes a correspondence between problem characteristics and appropriate algorithm families, complemented by representative application domains.
This table serves as a critique of current trends by highlighting that the “best” algorithm is strictly dependent on the problem’s mathematical landscape rather than its popularity in the recent literature.
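As a simplified illustration of this selection logic, the dimensions above can be encoded as coarse decision rules. The mapping below is a deliberately reduced heuristic for demonstration, not a substitute for the full correspondence in Table 5:

```python
# Simplified, illustrative decision rules mapping problem traits to a
# coarse algorithm-family suggestion. The rule ordering mirrors the
# criteria discussed above (evaluation cost first, then objective
# structure, then mathematical structure and variable type).

def suggest_algorithm_family(variables="continuous", convex=False,
                             expensive_evaluation=False,
                             multi_objective=False):
    """Return a coarse algorithm-family suggestion from problem traits."""
    if expensive_evaluation:
        return "surrogate-assisted (simulation-based) optimization"
    if multi_objective:
        return "multi-objective metaheuristics (e.g. evolutionary)"
    if variables == "continuous" and convex:
        return "exact convex methods (LP/QP, descent algorithms)"
    if variables in ("discrete", "combinatorial"):
        return "combinatorial metaheuristics (e.g. tabu search) or MIP"
    return "global metaheuristics (evolutionary, swarm intelligence)"

print(suggest_algorithm_family(variables="continuous", convex=True))
```

In practice, such rules would be refined with constraint complexity, problem scale, and real-time requirements before committing to a method.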

5.2. Operational Decision Map for Practitioners

To enhance usability and provide a direct “how-to” guide, Figure 17 synthesizes these complex relationships into a graphical decision map. The figure serves as an operational decision aid, allowing engineers, algorithm designers, and academic researchers to rapidly identify suitable methodological directions while considering computational resources and validation requirements.
Unlike the detailed technical mapping in Table 5, this visual tool guides the user through a hierarchical logic: from identifying the broad family (exact, metaheuristics, multi-objective, or hybridization) to the specific sub-category (e.g., bio-inspired vs. population-based).
Within this decision process, simulation-based optimization represents a complementary pathway activated when objective evaluations rely on computational simulations or black-box models. In such situations, surrogate-assisted optimization workflows enable iterative learning of the response surface while minimizing expensive simulator calls.
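Such a surrogate-assisted loop can be sketched minimally with a local quadratic surrogate (successive parabolic interpolation). The objective below is a cheap stand-in for an expensive simulator call, and the example is illustrative rather than a production workflow:

```python
# Minimal surrogate-assisted loop: fit a local quadratic surrogate
# through the three most recent samples of an "expensive" black-box
# function, jump to the surrogate's minimizer, and repeat (successive
# parabolic interpolation). Each loop iteration costs one simulator call.

def expensive_simulation(x):
    return (x - 1.7) ** 2 + 0.3   # hypothetical simulator response

def parabola_vertex(x1, y1, x2, y2, x3, y3):
    """Minimizer of the quadratic surrogate through three samples."""
    num = (x2**2 - x3**2) * y1 + (x3**2 - x1**2) * y2 + (x1**2 - x2**2) * y3
    den = (x2 - x3) * y1 + (x3 - x1) * y2 + (x1 - x2) * y3
    return 0.5 * num / den

xs = [0.0, 1.0, 3.0]                      # initial design points
ys = [expensive_simulation(x) for x in xs]

for _ in range(10):                       # a few surrogate iterations
    x_new = parabola_vertex(xs[-3], ys[-3], xs[-2], ys[-2], xs[-1], ys[-1])
    if abs(x_new - xs[-1]) < 1e-8:        # converged: stop sampling
        break
    xs.append(x_new)
    ys.append(expensive_simulation(x_new))  # one expensive call per step

print(round(xs[-1], 4))   # converges to the true minimizer 1.7
```

Realistic simulation-based optimization would replace the quadratic fit with a richer surrogate (e.g., Gaussian process or neural metamodel) and an acquisition rule balancing exploration and exploitation; the loop structure, however, is the same.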
This dual-entry framework (tabular and graphical) provides a comprehensive toolkit that ensures the selection process is grounded in both theoretical rigor and practical efficiency. Rather than prescribing fixed solutions, the framework provides decision guidelines derived from empirical research evolution and theoretical optimization principles.
By integrating simulation-based optimization alongside exact and metaheuristic approaches, the proposed framework reflects the ongoing transition toward data-driven, simulation-centric, and intelligent optimization ecosystems.

6. Conclusions

This paper has presented a comprehensive review and comparative scientometric analysis of optimization algorithms, ranging from foundational exact methods to emerging physics-based algorithmic frameworks. By bridging theoretical classification with quantitative bibliometric trends, this study highlights how optimization algorithms function as core computational engines for addressing uncertainty in complex adaptive systems. The scientometric analysis reveals three structural dynamics shaping the current landscape. First, linear programming is experiencing an exponential “rebirth,” confirming its role as the epistemological bedrock for modern AI verification and large-scale solvers. Second, nature-inspired metaheuristics (evolutionary algorithms and swarm intelligence) have transitioned from an exploratory phase to a stage of maturity and consolidation, characterized by the standardization of protocols. Third, physics-based optimization and learning algorithms, particularly Physics-Informed Neural Networks (PINNs), exhibit a disruptive growth trajectory, reflecting a shift toward algorithmic models that integrate differential equations and physical constraints. From an Open Innovation perspective, these findings have significant implications. The evolution of these algorithms mirrors the dynamics of open innovation ecosystems: knowledge is not static but flows through hybridization (e.g., combining descent algorithms with statistics). The shift towards Physics-Informed Machine Learning reflects a growing market demand for explainable and robust AI, essential for collaborative decision-making in critical sectors such as energy and engineering.
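Schematically, a PINN trains its parameters θ by minimizing a composite loss that couples a data-fit term with the residual of the governing differential operator; the notation below is generic, a standard schematic form rather than a formula taken from a specific cited work:

```latex
\mathcal{L}(\theta)
  = \underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}
      \bigl\lvert u_\theta(x_i) - u_i \bigr\rvert^{2}}_{\text{data fit}}
  \; + \; \lambda \,
    \underbrace{\frac{1}{N_r}\sum_{j=1}^{N_r}
      \bigl\lvert \mathcal{N}[u_\theta](x_j) \bigr\rvert^{2}}_{\text{physics residual}}
```

where $u_\theta$ is the network approximation, $\mathcal{N}$ is the differential operator of the governing equation, the $x_j$ are collocation points, and $\lambda$ balances data fidelity against physical consistency.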
By integrating methodological classification, conceptual comparison, and scientometric trends, this review not only maps the current landscape of optimization algorithms but also provides actionable insights and recommendations for both researchers and practitioners, guiding future investigations and real-world applications.
Future developments in this field will likely be driven by hybridization and interoperability. The strict dichotomy between exact and approximate methods is fading in favor of synergistic architectures. For researchers and practitioners in open innovation, the challenge is no longer merely to select an algorithm, but to design and analyze integrated optimization algorithms and hybrid computational frameworks capable of addressing multi-objective and large-scale problems.
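As a minimal illustration of such a synergistic architecture, the sketch below combines a population-based global search with a deterministic coordinate-wise refinement step, a memetic scheme in the spirit of the exact/approximate hybridization discussed above. The sphere objective and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    # Illustrative convex objective; global minimum 0 at the origin.
    return sum(v * v for v in x)

def local_refine(x, step=0.1, iters=50):
    # Deterministic coordinate-wise hill climbing: the local-search half
    # of the hybrid. The step is halved whenever no move improves.
    best = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for delta in (step, -step):
                cand = list(best)
                cand[i] += delta
                if sphere(cand) < sphere(best):
                    best, improved = cand, True
        if not improved:
            step /= 2.0
    return best

def memetic_minimize(dim=3, pop_size=10, generations=20, seed=0):
    # Population-based global search (Gaussian mutation + elitist
    # truncation selection) with local refinement of every offspring.
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sphere)
        parents = pop[: pop_size // 2]            # elitist truncation
        children = [local_refine([v + rng.gauss(0, 0.5) for v in p])
                    for p in parents]             # mutate, then refine
        pop = parents + children
    return min(pop, key=sphere)

best = memetic_minimize()
print(f"best objective = {sphere(best):.6f}")
```

The division of labor is the point: the population supplies diversity and global coverage, while the deterministic refinement supplies the fast local convergence that pure metaheuristics lack.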

Author Contributions

Conceptualization, K.A., R.H. and A.Z. (Asmaa Zugari); methodology, K.A., R.H. and A.Z. (Asmaa Zugari); software, K.A., R.H., A.Z. (Asmaa Zugari); validation, K.A., R.H., A.Z. (Asmaa Zugari) and A.Z. (Alia Zakriti); formal analysis, K.A., R.H. and A.Z. (Asmaa Zugari); investigation, K.A., R.H. and A.Z. (Asmaa Zugari); resources, A.Z. (Asmaa Zugari) and A.Z. (Alia Zakriti); data curation, K.A. and R.H.; writing—original draft preparation, K.A. and R.H.; writing—review and editing, K.A., R.H., A.Z. (Asmaa Zugari) and A.Z. (Alia Zakriti); visualization, K.A. and R.H.; supervision, A.Z. (Asmaa Zugari) and A.Z. (Alia Zakriti); project administration, A.Z. (Alia Zakriti). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in this research paper can be made available upon request.

Acknowledgments

The authors thank Liwa University—Abu Dhabi for the funds provided to publish this work.

Conflicts of Interest

The authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

References

  1. Gao, B.; Peng, C.; Kong, D.; Wang, X.; Li, C.; Gao, M.; Ghadimi, N. Optimum structure of a combined wind/photovoltaic/fuel cell-based on amended Dragon Fly optimization algorithm: A case study. Energy Sources Part A Recovery Util. Environ. Eff. 2022, 44, 7109–7131. [Google Scholar] [CrossRef]
  2. Hasan, R.; Mendizabal, O.; Dotti, F. Green Virtual Networks for Timely Hybrid Synchrony Distributed Systems. In Proceedings of the 2024 4th International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Malé, Maldives, 4–6 November 2024; pp. 1–6. [Google Scholar] [CrossRef]
  3. Tambe, P.P. Selective Maintenance Optimization of a Multi-component System based on Simulated Annealing Algorithm. Procedia Comput. Sci. 2022, 200, 1412–1421. [Google Scholar] [CrossRef]
  4. Abouhssous, K.; Wakrim, L.; Zugari, A.; Zakriti, A. A Three-Band Patch Antenna Using a Defected Ground Structure Optimized by a Genetic Algorithm for the Modern Wireless Mobile Applications. Jordanian J. Comput. Inf. Technol. 2023, 9, 11–20. [Google Scholar] [CrossRef]
  5. Dejen, A.; Anguera, J.; Jayasinghe, J.; Ridwan, M. Bandwidth Improvement of Dualband mm-wave Microstrip Antenna Using Genetic Algorithm. Int. J. Comput. Digit. Syst. 2023, 13, 1187–1194. [Google Scholar] [CrossRef]
  6. Hasan, R.; Abdelaziz, A. 5G-OPS: Optimizer of Private 5G Slices. In Proceedings of the 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Tenerife, Spain, 19–21 July 2023; pp. 1–6. [Google Scholar] [CrossRef]
  7. Owezarski, P.; Hasan, R.; Kremer, G.; Berthou, P. First Step in Cross-Layers Measurement in Wireless Networks How to Adapt to Resource Constraints for Optimizing End-to-End Services? In Wired/Wireless Internet Communications; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6649, pp. 150–161. [Google Scholar] [CrossRef]
  8. George, R.; Mary, T.A.J. A sea lion optimized microstrip patch antenna for enhancing the energy efficiency of the WSN. Spat. Inf. Res. 2023, 31, 439–452. [Google Scholar] [CrossRef]
  9. Xie, C.; Wu, X.; Bai, X. Optimal Strategy: A Comprehensive Model for Predicting Price Trend and Algorithm Optimization. In Proceedings of the 7th International Conference on Cyber Security and Information Engineering. ICCSIE ’22; Association for Computing Machinery: New York, NY, USA, 2022; pp. 457–460. [Google Scholar] [CrossRef]
  10. Song, Y.; Zhao, G.; Zhang, B.; Chen, H.; Deng, W.; Deng, W. An enhanced distributed differential evolution algorithm for portfolio optimization problems. Eng. Appl. Artif. Intell. 2023, 121, 106004. [Google Scholar] [CrossRef]
  11. Zhou, H.; Sun, G.; Fu, S.; Liu, J.; Zhou, X.; Zhou, J. A Big Data Mining Approach of PSO-Based BP Neural Network for Financial Risk Management with IoT. IEEE Access 2019, 7, 154035–154043. [Google Scholar] [CrossRef]
  12. Chen, C.H.; Chen, Y.H.; Lin, J.C.W.; Wu, M.E. An Effective Approach for Obtaining a Group Trading Strategy Portfolio Using Grouping Genetic Algorithm. IEEE Access 2019, 7, 7313–7325. [Google Scholar] [CrossRef]
  13. Chan, F.T.S.; Wang, Z.X.; Goswami, A.; Singhania, A.; Tiwari, M.K. Multi-objective particle swarm optimization based integrated production inventory routing planning for efficient perishable food logistics operations. Int. J. Prod. Res. 2020, 58, 5155–5174. [Google Scholar] [CrossRef]
  14. Wang, C.; Qian, Y.; Shaic, S. The Applications of Nature-Inspired Algorithms in Logistic Domains: A Comprehensive and Systematic Review. Arab. J. Sci. Eng. 2021, 46, 3443–3464. [Google Scholar] [CrossRef]
  15. Devaraj, A.F.S.; Elhoseny, M.; Dhanasekaran, S.; Lydia, E.L.; Shankar, K. Hybridization of firefly and Improved Multi-Objective Particle Swarm Optimization algorithm for energy efficient load balancing in Cloud Computing environments. J. Parallel Distrib. Comput. 2020, 142, 36–45. [Google Scholar] [CrossRef]
  16. Li, Y.; Soleimani, H.; Zohal, M. An improved ant colony optimization algorithm for the multi-depot green vehicle routing problem with multiple objectives. J. Clean. Prod. 2019, 227, 1161–1172. [Google Scholar] [CrossRef]
  17. Khan, M.A.; Khan, A.; Alhaisoni, M.; Alqahtani, A.; Alsubai, S.; Alharbi, M.; Malik, N.A.; Damaševičius, R. Multimodal brain tumor detection and classification using deep saliency map and improved dragonfly optimization algorithm. Int. J. Imaging Syst. Technol. 2023, 33, 572–587. [Google Scholar] [CrossRef]
  18. Jacob, I.J.; Darney, P.E. Artificial Bee Colony Optimization Algorithm for Enhancing Routing in Wireless Networks. J. Artif. Intell. Capsul. Netw. 2021, 3, 62–71. [Google Scholar] [CrossRef]
  19. Khatouri, H.; Benamara, T.; Breitkopf, P.; Demange, J. Metamodeling techniques for CPU-intensive simulation-based design optimization: A survey. Adv. Model. Simul. Eng. Sci. 2022, 9, 1. [Google Scholar] [CrossRef]
  20. Vesselinova, N.; Steinert, R.; Perez-Ramirez, D.F.; Boman, M. Learning Combinatorial Optimization on Graphs: A Survey with Applications to Networking. IEEE Access 2020, 8, 120388–120416. [Google Scholar] [CrossRef]
  21. William-West, T.O.; Ibrahim, M.A. On shadowed set approximation methods. Soft Comput. 2023, 27, 4463–4482. [Google Scholar] [CrossRef]
  22. Kleinert, T.; Labbé, M.; Ljubić, I.; Schmidt, M. A Survey on Mixed-Integer Programming Techniques in Bilevel Optimization. EURO J. Comput. Optim. 2021, 9, 100007. [Google Scholar] [CrossRef]
  23. Zhang, J.; Liu, C.; Li, X.; Zhen, H.-L.; Yuan, M.; Li, Y.; Yan, J. A survey for solving mixed integer programming via machine learning. Neurocomputing 2023, 519, 205–217. [Google Scholar] [CrossRef]
  24. Theurich, F.; Fischer, A.; Scheithauer, G. A branch-and-bound approach for a Vehicle Routing Problem with Customer Costs. EURO J. Comput. Optim. 2021, 9, 100003. [Google Scholar] [CrossRef]
  25. Rahmaniani, R.; Crainic, T.G.; Gendreau, M.; Rei, W. The Benders Decomposition Algorithm: A Literature Review. Eur. J. Oper. Res. 2017, 259, 801–817. [Google Scholar] [CrossRef]
  26. Karbowski, A. Generalized Benders Decomposition Method to Solve Big Mixed-Integer Nonlinear Optimization Problems with Convex Objective and Constraints Functions. Energies 2021, 14, 6503. [Google Scholar] [CrossRef]
  27. Yang, Y. Improved Benders Decomposition and Feasibility Validation for Two-Stage Chance-Constrained Programs in Process Optimization. Ind. Eng. Chem. Res. 2019, 58, 4853–4865. [Google Scholar] [CrossRef]
  28. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  29. Černý, V. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. J. Optim. Theory Appl. 1985, 45, 41–51. [Google Scholar] [CrossRef]
  30. Siarry, P.; Berthiau, G.; Durdin, F.; Haussy, J. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. ACM Trans. Math. Softw. 1997, 23, 209–228. [Google Scholar] [CrossRef]
  31. Taillard, E.D. La Programmation a Memoire Adaptative et les Algorithmes Pseudo-Gloutons: Nouvelles Perspectives pour les Meta-Heuristiques; Guide Books; Istituto Dalle Molle Di Studi Sull Intelligenza Artificiale: Lugano, Switzerland, 1998. [Google Scholar]
  32. Brimberg, J.; Salhi, S.; Todosijević, R.; Urošević, D. Variable Neighborhood Search: The power of change and simplicity. Comput. Oper. Res. 2023, 155, 106221. [Google Scholar] [CrossRef]
  33. Stützle, T. Iterated local search for the quadratic assignment problem. Eur. J. Oper. Res. 2006, 174, 1519–1539. [Google Scholar] [CrossRef]
  34. Lu, S.; Hu, C.; Kong, M.; Fathollahi-Fard, A.M.; Wu, B. A modified adaptive large neighborhood search algorithm for serial-batching machines scheduling considering changeover time and rate-modifying activities. Eng. Appl. Artif. Intell. 2025, 141, 109865. [Google Scholar] [CrossRef]
  35. Liu, S.; Sun, J.; Duan, X.; Liu, G. Parallel adaptive large neighborhood search based on spark to solve VRPTW. Sci. Rep. 2024, 14, 23809. [Google Scholar] [CrossRef] [PubMed]
  36. Nedjah, N.; Junior, L.S. Review of methodologies and tasks in swarm robotics towards standardization. Swarm Evol. Comput. 2019, 50, 100565. [Google Scholar] [CrossRef]
  37. Jain, M.; Saihjpal, V.; Singh, N.; Singh, S.B. An Overview of Variants and Advancements of PSO Algorithm. Appl. Sci. 2022, 12, 8392. [Google Scholar] [CrossRef]
  38. Dorigo, M.; Stützle, T. Ant Colony Optimization: Overview and Recent Advances. In Handbook of Metaheuristics; Gendreau, M., Potvin, J.Y., Eds.; International Series in Operations Research & Management Science; Springer International Publishing: Cham, Switzerland, 2019; pp. 311–351. [Google Scholar] [CrossRef]
  39. Aldhaheri, S.; Alghazzawi, D.; Cheng, L.; Barnawi, A.; Alzahrani, B.A. Artificial Immune Systems approaches to secure the internet of things: A systematic review of the literature and recommendations for future research. J. Netw. Comput. Appl. 2020, 157, 102537. [Google Scholar] [CrossRef]
  40. Guo, C.; Tang, H.; Niu, B.; Lee, C.B.P. A survey of bacterial foraging optimization. Neurocomputing 2021, 452, 728–746. [Google Scholar] [CrossRef]
  41. Zeng, T.; Wang, W.; Wang, H.; Cui, Z.; Wang, F.; Wang, Y.; Zhao, J. Artificial bee colony based on adaptive search strategy and random grouping mechanism. Expert Syst. Appl. 2022, 192, 116332. [Google Scholar] [CrossRef]
  42. Ma, H.; Simon, D.; Siarry, P.; Yang, Z.; Fei, M. Biogeography-Based Optimization: A 10-Year Review. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 391–407. [Google Scholar] [CrossRef]
  43. Meraihi, Y.; Ramdane-Cherif, A.; Acheli, D.; Mahseur, M. Dragonfly algorithm: A comprehensive review and applications. Neural Comput. Appl. 2020, 32, 16625–16646. [Google Scholar] [CrossRef]
  44. Guerrero-Luis, M.; Valdez, F.; Castillo, O. A Review on the Cuckoo Search Algorithm. In Fuzzy Logic Hybrid Extensions of Neural and Optimization Algorithms: Theory and Applications; Castillo, O., Melin, P., Eds.; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2021; pp. 113–124. [Google Scholar] [CrossRef]
  45. Emary, E.; Yamany, W.; Hassanien, A.E.; Snasel, V. Multi-Objective Gray-Wolf Optimization for Attribute Reduction. Procedia Comput. Sci. 2015, 65, 623–632. [Google Scholar] [CrossRef]
  46. Abualigah, L.; Shehab, M.; Alshinwan, M.; Alabool, H. Salp swarm algorithm: A comprehensive survey. Neural Comput. Appl. 2020, 32, 11195–11215. [Google Scholar] [CrossRef]
  47. Hashemi, A.; Dowlatshahi, M.B.; Nezamabadi-Pour, H. Gravitational Search Algorithm: Theory, Literature Review, and Applications. In Handbook of AI-Based Metaheuristics; CRC Press: Boca Raton, FL, USA, 2021. [Google Scholar]
  48. Talatahari, S.; Hakimpour, F.; Ranjbar, A. The application of multi-objective charged system search algorithm for optimization problems. Sci. Iran. 2019, 26, 1249–1265. [Google Scholar] [CrossRef]
  49. Ebadifard, F.; Babamir, S.M. Optimizing multi objective based workflow scheduling in cloud computing using black hole algorithm. In Proceedings of the 2017 3th International Conference on Web Research (ICWR), Tehran, Iran, 19–20 April 2017; pp. 102–108. [Google Scholar] [CrossRef]
  50. Tolabi, H.B.; Shakarami, M.R.; Hosseini, R.; Ayob, S.B.M. Novel FGbSA: Fuzzy-Galaxy-based search algorithm for multi-objective reconfiguration of distribution systems. Russ. Electr. Eng. 2016, 87, 588–595. [Google Scholar] [CrossRef]
  51. Sun, Y.; Xu, P.; Zhang, Z.; Zhu, T.; Luo, W. Brain Storm Optimization Integrated with Cooperative Coevolution for Large-Scale Constrained Optimization. In Advances in Swarm Intelligence; Tan, Y., Shi, Y., Luo, W., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2023; pp. 356–368. [Google Scholar] [CrossRef]
  52. Abdulkhaleq, M.T.; Rashid, T.A.; Alsadoon, A.; Hassan, B.A.; Mohammadi, M.; Abdullah, J.M.; Chhabra, A.; Ali, S.L.; Othman, R.N.; Hasan, H.A.; et al. Harmony search: Current studies and uses on healthcare systems. Artif. Intell. Med. 2022, 131, 102348. [Google Scholar] [CrossRef]
  53. Gómez Díaz, K.Y.; De León Aldaco, S.E.; Aguayo Alquicira, J.; Ponce-Silva, M.; Olivares Peregrino, V.H. Teaching–Learning-Based Optimization Algorithm Applied in Electronic Engineering: A Survey. Electronics 2022, 11, 3451. [Google Scholar] [CrossRef]
  54. Akyol, S.; Alatas, B. Plant intelligence based metaheuristic optimization algorithms. Artif. Intell. Rev. 2017, 47, 417–462. [Google Scholar] [CrossRef]
  55. Ong, K.M.; Ong, P.; Sia, C.K. A new flower pollination algorithm with improved convergence and its application to engineering optimization. Decis. Anal. J. 2022, 5, 100144. [Google Scholar] [CrossRef]
  56. Halim, A.H.; Ismail, I. Nonlinear plant modeling using neuro-fuzzy system with tree physiology optimization. In Proceedings of the 2013 IEEE Student Conference on Research and Development, Putrajaya, Malaysia, 16–17 December 2013; pp. 295–300. [Google Scholar] [CrossRef]
  57. Goli, A.; Tirkolaee, E.B.; Malmir, B.; Bian, G.B.; Sangaiah, A.K. A multi-objective invasive weed optimization algorithm for robust aggregate production planning under uncertain seasonal demand. Computing 2019, 101, 499–529. [Google Scholar] [CrossRef]
  58. Xu, H.; Deng, Q.; Zhang, Z.; Lin, S. A hybrid differential evolution particle swarm optimization algorithm based on dynamic strategies. Sci. Rep. 2025, 15, 4518. [Google Scholar] [CrossRef]
  59. Abbal, K.; El-Amrani, M.; Aoun, O.; Benadada, Y. Adaptive Particle Swarm Optimization with Landscape Learning for Global Optimization and Feature Selection. Modelling 2025, 6, 9. [Google Scholar] [CrossRef]
  60. Han, J.; Chen, Y.; Huang, X. An Advanced Adaptive Group Learning Particle Swarm Optimization Algorithm. Symmetry 2025, 17, 667. [Google Scholar] [CrossRef]
  61. Sroka, G.; Wierzchoń, S.T. Robustness and Invariance of Hybrid Metaheuristics under Objective Function Transformations. arXiv 2025, arXiv:2509.05445. [Google Scholar] [CrossRef]
  62. Peñate-Rodríguez, H.C.; Rivera, G.; Sánchez-Solís, J.P.; Florencia, R. The Scientific Landscape of Hyper-Heuristics: A Bibliometric Analysis Based on Scopus. Algorithms 2025, 18, 294. [Google Scholar] [CrossRef]
  63. Ariffin, M.A.; Rejab, M.M.; Ibrahim, R.; Mahdin, H. The Evolution of Metaheuristic Research: A Bibliometrics Analysis of Research Trends in Computer Sciences. J. Appl. Sci. Technol. Comput. 2025, 2, 1–15. Available online: https://publisher.uthm.edu.my/ojs/index.php/jastec/article/view/23558 (accessed on 8 March 2026).
  64. Abouhssous, K.; Wakrim, L.; Zugari, A.; Zakriti, A. A multi-objective genetic algorithm approach applied to compact meander branch line couplers design for 5G-enabled IoT applications. J. Comput. Electron. 2024, 23, 634–646. [Google Scholar] [CrossRef]
  65. Hoseini Karani, M.M.; Nikoo, M.R.; Dolatshahi Pirooz, H.; Shadmani, A.; Al-Saadi, S.; Gandomi, A.H. Multi-objective evolutionary framework for layout and operational optimization of a multi-body wave energy converter. Energy 2024, 313, 134045. [Google Scholar] [CrossRef]
  66. Mohammadian KhalafAnsar, H.; Keighobadi, J.; Farhid, M. Estimation of thrust force of cold gas propulsion using a multi-layer perceptron optimized by genetic algorithm. Eng. Res. Express 2025, 7, 045580. [Google Scholar] [CrossRef]
  67. Jongpluempiti, J.; Vengsungnle, P.; Poojeera, S.; Srichat, A.; Naphon, N.; Eiamsa-ard, S.; Naphon, P. Efficient thermal performance prediction and optimization in HVAC/thermoelectric systems with artificial neural networks and non-dominated sorting genetic algorithm II. Eng. Res. Express 2025, 7, 045550. [Google Scholar] [CrossRef]
  68. Abouhssous, K.; Wakrim, L.; Zugari, A.; Haddi, S.B.; Handri, K.E.; Zakriti, A. High efficiency and low group delay in an ultra-compact H-shaped directional coupler for 5G applications: An AI-based optimization approach using genetic algorithms and neural networks. AEU-Int. J. Electron. Commun. 2025, 201, 155985. [Google Scholar] [CrossRef]
  69. Abouhssous, K.; Zugari, A.; Bakkali, M.E.; Wakrim, L.; Zakriti, A. A multi-objective binary genetic algorithm for a low-profile, low-group delay directional coupler: An innovative design for 5G applications. Telecommun. Syst. 2025, 88, 94. [Google Scholar] [CrossRef]
  70. Zhou, Z.; Li, N.; Bao, H.; Li, Y.; Guo, X.; Zheng, R.; Dong, W.; Ding, W. Large-Scale Multi-Objective Dual-Population Co-Evolutionary Algorithm Based on Decision Variable Boundary Penalty. Concurr. Comput. Pract. Exp. 2025, 37, e70443. [Google Scholar] [CrossRef]
  71. Yang, Z.; Deng, L.; Di, Y.; Li, C.; Qin, Y.; Zhang, L. A Dual-Population Constrained Multi-Objective Evolutionary Algorithm with Success Incentive Mechanism and its application to uncertain multimodal transportation problems. Eng. Appl. Artif. Intell. 2025, 162, 112586. [Google Scholar] [CrossRef]
  72. Li, P.; Wu, B.; Wu, N. The three-party evolutionary game of preannouncement strategies for platform’s AI updates considering users’ loyalties. Eng. Appl. Artif. Intell. 2025, 162, 112782. [Google Scholar] [CrossRef]
  73. Sun, Y.; Shao, Z.; Yan, B. IAGA: A New Method of Automatic Layout for Dimensioning Engineering Drawings. Teh. Vjesn. 2025, 32, 255–266. [Google Scholar] [CrossRef]
  74. Zhou, R.; Wang, J.; Xie, Z.; Sun, Y.; Lu, G.; Yeow, J.T.W. Superior terahertz radiation detection through novel micro circular log-periodic antenna engineered with an advanced evolutionary neural network algorithm. Microsyst. Nanoeng. 2025, 11, 160. [Google Scholar] [CrossRef]
  75. Ma, H.; Kang, X.; Huang, Y.; Duan, S.; Li, Y.; Fang, D. Structural transient dynamic topology optimization based on autoencoder-enhanced generative adversarial network and elitist guidance evolutionary algorithm. Comput. Methods Appl. Mech. Eng. 2025, 447, 118417. [Google Scholar] [CrossRef]
  76. Ahmed Alsarori, A.M.; Sulaiman, M.H. Integrated deep learning for cardiovascular risk assessment and diagnosis: An evolutionary mating algorithm-enhanced CNN-LSTM. MethodsX 2025, 15, 103466. [Google Scholar] [CrossRef]
  77. Koehler, S.I.; Pentz, J.T.; Middlebrook, E.A.; Hovde, B.T.; Hanschen, E.R. Protocol to detect dilution cycles in chemostat experiments and estimate growth rate slopes with linear modeling with R software chemostat_regression. STAR Protoc. 2025, 6, 104113. [Google Scholar] [CrossRef]
  78. Malik, V.; Pande, A.; Majumder, A. IclForge: Enhancing In-Context Learning with Evolutionary Algorithms under Budgeted Annotation. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management. CIKM ’25; Association for Computing Machinery: New York, NY, USA, 2025; pp. 5931–5938. [Google Scholar] [CrossRef]
  79. Bauz-Olvera, S.A.; Quiroga-Sierra, W.A.; Pambabay-Calero, J.J.; Galindo-Villardón, P. Optimizing Gondola Replenishment: A Simulation Approach Using Promodel and Evolutionary Algorithms to Balance Costs. E3S Web Conf. 2025, 658, 01003. [Google Scholar] [CrossRef]
  80. Jin, X.; Wei, B.; Deng, L.; Yang, S.; Zheng, J.; Wang, F. An adaptive pyramid PSO for high-dimensional feature selection. Expert Syst. Appl. 2024, 257, 125084. [Google Scholar] [CrossRef]
  81. Jose, M.R.; Vigila, S.M.C. F-CAPSO: Fuzzy chaos adaptive particle swarm optimization for energy-efficient and secure data transmission in MANET. Expert Syst. Appl. 2023, 234, 120944. [Google Scholar] [CrossRef]
  82. Kumar, S.; Saini, M.; Goel, M.; Panda, B.S. Modeling information diffusion in online social networks using a modified forest-fire model. J. Intell. Inf. Syst. 2021, 56, 355–377. [Google Scholar] [CrossRef]
  83. Zhang, J.; Xiao, C.; Liang, X.; Yang, W.; Fang, Z.; Zhang, L.; Dai, R.; Li, W.; Ni, H. Machine learning based on a swarm intelligence algorithm and explainable AI for the prediction of reservoir temperature. Energy 2025, 341, 139412. [Google Scholar] [CrossRef]
  84. Bodha, K.D.; Arun, V.; Awasthi, A.; Mahato, B.; Fotis, G. Rotational gate based quantum particle swarm optimization for benchmark suites and combined economic emission dispatch. Eng. Res. Express 2025, 7, 045345. [Google Scholar] [CrossRef]
  85. Krishnakumar, B.; Kousalya, K. Optimal Trained Deep Learning Model for Breast Cancer Segmentation and Classification. Inf. Technol. Control 2023, 52, 915–934. [Google Scholar] [CrossRef]
  86. Goyal, S.; Patterh, M.S. Wireless Sensor Network Localization Based on Cuckoo Search Algorithm. Wirel. Pers. Commun. 2014, 79, 223–234. [Google Scholar] [CrossRef]
  87. Biondi Neto, L.; Becceneri, J.C.; da Silva, J.D.S.; da Luz, E.F.P.; de Campos Velho, H.F.; da Silva Neto, A.J. Computational Intelligence in Optimization Problems. In Computational Intelligence Applied to Inverse Problems in Radiative Transfer; da Silva Neto, A.J., Becceneri, J.C., de Campos Velho, H.F., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 29–34. [Google Scholar] [CrossRef]
  88. Chen, Y.; Zhao, X.; Hao, J. A novel MOPSO-SODE algorithm for solving three-objective SR-ES-TR portfolio optimization problem. Expert Syst. Appl. 2023, 233, 120742. [Google Scholar] [CrossRef]
  89. da Silva Neto, A.J.; de Campos Velho, H.F. Inverse Problems in Radiative Transfer: An Implicit Formulation. In Computational Intelligence Applied to Inverse Problems in Radiative Transfer; da Silva Neto, A.J., Becceneri, J.C., de Campos Velho, H.F., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 19–28. [Google Scholar] [CrossRef]
  90. Jondri, J.; Indwiarti, I.; Puspandari, D. Retweet Prediction Using Multi-Layer Perceptron Optimized by The Swarm Intelligence Algorithm. J. Online Inform. 2023, 8, 252–260. [Google Scholar] [CrossRef]
  91. Hu, K.; Zhang, X.; Jiang, Y. Wireless Sensor Networks Node Localization Approach Based on The Cuckoo Algorithm. In Proceedings of the 2023 International Conference on Artificial Intelligence, Systems and Network Security. AISNS ’23; Association for Computing Machinery: New York, NY, USA, 2024; pp. 349–354. [Google Scholar] [CrossRef]
  92. Li, R.; Ma, Y.; Chen, H.; Yang, X.; Xing, Z. Coordinate descent for top-k multi-label feature selection with pseudo-label learning and manifold learning. Neurocomputing 2025, 658, 131640. [Google Scholar] [CrossRef]
  93. Korbit, M.; Adeoye, A.D.; Bemporad, A.; Zanon, M. Exact Gauss-Newton optimization for training deep neural networks. Neurocomputing 2025, 658, 131738. [Google Scholar] [CrossRef]
  94. Ahmed, U.; Waqar, M.; Zouari, F.; Luo, L.; Louati, M.; Li, J.; Ghidaoui, M.S. Multiple scattering-assisted data-driven full waveform inversion for pipeline blockage detection. Eng. Appl. Artif. Intell. 2025, 162, 112420. [Google Scholar] [CrossRef]
  95. Zhang, J.; Yang, X.; Mou, C.; Zhou, C. Learning surrogate potential mean field games via Gaussian processes: A data-driven approach to ill-posed inverse problems. J. Comput. Phys. 2025, 543, 114412. [Google Scholar] [CrossRef]
  96. Xu, K.L.; Dong, L.H.; Wang, B.; Li, Z. Preserving-periodic Riemannian descent model reduction of linear discrete-time periodic systems with isometric vector transport on product manifolds. Appl. Math. Lett. 2025, 171, 109692. [Google Scholar] [CrossRef]
  97. Mishelevich, D.J. Computer graphics in medicine: Six theses. SIGGRAPH Comput. Graph. 1972, 6, 2–12. [Google Scholar] [CrossRef]
  98. Wang, X.; Liang, Y.; Tang, X.; Jiang, X. A multi-compartment electric vehicle routing problem with time windows and temperature and humidity settings for perishable product delivery. Expert Syst. Appl. 2023, 233, 120974. [Google Scholar] [CrossRef]
  99. Li, Z.; Li, G.; Bilal, M.; Liu, D.; Huang, T.; Xu, X. Blockchain-Assisted Server Placement with Elitist Preserved Genetic Algorithm in Edge Computing. IEEE Internet Things J. 2023, 10, 21401–21409. [Google Scholar] [CrossRef]
  100. Kulaç, S.; Kazancı, N. Optimization of In-Plant Logistics Through a New Hybrid Algorithm for the Capacitated Vehicle Routing Problem with Heterogeneous Fleet. Sak. Univ. J. Sci. 2024, 28, 1242–1260. [Google Scholar] [CrossRef]
  101. Tsogbetse, I.; Bernard, J.; Manier, H.; Manier, M.A. Influence of encoding and neighborhood in landscape analysis and tabu search performance for job shop scheduling problem. Eur. J. Oper. Res. 2024, 319, 739–746. [Google Scholar] [CrossRef]
  102. Ahmed, G.; Sheltami, T.; Yasar, A. Optimal path recommendation in dynamic traffic networks using the hybrid Tabu-A* algorithm. Transp. Res. Part E Logist. Transp. Rev. 2025, 204, 104414. [Google Scholar] [CrossRef]
  103. Weng, X.; Hu, R.; Fan, H.; Zhang, J.; Yun, L. A Reliability Model for Electric Vehicle Routing Problem Under Charging Failure Risk. Soc. Sci. Res. Netw. 2024, 12, 4890097. [Google Scholar] [CrossRef]
  104. Peng, Z.; Kang, Y.; Li, X.; Gao, L.; Liu, Q.; Zhang, C. A hybrid algorithm incorporating sequencing flexibility for integrated process planning and scheduling problem. Swarm Evol. Comput. 2025, 99, 102201. [Google Scholar] [CrossRef]
  105. Becker, C.; Schneider, M. An A-Priori-Splitting-Based Heuristic for the Split Delivery Vehicle Routing Problem with Time Windows. Networks 2025, 86, 468–516. [Google Scholar] [CrossRef]
  106. Naz, S.; Zahid, Z.; Ali, R.; Jamil, M.K. S-box generation through enhanced hybrid chaotic maps and tabu-optimized technique. Nonlinear Dyn. 2025, 113, 34001–34023. [Google Scholar] [CrossRef]
  107. Pandya, K.; Maiti, A. Benchmarking the three-dimensional and the numerical three-dimensional matching problems on the D-Wave Advantage quantum annealer. Inf. Sci. 2025, 721, 122584. [Google Scholar] [CrossRef]
108. Goodarzian, F.; Ghasemi, P. A case-driven simulation-optimization model for sustainable medical logistics network. Socio-Econ. Plan. Sci. 2025, 101, 102271.
109. Gloria, R.S.; Wahyuningsih, S. Study of the adaptive large neighborhood search with Tabu search (ALNS-TS) algorithm on CVRPTW and its implementation. AIP Conf. Proc. 2025, 3446, 020023.
110. Nazari, N.; Yaghoobi, M.A.; Mansouri, N. A dual-phase strategy for clustering: Integrating genetic algorithms with tabu search. Knowl. Inf. Syst. 2025, 67, 12211–12266.
111. Mermoud, D.L.; Grabisch, M.; Sudhölter, P. Minimal balanced collections and their applications to core stability and other topics of game theory. arXiv 2025, arXiv:2507.05898.
112. Kiiski, E.; Hyytiäinen, K. Trade-offs between carbon conservation and profitability in crop cultivation: Unlocking potential through diversifying crop allocations regionally. Agric. Food Sci. 2024, 33, 268–279.
113. Dehshiri, S.J.H.; Amiri, M.; Hajiaghaei-Keshteli, M.; Keshavarz-Ghorabaee, M.; Zavadskas, E.K.; Antuchevičienė, J. Designing a sustainable closed-loop supply chain using robust possibilistic-stochastic programming in pentagonal fuzzy numbers. Transport 2024, 39, 323–349.
114. Dick, H.; Dahm, T. Optimization of Ab-Initio Based Tight-Binding Models. Electron. Struct. 2025, 7, 047001.
115. Hao, H.; Fang, X.; Chen, Y. Heuristic-Enhanced ILP Process Discovery with Multidimensional Dependency Filtering. Concurr. Comput. Pract. Exp. 2025, 37, e70380.
116. Robandi, I.; Aji, A.A.S.; Wibowo, R.S.; Prakasa, M.A.; Prabowo; Putri, V.L.B.; Sutrisno, D.; Widiyawati, E.; Fauzi, M.A. Dynamic Economic Dispatch Using Mixed-Integer Linear Programming for Indonesian Electricity System Integrated with Cascaded Hydropower Plant Considering Take or Pay Contract. Int. J. Intell. Eng. Syst. 2025, 18, 467–481.
117. Su, K.; Yang, C.; Shao, Y.; Jiang, D.; Zhou, C.; Wang, L.; Liu, D.; Zhu, P.; Ding, Y.; Zheng, C.; et al. Accelerating multi-energy system online optimization via integer state variable prediction with operation strategy learning. Energy 2025, 341, 139337.
118. Ning, C.; Ma, A.; Dong, Z. Data-driven multi-stage distributionally robust scheduling for coupled electricity-hydrogen-refinery systems. Appl. Energy 2025, 401, 126620.
119. Haviv, I.; Rabinovich, D. A near-optimal kernel for a coloring problem. Discret. Appl. Math. 2025, 377, 66–73.
120. Ariyanto, A.Y.; Kurdhi, N.A. A whale optimization algorithm approach to the heterogeneous electric vehicle routing problem. AIP Conf. Proc. 2025, 3446, 020035.
121. Ansari, M.; Khamooshi, M.; Toyserkani, E. Adaptive model-based optimization for fusion-based metal additive manufacturing (directed energy deposition). J. Manuf. Process. 2023, 108, 588–595.
122. Mirzaee, H.; Kamrava, S. Estimation of internal states in a Li-ion battery using BiLSTM with Bayesian hyperparameter optimization. J. Energy Storage 2023, 74, 109522.
123. Ding, X.; Wang, Y.; Guo, P.; Sun, W.; Harrison, G.P.; Lv, X.; Weng, Y. A novel physical and data-driven optimization methodology for designing a renewable energy, power to gas and solid oxide fuel cell system based on ensemble learning algorithm. Energy 2024, 313, 134002.
124. Kasterke, M.; Kaufmann, L.; Kateri, M.; Brands, T. An expectation-maximization algorithm for spectral reconstruction under the spectral hard model. Chemom. Intell. Lab. Syst. 2025, 267, 105518.
125. Dey, B.; Zhao, D.; Andrews, B.H.; Newman, J.A.; Izbicki, R.; Lee, A.B. Towards Instance-Wise Calibration: Local Amortized Diagnostics and Reshaping of Conditional Densities (LADaR). arXiv 2025, arXiv:2205.14568.
126. Xue, Z.; Peng, W.; Zhang, J.; Chen, R. Two-echelon optimization framework for semi-autonomous truck platooning in container drayage. Comput. Oper. Res. 2026, 188, 107343.
127. Li, Z.; Zhang, S.; Yang, X. Evaluation and development of Nusselt number and friction factor correlations for airfoil-fin printed circuit heat exchangers. Int. J. Heat Mass Transf. 2025, 253, 127512.
128. Roudbari, N.; Firouzjah, K.G.; Ghasemi, J. Scenario-based sizing and siting of battery swapping stations for electric buses using realistic demand modeling on distribution network. Energy 2025, 341, 139378.
129. Sarker, P.; Choi, K.; Nahid, A.A.; Samad, M.A. CatBoost with physics-based metaheuristics for thyroid cancer recurrence prediction. BioData Min. 2025, 18, 84.
130. Rickett, C.; Sukumar, S.R.; West, K. Search and Query Framework for Workflows with HPC and AI Models. In Proceedings of the Cray User Group; ACM: New York, NY, USA, 2025; pp. 59–68.
Figure 1. Optimization problem classification.
Figure 2. Optimization steps.
Figure 3. Combinatorial optimization methods classification.
Figure 4. Branch and Bound method principle.
Figure 5. Cutting plane method illustration.
Figure 6. Most popular single-solution metaheuristics.
Figure 7. Population-based metaheuristic classification.
Figure 8. Methodological framework of the scientometric study.
Figure 9. Evolutionary algorithm statistics generated by the Scopus database: (a) annual number of papers; (b) per subject area; (c) per document type.
Figure 10. Swarm algorithm statistics generated by the Scopus database: (a) annual number of papers; (b) per subject area; (c) per document type.
Figure 11. Descent algorithm statistics generated by the Scopus database: (a) annual number of papers; (b) per subject area; (c) per document type.
Figure 12. Tabu search algorithm statistics generated by the Scopus database: (a) annual number of papers; (b) per subject area; (c) per document type.
Figure 13. Linear programming algorithm statistics generated by the Scopus database: (a) annual number of papers; (b) per subject area; (c) per document type.
Figure 14. Physics-based algorithm statistics generated by the Scopus database: (a) annual number of papers; (b) per subject area; (c) per document type.
Figure 15. Chi-square contributions for paradigm–discipline associations.
Figure 16. Correspondence Analysis factorial map of optimization paradigms and scientific disciplines.
Figure 17. Operational decision map integrating exact, metaheuristic, hybrid, and simulation-based optimization strategies for modern engineering applications.
Table 1. Comparison of machine learning models used as metamodels in simulation-based optimization (SBO).

| ML Metamodel | Learning Paradigm | Strengths for SBO | Limitations | Applications |
|---|---|---|---|---|
| Gaussian Processes (Kriging) | Probabilistic/Bayesian | Provides predictive uncertainty; highly effective for small datasets | Computationally expensive for large datasets (O(n³)) | Aerospace design, expensive black-box tuning |
| Support Vector Regression (SVR) | Supervised learning | Robust in high-dimensional spaces; guarantees a global optimum for its loss function | Kernel selection and hyperparameter tuning can be complex | Electronics and telecommunications optimization |
| Random Forest (RF) | Ensemble (bagging) | Handles nonlinearities and mixed variable types well; robust to outliers | Poor extrapolation outside the training data domain | Supply chain, logistics, combinatorial problems |
| Gradient Boosting (XGBoost/LightGBM) | Ensemble (boosting) | Extremely high predictive accuracy; computationally efficient training | Risk of overfitting if hyperparameters are not carefully tuned | Industrial process optimization, energy systems |
| Deep Neural Networks (DNN) | Deep learning | Unmatched capacity for modeling highly complex, nonlinear, large-scale systems | Requires large amounts of simulation data for effective training | Robotics, complex multi-physics simulations |
| Physics-Informed Neural Networks (PINNs) | Deep learning + physics | Embeds physical laws in the loss function, reducing data dependency | Complex to formulate and train; optimizing the loss landscape is challenging | Fluid dynamics, thermodynamics, structural mechanics |
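To make the table's first row concrete, the sketch below hand-rolls a one-dimensional Gaussian-process (Kriging) metamodel in NumPy and picks the next point to simulate with a simple lower-confidence-bound rule. The quadratic "simulator", kernel length scale, and exploration weight are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    # Squared-exponential (RBF) kernel between column vectors A (n,1), B (m,1).
    return np.exp(-0.5 * (A - B.T) ** 2 / ls ** 2)

def expensive_simulation(x):
    # Stand-in for a costly black-box simulator (assumed toy objective).
    return (x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 1))          # small initial design
y = expensive_simulation(X).ravel()         # expensive evaluations

K = rbf(X, X) + 1e-8 * np.eye(len(X))       # jitter for numerical stability
alpha = np.linalg.solve(K, y)

# Cheap surrogate predictions over a dense grid replace simulator calls.
grid = np.linspace(0, 1, 201).reshape(-1, 1)
Ks = rbf(grid, X)
mean = Ks @ alpha                           # GP posterior mean
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
std = np.sqrt(np.clip(var, 0.0, None))

# Lower-confidence-bound rule: exploit the mean, explore via uncertainty.
candidate = float(grid[np.argmin(mean - std)][0])
```

The O(n³) solve on the 8-point design is exactly the cost noted in the table's "Limitations" column; only `candidate` would be sent back to the real simulator.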
Table 2. Comparative Overview of Optimization Method Families: Application Domains, Strengths, Limitations, and Typical Performance.

| Method | Best Suited Problems | Strengths | Weaknesses | Typical Performance |
|---|---|---|---|---|
| GA | Combinatorial, scheduling, antenna design | Strong global exploration, flexible encoding | Slow convergence, parameter sensitive | Good robustness, moderate speed |
| PSO | Continuous optimization, feature selection, production planning | Fast convergence, easy implementation | Premature convergence | High efficiency on smooth landscapes |
| ACO | Routing, multi-depot vehicle problems | Excellent for path construction | High computation on large graphs | High solution quality, slower runtime |
| DE | High-dimensional continuous optimization | Strong mutation-based exploration | Sensitive to control parameters | High accuracy, good scalability |
| SA | Maintenance optimization, traveling salesman | Can escape local minima, simple implementation | Slow for very large problems | Good for global search |
| TS | Scheduling, vehicle routing | Escapes local optima, flexible heuristics | Complex implementation | Good solution quality |
| Hybrid | Complex real-world systems, multi-objective optimization | Balanced exploration and exploitation | High complexity | Superior robustness |
| DA | Energy systems, WSN optimization | Good exploration-exploitation balance | Newer algorithm, parameter sensitive | High solution quality |
| CS | Wireless sensor networks | Efficient search for global optimum | Sensitive to population size | Good convergence on multimodal problems |
| ABC | Wireless networks, adaptive search | Good exploration and adaptive mechanisms | Slower convergence in large-scale problems | Competitive accuracy |
| WOA | Electric vehicle routing | Global exploration, flexible application | Can converge prematurely | Effective for multi-depot VRP |
| HS | Healthcare system optimization | Simple, few parameters | Slow convergence for large-scale problems | Moderate efficiency |
| TLBO | Electronic engineering optimization | Simple implementation, no parameters | Slower convergence for complex problems | Moderate solution quality |
| SSA | Multi-objective optimization | Good exploration-exploitation balance | Sensitive to parameters | Competitive performance |
| GWO | Attribute reduction | Effective multi-objective search | May converge prematurely | High-quality solutions |
| BBO | Engineering optimization | Strong global exploration | Parameter sensitive | High solution quality |
| GSA | Optimization problems | Strong exploration capability | Parameter sensitive | Competitive performance |
| BHA | Cloud workflow scheduling | Global search, simple mechanism | Can be trapped in local minima | Good solution quality |
| FGbSA | Distribution system reconfiguration | Efficient multi-objective search | New algorithm, parameter sensitive | High-quality solutions |
| BSO | Large-scale constrained optimization | Cooperative co-evolution | Complex implementation | Good convergence |
| FPA | Engineering optimization | Improved convergence | Sensitive to parameters | Good performance |
| IWO | Aggregate production planning | Effective multi-objective optimization | Parameter sensitive | High-quality solutions |
| SBO | Expensive simulations, high-dimensional problems | Reduces computational cost; enables rapid exploration; can be combined with metaheuristics | Does not always guarantee global optimality; surrogate selection critical | Efficient for large-scale, computationally expensive problems; flexible for multi-objective and stochastic optimization |
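The "simple implementation" claim of the SA row can be illustrated with a minimal simulated-annealing loop over a one-dimensional multimodal function. The objective, cooling rate, and step size below are toy assumptions chosen only to show the acceptance rule that lets the search escape local minima.

```python
import math
import random

def objective(x):
    # Multimodal test function (assumed); global minimum near x = -0.3.
    return x * x + 3.0 * math.sin(5.0 * x) + 3.0

random.seed(1)
x = 4.0                          # deliberately poor starting point
best_x, best_f = x, objective(x)
temperature = 5.0

for _ in range(2000):
    cand = x + random.uniform(-0.5, 0.5)        # local perturbation
    delta = objective(cand) - objective(x)
    # Accept improvements always; accept worse moves with
    # probability exp(-delta / T), which shrinks as T cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = cand
        if objective(x) < best_f:
            best_x, best_f = x, objective(x)
    temperature *= 0.995                        # geometric cooling schedule
```

The early high-temperature phase accepts uphill moves, which is exactly the "can escape local minima" property the table credits to SA; the slow cooling is also why the table lists it as slow on very large problems.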
Table 3. Comparative analysis of exact, approximate, and simulation-based optimization methods.

| Aspect | Exact Optimization Methods | Approximate Optimization Methods | Simulation-Based Optimization |
|---|---|---|---|
| Principle | Exhaustive search for the optimal solution; rigorous mathematical techniques (LP, IP, QP); provable convergence; suited to small-to-medium, well-defined problems | Targets solutions of acceptable quality; heuristics, metaheuristics, and evolutionary algorithms; stochastic convergence that depends on hyperparameters; flexible across problem types | Surrogate models approximate expensive simulations; combined with metaheuristics or gradient-based search; balances accuracy and efficiency |
| Exhaustiveness | Guaranteed; precise modeling with constraints; complete exploration with pruning of non-promising branches | Limited; selective exploration; risk of getting trapped in local optima, countered by stochastic moves or probabilistic transitions | Selective exploration guided by surrogate models; global optimality not guaranteed |
| Complexity | Exponential (O(2^n) or O(n!)); runtime grows rapidly with problem size | Typically polynomial (O(n × k)), depending on iterations and population size | Moderate; cost depends on surrogate training and simulation calls; scalable to high-dimensional problems |
| Speed and applicability | High computation time; effective for small-to-medium instances (<1000 variables); limited use as constraints grow | Short computation times; trade-off between speed and quality; suitable for large problems; flexible across problem types | Robust for large-scale, high-dimensional problems; efficient when evaluations are computationally expensive; suitable for multi-objective, stochastic, or dynamic problems |
| Performance | Optimality guaranteed; sensitive to problem size; effective for well-posed problems; returns a unique optimum or the set of optimal solutions | Satisfactory solution quality; adaptable to complex problems; sensitive to parameters and configuration; no optimality guarantee; returns "sufficiently good" solutions | High-quality approximate solutions; maintains accuracy with fewer simulations |
| Scalability | Poor for large problems | Good with heuristics/metaheuristics | Good; surrogate models allow scaling to large, complex systems |
| Memory/computational resources | High memory and CPU for large problems | Moderate; depends on population size or iterations | Moderate; resources used primarily for surrogate training and evaluation |
| Ease of implementation | Requires advanced mathematical formulation | Easier to implement; flexible under problem changes | Requires surrogate model selection and integration with the optimization algorithm |
| Stopping criteria | Reaching the optimum or a proof of infeasibility | Fixed iteration count, stagnation, or a computational budget/time limit | Budget of simulator calls, or convergence of the surrogate-assisted search |
| Typical solution output | Exact optimal solution(s) | Near-optimal or satisfactory solution(s) | High-quality approximate solutions validated by simulation |
| Applications | Scheduling of small production systems; linear resource allocation; critical problems requiring absolute precision (financial allocation, satellite routing) | Large-scale supply chains, routing problems, engineering design; combinatorial, real-time, and noisy/uncertain problems | Aerospace simulations, CFD/FEA optimization, energy systems, digital-twin scenarios |
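The exact-versus-approximate contrast in the table can be seen on a toy 0/1 knapsack: exhaustive enumeration checks all 2^n subsets and provably finds the optimum, while a greedy value-to-weight heuristic runs in O(n log n) but carries no optimality guarantee. The items and capacity below are invented illustrative data.

```python
from itertools import combinations

values = [60, 100, 120, 40]
weights = [10, 20, 30, 15]
capacity = 50

# Exact: enumerate all 2^n subsets -- guaranteed optimal, exponential cost.
best_exact = 0
for r in range(len(values) + 1):
    for subset in combinations(range(len(values)), r):
        if sum(weights[i] for i in subset) <= capacity:
            best_exact = max(best_exact, sum(values[i] for i in subset))

# Approximate: greedy by value/weight ratio -- fast, no optimality proof.
order = sorted(range(len(values)),
               key=lambda i: values[i] / weights[i], reverse=True)
greedy_value, load = 0, 0
for i in order:
    if load + weights[i] <= capacity:
        load += weights[i]
        greedy_value += values[i]
```

Here the exact search finds value 220 (items 2 and 3) while the greedy heuristic stops at 200, a concrete instance of the "near-optimal" output row above.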
Table 4. Comparative scientometric overview of research trends and epistemological phases in optimization algorithms.

| Research Domain | Volume Scope | Growth Pattern (Annual) | Dominant Disciplines | Journal Ratio | Epistemological Phase | Key Driver/Challenge |
|---|---|---|---|---|---|---|
| Evolutionary Algorithms | Massive (~140,000) | ≈−1.5% | CS 32.8% (≈4596), Eng 20.5% (≈2870), Math 18.2% (≈2548), Physics 3.6% (≈504), Biochemistry 3% (≈420) | High (52%) | Maturity | Standardization & hybridization |
| Swarm Intelligence | Medium (~2600) | ≈−1.4% | CS 34.8% (≈904), Eng 24.1% (≈626), Math 15.3% (≈397), Physics 5% (≈130), Decision Sci 3% (≈78) | High (57%) | Consolidation | Protocol unification & large-scale benchmarking |
| Descent & Statistics | Low (~1000) | ≈−19% | CS 32.8% (≈342), Eng 18.8% (≈196), Math 17.4% (≈181), Physics 4.8% (≈50), Decision Sci 4.2% (≈44) | Very high (60%) | Theoretical validation | Non-convex analysis & hyperparameter reduction |
| Tabu Search | High (~14,000) | ≈+8% | CS 30.1% (≈4064), Eng 23.7% (≈3200), Decision Sci 10% (≈1350), Business/Mgt 4.2% (≈567) | Very high (61%) | Integrated component | Hybridization within high-performance solvers |
| Linear Programming | Massive (~193,000) | ≈+1.9% | CS 25.4% (≈49,100), Eng 24.9% (≈48,100), Math 16.3% (≈31,500), Energy 5% (≈9670) | Very high (63%) | Fundamental bedrock | Big data, MIP solvers & AI calibration |
| Physics-based (PINNs) | Emerging (~5500) | ≈+22% | Eng 26.2% (≈1437), CS 21.3% (≈1169), Physics/Astr 8.5% (≈467), Energy 6.5% (≈357), Math 9.5% (≈521) | Balanced (54%) | Paradigm shift | Bridging "black-box" (data) & "white-box" (laws) |
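The per-cell chi-square contributions of the kind plotted in Figure 15 can be reproduced schematically: for an observed paradigm-by-discipline count table, each cell contributes (O − E)²/E to the statistic, where E is the count expected under independence. The counts below reuse approximate values from Table 4 for three domains and three disciplines purely as an illustration, not the study's full dataset.

```python
import numpy as np

# Rows: paradigms (EA, Swarm, PINNs); columns: disciplines (CS, Eng, Math).
# Counts are approximate values taken from Table 4, for illustration only.
observed = np.array([[4596, 2870, 2548],
                     [ 904,  626,  397],
                     [1169, 1437,  521]], dtype=float)

# Expected counts under row/column independence: (row total * col total) / N.
row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
expected = row_tot @ col_tot / observed.sum()

# Each cell's contribution to the chi-square statistic: (O - E)^2 / E.
contrib = (observed - expected) ** 2 / expected
chi2 = contrib.sum()
```

Large entries of `contrib` flag paradigm-discipline pairs that are over- or under-represented relative to independence, which is what the Figure 15 map visualizes.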
Table 5. Problem characterization and recommended optimization algorithms.

| Problem Characteristics | Recommended Approach | Rationale | Example Applications |
|---|---|---|---|
| Convex, continuous, well-structured | Linear programming or convex optimization | Global optimality guarantees and high efficiency | Portfolio optimization, production planning |
| Moderate-sized mixed problems | Mixed-integer programming | Exact methods suited to structured formulations | Supply chain and scheduling optimization |
| Large-scale combinatorial problems | Genetic Algorithms, PSO, ACO | Near-optimal solutions within reasonable computation time | Vehicle routing, task scheduling |
| Highly nonlinear, non-convex problems | Metaheuristics or hybrid methods | Reduced risk of local minima trapping | Structural design optimization |
| Multi-objective optimization | Pareto-based algorithms (e.g., NSGA-II) | Explicit exploration of trade-offs | Energy system design, engineering design |
| Real-time constrained systems | Lightweight heuristics or local search | Low computational overhead | Robotics control, adaptive planning |
| Physics-governed systems | Physics-informed or hybrid approaches | Consistency with physical constraints | Fluid dynamics and structural simulations |
| Expensive simulator or black-box evaluations | Simulation-based optimization with surrogate models | Reduced number of costly simulations while preserving solution accuracy | Digital twins, aerospace design, energy systems optimization |
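For the first row of the table (convex, continuous, well-structured problems solved by linear programming), here is a minimal sketch using SciPy's `linprog` on a hypothetical two-product production plan; all coefficients are invented for illustration.

```python
from scipy.optimize import linprog

# Maximize profit 3x + 2y; linprog minimizes, so negate the objective.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],   # shared resource constraint: x + y <= 4
        [1.0, 0.0]]   # machine capacity constraint: x <= 2
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)],  # x, y >= 0
              method="highs")
```

Because the feasible region is a convex polytope, the HiGHS solver returns a certified global optimum (here x = 2, y = 2, profit 10) in negligible time, which is exactly the "global optimality guarantees and high efficiency" rationale in the table.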