Algorithms for Bidding Strategies in Local Energy Markets: Exhaustive Search through Parallel Computing and Metaheuristic Optimization

Abstract: The integration of different energy resources into traditional power systems presents new challenges for real-time implementation and operation. In the last decade, a way has been sought to optimize the operation of small microgrids (SMGs) that have a great variety of energy sources (PV (photovoltaic) prosumers, Genset CHP (combined heat and power), etc.) with uncertainty in energy production that results in different market prices. For this reason, metaheuristic methods have been used to optimize the decision-making process for multiple players in local and external markets. Players in this network include nine agents: three consumers, three prosumers (consumers with PV capabilities), and three CHP generators. This article deploys metaheuristic algorithms with the objective of maximizing power market transactions and the clearing price. Since metaheuristic optimization algorithms do not guarantee global optima, an exhaustive search is deployed to find the global optimum. The exhaustive search algorithm is implemented using a parallel computing architecture to reach feasible results in a short amount of time. The global optimal result is used as an indicator to evaluate the performance of the different metaheuristic algorithms. The paper presents results, discussion, comparisons, and recommendations regarding the proposed set of algorithms and performance tests.


Introduction and State of the Art
These days, there are multiple changes in the structure of the transmission and distribution of electric energy that allow the integration of new technologies in generation, storage, electric mobility [1,2], and energy metering [3]. As a result of these system changes, electricity dependence and energy transactions have increased. Local Energy Markets (LEMs) are an opportunity for small grid actors to actively take part in the bidding process. LEMs allow local transactions that empower consumers, producers, and prosumers with the goal of creating energy balances [4].
LEMs are defined locally in terms of residential customers located close together in the same geographical and social area. However, the larger the number of actors, the harder the synchronization, control, and optimal market operation. To replicate and achieve optimal local energy market responses, it is necessary to guarantee coordination among the different market participants and appropriate tools for proper decision making [4,5].
The CEC (Congress on Evolutionary Computation) and GECCO (Genetic and Evolutionary Computation Conference) 2021 competition on "Evolutionary Computation in the Energy Domain: Smart Grid Applications" called for participants to develop metaheuristic optimization solutions for LEM operation under complex conditions, such as those that include trading energy in an LM (Local Market) [6]. Even with explicit mathematical formulations, these problems cannot be solved efficiently due to the complex nature of LEM models. Heuristic and metaheuristic algorithms have been shown to be a reliable option for problems that include energy transactions [7]. An example of this is the set of evolutionary algorithms explored for the solution of a bi-level energy market problem with nine participants in [6,7]. Track 1 of the 2021 competition includes a complex bi-level market problem with nine different participants. The upper-level agents try to maximize their profits, depending on the solution of the lower-level problem. This interdependency between decisions makes the problem a difficult task [6,8]. Therefore, the complexity of this problem, with its high number of variables, makes it well suited for the application of different heuristics-based algorithms and special computational structures [6].
More complex computational architectures have been required for the development of electrical networks, control strategies, and optimal decisions. In order to obtain optimal and feasible solutions for the planning and operation of complex systems with a high number of devices, one of the proposed architectures is Parallel Computing (PC). PC allows the execution of tasks in parallel, thus avoiding the long processing times required when activities are executed sequentially [9]. In this work, a parallel architecture is used to reach a global optimum in a bi-level local market with different agents using brute force [10,11]. In this way, not only is a complex optimization problem solved, but it is also possible to measure the performance of different metaheuristic algorithms.
The remainder of this document is organized as follows: Section 1 summarizes the state of the art for different approaches in terms of metaheuristic algorithms for SMG operation planning and PC power system applications. The next section presents the test case. Section 3 presents the HyDE-DF (hybrid-adaptive differential evolution with a decay function), HHO (Harris Hawks Optimization), WOA (whale optimization algorithm), and DEEPSO (Differential Evolutionary Particle Swarm Optimization) algorithms. Section 4 presents the algorithm for PC and then the evaluation of the metaheuristic algorithms using the global optimum found through PC. Finally, the last section presents the conclusions of the work.

State of the Art
In recent years, concerns about fossil fuel reduction and climate change, combined with the gradual decrease in the costs of unconventional sources, have been pushing the incorporation of renewable energy and other types of sources into isolated networks (PV, EV, Genset, BESS) or small grids that allow local energy transactions, as seen in LEMs. Nevertheless, the variable and uncertain nature of renewable energy has created significant challenges for power networks because the power produced by renewable energy is not completely dispatchable and therefore cannot be controlled. For this reason, new approaches have been developed to integrate large amounts of renewable energy into smart grids and local markets, leading them to operate in a more effective, efficient, economical, and sustainable direction [4,12].
A local grid could include loads and distributed energy resources, storage, and controllable loads, and it should be managed in a controlled manner whether or not it is interconnected to the main power grid [12]. A local small grid could deliver auxiliary services to the TSO or DSO, operate energy arbitrage (store energy when the price is low to sell or use it when the price is high), and actively participate in energy markets, in addition to participating in distribution systems by selling or bidding additional or required energy. However, to ensure that these activities are achievable, a small grid has to optimize the utilization of its resources, pursuing economic profit. Due to the nature of renewable energy sources, load forecasting elements, and the energy market, the time horizon for the optimization and dispatch is the day ahead. On the other hand, LEMs whose energy prices can vary hourly may require optimization with an equivalent time horizon. Thus, day-ahead and intra-day optimal load balances are the most important requirements in an LEM.
The operation of LEMs has been one of the main problems in recent years due to the increased number of agents in the grid. The growing interest in these systems has resulted from the integration of renewable sources, DG price reductions, community self-resilience strategies, market flexibility, and dynamic loads, among other factors [13,14]. An LEM requires two main elements: a market controller that creates conditions and policies to carry out the local balance and maximize economic gains [8], and an energy controller that distributes and schedules the available energy requests and resources [8]. A good number of publications have addressed autonomous generation and load control, including intermittent sources and demand response [15][16][17][18][19][20][21][22][23]; however, few have investigated the operation of autonomous market controllers [8]. Some efforts have focused on the theoretical assessment of optimal strategies. The solutions include the performance evaluation of real customer behavior, making the problem complex [24].
Since these types of problems are normally non-convex with a wide search space, they can become NP-hard problems [25]. This characteristic results in high computation times that limit the possibility of mathematical simplifications for finding points closer to the global optimal solution. Different alternatives have emerged to solve these problems: (a) metaheuristic solutions search for near-optimal solutions in efficient time; nevertheless, the formulation does not guarantee global optimal solutions; (b) exhaustive searches guarantee feasible and global optimal solutions through the evaluation of all combinations of the different decision variables. The latter process can be cumbersome, making the problem unfeasible for applications that require responses in time frames of a day or hours.
Metaheuristic algorithms are presented as possible strategies for optimal bidding in an LEM. Different computational strategies for optimal agent bidding are assessed in [6]. In that research, the authors evaluate different LEM strategies based on community storage and the agents' revenues, costs, and local electricity consumption. The evaluation results suggest different opportunities to increase the energy transactions in the system. On the other hand, parallel structures are useful when problems exhibit such high complexity that exhaustive enumeration is the only way to find the global optimum [11]. However, this search previously depended on human calculations, which made it error-prone, exhausting, and unable to produce coherent results that allowed concise conclusions. Nowadays, with the development of computers and more suitable structures for the parallelization of activities, brute force algorithms have become feasible options to support exhaustive search and the solution of these problems. This computational development was initially shown with the use of Message Passing Interface (MPI) architectures. Currently, cloud or fog architectures, along with FPGA and GPU architectures, are used in the parallel processing of operations. In the case of power systems, some parallelization applications have been shown in solving different problems. Some researchers have used parallel architectures mainly for the solution of power flow [26], transient stability [27][28][29], EMT analysis [30,31], and renewable energy source integration [32] problems. Few applications in power systems have been reported for exhaustive search through brute force algorithms.

Materials: Case Studies from Bi-Level Optimization Competition
The energy market problem in this research considers a bi-level optimization problem, where: (1) multiple agents interact through bids/offers; (2) agents can be consumers, producers, or prosumers; (3) agents try to maximize their profits; (4) agents have access to the local and main grid; and (5) the system tries to maximize the energy transacted. All operations are firstly transacted in the local market and later complemented by the interaction with the main grid. Figure 1 illustrates the problem architecture, including the system and the agents involved. Following the structure of previous years, this year's competition presents two tracks, but, in this paper, only the analysis and results of Track 1 are presented. The first test bed proposes a bi-level optimization problem for bidding strategies in local energy markets with agents in the upper level trying to maximize their profits [6].

Track 1: Bi-Level Optimization of Bidding Strategies in LEM
The case considers an LEM where multiple agents interact to maximize system transactions. Agents that interact in the problem include consumers, producers, and prosumers. The main concern is to maximize the transactions locally and interact, if required, with the power grid back-up system. The problem is formulated using a bi-level problem formulation. The lower level maximizes the internal transactions; the upper level looks for agents' profit maximization.
The solution of the problem does not allow independent optimization for both levels. The solutions must guarantee that the clearing price (cp) is reached in the lower level, and the solution affects the interaction with the wholesale market. cp is affected by agents' strategies for selling and buying energy. The bidding process uses merit order for the allocation of generation and load offers.
The system profit maximization follows Equations (1)-(3):

where:
P_j: producers' profits
C_i: costs for consumer agents
cp: LEM clearing price
x_{j,i}: energy sold by agent j to agent i in the LEM
c_F: feed-in tariff
Esell_{j,grid}: energy sold by agent j to the grid
c_m: marginal price of the generation unit
G_j: energy produced by generation unit j
c_G: grid price
Ebuy_{grid,i}: energy bought by agent i from the grid
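The merit-order allocation described above can be sketched as a simple generate-and-match procedure. The following is a minimal illustrative example; the function name and matching rules are assumptions, since the competition's actual evaluation code is encrypted:

```python
# Minimal merit-order clearing sketch (illustrative; not the competition's
# encrypted evaluation code).
def clear_market(bids, offers):
    """bids/offers: lists of (quantity, price) tuples for one period.
    Bids are ranked by descending price and offers by ascending price;
    quantity is matched while a bid price meets an offer price, and the
    clearing price cp is set by the marginal (last matched) offer."""
    bids = sorted(bids, key=lambda t: -t[1])
    offers = sorted(offers, key=lambda t: t[1])
    traded, cp, i, j = 0.0, None, 0, 0
    bq = bids[0][0] if bids else 0.0
    oq = offers[0][0] if offers else 0.0
    while i < len(bids) and j < len(offers) and bids[i][1] >= offers[j][1]:
        q = min(bq, oq)            # quantity matched at this step
        traded += q
        cp = offers[j][1]          # marginal matched offer sets cp
        bq -= q
        oq -= q
        if bq == 0:
            i += 1
            bq = bids[i][0] if i < len(bids) else 0.0
        if oq == 0:
            j += 1
            oq = offers[j][0] if j < len(offers) else 0.0
    return traded, cp
```

For example, with bids [(2, 0.20), (1, 0.15)] and offers [(1.5, 0.10), (2, 0.18)], the sketch trades 2.0 units and clears at 0.18, since the 0.15 bid cannot meet the 0.18 marginal offer.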

Algorithm Structure
The case study and evaluation platform require two codes: the first code is provided by the competition organizers, and the second is proposed by each competitor. The organizers' codes include decision parameters, bounds, and some algorithm settings. The main algorithm evaluates the algorithm performance of different competitors using an encrypted code [7]. The evaluation outputs include fitness, penalties, and violations. The platform structure is summarized in Figure 2.


Fitness Function Encoding
The optimization function is based on the maximum profits from the interaction of multiple agents in the LEM. Each agent transaction is sent in tuples that include quantity and price (q_k, p_k). Bids and offers are collected all day in t periods that go from 1 h to 24 h.


Problem Formulation
The problem structure is represented by an objective function and a set of constraints and assumptions.

Objective Function
The objective function maximizes the transactions on the LEM for all agents in the system. It is represented by the average profit of producers, consumers, and prosumers, as shown in Equation (4); the problem is encoded as a minimization, so the lower the value of the objective function, the better the profits among the agents in the LEM.

Model Assumptions
• Local and wholesale markets are considered [7].
• Consumers make bids, producers make offers, and prosumers can make both.
• No initialization tweaks are allowed.
• A maximum of 10,000 evaluations are allowed in the competition.
• The maximum generation capacity for generator agents is 2 kW.
• Bound tariffs consider a flat feed-in tariff c_F = 0.095 units and a grid price c_G = 0.20 units.

Distance to Global Optima
To evaluate the performance of different metaheuristic algorithms, a distance to optimal is included in this article. The metric is shown in Equation (5).
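Although Equation (5) is not reproduced here, a relative-gap form of such a metric can be sketched as follows; this specific expression is an assumption for illustration, not necessarily the paper's exact formula:

```python
def distance_to_optimum(fitness, global_opt):
    # One plausible form of the distance metric in Equation (5): the
    # relative gap between a metaheuristic's fitness and the global
    # optimum found by exhaustive search (assumed expression).
    return abs(fitness - global_opt) / abs(global_opt)
```

A distance of 0 means the algorithm reached the global optimum; larger values indicate a larger relative gap.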

HyDE-DF Algorithm
HyDE-DF is a Hybrid-Adaptive Differential Evolution with a Decay Function algorithm proposed in [33]. It is an improvement of the HyDE algorithm presented in [34], which uses a mutation strategy called "DE/target-to-perturbed_best/1" and self-adaptive mechanisms. This algorithm has been applied to smart grid problems, showing high competitiveness.
The HyDE-DF version implements a decay function that prevents fast convergence toward the best individual in the population. It also incorporates a re-initialization mechanism that activates when several successive iterations show no improvement, replacing the individuals with a new population around the best solution.
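The two mechanisms described above can be sketched as follows. The exponential form of the decay, the constant `a`, and the function names are assumptions for illustration; the exact definitions are those given in [33]:

```python
import numpy as np

def decay_factor(iteration, max_iter, a=5.0):
    # Illustrative exponential decay in (0, 1]: the attraction toward the
    # best individual is strong early on and fades as iterations progress.
    # The exact decay function of HyDE-DF is defined in [33]; this form
    # and the constant `a` are assumptions.
    return float(np.exp(-a * iteration / max_iter))

def reinitialize_around_best(best, pop_size, sigma, lower, upper, rng=None):
    # Stagnation response sketch: sample a fresh population around the
    # best solution found so far and clip it to the variable bounds.
    if rng is None:
        rng = np.random.default_rng()
    pop = best + sigma * rng.standard_normal((pop_size, best.size))
    return np.clip(pop, lower, upper)
```

The decay factor multiplies the attraction term in the mutation, while the re-initialization is triggered only after a fixed number of non-improving iterations.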


HHO Algorithm
Harris's hawks optimization (HHO) [35] is a population-based, gradient-free, nature-inspired optimization algorithm that emulates hawks and rabbits in a predation scenario. Hawks (candidate solutions) perch randomly at some locations and wait to detect prey; the hawks move toward the rabbit, but the prey escapes, jumping randomly to a new position. The hawks aim to reduce their prey's energy (minimize the objective function), looking for positions where the rabbit will be tired.
HHO balances exploration and exploitation through random jumps. It implements a gradual transition from these two states simulating the behavior of Harris's hawks, in which they cooperate to surround the prey. These are features that have allowed the algorithm to have a satisfactory performance in engineering problems [35].
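The gradual transition between the two states is governed by the prey's escaping energy, a standard component of HHO [35]; the function and parameter names below are chosen for illustration:

```python
import random

def escaping_energy(t, T, rng=random):
    # Escaping energy of the prey in HHO [35]: E = 2*E0*(1 - t/T), where
    # E0 is drawn uniformly from (-1, 1) at every iteration t out of T.
    # |E| >= 1 keeps the hawks exploring (perching/search), while
    # |E| < 1 switches them to exploitation (besiege) strategies,
    # producing the gradual exploration-to-exploitation transition.
    E0 = rng.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)
```

Since the envelope 2*(1 - t/T) shrinks linearly to zero, late iterations are dominated by exploitation moves.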

DEEPSO Algorithm
DEEPSO is an ensemble heuristic that mixes "Differential Evolution" (DE) with "Evolutionary Particle Swarm Optimization" (EPSO), itself already a hybrid approach. The combination of these techniques results in a robust algorithm that steers the search toward the global optimum [36]. Several variations of DE have been generated to find a successful self-adaptive scheme [36].
"Stochastic star", one of the features of this algorithm, allows an additional communication probability between individuals. This scheme increases the algorithm's coverage of the search space so that it does not fall into local optima. Although the scheme in [36] does not arise from a deductive proof, it offers results that are superior to those of other non-hybrid algorithms. The results are supported by the algorithms' responses when solving complex problems [12].

HHO-DEEPSO-HyDE-DF Algorithm Tuning
For the hybrid algorithm used, HHO-DEEPSO-HyDE-DF, different parameter combinations were tested, looking for the best mean expected value and robustness in the solution of Track 1. For HHO, DEEPSO, and HyDE-DF, several values of the population size and number of iterations were tested; additionally, for the HyDE-DF phase, whether or not to use adaptive DE, the mutation factor, and the crossover factor were also varied.
A total of 20,736 combinations were tested; for each combination, 10 runs were executed (none exceeded the 10,000 function evaluations). The tuning took 45.8 h distributed across 192 workers.
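Such a tuning campaign can be sketched as a grid expansion distributed over a worker pool. The parameter names and candidate values below are placeholders, since the text reports only the tuned quantities and the total of 20,736 combinations:

```python
from itertools import product
from multiprocessing import Pool

# Hypothetical tuning grid: the actual candidate values are not listed in
# the text, so these entries are illustrative placeholders.
GRID = {
    "hho_pop": [3, 5, 10],
    "hho_iters": [50, 100],
    "deepso_pop": [3, 5],
    "hyde_pop": [5, 10],
    "adaptive_de": [True, False],
    "F": [0.3, 0.5, 0.7],   # mutation factor
    "CR": [0.5, 0.9],       # crossover factor
}

def combinations(grid):
    # Expand the grid into one dict per parameter combination.
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def tune(evaluate, grid, workers=192, runs=10):
    # Evaluate every combination `runs` times, spreading the work across
    # the available workers; returns the combinations and their scores.
    combos = list(combinations(grid))
    with Pool(workers) as pool:
        scores = pool.map(evaluate, combos * runs)
    return combos, scores
```

The placeholder grid above expands to 288 combinations; the real campaign used a larger grid totaling 20,736.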

Parallel Computing Algorithm for Exhaustive Search
The use of exhaustive or brute force search algorithms is a result of the computer era, where a high computational burden can be supported by current processor structures. These algorithms are simple to implement but have a high computational cost, sometimes making them infeasible. Exhaustive search algorithms require two main stages: (a) enumeration, listing all the possible candidate solutions, and (b) solution checking, confirming whether the candidate solutions meet the problem's constraints. The general basic algorithm followed by this type of development is summarized in Figure 1. The algorithm (Algorithm 1) shows P as the problem's solution space, Λ as a null space, and c as a candidate solution. The algorithm keeps computing different candidate solutions until all candidates are evaluated. Figure 1 shows in green two strategies to speed up the algorithm evaluation. The first is the reduction of the search space. The second is the algorithm's parallelization.

Algorithm 1 Exhaustive search
1: c ← first(P)
2: while c ≠ Λ do
3:    if valid(P, c) then
4:       Output(P, c)
5:    end if
6:    c ← next(P, c)
7: end while
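In Python, the enumeration and check stages of Algorithm 1 reduce to a generate-and-test loop. This sketch (names assumed) also tracks the best fitness, since the LEM problem is encoded as a minimization:

```python
def exhaustive_search(candidates, valid, fitness):
    # Generate-and-test loop mirroring Algorithm 1: enumerate every
    # candidate c in the solution space (first/next(P, c)), keep only the
    # valid ones (valid(P, c)), and track the best minimization fitness.
    best, best_fit = None, float("inf")
    for c in candidates:            # enumeration stage
        if valid(c):                # solution-checking stage
            f = fitness(c)
            if f < best_fit:
                best, best_fit = c, f
    return best, best_fit
```

For instance, minimizing (x - 3)^2 over the even integers in [0, 9] returns x = 2 with fitness 1.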

Search Space Reduction
A large list of candidate solutions might require long evaluation periods. Reference [37] shows that exhaustive search algorithms can be limited by memory rather than by execution time. However, algorithms that require days, months, or years are not desired for the solution of real-life problems. Reference [11] recommends design space reduction before performing the optimization. Additional analysis will often lead to dramatic reductions in the number of candidate solutions and may turn an intractable problem into a trivial one. In some cases, the analysis may reduce the entire candidate list to only a set of valid solutions; that is, it may yield an algorithm that directly enumerates all the feasible solutions without wasting time on tests and the generation of invalid candidates. The algorithm should balance search space reduction against the available computational resources and time.

Algorithm Parallelization
Exhaustive search algorithms are easy to implement. However, they may result in long computation times. Large serial evaluations can show fast results in seconds or minutes when only small sets of candidates are computed. Unfortunately, the story is not the same when millions or quintillions of candidates appear in a problem. Under those conditions, not only fast processors are required, but also appropriate computing structures that allow simultaneous evaluations. Reference [10] shows the result of evaluating an optimization problem in fisheries management. The results showed a 12× acceleration of an exhaustive search algorithm using a parallel CPU-GPU computing structure. The results allowed the identification of global maxima. Algorithm parallelization is therefore suggested as a speed-up strategy for exhaustive search. The previous algorithm and acceleration strategies were used in the evaluation of the exhaustive search described in Section 4.
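A process-pool version of the same generate-and-test loop can be sketched as follows; the placeholder objective and all names are assumptions for illustration:

```python
from multiprocessing import Pool

def fitness(c):
    # Placeholder objective: in the LEM problem this would be the
    # competition's (encrypted) fitness evaluation of one candidate.
    x, y = c
    return (x - 1) ** 2 + (y + 2) ** 2

def score(c):
    # Pair each candidate with its fitness so the reduction step can
    # recover the argmin.
    return (fitness(c), c)

def parallel_exhaustive_search(candidates, workers=16, chunksize=10_000):
    # Spread candidate evaluation across worker processes, mirroring the
    # 16-core workstation described later in the paper; workers score
    # chunks independently and the global best is reduced at the end.
    with Pool(workers) as pool:
        return min(pool.imap_unordered(score, candidates, chunksize))
```

Because candidate evaluations are independent, the speed-up is close to linear in the number of cores until memory bandwidth or enumeration overhead dominates.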

Exhaustive Search Applied to the Bi-Level LEM Problem
The day-ahead local energy market bidding optimization problem has nine agents: three consumers, three producers, and three prosumers. Prosumer agents are consumers with PV generation capabilities. The agent data correspond to a derivation of standard power profiles of residential houses and PV systems based on the open datasets available in [38]. As shown in Figure 1, each agent has a pair of values corresponding to each period's quantity and price in the day-ahead market. Consequently, each agent has 48 variables, and the solution gathers all agents within a vector with 432 variables. However, most of the variables are fixed, since the lower and upper boundaries are the same. Table 1 shows the fixed variables and the variables that can change for the quantity and price by the agent and period. The empty cells represent the fixed variables. The last column presents the number of variables that can change by period for all agents. This analysis reduces the number of variables from 432 to 206.
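Identifying the fixed variables amounts to comparing the bound vectors; a small sketch (array and function names assumed) that returns the indices still to be enumerated:

```python
import numpy as np

def free_variables(lower, upper):
    # Indices of variables whose lower and upper bounds differ; variables
    # with lower == upper are fixed and need not be enumerated. In the
    # competition data, this reduces the 432-variable vector (9 agents x
    # 24 periods x (quantity, price)) to 206 free variables.
    lower, upper = np.asarray(lower), np.asarray(upper)
    return np.flatnonzero(upper > lower)
```

Only the returned indices enter the exhaustive enumeration; the remaining positions are copied directly from the bounds.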

The exhaustive search methodology evaluates all possible solutions of the 206 variables. This paper refers to the number of possible values for each variable as states. Defining the number of states as N, the number of possible solutions for v free variables is N^v. Evaluating only two states per variable (N = 2), the number of possible solutions is 2^206 ≈ 1.02 × 10^62. The workstation that evaluates all possible solutions is an Intel Xeon E5-2670/2.60 GHz/RAM 128 GB/16 cores. The workstation can evaluate 10,000 solutions in parallel in 18 s using 16 cores. Considering the number of solutions to evaluate, the workstation is not capable of evaluating them in an acceptable time. However, the problem's nature allows dividing the exhaustive search by period, since the behavior of the agents in each period is independent. To reduce the required time to perform the exhaustive search, the problem is fractioned and analyzed by period, reducing the number of variables from 206 to 6 for periods 1 to 7 and 20 to 24, 8 for period 8, 10 for periods 9 to 10 and 19, and 12 for periods 11 to 18. Recomputing the number of possible solutions per period with N = 2, the total is reduced from 1.02 × 10^62 to 36,864, allowing the workstation to evaluate all possibilities in 66.35 s. Table 2 presents the summary of the states, combinations, and expected simulation time by period. The first fitness of the initial solution is Global_opt = 183.16. This reduction allows increasing the number of states to perform a more detailed search of the optimization problem's search space. Table 3 presents the new settings of all combinations by period. The number of combinations is 344,393,472, and the expected evaluation time is 619,908.25 s (7.17 days). After evaluating all combinations, the fitness of the final solution improves from Global_opt = 1.5910 to Global_opt = 1.4741.
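The per-period figures quoted above can be reproduced with a short calculation; the list of free variables per period is taken from the text, while the function names are illustrative:

```python
# Free variables per period, as reported in the text: 6 for periods 1-7
# and 20-24, 8 for period 8, 10 for periods 9-10 and 19, and 12 for
# periods 11-18 (206 variables in total over the 24 periods).
VARS_PER_PERIOD = [6] * 7 + [8] + [10, 10] + [12] * 8 + [10] + [6] * 5

def total_combinations(n_states, vars_per_period=VARS_PER_PERIOD):
    # Splitting by period turns one joint search over N**206 candidates
    # into 24 independent searches of N**v candidates each.
    return sum(n_states ** v for v in vars_per_period)

def estimated_time_s(n_combinations, rate=10_000 / 18):
    # Workstation throughput quoted above: 10,000 evaluations in 18 s.
    return n_combinations / rate
```

With N = 2 states this yields 36,864 combinations and an estimated 66.35 s, matching the figures in the text, while the joint search of 2^206 ≈ 1.02 × 10^62 candidates would be intractable.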

Results: Algorithms Performance Analysis Using Global Optimum Results through Parallel Computing
Different metaheuristic algorithms were tested to solve the LEM problem. The test included two phases of evaluation: the algorithms are run initially using an exploration strategy and later an exploitation strategy. The exploration stage is the process of identifying new optimal regions while searching for the best solution. On the other hand, exploitation consists of probing a limited region of the search space with the expectation of improving the already identified optimal point.
The list of tested algorithms is summarized in Table 4. The results show the minimum and average fitness. Some of the algorithms show better performance during the exploration phase but not the exploitation phase. Figure 4 shows the convergence rate for different strategies. A faster convergence rate indicates a positive exploration phase, while better optimal points result from a better exploitation phase; e.g., HyDE-DF was superior to all in the exploitation phase. Some of the strategies included hybrid metaheuristic techniques such as HHO-DEEPSO, I-GWO, and WOA. These achieved better average optimal points, since their implementations did not easily fall into local optima. The distance to the global optimum shows that the lower its magnitude, the closer the fitness is to the global optimum point.
HHO-DEEPSO-HyDE-DF was tested with 20,736 runs, where the most significant parameters were the population size and the number of iterations of each step. The best results appeared with HHO and DEEPSO phases that have a low population (three individuals) but a high number of iterations, while the population of the HyDE-DF phase was slightly higher (five individuals) with adaptive DE. The results for some parameter combinations are displayed in Table 5. The Big O notation resulting from these functions, used to describe algorithm efficiency and evaluate computational complexity, gives a polynomial order of growth. Some parameter combinations with non-adaptive DE for HyDE-DF show better fitness (even 1.52), but their results present high variance and were not repeatable; on average, they showed inferior performance. Tables 6 and 7 show the results of the run for the HHO-DEEPSO-HyDE-DF solution reported in Table 4. The green cells represent fixed variables. Table 6 shows the results for bid or offer quantities for each agent. Table 7 presents the prices. Positive and negative signs indicate bids and offers, respectively.

Conclusions
The solution of complex non-convex problems present in academia and industry requires the use of alternative algorithms such as metaheuristics. However, these do not guarantee finding the global optimal solution. As a result of the advancement of PC architectures, this article described a search strategy and algorithm parallelization that allow reaching the global optimum in feasible time. The PC architecture is used to run an exhaustive search algorithm on local stations and clusters with multiple cores. The PC architecture is also used to perform the parameter tuning of the metaheuristic algorithm that attained the best results, reducing the computation time required for that activity. In this paper, we evaluated the performance of different metaheuristic algorithms using the global optimum previously found through PC. The algorithms are evaluated in terms of distance to the optimal solution and ranking position.
The strategy allowed us to verify the performance and results of different algorithms in solving the bi-level local market problem. In this way, the distance to the optimum of the different strategies was verified. The strategy with the best performance was HHO-DEEPSO-HyDE-DF, achieving results approximately 10% away from the global optimum found. In this way, this article shows the results of different metaheuristic algorithms for maximizing the energy transactions in the system. To evaluate the performance of such strategies, an efficient parallel computing algorithm is created. The results allow the verification of global optima and the evaluation of different strategies based on the distance to the optimal solution, contributing to both industry and academia in optimization research problems.