DM: Dehghani Method for Modifying Optimization Algorithms

In recent decades, many optimization algorithms have been proposed by researchers to solve optimization problems in various branches of science. Optimization algorithms are designed based on various phenomena in nature, the laws of physics, the rules of individual and group games, and the behaviors of animals, plants, and other living things. The implementation of optimization algorithms on some objective functions has been successful, while on others it has led to failure. Improving the optimization process and adding modification phases to optimization algorithms can lead to more acceptable and appropriate solutions. In this paper, a new method called the Dehghani method (DM) is introduced to improve optimization algorithms. DM affects the location of the best member of the population using information about the locations of the population members. In fact, DM shows that all members of a population, even the worst one, can contribute to the development of the population. DM has been mathematically modeled and its effect has been investigated on several optimization algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), and the grey wolf optimizer (GWO). In order to evaluate the ability of the proposed method to improve the performance of optimization algorithms, the mentioned algorithms have been implemented, in both their original and DM-modified versions, on a set of twenty-three standard objective functions. The simulation results show that the optimization algorithms modified with DM provide more acceptable and competitive performance than the original versions in solving optimization problems.


Introduction
The purpose of optimization is to determine the best solution among the available solutions of an optimization problem according to the constraints of the problem [1]. Each optimization problem is designed with three parts: constraints, objective functions, and decision variables [2]. There are many optimization problems in different sciences that should be optimized using an appropriate method. Stochastic search-based optimization algorithms have always been of interest to researchers for solving optimization problems [3]. Optimization algorithms are able to provide a quasi-optimal solution based on a random scan of the search space instead of a full scan. The quasi-optimal solution is not the best solution, but it is close to the global optimal solution of the problem [1]. In this regard, optimization algorithms have been applied by scientists in various fields such as energy [4][5][6], protection [7], electrical engineering [8][9][10][11][12][13], topology optimization [14], and energy carriers [15][16][17] to achieve quasi-optimal solutions. Table 1 shows the optimization algorithms grouped according to their main design idea.

Swarm-based
General description: Designed based on the simulation of the behavior of living things such as animals and plants, and other swarm-based phenomena.
Refs. [19,20] It is based on modeling the discovery of the shortest path by ants.
Ref. [29] It has gained wide acceptance among optimization researchers. This algorithm is inspired by the teaching-learning process and is based on the influence of a teacher on the output of learners in a class.
Ref. [31] It is designed by simulating the process of treating patients by a physician. The treatment process has three phases: vaccination, drug administration, and surgery.
Ref. [32] It mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves, namely alpha, beta, delta, and omega, are employed for simulating the leadership hierarchy.

Game-based
General description: Designed based on simulation of different processes and rules of individual and group games.
Ref. [33] It is based on simulating the behavior of players as they try to find a hidden object in the hide object game.

Physics-based
General description: Designed based on the ideation of various laws of physics.

Evolutionary-based
General description: They have involved evolution of a population in order to create new generations of genetically superior individuals [47].
Ref. [50] It has found wide acceptance in many disciplines, with applications to environmental science problems. This algorithm is an optimization tool that mimics natural selection and genetics.
Each optimization problem has a definite solution called the global solution. Optimization algorithms provide a solution based on a random search of the search space; this solution is not necessarily the global solution, but because it is close to the global optimal solution, it is an acceptable solution. The solution provided by optimization algorithms is called a quasi-optimal solution. Therefore, an optimization algorithm that offers a better quasi-optimal solution than another algorithm is a better optimizer. In this regard, many optimization algorithms have been proposed by researchers to solve optimization problems and achieve better quasi-optimal solutions.
Although optimization algorithms have been successful in solving many optimization problems, improving the equations of optimization algorithms and adding modification phases to them can lead to better quasi-optimal solutions. In fact, the purpose of improving an optimization algorithm is to increase its ability to scan the problem search space more accurately and thus provide a more appropriate quasi-optimal solution that is closer to the global optimal solution.
In this paper, a new modification method called the Dehghani method (DM) is proposed to improve the performance of optimization algorithms. DM is designed based on the use of the information of the algorithm's population members. In the proposed DM, the information of each population member can improve the status of the new generation. The main idea of DM is to strengthen the best population member of an optimization algorithm using the information of the population members. The proposed method is fully described in the next section.
The remainder of this article is organized as follows. Section 2 explains and models the DM in full. Section 3 explains how to implement the proposed method on several algorithms. The simulation of the proposed method for solving optimization problems is presented in Section 4. Finally, conclusions and several suggestions for future studies are presented in Section 5.

Dehghani Method (DM)
In this section, the DM is first explained and then its mathematical modeling is presented. DM shows that all population members of an optimization algorithm, even the worst one, can contribute to the development of the algorithm's population.
Each population-based optimization algorithm has a matrix called the population matrix, in which each row represents a population member. Each member of the population is actually a vector that represents the values of the problem variables. Given that each member of the population is a random vector in the problem search space, it is a suggested solution (SS) to the problem. After the formation of the population matrix, the values proposed by each population member for the problem variables are evaluated in the objective function (OF). The population matrix and the values of the objective functions are defined in Equation (1).
where SS is the suggested solutions matrix, X is the population matrix, SS_i is the i'th suggested solution, X_i is the i'th population member, x_i^d is the value of the d'th variable of the optimization problem suggested by the i'th population member, N is the number of population members or suggested solutions, m is the number of variables, and OF_i is the value of the objective function for the i'th suggested solution.
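Based on these definitions, the population matrix and the corresponding vector of objective function values in Equation (1) can be written in the following expanded form (the exact layout is assumed from the definitions above):

$$
SS = X =
\begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}
=
\begin{bmatrix}
x_1^1 & \cdots & x_1^d & \cdots & x_1^m \\
\vdots &        & \vdots &        & \vdots \\
x_i^1 & \cdots & x_i^d & \cdots & x_i^m \\
\vdots &        & \vdots &        & \vdots \\
x_N^1 & \cdots & x_N^d & \cdots & x_N^m
\end{bmatrix}_{N \times m},
\qquad
OF =
\begin{bmatrix} OF_1 \\ \vdots \\ OF_i \\ \vdots \\ OF_N \end{bmatrix}_{N \times 1}.
$$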
Different values of the objective function are obtained based on the values proposed for the variables by the population members. The member that offers the best suggested solution (BSS) to the optimization problem plays an important role in improving the algorithm population. The row number of this member in the population matrix is determined using Equation (2):

$$
best =
\begin{cases}
\text{row number of } X \text{ with } \min(OF), & \text{for minimization problems,} \\
\text{row number of } X \text{ with } \max(OF), & \text{for maximization problems,}
\end{cases}
\tag{2}
$$

where $best$ is the row number of the member with the BSS. The BSS and its OF are specified by Equations (3) and (4):

$$
X_{best} = X_{best,\,1:m}: \text{ the variable values of the member with the best objective function,}
\tag{3}
$$

$$
F_{best} = OF_{best},
\tag{4}
$$

where $X_{best}$ is the BSS or best population member and $F_{best}$ is the value of the OF for the BSS. As mentioned, the best member of the population plays an important role in improving the population of the algorithm and thus the performance of the optimization algorithm. Optimization algorithms update the status of their population members according to their own processes in order to achieve a quasi-optimal solution. Accordingly, by improving the best member of the population, it is expected that the population is updated more effectively and the performance of the algorithm in solving the optimization problem is improved.
DM is designed with the idea of modifying the best population member with the aim of improving the performance of an optimization algorithm.
In DM, just as the best member is influential in updating the population members, the population members, even the worst one, can influence the modification of the best member. The measurement criterion for suggested solutions is the value of the objective function. However, a suggested solution that is not the best solution may still provide appropriate values for some problem variables. The proposed DM modifies the best member by considering this issue and using the values suggested by the other population members. This concept is mathematically simulated in Equations (5) and (6).
where $X^{DM}$ is the best member modified by DM, $X^{DM}_{i,d}$ is the best member modified based on the value suggested for the d'th variable by the i'th SS, $X^{new}_{best}$ is the new status of the best member based on DM, and $OF(X^{DM}_{i,d})$ is the objective function value of the best member modified by DM. The pseudocode of DM is presented in Algorithm 1. In addition, the different stages of the proposed method, which aim at improving the best member, are shown as a flowchart in Figure 1.

Algorithm 1. Pseudocode of DM.
1. For i = 1:N_population (N_population: number of population members)
2.   For d = 1:m (m: number of variables)
     ...
     End for d
10. End for i
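To make the DM update concrete, the following Python sketch implements the modification of the best member as described above, assuming a minimization problem; the function name `dm_modify_best` and its arguments are illustrative and not part of the original paper.

```python
import numpy as np

def dm_modify_best(X, OF, objective):
    """Dehghani method (DM) step: try to improve the best population member
    using the variable values suggested by every other member.
    A minimal sketch for a minimization problem; names are illustrative."""
    N, m = X.shape
    best = int(np.argmin(OF))      # row of the best suggested solution (Equation (2))
    X_best = X[best].copy()        # best suggested solution
    F_best = float(OF[best])       # its objective function value

    for i in range(N):             # every member, even the worst, may contribute
        for d in range(m):         # every decision variable
            X_dm = X_best.copy()
            X_dm[d] = X[i, d]      # borrow the d'th value suggested by member i
            F_dm = objective(X_dm)
            if F_dm < F_best:      # keep the change only if the best member improves
                X_best, F_best = X_dm, F_dm
    return X_best, F_best
```

Each candidate keeps all other variables of the current best member and replaces a single variable, so the modification can only preserve or improve the objective function value of the best member.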

DM Implementation on Optimization Algorithms
This section describes how to implement the proposed DM on optimization algorithms. The proposed DM is applicable for modifying any population-based optimization algorithm. Although the design ideas behind optimization algorithms differ, their general procedure is the same. These algorithms provide a quasi-optimal solution by starting from a random initial population and following an iterative process in which the population is updated in each iteration.
The pseudocode of the implementation of DM for modifying optimization algorithms is presented in Algorithm 2. The steps of the modified version of the optimization algorithms using DM are shown in Figure 2.

Algorithm 2. Pseudocode of the DM-modified optimization algorithms.
Set parameters.
...
4. Create the other matrices of the algorithm (if there are any).
...
Calculate OF.
...
Update X_new_best based on DM.
10. Continue the processes of the optimization algorithm.
...
End for t
...
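As an illustration of where the DM phase fits into such an iterative loop, the sketch below wraps a deliberately simple population update (a random step towards the best member, which is not one of the algorithms studied in the paper) around the `dm_modify_best` function from the previous sketch; all names and parameter values are illustrative.

```python
import numpy as np

def optimize_with_dm(objective, m, N=30, iters=1000, lb=-100.0, ub=100.0, seed=0):
    """Generic population-based loop showing where the DM update is applied.
    The population update itself is a placeholder (a random move towards the
    best member), not one of the algorithms from the paper."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(N, m))          # random initial population
    OF = np.apply_along_axis(objective, 1, X)     # objective value of each member

    for t in range(iters):
        # DM phase: strengthen the best member using information from all members
        idx = int(np.argmin(OF))
        X[idx], OF[idx] = dm_modify_best(X, OF, objective)

        # Original algorithm phase (placeholder update towards the best member)
        X_new = np.clip(X + rng.uniform(0, 1, X.shape) * (X[idx] - X), lb, ub)
        OF_new = np.apply_along_axis(objective, 1, X_new)
        improved = OF_new < OF                    # greedy acceptance of better members
        X[improved], OF[improved] = X_new[improved], OF_new[improved]

    idx = int(np.argmin(OF))
    return X[idx], OF[idx]

# Example: minimize a simple sphere objective in five dimensions
x_star, f_star = optimize_with_dm(lambda x: float(np.sum(x ** 2)), m=5, iters=200)
```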

Simulation and Discussion
In this section, the performance of the proposed DM in improving optimization algorithms is evaluated. For this purpose, the present work and the optimization algorithms described in [18,29,32,50,53] are developed on the same computational platform: Matlab R2014a (8.3.0.532) in the environment of Microsoft Windows 10 (64 bit) on a Core i7 processor with 2.40 GHz and 6 GB of memory. To generate and report the results, each optimization algorithm is run 20 independent times on each objective function, where each run employs 1000 iterations.

Algorithms Used for Comparisons and Benchmark Test Functions
To evaluate the performance of the proposed DM, the following methodology is applied: (1) Select five well-known optimization algorithms from the literature: genetic algorithm (GA) [50], particle swarm optimization (PSO) [18], gravitational search algorithm (GSA) [53], teaching-learning-based optimization (TLBO) [29], and grey wolf optimizer (GWO) [32]. (2) Modify the optimization algorithms by implementing the proposed DM.
(3) Define a set of twenty-three objective functions and divide it into three main categories: unimodal [53,54], multimodal [31,54], and fixed-dimension multimodal [54] functions (see Appendix A). (4) Implement the present work and the optimization algorithms on the same computational platform. (5) Compare the performance of the modified and the original optimization algorithms using the following metrics: the average and the standard deviation of the best solution obtained up to the last iteration.
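The reported statistics could be computed along the following lines; this is a hypothetical helper that assumes an optimizer with the signature of the `optimize_with_dm` sketch above and is not part of the original experimental code.

```python
import numpy as np

def evaluate(optimizer, objective, m, runs=20, iters=1000):
    """Run an optimizer several independent times and report the mean and the
    standard deviation of the best objective value found in each run."""
    bests = [optimizer(objective, m, iters=iters, seed=seed)[1] for seed in range(runs)]
    return float(np.mean(bests)), float(np.std(bests))

# Example: evaluate the DM-modified optimizer on a sphere-like objective
mean_best, std_best = evaluate(optimize_with_dm, lambda x: float(np.sum(x ** 2)), m=30)
```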

Results
Optimization algorithms in their original version and in the modified version using the proposed DM are implemented on the objective functions. The simulation results are presented in Tables 2-6 for three different categories: unimodal, multimodal, and fixed-dimension multimodal functions. The first category consists of seven objective functions, F1 to F7, the second category consists of six objective functions, F8 to F13, and the third category consists of ten objective functions, F14 to F23.
To further analyze the simulation results, the convergence curves of the optimization algorithms for the twenty-three objective functions are shown in Figures 3-7. In these figures, the convergence curves for the original and the modified versions are plotted together.
The computational time analysis for reaching the optimal solution is presented in Tables 7-9. This analysis reports the computational time per iteration, the time per complete run, and the overall time required for the original and modified algorithms to achieve a similar objective function value. In these tables, P.I. means per iteration, P.C.R. means per complete run, and O.T.S. means the overall time required for the original and modified algorithms to achieve a similar objective function value.

Discussion
Exploitation and exploration abilities are two important indicators in evaluating optimization algorithms. The exploitation ability of an optimization algorithm means its power to provide a quasi-optimal solution. An algorithm that offers a better quasi-optimal solution than another algorithm has a higher exploitation ability. The unimodal objective functions F1 to F7, which have only one global optimal solution and no local solutions, are applied to analyze the exploitation ability of optimization algorithms. The results presented in Tables 2-6 show that the proposed DM is able to increase the exploitation ability of the optimization algorithms and, as a result, more suitable quasi-optimal solutions are provided by the modified versions.
The exploration ability means the power of an optimization algorithm to scan the search space of an optimization problem. Given that the basis of optimization algorithms is a random scan of the search space, an algorithm that scans the search space more accurately is able to move towards a quasi-optimal solution by escaping from local optimal solutions. The second and third categories of objective functions, F8 to F23, have multiple local solutions besides the global optimum and are therefore useful for analyzing the local optima avoidance and the explorative ability of an algorithm. Tables 2-6 show that the DM-modified versions of the optimization algorithms have a higher exploration ability than the original versions.
The convergence curves shown in Figures 3-7 visually show the effect of the proposed DM on the optimization algorithms. In these figures, it is clear that the modified versions converge faster towards the quasi-optimal solution.
The simulation results of the optimization algorithms on the optimization problems show that the DM-modified versions of the optimization algorithms are much more competitive than their original versions. Therefore, the proposed method has the ability to be implemented on a variety of optimization algorithms in order to solve various optimization problems.
The results of the computational time analysis for both the original and the DM-modified versions are presented in Tables 7-9. In these tables, three different time criteria are presented: the average time per iteration (P.I.), the average time per complete run (P.C.R.), and the overall time required for the original and modified algorithms to achieve a similar objective function value (O.T.S.). Due to the addition of a modification phase based on the proposed DM, P.I. and P.C.R. are increased compared to the original versions. Table 7 shows that, except for four cases (TLBO: F3; GWO: F3, F5, and F7), for all unimodal objective functions the modified versions reach the final solution of the original versions in less time. Tables 8 and 9 show that the modified versions of the studied algorithms reach the final solution of the original versions in less time for all of the F8 to F23 objective functions.

Conclusions
There are various optimization problems in different sciences that should be optimized using an appropriate method. Optimization algorithms are one such method for solving these problems; they provide a quasi-optimal solution by randomly scanning the search space. Many optimization algorithms have been proposed by researchers and applied by scientists to solve optimization problems. The performance of optimization algorithms in achieving quasi-optimal solutions can be improved by modifying them. In this paper, a new modification method for optimization algorithms, called the Dehghani method (DM), has been presented. The main idea of the proposed DM is to improve and strengthen the best member of the population using the information of the population members. In DM, all members of a population, even the worst one, can contribute to the development of the population. The various stages of DM have been described and then modeled mathematically. DM has been implemented on five different optimization algorithms: GA, PSO, GSA, TLBO, and GWO. The effect of the proposed method on the performance of these optimization algorithms has been evaluated on a set of twenty-three standard objective functions. In this evaluation, the results of optimizing the objective function set have been presented for both the original and the DM-modified versions of the optimization algorithms. The results of the simulation and implementation of DM on the mentioned optimization algorithms show that the proposed method improves their performance. The optimization of the objective functions in the three groups of unimodal, multimodal, and fixed-dimension multimodal functions indicates that the modified versions with the proposed method are much more competitive than the original versions. Moreover, the convergence curves visually show that the modified versions converge faster towards the quasi-optimal solution.
The authors suggest several ideas and proposals for future studies arising from this work. The main potential of these ideas lies in modifying various other optimization algorithms using DM. DM may also be used for many-objective and multi-objective real-life optimization problems.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
Table A1. Unimodal objective functions.