Differential Evolution (DE) is one of the most widely used search techniques for solving global optimization problems. However, it is weak at performing a localized search, since its mutation strategies take large steps even when searching a local area. Thus, DE is not a good option for solving local optimization problems. On the other hand, traditional local search (LS) methods, such as Steepest Descent and Davidon–Fletcher–Powell (DFP), are good at local searching but poor at exploring global regions. Hence, motivated by the shortcomings of existing search techniques, we propose a hybrid algorithm that combines a DE variant, reflected adaptive differential evolution with two external archives (RJADE/TA), with DFP to benefit from both search techniques and to alleviate their respective disadvantages. In the hybrid design, the initial population is explored by the global optimizer, RJADE/TA, and then a few comparatively best solutions are shifted to the archive and refined there by DFP. Thus, global and local search are applied alternately. Furthermore, a population minimization approach is also proposed: at each call of DFP, the population is decreased, so the algorithm starts with a maximum population and ends with a minimum one. The proposed technique was tested on a suite of 28 complex benchmark functions selected from the literature. The results demonstrate that DE complemented with LS can further enhance the performance of RJADE/TA.
Nonlinear unconstrained optimization is an active research area, since many real-life problems can be modeled as continuous nonlinear optimization problems. To deal with this kind of problem, various nature-inspired, population-based search mechanisms have been developed. A few of those are Differential Evolution (DE) [3,4], Evolution Strategies (ES) [2,5], Particle Swarm Optimization (PSO) [6,7,8,9], Ant Colony Optimization (ACO) [10,11,12,13], Bacterial Foraging Optimization (BFO) [14,15], Genetic Algorithm (GA) [16,17,18], Genetic Programming (GP) [2,19,20,21], Cuckoo Search (CS) [22,23], Estimation of Distribution Algorithm (EDA) [24,25,26,27,28] and Grey Wolf Optimization (GWO) [29,30].
DE does not need specific information about the complicated problem at hand, which is why it has been applied to a wide variety of optimization problems over the past two decades [30,32,33,34]. DE has merits over PSO, GA, ES and ACO, as it depends upon only a few control parameters, and its implementation is easy and user friendly. Due to these advantages, we selected DE to perform the global search in the suggested hybrid design. Because of this simplicity, DE has been implemented widely on practical optimization problems [35,36,37,38,39,40,41,42]. However, its convergence to known optima is not guaranteed [2,31,43], and stagnation of DE is another weakness identified in various studies.
Traditional search approaches, such as the Nelder–Mead algorithm, Steepest Descent and DFP, may be hybridized with DE to improve its search capability. Algorithms that implement LS within a global search to enhance solution quality are called Memetic Algorithms (MAs) [31,45]. Some recent MAs can be found in [1,31]. Very recently, Broyden–Fletcher–Goldfarb–Shanno (BFGS) LS was merged with an adaptive DE version, JADE, which produced the MA Hybridization of Adaptive Differential Evolution with an Expensive Local Search Method. In the majority of the established designs, LS is applied to the overall best solutions, while in our design it is applied to the migrated elements of the archive. In addition, the population is adaptively decreased.
In this work, we propose a hybrid algorithm that combines DFP [44,48,49] with a recently developed algorithm, RJADE/TA, to enhance RJADE/TA's performance in local regions. The main idea is to apply DFP to the elements that are shifted to the archive and to record both the previously migrated solution and the new potential solution, so as to reduce the chance of losing the globally best solution. For this purpose, firstly, DFP is applied to the archived solutions. Secondly, a decreasing population mechanism is suggested. The new algorithm is denoted by RJADE/TA-ADP-LS.
The structure of this work is as follows. Section 2 presents the primary DE, DFP, and RJADE/TA methods. Section 3 reviews related work. In Section 4, the suggested hybrid algorithm is outlined. Section 5 is devoted to the validation of the results achieved by RJADE/TA-ADP-LS. Finally, the conclusions are summarized in Section 6.
2. Primary DE, DFP, and RJADE/TA
We reviewed in detail traditional DE and JADE in our previous works [47,50]. Here, we briefly review primary DE, DFP and RJADE/TA for ready reference.
2.1. Primary DE
DE [3,4] starts with a random population in the given search region. After initialization, a mutation strategy is applied: three different individuals are randomly selected from the population, and the scaled difference of two of them is added to the third to produce a mutant vector. Following mutation, the mutant and the target vectors are combined through a crossover operator to produce a trial vector. Finally, the target and trial vectors are compared based on a fitness function, and the better one is selected for the next generation (see Lines 7–20 of Algorithm 1).
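The generation loop described above can be sketched as follows. This is a minimal DE/rand/1/bin illustration; the function, bounds, and parameter values are illustrative and not the paper's implementation:

```python
import random

def de_minimize(f, bounds, np_=20, F=0.5, CR=0.9, gens=100, seed=1):
    """Classic DE/rand/1/bin sketch: mutation, binomial crossover, selection."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # pick three distinct individuals, all different from the target i
            r1, r2, r3 = random.sample([j for j in range(np_) if j != i], 3)
            # mutation: scaled difference of two individuals added to a third
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # binomial crossover; index jrand guarantees one mutant component
            jrand = random.randrange(dim)
            u = [v[j] if (random.random() < CR or j == jrand) else pop[i][j]
                 for j in range(dim)]
            # greedy selection between target and trial vectors
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

# usage: minimize the 2-D sphere function on [-5, 5]^2
best, fbest = de_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```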
Algorithm 1 Outlines of RJADE/TA Procedure.
To form the primary population produce vectors uniformly and randomly, ;
Initialize ; ; ;
Randomly sample in pop;
Choose in ;
Choose in do random selection;
Produce the mutant vector as ;
Produce the trial vector as follows.
for to n do
if or then
Best selection ;
if is the best then
, , ;
If size of , delete extra solutions from randomly;
Update as follows.
Centroid calculation ;
Reflection mechanism ;
Result: The best solution corresponding to minimum function value from in the optimization.
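As a hedged sketch of the centroid calculation and reflection mechanism listed above (assuming the standard reflection of the best point through the population centroid; the exact formula is given in the original RJADE/TA paper):

```python
def reflect_best(pop, best_idx):
    """Reflect the best individual through the population centroid.
    Assumed form x_r = 2c - x_best; illustrative, not the paper's exact rule."""
    dim = len(pop[0])
    # centroid of the whole population, coordinate-wise mean
    centroid = [sum(x[j] for x in pop) / len(pop) for j in range(dim)]
    best = pop[best_idx]
    return [2 * centroid[j] - best[j] for j in range(dim)]
```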
2.2. Reflected Adaptive Differential Evolution with Two External Archives (RJADE/TA)
RJADE/TA is an adaptive DE variant. Its main idea is to archive the comparatively best solutions of the population at regular intervals of the optimization process and to reflect the overall poor solutions. RJADE/TA inserts the following techniques into JADE; the techniques are presented in Table 1.
To prevent premature convergence and stagnation, the best solution is replaced by its reflection in RJADE/TA and is then shifted to the second archive.
The reflected solution replaces the best solution in the population, and the overall best candidate itself is migrated to the second archive. RJADE/TA maintains two archives. After half of the available resources are utilized, the first update of the second archive is made. Afterwards, it is updated adaptively with a continuing intermission of generations (see Algorithm 1).
The overall best candidates are transferred to the second archive, whereas the first archive records the recently explored poor solutions. The size of the first archive is fixed, equal to the population size, while the size of the second archive may exceed it. As the second archive keeps information on all best solutions found, no solution is deleted from it. It records only one solution per update, which may be a child or a parent, whereas the first archive keeps a history of more than one inferior “parent solution” only. The first archive is updated at every iteration, and the second archive, initialized as ∅, is updated adaptively with a gap of generations. The recorded history of the first archive is utilized in reproduction later on. In contrast, in the second archive, the recorded best individual is reflected, and the reflected solution is sent to the population. Once a candidate solution is posted to the second archive, it remains passive during the whole optimization. When the search terminates, the recorded information contributes towards the selection of the best candidate solution.
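The bookkeeping described in this subsection can be sketched as follows. The class and attribute names are illustrative assumptions; the first archive is capped at the population size with random removal of extras (as in Algorithm 1), while the second accumulates migrated best solutions and is never pruned:

```python
import random

class TwoArchives:
    """Illustrative sketch of RJADE/TA's two-archive bookkeeping."""
    def __init__(self, cap):
        self.inferior = []   # first archive: recent inferior parents, capped
        self.best = []       # second archive: migrated best solutions, passive
        self.cap = cap       # cap equals the population size

    def record_inferior(self, x):
        self.inferior.append(x)
        if len(self.inferior) > self.cap:
            # delete a random extra solution, as the algorithm prescribes
            self.inferior.pop(random.randrange(len(self.inferior)))

    def migrate_best(self, x):
        # solutions sent here stay for the whole optimization run
        self.best.append(x)
```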
2.3. Davidon–Fletcher–Powell (DFP) Method
The DFP method is a variable metric method, first proposed by Davidon and then modified by Fletcher and Powell. It belongs to the class of gradient-based LS methods. If a suitable line search is used in the DFP method, convergence to a minimizer is assured. It calculates the difference between the old and new points, as given in Equation (1), and then the difference of the gradients at these points, as calculated in Equation (2).
It then updates the inverse Hessian approximation as presented in Equation (3). Afterwards, it determines the search direction with the help of this matrix, as calculated in Equation (4). Finally, the output solution is computed by Equation (5), where the step length is calculated by a line search method; the golden section search method is used in this work.
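Equations (1)–(5) did not survive extraction; the standard DFP formulas matching the prose above (step difference, gradient difference, inverse Hessian update, search direction, and iterate update) are, in their usual form:

```latex
\begin{align}
s_k &= x_{k+1} - x_k \tag{1}\\
y_k &= \nabla f(x_{k+1}) - \nabla f(x_k) \tag{2}\\
H_{k+1} &= H_k + \frac{s_k s_k^{\top}}{s_k^{\top} y_k}
         - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k} \tag{3}\\
d_k &= -H_k \nabla f(x_k) \tag{4}\\
x_{k+1} &= x_k + \alpha_k d_k \tag{5}
\end{align}
```

Here $H_k$ is the inverse Hessian approximation and $\alpha_k$ is the step length obtained from the line search; this is the textbook DFP update, given here as a reconstruction of the lost equations rather than a verbatim copy of the paper's.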
3. Related Work
To fix the above-mentioned weaknesses of DE, many researchers have merged various LS techniques into DE. Nelder–Mead LS was hybridized with DE to improve DE's local exploitation. Recently, two new LS strategies were proposed and hybridized iteratively with DE in [1,31]; these hybrid designs show performance improvement over the algorithms in comparison. Two LS strategies, Trigonometric and Interpolated, were inserted into DE to enhance its poor exploration. Two other LS techniques were merged into DE along with a restart strategy to improve its global exploration; this algorithm is statistically sound, as the obtained results are better than those of the other algorithms. Furthermore, alopex-based LS was merged into DE to improve its population diversity. In another experiment, DE's slow convergence was improved by combining orthogonal design LS with it. To avert local optima in DE, random LS was hybridized with it. On the other hand, some researchers borrowed DE's mutation and crossover for traditional LS methods (see, e.g., [58,59]).
To the best of our knowledge, none of the algorithms reviewed in this section integrates DFP into DE's framework. Further, the work proposed here maintains two archives: the first one stores inferior solutions, and the second one keeps information on the best solutions migrated to it by the global search. Furthermore, the second archive improves solution quality further by applying DFP there. Hence, our proposed work has the advantage that the second archive keeps complete information on each solution before and after LS; this way, no good solution found is lost. It also adopts a population decreasing mechanism.
4. Developed Algorithm
As discussed in the literature review, LS techniques, due to their demerits, should not be used alone to solve optimization problems. Global evolutionary techniques have high global search ability, but they can get stuck in local regions and cannot fine-tune the solution at hand. Thus, motivated by the above issues of global/local techniques, we hybridize the global optimizer RJADE/TA with DFP to enhance convergence in both regions. The new design is named RJADE/TA-ADP-LS. We specifically handle unconstrained, nonlinear, continuous, single-objective optimization problems in the current work.
The initial population is evolved globally by RJADE/TA until % of the function evaluations; that is, after RJADE/TA's iterative mutation, crossover and selection process, as shown in Algorithm 1, the population is sorted and the current best solution is migrated to the archive. This best solution may be a parent or a child solution. DFP is applied to the shifted elements for w iterations. After the application of DFP, a new improved solution is produced from the old migrant. Then, both the previously explored best solution and this new solution are posted to the archive. Unlike our previously proposed archive in RJADE/TA, which keeps the record of best solutions only and where no LS is applied, the archive in this method maintains information on both solutions, i.e., the migrated best solution and its improved version, if any, after the application of DFP.
The archive is updated at regular intervals of generations (20 here). The migrated solutions and those explored by DFP remain there during the entire evolution process. When the evolution process completes, the overall best candidate is selected from the archive. The novelty of RJADE/TA-ADP-LS is that it applies DFP to the archived solutions only, unlike all hybrid designs reviewed in Section 3.
In the proposed hybrid mechanism, we apply DFP to the migrated best solution to obtain its improved form, but without reflection, as displayed in the flowchart given in Figure 1, unlike in our recently proposed work. Moreover, in this model, we propose an adaptively decreasing population (ADP) mechanism, different from the fixed population approach of Khanum et al. We refer to this new hybrid as RJADE/TA-ADP-LS throughout this work. The idea of RJADE/TA-ADP-LS is novel in proposing the ADP approach, because, in the literature, the majority of evolutionary algorithms (as reviewed in Section 3) maintain a fixed population throughout the search process.
In this design, when the first update of the archive is made after half of the available resources are spent, DFP is applied to the archive members. The implementations of DFP and ADP are shown in Algorithm 2. Both the previously located best solution and the one exploited by DFP are propagated to the archive. No reflection is made here, to compensate for the decreasing population. The ADP approach (Algorithm 2, Lines 6–8) is implemented as follows:
Every time the archive is updated, the migrated element is removed from the current population (see Equation (6)), and the population is decreased by one. Thus, after each break of κ generations, r (= the number of times the κ breaks occur) solutions are removed from the population, and the population size is updated accordingly, as demonstrated on Line 11 of Algorithm 2. Furthermore, the function values are updated as well (see Equations (7) and (8)). In the ADP approach, the algorithm begins with a maximum population and terminates with a minimum population.
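The removal step above can be sketched as follows; the function name and tuple layout are illustrative, not taken from the paper:

```python
def adp_update(pop, fvals, archive, best_idx):
    """One ADP step (sketch): move the current best solution to the archive
    and shrink the population and its fitness list by one entry."""
    archive.append((pop[best_idx], fvals[best_idx]))
    # remove the migrated element; population size decreases by one
    pop = pop[:best_idx] + pop[best_idx + 1:]
    fvals = fvals[:best_idx] + fvals[best_idx + 1:]
    return pop, fvals, archive
```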
Algorithm 2 RJADE/TA-ADP-LS.
Update as follows.
Apply DFP to ;
Terminate the iteration;
Repeat the process r number of times and update .
5. Validation of Results
In this section, first we briefly illustrate the five algorithms used for comparison and then the experimental results are presented.
5.1. Global Search Algorithms in Comparison
Among the five algorithms for comparison, the first two, RJADE/TA and RJADE/TA-LS, are our recently proposed hybrid algorithms, while the remaining three, jDE, jDEsoo and jDErpo, are non-hybrid, but adaptive and popular DE variants.
RJADE/TA, similar to RJADE/TA-ADP-LS, utilizes two archives for information. One of the archives stores inferior solutions, while the other keeps a record of superior solutions. However, in RJADE/TA-ADP-LS, the second archive stores elite solutions, which are then improved by DFP. Further details of RJADE/TA can be found in Section 2.2.
RJADE/TA-LS is a very recently proposed hybrid of global and local search. However, it is different from RJADE/TA-ADP-LS in the sense that it utilizes a reflection mechanism and a fixed population, while RJADE/TA-ADP-LS uses DFP as LS without reflection and with a population decreasing approach.
jDE is an adaptive version of DE based on self-adaptation of the control parameters F and CR. In jDE, the parameters F and CR keep changing during the evolution process, while the population size is kept unchanged. Every solution in jDE has its own F and CR values. Better individuals are produced due to better values of F and CR, and such parameter values propagate to upcoming generations. Because of its unique mechanism and simplicity, jDE has gained popularity among researchers in the field of optimization, and since its introduction it has been widely used as a baseline for comparison.
5.1.4. jDEsoo and jDErpo
jDEsoo is a version of DE that deals with single-objective optimization. It subdivides the population and implements more than one DE strategy. To enhance population diversity, it removes those individuals from the population that have remained unchanged over the last few generations. It was primarily developed for the CEC 2013 competition.
jDErpo is an improvement of jDE, based on the following mechanisms. Firstly, it incorporates two mutation strategies, different from those of jDE, DE and RJADE/TA. Secondly, it uses an adaptively increasing strategy for adjusting the lower bounds of the control parameters. Thirdly, it utilizes two pairs of control parameters for the two mutation strategies, in contrast to the one pair used in jDE, classic DE and RJADE/TA. jDErpo was also specially designed for solving CEC 2013 competition problems.
5.2. Parameter Settings/Termination Criteria
Experiments were performed on the 28 benchmark test problems of CEC 2013, referred to as BMF1–BMF28. The parameter settings were kept the same as specified there. The dimension n of each problem was set to 10 and the population size to 100. The number of elite solutions r was kept as 1. The number of iterations w of DFP was set to 2. The reduction of population per archive update was also chosen as 1. The gap between successive archive updates was kept as 20 generations. The optimization was terminated either when the maximum number of function evaluations was reached or when the difference between the means of the function error values became sufficiently small, as suggested in [50,63].
5.3. Comparison of RJADE/TA-ADP-LS against Established Global Optimizers
The mean function error values (the difference between the known and approximated optima) for jDE, jDEsoo, jDErpo, RJADE/TA and RJADE/TA-ADP-LS are presented in Table 2. In Table 2, + indicates that an algorithm won against our algorithm, RJADE/TA-ADP-LS; − indicates that it lost; and = indicates that both algorithms obtained the same statistics. The comparison shows the outstanding performance of RJADE/TA-ADP-LS against all competitors. RJADE/TA-ADP-LS achieved better mean error values than jDE and jDEsoo on 17 out of 28 problems; the many − signs in columns 2 and 3 of Table 2 support this fact. In contrast, jDE and jDEsoo performed better on six and eight problems, respectively.
RJADE/TA-ADP-LS showed performance improvement against jDErpo and RJADE/TA algorithms as well. In general, RJADE/TA-ADP-LS performed better than all algorithms in comparison, especially in the category of multimodal and composite functions. The proposed mechanism is not only based on LS for local tuning with no reflection, but it also implements an ADP approach, which could be the reasons for its good performance.
5.4. Performance Evaluation of RJADE/TA-ADP-LS Versus RJADE/TA-LS
We empirically studied the performance of RJADE/TA-ADP-LS against RJADE/TA-LS. Table 3 presents the mean results achieved by both methods over 51 runs, with the best results shown in bold face. The results in Table 3 show that the proposed RJADE/TA-ADP-LS performed better than RJADE/TA-LS on 13 out of 28 problems, while on five problems they obtained the same results. RJADE/TA-LS performed better on 10 test problems.
It is interesting to note that RJADE/TA-ADP-LS showed outstanding performance in the category of composite functions, where it solved BMF22–BMF28 better than RJADE/TA-LS. Again, the two distinctive mechanisms of RJADE/TA-ADP-LS, the ADP approach and LS without reflection, could be the reasons for its better performance. Further, Table 4 presents the percentage performance of RJADE/TA-ADP-LS and RJADE/TA-LS. Since both algorithms showed equal results on five test problems, we compared the percentages for the remaining 23 problems. As shown in Table 4, RJADE/TA-ADP-LS solved a larger percentage of the 23 test instances than RJADE/TA-LS.
Furthermore, box plots were drawn from the means obtained in 25 runs of RJADE/TA, RJADE/TA-LS and RJADE/TA-ADP-LS. Figures 2 and 3 plot one function out of every three. Box plots are good tools for showing the spread of the data. Figure 2b–d shows that the boxes obtained by RJADE/TA-ADP-LS were lower than the other two boxes, indicating its better performance. Figure 2a presents the plot of BMF3, on which the two boxes in comparison were lower than that of RJADE/TA-ADP-LS, so the other algorithms were better there.
Figure 3b,d,f shows that the boxes obtained by RJADE/TA-ADP-LS on BMF19, BMF25 and BMF27 were lower than the boxes of RJADE/TA and RJADE/TA-LS, indicating higher performance of RJADE/TA-ADP-LS. Figure 3a,c,e shows that the two other algorithms were better on the respective test instances.
5.5. Analysis/Discussion of Various Parameters Used
The number of solutions r to be migrated to the archive and undergo DFP was kept as 1, since DFP is an expensive method due to gradient calculation, and applying it to more than one solution might slow down the algorithm. Users may take two solutions, but at most three is suggested. The number of iterations w of DFP applied to the archive elements was kept as 2; DFP could fine-tune the solutions in only two iterations. Moreover, the number of solutions removed from the population per archive update was also chosen as 1. Since the archive was updated after a regular gap of global evolution, the population was decreased by one each time. If we reduced it by more than one solution, a stage would come where the diversity of the population would be too low and the algorithm would either stop at local optima or converge prematurely. We suggest that the decreasing number be at most 3. In general, these parameters are user defined, but they should be chosen wisely so that the global and local searches complement each other, instead of causing premature convergence or stagnation.
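To illustrate why w = 2 iterations of DFP can already fine-tune a solution, the following sketch runs two DFP iterations on a 2-D quadratic, where DFP with an exact line search reaches the minimizer exactly. The exact step is used here because it is closed-form for a quadratic; the paper itself uses golden-section search, and the matrix and vector values are illustrative:

```python
import numpy as np

def dfp_quadratic(A, b, x0, iters=2):
    """DFP with an exact line search on f(x) = 0.5 x^T A x - b^T x."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                    # inverse Hessian approximation
    g = A @ x - b                         # gradient of the quadratic
    for _ in range(iters):
        d = -H @ g                        # search direction from H
        alpha = -(g @ d) / (d @ A @ d)    # exact minimizer along d
        s = alpha * d                     # step difference
        x = x + s
        g_new = A @ x - b
        y = g_new - g                     # gradient difference
        # DFP rank-two update of the inverse Hessian approximation
        Hy = H @ y
        H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        g = g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = dfp_quadratic(A, b, [0.0, 0.0])  # exact minimizer is [0.2, 0.4]
```

For this 2-D problem, the two iterations land exactly on the minimizer, which is consistent with the choice of a small w for local refinement.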
6. Conclusions
This paper proposed a new hybrid algorithm, RJADE/TA-ADP-LS, in which an LS mechanism, DFP, is combined with a DE-based global search scheme, RJADE/TA, to benefit from their searching capabilities in local and global regions. Further, a population decreasing mechanism is also adopted. The key idea is to shift the overall best solution to the archive at specified regular intervals of RJADE/TA, where it undergoes DFP for further improvement. The archive stores both the best solution and its improved form. Furthermore, the population is decreased by one solution at each archive update. We evaluated and compared our hybrid method with five established algorithms on the CEC 2013 test suite. The results demonstrated that our new algorithm is better than the competing algorithms on the majority of the tested problems; in particular, it showed superior performance on the hard multimodal and composite problems of CEC 2013. In the future, the present work will be extended to constrained optimization. As a second task, other gradient-free LS methods, global optimizers and archiving strategies will be tried, to design more efficient algorithms for global optimization.
Conceptualization, R.A.K. and M.A.J.; methodology, R.A.K., M.A.J., and W.K.M.; software, R.A.K., N.T. and H.S.; validation, H.U.K., M.S. and H.S.; formal analysis, R.A.K., M.A.J., and W.K.M.; investigation, R.A.K., M.A.J. and M.S.; resources, N.T. and H.S.; writing—original draft preparation, R.A.K., M.A.J.; writing—review and editing, H.U.K. and M.S.; project administration, N.T.; and funding acquisition, N.T. and H.S.
The authors would like to thank King Khalid University of Saudi Arabia for supporting this research under the grant number R.G.P.2/7/38.
Conflicts of Interest
The authors declare no conflict of interest.
Price, K.V. Eliminating drift bias from the differential evolution algorithm. In Advances in Differential Evolution; Springer: Berlin, Germany, 2008; pp. 33–88. [Google Scholar]
Xiong, N.; Molina, D.; Ortiz, M.L.; Herrera, F. A walk into metaheuristics for engineering optimization: principles, methods and recent trends. Int. J. Comput. Intell. Syst. 2015, 8, 606–636. [Google Scholar] [CrossRef]
Storn, R.; Price, K.V. Differential Evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
Storn, R. Differential evolution research—Trends and open questions. In Advances in Differential Evolution; Springer: Berlin, Germany, 2008; pp. 1–31. [Google Scholar]
Engelbrecht, A.; Pampara, G. Binary Differential Evolution Strategies. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, 25–28 September 2007; pp. 1942–1947. [Google Scholar]
Kennedy, J.; Eberhart, R.C. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
Kennedy, J.; Eberhart, R. A Discrete Binary Version of the Particle Swarm Algorithm. In Proceedings of the World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, USA, 12–15 October 1997; pp. 4104–4109. [Google Scholar]
Eberhart, R.C.; Kennedy, J. A New Optimizer using Particle Swarm Theory. In Proceedings of the 6th International Symposium on Micromachine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
Eberhart, R.C.; Shi, Y. Guest Editorial: Special Issue on Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2004, 8, 201–203. [Google Scholar] [CrossRef]
Gazi, V.; Passino, K.M. Bacteria foraging optimization. In Swarm Stability and Optimization; Springer: Berlin, Germany, 2011; pp. 233–249. [Google Scholar]
Moscato, P. On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. Caltech Concurr. Comput. Prog. C3P Rep. 1989, 826. [Google Scholar]
Fan, S.K.S.; Zahara, E. A hybrid Simplex Search and Particle Swarm optimization for unconstrained optimization. Eur. J. Oper. Res. 2007, 181, 527–548. [Google Scholar] [CrossRef]
Yuen, S.Y.; Chow, C.K. A Genetic Algorithm that Adaptively Mutates and Never Revisits. IEEE Trans. Evol. Comput. 2009, 13, 454–472. [Google Scholar] [CrossRef]
Koza, J.R. Genetic Programming II, Automatic Discovery Of Reusable Subprograms; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
Koza, J.R. Genetic programming as a means for programming computers by natural selection. Stat. Comput. 1994, 4, 87–112. [Google Scholar] [CrossRef]
Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the IEEE World Congress on Nature & Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
Yang, X.S.; Deb, S. Engineering optimisation by cuckoo search. arXiv 2010, arXiv:1005.2908. [Google Scholar] [CrossRef]
Larrañaga, P.; Lozano, J.A. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation; Springer: Berlin, Germany, 2001; Volume 2. [Google Scholar]
Zhang, Q.; Sun, J.; Tsang, E.; Ford, J. Hybrid estimation of distribution algorithm for global optimization. Eng. Comput. 2004, 21, 91–107. [Google Scholar] [CrossRef]
Zhang, Q.; Muhlenbein, H. On the convergence of a class of estimation of distribution algorithms. IEEE Trans. Evol. Comput. 2004, 8, 127–136. [Google Scholar] [CrossRef]
Lozano, J.A.; Larrañaga, P.; Inza, I.; Bengoetxea, E. Towards a New Evolutionary Computation: Advances on Estimation of Distribution Algorithms; Springer: Berlin, Germany, 2006; Volume 192. [Google Scholar]
Hauschild, M.; Pelikan, M. An introduction and survey of estimation of distribution algorithms. Swarm Evol. Comput. 2011, 1, 111–128. [Google Scholar] [CrossRef]
Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
Gupta, S.; Deep, K. Hybrid Grey Wolf Optimizer with Mutation Operator. In Soft Computing for Problem Solving; Springer: Berlin, Germany, 2019; pp. 961–968. [Google Scholar]
Leon, M.; Xiong, N. Eager random search for differential evolution in continuous optimization. In Portuguese Conference on Artificial Intelligence; Springer: Berlin, Germany, 2015; pp. 286–291. [Google Scholar]
Biswas, P.P.; Suganthan, P.; Wu, G.; Amaratunga, G.A. Parameter estimation of solar cells using datasheet information with the application of an adaptive differential evolution algorithm. Renew. Energy 2019, 132, 425–438. [Google Scholar] [CrossRef]
Sacco, W.F.; Rios-Coelho, A.C. On Initial Populations of Differential Evolution for Practical Optimization Problems. In Computational Intelligence, Optimization and Inverse Problems with Applications in Engineering; Springer: Berlin, Germany, 2019; pp. 53–62. [Google Scholar]
Cui, L.; Li, G.; Zhu, Z.; Ming, Z.; Wen, Z.; Lu, N. Differential evolution algorithm with dichotomy-based parameter space compression. Soft Comput. 2018, 23, 1–18. [Google Scholar] [CrossRef]
Meng, Z.; Pan, J.S.; Zheng, W. Differential evolution utilizing a handful top superior individuals with bionic bi-population structure for the enhancement of optimization performance. Enterpr. Inf. Syst. 2018, 1–22. [Google Scholar] [CrossRef]
Fletcher, R. Practical Methods of Optimization, 2nd ed.; Wiley: Hoboken, NJ, USA, 1987; pp. 80–87. [Google Scholar]
Lozano, M.; Herrera, F.; Krasnogor, N.; Molina, D. Real-Coded Memetic Algorithms with Crossover Hill-Climbing. Evol. Comput. 2004, 12, 273–302. [Google Scholar] [CrossRef] [PubMed]
Khanum, R.A.; Tairan, N.; Jan, M.A.; Mashwani, W.K.; Salhi, A. Reflected Adaptive Differential Evolution with Two External Archives for Large-Scale Global Optimization. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 675–683. [Google Scholar]
Spedicato, E.; Luksan, L. Variable metric methods for unconstrained optimization and nonlinear least squares. J. Comput. Appl. Math. 2000, 124, 61–95. [Google Scholar]
Mamat, M.; Dauda, M.; bin Mohamed, M.; Waziri, M.; Mohamad, F.; Abdullah, H. Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations. IOP Conf. Ser. Mater. Sci. Eng. 2018, 332, 012030. [Google Scholar] [CrossRef]
Ali, M.; Pant, M.; Abraham, A. Simplex Differential Evolution. Acta Polytech. Hung. 2009, 6, 95–115. [Google Scholar]
Khanum, R.A.; Jan, M.A.; Mashwani, W.K.; Tairan, N.M.; Khan, H.U.; Shah, H. On the hybridization of global and local search methods. J. Intell. Fuzzy Syst. 2018, 35, 3451–3464. [Google Scholar] [CrossRef]
Leon, M.; Xiong, N. A New Differential Evolution Algorithm with Alopex-Based Local Search. In International Conference on Artificial Intelligence and Soft Computing; Springer: Berlin, Germany, 2016; pp. 420–431. [Google Scholar]
Dai, Z.; Zhou, A.; Zhang, G.; Jiang, S. A differential evolution with an orthogonal local search. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2329–2336. [Google Scholar]
Ortiz, M.L.; Xiong, N. Using random local search helps in avoiding local optimum in differential evolution. In Proceedings of the IASTED, Innsbruck, Austria, 17–19 February 2014; pp. 413–420. [Google Scholar]
Khanum, R.A.; Jan, M.A.; Mashwani, W.K.; Khan, H.U.; Hassan, S. RJADETA integrated with local search for continuous nonlinear optimization. Punjab Univ. J. Math. 2019, 51, 37–49. [Google Scholar]
Brest, J.; Zamuda, A.; Fister, I.; Boskovic, B. Some Improvements of the Self-Adaptive jDE Algorithm. In Proceedings of the IEEE Symposium on Differential Evolution (SDE), Orlando, FL, USA, 9–12 December 2014; pp. 1–8. [Google Scholar]
Brest, J.; Boskovic, B.; Zamuda, A.; Fister, I.; Mezura-Montes, E. Real Parameter Single Objective Optimization using self-adaptive differential evolution algorithm with more strategies. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013; pp. 377–383. [Google Scholar]