# A Hybrid Metaheuristics Parameter Tuning Approach for Scheduling through Racing and Case-Based Reasoning

## Abstract


## 1. Introduction

## 2. Scheduling Problem Definition

## 3. Literature Review and Related Work

### 3.1. Metaheuristics and Parameter Tuning

### 3.2. Learning Approaches for Parameter Tuning

#### 3.2.1. Racing

#### 3.2.2. Case-Based Reasoning

### 3.3. AutoDynAgents: A Multi-Agent Scheduling System

- The hybrid scheduling module, which uses metaheuristics and a mechanism to repair activities between resources. This mechanism repairs the operations executed by a machine, taking into account the tasks' technological constraints (i.e., the precedence relations of operations), in order to obtain good scheduling plans.
- The dynamic adaptation module that includes mechanisms for regenerating neighborhoods/populations in dynamic environments, increasing or decreasing them according to the arrival of new tasks or cancellation of existing ones.
- The coordination module, aiming to improve the solutions found by the agents through cooperation (agents act jointly in order to enhance a common goal) or negotiation (agents compete with each other in order to improve their individual solutions).

## 4. Racing + CBR Module

### 4.1. Racing Submodule

**Algorithm 1.** Racing algorithm.

    Input: listObjFunc, listMH                    ▹ list of objective functions, list of metaheuristics
    Begin
        listInstances ← ∅                         ▹ array to store instances
        listParams ← ∅                            ▹ array to store candidate parameters
        listInstances ← getInstances()            ▹ get problem instances
        for all mh ∈ listMH do                    ▹ for all metaheuristics
            listParams ← getParams(listObjFunc, mh)   ▹ get candidate parameters
            race ← createRace(listParams)         ▹ create race to evaluate candidates
            for all instance ∈ listInstances do   ▹ for all instances
                for all params ∈ listParams do    ▹ for all candidate parameters
                    run ← createRun(race, instance, params)   ▹ create run to evaluate each candidate parameters combination
                    executeRun(run)
                end for
                listParams ← removeCandidates(race, listParams)   ▹ remove worst candidates
            end for
            best ← race.getBestCandidate()        ▹ get the best surviving parameters combination
            race.calculateStoreAvg(best)          ▹ calculate and store the average values for the best candidate
        end for
    End
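The outer structure of Algorithm 1 can be sketched in Python. This is a minimal sketch, not the system's implementation: the helper functions passed as arguments (`get_instances`, `get_candidates`, `run_candidate`, `remove_candidates`) are placeholders for the corresponding components, and lower objective values are assumed to be better.

```python
def race_all(metaheuristics, get_instances, get_candidates, run_candidate,
             remove_candidates):
    """Racing loop sketch: evaluate every surviving candidate on each
    instance in turn, then let remove_candidates discard the worst ones."""
    instances = get_instances()
    best_per_mh = {}
    for mh in metaheuristics:
        candidates = get_candidates(mh)              # candidate parameter sets
        results = {id(c): [] for c in candidates}    # results per candidate
        for instance in instances:
            for cand in candidates:
                # evaluate this parameter combination on the instance
                results[id(cand)].append(run_candidate(mh, instance, cand))
            # drop statistically worse candidates after each instance
            candidates = remove_candidates(candidates, results)
        # the best surviving candidate (lowest average) wins this race
        best_per_mh[mh] = min(
            candidates, key=lambda c: sum(results[id(c)]) / len(results[id(c)]))
    return best_per_mh
```

With a trivial `remove_candidates` that keeps everyone, the race degenerates into the brute-force evaluation of Figure 2's dashed line; the savings come precisely from the statistical elimination step.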

**Algorithm 2.** removeCandidates() method.

    Input: race, listParams            ▹ current race, list of candidate parameters
    Output: listParams                 ▹ list of surviving candidate parameters
    Begin
        if size(listParams) > 1 then
            listInstances ← race.getInstances()   ▹ get race instances
            if size(listParams) > 2 then          ▹ apply the Friedman test when there are more than two candidates
                listParams ← applyFriedmanTest(listParams, listInstances)
            else                                  ▹ apply the Wilcoxon test when there are only two candidates
                listParams ← applyWilcoxonTest(listParams, listInstances)
            end if
        end if
        return listParams
    End

**Algorithm 3.** The Friedman test.

    Input: listParams, listInstances   ▹ list of candidate parameters, list of instances
    Output: listParams                 ▹ list of surviving candidates
    Begin
        ranks ← ∅                      ▹ rankings matrix for candidate parameters per instance
        sumRanks ← ∅                   ▹ array of ranking sums for candidate parameters
        inst ← size(listInstances)     ▹ number of instances
        cand ← size(listParams)        ▹ number of candidates
        if inst > 1 then
            ranks ← sortRankFriedman(listInstances, listParams)
            sumRanks ← ranksSum(ranks)
            surv ← calcNumSurv(inst, cand)        ▹ Equation (3)
            listParams ← getSurviving(surv, sumRanks)
        end if
        return listParams
    End
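The core of the Friedman step, ranking the candidates within each instance and summing the ranks, can be sketched in pure Python. The tie handling is simplified (ties get consecutive ranks rather than averaged ranks), and the survivor count of Equation (3) is abstracted into an argument `n_surv`:

```python
def friedman_rank_sums(results):
    """results[i][j] is the objective value of candidate j on instance i
    (lower is better). Returns per-candidate rank sums (rank 1 = best)."""
    n_cand = len(results[0])
    sums = [0] * n_cand
    for row in results:
        # rank candidates within this instance, best value first
        order = sorted(range(n_cand), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            sums[j] += rank
    return sums

def surviving(candidates, rank_sums, n_surv):
    """Keep the n_surv candidates with the smallest rank sums."""
    ranked = sorted(range(len(candidates)), key=lambda j: rank_sums[j])
    return [candidates[j] for j in ranked[:n_surv]]
```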

**Algorithm 4.** The Wilcoxon test.

    Input: listParams, listInstances   ▹ list of candidate parameters, list of instances
    Output: listParams                 ▹ list of surviving candidates
    Begin
        inst ← size(listInstances)     ▹ number of instances
        a ← getResults(listInstances, listParams.get(0))   ▹ results per instance for the first candidate
        b ← getResults(listInstances, listParams.get(1))   ▹ results per instance for the second candidate
        dif ← calcDifference(a, b)     ▹ differences between the first and second candidates
        absDif ← abs(dif)              ▹ absolute values of the differences
        ranks ← calcRankings(absDif)   ▹ rankings of the differences
        signs ← calcSign(dif)          ▹ signs of the differences
        w ← calcW(ranks, signs, inst)  ▹ value to validate the Wilcoxon test, calculated by Equation (4)
        if w < 0 then
            listParams.remove(1)       ▹ first candidate is better, remove the second
        else
            listParams.remove(0)       ▹ second candidate is better, remove the first
        end if
        return listParams
    End
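The statistic of Equation (4) can be computed directly from the two result vectors. A minimal sketch, with simplified handling of ties and zero differences:

```python
def wilcoxon_w(a, b):
    """Signed-rank statistic of Algorithm 4: rank the absolute differences
    (rank 1 = smallest) and sum the ranks signed by each difference.
    With lower-is-better objectives, w < 0 means candidate a is better."""
    diffs = [x - y for x, y in zip(a, b)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w = 0
    for rank, i in enumerate(order, start=1):
        if diffs[i] > 0:
            w += rank
        elif diffs[i] < 0:
            w -= rank          # zero differences contribute nothing here
    return w
```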

### 4.2. CBR Submodule

**Algorithm 5.** Case-based reasoning.

    Input: newCase
    Begin
        listCases ← retrieve(newCase)             ▹ retrieving phase
        bestCase ← reuse(listCases)               ▹ reusing phase
        solution ← getSolution(bestCase)          ▹ get the solution of the best case
        simBestCase ← getSimilarity(bestCase)     ▹ get its similarity
        revisedSolution ← revise(solution, simBestCase)     ▹ revising phase
        results ← executeCaseMAS(newCase, revisedSolution)  ▹ execute in the MAS
        retain(newCase, revisedSolution, results) ▹ retaining phase
    End
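The four phases of Algorithm 5 can be wired together as a small pipeline. In this sketch the function arguments stand in for the submodules described in the following subsections, and the case representation (a dict with `solution` and `sim` fields) is illustrative:

```python
def cbr_cycle(new_case, case_base, retrieve, reuse, revise, execute, retain):
    """One pass of the CBR cycle in Algorithm 5."""
    cases = retrieve(new_case, case_base)             # retrieving phase
    best = reuse(cases)                               # reusing phase
    revised = revise(best["solution"], best["sim"])   # revising phase
    results = execute(new_case, revised)              # execute in the MAS
    retain(new_case, revised, results)                # retaining phase
    return results
```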

#### 4.2.1. Retrieving Phase

**Algorithm 6.** Retrieving phase.

    Input: newCase
    Output: listCases                  ▹ list of retrieved cases, with similarities
    Begin
        listCases ← ∅
        minSim ← 0.70                  ▹ minimum similarity
        listPreCases ← queryCaseBase(newCase)     ▹ pre-select from the case base
        for all case ∈ listPreCases do
            sim ← calcSimilarity(case, newCase)   ▹ calculate the similarity
            if sim ≥ minSim then                  ▹ similarity not below the minimum
                listCases.add(case, sim)          ▹ add case to the list
            end if
        end for
        return listCases
    End
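A minimal sketch of the retrieving filter follows, assuming cases are attribute dictionaries; the toy matching-attributes similarity stands in for the system's actual (weighted) measure:

```python
def similarity(a, b):
    """Toy similarity: fraction of attributes with equal values
    (illustrative only; the real system uses its own weighted measure)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(new_case, case_base, min_sim=0.70):
    """Retrieving phase of Algorithm 6: keep cases at least min_sim
    similar to the new case, paired with their similarity."""
    retrieved = []
    for case in case_base:
        sim = similarity(case, new_case)
        if sim >= min_sim:
            retrieved.append((case, sim))
    return retrieved
```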

#### 4.2.2. Reusing Phase

**Algorithm 7.** Reusing phase.

    Input: listCases
    Output: bestCase
    Begin
        simVerySimilar ← 0.95          ▹ similarity above which two cases are considered very similar
        minRatio ← 0.75                ▹ minimum ratio to add a case to listBestCases
        listBestCases ← ∅              ▹ list to store the best cases
        if listCases is empty then
            bestCase ← applyPreDefinedParameters()
        else
            for all case ∈ listCases do
                if compareSims(case, bestCase) ≥ simVerySimilar then
                    ratio ← case.OptCmax / case.Cmax
                    if ratio ≥ minRatio then
                        listBestCases.add(case)
                    else
                        ratioBest ← bestCase.OptCmax / bestCase.Cmax
                        if ratio = ratioBest then      ▹ equally effective, prefer the faster case
                            if case.TimeExec < bestCase.TimeExec then
                                bestCase ← case
                            end if
                        else
                            if ratio > ratioBest then  ▹ case is more effective
                                bestCase ← case        ▹ update best case
                            end if
                        end if
                    end if
                else                                   ▹ cases are not very similar, so compare similarities directly
                    if case.Sim > bestCase.Sim then    ▹ case is more similar
                        bestCase ← case                ▹ update best case
                    end if
                end if
                if listBestCases is not empty then
                    index ← random(0, size(listBestCases))   ▹ randomly select a case to be considered the best
                    bestCase ← listBestCases.get(index)
                end if
            end for
        end if
        return bestCase
    End
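Stripped of its tie-breaking rules, the reuse decision reduces to: prefer a random pick among very similar and effective cases (makespan ratio $OptCmax/Cmax$ of at least 0.75), and otherwise fall back to the most similar case. A simplified sketch under those assumptions:

```python
import random

def reuse(retrieved, sim_very=0.95, min_ratio=0.75):
    """Simplified reusing phase (Algorithm 7). retrieved holds (case, sim)
    pairs; cases are dicts with OptCmax and Cmax fields. Returns None when
    nothing was retrieved, in which case predefined parameters apply."""
    if not retrieved:
        return None
    best_cases = [case for case, sim in retrieved
                  if sim >= sim_very
                  and case["OptCmax"] / case["Cmax"] >= min_ratio]
    if best_cases:
        return random.choice(best_cases)      # any of the best is acceptable
    return max(retrieved, key=lambda pair: pair[1])[0]   # most similar case
```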

#### 4.2.3. Revising Phase

**Algorithm 8.** Revising phase.

    Input: solution, simBestCase
    Output: revisedSolution
    Begin
        revisedSolution ← ∅
        simCredMin ← 0.95
        credit ← 0
        if simCredMin < simBestCase then
            credit ← 10 + (1 - simCredMin) × 100
        else
            credit ← 10 + (1 - simBestCase) × 100
        end if
        creditsArray ← returnCredits(credit, solution)   ▹ calculate perturbation
        revisedSolution ← solution.updateParameters(creditsArray)
        return revisedSolution
    End

**Algorithm 9.** returnCredits() method.

    Input: credit, solution
    Output: creditsArray
    Begin
        creditsArray ← ∅
        for all param ∈ solution.listParams do    ▹ for all parameters
            if credit > 0 then
                creditsArray[param] ← random(0, credit / 2)   ▹ random credit
                credit ← credit - creditsArray[param]         ▹ discount the given credit
                if random(0, 1) = 1 then                      ▹ decide the sign of the value
                    creditsArray[param] ← -creditsArray[param]   ▹ negative if true
                end if
            else
                creditsArray[param] ← 0           ▹ there is no more credit to give
            end if
        end for
        return creditsArray
    End
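Algorithms 8 and 9 together amount to: derive a credit from the best case's similarity (never less than 15, since the similarity used in the formula is capped at $simCredMin=0.95$), then spend random shares of that credit as signed perturbations of the parameters. A sketch, assuming the credits are applied as additive offsets to numeric parameters:

```python
import random

def revise(params, sim_best, sim_cred_min=0.95):
    """Revising phase sketch (Algorithms 8 and 9): the lower the similarity
    of the best case, the larger the total perturbation credit."""
    credit = 10 + (1 - min(sim_best, sim_cred_min)) * 100
    revised = {}
    for name, value in params.items():
        if credit > 0:
            share = random.uniform(0, credit / 2)   # random share of the credit
            credit -= share                         # discount the given credit
            if random.random() < 0.5:               # decide the sign
                share = -share
            revised[name] = value + share
        else:
            revised[name] = value                   # no more credit to give
    return revised
```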

#### 4.2.4. Retaining Phase

## 5. Computational Results and Discussion

- The medians of the results obtained with "previous" are larger than the medians obtained with CBR, racing, and racing + CBR. Therefore, all three approaches improve on "previous".
- There are no significant differences between CBR and racing: either technique improves the results obtained with "previous", but their performances are statistically indistinguishable from one another.
- Combining the two techniques into racing + CBR improves the results when compared with "previous", with CBR alone, and with racing alone.
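The pairwise comparisons between approaches reported in this section rely on paired statistics over the per-instance differences; for reference, the paired t statistic $t=\bar{d}/(s_d/\sqrt{n})$ can be computed as follows (pure Python, illustrative):

```python
import math

def paired_t(d):
    """Paired-samples t statistic for a list of per-instance differences d."""
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)
```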

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Joshi, S.K.; Bansal, J.C. Parameter tuning for meta-heuristics. Knowl. Based Syst. **2020**, 189, 105094.
- Calvet, L.; Juan, A.A.; Serrat, C.; Ries, J. A statistical learning based approach for parameter fine-tuning of metaheuristics. Stat. Oper. Res. Trans. **2016**, 1, 201–224.
- Huang, C.; Li, Y.; Yao, X. A survey of automatic parameter tuning methods for metaheuristics. IEEE Trans. Evol. Comput. **2019**, 24, 201–216.
- Birattari, M. Tuning Metaheuristics: A Machine Learning Perspective; Springer: Berlin/Heidelberg, Germany, 2009.
- Cotta, C.; Sevaux, M.; Sörensen, K. Adaptive and Multilevel Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2008.
- Madureira, A.; Santos, F.; Pereira, I. Self-Managing Agents for Dynamic Scheduling in Manufacturing. In Proceedings of the 10th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO), Atlanta, GA, USA, 12–16 July 2008.
- Pereira, I.; Madureira, A. Self-optimizing through CBR learning. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010.
- Madureira, A.; Pereira, I. Self-Optimization for Dynamic Scheduling in Manufacturing Systems. In Technological Developments in Networking, Education and Automation; Springer: Dordrecht, The Netherlands, 2010.
- Pereira, I.; Madureira, A. Self-optimization module for scheduling using case-based reasoning. Appl. Soft Comput. **2013**, 13, 1419–1432.
- Pereira, I.; Madureira, A. Self-Optimizing A Multi-Agent Scheduling System: A Racing Based Approach. In Intelligent Distributed Computing IX; Springer: Berlin/Heidelberg, Germany, 2016; pp. 275–284.
- Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems; Springer: New York, NY, USA, 2012.
- Madureira, A. Meta-Heuristics Application to Scheduling in Dynamic Environments of Discrete Manufacturing. Ph.D. Thesis, University of Minho, Braga, Portugal, 2003. (In Portuguese).
- Madureira, A.; Pereira, I.; Pereira, P.; Abraham, A. Negotiation Mechanism for Self-organized Scheduling System with Collective Intelligence. Neurocomputing **2014**, 132, 97–110.
- Gonzalez, T.F. Handbook of Approximation Algorithms and Metaheuristics; CRC Press: Boca Raton, FL, USA, 2007.
- Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009.
- Pereira, I. Intelligent System for Scheduling Assisted by Learning. Ph.D. Thesis, UTAD, Vila Real, Portugal, 2014. (In Portuguese).
- Glover, F.; Kochenberger, G.A. Handbook of Metaheuristics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003.
- Sabzi, S.; Abbaspour-Gilandeh, Y.; García-Mateos, G. A fast and accurate expert system for weed identification in potato crops using metaheuristic algorithms. Comput. Ind. **2018**, 98, 80–89.
- Walha, F.; Bekrar, A.; Chaabane, S.; Loukil, T.M. A rail-road PI-hub allocation problem: Active and reactive approaches. Comput. Ind. **2016**, 81, 138–151.
- Hutter, F.; Hamadi, Y.; Hoos, H.H.; Leyton-Brown, K. Performance prediction and automated tuning of randomized and parametric algorithms. In Principles and Practice of Constraint Programming; Springer: Berlin/Heidelberg, Germany, 2006.
- Smith, J.E. Self-adaptation in evolutionary algorithms for combinatorial optimisation. In Adaptive and Multilevel Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2008.
- Smit, S.K.; Eiben, A.E. Comparing parameter tuning methods for evolutionary algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Trondheim, Norway, 18–21 May 2009.
- Hoos, H.H. Automated Algorithm Configuration and Parameter Tuning. In Autonomous Search; Hamadi, Y., Monfroy, E., Saubion, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 37–71.
- Bartz-Beielstein, T.; Parsopoulos, K.E.; Vrahatis, M.N. Analysis of particle swarm optimization using computational statistics. In Proceedings of the International Conference of Numerical Analysis and Applied Mathematics, Chalkis, Greece, 10–14 September 2004.
- Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. **1997**, 1, 67–82.
- Adenso-Diaz, B.; Laguna, M. Fine-tuning of algorithms using fractional experimental designs and local search. Oper. Res. **2006**, 54, 99–114.
- Birattari, M.; Stützle, T.; Paquete, L.; Varrentrapp, K. A Racing Algorithm for Configuring Metaheuristics. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), New York, NY, USA, 9–13 July 2002.
- Akay, B.; Karaboga, D. Parameter tuning for the artificial bee colony algorithm. In Proceedings of the International Conference on Computational Collective Intelligence, Wrocław, Poland, 5–7 October 2009; pp. 608–619.
- Iwasaki, N.; Yasuda, K.; Ueno, G. Dynamic parameter tuning of particle swarm optimization. IEEE Trans. Electr. Electron. Eng. **2006**, 1, 353–363.
- Bartz-Beielstein, T.; Markon, S. Tuning search algorithms for real-world applications: A regression tree based approach. In Proceedings of the 2004 Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; Volume 1, pp. 1111–1118.
- Amoozegar, M.; Rashedi, E. Parameter tuning of GSA using DOE. In Proceedings of the 4th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 29–30 October 2014; pp. 431–436.
- Vafadarnikjoo, A.; Firouzabadi, S.; Mobin, M.; Roshani, A. A meta-heuristic approach to locate optimal switch locations in cellular mobile networks. In Proceedings of the 2015 American Society of Engineering Management Conference (ASEM2015), Vienna, Austria, 4–9 October 2015.
- Tavana, M.; Kazemi, M.R.; Vafadarnikjoo, A.; Mobin, M. An artificial immune algorithm for ergonomic product classification using anthropometric measurements. Measurement **2016**, 94, 621–629.
- Yu, A.J.; Seif, J. Minimizing tardiness and maintenance costs in flow shop scheduling by a lower-bound-based GA. Comput. Ind. Eng. **2016**, 97, 26–40.
- Veček, N.; Mernik, M.; Filipič, B.; Črepinšek, M. Parameter tuning with Chess Rating System (CRS-Tuning) for meta-heuristic algorithms. Inf. Sci. **2016**, 372, 446–469.
- Kayvanfar, V.; Zandieh, M.; Teymourian, E. An intelligent water drop algorithm to identical parallel machine scheduling with controllable processing times: A just-in-time approach. Comput. Appl. Math. **2017**, 36, 159–184.
- Mobin, M.; Mousavi, S.M.; Komaki, M.; Tavana, M. A hybrid desirability function approach for tuning parameters in evolutionary optimization algorithms. Measurement **2018**, 114, 417–427.
- Eiben, A.; Smit, S. Evolutionary algorithm parameters and methods to tune them. In Autonomous Search; Springer: Berlin/Heidelberg, Germany, 2012.
- Hamadi, Y.; Monfroy, E.; Saubion, F. An introduction to autonomous search. In Autonomous Search; Springer: Berlin/Heidelberg, Germany, 2012.
- Skakov, E.; Malysh, V. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem. J. Phys. Conf. Ser. **2018**, 973, 012063.
- Karafotias, G.; Hoogendoorn, M.; Eiben, Á.E. Parameter control in evolutionary algorithms: Trends and challenges. IEEE Trans. Evol. Comput. **2014**, 19, 167–187.
- Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing; Springer: Berlin/Heidelberg, Germany, 2015.
- Bartz-Beielstein, T. Experimental Research in Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2006.
- Box, G.E.; Hunter, W.G.; Hunter, J.S. Statistics for Experimenters: Design, Innovation, and Discovery; John Wiley & Sons: Hoboken, NJ, USA, 2005.
- Coy, S.P.; Golden, B.L.; Runger, G.C.; Wasil, E.A. Using experimental design to find effective parameter settings for heuristics. J. Heuristics **2001**, 7, 77–97.
- Johnson, D.S. A theoretician's guide to the experimental analysis of algorithms. In Data Structures, near Neighbor Searches, and Methodology: Fifth and Sixth DIMACS Implementation Challenges; American Mathematical Society: Providence, RI, USA, 2002.
- Schaffer, J.D.; Caruana, R.A.; Eshelman, L.J.; Das, R. A study of control parameters affecting online performance of genetic algorithms for function optimization. In Proceedings of the International Conference on Genetic Algorithms, Fairfax, VA, USA, 4–7 June 1989.
- Yuan, B.; Gallagher, M. Combining Meta-EAs and racing for difficult EA parameter tuning tasks. In Parameter Setting in Evolutionary Algorithms; Springer: Berlin/Heidelberg, Germany, 2007.
- Dobslaw, F. A parameter tuning framework for metaheuristics based on design of experiments and artificial neural networks. In Proceedings of the International Conference on Computer Mathematics and Natural Computing, Rome, Italy, 2 April 2010.
- Smith-Miles, K.A. Cross-disciplinary perspectives on meta-learning for algorithm selection. ACM Comput. Surv. **2008**, 41, 1–25.
- Stoean, R.; Bartz-Beielstein, T.; Preuss, M.; Stoean, C. A Support Vector Machine-Inspired Evolutionary Approach for Parameter Tuning in Metaheuristics. Working Paper. 2009. Available online: https://www.semanticscholar.org/paper/A-Support-Vector-Machine-Inspired-Evolutionary-for-Stoean-Bartz-Beielstein/e84aa2111ab61e3b000691368fedbfc19b5e01e1 (accessed on 7 February 2021).
- Zennaki, M.; Ech-Cherif, A. A New Machine Learning based Approach for Tuning Metaheuristics for the Solution of Hard Combinatorial Optimization Problems. J. Appl. Sci. **2010**, 10, 1991–2000.
- Lessmann, S.; Caserta, M.; Arango, I.M. Tuning metaheuristics: A data mining based approach for particle swarm optimization. Expert Syst. Appl. **2011**, 38, 12826–12838.
- Maron, O.; Moore, A.W. Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation. In Advances in Neural Information Processing Systems; Morgan-Kaufmann: Burlington, MA, USA, 1993.
- Hoeffding, W. Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. **1963**, 58, 13–30.
- Lee, M.S.; Moore, A. Efficient algorithms for minimizing cross validation error. In Proceedings of the Machine Learning, Eighth International Conference, New Brunswick, NJ, USA, 10–13 July 1994.
- Dean, A.; Voss, D. Design and Analysis of Experiments; Springer: New York, NY, USA, 1999.
- Montgomery, D.C. Design and Analysis of Experiments; John Wiley & Sons: Hoboken, NJ, USA, 2008.
- Sheskin, D.J. Handbook of Parametric and Nonparametric Statistical Procedures; Chapman and Hall/CRC: New York, NY, USA, 2003.
- Aamodt, A.; Plaza, E. Case-based reasoning: Foundational issues, methodological variations, and system approaches. AI Commun. **1994**, 7, 39–59.
- Kolodner, J. Case-Based Reasoning; Morgan-Kaufmann: Burlington, MA, USA, 2014.
- Chang, J.W.; Lee, M.C.; Wang, T.I. Integrating a semantic-based retrieval agent into case-based reasoning systems: A case study of an online bookstore. Comput. Ind. **2016**, 78, 29–42.
- Zhang, P.; Essaid, A.; Zanni-Merk, C.; Cavallucci, D.; Ghabri, S. Experience capitalization to support decision making in inventive problem solving. Comput. Ind. **2018**, 101, 25–40.
- Khosravani, M.R.; Nasiri, S.; Weinberg, K. Application of case-based reasoning in a fault detection system on production of drippers. Appl. Soft Comput. **2019**, 75, 227–232.
- Beddoe, G.; Petrovic, S.; Li, J. A hybrid metaheuristic case-based reasoning system for nurse rostering. J. Sched. **2009**, 12, 99–119.
- Burke, E.K.; MacCarthy, B.L.; Petrovic, S.; Qu, R. Knowledge discovery in a hyper-heuristic for course timetabling using case-based reasoning. In Practice and Theory of Automated Timetabling IV; Springer: Berlin/Heidelberg, Germany, 2003.
- Petrovic, S.; Yang, Y.; Dror, M. Case-based selection of initialisation heuristics for metaheuristic examination timetabling. Expert Syst. Appl. **2007**, 33, 772–785.
- Beddoe, G.R.; Petrovic, S. Selecting and weighting features using a genetic algorithm in a case-based reasoning approach to personnel rostering. Eur. J. Oper. Res. **2006**, 175, 649–671.
- Madureira, A.; Pereira, I.; Sousa, N. Self-organization for scheduling in agile manufacturing. In Proceedings of the 10th International Conference on Cybernetic Intelligent Systems, London, UK, 1–2 September 2011.
- Madureira, A.; Pereira, I.; Falcão, D. Dynamic Adaptation for Scheduling Under Rush Manufacturing Orders With Case-Based Reasoning. In Proceedings of the International Conference on Algebraic and Symbolic Computation, Boston, MA, USA, 26–29 June 2013.
- Madureira, A.; Pereira, I.; Falcão, D. Cooperative Scheduling System with Emergent Swarm Based Behavior. In Advances in Information Systems and Technologies; Springer: Berlin/Heidelberg, Germany, 2013.
- Madureira, A.; Cunha, B.; Pereira, I. Cooperation Mechanism for Distributed Resource Scheduling Through Artificial Bee Colony Based Self-Organized Scheduling System. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014.
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2014.

**Figure 1.** Metaheuristic parameter tuning strategies (adapted from [15]).

**Figure 2.** Graphical representation of the computational effort of racing vs. brute-force approaches executing a number of instances over a number of candidates [4]. The dashed line represents the computational effort of brute-force approaches; the gray curve represents the computational effort of racing approaches.

**Figure 3.** The case-based reasoning (CBR) cycle (adapted from [61]).

**Figure 4.** AutoDynAgents system architecture [7].

**Table 1.** Values of tabu search parameters. $n$ is the number of tasks. $NeighGen$ is the percentage of generated neighbors. $NeighSize$ is the size of the neighborhood. $TabuListLen$ is the length of the tabu list.

| MH | n | $\mathbf{NeighGen}$ | $\mathbf{NeighSize}$ | $\mathbf{TabuListLen}$ | Stopping Criteria |
|---|---|---|---|---|---|
| TS | 10 | 3 | 50 | | |
| | 15 | 1 | | | |
| | 20 | 15% | 100% | 2 | 125 |
| | 30 | 3 | 175 | | |
| | 50 | 5 | 200 | | |

**Table 2.** Values of genetic algorithm parameters. $InitPopGen$ is the percentage of the initial population. $PopSize$ is the size of the population. $NumGen$ is the number of generations. $CrossRate$ is the crossover rate. $MutRate$ is the mutation rate.

| MH | n | $\mathbf{InitPopGen}$ | $\mathbf{PopSize}$ | $\mathbf{NumGen}$ | $\mathbf{CrossRate}$ | $\mathbf{MutRate}$ |
|---|---|---|---|---|---|---|
| GA | 10 | 100 | 75% | 1% | | |
| | 15 | 100% | 125 | 65% | | |
| | 20 | 15% | 150 | | | |
| | 30 | 200 | 75% | | | |
| | 50 | 35% | 400 | | | |

**Table 3.** Values of simulated annealing parameters. $InitTemp$ is the initial temperature. $\alpha$ is the cooling factor. $NumItK$ is the number of iterations at the same temperature.

| MH | n | $\mathbf{InitTemp}$ | $\mathit{\alpha}$ | $\mathbf{NumItK}$ | Stopping Criteria |
|---|---|---|---|---|---|
| SA | 10 | 5 | 30 | | |
| | 15 | 10 | 70 | | |
| | 20 | 15% | 80% | 20 | |
| | 30 | 30 | 90 | | |
| | 50 | 200 | | | |

**Table 4.** Values of ant colony optimization parameters. $Ants$ is the number of ants. $EvapRate$ is the pheromone evaporation rate. $\alpha$ controls the influence of pheromones. $\beta$ controls the desirability of state transitions.

| MH | n | $\mathbf{Ants}$ | $\mathbf{EvapRate}$ | $\mathit{\alpha}$ | $\mathit{\beta}$ | Stopping Criteria |
|---|---|---|---|---|---|---|
| ACO | 10 | 20 | 2 | 2 | 75 | |
| | 15 | 50 | 1 | 100 | | |
| | 20 | 30 | 80% | 1 | 2 | |
| | 30 | 50 | 2 | 250 | | |
| | 50 | 75 | 1 | 350 | | |

**Table 5.** Values of particle swarm optimization parameters. $Part$ is the number of particles. $mIn$ is the minimum inertia. $MIn$ is the maximum inertia. $c1$ and $c2$ are learning factors. $mVel$ is the minimum velocity. $MVel$ is the maximum velocity.

| MH | n | $\mathbf{Part}$ | Stopping Criteria | $\mathbf{mIn}$ | $\mathbf{MIn}$ | $\mathit{c}1$ | $\mathit{c}2$ | $\mathbf{mVel}$ | $\mathbf{MVel}$ |
|---|---|---|---|---|---|---|---|---|---|
| PSO | 10 | 30 | 1000 | | | | | | |
| | 15 | 25 | 1500 | | | | | | |
| | 20 | 1750 | 40% | 95% | 2 | 2 | −4 | 4 | |
| | 30 | 75 | 3500 | | | | | | |
| | 50 | 4500 | | | | | | | |

**Table 6.** Values of artificial bee colony parameters. $Sn$ is the size of the population. $MaxFail$ is the maximum number of failures allowed.

| MH | n | $\mathbf{Sn}$ | $\mathbf{MaxFail}$ | Stopping Criteria |
|---|---|---|---|---|
| ABC | 10 | 75 | 1250 | 2000 |
| | 15 | 1750 | 3000 | |
| | 20 | 2000 | 3500 | |
| | 30 | 125 | 2250 | 4000 |
| | 50 | 4500 | | |

**Table 7.** Descriptive analysis of the $C_{max}$ ratio for all approaches, including the obtained minimum values, maximum values, averages and standard deviations.

| | Min Value | Max Value | Average | $\mathit{\sigma}$ |
|---|---|---|---|---|
| Previous results | 0.07 | 0.57 | 0.3792 | 0.13217 |
| Racing | 0.08 | 0.53 | 0.3474 | 0.11834 |
| CBR | 0.09 | 0.50 | 0.3388 | 0.10736 |
| Racing + CBR | 0.07 | 0.48 | 0.3142 | 0.10999 |

**Table 8.** Paired t-test results comparing racing + CBR with each of the other approaches.

| | Average | $\mathit{\sigma}$ | t | DoF | p |
|---|---|---|---|---|---|
| Previous vs. Racing + CBR | 0.06496 | 0.06499 | 5.475 | 29 | 0.000 |
| Racing vs. Racing + CBR | 0.03314 | 0.03582 | 5.068 | 29 | 0.000 |
| CBR vs. Racing + CBR | 0.02454 | 0.05208 | 2.581 | 29 | 0.015 |

**Table 9.** Median, mean, standard deviation and Shapiro–Wilk normality test p-values for the results obtained with the four approaches.

| Approach | Median | Mean | Std. Deviation | Shapiro–Wilk p-Value |
|---|---|---|---|---|
| Previous | 0.4095 | 0.3792 | 0.1321693 | 0.10 |
| CBR | 0.35210 | 0.33880 | 0.107356 | 0.11 |
| Racing | 0.35770 | 0.34740 | 0.1183369 | 0.08 |
| Racing + CBR | 0.31900 | 0.31420 | 0.1099901 | 0.06 |

**Table 10.** Friedman's aligned rank and Quade test results.

| | Test Statistic | Df | p-Value |
|---|---|---|---|
| Friedman's Aligned Rank | $37.939$ | 3 | $2.911\times {10}^{-8}<0.05$ |
| Quade | $16.988$ | 3 | $9.052\times {10}^{-9}<0.05$ |

**Table 11.** Post hoc p-values for CBR, racing and racing + CBR.

| | CBR | Racing | Racing + CBR |
|---|---|---|---|
| p-value | $7.22\times {10}^{-6}<0.05$ | $0.0018<0.05$ | $5.95\times {10}^{-12}<0.05$ |

**Table 12.** Pairwise comparison p-values between the four approaches.

| | CBR | Racing | Racing + CBR |
|---|---|---|---|
| Previous | $2.2\times {10}^{-5}<0.05$ | $3.6\times {10}^{-3}<0.05$ | $3.6\times {10}^{-11}<0.05$ |
| CBR | | $1.7\times {10}^{-1}>0.05$ | $1.7\times {10}^{-2}<0.05$ |
| Racing | | | $5.1\times {10}^{-4}<0.05$ |

**Table 13.** One-sided tests comparing the medians ($\eta$) of the four approaches.

| Null Hypothesis | Alternative Hypothesis | p-Value |
|---|---|---|
| ${\eta}_{Previous}={\eta}_{CBR}$ | ${\eta}_{Previous}>{\eta}_{CBR}$ | $0.0002214<0.05$ |
| ${\eta}_{Previous}={\eta}_{Racing}$ | ${\eta}_{Previous}>{\eta}_{Racing}$ | $2.611\times {10}^{-5}<0.05$ |
| ${\eta}_{Previous}={\eta}_{Racing+CBR}$ | ${\eta}_{Previous}>{\eta}_{Racing+CBR}$ | $3.392\times {10}^{-6}<0.05$ |
| ${\eta}_{CBR}={\eta}_{Racing+CBR}$ | ${\eta}_{CBR}>{\eta}_{Racing+CBR}$ | $0.007589<0.05$ |
| ${\eta}_{Racing}={\eta}_{Racing+CBR}$ | ${\eta}_{Racing}>{\eta}_{Racing+CBR}$ | $1.05\times {10}^{-5}<0.05$ |


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Pereira, I.; Madureira, A.; Costa e Silva, E.; Abraham, A.
A Hybrid Metaheuristics Parameter Tuning Approach for Scheduling through Racing and Case-Based Reasoning. *Appl. Sci.* **2021**, *11*, 3325.
https://doi.org/10.3390/app11083325
