Rethinking Metaheuristics: Unveiling the Myth of “Novelty” in Metaheuristic Algorithms
Abstract
1. Introduction
- Ineffective metaphor: Many algorithms claim novelty through “unique terminology based on new metaphors” that differs from existing work only in superficial description rather than in substantive mechanisms. These metaphors, drawn from sources such as biological evolution or physical phenomena, offer intuitive appeal but lack scientific rigor, clear definitions, and explanatory mechanisms. They often obscure the underlying mathematical principles, making it difficult to understand an algorithm’s performance or limitations. In some cases, the metaphor is oversimplified or altered to resemble an optimization process even when such a link is unsubstantiated. More concerning, many metaphor-based metaheuristic algorithms suffer from a disconnect between conceptualization, mathematical modeling, and implementation, which severely impairs transparency and hinders further analysis and practical adoption.
- Structural bias: Structural bias in metaheuristic algorithms refers to a tendency for certain regions or patterns in the search space to be favored due to the algorithm’s design. This bias can be unintentionally introduced through initialization, search operators, or parameters, or it can be intentionally embedded to improve performance on specific problems. The intentional introduction of structural bias distorts evaluations, creating misleading conclusions about the algorithm’s capabilities, particularly when tested on simple problems.
- Old wine in new bottles: Many metaheuristic algorithms are little more than rebranded versions of classical methods, with only superficial changes. These incremental modifications often lack substantial innovation in the optimization process, leading to a proliferation of algorithms that offer limited practical or theoretical advancement. This oversaturation of similar algorithms complicates performance evaluation and undermines the field’s rigor, diverting attention from addressing core optimization challenges.
- Controversial experimental comparison: Fairness in experimental comparisons is essential to ensure the validity and reliability of conclusions. However, selective testing—focusing only on problem instances where an algorithm performs well or comparing against overly simplistic “toy” algorithms—can lead to inflated performance claims. This selective reporting distorts the algorithm’s true capabilities, creating a false impression of superiority and undermining the credibility of experimental results.
- Critical challenges that have long hindered progress in the field of metaheuristic algorithms were systematically identified and analyzed, including issues such as metaphor-based design, structural bias, and repackaging. These pervasive flaws significantly obstruct the development of more robust, generalizable, and theoretically sound optimization methods, highlighting the urgent need for a paradigm shift in this domain.
- Existing methods for identifying defects in metaheuristic algorithms were reviewed and summarized, providing essential diagnostic techniques. These contributions are pivotal in advancing the understanding and mitigation of structural biases in optimization processes, laying the groundwork for more reliable and effective algorithm development.
- The root causes of the negative impacts associated with current metaheuristic approaches were investigated in detail, offering insight into the factors that drive them.
- An exhaustive summary and analysis of potential research directions in the metaheuristic algorithm domain was presented, complemented by constructive feedback and practical recommendations. These insights are intended to stimulate new avenues of research, address emerging challenges, and promote innovation in the development of more effective and impactful optimization techniques.
2. Metaheuristic Algorithms
2.1. Swarm Intelligence-Based Algorithms
2.2. Evolution-Based Algorithms
2.3. Physics-Based Algorithms
2.4. Human-Based Algorithms
3. Problems with Metaheuristic Algorithms
3.1. Metaphors
3.2. Structural Bias
3.3. Repackaging
3.4. Other Issues
3.4.1. Inconsistency Between Theory and Code Implementation
3.4.2. Controversial Empirical Comparison
4. Identifying Defects
4.1. Generalized Signature Test
4.2. Shifted Benchmark Function
4.3. Parallel Coordinates Test
4.4. BIAS Toolbox
4.5. Region Scaling
5. Analysis of the Causes of These Negative Effects of Metaheuristic Algorithms
- Over-reliance on simplified metaphors: Many metaheuristic algorithms are inspired by natural phenomena such as bird flocking or species evolution. While these metaphors offer intuitive insights, they often oversimplify complex real-world problems, overlooking nonlinear relationships, complex dynamics, and systemic constraints. Consequently, algorithm designs may lack thorough analysis of the practical context, relying on metaphors rather than on rigorous theoretical research. Excessive reliance on such metaphors can also lead designers to neglect an algorithm’s core principles, resulting in poor performance on complex problems.
- Overemphasis on performance: Performance is often prioritized in metaheuristic research, with a focus on optimizing speed and accuracy. However, this emphasis can overshadow important factors such as stability, interpretability, and applicability. Selective experimental designs—choosing specific test problems or parameter configurations—can introduce bias. Additionally, an excessive focus on performance may cause algorithms to become stuck in local optima or overfit, limiting their effectiveness on more complex, diverse real-world problems.
- Low-risk, low-cost “repackaging”: The “repackaging” phenomenon involves minor modifications to existing algorithms, such as parameter tweaks, presented as new methods. This low-cost, low-risk approach allows researchers to bypass the challenges of designing new algorithms, making it a shortcut to publishing papers. However, this method often lacks innovation and does not contribute to significant advancements in the field. Table 4 summarizes the key differences between genuinely novel metaheuristic algorithms and those that merely represent repackaged variants. As shown, repackaged algorithms tend to reuse existing structures with minimal modification, offering limited theoretical or practical contributions to the field.
- Perverse incentive structure: Campelo and Aranha [111] highlighted perverse incentives within academia, where the “publish or perish” mentality prioritizes short-term achievements over deep scientific inquiry. This environment rewards substandard methodologies, fostering what has been termed the “natural selection of bad science” [112]. Publishing metaphor-based methods is perceived as a relatively low-effort, low-risk endeavor with potentially high rewards.
6. Actionable Recommendations for Advancing Metaheuristic Algorithm Research
- Propose actionable improvements to existing algorithms rather than relying solely on metaphors: While metaphor-based algorithms can be creative, true innovation requires authors to critically analyze and enhance the underlying mechanisms of existing methods. Instead of merely introducing a new metaphor, researchers should consider the following: (1) Use a systematic algorithm evaluation framework [113,114] to identify specific limitations in existing algorithms (e.g., premature convergence, poor scalability) and propose modifications that address them. (2) Justify improvements with theoretical analysis (e.g., convergence guarantees and complexity reduction) or empirical validation on real-world problems. (3) Clearly articulate how the proposed changes differ from prior work and why they constitute a meaningful advance.
  - Mechanism-Level Diagnosis: Conduct an in-depth analysis of existing algorithms to identify specific issues such as premature convergence, stagnation, or high computational cost.
  - Component-Wise Modification: Replace or enhance key components (e.g., selection, mutation, or update rules) with well-justified alternatives designed to improve performance on identified weaknesses.
  - Empirical and Theoretical Justification: Support the modifications with rigorous theoretical analysis (e.g., complexity bounds and convergence proofs) and ablation studies that isolate the effect of each change (see the sketch after this list).
  - Highlight Novelty and Relevance: Clearly explain how the proposed modifications differ from prior work and why they address a real shortcoming rather than simply rebranding an existing idea.
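To make the ablation-study point concrete, the following is a minimal sketch, assuming a toy hill climber with two hypothetical components (an adaptive step size and random restarts); the optimizer, its components, and the test function are placeholders rather than anything taken from the surveyed papers. Each component is switched off in turn and the resulting variants are compared over repeated independent runs on the same problem.

```python
import numpy as np

def sphere(x):
    """Placeholder objective; a real study would use a benchmark suite or an application problem."""
    return float(np.sum(x ** 2))

def optimizer(obj, dim=10, iters=2000, use_adaptive_step=True, use_restarts=True, seed=0):
    """Toy hill climber with two toggleable components (hypothetical design choices)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, dim)
    fx = obj(x)
    step = 0.5
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, dim)
        fc = obj(cand)
        if fc < fx:
            x, fx = cand, fc
            if use_adaptive_step:
                step *= 1.1              # grow the step after a success
        else:
            if use_adaptive_step:
                step *= 0.95             # shrink the step after a failure
            if use_restarts and rng.random() < 0.001:
                x = rng.uniform(-5.0, 5.0, dim)   # occasional random restart
                fx = obj(x)
    return fx

# Ablation: disable each component in turn and compare across independent runs.
variants = {
    "full":          dict(use_adaptive_step=True,  use_restarts=True),
    "no adaptation": dict(use_adaptive_step=False, use_restarts=True),
    "no restarts":   dict(use_adaptive_step=True,  use_restarts=False),
}
for name, flags in variants.items():
    results = [optimizer(sphere, seed=s, **flags) for s in range(10)]
    print(f"{name:>14}: mean={np.mean(results):.3e}  std={np.std(results):.3e}")
```

Reporting such a comparison alongside statistical tests makes it clear which modification actually carries the claimed improvement.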
- Prioritize solving real-world optimization problems rather than tailoring structurally biased algorithms to fit biased benchmark functions: The transition from theoretical research to practical applications necessitates moving beyond algorithm optimization for benchmark functions toward addressing real-world optimization challenges. The use of biased benchmarks can guide the development of algorithms that exploit such biases, but this should not be the end goal. The true objective is to solve the specific problem at hand in the most efficient and effective way possible. In many cases, real-world problems differ significantly from the idealized scenarios presented in benchmark problems. When benchmarks are designed solely for performance testing, they often contain features that favor algorithms with specific structural biases, which can lead to misleading conclusions and unfair comparisons. This overemphasis on fitting algorithms to benchmark problems can result in solutions that perform well in artificial settings but fail to generalize to more complex, dynamic, and noisy real-world environments.
  - Incorporate Real-World Case Studies: Validate algorithms on real-world problems from domains such as logistics, energy, scheduling, or engineering design to assess practical relevance.
  - Cross-Validation on Application Domains: Evaluate algorithm performance across multiple real-world scenarios instead of focusing solely on synthetic benchmarks.
  - Problem-Aware Algorithm Design: Integrate domain knowledge into algorithm design (e.g., constraints and objective structure) to better adapt to specific application needs (a small sketch follows this list).
- A correct and comprehensive view of the NFL Theorem: The No Free Lunch (NFL) Theorem [115], a foundational result in optimization theory, asserts that when averaged over all possible objective functions, all optimization algorithms perform equally in terms of their success rates. In other words, no algorithm is universally superior across the entire space of optimization problems. This theorem has profound implications for the design and evaluation of metaheuristic algorithms [64]. Unfortunately, the NFL Theorem is often misunderstood or misused in the metaheuristics community. Some works cite it merely to justify the creation of new algorithms without rigorously addressing the problem-specific contexts in which such algorithms are to be applied. This superficial use neglects the theorem’s core implication—algorithmic performance is inherently problem-dependent. Simply put, the success of any given metaheuristic is contingent upon the structural characteristics of the target problem domain, such as modality, dimensionality, constraint complexity, or noise properties. A deeper understanding of the NFL Theorem encourages a paradigm shift in metaheuristic design—from the pursuit of generic, one-size-fits-all algorithms to the development of customized, domain-aware strategies. For instance, integrating prior knowledge, adaptive control parameters, or hybrid mechanisms tailored to a specific problem landscape aligns with the spirit of the NFL Theorem. It promotes a scientific, problem-driven methodology rather than blind algorithm proliferation. Therefore, rather than serving as a pretext for proposing yet another algorithm inspired by arbitrary metaphors, the NFL Theorem should be interpreted as a call to rigorously align algorithmic mechanisms with the structural features of specific optimization problems. This fosters a more meaningful balance between theoretical insight and practical effectiveness in the ongoing advancement of metaheuristic research.
  - Perform problem landscape analysis: Analyze key characteristics of the optimization problem landscape—such as modality, ruggedness, separability, and epistasis—to classify problem types.
  - Construct algorithm portfolios or hybrid frameworks: Develop algorithm portfolios or hybrid metaheuristic frameworks that can dynamically adapt to different problem structures (see the sketch after this list).
  - Incorporate domain knowledge or problem-specific priors: Integrate relevant domain-specific information—such as physical laws, empirical constraints, or expert heuristics—into the design of algorithmic operators (e.g., initialization, mutation, repair).
- Use fair experimental comparisons, rather than relying on a subset of problems or choosing “toy” competitors: To ensure the validity and reliability of experimental comparisons, it is crucial to adopt a fair and comprehensive approach, rather than relying on a subset of problems or choosing “toy” competitors. These subset problems are unlikely to represent all scenarios in the real world and can lead to misleading conclusions about an algorithm’s true capabilities. Furthermore, choosing “toy” competitors—algorithms intentionally designed to be weak or perform poorly—only serves to artificially inflate the performance of the evaluated algorithm. Such practices distort the competitive landscape and undermine the credibility of the research. A fair experimental comparison should involve a diverse set of benchmark problems spanning a wide range of difficulty levels and characteristics, reflecting the complexity, uncertainty, and dynamic nature of real-world optimization tasks.
  - Adopt Standard Benchmark Suites: Employ well-established and widely accepted benchmark sets (e.g., CEC [116]) that span multiple problem types and difficulties.
  - Select Strong Baselines: Compare against state-of-the-art and high-performing algorithms from both classical and recent literature, rather than underperforming ones.
  - Ensure Reproducibility: Report experimental setups in full detail (parameter settings, stopping criteria, and platform) and release source code when possible.
  - Conduct Statistical Tests: Use non-parametric tests (e.g., Wilcoxon signed-rank and Friedman test) to ensure performance differences are statistically significant (see the sketch after this list).
- Provide more comprehensive and transparent analysis: Researchers are encouraged to adopt comprehensive and transparent analytical approaches when reporting their findings. Simply listing raw data in full-page tables is not sufficient; proper validation of the results is essential and should include not only appropriate statistical tests but also a clear justification for their use, ensuring that all necessary assumptions are met. In addition to statistical testing, authors should employ visualization techniques that summarize large volumes of data in a form that readers can easily interpret.
  - Use Appropriate Statistical Testing: Apply statistical analysis with clear explanation of the assumptions, test types, and confidence levels (e.g., 95% confidence intervals).
  - Visual Data Summarization: Use plots such as boxplots, convergence curves, or heatmaps to visually communicate algorithm behavior and performance trends (see the sketch after this list).
  - Perform Ablation Studies: Show how each algorithmic component contributes to overall performance by systematically removing or modifying them.
  - Report Variability and Robustness: Include metrics such as standard deviation, interquartile range, or success rate to reflect algorithm stability across runs.
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, C.H.; Nguyen, T.T.; Pan, J.S.; Dao, T.K. An optimization approach for potential power generator outputs based on parallelized firefly algorithm. Smart Innov. Syst. Technol. 2017, 64, 297–306. [Google Scholar]
- Dong, Z.; Wang, C.H.; Zhao, Q.; Wei, Y.; Chen, S.; Yang, Q. A study on intelligent optimization algorithms for capacity allocation of production networks. Lect. Notes Electr. Eng. 2022, 804, 734–743. [Google Scholar]
- Wei, Y.; Wang, C.H.; Suo, Y.; Zhao, Q.; Yuan, J.; Chen, S. FHO-based hybrid neural networks for short-term load forecasting in economic dispatch of power systems. J. Netw. Intell. 2025, 10, 262–284. [Google Scholar]
- Chen, S.; Wang, C.H.; Dong, Z.; Zhao, Q.; Yang, Q.; Wei, Y.; Huang, G. Performance evaluation of three intelligent optimization algorithms for obstacle avoidance path planning. Lect. Notes Electr. Eng. 2022, 833, 60–69. [Google Scholar]
- Wang, C.H.; Lee, C.J.; Wu, X. A coverage-based location approach and performance evaluation for the deployment of 5G base stations. IEEE Access 2020, 8, 123320–123333. [Google Scholar] [CrossRef]
- Liu, H.; Wang, C.H. SEAMS: A surrogate-assisted evolutionary algorithm with metric-based dynamic strategy for expensive multi-objective optimization. Expert Syst. Appl. 2025, 265, 126050. [Google Scholar] [CrossRef]
- Fister, I., Jr.; Mlakar, U.; Brest, J.; Fister, I. A new population-based nature-inspired algorithm every month: Is the current era coming to the end? In Proceedings of the 3rd Student Computer Science Research Conference, Ljubljana, Slovenia, 12 October 2016; University of Primorska Press: Koper, Slovenia, 2016; pp. 33–37. [Google Scholar]
- Wang, C.H.; Tian, R.; Hu, K.; Chen, Y.T.; Ku, T.H. A Markov decision optimization of medical service resources for two-class patient queues in emergency departments via particle swarm optimization algorithm. Sci. Rep. 2025, 15, 2942. [Google Scholar] [CrossRef]
- Zhao, Q.; Duan, Q.; Yan, B.; Cheng, S.; Shi, Y. Automated design of metaheuristic algorithms: A survey. arXiv 2023, arXiv:2303.06532. [Google Scholar]
- Hussain, K.; Mohd Salleh, M.N.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef]
- Tomar, V.; Bansal, M.; Singh, P. Metaheuristic algorithms for optimization: A brief review. Eng. Proc. 2024, 59, 238. [Google Scholar]
- Nassef, A.M.; Abdelkareem, M.A.; Maghrabie, H.M.; Baroutaji, A. Review of metaheuristic optimization algorithms for power systems problems. Sustainability 2023, 15, 9434. [Google Scholar] [CrossRef]
- Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
- Wang, C.H.; Yuan, J.; Zeng, Y.; Lin, S. A deep learning integrated framework for predicting stock index price and fluctuation via singular spectrum analysis and particle swarm optimization. Appl. Intell. 2024, 54, 1770–1797. [Google Scholar] [CrossRef]
- Whitacre, J.M. Survival of the flexible: Explaining the recent popularity of nature-inspired optimization within a rapidly evolving world. Computing 2011, 93, 135–146. [Google Scholar] [CrossRef]
- Karafotias, G.; Hoogendoorn, M.; Eiben, A.E. Parameter Control in Evolutionary Algorithms: Trends and Challenges. IEEE Trans. Evol. Comput. 2015, 19, 167–187. [Google Scholar] [CrossRef]
- Wang, C.H.; Zhao, Q.; Tian, R. Short-term wind power prediction based on a hybrid Markov-based PSO-BP neural network. Energies 2023, 16, 4282. [Google Scholar] [CrossRef]
- Yang, Q.; Wang, C.H.; Dao, T.K.; Nguyen, T.T.; Zhao, Q.; Chen, S. An Optimal Wind Turbine Control Based on Improved Chaotic Sparrow Search Algorithm with Normal Cloud Model. J. Netw. Intell. 2024, 9, 108–125. [Google Scholar]
- Wang, C.H.; Chen, S.; Zhao, Q.; Suo, Y. An Efficient End-to-End Obstacle Avoidance Path Planning Algorithm for Intelligent Vehicles Based on Improved Whale Optimization Algorithm. Mathematics 2023, 11, 1800. [Google Scholar] [CrossRef]
- Taleb, S.M.; Yasin, E.T.; Saadi, A.A.; Dogan, M.; Yahia, S.; Meraihi, Y.; Koklu, M.; Mirjalili, S.; Ramdane-Cherif, A. A Comprehensive Survey of Aquila Optimizer: Theory, Variants, Hybridization, and Applications. Arch. Comput. Methods Eng. 2025, 1–47. [Google Scholar] [CrossRef]
- Kazikova, A.; Pluhacek, M.; Senkerik, R. How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior? IEEE Access 2021, 9, 44032–44048. [Google Scholar] [CrossRef]
- Valencia-Rivera, G.H.; Benavides-Robles, M.T.; Morales, A.V.; Amaya, I.; Cruz-Duarte, J.M.; Ortiz-Bayliss, J.C.; Avina-Cervantes, J.G. A systematic review of metaheuristic algorithms in electric power systems optimization. Appl. Soft Comput. 2024, 150, 111047. [Google Scholar] [CrossRef]
- Sadeghian, Z.; Akbari, E.; Nematzadeh, H.; Motameni, H. A review of feature selection methods based on meta-heuristic algorithms. J. Exp. Theor. Artif. Intell. 2025, 37, 1–51. [Google Scholar] [CrossRef]
- Benaissa, B.; Kobayashi, M.; Al Ali, M.; Khatir, T.; Elmeliani, M.E.A.E. Metaheuristic optimization algorithms: An overview. HCMCOU J. Sci. Adv. Comput. Struct. 2024, 14, 33–61. [Google Scholar] [CrossRef]
- Li, G.; Zhang, T.; Tsai, C.Y.; Yao, L.; Lu, Y.; Tang, J. Review of the metaheuristic algorithms in applications: Visual analysis based on bibliometrics (1994–2023). Expert Syst. Appl. 2024, 255, 124857. [Google Scholar] [CrossRef]
- Rautray, R.; Dash, R.; Dash, R.; Chandra Balabantaray, R.; Parida, S.P. A review on metaheuristic approaches for optimization problems. In Computational Intelligence in Healthcare Informatics; Springer: Singapore, 2024; pp. 33–55. [Google Scholar]
- Hooker, J.N. Needed: An Empirical Science of Algorithms. Oper. Res. 1994, 42, 201–212. [Google Scholar] [CrossRef]
- Eiben, A.E.; Jelasity, M. A critical note on experimental research methodology in EC. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No. 02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 582–587. [Google Scholar]
- García-Martínez, C.; Gutiérrez, P.D.; Molina, D.; Lozano, M.; Herrera, F. Since CEC 2005 competition on real-parameter optimisation: A decade of research, progress and comparative analysis’s weakness. Soft Comput. 2017, 21, 5573–5583. [Google Scholar] [CrossRef]
- Campelo, F.; Takahashi, F. Sample size estimation for power and accuracy in the experimental comparison of algorithms. J. Heuristics 2019, 25, 305–338. [Google Scholar] [CrossRef]
- Song, Q.; Fong, S. Brick-Up Metaheuristic Algorithms. In Proceedings of the 2016 5th IIAI International Congress on Advanced Applied Informatics (IIAI–AAI), Kumamoto, Japan, 10–14 July 2016; pp. 583–587. [Google Scholar]
- Sarhani, M.; Voß, S.; Jovanovic, R. Initialization of metaheuristics: Comprehensive review, critical analysis, and research directions. Int. Trans. Oper. Res. 2023, 30, 3361–3397. [Google Scholar] [CrossRef]
- Piotrowski, A.P.; Napiorkowski, J.J. Some metaheuristics should be simplified. Inf. Sci. 2018, 427, 32–62. [Google Scholar] [CrossRef]
- Dragoi, E.N.; Dafinescu, V. Review of Metaheuristics Inspired from the Animal Kingdom. Mathematics 2021, 9, 2335. [Google Scholar] [CrossRef]
- Oliveira, M.; Pinheiro, D.; Macedo, M.; Bastos-Filho, C.; Menezes, R. Uncovering the social interaction network in swarm intelligence algorithms. Appl. Netw. Sci. 2020, 5, 24. [Google Scholar] [CrossRef]
- Hodashinsky, I.A. Methods for Improving the Efficiency of Swarm Optimization Algorithms. A Survey. Autom. Remote Control 2021, 82, 935–967. [Google Scholar] [CrossRef]
- Campelo, F.; Aranha, C. Sharks, Zombies and Volleyball: Lessons from the Evolutionary Computation Bestiary. In Proceedings of the LIFELIKE Computing Systems Workshop 2021. CEUR-WS.org, 2021. Available online: https://publications.aston.ac.uk/id/eprint/43161/1/main.pdf (accessed on 20 December 2024).
- Del Ser, J.; Osaba, E.; Molina, D.; Yang, X.S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.N.; Coello, C.A.C.; Herrera, F. Bio-inspired computation: Where we stand and what’s next. Swarm Evol. Comput. 2019, 48, 220–250. [Google Scholar] [CrossRef]
- Osaba, E.; Villar-Rodriguez, E.; Del Ser, J.; Nebro, A.J.; Molina, D.; LaTorre, A.; Suganthan, P.N.; Coello, C.A.C.; Herrera, F. A Tutorial On the design, experimentation and application of metaheuristic algorithms to real-World optimization problems. Swarm Evol. Comput. 2021, 64, 100888. [Google Scholar] [CrossRef]
- LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973. [Google Scholar] [CrossRef]
- Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549. [Google Scholar] [CrossRef]
- Abualigah, L.; Shehab, M.; Alshinwan, M.; Mirjalili, S.; Elaziz, M.A. Ant lion optimizer: A comprehensive survey of its variants and applications. Arch. Comput. Methods Eng. 2021, 28, 1397–1416. [Google Scholar] [CrossRef]
- Makhadmeh, S.N.; Al-Betar, M.A.; Abasi, A.K.; Awadallah, M.A.; Doush, I.A.; Alyasseri, Z.A.A.; Alomari, O.A. Recent advances in butterfly optimization algorithm, its versions and applications. Arch. Comput. Methods Eng. 2023, 30, 1399–1420. [Google Scholar] [CrossRef]
- Liu, Y.; As’ arry, A.; Hassan, M.K.; Hairuddin, A.A.; Mohamad, H. Review of the grey wolf optimization algorithm: Variants and applications. Neural Comput. Appl. 2024, 36, 2713–2735. [Google Scholar] [CrossRef]
- Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
- Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
- Priyadarshi, R.; Kumar, R.R. Evolution of Swarm Intelligence: A Systematic Review of Particle Swarm and Ant Colony Optimization Approaches in Modern Research. Arch. Comput. Methods Eng. 2025, 1–42. [Google Scholar] [CrossRef]
- Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef]
- Guilmeau, T.; Chouzenoux, E.; Elvira, V. Simulated annealing: A review and a new scheme. In Proceedings of the 2021 IEEE Statistical Signal Processing Workshop (SSP), Rio de Janeiro, Brazil, 11–14 July 2021; pp. 101–105. [Google Scholar]
- Abualigah, L.; Elaziz, M.A.; Hussien, A.G.; Alsalibi, B.; Jalali, S.M.J.; Gandomi, A.H. Lightning search algorithm: A comprehensive survey. Appl. Intell. 2021, 51, 2353–2376. [Google Scholar] [CrossRef] [PubMed]
- Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.K.; Sallam, K.; Ryan, M.J. An efficient teaching-learning-based optimization algorithm for parameters identification of photovoltaic models: Analysis and validations. Energy Convers. Manag. 2021, 227, 113614. [Google Scholar] [CrossRef]
- Navaneetha Krishnan, M.; Thiyagarajan, R. Multi-objective task scheduling in fog computing using improved gaining sharing knowledge based algorithm. Concurr. Comput. Pract. Exp. 2022, 34, e7227. [Google Scholar] [CrossRef]
- Weyland, D. A Rigorous Analysis of the Harmony Search Algorithm: How the Research Community can be Misled by a “Novel” Methodology. Int. J. Appl. Metaheuristic Comput. 2010, 1, 50–60. [Google Scholar] [CrossRef]
- Weyland, D. A critical analysis of the harmony search algorithm-How not to solve sudoku. Oper. Res. Perspect. 2015, 2, 97–105. [Google Scholar] [CrossRef]
- Saka, M.P.; Hasançebi, O.; Geem, Z.W. Metaheuristics in structural optimization and discussions on harmony search algorithm. Swarm Evol. Comput. 2016, 28, 88–97. [Google Scholar] [CrossRef]
- Camacho-Villalón, C.L.; Dorigo, M.; Stützle, T. The intelligent water drops algorithm: Why it cannot be considered a novel algorithm. Swarm Intell. 2019, 13, 173–192. [Google Scholar] [CrossRef]
- Simon, D.; Rarick, R.; Ergezer, M.; Du, D. Analytical and numerical comparisons of biogeography-based optimization and genetic algorithms. Inf. Sci. 2011, 181, 1224–1248. [Google Scholar] [CrossRef]
- Camacho-Villalón, C.L.; Dorigo, M.; Stützle, T. Exposing the grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms: Six misleading optimization techniques inspired by bestial metaphors. Int. Trans. Oper. Res. 2023, 30, 2945–2971. [Google Scholar] [CrossRef]
- Davarynejad, M.; van den Berg, J.; Rezaei, J. Evaluating center-seeking and initialization bias: The case of particle swarm and gravitational search algorithms. Inf. Sci. 2014, 278, 802–821. [Google Scholar] [CrossRef]
- Camacho-Villalón, C.L.; Dorigo, M.; Stützle, T. An analysis of why cuckoo search does not bring any novel ideas to optimization. Comput. Oper. Res. 2022, 142, 105747. [Google Scholar] [CrossRef]
- Baronti, L.; Castellani, M.; Pham, D.T. An analysis of the search mechanisms of the bees algorithm. Swarm Evol. Comput. 2020, 59, 100746. [Google Scholar] [CrossRef]
- Kudela, J. The Evolutionary Computation Methods No One Should Use. arXiv 2023, arXiv:2301.01984. [Google Scholar]
- Piotrowski, A.P.; Napiorkowski, J.J.; Rowinski, P.M. How novel is the "novel" black hole optimization approach? Inf. Sci. 2014, 267, 191–200. [Google Scholar] [CrossRef]
- Velasco, L.; Guerrero, H.; Hospitaler, A. A Literature Review and Critical Analysis of Metaheuristics Recently Developed. Arch. Comput. Methods Eng. 2024, 31, 125–146. [Google Scholar] [CrossRef]
- Deng, L.; Liu, S. Deficiencies of the whale optimization algorithm and its validation method. Expert Syst. Appl. 2024, 237, 121544. [Google Scholar] [CrossRef]
- Niu, P.; Niu, S.; Liu, N.; Chang, L. The defect of the Grey Wolf optimization algorithm and its verification method. Knowl.-Based Syst. 2019, 171, 37–43. [Google Scholar] [CrossRef]
- Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl.-Based Syst. 2021, 213, 106684. [Google Scholar] [CrossRef]
- Harandi, N.; Van Messem, A.; De Neve, W.; Vankerschaver, J. Grasshopper Optimization Algorithm (GOA): A Novel Algorithm or A Variant of PSO? In Proceedings of the International Conference on Swarm Intelligence, Konstanz, Germany, 9–11 October 2024; Springer: Cham, Switzerland, 2024; pp. 84–97. [Google Scholar]
- Rajwar, K.; Deep, K.; Mathirajan, M. Impact of Structural Bias on the Sine Cosine Algorithm: A Theoretical Investigation Using the Signature Test. In Proceedings of the International Conference on Metaheuristics and Nature Inspired Computing, Marrakech, Morocco, 1–4 November 2023; Springer: Cham, Switzerland, 2023; pp. 131–141. [Google Scholar]
- Halsema, M.; Vermetten, D.; Bäck, T.; Van Stein, N. A Critical Analysis of Raven Roost Optimization. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Melbourne, VIC, Australia, 14–18 July 2024; pp. 1993–2001. [Google Scholar]
- Kumar, M.; Rajwar, K.; Deep, K. Analysis of Marine Predators Algorithm using BIAS toolbox and Generalized Signature Test. Alex. Eng. J. 2024, 95, 38–49. [Google Scholar] [CrossRef]
- Deng, L.; Liu, S. Exposing the chimp optimization algorithm: A misleading metaheuristic technique with structural bias. Appl. Soft Comput. 2024, 158, 111574. [Google Scholar] [CrossRef]
- Deng, L.; Liu, S. Metaheuristics exposed: Unmasking the design pitfalls of arithmetic optimization algorithm in benchmarking. Appl. Soft Comput. 2024, 160, 111696. [Google Scholar] [CrossRef]
- Chen, S.; Islam, S.; Lao, S. The Danger of Metaphors for Metaheuristic Design. In Proceedings of the 2023 IEEE Latin American Conference on Computational Intelligence (LA-CCI), Recife-Pe, Brazil, 29 October–1 November 2023; pp. 1–6. [Google Scholar]
- Sörensen, K. Metaheuristics—the metaphor exposed. Int. Trans. Oper. Res. 2015, 22, 3–18. [Google Scholar] [CrossRef]
- Kononova, A.V.; Caraffini, F.; Wang, H.; Bäck, T. Can Single Solution Optimisation Methods Be Structurally Biased? In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–9. [Google Scholar]
- Caraffini, F.; Kononova, A.V. Structural bias in differential evolution: A preliminary study. Aip Conf. Proc. 2019, 2070, 020005. [Google Scholar]
- Castelli, M.; Manzoni, L.; Mariot, L.; Nobile, M.S.; Tangherloni, A. Salp Swarm Optimization: A critical review. Expert Syst. Appl. 2022, 189, 116029. [Google Scholar] [CrossRef]
- Vent, W. Rechenberg, Ingo, Evolutionsstrategie—Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution; Frommann-Holzboog-Verlag: Stuttgart, Germany, 1973. [Google Scholar]
- Padberg, M. Harmony Search Algorithms for binary optimization problems. In Operations Research Proceedings 2011; Springer: Berlin/Heidelberg, Germany, 2012; pp. 343–348. [Google Scholar]
- De Corte, A.; Sörensen, K. Optimisation of gravity-fed water distribution network design: A critical review. Eur. J. Oper. Res. 2013, 228, 1–10. [Google Scholar] [CrossRef]
- Schwefel, H.P. Numerical Optimization of Computer Models; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1981. [Google Scholar]
- Storn, R.; Price, K. Differential Evolution-A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
- Derrac, J.; García, S.; Hui, S.; Suganthan, P.N.; Herrera, F. Analyzing convergence performance of evolutionary algorithms: A statistical approach. Inf. Sci. 2014, 289, 41–58. [Google Scholar] [CrossRef]
- Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Jadavpur University, Nanyang Technological University: Kolkata, India, 2010; pp. 341–359. [Google Scholar]
- Veček, N.; Mernik, M.; Črepinšek, M. A chess rating system for evolutionary algorithms: A new method for the comparison and ranking of evolutionary algorithms. Inf. Sci. 2014, 277, 656–679. [Google Scholar] [CrossRef]
- Dymond, A.S.; Engelbrecht, A.P.; Kok, S.; Heyns, P.S. Tuning Optimization Algorithms Under Multiple Objective Function Evaluation Budgets. IEEE Trans. Evol. Comput. 2015, 19, 341–358. [Google Scholar] [CrossRef]
- Liao, T.; Aydın, D.; Stützle, T. Artificial bee colonies for continuous optimization: Experimental analysis and improvements. Swarm Intell. 2013, 7, 327–356. [Google Scholar] [CrossRef]
- Draa, A. On the performances of the flower pollination algorithm–Qualitative and quantitative analyses. Appl. Soft Comput. 2015, 34, 349–371. [Google Scholar] [CrossRef]
- Piotrowski, A.P. Regarding the rankings of optimization heuristics based on artificially-constructed benchmark functions. Inf. Sci. 2015, 297, 191–201. [Google Scholar] [CrossRef]
- Piotrowski, A.P.; Napiorkowski, M.J. May the same numerical optimizer be used when searching either for the best or for the worst solution to a real-world problem? Inf. Sci. 2016, 373, 124–148. [Google Scholar] [CrossRef]
- Mernik, M.; Liu, S.H.; Karaboga, D.; Črepinšek, M. On clarifying misconceptions when comparing variants of the Artificial Bee Colony Algorithm by offering a new implementation. Inf. Sci. 2015, 291, 115–127. [Google Scholar] [CrossRef]
- Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
- Ampellio, E.; Vassio, L. A hybrid swarm-based algorithm for single-objective optimization problems involving high-cost analyses. Swarm Intell. 2016, 10, 99–121. [Google Scholar] [CrossRef]
- Črepinšek, M.; Liu, S.H.; Mernik, L.; Mernik, M. Is a comparison of results meaningful from the inexact replications of computational experiments? Soft Comput. 2016, 20, 223–235. [Google Scholar] [CrossRef]
- Weise, T.; Chiong, R.; Tang, K. Evolutionary Optimization: Pitfalls and Booby Traps. J. Comput. Sci. Technol. 2012, 27, 907–936. [Google Scholar] [CrossRef]
- Črepinšek, M.; Liu, S.H.; Mernik, L. A note on teaching–learning-based optimization algorithm. Inf. Sci. 2012, 212, 79–93. [Google Scholar] [CrossRef]
- Chinta, S.; Kommadath, R.; Kotecha, P. A note on multi-objective improved teaching–learning based optimization algorithm (MO-ITLBO). Inf. Sci. 2016, 373, 337–350. [Google Scholar] [CrossRef]
- Hall, J.C.; Mills, B.; Nguyen, N.; Hall, J.L. Methodologic standards in surgical trials. Surgery 1996, 119, 466–472. [Google Scholar] [CrossRef]
- Kitchenham, B.A.; Pfleeger, S.L.; Pickard, L.M.; Jones, P.W.; Hoaglin, D.C.; El Emam, K.; Rosenberg, J. Preliminary guidelines for empirical research in software engineering. IEEE Trans. Softw. Eng. 2002, 28, 721–734. [Google Scholar] [CrossRef]
- Clerc, M. Biases and signatures. In Guided Randomness in Optimization; John Wiley & Sons: Hoboken, NJ, USA, 2015; pp. 139–145. [Google Scholar]
- Rajwar, K.; Deep, K. Uncovering structural bias in population-based optimization algorithms: A theoretical and simulation-based analysis of the generalized signature test. Expert Syst. Appl. 2024, 240, 122332. [Google Scholar] [CrossRef]
- Kononova, A.V.; Corne, D.W.; De Wilde, P.; Shneer, V.; Caraffini, F. Structural bias in population-based algorithms. Inf. Sci. 2015, 298, 468–490. [Google Scholar] [CrossRef]
- Inselberg, A. The plane with parallel coordinates. Vis. Comput. 1985, 1, 69–91. [Google Scholar] [CrossRef]
- Cleghorn, C.W.; Engelbrecht, A.P. A generalized theoretical deterministic particle swarm model. Swarm Intell. 2014, 8, 35–59. [Google Scholar] [CrossRef]
- Vermetten, D.; van Stein, B.; Caraffini, F.; Minku, L.L.; Kononova, A.V. BIAS: A Toolbox for Benchmarking Structural Bias in the Continuous Domain. IEEE Trans. Evol. Comput. 2022, 26, 1380–1393. [Google Scholar] [CrossRef]
- Gehlhaar, D.K. Tuning evolutionary programming for conformationally flexible molecular docking. In Proceedings of the 5th Annual Conference on Evolutionary Programming, San Diego, CA, USA, 29 February–2 March 1996; pp. 419–429. [Google Scholar]
- de Castro, L.N. Fundamentals of natural computing: An overview. Phys. Life Rev. 2007, 4, 1–36. [Google Scholar] [CrossRef]
- Bonabeau, E.; Dorigo, M.; Theraulaz, G. Inspiration for optimization from social insect behaviour. Nature 2000, 406, 39–42. [Google Scholar] [CrossRef] [PubMed]
- Beyer, H.G.; Schwefel, H.P.; Wegener, I. How to analyse evolutionary algorithms. Theor. Comput. Sci. 2002, 287, 101–130. [Google Scholar] [CrossRef]
- Campelo, F.; Aranha, C. Lessons from the Evolutionary Computation Bestiary. Artif. Life 2023, 29, 421–432. [Google Scholar] [CrossRef]
- Smaldino, P.E.; McElreath, R. The natural selection of bad science. R. Soc. Open Sci. 2016, 3, 160384. [Google Scholar] [CrossRef]
- Hayward, L.; Engelbrecht, A. How to Tell a Fish from a Bee: Constructing Meta-Heuristic Search Behaviour Characteristics. In Proceedings of the Companion Conference on Genetic and Evolutionary Computation, Lisbon, Portugal, 15–19 July 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1562–1569. [Google Scholar]
- Ivković, N.; Kudelić, R.; Črepinšek, M. Probability and Certainty in the Performance of Evolutionary and Swarm Optimization Algorithms. Mathematics 2022, 10, 4364. [Google Scholar] [CrossRef]
- Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
- Biedrzycki, R.; Arabas, J.; Warchulski, E. A version of NL-SHADE-RSP algorithm with midpoint for CEC 2022 single objective bound constrained problems. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; pp. 1–8. [Google Scholar]
Algorithm Type | Classical Algorithm | Fundamental Ideas |
---|---|---|
Swarm intelligence-based algorithms | Particle Swarm Optimization (PSO) | Collective coordination and information sharing among bird flocks |
Evolution-based algorithms | Genetic Algorithm (GA) | Genetic operators (mutation, crossover, and survival), population evolution |
Physics-based algorithms | Gravitational Search Algorithm (GSA) | Gravity, mass, acceleration, attraction |
Human-based algorithms | Teaching-Learning-Based Optimization (TLBO) | Teaching strategies, collaboration, knowledge sharing
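To make the “Fundamental Ideas” column concrete for the swarm-intelligence row, the following is a minimal sketch of the canonical global-best PSO velocity and position update (inertia-weight form); the coefficient values and the test function are typical textbook choices rather than anything prescribed by the table.

```python
import numpy as np

def sphere(x):
    """Placeholder objective used only to exercise the update rule."""
    return float(np.sum(x ** 2))

def pso(obj, dim=10, swarm_size=30, iters=500, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO: particles share the best position found by the whole swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (swarm_size, dim))            # positions
    v = np.zeros((swarm_size, dim))                       # velocities
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                # global best (shared information)

    for _ in range(iters):
        r1, r2 = rng.random((swarm_size, dim)), rng.random((swarm_size, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + cognitive + social terms
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best_x, best_f = pso(sphere)
print("best objective found:", best_f)
```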
Ref. | Algorithm | Metaphors | Structural Bias | Repackaging | Others |
---|---|---|---|---|---|
[53,54,55] | Harmony Search (HS) | √ | × | A simplification of evolutionary strategies | - |
[56] | Intelligent Water Drops (IWD) Algorithm | √ | × | A particular instantiation of ACO | - |
[57] | Biogeography-Based Optimization (BBO) | √ | × | A generalization of GA | - |
[58] | Firefly Algorithm (FA) | √ | × | - | - |
[59] | Gravitational Search Algorithm (GSA) | √ | √ | - | - |
[60] | Cuckoo Search (CS) | √ | × | Variant type of differential evolution | Quite some differences between description and implementation |
[58,61] | Bat Algorithm (BA) | √ | √ | - | - |
[62] | Teaching-Learning-Based Optimization (TLBO) | √ | √ | - | -
[63] | Black Hole Optimization (BHO) | √ | × | A simplified version of PSO | - |
[64] | Coral Reef Optimization (CRO) | √ | √ | Deficient mixtures of different evolutionary operators | - |
[58,62,65,66,67] | Grey Wolf Optimizer (GWO) | √ | √ | - | - |
[58] | Antlion Optimizer (ALO) | √ | × | - | - |
[62] | Elephant Herding Optimization (EHO) | √ | √ | - | - |
[58] | Moth-Flame Algorithm (MFA) | √ | × | - | - |
[62] | Wind Driven Optimization (WDO) | √ | √ | - | - |
[58,62] | Whale Optimization Algorithm (WOA) | √ | √ | - | - |
[68] | Grasshopper Optimization Algorithm (GOA) | √ | × | A derivative of PSO | - |
[62,69] | Sine Cosine Algorithm (SCA) | √ | √ | - | - |
[70] | Raven Roost Optimization (RRO) | √ | √ | A special case of PSO | The inherent bias towards its starting point |
[62] | Wildebeest Herd Optimization (WHO) | √ | √ | - | - |
[62] | Henry Gas Solubility Optimization (HGSO) | √ | √ | - | - |
[62] | Butterfly Optimization Algorithm (BOA) | √ | √ | - | - |
[62] | Harris Hawks Optimization (HHO) | √ | √ | - | - |
[62] | Naked Mole-Rat Algorithm (NMRA) | √ | √ | - | - |
[62] | Nuclear Reaction Optimization (NRO) | √ | √ | - | - |
[62] | Pathfinder Algorithm (PA) | √ | √ | - | - |
[71] | Marine Predators Algorithm (MPA) | √ | √ | - | - |
[62] | Tunicate Swarm Algorithm (TSA) | √ | √ | - | - |
[62] | Sparrow Search Algorithm (SSA) | √ | √ | - | - |
[62] | Slime Mould Algorithm (SMA) | √ | √ | - | - |
[62] | Bald Eagle Search (BES) | √ | √ | - | - |
[62] | Artificial Ecosystem-based Optimization (AEO) | √ | √ | - | - |
[62] | Equilibrium Optimizer (EO) | √ | √ | - | - |
[62] | Gradient-Based Optimizer (GBO) | √ | √ | - | - |
[62] | Marine Predators Algorithm (MPA) | √ | √ | - | - |
[62,72] | Chimp Optimization Algorithm (ChOA) | √ | √ | A variant of PSO | -
[64] | Black Widow Optimization (BWO) | √ | √ | Deficient mixtures of different evolutionary operators | - |
[62,73] | Arithmetic Optimization Algorithm (AOA) | √ | √ | - | Design artificially improves accuracy over standard benchmarks |
[62] | Runge Kutta Optimizer (RKO) | √ | √ | - | - |
[62] | Chaos Game Optimization (CGO) | √ | √ | - | - |
[62] | Aquila Optimization (AO) | √ | √ | - | - |
[62] | Battle Royale Optimization (BRO) | √ | √ | - | - |
[62] | Hunger Games Search (HGS) | √ | √ | - | - |
[62] | Dandelion Optimizer (DO) | √ | √ | - | - |
[62] | Komodo Mlipir Algorithm (KMA) | √ | √ | - | - |
[62] | Mountain Gazelle Optimizer (MGO) | √ | √ | - | - |
Test Method | Type | Advantages | Disadvantages | Computational Cost |
---|---|---|---|---|
Signature test | Visual Test | Simple, provides clear visual representation of biases, and suitable for 2D problems. | Subjective, limited to 2D problems, and cannot fully eliminate landscape bias in greedy algorithms. | Low; suitable for small-scale 2D functions. |
Generalized signature test | Grid-based Test | Flexible, scalable for high-dimensional data, and can detect various types of bias more comprehensively. | High computational complexity, difficult to implement and interpret, and prone to overfitting in high-dimensional scenarios. | High; increases exponentially with problem dimensionality. |
Shifted benchmark function | Numerical Analysis | Effectively reveals structural biases such as central bias and allows observation of algorithm behavior under shifted problem landscapes. | Relies on prior knowledge of biased regions and may be ineffective in detecting boundary biases or biases distributed across multiple regions. | Moderate; requires multiple evaluations under varied shift configurations. |
Parallel coordinates test | Visual Test | Visualizes high-dimensional data, identifies patterns, and compares multiple variables. | Cluttered with many dimensions, subjective interpretation, and overlapping lines reduce clarity. | Moderate to high; depends on dimensionality and data volume. |
BIAS toolbox | Statistical Test | Systematic bias analysis, quantifies algorithm behavior, and supports benchmarking. | Requires parameter tuning, limited to specific problem types, and may not capture all biases. | Moderate; may vary based on sample size and statistical complexity. |
Region scaling | Visual Test | Allows investigation of algorithm scalability and behavior under varying landscape resolutions. | Improper scaling can introduce artificial difficulty or obscure algorithm strengths. | Low to moderate; scaling and visualization are relatively lightweight. |
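As a small illustration of the “Shifted benchmark function” row in the table above, the sketch below compares an optimizer's results on a sphere function with its optimum at the origin and on the same function shifted away from the origin: an algorithm whose accuracy collapses under the shift, or whose reported solutions stay near the center of the search space, is exhibiting a center bias rather than genuine search ability. The toy “center-pulled” optimizer is a deliberately flawed placeholder used only to show what the test detects.

```python
import numpy as np

def make_sphere(shift):
    """Sphere benchmark with its optimum relocated to `shift` (the shifted-benchmark trick)."""
    return lambda x: float(np.sum((x - shift) ** 2))

def biased_optimizer(obj, dim, iters=3000, seed=0):
    """Toy optimizer with a deliberate structural flaw: every candidate is pulled toward the origin."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-100, 100, dim)
    fx = obj(x)
    for _ in range(iters):
        cand = 0.9 * (x + rng.normal(0.0, 1.0, dim))   # the 0.9 factor drags the search toward zero
        fc = obj(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

dim = 10
for name, shift in [("sphere, optimum at origin ", np.zeros(dim)),
                    ("sphere, optimum shifted to 30", np.full(dim, 30.0))]:
    x_best, f_best = biased_optimizer(make_sphere(shift), dim)
    print(f"{name}: best value = {f_best:10.3f}, ||solution|| = {np.linalg.norm(x_best):7.2f}")
```

On the unshifted problem the built-in pull toward the origin masquerades as strong performance; once the optimum is moved, the same pull prevents convergence, which is exactly the discrepancy the shifted-benchmark test is designed to expose.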
Criteria | Genuinely Novel Algorithms | Repackaged Algorithms |
---|---|---|
Originality of Concept | Introduce fundamentally new metaphors, operators, or dynamics not present in existing algorithms. | Often reuse known metaphors with superficial changes. |
Core Mechanistic Innovation | Employ novel solution update rules, selection mechanisms, or search strategies that alter algorithm behavior significantly. | Retain the core mechanisms (e.g., mutation, crossover, and velocity update) from classical algorithms like GA or PSO. |
Performance Improvement | Demonstrate substantial improvements over state-of-the-art algorithms across a wide range of benchmark problems. | Exhibit similar or worse performance compared to the base algorithms; improvements are usually minor or dataset-specific. |
Theoretical Foundation | Accompanied by theoretical analyses such as convergence proofs, complexity analysis, or formal models. | Rarely include theoretical justification; mostly evaluated via empirical trials. |
Computational Efficiency | Designed with consideration for scalability and computational cost; often introduce efficient operators. | May involve redundant or costly operations that increase complexity without added value. |
Contribution to the Field | Open new avenues for research, inspire follow-up work, or integrate with other frameworks. | Provide limited insight or innovation; mostly serve as incremental publications with little long-term impact. |
Item | Aspect | Checklist Description |
---|---|---|
M | Mathematical Formulation | Clearly define the algorithm using formal mathematical notations. Include objective functions, constraints, and update rules. |
E | Exploration/Exploitation Balance | Provide analysis or visualizations that demonstrate the balance between exploration and exploitation. |
T | Theoretical Analysis | Present theoretical properties such as convergence, time complexity, or approximation bounds, if applicable. |
R | Reproducibility | Ensure reproducibility by including pseudocode, hyperparameters, datasets, and code availability (e.g., via GitHub). |
I | Innovation Justification | Justify the novelty and necessity of the proposed mechanism. Include ablation studies to isolate the contribution of each component. |
C | Comparison Protocol | Compare the proposed algorithm against state-of-the-art baselines using standardized benchmarks. Report fair and consistent evaluation metrics. |
S | Structural Bias Testing | Include tests for structural bias. Provide visual or statistical analysis to support claims. |