Special Issue "Benchmarking, Selecting and Configuring Learning and Optimization Algorithms"

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (30 November 2020).

Special Issue Editors

Dr. Mario A. Muñoz
Guest Editor
School of Mathematics and Statistics, University of Melbourne, Melbourne, Australia
Interests: intelligent optimization methods; complex systems
Prof. Katherine Malan
Guest Editor
Department of Decision Sciences, University of South Africa, Pretoria, South Africa
Interests: computational intelligence; metaheuristics; fitness landscape analysis

Special Issue Information

Dear Colleagues,

Whenever we need to solve a computational problem, selecting and configuring an appropriate algorithm are crucial tasks. Both theoretical and empirical results demonstrate that no single algorithm can find the best possible solution for every problem within a domain using the least amount of computation. This is because each algorithm makes different assumptions about the structure of a problem, leading to strengths and weaknesses that are often unknown beforehand. This phenomenon is known as performance complementarity. A deep understanding of this issue is critical for heuristic algorithms, which can outperform classical ones and solve problems that were infeasible in the past; however, their behavior is still largely unpredictable. As a consequence, an otherwise useful method can fail when applied in the wrong context.

Given a complex problem, automated algorithm selection and configuration involves developing methods that choose the most appropriate algorithm for solving that problem. Automated algorithm selection has been successfully implemented in some well-studied problem scenarios, such as the travelling salesman problem. However, many challenges remain before automated algorithm selection can become a reality in wider learning and optimization contexts. Some of these challenges involve:

  1. Constructing a robust knowledge base of empirical results from a wide variety of benchmark suites that are unbiased, challenging, and contain a mix of synthetically generated and real-world-like instances with diverse structural properties. Without this diversity, the conclusions that can be drawn about expected algorithm performance in future scenarios are necessarily limited;
  2. Developing robust and efficient characterization methods that can determine the structural similarities between problems and the influence that such structure has on algorithm performance, while facilitating analysis by algorithm designers;
  3. Constructing selection and configuration methods that are not only accurate but also minimize the probability of making expensive mistakes.
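The selection task described above can be illustrated with a minimal sketch: known problem instances are stored with their characterization features and the algorithm that performed best on them, and a new instance is assigned the algorithm of its nearest neighbour in feature space. The feature names and knowledge-base entries below are purely hypothetical, and a real selector would use a learned model rather than a single nearest neighbour.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_algorithm(features, knowledge_base):
    """Pick the algorithm recorded as best for the most similar known instance.

    knowledge_base: list of (feature_vector, best_algorithm) pairs.
    """
    _, best = min(knowledge_base, key=lambda entry: euclidean(features, entry[0]))
    return best

# Hypothetical knowledge base; features might be, e.g., (ruggedness, neutrality).
kb = [((0.9, 0.1), "tabu_search"),
      ((0.2, 0.8), "hill_climbing"),
      ((0.5, 0.5), "genetic_algorithm")]

# A new instance whose features resemble the first known instance.
print(select_algorithm((0.85, 0.15), kb))
```

The quality of such a selector depends directly on challenges 1 and 2 above: the diversity of the knowledge base and the discriminative power of the features.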

With this call, we invite you to submit your research papers to this Special Issue, covering all aspects of automated algorithm selection and configuration. The following is a (non-exhaustive) list of topics of interest:

  • Problem characterization, such as fitness landscape analysis;
  • Experimental algorithmics for collection of reliable performance data;
  • Meta- and surrogate modeling;
  • Automated parameter selection/tuning;
  • Pipelines for automated algorithm selection;
  • Instance space analysis and algorithm footprints;
  • Benchmark collections;
  • Hyper-heuristics.

Dr. Mario A. Muñoz
Prof. Dr. Katherine Malan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Algorithm selection and configuration
  • Benchmark suites
  • Experimental analysis of algorithms
  • Fitness landscape analysis and problem characterization
  • Meta-learning
  • Surrogate modeling

Published Papers (6 papers)

Research

Open Access Article
A Feature Selection Algorithm Performance Metric for Comparative Analysis
Algorithms 2021, 14(3), 100; https://doi.org/10.3390/a14030100 - 22 Mar 2021
Abstract
This study presents a novel performance metric for feature selection algorithms that is unbiased and can be used for comparative analysis across feature selection problems. The baseline fitness improvement (BFI) measure quantifies the potential value gained by applying feature selection. The BFI measure can be used to compare the performance of feature selection algorithms across datasets by measuring the change in classifier performance as a result of feature selection, with respect to the baseline where all features are included. Empirical results are presented to show that there is performance complementarity for a suite of feature selection algorithms on a variety of real-world datasets. The BFI measure is a normalised performance metric that can be used to correlate problem characteristics with feature selection algorithm performance, across multiple datasets. This ability paves the way towards describing the performance space of the per-instance algorithm selection problem for feature selection algorithms.
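The idea of a baseline-relative measure can be sketched as follows. The exact normalisation used in the paper is not reproduced here; the scaling by the attainable headroom above the baseline is an assumption for illustration only.

```python
def baseline_fitness_improvement(fitness_selected, fitness_baseline, fitness_max=1.0):
    """Improvement of the selected-feature classifier over the all-features
    baseline, scaled by the attainable headroom (an assumed normalisation,
    not necessarily the one defined in the paper)."""
    headroom = fitness_max - fitness_baseline
    if headroom == 0:
        return 0.0
    return (fitness_selected - fitness_baseline) / headroom

# Example: classifier accuracy rises from 0.80 (all features) to 0.88
# after feature selection; the improvement is close to 0.4 of the headroom.
print(baseline_fitness_improvement(0.88, 0.80))
```

Because the measure is relative to the all-features baseline, it can be compared across datasets with very different absolute accuracies.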

Open Access Article
Towards Understanding Clustering Problems and Algorithms: An Instance Space Analysis
Algorithms 2021, 14(3), 95; https://doi.org/10.3390/a14030095 - 19 Mar 2021
Abstract
Various criteria and algorithms can be used for clustering, leading to very distinct outcomes and potential biases towards datasets with certain structures. More generally, the selection of the most effective algorithm to be applied for a given dataset, based on its characteristics, is a problem that has been largely studied in the field of meta-learning. Recent advances in the form of a new methodology known as Instance Space Analysis provide an opportunity to extend such meta-analyses to gain greater visual insights into the relationship between datasets’ characteristics and the performance of different algorithms. The aim of this study is to perform an Instance Space Analysis for the first time for clustering problems and algorithms. As a result, we are able to analyze the impact of the choice of the test instances employed, and the strengths and weaknesses of some popular clustering algorithms, for datasets with different structures.

Open Access Article
An Exploratory Landscape Analysis-Based Benchmark Suite
Algorithms 2021, 14(3), 78; https://doi.org/10.3390/a14030078 - 27 Feb 2021
Abstract
The choice of which objective functions, or benchmark problems, should be used to test an optimization algorithm is a crucial part of the algorithm selection framework. Benchmark suites that are often used in the literature have been shown to exhibit poor coverage of the problem space. Exploratory landscape analysis can be used to quantify characteristics of objective functions. However, exploratory landscape analysis measures are based on samples of the objective function, and there is a lack of work on the appropriate choice of sample size needed to produce reliable measures. This study presents an approach to determine the minimum sample size needed to obtain robust exploratory landscape analysis measures. Based on reliable exploratory landscape analysis measures, a self-organizing feature map is used to cluster a comprehensive set of benchmark functions. From this, a benchmark suite that has better coverage of the single-objective, boundary-constrained problem space is proposed.

Open Access Article
A Survey of Advances in Landscape Analysis for Optimisation
Algorithms 2021, 14(2), 40; https://doi.org/10.3390/a14020040 - 28 Jan 2021
Abstract
Fitness landscapes were proposed in 1932 as an abstract notion for understanding biological evolution and were later used to explain evolutionary algorithm behaviour. The last ten years have seen the field of fitness landscape analysis develop from a largely theoretical idea in evolutionary computation to a practical tool applied in optimisation in general and more recently in machine learning. With this widened scope, new types of landscapes have emerged such as multiobjective landscapes, violation landscapes, dynamic and coupled landscapes and error landscapes. This survey is a follow-up to a 2013 survey on fitness landscapes and includes an additional 11 landscape analysis techniques. The paper also includes a survey on the applications of landscape analysis for understanding complex problems and explaining algorithm behaviour, as well as algorithm performance prediction and automated algorithm configuration and selection. The extensive use of landscape analysis in a broad range of areas highlights the wide applicability of the techniques and the paper discusses some opportunities for further research in this growing field.

Open Access Article
Diversity Measures for Niching Algorithms
Algorithms 2021, 14(2), 36; https://doi.org/10.3390/a14020036 - 26 Jan 2021
Abstract
Multimodal problems are single objective optimisation problems with multiple local and global optima. The objective of multimodal optimisation is to locate all or most of the optima. Niching algorithms are the techniques utilised to locate these optima. A critical factor in determining the success of niching algorithms is how well the search space is covered by the candidate solutions. For niching algorithms, high diversity during the exploration phase will facilitate location and identification of many solutions while low diversity means that the candidate solutions are clustered at optima. This paper provides a review of measures used to quantify diversity, and how they can be utilised to quantify the dispersion of both the candidate solutions and the solutions of niching algorithms (i.e., found optima). The investigated diversity measures are then used to evaluate the distribution of candidate solutions and solutions when the enhanced species-based particle swarm optimisation (ESPSO) algorithm is utilised to optimise a selected set of multimodal problems.

Open Access Article
Sampling Effects on Algorithm Selection for Continuous Black-Box Optimization
Algorithms 2021, 14(1), 19; https://doi.org/10.3390/a14010019 - 11 Jan 2021
Abstract
In this paper, we investigate how systemic errors due to random sampling impact on automated algorithm selection for bound-constrained, single-objective, continuous black-box optimization. We construct a machine learning-based algorithm selector, which uses exploratory landscape analysis features as inputs. We test the accuracy of the recommendations experimentally using resampling techniques and the hold-one-instance-out and hold-one-problem-out validation methods. The results demonstrate that the selector remains accurate even with sampling noise, although not without trade-offs.
