Heuristic Optimization and Machine Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 7297

Special Issue Editor


Dr. Antonio Bolufé-Röhler
Guest Editor
School of Mathematical and Computational Sciences, University of Prince Edward Island, 550 University Ave, Charlottetown, PE C1A 4P3, Canada
Interests: heuristic optimization; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

During the past several decades, heuristic optimization has proven effective in solving a broad variety of complex optimization problems; over the same period, machine learning has become a transformative force in computer science and data analytics. The combination of heuristic optimization with machine learning is now a rapidly developing field and one of the most successful trends in optimization.

The aim of this Special Issue is to explore this integration from a broad perspective, bringing together the latest research of scholars developing theory and practical applications in both fields. Topics include, but are not limited to:

  • Machine learning techniques for improving heuristic and metaheuristic optimization;
  • Evolutionary algorithms for generating artificial neural networks, their parameters, and rules (neuro-evolution);
  • Evolutionary unsupervised learning;
  • Evolutionary deep learning;
  • Data-driven heuristic optimization;
  • Representation learning applied to landscape data;
  • Learnheuristics and meta-learning;
  • Machine learning for automatic algorithm selection and configuration;
  • Transfer of approaches between machine learning and optimization;
  • Analysis of heuristic optimization using machine learning methods.

Dr. Antonio Bolufé-Röhler
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • heuristic optimization;
  • metaheuristics;
  • machine learning;
  • deep learning;
  • reinforcement learning;
  • neuro-evolution;
  • learnheuristics;
  • evolutionary algorithms;
  • global optimization;
  • combinatorial optimization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)

Research

34 pages, 3921 KiB  
Article
Soft Actor-Critic Approach to Self-Adaptive Particle Swarm Optimisation
by Daniel von Eschwege and Andries Engelbrecht
Mathematics 2024, 12(22), 3481; https://doi.org/10.3390/math12223481 - 7 Nov 2024
Cited by 1 | Viewed by 1206
Abstract
Particle swarm optimisation (PSO) is a swarm intelligence algorithm that finds candidate solutions by iteratively updating the positions of particles in a swarm. The decentralised optimisation methodology of PSO is ideally suited to problems with multiple local minima and deceptive fitness landscapes, where traditional gradient-based algorithms fail. PSO performance depends on a suitable control parameter (CP) configuration, which governs the trade-off between exploration and exploitation in the swarm. CPs that ensure good performance are problem-dependent, and CP tuning is computationally expensive and inefficient. Self-adaptive particle swarm optimisation (SAPSO) algorithms aim to adjust the CPs adaptively during the optimisation process to improve performance, ideally while reducing the number of performance-sensitive parameters. This paper proposes a reinforcement learning (RL) approach to SAPSO that uses a velocity-clamped soft actor-critic (SAC) to autonomously adapt the PSO CPs. The proposed SAC-SAPSO obtains a 50% to 80% improvement in solution quality over various baselines, has one or zero runtime parameters, is time-invariant, and does not produce divergent particles.
(This article belongs to the Special Issue Heuristic Optimization and Machine Learning)
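
To make the adaptation hook concrete, here is a minimal sketch of a PSO iteration whose control parameters are supplied externally at each step, which is the slot the paper's SAC agent fills. This is not the authors' implementation: the sphere objective, swarm size, and the fixed (w, c1, c2) values in the loop are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest, pbest_fit, w, c1, c2, vmax, f):
    """One PSO iteration with externally supplied control parameters (w, c1, c2).

    In a self-adaptive scheme such as SAC-SAPSO, a controller (there, an RL
    agent) would choose (w, c1, c2) from the observed swarm state at each
    iteration instead of fixing them a priori.
    """
    gbest = pbest[np.argmin(pbest_fit)]                  # best position found so far
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    vel = np.clip(vel, -vmax, vmax)                      # velocity clamping
    pos = pos + vel
    fit = np.array([f(x) for x in pos])
    better = fit < pbest_fit                             # update personal bests
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    return pos, vel, pbest, pbest_fit

# usage: minimise the sphere function with per-iteration parameters
f = lambda x: float(np.sum(x * x))
pos = rng.uniform(-5, 5, (20, 10))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([f(x) for x in pos])
for t in range(100):
    w, c1, c2 = 0.7, 1.4, 1.4    # an adaptive controller would set these
    pos, vel, pbest, pbest_fit = pso_step(pos, vel, pbest, pbest_fit,
                                          w, c1, c2, vmax=1.0, f=f)
print(f"best fitness: {pbest_fit.min():.4g}")
```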

16 pages, 667 KiB  
Article
A Particle Swarm Optimization-Based Interpretable Spiking Neural Classifier with Time-Varying Weights
by Mohammed Thousif, Shirin Dora and Suresh Sundaram
Mathematics 2024, 12(18), 2846; https://doi.org/10.3390/math12182846 - 13 Sep 2024
Viewed by 1251
Abstract
This paper presents an interpretable spiking neural classifier (IpT-SNC) with time-varying weights. IpT-SNC uses a two-layered spiking neural network (SNN) architecture in which synaptic weights are modeled using amplitude-modulated, time-varying Gaussian functions. Self-regulated particle swarm optimization (SRPSO) is used to update the amplitudes, widths, and centers of the Gaussian functions and the thresholds of neurons in the output layer. IpT-SNC was developed to improve the interpretability of spiking neural networks: the time-varying weights allow the rationale behind a prediction to be described in terms of specific input spikes. The performance of IpT-SNC is evaluated on ten benchmark datasets from the UCI machine learning repository and compared with that of other learning algorithms. IpT-SNC improves classification performance on the testing datasets by between 0.5% and 7.7%, and the statistical significance of these differences is assessed using the Friedman test and the paired t-test. Furthermore, on the challenging real-world BCI (brain-computer interface) competition IV dataset, IpT-SNC outperforms current classifiers by about 8% in classification accuracy. The results indicate that IpT-SNC has better generalization performance than the other algorithms.
(This article belongs to the Special Issue Heuristic Optimization and Machine Learning)
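
The amplitude-modulated, time-varying Gaussian weights can be illustrated in a few lines of code. The sketch below assumes the form w(t) = A * exp(-(t - c)^2 / (2 * sigma^2)); the exact parameterisation in IpT-SNC may differ, and the spike times and parameter values are hypothetical.

```python
import numpy as np

def gaussian_weight(t, amplitude, center, width):
    """Amplitude-modulated, time-varying Gaussian synaptic weight:
    w(t) = A * exp(-(t - c)^2 / (2 * sigma^2)).

    Because the weight depends on the spike time t, a prediction can be
    traced back to the specific input spikes that arrive while |w(t)| is
    large, which is the source of the model's interpretability.
    """
    return amplitude * np.exp(-((t - center) ** 2) / (2.0 * width ** 2))

# contribution of one synapse's input spikes to an output neuron's potential
spike_times = np.array([3.0, 7.5, 12.0])     # ms, hypothetical values
contrib = gaussian_weight(spike_times, amplitude=0.8, center=8.0, width=2.0)
print(contrib.round(4))                      # the spike nearest the centre dominates
```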

22 pages, 848 KiB  
Article
An Integrated Model of Deep Learning and Heuristic Algorithm for Load Forecasting in Smart Grid
by Hisham Alghamdi, Ghulam Hafeez, Sajjad Ali, Safeer Ullah, Muhammad Iftikhar Khan, Sadia Murawwat and Lyu-Guang Hua
Mathematics 2023, 11(21), 4561; https://doi.org/10.3390/math11214561 - 6 Nov 2023
Cited by 9 | Viewed by 2039
Abstract
Accurate load forecasting plays a crucial role in the effective energy management of smart cities. However, the load profiles of smart-city residents are nonlinear, exhibiting high volatility, uncertainty, and randomness, and forecasting such profiles requires accurate and stable prediction models. To this end, a prediction model, FPP-MLP-GWDO, has been developed by combining feature preprocessing (FPP), a multilayer perceptron (MLP), and a genetic wind-driven optimization (GWDO) algorithm. The MLP is the key part of the model, using a multivariate autoregressive algorithm and the rectified linear unit (ReLU) for network training. The hybrid FPP-MLP-GWDO model is evaluated on Dayton, Ohio grid load data in terms of accuracy (the mean absolute percentage error (MAPE), Theil's inequality coefficient (TIC), and the correlation coefficient (CC)) and convergence speed (computational time (CT) and convergence rate (CR)). The findings endorse the validity and applicability of the developed model compared with literature models such as the feature selection–support vector machine–modified enhanced differential evolution (FS-SVM-mEDE), feature selection–artificial neural network (FS-ANN), support vector machine–differential evolution algorithm (SVM-DEA), and autoregressive (AR) models. The FPP-MLP-GWDO model achieved an accuracy of 98.9%, surpassing the FS-ANN (96.5%), FS-SVM-mEDE (97.9%), SVM-DEA (97.5%), and AR (95.7%) benchmarks. Furthermore, the FPP-MLP-GWDO reduced the CT (299 s) relative to the FS-SVM-mEDE model (350 s); the SVM-DEA (240 s), FS-ANN (159 s), and AR (132 s) models ran faster but were less accurate.
(This article belongs to the Special Issue Heuristic Optimization and Machine Learning)
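
The overall template the abstract describes, a heuristic search wrapped around an MLP forecaster and scored by MAPE, can be sketched as follows. The GWDO algorithm and the paper's feature preprocessing are not reproduced here; a simple perturbation search stands in for the heuristic, and the synthetic load features, network size, and noise level are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mape(actual, pred):
    """Mean absolute percentage error, the paper's main accuracy metric."""
    return 100.0 * np.mean(np.abs((actual - pred) / actual))

def mlp_forecast(X, W1, b1, W2, b2):
    """One-hidden-layer perceptron with ReLU activation."""
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2

# synthetic stand-in for lagged-load features and the next-hour load
X = rng.uniform(0.5, 1.5, (200, 4))
y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.05 * rng.normal(size=200)

dim, hidden = X.shape[1], 8
n = dim * hidden + hidden + hidden + 1            # total weight count

def unpack(v):
    """Slice a flat vector into the MLP's weight matrices and biases."""
    W1, b1, W2, b2 = np.split(v, [dim * hidden,
                                  dim * hidden + hidden,
                                  dim * hidden + 2 * hidden])
    return W1.reshape(dim, hidden), b1, W2.reshape(hidden, 1), b2

def loss(v):
    return mape(y, mlp_forecast(X, *unpack(v)).ravel())

# stand-in for GWDO: any population heuristic that minimises MAPE over
# the flat weight vector fits this same template
best = min((rng.normal(0, 0.3, n) for _ in range(50)), key=loss)
for _ in range(500):                              # simple (1+1) perturbation search
    cand = best + rng.normal(0, 0.05, n)
    if loss(cand) < loss(best):
        best = cand
print(f"training MAPE: {loss(best):.2f}%")
```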

19 pages, 888 KiB  
Article
Surrogate-Assisted Automatic Parameter Adaptation Design for Differential Evolution
by Vladimir Stanovov and Eugene Semenkin
Mathematics 2023, 11(13), 2937; https://doi.org/10.3390/math11132937 - 30 Jun 2023
Cited by 5 | Viewed by 1381
Abstract
In this study, parameter adaptation methods for differential evolution are designed automatically using a surrogate approach. In particular, Taylor series are applied to model the sought dependence between the algorithm's parameters and the values describing the current algorithm state. To find the best-performing adaptation technique, efficient global optimization, a surrogate-assisted optimization technique, is applied. Three parameters are considered: the scaling factor, the crossover rate, and the population decrease rate. The learning phase is performed on a set of benchmark problems from the CEC 2017 competition, and the resulting parameter adaptation heuristics are additionally tested on the CEC 2022 and SOCO benchmark suites. The results show that the proposed approach is capable of finding efficient adaptation techniques given relatively modest computational resources.
(This article belongs to the Special Issue Heuristic Optimization and Machine Learning)
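
A minimal sketch of the central idea, computing DE's control parameters each generation from a low-order (Taylor-style) model of the algorithm state, is given below. The state features, the hand-picked coefficient vector theta, and the DE/rand/1/bin variant are illustrative assumptions; in the paper, the model coefficients are found by efficient global optimization rather than set by hand.

```python
import numpy as np

rng = np.random.default_rng(2)

def adapted_params(theta, state):
    """Taylor-style (constant + linear) model mapping algorithm state to (F, CR).

    `state` might hold, e.g., the fraction of the budget used and a
    diversity measure; `theta` is what the surrogate-assisted outer search
    (efficient global optimization in the paper) would tune.
    """
    s = np.concatenate(([1.0], state))                 # constant + linear terms
    half = len(s)
    F = float(np.clip(s @ theta[:half], 0.1, 1.0))     # scaling factor
    CR = float(np.clip(s @ theta[half:], 0.0, 1.0))    # crossover rate
    return F, CR

def de_step(pop, fit, F, CR, f):
    """One DE/rand/1/bin generation using the adapted parameters."""
    n, d = pop.shape
    for i in range(n):
        others = [j for j in range(n) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                  # guarantee one mutated gene
        trial = np.where(cross, mutant, pop[i])
        tf = f(trial)
        if tf < fit[i]:                                # greedy selection
            pop[i], fit[i] = trial, tf
    return pop, fit

# usage on the sphere function with a hand-picked theta for illustration
f = lambda x: float(np.sum(x * x))
pop = rng.uniform(-5, 5, (20, 10))
fit = np.array([f(x) for x in pop])
theta = np.array([0.7, -0.2, 0.1, 0.9, -0.3, 0.0])     # three coefficients per parameter
for t in range(100):
    state = np.array([t / 100, pop.std() / 5])         # budget used, diversity proxy
    F, CR = adapted_params(theta, state)
    pop, fit = de_step(pop, fit, F, CR, f)
print(f"best fitness: {fit.min():.4g}")
```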
