Evolutionary Machine Learning for Real-World Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E: Applied Mathematics".

Deadline for manuscript submissions: 10 November 2025

Special Issue Editor


Dr. Azam Asilian Bidgoli
Guest Editor
Faculty of Science, Wilfrid Laurier University, Waterloo, ON, Canada
Interests: machine learning; evolutionary computation; multi-objective optimization; feature selection; digital pathology

Special Issue Information

Dear Colleagues,

This Special Issue, "Evolutionary Machine Learning for Real-World Applications", explores the integration of evolutionary computation (EC) techniques with machine learning (ML) to enhance performance, scalability, and adaptability in solving complex real-world challenges. Evolutionary computation methods have demonstrated significant potential in optimizing various aspects of machine learning, from hyperparameter tuning to model selection and feature engineering.
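As a concrete illustration of one such integration, the sketch below runs a minimal (mu + lambda) evolutionary search over two SVM hyperparameters. It is a hedged, self-contained example only, not a method prescribed by this Special Issue: the dataset, search ranges, population sizes, and the use of scikit-learn are assumptions made purely for illustration.

```python
# Minimal (mu + lambda) evolutionary search over two SVM hyperparameters.
# Illustrative sketch only: the dataset, ranges, and population sizes are
# arbitrary choices, not a method endorsed by this Special Issue.
import random

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = random.Random(0)

def fitness(ind):
    # Individual = (log10 C, log10 gamma); fitness = 3-fold CV accuracy.
    c, gamma = 10.0 ** ind[0], 10.0 ** ind[1]
    clf = SVC(C=c, gamma=gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

def mutate(ind, sigma=0.5):
    # Gaussian perturbation in log-space, clipped to the search range [-3, 3].
    return tuple(min(3.0, max(-3.0, v + rng.gauss(0.0, sigma))) for v in ind)

mu, lam, generations = 5, 10, 20
population = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(mu)]

for _ in range(generations):
    offspring = [mutate(rng.choice(population)) for _ in range(lam)]
    # (mu + lambda) selection: keep the best mu of parents and offspring.
    population = sorted(population + offspring, key=fitness, reverse=True)[:mu]

best = max(population, key=fitness)
print("best log10(C), log10(gamma):", best, "CV accuracy:", fitness(best))
```

In a realistic setting the toy cross-validated objective would be replaced by the validation score of the target model, and crossover or adaptive mutation could be added, but the overall generate-evaluate-select loop stays the same.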

While machine learning has achieved remarkable success in diverse applications such as computer vision, natural language processing, biomedical analysis, and autonomous systems, traditional methods often struggle with large-scale, dynamic, and noisy data environments. Evolutionary computation provides a promising alternative by offering adaptive, self-optimizing, and robust learning strategies that can efficiently navigate complex search spaces.

This Special Issue aims to investigate both theoretical advancements and practical applications at the intersection of evolutionary computation and machine learning. We invite submissions on novel algorithms, hybrid techniques, and innovative applications of EC-driven machine learning across various domains. Topics of interest include, but are not limited to, the following:

  • Evolutionary hyperparameter optimization;
  • Evolutionary feature selection and dimensionality reduction;
  • Neuroevolution for deep learning architectures;
  • Surrogate models and evolutionary computation for machine learning;
  • Evolutionary computation for symbolic regression and applications;
  • Federated learning optimization for real-world applications;
  • Evolutionary computer vision and image processing;
  • Evolutionary transfer learning and transfer optimization;
  • Large-scale evolutionary algorithms for machine learning;
  • Evolutionary computation in the healthcare industry;
  • Evolutionary algorithms for model selection and architecture search;
  • Swarm intelligence in machine learning optimization;
  • Multi-objective evolutionary optimization in machine learning;
  • Co-evolutionary and hybrid evolutionary machine learning techniques;
  • Automated machine learning (AutoML) with evolutionary strategies;
  • Interpretability in evolutionary machine learning;
  • Overfitting mitigation and robust validation in data-driven evolutionary computation.

We encourage original research contributions from the scientific community, including novel methodologies, comparative studies, and real-world case studies where evolutionary computation enhances machine learning applications.

We look forward to your contributions and to advancing the field through this Special Issue.

Dr. Azam Asilian Bidgoli
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • evolutionary computation
  • machine learning
  • evolutionary machine learning
  • swarm intelligence optimization
  • multi-objective optimization
  • neuroevolution
  • deep learning optimization
  • evolutionary architecture search

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

Research

16 pages, 980 KiB  
Article
Beyond the Pareto Front: Utilizing the Entire Population for Decision-Making in Evolutionary Machine Learning
by Parastoo Dehnad, Azam Asilian Bidgoli and Shahryar Rahnamayan
Mathematics 2025, 13(16), 2579; https://doi.org/10.3390/math13162579 - 12 Aug 2025
Abstract
Decision-making plays a pivotal role in data-driven optimization, aiming to achieve optimal results by identifying the most effective combination of input variables. Traditionally, in multi-objective data-driven optimization problems, decision-making relies solely on the Pareto front derived from the training data, as provided by the optimizer. This approach limits consideration to a subset of solutions and often overlooks potentially superior solutions on the test set within the optimizer's final population. What if we include the entire final population in the decision-making process? This paper is the first to systematically explore the potential of utilizing the entire final population, rather than relying solely on the optimization Pareto front, for decision-making in data-driven multi-objective optimization. This novel perspective reveals overlooked yet potentially superior solutions that generalize better to unseen data and help mitigate issues such as overfitting and training-data bias. Using feature selection as a case study, the method is evaluated on two key objectives: minimizing the classification error rate and reducing the number of selected features. We compare the proposed test Pareto front, derived from the final population, with traditional test Pareto fronts based on training data. Experiments conducted on fifteen large-scale datasets reveal that some optimal solutions within the entire population are overlooked when focusing solely on the optimization Pareto front. This indicates that the solutions on the optimization Pareto front are not necessarily the optimal solutions for real-world unseen data; additional solutions in the final population may yet be utilized for decision-making.
(This article belongs to the Special Issue Evolutionary Machine Learning for Real-World Applications)
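To make the decision-making idea in this abstract concrete, the following is a rough sketch, not the authors' implementation: the evolutionary optimizer is replaced by a toy random population of feature-subset masks (the paper uses a multi-objective evolutionary algorithm), every member is re-evaluated on held-out data, and the lowest test error reachable from the training Pareto front is compared with the lowest test error anywhere in the final population. The dataset, classifier, and population size are illustrative assumptions, not the authors' experimental setup.

```python
# Sketch of the decision-making idea: keep the ENTIRE final population of a
# multi-objective feature-selection run and re-evaluate every member on
# held-out data, instead of trusting only the training Pareto front.
# The "final population" below is a toy random stand-in; in the paper it
# would come from a multi-objective evolutionary algorithm.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)
n_features = X.shape[1]

def objectives(mask, X_fit, y_fit, X_eval, y_eval):
    # Bi-objective evaluation: (classification error rate, number of features).
    if not mask.any():
        return 1.0, 0
    clf = KNeighborsClassifier().fit(X_fit[:, mask], y_fit)
    error = 1.0 - clf.score(X_eval[:, mask], y_eval)
    return error, int(mask.sum())

def pareto_front(points):
    # Indices of non-dominated points (both objectives minimized).
    idx = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

# Stand-in "final population" of 40 random feature-subset masks.
population = rng.random((40, n_features)) < 0.3

train_obj = [objectives(m, X_tr, y_tr, X_tr, y_tr) for m in population]
test_obj = [objectives(m, X_tr, y_tr, X_te, y_te) for m in population]

front = pareto_front(train_obj)
best_on_front = min(front, key=lambda i: test_obj[i][0])
best_overall = min(range(len(population)), key=lambda i: test_obj[i][0])

print("lowest test error among training-Pareto-front members:",
      round(test_obj[best_on_front][0], 3))
print("lowest test error in the whole final population:",
      round(test_obj[best_overall][0], 3))
```

If the second number printed is lower than the first, a solution outside the training Pareto front generalized better to unseen data, which is precisely the situation the paper argues decision-makers should be able to exploit.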
