Optimization and Machine Learning

A special issue of AppliedMath (ISSN 2673-9909).

Deadline for manuscript submissions: 28 February 2025 | Viewed by 4948

Special Issue Editor

Department of Mathematics, School of Science, Wuhan University of Technology, Wuhan 430070, China
Interests: computational intelligence; metaheuristics; optimization; electronic design automation; bioinformatics

Special Issue Information

Dear Colleagues,

We are pleased to invite you to submit your research for publication in a Special Issue of AppliedMath focused on optimization and machine learning. The goal of this Special Issue is to showcase the latest advances in the field and to provide a platform for researchers to share their findings.

Optimization and machine learning (ML) have become two of the most active research areas of the last decade. ML provides a variety of techniques for data preprocessing, feature extraction, model selection, etc., whereas optimization algorithms supply the fundamental machinery for constructing mathematical models and fitting the parameters of ML methods. From the deep integration of the two fields, powerful optimization-based ML algorithms and efficient ML-assisted optimization algorithms can be developed to address the challenges of scientific research and engineering applications in the big data era.

In this Special Issue, we welcome reviews and original papers on theoretical and practical studies of ML and optimization algorithms, including ML algorithms built on novel optimization techniques and optimization algorithms enhanced by ML strategies.

Dr. Yu Chen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AppliedMath is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optimization
  • combinatorial optimization
  • evolutionary optimization
  • swarm intelligence
  • metaheuristics
  • machine learning
  • reinforcement learning
  • transfer learning
  • deep learning
  • data-driven optimization
  • large-scale optimization
  • multi-objective optimization
  • evolutionary multi-task optimization
  • evolutionary deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)

Research

17 pages, 749 KiB  
Article
A Two-Stage Feature Selection Approach Based on Artificial Bee Colony and Adaptive LASSO in High-Dimensional Data
by Efe Precious Onakpojeruo and Nuriye Sancar
AppliedMath 2024, 4(4), 1522-1538; https://doi.org/10.3390/appliedmath4040081 - 12 Dec 2024
Viewed by 237
Abstract
High-dimensional datasets, where the number of features far exceeds the number of observations, present significant challenges in feature selection and model performance. This study proposes a novel two-stage feature-selection approach that integrates Artificial Bee Colony (ABC) optimization with Adaptive Least Absolute Shrinkage and Selection Operator (AD_LASSO). The initial stage reduces dimensionality while effectively dealing with complex, high-dimensional search spaces by using ABC to conduct a global search for the ideal subset of features. The second stage applies AD_LASSO, refining the selected features by eliminating redundant features and enhancing model interpretability. The proposed ABC-ADLASSO method was compared with the AD_LASSO, LASSO, stepwise, and LARS methods under different simulation settings in high-dimensional data and various real datasets. According to the results obtained from simulations and applications on various real datasets, ABC-ADLASSO has shown significantly superior performance in terms of accuracy, precision, and overall model performance, particularly in scenarios with high correlation and a large number of features compared to the other methods evaluated. This two-stage approach offers robust feature selection and improves predictive accuracy, making it an effective tool for analyzing high-dimensional data.
(This article belongs to the Special Issue Optimization and Machine Learning)
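The two-stage idea in the abstract — a population-based global search over feature subsets followed by an ℓ1-type shrinkage refinement — can be illustrated with a minimal sketch. This is not the authors' implementation: a simple stochastic neighborhood search stands in for the ABC stage, plain coordinate-descent soft-thresholding stands in for the adaptive LASSO, and all data and parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: 40 samples, 200 features, 3 informative.
n, p = 40, 200
X = rng.standard_normal((n, p))
true_idx = [3, 17, 42]
y = X[:, true_idx] @ np.array([2.0, -1.5, 1.0]) + 0.05 * rng.standard_normal(n)

def subset_score(idx):
    """Fitness of a feature subset: least-squares residual plus a size penalty."""
    Xs = X[:, idx]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return resid @ resid + 0.1 * len(idx)

# Stage 1 (stand-in for ABC): stochastic neighborhood search over subsets.
best = sorted(rng.choice(p, size=10, replace=False).tolist())
best_score = subset_score(best)
for _ in range(300):
    cand = best.copy()
    cand[rng.integers(len(cand))] = int(rng.integers(p))  # swap one feature
    cand = sorted(set(cand))
    s = subset_score(cand)
    if s < best_score:
        best, best_score = cand, s

# Stage 2 (LASSO-like refinement): coordinate descent with soft-thresholding
# drops redundant features from the stage-1 subset.
Xs = X[:, best]
lam = 0.5
beta = np.zeros(Xs.shape[1])
for _ in range(200):
    for j in range(len(beta)):
        r = y - Xs @ beta + Xs[:, j] * beta[j]   # partial residual
        rho = Xs[:, j] @ r
        z = Xs[:, j] @ Xs[:, j]
        beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z

selected = [best[j] for j in np.flatnonzero(np.abs(beta) > 1e-6)]
print(selected)
```

The interplay is the point: the global stage only has to get the subset roughly right, because the shrinkage stage zeroes out whatever redundancy survives it.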

22 pages, 907 KiB  
Article
Introducing a Parallel Genetic Algorithm for Global Optimization Problems
by Vasileios Charilogis and Ioannis G. Tsoulos
AppliedMath 2024, 4(2), 709-730; https://doi.org/10.3390/appliedmath4020038 - 10 Jun 2024
Viewed by 1659
Abstract
The topic of efficiently finding the global minimum of multidimensional functions is widely applicable to numerous problems in the modern world. Many algorithms have been proposed to address these problems, among which genetic algorithms and their variants are particularly notable. Their popularity is due to their exceptional performance in solving optimization problems and their adaptability to various types of problems. However, genetic algorithms require significant computational resources and time, prompting the need for parallel techniques. Moving in this research direction, a new global optimization method is presented here that exploits the use of parallel computing techniques in genetic algorithms. This innovative method employs autonomous parallel computing units that periodically share the optimal solutions they discover. Increasing the number of computational threads, coupled with solution exchange techniques, can significantly reduce the number of calls to the objective function, thus saving computational power. Also, a stopping rule is proposed that takes advantage of the parallel computational environment. The proposed method was tested on a broad array of benchmark functions from the relevant literature and compared with other global optimization techniques regarding its efficiency.
(This article belongs to the Special Issue Optimization and Machine Learning)
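The island-style scheme described in the abstract — autonomous populations that periodically exchange their best solutions — can be sketched as follows. This is an illustrative stand-in, not the paper's method: the "parallel units" run sequentially for clarity, the benchmark is the sphere function, and the operator choices (tournament selection, blend crossover, Gaussian mutation) and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return float(np.sum(x * x))

DIM, POP, ISLANDS, GENS, EXCHANGE = 5, 30, 4, 200, 20

# One population per "parallel unit" (run sequentially here for clarity).
pops = [rng.uniform(-5, 5, size=(POP, DIM)) for _ in range(ISLANDS)]

def step(pop):
    """One GA generation: tournament selection, blend crossover, mutation."""
    fit = np.array([sphere(ind) for ind in pop])
    new = []
    for _ in range(len(pop)):
        a, b = rng.integers(len(pop), size=2)
        c, d = rng.integers(len(pop), size=2)
        p1 = pop[a] if fit[a] < fit[b] else pop[b]   # tournament winner 1
        p2 = pop[c] if fit[c] < fit[d] else pop[d]   # tournament winner 2
        w = rng.uniform(size=DIM)
        child = w * p1 + (1 - w) * p2                # blend crossover
        child += rng.normal(0, 0.1, size=DIM) * (rng.uniform(size=DIM) < 0.2)
        new.append(child)
    return np.array(new)

for gen in range(1, GENS + 1):
    pops = [step(p) for p in pops]
    if gen % EXCHANGE == 0:
        # Periodically share each island's best with the next island,
        # replacing that island's worst individual (ring migration).
        bests = [p[np.argmin([sphere(i) for i in p])].copy() for p in pops]
        for k, p in enumerate(pops):
            worst = int(np.argmax([sphere(i) for i in p]))
            p[worst] = bests[(k - 1) % ISLANDS]

best_val = min(sphere(i) for p in pops for i in p)
print(best_val)
```

In a true parallel implementation each island would occupy its own thread or process, and the migration step is the only point of communication, which is why the approach scales well with thread count.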

Review

30 pages, 953 KiB  
Review
A Review of Optimization-Based Deep Learning Models for MRI Reconstruction
by Wanyu Bian and Yokhesh Krishnasamy Tamilselvam
AppliedMath 2024, 4(3), 1098-1127; https://doi.org/10.3390/appliedmath4030059 - 3 Sep 2024
Viewed by 1724
Abstract
Magnetic resonance imaging (MRI) is crucial for its superior soft tissue contrast and high spatial resolution. Integrating deep learning algorithms into MRI reconstruction has significantly enhanced image quality and efficiency. This paper provides a comprehensive review of optimization-based deep learning models for MRI reconstruction, focusing on recent advancements in gradient descent algorithms, proximal gradient descent algorithms, ADMM, PDHG, and diffusion models combined with gradient descent. We highlight the development and effectiveness of learnable optimization algorithms (LOAs) in improving model interpretability and performance. Our findings demonstrate substantial improvements in MRI reconstruction in handling undersampled data, which directly contribute to reducing scan times and enhancing diagnostic accuracy. The review offers valuable insights and resources for researchers and practitioners aiming to advance medical imaging using state-of-the-art deep learning techniques.
(This article belongs to the Special Issue Optimization and Machine Learning)
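As a point of reference for the iterations such models unroll, here is classical ISTA (proximal gradient descent for ℓ1-regularized least squares) on a toy sparse-recovery instance. Learnable optimization algorithms of the kind surveyed replace the fixed step size, threshold, and regularizer with trained network components; the measurement model below is a generic random matrix, not an actual MRI Fourier-sampling operator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy undersampled linear model y = A x + noise, with a sparse ground-truth
# signal standing in for a sparse image representation.
n, m = 100, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 30, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(y, A, lam=0.02, iters=500):
    """Proximal gradient (ISTA): a gradient step on 0.5||Ax - y||^2 followed
    by the soft-threshold prox of the l1 penalty. Unrolled networks learn
    the step size and threshold instead of fixing them."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
    return x

x_hat = ista(y, A)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(err)
```

Each unrolled network layer in the reviewed models corresponds to one iteration of a loop like this, which is what gives those architectures their interpretability.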

Planned Papers

The list below represents only planned manuscripts; some have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: A Multi-objective Random Drift Particle Swarm Optimization
Authors: Liwei Li; Min Shan; Jun Sun; Vasile Palade; Xiaojun Wu
Affiliation: Coventry University, UK; Jiangnan University, China
Abstract: This paper proposes a multi-objective random drift particle swarm optimization algorithm with a dual-archive mechanism (MORDPSO-DA). First, the random drift particle swarm optimization algorithm is applied to multi-objective optimization by adopting a multi-scale chaotic variational operation on the particles to enhance their global search ability. Then, a dual-archive mechanism is proposed that establishes an auxiliary archive, with a capacity threshold, alongside the main external archive holding the non-dominated solution set. Particles deleted from the main archive are screened according to crowding distance and saved in the auxiliary archive. When the auxiliary archive reaches its capacity threshold, its particles are compared with those in the main archive, and the main archive is updated according to crowding distance, retaining the particles that benefit the diversity of the solution set. The proposed MORDPSO-DA is tested on the ZDT and DTLZ benchmark functions, with the results showing that the algorithm achieves better results in terms of IGD, GD, and SP than the other compared methods.
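A minimal sketch of a dual-archive mechanism of the kind this abstract describes might look as follows. This is a hypothetical illustration, not the authors' algorithm: the PSO update itself is omitted, and the merge policy simply keeps the most widely spaced non-dominated points by crowding distance.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding(points):
    """Crowding distance of each point in objective space."""
    n, m = len(points), len(points[0])
    d = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        d[order[0]] = d[order[-1]] = float("inf")   # boundary points kept
        span = points[order[-1]][k] - points[order[0]][k] or 1.0
        for j in range(1, n - 1):
            d[order[j]] += (points[order[j + 1]][k] - points[order[j - 1]][k]) / span
    return d

class DualArchive:
    """Main archive keeps non-dominated points; points pruned for crowding
    are parked in an auxiliary archive and reconsidered when it fills."""
    def __init__(self, main_cap=10, aux_cap=10):
        self.main, self.aux = [], []
        self.main_cap, self.aux_cap = main_cap, aux_cap

    def add(self, p):
        if any(dominates(q, p) for q in self.main):
            return                                   # dominated: reject
        self.main = [q for q in self.main if not dominates(p, q)]
        self.main.append(p)
        if len(self.main) > self.main_cap:
            d = crowding(self.main)
            i = min(range(len(self.main)), key=lambda j: d[j])
            self.aux.append(self.main.pop(i))        # park most crowded point
            if len(self.aux) >= self.aux_cap:
                self._merge()

    def _merge(self):
        # Pool both archives, drop dominated points, keep the most
        # widely spaced survivors by crowding distance.
        pool = self.main + self.aux
        pool = [p for p in pool if not any(dominates(q, p) for q in pool if q != p)]
        d = crowding(pool)
        keep = sorted(range(len(pool)), key=lambda i: -d[i])[: self.main_cap]
        self.main = [pool[i] for i in keep]
        self.aux = []

random.seed(0)
arch = DualArchive()
for _ in range(200):
    x = random.random()
    arch.add((x, (1 - x) ** 2))   # points on a convex trade-off front
arch.add((2.0, 2.0))              # dominated point: should be rejected
print(len(arch.main))
```

The auxiliary archive gives pruned-but-good points a second chance, which is how such schemes try to preserve diversity that a single capacity-limited archive would discard.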

Title: Statistical Machine Learning Techniques
Authors: S. Zimeras
Affiliation: University of the Aegean, Dept. of Statistics and Actuarial-Financial Mathematics, Samos, Greece
