Optimization and Machine Learning

A special issue of AppliedMath (ISSN 2673-9909).

Deadline for manuscript submissions: 30 November 2025 | Viewed by 8617

Special Issue Editor

Dr. Yu Chen
Department of Mathematics, School of Science, Wuhan University of Technology, Wuhan 430070, China
Interests: computational intelligence; metaheuristics; optimization; electronic design automation; bioinformatics

Special Issue Information

Dear Colleagues,

We are pleased to invite you to submit your research for consideration in a Special Issue of AppliedMath focused on optimization and machine learning. This Special Issue aims to showcase the latest advances in the field and to provide a platform for researchers to share their promising findings.

Optimization and machine learning (ML) have become two of the most active research topics of the last decade. ML provides a variety of techniques for data preprocessing, feature extraction, model selection, and related tasks, whereas optimization algorithms offer the fundamental tools for constructing mathematical models and fitting the parameters of ML methods. From the deep integration of optimization and ML, powerful optimization-based ML algorithms and efficient ML-assisted optimization algorithms can be developed to address the challenges of scientific research and engineering applications in the big data era.

In this Special Issue, we welcome reviews and original papers on theoretical and practical studies of ML and optimization algorithms, including ML algorithms built on novel optimization techniques and optimization algorithms enhanced by ML strategies.

Dr. Yu Chen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AppliedMath is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optimization
  • combinatorial optimization
  • evolutionary optimization
  • swarm intelligence
  • metaheuristics
  • machine learning
  • reinforcement learning
  • transfer learning
  • deep learning
  • data-driven optimization
  • large-scale optimization
  • multi-objective optimization
  • evolutionary multi-task optimization
  • evolutionary deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (5 papers)


Research


25 pages, 4789 KiB  
Article
Application of Deep Learning Framework for Early Prediction of Diabetic Retinopathy
by Fahad Mostafa, Hafiz Khan, Fardous Farhana and Md Ariful Haque Miah
AppliedMath 2025, 5(1), 11; https://doi.org/10.3390/appliedmath5010011 - 5 Feb 2025
Viewed by 534
Abstract
Diabetic retinopathy (DR) is a severe microvascular complication of diabetes that affects the eyes, leading to progressive damage to the retina and potential vision loss. Timely intervention and detection are crucial for preventing irreversible damage. With the advancement of technology, deep learning (DL) has emerged as a powerful tool in medical diagnostics, offering a promising solution for the early prediction of DR. This study compares four convolutional neural network architectures, DenseNet201, ResNet50, VGG19, and MobileNetV2, for predicting DR. The evaluation is based on both accuracy and training time. MobileNetV2 outperforms the other models, with a validation accuracy of 78.22%, and ResNet50 has the shortest training time (15.37 s). These findings emphasize the trade-off between model accuracy and computational efficiency, stressing MobileNetV2's potential applicability for DR prediction due to its balance of high accuracy and reasonable training time. Performing 5-fold cross-validation with 100 repetitions, the ensemble of MobileNetV2 and a Graph Convolution Network exhibits a validation accuracy of 82.5%, significantly outperforming MobileNetV2 alone, which shows a 5-fold validation accuracy of 77.4%. This superior performance is further validated by the area under the receiver operating characteristic (ROC) curve, demonstrating the enhanced capability of the ensemble method in accurately detecting diabetic retinopathy and highlighting its robustness across multiple validation scenarios. Moreover, the proposed clustering approach can locate damaged regions in the retina using the developed Isolate Regions of Interest method, which achieves almost 90% accuracy. These findings are useful for researchers and healthcare practitioners investigating efficient and powerful models for predictive analytics to diagnose diabetic retinopathy.
(This article belongs to the Special Issue Optimization and Machine Learning)
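For readers unfamiliar with the validation protocol described above, the following sketch shows the general shape of repeated 5-fold cross-validation in scikit-learn. It uses a synthetic dataset and a logistic-regression stand-in rather than the authors' CNN ensemble, and fewer repetitions than the 100 used in the paper; it is an illustration of the protocol, not the study's code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for image-derived features; the paper uses CNNs on retinal images.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# 5-fold cross-validation, repeated (the paper repeats 100 times; 10 here for speed).
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"mean accuracy over {len(scores)} folds: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Averaging over many repeated splits, as the study does, reduces the variance of the accuracy estimate compared with a single k-fold run.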

39 pages, 5494 KiB  
Article
Learning Rate Tuner with Relative Adaptation (LRT-RA): Road to Sustainable Computing
by Saptarshi Biswas, Sumagna Dey and Subhrapratim Nath
AppliedMath 2025, 5(1), 8; https://doi.org/10.3390/appliedmath5010008 - 14 Jan 2025
Viewed by 746
Abstract
Optimizing learning rates (LRs) in deep learning (DL) has long been challenging. Previous solutions, such as learning rate scheduling (LRS) and adaptive learning rate (ALR) algorithms like RMSProp and Adam, added complexity by introducing new hyperparameters, thereby increasing the cost of model training through expensive cross-validation experiments. These methods mainly focus on local gradient patterns, which may not be effective in scenarios with multiple local optima near the global optimum. A new technique called Learning Rate Tuner with Relative Adaptation (LRT-RA) is introduced to tackle these issues. This approach dynamically adjusts LRs during training by analyzing the global loss curve, eliminating the need for costly initial LR estimation through cross-validation. This method reduces training expenses and carbon footprint and enhances training efficiency. It demonstrates promising results in preventing premature convergence, exhibiting inherent optimization behavior, and elucidating the correlation between dataset distribution and optimal LR selection. The proposed method achieves 84.96% accuracy on the CIFAR-10 dataset while reducing the power usage to 0.07 kWh, CO2 emissions to 0.05, and both SO2 and NOx emissions to 0.00003 pounds, during the whole training and testing process.
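The general idea of steering the learning rate from the shape of the loss curve can be illustrated with a toy rule: grow the LR while the recent loss trend is still falling, shrink it otherwise. The window size, update factors, and cap below are illustrative assumptions, not the LRT-RA update itself.

```python
import numpy as np

def adjust_lr(lr, loss_history, window=5, up=1.05, down=0.7, lr_max=0.1):
    """Hypothetical loss-curve rule (not the paper's exact update):
    compare the mean loss of the last `window` steps with the window
    before it, and grow or shrink the learning rate accordingly."""
    if len(loss_history) < 2 * window:
        return lr
    recent = np.mean(loss_history[-window:])
    earlier = np.mean(loss_history[-2 * window:-window])
    return min(lr * up, lr_max) if recent < earlier else lr * down

# Toy quadratic objective f(w) = w^2 to show the rule in action.
w, lr, losses = 5.0, 0.01, []
for _ in range(200):
    losses.append(w * w)
    w -= lr * 2 * w               # gradient step on f(w) = w^2
    lr = adjust_lr(lr, losses)
print(f"final loss: {losses[-1]:.2e}, final lr: {lr:.3f}")
```

Unlike RMSProp or Adam, which react to per-parameter gradient statistics, a rule of this kind looks only at the global trajectory of the loss, which is the distinguishing idea the abstract describes.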

17 pages, 749 KiB  
Article
A Two-Stage Feature Selection Approach Based on Artificial Bee Colony and Adaptive LASSO in High-Dimensional Data
by Efe Precious Onakpojeruo and Nuriye Sancar
AppliedMath 2024, 4(4), 1522-1538; https://doi.org/10.3390/appliedmath4040081 - 12 Dec 2024
Viewed by 714
Abstract
High-dimensional datasets, where the number of features far exceeds the number of observations, present significant challenges in feature selection and model performance. This study proposes a novel two-stage feature-selection approach that integrates Artificial Bee Colony (ABC) optimization with Adaptive Least Absolute Shrinkage and Selection Operator (AD_LASSO). The initial stage reduces dimensionality while effectively dealing with complex, high-dimensional search spaces by using ABC to conduct a global search for the ideal subset of features. The second stage applies AD_LASSO, refining the selected features by eliminating redundant features and enhancing model interpretability. The proposed ABC-ADLASSO method was compared with the AD_LASSO, LASSO, stepwise, and LARS methods under different simulation settings in high-dimensional data and various real datasets. According to the results obtained from simulations and applications on various real datasets, ABC-ADLASSO has shown significantly superior performance in terms of accuracy, precision, and overall model performance, particularly in scenarios with high correlation and a large number of features compared to the other methods evaluated. This two-stage approach offers robust feature selection and improves predictive accuracy, making it an effective tool for analyzing high-dimensional data.
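A minimal sketch of the two-stage idea, assuming a drastically simplified random-subset search in place of the full ABC algorithm and a plain LASSO (without the adaptive weights of AD_LASSO) as the refinement stage; all names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# p >> n: 200 features, 60 observations, only 8 informative.
X, y = make_regression(n_samples=60, n_features=200, n_informative=8,
                       noise=1.0, random_state=0)

# Stage 1: population-based search over feature subsets (toy stand-in for ABC).
def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask], y, cv=3).mean()

best_mask, best_fit = None, -np.inf
for _ in range(30):                       # candidate "food sources"
    mask = rng.random(X.shape[1]) < 0.05  # sparse random subset
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f

# Stage 2: LASSO refines the surviving subset by zeroing redundant features.
lasso = Lasso(alpha=0.1).fit(X[:, best_mask], y)
kept = np.flatnonzero(best_mask)[lasso.coef_ != 0]
print("features kept after both stages:", kept)
```

The design point of the two-stage scheme is that the global search only has to find a workable region of the subset space; the penalized regression then does the fine-grained pruning.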

22 pages, 907 KiB  
Article
Introducing a Parallel Genetic Algorithm for Global Optimization Problems
by Vasileios Charilogis and Ioannis G. Tsoulos
AppliedMath 2024, 4(2), 709-730; https://doi.org/10.3390/appliedmath4020038 - 10 Jun 2024
Viewed by 2062
Abstract
The topic of efficiently finding the global minimum of multidimensional functions is widely applicable to numerous problems in the modern world. Many algorithms have been proposed to address these problems, among which genetic algorithms and their variants are particularly notable. Their popularity is due to their exceptional performance in solving optimization problems and their adaptability to various types of problems. However, genetic algorithms require significant computational resources and time, prompting the need for parallel techniques. Moving in this research direction, a new global optimization method is presented here that exploits the use of parallel computing techniques in genetic algorithms. This innovative method employs autonomous parallel computing units that periodically share the optimal solutions they discover. Increasing the number of computational threads, coupled with solution exchange techniques, can significantly reduce the number of calls to the objective function, thus saving computational power. Also, a stopping rule is proposed that takes advantage of the parallel computational environment. The proposed method was tested on a broad array of benchmark functions from the relevant literature and compared with other global optimization techniques regarding its efficiency.
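The island model described above, with autonomous units periodically exchanging their best solutions, can be sketched serially as follows. A real implementation would run each island in its own thread or process, and the operators here (tournament selection, blend crossover, Gaussian mutation, a ring migration topology) are generic textbook choices, not necessarily those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, POP, ISLANDS = 5, 20, 4

def sphere(x):                       # standard benchmark objective to minimize
    return float(np.sum(x * x))

pops = [rng.uniform(-5, 5, (POP, DIM)) for _ in range(ISLANDS)]

def step(pop):
    """One GA generation: tournament selection, blend crossover, mutation."""
    fit = np.array([sphere(ind) for ind in pop])
    new = []
    for _ in range(POP):
        a, b = rng.integers(POP, size=2)
        p1 = pop[a] if fit[a] < fit[b] else pop[b]
        a, b = rng.integers(POP, size=2)
        p2 = pop[a] if fit[a] < fit[b] else pop[b]
        new.append(0.5 * (p1 + p2) + rng.normal(0, 0.1, DIM))
    return np.array(new)

for gen in range(100):
    pops = [step(p) for p in pops]   # islands evolve independently
    if gen % 10 == 0:                # periodic migration of best solutions
        bests = [p[np.argmin([sphere(i) for i in p])] for p in pops]
        for k in range(ISLANDS):     # ring topology: receive a neighbour's best
            pops[k][0] = bests[(k + 1) % ISLANDS]

best = min(sphere(i) for p in pops for i in p)
print(f"best objective found: {best:.4f}")
```

Because migration injects each island's incumbent into its neighbour, good solutions propagate without the islands having to synchronize every generation, which is what makes the scheme attractive for parallel hardware.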

Review


30 pages, 953 KiB  
Review
A Review of Optimization-Based Deep Learning Models for MRI Reconstruction
by Wanyu Bian and Yokhesh Krishnasamy Tamilselvam
AppliedMath 2024, 4(3), 1098-1127; https://doi.org/10.3390/appliedmath4030059 - 3 Sep 2024
Viewed by 2695
Abstract
Magnetic resonance imaging (MRI) is crucial for its superior soft tissue contrast and high spatial resolution. Integrating deep learning algorithms into MRI reconstruction has significantly enhanced image quality and efficiency. This paper provides a comprehensive review of optimization-based deep learning models for MRI reconstruction, focusing on recent advancements in gradient descent algorithms, proximal gradient descent algorithms, ADMM, PDHG, and diffusion models combined with gradient descent. We highlight the development and effectiveness of learnable optimization algorithms (LOAs) in improving model interpretability and performance. Our findings demonstrate substantial improvements in MRI reconstruction in handling undersampled data, which directly contribute to reducing scan times and enhancing diagnostic accuracy. The review offers valuable insights and resources for researchers and practitioners aiming to advance medical imaging using state-of-the-art deep learning techniques.
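Among the optimization schemes the review covers, proximal gradient descent is the easiest to sketch. The toy below runs ISTA (iterative soft-thresholding) on a sparse recovery problem with a random Gaussian sensing matrix standing in for an undersampled MRI Fourier operator; the operator, sizes, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 200                              # undersampled: fewer measurements than unknowns
A = rng.normal(size=(m, n)) / np.sqrt(m)    # stand-in for the measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)  # sparse ground truth
b = A @ x_true                              # observed measurements

# ISTA: proximal gradient for  min_x  0.5*||Ax - b||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the data-term gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)                   # gradient of the smooth data term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold prox of the l1 term

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

The learnable optimization algorithms the review surveys keep this alternation of a gradient step and a proximal step but replace hand-chosen quantities such as the step size or the thresholding operator with learned components.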
