Navigating Complexity: Advanced Optimization Techniques for Machine Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E: Applied Mathematics".

Deadline for manuscript submissions: closed (1 November 2024) | Viewed by 4833

Special Issue Editors


Guest Editor
Department of Applied Mathematics, Ayandegan Institute of Higher Education, Tonekabon, Iran
Interests: computational intelligence; uncertainty; decision theory and methods; multicriteria decision-making; construction management; operations research; soft computing; computational modeling

Guest Editor
Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, Roma, Italy
Interests: fractional calculus; numerical analysis; deep learning; artificial intelligence; fuzzy mathematics

Special Issue Information

Dear Colleagues, 

The rapid advancement of machine learning has brought forth complex challenges that demand equally advanced optimization techniques. As machine learning finds applications in diverse sectors such as healthcare, finance, and autonomous systems, the need for optimized algorithms becomes crucial. Traditional optimization methods often fall short in navigating the high dimensionality, non-convexity, and real-time requirements of modern machine learning problems. This Special Issue aims to explore the frontier of optimization techniques designed to address these complexities in machine learning applications. It will feature contributions that present innovative algorithms, theoretical insights, and real-world applications that accelerate and refine machine learning models. Potential topics include, but are not limited to:

  • advanced gradient descent variants in machine learning;
  • multi-objective optimization for hyperparameter tuning;
  • meta-learning for algorithmic optimization;
  • Bayesian optimization in machine learning;
  • optimization under uncertainty and indeterminacy in machine learning;
  • soft computing approaches for machine learning optimization;
  • scalability challenges in machine learning optimization;
  • real-world applications of optimized machine learning algorithms;
  • performance analysis of new optimization techniques in machine learning;
  • convergence rates and acceleration methods in optimization for machine learning.

Dr. Seyyed Ahmad Edalatpanah
Dr. Mohammad Javad Ebadi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning optimization
  • advanced gradient descent
  • multi-objective optimization
  • Bayesian optimization
  • soft computing
  • scalability in optimization
  • convergence and acceleration techniques

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

28 pages, 1518 KiB  
Article
Efficient Tuning of an Isotope Separation Online System Through Safe Bayesian Optimization with Simulation-Informed Gaussian Process for the Constraints
by Santiago Ramos Garces, Ivan De Boi, João Pedro Ramos, Marc Dierckx, Lucia Popescu and Stijn Derammelaere
Mathematics 2024, 12(23), 3696; https://doi.org/10.3390/math12233696 - 25 Nov 2024
Viewed by 769
Abstract
Optimizing process outcomes by tuning parameters through an automated system is common in industry. Ideally, this optimization is performed as efficiently as possible, using the minimum number of steps to achieve an optimal configuration. However, care must often be taken to ensure that, in pursuing the optimal solution, the process does not enter an “unsafe” state (for the process itself or its surroundings). Safe Bayesian optimization is a viable method in such contexts, as it guarantees constraint fulfillment during the optimization process, ensuring the system remains safe. This method assumes the constraints are real-valued and continuous functions. However, in some cases, the constraints are binary (true/false) or classification-based (safe/unsafe), limiting the direct application of safe Bayesian optimization. Therefore, a slight modification of safe Bayesian optimization allows for applying the method using a probabilistic classifier for learning classification constraints. However, violation of constraints may occur during the optimization process, as the theoretical guarantees of safe Bayesian optimization do not apply to discontinuous functions. This paper addresses this limitation by introducing an enhanced version of safe Bayesian optimization incorporating a simulation-informed Gaussian process (GP) for handling classification constraints. The simulation-informed GP transforms the classification constraint into a piece-wise function, enabling the application of safe Bayesian optimization. We applied this approach to optimize the parameters of a computational model for the isotope separator online (ISOL) at the MYRRHA facility (Multipurpose Hybrid Research Reactor for High-Tech Applications). The results revealed a significant reduction in constraint violations—approximately 80%—compared to safe Bayesian optimization methods that directly learn the classification constraints using Laplace approximation and expectation propagation. 
The sensitivity to the accuracy of the simulation model was analyzed to determine the extent to which it is advantageous to use the proposed method. These findings suggest that incorporating available information into the optimization process is valuable for reducing the number of unsafe outcomes in constrained optimization scenarios.
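The safe-exploration idea the abstract builds on, trusting the constraint only where a surrogate model is confident, can be sketched in a few lines. The snippet below is a minimal toy illustration, not the authors' simulation-informed method: it fits a small Gaussian process to a real-valued constraint g(x) ≥ 0 and restricts the acquisition to candidates whose pessimistic (lower-confidence-bound) prediction is still safe. The toy objective, constraint, and all parameter values are hypothetical.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Exact GP posterior mean and standard deviation (zero prior mean)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)          # prior variance is 1
    return mu, np.sqrt(np.maximum(var, 0.0))

# Toy problem: g(x) >= 0 means "safe"; f is the objective to maximize.
g = lambda x: 1.0 - x ** 2                       # true safe region: |x| <= 1
f = lambda x: -(x - 0.6) ** 2

x_obs = np.array([-0.5, 0.0, 0.4])               # safe points evaluated so far
candidates = np.linspace(-2.0, 2.0, 201)
mu, sd = gp_posterior(x_obs, g(x_obs), candidates)

beta = 2.0
safe = mu - beta * sd >= 0.0                     # pessimistic safety check
# Among provably-safe candidates, pick the best objective value.
best = candidates[safe][np.argmax(f(candidates[safe]))]
```

Far from the observed points the GP is uncertain, so the lower confidence bound drops below zero and those candidates are excluded; this is the mechanism that keeps the optimizer from stepping into an unsafe state.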

16 pages, 5512 KiB  
Article
Research on Autonomous Manoeuvre Decision Making in Within-Visual-Range Aerial Two-Player Zero-Sum Games Based on Deep Reinforcement Learning
by Bo Lu, Le Ru, Shiguang Hu, Wenfei Wang, Hailong Xi and Xiaolin Zhao
Mathematics 2024, 12(14), 2160; https://doi.org/10.3390/math12142160 - 10 Jul 2024
Viewed by 1097
Abstract
In recent years, with the accelerated development of technology towards automation and intelligence, autonomous decision-making capabilities in unmanned systems are poised to play a crucial role in contemporary aerial two-player zero-sum games (TZSGs). Deep reinforcement learning (DRL) methods enable agents to make autonomous manoeuvring decisions. This paper focuses on current mainstream DRL algorithms based on fundamental tactical manoeuvres, selecting a typical aerial TZSG scenario—within visual range (WVR) combat. We model the key elements influencing the game using a Markov decision process (MDP) and demonstrate the mathematical foundation for implementing DRL. Leveraging high-fidelity simulation software (Warsim v1.0), we design a prototypical close-range aerial combat scenario. Utilizing this environment, we train mainstream DRL algorithms and analyse the training outcomes. The effectiveness of these algorithms in enabling agents to manoeuvre in aerial TZSG autonomously is summarised, providing a foundational basis for further research.
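The MDP formalism the abstract refers to can be illustrated independently of the combat scenario. The toy below, a hypothetical five-state chain rather than the paper's WVR environment or any of its DRL algorithms, shows the state/action/reward/transition elements of an MDP together with a tabular Q-learning update, the value-update idea that deep RL methods approximate with neural networks.

```python
import random

# Toy 5-state chain MDP: move left (-1) or right (+1); reaching the
# rightmost state yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (-1, +1)

def step(state, action):
    """Deterministic transition and reward function of the toy MDP."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = rng.randrange(N_STATES - 1), False   # exploring starts
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            # one-step temporal-difference (Q-learning) update
            target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
greedy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
```

After training, the greedy policy moves right from every non-terminal state, and Q(3, +1) approaches the exact optimal value 1.0; a DRL method replaces the table Q with a function approximator but keeps the same update target.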

15 pages, 1325 KiB  
Article
Approximate Solution of PHI-Four and Allen–Cahn Equations Using Non-Polynomial Spline Technique
by Mehboob Ul Haq, Sirajul Haq, Ihteram Ali and Mohammad Javad Ebadi
Mathematics 2024, 12(6), 798; https://doi.org/10.3390/math12060798 - 8 Mar 2024
Cited by 3 | Viewed by 1297
Abstract
The aim of this work is to use an efficient and accurate numerical technique based on a non-polynomial spline for the solution of the PHI-Four and Allen–Cahn equations. A recent discovery suggests that the PHI-Four equation focuses on its implications for particle physics and the behavior of scalar fields in the quantum realm. In materials science, ongoing research involves using the Allen–Cahn equation to understand and predict the evolution of microstructures in various materials, as well as in biophysics, where it depicts pattern formation in biological systems and the dynamics of spatial organization in tissues. To obtain an approximate solution of both equations, this technique uses forward differences for time and a cubic non-polynomial spline function for spatial discretization. The stability of the suggested technique is addressed using the von Neumann method. A convergence test is carried out theoretically to show the order of convergence of the scheme. Some numerical tests are carried out to confirm accuracy and efficiency in terms of absolute error. Convergence rates for different test problems are also computed numerically. The numerical results and simulations obtained are compared with existing methods.
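The overall structure of such a scheme, forward differences in time combined with a spatial discretization, can be illustrated with a deliberately simplified stand-in. The sketch below advances the Allen–Cahn equation u_t = ε²·u_xx + u − u³ with forward Euler in time and a plain central difference in space; it is not the paper's cubic non-polynomial spline method, and all parameter values are illustrative only.

```python
import numpy as np

def allen_cahn_step(u, dx, dt, eps=0.05):
    """One forward-Euler step of u_t = eps^2 u_xx + u - u^3 (periodic BCs)."""
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
    return u + dt * (eps ** 2 * uxx + u - u ** 3)

dx, dt = 0.01, 2e-4                      # dt below the explicit stability limit
x = np.linspace(0.0, 1.0, 101)[:-1]      # periodic grid on [0, 1)
u = 0.05 * np.sin(2 * np.pi * x)         # small initial perturbation
for _ in range(50_000):                  # integrate to t = 10
    u = allen_cahn_step(u, dx, dt)
# The reaction term u - u^3 drives the solution toward the stable phases +/-1,
# separated by thin interfaces of width ~ eps.
```

The explicit step is only conditionally stable (dt ≲ dx²/(2ε²) for the diffusive part), which is one motivation for the stability analysis the abstract mentions; a spline-based spatial discretization replaces the central-difference stencil while keeping the same forward-in-time structure.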
