
Advances in Bayesian Optimization and Deep Reinforcement Learning

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 29 May 2025

Special Issue Editor


Dr. Eduardo C. Garrido-Merchán
Guest Editor
Department of Quantitative Methods, Universidad Pontificia Comillas, Madrid, Spain
Interests: Bayesian optimization; deep reinforcement learning; information theory; AutoML; AI ethics

Special Issue Information

Dear Colleagues,

Bayesian optimization has become a state-of-the-art technique for optimizing black-box functions, that is, expensive-to-evaluate functions with no known analytical expression, and hence no gradients to exploit, whose evaluations may also be noisy. In particular, the methodology has proved its significance in the hyperparameter tuning of machine learning algorithms. A substantial body of work now deals with advanced Bayesian optimization techniques, but their applications have almost always targeted supervised learning algorithms.
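To make the loop concrete, the following is a minimal Python sketch, not any specific paper's method: a Gaussian-process surrogate with a unit-amplitude RBF kernel is fitted to a one-dimensional noisy black box, and each new evaluation is chosen by maximizing expected improvement on a grid. The kernel lengthscale, noise level, toy objective, and grid-based maximization are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.2):
    """Unit-amplitude RBF kernel between two 1-D arrays of inputs."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / lengthscale ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and std at test points Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)        # unit prior variance for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected amount by which we beat `best`."""
    z = (best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

rng = np.random.default_rng(0)

def objective(x):
    """Stand-in for an expensive, noisy black box (illustrative only)."""
    return np.sin(3.0 * x) + 0.1 * rng.normal(size=np.shape(x))

X = rng.uniform(0.0, 1.0, size=3)             # small initial design in [0, 1]
y = objective(X)
grid = np.linspace(0.0, 1.0, 500)
for _ in range(20):                           # fit surrogate, acquire, evaluate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
print("best x found:", X[np.argmin(y)], "with value", y.min())
```

The key property is that the surrogate and acquisition replace gradient information: only function evaluations are ever requested from the black box.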

However, deep reinforcement learning has recently shown promising results on a wide variety of problems, and numerous new algorithms have been proposed. It is therefore timely to transfer the success of Bayesian optimization in supervised learning to deep reinforcement learning, from both a methodological and an application point of view; a sketch of this idea follows below. Notice that many advanced Bayesian optimization scenarios can now be tested on deep reinforcement learning algorithms.
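In the AutoRL spirit, the sketch below treats an entire reinforcement learning training run as the black box and lets Bayesian optimization tune its learning rate, reusing gp_posterior and expected_improvement from the sketch above. Everything here is an illustrative assumption: the 3-armed bandit environment, the tiny REINFORCE agent, and the search range stand in for a real deep RL benchmark.

```python
import numpy as np

def train_reinforce(log10_lr, episodes=300, seed=0):
    """Noisy, expensive black box: train a softmax REINFORCE agent on a
    3-armed bandit and return the mean reward of the last 100 episodes."""
    rng = np.random.default_rng(seed)
    lr = 10.0 ** log10_lr
    arm_means = np.array([0.2, 0.5, 0.8])     # true expected arm rewards
    theta = np.zeros(3)                       # policy logits
    rewards = []
    for _ in range(episodes):
        p = np.exp(theta - theta.max()); p /= p.sum()
        a = rng.choice(3, p=p)
        r = rng.normal(arm_means[a], 0.1)
        grad = -p; grad[a] += 1.0             # gradient of log pi(a | theta)
        theta += lr * r * grad                # vanilla REINFORCE step
        rewards.append(r)
    return float(np.mean(rewards[-100:]))

# Tune log10(learning rate) in [-4, 0]; BO minimizes, so negate the reward.
grid = np.linspace(-4.0, 0.0, 400)
X = np.array([-4.0, -2.0, 0.0])
y = np.array([-train_reinforce(x) for x in X])
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)      # helpers from the sketch above
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, -train_reinforce(x_next))
print("best log10(lr):", X[np.argmin(y)])
```

The point of the example is structural: once a training run is wrapped as a scalar-valued function of its hyperparameters, every advanced Bayesian optimization scenario (parallel, multi-fidelity, constrained) applies unchanged.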

This Special Issue aims to be a forum for the presentation of new and improved Bayesian optimization methodologies, new deep reinforcement learning methodologies, and applications of Bayesian optimization that enhance the performance of deep reinforcement learning in a plethora of scenarios, from robotics to financial portfolio management. Preference will be given to methodologies that incorporate an information-theoretic approach to the problem within the Bayesian optimization or deep reinforcement learning paradigms.

Dr. Eduardo C. Garrido-Merchán
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Bayesian optimization
  • deep reinforcement learning
  • hyperparameter tuning
  • AutoRL
  • information theory

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

21 pages, 954 KiB  
Article
Advanced Monte Carlo for Acquisition Sampling in Bayesian Optimization
by Javier Garcia-Barcos and Ruben Martinez-Cantin
Entropy 2025, 27(1), 58; https://doi.org/10.3390/e27010058 - 10 Jan 2025
Abstract
Optimizing complex systems usually involves costly and time-consuming experiments, where selecting the experiments to perform is fundamental. Bayesian optimization (BO) has proved to be a suitable optimization method in these situations thanks to its sample efficiency and principled way of learning from previous data, but it typically requires that experiments are sequentially performed. Fully distributed BO addresses the need for efficient parallel and asynchronous active search, especially where traditional centralized BO faces limitations concerning privacy in federated learning and resource utilization in high-performance computing settings. Boltzmann sampling is an embarrassingly parallel method that enables fully distributed BO using Monte Carlo sampling. However, it also requires sampling from a continuous acquisition function, which can be challenging even for advanced Monte Carlo methods due to its highly multimodal nature, constrained search space, and possibly numerically unstable values. We introduce a simplified version of Boltzmann sampling, and we analyze multiple Markov chain Monte Carlo (MCMC) methods with a numerically improved log EI implementation for acquisition sampling. Our experiments suggest that by introducing gradient information during MCMC sampling, methods such as the MALA or CyclicalSGLD improve acquisition sampling efficiency. Interestingly, a mixture of proposals for the Metropolis–Hastings approach proves to be effective despite its simplicity.
(This article belongs to the Special Issue Advances in Bayesian Optimization and Deep Reinforcement Learning)
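To make Boltzmann acquisition sampling concrete, here is a minimal, self-contained sketch in the spirit of the abstract above; it is an illustration under assumed choices (a toy multimodal stand-in for log EI, a finite-difference gradient, fixed inverse temperature and step size), not the authors' implementation. A MALA chain draws a candidate from p(x) proportional to exp(beta * log a(x)), rejecting proposals that leave the search box.

```python
import numpy as np

def log_acq(x):
    """Toy multimodal stand-in for a numerically stabilized log EI."""
    a = np.exp(-(x - 1.0) ** 2) + 0.5 * np.exp(-((x + 2.0) ** 2) / 0.1)
    return np.log(a + 1e-300)                 # floor avoids log(0)

def grad_log_acq(x, h=1e-5):
    """Finite-difference gradient (a real log EI would be differentiated)."""
    return (log_acq(x + h) - log_acq(x - h)) / (2.0 * h)

def mala_candidate(beta=5.0, step=5e-3, iters=500, box=(-4.0, 4.0), seed=0):
    """Draw one point approximately from p(x) ~ exp(beta * log_acq(x))."""
    rng = np.random.default_rng(seed)
    log_target = lambda z: beta * log_acq(z)
    grad_target = lambda z: beta * grad_log_acq(z)
    x = rng.uniform(*box)
    for _ in range(iters):
        prop = x + step * grad_target(x) + rng.normal(scale=np.sqrt(2.0 * step))
        if not (box[0] <= prop <= box[1]):
            continue                           # respect the constrained domain
        # Langevin proposals are asymmetric, so a Metropolis correction is needed.
        logq = lambda a, b: -(a - b - step * grad_target(b)) ** 2 / (4.0 * step)
        log_alpha = (log_target(prop) - log_target(x)
                     + logq(x, prop) - logq(prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
    return x

print("Boltzmann-sampled candidate:", mala_candidate())
```

With a large beta the chain concentrates near acquisition maxima; with a small beta it explores more broadly, and since chains need no coordination, running one per worker is what makes the scheme embarrassingly parallel.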