
Advances in Probabilistic Machine Learning

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 22 August 2025 | Viewed by 2496

Special Issue Editors


Dr. Martin Trapp
Guest Editor
Department of Computer Science, Aalto University, 02150 Espoo, Finland
Interests: probabilistic machine learning; tractable probabilistic inference; Bayesian deep learning; Bayesian nonparametrics; probabilistic programming

Prof. Dr. Pierre Alquier
Guest Editor
Department of Information Systems, Decision Sciences and Statistics, ESSEC Business School, Singapore 139408, Singapore
Interests: statistical learning theory; mathematical statistics; Bayesian statistics; aggregation of estimators; approximate posterior inference

Special Issue Information

Dear Colleagues,

Probabilistic modeling and reasoning are central to modern machine learning whenever uncertainty, in any of its forms, must be handled. In recent years, substantial progress has been made in approximate Bayesian inference, tractable probabilistic reasoning, and uncertainty quantification as a whole. These advances have led, for example, to a resurgence of Bayesian deep learning and to techniques that allow the effective and efficient quantification of uncertainty in complex scenarios. Moreover, the probabilistic approach has recently shown substantial potential in a wide range of application domains, including drug discovery and autonomous driving, and is a cornerstone of robust and reliable machine learning.

This Special Issue aims to provide a platform for presenting advances in probabilistic machine learning and Bayesian inference, with a particular emphasis on computational approaches for large-scale problems. We invite submissions presenting theoretical and methodological contributions as well as application papers. Possible topics include, but are not limited to, approximate Bayesian inference (e.g., variational inference, parallel tempering), tractable probabilistic modeling (e.g., probabilistic circuits), and applications of Bayesian deep learning (e.g., uncertainty quantification in LLMs), as well as other challenging large-scale scenarios.

Dr. Martin Trapp
Prof. Dr. Pierre Alquier
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • probabilistic machine learning
  • approximate Bayesian inference
  • variational inference
  • tractable probabilistic inference
  • Bayesian deep learning
  • probabilistic circuits
  • uncertainty quantification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

22 pages, 3952 KiB  
Article
Hidden Markov Neural Networks
by Lorenzo Rimella and Nick Whiteley
Entropy 2025, 27(2), 168; https://doi.org/10.3390/e27020168 - 5 Feb 2025
Viewed by 832
Abstract
We define an evolving in-time Bayesian neural network called a Hidden Markov Neural Network, which addresses the crucial challenge in time-series forecasting and continual learning: striking a balance between adapting to new data and appropriately forgetting outdated information. This is achieved by modelling the weights of a neural network as the hidden states of a Hidden Markov model, with the observed process defined by the available data. A filtering algorithm is employed to learn a variational approximation of the evolving-in-time posterior distribution over the weights. By leveraging a sequential variant of Bayes by Backprop, enriched with a stronger regularization technique called variational DropConnect, Hidden Markov Neural Networks achieve robust regularization and scalable inference. Experiments on MNIST, dynamic classification tasks, and next-frame forecasting in videos demonstrate that Hidden Markov Neural Networks provide strong predictive performance while enabling effective uncertainty quantification.
(This article belongs to the Special Issue Advances in Probabilistic Machine Learning)
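The abstract describes the weights of a neural network evolving as the hidden states of a Hidden Markov model, with a Gaussian variational posterior refitted at each time step in the spirit of Bayes by Backprop. The following is a minimal illustrative sketch of such a filtering loop, not the authors' implementation: the random-walk transition kernel, hyperparameters, and variable names are assumptions, and variational DropConnect is omitted.

# Illustrative sketch of a Hidden-Markov-style Bayesian neural network update loop.
# Assumptions: Gaussian random-walk transition on the weights, factorized Gaussian
# variational posterior, single linear layer; all names and constants are made up.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

D_IN, D_OUT = 4, 2        # toy dimensions (illustrative)
SIGMA_TRANS = 0.1         # std of the assumed Gaussian transition on the weights

# Factorized Gaussian variational posterior over a single weight matrix.
mu = torch.zeros(D_OUT, D_IN, requires_grad=True)
rho = torch.full((D_OUT, D_IN), -3.0, requires_grad=True)   # softplus(rho) = std

def sample_weights(mu, rho):
    # Reparameterized sample W = mu + softplus(rho) * eps (Bayes by Backprop).
    eps = torch.randn_like(mu)
    return mu + F.softplus(rho) * eps

def kl_gaussians(mu_q, std_q, mu_p, std_p):
    # KL( N(mu_q, std_q^2) || N(mu_p, std_p^2) ), summed over all weights.
    return (torch.log(std_p / std_q)
            + (std_q ** 2 + (mu_q - mu_p) ** 2) / (2 * std_p ** 2) - 0.5).sum()

opt = torch.optim.Adam([mu, rho], lr=1e-2)

# A stream of (x_t, y_t) batches stands in for the observed process.
stream = [(torch.randn(8, D_IN), torch.randint(0, D_OUT, (8,))) for _ in range(5)]

for x_t, y_t in stream:
    # Prediction step: push the previous posterior through the transition kernel.
    prior_mu = mu.detach().clone()
    prior_std = torch.sqrt(F.softplus(rho.detach()) ** 2 + SIGMA_TRANS ** 2)

    # Update step: refit the Gaussian posterior on the new batch (negative ELBO).
    for _ in range(50):
        opt.zero_grad()
        W = sample_weights(mu, rho)
        logits = x_t @ W.t()
        loss = (F.cross_entropy(logits, y_t, reduction="sum")
                + kl_gaussians(mu, F.softplus(rho), prior_mu, prior_std))
        loss.backward()
        opt.step()

In this sketch, the previous posterior pushed through the transition acts as the prior for the next step, so the KL term controls how quickly old information is forgotten, which is the adaptation-versus-forgetting trade-off the abstract refers to.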

11 pages, 586 KiB  
Article
Stochastic Gradient Descent for Kernel-Based Maximum Correntropy Criterion
by Tiankai Li, Baobin Wang, Chaoquan Peng and Hong Yin
Entropy 2024, 26(12), 1104; https://doi.org/10.3390/e26121104 - 17 Dec 2024
Viewed by 771
Abstract
Maximum correntropy criterion (MCC) has been an important method in machine learning and signal processing communities since it was successfully applied in various non-Gaussian noise scenarios. In comparison with the classical least squares method (LS), which takes only the second-order moment of models into consideration and belongs to the convex optimization problem, MCC captures the high-order information of models that play crucial roles in robust learning, which is usually accompanied by solving the non-convexity optimization problems. As we know, the theoretical research on convex optimizations has made significant achievements, while theoretical understandings of non-convex optimization are still far from mature. Motivated by the popularity of the stochastic gradient descent (SGD) for solving nonconvex problems, this paper considers SGD applied to the kernel version of MCC, which has been shown to be robust to outliers and non-Gaussian data in nonlinear structure models. As the existing theoretical results for the SGD algorithm applied to the kernel MCC are not well established, we present the rigorous analysis for the convergence behaviors and provide explicit convergence rates under some standard conditions. Our work can fill the gap between optimization process and convergence during the iterations: the iterates need to converge to the global minimizer while the obtained estimator cannot ensure the global optimality in the learning process.
(This article belongs to the Special Issue Advances in Probabilistic Machine Learning)
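Since the abstract summarizes the algorithm only in words, here is a minimal illustrative sketch of functional stochastic gradient descent on a kernel-based correntropy loss. This is not the authors' code: the Gaussian kernel, bandwidth, step-size schedule, and toy data are assumptions made purely for illustration.

# Illustrative sketch: functional SGD on the kernel maximum correntropy criterion.
# The estimator f lives in an RKHS with a Gaussian kernel and is stored as a kernel
# expansion over the visited points; all constants below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(x, z, bandwidth=1.0):
    # Gaussian (RBF) kernel k(x, z).
    return np.exp(-np.sum((x - z) ** 2) / (2 * bandwidth ** 2))

def predict(x, centers, coeffs, bandwidth=1.0):
    # Evaluate f(x) = sum_i coeffs[i] * k(centers[i], x).
    return sum(a * gauss_kernel(c, x, bandwidth) for c, a in zip(centers, coeffs))

# Toy regression data with heavy-tailed (non-Gaussian) noise.
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_t(df=2, size=n)

sigma = 1.0   # correntropy scale: loss(r) = sigma^2 * (1 - exp(-r^2 / (2 * sigma^2)))
centers, coeffs = [], []

for t, (x_t, y_t) in enumerate(zip(X, y), start=1):
    eta_t = 1.0 / np.sqrt(t)                     # decaying step size
    r = y_t - predict(x_t, centers, coeffs)      # current residual
    # Functional SGD step: the negative gradient of the correntropy loss at x_t is
    # (residual * Gaussian weight) * k(x_t, .), so large residuals are down-weighted.
    centers.append(x_t)
    coeffs.append(eta_t * r * np.exp(-r ** 2 / (2 * sigma ** 2)))

print("f(X[0]) =", predict(X[0], centers, coeffs))

The Gaussian weight exp(-r^2 / (2 * sigma^2)) in the update is what distinguishes MCC from least squares: residuals caused by heavy-tailed noise contribute almost nothing to the step, which is the source of the robustness discussed in the abstract, at the price of a non-convex objective.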
