Open Access Feature Paper Article

An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension

SAFRAN TECH, Groupe Safran, 78772 Magny-les-Hameaux, France
Laboratoire Informatique Gaspard Monge (LIGM)-UMR 8049 CNRS, University Paris-East, 93162 Noisy-le-Grand, France
Center for Visual Computing, University Paris-Saclay, 91190 Gif-sur-Yvette, France
COSIM Research Laboratory, Higher School of Communication of Tunis (SUP’COM), University of Carthage, 2083 Ariana, Tunisia
Author to whom correspondence should be addressed.
Entropy 2018, 20(2), 110;
Received: 4 December 2017 / Revised: 16 January 2018 / Accepted: 30 January 2018 / Published: 7 February 2018
(This article belongs to the Special Issue Probabilistic Methods for Inverse Problems)
In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or derived from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not have a simple structure. Another challenge is the design of Metropolis–Hastings proposals that make use of information about the local geometry of the target density in order to speed up convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies, stemming either from the prior or the likelihood, in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems, namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of a signal corrupted by mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler, since the new conditional distribution no longer contains highly heterogeneous correlations.
Thus, the computational cost of each iteration of the Gibbs sampler is significantly reduced while ensuring good mixing properties.
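The data-augmentation idea described in the abstract can be sketched on a toy linear-Gaussian model. The sketch below is an illustrative assumption, not the paper's exact construction or experiments: all names, dimensions, and the specific choice R = I/μ − HᵀH/σ² (with μ small enough that R is positive definite) are ours. Marginalizing the auxiliary variable v leaves the posterior of x unchanged, but conditionally on v the precision of x becomes diagonal, so the Gibbs update for x decouples componentwise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): y = H x + n with n ~ N(0, sigma2 I) and a
# Gaussian prior x ~ N(0, diag(gamma)^{-1}).  The posterior precision
# HtH/sigma2 + diag(gamma) mixes two bases (dense HtH vs. diagonal prior),
# which is what makes direct sampling hard in high dimension.
d = 4
H = rng.standard_normal((d, d))
sigma2 = 1.0
gamma = np.linspace(0.5, 2.0, d)           # diagonal prior precision
y = rng.standard_normal(d)

HtH = H.T @ H / sigma2                     # likelihood precision term
Hty = H.T @ y / sigma2

# Auxiliary variable v | x ~ N(x, R^{-1}) with R = I/mu - HtH, where
# mu < 1 / lambda_max(HtH) keeps R positive definite.  Marginalizing v
# recovers the original posterior, but x | v, y now has *diagonal*
# precision q = 1/mu + gamma: the likelihood correlations are absorbed by v.
mu = 0.9 / np.linalg.eigvalsh(HtH).max()
R = np.eye(d) / mu - HtH
q = 1.0 / mu + gamma                       # conditional precision of x | v

def gibbs(n_iter):
    """Alternate exact draws from v | x and the separable x | v, y."""
    L = np.linalg.cholesky(R)              # R = L L^T
    x = np.zeros(d)
    out = np.empty((n_iter, d))
    for k in range(n_iter):
        # v | x ~ N(x, R^{-1}):  v = x + L^{-T} xi
        v = x + np.linalg.solve(L.T, rng.standard_normal(d))
        # x | v, y ~ N((Hty + R v) / q, diag(1/q))  -- componentwise draws
        x = (Hty + R @ v) / q + rng.standard_normal(d) / np.sqrt(q)
        out[k] = x
    return out

samples = gibbs(1000)
```

In this tiny demo the v-step uses a dense Cholesky factor, which would defeat the purpose at scale; the regime the paper targets is one where that conditional is cheap, e.g., when H models a convolution so that HᵀH (and hence R) diagonalizes in the Fourier basis, while the remaining correlations stay with the diagonal x-step.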
Keywords: data augmentation; auxiliary variables; MCMC; Gaussian models; large scale problems; Bayesian methods
MDPI and ACS Style

Marnissi, Y.; Chouzenoux, E.; Benazza-Benyahia, A.; Pesquet, J.-C. An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension. Entropy 2018, 20, 110.

