An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension
Abstract
1. Introduction
2. Motivation
2.1. Sampling Issues in High-Dimensional Space
2.1.1. Sampling from a High-Dimensional Gaussian Distribution
 Perturbation: Draw a Gaussian random vector ${\mathbf{n}}_{1}\sim \mathcal{N}({\mathbf{0}}_{Q},\mathbf{G})$.
 Optimization: Solve the linear system $\mathbf{G}{\mathbf{n}}_{2}={\mathbf{n}}_{1}+{\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{z}+{\mathbf{G}}_{\mathbf{x}}{\mathbf{m}}_{\mathbf{x}}$.
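The two steps above can be sketched on a small synthetic instance (all matrices here are hypothetical stand-ins; in the high-dimensional settings targeted by the paper, the linear system would be solved iteratively, e.g., with conjugate gradients, rather than with a direct solver):

```python
import numpy as np

# Hypothetical small-scale illustration of the Perturbation-Optimization scheme.
rng = np.random.default_rng(0)
N, Q = 12, 8
H = rng.standard_normal((N, Q))          # observation operator
Lam = np.diag(rng.uniform(0.5, 2.0, N))  # noise precision matrix Λ
G_x = np.eye(Q)                          # prior precision matrix G_x
m_x = np.zeros(Q)                        # prior mean m_x
z = rng.standard_normal(N)               # observations

G = H.T @ Lam @ H + G_x                  # posterior precision matrix G
# Perturbation: draw n1 ~ N(0_Q, G) via a Cholesky factor of G.
n1 = np.linalg.cholesky(G) @ rng.standard_normal(Q)
# Optimization: solve G n2 = n1 + H^T Λ z + G_x m_x.
n2 = np.linalg.solve(G, n1 + H.T @ Lam @ z + G_x @ m_x)
# n2 is then a sample from N(G^{-1}(H^T Λ z + G_x m_x), G^{-1}).
```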
2.1.2. Designing Efficient Proposals in MH Algorithms
2.2. Auxiliary Variables and Data Augmentation Strategies
 the first condition is satisfied thanks to the definition of the joint distribution in (9), provided that $\mathsf{p}(\mathbf{u}\mid \mathbf{x},\mathbf{z})$ is a density of a proper distribution;
 for the second condition, it can be noticed that if the first condition is met, the Fubini–Tonelli theorem allows us to claim that$${\int}_{{\mathbb{R}}^{J}}\left({\int}_{{\mathbb{R}}^{Q}}\mathsf{p}(\mathbf{x},\mathbf{u}\mid \mathbf{z})\phantom{\rule{4pt}{0ex}}\mathrm{d}\mathbf{x}\right)\mathrm{d}\mathbf{u}={\int}_{{\mathbb{R}}^{Q}}\left({\int}_{{\mathbb{R}}^{J}}\mathsf{p}(\mathbf{x},\mathbf{u}\mid \mathbf{z})\phantom{\rule{4pt}{0ex}}\mathrm{d}\mathbf{u}\right)\mathrm{d}\mathbf{x}={\int}_{{\mathbb{R}}^{Q}}\mathsf{p}(\mathbf{x}\mid \mathbf{z})\phantom{\rule{4pt}{0ex}}\mathrm{d}\mathbf{x}=1.$$This shows that $\mathsf{p}(\mathbf{u}\mid \mathbf{z})$ as defined in $({C}_{2})$ is a valid probability density function.
 Sample ${\mathbf{u}}^{(t+1)}$ from ${\mathcal{P}}_{\mathbf{u}\mid {\mathbf{x}}^{(t)},\mathbf{z}}$;
 Sample ${\mathbf{x}}^{(t+1)}$ from ${\mathcal{P}}_{\mathbf{x}\mid {\mathbf{u}}^{(t+1)},\mathbf{z}}$.
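This two-block alternation can be illustrated with a deliberately simple, hypothetical one-dimensional example (target $\mathsf{p}(x)=\mathcal{N}(0,1)$, auxiliary variable $u\mid x\sim \mathcal{N}(x,1)$, whence $x\mid u\sim \mathcal{N}(u/2,1/2)$):

```python
import numpy as np

# Toy two-block Gibbs sampler alternating the auxiliary variable u and the
# target variable x; the x-chain targets the marginal p(x) = N(0, 1).
rng = np.random.default_rng(1)
T = 20000
x = 0.0
xs = np.empty(T)
for t in range(T):
    u = rng.normal(x, 1.0)                 # sample u ~ P_{u | x}
    x = rng.normal(u / 2.0, np.sqrt(0.5))  # sample x ~ P_{x | u}
    xs[t] = x
```

With this many iterations, the empirical mean and variance of the chain should be close to 0 and 1, respectively.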
3. Proposed Approach
3.1. Correlated Gaussian Noise
Algorithm 1 Gibbs sampler with auxiliary variables in order to eliminate the coupling induced by $\mathsf{\Lambda}$. 
Initialize: ${\mathbf{x}}^{(0)}\in {\mathbb{R}}^{Q}$, ${\mathbf{v}}^{(0)}\in {\mathbb{R}}^{N}$, $\mu >0$ such that $\mu {\parallel \mathsf{\Lambda}\parallel}_{\mathrm{S}}<1$

Algorithm 2 Gibbs sampler with auxiliary variables in order to eliminate the coupling induced by ${\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H}$. 
Initialize: ${\mathbf{x}}^{(0)}\in {\mathbb{R}}^{Q}$, ${\mathbf{v}}^{(0)}\in {\mathbb{R}}^{Q}$, $\mu >0$ such that $\mu {\parallel {\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H}\parallel}_{\mathrm{S}}<1$

3.2. Scale Mixture of Gaussian Noise
3.2.1. Problem Formulation
 1. ${\int}_{{\mathbb{R}}^{J}}\mathsf{p}(\mathbf{x},\mathbf{\sigma},\mathbf{v}\mid \mathbf{z})\phantom{\rule{4pt}{0ex}}\mathrm{d}\mathbf{v}=\mathsf{p}(\mathbf{x},\mathbf{\sigma}\mid \mathbf{z})$,
 2. ${\int}_{{\mathbb{R}}^{Q}}{\int}_{{\mathbb{R}}^{N}}\mathsf{p}(\mathbf{x},\mathbf{\sigma},\mathbf{v}\mid \mathbf{z})\phantom{\rule{4pt}{0ex}}\mathrm{d}\mathbf{x}\phantom{\rule{4pt}{0ex}}\mathrm{d}\mathbf{\sigma}=\mathsf{p}(\mathbf{v}\mid \mathbf{z})$,
3.2.2. Proposed Algorithms
 Suppose first that there exists a constant $\nu >0$ such that$$(\forall t\geqslant 0)(\forall i\in \{1,\dots ,N\})\phantom{\rule{1.em}{0ex}}\nu \leqslant {\sigma}_{i}^{(t)}.$$
 Otherwise, when no $\nu >0$ satisfying (34) exists, the results in Section 3.1 remain valid when, at each iteration $t$, for a given value of ${\mathbf{\sigma}}^{(t)}$, we replace $\mathsf{\Lambda}$ by $\mathbf{D}({\mathbf{\sigma}}^{(t)})$. However, a main difference with respect to the case when $\nu >0$ exists is that $\mu$ then depends on the value of the mixing variable ${\mathbf{\sigma}}^{(t)}$ and hence can take different values along the iterations. Subsequently, $\mu (\mathbf{\sigma})$ will denote the chosen value of $\mu$ for a given value of $\mathbf{\sigma}$. Here again, two strategies can be distinguished for setting $\mu (\mathbf{\sigma})$, depending on the dependencies one wants to eliminate through the DA strategy.
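A minimal sketch of this iteration-dependent choice of $\mu(\mathbf{\sigma})$, under the assumption (mine, for illustration) that the noise precision is $\mathbf{D}(\mathbf{\sigma})=\mathrm{Diag}(1/\sigma_1^2,\dots,1/\sigma_N^2)$:

```python
import numpy as np

# Choosing mu(sigma^(t)) at each iteration when no uniform lower bound nu
# on the mixing variables exists (hypothetical values of sigma^(t)).
rng = np.random.default_rng(8)
eps = 0.5                                # fixed constant in (0, 1)
sigma_t = rng.uniform(0.1, 3.0, 16)      # current mixing variables sigma^(t)
mu_t = eps * np.min(sigma_t) ** 2        # mu(sigma^(t)) = eps * (min_i sigma_i^(t))^2
D = np.diag(1.0 / sigma_t ** 2)          # assumed form of D(sigma^(t))
# The data-augmentation requirement mu ||D(sigma)||_S < 1 then holds,
# since ||D(sigma)||_S = 1 / (min_i sigma_i)^2.
```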
Algorithm 3 Gibbs sampler with auxiliary variables in order to eliminate the coupling induced by $\mathbf{D}(\mathbf{\sigma})$ in the case of a scale mixture of Gaussian noise. 
Initialize: ${\mathbf{x}}^{(0)}\in {\mathbb{R}}^{Q}$, ${\mathbf{v}}^{(0)}\in {\mathbb{R}}^{N}$, ${\mathbf{\sigma}}^{(0)}\in {\mathbb{R}}_{+}^{N}$, $0<\epsilon <1$, $\mu ({\mathbf{\sigma}}^{(0)})=\epsilon {\left(\underset{1\leqslant i\leqslant N}{min}\phantom{\rule{0.166667em}{0ex}}{\sigma}_{i}^{(0)}\right)}^{2}$

Algorithm 4 Gibbs sampler with auxiliary variables in order to eliminate the coupling induced by ${\mathbf{H}}^{\top}\mathbf{D}(\mathbf{\sigma})\mathbf{H}$ in the case of a scale mixture of Gaussian noise. 
Initialize: ${\mathbf{x}}^{(0)}\in {\mathbb{R}}^{Q}$, ${\mathbf{v}}^{(0)}\in {\mathbb{R}}^{Q}$, ${\mathbf{\sigma}}^{(0)}\in {\mathbb{R}}_{+}^{N}$, $0<\epsilon <1$, $\mu ({\mathbf{\sigma}}^{(0)})=\epsilon \phantom{\rule{3.33333pt}{0ex}}{\parallel \mathbf{H}\parallel}_{\mathrm{S}}^{-2}\phantom{\rule{3.33333pt}{0ex}}{\left(\underset{1\leqslant i\leqslant N}{min}\phantom{\rule{0.166667em}{0ex}}{\sigma}_{i}^{(0)}\right)}^{2}$

3.2.3. Partially Collapsed Gibbs Sampling
Algorithm 5 PCGS in the case of a scale mixture of Gaussian noise. 
Initialize: ${\mathbf{x}}^{(0)}\in {\mathbb{R}}^{Q}$, ${\mathbf{v}}^{(0)}\in {\mathbb{R}}^{Q}$, ${\mathbf{\sigma}}^{(0)}\in {\mathbb{R}}_{+}^{N}$, ${\mathsf{\Theta}}^{(0)}\in {\mathbb{R}}^{P}$

3.3. High-Dimensional Gaussian Distribution
 If the prior precision matrix ${\mathbf{G}}_{\mathbf{x}}$ and the observation matrix $\mathbf{H}$ can be diagonalized in the same basis, it can be of interest to add the auxiliary variable ${\mathbf{v}}_{1}$ in the data fidelity term. Following Algorithm 1, let ${\mu}_{1}>0$ be such that ${\mu}_{1}{\parallel \mathsf{\Lambda}\parallel}_{\mathrm{S}}<1$ and$${\mathbf{v}}_{1}\mid \mathbf{x}\sim \mathcal{N}\left(\left({\displaystyle \frac{1}{{\mu}_{1}}}{\mathbf{I}}_{N}-\mathsf{\Lambda}\right)\mathbf{H}\mathbf{x},\phantom{\rule{4pt}{0ex}}{\displaystyle \frac{1}{{\mu}_{1}}}{\mathbf{I}}_{N}-\mathsf{\Lambda}\right).$$The resulting conditional distribution of the target signal $\mathbf{x}$ given the auxiliary variable ${\mathbf{v}}_{1}$ and the vector of observations $\mathbf{z}$ is a Gaussian distribution with the following parameters:$$\tilde{\mathbf{G}}={\displaystyle \frac{1}{{\mu}_{1}}}{\mathbf{H}}^{\top}\mathbf{H}+{\mathbf{G}}_{\mathbf{x}},$$$$\tilde{\mathbf{m}}={\tilde{\mathbf{G}}}^{-1}\left({\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{z}+{\mathbf{G}}_{\mathbf{x}}{\mathbf{m}}_{\mathbf{x}}+{\mathbf{H}}^{\top}{\mathbf{v}}_{1}\right).$$Then, sampling the target signal can be performed by passing to the transform domain where $\mathbf{H}$ and ${\mathbf{G}}_{\mathbf{x}}$ are diagonalizable (e.g., the Fourier domain when $\mathbf{H}$ and ${\mathbf{G}}_{\mathbf{x}}$ are circulant). Similarly, if it is possible to write ${\mathbf{G}}_{\mathbf{x}}={\mathbf{V}}^{\top}\mathsf{\Omega}\mathbf{V}$, such that $\mathbf{H}$ and $\mathbf{V}$ can be diagonalized in the same basis, we suggest introducing an extra auxiliary variable ${\mathbf{v}}_{2}$, independent of ${\mathbf{v}}_{1}$, in the prior term to eliminate the coupling introduced by $\mathsf{\Omega}$ when passing to the transform domain. 
Let ${\mu}_{2}>0$ be such that ${\mu}_{2}{\parallel \mathsf{\Omega}\parallel}_{\mathrm{S}}<1$ and let the distribution of ${\mathbf{v}}_{2}$ conditionally to $\mathbf{x}$ be given by$${\mathbf{v}}_{2}\mid \mathbf{x}\sim \mathcal{N}\left(\left({\displaystyle \frac{1}{{\mu}_{2}}}{\mathbf{I}}_{N}-\mathsf{\Omega}\right)\mathbf{V}\mathbf{x},\phantom{\rule{4pt}{0ex}}{\displaystyle \frac{1}{{\mu}_{2}}}{\mathbf{I}}_{N}-\mathsf{\Omega}\right).$$The joint distribution of the unknown parameters is given by$$\mathsf{p}(\mathbf{x},{\mathbf{v}}_{1},{\mathbf{v}}_{2}\mid \mathbf{z})=\mathsf{p}(\mathbf{x}\mid \mathbf{z})\phantom{\rule{0.166667em}{0ex}}\mathsf{p}({\mathbf{v}}_{1}\mid \mathbf{x},\mathbf{z})\phantom{\rule{0.166667em}{0ex}}\mathsf{p}({\mathbf{v}}_{2}\mid \mathbf{x},\mathbf{z}).$$It follows that the conditional distribution of $\mathbf{x}$ given $\mathbf{z}$, ${\mathbf{v}}_{1}$, and ${\mathbf{v}}_{2}$ is Gaussian with parameters:$$\tilde{\mathbf{G}}={\displaystyle \frac{1}{{\mu}_{1}}}{\mathbf{H}}^{\top}\mathbf{H}+{\displaystyle \frac{1}{{\mu}_{2}}}{\mathbf{V}}^{\top}\mathbf{V},$$$$\tilde{\mathbf{m}}={\tilde{\mathbf{G}}}^{-1}\left({\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{z}+{\mathbf{G}}_{\mathbf{x}}{\mathbf{m}}_{\mathbf{x}}+{\mathbf{H}}^{\top}{\mathbf{v}}_{1}+{\mathbf{V}}^{\top}{\mathbf{v}}_{2}\right).$$
 If ${\mathbf{G}}_{\mathbf{x}}$ and $\mathbf{H}$ are not diagonalizable in the same basis, the introduction of an auxiliary variable either in the data fidelity term or in the prior allows us to eliminate the coupling between these two heterogeneous operators. Let ${\mu}_{1}>0$ be such that ${\mu}_{1}{\parallel {\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H}\parallel}_{\mathrm{S}}<1$ and$${\mathbf{v}}_{1}\mid \mathbf{x}\sim \mathcal{N}\left(\left({\displaystyle \frac{1}{{\mu}_{1}}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H}\right)\mathbf{x},\phantom{\rule{4pt}{0ex}}{\displaystyle \frac{1}{{\mu}_{1}}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H}\right).$$Then, the parameters of the Gaussian posterior distribution of $\mathbf{x}$ given ${\mathbf{v}}_{1}$ read:$$\tilde{\mathbf{G}}={\displaystyle \frac{1}{{\mu}_{1}}}{\mathbf{I}}_{Q}+{\mathbf{G}}_{\mathbf{x}}\phantom{\rule{0.166667em}{0ex}},$$$$\tilde{\mathbf{m}}={\tilde{\mathbf{G}}}^{-1}\left({\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{z}+{\mathbf{G}}_{\mathbf{x}}{\mathbf{m}}_{\mathbf{x}}+{\mathbf{v}}_{1}\right).$$Note that if ${\mathbf{G}}_{\mathbf{x}}$ has some simple structure (e.g., diagonal, block diagonal, sparse, circulant, etc.), the precision matrix (50) will inherit this simple structure. Otherwise, if ${\mathbf{G}}_{\mathbf{x}}$ does not present any specific structure, one could apply the proposed DA method to both the data fidelity and prior terms. It suffices to introduce an extra auxiliary variable ${\mathbf{v}}_{2}$ in the prior law, in addition to the auxiliary variable ${\mathbf{v}}_{1}$ in (49). 
Let ${\mu}_{2}>0$ be such that ${\mu}_{2}{\parallel {\mathbf{G}}_{\mathbf{x}}\parallel}_{\mathrm{S}}<1$ and$${\mathbf{v}}_{2}\mid \mathbf{x}\sim \mathcal{N}\left(\left({\displaystyle \frac{1}{{\mu}_{2}}}{\mathbf{I}}_{Q}-{\mathbf{G}}_{\mathbf{x}}\right)\mathbf{x},\phantom{\rule{4pt}{0ex}}{\displaystyle \frac{1}{{\mu}_{2}}}{\mathbf{I}}_{Q}-{\mathbf{G}}_{\mathbf{x}}\right).$$Then, the posterior distribution of $\mathbf{x}$ given ${\mathbf{v}}_{1}$ and ${\mathbf{v}}_{2}$ is Gaussian with the following parameters:$$\tilde{\mathbf{G}}={\displaystyle \frac{1}{\mu}}{\mathbf{I}}_{Q},$$$$\tilde{\mathbf{m}}=\mu \left({\mathbf{v}}_{1}+{\mathbf{v}}_{2}+{\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{z}+{\mathbf{G}}_{\mathbf{x}}{\mathbf{m}}_{\mathbf{x}}\right),$$where$$\mu ={\displaystyle \frac{{\mu}_{1}{\mu}_{2}}{{\mu}_{1}+{\mu}_{2}}}.$$
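A minimal sketch of one Gibbs sweep in this doubly augmented model, assuming (as in the surrounding derivation) that ${\mathbf{v}}_{1}$ and ${\mathbf{v}}_{2}$ are conditionally Gaussian with means $((1/{\mu}_{1}){\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H})\mathbf{x}$ and $((1/{\mu}_{2}){\mathbf{I}}_{Q}-{\mathbf{G}}_{\mathbf{x}})\mathbf{x}$ and matching covariances; all problem data are hypothetical:

```python
import numpy as np

# With auxiliary variables in both the data-fidelity and prior terms, the
# conditional precision of x is (1/mu) I_Q, so x is sampled coordinate-wise.
rng = np.random.default_rng(2)
N, Q = 10, 6
H = rng.standard_normal((N, Q))
Lam = np.diag(rng.uniform(0.5, 1.5, N))
G_x = 2.0 * np.eye(Q)
m_x = np.zeros(Q)
z = rng.standard_normal(N)

A = H.T @ Lam @ H
mu1 = 0.9 / np.linalg.norm(A, 2)         # mu1 ||H^T Λ H|| < 1
mu2 = 0.9 / np.linalg.norm(G_x, 2)       # mu2 ||G_x|| < 1
mu = mu1 * mu2 / (mu1 + mu2)

x = np.zeros(Q)
C1 = np.eye(Q) / mu1 - A                 # covariance of v1 | x (PD by construction)
C2 = np.eye(Q) / mu2 - G_x               # covariance of v2 | x (PD by construction)
v1 = C1 @ x + np.linalg.cholesky(C1) @ rng.standard_normal(Q)
v2 = C2 @ x + np.linalg.cholesky(C2) @ rng.standard_normal(Q)
m_tilde = mu * (v1 + v2 + H.T @ Lam @ z + G_x @ m_x)
x = m_tilde + np.sqrt(mu) * rng.standard_normal(Q)  # precision (1/mu) I_Q
```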
3.4. Sampling the Auxiliary Variable
 (1) Generate ${\mathbf{n}}^{(t+1)}\sim \mathcal{N}\left({\mathbf{0}}_{N},{\displaystyle \frac{1}{\beta}}{\mathbf{I}}_{N}-\mathsf{\Lambda}\right)$,
 (2) Generate ${\mathbf{y}}^{(t+1)}\sim \mathcal{N}\left({\mathbf{0}}_{Q},{\displaystyle \frac{1}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H}\right)$ with $\lambda ={\displaystyle \frac{\mu}{\beta}}\leqslant {\displaystyle \frac{\sqrt{\epsilon}}{{\parallel \mathbf{H}\parallel}_{\mathrm{S}}^{2}}}$,
 (3) Compute ${\mathbf{v}}^{(t+1)}=\left({\displaystyle \frac{1}{\mu}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H}\right){\mathbf{x}}^{(t)}+{\mathbf{H}}^{\top}{\mathbf{n}}^{(t+1)}+{\displaystyle \frac{1}{\sqrt{\beta}}}{\mathbf{y}}^{(t+1)}$.
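The covariance splitting that underlies this three-step construction, namely $(1/\mu){\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathsf{\Lambda}\mathbf{H}={\mathbf{H}}^{\top}((1/\beta){\mathbf{I}}_{N}-\mathsf{\Lambda})\mathbf{H}+(1/\beta)((1/\lambda){\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H})$ with $\lambda=\mu/\beta$, can be checked numerically on a hypothetical random instance:

```python
import numpy as np

# Numerical check that the two independent Gaussian terms in steps (1)-(2)
# add up to the target covariance of the auxiliary variable v.
rng = np.random.default_rng(3)
N, Q = 9, 5
H = rng.standard_normal((N, Q))
Lam = np.diag(rng.uniform(0.5, 2.0, N))
eps = 0.25
mu = eps / (np.linalg.norm(H, 2) ** 2 * np.linalg.norm(Lam, 2))
beta = np.sqrt(eps) / np.linalg.norm(Lam, 2)   # gives lambda = sqrt(eps)/||H||^2
lam = mu / beta

lhs = np.eye(Q) / mu - H.T @ Lam @ H
rhs = H.T @ (np.eye(N) / beta - Lam) @ H + (np.eye(Q) / lam - H.T @ H) / beta
# Both summands are valid covariance matrices (positive semidefinite).
```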
 In the particular case when $\mathbf{H}$ is circulant, sampling can be performed in the Fourier domain. More generally, since ${\mathbf{H}}^{\top}\mathbf{H}$ is symmetric, there exists an orthogonal matrix $\mathbf{N}$ such that $\mathbf{N}{\mathbf{H}}^{\top}\mathbf{H}{\mathbf{N}}^{\top}$ is diagonal with nonnegative diagonal entries. It follows that sampling from the Gaussian distribution with covariance matrix ${\displaystyle \frac{1}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H}$ can be performed easily within the basis defined by the matrix $\mathbf{N}$.
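A minimal sketch of this eigenbasis approach (hypothetical dense example; in the circulant case the basis would be the DFT and the eigendecomposition implicit):

```python
import numpy as np

# Sampling from N(0, (1/lambda) I_Q - H^T H) through the orthogonal
# eigenbasis of H^T H.
rng = np.random.default_rng(4)
N, Q = 8, 5
H = rng.standard_normal((N, Q))
lam = 0.5 / np.linalg.norm(H, 2) ** 2          # ensures the covariance is PD
w, U = np.linalg.eigh(H.T @ H)                 # H^T H = U diag(w) U^T
d = 1.0 / lam - w                              # eigenvalues of the covariance
y = U @ (np.sqrt(d) * rng.standard_normal(Q))  # y ~ N(0, (1/lam) I_Q - H^T H)
cov = np.eye(Q) / lam - H.T @ H
```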
 Suppose that $\mathbf{H}$ satisfies $\mathbf{H}{\mathbf{H}}^{\top}=\nu {\mathbf{I}}_{N}$ with $\nu >0$, which is the case, for example, of tight frame synthesis operators or decimation matrices. Note that $\nu \lambda \leqslant \sqrt{\epsilon}<1$. We then have:$${\displaystyle \frac{1}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H}={\left({\displaystyle \frac{1}{\sqrt{\lambda}}}{\mathbf{I}}_{Q}-{\displaystyle \frac{1-\sqrt{1-\nu \lambda}}{\nu \sqrt{\lambda}}}{\mathbf{H}}^{\top}\mathbf{H}\right)}^{2}.$$It follows that a sample from the Gaussian distribution with covariance matrix ${\displaystyle \frac{1}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H}$ can be obtained as$${\mathbf{y}}^{(t+1)}=\left({\displaystyle \frac{1}{\sqrt{\lambda}}}{\mathbf{I}}_{Q}-{\displaystyle \frac{1-\sqrt{1-\nu \lambda}}{\nu \sqrt{\lambda}}}{\mathbf{H}}^{\top}\mathbf{H}\right){\mathbf{w}}^{(t+1)},\phantom{\rule{1.em}{0ex}}{\mathbf{w}}^{(t+1)}\sim \mathcal{N}({\mathbf{0}}_{Q},{\mathbf{I}}_{Q}).$$
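The square-root factorization $(1/\lambda){\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H}=((1/\sqrt{\lambda}){\mathbf{I}}_{Q}-c\,{\mathbf{H}}^{\top}\mathbf{H})^{2}$ with $c=(1-\sqrt{1-\nu\lambda})/(\nu\sqrt{\lambda})$ can be verified numerically on a hypothetical semi-orthogonal $\mathbf{H}$ built from a QR decomposition:

```python
import numpy as np

# Build H with H H^T = nu I_N, then check the explicit matrix square root.
rng = np.random.default_rng(5)
N, Q, nu = 4, 7, 2.0
U, _ = np.linalg.qr(rng.standard_normal((Q, Q)))
H = np.sqrt(nu) * U[:N, :]                     # rows orthogonal, scaled by sqrt(nu)

lam = 0.4 / nu                                 # so that nu * lam < 1
c = (1.0 - np.sqrt(1.0 - nu * lam)) / (nu * np.sqrt(lam))
S = np.eye(Q) / np.sqrt(lam) - c * H.T @ H     # candidate square root
# y = S w with w ~ N(0_Q, I_Q) then has covariance (1/lam) I_Q - H^T H.
```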
 Suppose that $\mathbf{H}=\mathbf{M}\mathbf{P}$ with $\mathbf{M}\in {\mathbb{R}}^{N\times K}$ and $\mathbf{P}\in {\mathbb{R}}^{K\times Q}$. Hence, one can set $\lambda >0$ and $\tilde{\lambda}>0$ such that$$\lambda {\parallel \mathbf{P}\parallel}_{\mathrm{S}}^{2}<\tilde{\lambda}<{\displaystyle \frac{1}{{\parallel \mathbf{M}\parallel}_{\mathrm{S}}^{2}}}.$$For example, for $\mu ={\displaystyle \frac{\epsilon}{{\parallel \mathbf{P}\parallel}_{\mathrm{S}}^{2}{\parallel \mathbf{M}\parallel}_{\mathrm{S}}^{2}{\parallel \mathsf{\Lambda}\parallel}_{\mathrm{S}}}}$, we have $\lambda ={\displaystyle \frac{\sqrt{\epsilon}}{{\parallel \mathbf{P}\parallel}_{\mathrm{S}}^{2}{\parallel \mathbf{M}\parallel}_{\mathrm{S}}^{2}}}$. Then, we can set $\tilde{\lambda}={\displaystyle \frac{{\epsilon}^{1/4}}{{\parallel \mathbf{M}\parallel}_{\mathrm{S}}^{2}}}$. It follows that$${\displaystyle \frac{1}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H}={\displaystyle \frac{1}{\tilde{\lambda}}}\left({\displaystyle \frac{\tilde{\lambda}}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{P}}^{\top}\mathbf{P}\right)+{\mathbf{P}}^{\top}\left({\displaystyle \frac{1}{\tilde{\lambda}}}{\mathbf{I}}_{K}-{\mathbf{M}}^{\top}\mathbf{M}\right)\mathbf{P}.$$It appears that if it is possible to readily draw random vectors ${\mathbf{y}}_{1}^{(t+1)}$ and ${\mathbf{y}}_{2}^{(t+1)}$ from the Gaussian distributions with covariance matrices ${\displaystyle \frac{\tilde{\lambda}}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{P}}^{\top}\mathbf{P}$ and ${\displaystyle \frac{1}{\tilde{\lambda}}}{\mathbf{I}}_{K}-{\mathbf{M}}^{\top}\mathbf{M}$, respectively (for example, when $\mathbf{P}$ is a tight frame analysis operator and $\mathbf{M}$ is a convolution matrix with periodic boundary conditions), a sample from the Gaussian distribution with covariance matrix ${\displaystyle \frac{1}{\lambda}}{\mathbf{I}}_{Q}-{\mathbf{H}}^{\top}\mathbf{H}$ can be obtained as$${\mathbf{y}}^{(t+1)}={\displaystyle \frac{1}{\sqrt{\tilde{\lambda}}}}{\mathbf{y}}_{1}^{(t+1)}+{\mathbf{P}}^{\top}{\mathbf{y}}_{2}^{(t+1)}.$$
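The splitting used for $\mathbf{H}=\mathbf{M}\mathbf{P}$ can likewise be checked numerically (hypothetical sizes and random factors):

```python
import numpy as np

# Check: (1/lam) I_Q - H^T H
#   = (1/ltil) ((ltil/lam) I_Q - P^T P) + P^T ((1/ltil) I_K - M^T M) P,
# so y = (1/sqrt(ltil)) y1 + P^T y2 has the desired covariance when
# y1 ~ N(0, (ltil/lam) I_Q - P^T P) and y2 ~ N(0, (1/ltil) I_K - M^T M).
rng = np.random.default_rng(6)
N, K, Q = 6, 8, 5
M = rng.standard_normal((N, K))
P = rng.standard_normal((K, Q))
H = M @ P
nP2 = np.linalg.norm(P, 2) ** 2
nM2 = np.linalg.norm(M, 2) ** 2
lam = 0.5 / (nP2 * nM2)
ltil = np.sqrt(0.5) / nM2                      # lam ||P||^2 < ltil < 1/||M||^2

lhs = np.eye(Q) / lam - H.T @ H
rhs = (np.eye(Q) * ltil / lam - P.T @ P) / ltil \
    + P.T @ (np.eye(K) / ltil - M.T @ M) @ P
```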
4. Application to Multichannel Image Recovery in the Presence of Gaussian Noise
4.1. Problem Formulation
4.2. Sampling from the Posterior Distribution of the Wavelet Coefficients

4.3. Hyperparameters Estimation
4.3.1. Separation Strategy
4.3.2. Prior and Posterior Distribution for the Hyperparameters
4.3.3. Initialization
4.4. Experimental Results
5. Application to Image Recovery in the Presence of Two Mixed Gaussian Noise Terms
5.1. Problem Formulation
5.2. Prior Distributions
Posterior Distributions
 $\left(\forall i\in \{1,\dots ,N\}\right)$ ${\sigma}_{i}$ is set to ${\kappa}_{2}$ with probability ${p}_{i}$ and to ${\kappa}_{1}$ with probability $1-{p}_{i}$, where ${p}_{i}={\displaystyle \frac{{\eta}_{i}}{1+{\eta}_{i}}}$ such that$${\eta}_{i}={\displaystyle \frac{\beta}{1-\beta}}\phantom{\rule{0.166667em}{0ex}}exp\left({\displaystyle \frac{{\kappa}_{2}^{2}-{\kappa}_{1}^{2}}{2{\kappa}_{1}^{2}{\kappa}_{2}^{2}}}{\left({z}_{i}-{\left[\mathbf{H}\mathbf{x}\right]}_{i}\right)}^{2}\right),$$
 $\beta \mid \mathbf{x},\mathbf{z},\mathbf{\sigma},{\kappa}_{1}^{2},{\kappa}_{2}^{2}\sim \mathcal{B}\left({n}_{2}+1,{n}_{1}+1\right)$, where $\mathcal{B}$ is the Beta distribution and ${n}_{1}$ and ${n}_{2}$ are the cardinalities of the sets $\{i\in \{1,\dots ,N\}\phantom{\rule{4pt}{0ex}}\mid \phantom{\rule{4pt}{0ex}}{\sigma}_{i}={\kappa}_{1}\}$ and $\{i\in \{1,\dots ,N\}\phantom{\rule{4pt}{0ex}}\mid \phantom{\rule{4pt}{0ex}}{\sigma}_{i}={\kappa}_{2}\}$, respectively, so that ${n}_{1}+{n}_{2}=N$,
 ${\kappa}_{1}^{2}\mid \mathbf{x},\mathbf{\sigma},\beta ,\mathbf{z}\sim \mathcal{IG}\left({a}_{1}+\frac{{n}_{1}}{2},\phantom{\rule{4pt}{0ex}}{b}_{1}+{\sum}_{i\mid {\sigma}_{i}={\kappa}_{1}}{\displaystyle \frac{{\left({z}_{i}-{\left[\mathbf{H}\mathbf{x}\right]}_{i}\right)}^{2}}{2}}\right)$,
 ${\kappa}_{2}^{2}\mid \mathbf{x},\mathbf{\sigma},\beta ,\mathbf{z}\sim \mathcal{IG}\left({a}_{2}+\frac{{n}_{2}}{2},\phantom{\rule{4pt}{0ex}}{b}_{2}+{\sum}_{i\mid {\sigma}_{i}={\kappa}_{2}}{\displaystyle \frac{{\left({z}_{i}-{\left[\mathbf{H}\mathbf{x}\right]}_{i}\right)}^{2}}{2}}\right)$,
 $\gamma \mid \mathbf{x}\sim \mathcal{G}\left({\displaystyle \frac{Q}{2}}+{a}_{\gamma},\phantom{\rule{4pt}{0ex}}{\displaystyle \frac{1}{2}}{\parallel \mathbf{L}\mathbf{x}\parallel}^{2}+{b}_{\gamma}\right)$.
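The hyperparameter draws listed above can be sketched as follows (hypothetical residuals, label assignments, and hyperparameter values $a_1,b_1,a_2,b_2,a_\gamma,b_\gamma$; the inverse-Gamma and Gamma draws assume the rate parameterization, and $\mathcal{IG}(a,b)$ is simulated as the reciprocal of a Gamma variable):

```python
import numpy as np

# One joint draw of (beta, kappa_1^2, kappa_2^2, gamma) from the conditionals.
rng = np.random.default_rng(7)
res = rng.standard_normal(100)            # residuals z_i - [Hx]_i (hypothetical)
labels = rng.integers(0, 2, 100)          # 0: sigma_i = kappa_1, 1: sigma_i = kappa_2
n1, n2 = int(np.sum(labels == 0)), int(np.sum(labels == 1))
a1 = b1 = a2 = b2 = 1e-2                  # assumed IG hyperparameters

beta = rng.beta(n2 + 1, n1 + 1)
kappa1_sq = 1.0 / rng.gamma(a1 + n1 / 2,
                            1.0 / (b1 + np.sum(res[labels == 0] ** 2) / 2))
kappa2_sq = 1.0 / rng.gamma(a2 + n2 / 2,
                            1.0 / (b2 + np.sum(res[labels == 1] ** 2) / 2))
# gamma | x ~ G(Q/2 + a_g, ||Lx||^2 / 2 + b_g), rate parameterization:
Q, a_g, b_g, Lx_norm_sq = 64, 1.0, 1.0, 10.0
gamma = rng.gamma(Q / 2 + a_g, 1.0 / (Lx_norm_sq / 2 + b_g))
```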
5.3. Sampling from the Posterior Distribution of $\mathbf{x}$
5.3.1. First Variant
AuxV1 

5.3.2. Second Variant
AuxV2 

5.4. Experimental Results
6. Conclusions
Author Contributions
Conflicts of Interest
References
 Bertero, M.; Boccacci, P. Introduction to Inverse Problems in Imaging; CRC Press: Boca Raton, FL, USA, 1998.
 Demoment, G. Image reconstruction and restoration: Overview of common estimation structure and problems. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 2024–2036.
 Marnissi, Y.; Zheng, Y.; Chouzenoux, E.; Pesquet, J.-C. A Variational Bayesian Approach for Image Restoration. Application to Image Deblurring with Poisson-Gaussian Noise. IEEE Trans. Comput. Imaging 2017, 3, 722–737.
 Chouzenoux, E.; Jezierska, A.; Pesquet, J.-C.; Talbot, H. A Convex Approach for Image Restoration with Exact Poisson-Gaussian Likelihood. SIAM J. Imaging Sci. 2015, 8, 2662–2682.
 Chaari, L.; Pesquet, J.-C.; Tourneret, J.-Y.; Ciuciu, P.; Benazza-Benyahia, A. A Hierarchical Bayesian Model for Frame Representation. IEEE Trans. Signal Process. 2010, 58, 5560–5571.
 Pustelnik, N.; Benazza-Benhayia, A.; Zheng, Y.; Pesquet, J.-C. Wavelet-Based Image Deconvolution and Reconstruction. In Wiley Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1999; pp. 1–34.
 Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109.
 Liu, J.S. Monte Carlo Strategies in Scientific Computing; Springer Series in Statistics; Springer-Verlag: New York, NY, USA, 2001.
 Gilks, W.R.; Richardson, S.; Spiegelhalter, D. Markov Chain Monte Carlo in Practice; Interdisciplinary Statistics; Chapman and Hall/CRC: Boca Raton, FL, USA, 1999.
 Gamerman, D.; Lopes, H.F. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference; Texts in Statistical Science; Chapman and Hall/CRC: Boca Raton, FL, USA, 2006.
 Glynn, P.W.; Iglehart, D.L. Importance sampling for stochastic simulations. Manag. Sci. 1989, 35, 1367–1392.
 Gilks, W.R.; Wild, P. Adaptive rejection sampling for Gibbs sampling. Appl. Stat. 1992, 41, 337–348.
 Neal, R.M. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo; Brooks, S., Gelman, A., Jones, G.L., Meng, X.-L., Eds.; CRC Press: Boca Raton, FL, USA, 2011; pp. 113–162.
 Jarner, S.F.; Hansen, E. Geometric ergodicity of Metropolis algorithms. Stoch. Process. Appl. 2000, 85, 341–361.
 Gilks, W.R.; Best, N.; Tan, K. Adaptive rejection Metropolis sampling within Gibbs sampling. Appl. Stat. 1995, 44, 455–472.
 Dobigeon, N.; Moussaoui, S.; Coulon, M.; Tourneret, J.-Y.; Hero, A.O. Joint Bayesian Endmember Extraction and Linear Unmixing for Hyperspectral Imagery. IEEE Trans. Signal Process. 2009, 57, 4355–4368.
 Roberts, G.O.; Gelman, A.; Gilks, W.R. Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab. 1997, 7, 110–120.
 Sherlock, C.; Fearnhead, P.; Roberts, G.O. The random walk Metropolis: Linking theory and practice through a case study. Stat. Sci. 2010, 25, 172–190.
 Roberts, G.O.; Stramer, O. Langevin diffusions and Metropolis-Hastings algorithms. Methodol. Comput. Appl. Probab. 2002, 4, 337–357.
 Martin, J.; Wilcox, C.L.; Burstedde, C.; Ghattas, O. A Stochastic Newton MCMC Method for Large-Scale Statistical Inverse Problems with Application to Seismic Inversion. SIAM J. Sci. Comput. 2012, 34, 1460–1487.
 Zhang, Y.; Sutton, C.A. Quasi-Newton Methods for Markov Chain Monte Carlo. In Proceedings of the Neural Information Processing Systems (NIPS 2011), Granada, Spain, 12–17 December 2011; pp. 2393–2401.
 Girolami, M.; Calderhead, B. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Stat. Soc. Ser. B Stat. Methodol. 2011, 73, 123–214.
 Van Dyk, D.A.; Meng, X.-L. The art of data augmentation. J. Comput. Graph. Stat. 2001, 10, 1–50.
 Féron, O.; Orieux, F.; Giovannelli, J.-F. Gradient Scan Gibbs Sampler: An efficient algorithm for high-dimensional Gaussian distributions. IEEE J. Sel. Top. Signal Process. 2016, 10, 343–352.
 Rue, H. Fast sampling of Gaussian Markov random fields. J. R. Stat. Soc. Ser. B Stat. Methodol. 2001, 63, 325–338.
 Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946.
 Chellappa, R.; Chatterjee, S. Classification of textures using Gaussian Markov random fields. IEEE Trans. Acoust. Speech Signal Process. 1985, 33, 959–963.
 Rue, H.; Held, L. Gaussian Markov Random Fields: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2005.
 Bardsley, J.M. MCMC-based image reconstruction with uncertainty quantification. SIAM J. Sci. Comput. 2012, 34, A1316–A1332.
 Papandreou, G.; Yuille, A.L. Gaussian sampling by local perturbations. In Proceedings of the Neural Information Processing Systems 23 (NIPS 2010), Vancouver, BC, Canada, 6–11 December 2010; pp. 1858–1866.
 Orieux, F.; Féron, O.; Giovannelli, J.-F. Sampling high-dimensional Gaussian distributions for general linear inverse problems. IEEE Signal Process. Lett. 2012, 19, 251–254.
 Gilavert, C.; Moussaoui, S.; Idier, J. Efficient Gaussian sampling for solving large-scale inverse problems using MCMC. IEEE Trans. Signal Process. 2015, 63, 70–80.
 Parker, A.; Fox, C. Sampling Gaussian distributions in Krylov spaces with conjugate gradients. SIAM J. Sci. Comput. 2012, 34, B312–B334.
 Lasanen, S. Non-Gaussian statistical inverse problems. Inverse Prob. Imaging 2012, 6, 267–287.
 Bach, F.; Jenatton, R.; Mairal, J.; Obozinski, G. Optimization with sparsity-inducing penalties. Found. Trends Mach. Learn. 2012, 4, 1–106.
 Kamilov, U.; Bostan, E.; Unser, M. Generalized total variation denoising via augmented Lagrangian cycle spinning with Haar wavelets. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012), Kyoto, Japan, 25–30 March 2012; pp. 909–912.
 Kolehmainen, V.; Lassas, M.; Niinimäki, K.; Siltanen, S. Sparsity-promoting Bayesian inversion. Inverse Prob. 2012, 28, 025005.
 Stuart, M.A.; Voss, J.; Wiberg, P. Conditional Path Sampling of SDEs and the Langevin MCMC Method. Commun. Math. Sci. 2004, 2, 685–697.
 Marnissi, Y.; Chouzenoux, E.; Benazza-Benyahia, A.; Pesquet, J.-C.; Duval, L. Reconstruction de signaux parcimonieux à l'aide d'un algorithme rapide d'échantillonnage stochastique. In Proceedings of the GRETSI, Lyon, France, 8–11 September 2015. (In French)
 Marnissi, Y.; Benazza-Benyahia, A.; Chouzenoux, E.; Pesquet, J.-C. Majorize-Minimize adapted Metropolis-Hastings algorithm. Application to multichannel image recovery. In Proceedings of the European Signal Processing Conference (EUSIPCO 2014), Lisbon, Portugal, 1–5 September 2014; pp. 1332–1336.
 Vacar, C.; Giovannelli, J.-F.; Berthoumieu, Y. Langevin and Hessian with Fisher approximation stochastic sampling for parameter estimation of structured covariance. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2011), Prague, Czech Republic, 22–27 May 2011; pp. 3964–3967.
 Schreck, A.; Fort, G.; Le Corff, S.; Moulines, E. A shrinkage-thresholding Metropolis adjusted Langevin algorithm for Bayesian variable selection. IEEE J. Sel. Top. Signal Process. 2016, 10, 366–375.
 Pereyra, M. Proximal Markov chain Monte Carlo algorithms. Stat. Comput. 2016, 26, 745–760.
 Atchadé, Y.F. An adaptive version for the Metropolis adjusted Langevin algorithm with a truncated drift. Methodol. Comput. Appl. Probab. 2006, 8, 235–254.
 Tanner, M.A.; Wong, W.H. The calculation of posterior distributions by data augmentation. J. Am. Stat. Assoc. 1987, 82, 528–540.
 Mira, A.; Tierney, L. On the use of auxiliary variables in Markov chain Monte Carlo sampling. Technical Report, 1997. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.7814 (accessed on 1 February 2018).
 Robert, C.; Casella, G. Monte Carlo Statistical Methods; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013.
 Doucet, A.; Sénécal, S.; Matsui, T. Space alternating data augmentation: Application to finite mixture of Gaussians and speaker recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2005), Philadelphia, PA, USA, 23 March 2005; pp. 708–713.
 Févotte, C.; Cappé, O.; Cemgil, A.T. Efficient Markov chain Monte Carlo inference in composite models with space alternating data augmentation. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP 2011), Nice, France, 28–30 June 2011; pp. 221–224.
 Giovannelli, J.-F. Unsupervised Bayesian convex deconvolution based on a field with an explicit partition function. IEEE Trans. Image Process. 2008, 17, 16–26.
 Higdon, D.M. Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications. J. Am. Stat. Assoc. 1998, 93, 585–595.
 Hurn, M. Difficulties in the use of auxiliary variables in Markov chain Monte Carlo methods. Stat. Comput. 1997, 7, 35–44.
 Damien, P.; Wakefield, J.; Walker, S. Gibbs sampling for Bayesian non-conjugate and hierarchical models by using auxiliary variables. J. R. Stat. Soc. Ser. B Stat. Methodol. 1999, 61, 331–344.
 Duane, S.; Kennedy, A.; Pendleton, B.J.; Roweth, D. Hybrid Monte Carlo. Phys. Lett. B 1987, 195, 216–222.
 Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. J. Appl. Stat. 1993, 20, 25–62.
 Idier, J. Convex Half-Quadratic Criteria and Interacting Auxiliary Variables for Image Restoration. IEEE Trans. Image Process. 2001, 10, 1001–1009.
 Geman, D.; Reynolds, G. Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 367–383.
 Champagnat, F.; Idier, J. A connection between half-quadratic criteria and EM algorithms. IEEE Signal Process. Lett. 2004, 11, 709–712.
 Nikolova, M.; Ng, M.K. Analysis of half-quadratic minimization methods for signal and image recovery. SIAM J. Sci. Comput. 2005, 27, 937–966.
 Bect, J.; Blanc-Féraud, L.; Aubert, G.; Chambolle, A. A l1-Unified Variational Framework for Image Restoration. In Proceedings of the European Conference on Computer Vision (ECCV 2004), Prague, Czech Republic, 11–14 May 2004; pp. 1–13.
 Cavicchioli, R.; Chaux, C.; Blanc-Féraud, L.; Zanni, L. ML estimation of wavelet regularization hyperparameters in inverse problems. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013), Vancouver, BC, Canada, 26–31 May 2013; pp. 1553–1557.
 Ciuciu, P. Méthodes Markoviennes en Estimation Spectrale Non Paramétriques. Application en Imagerie Radar Doppler. Ph.D. Thesis, Université Paris Sud-Paris XI, Orsay, France, October 2000.
 Andrews, D.F.; Mallows, C.L. Scale mixtures of normal distributions. J. R. Stat. Soc. Ser. B Methodol. 1974, 36, 99–102.
 West, M. On scale mixtures of normal distributions. Biometrika 1987, 74, 646–648.
 Van Dyk, D.A.; Park, T. Partially collapsed Gibbs samplers: Theory and methods. J. Am. Stat. Assoc. 2008, 103, 790–796.
 Park, T.; van Dyk, D.A. Partially collapsed Gibbs samplers: Illustrations and applications. J. Comput. Graph. Stat. 2009, 18, 283–305.
 Costa, F.; Batatia, H.; Oberlin, T.; Tourneret, J.-Y. A partially collapsed Gibbs sampler with accelerated convergence for EEG source localization. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP 2016), Palma de Mallorca, Spain, 26–29 June 2016; pp. 1–5.
 Kail, G.; Tourneret, J.-Y.; Hlawatsch, F.; Dobigeon, N. Blind deconvolution of sparse pulse sequences under a minimum distance constraint: A partially collapsed Gibbs sampler method. IEEE Trans. Signal Process. 2012, 60, 2727–2743.
 Chouzenoux, E.; Legendre, M.; Moussaoui, S.; Idier, J. Fast constrained least squares spectral unmixing using primal-dual interior-point optimization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 7, 59–69.
 Marnissi, Y.; Benazza-Benyahia, A.; Chouzenoux, E.; Pesquet, J.-C. Generalized multivariate exponential power prior for wavelet-based multichannel image restoration. In Proceedings of the IEEE International Conference on Image Processing (ICIP 2013), Melbourne, Australia, 15–18 September 2013; pp. 2402–2406.
 Laruelo, A.; Chaari, L.; Tourneret, J.-Y.; Batatia, H.; Ken, S.; Rowland, B.; Ferrand, R.; Laprie, A. Spatio-spectral regularization to improve magnetic resonance spectroscopic imaging quantification. NMR Biomed. 2016, 29, 918–931.
 Celebi, M.E.; Schaefer, G. Color Medical Image Analysis. In Lecture Notes on Computational Vision and Biomechanics; Springer: Berlin/Heidelberg, Germany, 2013.
 Criminisi, E. Spatial decision forests for MS lesion segmentation in multichannel magnetic resonance images. NeuroImage 2011, 57, 378–390.
 Delp, E.; Mitchell, O. Image compression using block truncation coding. IEEE Trans. Commun. 1979, 27, 1335–1342.
 Khelil-Cherif, N.; Benazza-Benyahia, A. Wavelet-based multivariate approach for multispectral image indexing. In Proceedings of the SPIE Conference on Wavelet Applications in Industrial Processing, Rabat, Morocco, 10 September–2 October 2004.
 Chaux, C.; Pesquet, J.-C.; Duval, L. Noise Covariance Properties in Dual-Tree Wavelet Decompositions. IEEE Trans. Inf. Theory 2007, 53, 4680–4700.
 Roberts, G.O.; Tweedie, R.L. Exponential Convergence of Langevin Distributions and Their Discrete Approximations. Bernoulli 1996, 2, 341–363.
 Murphy, K.P. Conjugate Bayesian Analysis of the Gaussian Distribution. Technical Report, 2007. Available online: https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf (accessed on 1 February 2018).
 Barnard, J.; McCulloch, R.; Meng, X.-L. Modeling covariance matrices in terms of standard deviations and correlations, with application to shrinkage. Stat. Sin. 2000, 10, 1281–1311.
 Fink, D. A Compendium of Conjugate Priors. 1997. Available online: https://www.johndcook.com/CompendiumOfConjugatePriors.pdf (accessed on 7 February 2018).
 Flandrin, P. Wavelet analysis and synthesis of fractional Brownian motion. IEEE Trans. Inf. Theory 1992, 38, 910–917.
 Velayudhan, D.; Paul, S. Two-phase approach for recovering images corrupted by Gaussian-plus-impulse noise. In Proceedings of the IEEE International Conference on Inventive Computation Technologies (ICICT 2016), Coimbatore, India, 26–27 August 2016; pp. 1–7.
 Chang, E.S.; Hung, C.C.; Liu, W.; Yina, J. A Denoising algorithm for remote sensing images with impulse noise. In Proceedings of the IEEE International Symposium on Geoscience and Remote Sensing (IGARSS 2016), Beijing, China, 10–15 July 2016; pp. 2905–2908.
| Problem Source | Proposed Auxiliary Variable | Resulting Conditional Density $\mathsf{p}(\mathbf{x}\mid\mathbf{z},\mathbf{v})\propto \exp\left(-\mathcal{J}(\mathbf{x}\mid\mathbf{v})\right)$ |
|---|---|---|
| $\mathsf{\Lambda}$ | $\mathbf{v}\sim\mathcal{N}\left(\left(\frac{1}{\mu}\mathbf{I}_{N}-\mathsf{\Lambda}\right)\mathbf{H}\mathbf{x},\;\frac{1}{\mu}\mathbf{I}_{N}-\mathsf{\Lambda}\right)$ | $\mathcal{J}(\mathbf{x}\mid\mathbf{v})=\frac{1}{2\mu}\left\Vert\mathbf{H}\mathbf{x}-\mu\left(\mathsf{\Lambda}\mathbf{z}+\mathbf{v}\right)\right\Vert^{2}+\mathsf{\Psi}(\mathbf{V}\mathbf{x})$ |
| $\mathbf{H}^{\top}\mathsf{\Lambda}\mathbf{H}$ | $\mathbf{v}\sim\mathcal{N}\left(\left(\frac{1}{\mu}\mathbf{I}_{Q}-\mathbf{H}^{\top}\mathsf{\Lambda}\mathbf{H}\right)\mathbf{x},\;\frac{1}{\mu}\mathbf{I}_{Q}-\mathbf{H}^{\top}\mathsf{\Lambda}\mathbf{H}\right)$ | $\mathcal{J}(\mathbf{x}\mid\mathbf{v})=\frac{1}{2\mu}\left\Vert\mathbf{x}-\mu\left(\mathbf{v}+\mathbf{H}^{\top}\mathsf{\Lambda}\mathbf{z}\right)\right\Vert^{2}+\mathsf{\Psi}(\mathbf{V}\mathbf{x})$ |
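The first row above can be illustrated with a minimal NumPy sketch (toy dimensions, a hypothetical operator `H` and symmetric positive-definite precision `Lam`; this is an illustrative assumption, not the authors' implementation). Once $\mathbf{v}$ is drawn, the conditional of $\mathbf{x}$ depends on the data only through an isotropic quadratic in $\mathbf{H}\mathbf{x}$, so the coupling induced by $\mathsf{\Lambda}$ disappears.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy dimensions and operators (all hypothetical).
N, Q = 8, 5
H = rng.standard_normal((N, Q))           # observation operator
A = rng.standard_normal((N, N))
Lam = A @ A.T / N + np.eye(N)             # a non-diagonal SPD precision Lambda
z = rng.standard_normal(N)                # observations

# The step size mu must satisfy mu * ||Lambda||_S < 1 (spectral norm),
# so that C = (1/mu) I_N - Lambda is positive definite.
mu = 0.9 / np.linalg.norm(Lam, 2)
C = np.eye(N) / mu - Lam

def sample_v(x):
    """Draw the auxiliary variable v | x ~ N(C H x, C)."""
    L = np.linalg.cholesky(C)             # valid: C is SPD by the choice of mu
    return C @ (H @ x) + L @ rng.standard_normal(N)

# Given v, the data-fidelity part of -log p(x | z, v) reduces to the
# isotropic quadratic (1/(2 mu)) || H x - mu (Lambda z + v) ||^2.
x = rng.standard_normal(Q)
v = sample_v(x)
J_data = 0.5 / mu * np.sum((H @ x - mu * (Lam @ z + v)) ** 2)
```

In a full Gibbs sweep this draw of $\mathbf{v}$ would alternate with a draw of $\mathbf{x}$ from the now-decoupled conditional, as in Algorithm 1.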
| | Metric | $b=1$ | $b=2$ | $b=3$ | $b=4$ | $b=5$ | $b=6$ | Average |
|---|---|---|---|---|---|---|---|---|
| Initial | BSNR (dB) | 24.27 | 30.28 | 31.73 | 28.92 | 26.93 | 22.97 | 27.52 |
| | PSNR (dB) | 25.47 | 21.18 | 19.79 | 22.36 | 23.01 | 26.93 | 23.12 |
| | SNR (dB) | 11.65 | 13.23 | 13.32 | 13.06 | 11.81 | 11.77 | 12.47 |
| | SSIM | 0.6203 | 0.5697 | 0.5692 | 0.5844 | 0.5558 | 0.6256 | 0.5875 |
| MMSE | BSNR (dB) | 32.04 | 38.33 | 39.21 | 38.33 | 35.15 | 34.28 | 36.22 |
| | PSNR (dB) | 28.63 | 25.39 | 23.98 | 26.90 | 27.25 | 31.47 | 27.27 |
| | SNR (dB) | 14.82 | 17.50 | 17.60 | 17.66 | 16.12 | 16.38 | 16.68 |
| | SSIM | 0.7756 | 0.8226 | 0.8156 | 0.8367 | 0.8210 | 0.8632 | 0.8225 |
| Parameter (true value) | | RW | MALA |
|---|---|---|---|
| $\widehat{\gamma}_{1}$ ($\gamma_{1} = 0.71$) | Mean | 0.67 | 0.67 |
| | Std. | (1.63 × $10^{-3}$) | (1.29 × $10^{-3}$) |
| $\widehat{\gamma}_{2}$ ($\gamma_{2} = 0.99$) | Mean | 0.83 | 0.83 |
| | Std. | (1.92 × $10^{-3}$) | (2.39 × $10^{-3}$) |
| $\widehat{\gamma}_{3}$ ($\gamma_{3} = 0.72$) | Mean | 0.62 | 0.61 |
| | Std. | (1.33 × $10^{-3}$) | (1.23 × $10^{-3}$) |
| $\widehat{\gamma}_{4}$ ($\gamma_{4} = 0.24$) | Mean | 0.24 | 0.24 |
| | Std. | (1.30 × $10^{-3}$) | (1.39 × $10^{-3}$) |
| $\widehat{\gamma}_{5}$ ($\gamma_{5} = 0.40$) | Mean | 0.37 | 0.37 |
| | Std. | (2.10 × $10^{-3}$) | (2.42 × $10^{-3}$) |
| $\widehat{\gamma}_{6}$ ($\gamma_{6} = 0.22$) | Mean | 0.21 | 0.21 |
| | Std. | (1.19 × $10^{-3}$) | (1.25 × $10^{-3}$) |
| $\widehat{\gamma}_{7}$ ($\gamma_{7} = 0.07$) | Mean | 0.08 | 0.08 |
| | Std. | (0.91 × $10^{-3}$) | (1.08 × $10^{-3}$) |
| $\widehat{\gamma}_{8}$ ($\gamma_{8} = 0.13$) | Mean | 0.13 | 0.13 |
| | Std. | (1.60 × $10^{-3}$) | (1.64 × $10^{-3}$) |
| $\widehat{\gamma}_{9}$ ($\gamma_{9} = 0.07$) | Mean | 0.07 | 0.07 |
| | Std. | (0.83 × $10^{-3}$) | (1 × $10^{-3}$) |
| $\widehat{\gamma}_{10}$ ($\gamma_{10} = 7.44$ × $10^{-4}$) | Mean | 7.80 × $10^{-4}$ | 7.87 × $10^{-4}$ |
| | Std. | (1.34 × $10^{-5}$) | (2.12 × $10^{-5}$) |
| $\det(\widehat{\mathbf{R}})$ ($\det(\mathbf{R}) = 5.79$ × $10^{-8}$) | Mean | 1.89 × $10^{-8}$ | 2.10 × $10^{-8}$ |
| | Std. | (9.96 × $10^{-10}$) | (2.24 × $10^{-9}$) |
| Parameter (true value) | | RJPO | AuxV1 | AuxV2 |
|---|---|---|---|---|
| $\widehat{\gamma}$ ($\gamma = 5.30$ × $10^{-3}$) | Mean | 4.78 × $10^{-3}$ | 4.84 × $10^{-3}$ | 4.90 × $10^{-3}$ |
| | Std. | (1.39 × $10^{-4}$) | (1.25 × $10^{-4}$) | (9.01 × $10^{-5}$) |
| $\widehat{\kappa}_{1}$ ($\kappa_{1} = 13$) | Mean | 12.97 | 12.98 | 12.98 |
| | Std. | (4.49 × $10^{-2}$) | (4.82 × $10^{-2}$) | (4.91 × $10^{-2}$) |
| $\widehat{\kappa}_{2}$ ($\kappa_{2} = 40$) | Mean | 39.78 | 39.77 | 39.80 |
| | Std. | (0.13) | (0.14) | (0.13) |
| $\widehat{\beta}$ ($\beta = 0.35$) | Mean | 0.35 | 0.35 | 0.35 |
| | Std. | (2.40 × $10^{-3}$) | (2.71 × $10^{-3}$) | (2.72 × $10^{-3}$) |
| $\widehat{x}_{i}$ ($x_{i} = 140$) | Mean | 143.44 | 143.19 | 145.91 |
| | Std. | (10.72) | (11.29) | (9.92) |
| | RJPO | AuxV1 | AuxV2 |
|---|---|---|---|
| T (s) | 5.27 | 0.13 | 0.12 |
| MSJ | 15.41 | 14.83 | 4.84 |
| MSJ/T | 2.92 | 114.07 | 40.33 |
| Efficiency | 1 | 39 | 13.79 |
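The last two rows of the table can be checked by simple arithmetic: MSJ/T is the mean square jump per second of computation, and the efficiency appears to be that rate normalized by RJPO's. A quick sketch with the table's values (the dictionary names are illustrative only):

```python
# Checking the efficiency figures: MSJ/T, then normalization by RJPO.
T   = {"RJPO": 5.27,  "AuxV1": 0.13,  "AuxV2": 0.12}  # time per iteration (s)
MSJ = {"RJPO": 15.41, "AuxV1": 14.83, "AuxV2": 4.84}  # mean square jump

rate = {k: MSJ[k] / T[k] for k in T}             # MSJ per second
eff  = {k: rate[k] / rate["RJPO"] for k in T}    # efficiency relative to RJPO
```

Up to rounding of the reported T and MSJ values, this reproduces the MSJ/T and Efficiency rows above.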
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Marnissi, Y.; Chouzenoux, E.; Benazza-Benyahia, A.; Pesquet, J.-C. An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension. Entropy 2018, 20, 110. https://doi.org/10.3390/e20020110