Article

Nearly Exact Discrepancy Principle for Low-Count Poisson Image Restoration

by Francesca Bevilacqua, Alessandro Lanza, Monica Pragliola * and Fiorella Sgallari
Department of Mathematics, University of Bologna, Piazza di Porta San Donato 5, 40126 Bologna, Italy
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(1), 1; https://doi.org/10.3390/jimaging8010001
Submission received: 25 October 2021 / Revised: 6 December 2021 / Accepted: 16 December 2021 / Published: 23 December 2021
(This article belongs to the Special Issue Inverse Problems and Imaging)

Abstract
The effectiveness of variational methods for restoring images corrupted by Poisson noise strongly depends on the suitable selection of the regularization parameter balancing the effect of the regularization term(s) and the generalized Kullback–Leibler divergence data term. One of the approaches still commonly used today for choosing the parameter is the discrepancy principle proposed by Zanella et al. in a seminal work. It relies on imposing a value of the data term approximately equal to its expected value and works well for mid- and high-count Poisson noise corruptions. However, the series truncation approximation used in the theoretical derivation of the expected value leads to poor performance for low-count Poisson noise. In this paper, we highlight the theoretical limits of the approach and then propose a nearly exact version of it based on Monte Carlo simulation and weighted least-squares fitting. Several numerical experiments are presented, showing that in the low-count Poisson regime the proposed modified, nearly exact discrepancy principle performs far better than the original, approximated one by Zanella et al., whereas it works similarly or slightly better in the mid- and high-count regimes.

1. Introduction

The image restoration problem under Poisson noise corruption is a task that has been extensively addressed in the literature as it arises in many real-world applications, where the acquired image is obtained by counting the particles, e.g., photons, hitting the image domain [1]. The typical image formation model under blur and Poisson noise corruption takes the form
y = Poiss(λ),   λ = H x̄ + b,
where H R m × n models the blur operator, which we assume to be known; y N m and x ¯ R n are the observed m 1 × m 2 and unknown n 1 × n 2 images in column-major vectorized form (with m = m 1 m 2 and n = n 1 n 2 ), respectively; b R m is a non-negative background emission, and where Poiss ( λ ) : = Poiss ( λ 1 ) , , Poiss ( λ m ) T , with  Poiss ( λ i ) indicating the realization of a Poisson-distributed random variable of parameter (mean) λ i .
When tackling the recovery of x ¯ starting from y , one has also to consider the intrinsic constraint
x̄ ∈ Ω := { x ∈ R^n : x ≥ 0 },
which accounts for the pixel values being non-negative.
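As a concrete illustration, the forward model (1) together with the constraint (2) can be simulated on a toy one-dimensional example. This is only a sketch under made-up data: the 3 × 3 averaging blur H, the signal x_bar, and the background b below are illustrative choices, and the Poisson sampler is Knuth's classical algorithm (adequate for the small means involved), not a method from the paper.

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw one Poisson(lam) sample via Knuth's algorithm (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def forward_model(x_bar, H, b, rng):
    """Simulate y = Poiss(lambda), lambda = H x_bar + b, as in model (1)."""
    lam = [sum(H_i[j] * x_bar[j] for j in range(len(x_bar))) + b_i
           for H_i, b_i in zip(H, b)]
    return [sample_poisson(l, rng) for l in lam]

rng = random.Random(0)
x_bar = [4.0, 9.0, 2.0]              # unknown "image" (non-negative, as in (2))
H = [[0.5, 0.5, 0.0],                # toy blur operator: local averaging
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
b = [0.1, 0.1, 0.1]                  # non-negative background emission
y = forward_model(x_bar, H, b, rng)  # observed counts, one Poisson draw per pixel
```

Each observed pixel is an integer count whose mean is the blurred, background-shifted true intensity.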
In a probabilistic perspective [2], problem (1) and (2) can be addressed by modeling the unknown x as a random variable. In general, the information on the degradation process is encoded in the so-called likelihood probability density function (pdf) p ( y x ) , while the prior beliefs on the unknown x are expressed by the prior pdf p ( x ) . In the Bayesian framework, one aims to recover the posterior pdf, which is related to the likelihood and the prior term via the Bayes formula:
p(x|y) ∝ P(y|x) p(x),
with P denoting the probability mass function (pmf) that replaces the continuous pdf p to account for the discrete nature of the data y .
According to the Maximum A Posteriori (MAP) estimation approach, the mode of p ( x y ) can be selected as a single-point representative of the posterior distribution, so that the original problem (1) and (2) turns into:
x^* ∈ arg max_x { P(y|x) p(x) } = arg min_x { −ln P(y|x) − ln p(x) }.
In light of the constraint expressed in (2), the general form of the prior pdf reads
p ( x ) p c ( x ) p 0 ( x ) ,
with
p_c(x) = 1 if x ∈ Ω,  0 otherwise,
and p 0 ( x ) encoding other information possibly available on x . A typical choice for p 0 ( x ) is given by the Total Variation (TV) Gibbs prior—see [3]—which reads
p 0 ( x ) = 1 Z exp α i = 1 n ( D x ) i 2 ,
where Z > 0 is a normalization constant, α > 0 is the prior parameter and D : = ( D h T , D v T ) T R 2 n × n denotes the discrete gradient operator with D h , D v R n × n , two linear operators representing the finite difference discretizations of the first-order partial derivatives of the image x in the horizontal and vertical direction. The negative logarithm of the prior pdf p ( x ) thus reads
−ln p(x) = α ∑_{i=1}^n ‖(D x)_i‖_2 + ι_Ω(x) + ln Z,
with ι Ω ( · ) denoting the indicator function of set Ω , which is equal to 0 if x Ω , or + otherwise.
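To make the prior term concrete, the following sketch evaluates the isotropic TV semi-norm and the resulting negative log-prior (up to the additive constant ln Z) on a small image stored as a list of rows. The forward-difference scheme with zero gradient at the last row/column is one common discretization choice, assumed here for illustration.

```python
import math

def total_variation(img):
    """Isotropic TV: sum over pixels of the Euclidean norm of the discrete
    gradient, using forward differences with zero gradient on the last
    row/column (an illustrative boundary choice)."""
    n1, n2 = len(img), len(img[0])
    tv = 0.0
    for i in range(n1):
        for j in range(n2):
            dv = img[i + 1][j] - img[i][j] if i + 1 < n1 else 0.0  # vertical
            dh = img[i][j + 1] - img[i][j] if j + 1 < n2 else 0.0  # horizontal
            tv += math.sqrt(dh * dh + dv * dv)
    return tv

def neg_log_prior(img, alpha):
    """-ln p(x) up to the constant ln Z: alpha * TV(x) plus the indicator
    of the non-negativity constraint set."""
    if any(v < 0 for row in img for v in row):
        return math.inf  # x outside Omega
    return alpha * total_variation(img)
```

A constant image has zero TV, while any pixel violating non-negativity drives the negative log-prior to infinity, exactly as the indicator term prescribes.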
Concerning the likelihood pdf, first, we notice that the forward model (1) can be usefully rewritten in component-wise (pixel-wise) form as follows:
y i = Poiss λ i , λ i = H i x + b i , i = 1 , , m ,
with y i N , λ i , b i R + , and where H i R 1 × n denotes the i-th row of matrix H . Upon the assumption of independence of the Poisson noise realizations at different pixels, we have:
ln P y x = ln P y λ = ln i = 1 m P y i λ i = i = 1 m ln P y i λ i , λ = H x + b ,
where P y i λ i , which denotes the probability for y i to be the realization of a Poisson-distributed random variable with mean λ i , reads
P(y_i | λ_i) = λ_i^{y_i} e^{−λ_i} / y_i!,   y_i ∈ N,  λ_i ∈ R_+,
with R + denoting the set of non-negative real numbers. Hence, the associated negative log-pmf reads
−ln P(y_i | λ_i) = λ_i − y_i ln λ_i + ln(y_i!).
By plugging (6) into (5), the negative log-likelihood takes the following form:
−ln P(y | x) = ∑_{i=1}^m ( λ_i − y_i ln λ_i + ln(y_i!) ).
Finally, plugging (4) and (7) into (3), dropping out the constant term ln Z in (4), readjusting the constant terms in (7) (by adding y_i ln y_i − y_i − ln(y_i!) to each term in the sum), and then dividing the cost function by the positive scalar α, we obtain the so-called TV-KL variational model:
x̂(μ) ∈ arg min_{x ∈ Ω} { J(x; μ) := TV(x) + μ KL(λ; y) },   λ = H x + b,
where μ = 1/α; the TV semi-norm term [4] is defined by
TV(x) = ∑_{i=1}^n ‖(D x)_i‖_2,
and the term KL λ ; y indicates the generalized Kullback–Leibler (KL) divergence between λ = H x + b and the observation y , which reads
KL(λ; y) = ∑_{i=1}^m F(λ_i; y_i),   with   F(λ; y) := λ − y ln λ + y ln y − y.
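The per-pixel fidelity term F and the generalized KL divergence in (9) are straightforward to implement; the only care needed is the usual convention 0 · ln 0 := 0 at zero-count pixels. A minimal sketch:

```python
import math

def kl_term(lam, y):
    """Per-pixel term F(lam; y) = lam - y*ln(lam) + y*ln(y) - y from (9),
    with the convention 0*ln(0) := 0 for zero-count pixels (y = 0)."""
    if y > 0:
        return lam - y * math.log(lam) + y * math.log(y) - y
    return lam  # all y-dependent terms vanish for y = 0

def generalized_kl(lam, y):
    """KL(lam; y) = sum_i F(lam_i; y_i)."""
    return sum(kl_term(l, yi) for l, yi in zip(lam, y))
```

The divergence is nonnegative and vanishes when λ matches the observation, which is what makes it a sensible data-fidelity term.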
Note that the adoption of the MAP strategy within a probabilistic framework yields a minimization problem, which is typically addressed in the context of variational methods for image restoration. The TV term and the KL divergence play the role of the regularization and data fidelity term, respectively. Moreover, the parameter μ , which has been defined starting from the prior parameter α , is the so-called regularization parameter balancing the action of regularization and data fidelity terms.
The selection of a suitable value for the regularization parameter μ is of crucial importance for obtaining high-quality results. This relation is highlighted by the explicit dependence in (8) of the solution x ^ on the parameter μ . Very often, μ is chosen empirically by brute-force optimization with respect to some visual quality metrics. However, for Poisson data, a large amount of literature has been devoted to the analysis of the Discrepancy Principle (DP), which can be formulated in general terms as follows [5]:
Select μ = μ^* ∈ R_+ such that D(μ^*; y) = Δ,
where the last equality and the scalar Δ R + in (10) are commonly referred to as the discrepancy equation and the discrepancy value, respectively, while the discrepancy function D ( · ; y ) : R + R + is defined by
D(μ; y) := KL(λ̂(μ); y) = ∑_{i=1}^m D_i(μ; y_i),   with   D_i(μ; y_i) := F(λ̂_i(μ); y_i),
with function F defined in (9) and
λ ^ ( μ ) = H x ^ ( μ ) + b .
The DP in (10)–(12) formalizes a quite simple idea: choose the value μ * of the regularization parameter μ in the TV-KL model (8) such that the value of the KL data fidelity term associated with the solution x ^ ( μ * ) is equal to a prescribed discrepancy value Δ . However, applying the DP in an effective manner in practice is not straightforward as several issues concerning the computational efficiency and, more importantly, the quality of the output solutions arise. We examine both of them more closely.
(I1)
Computational efficiency. The solution function x ^ ( μ ) of model (8) does not admit a closed-form expression and iterative solvers must be used to compute the restored image x ^ associated with any μ . Hence, selecting μ * by solving the scalar discrepancy equation defined in (10)–(12) as an efficient preliminary step and then computing the sought restored image x ^ ( μ * ) by iteratively solving model (8) only once is not feasible.
(I2)
Quality of solution(s). Even if an efficient algorithm is used for the computation, the obtained restored image x ^ ( μ * ) may be of such low quality that it is of no practical use if the discrepancy value Δ in (10) is not suitably chosen.
Issue (I1) concerning computational efficiency has been successfully addressed in [6], where the authors propose to automatically update μ along the iterations of the minimization algorithm used for solving the TV-KL model so as to satisfy (at convergence) a specific version of the general DP defined in (10)–(12).
Concerning (I2), we highlight that, in the theoretical hypothesis that the target image x ¯ is known, so that λ ¯ = H x ¯ + b is also known, one would select μ * such that the value of the KL fidelity term associated with the solution x ^ ( μ * ) is equal to the value of the KL fidelity term associated with x ¯ . This clearly does not guarantee that the obtained solution x ^ ( μ * ) coincides with the target image x ¯ . However, by constraining x ^ ( μ ) to belong to the unique level set of the (convex) KL fidelity term containing x ¯ , this abstract strategy, which we refer to as the Theoretical DP (TDP), represents an oracle for the general DP in (10)–(12). The TDP is thus formulated as follows:
Select μ = μ^* ∈ R_+ such that D(μ^*; y) = Δ^(T),   with   Δ^(T) := ∑_{i=1}^m δ^(T)(λ̄_i),   δ^(T)(λ̄_i) := F(λ̄_i; y_i),   λ̄ = H x̄ + b,
with function F defined in (9). Clearly, the value Δ^(T) cannot be computed in practice as the original image x̄ is not available. As in the case of the Morozov discrepancy principle for Gaussian noise, one could replace the scalar Δ^(T) with the expected value of the KL fidelity term in (9) regarded as a function of the m-variate random variable Y. We will refer to this version of the DP as the Exact (or Expected value) DP (EDP). In formula:
Select μ = μ^* ∈ R_+ such that D(μ^*; y) = Δ^(E)(μ^*),   with   Δ^(E)(μ) := ∑_{i=1}^m δ^(E)(λ̂_i(μ)),   δ^(E)(λ̂_i(μ)) := E_{Y_i}[ F(λ̂_i(μ); Y_i) ],   λ̂(μ) = H x̂(μ) + b,
where E Y i F λ ^ i ( μ ) ; Y i denotes the expected value of F λ ^ i ( μ ) ; Y i regarded as a function of the Poisson-distributed random variable Y i . Nonetheless, unlike the Gaussian noise case, the discrepancy value is not a constant but is a function Δ ( E ) ( μ ) of the regularization parameter μ , and deriving its analytic expression is a very challenging task. A popular and widespread strategy, originally proposed in [5] for denoising purposes and extended in [7] to the image restoration task, replaces the exact expected value function Δ ( E ) ( μ ) with a constant approximation coming from truncating its Taylor series expansion. We will refer to this version of the DP as Approximate DP (ADP). It reads:   
Select μ = μ^* ∈ R_+ such that D(μ^*; y) = Δ^(A),   with   Δ^(A) := ∑_{i=1}^m δ^(A) = m/2,   δ^(A) := 1/2.
Despite its extensive use due to the good performance achieved in the mid- and high-count regimes, the ADP in (15) is known to return poor-quality results in the low-count Poisson regime [8], i.e., when the number of photons hitting the image domain is small. In fact, in [7], where the ADP was first extended to the image deblurring task, the authors state (in Remark 3) that the choice of the constant value δ^(A) = 1/2 in (15) may not be “optimal” and suggest replacing it with 1/2 + ϵ, where ϵ is a small positive or negative real number. As a preliminary, qualitative proof of the possible poor performance of the ADP in the low-count regime, in the first column of Figure 1, we show the two test images phantom and cameraman, which have been corrupted by blur and heavy Poisson noise (second column). The TV-KL image restoration model in (8), with regularization parameter μ selected according to the ADP in (15), has been performed. The output restorations are displayed in the third column of Figure 1. One can see that the rough approximation δ^(A) = 1/2 used in the ADP can either return oversmoothed results, as in the case of phantom, or undersmoothed restorations, as for cameraman.
Since its proposal in [5], the ADP has been (and still is) widely used for variational image restoration (see, e.g., [9,10]), and it can be regarded as the standard extension of the Morozov DP for Gaussian noise to the Poisson noise case. Some subsequent literature builds on the ADP, e.g., by proposing, analyzing, and testing its usage in KL-constrained variational models [11] or by analyzing it theoretically [12]. However, to the best of the authors’ knowledge, the only attempt to improve the ADP by giving a concrete form to the ϵ adjustment of the approximate, constant discrepancy value δ^(A) = 1/2 is the one in [8]. The authors in [8] correctly state that ϵ must not be a constant, but a function ϵ(λ) of the photon count level. However, they propose to take ϵ(λ) as the sum of the second to tenth terms of the same Taylor expansion used in [5]. As we will highlight later in the paper, such expansion converges only for λ approaching +∞; hence, the choice in [8] cannot aspire to improve the performance of the ADP in low-count regimes.

Contribution

The goal of this paper is to provide novel insights about the EDP and the ADP in order to design a novel discrepancy principle capable of outperforming the classical ADP proposed in [5]. In more detail, we will provide a qualitative study proving that the recovery of a closed-form expression for function Δ^(E)(μ) in (14) through the Taylor series expansion used in [5,8] is not only difficult to achieve but also theoretically unfeasible for low-count Poisson regimes. Moreover, we will explore in detail the properties of the ADP motivating the dichotomic behavior (i.e., oversmoothing/undersmoothing) arising upon its adoption in the low-count regime. Finally, based on a simple Monte Carlo simulation followed by weighted least-squares fitting, we will derive a novel version of the general DP in (10)–(12) based on a nearly exact (NE) approximation of function Δ^(E)(μ) in (14), concisely referred to as the NEDP. Our approach will successfully address issues (I1)–(I2). In particular, it will be demonstrated experimentally that the NEDP can return high-quality results both for low-count and mid/high-count acquisitions. The good performance of the NEDP is anticipated in the last column of Figure 1, where we show the output restorations achieved by using the TV-KL model in (8) and (9) coupled with the novel μ-selection strategy.

2. Limits of the Approximate DP

The discrepancy principle proposed by Zanella et al. in [5] for Poisson image denoising and then extended to image restoration by Bertero et al. in [7] relies on Lemma 1 in [5], whose proof has been completed in [5] (corrigendum), which we report below for completeness.
Lemma 1.
Let Y_λ be a Poisson random variable with expected value λ ∈ R_{++} and consider the function of Y_λ defined by
F(Y_λ) = λ − Y_λ ln λ + Y_λ ln Y_λ − Y_λ = Y_λ ln( 1 + (Y_λ − λ)/λ ) + λ − Y_λ.
Then, the following estimate of the expected value of F ( Y λ ) holds true for large λ:
δ^(E)(λ) = E[ F(Y_λ) ] = δ^(A) + O(1/λ),   δ^(A) = 1/2.
Based on the estimate above, and implicitly assuming a sufficiently large λ (i.e., a sufficiently high-count Poisson regime) such that the O ( 1 / λ ) term can be neglected, the exact DP outlined in (14) is replaced in [5,7] by the approximation given in (15) and recalled below:
Δ = Δ^(A) = ∑_{i=1}^m δ^(A) = m/2.
However, the ADP performs poorly for low-count Poisson images. Our goal here is to highlight that the reason for this lies precisely in the constant approximation δ ( E ) ( λ ) δ ( A ) used in (15) and then propose a nearly exact DP based on a much less approximate estimate δ ( N E ) ( λ ) of the expected value function δ ( E ) ( λ ) .
For this purpose, first, we carry out a preliminary Monte Carlo simulation aimed at highlighting the error associated with the approximation in (15). In particular, we consider a discrete set of λ values λ_i ∈ [0, 8] and, for each λ_i, we generate pseudo-randomly a large number (10^6) of realizations of the Poisson random variable Y_{λ_i}. Then, we compute the associated values of the function F(Y_{λ_i}) defined in (16) and, finally, for each λ_i, we obtain an estimate δ̂^(E)(λ_i) of δ^(E)(λ_i) by calculating the sample mean of these function values. The results of this simulation are shown in Figure 2. In particular, in the left figure, we report the computed estimates δ̂^(E)(λ_i), whereas, in the right figure, we report the percentage errors (with respect to the estimates) associated with using the constant value δ^(A) = 1/2 as in the ADP (15). The percentage error approaches +∞ for λ tending to zero and is in the order of 10% for λ ∈ [1, 4]; then, as expected, it decreases (quite slowly) to zero for λ tending to +∞. The error is thus quite large for small λ, and this can explain the poor performance of the ADP in (15) in the low-count Poisson regime.
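The preliminary simulation just described is easy to reproduce at reduced scale (10^5 samples per λ instead of 10^6; the smaller sample size is our choice for speed). The sketch below estimates δ^(E)(λ) by the sample mean of F(Y_λ):

```python
import math
import random

def F(lam, y):
    """F(lam; y) from (16), with the convention 0*ln(0) := 0."""
    return (lam - y * math.log(lam) + y * math.log(y) - y) if y > 0 else lam

def mc_delta_E(lam, n_samples=100_000, seed=0):
    """Monte Carlo estimate of delta^(E)(lam) = E[F(Y_lam)] via the sample mean."""
    rng = random.Random(seed)

    def poisson():
        # Knuth's sampler; adequate for the moderate means used here.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p < threshold:
                return k
            k += 1

    return sum(F(lam, poisson()) for _ in range(n_samples)) / n_samples
```

For λ = 0.2 the estimate lands well below the constant δ^(A) = 1/2, while for λ = 50 it is already close to 1/2, consistently with Figure 2.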
In order to obtain a more accurate approximation or even an exact analytical expression for the expected value function δ ( E ) ( λ ) , we now retrace in detail the proof of Lemma 1 given in [5], and completed in [5] (corrigendum), and check if the rough truncation carried out in [5] can be avoided.
After noting that function ln ( 1 + φ ) is C on its domain ( 1 , + ) and considering its Taylor expansion around 0, the Taylor theorem with the remainder in integral form allows one to write:
ln(1 + φ) = ∑_{i=1}^N ((−1)^{i+1}/i) φ^i + r_N(φ) = φ − (1/2)φ^2 + (1/3)φ^3 − ⋯ + ((−1)^{N+1}/N) φ^N + r_N(φ),   r_N(φ) = (−1)^N ∫_0^φ (φ − t)^N / (1 + t)^{N+1} dt,   φ ∈ (−1, +∞).
Replacing the expansion above with φ = ( Y λ λ ) / λ in the expression of function F defined in (16), we obtain
F(Y_λ) = Y_λ ( ∑_{i=1}^N ((−1)^{i+1}/i) ((Y_λ − λ)/λ)^i + r_N(φ) ) + λ − Y_λ = ⋯ = Y_λ r_N(φ) + ∑_{i=1}^{N−1} ( (−1)^{i+1} / (i(i+1)) ) (Y_λ − λ)^{i+1} / λ^i + ( (−1)^{N+1} / N ) (Y_λ − λ)^{N+1} / λ^N.
After noting that the only random quantity in (19) is Y λ , the expected value reads
δ^(E)(λ) = E[ F(Y_λ) ] = ∑_{i=0}^{N−1} ω_i^(N) η_{i+2}(Y_λ) / λ^{i+1} + R_N(λ),
with coefficients ω i ( N ) Q , i = 0 , , N 1 , and remainder function R N : R + + R given by
ω_i^(N) = (−1)^i / ( (i+1)(i+2) )  for i = 0, …, N−2,   ω_i^(N) = (−1)^i / (i+1)  for i = N−1,   R_N(λ) = E[ Y_λ r_N( (Y_λ − λ)/λ ) ],
and where
η_{i+2}(Y_λ) = E[ (Y_λ − λ)^{i+2} ],   i = 0, 1, …,
denote the central moments of order i + 2 of the Poisson random variable Y λ . It is well known (see [13], p. 162) that these moments can be obtained by the recursive formula
η_1(Y_λ) = 0,   η_2(Y_λ) = λ,   η_{i+2}(Y_λ) = λ ( dη_{i+1}(Y_λ)/dλ + (i+1) η_i(Y_λ) ).
After noting that, in (20), only moments η i + 2 Y λ with i 0 are present and that they are all divided by λ , it is easy to verify that, by applying (22), one obtains the following general algebraic polynomial expression:
P_i(λ) := η_{i+2}(Y_λ) / λ = ∑_{j=0}^{d_i} ϑ_i^(j) λ^j,   i = 0, 1, …,
where ϑ i ( j ) are all integer coefficients with ϑ i ( 0 ) = 1 for any i = 0 , 1 , , and where the degrees d i of polynomials P i ( λ ) are given by
d_i = ⌊i/2⌋ = 0, 0, 1, 1, 2, 2, …   for i = 0, 1, 2, 3, 4, 5, …,
where · denotes the floor function. The first 8 polynomials, P i ( λ ) , i = 0 , , 7 , read
P_0(λ) = 1,   P_1(λ) = 1,   P_2(λ) = 1 + 3λ,   P_3(λ) = 1 + 10λ,   P_4(λ) = 1 + 25λ + 15λ^2,   P_5(λ) = 1 + 56λ + 105λ^2,   P_6(λ) = 1 + 119λ + 490λ^2 + 105λ^3,   P_7(λ) = 1 + 246λ + 1918λ^2 + 1260λ^3.
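The recursion (22) and the resulting polynomials P_i(λ) can be checked mechanically. In the sketch below, a polynomial in λ is represented as its list of coefficients in ascending powers; the computed lists reproduce the eight polynomials displayed above:

```python
def poisson_central_moment_polys(count):
    """Central moments eta_1, ..., eta_count of Y_lam ~ Poisson(lam), as
    polynomials in lam (ascending coefficient lists), via recursion (22):
    eta_1 = 0, eta_2 = lam, eta_{i+2} = lam * (d eta_{i+1}/d lam + (i+1) eta_i)."""
    def deriv(p):
        return [k * c for k, c in enumerate(p)][1:] or [0]

    def add(p, q):
        n = max(len(p), len(q))
        p = p + [0] * (n - len(p))
        q = q + [0] * (n - len(q))
        return [a + b for a, b in zip(p, q)]

    etas = [[0], [0, 1]]  # eta_1 = 0, eta_2 = lam
    for i in range(1, count - 1):
        inner = add(deriv(etas[-1]), [(i + 1) * c for c in etas[-2]])
        etas.append([0] + inner)  # multiplication by lam
    return etas

etas = poisson_central_moment_polys(9)  # eta_1, ..., eta_9
P = [eta[1:] for eta in etas[1:]]       # P_i(lam) = eta_{i+2}(Y_lam) / lam
```

Since each η_{i+2} has a zero constant term, dividing by λ amounts to dropping the leading zero coefficient.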
By replacing the expressions of P i ( λ ) given in (23) into (20), one obtains the following general formula:
δ^(E)(λ) = E[ F(Y_λ) ] = ∑_{i=0}^{N−1} Q_i^(N)(λ) + R_N(λ),   with   Q_i^(N)(λ) := ∑_{j=0}^{d_i} ψ_i^(N,j) λ^{j−i},
where the coefficients ψ i ( N , j ) Q of the rational polynomials Q i ( N ) ( λ ) in (25) read
ψ i ( N , j ) = ω i ( N ) ϑ i ( j ) , i = 0 , 1 , , N 1 , j = 0 , 1 , , d i ,
with ω i ( N ) given in (21) and ϑ i ( j ) defined in (23).
After noting that, from (24), it follows that d_i ≤ i for any i = 0, 1, …, it is a matter of simple algebra to verify that (25) can be equivalently and more compactly rewritten as
δ^(E)(λ) = E[ F(Y_λ) ] = ∑_{i=0}^{N−1} γ_i^(N) / λ^i + R_N(λ),
with γ i ( N ) Q computable coefficients. In particular, for  N = 1 , , 9 , we have
δ^(E)(λ) = 1 + R_1(λ)
= 1/2 − 1/(2λ) + R_2(λ)
= 1/2 + 5/(6λ) + 1/(3λ^2) + R_3(λ)
= 1/2 + 1/(12λ) − 29/(12λ^2) − 1/(4λ^3) + R_4(λ)
= 1/2 + 1/(12λ) + 31/(12λ^2) + 99/(20λ^3) + 1/(5λ^4) + R_5(λ)
= 1/2 + 1/(12λ) + 1/(12λ^2) − 1003/(60λ^3) − 93/(10λ^4) − 1/(6λ^5) + R_6(λ)
= 1/2 + 1/(12λ) + 1/(12λ^2) + 797/(60λ^3) + 687/(10λ^4) + 713/(42λ^5) + 1/(7λ^6) + R_7(λ)
= 1/2 + 1/(12λ) + 1/(12λ^2) + 19/(120λ^3) − 3001/(20λ^4) − 39925/(168λ^5) − 1721/(56λ^6) − 1/(8λ^7) + R_8(λ)
= 1/2 + 1/(12λ) + 1/(12λ^2) + 19/(120λ^3) + 1899/(20λ^4) + 516833/(504λ^5) + 126829/(168λ^6) + 4007/(72λ^7) + 1/(9λ^8) + R_9(λ)
from which we note how, as the truncation order N increases, the coefficients γ_i^(N) stabilize at some values, which we denote by γ_i^(∞). Unfortunately, we are not able to obtain an explicit analytical expression for the sequence of coefficients γ_i^(∞) (as we are not able to obtain explicit analytic expressions for the coefficients ϑ_i^(j) defining the central moments of a Poisson random variable). By means of the Matlab symbolic toolbox, we were able to compute the first 34 coefficients γ_i^(∞), i = 0, …, 33, shown (in logarithmic scale) in Figure 3 (left). Determining the subsequent coefficients becomes unfeasible due to the huge computation time required. Hence, the following short discussion must be regarded as conjectural as it relies on the assumption that the behavior of coefficients γ_i^(∞), i = 34, 35, …, can be smoothly extrapolated from the first 34 coefficients shown in Figure 3 (left). These first 34 coefficients indicate that the coefficient sequence is positive and strictly increasing for i ≥ 2. This implies that, making the truncation order N tend to +∞, the (infinite) weighted geometric series in (26) is divergent for λ ≤ 1. Even without analyzing the case λ > 1, we can state that an analytical form for function δ^(E)(λ) in the low-count Poisson regime is very unlikely to be obtainable as the sum of the series in (26). In fact, there will be, very likely, at least one pixel such that λ_i ≤ 1.
We believe that it is worth concluding this section by pointing out the theoretical reason for the non-convergence of the series in (26). Function ln(1 + φ) is analytic at φ = 0, but its Maclaurin series converges (pointwise to the function) only for φ ∈ (−1, 1]. Hence, as N tends to +∞, the Taylor series expansion in (19) converges to the function F(Y_λ) only for φ = (Y_λ − λ)/λ ∈ (−1, 1], i.e., Y_λ ∈ (0, 2λ]. However, Y_λ in (20) represents a Poisson random variable with parameter λ. Hence, for N tending to +∞, the series in (20) converges to the function δ^(E)(λ) = E[F(Y_λ)] only if the random variable Y_λ satisfies
P( 0 < Y_λ ≤ 2λ ) = ∑_{i=1}^{⌊2λ⌋} P( Y_λ = i ) = 1.
From Figure 3 (right), where we plot the probability in (28) as a function of λ , one can notice that condition (28) for convergence of the series in (20) is fulfilled asymptotically for λ approaching + but it is not satisfied at all for small λ values.
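Condition (28) can be evaluated directly from the Poisson pmf, building each probability incrementally to avoid explicit factorials; the resulting values reproduce the behavior plotted in Figure 3 (right):

```python
import math

def prob_convergence_region(lam):
    """P(0 < Y_lam <= 2*lam) = sum_{i=1}^{floor(2*lam)} exp(-lam)*lam^i/i!, cf. (28)."""
    total, term = 0.0, math.exp(-lam)    # term = P(Y_lam = 0)
    for i in range(1, math.floor(2 * lam) + 1):
        term *= lam / i                   # P(Y_lam = i) from P(Y_lam = i - 1)
        total += term
    return total
```

For λ < 1/2 the interval (0, 2λ] contains no integer, so the probability is exactly zero, and it approaches 1 only as λ grows.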

3. A Nearly Exact DP Based on Monte Carlo Simulation

Since it is not possible to derive analytically the expression of function δ ( E ) ( λ ) in (17), the goal in this section is to compute a nearly exact estimate δ ( N E ) ( λ ) of function δ ( E ) ( λ ) based on a simple Monte Carlo simulation approach analogous to that used at the beginning of Section 2. Based on the expected shape of function δ ( E ) ( λ ) —see Figure 2(left)—here, we consider a set of 1385 unevenly distributed λ values λ i [ 0 , 250 ] , namely
λ_i ∈ { 0, 0.01, 0.02, …, 5.99, 6, 6.1, 6.2, …, 65.9, 66, 67, 68, …, 249, 250 }.
This set comes from the union of three subsets of equally spaced λ values, namely from 0 to 6 with step 0.01 , from 6 to 66 with step 0.1 , and from 66 to 250 with step 1. For each λ i , we generate pseudo-randomly a very large number S = 5 × 10 7 of samples y i ( j ) , j = 1 , , S , of the Poisson random variable Y λ i ; then, we compute the associated values f i ( j ) , j = 1 , , S , of the function F ( Y λ i ) defined in (16) and, finally, we calculate the sample mean δ ^ ( E ) ( λ i ) and variance v i of these function values. In formula,
y_i^(j) ∼ Poiss(λ_i),  j = 1, …, S;    f_i^(j) = F( y_i^(j) ),  j = 1, …, S;    δ̂^(E)(λ_i) = (1/S) ∑_{j=1}^S f_i^(j),    v_i = 1/(S−1) ∑_{j=1}^S ( f_i^(j) − δ̂^(E)(λ_i) )^2.
The notation for the sample means reflects that they are estimates of the sought theoretical means δ^(E)(λ_i) = E[F(Y_{λ_i})], i = 1, …, 1385. The obtained pairs (λ_i, δ̂^(E)(λ_i)) and (λ_i, v_i) are shown (blue crosses) in the first and second row of Figure 4, respectively. It is well known that δ̂^(E)(λ_i) and v_i represent unbiased estimators of the mean and variance of the random variable F(Y_{λ_i}) and that, according to the central limit theorem, for a very large number S of samples (which is definitely our case), the sample mean δ̂^(E)(λ_i) can be regarded as a realization of a Gaussian random variable with mean equal to the theoretical mean δ^(E)(λ_i) of F(Y_{λ_i}) and variance equal to the sample variance v_i divided by the number of samples S. In formulas,
δ̂^(E)(λ_i) ∼ Gauss( δ^(E)(λ_i), v_i / S ).
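As a sanity check on the construction of the λ grid described above, it can be assembled from its three equally spaced pieces; the counts 601 + 600 + 184 indeed total 1385:

```python
def build_lambda_grid():
    """Grid of lambda values: step 0.01 on [0, 6], 0.1 on (6, 66], 1 on (66, 250]."""
    grid = [round(0.01 * k, 2) for k in range(601)]           # 0, 0.01, ..., 6
    grid += [round(6.0 + 0.1 * k, 1) for k in range(1, 601)]  # 6.1, ..., 66
    grid += [float(v) for v in range(67, 251)]                # 67, ..., 250
    return grid
```

The rounding guards against floating-point drift when the steps 0.01 and 0.1 are accumulated.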
We now want to fit a parametric model f ( λ ; c ) , with  c the parameter vector, to the obtained Monte-Carlo-simulated data points λ i , δ ^ ( E ) ( λ i ) , i = 1 , , 1385 . First, in accordance with the trend of these data—see the blue crosses in the first row of Figure 4—and recalling the expected asymptotic behavior of function δ ( E ) ( λ ) for λ approaching + —see the discussion in Section 2, particularly the first two terms of the expansion in (27)—we choose a model of the form
f ( λ ; c ) = 1 2 + ϵ ( λ ; c ) ,
with function ϵ exhibiting the following properties:
ϵ(·; c) ∈ C^0(R_+),   ϵ(0; c) = −1/2,   ϵ(λ; c) ∼ 1/(12λ)  for  λ → +∞.
Then, with the aim of achieving a good trade-off between the model’s ability to accurately fit data and the computational efficiency of its evaluation, we choose the following rational form for function ϵ :   
ϵ(λ; c) = ( λ^2 + c_1 λ + c_2 ) / ( 12 λ^3 + c_3 λ^2 + c_4 λ − 2 c_2 ).
Thanks to (30), the fit of model f in (31), with ϵ as in (32), can be obtained via a Maximum Likelihood (ML) estimation of the parameter vector c = (c_1, c_2, c_3, c_4) ∈ R^4. In fact, according to (30), the likelihood reads
L(c) = ∏_{i=1}^{1385} p( δ̂^(E)(λ_i) | c ) = ∏_{i=1}^{1385} ( 1 / √(2π v_i / S) ) exp( −(1/2) ( δ̂^(E)(λ_i) − f(λ_i; c) )^2 / ( v_i / S ) ) = ( 2π/S )^{−1385/2} ( ∏_{i=1}^{1385} v_i )^{−1/2} exp( −(S/2) ∑_{i=1}^{1385} ( δ̂^(E)(λ_i) − f(λ_i; c) )^2 / v_i ),
and the ML estimate c ( M L ) of c can be computed as follows:
c^(ML) ∈ arg max_{c ∈ R^4} L(c) = arg min_{c ∈ R^4} −ln L(c) = arg min_{c ∈ R^4} ∑_{i=1}^{1385} w_i ( d_i − ϵ(λ_i; c) )^2,
where we dropped constants (with respect to the optimization variable c ) and defined
w_i := 1/v_i,   d_i := δ̂^(E)(λ_i) − 1/2,   i = 1, …, 1385.
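The reduction from the ML problem (33) to the weighted least-squares problem (34) can be verified numerically: for any data set, −ln L(c) differs from (S/2) Σ w_i (d_i − ϵ(λ_i; c))^2 by a constant independent of c. The (λ, mean, variance) triplets below are made-up numbers used only for this check:

```python
import math

def eps_model(lam, c):
    """Rational correction eps(lam; c) of (32), with c = (c1, c2, c3, c4)."""
    c1, c2, c3, c4 = c
    return (lam**2 + c1 * lam + c2) / (12 * lam**3 + c3 * lam**2 + c4 * lam - 2 * c2)

def neg_log_likelihood(c, data, S):
    """-ln L(c) for the Gaussian model (30); data = [(lam_i, mean_i, var_i), ...]."""
    out = 0.0
    for lam, mean, var in data:
        resid = mean - (0.5 + eps_model(lam, c))
        out += 0.5 * math.log(2 * math.pi * var / S) + 0.5 * S * resid**2 / var
    return out

def weighted_ls_cost(c, data):
    """sum_i w_i * (d_i - eps(lam_i; c))^2, with w_i = 1/v_i, d_i = mean_i - 1/2."""
    return sum((mean - 0.5 - eps_model(lam, c))**2 / var for lam, mean, var in data)

data = [(0.5, 0.30, 0.10), (5.0, 0.52, 0.40), (50.0, 0.501, 0.90)]  # synthetic
```

Minimizing either objective therefore yields the same parameter vector; the weighted least-squares form is simply cheaper to evaluate.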
Problem (34) is a nonlinear (in particular, rational) weighted least-squares problem. The cost function is non-convex and local minimizers exist. We compute an estimate c ^ of c ( M L ) by solving (34) via the iterative trust-region algorithm 1000 times starting from 1000 different initial guesses c ( 0 ) randomly sampled from a uniform distribution with support [ 20 , 20 ] 4 and then picking up the solution c ^ yielding the minimum cost function value. The obtained parameter estimate is as follows:
ĉ = ( ĉ_1, ĉ_2, ĉ_3, ĉ_4 ) = ( +2.5792, −1.5205, −5.6244, +17.9347 ).
We thus define the nearly exact estimate δ ( N E ) ( λ ) of the theoretical expected value function δ ( E ) ( λ ) = E [ F ( Y λ ) ] as the parametric function f defined in (31), (32) with parameter vector c equal to c ^ given in (35). In formula,
δ^(NE)(λ) := f(λ; ĉ) = 1/2 + ϵ(λ; ĉ) = 1/2 + ( λ^2 + 2.5792 λ − 1.5205 ) / ( 12 λ^3 − 5.6244 λ^2 + 17.9347 λ + 3.0410 ).
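The fitted estimate above is a closed-form scalar function, cheap enough to evaluate at every pixel inside a discrepancy principle. A direct implementation, together with its designed behavior at the two extremes (value 0 at λ = 0 and the 1/(12λ) tail):

```python
def delta_NE(lam):
    """Nearly exact estimate of delta^(E)(lam) = E[F(Y_lam)], fitted parameters."""
    num = lam**2 + 2.5792 * lam - 1.5205
    den = 12 * lam**3 - 5.6244 * lam**2 + 17.9347 * lam + 3.0410
    return 0.5 + num / den
```

At λ = 0 the correction equals −1/2 by construction, so δ^(NE)(0) = 0, while for large λ the function decays onto the classical 1/2 + 1/(12λ) expansion.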
In the first row of Figure 4, we plot the constant approximate function δ ( A ) and the obtained nearly exact function δ ( N E ) ( λ ) , whereas, in the third and fourth row of Figure 4, we report the errors e ^ ( A ) ( λ i ) and e ^ ( N E ) ( λ i ) , respectively. They are defined by
ê^(X)(λ_i) = 100 × ( δ^(X)(λ_i) − δ̂^(E)(λ_i) ) / δ̂^(E)(λ_i),   i = 1, 2, …, 1385,   X ∈ { A, NE },
and represent the percentage errors associated with using the approximations δ^(A) and δ^(NE)(λ) with respect to the very accurate Monte Carlo estimates δ̂^(E)(λ_i) of the true underlying expected values δ^(E)(λ_i) = E[F(Y_{λ_i})]. One can notice that |ê^(NE)(λ_i)| is around 20 times smaller than |ê^(A)(λ_i)| for λ ∈ [0, 6] (first column of Figure 4) and around 10 times smaller for λ ∈ [6, 250] (second and third column of Figure 4). In particular, in the low-count Poisson regime (which we can roughly associate with λ ∈ [0, 6]), the proposed nearly exact estimate of the theoretical expected value function δ^(E)(λ) yields a percentage error in the order of 0.5%, whereas the constant approximation used in [5,7] leads to a percentage error in the order of 10%. Such a large error is the reason for the poor performance of the ADP in (15) in the low-count regime.
We thus propose the following nearly exact DP (NEDP):
Select μ = μ^* ∈ R_+ such that D(μ^*; y) = Δ^(NE)(μ^*),   with   Δ^(NE)(μ) = ∑_{i=1}^m δ^(NE)(λ̂_i(μ)) = m/2 + ∑_{i=1}^m ϵ(λ̂_i(μ); ĉ),   λ̂(μ) = H x̂(μ) + b,
with function ϵ and parameter vector c ^ given in (32) and (35), respectively.

4. Numerical Solution via ADMM

In the following, we detail how to tackle the minimization problem in (8) and (9) when the regularization parameter μ is automatically selected according to one of the considered versions of the DP, namely the TDP in (13), the ADP in (15), and the NEDP in (37).
In principle, one could set a fine grid of μ -values and compute the solution x ^ ( μ ) corresponding to each μ . Then, among the recorded solutions, one could select the one such that the TDP, the ADP, or the NEDP is satisfied. However, this algorithmic scheme, to which we refer as the a posteriori optimization procedure, turns out to be particularly costly.
In [6,14], the authors propose to update the regularization parameter according to the ADP along the iterations of the popular Alternating Direction Method of Multipliers (ADMM) [15,16]. Here, we detail the steps of the proposed algorithmic procedure, which can be employed for applying not only the ADP but also the TDP and the NEDP. Finally, we remark that the case of the TDP is only addressed for explanatory purposes and it cannot be performed in practice as x ¯ is not available.
After introducing the auxiliary variables λ ∈ R m , g ∈ R 2 n , z ∈ R n , problem (8) and (9) can be equivalently written as follows:
$$\big( x^*, \lambda^*, g^*, z^* \big) \,\in\, \arg\min_{x, \lambda, g, z} \left\{ \sum_{i=1}^{n} \| g_i \|_2 \,+\, \mu \sum_{i=1}^{m} \big( \lambda_i - y_i \ln ( \lambda_i ) \big) \,+\, \iota_{\mathbb{R}^n_+}(z) \right\} \quad \text{subject to:} \quad \lambda = H x + b, \;\; g = D x, \;\; z = x,$$
where, with a little abuse of notation, we define g i : = ( ( D h x ) i , ( D v x ) i ) ∈ R 2 , for every i = 1 , … , n .
To solve problem (38), we introduce the augmented Lagrangian function,
$$\begin{aligned} \mathcal{L}(x, \lambda, g, z, \rho_\lambda, \rho_g, \rho_z) \,=\; & \sum_{i=1}^{n} \| g_i \|_2 \,+\, \mu \sum_{i=1}^{m} \big( \lambda_i - y_i \ln \lambda_i \big) \\ & - \, \langle \rho_\lambda , \, \lambda - ( H x + b ) \rangle \,+\, \frac{\beta_\lambda}{2} \, \| \lambda - ( H x + b ) \|_2^2 \\ & - \, \langle \rho_g , \, g - D x \rangle \,+\, \frac{\beta_g}{2} \, \| g - D x \|_2^2 \\ & - \, \langle \rho_z , \, z - x \rangle \,+\, \frac{\beta_z}{2} \, \| z - x \|_2^2 \,+\, \iota_{\mathbb{R}^n_+}(z) \, , \end{aligned}$$
where ρ λ ∈ R m , ρ g ∈ R 2 n , ρ z ∈ R n are the vectors of Lagrange multipliers associated with the linear constraints in (38), while β λ , β g , β z ∈ R + + are the ADMM penalty parameters.
By setting u : = ( x ; λ ; g ; z ) , ρ = ( ρ λ ; ρ g ; ρ z ) , U : = R n × R m × R 2 n × R n , and R : = R m × R 2 n × R n , we observe that solving (38) amounts to seeking solutions of the saddle-point problem:   
$$\text{Find } ( u^*, \rho^* ) \in U \times R \;\text{ such that }\; \mathcal{L}( u^*, \rho ) \,\le\, \mathcal{L}( u^*, \rho^* ) \,\le\, \mathcal{L}( u, \rho^* ) \qquad \forall \, ( u, \rho ) \in U \times R.$$
Upon suitable initialization, and for any k 0 , the k-th iteration of the ADMM algorithm applied to the solution of (40) with the augmented Lagrangian function L defined in (39) reads
$$\begin{aligned} \lambda^{(k+1)} \,&\in\, \arg\min_{\lambda \in \mathbb{R}^m} \, \mathcal{L}\big( x^{(k)}, \lambda, g^{(k)}, z^{(k)}, \rho_\lambda^{(k)}, \rho_g^{(k)}, \rho_z^{(k)} \big), \\ g^{(k+1)} \,&\in\, \arg\min_{g \in \mathbb{R}^{2n}} \, \mathcal{L}\big( x^{(k)}, \lambda^{(k+1)}, g, z^{(k)}, \rho_\lambda^{(k)}, \rho_g^{(k)}, \rho_z^{(k)} \big), \\ z^{(k+1)} \,&\in\, \arg\min_{z \in \mathbb{R}^{n}} \, \mathcal{L}\big( x^{(k)}, \lambda^{(k+1)}, g^{(k+1)}, z, \rho_\lambda^{(k)}, \rho_g^{(k)}, \rho_z^{(k)} \big), \\ x^{(k+1)} \,&\in\, \arg\min_{x \in \mathbb{R}^{n}} \, \mathcal{L}\big( x, \lambda^{(k+1)}, g^{(k+1)}, z^{(k+1)}, \rho_\lambda^{(k)}, \rho_g^{(k)}, \rho_z^{(k)} \big), \\ \rho_\lambda^{(k+1)} \,&=\, \rho_\lambda^{(k)} \,-\, \beta_\lambda \, \big( \lambda^{(k+1)} - ( H x^{(k+1)} + b ) \big), \\ \rho_g^{(k+1)} \,&=\, \rho_g^{(k)} \,-\, \beta_g \, \big( g^{(k+1)} - D x^{(k+1)} \big), \\ \rho_z^{(k+1)} \,&=\, \rho_z^{(k)} \,-\, \beta_z \, \big( z^{(k+1)} - x^{(k+1)} \big). \end{aligned}$$
In the following subsections, we discuss the solution of subproblems (41)–(44) for the four primal variables λ , g , z , x . Since the solution of the subproblem for variable λ is the most complicated and requires the application of the DP, it is presented last.

4.1. Solving Subproblem for g

The subproblem for g in (42) reads
$$g^{(k+1)} \,\in\, \arg\min_{g \in \mathbb{R}^{2n}} \left\{ \sum_{i=1}^{n} \| g_i \|_2 \,-\, \langle \rho_g^{(k)} , \, g - D x^{(k)} \rangle \,+\, \frac{\beta_g}{2} \, \| g - D x^{(k)} \|_2^2 \right\} \,=\, \arg\min_{g \in \mathbb{R}^{2n}} \left\{ \sum_{i=1}^{n} \| g_i \|_2 \,+\, \frac{\beta_g}{2} \, \| g - w^{(k)} \|_2^2 \right\} , \qquad w^{(k)} = D x^{(k)} + \frac{1}{\beta_g} \, \rho_g^{(k)} .$$
Solving (48) is equivalent to solving the n independent two-dimensional minimization problems
$$g_i^{(k+1)} \,=\, \arg\min_{g_i \in \mathbb{R}^2} \left\{ \| g_i \|_2 \,+\, \frac{\beta_g}{2} \, \| g_i - w_i^{(k)} \|_2^2 \right\} , \qquad i = 1, \ldots, n,$$
which yields the unique solution
$$g_i^{(k+1)} \,=\, w_i^{(k)} \, \max\left( 1 \,-\, \frac{1}{\beta_g \, \| w_i^{(k)} \|_2} \, , \; 0 \right) , \qquad i = 1, \ldots, n.$$
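This row-wise shrinkage (the proximal map of the 2-norm) admits a direct vectorized implementation. The following Python sketch is a minimal illustration under the assumption that the n gradient pairs are stored as rows of an n × 2 array; all variable names are ours.

```python
import numpy as np

def shrink2d(w, beta_g):
    # Proximal map of the 2-norm applied row-wise: for each 2D row w_i, returns
    # g_i = w_i * max(1 - 1/(beta_g * ||w_i||_2), 0), the unique minimizer of
    # ||g_i||_2 + (beta_g / 2) * ||g_i - w_i||_2^2.
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    with np.errstate(divide='ignore'):  # rows with zero norm shrink to zero
        factor = np.maximum(1.0 - 1.0 / (beta_g * norms), 0.0)
    return w * factor
```

Rows whose norm falls below 1 / β_g are set exactly to zero, which is the mechanism by which the TV term promotes sparse gradients.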

4.2. Solving Subproblem for z

The subproblem for z in (43) reads
$$z^{(k+1)} \,\in\, \arg\min_{z \in \mathbb{R}^n} \left\{ - \langle \rho_z^{(k)} , \, z - x^{(k)} \rangle \,+\, \frac{\beta_z}{2} \, \| z - x^{(k)} \|_2^2 \,+\, \iota_{\mathbb{R}^n_+}(z) \right\} \,=\, \arg\min_{z \in \mathbb{R}^n_+} \| z - q^{(k)} \|_2 \, , \qquad q^{(k)} = x^{(k)} + \frac{1}{\beta_z} \, \rho_z^{(k)} .$$
Hence, the solution z ( k + 1 ) is given by the unique Euclidean projection of vector q ( k ) defined in (50) onto the (convex) nonnegative orthant and admits the simple closed-form expression
$$z_i^{(k+1)} \,=\, \max\big( q_i^{(k)} , \; 0 \big) , \qquad i = 1, \ldots, n.$$

4.3. Solving Subproblem for x

After dropping the constant terms, the  x -subproblem in (44) reads:
$$\begin{aligned} x^{(k+1)} \,\in\, \arg\min_{x \in \mathbb{R}^n} \Big\{ & - \langle \rho_\lambda^{(k)} , \, \lambda^{(k+1)} - ( H x + b ) \rangle \,+\, \frac{\beta_\lambda}{2} \, \| \lambda^{(k+1)} - ( H x + b ) \|_2^2 \\ & - \langle \rho_g^{(k)} , \, g^{(k+1)} - D x \rangle \,+\, \frac{\beta_g}{2} \, \| g^{(k+1)} - D x \|_2^2 \\ & - \langle \rho_z^{(k)} , \, z^{(k+1)} - x \rangle \,+\, \frac{\beta_z}{2} \, \| z^{(k+1)} - x \|_2^2 \Big\} . \end{aligned}$$
By imposing a first-order optimality condition on the quadratic cost function in (52), after simple algebraic manipulations, we obtain the following linear system of equations:
$$\left( D^T D \,+\, \frac{\beta_\lambda}{\beta_g} \, H^T H \,+\, \frac{\beta_z}{\beta_g} \, I_n \right) x \,=\, D^T \left( g^{(k+1)} - \frac{1}{\beta_g} \, \rho_g^{(k)} \right) \,+\, \frac{\beta_\lambda}{\beta_g} \, H^T \left( \lambda^{(k+1)} - b - \frac{1}{\beta_\lambda} \, \rho_\lambda^{(k)} \right) \,+\, \frac{\beta_z}{\beta_g} \left( z^{(k+1)} - \frac{1}{\beta_z} \, \rho_z^{(k)} \right) ,$$
which is solvable since the coefficient matrix is symmetric positive definite and hence nonsingular. When assuming periodic boundary conditions for x , the blur matrix H is square, i.e.,  m = n , and, more importantly, D T D , H T H and—trivially— I are block circulant matrices with circulant blocks. Hence, the linear system (53) can be solved efficiently by one application of the direct 2D Fast Fourier Transform (FFT) and one application of the inverse 2D FFT, each at a cost of O ( n log n ) .
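Under the periodic boundary assumption, the whole x-update therefore reduces to pointwise divisions in the Fourier domain. The following Python sketch illustrates this, assuming forward-difference operators for D h , D v , a blur kernel zero-padded to the image size and circularly centered at pixel (0, 0), and a right-hand side rhs already assembled as in (53); all names are illustrative.

```python
import numpy as np

def fft_solve(rhs, psf, beta_lam, beta_g, beta_z):
    # Solve (D^T D + (beta_lam/beta_g) H^T H + (beta_z/beta_g) I) x = rhs in O(n log n),
    # assuming periodic boundary conditions so that all three operators are
    # diagonalized by the 2D FFT. psf is the blur kernel padded to the image
    # size and circularly centered at (0, 0).
    n1, n2 = rhs.shape
    H_hat = np.fft.fft2(psf)
    # Eigenvalues of D^T D for periodic forward differences:
    # (2 - 2 cos w) along each direction, summed over the two directions.
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n2) / n2)
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n1) / n1)
    denom = (wy[:, None] + wx[None, :]
             + (beta_lam / beta_g) * np.abs(H_hat) ** 2
             + beta_z / beta_g)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

Since β_z / β_g > 0, the denominator is strictly positive at every frequency, which mirrors the positive definiteness argument above.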

4.4. Solving Subproblem for λ and Applying the DP

The subproblem for λ in (41) reads
$$\lambda^{(k+1)} \,=\, \arg\min_{\lambda \in \mathbb{R}^m} \left\{ \mu \sum_{i=1}^{m} \big( \lambda_i - y_i \ln \lambda_i \big) \,-\, \langle \rho_\lambda^{(k)} , \, \lambda - ( H x^{(k)} + b ) \rangle \,+\, \frac{\beta_\lambda}{2} \, \| \lambda - ( H x^{(k)} + b ) \|_2^2 \right\} .$$
After manipulating algebraically the last two terms of the cost function in (54), dropping constant terms, and then dividing by the positive scalar β λ , problem (54) can be equivalently rewritten as follows:
$$\lambda^{(k+1)}(\gamma) \,=\, \arg\min_{\lambda \in \mathbb{R}^m} \left\{ \gamma \sum_{i=1}^{m} \big( \lambda_i - y_i \ln \lambda_i \big) \,+\, \frac{1}{2} \, \| \lambda - v^{(k)} \|_2^2 \right\} , \quad \text{with} \quad \gamma = \frac{\mu}{\beta_\lambda} \, , \qquad v^{(k)} = H x^{(k)} + b + \frac{1}{\beta_\lambda} \, \rho_\lambda^{(k)} .$$
In (55), we introduced the explicit dependence of the solution λ ( k + 1 ) on the parameter γ , which is the basis of the application of the DP. Notice that the ADMM penalty parameter β λ is fixed; hence, γ plays the role of the regularization parameter μ in the DP applied to this subproblem. Problem (55) can be further simplified after noting that it can be equivalently rewritten in component-wise (pixel-wise) form as follows:
$$\lambda_i^{(k+1)}(\gamma) \,\in\, \arg\min_{\lambda_i \in \mathbb{R}_+} \left\{ \gamma \big( \lambda_i - y_i \ln \lambda_i \big) \,+\, \frac{1}{2} \big( \lambda_i - v_i^{(k)} \big)^2 \right\} , \qquad i = 1, \ldots, m.$$
It is easy to prove that, for any γ ∈ R + and independently of the constants y i ∈ N and v i ( k ) ∈ R , all the minimization problems in (56) admit a unique solution given by
$$\lambda_i^{(k+1)}(\gamma) \,=\, \frac{1}{2} \left( v_i^{(k)} - \gamma \,+\, \sqrt{ \big( v_i^{(k)} - \gamma \big)^2 \,+\, 4 \, y_i \, \gamma } \; \right) .$$
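This closed form is the nonnegative root of the quadratic λ² + (γ − v)λ − γy = 0 that arises from the first-order optimality condition γ(1 − y/λ) + λ − v = 0. A quick numerical check of the formula and of its limit behavior (which is used later in the existence proof) can be sketched in Python; the tested values are arbitrary.

```python
import numpy as np

def lambda_update(v, y, gamma):
    # Unique positive minimizer of gamma*(lam - y*ln(lam)) + 0.5*(lam - v)^2:
    # the nonnegative root of lam^2 + (gamma - v)*lam - gamma*y = 0.
    return 0.5 * (v - gamma + np.sqrt((v - gamma) ** 2 + 4.0 * y * gamma))

v, y, gamma = 1.3, 4.0, 0.7
lam = lambda_update(v, y, gamma)
print(lam)  # positive, and a stationary point of the 1D cost
```

Letting γ → 0 recovers max(v, 0), while γ → +∞ drives the solution towards y, consistent with γ weighting the KL term against the quadratic proximity term.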
We now want to apply one among the DP versions—namely, (13), (15), and the proposed (37)—outlined in Section 1 and Section 3 for selecting a value of the free parameter γ in (57). In particular, we select γ = γ ( k + 1 ) such that γ ( k + 1 ) satisfies the discrepancy equation, which, in accordance with the general definition given in (10)–(12), takes here the form
$$G(\gamma; y) \,:=\, D(\gamma; y) \,-\, \Delta \,=\, 0,$$
where the discrepancy function reads
$$D(\gamma; y) \,=\, \sum_{i=1}^{m} D_i\big( \gamma; y_i \big) \,=\, \sum_{i=1}^{m} F\big( \lambda_i^{(k+1)}(\gamma); y_i \big),$$
with function F defined in (9), and where the discrepancy value Δ , according to the definitions given in (13), (15) and (37), takes one of the following values/forms:
$$\Delta \,=\, \begin{cases} \Delta^{(T)} \,=\, \displaystyle\sum_{i=1}^{m} F\big( ( H \bar{x} + b )_i ; y_i \big) & \text{for (13)}, \\[6pt] \Delta^{(A)} \,=\, \dfrac{m}{2} & \text{for (15)}, \\[6pt] \Delta^{(NE)}(\gamma) \,=\, \dfrac{m}{2} \,+\, \displaystyle\sum_{i=1}^{m} \epsilon\big( \lambda_i^{(k+1)}(\gamma); \hat{c} \big) & \text{for (37)}, \end{cases}$$
with rational polynomial function ϵ defined in (32) and parameter vector c ^ given in (35). We notice that Δ ( T ) and Δ ( A ) are two positive scalars that can be computed once and for all and do not change their values during the ADMM iterations, whereas Δ ( N E ) ( γ ) is a function of γ whose shape almost certainly changes along the ADMM iterations (since the function λ i ( k + 1 ) ( γ ) in (57) changes its shape when the vector v ( k ) in (55) changes).
Summing up, the complete procedure for the DP-based update of the parameter γ and, then, of the variable λ reads as follows:
$$v^{(k)} \,=\, H x^{(k)} + b + \frac{1}{\beta_\lambda} \, \rho_\lambda^{(k)} ,$$
$$\gamma^{(k+1)} \,=\, \text{root of the discrepancy equation in (58)--(60)},$$
$$\lambda_i^{(k+1)}\big( \gamma^{(k+1)} \big) \;\; \text{computed by (57), for } \; i = 1, \ldots, m.$$
Concerning the ADP, in [6], the authors have proven that, along the ADMM iterations, the function D ( γ ; y ) is convex and decreasing, so that the existence and the uniqueness of the solution of the discrepancy equation in (58) with Δ = Δ ( A ) are guaranteed. The same result can be immediately extended to the case of the TDP. When considering the NEDP, the functional form of Δ ( N E ) ( γ ) is such that the above result cannot be straightforwardly applied. However, the following proposition on the existence of a solution for the discrepancy equation (58) with Δ = Δ ( N E ) holds true.
Proposition 1.
Consider the discrepancy equation in (58)–(60) with Δ = Δ ( N E ) ( γ ) and with vector v ( k ) and function λ i ( k + 1 ) ( γ ) defined as in (55) and (57), respectively, and let
$$t^{(k)} \,:=\, \max\big( v^{(k)} , \, 0 \big) .$$
Then, the discrepancy equation admits a solution if the following condition is fulfilled:
$$\exists \, i : y_i \neq 0 \quad \text{and} \quad T\big( t^{(k)} , y \big) \,:=\, \sum_{i=1}^{m} T\big( t_i^{(k)} , y_i \big) \;\ge\; \frac{m}{2} \, ,$$
where function T : R + × N R is defined by
$$T(t, y) \,=\, F(t; y) \,-\, \epsilon\big( t; \hat{c} \big),$$
with function F, function ϵ, and parameter vector c ^ given in (9), (32), and (35), respectively.
Proof. 
Since functions F in (9), ϵ in (32), and λ i ( k + 1 ) in (57) are all continuous, the function G defined in (58)–(60) with Δ = Δ ( N E ) ( γ ) is continuous in the variable γ on its entire domain γ ∈ R + , for any y ∈ N m and any v ( k ) ∈ R m .
Then, it is easy to prove that function λ i ( k + 1 ) ( γ ) in (57) satisfies
$$\lambda_i^{(k+1)}(0) \,=\, \max\big( v_i^{(k)} , \, 0 \big) \,=\, t_i^{(k)} , \qquad \lim_{\gamma \to +\infty} \lambda_i^{(k+1)}(\gamma) \,=\, y_i ,$$
with vector t ( k ) defined in (64).
It thus follows from (67) and from the definition of functions D in (59) and Δ ( N E ) in (60) that
$$\begin{aligned} G(0; y) \,&=\, D(0; y) \,-\, \Delta^{(NE)}(0) \,=\, \sum_{i=1}^{m} F\big( \lambda_i^{(k+1)}(0); y_i \big) \,-\, \frac{m}{2} \,-\, \sum_{i=1}^{m} \epsilon\big( \lambda_i^{(k+1)}(0); \hat{c} \big) \\ &=\, \sum_{i=1}^{m} \left[ F\big( t_i^{(k)}; y_i \big) \,-\, \epsilon\big( t_i^{(k)}; \hat{c} \big) \right] \,-\, \frac{m}{2} \,=\, T\big( t^{(k)} , y \big) \,-\, \frac{m}{2} \, , \end{aligned}$$
and that
$$\begin{aligned} \lim_{\gamma \to +\infty} G(\gamma; y) \,&=\, \lim_{\gamma \to +\infty} \big[ D(\gamma; y) \,-\, \Delta^{(NE)}(\gamma) \big] \,=\, \lim_{\gamma \to +\infty} \left[ \, \sum_{i=1}^{m} F\big( \lambda_i^{(k+1)}(\gamma); y_i \big) \,-\, \frac{m}{2} \,-\, \sum_{i=1}^{m} \epsilon\big( \lambda_i^{(k+1)}(\gamma); \hat{c} \big) \right] \\ &=\, \sum_{i=1}^{m} F( y_i ; y_i ) \,-\, \sum_{i=1}^{m} \left( \frac{1}{2} \,+\, \epsilon\big( y_i ; \hat{c} \big) \right) \qquad (69) \\ &=\, - \sum_{i=1}^{m} f\big( y_i ; \hat{c} \big) \;<\; 0 \quad \text{if } \; \exists \, i : y_i \neq 0, \qquad (70) \end{aligned}$$
where function T in (68) is defined in (65); the cancellation of the first summation in (69) follows from F ( y ; y ) = 0 for any y ∈ R + (see the definition of function F in (9), with the convention y ln y = 0 for y = 0), and (70) comes from the definition of function f in (36).
From (70) and the continuity of function G ( γ ; y ) , we can conclude that, for any y ≠ 0 , the discrepancy equation G ( γ ; y ) = 0 admits a solution if G ( 0 ; y ) ≥ 0 . It thus follows from (68) that the sufficient condition in (64)–(66) holds true.   □
In Algorithm 1, we outline the general ADMM-based scheme used for solving image restoration variational models of the TV-KL form in (8) and (9) with automatic update/selection of the regularization parameter μ according to one of the considered versions of the DP. We refer to the general scheme as DP-ADMM, whereas the specific schemes using one among the DP versions TDP, ADP, and NEDP will be named TDP-ADMM, ADP-ADMM, and NEDP-ADMM, respectively. Notice that the γ -update at step 3 can be performed by means of a derivative-free approach, such as bisection or the secant method.
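As a concrete illustration of the derivative-free γ-update, a bisection search for the ADP case (Δ = m/2) can be sketched as follows, taking the KL term F from (9) and λ_i(γ) from (57); the bracket [0, γ_max] and the tolerances are hypothetical choices, and v is assumed to have strictly positive entries. For the NEDP, one would simply replace the constant m/2 by Δ ( N E ) ( γ ) evaluated through (32) and (35).

```python
import numpy as np

def F_kl(lam, y):
    # Per-pixel generalized KL term, with the convention y*ln(y) = 0 for y = 0.
    out = lam - y
    pos = y > 0
    out[pos] += y[pos] * np.log(y[pos] / lam[pos])
    return out

def lambda_of_gamma(gamma, v, y):
    # Closed-form solution (57) of the per-pixel lambda-subproblem.
    return 0.5 * (v - gamma + np.sqrt((v - gamma) ** 2 + 4.0 * y * gamma))

def discrepancy_root(v, y, delta, gamma_max=1e6, tol=1e-8, max_iter=200):
    # Derivative-free bisection on G(gamma) = sum_i F(lambda_i(gamma); y_i) - delta.
    # D(gamma; y) decreases from D(0; y) towards 0, so G changes sign on
    # [0, gamma_max] whenever D(0; y) > delta.
    def G(gamma):
        return F_kl(lambda_of_gamma(gamma, v, y), y).sum() - delta
    lo, hi = 0.0, gamma_max
    if G(lo) <= 0.0:  # discrepancy already at/below the target at gamma = 0
        return lo
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if G(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Bisection only halves the bracket at each step, but it is robust to the lack of an explicit derivative of G; a secant update would typically converge in fewer function evaluations.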
Algorithm 1: General DP-ADMM approach for image restoration variational models of the TV-KL form in (8) and (9) and automatic selection of μ via DP.
inputs: observed degraded image y N m , emission background b R + m , blur and regularization operators H R m × n , D R 2 n × n
output: estimated restored image x ^ R n  
  1. initialise: set x ( 0 ) = y
  2. for k = 0, 1, 2, until convergence do:  
  3.     · compute γ ( k + 1 ) = μ ( k + 1 ) / β λ by (61) and (62)
  4.     · compute λ ( k + 1 ) by (63)
  5.     · compute g ( k + 1 ) by (49)
  6.     · compute z ( k + 1 ) by (51)
  7.     · compute x ( k + 1 ) by (53)
  8.     · compute ρ λ ( k + 1 ) , ρ g ( k + 1 ) , ρ z ( k + 1 ) by (45)–(47)
  9. end for
  10. x ^ = x ( k + 1 )

5. Numerical Results

In this section, we evaluate the performance of the proposed NEDP in (37) for the automatic selection of the regularization parameter μ in image restoration variational models of the TV-KL form in (8) and (9).
Our approach is compared with the TDP and the ADP in (13) and (15), respectively. For each criterion, we perform the ADMM-based scheme outlined in Algorithm 1. As recalled in Section 4, the  μ -selection problem along the ADMM iterations always admits a unique solution under the adoption of the ADP and TDP. Concerning the NEDP-ADMM, at the generic iteration k of Algorithm 1, the regularization parameter μ is updated provided that the condition stated in Proposition 1 is satisfied. If this is not the case, the parameter update is not performed and μ ( k ) = μ ( k − 1 ) .
The numerical tests have been designed with the following twofold aim:
(i)
to prove that the proposed NEDP criterion is capable of selecting optimal μ values returning high-quality restoration results and, in particular, that it outperforms the classical ADP criterion in the low-count Poisson regime;
(ii)
to prove that the proposed NEDP-ADMM scheme outlined in Algorithm 1 is capable of automatically selecting such optimal μ values in a robust (and efficient) manner.
More specifically, the latter point will be proven by showing that the iterated and the a posteriori version of our approach behave similarly in terms of μ -selection.
The μ -values selected by the TDP, the ADP, and the NEDP applied a posteriori will be denoted by μ ( T ) , μ ( A ) , μ ( N E ) , respectively, while the output μ -value of the ADP-ADMM and of the NEDP-ADMM scheme will be denoted by μ ^ ( A ) and μ ^ ( N E ) , respectively.
The quality of the restorations x ^ with respect to the original uncorrupted image x ¯ will be assessed by means of two scalar measures, namely the Structural Similarity Index (SSIM) [17] and the Improved-Signal-to-Noise Ratio (ISNR), defined by
$$\mathrm{ISNR}\big( \hat{x}, \bar{x} \big) \,=\, 10 \, \log_{10} \frac{\| \bar{x} - b \|_2^2}{\| \bar{x} - \hat{x} \|_2^2} \, .$$
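For reference, a direct Python implementation of the ISNR defined above reads as follows; the helper name is ours, and a library such as scikit-image would typically supply the SSIM.

```python
import numpy as np

def isnr(x_hat, x_bar, b):
    # Improved-Signal-to-Noise Ratio: 10 * log10(||x_bar - b||^2 / ||x_bar - x_hat||^2).
    # It increases as the restoration x_hat gets closer to the ground truth x_bar.
    num = np.sum((x_bar - b) ** 2)
    den = np.sum((x_bar - x_hat) ** 2)
    return 10.0 * np.log10(num / den)
```

A restoration closer to the ground truth yields a strictly larger ISNR, since only the denominator depends on x_hat.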
For all tests, the iterations of the ADMM-based scheme in Algorithm 1 are stopped as soon as
$$\epsilon_x^{(k)} \,=\, \frac{\| x^{(k)} - x^{(k-1)} \|_2}{\| x^{(k-1)} \|_2} \,<\, 10^{-5} , \qquad k \in \mathbb{N} \setminus \{ 0 \} ,$$
and the ADMM penalty parameters β λ , β g , β z are set manually so as to achieve fast convergence. More precisely, in each test, the same triplet ( β λ , β g , β z ) of ADMM penalty parameters is used for the three compared discrepancy principles TDP, ADP, and NEDP, with  β λ , β g , β z [ 0.5 , 2 ] .
We consider the four test images, each with pixel values between 0 and 1, shown in Figure 5. The acquisition process (1) has been simulated as follows. First, the original image is multiplied by a factor κ ∈ N ∖ { 0 } representing the maximum emitted photon count, i.e., the maximum expected value of the number of photons emitted by the scene and hitting the image domain. Clearly, the lower κ , the lower the SNR of the observed noisy image and the more difficult the image restoration problem. For each image, several values of κ ranging from 3 to 1000 have been considered. Then, the resulting images have been corrupted by space-invariant Gaussian blur, with a blur kernel generated by the Matlab routine fspecial, which is characterized by two parameters: band, representing the side length (in pixels) of the square support of the kernel, and sigma, the standard deviation (in pixels) of the isotropic bivariate Gaussian distribution defining the kernel in the continuous setting. We consider two different blur levels, characterized by the parameters band = 5, sigma = 1 and band = 13, sigma = 3. The blurred noiseless image λ = H x + b is then generated by adding to the blurred image a constant emission background b of value 2 × 10^−3. The observed image y = Poiss ( λ ) is finally obtained by pseudo-randomly generating an m-variate independent Poisson realization with mean vector λ .
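The simulated acquisition pipeline just described can be reproduced in a few lines. The following pure-NumPy sketch uses FFT-based periodic convolution and a Gaussian kernel built to mimic Matlab's fspecial('gaussian', band, sigma); function names and defaults are ours, not the paper's code.

```python
import numpy as np

def gaussian_psf(n, band, sigma):
    # Normalized Gaussian kernel on a band x band support, zero-padded to n x n
    # and circularly shifted so its center sits at pixel (0, 0) for FFT convolution.
    ax = np.arange(band) - (band - 1) / 2.0
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    psf = np.zeros((n, n))
    psf[:band, :band] = k
    return np.roll(psf, (-(band // 2), -(band // 2)), axis=(0, 1))

def simulate_acquisition(x, kappa, band, sigma, b=2e-3, seed=0):
    # y ~ Poiss(H(kappa * x) + b): scale the [0,1] image by the photon count
    # kappa, blur with a periodic Gaussian PSF, add the constant background b,
    # then draw an independent Poisson realization with that mean.
    rng = np.random.default_rng(seed)
    psf = gaussian_psf(x.shape[0], band, sigma)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(kappa * x) * np.fft.fft2(psf)))
    lam = np.maximum(blurred, 0.0) + b
    return rng.poisson(lam), lam
```

Lower values of κ yield lower counts and hence a noisier observation, matching the remark above that small κ makes the restoration problem harder.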
The black solid curves plotted in Figure 6a,c represent the function D ( μ ; y ) as defined in (11) and (12) for the first test image cameraman and κ = 5 , for the less severe (first row) and more severe (second row) blur level. They have been computed by solving the TV-KL model in (8) for a fine grid of different μ -values, and then calculating D ( μ ; y ) for each μ . The horizontal dashed cyan and green lines represent the constant discrepancy values Δ ( T ) and Δ ( A ) used in (13) and (15), respectively, while the dashed magenta curve represents the discrepancy value function Δ ( N E ) ( μ ) defined in (37). We remark that Δ ( N E ) ( μ ) has been obtained in the same way as D ( μ ; y ) , i.e., by computing the expression in (37) for each μ of the selected fine grid. One can clearly observe that the intersection points between the curve Δ ( N E ) ( μ ) and the function D ( μ ; y ) and between the line representing Δ ( T ) and D ( μ ; y ) are very close, and both at a significant distance from the intersection point detected by Δ ( A ) .
In Figure 6b,d, we show the ISNR and SSIM values achieved for different μ -values with κ = 5 . The vertical cyan, green, and magenta lines correspond to the μ -values detected by the intersection of D ( μ ; y ) with Δ ( T ) , Δ ( A ) , and Δ ( N E ) ( μ ) , respectively. As a reflection of the behavior of the discrepancy function and of the three curves, the ISNR/SSIM values corresponding to μ ( T ) and μ ( N E ) are very close to each other and almost reach the maximum of the two curves. We also highlight that, when considering the more severe blur case, the ADP selects a very large μ -value, which returns very low ISNR and SSIM values—see the thumbnail image in the right-hand corner of Figure 6d.
We are also interested in verifying that the proposed NEDP-ADMM scheme outlined in Algorithm 1 succeeds in automatically selecting such an optimal μ in a robust and efficient way: the blue and red markers in Figure 6b,d represent the final ISNR and SSIM values, respectively, of the image restored via NEDP-ADMM. The markers are plotted at μ ^ ( N E ) , which is, as we recall, the output μ -value of the iterative scheme NEDP-ADMM; one can clearly observe that μ ^ ( N E ) is very close to the optimal μ ( N E ) detected a posteriori by the NEDP.
As a further analysis, at the bottom of Figure 6, we report the output μ -values and the ISNR and SSIM values obtained by the ADP-ADMM (first column) and the NEDP-ADMM (second column) for the two blur levels and the 11 considered κ -values. To facilitate the comparison, we also report in blue/red the increments/decrements of the ISNR and SSIM achieved by our method with respect to the approximate criterion. Notice that the NEDP outperforms the ADP both in terms of ISNR and SSIM for the low-count acquisitions. However, as κ increases, the two methods behave very similarly, with the ISNR and SSIM values obtained by the ADP-ADMM being slightly larger than those obtained by the NEDP-ADMM. In accordance with this analysis, the outputs μ ^ ( A ) and μ ^ ( N E ) are significantly different in low-count regimes, similar in mid-count regimes, and particularly close in high-count regimes. This behavior can be easily explained in light of the analysis carried out in Section 2, where we have shown that the approximation provided by Δ ( A ) becomes more and more accurate as the number of pixels with large values increases.
For a visual comparison, in Figure 7 and Figure 8, we show the observed images (left column), the restorations via ADP-ADMM (middle column) and via NEDP-ADMM (right column) for the less and more severe blur level, respectively, and when different photon count regimes, ranging from very low to very high, are considered. As already observed from the ISNR and SSIM values reported at the bottom of Figure 6, we notice that for low-count acquisitions, the μ -value selected by the ADP does not allow for a proper regularization, so that NEDP-ADMM clearly outperforms the competitor. However, starting from κ = 15 —for the first blur level—and from κ = 40 —for the second blur level—the two approaches return similar output images.
For the second test image, brain, we report in Figure 9 the behavior of the discrepancy function D ( μ ; y ) and of the ISNR/SSIM curves obtained by applying the TDP, the ADP, and the NEDP, for  κ = 5 and for the two considered blur levels. Also in this case, the NEDP and the TDP behave similarly and almost achieve the maximum of the ISNR and of the SSIM curves. In contrast, μ ( A ) appears to be largely underestimated with respect to the optimal μ , which can be understood as the one maximizing either the ISNR or the SSIM. As for the first test image, the blue and red markers, indicating the output ISNR and SSIM, respectively, of the iterated version of our approach, are very close to the values achieved by applying the NEDP a posteriori, suggesting that μ ^ ( N E ) and μ ( N E ) are also very close.
From the table reported at the bottom of Figure 9, we observe that, for every κ , the proposed μ -selection criterion returns restored images outperforming the ones achieved via the ADP both in terms of ISNR and SSIM. The poor behavior of the ADP can be related to the nature of the processed image, which, both for low-count and high-count acquisitions, presents few pixels with large values, so that the approximation in (15) is particularly inaccurate. As an indication of this, note that the output μ ^ ( A ) is always smaller—or significantly smaller—than μ ^ ( N E ) .
The restored images in Figure 10 and Figure 11 reflect the values recorded in the table as, for the two considered blur levels, the output of the NEDP-ADMM appears remarkably sharper than the final restoration by ADP-ADMM, especially in the low- and mid-count regimes.
In Figure 12, for the test image phantom, we report the curve of the discrepancy function D ( μ ; y ) obtained a posteriori, as well as the curves of the ISNR and of the SSIM for κ = 3 and the two blur levels considered. As for the test image brain, in this case the ADP also selects a μ -value that is far from the optimal one, whether measured in terms of ISNR or SSIM. On the other hand, μ ( T ) and μ ( N E ) are very close and, in particular, one can notice that the output of the NEDP-ADMM represents the optimal compromise in terms of ISNR and SSIM. Notice also that, when considering the larger blur level, the output value μ ^ ( N E ) of the NEDP-ADMM, detected by the red and blue markers, is larger than the one selected by the a posteriori version of the NEDP. However, the difference in terms of ISNR and SSIM is not particularly significant. This behavior is due to the use of different penalty parameters β λ , β g , β z in the ADMM for the a posteriori and the iterated versions of our approach. In fact, when considering a large blur level, the convergence of the ADMM is particularly slow and can be achieved only upon suitable selection of the penalty parameters, whose values may not coincide in the two scenarios addressed.
The mismatch observed in Figure 12 between the curves of the ISNR and of the SSIM is reflected in the values reported in the bottom part of the figure, whence we can conclude that the NEDP-ADMM outperforms the ADP-ADMM in terms of ISNR for each κ -value, while the ADP-ADMM returns slightly better results in terms of SSIM for high-count acquisitions. As for the test image brain, in this case the output μ ^ ( A ) also appears to be excessively small. Once again, this behavior can be related to the considered image, which is mostly characterized by pixels with very small values.
From the restorations shown in Figure 13 and Figure 14, one can also notice that the slight improvement in terms of SSIM does not correspond to any significant visual improvement. In fact, along the whole range of considered photon counts κ , the NEDP-ADMM is capable of returning sharper restorations. This reflects the tendency of the ADP-ADMM, when applied to the current test image, to select insufficiently large μ -values, so that, in the TV-KL model, the regularization term takes over. We also remark that, for the current test image, the SSIM value does not seem to be particularly meaningful.
For the fourth and final test image, cells, we show in Figure 15 the behavior of the discrepancy function D ( μ ; y ) , as well as of the ISNR and SSIM values in the a posteriori framework for the two blur levels and κ = 3 . Note that, in the a posteriori setting, for both blur levels, the NEDP achieves higher ISNR and SSIM values when compared to the ADP. However, we observe that when considering the larger blur level, the output μ ^ ( N E ) of the NEDP-ADMM is smaller than μ ( N E ) ; this behavior can be ascribed, once again, to the ADMM convergence issues and the different values selected for the penalty parameters.
From the values reported in the bottom part of Figure 15, we notice that the NEDP-ADMM outperforms the ADP-ADMM in every photon count regime. Clearly, the closer μ ^ ( A ) and μ ^ ( N E ) , the smaller the difference in terms of ISNR and SSIM.
The restorations computed by the ADP-ADMM and the NEDP-ADMM are shown in Figure 16 for the smaller blur level and in Figure 17 for the larger one. The obtained results confirm the values reported in the bottom of Figure 15. Moreover, also from a visual viewpoint, the difference between the two performances increases when going from high- to low-count regimes.

6. Conclusions and Future Work

We propose an automatic selection strategy for the regularization parameter of variational image restoration models under Poisson noise corruption based on a nearly exact version of the approximate discrepancy principle originally proposed in [5]. Our approach relies on Monte Carlo simulations, which have been designed with the purpose of providing meaningful insights into the limitations of the original approximate strategy, especially in the low-count Poisson regime. The proposed version of the discrepancy principle has then been derived by means of a weighted least-square fitting and embedded along the iterations of an efficient ADMM-based optimization scheme. Our approach has been extensively tested on different images and for different photon count values, ranging from very low to high values. When compared to the original approximate selection criterion, the proposed strategy has been shown to drastically improve the quality of the output restorations in low-count regimes and in mid-count/high-count regimes on images characterized by few large pixel values.
From an analytical point of view, investigating the uniqueness of the regularization parameter value satisfying the proposed discrepancy principle will certainly constitute future work. From a modeling and applicative perspective, the effectiveness of the proposed approach when applied to variational models containing regularizers other than TV or aimed at solving inverse problems other than image restoration will be the subject of future analysis. Finally, from an algorithmic viewpoint, a matter that deserves further investigation is the (possibly automatic) selection of the three ADMM penalty parameters, which can significantly affect the speed of convergence of the numerical solution scheme.

Author Contributions

F.B., A.L., M.P., F.S. contributed in equal parts to the conceptualisation, methodology, writing and editing of the manuscript. F.B. was the main contributor for numerical implementation, data curation, validation, and visualisation. All authors contributed to investigation and resources. All authors have read and agreed to the published version of the manuscript.

Funding

Research was supported by the ‘National Group for Scientific Computation (GNCS-INDAM)’ and by ex60 project by the University of Bologna ‘Funds for selected research topics’. The research of MP has been also funded by the GNCS-INDAM ’Fund for young researchers’.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bertero, M.; Boccacci, P.; Ruggiero, V. Inverse Imaging with Poisson Data; IOP Publishing: Bristol, UK, 2018. [Google Scholar]
  2. Calvetti, D.; Somersalo, E. Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing (Surveys and Tutorials in the Applied Mathematical Sciences); Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  3. Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, 6, 721–741. [Google Scholar] [CrossRef] [PubMed]
  4. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268. [Google Scholar] [CrossRef]
  5. Zanella, R.; Boccacci, P.; Zanni, L.; Bertero, M. Efficient gradient projection methods for edge-preserving removal of Poisson noise. Inverse Probl. 2013, 25, 0450100, Erratum in Inverse Probl. 2013, 29, 119501. [Google Scholar] [CrossRef] [Green Version]
  6. Carlavan, M.; Blanc-Feraud, L. Sparse Poisson Noisy Image Deblurring. IEEE Trans. Image Process. 2012, 21, 1834–1846. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Bertero, M.; Boccacci, P.; Talenti, G.; Zanella, R.; Zanni, L. A discrepancy principle for Poisson data. Inverse Probl. 2010, 26, 105004. [Google Scholar] [CrossRef] [Green Version]
  8. Bonettini, S.; Prato, M. Accelerated gradient methods for the X-ray imaging of solar flares. Inverse Probl. 2014, 30, 055004. [Google Scholar] [CrossRef] [Green Version]
  9. Benvenuto, F. A study on regularization for discrete inverse problems with model-dependent noise. SIAM J. Numer. Anal. 2017, 55, 2187–2203. [Google Scholar] [CrossRef]
  10. Guastavino, S.; Benvenuto, F. A mathematical model for image saturation with an application to the restoration of solar images via adaptive sparse deconvolution. Inverse Probl. 2021, 37, 0150104. [Google Scholar] [CrossRef]
  11. Zanni, L.; Benfenati, A.; Bertero, M.; Ruggiero, V. Numerical methods for parameter estimation in Poisson data inversion. J. Math. Imaging Vis. 2015, 52, 397–413. [Google Scholar] [CrossRef]
  12. Sixou, B.; Hohweiller, T.; Ducros, N. Morozov principle for Kullback-Leibler residual term and Poisson noise. Inverse Probl. Imaging 2018, 12, 607–634. [Google Scholar] [CrossRef] [Green Version]
  13. Johnson, N.L.; Kemps, A.W.; Kotz, S. Univariate Discrete Distributions; Wiley: New York, NY, USA, 2005. [Google Scholar]
  14. Teuber, T.; Steidl, G.; Chan, R.H. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints. Inverse Probl. 2013, 29, 035007. [Google Scholar] [CrossRef]
  15. Boyd, S.P.; Parikh, N.; Chu, E.K.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  16. Glowinski, R.; Marroco, A. Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. Math. Model. Numer. Anal. 1975, 9, 41–76. [Google Scholar] [CrossRef]
  17. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Original images (first column), observed images corrupted by blur and Poisson noise (second column), restored images obtained by the TV-KL model (8) and (9) with μ selected according to the approximate DP [5,7] (third column), and the proposed nearly exact DP (last column).
Figure 2. Comparison between the approximation δ ( A ) = 1 / 2 of δ ( E ) ( λ ) = E [ F ( Y λ ) ] used in the approximate discrepancy principle (15) proposed in [5,7] and the Monte Carlo estimates δ ^ ( E ) ( λ i ) for some λ i ∈ [ 0 , 8 ] .
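The comparison in Figure 2 can be reproduced with a short Monte Carlo experiment. The sketch below assumes the per-pixel discrepancy function F(y; λ) = y ln(y/λ) + λ − y (the generalized KL term for a single Poisson-distributed pixel, with the convention 0·ln 0 = 0); the function names and sample size are illustrative, not the authors' implementation.

```python
import numpy as np

def f_kl(y, lam):
    """Per-pixel generalized KL term F(y; lam) = y*ln(y/lam) + lam - y,
    with the convention 0*ln(0) = 0 (assumed form, see lead-in)."""
    y = np.asarray(y, dtype=float)
    log_term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / lam), 0.0)
    return log_term + lam - y

def delta_mc(lam, n_samples=1_000_000, rng=None):
    """Monte Carlo estimate of delta(lam) = E[F(Y; lam)], Y ~ Poisson(lam)."""
    rng = np.random.default_rng() if rng is None else rng
    y = rng.poisson(lam, n_samples)
    return f_kl(y, lam).mean()

rng = np.random.default_rng(0)
# In the mid/high-count regime the estimate approaches the constant 1/2
# used by the approximate DP; at low counts it deviates markedly, which is
# exactly the failure mode the nearly exact DP corrects.
print(delta_mc(100.0, rng=rng))  # close to 0.5
print(delta_mc(0.1, rng=rng))    # well below 0.5
```

Averaging f_kl over many Poisson draws is all that is needed per λ; sweeping λ over a grid reproduces the scattered estimates δ̂^(E)(λ_i) shown in the figure.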
Figure 3. Visual representation of the first 34 terms of the sequence of coefficients γ_i^( ), i = 0, 1, …, in (26) (left) and the behavior of the probability measure defined in (28) as a function of λ (right).
Figure 4. Results of Monte Carlo simulation and weighted least-squares fitting for λ [ 0 , 6 ] (first column), λ [ 6 , 66 ] (second column), and λ [ 66 , 250 ] (third column).
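The fitting stage illustrated in Figure 4 can be sketched with a small weighted least-squares example. This is not the authors' exact parameterization: the model δ(λ) ≈ c₀ + c₁/λ, the synthetic data standing in for the Monte Carlo estimates, and the per-point standard errors are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for Monte Carlo estimates of delta(lam) on one of
# the fitting intervals of Figure 4, each with a known standard error sigma.
lams = np.linspace(6.0, 66.0, 30)
sigma = np.full(lams.size, 1e-3)
delta_hat = 0.5 + 1.0 / (12.0 * lams) + rng.normal(0.0, sigma)

# Weighted least squares for the illustrative model delta(lam) ~ c0 + c1/lam:
# scale each row of the design matrix and each observation by 1/sigma, then
# solve the resulting ordinary least-squares problem.
A = np.column_stack([np.ones_like(lams), 1.0 / lams])
w = 1.0 / sigma
coef, *_ = np.linalg.lstsq(A * w[:, None], delta_hat * w, rcond=None)
c0, c1 = coef
print(c0, c1)  # recovers roughly 0.5 and 1/12
```

Weighting by the inverse standard error makes the fit trust the low-variance Monte Carlo points more, which matters when the estimates on different λ subintervals have very different accuracies.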
Figure 5. Grayscale test images considered for the numerical experiments.
Figure 6. Test image cameraman. Top: discrepancy curve divided by 10⁴ (a,c) and ISNR/SSIM values (b,d) achieved for different μ-values with κ = 5 and Gaussian blur with parameters band = 5, sigma = 1 (first row) and band = 13, sigma = 3 (second row). Bottom: output μ-values and ISNR/SSIM values obtained by the ADP-ADMM and the NEDP-ADMM for the two blur levels considered and different photon counts κ.
Figure 7. Test image cameraman. Left column: observed data y corrupted by Gaussian blur with parameters band = 5, sigma = 1 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Figure 8. Test image cameraman. Left column: observed data y corrupted by Gaussian blur with parameters band = 13, sigma = 3 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Figure 9. Test image brain. Top: discrepancy curve divided by 10⁴ (a,c) and ISNR/SSIM values (b,d) achieved for different μ-values with κ = 5 and Gaussian blur with parameters band = 5, sigma = 1 (first row) and band = 13, sigma = 3 (second row). Bottom: output μ-values and ISNR/SSIM values obtained by the ADP-ADMM and the NEDP-ADMM for the two blur levels considered and different photon counts κ.
Figure 10. Test image brain. Left column: observed data y corrupted by Gaussian blur with parameters band = 5, sigma = 1 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Figure 11. Test image brain. Left column: observed data y corrupted by Gaussian blur with parameters band = 13, sigma = 3 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Figure 12. Test image phantom. Top: discrepancy curve divided by 10⁴ (a,c) and ISNR/SSIM values (b,d) achieved for different μ-values with κ = 3 and Gaussian blur with parameters band = 5, sigma = 1 (first row) and band = 13, sigma = 3 (second row). Bottom: output μ-values and ISNR/SSIM values obtained by the ADP-ADMM and the NEDP-ADMM for the two blur levels considered and different photon counts κ.
Figure 13. Test image phantom. Left column: observed data y corrupted by Gaussian blur with parameters band = 5, sigma = 1 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Figure 14. Test image phantom. Left column: observed data y corrupted by Gaussian blur with parameters band = 13, sigma = 3 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Figure 15. Test image cells. Top: discrepancy curve divided by 10⁴ (a,c) and ISNR/SSIM values (b,d) achieved for different μ-values with κ = 3 and Gaussian blur with parameters band = 5, sigma = 1 (first row) and band = 13, sigma = 3 (second row). Bottom: output μ-values and ISNR/SSIM values obtained by the ADP-ADMM and the NEDP-ADMM for the two blur levels considered and different photon counts κ.
Figure 16. Test image cells. Left column: observed data y corrupted by Gaussian blur with parameters band = 5, sigma = 1 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Figure 17. Test image cells. Left column: observed data y corrupted by Gaussian blur with parameters band = 13, sigma = 3 and Poisson noise with different κ -values ranging from 3 to 1000. Middle column: restorations by ADP-ADMM. Right column: restorations by NEDP-ADMM.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Bevilacqua, F.; Lanza, A.; Pragliola, M.; Sgallari, F. Nearly Exact Discrepancy Principle for Low-Count Poisson Image Restoration. J. Imaging 2022, 8, 1. https://doi.org/10.3390/jimaging8010001


