1. Introduction
Noise interference occurs in many systems, such as wireless communications [1] and social networks [2,3]. As a result, images are inevitably corrupted by both blur and noise during acquisition and transmission, and the restoration of clean images from blurred and noisy observations is a fundamental task in the image processing community. A wide range of approaches has been proposed to remove additive Gaussian noise [4,5,6]. However, many other types of noise, such as impulse noise [7,8,9,10,11,12], multiplicative noise [13,14], Poisson noise [15,16,17], Cauchy noise [18,19], and Rician noise [20], commonly appear in the real world and are thus studied by many researchers. Another type of impulsive noise is alpha-stable noise, which appears in many applications, such as wireless communication systems, synthetic aperture radar (SAR) images, biomedical images, and medical ultrasound images [21,22].
Mathematically, the image restoration problem can be expressed as

$f = Ku + \eta, \qquad (1)$

where $u\in{\mathbb{R}}^{mn}$ is obtained from a two-dimensional pixel array of dimension $m\times n$ and defined on a connected bounded domain $\Omega\subset{\mathbb{R}}^{2}$ with compact Lipschitz boundary, $K\in{\mathbb{R}}^{mn\times mn}$ denotes a known linear and continuous blurring operator, $\eta$ is noise obeying a certain distribution (for example, alpha-stable noise obeys the alpha-stable distribution), and $f\in{\mathbb{R}}^{mn}$ is the blurred image with additive noise. In particular, when $f$ is corrupted only by noise, it is given by $f=u+\eta$.
It is well known that restoring $u$ from $f$ is normally an ill-conditioned problem. Variational methods have been proposed to handle such ill-posed inverse imaging problems; they are usually categorized as convex and non-convex methods. The total variation (TV) regularization method [23] plays a significant role in convex variational image processing, since the TV norm preserves sharp edges by favoring piecewise smooth images. The ROF (Rudin, Osher, and Fatemi) denoising model, proposed by Rudin et al. [6], is one of the most famous total variational models for restoring images with additive Gaussian noise, and is given by

$\min_{u\in BV(\Omega)}\ \int_{\Omega}|Du| + \frac{\lambda}{2}\int_{\Omega}(u-f)^{2}dx, \qquad (2)$

where $\int_{\Omega}|Du|$ is the TV regularization term, $BV(\Omega)$ is the space of functions of bounded variation, $\int_{\Omega}(u-f)^{2}dx$ is the data fidelity term, and $\lambda>0$ is the regularization parameter, which represents the trade-off between the data fidelity term and the TV regularization term. It is possible to modify the ROF denoising model to incorporate a linear blurring operator $K$ [6]. The ROF deblurring and denoising model is then given as follows:

$\min_{u\in BV(\Omega)}\ \int_{\Omega}|Du| + \frac{\lambda}{2}\int_{\Omega}(Ku-f)^{2}dx. \qquad (3)$
Although the ROF deblurring and denoising model is very useful for deblurring and denoising with additive Gaussian noise, it does not achieve good performance in non-Gaussian environments. As a result, many kinds of TV-based variational models have been proposed for restoring clean images from blurred observations with non-Gaussian noise distributions, such as impulse noise [7,8,9,10,11,12], multiplicative noise [13,14], Poisson noise [15], Cauchy noise [18,19], and Rician noise [20]. Based on different noise distributions and data fidelity terms, one can obtain appropriate variational models for image denoising and deblurring in the presence of different noises. For example, $\int_{\Omega}|Ku-f|\,dx$ is the data fidelity term of the TV-L1 deblurring and denoising model with impulse noise [11], and $\int_{\Omega}\log\left(\gamma^{2}+(Ku-f)^{2}\right)dx$ is the data fidelity term of the Cauchy deblurring and denoising model with Cauchy noise [18].
Recently, some methods have been proposed to mitigate alpha-stable noise. For example, Zozor et al. [24] employed a parametric approach for suboptimal signal detection. They dealt with the detection of a known signal embedded in alpha-stable noise and discussed the robustness of the detector against the signal amplitude and the stability index. Sadreazami et al. [25] modeled the contourlet coefficients of noise-free images with the alpha-stable distribution. They also presented a new approach for despeckling SAR images and a multiplicative watermark detector in the contourlet domain using the alpha-stable distribution [26,27]. Yang et al. [28] proposed a total variational method, based on a property of the meridian distribution, to restore images degraded by alpha-stable noise.
To the best of our knowledge, no variational method has yet been reported for blurred image restoration in the presence of alpha-stable noise. In order to restore images from blur and alpha-stable noise while also preserving their edges, this paper proposes a novel variational method based on the statistical properties of the meridian distribution and the TV. Our numerical experiments demonstrate that it performs better than many standard deblurring and denoising methods in impulsive noisy environments (with small $\alpha$ values, i.e., $\alpha\in(0,1.5)$), while providing comparable or better performance in less demanding, light-tailed environments (with high $\alpha$ values, i.e., $\alpha\in(1.5,2)$).
The main contributions of this paper are summarized as follows. (i) Based on the statistical properties of the meridian distribution and the TV, we propose a new variational method for restoring blurred images with alpha-stable noise and analyze the existence of a solution of the variational model. (ii) By adding a penalty term, we propose a strictly convex variational method and prove the existence and uniqueness of the solution of the convex variational model. (iii) The primal-dual algorithm is employed to solve the novel convex variational problem, and its convergence is analyzed. (iv) We compare our proposed method with state-of-the-art methods, such as the TV-L1 model [11], the Cauchy model [18], and the meridian filter [29], and show the effectiveness of our proposed method.
The rest of this paper is organized as follows. In Section 2, we describe the alpha-stable and meridian distributions. In Section 3, we propose a variational method for simultaneous deblurring and denoising and study the existence of a solution of the proposed model. We also propose a convex variational method to restore blurred images with alpha-stable noise and analyze the existence and uniqueness of the solution of the convex variational model. The primal-dual algorithm for solving the proposed convex restoration problem is given in Section 4. Section 5 presents extensive numerical results to evaluate the performance of the proposed method in comparison with well-known methods. Finally, concluding remarks are provided in Section 6.
2. A Brief Review of the Alpha-Stable and Meridian Distributions
Alpha-stable noise, which obeys the alpha-stable distribution, is often found in radar- and sonar-related applications. The heaviness of the tails of the alpha-stable distribution is controlled by the parameter $\alpha\in(0,2)$; namely, the tails grow thicker as $\alpha$ becomes smaller. Hence, alpha-stable noise with small $\alpha$ values ($\alpha\in(0,1.5)$) can be seen as a type of impulsive noise [21].
The alpha-stable distributions are closed under addition, i.e., the sum of two alpha-stable random variables is still an alpha-stable random variable. Moreover, alpha-stable random variables obey the generalized central limit theorem [21]. However, this class of distributions has no closed-form expressions for its densities and distribution functions (except for the Gaussian, Cauchy, and Lévy distributions). The distribution with $\alpha=2$ corresponds to the well-known Gaussian distribution, and the one with $\alpha=1$ corresponds to the Cauchy distribution.
Figure 1 shows the probability density functions (PDFs) of the alpha-stable distributions $S(\alpha,0,1,0)$ for different values of $\alpha$. We can see that the distributions of this class are all bell-shaped, with density increasing on the left and decreasing on the right. In addition, the tails become heavier as the value of $\alpha$ decreases.
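Since the alpha-stable densities have no closed form, samples of $S(\alpha,0,\sigma,0)$ are usually generated by the Chambers–Mallows–Stuck method. The following is a minimal sketch for the symmetric case ($\beta=0$) used throughout this paper; the function name is ours, and the final print only illustrates that small $\alpha$ produces far more extreme (impulsive) values.

```python
import numpy as np

def symmetric_alpha_stable(alpha, scale, size, rng):
    """Draw samples from S(alpha, 0, scale, 0) with the
    Chambers-Mallows-Stuck generator (symmetric case, beta = 0)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # U ~ Uniform(-pi/2, pi/2)
    w = rng.exponential(1.0, size)                 # W ~ Exp(1)
    if alpha == 1.0:                               # alpha = 1 is the Cauchy case
        x = np.tan(u)
    else:
        x = (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
             * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))
    return scale * x

rng = np.random.default_rng(0)
light = symmetric_alpha_stable(1.9, 1.0, 10000, rng)  # close to Gaussian
heavy = symmetric_alpha_stable(0.5, 1.0, 10000, rng)  # strongly impulsive
# The heavy-tailed draw produces far larger extreme values.
print(np.max(np.abs(light)), np.max(np.abs(heavy)))
```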
The meridian distribution is a member of the generalized Cauchy distribution (GCD) family [30], and it combines the advantages of the GCD and alpha-stable distributions. Moreover, an estimator derived from the meridian distribution is robust to impulsive noise [30]. The probability density function (PDF) of the meridian distribution is given by

$p(x;\gamma,\theta)=\frac{\gamma}{2\left(\gamma+|x-\theta|\right)^{2}}, \qquad (4)$

where $\gamma>0$ is the scale parameter and $\theta$ is the localization parameter. Without loss of generality, we consider $\theta=0$ in this paper. A careful inspection of the meridian distribution shows that its PDF tail decays more slowly than in the Cauchy case, resulting in a heavier-tailed PDF; that is, the meridian PDF exhibits tails heavier than those of the Cauchy PDF [29]. Moreover, from the well-established statistical relation between the Laplacian and meridian distributions, the ratio of two independent Laplacian distributed random variables follows a meridian distribution [29].
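The heavier tail of the meridian PDF relative to the Cauchy PDF can be checked numerically. A small sketch (helper names are ours; both densities use location 0 and scale $\gamma=1$):

```python
import numpy as np

def meridian_pdf(x, gamma):
    # Meridian PDF with location 0: gamma / (2 * (gamma + |x|)^2)
    return gamma / (2.0 * (gamma + np.abs(x)) ** 2)

def cauchy_pdf(x, gamma):
    # Cauchy PDF with location 0 and scale gamma
    return gamma / (np.pi * (gamma ** 2 + x ** 2))

x = np.array([10.0, 100.0, 1000.0])
ratio = meridian_pdf(x, 1.0) / cauchy_pdf(x, 1.0)
print(ratio)  # grows toward pi/2: the meridian tail is heavier
```

Both tails decay like $x^{-2}$, but the meridian density stays larger by a factor approaching $\pi/2$, which is one way to quantify its heavier tail.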
The influence function of the meridian distribution is given by

$\psi(x)=\frac{2\,\mathrm{sgn}(x)}{\gamma+|x|}, \qquad (5)$

where $\mathrm{sgn}(\cdot)$ is the sign function. The influence function determines the effect of contamination. The rejection point of the meridian distribution is smaller than that of the Cauchy distribution, since its influence function decays at a higher rate. This indicates that a signal detection algorithm operating in impulsive noise with the meridian distribution is more robust than one operating in Cauchy distributed noise [29].
3. The Proposed Variational Model
In this section, we propose a new variational model for restoring blurred images in alpha-stable noise environments.
Motivated by existing work [6,13,18,29], we apply the Bayes rule and the maximum a posteriori (MAP) estimator to derive a variational model for restoring blurred images with alpha-stable noise, based on the properties of the meridian distribution and the TV.
First, we focus only on the denoising scenario. Given a known image $f$, as in [6,13], using the Bayes rule and the MAP estimation, we have

$\hat{u}=\arg\max_{u}\ P(u\,|\,f)=\arg\min_{u}\left\{-\log P(f\,|\,u)-\log P(u)\right\}. \qquad (6)$

In obtaining Equation (6), we have omitted $\log P(f)$, since it is a constant with respect to $u$.
As the image is corrupted by alpha-stable noise, for each pixel $x\in\Omega$, we have

$P\left(f(x)\,|\,u(x)\right)=\frac{\gamma}{2\left(\gamma+|f(x)-u(x)|\right)^{2}}, \qquad (7)$

where $\gamma>0$ stands for the scale parameter. Therefore,

$-\log P(f\,|\,u)=\int_{\Omega}\left[\log 2+\log\gamma+2\log\left(1+\frac{|u-f|}{\gamma}\right)\right]dx. \qquad (8)$

Inspired by the idea of Aubert et al. [13], $u$ is assumed to follow a Gibbs prior distribution. Therefore, we obtain the TV regularization of $u$ as follows:

$P(u)=\frac{1}{R}\exp\left(-\beta\int_{\Omega}|Du|\right), \qquad (9)$

where $\beta>0$ is a parameter and $R$ is the normalization factor. Hence, solving Equation (6) is equivalent to finding the minimizer of the following negative logarithmic probability, that is,

$\min_{u}\ \beta\int_{\Omega}|Du|+2\int_{\Omega}\log\left(1+\frac{|u-f|}{\gamma}\right)dx. \qquad (10)$

Here, please note that $\log 2+\log\gamma+\log R$ is omitted, since these three terms are all constants with respect to $u$.
Therefore, our pure denoising model with alpha-stable noise is given by

$\min_{u\in BV(\Omega)}\ \int_{\Omega}|Du|+\lambda\int_{\Omega}\log\left(1+\frac{|u-f|}{\gamma}\right)dx, \qquad (11)$

where $\lambda=\frac{2}{\beta}>0$ is a regularization parameter. As one can see, we keep the same regularization term as in the ROF denoising model (Equation (2)), since the TV regularization term is useful for preserving edges, but we adapt the data fidelity term to alpha-stable noise. We emphasize that the proposed model can be extended to other modern regularization terms, such as framelets, shearlets, rank surrogates, dictionary learning, or the tight-frame approach. These regularization terms are also effective for the restoration of blurred and noisy images.
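For intuition, the discrete energy of Equation (11) can be evaluated directly. The sketch below assumes the isotropic discrete TV with forward differences; the helper names and the parameter values are ours.

```python
import numpy as np

def tv(u):
    # Isotropic discrete total variation with forward differences
    # (replicated boundary, so the last row/column difference is zero).
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    return np.hypot(dx, dy).sum()

def energy(u, f, lam=1.0, gamma=0.1):
    # Objective of Equation (11): TV + meridian-type data fidelity.
    fidelity = np.log1p(np.abs(u - f) / gamma).sum()
    return tv(u) + lam * fidelity

f = np.zeros((4, 4))
assert energy(f, f) == 0.0   # u = f: both terms vanish
u = f.copy()
u[1, 1] = 1.0                # a single outlier pixel
print(energy(u, f))          # strictly positive
```

Note that a large outlier contributes only logarithmically to the fidelity term, which is exactly the robustness to impulsive values that the meridian statistics provide.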
We now prove the existence of a solution of Equation (11).
Theorem 1. Let $f\in L^{\infty}(\Omega)$ with $\inf_{\Omega}f>0$. Then Equation (11) has a solution $u^{*}\in BV(\Omega)$ satisfying $\inf_{\Omega}f\le u^{*}\le\sup_{\Omega}f$.

Proof. Set $a=\inf_{\Omega}f$, $b=\sup_{\Omega}f$, and let $E_{0}(u):=\lambda\int_{\Omega}\log\left(1+\frac{|u-f|}{\gamma}\right)dx$ and $E(u):=\int_{\Omega}|Du|+E_{0}(u)$. We have $E(u)\ge E_{0}(u)\ge 0$, so $E(u)$ is bounded from below, and we can find a minimizing sequence $\{u_{n}\}$.
In addition, for any fixed $x\in\Omega$, let $h(t):=\log\left(1+\frac{|t-f(x)|}{\gamma}\right)$. If $t>f(x)$, we have $h'(t)=\frac{1}{\gamma+t-f(x)}>0$; if $t<f(x)$, we get $h'(t)=\frac{-1}{\gamma+f(x)-t}<0$. From these two inequalities, we know that the function $h(t)$ is decreasing on $[0,f(x)]$ and increasing on $[f(x),+\infty)$. This implies that $h(\min(t,M))\le h(t)$ if $M\ge f(x)$. Hence, $E_{0}(\min(u,b))\le E_{0}(u)$ with $M=b$. Furthermore, it is known that $\int_{\Omega}|D\min(u,b)|\le\int_{\Omega}|Du|$ (see Lemma 1 in [31]). Therefore, we conclude that $E(\min(u,b))\le E(u)$. Similarly, $E(\max(u,a))\le E(u)$ with $a=\inf_{\Omega}f$. Hence, we can assume that $0<a\le u_{n}\le b$, which implies that $u_{n}$ is bounded in $L^{1}(\Omega)$.
According to the definition of $\{u_{n}\}$, $E(u_{n})$ is bounded. In addition, $u_{n}$ is bounded in $BV(\Omega)$, since $\int_{\Omega}|Du_{n}|$ is bounded [31]. Hence, there is a subsequence that converges strongly in $L^{1}(\Omega)$ and weakly in $BV(\Omega)$ to some $u^{*}\in BV(\Omega)$. Furthermore, given $0<a\le u^{*}\le b$, the lower semicontinuity of the TV, and Fatou's lemma, $u^{*}$ is a solution of Equation (11). ☐
We then extend Equation (11) to the simultaneous deblurring and denoising scenario. The restoration is conducted by solving the following optimization model:

$\min_{u\in BV(\Omega)}\ \int_{\Omega}|Du|+\lambda\int_{\Omega}\log\left(1+\frac{|Ku-f|}{\gamma}\right)dx. \qquad (12)$

It is worth mentioning that Equation (12) is a non-convex problem, as is the pure denoising model in Equation (11). Since Equations (11) and (12) are both non-convex, a globally optimal solution cannot be guaranteed. To overcome this drawback, we incorporate into Equations (11) and (12) an additional penalty term, based on the median-filtered result of the noisy image, to obtain convex variational models in the following section.
In the following, we propose a convex variational model for deblurring and denoising images corrupted by both blur and alpha-stable noise.

We first focus on a convex variational model for denoising only. By introducing a penalty term into Equation (11), we obtain the following convex variational model:

$\min_{u\in BV(\Omega)}\ \int_{\Omega}|Du|+\lambda\int_{\Omega}\log\left(1+\frac{|u-f|}{\gamma}\right)dx+\frac{\lambda\mu}{2}\int_{\Omega}(u-g)^{2}dx, \qquad (13)$

where $g=\mathrm{medfilt2}(f)$ ($g$ is the median-filtered version of $f$) [18], and $\lambda>0$ and $\mu>0$ are regularization parameters.
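The anchor $g$ of the quadratic penalty is simply the median-filtered observation. A sketch using SciPy's counterpart of MATLAB's `medfilt2` (the example image is ours), illustrating why $g$ is a reasonable anchor under impulsive corruption:

```python
import numpy as np
from scipy.ndimage import median_filter

# f: a constant image with one impulsive outlier, mimicking alpha-stable
# corruption at a single pixel.
f = np.ones((5, 5))
f[2, 2] = 100.0

# g = medfilt2(f): the anchor of the quadratic penalty term in Equation (13).
g = median_filter(f, size=3)

print(g[2, 2])  # the 3x3 median suppresses the outlier
```

The median filter removes isolated impulsive values while leaving flat regions untouched, so the penalty $\frac{\lambda\mu}{2}\int(u-g)^{2}dx$ pulls the solution toward an outlier-free estimate without oversmoothing.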
We now show that the above model is strictly convex under certain conditions and that Equation (13) has a unique solution.
Lemma 1. If $\mu\gamma^{2}\ge 1$, the objective function in Equation (13) is strictly convex.

Proof. For each fixed $x\in\Omega$, let the real function $h$ on ${\mathbb{R}}^{+}\cup\{0\}$ be defined as

$h(t):=\lambda\log\left(1+\frac{|t-f(x)|}{\gamma}\right)+\frac{\lambda\mu}{2}\left(t-g(x)\right)^{2}.$

For $t\ne f(x)$, the first- and second-order derivatives of $h$ are given by

$h'(t)=\frac{\lambda\,\mathrm{sgn}\left(t-f(x)\right)}{\gamma+|t-f(x)|}+\lambda\mu\left(t-g(x)\right),\qquad h''(t)=-\frac{\lambda}{\left(\gamma+|t-f(x)|\right)^{2}}+\lambda\mu.$

Since $\mu\gamma^{2}\ge 1$, we have $\gamma\ge\frac{1}{\sqrt{\mu}}$; thus, $\gamma+|t-f(x)|\ge\frac{1}{\sqrt{\mu}}$, or $\mu\left(\gamma+|t-f(x)|\right)^{2}\ge 1$, that is, $h''(t)\ge 0$, i.e., $h$ is convex. Furthermore, the function $h$ has only one minimizer, so $h$ is strictly convex when $\mu\gamma^{2}\ge 1$. Since the total variation regularization is convex, we conclude that the objective function in Equation (13) is strictly convex for $\mu\gamma^{2}\ge 1$. ☐
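The condition $\mu\gamma^{2}\ge 1$ in Lemma 1 can be checked numerically by looking at second differences of the pointwise function $h$ on a grid: they stay nonnegative exactly when the condition holds. A small sketch with illustrative pixel values ($f(x)=0.5$, $g(x)=0.4$, both ours):

```python
import numpy as np

def h(t, f, g, lam, mu, gamma):
    # Pointwise objective of Equation (13) at one pixel.
    return lam * np.log1p(np.abs(t - f) / gamma) + lam * mu / 2 * (t - g) ** 2

def min_second_difference(mu, gamma, f=0.5, g=0.4, lam=1.0):
    t = np.linspace(0.0, 2.0, 4001)
    v = h(t, f, g, lam, mu, gamma)
    return np.diff(v, 2).min()  # nonnegative on a grid indicates convexity

print(min_second_difference(mu=100.0, gamma=0.1))  # mu*gamma^2 = 1 -> convex
print(min_second_difference(mu=10.0, gamma=0.1))   # mu*gamma^2 = 0.1 -> nonconvex
```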
Based on Lemma 1, we can now prove the existence and uniqueness of the solution of Equation (13).

Lastly, we also extend our convex variational model to the simultaneous deblurring and denoising case:

$\min_{u\in BV(\Omega)}\ \int_{\Omega}|Du|+\lambda\int_{\Omega}\log\left(1+\frac{|Ku-f|}{\gamma}\right)dx+\frac{\lambda\mu}{2}\int_{\Omega}(Ku-g)^{2}dx. \qquad (14)$

Since the blurring operator $K$ is linear and nonnegative, the model in Equation (14) is convex when $\mu\gamma^{2}\ge 1$. In the following theorem, we state the existence and uniqueness of its solution.
Theorem 2. Let $f\in L^{\infty}(\Omega)$ with $\inf_{\Omega}f>0$, $g\in L^{2}(\Omega)$, and let $K\in\mathcal{L}\left(L^{1}(\Omega),L^{2}(\Omega)\right)$ be a nonnegative linear operator. Assume that $K$ does not annihilate constant functions, i.e., $K1\ne 0$. Then Equation (14) has a solution. Further, if $\mu\gamma^{2}\ge 1$ and $K$ is injective, the solution is unique.

Proof. Let $\{u_{n}\}$ be a minimizing sequence for Equation (14). Since the objective function in (14) is bounded along this sequence, $\left\{\int_{\Omega}|Du_{n}|\right\}$ is bounded [13,18]. As in the proof of Theorem 2 of [18], we can verify that $\|u_{n}-m_{\Omega}(u_{n})\|_{2}$ and $\|u_{n}-m_{\Omega}(u_{n})\|_{1}$ are bounded for each $n$, where $m_{\Omega}(u_{n})=\frac{1}{|\Omega|}\int_{\Omega}u_{n}\,dx$ and $|\Omega|$ denotes the measure of $\Omega$. Due to the continuity of the operator $K\in\mathcal{L}\left(L^{1}(\Omega),L^{2}(\Omega)\right)$, the sequence $\left\{K\left(u_{n}-m_{\Omega}(u_{n})\right)\right\}$ is bounded in $L^{2}(\Omega)$ and in $L^{1}(\Omega)$.

Moreover, for each $n$, the objective function in Equation (14) is bounded; hence $\int_{\Omega}(Ku_{n}-g)^{2}dx$ is bounded. Thus, $\|Ku_{n}-g\|_{1}$ is bounded as well, and hence $\|Ku_{n}\|_{1}$ is bounded. Writing

$Ku_{n}=K\left(u_{n}-m_{\Omega}(u_{n})\right)+m_{\Omega}(u_{n})K1, \qquad (15)$

one can easily find that $\left|m_{\Omega}(u_{n})\right|\,\|K1\|_{1}$ is bounded from Equation (15).
Since $K1\ne 0$, $m_{\Omega}(u_{n})$ is uniformly bounded. Moreover, $\left\{u_{n}-m_{\Omega}(u_{n})\right\}$ is bounded, so $\{u_{n}\}$ is bounded in $L^{2}(\Omega)$ and in $L^{1}(\Omega)$. Together with the boundedness of $\left\{\int_{\Omega}|Du_{n}|\right\}$, this shows that $\{u_{n}\}$ is also bounded in $BV(\Omega)$.
As a consequence, there is a subsequence $\{u_{n_{k}}\}$ that converges in $L^{1}(\Omega)$ to some $u^{*}\in BV(\Omega)$, and $\{Du_{n_{k}}\}$ converges weakly as a measure to $Du^{*}$. Since the linear operator $K$ is continuous, $\{Ku_{n_{k}}\}$ converges to $Ku^{*}$ in $L^{2}(\Omega)$. Thus, $u^{*}$ is a solution of Equation (14) according to the lower semicontinuity of the TV and Fatou's lemma.

Based on Lemma 1 and the injectivity of $K$, when $\mu\gamma^{2}\ge 1$, Equation (14) is strictly convex, so its solution is unique. ☐
4. Primal-Dual Algorithm
In this section, we employ the primal-dual algorithm [32,33] to solve the minimization problem in (14), since it is easy to implement and its convergence is guaranteed [32]. Due to the convexity of Equation (14), many other algorithms could also be employed to solve the proposed image deblurring and denoising model, such as the alternating direction method of multipliers (ADMM) [5,34,35] and the split-Bregman algorithm [36].
We address the general deblurring and denoising case, since the pure denoising case can be considered a special case in which $K$ is the identity operator. We first derive the discrete version of our proposed deblurring and denoising model in Equation (14), and then give the corresponding numerical solution.
Suppose that the noisy image $f\in{\mathbb{R}}^{mn}$ is obtained from a two-dimensional pixel array of dimension $m\times n$, and $K\in{\mathbb{R}}^{mn\times mn}$ is the discretization of the continuous blurring operator. We now introduce the discrete version of Equation (14):

$\min_{u\in{\mathbb{R}}^{mn}}\ {\|\nabla u\|}_{1}+\lambda G(Ku), \qquad (16)$

where $G:{\mathbb{R}}^{mn}\to\mathbb{R}$ is defined as

$G(w)=\sum_{i=1}^{mn}\left[\log\left(1+\frac{|w_{i}-f_{i}|}{\gamma}\right)+\frac{\mu}{2}\left(w_{i}-g_{i}\right)^{2}\right]. \qquad (17)$

The first term of Equation (16) denotes the discrete total variation of the image $u$, and it is defined as

${\|\nabla u\|}_{1}=\sum_{i=1}^{mn}\left|(\nabla u)_{i}\right|, \qquad (18)$

where the discrete gradient $\nabla\in{\mathbb{R}}^{2mn\times mn}$ is given by $\nabla u=\left(\begin{array}{c}{\nabla}_{x}u\\ {\nabla}_{y}u\end{array}\right)$.
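A standard way to realize the discrete gradient $\nabla$ and its adjoint is with forward differences and the matching divergence satisfying $\mathrm{div}=-\nabla^{\mathrm{T}}$. A sketch (the boundary convention is the usual Neumann/replicate choice, and the adjoint identity is verified numerically):

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann (replicate) boundary conditions:
    # returns the pair (grad_x u, grad_y u).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Discrete divergence chosen so that div = -grad^T (negative adjoint).
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

# Adjoint identity <grad u, p> = -<u, div p> on random data.
rng = np.random.default_rng(1)
u = rng.standard_normal((8, 8))
px, py = rng.standard_normal((2, 8, 8))
gx, gy = grad(u)
lhs = (gx * px).sum() + (gy * py).sum()
rhs = -(u * div(px, py)).sum()
print(abs(lhs - rhs))  # numerically zero: div is indeed -grad^T
```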
The first term on the right side of Equation (17) is a robust distance metric, which can be regarded as a meridian norm. The meridian norm tends to behave like the $L_{1}$ norm for points within the unit $L_{1}$ ball and gives the same penalization to large sparse deviations as to small clustered deviations [30].
As in [32], we introduce new variables $v\in{\mathbb{R}}^{2mn}$ and $w\in{\mathbb{R}}^{mn}$; Equation (16) is then clearly equivalent to the following constrained optimization problem:

$\min_{u,v,w}\ {\|v\|}_{1}+\lambda G(w)\quad\text{subject to}\quad v=\nabla u,\ \ w=Ku. \qquad (19)$

To employ the primal-dual algorithm, we study the following saddle-point problem:

$\min_{u,v,w}\ \max_{p,q}\ {\|v\|}_{1}+\lambda G(w)+\langle\nabla u-v,p\rangle+\langle Ku-w,q\rangle, \qquad (20)$

where $p\in{\mathbb{R}}^{2mn}$ and $q\in{\mathbb{R}}^{mn}$ are the dual variables, $X$ is the real vector space ${\mathbb{R}}^{mn}$, and $Y=\left\{q\in{\mathbb{R}}^{2mn}:{\|q\|}_{\infty}\le 1\right\}$, where ${\|q\|}_{\infty}$ is defined as ${\|q\|}_{\infty}=\max_{i\in\{1,2,\cdots,mn\}}\sqrt{q_{i}^{2}+q_{mn+i}^{2}}$.
Now we apply the primal-dual algorithm to the optimization problem of Equation (20). The primal-dual algorithm is defined through the following iterations:

$p^{k+1}=\arg\max_{p}\ \langle\nabla\overline{u}^{k}-\overline{v}^{k},p\rangle-\frac{1}{2\sigma}{\|p-p^{k}\|}_{2}^{2}, \qquad (21)$

$q^{k+1}=\arg\max_{q}\ \langle K\overline{u}^{k}-\overline{w}^{k},q\rangle-\frac{1}{2\sigma}{\|q-q^{k}\|}_{2}^{2}, \qquad (22)$

$u^{k+1}=\arg\min_{u}\ \langle\nabla u,p^{k+1}\rangle+\langle Ku,q^{k+1}\rangle+\frac{1}{2\tau}{\|u-u^{k}\|}_{2}^{2}, \qquad (23)$

$v^{k+1}=\arg\min_{v}\ {\|v\|}_{1}-\langle v,p^{k+1}\rangle+\frac{1}{2\tau}{\|v-v^{k}\|}_{2}^{2}, \qquad (24)$

$w^{k+1}=\arg\min_{w}\ \lambda G(w)-\langle w,q^{k+1}\rangle+\frac{1}{2\tau}{\|w-w^{k}\|}_{2}^{2}. \qquad (25)$

In the following, we provide details on how to solve them. Since the objective functions of Equations (21)–(23) are quadratic, the updates of $p$, $q$, and $u$ can be computed efficiently by

$p^{k+1}=p^{k}+\sigma\left(\nabla\overline{u}^{k}-\overline{v}^{k}\right),\qquad q^{k+1}=q^{k}+\sigma\left(K\overline{u}^{k}-\overline{w}^{k}\right),\qquad u^{k+1}=u^{k}+\tau\left(\mathrm{div}\,p^{k+1}-K^{\mathrm{T}}q^{k+1}\right),$

where the divergence operator $\mathrm{div}=-{\nabla}^{\mathrm{T}}$. The update in Equation (24) can be obtained by applying the soft thresholding operator as

$v^{k+1}=\mathrm{sgn}\left(t^{k}\right)\max\left(|t^{k}|-\tau,\,0\right),$

where $t^{k}=v^{k}+\tau p^{k+1}$. The optimality condition for (25) is given by

$\lambda\left(\frac{\mathrm{sgn}\left(w^{k+1}-f\right)}{\gamma+|w^{k+1}-f|}+\mu\left(w^{k+1}-g\right)\right)-q^{k+1}+\frac{w^{k+1}-w^{k}}{\tau}=0,$

which can be solved pointwise for $w^{k+1}$.
We remark that, if $K$ is the identity operator, i.e., the degraded image $f$ is not blurred but only corrupted by noise, there is no need to introduce the primal variable $w$ and the dual variable $q$, and the algorithm can be simplified accordingly.

The primal-dual algorithm for solving the optimization problem of Equation (20) is summarized in Algorithm 1.
The termination condition in Algorithm 1 will be discussed in Section 5.
In the rest of this section, we study the existence of a solution of Equation (20) and the convergence of Algorithm 1.
Define

$A=\left(\begin{array}{ccc}\nabla & -I & 0\\ K & 0 & -I\end{array}\right),\qquad x=\left(\begin{array}{c}u\\ v\\ w\end{array}\right),\qquad y=\left(\begin{array}{c}p\\ q\end{array}\right),$

such that Equation (20) is equivalent to

$\min_{x\in X}\ \max_{y\in Y}\ H(x)+\langle Ax,y\rangle, \qquad (35)$

where $H(x)={\|v\|}_{1}+\lambda G(w)$.
Proposition 1. The saddle-point set of Equation (35) is nonempty.

Proof. The proof is the same as that of Proposition 2 of [37]. We remark that the required conditions in [38] are easily verified for the proposed primal-dual formulation:

(H1): $X$ and $Y$ are nonempty closed convex sets;

(H2): the objective function of (35) (denoted $\Phi(x,y)$) is convex-concave on $X\times Y$ in the following sense: for each $y\in Y$, the function $\Phi(\cdot,y)$ is convex, and for each $x\in X$, the function $\Phi(x,\cdot)$ is concave;

(H3): $X$ is bounded, or there exists $y_{0}\in Y$ such that $\Phi(x,y_{0})\to+\infty$ when $\|x\|\to+\infty$;

(H4): $Y$ is bounded, or there exists $x_{0}\in X$ such that $\Phi(x_{0},y)\to-\infty$ when $\|y\|\to+\infty$.

Thus, there exists a nonempty convex compact set of saddle points of Equation (35) on $X\times Y$. ☐
The following proposition shows the convergence of Algorithm 1.
Algorithm 1: Primal-dual algorithm for solving model (20)

Step (1) Initialization: given $\sigma>0$, $\tau>0$, starting points $p^{0}=0$, $q^{0}=0$, $u^{0}={\overline{u}}^{0}=f$, $v^{0}={\overline{v}}^{0}=\nabla u^{0}$, $w^{0}={\overline{w}}^{0}=Ku^{0}$, and iteration index $k=0$.
Step (2) Calculate $p^{k+1}$, $q^{k+1}$, $u^{k+1}$, $v^{k+1}$, and $w^{k+1}$ from Equations (21)–(25), and set ${\overline{u}}^{k+1}=2u^{k+1}-u^{k}$, ${\overline{v}}^{k+1}=2v^{k+1}-v^{k}$, ${\overline{w}}^{k+1}=2w^{k+1}-w^{k}$.
Step (3) If the termination condition is satisfied, stop; otherwise, set $k:=k+1$ and return to Step (2).
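For illustration, the following is a minimal sketch of the primal-dual iteration for the pure denoising case ($K$ the identity, so the variables $w$ and $q$ are dropped, as remarked above), rather than the full three-block splitting of Algorithm 1. It assumes the isotropic discrete TV, takes $\sigma=\tau=0.3$ and $\mu\gamma^{2}=1$ as in our experiments, solves the pointwise fidelity proximal step by bisection, and uses helper names and test data of our own.

```python
import numpy as np
from scipy.ndimage import median_filter

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def prox_fidelity(t, f, g, lam, mu, gamma, tau, iters=60):
    """Pointwise prox of tau*lam*[log(1+|u-f|/gamma) + mu/2 (u-g)^2] via
    bisection on the increasing derivative; valid for mu*gamma^2 >= 1,
    where the pointwise objective is strictly convex (Lemma 1)."""
    lo = np.minimum(np.minimum(t, f), g) - 1.0
    hi = np.maximum(np.maximum(t, f), g) + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        d = (mid - t) / tau + lam * (np.sign(mid - f) / (gamma + np.abs(mid - f))
                                     + mu * (mid - g))
        hi = np.where(d > 0, mid, hi)
        lo = np.where(d > 0, lo, mid)
    return 0.5 * (lo + hi)

def denoise(f, lam=1.0, gamma=0.1, sigma=0.3, tau=0.3, n_iter=100):
    mu = 1.0 / gamma ** 2                  # enforce mu * gamma^2 = 1
    g = median_filter(f, size=3)           # penalty anchor g = medfilt2(f)
    u = f.copy(); u_bar = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py))   # project dual onto unit ball
        px, py = px / norm, py / norm
        u_old = u
        u = prox_fidelity(u_old + tau * div(px, py), f, g, lam, mu, gamma, tau)
        u_bar = 2 * u - u_old                      # over-relaxation step
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.1 * rng.standard_normal((32, 32))
restored = denoise(noisy)
```

With $\sigma\tau=0.09$ and ${\|\nabla\|}_{2}^{2}\le 8$, the step-size condition of Proposition 2 below is satisfied for this simplified two-block iteration.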
Proposition 2. Let ${\|A\|}_{2}$ be the operator 2-norm of $A$, and let the iterates $(x^{k},y^{k})$ be defined by Algorithm 1. If $\sigma\tau{\|A\|}_{2}^{2}<1$, then $(x^{k},y^{k})$ converges to a saddle point $(x^{*},y^{*})$ of the primal-dual problem in Equation (35).

Proof. The proposition can be seen as a special case of Theorem 1 in [32]. Conclusion (a) of Theorem 1 in [32] establishes that $(x^{k},y^{k})$ is a bounded sequence, so some subsequence $(x^{k_{l}},y^{k_{l}})$ converges to some limit $(x^{*},y^{*})$. Observe that conclusion (b) of Theorem 1 in [32] implies that $\lim_{k\to\infty}(x^{k}-x^{k-1})=0$, so $x^{k_{l}-1}$ and $y^{k_{l}-1}$ in particular converge, respectively, to $x^{*}$ and $y^{*}$. It follows that the limit $(x^{*},y^{*})$ is a fixed point of the iterations of Algorithm 1, and hence a saddle point of our problem. ☐
Since ${\|\nabla\|}_{2}^{2}\le 8$ (see [4]), ${\|K\|}_{2}\le 1$ (see [37]), and ${\|A\|}_{2}^{2}\le{\|\nabla\|}_{2}^{2}+{\|K\|}_{2}^{2}+1$ (see [18,39]), we have ${\|A\|}_{2}^{2}\le 10$. Therefore, in order to ensure the convergence of our algorithm, we just need to choose $\sigma$ and $\tau$ such that $\sigma\tau<0.1$.
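The bound ${\|\nabla\|}_{2}^{2}\le 8$ can be checked numerically by power iteration on $\nabla^{\mathrm{T}}\nabla$ for the forward-difference gradient; this is only an illustrative sketch (grid size and helper names are ours), not part of the proof.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def grad_T(px, py):
    # grad^T = -div for the forward-difference gradient above.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return -(dx + dy)

# Power iteration on grad^T grad estimates ||grad||_2^2.
rng = np.random.default_rng(3)
u = rng.standard_normal((64, 64))
for _ in range(300):
    u = grad_T(*grad(u))
    u /= np.linalg.norm(u)
gx, gy = grad(u)
sq_norm = (gx ** 2).sum() + (gy ** 2).sum()  # Rayleigh quotient with ||u|| = 1
print(sq_norm)  # approaches, but never exceeds, the bound 8
```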
5. Experimental Results and Analysis
In this section, numerical results are obtained by applying our proposed models to blurred images corrupted by alpha-stable noise. We also compare our models with other existing, well-known models.
We take six images—Cameraman ($256\times 256$), Peppers ($256\times 256$), Lena ($256\times 256$), Phantom ($256\times 256$), Boat ($256\times 256$), and Fruits ($256\times 256$)—for the experiments and comparisons. For quantitative comparison, four objective image quality metrics—the peak signal-to-noise ratio (PSNR) in dB, the structural similarity index (SSIM) [40], the multiscale SSIM (MS-SSIM) [41], and the feature similarity index (FSIM) [42]—are used to measure the performance of the proposed models on the test images. Each experiment is repeated 10 times, and the reported PSNR, SSIM, MS-SSIM, and FSIM values are averaged over the 10 runs. The PSNR and SSIM are respectively defined as follows:

$\mathrm{PSNR}=10\log_{10}\frac{mn\left(\max u\right)^{2}}{{\|\hat{u}-u\|}_{2}^{2}},\qquad \mathrm{SSIM}=\frac{\left(2\mu_{\hat{u}}\mu_{u}+c_{1}\right)\left(2\sigma_{\hat{u}u}+c_{2}\right)}{\left(\mu_{\hat{u}}^{2}+\mu_{u}^{2}+c_{1}\right)\left(\sigma_{\hat{u}}^{2}+\sigma_{u}^{2}+c_{2}\right)},$
where $\hat{u}$ is the restored image, $u$ is the original image, $\mu_{\hat{u}}$ and $\mu_{u}$ are their respective means, $\sigma_{\hat{u}}^{2}$ and $\sigma_{u}^{2}$ are their respective variances, $\sigma_{\hat{u}u}$ is their covariance, and $c_{1},c_{2}>0$ are constants. PSNR, SSIM, MS-SSIM, and FSIM all measure restoration quality: a higher PSNR indicates a better restored image, and the closer the SSIM, MS-SSIM, and FSIM values are to 1, the more similar the restored image is to the original.
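The two definitions above translate directly into code. The sketch below assumes images with unit dynamic range and computes the SSIM in its global, single-window form (in practice SSIM is averaged over local windows); the constants $c_{1}=10^{-4}$ and $c_{2}=9\times 10^{-4}$ are the standard choices $(0.01)^{2}$ and $(0.03)^{2}$ for a peak value of 1.

```python
import numpy as np

def psnr(u_hat, u, peak=1.0):
    # PSNR in dB: 10 log10(peak^2 / MSE).
    mse = np.mean((u_hat - u) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def global_ssim(u_hat, u, c1=1e-4, c2=9e-4):
    # Single-window (global) SSIM from the means, variances, and covariance.
    mu1, mu2 = u_hat.mean(), u.mean()
    s1, s2 = u_hat.var(), u.var()
    s12 = ((u_hat - mu1) * (u - mu2)).mean()
    return ((2 * mu1 * mu2 + c1) * (2 * s12 + c2)
            / ((mu1 ** 2 + mu2 ** 2 + c1) * (s1 + s2 + c2)))

u = np.zeros((2, 2))
u_hat = np.full((2, 2), 0.1)
print(psnr(u_hat, u))          # MSE = 0.01, peak = 1 -> 20 dB
print(global_ssim(u, u.copy()))  # identical images -> 1.0
```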
In our numerical simulations, we terminate the algorithm when the relative change of the objective function between two consecutive iterations becomes small enough, i.e.,

$\frac{\left|E(u^{k+1})-E(u^{k})\right|}{\left|E(u^{k})\right|}<\epsilon,$

where $E(\cdot)$ denotes the objective function of the proposed Equation (14) and $\epsilon>0$ is a tolerance. For Algorithm 1, we have found that smaller tolerance values (e.g., $\epsilon={10}^{-4}$) do not consistently improve the relative error while the runtimes increase, so we set $\epsilon={10}^{-3}$ in our numerical experiments.
Since $\gamma$ depends on the noise level, we take the same value of the parameter as in [30], computed from the quantiles of $f$ (where $f_{(c)}$ denotes the $c$th quantile of $f$). We choose $\sigma=\tau=0.3$ and $\mu\gamma^{2}=1$. In addition, the regularization parameter $\lambda$ balances the trade-off between the TV regularization term and the data fidelity term; we tune it manually to obtain the highest PSNR value for the restored image.
We first illustrate the different effects of Gaussian noise, impulse noise, and alpha-stable noise. Figure 2a shows the original Cameraman image, and Figure 2b–d show, respectively, the images degraded by Gaussian noise, impulse noise, and alpha-stable noise (with $\alpha=0.5$). Figure 2e–h show the zoomed top-left corners of Figure 2a–d. It is clear from Figure 2 that the image corrupted by Gaussian noise looks different from the images corrupted by impulse noise and alpha-stable noise (with $\alpha=0.5$), while the alpha-stable noise and impulse noise are, to some extent, close to each other. For example, some pixels are degraded to white or black by the impulse noise and the alpha-stable noise (with $\alpha=0.5$), while the image corrupted by Gaussian noise is uniformly modified and all pixels are corrupted by noise (see Figure 2f). Although alpha-stable noise is similar to impulse noise, there are also some very important differences; for instance, with impulse noise some pixels are noise-free (see Figure 2g), while with alpha-stable noise, noise-free pixels are very rare (see Figure 2h). Thus, due to the impulsive character of alpha-stable noise, we employ the meridian norm in our proposed model.
5.1. Image Denoising
In this subsection, we first focus only on the pure denoising case. The noisy image
f is generated as
$f=u+\eta =u+\xi \rho $ where
$\rho $ follows the alphastable distribution, and
$\xi >0$ gives the noise level. We compare the proposed image denoising model with the Cauchy model [
18], the TVL1 model [
11], and the meridian filter [
29]. These models are all efficient at recovering images corrupted by impulsive noise.
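For reference, a minimal sliding-window meridian filter can be sketched as follows, assuming the selection form in which the cost $\sum_i \log(\delta + |x_i - \beta|)$ is evaluated only at the window samples themselves; the window size and $\delta$ below are illustrative defaults, not the values from [29].

```python
import numpy as np

def meridian_filter(img, win=3, delta=0.1):
    """Sliding-window meridian filter: each output pixel is the value
    beta minimizing sum_i log(delta + |x_i - beta|) over the window
    samples x_i; the cost is evaluated at the samples themselves."""
    r = win // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            x = padded[i:i + win, j:j + win].ravel()
            # cost of each candidate beta = x_k against all window samples
            costs = np.log(delta + np.abs(x[:, None] - x[None, :])).sum(axis=1)
            out[i, j] = x[np.argmin(costs)]
    return out
```

On a constant region the filter acts as the identity, while an isolated impulse is replaced by the surrounding value, reflecting the robustness to impulsive samples discussed above.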
The proposed image denoising model is applied to the Cameraman image in the presence of alphastable noise at different tail parameters
$\alpha $ (with
$\xi =0.04$ and
$\rho $ following the alphastable distribution
$S\left(\alpha ,0,0.2,0\right)$). In order to quantitatively evaluate the performance of the proposed image denoising model, two objective criteria, the PSNR and the SSIM, are computed and provided in
Figure 3. The Cauchy and TVL1 models for image denoising perform similarly, so we only provide the results of the Cauchy model in
Figure 3.
Figure 3 gives the PSNR and the SSIM of the noisy Cameraman image and the recovered images resulting from the proposed image denoising model, the Cauchy model, and the meridian filter at different tail parameters
$\alpha $. As the tail parameter
$\alpha $ increases, the PSNR and SSIM values become higher for all of these methods; conversely, as the tail parameter
$\alpha $ decreases, the superiority of the proposed method becomes obvious. Moreover, our proposed image denoising model outperforms the Cauchy model and the meridian filter in terms of the PSNR and SSIM at the same tail parameter. In all, the proposed model significantly outperforms the commonly employed image denoising models in impulsive noisy environments (with small
$\alpha $ values) while providing comparable performances in less demanding, lighttailed environments (with high
$\alpha $ values). In particular, the PSNR values of our proposed model are all above 30 dB at the tail parameter of
$\alpha \ge 1$, and such values are generally considered to indicate good recovery. Hence, in the remainder of this part,
$\rho $ follows the alphastable distribution
$S\left(1,0,0.2,0\right)$.
To compare the performance of the different models quantitatively, the PSNR in dB and the SSIM are used for the three noisy test images: Cameraman, Peppers, and Lena. The PSNR and SSIM values for the noisy images (
$\xi =0.04$ and
$\rho $ obeying
$S\left(1,0,0.2,0\right)$) and the recovered images given by the different methods are listed in
Table 1.
Table 1 gives the PSNR values and the SSIM values for three different test images and the recovered results of these noisy images resulting from our proposed image denoising model, the Cauchy model, the TVL1 model, and the meridian filter, respectively. Obviously, our proposed image denoising model outperforms the TVL1 model, the Cauchy model, and the meridian filter in terms of the PSNR and SSIM at the same noise levels (
$\xi =0.04$ and
$\rho $ following
$S\left(1,0,0.2,0\right)$). Taking the noisy Cameraman image as an example, our method increases the PSNR of the recovered image by 2.836 dB at the same noise level and achieves the largest SSIM value.
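The PSNR values used throughout these comparisons can be computed as below for intensities in $[0,1]$ (a minimal sketch; SSIM values can be obtained, for example, from `skimage.metrics.structural_similarity`):

```python
import numpy as np

def psnr(u, u_hat, peak=1.0):
    """PSNR in dB between a reference image u and a restoration u_hat,
    assuming intensities lie in [0, peak]."""
    mse = np.mean((np.asarray(u, float) - np.asarray(u_hat, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```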
5.2. Image Deblurring and Denoising
In the following subsection, we focus on the combined deblurring and denoising case. Here, we consider the recovery of blurred images corrupted by both Gaussian blur (window size
$9\times 9$ and standard deviation 1) and alphastable noise (
$\xi =0.04$). As in the previous subsection, we compare our proposed deblurring and denoising model with other wellknown image deblurring and denoising methods for impulsive noise, such as the TVL1 model [
11] and the Cauchy model [
18].
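Under periodic boundary conditions, the Gaussian blur operator $K$ is block-circulant with circulant blocks and is therefore diagonalized by the 2-D DFT, so it can be applied in the Fourier domain. A sketch with the $9\times 9$, standard-deviation-1 kernel used here (the function names are illustrative):

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.0):
    """Normalized size-by-size Gaussian point-spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_periodic(u, psf):
    """Apply the blur K as a circular convolution via the 2-D FFT."""
    m, n = u.shape
    big = np.zeros((m, n))
    p, q = psf.shape
    big[:p, :q] = psf
    # center the PSF so the blur introduces no spatial shift
    big = np.roll(big, (-(p // 2), -(q // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(big)))

blurred = blur_periodic(np.full((16, 16), 0.5), gaussian_psf(9, 1.0))
```

Because the PSF is normalized, a constant image is left unchanged by the blur; the FFT form is also the reason $K$-related subproblems in TV deblurring can be solved efficiently under periodic boundary conditions.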
The proposed image deblurring and denoising model is applied to the blurred and noisy Cameraman image at different tail parameters
$\alpha $. The PSNR and SSIM are computed and provided in
Figure 4.
Figure 4 provides the quantitative results of our proposed image deblurring and denoising model, the TVL1 model, and the Cauchy model. All of these methods perform well: as the tail parameter increases, the PSNR and SSIM values become higher for all of them, and as it decreases, the superiority of our proposed model becomes more obvious. Hence, at the same tail parameter
$\alpha $, our proposed model performs better than the TVL1 model and the Cauchy model.
Since the PSNR and SSIM performances depend on the tail parameter, it is necessary to choose an appropriate tail parameter for image deblurring and denoising. In the following test, the tail parameter is set to
$\alpha =1$. In practice, we can see from
Figure 4 that the recovered results with
$\alpha =1$ are of good quality for all models.
In order to quantitatively evaluate the performance of the proposed image deblurring and denoising model, we now apply it to recover three different images (Phantom, Boat, and Fruits) with Gaussian blur (a window size
$9\times 9$ and standard deviation of 1) at the same noise level (
$\xi =0.04$ and
$\rho $ following
$S\left(1,0,0.2,0\right)$). Experimental results on these test images are shown in
Figure 5,
Figure 6 and
Figure 7, respectively.
Figure 5a is the Phantom blurred and noisy image, and
Figure 5b–d are the images recovered by our proposed image deblurring and denoising model, the TVL1 model, and the Cauchy model, respectively.
Figure 6 and
Figure 7 are organized in the same way for the Boat and Fruits images, respectively. It is clear from
Figure 5,
Figure 6 and
Figure 7 that the images recovered by our proposed image deblurring and denoising model contain more detailed information and are much closer to the original test images than those recovered by the TVL1 model and the Cauchy model.
Figure 8a–d are the magnified top left regions of
Figure 7a–d, respectively. It is clear from
Figure 8 that the reconstruction obtained with our proposed method is superior to those of the TVL1 and Cauchy methods. We can also see that the result restored by the proposed method maintains the salient line features of the original image and has clearer outlines with reduced noise and blur effects.
For further quantitative comparison of the performance of the proposed image deblurring and denoising model, the PSNR in dB and SSIM were computed using the different models for the three different groups of blurred and noisy test images.
The PSNR and SSIM values for the three blurred and noisy test images Cameraman, Peppers, and Lena (Gaussian blur with a window size
$9\times 9$ and standard deviation of 1,
$\xi =0.04$, and
$\rho $ following
$S\left(1,0,0.2,0\right)$), together with those of the recovered images given by the different methods, are listed in
Table 2.
For easy observation, we took the Fruits image as an example and magnified the top left regions of the restored results with different algorithms. The magnified local regions of the restored results with different algorithms are shown in
Figure 8.
In general, larger PSNR values indicate that the recovered image retains more of the original information. It is obvious from
Table 2 that a notable performance improvement is achieved by the proposed image deblurring and denoising model as compared with the TVL1 model and the Cauchy model. For example, the PSNRs of the Cameraman image resulting from the TVL1 model and the Cauchy model are 27.283 dB and 26.244 dB, respectively, while our proposed model gives 28.327 dB, an improvement of 2.083 dB over the Cauchy model. This is consistent with the visual effects of
Figure 5,
Figure 6,
Figure 7 and
Figure 8.
To further verify the performance of the algorithm, the PSNR, SSIM, MS-SSIM, and FSIM values for blurred and noisy Phantom images and the recovered images given by different methods are listed in
Table 3. It is obvious from
Table 3 that a notable performance improvement is achieved by the proposed image deblurring and denoising model as compared with the TVL1 model and the Cauchy model in terms of these four image quality metrics. This is also consistent with the visual effects of
Figure 5. In addition, we have employed other classical test images to evaluate the deblurring and denoising performance and found that the proposed method achieves a similar performance gain in terms of the PSNR, SSIM, MS-SSIM, and FSIM.