Article

Beyond Staircasing Effect: Robust Image Smoothing via ℓ0 Gradient Minimization and Novel Gradient Constraints †

1 Faculty of Environmental Engineering, The University of Kitakyushu, Fukuoka 808-0135, Japan
2 Faculty of Science and Engineering, Doshisha University, Kyoto 610-0394, Japan
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018.
Signals 2023, 4(4), 669-686; https://doi.org/10.3390/signals4040037
Submission received: 30 July 2023 / Revised: 10 September 2023 / Accepted: 22 September 2023 / Published: 26 September 2023
(This article belongs to the Topic Research on the Application of Digital Signal Processing)

Abstract
In this paper, we propose robust image-smoothing methods based on ℓ0 gradient minimization with novel gradient constraints to effectively suppress pseudo-edges. Simultaneously minimizing the ℓ0 gradient, i.e., the number of nonzero gradients in an image, and the ℓ2 data fidelity results in a smooth image. However, this optimization often leads to undesirable artifacts, such as pseudo-edges, known as the "staircasing effect", and halos, which become more visible in image enhancement tasks, like detail enhancement and tone mapping. To address these issues, we introduce two types of gradient constraints: box and ball. These constraints are applied using a reference image (e.g., the input image is used as a reference for image smoothing) to suppress pseudo-edges in homogeneous regions and the blurring effect around strong edges. We also solve the ℓ0 gradient minimization problems under the box-/ball-type gradient constraints using the alternating direction method of multipliers (ADMM). Experimental results on important applications of ℓ0 gradient minimization demonstrate the advantages of our proposed methods compared to existing ℓ0 gradient-based approaches.

1. Introduction

Image smoothing is a key technique in image processing and is used in applications such as deblurring [1,2,3,4], detail enhancement [5,6], and tone mapping [7,8,9]. Furthermore, it is used in various fields, such as medical imaging [10,11,12], computer graphics [13,14,15], remote sensing [16,17,18], and character recognition [4,19,20].
Filtering-based smoothing methods, e.g., a bilateral filter [6,21], a guided filter [22,23,24,25], and a nonlocal mean filter [26,27], have been actively studied for a long time and are often used in practical situations, as they can easily obtain smooth images that roughly maintain the structural gradients of input images with a relatively low computational cost.
Optimization-based smoothing methods were recently proposed in [1,28,29,30,31,32,33,34,35,36,37,38,39,40,41], and they can flexibly incorporate a priori information into minimization problems. The most standard a priori knowledge of natural images is local smoothness. The popular total variation (TV) [1] is defined as the total magnitude of the vertical and horizontal discrete gradients of an image and promotes this smoothness property during optimization. This approach can be employed in more advanced image restoration problems, e.g., reflection removal [42,43], rain streak removal [44,45,46], intrinsic image decomposition [36,47,48], image blending [49], and multispectral pansharpening [50,51,52,53], which are difficult to solve using filtering-based techniques; this is because optimization-based methods can explicitly and quantitatively design observation models using appropriate norms and functions. For example, as a data-fidelity term in minimization problems, we can flexibly adopt not only the commonly used ℓ2-norm but also the ℓ1-norm [54,55] and the Huber loss function [49,56,57], which are robust to outliers.
Among these optimization-based methods, the ℓ0 gradient minimization proposed in [31] is known to provide the best approximation of the local piecewise flatness of natural images, which is among the most essential prior knowledge in image processing. In this method, the ℓ0 gradient, which is the number of nonzero gradients of an image, is minimized together with the ℓ2 data fidelity to an input image. Recently, applications of this method have been actively studied, such as algorithm acceleration [58], deblurring [3,4], multi-dimensional data smoothing [58], and layer separation [48,59]. Ono investigated the ℓ0 gradient projection in [37], which allows us to control the degree of smoothing of the output image intuitively. Several extensions of the ℓ0 gradient minimization were recently proposed to flexibly separate the structure and texture components in smoothing texture-rich images [39,40,41].
However, ℓ0 gradient-based smoothing methods have inherent issues, resulting in undesirable artifacts, such as pseudo-edges and halos in the output images. When an input image contains regions with gradations, the output image often exhibits a piecewise constant appearance, commonly referred to as the staircasing effect. These gradation regions produce strong edges in the output that are not present in the input image. These issues arise because characteristics related to the intensity and sign of gradients within the observed images are not explicitly considered as constraints in the optimization problems. Furthermore, these artifacts become particularly noticeable in detail enhancement and tone-mapping applications. For example, although these applications are frequently used for diagnosing X-ray images, the staircasing effect introduces a potential risk of misdiagnosis. Similarly, tone mapping is often applied in low-light scenes, such as in autonomous driving applications, where this effect may reduce object recognition accuracy. Our primary objective is to mitigate these unexpected artifacts caused by the staircasing effect by incorporating appropriate constraints in the gradient domain into the optimization problems.
In this paper, we propose effective smoothing methods based on minimizing the ℓ0 gradient with novel gradient constraints that suppress these pseudo-edge artifacts. In particular, we introduce two types of gradient constraints, a box-type and a ball-type gradient constraint, and incorporate them into the ℓ0 gradient minimization problem. These constraints are imposed using an appropriate reference image (e.g., an input image is used as a reference for image smoothing) to suppress pseudo-edges in homogeneous regions and the blurring effect around strong edges. We find a solution to the proposed minimization problems via the alternating direction method of multipliers (ADMM) [37,60,61] for nonconvex optimization.
The contributions of this paper are as follows.
  • Image smoothing while maintaining the gradient characteristics of a reference image: Existing smoothing methods based on the ℓ0 gradient and TV do not explicitly consider any constraints in the gradient domain. In the proposed methods, the local smoothness properties of a reference image can be explicitly considered as constraints in the gradient domain. Therefore, we can suppress artifacts, including the staircasing effect, during image smoothing.
  • Strict or flexible gradient constraints on the sign of gradients: Since the box-type gradient constraint is strict with respect to the sign of gradients, gradient reversals are well suppressed. In contrast, the ball-type constraint is flexible with respect to the sign of gradients, allowing robust image smoothing, even when a reference image is degraded by noise and has different shading characteristics, including gradient reversals.
The remainder of this paper is organized as follows. In Section 2, we present several mathematical preliminaries and the ADMM algorithm. In Section 3, we introduce the novel box- and ball-type gradient constraints and the minimization problems for image smoothing. In Section 4, several examples are shown to confirm the validity of the proposed methods compared with existing smoothing methods based on the ℓ0 gradient. Finally, Section 5 concludes the paper.
In our previous paper [62], we introduced only the box-type gradient constraint to extend the ℓ0 gradient minimization. Additionally, robustness to noise and performance when using reference images with different luminance and colors, with and without reversed edges, were not sufficiently evaluated. In this paper, we show the limitation of the box-type gradient constraint and propose a novel ball-type gradient constraint that can be applied flexibly, even when using references with noise or edge reversals. We also extend the proposed gradient constraints to the ℓ0 gradient projection [37] and provide the details of the algorithm derivation. Furthermore, we add an experiment on JPEG artifact removal.

2. Preliminaries

Throughout this paper, bold-faced lowercase and uppercase letters indicate vectors and matrices, respectively, and R^N denotes the N-dimensional real vector space. We define the set of M × N real-valued matrices as R^{M×N} and the block diagonal matrix with diagonal blocks A_1, …, A_M as diag(A_1, …, A_M). The transpose of a vector or matrix is denoted by (·)^⊤.

2.1. ℓ0 Gradient

Let x ∈ R^N be an input vector. The ℓ0 pseudo-norm (this pseudo-norm does not satisfy the properties of a norm; for simplicity, we refer to it as the ℓ0-norm in this paper) counts the number of nonzero elements and is defined by
$$\|\mathbf{x}\|_0 := \sum_{n=1}^{N} C(x_n), \qquad C(x_n) := \begin{cases} 0, & \text{if } x_n = 0,\\ 1, & \text{otherwise}. \end{cases} \tag{1}$$
Let x := [x_R^⊤ x_G^⊤ x_B^⊤]^⊤ ∈ R^{3N} be a vectorized color image, where N is the number of pixels. Further, let D_v and D_h ∈ R^{N×N} be the vertical and horizontal first-order differential operators with a Neumann boundary, respectively; the differential operator is then defined by D_1 := [D_v^⊤ D_h^⊤]^⊤ ∈ R^{2N×N} for a gray image and D := diag(D_1, D_1, D_1) ∈ R^{6N×3N} for a color image. In the ℓ0 gradient minimization, the group sparsity of the RGB gradients is considered by combining the ℓ0 count with the ℓ1-norm of each pixel's gradient group [31,37,48], defined as
$$\|\mathbf{Dx}\|_{0,1} := \sum_{n=1}^{N} C\big( |(\mathbf{D}_v \mathbf{x}_R)_n| + |(\mathbf{D}_h \mathbf{x}_R)_n| + |(\mathbf{D}_v \mathbf{x}_G)_n| + |(\mathbf{D}_h \mathbf{x}_G)_n| + |(\mathbf{D}_v \mathbf{x}_B)_n| + |(\mathbf{D}_h \mathbf{x}_B)_n| \big). \tag{2}$$
A piecewise smooth image is obtained by solving an optimization problem based on (2). In [31,48], a half-quadratic optimization method is used to solve this problem.
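As a concrete reference point, the group ℓ0 gradient in (2) can be computed directly with NumPy; the sketch below is our illustration (not the authors' MATLAB code) and counts the pixels whose six-element gradient group is nonzero:

```python
import numpy as np

def group_l0_gradient(img):
    """Count pixels with a nonzero gradient in any channel/direction,
    i.e., a plain-NumPy sketch of the group l0 "norm" ||Dx||_{0,1} in (2).
    `img` is an H x W x 3 array; forward differences use a replicate
    (Neumann) boundary, so the last row/column difference is zero."""
    dv = np.diff(img, axis=0, append=img[-1:, :, :])   # vertical gradients
    dh = np.diff(img, axis=1, append=img[:, -1:, :])   # horizontal gradients
    # Sum of absolute gradients over both directions and all channels
    mag = np.abs(dv).sum(axis=2) + np.abs(dh).sum(axis=2)
    return int(np.count_nonzero(mag))
```

For a piecewise constant image, this count equals the number of pixels lying on region boundaries, which is exactly the quantity the smoothing objective drives down.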

2.2. Alternating Direction Method of Multipliers

The alternating direction method of multipliers (ADMM) [60] is a proximal splitting algorithm that can handle convex optimization problems of the form
$$\min_{\mathbf{x} \in \mathbb{R}^{N_1},\, \mathbf{z} \in \mathbb{R}^{N_2}} F(\mathbf{x}) + G(\mathbf{z}) \quad \mathrm{s.t.} \quad \mathbf{z} = \mathbf{Lx}, \tag{3}$$
where F and G are usually assumed to be quadratic and proximable functions, respectively, and L ∈ R^{N_2×N_1} is a matrix with full column rank. For any x^(0) ∈ R^{N_1}, z^(0), b^(0) ∈ R^{N_2}, and γ > 0, the ADMM iteration is given by
$$\begin{aligned} \mathbf{x}^{(\tau+1)} &= \arg\min_{\mathbf{x}} F(\mathbf{x}) + \frac{1}{2\gamma} \|\mathbf{z}^{(\tau)} - \mathbf{Lx} - \mathbf{b}^{(\tau)}\|_2^2,\\ \mathbf{z}^{(\tau+1)} &= \arg\min_{\mathbf{z}} G(\mathbf{z}) + \frac{1}{2\gamma} \|\mathbf{z} - \mathbf{Lx}^{(\tau+1)} - \mathbf{b}^{(\tau)}\|_2^2,\\ \mathbf{b}^{(\tau+1)} &= \mathbf{b}^{(\tau)} + \mathbf{Lx}^{(\tau+1)} - \mathbf{z}^{(\tau+1)}. \end{aligned} \tag{4}$$
When the optimization problem is convex, the sequence generated by (4) converges to an optimal solution of (3) under the existence of a saddle point.
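To make the three-step structure of (4) concrete, the following self-contained NumPy sketch applies it to a toy convex problem, min_x 0.5‖x − a‖² + κ‖z‖₁ s.t. z = x (i.e., L = I). This example is ours for illustration only; it is not the smoothing model of this paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(a, kappa, gamma=1.0, iters=200):
    """Minimal ADMM sketch for the convex toy problem
        min_x 0.5*||x - a||^2 + kappa*||z||_1  s.t.  z = x,
    instantiating the three updates of (4) with L = I."""
    x = np.zeros_like(a); z = np.zeros_like(a); b = np.zeros_like(a)
    for _ in range(iters):
        # x-update: quadratic subproblem solved in closed form
        x = (gamma * a + z - b) / (gamma + 1.0)
        # z-update: proximity operator of kappa*||.||_1
        z = soft_threshold(x + b, gamma * kappa)
        # dual update
        b = b + x - z
    return z
```

The fixed point of this iteration matches the known closed-form solution of the toy problem, soft_threshold(a, kappa), which makes the convergence behavior easy to verify numerically.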

2.3. Proximal Tools

The proximity operator [63] is a key tool of proximal splitting techniques. Let x ∈ R^N be an input vector. For any γ > 0, the proximity operator of f over R^N is defined by
$$\mathrm{prox}_{\gamma f}(\mathbf{x}) := \arg\min_{\mathbf{y} \in \mathbb{R}^N} f(\mathbf{y}) + \frac{1}{2\gamma}\|\mathbf{x} - \mathbf{y}\|_2^2. \tag{5}$$
For a given nonempty closed convex set C, the indicator function of C is defined by
$$\iota_C(\mathbf{x}) := \begin{cases} 0, & \text{if } \mathbf{x} \in C,\\ +\infty, & \text{otherwise}. \end{cases} \tag{6}$$
The proximity operator of ι_C is expressed as
$$\mathrm{prox}_{\gamma \iota_C}(\mathbf{x}) := \arg\min_{\mathbf{y} \in \mathbb{R}^N} \iota_C(\mathbf{y}) + \frac{1}{2\gamma}\|\mathbf{x} - \mathbf{y}\|_2^2. \tag{7}$$
The solution of prox_{γι_C} must lie in the set C and minimize ‖x − y‖₂². Thus, for any γ > 0, this proximity operator is equivalent to the metric projection onto C, i.e., P_C(x) = prox_{γι_C}(x).
Let l and u ∈ R^N be the lower and upper bounds, respectively. The box constraint forces each element of x into the dynamic range [l_i, u_i] for i = 1, …, N, and its closed convex set is defined as
$$C_{[\mathbf{l}, \mathbf{u}]} := \big\{ \mathbf{x} \in \mathbb{R}^N \mid l_i \le x_i \le u_i \ (i = 1, \ldots, N) \big\}. \tag{8}$$
The metric projection onto C_{[l,u]} is computed for i = 1, …, N as
$$\big(P_{C_{[\mathbf{l}, \mathbf{u}]}}(\mathbf{x})\big)_i = \begin{cases} l_i, & \text{if } x_i < l_i,\\ u_i, & \text{if } x_i > u_i,\\ x_i, & \text{if } l_i \le x_i \le u_i. \end{cases} \tag{9}$$
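In NumPy, the element-wise box projection above is a single clip; a minimal sketch (our illustration, with bounds given as scalars or arrays):

```python
import numpy as np

def project_box(x, l, u):
    """Metric projection onto the box C[l, u]: each element of x is
    clipped to its own range [l_i, u_i]."""
    return np.clip(x, l, u)
```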
The ℓ2 ball constraint forces the Euclidean distance between the vector x and the center vector v to be at most the radius ε, and its closed convex set is defined as
$$B_{\mathbf{v}, \epsilon} := \big\{ \mathbf{x} \in \mathbb{R}^N \mid \|\mathbf{x} - \mathbf{v}\|_2 \le \epsilon \big\}. \tag{10}$$
The metric projection onto B_{v,ε} is computed as
$$P_{B_{\mathbf{v}, \epsilon}}(\mathbf{x}) = \begin{cases} \mathbf{x}, & \text{if } \|\mathbf{x} - \mathbf{v}\|_2 \le \epsilon,\\ \mathbf{v} + \epsilon \dfrac{\mathbf{x} - \mathbf{v}}{\|\mathbf{x} - \mathbf{v}\|_2}, & \text{otherwise}. \end{cases} \tag{11}$$
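A minimal NumPy sketch of this ball projection (our illustration): points inside the ball are returned unchanged, and points outside are rescaled radially toward the center:

```python
import numpy as np

def project_ball(x, v, eps):
    """Metric projection onto the l2 ball B(v, eps): points inside the
    ball are unchanged; points outside are pulled back to the sphere
    along the line from the center v."""
    r = np.linalg.norm(x - v)
    if r <= eps:
        return x
    return v + eps * (x - v) / r
```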

3. Proposed Methods

To suppress the undesirable artifacts mentioned in Section 1, we introduce the following two types of gradient constraints.

3.1. Gradient Constraints

3.1.1. Box-Type Gradient Constraint

Let x := [x_R^⊤ x_G^⊤ x_B^⊤]^⊤ ∈ R^{3N} be a vectorized color image. We introduce the box-type gradient constraints in the positive and negative ranges:
$$0 \le (\mathbf{Dx})_i \le (\mathbf{Dx}_{\mathrm{ref}})_i, \quad i \in \Omega_+, \tag{12}$$
$$(\mathbf{Dx}_{\mathrm{ref}})_j \le (\mathbf{Dx})_j < 0, \quad j \in \Omega_-, \tag{13}$$
where x_ref ∈ R^{3N} is a vectorized reference color image, D is the differential operator for a color image introduced in (2), and Dx, Dx_ref ∈ R^{6N} are gradient images. The notations Ω_+ and Ω_− are the index sets of positive and negative elements in Dx_ref, respectively, where these sets satisfy Ω_+ ∪ Ω_− = {1, …, 6N} and Ω_+ ∩ Ω_− = ∅. The gradient intensity of an output image obtained by solving a minimization problem with (12) and (13) is limited to the range of the gradient values of the reference image x_ref. Thus, these constraints suppress pseudo-edges that do not exist in the reference image.
The major drawback of minimizing total variation regularization based on the ℓ0- or ℓ1-norms is that it often yields pseudo-edge artifacts, such as an undesirable staircasing effect in homogeneous regions. These artifacts are more severe in the case of the ℓ0-norm, as it has stronger flattening effects. Figure 1 plots an example of the vertical first-order gradient signals of an input image and the smoothing results obtained by two conventional methods, the ℓ0 gradient minimization [31] and the ℓ0 gradient projection [37], as well as the proposed methods with the box-type gradient constraint. The gradient signals of the ℓ0 gradient minimization and the ℓ0 gradient projection contain the gradients denoted by arrows, which do not originally exist in the input image. Owing to the proposed gradient constraint, the gradient signal of the proposed methods does not exceed that of the input image. When a reference image has local smoothness properties, artifacts such as halos and pseudo-edges can be suppressed by the proposed constraint. Similar results are obtained for the other channels and the horizontal direction.
Figure 2a illustrates an example of the box-type gradient constraint in the case where the vertical gradient is positive, and the horizontal gradient is negative in the reference image. We see from the figure that the ideal gradient is not included in the box-type gradient constraint. This often occurs when there is noise in the input image. Because the sign of gradients is sensitive to noise, our method generates better results for noise-free images; however, the method may generate unexpected artifacts for noisy images. To overcome this limitation, we introduce a ball-type gradient constraint.

3.1.2. Ball-Type Gradient Constraint

Let the index sets G_1, …, G_N be G_n := {n, N + n, …, 5N + n} for n = 1, …, N. We introduce the ball-type gradient constraint with respect to the n-th pixel as
$$\|(\mathbf{Dx})_{G_n}\|_2 \le \|(\mathbf{Dx}_{\mathrm{ref}})_{G_n}\|_2, \tag{14}$$
where (Dx)_{G_n} ∈ R^6 is the n-th sub-vector of Dx with the entries specified by G_n:
$$(\mathbf{Dx})_{G_n} := \begin{bmatrix} (\mathbf{Dx})_n \\ (\mathbf{Dx})_{N+n} \\ (\mathbf{Dx})_{2N+n} \\ (\mathbf{Dx})_{3N+n} \\ (\mathbf{Dx})_{4N+n} \\ (\mathbf{Dx})_{5N+n} \end{bmatrix} = \begin{bmatrix} (\mathbf{D}_v \mathbf{x}_R)_n \\ (\mathbf{D}_h \mathbf{x}_R)_n \\ (\mathbf{D}_v \mathbf{x}_G)_n \\ (\mathbf{D}_h \mathbf{x}_G)_n \\ (\mathbf{D}_v \mathbf{x}_B)_n \\ (\mathbf{D}_h \mathbf{x}_B)_n \end{bmatrix}. \tag{15}$$
The ball-type gradient constraint limits the magnitude of the gradients of an output image to less than that of a reference image. Note that the signs of the gradients are not considered explicitly in these constraints.
Figure 2b illustrates an example of the ball-type gradient constraint with the same reference image as in Figure 2a. We see from this figure that the ball-type constraint is defined by a circle whose radius is the gradient magnitude of the reference image (see the dotted blue circle). Furthermore, the range of the ball-type constraint is more relaxed compared with the box-type one in Figure 2a, even with the same reference image. Therefore, even when we use a noisy image as a reference, in which the sign of the gradients varies locally, the ball-type constraint enables us to generate a smooth image that is locally smoother and more natural than the box-type one.
As described above, the ball-type constraint can obtain better results than the box-type one for a noisy image. However, when it is applied to a noise-free image, gradient reversals may occur, especially in the case of small-magnitude texture.

3.2. ℓ0-Smoothing Based on Box-Type Gradient Constraint

3.2.1. Minimization Problem

Let s_in and s_ref be a vectorized input color image and a vectorized reference image for the proposed box-type gradient constraint, respectively. The image-smoothing problem with the proposed gradient constraints (12) and (13) is defined by
$$\min_{\mathbf{s}} \|\mathbf{Ds}\|_{0,1} + \frac{\lambda}{2}\|\mathbf{s} - \mathbf{s}_{\mathrm{in}}\|_2^2 \quad \mathrm{s.t.} \quad \begin{cases} 0 \le (\mathbf{Ds})_i \le (\mathbf{Ds}_{\mathrm{ref}})_i, & i \in \Omega_+,\\ (\mathbf{Ds}_{\mathrm{ref}})_j \le (\mathbf{Ds})_j < 0, & j \in \Omega_-, \end{cases} \tag{16}$$
where λ is a balancing weight between the two costs. The first term in (16) is defined by (2). The second term is a data-fidelity term measuring the squared error between the input and output images. Note that the objective function of (16), excluding the gradient constraints, is equivalent to that of the ℓ0 gradient minimization introduced in [31].
To find a solution of (16), we use the ADMM described in Section 2.2. Since the first term of (16) is a nonconvex function, there is no guarantee that the solution obtained by ADMM is a global minimizer of (16). However, as stated in [37,61], we experimentally confirmed that a reasonable solution satisfying the constraints can be obtained by carefully decreasing the step size of the algorithm in each iteration.

3.2.2. Optimization

Let g⁺ := PDs_ref and g⁻ := MDs_ref, where P and M are the matrices that extract the elements of Ds_ref corresponding to the indices included in Ω_+ and Ω_−, respectively. The convex sets C_{[0, g⁺]} and C_{[g⁻, 0]} are defined as
$$C_{[\mathbf{0}, \mathbf{g}^+]} := \big\{ \mathbf{x} \in \mathbb{R}^{m_+} \mid 0 \le x_i \le g_i^+,\ i = 1, \ldots, m_+ \big\},$$
$$C_{[\mathbf{g}^-, \mathbf{0}]} := \big\{ \mathbf{x} \in \mathbb{R}^{m_-} \mid g_j^- \le x_j < 0,\ j = 1, \ldots, m_- \big\},$$
where m_+ and m_− denote the numbers of indices in the respective sets, satisfying m_+ + m_− = 6N, and 0 is a vector filled with zeros. To apply ADMM, we first reformulate (16) into the following unconstrained problem:
$$\min_{\mathbf{s}} \|\mathbf{Ds}\|_{0,1} + \frac{\lambda}{2}\|\mathbf{s} - \mathbf{s}_{\mathrm{in}}\|_2^2 + \iota_{C_{[\mathbf{0}, \mathbf{g}^+]}}(\mathbf{PDs}) + \iota_{C_{[\mathbf{g}^-, \mathbf{0}]}}(\mathbf{MDs}), \tag{17}$$
where ι_{C_{[0,g⁺]}}(·) and ι_{C_{[g⁻,0]}}(·) are the indicator functions (see (6)) of C_{[0, g⁺]} and C_{[g⁻, 0]}. These functions guarantee that the positive and negative gradient intensities of the optimal image s* fall in the ranges [0, g⁺] and [g⁻, 0], respectively. The third and fourth terms of (17) correspond to the first and second constraints of (16), respectively.
By introducing auxiliary variables z_1, z_2, and z_3, we rewrite the minimization problem (17) into the following equivalent expression:
$$\begin{aligned} &\min_{\mathbf{s}, \mathbf{z}_1, \mathbf{z}_2, \mathbf{z}_3} F(\mathbf{s}) + G(\mathbf{z}) \quad \mathrm{s.t.} \quad \mathbf{z} = \mathbf{Ls},\\ &F(\mathbf{s}) := \frac{\lambda}{2}\|\mathbf{s} - \mathbf{s}_{\mathrm{in}}\|_2^2, \qquad G(\mathbf{z}) := \|\mathbf{z}_1\|_{0,1} + \iota_{C_{[\mathbf{0}, \mathbf{g}^+]}}(\mathbf{z}_2) + \iota_{C_{[\mathbf{g}^-, \mathbf{0}]}}(\mathbf{z}_3),\\ &\mathbf{z} = [\mathbf{z}_1^\top\ \mathbf{z}_2^\top\ \mathbf{z}_3^\top]^\top \ (\mathbf{z}_1 \in \mathbb{R}^{6N},\ \mathbf{z}_2 \in \mathbb{R}^{m_+},\ \mathbf{z}_3 \in \mathbb{R}^{m_-}), \qquad \mathbf{L} = \begin{bmatrix} \mathbf{D} \\ \mathbf{PD} \\ \mathbf{MD} \end{bmatrix} \in \mathbb{R}^{12N \times 3N}. \end{aligned} \tag{18}$$
The algorithm for solving Equation (18) with γ i ( i = 1 , 2 , 3 ) is summarized in Algorithm 1.
Algorithm 1 Proposed algorithm for (18).
1: Input: z_i^{(0)}, b_i^{(0)}, γ_i (i = 1, 2, 3), λ, μ, η (0 < η < 1)
2: Output: s^{(τ)}
3: while a stopping criterion is not satisfied do
4:     s^{(τ+1)} ← argmin_s (λ/2)‖s − s_in‖₂² + (1/(2γ₁))‖z₁^{(τ)} − Ds − b₁^{(τ)}‖₂² + (1/(2γ₂))‖z₂^{(τ)} − PDs − b₂^{(τ)}‖₂² + (1/(2γ₃))‖z₃^{(τ)} − MDs − b₃^{(τ)}‖₂²;
5:     for n = 1 to N do
6:         (z₁^{(τ+1)})_{G_n} ← prox_{γ₁‖·‖₀,₁}((Ds^{(τ+1)} + b₁^{(τ)})_{G_n});
7:     end for
8:     z₂^{(τ+1)} ← P_{C[0, g⁺]}(PDs^{(τ+1)} + b₂^{(τ)});
9:     z₃^{(τ+1)} ← P_{C[g⁻, 0]}(MDs^{(τ+1)} + b₃^{(τ)});
10:    b₁^{(τ+1)} ← b₁^{(τ)} + Ds^{(τ+1)} − z₁^{(τ+1)};
11:    b₂^{(τ+1)} ← b₂^{(τ)} + PDs^{(τ+1)} − z₂^{(τ+1)};
12:    b₃^{(τ+1)} ← b₃^{(τ)} + MDs^{(τ+1)} − z₃^{(τ+1)};
13:    γ₁ ← ηγ₁;
14:    γ₂ ← ηγ₂;
15:    γ₃ ← ηγ₃;
16:    τ ← τ + 1;
17: end while
The update of s in Algorithm 1 is achieved by solving the following quadratic minimization problem (hereafter, the superscript ( τ ) is omitted for simplicity):
$$\min_{\mathbf{s}} \frac{\lambda}{2}\|\mathbf{s} - \mathbf{s}_{\mathrm{in}}\|_2^2 + \frac{1}{2\gamma_1}\|\mathbf{z}_1 - \mathbf{Ds} - \mathbf{b}_1\|_2^2 + \frac{1}{2\gamma_2}\|\mathbf{z}_2 - \mathbf{PDs} - \mathbf{b}_2\|_2^2 + \frac{1}{2\gamma_3}\|\mathbf{z}_3 - \mathbf{MDs} - \mathbf{b}_3\|_2^2. \tag{19}$$
Thus, by setting the first-order derivative to zero, the optimal solution is characterized by the following system of linear equations:
$$\mathbf{A}\mathbf{s} = \mathbf{q}, \qquad \mathbf{A} = \lambda \mathbf{I} + \gamma_1^{-1}\mathbf{D}^\top\mathbf{D} + \gamma_2^{-1}\mathbf{D}^\top\mathbf{P}^\top\mathbf{P}\mathbf{D} + \gamma_3^{-1}\mathbf{D}^\top\mathbf{M}^\top\mathbf{M}\mathbf{D},\\ \mathbf{q} = \lambda \mathbf{s}_{\mathrm{in}} + \gamma_1^{-1}\mathbf{D}^\top(\mathbf{z}_1 - \mathbf{b}_1) + \gamma_2^{-1}\mathbf{D}^\top\mathbf{P}^\top(\mathbf{z}_2 - \mathbf{b}_2) + \gamma_3^{-1}\mathbf{D}^\top\mathbf{M}^\top(\mathbf{z}_3 - \mathbf{b}_3), \tag{20}$$
where A is a matrix of size 3N × 3N, q is a 3N-dimensional vector, and I ∈ R^{3N×3N} is the identity matrix. The optimal solution is obtained by solving the inverse problem s* = A⁻¹q. If we set γ₃ = γ₂, the terms of A involving P and M can be combined as γ₂⁻¹D^⊤P^⊤PD + γ₃⁻¹D^⊤M^⊤MD = γ₂⁻¹D^⊤D, since P^⊤P + M^⊤M = I. By assuming a periodic boundary condition for D, the matrix A becomes a block circulant matrix with circulant blocks (BCCB), which is diagonalized by the 2D fast Fourier transform (2D FFT). Thus, the inverse problem of (20) can be calculated as
$$\mathbf{s}^* = \mathbf{F}^* \big( \lambda \mathbf{I} + (\gamma_1^{-1} + \gamma_2^{-1}) \boldsymbol{\Sigma} \big)^{-1} \mathbf{F}\mathbf{q}, \tag{21}$$
where F and F* are the 2D FFT matrix and its inverse, respectively, and Σ is the diagonal matrix whose entries are the Fourier-transformed Laplacian filter kernel. Therefore, the inversion reduces to entry-wise division.
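For a single-channel image, the FFT-based inversion of the linear system (20) described above can be sketched in NumPy as follows; the kernels and variable names are our own, a periodic boundary is assumed, and this is an illustration rather than the authors' MATLAB implementation:

```python
import numpy as np

def solve_quadratic_fft(q, lam, c):
    """Solve (lam*I + c*D'D) s = q for a single-channel image q, where
    D'D is the discrete Laplacian with periodic boundaries, so the
    system is diagonalized by the 2D FFT and the inversion reduces to
    entry-wise division in the Fourier domain."""
    H, W = q.shape
    # Forward-difference kernels for vertical/horizontal gradients
    fv = np.zeros((H, W)); fv[0, 0] = -1.0; fv[-1, 0] = 1.0
    fh = np.zeros((H, W)); fh[0, 0] = -1.0; fh[0, -1] = 1.0
    # Eigenvalues of D'D: |F(d_v)|^2 + |F(d_h)|^2 (entry-wise)
    sigma = np.abs(np.fft.fft2(fv)) ** 2 + np.abs(np.fft.fft2(fh)) ** 2
    # Entry-wise division in the Fourier domain, then inverse transform
    return np.real(np.fft.ifft2(np.fft.fft2(q) / (lam + c * sigma)))
```

Because the spectrum of D'D is nonnegative and λ > 0, the denominator never vanishes, so the division is always well defined.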
For the update of z_1, we need to compute the pseudo-proximity operator of the ℓ0,1-norm. By dividing z_1 into sub-vectors as z_1 := [(z_1)_{G_1}^⊤ … (z_1)_{G_N}^⊤]^⊤, in which G_n (n = 1, …, N) is introduced in Section 3.1.2 and (z_1)_{G_n} can be regarded as the auxiliary variable for (Ds)_{G_n} (see (15)), the pseudo-proximity operator reduces to a group hard-thresholding operation on each sub-vector [48,61]. Thus, the optimal solution of the n-th sub-vector (z_1)_{G_n} is obtained by
$$(\mathbf{z}_1^*)_{G_n} = \mathrm{prox}_{\gamma_1 \|\cdot\|_{0,1}}(\mathbf{d}_n) = \begin{cases} \mathbf{d}_n, & \text{if } \sum_{m=1}^{6} d_{n,m}^2 \ge 2\gamma_1,\\ \mathbf{0}, & \text{otherwise}, \end{cases} \tag{22}$$
where d_{n,m} is the m-th element of d_n := (Ds + b_1)_{G_n} ∈ R^6.
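A NumPy sketch of this group hard-thresholding operation for one pixel's six-element gradient group (our illustration): the group is kept when its squared ℓ2 norm reaches 2γ₁ and zeroed otherwise.

```python
import numpy as np

def group_hard_threshold(d, gamma):
    """Group hard-thresholding: keep the gradient group d if its squared
    l2 norm is at least 2*gamma, and set it to zero otherwise."""
    return d if np.sum(d ** 2) >= 2.0 * gamma else np.zeros_like(d)
```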
To update z_2 and z_3, we need to compute the proximity operators of the indicator functions ι_{C_{[0,g⁺]}} and ι_{C_{[g⁻,0]}}. By (9), these are simply the projections onto each set, i.e., for i = 1, …, m_+,
$$(\mathbf{z}_2^*)_i = \big(P_{C_{[\mathbf{0}, \mathbf{g}^+]}}(\mathbf{PDs} + \mathbf{b}_2)\big)_i = \begin{cases} 0, & \text{if } (\mathbf{PDs} + \mathbf{b}_2)_i < 0,\\ g_i^+, & \text{if } (\mathbf{PDs} + \mathbf{b}_2)_i > g_i^+,\\ (\mathbf{PDs} + \mathbf{b}_2)_i, & \text{if } 0 \le (\mathbf{PDs} + \mathbf{b}_2)_i \le g_i^+, \end{cases} \tag{23}$$
and for j = 1, …, m_−,
$$(\mathbf{z}_3^*)_j = \big(P_{C_{[\mathbf{g}^-, \mathbf{0}]}}(\mathbf{MDs} + \mathbf{b}_3)\big)_j = \begin{cases} g_j^-, & \text{if } (\mathbf{MDs} + \mathbf{b}_3)_j < g_j^-,\\ 0, & \text{if } (\mathbf{MDs} + \mathbf{b}_3)_j > 0,\\ (\mathbf{MDs} + \mathbf{b}_3)_j, & \text{if } g_j^- \le (\mathbf{MDs} + \mathbf{b}_3)_j \le 0. \end{cases} \tag{24}$$
To stabilize ADMM for nonconvex optimization, a scalar η (0 < η < 1) is introduced to gradually decrease the values of γ_i (i = 1, 2, 3), as in lines 13–15 of Algorithm 1. The algorithm continues until the following stopping criterion is satisfied:
$$\frac{\|\mathbf{s}^{(\tau+1)} - \mathbf{s}^{(\tau)}\|_2}{\|\mathbf{s}^{(\tau+1)}\|_2} \le \mu, \tag{25}$$
where μ is set to a small value (e.g., μ = 10⁻⁴). Although there is no theoretical guarantee of convergence for our algorithm, we show that the sequence generated by this algorithm experimentally converges to a reasonable solution with sufficiently small values of γ_i (i = 1, 2, 3); similar strategies have been employed in existing minimization algorithms for the ℓ0-norm [37,48,61].

3.3. ℓ0-Smoothing Based on Ball-Type Gradient Constraint

The image-smoothing problem with the proposed ball-type gradient constraint (14), which is a modified version of (16), is defined by
$$\min_{\mathbf{s}} \|\mathbf{Ds}\|_{0,1} + \frac{\lambda}{2}\|\mathbf{s} - \mathbf{s}_{\mathrm{in}}\|_2^2 \quad \mathrm{s.t.} \quad \|(\mathbf{Ds})_{G_n}\|_2 \le \|(\mathbf{Ds}_{\mathrm{ref}})_{G_n}\|_2, \quad n = 1, \ldots, N. \tag{26}$$
Similar to the preceding section, we solve the minimization problem (26) by applying ADMM to it. Now, the convex set B_{0, ε_n} is defined for n = 1, …, N by
$$B_{\mathbf{0}, \epsilon_n} := \big\{ \mathbf{x} \in \mathbb{R}^6 \mid \|\mathbf{x}\|_2 \le \epsilon_n \big\}, \tag{27}$$
where ε_n := ‖(Ds_ref)_{G_n}‖₂. We reformulate (26) into the following unconstrained problem by introducing the indicator functions of B_{0, ε_n} (n = 1, …, N) and then attaching auxiliary variables z_1 and z_2 to the non-differentiable functions:
$$\min_{\mathbf{s}, \mathbf{z}_1, \mathbf{z}_2} \frac{\lambda}{2}\|\mathbf{s} - \mathbf{s}_{\mathrm{in}}\|_2^2 + \|\mathbf{z}_1\|_{0,1} + \sum_{n=1}^{N} \iota_{B_{\mathbf{0}, \epsilon_n}}\big((\mathbf{z}_2)_{G_n}\big) \quad \mathrm{s.t.} \quad \mathbf{z}_1 = \mathbf{Ds},\ \mathbf{z}_2 = \mathbf{Ds}, \tag{28}$$
where z 2 G n is the n-th sub-vector of z 2 specified by G n .
The algorithm for solving (28) is obtained by slightly modifying Algorithm 1. In particular, the update of s in line 4 of Algorithm 1 is obtained by
$$\mathbf{s}^* = \mathbf{F}^* \big( \lambda \mathbf{I} + (\gamma_1^{-1} + \gamma_2^{-1}) \boldsymbol{\Sigma} \big)^{-1} \mathbf{F}\mathbf{q}, \quad \text{where} \quad \mathbf{q} = \lambda \mathbf{s}_{\mathrm{in}} + \gamma_1^{-1}\mathbf{D}^\top(\mathbf{z}_1 - \mathbf{b}_1) + \gamma_2^{-1}\mathbf{D}^\top(\mathbf{z}_2 - \mathbf{b}_2). \tag{29}$$
In the update of z_2, we calculate the following metric projection onto the ℓ2 ball according to (11) instead of lines 8 and 9 of Algorithm 1, i.e., for n = 1, …, N,
$$(\mathbf{z}_2^*)_{G_n} = P_{B_{\mathbf{0}, \epsilon_n}}\big((\mathbf{Ds} + \mathbf{b}_2)_{G_n}\big) = \begin{cases} (\mathbf{Ds} + \mathbf{b}_2)_{G_n}, & \text{if } \|(\mathbf{Ds} + \mathbf{b}_2)_{G_n}\|_2 \le \epsilon_n,\\ \epsilon_n \dfrac{(\mathbf{Ds} + \mathbf{b}_2)_{G_n}}{\|(\mathbf{Ds} + \mathbf{b}_2)_{G_n}\|_2}, & \text{otherwise}. \end{cases} \tag{30}$$

3.4. ℓ0 Gradient Projection with Gradient Constraint

Our proposed box-/ball-type gradient constraints can be incorporated not only into the minimization problem of the ℓ0 gradient minimization [31] but also into that of the ℓ0 gradient projection [37]. An image-smoothing problem based on the ℓ0 gradient projection with the proposed gradient constraint (for simplicity, we only discuss the ball-type gradient constraint; the box-type gradient constraints can be used as well) is defined by
$$\min_{\mathbf{s}} \frac{1}{2}\|\mathbf{s} - \mathbf{s}_{\mathrm{in}}\|_2^2 \quad \mathrm{s.t.} \quad \|\mathbf{Ds}\|_{0,1} \le \alpha, \quad \|(\mathbf{Ds})_{G_n}\|_2 \le \|(\mathbf{Ds}_{\mathrm{ref}})_{G_n}\|_2, \ n = 1, \ldots, N, \tag{31}$$
where α is a user-specified parameter that serves as the least upper bound of ‖Ds‖_{0,1}. It allows us to intuitively control the degree of smoothing of the output image s*.
Similar to the preceding section, a solution of (31) can be estimated based on Algorithm 1 after transforming the problem into a form to which ADMM is applicable (the indicator function of the inequality constraint on ‖Ds‖_{0,1} is defined in (9) of [37], and its pseudo-metric projection is derived in (16) of [37]).

4. Experiments

To clarify the differences in the characteristics of the proposed methods with the box- and ball-type gradient constraints, we first conducted noise removal experiments using several reference images. We then applied the proposed methods to various applications, such as image smoothing, detail enhancement, tone mapping, and JPEG artifact removal, and compared the results with those of two existing methods based on the ℓ0 gradient: the ℓ0 gradient minimization [31] and the ℓ0 gradient projection [37] (for the implementation of the existing methods, we used the source code distributed by the authors of each paper).
All experiments were performed using MATLAB R2021a on a computer with an AMD EPYC 7402P 2.80 GHz processor and 128 GB RAM. To accelerate the computation of the proposed methods, we used an NVIDIA GeForce RTX 3090 GPU. For the parameters of our methods, we set γ_i = 5 (i = 1, 2, 3), η = 0.97, and μ = 10⁻⁴ in all experiments. For the reference image s_ref, we used the input image s_in in the three experiments described in Section 4.2, Section 4.3 and Section 4.4. Note that the dynamic range of the input images was normalized to [0, 1].

4.1. Box-Type vs. Ball-Type Gradient Constraint

The performance of our methods depends on the variation of the luminance and color in a reference image. Accordingly, we studied the sensitivity of our methods with respect to the reference image by conducting an experiment on Gaussian noise removal.
We generated a noisy image, shown in Figure 3b, by adding zero-mean Gaussian noise with standard deviation σ = 5.0 × 10⁻³ to the ground truth image shown in Figure 3a. Then, four types of images, shown in Figure 4: (a-1) the ground truth image, (a-2) a horizontally flipped version of (a-1), (a-3) a hue-shifted version of (a-1), and (a-4) a horizontally flipped and hue-shifted version of (a-1), were prepared, each of which was used as a reference image for our proposed gradient constraints. For the quality metric, we used the peak signal-to-noise ratio (PSNR). For the smoothing parameter setting, the balancing weight λ was set to maximize the PSNR. Note that the purpose of this experiment is not to evaluate the pure performance of noise removal but to clarify the sensitivity of the method.
Figure 4 shows the noise removal results and their PSNR values. One can see from Figure 4b,c that the results with the ball-type constraint have higher PSNR values and are visually better than those with the box-type constraint in all cases. The results in Figure 4(b-2,b-4) are over-smoothed because the reference images have luminance variations that are considerably different from those of the input image (the luminance variations of the input and reference images are flipped relative to each other in the horizontal direction). This is because the sign of the gradients of the reference images is often reversed from the ideal one around the edges. The box-type constraint does not allow the smoothed image to have gradients whose sign is reversed from that of the reference image, thus yielding blurred images. In Figure 4(b-1,b-3), this constraint generates a better smoothing result with sharper edges similar to those of the ground truth image; however, pseudo-color artifacts often occur around edges, as shown in Figure 4(b-3). Thus, this constraint is very sensitive to the sign of image gradients unless an input image is used as a reference. In contrast, the ball-type constraint robustly generates a better smoothing result with sharper edges similar to those in the input image, even if the reference images have luminance and color variations considerably different from those in the input image.

4.2. Detail Enhancement

Based on the methods in [5,6], we performed detail enhancement as follows: (i) obtain a base layer by applying each method to an input image; (ii) calculate a detail layer by subtracting the base layer from the input image; (iii) amplify the detail layer by a scaling factor s > 1; and (iv) reconstruct a detail-enhanced image by summing the base layer and the enhanced detail layer.
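The four steps above reduce to two array operations once a base layer is available; the following NumPy sketch (ours, for illustration) assumes `smooth` is the base layer produced by any of the smoothing methods:

```python
import numpy as np

def enhance_detail(img, smooth, s=2.5):
    """Detail enhancement: the detail layer (input minus base) is
    amplified by the scaling factor s and added back to the base."""
    detail = img - smooth          # (ii) detail layer
    return smooth + s * detail     # (iii)+(iv) amplify and reconstruct
```

In practice, the result may need clipping back to the image's dynamic range after amplification.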
Figure 5 and Figure 6 show the smoothing and detail enhancement results obtained by the existing and proposed methods (we took input images from an extensive database of royalty-free images, https://pixabay.com/en/, accessed on 8 January 2018). In Section 4.2 and Section 4.3, since there are no visual differences between the results of the proposed methods with the box- and ball-type gradient constraints, we only show the results of the proposed method with the box-type gradient constraint. For a fair comparison, we adjusted the smoothing parameters of each method so that the ℓ2-norms of the differences between the input and output images are the same. The scaling factor s was set to 2.5. Figure 5 and Figure 6 indicate that the contrast in the entire image is maintained by all methods. However, the results of [31] have halo-like artifacts in the region with gradation indicated by the red square window in (b), and the color of the clouds is changed. Pseudo-edge artifacts, such as the staircasing effect, occur in the sky region of the results of [37] (indicated by the yellow and red square windows in (c)). In contrast, our methods do not exhibit these undesirable artifacts because the gradient constraints suppress pseudo-edges in the homogeneous regions.

4.3. Tone Mapping

Next, we applied these methods to layer separation in the tone-mapping framework [7].
Figure 7 shows the tone-mapping results, where we also include the result of Reinhard et al.'s global tone-mapping operator [64] for reference. Figure 7b,c show that halo and pseudo-edge artifacts occur, especially at the boundaries between bright and dark regions. In contrast, the proposed methods sufficiently suppress these artifacts.
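The layer-separation framework [7] compresses only the base layer in the log-luminance domain while leaving the detail layer intact. The following is our simplified rendering of that idea, not the exact pipeline used in the experiments: `smooth_fn` stands in for the ℓ0-based smoother, and the compression factor and normalization are illustrative.

```python
import numpy as np

def tone_map(luminance, smooth_fn, compression=0.5):
    # Work in the log domain; the small floor avoids log of zero.
    log_lum = np.log10(np.maximum(luminance, 1e-6))
    base = smooth_fn(log_lum)            # edge-preserving base layer
    detail = log_lum - base              # detail layer is preserved as-is
    out = compression * base + detail    # compress only the base layer
    return 10.0 ** (out - out.max())     # normalize so the maximum maps to 1
```

With an identity "smoother" the whole dynamic range is compressed uniformly; an edge-preserving smoother instead compresses large-scale variations while keeping local detail, which is exactly why artifacts in the base layer (halos, pseudo-edges) become visible in the tone-mapped output.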
To examine robustness to the degree of smoothing, we applied the ℓ0 gradient minimization [31] and the proposed methods to an input HDR image with different smoothing parameters. The smoothing parameters were adjusted so that the mean squared error (MSE) of the smoothing results was 1.2 × 10⁵, 1.9 × 10⁵, and 2.5 × 10⁵; these results are shown in Figure 8. Figure 8(b-1,c-1,d-1) shows that staircasing effects and pseudo-color edges occur and that these artifacts become more noticeable as the degree of smoothing increases. In contrast, the results of the proposed methods have few artifacts; even in Figure 8(d-2), the smoothest result, no noticeable artifacts appear. In Section 4.2 and Section 4.3, we applied the proposed methods to dozens of images and obtained similar results.

4.4. JPEG Artifact Removal in Clip-Art Images

Input images were generated from clip-art images (we took ten test images from the dataset provided at https://google.github.io/cartoonset/, accessed on 20 July 2019), which were compressed by standard JPEG with quality values in the range [10, 90]. The resulting images were evaluated using two quality metrics: PSNR and structural similarity (SSIM) [65]. For each method, we adjusted the degree of noise removal to obtain the visually best restoration results, i.e., maximizing the smoothness while preserving the edges of the images as much as possible (we show only the results of the proposed methods obtained with the ball-type gradient constraint, because it is more robust than the box-type constraint for noise removal).
Figure 9a,b plot the PSNR [dB] and SSIM values averaged over ten test images, respectively. It can be seen that the PSNR and SSIM of the proposed methods are higher than those of the existing methods in most cases.
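The PSNR used throughout follows the standard definition; a minimal implementation for images normalized to [0, peak] is shown below (the function name is ours).

```python
import numpy as np

def psnr(reference, result, peak=1.0):
    # Peak signal-to-noise ratio in dB between two images in [0, peak].
    mse = np.mean((reference.astype(np.float64) - result.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```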
Figure 10 shows the results and their PSNR values [dB] for two images at a quality value of 20. The block-wise artifacts in the JPEG images shown in Figure 10(b-1,b-2) are sufficiently removed by the proposed methods, whereas the existing methods retain them (see the close-up images indicated by the red and blue boxes). Our methods achieve the highest PSNR for both images.

5. Conclusions

We proposed effective smoothing methods based on ℓ0 gradient minimization with novel gradient constraints. To suppress pseudo-edge artifacts, such as the staircasing effect, box- and ball-type gradient constraints were introduced. These constraints limit the gradient intensity of the output image to the range defined by a reference image. By solving the ℓ0 gradient minimization problem with the proposed gradient constraints via ADMM for nonconvex optimization, we effectively preserved strong edges and removed weak edges without producing pseudo-edges or halo artifacts. We confirmed that the smoothed images obtained by the proposed methods yield better contrast-enhanced and tone-mapped images without pseudo-edges or halos. These results suggest the potential for more dependable image diagnosis and recognition in medical imaging and autonomous driving applications, where image enhancement is essential. Throughout all our experiments, we consistently observed the effectiveness of the proposed methods in comparison to existing ℓ0 gradient-based methods.
In future work, we will apply the proposed gradient constraints to other optimization problems, e.g., intrinsic image decomposition, reflection removal, and flash/no-flash image blending.

Author Contributions

Methodology, R.M. and M.O.; software, R.M.; validation, R.M. and M.O.; formal analysis, R.M. and M.O.; investigation, R.M. and M.O.; data curation, R.M.; writing—original draft preparation, R.M. and M.O.; writing—review and editing, R.M. and M.O.; project administration, R.M. and M.O.; funding acquisition, R.M. and M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by JSPS KAKENHI Grant Number 21K17767 and MEXT Promotion of Distinctive Joint Research Center Program Grant Number #JPMXP 0621467946. The experiments in this paper were performed using DeeplearningBOX/Alpha Workstation at The University of Kitakyushu.

Data Availability Statement

Data will be shared with interested third parties on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear Total Variation Based Noise Removal Algorithms. Physics D 1992, 60, 259–268. [Google Scholar] [CrossRef]
  2. Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A. An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Trans. Image Process. 2011, 20, 681–695. [Google Scholar] [CrossRef] [PubMed]
  3. Xu, L.; Zheng, S.; Jia, J. Unnatural L0 Sparse Representation for Natural Image Deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 1107–1114. [Google Scholar] [CrossRef]
  4. Pan, J.; Hu, Z.; Su, Z.; Yang, M. Deblurring Text Images via L0-Regularized Intensity and Gradient Prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 2901–2908. [Google Scholar] [CrossRef]
  5. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving Decompositions for Multi-scale Tone and Detail Manipulation. ACM Trans. Graph. 2008, 27, 67. [Google Scholar] [CrossRef]
  6. Gastal, E.S.L.; Oliveira, M.M. Domain Transform for Edge-aware Image and Video Processing. ACM Trans. Graph. 2011, 30, 69. [Google Scholar] [CrossRef]
  7. Durand, F.; Dorsey, J. Fast Bilateral Filtering for the Display of High-dynamic-range Images. ACM Trans. Graph. 2002, 21, 257–266. [Google Scholar] [CrossRef]
  8. Fattal, R. Edge-avoiding Wavelets and Their Applications. ACM Trans. Graph. 2009, 28, 22. [Google Scholar] [CrossRef]
  9. Liang, Z.; Xu, J.; Zhang, D.; Cao, Z.; Zhang, L. A Hybrid L1-L0 Layer Decomposition Model for Tone Mapping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4758–4766. [Google Scholar] [CrossRef]
  10. Figueiredo, M.A.T.; Bioucas-Dias, J.M. Restoration of Poissonian Images Using Alternating Direction Optimization. IEEE Trans. Image Process. 2010, 19, 3133–3145. [Google Scholar] [CrossRef]
  11. Xu, Q.; Yu, H.; Mou, X.; Zhang, L.; Hsieh, J.; Wang, G. Low-Dose X-ray CT Reconstruction via Dictionary Learning. IEEE Trans. Med. Imaging 2012, 31, 1682–1697. [Google Scholar] [CrossRef]
  12. Ramani, S.; Fessler, J.A. A Splitting-Based Iterative Algorithm for Accelerated Statistical X-Ray CT Reconstruction. IEEE Trans. Med. Imaging 2012, 31, 677–688. [Google Scholar] [CrossRef]
  13. Graber, G.; Pock, T.; Bischof, H. Online 3D reconstruction using convex optimization. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 708–711. [Google Scholar] [CrossRef]
  14. Ferstl, D.; Reinbacher, C.; Ranftl, R.; Ruether, M.; Bischof, H. Image Guided Depth Upsampling Using Anisotropic Total Generalized Variation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 993–1000. [Google Scholar] [CrossRef]
  15. Wang, Q.; Tao, Y.; Lin, H. Edge-Aware Volume Smoothing Using L0 Gradient Minimization. Comput. Graph. Forum 2015, 34, 131–140. [Google Scholar] [CrossRef]
  16. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral Image Denoising Employing a Spectral-Spatial Adaptive Total Variation Model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677. [Google Scholar] [CrossRef]
  17. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral Image Denoising with a Spatial-Spectral View Fusion Strategy. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2314–2325. [Google Scholar] [CrossRef]
  18. Yuan, Q.; Zhang, Q.; Li, J.; Shen, H.; Zhang, L. Hyperspectral Image Denoising Employing a Spatial-Spectral Deep Residual Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1205–1218. [Google Scholar] [CrossRef]
  19. Likforman-Sulem, L.; Darbon, J.; Smith, E.H.B. Enhancement of historical printed document images by combining Total Variation regularization and Non-local Means filtering. Image Vis. Comput. 2011, 29, 351–363. [Google Scholar] [CrossRef]
  20. Ge, F.; He, L. A de-noising method based on L0 gradient minimization and guided filter for ancient Chinese calligraphy works on steles. EURASIP J. Image Vid. Process. 2019, 2019, 32. [Google Scholar] [CrossRef]
  21. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Bombay, India, 7 January 1998; pp. 839–846. [Google Scholar] [CrossRef]
  22. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  23. Seo, H.J.; Milanfar, P. Robust flash denoising/deblurring by iterative guided filtering. EURASIP J. Adv. Signal Process. 2012, 2012, 3. [Google Scholar] [CrossRef]
  24. Baba, T.; Matsuoka, R.; Shirai, K.; Okuda, M. Misaligned Image Integration With Local Linear Model. IEEE Trans. Image Process. 2016, 25, 2035–2044. [Google Scholar] [CrossRef]
  25. Matsuoka, R.; Shirai, K.; Okuda, M. Reference-based local color distribution transformation method and its application to image integration. Signal Process. Image Comm. 2019, 76, 231–242. [Google Scholar] [CrossRef]
  26. Buades, A.; Coll, B.; Morel, J.M. A Non-Local Algorithm for Image Denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–26 June 2005; pp. 60–65. [Google Scholar] [CrossRef]
  27. Zhan, Y.; Ding, M.; Xiao, F.; Zhang, X. An Improved Non-local Means Filter for Image Denoising. In Proceedings of the International Conference on Intelligent Computation and Bio-Medical Instrumentation (ICBMI), Wuhan, China, 14–17 December 2011; pp. 31–34. [Google Scholar] [CrossRef]
  28. Bioucas-Dias, J.M.; Figueiredo, M.A.T. A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004. [Google Scholar] [CrossRef]
  29. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  30. Wright, S.J.; Nowak, R.D.; Figueiredo, M.A.T. Sparse Reconstruction by Separable Approximation. IEEE Trans. Signal Process. 2009, 57, 2479–2493. [Google Scholar] [CrossRef]
  31. Xu, L.; Lu, C.; Xu, Y.; Jia, J. Image Smoothing via L0 Gradient Minimization. ACM Trans. Graph. 2011, 30, 174. [Google Scholar] [CrossRef]
  32. Miyata, T.; Sakai, Y. Vectorized total variation defined by weighted L infinity norm for utilizing inter channel dependency. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 3057–3060. [Google Scholar] [CrossRef]
  33. Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure Extraction from Texture via Relative Total Variation. ACM Trans. Graph. 2012, 31, 139. [Google Scholar] [CrossRef]
  34. Condat, L. A Generic Proximal Algorithm for Convex Optimization—Application to Total Variation Minimization. IEEE SPS Lett. 2014, 21, 985–989. [Google Scholar] [CrossRef]
  35. Ono, S.; Yamada, I. Decorrelated Vectorial Total Variation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 4090–4097. [Google Scholar] [CrossRef]
  36. Bi, S.; Han, X.; Yu, Y. An L1 Image Transform for Edge-preserving Smoothing and Scene-level Intrinsic Decomposition. ACM Trans. Graph. 2015, 34, 78. [Google Scholar] [CrossRef]
  37. Ono, S. L0 Gradient Projection. IEEE Trans. Image Process. 2017, 26, 1554–1564. [Google Scholar] [CrossRef] [PubMed]
  38. Kobayashi, I.; Matsuoka, R. A Study on JPEG Artifact Removal by Four-Directional Difference-based ℓ0,1-norm Regularization. In Proceedings of the IEEE Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 18–21 October 2022; pp. 724–725. [Google Scholar] [CrossRef]
  39. Ma, X.; Li, X.; Zhou, Y.; Zhang, C. Image smoothing based on global sparsity decomposition and a variable parameter. Comput. Vis. Media 2021, 7, 483–497. [Google Scholar] [CrossRef]
  40. Liu, W.; Zhang, P.; Lei, Y.; Huang, X.; Yang, J.; Ng, M. A Generalized Framework for Edge-Preserving and Structure-Preserving Image Smoothing. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 6631–6648. [Google Scholar] [CrossRef]
  41. Huang, J.; Wang, H.; Wang, X.; Ruzhansky, M. Semi-Sparsity for Smoothing Filters. IEEE Trans. Image Process. 2023, 32, 1627–1639. [Google Scholar] [CrossRef]
  42. Li, Y.; Brown, M.S. Single Image Layer Separation Using Relative Smoothness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 2752–2759. [Google Scholar] [CrossRef]
  43. Shibata, T.; Akai, Y.; Matsuoka, R. Reflection Removal Using RGB-D Images. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1862–1866. [Google Scholar] [CrossRef]
  44. Li, Y.; Tan, R.T.; Guo, X.; Lu, J.; Brown, M.S. Single Image Rain Streak Decomposition Using Layer Priors. IEEE Trans. Image Process. 2017, 26, 3874–3885. [Google Scholar] [CrossRef] [PubMed]
  45. Jiang, T.; Huang, T.; Zhao, X.; Deng, L.; Wang, Y. A Novel Tensor-Based Video Rain Streaks Removal Approach via Utilizing Discriminatively Intrinsic Priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2818–2827. [Google Scholar] [CrossRef]
  46. Jiang, T.; Huang, T.; Zhao, X.; Deng, L.; Wang, Y. FastDeRain: A Novel Video Rain Streak Removal Method Using Directional Gradient Priors. IEEE Trans. Image Process. 2019, 28, 2089–2102. [Google Scholar] [CrossRef] [PubMed]
  47. Jeon, J.; Cho, S.; Tong, X.; Lee, S. Intrinsic Image Decomposition Using Structure-Texture Separation and Surface Normals. In European Conference Computer Vision (ECCV); Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; pp. 218–233. [Google Scholar]
  48. Matsuoka, R.; Baba, T.; Rizkinia, M.; Okuda, M. White Balancing by Using Multiple Images via Intrinsic Image Decomposition. IEICE Trans. Inf. Syst. 2015, 98, 1562–1570. [Google Scholar] [CrossRef]
  49. Matsuoka, R.; Ono, S.; Okuda, M. Transformed-Domain Robust Multiple-Exposure Blending With Huber Loss. IEEE Access 2019, 7, 162282–162296. [Google Scholar] [CrossRef]
  50. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. A New Pansharpening Algorithm Based on Total Variation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 318–322. [Google Scholar] [CrossRef]
  51. He, X.; Condat, L.; Bioucas-Dias, J.M.; Chanussot, J.; Xia, J. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors. IEEE Trans. Image Process. 2014, 23, 4160–4174. [Google Scholar] [CrossRef] [PubMed]
  52. Takeyama, S.; Ono, S.; Kumazawa, I. Robust and Effective Hyperspectral Pansharpening Using Spatio-Spectral Total Variation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1603–1607. [Google Scholar] [CrossRef]
  53. Takeyama, S.; Ono, S.; Kumazawa, I. Hyperspectral Pansharpening Using Noisy Panchromatic Image. In Proceedings of the APSIPA Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; pp. 880–885. [Google Scholar] [CrossRef]
  54. Zach, C.; Pock, T.; Bischof, H. A Globally Optimal Algorithm for Robust TV-L1 Range Image Integration. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar] [CrossRef]
  55. Sánchez Pérez, J.; Meinhardt-Llopis, E.; Facciolo, G. TV-L1 Optical Flow Estimation. Image Process. Line 2013, 3, 137–150. [Google Scholar] [CrossRef]
  56. Huber, P.J. Robust estimation of a location parameter. Ann. Math. Stat. 1964, 35, 73–101. [Google Scholar] [CrossRef]
  57. Ono, S.; Yamada, I. Signal Recovery With Certain Involved Convex Data-Fidelity Constraints. IEEE Trans. Signal Process. 2015, 63, 6149–6163. [Google Scholar] [CrossRef]
  58. Nguyen, R.M.H.; Brown, M.S. Fast and Effective L0 Gradient Minimization by Region Fusion. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 208–216. [Google Scholar] [CrossRef]
  59. Arvanitopoulos, N.; Achanta, R.; Süsstrunk, S. Single Image Reflection Suppression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1752–1760. [Google Scholar] [CrossRef]
  60. Gabay, D.; Mercier, B. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 1976, 2, 17–40. [Google Scholar] [CrossRef]
  61. Matsuoka, R.; Kyochi, S.; Ono, S.; Okuda, M. Joint Sparsity and Order Optimization Based on ADMM With Non-Uniform Group Hard Thresholding. IEEE Trans. Circuits Syst. I Regul. Pap. 2018, 65, 1602–1613. [Google Scholar] [CrossRef]
  62. Akai, Y.; Shibata, T.; Matsuoka, R.; Okuda, M. L0 Smoothing Based on Gradient Constraints. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3943–3947. [Google Scholar] [CrossRef]
  63. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. 1962, 255, 2897–2899. [Google Scholar]
  64. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic Tone Reproduction for Digital Images. ACM Trans. Graph. 2002, 21, 267–276. [Google Scholar] [CrossRef]
  65. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Example of image smoothing: the black line, dotted blue line, dotted green line, and dotted red line show the gradient signals of the input image and the smoothing results of the ℓ0 gradient minimization [31], the ℓ0 gradient projection [37], and our method with the box-type gradient constraint, respectively. Each gradient signal is the vertical first-order gradient of the G channel. The red arrows indicate pseudo-edges.
Figure 2. Examples of the proposed gradient constraints: the vertical and horizontal axes d v and d h represent the gradient values in the vertical and horizontal directions, respectively. The solid blue arrow and the dotted red arrow indicate the gradient vectors of the reference and ideal images, respectively. The dotted blue box and circle represent the areas that satisfy the box- and ball-type gradient constraints, respectively. (a) Box-type gradient constraint. (b) Ball-type gradient constraint.
Figure 3. Ground truth and input images for zero-mean Gaussian noise removal (σ = 5.0 × 10⁻³). (a) Ground truth. (b) Input image.
Figure 4. Noise removal results and their PSNR values [dB]: (from left to right) (a-1)–(a-4) reference images, (b-1)–(b-4) our results with the box-type constraint, and (c-1)–(c-4) our results with the ball-type constraint.
Figure 5. Detail enhancement results 1: (upper) smoothing results and (lower) detail enhancement results. (a) Input. (b) ℓ0 gradient minimization [31]. (c) ℓ0 gradient projection [37]. (d) Ours. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 6. Detail enhancement results 2: (upper) smoothing results and (lower) detail enhancement results. (a) Input. (b) ℓ0 gradient minimization [31]. (c) ℓ0 gradient projection [37]. (d) Ours. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 7. Tone-mapping results. (a) Global operator [64]. (b) ℓ0 gradient minimization [31]. (c) ℓ0 gradient projection [37]. (d) Ours. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 8. Tone-mapping results obtained with different smoothing parameters and their MSE values: (a) Reinhard et al.'s global operator [64]; (b–d) are obtained by Durand and Dorsey's tone-mapping framework [7] with (·-1) the ℓ0 gradient minimization [31] and (·-2) our methods. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 9. Comparison of PSNR and SSIM for clip-art JPEG artifact removal. (a) Peak signal-to-noise ratio (PSNR) [dB]. (b) Structural similarity (SSIM) [65].
Figure 10. Results of clip-art JPEG artifact removal and their PSNR [dB]. (a) Ground truth, (b) Input image, (c-1) [31], 31.51, (c-2) [31], 32.18, (d-1) [37], 31.19, (d-2) [37], 31.89, (e-1) Ours, 31.81, (e-2) Ours, 32.23.
