Mathematics | Article | Open Access | 11 April 2023
Novel Algorithms with Inertial Techniques for Solving Constrained Convex Minimization Problems and Applications to Image Inpainting

1 Department of Science and Mathematics, Rajamangala University of Technology Isan Surin Campus, Surin 32000, Thailand
2 Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Research Trends and Challenges in the Theory of Nonlinear Analysis and Its Applications

Abstract

In this paper, we propose two novel inertial forward–backward splitting methods for solving the constrained convex minimization of the sum of two convex functions, φ_1 + φ_2, in Hilbert spaces, and we analyze their convergence behavior under suitable conditions. For the first method (iFBS), we use the forward–backward operator. The step size of this method depends on the Lipschitz constant of ∇φ_1, and a weak convergence result for the method is established under suitable conditions via a fixed point approach. With the second method (iFBS-L), we modify the step size of the first method so that it is independent of the Lipschitz constant of ∇φ_1, by using a line search technique introduced by Cruz and Nghia. As an application of these methods, we compare the efficiency of the proposed methods with that of the inertial three-operator splitting (iTOS) method by using them to solve the constrained image inpainting problem with nuclear norm regularization. Moreover, we apply our methods to solve image restoration problems by using the least absolute shrinkage and selection operator (LASSO) model, and the results are compared with those of the forward–backward splitting method with line search (FBS-L) and the fast iterative shrinkage-thresholding algorithm (FISTA).

1. Introduction and Preliminaries

In this study, ℕ and ℝ denote the set of all positive integers and the set of all real numbers, respectively. Let H be a real Hilbert space and Id : H → H be the identity operator. The symbols → and ⇀ denote strong convergence and weak convergence, respectively.
The forward–backward splitting method is a popular method for solving the following convex minimization problems:
min_{u ∈ H} ψ_1(u) + ψ_2(u),    (1)
where ψ_2 : H → ℝ ∪ {+∞} is a convex proper lower semi-continuous function, ψ_1 : H → ℝ is a convex and differentiable function, and ∇ψ_1 is Lipschitz continuous with constant L.
In 2005, Combettes and Wajs [1] proposed a relaxed forward–backward splitting method as follows:
u_{k+1} = u_k + β_k ( prox_{λ_k ψ_2}( u_k − λ_k ∇ψ_1(u_k) ) − u_k ), ∀k ∈ ℕ,
where u_1 ∈ ℝ^N, a ∈ (0, min{1, 1/L}), λ_k ∈ [a, 2/L − a], and β_k ∈ [a, 1].
In 2016, Cruz and Nghia [2] presented a technique for selecting the step size λ_k that is independent of the Lipschitz constant of ∇ψ_1 by using a line search process, as follows.
We denote by Linesearch(ψ_1, ψ_2)(u, σ, θ, δ) the output of Algorithm 1 with respect to the functions ψ_1 and ψ_2 at u. The forward–backward splitting method with line search (FBS-L), where the step size λ_k is generated by Algorithm 1, is as follows:
u_{k+1} = prox_{λ_k ψ_2}( u_k − λ_k ∇ψ_1(u_k) ), ∀k ∈ ℕ,
where u_1 ∈ H, θ ∈ (0, 1), σ > 0, δ ∈ (0, 1/2), and λ_k := Linesearch(ψ_1, ψ_2)(u_k, σ, θ, δ).
Algorithm 1 Let u ∈ H, θ ∈ (0, 1), σ > 0 and δ > 0.
Input λ = σ.
      While
λ ‖∇ψ_1( prox_{λψ_2}(Id − λ∇ψ_1)(u) ) − ∇ψ_1(u)‖ > δ ‖prox_{λψ_2}(Id − λ∇ψ_1)(u) − u‖,
      do
               λ = θλ.
      End
Output λ.
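For concreteness, the following is a minimal Python/NumPy sketch of Algorithm 1, assuming the caller supplies grad_psi1 (the gradient ∇ψ_1) and a proximal map prox_psi2(v, lam) for ψ_2; these function names are our own illustrative choices, not the authors' code.

    import numpy as np

    def linesearch(u, sigma, theta, delta, grad_psi1, prox_psi2):
        # Algorithm 1: backtracking choice of the step size lambda.
        lam = sigma
        while True:
            # forward-backward point prox_{lam psi2}(Id - lam grad psi1)(u)
            fb = prox_psi2(u - lam * grad_psi1(u), lam)
            # accept lam once lam*||grad psi1(fb) - grad psi1(u)|| <= delta*||fb - u||
            if lam * np.linalg.norm(grad_psi1(fb) - grad_psi1(u)) <= delta * np.linalg.norm(fb - u):
                return lam
            lam *= theta  # otherwise shrink lambda by the factor theta and retry

If ∇ψ_1 happens to be L-Lipschitz, the while-condition fails once λ ≤ δ/L, so the loop terminates after finitely many backtracks.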
Inertial techniques are applied in order to speed up the forward–backward splitting method. Various inertial techniques and fixed point methods have been studied; see [3,4,5,6,7,8], for example. Moudafi and Oliny [9] introduced the following inertial forward–backward splitting method:
u_{k+1} = prox_{λ_k ψ_2}( (u_k + α_k(u_k − u_{k−1})) − λ_k ∇ψ_1( u_k + α_k(u_k − u_{k−1}) ) ), ∀k ∈ ℕ,
where u_0, u_1 ∈ H, λ_k ∈ (0, 2/L), and α_k ∈ [0, 1). The inertial parameter α_k controls the momentum term u_k − u_{k−1}. In 2009, Beck and Teboulle [3] proposed the fast iterative shrinkage-thresholding algorithm (FISTA) for solving the convex minimization problem, Problem (1), generating the sequence {u_k} as follows:
y_k = prox_{(1/L)ψ_2}(Id − (1/L)∇ψ_1)(u_k),
t_{k+1} = (1 + √(1 + 4t_k²)) / 2,
u_{k+1} = y_k + ((t_k − 1)/t_{k+1}) (y_k − y_{k−1}), ∀k ∈ ℕ,
where t_1 = 1 and u_1 = y_0 ∈ H.
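A compact sketch of FISTA in the same notation, assuming the caller supplies the Lipschitz constant L of ∇ψ_1, the proximal map of ψ_2, and a NumPy array u1 (function names are ours):

    import numpy as np

    def fista(u1, L, grad_psi1, prox_psi2, iters=100):
        # t_1 = 1 and u_1 = y_0
        y_prev, t, u = u1.copy(), 1.0, u1.copy()
        for _ in range(iters):
            y = prox_psi2(u - grad_psi1(u) / L, 1.0 / L)       # y_k: forward-backward step
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # t_{k+1}
            u = y + ((t - 1.0) / t_next) * (y - y_prev)        # u_{k+1}: extrapolation
            y_prev, t = y, t_next
        return y_prev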
Recently, Cui et al. [10] proposed an inertial three-operator splitting (iTOS) method for solving the following general convex minimization problem:
min_{u ∈ H} ψ_1(u) + ψ_2(u) + ψ_3(u),
where ψ_2 : H → ℝ ∪ {+∞} and ψ_3 : H → ℝ ∪ {+∞} are convex proper lower semi-continuous functions, ψ_1 : H → ℝ is a convex and differentiable function, and ∇ψ_1 is Lipschitz continuous with constant L. The iTOS method is defined as follows:
v_k = u_k + α_k(u_k − u_{k−1});
u_{ψ_3}^k = prox_{λψ_3}(v_k);
u_{ψ_2}^k = prox_{λψ_2}( 2u_{ψ_3}^k − v_k − λ∇ψ_1(u_{ψ_3}^k) );
u_{k+1} = v_k + β_k( u_{ψ_2}^k − u_{ψ_3}^k ), ∀k ∈ ℕ,    (3)
where u_0, u_1 ∈ H, λ ∈ (0, (2/L)ε), ε ∈ (0, 1), {α_k} is non-decreasing with 0 ≤ α_k ≤ d < 1 for all k ≥ 1, and a, b, c > 0 are such that
c > (d²(1 + d) + db) / (1 − d²)  and  0 < a ≤ β_k ≤ (c − d[d(1 + d) + dc + b]) / ( d̄ c[1 + d(1 + d) + dc + b] ),  where d̄ = 1/(2 − ε).
They proved the convergence of the iTOS method and applied it to solve the following constrained image inpainting problem:
min_{u ∈ C} (1/2)‖A(u − u^0)‖_F² + τ‖u‖_*,    (4)
where A(u^0) is the matrix of known entries of a given u^0 ∈ ℝ^{m×n}, A is the linear map that selects a subset of the entries of an m × n matrix by setting each unknown entry to 0, and C is a nonempty closed convex subset of ℝ^{m×n}. The regularization parameter is τ > 0, ‖·‖_* is the nuclear matrix norm, and ‖·‖_F is the Frobenius matrix norm. The constrained image inpainting problem, Problem (4), is equivalent to the following unconstrained image inpainting problem:
min_u (1/2)‖A(u − u^0)‖_F² + τ‖u‖_* + δ_C(u),    (5)
where δ_C is the indicator function of C.
In this paper, we introduce two novel inertial forward–backward splitting methods for solving the following constrained convex minimization problem:
min_{u ∈ Argmin(ψ_1 + ψ_2)} φ_1(u) + φ_2(u),    (6)
where φ_1 : H → ℝ, ψ_1 : H → ℝ, φ_2 : H → ℝ ∪ {+∞}, and ψ_2 : H → ℝ ∪ {+∞} are convex proper lower semi-continuous functions. Note that u* is a solution of Problem (6) if it is a common minimizer of the following convex minimization problems:
min_{u ∈ H} φ_1(u) + φ_2(u)  and  min_{u ∈ H} ψ_1(u) + ψ_2(u).    (7)
Then, we prove the weak convergence of the proposed methods by using a fixed point method and a line search technique, respectively. Moreover, we apply our methods to solve the constrained image inpainting problem, Problem (4), and compare their performance with that of the iTOS method (3).
We now collect the basic definitions and lemmas needed in this paper. Let ψ : H → ℝ ∪ {+∞} be a convex proper lower semi-continuous function. The proximal operator [11,12] of ψ, denoted by prox_ψ, is defined as follows: for each u ∈ H, prox_ψ(u) is the unique solution of the minimization problem
minimize_{z ∈ H} ψ(z) + (1/2)‖u − z‖².
Equivalently, the proximity operator of ψ is the mapping from H to H given by
prox_ψ = (Id + ∂ψ)^{−1},
where the subdifferential of ψ is defined by
∂ψ(v) := {u ∈ H : ψ(v) + ⟨u, z − v⟩ ≤ ψ(z), ∀z ∈ H}, ∀v ∈ H.
We notice that prox_{δ_C} = P_C, where C is a nonempty closed convex subset of H and P_C : H → C is the orthogonal projection operator. The following useful fact is also needed:
(u − prox_{λψ}(u))/λ ∈ ∂ψ(prox_{λψ}(u)), ∀u ∈ H, ∀λ > 0.
If ψ_1 is differentiable on H, then a solution of the convex minimization problem, Problem (1), is a fixed point of the forward–backward operator; i.e., for λ > 0,
u ∈ Argmin(ψ_1 + ψ_2) if and only if u = prox_{λψ_2}(Id − λ∇ψ_1)(u),
where prox_{λψ_2} is the backward step and Id − λ∇ψ_1 is the forward step. Note that the subdifferential operator ∂ψ is maximal monotone [13] and that the operator prox_{λψ_2}(Id − λ∇ψ_1) is a nonexpansive mapping for λ ∈ (0, 2/L).
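As an illustration of this fixed-point characterization, the sketch below performs one forward–backward step for the common special case ψ_2 = τ‖·‖_1, whose proximal operator is componentwise soft-thresholding; this example (and its function names) is ours, chosen for concreteness rather than taken from the paper.

    import numpy as np

    def soft_threshold(v, t):
        # prox of t*||.||_1: shrink each component toward zero by t
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward_step(u, lam, grad_psi1, tau):
        # forward (gradient) step, then backward (proximal) step
        return soft_threshold(u - lam * grad_psi1(u), lam * tau)

A minimizer u* of ψ_1 + τ‖·‖_1 then satisfies u* = forward_backward_step(u*, lam, grad_psi1, tau).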
We consider the following classes of Lipschitz continuous and nonexpansive mappings. A mapping S : H → H is said to be Lipschitz continuous if there exists L > 0 such that
‖Su − Sv‖ ≤ L‖u − v‖, ∀u, v ∈ H.
The mapping S is said to be nonexpansive if S is 1-Lipschitz continuous. A point u ∈ H is said to be a fixed point of S if u = Su. The set of all fixed points of S is denoted by Fix(S).
The mapping Id − S is called demiclosed at zero if, for any sequence {u_k} in H such that u_k ⇀ u and u_k − Su_k → 0 as k → ∞, we have u ∈ Fix(S). It is known that if S is a nonexpansive mapping, then Id − S is demiclosed at zero [14]. Let {S_k : H → H} and S : H → H be such that ∅ ≠ Fix(S) ⊆ ∩_{k=1}^∞ Fix(S_k). Then {S_k} is said to satisfy NST-condition (I) with S [15] if, for every bounded sequence {u_k} in H,
lim_{k→∞} ‖u_k − S_k u_k‖ = 0 implies lim_{k→∞} ‖u_k − S u_k‖ = 0.
A sequence {S_k} is said to satisfy condition (Z) [16,17] if, whenever {u_k} is a bounded sequence in H such that
lim_{k→∞} ‖u_k − S_k u_k‖ = 0,
every weak cluster point of {u_k} belongs to ∩_{k=1}^∞ Fix(S_k). The following identities in H will be used in this paper (see [11]): for any y, z ∈ H and τ ∈ [0, 1],
‖τy + (1 − τ)z‖² = τ‖y‖² + (1 − τ)‖z‖² − τ(1 − τ)‖y − z‖²,    (11)
‖y ± z‖² = ‖y‖² ± 2⟨y, z⟩ + ‖z‖².    (12)
Lemma 1
([4]). Let ψ_1 : H → ℝ be a convex and differentiable function such that ∇ψ_1 is Lipschitz continuous with constant L, and let ψ_2 : H → ℝ ∪ {+∞} be a convex proper lower semi-continuous function. Let S_k := prox_{λ_k ψ_2}(Id − λ_k∇ψ_1) and S := prox_{λψ_2}(Id − λ∇ψ_1), where 0 < λ_k, λ < 2/L with lim_{k→∞} λ_k = λ. Then {S_k} satisfies NST-condition (I) with S.
Lemma 2
([5]). Let {u_k} and {α_k} be sequences of nonnegative real numbers such that
u_{k+1} ≤ (1 + α_k)u_k + α_k u_{k−1}, ∀k ∈ ℕ.
Then u_{k+1} ≤ N · ∏_{i=1}^k (1 + 2α_i), where N = max{u_1, u_2}. Moreover, if ∑_{k=1}^∞ α_k < ∞, then {u_k} is bounded.
Lemma 3
([18]). If ψ : H → ℝ ∪ {+∞} is a convex proper lower semi-continuous function, then the graph of ∂ψ, defined by Gph(∂ψ) := {(u, v) ∈ H × H : v ∈ ∂ψ(u)}, is demiclosed; i.e., if a sequence {(u_k, v_k)} ⊂ Gph(∂ψ) satisfies u_k ⇀ u and v_k → v, then (u, v) ∈ Gph(∂ψ).
Lemma 4
([19]). Let ψ_1 and ψ_2 be convex proper lower semi-continuous functions from H to ℝ ∪ {+∞}. Then for all u ∈ dom ψ_2 and c_2 ≥ c_1 > 0, we have
(c_2/c_1)‖u − prox_{c_1 ψ_2}(Id − c_1∇ψ_1)(u)‖ ≥ ‖u − prox_{c_2 ψ_2}(Id − c_2∇ψ_1)(u)‖ ≥ ‖u − prox_{c_1 ψ_2}(Id − c_1∇ψ_1)(u)‖.
Lemma 5
([20]). Let {u_k} and {v_k} be sequences of nonnegative real numbers such that u_{k+1} ≤ u_k + v_k, ∀k ∈ ℕ. If ∑_{k=1}^∞ v_k < ∞, then lim_{k→∞} u_k exists.
Lemma 6
([14,21]). Let {u_k} be a sequence in H such that there exists a nonempty set Γ ⊆ H satisfying:
(i)
For every u* ∈ Γ, lim_{k→∞} ‖u_k − u*‖ exists;
(ii)
ω_w(u_k) ⊆ Γ, where ω_w(u_k) is the set of all weak cluster points of {u_k}.
Then, {u_k} converges weakly to a point in Γ.

2. An Inertial Forward–Backward Splitting Method Based on Fixed Point Inertial Techniques

In this section, we propose an inertial forward–backward splitting method for finding common solutions of the convex minimization problems in (7) by using a fixed-point inertial technique, and we prove the weak convergence of the proposed method.
First, we propose a fixed-point inertial method that approximates a common fixed point of two countable families of nonexpansive mappings. The assumptions for the iterative method are as follows:
(A1) 
{S_k : H → H} and {T_k : H → H} are two countable families of nonexpansive mappings satisfying condition (Z);
(A2) 
Γ := ∩_{k=1}^∞ Fix(S_k) ∩ ∩_{k=1}^∞ Fix(T_k) ≠ ∅.
Now, we prove the first convergence theorem for Algorithm 2.
Algorithm 2 A fixed-point inertial method.
  •        Let u_0, u_1 ∈ H. Choose {α_k}, {β_k} and {γ_k}.
  •        For  k = 1 , 2 , . . .  do
    v_k = u_k + α_k(u_k − u_{k−1});    (13)
    w_k = v_k + β_k(S_k v_k − v_k);    (14)
    u_{k+1} = (1 − γ_k)S_k w_k + γ_k T_k w_k,    (15)
  •       end for.
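A minimal sketch of Algorithm 2, treating the families {S_k}, {T_k} and the parameter sequences as caller-supplied callables (our own abstraction, not the authors' code):

    def fixed_point_inertial(u0, u1, S, T, alpha, beta, gamma, iters=100):
        # S(k, x) and T(k, x) evaluate S_k x and T_k x;
        # alpha(k), beta(k), gamma(k) return alpha_k, beta_k, gamma_k.
        u_prev, u = u0, u1
        for k in range(1, iters + 1):
            v = u + alpha(k) * (u - u_prev)                               # (13) inertial step
            w = v + beta(k) * (S(k, v) - v)                               # (14)
            u_prev, u = u, (1 - gamma(k)) * S(k, w) + gamma(k) * T(k, w)  # (15)
        return u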
Theorem 1.
Let {u_k} be the sequence generated by Algorithm 2. Suppose that the sequences {α_k}, {β_k} and {γ_k} satisfy the following conditions:
(C1) 
0 < a ≤ β_k ≤ b < 1 and 0 < c ≤ γ_k ≤ d < 1, ∀k ∈ ℕ, for some a, b, c, d ∈ ℝ;
(C2) 
α_k ≥ 0, ∀k ∈ ℕ, and ∑_{k=1}^∞ α_k < ∞.
Then the following hold:
(i)
‖u_{k+1} − u*‖ ≤ N · ∏_{j=1}^k (1 + 2α_j), where N = max{‖u_1 − u*‖, ‖u_2 − u*‖} and u* ∈ Γ.
(ii)
{ u k } converges weakly to some element in Γ .
Proof. 
(i) Let u* ∈ Γ. Using (13) and the triangle inequality, we have
‖v_k − u*‖ ≤ ‖u_k − u*‖ + α_k‖u_k − u_{k−1}‖.    (16)
Using (14) and the nonexpansiveness of S_k, we have
‖w_k − u*‖ ≤ (1 − β_k)‖v_k − u*‖ + β_k‖S_k v_k − u*‖ ≤ ‖v_k − u*‖.    (17)
Using (15) and the nonexpansiveness of S_k and T_k, we obtain
‖u_{k+1} − u*‖ ≤ (1 − γ_k)‖S_k w_k − u*‖ + γ_k‖T_k w_k − u*‖ ≤ ‖w_k − u*‖.    (18)
From (16)–(18), we get
‖u_{k+1} − u*‖ ≤ ‖w_k − u*‖ ≤ ‖v_k − u*‖ ≤ ‖u_k − u*‖ + α_k‖u_k − u_{k−1}‖.    (19)
Since ‖u_k − u_{k−1}‖ ≤ ‖u_k − u*‖ + ‖u_{k−1} − u*‖, this implies
‖u_{k+1} − u*‖ ≤ (1 + α_k)‖u_k − u*‖ + α_k‖u_{k−1} − u*‖.    (20)
Applying Lemma 2, we get ‖u_{k+1} − u*‖ ≤ N · ∏_{j=1}^k (1 + 2α_j), where N = max{‖u_1 − u*‖, ‖u_2 − u*‖}.
(ii) By (i) and condition (C2), {u_k} is bounded, and hence {v_k} and {w_k} are also bounded. Since {u_k} is bounded and ∑_{k=1}^∞ α_k < ∞, this implies ∑_{k=1}^∞ α_k‖u_k − u_{k−1}‖ < ∞. Using (19) and Lemma 5, we obtain that lim_{k→∞} ‖u_k − u*‖ exists. Using (12) and (13), we obtain
‖v_k − u*‖² ≤ ‖u_k − u*‖² + α_k²‖u_k − u_{k−1}‖² + 2α_k‖u_k − u*‖‖u_k − u_{k−1}‖.    (21)
Using (11), (14), (15), and the nonexpansiveness of S_k and T_k, we obtain
‖w_k − u*‖² = (1 − β_k)‖v_k − u*‖² + β_k‖S_k v_k − u*‖² − β_k(1 − β_k)‖v_k − S_k v_k‖² ≤ ‖v_k − u*‖² − β_k(1 − β_k)‖v_k − S_k v_k‖²,    (22)
and
‖u_{k+1} − u*‖² ≤ (1 − γ_k)‖S_k w_k − u*‖² + γ_k‖T_k w_k − u*‖² − γ_k(1 − γ_k)‖T_k w_k − S_k w_k‖² ≤ ‖w_k − u*‖² − γ_k(1 − γ_k)‖T_k w_k − S_k w_k‖².    (23)
From (21)–(23), we obtain
‖u_{k+1} − u*‖² ≤ ‖u_k − u*‖² + α_k²‖u_k − u_{k−1}‖² + 2α_k‖u_k − u*‖‖u_k − u_{k−1}‖ − γ_k(1 − γ_k)‖T_k w_k − S_k w_k‖² − β_k(1 − β_k)‖v_k − S_k v_k‖².    (24)
From (24), condition (C1), ∑_{k=1}^∞ α_k‖u_k − u_{k−1}‖ < ∞, and the existence of lim_{k→∞} ‖u_k − u*‖, we have
lim_{k→∞} ‖T_k w_k − S_k w_k‖ = lim_{k→∞} ‖v_k − S_k v_k‖ = 0, and, since w_k − v_k = β_k(S_k v_k − v_k), also lim_{k→∞} ‖w_k − v_k‖ = 0.    (25)
From (25) and the nonexpansiveness of S_k, we obtain
‖T_k w_k − w_k‖ ≤ ‖T_k w_k − S_k w_k‖ + ‖S_k w_k − S_k v_k‖ + ‖S_k v_k − v_k‖ + ‖v_k − w_k‖ ≤ ‖T_k w_k − S_k w_k‖ + 2‖w_k − v_k‖ + ‖S_k v_k − v_k‖ → 0 as k → ∞.    (26)
From (13) and ∑_{k=1}^∞ α_k‖u_k − u_{k−1}‖ < ∞, we have
‖v_k − u_k‖ = α_k‖u_k − u_{k−1}‖ → 0 as k → ∞.    (27)
Since {u_k} is bounded, ω_w(u_k) is nonempty. Using (25) and (27), we get ω_w(u_k) = ω_w(v_k) = ω_w(w_k). Since {S_k} and {T_k} satisfy condition (Z), lim_{k→∞} ‖v_k − S_k v_k‖ = 0, and lim_{k→∞} ‖w_k − T_k w_k‖ = 0, we have ω_w(v_k) ⊆ ∩_{k=1}^∞ Fix(S_k) and ω_w(w_k) ⊆ ∩_{k=1}^∞ Fix(T_k), respectively. So, we get ω_w(u_k) ⊆ ∩_{k=1}^∞ Fix(S_k) ∩ ∩_{k=1}^∞ Fix(T_k) = Γ. Using Lemma 6, we conclude that {u_k} converges weakly to a point in Γ. This completes the proof.    □
Remark 1.
Assuming that S_k = T_k for all k ≥ 1, Algorithm 2 reduces to the following Algorithm 3:
Algorithm 3 The inertial Picard–Mann hybrid iterative process.
  •        Let u_0, u_1 ∈ H. Choose {α_k} and {β_k}.
  •        For  k = 1 , 2 , . . .  do
    v_k = u_k + α_k(u_k − u_{k−1}); u_{k+1} = S_k( v_k + β_k(S_k v_k − v_k) ).
  •        end for.
Remark 2.
If α_k = 0 and S_k = S for all k ≥ 1, the inertial Picard–Mann hybrid iterative process (Algorithm 3) reduces to the Picard–Mann hybrid iterative process [22].
We are now in a position to state the inertial forward–backward splitting method and show its convergence properties. The standing assumptions used for this method are as follows:
(D1) 
φ_1 : H → ℝ and ψ_1 : H → ℝ are differentiable and convex functions;
(D2) 
∇φ_1 and ∇ψ_1 are Lipschitz continuous with constants L_1 and L_2, respectively;
(D3) 
φ_2 : H → ℝ ∪ {+∞} and ψ_2 : H → ℝ ∪ {+∞} are convex proper lower semi-continuous functions;
(D4) 
Θ := Argmin(φ_1 + φ_2) ∩ Argmin(ψ_1 + ψ_2) ≠ ∅.
Remark 3.
Let S_k := prox_{λ_k φ_2}(Id − λ_k∇φ_1) and S := prox_{λφ_2}(Id − λ∇φ_1). If 0 < λ_k, λ < 2/L_1, then S_k and S are nonexpansive mappings with Fix(S) = Argmin(φ_1 + φ_2) = ∩_{k=1}^∞ Fix(S_k). Moreover, if λ_k → λ, then by Lemma 1, {S_k} satisfies NST-condition (I) with S.
Now, we prove the convergence theorem of Algorithm 4.
Algorithm 4 An inertial forward–backward splitting (iFBS) method.
       Let u_0, u_1 ∈ H. Choose {α_k}, {β_k}, {γ_k}, {λ_k} and {λ_k*}.
       For  k = 1 , 2 , . . .  do
v_k = u_k + α_k(u_k − u_{k−1});
w_k = v_k + β_k( prox_{λ_k φ_2}(Id − λ_k∇φ_1)(v_k) − v_k );
u_{k+1} = (1 − γ_k) prox_{λ_k φ_2}(Id − λ_k∇φ_1)(w_k) + γ_k prox_{λ_k* ψ_2}(Id − λ_k*∇ψ_1)(w_k),
       end for.
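Algorithm 4 is Algorithm 2 with S_k and T_k instantiated as forward–backward operators. A minimal sketch under that reading, with caller-supplied gradients, proximal maps, and parameter sequences (all names are ours):

    def ifbs(u0, u1, grad_phi1, prox_phi2, grad_psi1, prox_psi2,
             alpha, beta, gamma, lam, lam_star, iters=100):
        # prox_phi2(v, t) evaluates prox_{t*phi2}(v); lam(k) and lam_star(k)
        # return the step sizes lambda_k and lambda_k^*.
        u_prev, u = u0, u1
        for k in range(1, iters + 1):
            v = u + alpha(k) * (u - u_prev)
            Sv = prox_phi2(v - lam(k) * grad_phi1(v), lam(k))            # S_k v
            w = v + beta(k) * (Sv - v)
            Sw = prox_phi2(w - lam(k) * grad_phi1(w), lam(k))            # S_k w
            Tw = prox_psi2(w - lam_star(k) * grad_psi1(w), lam_star(k))  # T_k w
            u_prev, u = u, (1 - gamma(k)) * Sw + gamma(k) * Tw
        return u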
Theorem 2.
Let {u_k} be the sequence generated by Algorithm 4. Suppose that the sequences {α_k}, {β_k} and {γ_k} satisfy the following conditions:
(C1)
0 < a ≤ β_k ≤ b < 1 and 0 < c ≤ γ_k ≤ d < 1, ∀k ∈ ℕ, for some a, b, c, d ∈ ℝ;
(C2)
α_k ≥ 0, ∀k ∈ ℕ, and ∑_{k=1}^∞ α_k < ∞;
(C3)
0 < λ_k, λ < 2/L_1 and 0 < λ_k*, λ* < 2/L_2, ∀k ∈ ℕ, such that λ_k → λ and λ_k* → λ* as k → ∞.
Then the following hold:
(i)
‖u_{k+1} − u*‖ ≤ N · ∏_{j=1}^k (1 + 2α_j), where N = max{‖u_1 − u*‖, ‖u_2 − u*‖} and u* ∈ Θ;
(ii)
{ u k } converges weakly to some element in Θ .
Proof. 
The operators S_k, T_k, S, T : H → H are defined as follows:
S_k := prox_{λ_k φ_2}(Id − λ_k∇φ_1), S := prox_{λφ_2}(Id − λ∇φ_1), T_k := prox_{λ_k* ψ_2}(Id − λ_k*∇ψ_1), and T := prox_{λ*ψ_2}(Id − λ*∇ψ_1).
By applying Remark 3 and condition (C3), {S_k : H → H} and {T_k : H → H} are two families of nonexpansive mappings satisfying NST-condition (I) with S and T, respectively. This implies that {S_k} and {T_k} satisfy condition (Z). Therefore, the result follows directly from Theorem 1.    □
Remark 4.
If φ_1 = ψ_1, φ_2 = ψ_2, and λ_k = λ_k*, then Algorithm 4 reduces to the following method (Algorithm 5) for solving the minimization of the sum of two convex functions, Problem (1):
Algorithm 5 An inertial Picard–Mann forward–backward splitting (iPM-FBS) method.
  •        Let u_0, u_1 ∈ H. Choose {α_k}, {β_k} and {λ_k}.
  •        For  k = 1 , 2 , . . .  do
    v_k = u_k + α_k(u_k − u_{k−1}); w_k = v_k + β_k( prox_{λ_k ψ_2}(Id − λ_k∇ψ_1)(v_k) − v_k ); u_{k+1} = prox_{λ_k ψ_2}(Id − λ_k∇ψ_1)(w_k),
  •        end for.

4. Numerical Examples

The following experiments are executed in MATLAB and run on a PC with Intel Core-i7/16.00 GB RAM/Windows 11/64-bit. For image quality measurements, we utilize the peak signal-to-noise ratio (PSNR) in decibels (dB), which is defined as follows:
PSNR := 10 log_{10}( 255² / ( (1/M)‖u_k − u‖_2² ) ),
where M and u are the number of image samples and the original image, respectively.
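A direct NumPy transcription of this measure (the helper name is ours):

    import numpy as np

    def psnr(u_k, u):
        # PSNR in dB for 8-bit images; the mean realizes the (1/M) sum over samples
        mse = np.mean((np.asarray(u_k, float) - np.asarray(u, float)) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)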

4.1. The Constrained Image Inpainting Problems

In this subsection, we specialize the constrained convex minimization problem, Problem (6), to the constrained image inpainting problem, Problem (4). We analyze and illustrate the convergence behavior of the iFBS method (Algorithm 4) and the iFBS-L method (Algorithm 6) on this problem and compare their efficiency with that of the iTOS method [10].
In this experiment, we discuss the following constrained image inpainting problem [10]:
min_{u ∈ C} (1/2)‖P_Λ(u^0) − P_Λ(u)‖_F² + τ‖u‖_*,    (44)
where u^0 ∈ ℝ^{m×n} is a given image. We define P_Λ entrywise by
P_Λ(u^0)_{ij} = u^0_{ij} if (i, j) ∈ Λ, and P_Λ(u^0)_{ij} = 0 otherwise,
where {u^0_{ij}}_{(i,j)∈Λ} are the observed entries, Λ is a subset of the index set {1, 2, 3, ..., m} × {1, 2, 3, ..., n} indicating where data are available in the image domain (the remaining entries are missing), and C = {u ∈ ℝ^{m×n} : u_{ij} ≥ 0}.
It is obvious that the constrained image inpainting problem, Problem (44), is a special case of the constrained convex minimization problem, Problem (6); i.e., we can set
ψ_1(u) = 0, ψ_2(u) = δ_C(u), φ_1(u) = (1/2)‖P_Λ(u^0) − P_Λ(u)‖_F², and φ_2(u) = τ‖u‖_*.
Thus, φ_1(u) is differentiable and convex, and ∇φ_1(u) = P_Λ(u) − P_Λ(u^0) is Lipschitz continuous with constant L = 1. The proximity operator of ψ_2 is the orthogonal projection onto the closed convex set C, and the proximity operator of φ_2 can be computed using the singular value decomposition (SVD); see [23]. Thus, the iFBS method (Algorithm 4) and the iFBS-L method (Algorithm 6) can be employed to solve the constrained image inpainting problem, Problem (44), as sketched below.
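A minimal NumPy sketch of the three ingredients just listed (singular value thresholding for the prox of τ‖·‖_*, clipping for P_C, and the masked-difference gradient); the function names and the 0/1-mask encoding of Λ are our own choices:

    import numpy as np

    def prox_nuclear(u, t):
        # prox of t*||.||_*: soft-threshold the singular values by t [23]
        U, s, Vt = np.linalg.svd(u, full_matrices=False)
        return (U * np.maximum(s - t, 0.0)) @ Vt

    def proj_C(u):
        # prox of the indicator of C = {u : u_ij >= 0} is the projection onto C
        return np.maximum(u, 0.0)

    def grad_phi1(u, u0, mask):
        # gradient of (1/2)||P_Lambda(u0) - P_Lambda(u)||_F^2, with mask the
        # 0/1 indicator of Lambda; the Lipschitz constant is L = 1
        return mask * (u - u0)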
In this demonstration, we identify and correct the damaged area of the Wat Chedi Luang image in the default gallery, and we stop the computation when
‖u_{k+1} − u_k‖_F / ‖u_k‖_F ≤ ϵ = 10^{−5} or at the 4000th iteration.
The parameter settings suggested in Table 1 are used for the iFBS, iFBS-L, and iTOS methods. Note that the sequence α_k can be defined by
α_k = ρ_k if 1 ≤ k ≤ 4000, and α_k = 1/2^k otherwise,
which satisfies condition (C2), since the tail ∑_k 1/2^k is finite. A one-line transcription of this schedule follows.
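Here rho stands in for whatever value ρ_k Table 1 prescribes on the first 4000 iterations (a hypothetical placeholder):

    def alpha(k, rho):
        # rho(k): the Table 1 value rho_k for 1 <= k <= 4000 (hypothetical placeholder)
        return rho(k) if 1 <= k <= 4000 else 0.5 ** k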
Table 1. Details of parameters for each method.
The PSNR values of the restored images, the CPU time (in seconds), and the number of iterations (Iter.) are listed in Table 2.
Table 2. Performance evaluation of the restored images.
As shown in Table 2, the numerical examples indicate that our proposed methods perform better than the inertial three-operator splitting (iTOS) method [10] in terms of both PSNR and the number of iterations. The running time of the iFBS method is less than those of the iFBS-L and iTOS methods. As a visual demonstration, we exhibit the original image, the painted image, and the restored images for the regularization parameter τ = 0.001 in Figure 1.
Figure 1. The original image, painted image, and restored images in the case of a regularization parameter of τ = 0.001 .

4.2. The Least Absolute Shrinkage and Selection Operator (LASSO)

In this subsection, we provide numerical results comparing Algorithm 5 (iPM-FBS), Algorithm 7 (iPM-FBS-L), the FISTA method [3], and the FBS-L method [2] on image restoration problems using the LASSO model:
min_{u ∈ ℝ^N} ψ_1(u) + ψ_2(u) := (1/2)‖Bu − b‖_2² + τ‖u‖_1,
where B = RW, R is the kernel matrix, W is the 2-D fast Fourier transform, b is the observed image, τ > 0, ‖·‖_1 is the l_1-norm, and ‖·‖_2 is the Euclidean norm.
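In this model, ∇ψ_1(u) = B^T(Bu − b), whose Lipschitz constant is ‖B‖² (the squared spectral norm), and the proximal map of τ‖·‖_1 is the soft-thresholding operator from Section 1. A minimal sketch with a generic dense B (our simplification; the paper's B = RW combines a blur kernel with the 2-D FFT):

    import numpy as np

    def grad_psi1(u, B, b):
        # gradient of (1/2)||B u - b||_2^2
        return B.T @ (B @ u - b)

    def lasso_lipschitz(B):
        # Lipschitz constant of the gradient: largest singular value of B, squared
        return np.linalg.norm(B, 2) ** 2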
In this experiment, we use an RGB test image of size 256 × 256 pixels as the original image and evaluate the performance of the iPM-FBS, iPM-FBS-L, FISTA, and FBS-L methods in two typical image blurring scenarios, which are summarized in Table 3 (see the test images in Figure 2). The parameters for the methods are set in Table 4.
Table 3. Description of image blurring scenarios.
Figure 2. The test images. (a) original image, (b) Blurred image in scenario I, and (c) Blurred image in scenario II.
Table 4. Details of parameters for each method.
From the results in Table 5, it is evident that the iPM-FBS method produces the best PSNR results. As a visual demonstration, we exhibit the restored images for the regularization parameter τ = 10^{−8} in Figure 3 and Figure 4. The PSNR graphs in Figure 5 further show that the iPM-FBS method has considerable advantages in terms of recovery performance when compared to the iPM-FBS-L, FISTA, and FBS-L methods.
Table 5. The comparison of the PSNR performance of the four methods at the 200th iteration.
Figure 3. The images recovered in scenario I in the case of τ = 10^{−8}. (a–d) Images recovered using iPM-FBS, iPM-FBS-L, FISTA, and FBS-L, respectively.
Figure 4. The images recovered in scenario II in the case of τ = 10^{−8}. (a–d) Images recovered using iPM-FBS, iPM-FBS-L, FISTA, and FBS-L, respectively.
Figure 5. The PSNR comparisons of iPM-FBS, iPM-FBS-L, FISTA, and FBS-L methods.

5. Conclusions

This paper proposed two novel inertial forward–backward splitting methods, the iFBS method and the iFBS-L method, for finding common solutions of convex minimization problems involving the sum of two convex functions in real Hilbert spaces. The key idea was to establish the weak convergence of the proposed methods by utilizing the fixed point properties of the forward–backward operators and a line search strategy, respectively. The proposed methods were applied to solve image inpainting problems. Numerical simulations showed that our methods (iFBS and iFBS-L) performed better than the inertial three-operator splitting (iTOS) method [10] in terms of both PSNR and the number of iterations. Finally, we reduced our methods to solve the convex minimization of the sum of two convex functions, obtaining the iPM-FBS and iPM-FBS-L methods, and applied them to solve image recovery problems. Numerical simulations showed that the iPM-FBS method performed better than the iPM-FBS-L method, the FISTA method [3], and the FBS-L method [2] in terms of PSNR performance.

Author Contributions

Formal analysis, writing—original draft preparation, methodology, writing—review and editing, software, A.H.; Conceptualization, Supervision, revised the manuscript, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was supported by the Science Research and Innovation Fund, Contract No. FF66-P1-011; the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183]; and Fundamental Fund 2023, Chiang Mai University.

Data Availability Statement

Not applicable.

Acknowledgments

This research project was supported by Science Research and Innovation Fund, Contract No. FF66-P1-011 and Fundamental Fund 2023, Chiang Mai University. The second author has received funding support from the NSRF via the program Management Unit for Human Resources & Institutional Development, Research, and Innovation [grant number B05F640183]. We also would like to thank Chiang Mai University and the Rajamangala University of Technology Isan for their partial financial support.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
  2. Cruz, J.Y.B.; Nghia, T.T.A. On the convergence of the forward–backward splitting method with linesearches. Optim. Methods Softw. 2016, 31, 1209–1238. [Google Scholar] [CrossRef]
  3. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  4. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward–backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 35–44. [Google Scholar] [CrossRef]
  5. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization. Mathematics. 2020, 8, 378. [Google Scholar] [CrossRef]
  6. Saluja, G.S. Some common fixed point theorems on S-metric spaces using simulation function. J. Adv. Math. Stud. 2022, 15, 288–302. [Google Scholar]
  7. Suantai, S.; Kankam, K.; Cholamjiak, P. A novel forward–backward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 2020, 8, 42. [Google Scholar] [CrossRef]
  8. Zhao, X.P.; Yao, J.C.; Yao, Y. A nonmonotone gradient method for constrained multiobjective optimization problems. J. Nonlinear Var. Anal. 2022, 6, 693–706. [Google Scholar]
  9. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454. [Google Scholar] [CrossRef]
  10. Cui, F.; Tang, Y.; Yang, Y. An inertial three-operator splitting algorithm with applications to image inpainting. arXiv 2019, arXiv:1904.11684. [Google Scholar]
  11. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  12. Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. Société Mathématique Fr. 1965, 93, 273–299. [Google Scholar] [CrossRef]
  13. Burachik, R.S.; Iusem, A.N. Set-Valued Mappings and Enlargements of Monotone Operators; Springer Science+Business Media: New York, NY, USA, 2007. [Google Scholar]
  14. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
  15. Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2009, 71, 112–119. [Google Scholar] [CrossRef]
  16. Aoyama, K.; Kimura, Y. Strong convergence theorems for strongly nonexpansive sequences. Appl. Math. Comput. 2011, 217, 7537–7545. [Google Scholar] [CrossRef]
  17. Aoyama, K.; Kohsaka, F.; Takahashi, W. Strong convergence theorems by shrinking and hybrid projection methods for relatively nonexpansive mappings in Banach spaces. In Proceedings of the 5th International Conference on Nonlinear Analysis and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009; pp. 7–26. [Google Scholar]
  18. Rockafellar, R.T. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216. [Google Scholar] [CrossRef]
  19. Huang, Y.; Dong, Y. New properties of forward–backward splitting and a practical proximal-descent algorithm. Appl. Math. Comput. 2014, 237, 60–68. [Google Scholar] [CrossRef]
  20. Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef]
  21. Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Prog. Appl. 2013, 1, 1–11. [Google Scholar]
  22. Khan, S.H. A Picard–Mann hybrid iterative process. Fixed Point Theory Appl. 2013, 2013, 69. [Google Scholar] [CrossRef]
  23. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]