Article

A Modified Tseng’s Method for Solving the Modified Variational Inclusion Problems and Its Applications

by Thidaporn Seangwattana 1,†, Kamonrat Sombut 2,†, Areerat Arunchai 3 and Kanokwan Sitthithakerngkiet 4,*,†

1 Faculty of Science Energy and Environment, King Mongkut’s University of Technology North Bangkok, Rayong Campus (KMUTNB), Rayong 21120, Thailand
2 Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
3 Department of Mathematics and Statistics, Faculty of Science and Technology, Nakhon Sawan Rajabhat University, Nakhon Sawan 60000, Thailand
4 Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10587, Thailand
* Author to whom correspondence should be addressed.
† Current address: Applied Mathematics for Science and Engineering Research Unit (AMSERU), Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), 39 Rungsit-Nakorn Nayok Rd., Klong 6, Khlong Luang, Thanyaburi, Pathum Thani 12110, Thailand.
Symmetry 2021, 13(12), 2250; https://doi.org/10.3390/sym13122250
Submission received: 24 October 2021 / Revised: 19 November 2021 / Accepted: 20 November 2021 / Published: 25 November 2021
(This article belongs to the Special Issue Nonlinear Analysis and Its Applications in Symmetry)

Abstract: The goal of this study was to show how a modified variational inclusion problem can be solved on the basis of Tseng’s method. We propose a modified Tseng’s method that improves the reliability of the relaxed inertial Tseng’s method by imposing certain conditions and employing the parallel technique. We also prove a weak convergence theorem under appropriate assumptions and some symmetry properties, and we provide numerical experiments to demonstrate the convergence behavior of the proposed method. Moreover, the proposed method is applied to image restoration, which takes a corrupt/noisy image and estimates the clean, original image. Finally, we report the signal-to-noise ratio (SNR) to guarantee image quality.

1. Introduction

As technology advances, many new and challenging real-world problems arise. These problems appear in a variety of fields, such as chemistry, engineering, biology, physics, and computer science, and can often be formulated as optimization problems, i.e., to find $x$ such that the following holds:
$$\min_{x \in \mathbb{R}^n} f(x), \tag{1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. Solving these problems certainly requires optimization and control methodologies and techniques. Since symmetry appears in certain natural and engineered systems, there is some form of symmetry in many mathematical models and optimization problems. Therefore, researchers pay attention to constrained optimization problems in which certain variables appear symmetrically in the objective and constraint functions. One of the basic optimization concepts for finding a minimizer is the variational inclusion problem (VIP), which is to find $x$ in a real Hilbert space $H$ such that the following holds:
$$0 \in Ax + Bx, \tag{2}$$
where the operators $A : H \to H$ and $B : H \to 2^H$ are, respectively, single-valued and multi-valued. Solutions of the VIP apply to real-world problems in engineering, economics, machine learning, equilibrium, image processing, and transportation [1,2,3,4,5,6,7]. An increasing number of researchers have investigated methods to solve the variational inclusion problem. A popular one is the forward–backward splitting method (see [8,9,10]), given by the following:
$$u_{n+1} = J_r^B(u_n - rAu_n), \quad n \ge 1,$$
where $J_r^B = (I + rB)^{-1}$ with $r > 0$. Moreover, researchers have modified these methods not only for more versatility by using relaxation techniques (see [11,12]) but also for more acceleration by using inertial techniques (see [13,14,15,16]). Later, Alvarez and Attouch [14,15] expanded the inertial idea and introduced the inertial forward–backward method, given by the following:
$$\begin{aligned} s_n &= u_n + \xi_n(u_n - u_{n-1}), \\ u_{n+1} &= (I + rB)^{-1}(I - rA)s_n, \quad n \ge 1. \end{aligned}$$
The acceleration of this method comes from the term $\xi_n(u_n - u_{n-1})$, the well-known inertial term with extrapolation factor $\xi_n$ (for more details, see [17,18,19,20]). For monotone inclusions and non-smooth convex minimization problems, they also proved a convergence theorem. Attouch and Cabot [21,22] established relaxed inertial proximal algorithms that combine both techniques to increase the performance of algorithms for solving the previous problems. Later, many researchers focused on both techniques and introduced the relaxed inertial forward–backward algorithm [23], the inertial Douglas–Rachford algorithm [24], a Tseng extragradient method in Banach spaces [25], and the relaxed inertial Tseng’s type algorithm [26]. Among the well-known algorithms is the relaxed inertial Tseng’s type method developed by Abubakar et al. [26]. They solved problem (2) by modifying Tseng’s forward–backward–forward splitting approach, employing the relaxation parameter $\rho$ and the inertial extrapolation factor $\xi$. Their algorithm is given by the following:
$$\begin{aligned} t_n &= u_n + \xi(u_n - u_{n-1}), \\ s_n &= (I + \lambda_n B)^{-1}(I - \lambda_n A)t_n, \\ u_{n+1} &= (1-\rho)t_n + \rho s_n + \rho\lambda_n(At_n - As_n), \quad n \ge 1. \end{aligned}$$
A new, simple step size rule was designed to self-adaptively update the step size $\lambda_n$. Furthermore, they proved a weak convergence theorem for the sequence generated by their algorithm and applied the algorithm to the problem of image deblurring.
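To make the structure of this iteration concrete, the following is a minimal NumPy sketch of the relaxed inertial Tseng iteration above; the operator `A`, the resolvent factory `J_B`, and all default parameter values are placeholders to be supplied by the user, not part of the method in [26].

```python
import numpy as np

def relaxed_inertial_tseng(A, J_B, u0, u1, lam0=1.0, xi=0.5, rho=0.5,
                           mu=0.5, tol=1e-8, max_iter=10_000):
    """Sketch of the relaxed inertial Tseng iteration for 0 in Au + Bu.

    A    : single-valued monotone operator, A(u) -> ndarray
    J_B  : resolvent factory, J_B(lam)(v) = (I + lam * B)^{-1} v
    """
    u_prev, u, lam = u0, u1, lam0
    for _ in range(max_iter):
        t = u + xi * (u - u_prev)                 # inertial extrapolation
        at = A(t)
        s = J_B(lam)(t - lam * at)                # forward-backward step
        if np.linalg.norm(t - s) < tol:           # t_n = s_n => solution
            return s
        a_s = A(s)
        # relaxed forward-backward-forward update
        u_prev, u = u, (1 - rho) * t + rho * s + rho * lam * (at - a_s)
        # self-adaptive step size: keep lam * ||At - As|| <= mu * ||t - s||
        gap = np.linalg.norm(at - a_s)
        if gap > 0:
            lam = min(lam, mu * np.linalg.norm(t - s) / gap)
    return u
```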
In 2014, Khuangsatung and Kangtunyakarn [27,28] presented the modified variational inclusion problem (MVIP), that is, to find $u \in H$ such that the following holds:
$$0 \in \sum_{i=1}^{N} a_i A_i u + Bu, \tag{3}$$
where, for every $i = 1, 2, \ldots, N$, $a_i \in (0,1)$ with $\sum_{i=1}^{N} a_i = 1$, $A_i : H \to H$ is an operator, and $B : H \to 2^H$. Obviously, if $A_i \equiv A$ for every $i = 1, 2, \ldots, N$, then (3) reduces to (2). Their iterative method simultaneously solves a fixed point problem for a finite family of nonexpansive mappings $T_i$ and a finite family of variational inclusion problems in Hilbert spaces under the condition $\sum_{i=1}^{N} a_i = \sum_{i=1}^{N} \delta_i = 1$. The method is given by the following:
$$\begin{aligned} z_n^i &= b_n u_n + (1 - b_n) T_i u_n, \quad n \ge 1, \\ u_{n+1} &= \alpha_n f(x_n) + \beta_n J_{M,\lambda}\Big(I - \lambda \sum_{i=1}^{N} \delta_i A_i\Big) u_n + \gamma_n \sum_{i=1}^{N} a_i z_n^i. \end{aligned}$$
Moreover, they proved a strong convergence theorem for their iterative method. Apart from that, there is another technique that can reduce the overall computational effort under widely used conditions: the parallel technique (for more details, see [29,30,31]). Cholamjiak et al. [31] recently proposed an inertial parallel monotone hybrid method that solves common variational inclusion problems. Their method is given by the following:
$$\begin{aligned} s_n &= u_n + \xi_n(u_n - u_{n-1}), \\ z_n^i &= (1 - \alpha_n^i)s_n + \alpha_n^i J_{r_n}^{B}(I - r_n A_i)s_n, \\ i_n &\in \arg\max\{\|z_n^i - u_n\| : i = 1, 2, \ldots, N\}, \quad \bar z_n = z_n^{i_n}, \\ C_{n+1} &= \{w \in C_n : \|\bar z_n - w\|^2 \le \|u_n - w\|^2 + \theta_n^2\|u_n - u_{n-1}\|^2 - 2\theta_n\langle u_n - w, u_{n-1} - u_n\rangle\}, \\ u_{n+1} &= P_{C_{n+1}} u_1, \quad n \ge 1. \end{aligned}$$
The idea of the parallel technique is to select the element $\bar z_n = z_n^{i_n}$ farthest from the previous approximation $u_n$. They demonstrated that the method yields strong convergence results and applied their results to solve image restoration problems.
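The selection rule can be read as a one-line reduction over the candidates; a small sketch (the helper name `farthest` is ours, purely illustrative):

```python
import numpy as np

# Pick the candidate z_n^i farthest from the current iterate u_n.
def farthest(candidates, u):
    return max(candidates, key=lambda z: np.linalg.norm(z - u))
```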
Inspired by the ideas of [26,28,31], we provide a new method for determining the solution set of the modified variational inclusion problem in Hilbert spaces. To obtain high performance, our method modifies the relaxed inertial Tseng’s type method by using the condition $\sum_{i=1}^{N}\delta_i = 1$ and the parallel technique, which selects the operator $A_{i_n}$ with the largest discrepancy between iterates. Furthermore, under appropriate conditions, we prove a weak convergence theorem and present numerical experiments to demonstrate convergence behavior. Finally, image restoration problems are solved with our result.
The remainder of this work is structured as follows: in Section 2, we gather basic definitions and lemmas; in Section 3, the proposed algorithms are explained; in Section 4, the numerical experiments are discussed; in the last section, the conclusion of this work is presented.

2. Preliminaries

Throughout the paper, we suppose that $H$ is a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$ and that $C$ is a nonempty, closed, and convex subset of $H$. In this section, we review several fundamental concepts and lemmas that are utilized in the main result section.
Lemma 1.
[32] The following statements hold:
(i) 
for every $\alpha, \beta \in H$, $\|\alpha - \beta\|^2 = \|\alpha\|^2 - \|\beta\|^2 - 2\langle\alpha - \beta, \beta\rangle$;
(ii) 
for every $\alpha, \beta \in H$, $\|\alpha + \beta\|^2 \le \|\alpha\|^2 + 2\langle\beta, \alpha + \beta\rangle$;
(iii) 
for every $\gamma, \lambda \in [0,1]$ with $\gamma + \lambda = 1$, $\|\gamma\alpha + \lambda\beta\|^2 = \gamma\|\alpha\|^2 + \lambda\|\beta\|^2 - \gamma\lambda\|\alpha - \beta\|^2$.
Definition 1.
Suppose that $A : H \to H$. For any $x, y \in C$:
$A$ is a monotone mapping if the following holds:
$$\langle Ax - Ay, x - y\rangle \ge 0;$$
$A$ is $L$-Lipschitz continuous if there is a constant $L > 0$ such that the following holds:
$$\|Ax - Ay\| \le L\|x - y\|.$$
We also consider multi-valued mappings $B : H \to 2^H$. $B$ is monotone if $\langle u - v, x - y\rangle \ge 0$ whenever $u \in Bx$ and $v \in By$ for every $x, y \in H$. A monotone mapping $B : H \to 2^H$ is called maximal monotone if, for $(x, u) \in H \times H$, the condition $\langle u - v, x - y\rangle \ge 0$ for every $(y, v) \in \mathrm{Graph}(B)$ implies that $u \in Bx$.
Definition 2.
The resolvent operator associated with $B$ is defined as follows:
$$J_\lambda^B(u) = (I + \lambda B)^{-1}(u), \quad u \in H,$$
where $B : H \to 2^H$ is maximal monotone, $I$ is the identity mapping, and $\lambda$ is a positive number.
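For intuition, two resolvents that appear later in this paper have simple closed forms; a minimal NumPy sketch (the function names are ours, purely illustrative):

```python
import numpy as np

# Resolvent of the linear maximal monotone operator B(x) = 4x
# (used in Example 1): (I + lam * B)^{-1} u = u / (1 + 4 * lam).
def J_linear(lam):
    return lambda u: u / (1.0 + 4.0 * lam)

# Resolvent of B = subdifferential of eps * ||.||_1 (used in Section 4):
# componentwise soft-thresholding at level lam * eps.
def J_l1(lam, eps=1.0):
    return lambda u: np.sign(u) * np.maximum(np.abs(u) - lam * eps, 0.0)
```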
Lemma 2.
[9] Let $A : H \to H$ be a monotone, Lipschitz continuous mapping, and let $B : H \to 2^H$ be a maximal monotone operator; then, $A + B$ is maximal monotone.
Lemma 3.
[33] Let $C$ be a nonempty subset of $H$, and let $\{x_n\}$ be a sequence of elements of $H$. Assume the following:
(i) 
For every $x \in C$, $\lim_{n\to+\infty}\|x_n - x\|$ exists;
(ii) 
Every weak sequential limit point of $\{x_n\}$, as $n \to +\infty$, belongs to $C$.
Then $\{x_n\}$ converges weakly, as $n \to +\infty$, to a point in $C$.
Lemma 4.
[34] Suppose that $\{\zeta_n\}$, $\{\nu_n\}$, and $\{\eta_n\}$ are sequences in $[0, \infty)$ such that there is $\eta \in \mathbb{R}$ with $0 \le \eta_n \le \eta < 1$ for every $n \ge 1$ and
$$\zeta_{n+1} \le \zeta_n + \eta_n(\zeta_n - \zeta_{n-1}) + \nu_n, \qquad \sum_{n=1}^{\infty}\nu_n < \infty.$$
Then the following conditions are satisfied:
(i) 
$\sum_{n=1}^{\infty}[\zeta_n - \zeta_{n-1}]_+ < \infty$, where $[k]_+ = \max\{k, 0\}$;
(ii) 
there is $\bar\zeta \in [0, \infty)$ with $\lim_{n\to\infty}\zeta_n = \bar\zeta$.

3. Main Results

Two algorithms are presented in this section. First, we propose Algorithm 1, a modified Tseng’s method for solving the modified variational inclusion problem. After that, we prove a weak convergence theorem by using some symmetry properties. The following assumptions are assumed to hold throughout the investigation.
Assumption 1.
The feasible set of the VIP is a nonempty closed and convex subset of H .
Assumption 2.
The solution set Γ of the MVIP is nonempty.
Assumption 3.
$A_i : H \to H$ is monotone and $L_i$-Lipschitz continuous on $H$ for every $i$, and $B : H \to 2^H$ is maximal monotone.
Algorithm 1 (Modified Tseng’s method for solving the MVIP)
Initialization: Pick $u_0, u_1 \in H$, $\mu \in (0,1)$, $\rho > 0$, $\xi \ge 0$, and $\lambda_0 > 0$.
Iterative steps: Given the iterates $u_{n-1}$ and $u_n$ in $H$:
Step 1. Set $t_n$ as
$$t_n := u_n + \xi(u_n - u_{n-1}). \tag{4}$$

Step 2. Compute
$$s_n = (I + \lambda_n B)^{-1}\Big(I - \lambda_n \sum_{i=1}^{N}\delta_i A_i\Big)t_n. \tag{5}$$

If $t_n = s_n$, stop: $t_n$ is a solution of the MVIP. Otherwise, go to Step 3.
Step 3. Compute
$$u_{n+1} = (1-\rho)t_n + \rho s_n + \rho\lambda_n(A_{i_n}t_n - A_{i_n}s_n), \tag{6}$$
where $i_n \in \arg\max\{\|A_i t_n - A_i s_n\| : i = 1, 2, \ldots, N\}$, and the step size sequence $\{\lambda_n\}$ is updated as follows:
$$\lambda_{n+1} := \begin{cases} \min\left\{\lambda_n, \dfrac{\mu\|t_n - s_n\|}{\|A_{i_n}t_n - A_{i_n}s_n\|}\right\}, & \text{if } A_{i_n}t_n \ne A_{i_n}s_n, \\ \lambda_n, & \text{otherwise}. \end{cases} \tag{7}$$

Set n : = n + 1 , and go back to Step 1.
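For readers who want to experiment, here is a minimal NumPy sketch of Algorithm 1; the function name, default values, and stopping test are our own choices, and the operators $A_i$ and the resolvent of $B$ must be supplied by the user.

```python
import numpy as np

def modified_tseng_mvip(A_list, delta, J_B, u0, u1, lam0=1.0, xi=0.5,
                        rho=0.5, mu=0.5, tol=1e-8, max_iter=10_000):
    """Sketch of Algorithm 1 for 0 in sum_i delta_i * A_i(u) + B(u).

    A_list : list of monotone L_i-Lipschitz operators, A_i(u) -> ndarray
    delta  : convex weights with sum(delta) == 1
    J_B    : resolvent factory, J_B(lam)(v) = (I + lam * B)^{-1} v
    """
    u_prev, u, lam = u0, u1, lam0
    for _ in range(max_iter):
        # Step 1: inertial extrapolation (4).
        t = u + xi * (u - u_prev)
        # Step 2: forward-backward step (5) with the averaged operator.
        Ats = [A(t) for A in A_list]
        At = sum(d * a for d, a in zip(delta, Ats))
        s = J_B(lam)(t - lam * At)
        if np.linalg.norm(t - s) < tol:
            return s                              # t_n solves the MVIP
        # Step 3: parallel choice of the operator with the largest gap (6).
        Ass = [A(s) for A in A_list]
        gaps = [np.linalg.norm(at - asv) for at, asv in zip(Ats, Ass)]
        i_n = int(np.argmax(gaps))
        u_prev, u = u, (1 - rho) * t + rho * s + rho * lam * (Ats[i_n] - Ass[i_n])
        # Step-size update (7); lam stays bounded below (Lemma 5).
        if gaps[i_n] > 0:
            lam = min(lam, mu * np.linalg.norm(t - s) / gaps[i_n])
    return u
```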
Lemma 5.
The sequence { λ n } is bounded below.
Proof. 
Since $A_i$ is $L_i$-Lipschitz continuous for $i = 1, 2, \ldots, N$, we have
$$\|A_i t_n - A_i s_n\| \le L_i\|t_n - s_n\|. \tag{8}$$
By (7), if $A_i t_n - A_i s_n \ne 0$ for some $i$, then the following holds:
$$\frac{\mu\|t_n - s_n\|}{\|A_i t_n - A_i s_n\|} \ge \frac{\mu}{L_i}. \tag{9}$$
Thus, in this case, the sequence $\{\lambda_n\}$ is bounded below by $\mu/L$, where $L = \max\{L_i : i = 1, 2, \ldots, N\}$.
In the case of $A_i t_n - A_i s_n = 0$ for every $i = 1, 2, \ldots, N$, it is obvious that the sequence $\{\lambda_n\}$ is bounded below by $\lambda_0$. We can conclude that $\min\{\mu/L, \lambda_0\} \le \lambda_n$.    □
Remark 1.
According to Lemma 5, it is easy to see that the sequence $\{\lambda_n\}$ is monotonically nonincreasing and bounded below by $\min\{\mu/L, \lambda_0\}$, so $\lim_{n\to\infty}\lambda_n$ exists and is positive. This implies that the update (7) is well defined and the following holds:
$$\lambda_{n+1}\|A_i t_n - A_i s_n\| \le \mu\|t_n - s_n\|, \quad i = 1, 2, \ldots, N. \tag{10}$$
Lemma 6.
Let $\{t_n\}$ be a sequence generated by Algorithm 1. If there is a subsequence $\{t_{n_k}\}$ converging weakly to $q$ and $\lim_{n\to\infty}\|t_n - s_n\| = 0$, then $q \in \Gamma$.
Proof. 
We assume that $(y, x) \in \mathrm{Graph}(A + B)$, where $A := \sum_{i=1}^{N}\delta_i A_i$; this means $x - Ay \in By$. From $s_{n_k} = (I + \lambda_{n_k}B)^{-1}(I - \lambda_{n_k}\sum_{i=1}^{N}\delta_i A_i)t_{n_k}$, we have the following:
$$\Big(I - \lambda_{n_k}\sum_{i=1}^{N}\delta_i A_i\Big)t_{n_k} \in (I + \lambda_{n_k}B)s_{n_k}.$$
Therefore, $\frac{1}{\lambda_{n_k}}\big(t_{n_k} - s_{n_k} - \lambda_{n_k}\sum_{i=1}^{N}\delta_i A_i t_{n_k}\big) \in Bs_{n_k}$. Because $B$ is maximal monotone, we obtain the following:
$$\Big\langle y - s_{n_k},\; x - Ay - \frac{1}{\lambda_{n_k}}\Big(t_{n_k} - s_{n_k} - \lambda_{n_k}\sum_{i=1}^{N}\delta_i A_i t_{n_k}\Big)\Big\rangle \ge 0.$$
Hence,
$$\begin{aligned}
\langle y - s_{n_k}, x\rangle &\ge \Big\langle y - s_{n_k},\; Ay + \frac{1}{\lambda_{n_k}}\Big(t_{n_k} - s_{n_k} - \lambda_{n_k}\sum_{i=1}^{N}\delta_i A_i t_{n_k}\Big)\Big\rangle \\
&= \Big\langle y - s_{n_k},\, Ay - \sum_{i=1}^{N}\delta_i A_i t_{n_k}\Big\rangle + \Big\langle y - s_{n_k},\, \frac{1}{\lambda_{n_k}}(t_{n_k} - s_{n_k})\Big\rangle \\
&= \Big\langle y - s_{n_k},\, Ay - \sum_{i=1}^{N}\delta_i A_i s_{n_k}\Big\rangle + \Big\langle y - s_{n_k},\, \sum_{i=1}^{N}\delta_i A_i s_{n_k} - \sum_{i=1}^{N}\delta_i A_i t_{n_k}\Big\rangle + \Big\langle y - s_{n_k},\, \frac{1}{\lambda_{n_k}}(t_{n_k} - s_{n_k})\Big\rangle \\
&\ge \Big\langle y - s_{n_k},\, \sum_{i=1}^{N}\delta_i A_i s_{n_k} - \sum_{i=1}^{N}\delta_i A_i t_{n_k}\Big\rangle + \Big\langle y - s_{n_k},\, \frac{1}{\lambda_{n_k}}(t_{n_k} - s_{n_k})\Big\rangle,
\end{aligned}$$
where the last inequality uses the monotonicity of $A = \sum_{i=1}^{N}\delta_i A_i$.
Since each $A_i$ is $L_i$-Lipschitz continuous and $\lim_{n\to\infty}\|t_n - s_n\| = 0$, it follows that $\lim_{n\to\infty}\|\sum_{i=1}^{N}\delta_i A_i t_n - \sum_{i=1}^{N}\delta_i A_i s_n\| = 0$. Since $\lim_{n\to\infty}\lambda_n$ exists and is positive (Remark 1), letting $k \to \infty$ yields the following:
$$0 \le \langle y - q, x\rangle = \lim_{k\to\infty}\langle y - s_{n_k}, x\rangle. \tag{11}$$
Based on the maximal monotonicity of $A + B$ and (11), zero is in $(A + B)q$. We can conclude that $q \in \Gamma$.    □
Theorem 1.
Assume the following:
$$0 \le \xi < \frac{\sqrt{1 + 8\omega} - 1 - 2\omega}{2(1 - \omega)} \tag{12}$$
with $\omega \in \big(0, \frac{\rho(1-\mu)}{2 - \rho(1-\mu)}\big)$, and $\sum_{i=1}^{N}\delta_i = 1$. If $\Gamma \ne \emptyset$, then the sequence $\{u_n\}$ generated by Algorithm 1 converges weakly to a point $\bar u \in \Gamma$.
Proof. 
We know that the resolvent $J_{\lambda_n}^B$ is firmly nonexpansive and that $s_n = (I + \lambda_n B)^{-1}(I - \lambda_n\sum_{i=1}^{N}\delta_i A_i)t_n = J_{\lambda_n}^B(I - \lambda_n\sum_{i=1}^{N}\delta_i A_i)t_n$. Since $\bar u \in \Gamma$, we also have $\bar u = J_{\lambda_n}^B V\bar u$, where $V := I - \lambda_n\sum_{i=1}^{N}\delta_i A_i$.
Thus, we have the following:
$$\begin{aligned}
\Big\langle s_n - \bar u,\; t_n - s_n - \lambda_n\sum_{i=1}^{N}\delta_i A_i t_n\Big\rangle
&= \big\langle J_{\lambda_n}^B V t_n - J_{\lambda_n}^B V\bar u,\; V t_n - V\bar u\big\rangle + \langle s_n - \bar u,\, V\bar u - s_n\rangle \\
&\ge \|s_n - \bar u\|^2 + \langle s_n - \bar u, \bar u - s_n\rangle - \Big\langle s_n - \bar u,\, \lambda_n\sum_{i=1}^{N}\delta_i A_i \bar u\Big\rangle \\
&= -\Big\langle s_n - \bar u,\, \lambda_n\sum_{i=1}^{N}\delta_i A_i \bar u\Big\rangle.
\end{aligned}$$
Combining this with the monotonicity of $\sum_{i=1}^{N}\delta_i A_i$ yields $\big\langle s_n - \bar u,\; t_n - s_n - \lambda_n\sum_{i=1}^{N}\delta_i(A_i t_n - A_i s_n)\big\rangle \ge 0$. This implies that
$$2\langle t_n - s_n, s_n - \bar u\rangle - 2\lambda_n\Big\langle \sum_{i=1}^{N}\delta_i(A_i t_n - A_i s_n),\; s_n - \bar u\Big\rangle \ge 0. \tag{13}$$
However, $2\langle t_n - s_n, s_n - \bar u\rangle = \|t_n - \bar u\|^2 - \|t_n - s_n\|^2 - \|s_n - \bar u\|^2$. Substituting this identity into (13), we obtain the following:
$$\|s_n - \bar u\|^2 \le \|t_n - \bar u\|^2 - \|t_n - s_n\|^2 - 2\lambda_n\Big\langle \sum_{i=1}^{N}\delta_i(A_i t_n - A_i s_n),\; s_n - \bar u\Big\rangle. \tag{14}$$
The definition of u n + 1 implies the following:
$$\begin{aligned}
\|u_{n+1} - \bar u\|^2 &= \|(1-\rho)t_n + \rho s_n + \rho\lambda_n(A_{i_n}t_n - A_{i_n}s_n) - \bar u\|^2 \\
&= \|(1-\rho)(t_n - \bar u) + \rho(s_n - \bar u) + \rho\lambda_n(A_{i_n}t_n - A_{i_n}s_n)\|^2 \\
&= (1-\rho)^2\|t_n - \bar u\|^2 + \rho^2\|s_n - \bar u\|^2 + \rho^2\lambda_n^2\|A_{i_n}t_n - A_{i_n}s_n\|^2 \\
&\quad + 2\rho(1-\rho)\langle t_n - \bar u, s_n - \bar u\rangle + 2\lambda_n\rho(1-\rho)\langle t_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle \\
&\quad + 2\lambda_n\rho^2\langle s_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle.
\end{aligned} \tag{15}$$
Consider the following:
$$\|t_n - \bar u\|^2 + \|s_n - \bar u\|^2 - \|t_n - s_n\|^2 = 2\langle t_n - \bar u, s_n - \bar u\rangle. \tag{16}$$
Substituting (16) in (15), we obtain the following:
$$\begin{aligned}
\|u_{n+1} - \bar u\|^2 &= (1-\rho)^2\|t_n - \bar u\|^2 + \rho^2\|s_n - \bar u\|^2 + \rho^2\lambda_n^2\|A_{i_n}t_n - A_{i_n}s_n\|^2 \\
&\quad + \rho(1-\rho)\big(\|t_n - \bar u\|^2 + \|s_n - \bar u\|^2 - \|t_n - s_n\|^2\big) \\
&\quad + 2\lambda_n\rho(1-\rho)\langle t_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle + 2\lambda_n\rho^2\langle s_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle \\
&= (1-\rho)\|t_n - \bar u\|^2 + \rho\|s_n - \bar u\|^2 - \rho(1-\rho)\|t_n - s_n\|^2 + \lambda_n^2\rho^2\|A_{i_n}t_n - A_{i_n}s_n\|^2 \\
&\quad + 2\lambda_n\rho(1-\rho)\langle t_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle + 2\lambda_n\rho^2\langle s_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle.
\end{aligned} \tag{17}$$
Putting (14) in (17), we obtain the following:
$$\begin{aligned}
\|u_{n+1} - \bar u\|^2 &\le (1-\rho)\|t_n - \bar u\|^2 + \rho\Big(\|t_n - \bar u\|^2 - \|t_n - s_n\|^2 - 2\lambda_n\Big\langle \sum_{i=1}^{N}\delta_i(A_i t_n - A_i s_n),\, s_n - \bar u\Big\rangle\Big) \\
&\quad - \rho(1-\rho)\|t_n - s_n\|^2 + \lambda_n^2\rho^2\|A_{i_n}t_n - A_{i_n}s_n\|^2 \\
&\quad + 2\lambda_n\rho(1-\rho)\langle t_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle + 2\lambda_n\rho^2\langle s_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle \\
&\le \|t_n - \bar u\|^2 - \rho(2-\rho)\|t_n - s_n\|^2 + \rho^2\frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\|t_n - s_n\|^2 \\
&\quad + 2\rho\lambda_n\Big(\sum_{i=1}^{N}\delta_i - \rho\Big)\langle \bar u - s_n, A_{i_n}t_n - A_{i_n}s_n\rangle + 2\rho\lambda_n(1-\rho)\langle t_n - \bar u, A_{i_n}t_n - A_{i_n}s_n\rangle \\
&= \|t_n - \bar u\|^2 - \rho(2-\rho)\|t_n - s_n\|^2 + \rho^2\frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\|t_n - s_n\|^2 + 2\rho\lambda_n(1-\rho)\langle t_n - s_n, A_{i_n}t_n - A_{i_n}s_n\rangle \\
&\le \|t_n - \bar u\|^2 - \rho(2-\rho)\|t_n - s_n\|^2 + \rho^2\frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\|t_n - s_n\|^2 + 2\rho(1-\rho)\frac{\mu\lambda_n}{\lambda_{n+1}}\|t_n - s_n\|^2 \\
&= \|t_n - \bar u\|^2 - \rho\Big(2 - \rho - 2\mu(1-\rho)\frac{\lambda_n}{\lambda_{n+1}} - \rho\mu^2\frac{\lambda_n^2}{\lambda_{n+1}^2}\Big)\|t_n - s_n\|^2 \\
&= \|t_n - \bar u\|^2 - \eta\rho\,\|t_n - s_n\|^2,
\end{aligned} \tag{18}$$
where $\eta = 2 - \rho - 2\mu(1-\rho)\frac{\lambda_n}{\lambda_{n+1}} - \rho\mu^2\frac{\lambda_n^2}{\lambda_{n+1}^2}$ and the estimates use (10) and $\sum_{i=1}^{N}\delta_i = 1$. By the definition of $u_{n+1}$ and (10), it follows that
$$\begin{aligned}
\|u_{n+1} - s_n\| &= \|(1-\rho)t_n + \rho s_n + \rho\lambda_n(A_{i_n}t_n - A_{i_n}s_n) - s_n\| \\
&\le (1-\rho)\|t_n - s_n\| + \rho\lambda_n\|A_{i_n}t_n - A_{i_n}s_n\| \\
&\le (1-\rho)\|t_n - s_n\| + \rho\mu\frac{\lambda_n}{\lambda_{n+1}}\|t_n - s_n\| \\
&= \Big(1 - \rho\Big(1 - \mu\frac{\lambda_n}{\lambda_{n+1}}\Big)\Big)\|t_n - s_n\|.
\end{aligned} \tag{19}$$
The following can also be stated:
$$\|u_{n+1} - t_n\| \le \|u_{n+1} - s_n\| + \|s_n - t_n\|.$$
From (19), it follows that
$$\|u_{n+1} - t_n\| \le \Big(1 - \rho\Big(1 - \mu\frac{\lambda_n}{\lambda_{n+1}}\Big)\Big)\|t_n - s_n\| + \|s_n - t_n\| = \Big(2 - \rho\Big(1 - \mu\frac{\lambda_n}{\lambda_{n+1}}\Big)\Big)\|t_n - s_n\|. \tag{20}$$
Therefore,
$$\frac{1}{2 - \rho\big(1 - \mu\frac{\lambda_n}{\lambda_{n+1}}\big)}\|u_{n+1} - t_n\| \le \|t_n - s_n\|. \tag{21}$$
By (18) and (21), we have the following:
$$\|u_{n+1} - \bar u\|^2 \le \|t_n - \bar u\|^2 - \eta\rho\|t_n - s_n\|^2 \le \|t_n - \bar u\|^2 - \frac{\eta\rho}{\Big(2 - \rho\big(1 - \mu\frac{\lambda_n}{\lambda_{n+1}}\big)\Big)^2}\|u_{n+1} - t_n\|^2. \tag{22}$$
We can also state the following:
$$\psi_n = \frac{\rho\Big(2 - \rho - 2\mu(1-\rho)\frac{\lambda_n}{\lambda_{n+1}} - \rho\mu^2\frac{\lambda_n^2}{\lambda_{n+1}^2}\Big)}{\Big(2 - \rho\big(1 - \mu\frac{\lambda_n}{\lambda_{n+1}}\big)\Big)^2}. \tag{23}$$
Obviously, with $\lambda_n \to \lambda$ as $n \to \infty$, we have the following:
$$\lim_{n\to\infty}\psi_n = \frac{\rho(2 - \rho - 2\mu(1-\rho) - \rho\mu^2)}{(2 - \rho(1-\mu))^2} = \frac{\rho(1-\mu)(2 - \rho(1-\mu))}{(2 - \rho(1-\mu))^2} = \frac{\rho(1-\mu)}{2 - \rho(1-\mu)} > 0.
$$
From (12), there is an $\omega$ such that
$$\psi_n > \omega$$
for every $n \in \mathbb{N}$. The previous inequality and (22) imply the following:
$$\|u_{n+1} - \bar u\|^2 \le \|t_n - \bar u\|^2 - \omega\|u_{n+1} - t_n\|^2. \tag{24}$$
By the definition of t n , we have the following:
$$\begin{aligned}
\|t_n - \bar u\|^2 &= \|u_n + \xi(u_n - u_{n-1}) - \bar u\|^2 = \|(1+\xi)(u_n - \bar u) - \xi(u_{n-1} - \bar u)\|^2 \\
&= (1+\xi)\|u_n - \bar u\|^2 - \xi\|u_{n-1} - \bar u\|^2 + \xi(1+\xi)\|u_n - u_{n-1}\|^2.
\end{aligned} \tag{25}$$
By the Cauchy–Schwarz inequality, it follows that
$$\begin{aligned}
\|u_{n+1} - t_n\|^2 &= \|u_{n+1} - u_n - \xi(u_n - u_{n-1})\|^2 \\
&= \|u_{n+1} - u_n\|^2 + \xi^2\|u_n - u_{n-1}\|^2 - 2\xi\langle u_{n+1} - u_n, u_n - u_{n-1}\rangle \\
&\ge \|u_{n+1} - u_n\|^2 + \xi^2\|u_n - u_{n-1}\|^2 - 2\xi\|u_{n+1} - u_n\|\|u_n - u_{n-1}\| \\
&\ge \|u_{n+1} - u_n\|^2 + \xi^2\|u_n - u_{n-1}\|^2 - \xi\|u_{n+1} - u_n\|^2 - \xi\|u_n - u_{n-1}\|^2 \\
&= (1-\xi)\|u_{n+1} - u_n\|^2 + (\xi^2 - \xi)\|u_n - u_{n-1}\|^2.
\end{aligned} \tag{26}$$
Thanks to (24)–(26), we can see that
$$\begin{aligned}
\|u_{n+1} - \bar u\|^2 &\le (1+\xi)\|u_n - \bar u\|^2 - \xi\|u_{n-1} - \bar u\|^2 + \xi(1+\xi)\|u_n - u_{n-1}\|^2 \\
&\quad - \omega\big((1-\xi)\|u_{n+1} - u_n\|^2 + (\xi^2 - \xi)\|u_n - u_{n-1}\|^2\big) \\
&= (1+\xi)\|u_n - \bar u\|^2 - \xi\|u_{n-1} - \bar u\|^2 - \omega(1-\xi)\|u_{n+1} - u_n\|^2 \\
&\quad + \big(\xi(1+\xi) - \omega(\xi^2 - \xi)\big)\|u_n - u_{n-1}\|^2 \\
&= (1+\xi)\|u_n - \bar u\|^2 - \xi\|u_{n-1} - \bar u\|^2 - \beta_n\|u_{n+1} - u_n\|^2 + \gamma_n\|u_n - u_{n-1}\|^2,
\end{aligned} \tag{27}$$
where $\beta_n = \omega(1-\xi)$ and $\gamma_n = \xi(1+\xi) - \omega(\xi^2 - \xi)$. It can be stated that
$$\kappa_n = \|u_n - \bar u\|^2 - \xi\|u_{n-1} - \bar u\|^2 + \beta_n\|u_n - u_{n-1}\|^2. \tag{28}$$
Therefore,
$$\begin{aligned}
\kappa_{n+1} - \kappa_n &= \|u_{n+1} - \bar u\|^2 - \xi\|u_n - \bar u\|^2 + \beta_{n+1}\|u_{n+1} - u_n\|^2 \\
&\quad - \|u_n - \bar u\|^2 + \xi\|u_{n-1} - \bar u\|^2 - \beta_n\|u_n - u_{n-1}\|^2 \\
&= \|u_{n+1} - \bar u\|^2 - (1+\xi)\|u_n - \bar u\|^2 + \xi\|u_{n-1} - \bar u\|^2 + \beta_{n+1}\|u_{n+1} - u_n\|^2 - \beta_n\|u_n - u_{n-1}\|^2 \\
&\le (\gamma_n - \beta_{n+1})\|u_{n+1} - u_n\|^2.
\end{aligned} \tag{29}$$
We note the following:
$$\gamma_n - \beta_{n+1} = \xi(1+\xi) - \omega(\xi^2 - \xi) - \omega(1-\xi) = (1-\omega)\xi^2 + (1+2\omega)\xi - \omega.$$
Thus,
$$\kappa_{n+1} - \kappa_n \le -\phi\|u_{n+1} - u_n\|^2, \tag{30}$$
where $\phi = \omega - (1+2\omega)\xi - (1-\omega)\xi^2$. By assumption (12), we obtain $\phi > 0$. This indicates that $\{\kappa_n\}$ is nonincreasing. Furthermore, we have the following:
$$\kappa_{n+1} = \|u_{n+1} - \bar u\|^2 - \xi\|u_n - \bar u\|^2 + \beta_{n+1}\|u_{n+1} - u_n\|^2 \ge -\xi\|u_n - \bar u\|^2 \tag{31}$$
and
$$\kappa_n = \|u_n - \bar u\|^2 - \xi\|u_{n-1} - \bar u\|^2 + \beta_n\|u_n - u_{n-1}\|^2 \ge \|u_n - \bar u\|^2 - \xi\|u_{n-1} - \bar u\|^2. \tag{32}$$
It follows that
$$\begin{aligned}
\|u_n - \bar u\|^2 &\le \kappa_n + \xi\|u_{n-1} - \bar u\|^2 \le \kappa_1 + \xi\|u_{n-1} - \bar u\|^2 \le \cdots \\
&\le \kappa_1(\xi^{n-1} + \cdots + 1) + \xi^n\|u_0 - \bar u\|^2 \le \frac{\kappa_1}{1-\xi} + \xi^n\|u_0 - \bar u\|^2.
\end{aligned} \tag{33}$$
Combining (31) and (33), we obtain the following:
$$\kappa_{n+1} \ge -\xi\|u_n - \bar u\|^2 \ge -\frac{\xi\kappa_1}{1-\xi} - \xi^{n+1}\|u_0 - \bar u\|^2. \tag{34}$$
According to (30) and (33), it follows that
$$\phi\sum_{n=1}^{m}\|u_{n+1} - u_n\|^2 \le \kappa_1 - \kappa_{m+1} \le \kappa_1 + \frac{\xi\kappa_1}{1-\xi} + \xi^{m+1}\|u_0 - \bar u\|^2 \le \frac{\kappa_1}{1-\xi} + \|u_0 - \bar u\|^2. \tag{35}$$
Taking $m \to \infty$ in the previous equation, we have $\sum_{n=1}^{\infty}\|u_{n+1} - u_n\|^2 < +\infty$. It can be concluded that $\lim_{n\to\infty}\|u_{n+1} - u_n\| = 0$. We consider the following:
$$\|u_{n+1} - t_n\|^2 = \|u_{n+1} - u_n - \xi(u_n - u_{n-1})\|^2 = \|u_{n+1} - u_n\|^2 + \xi^2\|u_n - u_{n-1}\|^2 - 2\xi\langle u_{n+1} - u_n, u_n - u_{n-1}\rangle.$$
Obviously, $\|u_{n+1} - t_n\| \to 0$ as $n \to \infty$. By means of (30) and Lemma 4, we obtain $\lim_{n\to\infty}\|u_n - \bar u\|^2 = h$ for some $h \ge 0$. From (25), we have $\lim_{n\to\infty}\|t_n - \bar u\|^2 = h$. Furthermore,
$$0 \le \|u_n - t_n\| \le \|u_n - u_{n+1}\| + \|u_{n+1} - t_n\| \to 0. \tag{36}$$
By (22), we see that $\|s_n - t_n\| \to 0$. Because $\lim_{n\to\infty}\|u_n - \bar u\|^2$ exists, the sequence $\{u_n\}$ is bounded. Let $\{u_{n_k}\}$ be a subsequence of $\{u_n\}$ with $u_{n_k} \rightharpoonup \hat u$. It follows from (36) that $t_{n_k} \rightharpoonup \hat u$. Since $\lim_{n\to\infty}\|t_n - s_n\| = 0$, Lemma 6 yields $\hat u \in \Gamma$. As a result of Lemma 3, we may deduce that the sequence $\{u_n\}$ converges weakly to a solution of the MVIP.    □
For solving problem (2), the following algorithm (Algorithm 2) is suggested, obtained by setting $A_i \equiv A$ in Algorithm 1.
Algorithm 2 (Modified Tseng’s method for solving the VIP)
Initialization: Pick $u_0, u_1 \in H$, $\mu \in (0,1)$, $\rho > 0$, $\xi \ge 0$, and $\lambda_0 > 0$.
Iterative steps: Given the iterates $u_{n-1}$ and $u_n$ in $H$:
Step 1. Set $t_n$ as
$$t_n := u_n + \xi(u_n - u_{n-1}).$$

Step 2. Compute
$$s_n = (I + \lambda_n B)^{-1}(I - \lambda_n A)t_n.$$

If $t_n = s_n$, stop: $t_n$ is a solution of the VIP. Otherwise, go to Step 3.
Step 3. Compute
$$u_{n+1} = (1-\rho)t_n + \rho s_n + \rho\lambda_n(At_n - As_n).$$

The step size sequence $\{\lambda_n\}$ is updated as follows:
$$\lambda_{n+1} := \begin{cases} \min\left\{\lambda_n, \dfrac{\mu\|t_n - s_n\|}{\|At_n - As_n\|}\right\}, & \text{if } At_n \ne As_n, \\ \lambda_n, & \text{otherwise}. \end{cases}$$

Set n : = n + 1 , and go back to Step 1.
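Assuming the `modified_tseng_mvip` sketch given after Algorithm 1, Algorithm 2 is simply its single-operator special case; the names `A`, `J_B`, `u0`, and `u1` below are the user-supplied data:

```python
# Algorithm 2 = Algorithm 1 with one operator A and weight delta_1 = 1.
u_sol = modified_tseng_mvip(A_list=[A], delta=[1.0], J_B=J_B, u0=u0, u1=u1)
```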
Corollary 1.
If the solution set $\Gamma$ of the VIP is nonempty, then the sequence $\{u_n\}$ generated by Algorithm 2 converges weakly to a point $\bar u \in \Gamma$.
Proof. 
The proofs are similar to those of Lemmas 5 and 6 and Theorem 1 after setting $A_i \equiv A$. □

4. Numerical Experiments

In this section, we provide numerical examples to demonstrate the behavior of our algorithm. We have broken the discussion down into two examples.
Example 1 demonstrates the error behavior of the sequence $\{x_n\}$ generated by Algorithm 1 and also shows the behavior of $\{x_n\}$ by plotting each dimension of $x_n$ separately.
In Example 2, we apply our main result to solve image deblurring problems. Image recovery problems can be described via the inversion of the following observation model:
$$b = Mx + \delta, \tag{39}$$
where $M \in \mathbb{R}^{m \times n}$ is a blurring matrix, $x \in \mathbb{R}^n$ represents the original image, $b \in \mathbb{R}^m$ is the blurred image, and $\delta \in \mathbb{R}^m$ is Gaussian noise. Solving (39) is known to be equivalent to the following convex unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^n}\; \frac{1}{2}\|Mx - b\|_2^2 + \epsilon\|x\|_1, \tag{40}$$
with regularization parameter $\epsilon > 0$. For solving (40), we suppose $A = \nabla F$ and $B = \partial G$, where $F(x) = \frac{1}{2}\|Mx - b\|_2^2$ and $G(x) = \epsilon\|x\|_1$. Therefore, $\nabla F(x) = M^T(Mx - b)$ is $\frac{1}{\|M\|^2}$-cocoercive. This implies that $(I - \tau\nabla F)$ is nonexpansive for $0 < \tau < \frac{2}{\|M\|^2}$. Because $\partial G$ is maximal monotone, $x$ is a solution of (40) if and only if the following holds:
$$0 \in \nabla F(x) + \partial G(x) \iff x = \arg\min_{u \in \mathbb{R}^n}\Big\{G(u) + \frac{1}{2\epsilon}\big\|u - (I - \epsilon\nabla F)(x)\big\|^2\Big\}. \tag{41}$$
In addition, we may still formulate (40) as a convex constrained optimization problem in the form of the split feasibility problem (SFP):
$$\min_{x \in \mathbb{R}^n}\; \frac{1}{2}\|Mx - b\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le k, \tag{42}$$
where $k > 0$ is a given constant. Setting $Ax = \nabla F(x)$, we consider $C = \{x \in \mathbb{R}^n : \|x\|_1 \le k\}$ and $Q = \{b\}$. We use Algorithm 1 to solve this image deblurring problem. Figure 1 shows a flowchart of the technique, in which the blurring function $A$ is selected by summing $N$ blurring functions $A_1, \ldots, A_N$.
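The following sketch wires this deblurring model into the `modified_tseng_mvip` sketch from Section 3. The small random matrices `M1` and `M2` merely stand in for actual blur operators (which in the experiments below are built with MATLAB's fspecial), so sizes, seeds, and parameter values are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
M1 = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for blur no. 1
M2 = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for blur no. 2
x_true = np.where(rng.random(n) < 0.1, rng.standard_normal(n), 0.0)
b = 0.5 * (M1 + M2) @ x_true + 0.01 * rng.standard_normal(n)

eps = 1e-3   # regularization parameter in (40)
A1 = lambda x: M1.T @ (M1 @ x - b)   # gradient of (1/2)||M1 x - b||^2
A2 = lambda x: M2.T @ (M2 @ x - b)   # gradient of (1/2)||M2 x - b||^2
# Resolvent of B = subdifferential of eps * ||.||_1: soft-thresholding.
J_B = lambda lam: (lambda u: np.sign(u) * np.maximum(np.abs(u) - lam * eps, 0.0))

x_rec = modified_tseng_mvip([A1, A2], [0.5, 0.5], J_B,
                            u0=np.zeros(n), u1=np.zeros(n))
```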
Example 1.
Let $H = \mathbb{R}^2$ be the two-dimensional space of real numbers. Define the monotone operators $A_1(x) = 2x$ and $A_2(x) = \frac{1}{2}x$ and the maximal monotone operator $B(x) = 4x$ for every $x \in \mathbb{R}^2$. Moreover, we set the parameters $\xi = 0.5$, $\rho = 0.015$, $\omega = 1.5$, $\delta_1 = 0.5$, $\delta_2 = 0.5$, and $\mu = 0.5$ and choose the initial points $x_1 = (1, 1)$ and $x_2 = (2, 0)$. It can be checked that these parameters and operators satisfy the assumptions of our main result. By applying Algorithm 1, we can solve problem (3) and show the behavior of our algorithm.
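Under this setting, Example 1 can be reproduced with the `modified_tseng_mvip` sketch from Section 3; the resolvent of $B(x) = 4x$ is $(I + \lambda B)^{-1}u = u/(1 + 4\lambda)$, and everything else below simply restates the data of the example.

```python
import numpy as np

A1 = lambda x: 2.0 * x            # monotone, 2-Lipschitz
A2 = lambda x: 0.5 * x            # monotone, 1/2-Lipschitz
J_B = lambda lam: (lambda u: u / (1.0 + 4.0 * lam))   # resolvent of B(x) = 4x

x = modified_tseng_mvip([A1, A2], [0.5, 0.5], J_B,
                        u0=np.array([1.0, 1.0]), u1=np.array([2.0, 0.0]),
                        xi=0.5, rho=0.015, mu=0.5)
print(x)   # approaches the unique solution (0, 0) of problem (3)
```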
After completing the experiment, we observe that the sequence $\{x_n\}$ always converges to the solution point of problem (3), independently of the arbitrary starting points.
In Figure 2, we plot the error of the sequence $\{x_n\}$. The error becomes small as the number of iterations grows, and increasing the number of iterations further drives the error below $10^{-7}$.
In Figure 3, we plot the sequence $\{x_n\}$ by separating the dimensions of $x_n$. We can conclude that the sequence $\{x_n\}$ always converges to the solution of the problem of interest; that is, $x_n = (x_n^1, x_n^2) \to (0, 0)$ as $n \to \infty$.
Furthermore, a detailed analysis of the computations of our algorithm is shown in Table 1, where $E(x_n^1) = |x_{n+1}^1 - x_n^1|$ and $E(x_n^2) = |x_{n+1}^2 - x_n^2|$ for all $n \in \mathbb{N}$.
Example 2.
As the blurring functions, we used MATLAB's built-in filters, taking $A_1$ as fspecial('motion',9,40) and $A_2$ as fspecial('gaussian',9,2). The standard test images of a house ($256 \times 256$) and of boats ($512 \times 512$) are used in the comparison (see Figure 4). We compare our proposed algorithm (Algorithm 1) with the algorithm in [28]. The control parameters are set as follows: $\xi = 0.5$, $\rho = 0.015$, $\omega = 1.5$, $\delta_1 = 0.5$, $\delta_2 = 0.5$, and $\mu = 0.5$. The results of this experiment can be seen in Figure 5, Figure 6 and Figure 7. To analyze the quality of the restored images, we adopted the signal-to-noise ratio (SNR), defined as follows:
$$\mathrm{SNR} = 20\log_{10}\frac{\|x\|_2^2}{\|x - x_n\|_2^2},$$
where $x$ is the original image and $x_n$ is the estimated image at iteration $n$. As shown in Figure 7, the SNR values of the house and boats images restored by Algorithm 1 (Figure 5b and Figure 6b) are higher than those of the images restored by the algorithm in [28] (Figure 5c and Figure 6c). This demonstrates the superiority of the proposed algorithm in terms of SNR.
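For completeness, the SNR formula above translates directly into a short helper (a sketch; the base-10 dB convention is assumed):

```python
import numpy as np

def snr(x, x_n):
    """SNR of the restored image x_n against the original x, in dB."""
    return 20.0 * np.log10(np.linalg.norm(x) ** 2 /
                           np.linalg.norm(x - x_n) ** 2)
```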

5. Conclusions

Two modified Tseng’s methods were presented for solving the modified variational inclusion problem and the variational inclusion problem by using the condition $\sum_{i=1}^{N}\delta_i = 1$ for the modified variational inclusion problem and the parallel technique. Additionally, we demonstrated the behavior of our algorithms and used them to solve image deblurring problems.

Author Contributions

Conceptualization, K.S. (Kanokwan Sitthithakerngkiet); formal analysis, T.S.; funding acquisition, T.S.; investigation, K.S. (Kanokwan Sitthithakerngkiet); methodology, T.S.; software, K.S. (Kanokwan Sitthithakerngkiet); validation, T.S., K.S. (Kamonrat Sombut) and A.A.; visualization, K.S. (Kanokwan Sitthithakerngkiet); writing—original draft, T.S.; writing—review & editing, K.S. (Kanokwan Sitthithakerngkiet). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science, Research and Innovation Fund (NSRF), King Mongkut’s University of Technology North Bangkok, under Contract No. KMUTNB-FF-65-13.

Acknowledgments

The authors would like to thank King Mongkut’s University of Technology North Bangkok (KMUTNB), Rajamangala University of Technology Thanyaburi (RMUTT) and Nakhon Sawan Rajabhat University. We appreciate the anonymous referee’s thorough review of the manuscript and helpful suggestions for refining the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baiocchi, C. Variational and Quasivariational Inequalities: Applications to Free-Boundary Problems; Springer: Basel, Switzerland, 1984.
  2. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  3. Marcotte, P. Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems. INFOR Inf. Syst. Oper. Res. 1991, 29, 258–270.
  4. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
  5. Gibali, A.; Thong, D.V. Tseng type method for solving inclusion problems and its applications. Calcolo 2018, 55, 49.
  6. Thong, D.V.; Cholamjiak, P. Strong convergence of a forward–backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 2019, 38, 94.
  7. Khobotov, E.N. Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1987, 27, 120–127.
  8. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 1998, 38, 431–446.
  9. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  10. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
  11. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
  12. Eckstein, J.; Bertsekas, D. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
  13. Nesterov, Y. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
  14. Alvarez, F. On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 2000, 38, 1102–1119.
  15. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  16. Abubakar, J.; Kumam, P.; Rehman, H.; Ibrahim, A.H. Inertial iterative schemes with variable step sizes for variational inequality problem involving pseudomonotone operator. Mathematics 2020, 8, 609.
  17. Ceng, L.C.; Petrusel, A.; Qin, X.; Yao, J.C. A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 2020, 21, 93–108.
  18. Ceng, L.C.; Petrusel, A.; Qin, X.; Yao, J.C. Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 2021, 70, 1337–1358.
  19. Zhao, T.Y.; Wang, D.Q.; Ceng, L.C.; He, L.; Wang, C.Y.; Fan, H.L. Quasi-inertial Tseng’s extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 2020, 42, 69–90.
  20. He, L.; Cui, Y.L.; Ceng, L.C.; Wang, D.Q.; Hu, H.Y. Strong convergence for monotone bilevel equilibria with constraints of variational inequalities and fixed points using subgradient extragradient implicit rule. J. Inequal. Appl. 2021, 146, 1–37.
  21. Attouch, H.; Cabot, A. Convergence of a relaxed inertial proximal algorithm for maximally monotone operators. Math. Program. 2020, 184, 243–287.
  22. Attouch, H.; Cabot, A. Convergence rate of a relaxed inertial proximal algorithm for convex minimization. Optimization 2019, 69, 1281–1312.
  23. Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward–backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598.
  24. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
  25. Oyewole, O.K.; Abass, H.A.; Mebawondu, A.A.; Aremu, K.O. A Tseng extragradient method for solving variational inequality problems in Banach spaces. Numer. Algor. 2021, 1–21.
  26. Abubakar, A.; Kumam, P.; Ibrahim, A.H.; Padcharoen, A. Relaxed inertial Tseng’s type method for solving the inclusion problem with application to image restoration. Mathematics 2020, 8, 818.
  27. Khuangsatung, W.; Kangtunyakarn, A. Algorithm of a new variational inclusion problem and strictly pseudononspreading mapping with application. Fixed Point Theory Appl. 2014, 209.
  28. Khuangsatung, W.; Kangtunyakarn, A. A theorem of variational inclusion problems and various nonlinear mappings. Appl. Anal. 2018, 97, 1172–1186.
  29. Anh, P.K.; Hieu, D.V. Parallel and sequential hybrid methods for a finite family of asymptotically quasi ϕ-nonexpansive mappings. J. Appl. Math. Comput. 2015, 48, 241–263.
  30. Anh, P.K.; Hieu, D.V. Parallel hybrid iterative methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 2016, 44, 351–374.
  31. Cholamjiak, W.; Khan, S.A.; Yambangwai, D.; Kazmi, K.R. Strong convergence analysis of common variational inclusion problems involving an inertial parallel monotone hybrid method for a novel application to image restoration. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. Mat. 2020, 114, 351–374.
  32. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  33. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  34. Ofoedu, E.U. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728.
Figure 1. The flowchart of the image restoration process.
Figure 2. Error plot of Algorithm 1.
Figure 3. Plot of the sequence {x_n} by separating the dimensions of x_n.
Figure 4. Original test images: (a) House and (b) Boats.
Figure 5. (a) Degraded house image; (b) house image restored by Algorithm 1; (c) house image restored by the algorithm in [28].
Figure 6. (a) Degraded boats image; (b) boats image restored by Algorithm 1; (c) boats image restored by the algorithm in [28].
Figure 7. SNR values of the house and boats images restored by Algorithm 1 and the algorithm in [28].
Table 1. Detailed analysis of the computations of Algorithm 1.

n | E(x_n^1) | E(x_n^2)
--- | --- | ---
1 | 0.100000000000001 | 0.100000000000000
2 | 0.000385156249999685 | 0.00945859374999999
3 | 0.0103723678649903 | 0.000402462457275377
4 | 0.0113151219476872 | 0.000500674227283338
5 | 0.0113532417283455 | 0.000588079127189875
⋮ | ⋮ | ⋮
29 | 0.00996291653487358 | 0.000524394399890646
30 | 0.00990842850392615 | 0.000521526442682538
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
