Article

Inertial Algorithm for Best Proximity Point, Split Variational Inclusion and Equilibrium Problems with Application to Image Restorations

by Mujahid Abbas 1,2, Muhammad Waseem Asghar 1 and Ahad Hamoud Alotaibi 3,*

1 Department of Mechanical Engineering Science, Faculty of Engineering and the Built Environment, University of Johannesburg, Johannesburg 2092, South Africa
2 Department of Medical Research, China Medical University, Taichung 40402, Taiwan
3 Department of Mathematics, Faculty of Science and Arts, King Abdulaziz University, Rabigh 21911, Saudi Arabia
* Author to whom correspondence should be addressed.
Axioms 2025, 14(12), 924; https://doi.org/10.3390/axioms14120924
Submission received: 3 November 2025 / Revised: 8 December 2025 / Accepted: 12 December 2025 / Published: 16 December 2025
(This article belongs to the Section Mathematical Analysis)

Abstract

If $S$ and $T$ are two non-self-mappings, then a solution of the equation $Sa^* = Ta^* = a^*$ does not necessarily exist. The common best proximity point problem is to find an approximate optimal solution of this type of equation, and it plays a key role in approximation and optimization theory. The primary goal of this paper is to introduce an inertial-type self-adaptive algorithm for solving the common best proximity point, generalized equilibrium and split variational inclusion problems in Hilbert spaces. The strong convergence of the proposed algorithm is established under mild conditions. It is worth mentioning that the step size in many existing algorithms requires prior knowledge of operator norms, which are difficult to compute, whereas our proposed algorithm does not require this. Numerical examples are given to illustrate the efficiency and applicability of the proposed approach. We further apply the proposed algorithm to an image restoration problem and show that it achieves a higher signal-to-noise ratio than the existing algorithms considered in this study.

1. Introduction and Preliminaries

Optimization, variational inclusion, equilibrium and best proximity point problems are well-known classes of nonlinear problems with numerous applications in applied sciences and engineering. The search for efficient, low-cost, flexible and easily implementable iterative algorithms to approximate common solutions of such problems has been an active area of research over the last few decades. Continuing this line of research, we propose an efficient algorithm to approximate a common solution of these problems.
Throughout this paper, $\mathbb{R}$ and $\mathbb{N}$ denote the sets of all real and natural numbers, respectively. The notations $a_n \to a^*$ and $a_n \rightharpoonup a^*$ represent strong and weak convergence, respectively, while $\chi(a_n)$ denotes the set of weak limit points of the sequence $\{a_n\}$.
Suppose that $H$ is a real Hilbert space and $C$ is a nonempty closed convex subset of $H$. A mapping $T : C \to C$ is said to be nonexpansive if $\|Ta - Tb\| \le \|a - b\|$ for all $a, b \in C$. Such mappings arise frequently in convex optimization, economics, signal processing, image reconstruction, variational inclusion and other applied sciences (see, e.g., [1,2]).
Let $C$ and $D$ be nonempty subsets of $H$ with $C^* \subseteq C$. A mapping $T : C^* \to D$ is called $C^*$-nonexpansive if $\|Ta - Tc\| \le \|a - c\|$ for all $a \in C$ and $c \in C^*$.
Let us recall some other concepts needed in the sequel. A multi-valued mapping $T$ is monotone if, for $u \in Ta$ and $v \in Tb$, we have $\langle a - b, u - v \rangle \ge 0$, where $a, b \in H$. If the graph of $T$ is not properly contained in the graph of another monotone mapping, then $T$ is called maximal monotone. For $\lambda > 0$ and the identity operator $I$, the resolvent operator of $T$ is given by
$$J_\lambda^T = (I + \lambda T)^{-1}.$$
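For intuition, the resolvent is often available in closed form. A minimal numpy sketch (our illustrative addition, not part of the original paper) for the maximal monotone operator $T = \partial\|\cdot\|_1$, whose resolvent is the soft-thresholding map:

```python
import numpy as np

def resolvent_l1(x, lam):
    """Resolvent J_lam^T of T = subdifferential of the l1-norm:
    J_lam^T(x) = argmin_z 0.5*||z - x||^2 + lam*||z||_1,
    which is componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.2, 0.7])
print(resolvent_l1(x, 0.5))  # [2.5  0.   0.2]
```

This particular resolvent reappears in Section 4, where the $\ell_1$-regularized image restoration model leads to exactly this operator.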
A self-mapping $T$ on $H$ is said to be $k$-inversely monotone if, for all $a, b \in H$,
$$\langle a - b, Ta - Tb \rangle \ge k\|Ta - Tb\|^2,$$
where $k > 0$ is a constant. In particular, when $k = 1$, the operator $T$ is referred to as inverse strongly monotone (firmly nonexpansive).
Note that for any $h_1 \in H$, there is an element $P_Ch_1 \in C$ such that
$$\|h_1 - P_Ch_1\| = \inf\{\|h_1 - h_2\| : h_2 \in C\}.$$
The mapping $P_C : H \to C$ is known as the metric projection, and the element $P_Ch_1$ is referred to as the best approximation of $h_1$ from $C$.
Note that for any $h_1 \in H$,
$$P_Ch_1 = h_3 \quad \text{if and only if} \quad \langle h_1 - h_3, h_3 - h_2 \rangle \ge 0 \ \text{for all } h_2 \in C.$$
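As a quick numerical check of this characterization, here is a small numpy sketch (our illustrative addition; the box set and test points are arbitrary) that projects onto a box and verifies the inequality at sampled points of $C$:

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection onto the box C = [lo, hi]^n (componentwise clip)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
h1 = 3.0 * rng.normal(size=3)
h3 = proj_box(h1, -1.0, 1.0)
# The characterization: <h1 - h3, h3 - h2> >= 0 for every h2 in C.
for _ in range(5):
    h2 = rng.uniform(-1.0, 1.0, size=3)
    assert np.dot(h1 - h3, h3 - h2) >= -1e-12
print("variational characterization holds at sampled points")
```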
For more details on metric projections, we refer to Section 3 of [3,4].
The projection mapping P C is a classical example of firmly nonexpansive mappings; consequently, it is also nonexpansive (see Section 11 in [3]). Moreover, a mapping T : C H is said to be demiclosed on C if the operator I T has a graph that is sequentially closed with respect to weak convergence in the domain and strong convergence in the codomain.
Lemma 1
([5]). Let $T : C \to H$ be a nonexpansive mapping. Then $I - T$ is demiclosed at zero; that is, for any sequence $\{h_n\} \subseteq C$ such that $h_n \rightharpoonup h_1$ and $(I - T)h_n \to 0$, it follows that $(I - T)h_1 = 0$.
Definition 1
([6]). A function $T : H \to \mathbb{R} \cup \{+\infty\}$ is said to be weakly lower semi-continuous at a point $h^* \in H$ if, for any sequence $\{h_n\}$ in $H$ with $h_n \rightharpoonup h^*$, we have
$$T(h^*) \le \liminf_{n\to\infty} T(h_n).$$
Lemma 2
([7]). For any $h_1, h_2 \in H$ and $\delta \in \mathbb{R}$, the following equalities and inequalities hold:
(i) $\|h_1 + h_2\|^2 \le \|h_1\|^2 + 2\langle h_1 + h_2, h_2 \rangle$;
(ii) $\|h_1 + h_2\|^2 = \|h_1\|^2 + \|h_2\|^2 + 2\langle h_1, h_2 \rangle$;
(iii) $\|h_1 - h_2\|^2 = \|h_1\|^2 + \|h_2\|^2 - 2\langle h_1, h_2 \rangle$;
(iv) $\|\delta h_1 + (1 - \delta)h_2\|^2 = \delta\|h_1\|^2 + (1 - \delta)\|h_2\|^2 - \delta(1 - \delta)\|h_1 - h_2\|^2$.
Lemma 3
([8]). For any $h_1, h_2, h_3 \in H$ and $\rho, \eta, \theta \in [0, 1]$ with $\rho + \eta + \theta = 1$, the following identity holds:
$$\|\rho h_1 + \eta h_2 + \theta h_3\|^2 = \rho\|h_1\|^2 + \eta\|h_2\|^2 + \theta\|h_3\|^2 - \rho\eta\|h_1 - h_2\|^2 - \eta\theta\|h_2 - h_3\|^2 - \rho\theta\|h_1 - h_3\|^2.$$
Lemma 4
([9]). Let $\{\bar a_n\}, \{\bar c_n\} \subseteq \mathbb{R}^+$, $\{\bar b_n\} \subseteq \mathbb{R}$ and $\{\delta_n\} \subseteq (0, 1)$ be such that $\sum_{n=0}^{\infty} \bar c_n < \infty$. Suppose
$$\bar a_{n+1} \le (1 - \delta_n)\bar a_n + \bar b_n + \bar c_n, \quad n \ge 0.$$
The following conclusions hold:
(i) If there exists a constant $\theta \ge 0$ such that $\bar b_n \le \theta\delta_n$, then the sequence $\{\bar a_n\}$ is bounded.
(ii) If $\sum_{n=0}^{\infty} \delta_n = \infty$ and $\limsup_{n\to\infty} \frac{\bar b_n}{\delta_n} \le 0$, then $\bar a_n \to 0$ as $n \to \infty$.
Lemma 5
([10]). Let $\{\bar a_n\} \subseteq \mathbb{R}^+$ and $\{\delta_n\} \subseteq (0, 1)$ satisfy $\sum_{n=0}^{\infty} \delta_n = \infty$. Let $\{\bar b_n\} \subseteq \mathbb{R}$ and assume that
$$\bar a_{n+1} \le (1 - \delta_n)\bar a_n + \delta_n\bar b_n, \quad n \in \mathbb{N}.$$
If $\limsup_{i\to\infty} \bar b_{n_i} \le 0$ for every subsequence $\{\bar a_{n_i}\}$ of $\{\bar a_n\}$ satisfying $\liminf_{i\to\infty}(\bar a_{n_i+1} - \bar a_{n_i}) \ge 0$, then $\lim_{n\to\infty} \bar a_n = 0$.
Assume that $H_1$ and $H_2$ are real Hilbert spaces, and let $C$ and $D$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Suppose that $A : H_1 \to H_2$ is a bounded linear operator.
Let $F : C \times C \to \mathbb{R}$ be a bifunction. The corresponding equilibrium problem (EP) is to obtain $a^* \in C$ such that
$$F(a^*, b) \ge 0, \quad \forall b \in C. \tag{1}$$
If $G : C \to H$ is a given operator, then the variational inequality problem (VIP) is formulated as: find an element $a^* \in C$ such that
$$\langle b - a^*, Ga^* \rangle \ge 0, \quad \forall b \in C.$$
By combining these two concepts, we obtain the generalized equilibrium problem associated with $F$ and $G$, denoted by $GEP(F, G)$, which is formulated as follows: find $a^* \in C$ such that
$$F(a^*, b) + \langle b - a^*, Ga^* \rangle \ge 0, \quad \forall b \in C. \tag{2}$$
It is clear that if $G = 0$ in (2), $GEP(F, G)$ reduces to the standard EP. Likewise, if $F = 0$ in (2), it reduces to the VIP. The set of all solutions of $GEP(F, G)$ is denoted by $S(GEP(F, G))$.
It is worth mentioning that $GEP(F, G)$ provides a unified framework that includes several important mathematical models, such as optimization problems, equilibrium problems, Nash equilibria, and variational inequality problems. For more detailed discussions, see [11,12,13,14].
Let $C_1, C_2 \subseteq H$ be closed and convex sets, and let $T : C_1 \to C_2$ be a mapping. The distance between the two sets is given by
$$d(C_1, C_2) = \inf\{\|c - c'\| : c \in C_1 \ \text{and} \ c' \in C_2\}.$$
The best proximity point problem (BPP) is formulated as follows: find $a^* \in C_1$ with
$$\|a^* - Ta^*\| = d(C_1, C_2),$$
where $d(C_1, C_2)$ denotes the distance between the sets $C_1$ and $C_2$.
The BPP plays a key role in optimization (see [15,16,17]) and includes the fixed point problem as a special case. Several methods have been proposed in the literature to solve the BPP (see [18,19,20,21]). We denote the solution set of the BPP for a given mapping $T$ by $B(T)$. Moreover, if $T$ is a $C^*$-nonexpansive mapping and we take $C^* = B(T)$, then $T$ is referred to as a best proximally nonexpansive mapping.
The split inverse problem (SIP) is described as follows: find a point
$$a^* \in H_1 \ \text{that solves} \ IP_1$$
such that
$$Aa^* \in H_2 \ \text{solves} \ IP_2,$$
where $IP_1$ and $IP_2$ are inverse problems posed in the spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is an operator.
Similarly, the split variational inclusion problem (SVIP) is formulated as follows: find an element
$$a^* \in H_1 \ \text{which satisfies} \ 0 \in L_1a^*$$
such that
$$Aa^* \in H_2 \ \text{satisfies} \ 0 \in L_2(Aa^*),$$
where $L_1$ and $L_2$ are multi-valued mappings on $H_1$ and $H_2$, respectively.
Many practical problems in applied sciences can be modeled using SVIP (see, for example, [22,23]). In the sequel, Γ denotes the solution set of SVIP and C 1 and C 2 are closed and convex sets.
Remark 1
([24,25]). Note that we have the following facts:
• A mapping $L$ is maximal monotone if and only if $J_\lambda^L$ is single-valued.
• $J_\lambda^La^* = a^*$ if and only if $a^* \in L^{-1}(0)$.
• The SVIP is equivalent to the following: find $a^* \in H_1$ with
$$J_\lambda^{L_1}a^* = a^* \quad \text{with} \quad Aa^* \in H_2 \ \text{and} \ Aa^* = J_\lambda^{L_2}Aa^*.$$
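The second fact in Remark 1 is easy to check numerically. A small sketch (our illustrative addition, using a linear maximal monotone operator $L(x) = Mx$ with $M$ positive definite, so that $J_\lambda^L = (I + \lambda M)^{-1}$):

```python
import numpy as np

M = np.array([[2., 1.], [1., 3.]])   # positive definite => L(x) = Mx is maximal monotone
lam = 0.5
J = lambda x: np.linalg.solve(np.eye(2) + lam * M, x)   # resolvent (I + lam*L)^(-1)

x0 = np.zeros(2)                     # the unique zero of L
print(np.allclose(J(x0), x0))        # True: J_lam^L(x*) = x* iff 0 in L(x*)
print(np.linalg.norm(J(np.array([1., 2.])) - np.array([1., 2.])) > 0)  # non-zeros move
```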
Let $C_1, C_2 \subseteq H_1$. Define $C_0$ and $D_0$ as follows:
$$C_0 = \{a \in C_1 : \|a - b\| = d(C_1, C_2) \ \text{for some} \ b \in C_2\}$$
and
$$D_0 = \{b \in C_2 : \|a - b\| = d(C_1, C_2) \ \text{for some} \ a \in C_1\}.$$
Lemma 6
([18]). Let $C_1, C_2 \subseteq H$, and let $T : C_1 \to C_2$ be a mapping such that $T(C_0) \subseteq D_0$. Then, any fixed point of the composition $P_{C_1}T$ within $C_0$ gives a best proximity point of $T$.
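Lemma 6 suggests a simple computational strategy: iterate the composition $P_{C_1}T$. A small numpy sketch (our illustration, using the pair of sets and the mapping from Example 2 of Section 3) finds a best proximity point this way:

```python
import numpy as np

# C1 = [-1, 0] x [0, 1]; S maps C1 into C2 = [3, 7] x [0, 1], with d(C1, C2) = 3.
P_C1 = lambda x: np.clip(x, [-1, 0], [0, 1])
S = lambda x: np.array([3 - x[0], x[1] / 2])

x = np.array([-0.5, 1.0])
for _ in range(60):
    x = P_C1(S(x))        # fixed-point iteration of P_C1 o S (Lemma 6)
print(x)                  # -> [0, 0], and ||x - S(x)|| = 3 = d(C1, C2)
```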
Definition 2
([26]). Let $C_1$ and $C_2$ be subsets of a metric space $(X, d)$. We say that $C_1$ and $C_2$ satisfy the P-property if, for any $a_1, a_2 \in C_0$ and $b_1, b_2 \in D_0$, the following condition holds:
$$d(a_1, b_1) = d(a_2, b_2) = d(C_1, C_2) \implies d(a_1, a_2) = d(b_1, b_2).$$
Lemma 7
([19]). If $C_1, C_2 \subseteq H$ and $T : C_1 \to C_2$ satisfies $T(C_0) \subseteq D_0$, then $T|_{C_0}$ has the proximal property if and only if the operator $(I - P_{C_1}T)|_{C_0}$ is demiclosed at zero.
To approximate solutions of nonlinear problems, various iterative algorithms have been proposed and widely studied in the literature ([27,28,29,30,31]). For example, Abbas et al. [32] proposed the AA-iteration scheme, in which the sequence $\{a_n\}$ is generated as in Algorithm 1:
Algorithm 1: AA-iteration
Initialization: Assume $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subseteq (0, 1)$.
Set any $a_1 \in C$; calculate $a_{n+1}$ as follows:
$$\begin{aligned}
d_n &= (1 - \gamma_n)a_n + \gamma_nTa_n, \\
c_n &= T((1 - \beta_n)d_n + \beta_nTd_n), \\
b_n &= T((1 - \alpha_n)Td_n + \alpha_nTc_n), \\
a_{n+1} &= Tb_n.
\end{aligned}$$
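As an illustration (our addition, assuming a generic mapping $T$ and constant parameters in $(0, 1)$), one step of the AA-iteration can be written as:

```python
def aa_step(a, T, alpha, beta, gamma):
    """One step of the AA-iteration scheme for a mapping T."""
    d = (1 - gamma) * a + gamma * T(a)
    c = T((1 - beta) * d + beta * T(d))
    b = T((1 - alpha) * T(d) + alpha * T(c))
    return T(b)

# Example: T is a contraction on R with fixed point 0.
T = lambda x: 0.5 * x
a = 1.0
for n in range(10):
    a = aa_step(a, T, alpha=0.9, beta=0.9, gamma=0.9)
print(a)  # close to the fixed point 0
```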
The authors in [32] proved that the sequence generated by the AA-iteration scheme converges faster than many well-known comparable schemes. Subsequently, several authors have applied the AA-algorithm to solve certain nonlinear problems; see [33,34,35,36].
Byrne et al. [37] proposed an iterative algorithm for the SVIP involving maximal monotone operators $L_1$ and $L_2$; the sequence is given in Algorithm 2.
Algorithm 2: Byrne's algorithm
Initialization: Suppose that $\{\alpha_n\}$ is a sequence in $(0, 1)$, $\lambda > 0$ and $\sigma \in \big(0, \frac{2}{K}\big)$, where $K = \|A^*A\|$.
For $a_1 \in H_1$, calculate $a_{n+1}$ as follows:
$$a_{n+1} = \alpha_na_n + (1 - \alpha_n)J_\lambda^{L_1}\big(a_n + \sigma A^*(J_\lambda^{L_2} - I)Aa_n\big).$$
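For concreteness, a minimal numpy sketch of this update (our illustrative addition, with the resolvents supplied as callables) might look as follows:

```python
import numpy as np

def byrne_step(a, A, J1, J2, alpha, sigma):
    """One step of Byrne et al.'s iteration for the SVIP:
    a_{n+1} = alpha*a_n + (1 - alpha) * J1(a_n + sigma * A^T (J2 - I) A a_n)."""
    residual = J2(A @ a) - A @ a            # (J2 - I) A a_n
    return alpha * a + (1 - alpha) * J1(a + sigma * (A.T @ residual))
```

Note that the admissible step size $\sigma$ depends on $K = \|A^*A\|$, which must be known in advance; avoiding this requirement is one motivation for the self-adaptive step size proposed later.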
    Finding a common solution for various nonlinear problems has attracted significant attention in recent research. For example, Ref. [38] introduced an iterative sequence to compute a common solution of fixed point problems and SVIP for nonexpansive mappings. The proposed sequence is described as in Algorithm 3.
Algorithm 3: Wangkeeree's algorithm
Initialization: Let $\{\alpha_n\} \subseteq (0, 1)$, $\lambda > 0$ and $\sigma \in \big(0, \frac{2}{K}\big)$, where $K$ is as given above.
Choose any $a_1 \in H_1$ and find $a_{n+1}$ as:
$$\begin{aligned}
b_n &= J_\lambda^{L_1}\big(a_n + \sigma A^*(J_\lambda^{L_2} - I)Aa_n\big), \\
a_{n+1} &= \alpha_n\eta\Phi a_n + (I - \alpha_nS)Tb_n.
\end{aligned}$$
In the algorithm, $\Phi$ and $T$ are a contraction and a nonexpansive mapping on $H_1$, respectively, and $S$ is a bounded linear operator on $H_1$. The authors showed that the sequence defined in Algorithm 3 converges to a common solution of the problems mentioned above.
In 2021, Suantai and Tiammee [39] introduced an iterative algorithm to address the BPP and the EP. The algorithm is formulated as Algorithm 4.
Recently, Husain et al. [40] proposed an iterative sequence for solving the BPP and the generalized equilibrium problem. The proposed algorithm is given in Algorithm 5.
Algorithm 4: Suantai's algorithm
Initialization: Let $\{\alpha_n\} \subseteq (0, 1)$ with $\limsup_{n\to\infty}\alpha_n < 1$ and $\tau \in \big(0, \frac{1}{\|A^*\|^2}\big)$. Set any $a_0 \in C_0$ and find $a_{n+1}$ as follows:
$$\begin{aligned}
c_n &= (1 - \alpha_n)a_n + \alpha_nP_CTa_n, \\
b_n &= P_C\big(c_n + \tau A^*(G_{r_n}^F - I)Ac_n\big), \\
C_{n+1} &= \{d \in C_n : \|b_n - d\| \le \|c_n - d\| \le \|a_n - d\|\}, \\
a_{n+1} &= P_{C_{n+1}}(a_0).
\end{aligned}$$
Algorithm 5: Husain's algorithm
Initialization: Let $\{\alpha_n\}, \{\delta_n\} \subseteq (0, 1)$ and $\tau \in \big(0, \frac{1}{\|A^*\|^2}\big)$. Set any $a_0, a_1 \in C_0$ and calculate $a_{n+1}$ as follows:
$$\begin{aligned}
c_n &= a_n + \delta_n(a_n - a_{n-1}), \\
b_n &= (1 - \alpha_n)c_n + \alpha_nP_{C_1}Tc_n, \\
a_{n+1} &= P_{C_1}\big(b_n + \tau A^*(V - I)Ab_n\big),
\end{aligned}$$
where $V = G_{r_n}^F(I - r_nG)$ and $r_n \in (0, \infty)$.
The choice of step size is crucial in any iterative method, as it directly influences both the computational process and the convergence speed. The selection of an appropriate step size can lead to faster approximation of the solution, thereby improving the overall rate of convergence of the iterative scheme. Furthermore, incorporating an inertial term based on the previous two iterations can help accelerate the convergence of iterative algorithms. Various researchers have developed and applied algorithms that combine inertial terms with self-adaptive step sizes to approximate solutions of nonlinear operator equations (see, for example, [40,41]). This naturally raises a question: Can we design an iterative algorithm that achieves faster convergence to the common solution of some general nonlinear problems?
Motivated by the aforementioned studies, this paper introduces an efficient inertial-type algorithm for approximating the common solution of several nonlinear problems. Obtaining a common solution instead of addressing each problem individually provides a unified perspective on the interrelated variables. This strategy yields a more complete understanding of the system's behavior and ensures consistency in complex cases.
Remark 2.
Unlike the shrinking projection and explicit schemes in the literature (e.g., [39,40,42]), our proposed algorithm uses a self-adaptive step size that does not require prior knowledge of the operator norm, which is a structural difference in the step size. Further, we relax several restrictive assumptions commonly used in existing work, for example, explicit norm bounds or shrinking parameters. To the best of our knowledge, in the existing literature, inertial projection algorithms have been used to solve the split best proximity problem or, at most, a combination of best proximity and equilibrium problems. Our scheme is formulated as an inertial-type self-adaptive iteration that can approximate the solution of the common best proximity, generalized equilibrium and split variational inclusion problems in one unified framework. We thus provide both a theoretical and a practical contribution, extending the applicability of the inertial technique to a broader class of coupled problems. Numerical experiments and image restoration results illustrate that our proposed approach is superior and gives a higher signal-to-noise ratio in relation to comparable schemes.

2. Convergence Analysis

Assumption A1
([41]). Let $F : C \times C \to \mathbb{R}$ and $G : H_1 \to H_1$. To solve the generalized equilibrium problem $GEP(F, G)$, we impose the following conditions on $F$ and $G$:
$(A1)$ $F(a, a) = 0$;
$(A2)$ $F(a, b) + F(b, a) + \langle Ga, b - a \rangle + \langle Gb, a - b \rangle \le 0$;
$(A3)$ $F(b, c) \ge \limsup_{\alpha\to 0^+} F(\alpha a + (1 - \alpha)b, c)$;
$(A4)$ the function $b \mapsto F(a, b) + \langle Ga, b - a \rangle$ is lower semi-continuous and convex,
for all $a, b, c \in C$.
Definition 3
([41]). For a given $r > 0$, define the operator $G_r^F : H_1 \to 2^C$ by
$$G_r^F(a) = \Big\{c \in C : F(c, b) + \langle Gc, b - c \rangle + \frac{1}{r}\langle b - c, c - a \rangle \ge 0, \ \forall b \in C\Big\}.$$
Lemma 8
([41]). Under $(A1)$–$(A4)$, the operator $G_r^F$ satisfies the following properties:
(1) $G_r^F$ is single-valued and firmly nonexpansive.
(2) The fixed points of $G_r^F$ solve the $GEP(F, G)$.
(3) The fixed point set of the mapping $G_r^F$ is closed and convex.

Proposed Algorithm

Let $C_i, C \subseteq H_1$ and $D_i, D \subseteq H_2$ for $i = 1, 2$ be convex and closed sets. Assume that $F : C \times C \to \mathbb{R}$ satisfies Assumption A1 and that $A^*$ denotes the adjoint of a bounded linear operator $A$ from $H_1$ to $H_2$. Consider best proximally nonexpansive mappings $S, T : C_1 \to C_2$ and a contraction $\Phi : H_1 \to H_1$ with contraction constant $\theta$.
Define the following functions:
$$\begin{aligned}
f_1(a) &= \tfrac{1}{2}\|(I - J_{\lambda_2}^{L_2})Aa\|^2, & f_2(a) &= \tfrac{1}{2}\|(I - J_{\lambda_1}^{L_1})a\|^2, \\
M(a) &= A^*(I - J_{\lambda_2}^{L_2})Aa, & N(a) &= (I - J_{\lambda_1}^{L_1})a.
\end{aligned}$$
Here, f 1 and f 2 are differentiable, weakly lower semi-continuous, and convex [43], while M and N are Lipschitz continuous [25]. The proposed iterative algorithm is now described in Algorithm 6.
Algorithm 6: Proposed algorithm
Step 0. Initialize $a_0, a_1 \in H$ and a non-negative parameter $\delta$. Set $n = 1$.
Step 1. Given the $(n-1)$th and $n$th iterates, select $\delta_n$ satisfying $0 \le \delta_n \le \hat\delta_n$, where
$$\hat\delta_n = \begin{cases} \min\Big\{\delta, \dfrac{\tau_n}{\|a_n - a_{n-1}\|}\Big\}, & \text{if } a_n \ne a_{n-1}, \\[4pt] \delta, & \text{otherwise}. \end{cases} \tag{6}$$
Step 2. Calculate
$$e_n = a_n + \delta_n(a_n - a_{n-1}).$$
Step 3. Compute $d_n \in C_1$ which satisfies
$$F(d_n, q) + \langle Gd_n, q - d_n \rangle + \frac{1}{r_n}\langle q - d_n, d_n - e_n \rangle \ge 0, \quad \forall q \in H.$$
Step 4. Find
$$c_n = \eta_ne_n + (1 - \eta_n)d_n.$$
Step 5. Find
$$b_n = P_CJ_{\lambda_1}^{L_1}\big(I - \sigma_nA^*(I - J_{\lambda_2}^{L_2})A\big)c_n,$$
where
$$\sigma_n = \begin{cases} \dfrac{\mu_nf_1(c_n)}{\|M(c_n)\|^2 + \|N(c_n)\|^2}, & \text{if } \|M(c_n)\|^2 + \|N(c_n)\|^2 \ne 0, \\[4pt] 0, & \text{otherwise}. \end{cases}$$
Step 6. Find
$$a_{n+1} = \alpha_n\Phi a_n + \beta_nP_{C_1}Sb_n + \gamma_nP_{C_1}Tc_n.$$
Assume the following conditions:
(i) $\{\alpha_n\} \subseteq (0, 1)$ with $\sum_{n=0}^{\infty}\alpha_n = \infty$ and $\lim_{n\to\infty}\alpha_n = 0$;
(ii) $\{\eta_n\}, \{\beta_n\}, \{\gamma_n\} \subseteq (0, 1)$ with $\alpha_n = 1 - \beta_n - \gamma_n$;
(iii) $\delta > 0$ is fixed, $0 < \mu_n < 4$, and $\{\gamma_n\}, \{\tau_n\} \subseteq \mathbb{R}^+$ are such that
$$\lim_{n\to\infty}\frac{\tau_n}{\alpha_n} = 0 \quad \text{and} \quad \liminf_{n\to\infty}\gamma_n > 0.$$
Remark 3.
Under Conditions (i), (iii), and (6), we have
$$\lim_{n\to\infty}\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| = 0.$$
Assume that the set $\Omega = B(T) \cap B(S) \cap \Gamma \cap S(GEP(F, G))$ is nonempty.
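Before turning to the analysis, the following Python sketch (our illustrative addition, not code from the paper) shows how Steps 1–6 can be realized in finite dimensions. The resolvents `J1`, `J2`, the GEP resolvent `GrF`, the projections `P_C`, `P_C1`, and the parameter sequences are supplied by the caller, and a constant $\mu \in (0, 4)$ replaces the sequence $\mu_n$ for simplicity:

```python
import numpy as np

def algorithm6(a0, a1, Phi, S, T, A, J1, J2, GrF, P_C, P_C1,
               alpha, beta, gamma, eta, tau, r, mu=2.0, delta=0.8, n_iter=100):
    """Sketch of the proposed inertial self-adaptive iteration (Algorithm 6).
    J1, J2 : resolvents of L1, L2; GrF(e, r) : GEP resolvent G_r^F;
    P_C, P_C1 : metric projections; alpha, ..., r : callables n -> parameter."""
    a_prev, a = np.asarray(a0, float), np.asarray(a1, float)
    for n in range(1, n_iter + 1):
        # Step 1: self-adaptive inertial parameter (takes delta_n = delta_hat_n).
        diff = np.linalg.norm(a - a_prev)
        d_n = min(delta, tau(n) / diff) if diff > 0 else delta
        # Step 2: inertial extrapolation.
        e = a + d_n * (a - a_prev)
        # Step 3: GEP resolvent step.
        d = GrF(e, r(n))
        # Step 4: averaging.
        c = eta(n) * e + (1 - eta(n)) * d
        # Step 5: self-adaptive split-inclusion step (no operator norm needed).
        M = A.T @ (A @ c - J2(A @ c))          # M(c) = A*(I - J2)Ac
        N = c - J1(c)                          # N(c) = (I - J1)c
        f1 = 0.5 * np.linalg.norm(A @ c - J2(A @ c)) ** 2
        denom = np.linalg.norm(M) ** 2 + np.linalg.norm(N) ** 2
        sigma = mu * f1 / denom if denom > 0 else 0.0
        b = P_C(J1(c - sigma * M))
        # Step 6: viscosity-type combination.
        a_prev, a = a, alpha(n) * Phi(a) + beta(n) * P_C1(S(b)) + gamma(n) * P_C1(T(c))
    return a
```

The point of Steps 1 and 5 is visible here: both $\delta_n$ and $\sigma_n$ are computed from the current iterates, so no operator norm $\|A\|$ enters the step-size choice.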
Theorem 1.
Let $\{a_n\}$ be the sequence generated by Algorithm 6. If Assumptions $(A1)$–$(A4)$ and Conditions (i)–(iii) hold, then $a_n \to a^* \in \Omega$.
The proof of Theorem 1 is presented through the following lemmas.
Lemma 9.
If { a n } is the sequence generated by Algorithm 6, then { a n } is bounded.
Proof. 
Suppose that $a^* \in \Omega$. Since
$$\|P_{C_1}Tc_n - Tc_n\| = d(C_1, C_2) = \|a^* - Ta^*\|$$
and
$$\|P_{C_1}Sb_n - Sb_n\| = d(C_1, C_2) = \|a^* - Sa^*\|,$$
by using the P-property, we have
$$\|P_{C_1}Tc_n - a^*\| = \|Tc_n - Ta^*\| \tag{8}$$
and
$$\|P_{C_1}Sb_n - a^*\| = \|Sb_n - Sa^*\|. \tag{9}$$
Also, $P_\Omega \circ \Phi$ is a contraction with $P_\Omega \circ \Phi(a^*) = a^*$, $J_{\lambda_1}^{L_1}a^* = a^*$ and $J_{\lambda_2}^{L_2}(Aa^*) = Aa^*$. As $G_{r_n}^F$ is nonexpansive, we have
$$\|d_n - a^*\| = \|G_{r_n}^Fe_n - G_{r_n}^Fa^*\| \le \|e_n - a^*\|. \tag{10}$$
Note that
$$\|e_n - a^*\| = \|a_n + \delta_n(a_n - a_{n-1}) - a^*\| \le \|a_n - a^*\| + \delta_n\|a_n - a_{n-1}\| = \|a_n - a^*\| + \alpha_n\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\|. \tag{11}$$
By Remark 3, $\lim_{n\to\infty}\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| = 0$ holds, which implies that there is a constant $M_1 > 0$ satisfying
$$\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| \le M_1.$$
From (11), we get
$$\|e_n - a^*\| \le \|a_n - a^*\| + \alpha_nM_1. \tag{12}$$
Also,
$$\|c_n - a^*\| = \|\eta_ne_n + (1 - \eta_n)d_n - a^*\| \le \eta_n\|e_n - a^*\| + (1 - \eta_n)\|d_n - a^*\| \le \|e_n - a^*\|. \tag{13}$$
Using the definition of $M$ and considering that $I - J_{\lambda_2}^{L_2}$ is firmly nonexpansive, we get
$$\begin{aligned}
\langle M(c_n), c_n - a^* \rangle &= \langle A^*(I - J_{\lambda_2}^{L_2})Ac_n, c_n - a^* \rangle \\
&= \langle (I - J_{\lambda_2}^{L_2})Ac_n, Ac_n - Aa^* \rangle \\
&= \langle (I - J_{\lambda_2}^{L_2})Ac_n - (I - J_{\lambda_2}^{L_2})Aa^*, Ac_n - Aa^* \rangle \\
&\ge \|(I - J_{\lambda_2}^{L_2})Ac_n - (I - J_{\lambda_2}^{L_2})Aa^*\|^2 \\
&= \|(I - J_{\lambda_2}^{L_2})Ac_n\|^2 = 2f_1(c_n), \tag{14}
\end{aligned}$$
where we used $(I - J_{\lambda_2}^{L_2})Aa^* = 0$.
By the nonexpansiveness of $J_{\lambda_1}^{L_1}$, Lemma 2, (14), the definition of $\sigma_n$ and the assumption on $\mu_n$, we obtain
$$\begin{aligned}
\|b_n - a^*\|^2 &= \|J_{\lambda_1}^{L_1}\big(I - \sigma_nA^*(I - J_{\lambda_2}^{L_2})A\big)c_n - a^*\|^2 \\
&\le \|c_n - \sigma_nA^*(I - J_{\lambda_2}^{L_2})Ac_n - a^*\|^2 = \|c_n - a^* - \sigma_nM(c_n)\|^2 \\
&= \|c_n - a^*\|^2 + \sigma_n^2\|M(c_n)\|^2 - 2\sigma_n\langle M(c_n), c_n - a^* \rangle \\
&\le \|c_n - a^*\|^2 + \frac{\mu_n^2f_1^2(c_n)}{(\|M(c_n)\|^2 + \|N(c_n)\|^2)^2}\|M(c_n)\|^2 - \frac{4\mu_nf_1^2(c_n)}{\|M(c_n)\|^2 + \|N(c_n)\|^2} \\
&\le \|c_n - a^*\|^2 - (4 - \mu_n)\frac{\mu_nf_1^2(c_n)}{\|M(c_n)\|^2 + \|N(c_n)\|^2} \le \|c_n - a^*\|^2. \tag{15}
\end{aligned}$$
Hence, we have
$$\|b_n - a^*\| \le \|c_n - a^*\|. \tag{16}$$
Since $S$ and $T$ are best proximally nonexpansive mappings and $\Phi$ is a contraction, by using Condition (ii), (8), (9), (12), (13) and (16), we get
$$\begin{aligned}
\|a_{n+1} - a^*\| &= \|\alpha_n\Phi a_n + \beta_nP_{C_1}Sb_n + \gamma_nP_{C_1}Tc_n - a^*\| \\
&\le \alpha_n\|\Phi a_n - \Phi a^*\| + \alpha_n\|\Phi a^* - a^*\| + \beta_n\|P_{C_1}Sb_n - a^*\| + \gamma_n\|P_{C_1}Tc_n - a^*\| \\
&= \alpha_n\theta\|a_n - a^*\| + \beta_n\|Sb_n - Sa^*\| + \gamma_n\|Tc_n - Ta^*\| + \alpha_n\|\Phi a^* - a^*\| \\
&\le \alpha_n\theta\|a_n - a^*\| + \beta_n\|b_n - a^*\| + \gamma_n\|c_n - a^*\| + \alpha_n\|\Phi a^* - a^*\| \\
&\le \alpha_n\theta\|a_n - a^*\| + (\beta_n + \gamma_n)\|e_n - a^*\| + \alpha_n\|\Phi a^* - a^*\| \\
&\le \alpha_n\theta\|a_n - a^*\| + (1 - \alpha_n)\big(\|a_n - a^*\| + \alpha_nM_1\big) + \alpha_n\|\Phi a^* - a^*\| \\
&= \big(1 - \alpha_n(1 - \theta)\big)\|a_n - a^*\| + \alpha_n(1 - \theta)\Big(\frac{(1 - \alpha_n)M_1}{1 - \theta} + \frac{\|\Phi a^* - a^*\|}{1 - \theta}\Big) \\
&\le \big(1 - \alpha_n(1 - \theta)\big)\|a_n - a^*\| + 2\alpha_n(1 - \theta)M^*,
\end{aligned}$$
where $M^* = \sup_{n \in \mathbb{N}}\Big\{\frac{(1 - \alpha_n)M_1}{1 - \theta}, \frac{\|\Phi(a^*) - a^*\|}{1 - \theta}\Big\}$. If we set $\bar a_n = \|a_n - a^*\|$, $\bar b_n = 2\alpha_n(1 - \theta)M^*$, $\bar c_n = 0$ and $\delta_n = \alpha_n(1 - \theta)$, then by Lemma 4, $\{\|a_n - a^*\|\}$ is bounded and hence $\{a_n\}$ is bounded. Moreover, $\{b_n\}$, $\{c_n\}$, $\{d_n\}$ and $\{e_n\}$ are also bounded.
Lemma 10.
The sequence $\{a_n\}$ generated by Algorithm 6 satisfies the following:
$$\begin{aligned}
\|a_{n+1} - a^*\|^2 \le{}& \Big(1 - \frac{2\alpha_n(1 - \theta)}{1 - \alpha_n\theta}\Big)\|a_n - a^*\|^2 \\
&+ \frac{2\alpha_n(1 - \theta)}{1 - \alpha_n\theta}\Big[\frac{1}{1 - \theta}\langle\Phi a^* - a^*, a_{n+1} - a^*\rangle + \frac{3M_2}{2(1 - \theta)}\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\|\Big] \\
&- \frac{\beta_n(1 - \alpha_n)}{1 - \alpha_n\theta}\Big[\eta_n(1 - \eta_n)\|d_n - e_n\|^2 + (4 - \mu_n)\frac{\mu_nf_1^2(c_n)}{\|M(c_n)\|^2 + \|N(c_n)\|^2}\Big]. \tag{20}
\end{aligned}$$
Proof. 
Let $a^* \in \Omega$. By Lemma 2, (10) and (15), we get
$$\begin{aligned}
\|e_n - a^*\|^2 &= \|a_n + \delta_n(a_n - a_{n-1}) - a^*\|^2 \\
&= \|a_n - a^*\|^2 + \delta_n^2\|a_n - a_{n-1}\|^2 + 2\delta_n\langle a_n - a^*, a_n - a_{n-1}\rangle \\
&\le \|a_n - a^*\|^2 + \delta_n^2\|a_n - a_{n-1}\|^2 + 2\delta_n\|a_n - a_{n-1}\|\,\|a_n - a^*\| \\
&= \|a_n - a^*\|^2 + \delta_n\|a_n - a_{n-1}\|\big(\delta_n\|a_n - a_{n-1}\| + 2\|a_n - a^*\|\big) \\
&\le \|a_n - a^*\|^2 + 3M_2\delta_n\|a_n - a_{n-1}\| = \|a_n - a^*\|^2 + 3M_2\alpha_n\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\|, \tag{17}
\end{aligned}$$
where $M_2 := \sup_{n \in \mathbb{N}}\{\|a_n - a^*\|, \delta_n\|a_n - a_{n-1}\|\} > 0$.
Also,
$$\begin{aligned}
\|c_n - a^*\|^2 &= \|\eta_ne_n + (1 - \eta_n)d_n - a^*\|^2 = \|\eta_n(e_n - a^*) + (1 - \eta_n)(d_n - a^*)\|^2 \\
&= \eta_n\|e_n - a^*\|^2 + (1 - \eta_n)\|d_n - a^*\|^2 - \eta_n(1 - \eta_n)\|e_n - d_n\|^2 \\
&\le \eta_n\|e_n - a^*\|^2 + (1 - \eta_n)\|e_n - a^*\|^2 - \eta_n(1 - \eta_n)\|e_n - d_n\|^2 \\
&= \|e_n - a^*\|^2 - \eta_n(1 - \eta_n)\|e_n - d_n\|^2. \tag{18}
\end{aligned}$$
Using Lemma 2(i), along with (8), (9), (17), (18), and the Cauchy–Schwarz inequality, we obtain
$$\begin{aligned}
\|a_{n+1} - a^*\|^2 ={}& \|\alpha_n\Phi a_n + \beta_nP_{C_1}Sb_n + \gamma_nP_{C_1}Tc_n - a^*\|^2 \\
={}& \|[\beta_n(P_{C_1}Sb_n - a^*) + \gamma_n(P_{C_1}Tc_n - a^*)] + \alpha_n(\Phi a_n - a^*)\|^2 \\
\le{}& \|\beta_n(P_{C_1}Sb_n - a^*) + \gamma_n(P_{C_1}Tc_n - a^*)\|^2 + 2\alpha_n\langle\Phi a_n - a^*, a_{n+1} - a^*\rangle \\
\le{}& \beta_n^2\|Sb_n - Sa^*\|^2 + \gamma_n^2\|Tc_n - Ta^*\|^2 + \beta_n\gamma_n\big(\|Sb_n - Sa^*\|^2 + \|Tc_n - Ta^*\|^2\big) \\
&+ 2\alpha_n\langle\Phi a_n - a^*, a_{n+1} - a^*\rangle \\
\le{}& \beta_n(\beta_n + \gamma_n)\|b_n - a^*\|^2 + \gamma_n(\beta_n + \gamma_n)\|c_n - a^*\|^2 + 2\alpha_n\langle\Phi a_n - \Phi a^*, a_{n+1} - a^*\rangle \\
&+ 2\alpha_n\langle\Phi a^* - a^*, a_{n+1} - a^*\rangle \\
\le{}& \gamma_n(1 - \alpha_n)\|e_n - a^*\|^2 + \beta_n(1 - \alpha_n)\Big[\|e_n - a^*\|^2 - \eta_n(1 - \eta_n)\|d_n - e_n\|^2 \\
&- (4 - \mu_n)\frac{\mu_nf_1^2(c_n)}{\|M(c_n)\|^2 + \|N(c_n)\|^2}\Big] + 2\alpha_n\theta\|a_n - a^*\|\,\|a_{n+1} - a^*\| \\
&+ 2\alpha_n\langle\Phi a^* - a^*, a_{n+1} - a^*\rangle \\
\le{}& (1 - \alpha_n)^2\Big[\|a_n - a^*\|^2 + 3M_2\alpha_n\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\|\Big] \\
&- \beta_n(1 - \alpha_n)\Big[\eta_n(1 - \eta_n)\|d_n - e_n\|^2 + (4 - \mu_n)\frac{\mu_nf_1^2(c_n)}{\|M(c_n)\|^2 + \|N(c_n)\|^2}\Big] \\
&+ \alpha_n\theta\big(\|a_n - a^*\|^2 + \|a_{n+1} - a^*\|^2\big) + 2\alpha_n\langle\Phi a^* - a^*, a_{n+1} - a^*\rangle,
\end{aligned}$$
so that, collecting the $\|a_{n+1} - a^*\|^2$ terms and dividing by $1 - \alpha_n\theta$,
$$\begin{aligned}
\|a_{n+1} - a^*\|^2 \le{}& \Big(1 - \frac{2\alpha_n(1 - \theta)}{1 - \alpha_n\theta}\Big)\|a_n - a^*\|^2 + \frac{2\alpha_n(1 - \theta)}{1 - \alpha_n\theta}\Big[\frac{1}{1 - \theta}\langle\Phi a^* - a^*, a_{n+1} - a^*\rangle \\
&+ \frac{3M_2}{2(1 - \theta)}\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| + \frac{\alpha_nM_3}{2(1 - \theta)}\Big] \\
&- \frac{\beta_n(1 - \alpha_n)}{1 - \alpha_n\theta}\Big[\eta_n(1 - \eta_n)\|d_n - e_n\|^2 + (4 - \mu_n)\frac{\mu_nf_1^2(c_n)}{\|M(c_n)\|^2 + \|N(c_n)\|^2}\Big],
\end{aligned}$$
where $M_3 = \sup\{\|a_n - a^*\|^2 : n \in \mathbb{N}\}$.
Lemma 11.
For the sequence $\{a_n\}$ generated by Algorithm 6, we have the following:
$$\begin{aligned}
\|a_{n+1} - a^*\|^2 \le{}& (1 - \alpha_n)\|a_n - a^*\|^2 + \alpha_n\|\Phi a_n - a^*\|^2 + 3M_2(1 - \alpha_n)\alpha_n\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| \\
&- \beta_n\|b_n - c_n\|^2 + 2\beta_nM_4\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\| - \beta_n\gamma_n\|P_{C_1}Sb_n - P_{C_1}Tc_n\|^2. \tag{21}
\end{aligned}$$
Proof. 
Let $a^* \in \Omega$. Using (15), we have
$$\|c_n - \sigma_nA^*(I - J_{\lambda_2}^{L_2})Ac_n - a^*\|^2 \le \|c_n - a^*\|^2.$$
By Lemma 2 and the firm nonexpansiveness of $J_{\lambda_1}^{L_1}$, we get
$$\begin{aligned}
\|b_n - a^*\|^2 &= \|J_{\lambda_1}^{L_1}\big(I - \sigma_nA^*(I - J_{\lambda_2}^{L_2})A\big)c_n - a^*\|^2 \\
&\le \langle b_n - a^*, c_n - \sigma_nA^*(I - J_{\lambda_2}^{L_2})Ac_n - a^* \rangle \\
&= \tfrac{1}{2}\big(\|b_n - a^*\|^2 + \|c_n - \sigma_nA^*(I - J_{\lambda_2}^{L_2})Ac_n - a^*\|^2 - \|b_n - c_n + \sigma_nA^*(I - J_{\lambda_2}^{L_2})Ac_n\|^2\big) \\
&\le \tfrac{1}{2}\big(\|b_n - a^*\|^2 + \|c_n - a^*\|^2 - \|b_n - c_n + \sigma_nA^*(I - J_{\lambda_2}^{L_2})Ac_n\|^2\big) \\
&= \tfrac{1}{2}\big(\|b_n - a^*\|^2 + \|c_n - a^*\|^2 - \|b_n - c_n\|^2 - \sigma_n^2\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\|^2 \\
&\qquad + 2\sigma_n\langle c_n - b_n, A^*(I - J_{\lambda_2}^{L_2})Ac_n\rangle\big) \\
&\le \tfrac{1}{2}\big(\|b_n - a^*\|^2 + \|c_n - a^*\|^2 - \|b_n - c_n\|^2 + 2\sigma_n\|c_n - b_n\|\,\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\|\big).
\end{aligned}$$
Thus, we have
$$\begin{aligned}
\|b_n - a^*\|^2 &\le \|c_n - a^*\|^2 - \|b_n - c_n\|^2 + 2\sigma_n\|c_n - b_n\|\,\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\| \\
&\le \|e_n - a^*\|^2 - \|b_n - c_n\|^2 + 2M_4\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\|, \tag{19}
\end{aligned}$$
where $M_4 = \sup_{n \in \mathbb{N}}\{\sigma_n\|c_n - b_n\|\}$. It follows from Lemma 3 and (8), (9), (17) and (19) that
$$\begin{aligned}
\|a_{n+1} - a^*\|^2 ={}& \|\alpha_n(\Phi a_n - a^*) + \beta_n(P_{C_1}Sb_n - a^*) + \gamma_n(P_{C_1}Tc_n - a^*)\|^2 \\
\le{}& \alpha_n\|\Phi a_n - a^*\|^2 + \beta_n\|P_{C_1}Sb_n - a^*\|^2 + \gamma_n\|P_{C_1}Tc_n - a^*\|^2 \\
&- \beta_n\gamma_n\|P_{C_1}Sb_n - P_{C_1}Tc_n\|^2 \\
={}& \alpha_n\|\Phi a_n - a^*\|^2 + \beta_n\|Sb_n - Sa^*\|^2 + \gamma_n\|Tc_n - Ta^*\|^2 - \beta_n\gamma_n\|P_{C_1}Sb_n - P_{C_1}Tc_n\|^2 \\
\le{}& \alpha_n\|\Phi a_n - a^*\|^2 + \beta_n\|b_n - a^*\|^2 + \gamma_n\|c_n - a^*\|^2 - \beta_n\gamma_n\|P_{C_1}Sb_n - P_{C_1}Tc_n\|^2 \\
\le{}& \alpha_n\|\Phi a_n - a^*\|^2 + \beta_n\big(\|e_n - a^*\|^2 - \|b_n - c_n\|^2 + 2M_4\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\|\big) \\
&+ \gamma_n\|e_n - a^*\|^2 - \beta_n\gamma_n\|P_{C_1}Sb_n - P_{C_1}Tc_n\|^2 \\
={}& \alpha_n\|\Phi a_n - a^*\|^2 + (1 - \alpha_n)\|e_n - a^*\|^2 - \beta_n\|b_n - c_n\|^2 + 2\beta_nM_4\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\| \\
&- \beta_n\gamma_n\|P_{C_1}Sb_n - P_{C_1}Tc_n\|^2 \\
\le{}& (1 - \alpha_n)\|a_n - a^*\|^2 + \alpha_n\|\Phi a_n - a^*\|^2 + 3M_2(1 - \alpha_n)\alpha_n\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| \\
&- \beta_n\|b_n - c_n\|^2 + 2\beta_nM_4\|A^*(I - J_{\lambda_2}^{L_2})Ac_n\| - \beta_n\gamma_n\|P_{C_1}Sb_n - P_{C_1}Tc_n\|^2.
\end{aligned}$$
Lemma 12.
The sequence $\{a_n\}$ given in Algorithm 6 converges strongly to $p^* \in \Omega$, where $p^* = P_\Omega \circ \Phi(p^*)$.
Proof. 
As $p^* = P_\Omega \circ \Phi(p^*)$, by Lemma 10, we have
$$\begin{aligned}
\|a_{n+1} - p^*\|^2 \le{}& \Big(1 - \frac{2\alpha_n(1 - \theta)}{1 - \alpha_n\theta}\Big)\|a_n - p^*\|^2 + \frac{2\alpha_n(1 - \theta)}{1 - \alpha_n\theta}\Big\{\frac{\alpha_nM_3}{2(1 - \theta)} \\
&+ \frac{3M_2(1 - \alpha_n)^2}{2(1 - \theta)}\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| + \frac{1}{1 - \theta}\langle\Phi p^* - p^*, a_{n+1} - p^*\rangle\Big\} \\
&- \frac{\beta_n(1 - \alpha_n)\eta_n(1 - \eta_n)}{1 - \alpha_n\theta}\|e_n - d_n\|^2.
\end{aligned}$$
We now show that $\|a_n - p^*\| \to 0$ as $n \to \infty$. We take $\bar a_n = \|a_n - p^*\|^2$ and
$$\bar b_n = \frac{\alpha_nM_3}{2(1 - \theta)} + \frac{3M_2(1 - \alpha_n)^2}{2(1 - \theta)}\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| + \frac{1}{1 - \theta}\langle\Phi p^* - p^*, a_{n+1} - p^*\rangle$$
in Lemma 5. Then the result follows by showing that
$$\limsup_{k\to\infty}\langle\Phi p^* - p^*, a_{n_k+1} - p^*\rangle \le 0$$
for each subsequence $\{\|a_{n_k} - p^*\|\}$ of $\{\|a_n - p^*\|\}$ with
$$\liminf_{k\to\infty}\big(\|a_{n_k+1} - p^*\| - \|a_{n_k} - p^*\|\big) \ge 0.$$
Assume that $\{\|a_{n_k} - p^*\|\}$ is such a subsequence. By Lemma 10, we have
$$\begin{aligned}
\frac{\beta_{n_k}(1 - \alpha_{n_k})\eta_{n_k}(1 - \eta_{n_k})}{1 - \alpha_{n_k}\theta}\|e_{n_k} - d_{n_k}\|^2 \le{}& \Big(1 - \frac{2\alpha_{n_k}(1 - \theta)}{1 - \alpha_{n_k}\theta}\Big)\|a_{n_k} - p^*\|^2 - \|a_{n_k+1} - p^*\|^2 \\
&+ \frac{2\alpha_{n_k}(1 - \theta)}{1 - \alpha_{n_k}\theta}\Big\{\frac{\alpha_{n_k}M_3}{2(1 - \theta)} + \frac{3M_2(1 - \alpha_{n_k})^2}{2(1 - \theta)}\frac{\delta_{n_k}}{\alpha_{n_k}}\|a_{n_k} - a_{n_k-1}\| \\
&+ \frac{1}{1 - \theta}\langle\Phi p^* - p^*, a_{n_k+1} - p^*\rangle\Big\}. \tag{22}
\end{aligned}$$
By (22) and $\lim_{k\to\infty}\alpha_{n_k} = 0$, we obtain
$$\frac{\beta_{n_k}(1 - \alpha_{n_k})\eta_{n_k}(1 - \eta_{n_k})}{1 - \alpha_{n_k}\theta}\|e_{n_k} - d_{n_k}\|^2 \to 0.$$
This implies that
$$\|e_{n_k} - d_{n_k}\| \to 0 \ \text{as} \ k \to \infty. \tag{23}$$
Similarly, using Lemma 10, we get
$$(4 - \mu_{n_k})\frac{\mu_{n_k}f_1^2(c_{n_k})}{\|M(c_{n_k})\|^2 + \|N(c_{n_k})\|^2} \to 0 \ \text{as} \ k \to \infty. \tag{24}$$
Since $M$ and $N$ are Lipschitzian, by the assumption on $\mu_{n_k}$, we have
$$f_1^2(c_{n_k}) \to 0 \ \text{as} \ k \to \infty$$
and
$$\lim_{k\to\infty}f_1(c_{n_k}) = \lim_{k\to\infty}\tfrac{1}{2}\|(I - J_{\lambda_2}^{L_2})Ac_{n_k}\|^2 = 0.$$
Thus,
$$\|(I - J_{\lambda_2}^{L_2})Ac_{n_k}\| \to 0 \ \text{as} \ k \to \infty. \tag{25}$$
Also,
$$\|A^*(I - J_{\lambda_2}^{L_2})Ac_{n_k}\| \le \|A^*\|\,\|(I - J_{\lambda_2}^{L_2})Ac_{n_k}\| \to 0 \ \text{as} \ k \to \infty. \tag{26}$$
By Lemma 11, we have
$$\begin{aligned}
\beta_{n_k}\|b_{n_k} - c_{n_k}\|^2 \le{}& (1 - \alpha_{n_k})\|a_{n_k} - p^*\|^2 - \|a_{n_k+1} - p^*\|^2 + \alpha_{n_k}\|\Phi a_{n_k} - p^*\|^2 \\
&+ 3M_2(1 - \alpha_{n_k})\alpha_{n_k}\frac{\delta_{n_k}}{\alpha_{n_k}}\|a_{n_k} - a_{n_k-1}\| + 2M_4\beta_{n_k}\|A^*(I - J_{\lambda_2}^{L_2})Ac_{n_k}\|.
\end{aligned}$$
Using (21), (26), Remark 3 and $\lim_{k\to\infty}\alpha_{n_k} = 0$, we have
$$\|b_{n_k} - c_{n_k}\| \to 0 \ \text{as} \ k \to \infty. \tag{27}$$
Now, using Lemma 11, we obtain
$$\|P_{C_1}Sb_{n_k} - P_{C_1}Tc_{n_k}\| \to 0 \ \text{as} \ k \to \infty. \tag{28}$$
By Remark 3, we obtain
$$\|e_{n_k} - a_{n_k}\| = \delta_{n_k}\|a_{n_k} - a_{n_k-1}\| \to 0 \ \text{as} \ k \to \infty. \tag{29}$$
From (23) and (29), we get
$$\|a_{n_k} - d_{n_k}\| \to 0 \ \text{and} \ \|c_{n_k} - a_{n_k}\| \to 0 \ \text{as} \ k \to \infty. \tag{30}$$
Similarly, by applying Lemma 11, (27), (28) and (30), the following hold:
$$\|a_{n_k} - b_{n_k}\| \to 0, \quad \|P_{C_1}Sb_{n_k} - b_{n_k}\| \to 0, \quad \|P_{C_1}Tc_{n_k} - c_{n_k}\| \to 0 \ \text{as} \ k \to \infty. \tag{31}$$
We now show that $\chi(a_n) \subseteq \Omega$. First, we establish $\chi(a_n) \subseteq S(GEP(F, G))$. Since $\{a_n\}$ is bounded, we have $\chi(a_n) \ne \emptyset$. Let $p \in \chi(a_n)$; then there is a subsequence $\{a_{n_k}\}$ of $\{a_n\}$ with $a_{n_k} \rightharpoonup p$ as $k \to \infty$. From (30), this gives $d_{n_k} \rightharpoonup p$ as $k \to \infty$. Using the property of $d_{n_k} = G_{r_{n_k}}^Fe_{n_k}$, we obtain
$$F(d_{n_k}, q) + \langle Gd_{n_k}, q - d_{n_k}\rangle + \frac{1}{r_{n_k}}\langle q - d_{n_k}, d_{n_k} - e_{n_k}\rangle \ge 0 \quad \text{for all } q \in C.$$
Using the monotonicity of $F$ and $G$ (Condition $(A2)$), we get
$$\frac{1}{r_{n_k}}\langle q - d_{n_k}, d_{n_k} - e_{n_k}\rangle \ge F(q, d_{n_k}) + \langle Gq, d_{n_k} - q\rangle \quad \text{for all } q \in C.$$
By (23), $\liminf_{k\to\infty}r_{n_k} > 0$, the weak convergence $d_{n_k} \rightharpoonup p$ and Condition $(A4)$, letting $k \to \infty$ yields
$$F(q, p) + \langle Gq, p - q\rangle \le 0 \quad \text{for all } q \in C. \tag{32}$$
Note that $q_\alpha = \alpha q + (1 - \alpha)p \in C$ for all $q \in C$ and $\alpha \in (0, 1]$. From (32) and using Conditions $(A1)$–$(A4)$, we obtain
$$F(q_\alpha, p) + \langle Gq_\alpha, p - q_\alpha\rangle \le 0.$$
Thus,
$$\begin{aligned}
0 &= F(q_\alpha, q_\alpha) + \langle Gq_\alpha, q_\alpha - q_\alpha\rangle \\
&\le \alpha\big[F(q_\alpha, q) + \langle Gq_\alpha, q - q_\alpha\rangle\big] + (1 - \alpha)\big[F(q_\alpha, p) + \langle Gq_\alpha, p - q_\alpha\rangle\big] \\
&\le \alpha\big[F(q_\alpha, q) + \langle Gq_\alpha, q - q_\alpha\rangle\big].
\end{aligned}$$
That is,
$$F(q_\alpha, q) + \langle Gq_\alpha, q - q_\alpha\rangle \ge 0 \quad \text{for all } q \in C.$$
Letting $\alpha \to 0^+$ and using Condition $(A3)$, we have
$$F(p, q) + \langle Gp, q - p\rangle \ge 0 \quad \text{for all } q \in C,$$
which gives $p \in S(GEP(F, G))$.
We now claim that $p \in \Gamma$. As $f_1$ is weakly lower semi-continuous, by (24), we have
$$0 \le f_1(p) \le \liminf_{k\to\infty}f_1(c_{n_k}) = 0,$$
which gives
$$f_1(p) = \tfrac{1}{2}\|(I - J_{\lambda_2}^{L_2})Ap\|^2 = 0.$$
By Remark 1, we have
$$Ap \in L_2^{-1}(0), \ \text{that is}, \ 0 \in L_2(Ap). \tag{33}$$
Note that $b_{n_k} = J_{\lambda_1}^{L_1}\big(c_{n_k} - \sigma_{n_k}A^*(I - J_{\lambda_2}^{L_2})Ac_{n_k}\big)$ can be written as $c_{n_k} - \sigma_{n_k}A^*(I - J_{\lambda_2}^{L_2})Ac_{n_k} \in b_{n_k} + \lambda_1L_1(b_{n_k})$; that is,
$$(c_{n_k} - b_{n_k}) - \sigma_{n_k}A^*(I - J_{\lambda_2}^{L_2})Ac_{n_k} \in \lambda_1L_1(b_{n_k}). \tag{34}$$
Taking the limit as $k \to \infty$ in (34) and using (26), (27), (31), together with the fact that the graph of a maximal monotone operator is weakly–strongly closed, we get $0 \in L_1(p)$. Combined with (33), this implies that $p \in \Gamma$.
Note that $p \in B(S) \cap B(T)$. Indeed, $S$ and $T$ are best proximally nonexpansive; by Lemma 7, $I - P_{C_1}S$ and $I - P_{C_1}T$ are demiclosed at zero; and by (31), we get $P_{C_1}Sp = p$ and $P_{C_1}Tp = p$, and hence $p \in B(S) \cap B(T)$ by Lemma 6. Therefore, $\chi(a_n) \subseteq \Omega$.
Now, the boundedness of $\{a_{n_k}\}$ implies that there is a subsequence $\{a_{n_{k_i}}\}$ of $\{a_{n_k}\}$ with $a_{n_{k_i}} \rightharpoonup p$ and
$$\lim_{i\to\infty}\langle\Phi p^* - p^*, a_{n_{k_i}} - p^*\rangle = \limsup_{k\to\infty}\langle\Phi p^* - p^*, a_{n_k} - p^*\rangle.$$
As $p^* = P_\Omega \circ \Phi(p^*)$, we have
$$\limsup_{k\to\infty}\langle\Phi p^* - p^*, a_{n_k} - p^*\rangle = \lim_{i\to\infty}\langle\Phi p^* - p^*, a_{n_{k_i}} - p^*\rangle = \langle\Phi p^* - p^*, p - p^*\rangle \le 0. \tag{35}$$
Now, using (35), we have
$$\limsup_{k\to\infty}\langle\Phi p^* - p^*, a_{n_k+1} - p^*\rangle = \limsup_{k\to\infty}\langle\Phi p^* - p^*, a_{n_k} - p^*\rangle = \langle\Phi p^* - p^*, p - p^*\rangle \le 0. \tag{36}$$
Applying Lemma 5 to (20), with (36), $\lim_{n\to\infty}\frac{\delta_n}{\alpha_n}\|a_n - a_{n-1}\| = 0$ and $\lim_{n\to\infty}\alpha_n = 0$, we obtain $\lim_{n\to\infty}\|a_n - p^*\|^2 = 0$.

3. Numerical Experiments

In this section, numerical examples are given to support our proposed Algorithm 6. All numerical experiments were evaluated in MATLAB (version R2018a).
In the following Example 1, we compare the convergence behavior and efficiency of our proposed Algorithm 6 with the comparable algorithm given in Equation (12) of Husain et al. [40]; the comparison can be seen in Table 1 and Figure 1.
Example 1.
Suppose that $H_1 = H_2 = H = \mathbb{R}$ and $C_1 = C_2 = C = \mathbb{R}$ with the usual inner product and the induced norm. Set $C_0 = [-10, 10]$. Define $F : C \times C \to \mathbb{R}$ by $F(a, b) = ab - a^2$ for all $a, b \in C$ and $G : H \to H$ by $G(a) = 3a$ for all $a \in H$. The mapping $A : H_1 \to H_2$ is given by $A(a) = \frac{9a}{4}$ for all $a \in H_1$, and $S : C_1 \to C_2$ is defined by $S(a) = \frac{a}{4}$ for all $a \in C_1$. Further, we set $r_n = \frac{1}{5}$, $\Phi(a) = \frac{a}{5}$, $\delta_n = 0.5$, $\eta_n = \frac{n}{2n+1}$, $\alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{1}{2}(1 - \alpha_n) = \frac{n}{2n+2}$, and $\gamma_n = 1 - \alpha_n - \beta_n$ for all $n \ge 1$.
Note that $F$ satisfies $(A1)$–$(A4)$ of Assumption A1 and $A$ is a bounded linear operator on $\mathbb{R}$ whose adjoint $A^*$ satisfies $\|A^*\| = \frac{9}{4}$. Set $\tau = \frac{1}{20}$. Note that $S(GEP(F, G)) = \{0\}$. Take $S = T$, which is a best proximally nonexpansive mapping with $S(C_0) \subseteq D_0$ and $B(S) = \{0\}$. Hence, we have $\Omega = \{0\}$.
Remark 4.
We consider two cases in the comparison analysis: one with initial points $a_0 = 2$, $a_1 = 1.5$ and the other with $a_0 = -2$, $a_1 = -1.5$. From Table 1, we observe that the sequence $\{a_n\}$ generated by Algorithm 6 converges to 0, which is a best proximity point of $S$. From Table 1 and Figure 1, it is observed that our proposed algorithm converges faster than the algorithm defined in Equation (12) of Husain et al. [40].
In Example 2, we compare the convergence behavior and efficiency of our proposed Algorithm 6 with the comparable algorithm given in Equation (6) of Suantai and Tiammee [39]; the comparison can be seen in Table 2 and Figure 2.
Example 2.
Suppose that $H_1 = \mathbb{R}^2$, $H_2 = \mathbb{R}$, and $C_1 = [-1, 0] \times [0, 1]$, $C_2 = [3, 7] \times [0, 1]$, $C = [-3, 0]$. Define $F : C \times C \to \mathbb{R}$ by $F(a, b) = (a - 1)(b - a)$ for all $a, b \in C$. Set $G$ as the zero mapping, the mapping $A : H_1 \to H_2$ as $A(a, b) = 3a$ for all $(a, b) \in H_1$, and $S : C_1 \to C_2$ as $S(a, b) = \big(3 - a, \frac{b}{2}\big)$ for all $(a, b) \in C_1$, with $C_0 = \{(0, b) : 0 \le b \le 1\}$. Further, we set $r_n = \frac{n}{n+1}$, $\tau = \frac{1}{20}$, $\Phi(a) = \frac{a}{4}$, $\delta_n = 0.5$, $\eta_n = 0.5$, $\alpha_n = \frac{n}{2n+1}$, $\beta_n = \frac{1}{2}(1 - \alpha_n) = \frac{3n+2}{4n+2}$, and $\gamma_n = 1 - \alpha_n - \beta_n$ for all $n \ge 1$.
Note that the mapping $F$ satisfies $(A1)$–$(A4)$ of Assumption A1 and $S(GEP(F, G)) = \{0\}$. Set $S = T$, which is a best proximally nonexpansive mapping with $S(C_0) \subseteq D_0$ and $B(S) = \{(0, 0)\}$.
Remark 5.
We take the initial point $a_0 = a_1 = (0, 1)$ in the comparison analysis. From Table 2, we observe that the sequence $\{a_n\}$ generated by Algorithm 6 converges to $(0, 0)$, which is a best proximity point of $S$. From Table 2 and Figure 2, our proposed algorithm converges faster than the algorithm defined in Equation (6) of Suantai and Tiammee [39].
Example 3.
Assume that $H_1 = H_2 = \mathbb{R}^3$ and $C_1 = C_2 = C = \{a \in \mathbb{R}^3 : \langle u, a\rangle \le v\}$. Define the following: $\mu_n = 3 - \frac{1}{3n-1}$, $\eta_n = \frac{n}{2n+1}$, $\tau_n = \frac{1}{3n+1}$, $r_n = \frac{1}{3n+3}$, $\lambda = \lambda_1 = \lambda_2 = 0.55$, $\delta_n = 0.8$, $\beta_n = \gamma_n = \frac{n+2}{2n+5}$, $\alpha_n = \frac{1}{2n+7}$, $\Phi(a) = \frac{a}{4}$, $S(a) = \frac{a}{7}$, $T(a) = \frac{a}{5}$. Further, set $\sigma = 0.0001$ in Algorithms 2 and 3, and $\eta = 0.45$, $S = T$ in Algorithm 3. Moreover,
$$L_1 = \begin{pmatrix} 7 & 0 & 0 \\ 5 & 5 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \quad L_2 = \begin{pmatrix} 8 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad A = \begin{pmatrix} 6 & 3 & 1 \\ 8 & 7 & 5 \\ 3 & 6 & 2 \end{pmatrix}.$$
For $r > 0$, $G_r^F(a) = \dfrac{v - \langle u, a\rangle}{\|u\|^2}u + a$. We fix $u = (7, 2, 2)$ and $v = 1$ and select different initial guesses with the stopping criterion $Tol = \|a_{n+1} - a_n\| < 10^{-3}$. We also present the error curves plotted against the number of iterations for each test case. The corresponding graphical results are shown in Figure 3.
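Using the data of this example, the earlier Algorithm 6 sketch can be instantiated as follows (our illustration only: the resolvents are $J_{\lambda_i}^{L_i} = (I + \lambda L_i)^{-1}$, computed by solving linear systems, $P_C$ is the half-space projection, the initial points are taken from Figure 3a, and the parameters are copied from the example as stated; `algorithm6` is the function from the earlier sketch):

```python
import numpy as np
# Assumes algorithm6 from the sketch in Section 2 is in scope.

L1 = np.array([[7., 0, 0], [5, 5, 0], [0, 0, 2]])
L2 = np.array([[8., 0, 0], [0, 7, 0], [0, 0, 3]])
A  = np.array([[6., 3, 1], [8, 7, 5], [3, 6, 2]])
u, v, lam = np.array([7., 2, 2]), 1.0, 0.55
I3 = np.eye(3)

J1 = lambda x: np.linalg.solve(I3 + lam * L1, x)     # (I + lam*L1)^(-1) x
J2 = lambda x: np.linalg.solve(I3 + lam * L2, x)
P_C = lambda x: x if u @ x <= v else x - ((u @ x - v) / (u @ u)) * u
GrF = lambda e, r: e + ((v - u @ e) / (u @ u)) * u   # closed form above (independent of r)

a = algorithm6(a0=np.array([5., 2.2, 3.45]), a1=np.array([5., 1.25, 3.2]),
               Phi=lambda x: x / 4, S=lambda x: x / 7, T=lambda x: x / 5,
               A=A, J1=J1, J2=J2, GrF=GrF, P_C=P_C, P_C1=P_C,
               alpha=lambda n: 1 / (2 * n + 7), beta=lambda n: (n + 2) / (2 * n + 5),
               gamma=lambda n: (n + 2) / (2 * n + 5), eta=lambda n: n / (2 * n + 1),
               tau=lambda n: 1 / (3 * n + 1), r=lambda n: 1 / (3 * n + 3))
print(a)
```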
Remark 6.
Observe Figure 3, where different initial guesses are tested and the corresponding errors are plotted against the number of iterations. From these results, we note that varying the starting points and parameters does not significantly affect the performance of our iterative scheme in terms of its convergence speed. Figure 3 clearly illustrates that the proposed algorithm performs efficiently and remains stable under different initial conditions.

4. Application to Image Restoration

Images play a crucial role in many areas such as engineering, medical diagnosis and security applications. In this experiment, we apply the proposed algorithm to restore images that have been corrupted by noise. For evaluation, Algorithm 6 is compared with Algorithms 2 and 3 on the image restoration task using MATLAB.
Examining the restored images and comparing the corresponding signal-to-noise ratio (SNR) values, it becomes evident that the proposed algorithm performs better than the other methods considered in this study.
We recall the split linear inverse problem introduced in [44]: find $a^* \in H_1$ such that
$$P(a^*) + Q(a^*) = \min_{a \in H_1}[P(a) + Q(a)], \tag{37}$$
where $P : H_1 \to \mathbb{R}$ is convex and continuously differentiable and $Q : H_1 \to \mathbb{R}$ is convex and lower semicontinuous.
A commonly studied special case of (37) is the image restoration model, written as follows:
$$\min_{a \in \mathbb{R}^n}\|La - b\|_2^2 + \lambda\|a\|_1, \tag{38}$$
where $\lambda > 0$, $a \in \mathbb{R}^n$ represents the original image, $b \in \mathbb{R}^m$ is the noisy image and $L : \mathbb{R}^n \to \mathbb{R}^m$ is a linear (blurring) operator. In this setting, we consider $P(a) = \|La - b\|_2^2$ and $Q(a) = \lambda\|a\|_1$.
The function $P$ is convex and continuously differentiable, and its gradient $\nabla P$ is monotone and Lipschitz continuous with Lipschitz constant $2\|L\|^2$. Since $Q$ is convex and lower semicontinuous, its subdifferential $\partial Q$ is maximal monotone. Therefore, Problem (37) becomes equivalent to the following monotone inclusion:
$$P(a^*) + Q(a^*) = \min_{a \in H_1}[P(a) + Q(a)] \iff 0 \in \nabla P(a^*) + \partial Q(a^*). \tag{39}$$
If we choose $L_1 = \nabla P + \partial Q$, $L_2 = 0$ and $A = 0$ in the algorithms, with $C_1 = C_2$, then the generated sequence $\{a_n\}$ converges strongly to the solution of the monotone inclusion problem stated in (39).
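Under these identifications, a minimal forward–backward (proximal-gradient) sketch for model (38) is instructive (our illustration with a small random problem, not the paper's MATLAB implementation; the proximal map of $\lambda\|\cdot\|_1$ is the soft-thresholding operator recalled in Section 1):

```python
import numpy as np

def restore(L, b, lam, n_iter=200):
    """Forward-backward iteration for min ||L a - b||_2^2 + lam * ||a||_1."""
    step = 1.0 / (2 * np.linalg.norm(L, 2) ** 2)   # 1 / Lipschitz const of grad P
    a = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = 2 * L.T @ (L @ a - b)               # grad P(a)
        z = a - step * grad                        # forward (gradient) step
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # backward (prox) step
    return a

rng = np.random.default_rng(1)
L = rng.normal(size=(30, 50))
x_true = np.zeros(50); x_true[:3] = [1., -2., 0.5]
b = L @ x_true + 0.01 * rng.normal(size=30)
print(np.round(restore(L, b, lam=0.1)[:5], 2))     # recovers the sparse entries
```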
For the numerical evaluation, we select several color and grayscale images from a standard benchmark dataset. For the control parameters, we use a Gaussian blur kernel of size 9 with standard deviation $\sigma = 2$, a Gaussian noise level of $0.01$, and set $\lambda = \lambda_1 = 0.55$, $\eta_n = 0.75$, with a maximum of 30 iterations. To assess the quality of the restored images, we use both visual inspection and the signal-to-noise ratio (SNR), expressed in decibels (dB) and defined by
$$\mathrm{SNR} = 20\log_{10}\frac{\|a\|}{\|a - a_r\|},$$
where $a$ denotes the original image and $a_r$ is the reconstructed (deblurred) image. A higher SNR value indicates better reconstruction performance. The restored results for the images are displayed in Figure 4, Figure 5, Figure 6 and Figure 7.
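For reference, the SNR used in our comparisons can be computed with a one-line helper (our illustrative addition):

```python
import numpy as np

def snr_db(a, a_r):
    """SNR = 20*log10(||a|| / ||a - a_r||), in decibels."""
    return 20.0 * np.log10(np.linalg.norm(a) / np.linalg.norm(a - a_r))
```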
Remark 7.
Observe Figure 4, Figure 5, Figure 6 and Figure 7: Algorithm 6 performs better than Algorithms 2 and 3. It yields higher SNR values in the image restoration experiments for both color and grayscale images, which confirms its practical advantage for denoising and deblurring tasks.

5. Conclusions and Future Work

In this paper, we approximated the common solution of the common best proximity point, split variational inclusion and generalized equilibrium problems using an inertial-type iterative algorithm. The main features of our paper are as follows: (1) The step size in the algorithms proposed in [39,40,42] requires prior knowledge of the operator norm, whereas our proposed algorithm involves a self-adaptive step size. (2) The strong convergence of the proposed Algorithm 6 is proved. (3) A common solution of three nonlinear problems is approximated instead of finding their solutions separately. (4) In view of Remarks 4 and 7, our approach is more efficient and converges faster than the algorithms proposed in [39,40]. In future research, we may extend our self-adaptive inertial scheme to non-convex models and large-scale imaging problems requiring distributed computation. We may also apply our main algorithm to solve compressed sensing problems.

Author Contributions

All authors contributed to the study conception, design, and computations. M.W.A. wrote the first draft of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The second author gratefully acknowledges the Faculty of Engineering and the Built Environment, University of Johannesburg, South Africa, for providing the postdoctoral fellowship which supported this research.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

All authors are grateful to reviewers for their useful remarks which helped improve the paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef]
  2. Chen, P.; Huang, J.; Zhang, X. A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011. [Google Scholar] [CrossRef]
  3. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Non-Expansive Mappings; CRC Press: Boca Raton, FL, USA, 1984. [Google Scholar]
  4. Deutsch, F. Best Approximation in Inner Product Spaces; Springer: Berlin/Heidelberg, Germany, 2001; Volume 7. [Google Scholar]
  5. Zhao, J.; Liang, Y.; Liu, Y.; Cho, Y.J. Split equilibrium, variational inequality and fixed point problems for multi-valued mappings in Hilbert spaces. Appl. Comput. Math 2018, 17, 271–283. [Google Scholar]
  6. Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: Berlin/Heidelberg, Germany, 2009; Volume 6. [Google Scholar]
  7. Chuang, C.-S. Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 2013, 350. [Google Scholar] [CrossRef]
  8. Osilike, M.O.; Igbokwe, D.I. Weak and strong convergence theorems for fixed points of pseudocontractions and solutions of monotone type operator equations. Comput. Math. Appl. 2000, 40, 559–567. [Google Scholar] [CrossRef]
  9. Maingé, P.-E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479. [Google Scholar] [CrossRef]
  10. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750. [Google Scholar] [CrossRef]
  11. Barbagallo, A. Existence and regularity of solutions to nonlinear degenerate evolutionary variational inequalities with applications to dynamic network equilibrium problems. Appl. Math. Comput. 2009, 208, 1–13. [Google Scholar]
  12. Dafermos, S.; Nagurney, A. A network formulation of market equilibrium problems and variational inequalities. Oper. Res. Lett. 1984, 3, 247–250. [Google Scholar] [CrossRef]
  13. He, R. Coincidence theorem and existence theorems of solutions for a system of Ky Fan type minimax inequalities in FC-spaces. Adv. Fixed Point Theory 2012, 2, 47–57. [Google Scholar]
  14. Qin, X.; Cho, S.Y.; Kang, S.M. Strong convergence of shrinking projection methods for quasi-ϕ-nonexpansive mappings and equilibrium problems. J. Comput. Appl. Math. 2010, 234, 750–760. [Google Scholar] [CrossRef]
  15. Sadiq Basha, S. Best proximity points: Optimal solutions. J. Optim. Theory Appl. 2011, 151, 210–216. [Google Scholar] [CrossRef]
  16. Sadiq Basha, S. Best proximity points: Global optimal approximate solutions. J. Glob. Optim. 2011, 49, 15–21. [Google Scholar] [CrossRef]
  17. Gabeleh, M. Best proximity point theorems via proximal non-self mappings. J. Optim. Theory Appl. 2015, 164, 565–576. [Google Scholar] [CrossRef]
  18. Suparatulatorn, R.; Suantai, S. A new hybrid algorithm for global minimization of best proximity points in Hilbert spaces. Carpath. J. Math. 2019, 35, 95–102. [Google Scholar] [CrossRef]
  19. Bunlue, N.; Suantai, S. Hybrid algorithm for common best proximity points of some generalized nonself nonexpansive mappings. Math. Methods Appl. Sci. 2018, 41, 7655–7666. [Google Scholar] [CrossRef]
  20. Moussaoui, A.; Park, C.; Melliani, S. New best proximity point results via simulation functions in fuzzy metric spaces. Bol. Soc. Paran. Mat. 2025, 43, 1–12. [Google Scholar]
  21. Husain, S.; Furkan, M.; Khairoowala, M.U. Strong convergence of inertial shrinking projection method for split best proximity point problem and mixed equilibrium problem. J. Anal. 2025, 33, 31–48. [Google Scholar] [CrossRef]
  22. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441. [Google Scholar] [CrossRef]
  23. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 1996, 95, 155–270. [Google Scholar]
  24. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  25. Tang, Y. Convergence analysis of a new iterative algorithm for solving split variational inclusion problems. J. Ind. Manag. Optim. 2020, 16, 945–964. [Google Scholar] [CrossRef]
  26. Raj, V.S. Best proximity point theorems for non-self mappings. Fixed Point Theory 2013, 14, 447–454. [Google Scholar]
  27. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef]
  28. Iiduka, H. Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 2012, 236, 1733–1742. [Google Scholar] [CrossRef]
  29. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124. [Google Scholar] [CrossRef]
  30. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef]
  31. Asghar, M.W.; Abbas, M. A self-adaptive viscosity algorithm for split fixed point problems and variational inequality problems in Banach spaces. J. Non. Con. Anal. 2023, 24, 341–361. [Google Scholar]
  32. Abbas, M.; Asghar, M.W.; De la Sen, M. Approximation of the solution of delay fractional differential equation using AA-iterative scheme. Mathematics 2022, 10, 273. [Google Scholar] [CrossRef]
  33. Asghar, M.W.; Abbas, M.; Eyni, D.C.; Omaba, M.E. Iterative approximation of fixed points of generalized αm-nonexpansive mappings in modular spaces. AIMS Math. 2023, 8, 26922–26944. [Google Scholar] [CrossRef]
  34. Suanoom, C.; Gebrie, A.G.; Grace, T. The convergence of AA-iterative algorithm for generalized AK-α-nonexpansive mappings in Banach spaces. Sci. Technol. Asia 2023, 28, 82–90. [Google Scholar]
  35. Beg, I.; Abbas, M.; Asghar, M.W. Convergence of AA-iterative algorithm for generalized α-nonexpansive mappings with an application. Mathematics 2022, 10, 4375. [Google Scholar] [CrossRef]
  36. Abbas, M.; Ciobanescu, C.; Asghar, M.W.; Omame, A. Solution approximation of fractional boundary value problems and convergence analysis using AA-iterative scheme. AIMS Math. 2024, 9, 13129–13158. [Google Scholar] [CrossRef]
  37. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
  38. Wangkeeree, R.; Rattanaseeha, K.; Wangkeeree, R. The general iterative methods for split variational inclusion problem and fixed point problem in Hilbert spaces. J. Comput. Anal. Appl 2018, 25, 19–31. [Google Scholar]
  39. Suantai, S.; Tiammee, J. The shrinking projection method for solving split best proximity point and equilibrium problems. Filomat 2021, 35, 1133–1140. [Google Scholar] [CrossRef]
  40. Husain, S.; Khan, F.A.; Furkan, M.; Khairoowala, M.U.; Eljaneid, N.H. Inertial projection algorithm for solving split best proximity point and mixed equilibrium problems in Hilbert spaces. Axioms 2022, 11, 321. [Google Scholar] [CrossRef]
  41. Asghar, M.W.; Abbas, M.; Rouhani, B.D. The AA-Viscosity Algorithm for Fixed-Point, Generalized Equilibrium and Variational Inclusion Problems. Axioms 2024, 13, 38. [Google Scholar] [CrossRef]
  42. Sharma, S.; Chandok, S. Split fixed point problems for quasi-nonexpansive mappings in Hilbert spaces. Politehn. Univ. Bucharest Sci. Bull. Ser. A Appl. Math. Phys. 2024, 86, 109–118. [Google Scholar]
  43. Aubin, J.-P. Optima and Equilibria: An Introduction to Nonlinear Analysis; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002; Volume 140. [Google Scholar]
  44. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
Figure 1. Comparison of Husain et al. [40] and the proposed algorithm.
Figure 2. Error plot: Comparison of Suantai and Tiammee [39] and the proposed algorithm.
Figure 3. Error analysis plots for different initial values. (a) $a_0 = (5, 2.2, 3.45)$, $a_1 = (5, 1.25, 3.2)$. (b) $a_0 = (6.25, 2.12, 3.45)$, $a_1 = (4, 1.45, 3)$. (c) $a_0 = (3.56, 2.67, 4.35)$, $a_1 = (5.75, 0.62, 1.22)$. (d) $a_0 = (24, 11.7, 33.5)$, $a_1 = (15.5, 8.45, 20)$.
Figure 4. Image restoration results for Saturn.
Figure 5. Image restoration results for Peppers.
Figure 6. Image restoration results for Coins.
Figure 7. Image restoration results for Cameraman.
Table 1. Numerical comparison for both algorithms with different initial values $a_0$, $a_1$.

No. Iter | Husain et al. [40] ($a_0 = 2$, $a_1 = 1.5$) | Proposed Algorithm ($a_0 = 2$, $a_1 = 1.5$) | Husain et al. [40] ($a_0 = -2$, $a_1 = -1.5$) | Proposed Algorithm ($a_0 = -2$, $a_1 = -1.5$)
1 | 2.000000 | 2.000000 | -2.000000 | -2.000000
2 | 1.500000 | 1.500000 | -1.500000 | -1.500000
3 | 0.976922 | 0.305917 | -0.976922 | -0.305917
4 | 0.593834 | 0.054838 | -0.593834 | -0.054838
5 | 0.349919 | 0.004065 | -0.349919 | -0.004065
6 | 0.202574 | -0.002829 | -0.202574 | 0.002829
7 | 0.115941 | -0.001055 | -0.115941 | 0.001055
8 | 0.065829 | -0.000063 | -0.065829 | 0.000063
9 | 0.037157 | 0.000066 | -0.037157 | -0.000066
10 | 0.020878 | 0.000023 | -0.020878 | -0.000023
11 | 0.011689 | 0.000001 | -0.011689 | -0.000001
12 | 0.006525 | -0.000002 | -0.006525 | 0.000002
13 | 0.003634 | -0.000001 | -0.003634 | 0.000001
14 | 0.002020 | -0.000000 | -0.002020 | 0.000000
15 | 0.001120 | 0.000000 | -0.001120 | -0.000000
16 | 0.000621 | 0.000000 | -0.000621 | -0.000000
17 | 0.000343 | -0.000000 | -0.000343 | 0.000000
18 | 0.000190 | -0.000000 | -0.000190 | 0.000000
19 | 0.000105 | -0.000000 | -0.000105 | 0.000000
20 | 0.000058 | 0.000000 | -0.000058 | -0.000000
21 | 0.000032 | 0.000000 | -0.000032 | -0.000000
22 | 0.000018 | 0.000000 | -0.000018 | -0.000000
23 | 0.000010 | -0.000000 | -0.000010 | 0.000000
24 | 0.000005 | -0.000000 | -0.000005 | 0.000000
25 | 0.000003 | -0.000000 | -0.000003 | 0.000000
26 | 0.000002 | 0.000000 | -0.000002 | -0.000000
27 | 0.000001 | 0.000000 | -0.000001 | -0.000000
28 | 0.000000 | 0.000000 | -0.000000 | -0.000000
29 | 0.000000 | -0.000000 | -0.000000 | 0.000000
30 | 0.000000 | -0.000000 | -0.000000 | 0.000000
Table 2. Numerical comparison for both algorithms.

No. Iter | Suantai [39] | $E_n = \|a_{n+1} - a_n\|$ | Proposed Algorithm | $E_n = \|a_{n+1} - a_n\|$
0 | (0, 1.0000) | — | (0, 1.0000) | —
1 | (0, 0.916667) | 0.0833333 | (0, 1) | 0.454861
2 | (0, 0.825) | 0.0916667 | (0, 0.545139) | 0.458831
3 | (0, 0.736607) | 0.0883929 | (0, 0.0863084) | 0.072751
4 | (0, 0.654762) | 0.0818452 | (0, 0.0135574) | 0.0126738
5 | (0, 0.580357) | 0.0744048 | (0, 0.000883564) | 0.000835067
6 | (0, 0.513393) | 0.0669643 | (0, 4.84972 × 10−5) | 4.67119 × 10−5
7 | (0, 0.453497) | 0.0598958 | (0, 1.78533 × 10−6) | 1.72936 × 10−6
8 | (0, 0.400144) | 0.0533526 | (0, 5.59731 × 10−8) | 5.4418 × 10−8
9 | (0, 0.352759) | 0.0473855 | (0, 1.55506 × 10−9) | 1.51619 × 10−9
10 | (0, 0.310764) | 0.0419951 | (0, 3.88768 × 10−11) | 3.79933 × 10−11
11 | (0, 0.273607) | 0.0371565 | (0, 8.83564 × 10−13) | 8.65157 × 10−13
12 | (0, 0.240774) | 0.0328329 | (0, 1.84076 × 10−14) | 1.80536 × 10−14
13 | (0, 0.211792) | 0.0289821 | (0, 3.53992 × 10−16) | 3.47671 × 10−16
14 | (0, 0.186231) | 0.0255611 | (0, 6.32129 × 10−18) | 6.21593 × 10−18
15 | (0, 0.163703) | 0.022528 | (0, 1.05355 × 10−19) | 1.03709 × 10−19
16 | (0, 0.14386) | 0.0198428 | (0, 1.64617 × 10−21) | 1.62196 × 10−21
17 | (0, 0.126392) | 0.0174688 | (0, 2.42084 × 10−23) | 2.38721 × 10−23
18 | (0, 0.11102) | 0.015372 | (0, 3.36227 × 10−25) | 3.31803 × 10−25
19 | (0, 0.097498) | 0.0135216 | (0, 4.42404 × 10−27) | 4.36874 × 10−27
20 | (0, 0.085608) | 0.01189 | (0, 5.53005 × 10−29) | 5.46422 × 10−29
21 | (0, 0.0751559) | 0.0104521 | (0, 6.5834 × 10−31) | 6.50859 × 10−31
22 | (0, 0.0659702) | 0.00918572 | (0, 7.48113 × 10−33) | 7.39982 × 10−33
23 | (0, 0.0578994) | 0.00807082 | (0, 8.13167 × 10−35) | 8.04696 × 10−35
24 | (0, 0.0508096) | 0.00708972 | (0, 8.47049 × 10−37) | 8.38578 × 10−37
25 | (0, 0.044583) | 0.00622667 | (0, 8.47049 × 10−39) | 8.38904 × 10−39
26 | (0, 0.0391152) | 0.00546772 | (0, 8.1447 × 10−41) | 8.06928 × 10−41
27 | (0, 0.0343147) | 0.00480051 | (0, 7.54139 × 10−43) | 7.47405 × 10−43
28 | (0, 0.0301006) | 0.00421409 | (0, 6.73338 × 10−45) | 6.67534 × 10−45
29 | (0, 0.0264018) | 0.00369881 | (0, 5.80464 × 10−47) | 5.75627 × 10−47
30 | (0, 0.0231557) | 0.00324613 | (0, 4.8372 × 10−49) | 4.79819 × 10−49
⋮ | ⋮ | ⋮ | ⋮ | ⋮
145 | (0, 5.54148 × 10−9) | 7.88532 × 10−10 | (0, 0) | 0
146 | (0, 4.85116 × 10−9) | 6.90321 × 10−10 | (0, 0) | 0
147 | (0, 4.24682 × 10−9) | 6.04339 × 10−10 | (0, 0) | 0
148 | (0, 3.71775 × 10−9) | 5.29065 × 10−10 | (0, 0) | 0
149 | (0, 3.25459 × 10−9) | 4.63165 × 10−10 | (0, 0) | 0
150 | (0, 2.84912 × 10−9) | 4.05472 × 10−10 | (0, 0) | 0
