Article

Some Modified Mann-Type Inertial Forward–Backward Iterative Methods for Monotone Inclusion Problems

1
Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk 71491, Saudi Arabia
2
Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), P.O. Box 65892, Riyadh 11566, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(24), 4000; https://doi.org/10.3390/math13244000
Submission received: 6 October 2025 / Revised: 10 December 2025 / Accepted: 13 December 2025 / Published: 15 December 2025
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications: 3rd Edition)

Abstract

In this paper, we propose three variants of Mann-type inertial forward–backward iterative methods for approximating the minimum-norm solution of the monotone inclusion problem and the fixed points of nonexpansive mappings. In the first two methods, the Mann-type iteration is computed together with the inertial extrapolation and the fixed-point iteration at the start of the process, while the last method computes only the Mann-type iteration with inertial extrapolation at that stage. We establish strong convergence results for each method under appropriate assumptions and discuss some applications of the presented methods. Finally, we present numerical examples in both finite- and infinite-dimensional Hilbert spaces to demonstrate their efficiency. A comparative analysis with existing methods is also provided.

1. Introduction

Throughout this work, we denote a real Hilbert space by Ξ and let D represent a closed and convex subset of Ξ . The strong convergence of a sequence { w m } to w is written as w m → w , while weak convergence is denoted by w m ⇀ w .
One of the central themes in nonlinear analysis is the fixed point problem (FPP). This problem serves as a unifying framework for a wide range of applications, including finance, network analysis, and optimization; see, e.g., [1,2,3,4] and the references therein.
Formally, given a nonexpansive mapping S : Ξ Ξ , the fixed point problem is stated as follows:
Find σ ∈ Ξ , so that S ( σ ) = σ .
In recent years, numerous approaches have been developed to address the fixed point problem. Among them, Mann’s iterative technique [5] has served as a fundamental source of motivation for many schemes designed to approximate fixed points. Given an initial point w 0 D , the iteration is defined by the following:
w m+1 = p m w m + ( 1 − p m ) S w m , m ≥ 0 ,
where { p m } is a sequence of control parameters. Under suitable conditions on { p m } , the sequence { w m } converges weakly to a fixed point σ of S. Building upon this foundation, several extensions have been proposed to enhance convergence properties and improve flexibility. One notable generalization is the following iterative scheme:
w m+1 = ( 1 − p m − q m ) w m + q m S w m ,
which introduces two control parameters, p m and q m , along with the sequence { S w m } . This formulation not only broadens the scope of Mann’s iteration but also allows for acceleration, improved stability, and strong convergence in contexts where the classical method guarantees only weak convergence. As a result, it has become a cornerstone in the study of fixed point problems, variational inequalities, and convex optimization.
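In a concrete setting the two-parameter scheme above reduces to a few lines of code. The following Python sketch is our own illustration, not taken from the paper: the map S(w) = w/2 and the parameter sequences are toy choices, with S nonexpansive and its unique fixed point at the origin.

```python
import numpy as np

def mann_iteration(S, w0, p, q, iters=200):
    """Generalized Mann scheme: w_{m+1} = (1 - p_m - q_m) w_m + q_m S(w_m),
    with control sequences p, q supplied as callables m -> (0, 1)."""
    w = np.asarray(w0, dtype=float)
    for m in range(1, iters + 1):
        w = (1.0 - p(m) - q(m)) * w + q(m) * S(w)
    return w

# Illustrative nonexpansive map S(w) = w / 2; its unique fixed point is 0.
S = lambda w: w / 2.0
w = mann_iteration(S, w0=[1.0, -2.0, 3.0],
                   p=lambda m: 1.0 / (m + 25), q=lambda m: 0.5)
# w is driven toward the zero vector
```

With p_m → 0 and Σ p_m = ∞, the extra p_m term is what later enables strong convergence to a minimum-norm point, whereas the classical scheme only guarantees weak convergence.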
Another important problem in nonlinear analysis is the monotone inclusion problem (MIP). Let W : Ξ → Ξ be a single-valued operator and Z : Ξ → 2^Ξ a multi-valued operator. We consider the following inclusion problem:
Find σ ∈ Ξ , such that 0 ∈ ( W + Z ) σ .
This framework is fundamental, as it encompasses a wide range of problems such as convex programming, image processing, split feasibility problems, and minimization problems (see, e.g., [6,7,8,9]).
Due to the importance and wide applicability of problem (3), many researchers have proposed iterative methods for its solution (see, e.g., [10,11,12,13,14,15,16,17] and the references therein). Among them, one of the most popular schemes is the well-known forward–backward splitting method (FBM), introduced independently by Passty [16] and Lions and Mercier [14]. The method can be described as follows. Starting from an arbitrary initial point w 0 ∈ Ξ , and given the current iterate w m , the next iterate is generated according to the update rule
w m+1 = ( I + δ m Z )⁻¹ ( I − δ m W ) ( w m ) ,
where I : Ξ → Ξ denotes the identity operator, ( · )⁻¹ represents the inverse operator, δ m > 0 , and W and Z are as defined above, with their domains satisfying Dom ( Z ) ⊆ Dom ( W ) .
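The update rule above admits a direct numerical sketch. The operators and step size below are our own illustrative assumptions, chosen so that the resolvent has a closed form: W(w) = w is 1-ism and Z(w) = 3w gives (I + δZ)⁻¹v = v/(1 + 3δ), so the unique zero of W + Z is the origin.

```python
import numpy as np

def forward_backward(resolvent_Z, W, w0, delta, iters=100):
    """Forward-backward splitting: one explicit (forward) step on W followed
    by one resolvent (backward) step on Z:
        w_{m+1} = J_{delta Z}(w_m - delta * W(w_m))."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        w = resolvent_Z(w - delta * W(w), delta)
    return w

# Toy inclusion 0 in (W + Z)(w) with W(w) = w and Z(w) = 3w (illustrative).
W = lambda w: w
J = lambda v, d: v / (1.0 + 3.0 * d)   # resolvent (I + d Z)^{-1}
w = forward_backward(J, W, w0=[2.0, -1.0], delta=0.5)
# w approaches the unique zero of W + Z, the origin
```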
In recent years, the design of fast iterative algorithms has received considerable attention, particularly those based on inertial methods, which arise from discrete approximations of second-order dissipative dynamical systems. A variety of accelerated schemes have been proposed using inertial techniques (see, e.g., [18,19,20,21,22,23] and the references therein). A common feature of such algorithms is that each new iterate is generated as a combination of the two preceding ones. Although this modification is minor, it significantly improves the overall performance of the methods.
In this context, Lorenz and Pock [24] introduced the inertial forward–backward algorithm for solving monotone inclusions:
u m = w m + e m ( w m − w m−1 ) , w m+1 = ( I + δ m Z )⁻¹ ( I − δ m W ) u m ,
where { e m } is a sequence of inertial parameters, { δ m } is a sequence of positive step sizes, I denotes the identity operator, and W and Z are as defined above. In the past few decades, the convergence properties and various modifications of this method have been extensively investigated in the literature; see, e.g., [17,25,26,27,28,29,30] and the references therein. In their recent work, Tang et al. [31] proposed a class of inertial methods and established their convergence properties for solving variational inclusion problems. Subsequently, Tang and Gibali [32] extended this line of research by introducing a resolvent-free scheme aimed at approximating the minimum-norm solution of (MIP).
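A minimal sketch of the inertial scheme, under the same toy operators as before (our own illustrative choices); for simplicity the inertial parameter is held constant, whereas the cited works allow it to vary along the iteration.

```python
import numpy as np

def inertial_forward_backward(resolvent_Z, W, w0, w1, delta, e=0.3, iters=100):
    """Inertial forward-backward step:
        u_m = w_m + e (w_m - w_{m-1})
        w_{m+1} = J_{delta Z}(u_m - delta * W(u_m))
    Each new iterate combines the two preceding ones."""
    w_prev, w = np.asarray(w0, dtype=float), np.asarray(w1, dtype=float)
    for _ in range(iters):
        u = w + e * (w - w_prev)                        # inertial extrapolation
        w_prev, w = w, resolvent_Z(u - delta * W(u), delta)
    return w

# Toy inclusion 0 in (W + Z)(w) with W(w) = w and Z(w) = 3w, whose resolvent
# is v / (1 + 3 delta); the unique zero is the origin (illustrative choices).
W = lambda w: w
J = lambda v, d: v / (1.0 + 3.0 * d)
w = inertial_forward_backward(J, W, w0=[1.0, 2.0], w1=[0.5, 1.5], delta=0.5)
```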
Motivated by the work discussed above, we suggest three modified Mann-type inertial forward–backward iterative methods for approximating a common solution of (MIP) and (FPP). Two algorithms are designed so that the Mann-type iterative step is computed jointly with the inertial extrapolation and the fixed-point iteration at the beginning of the iterative process, while the third algorithm combines the Mann-type iteration with the inertial extrapolation only. Our methods are easy to implement and converge fast in comparison with the previous related work. Some theoretical applications are also presented, and numerical examples are discussed to show the effectiveness and efficiency of the suggested methods.
The remainder of this paper is organized as follows. In Section 2, we recall some basic definitions and preliminary results. In Section 3, we introduce three modified Mann-type inertial forward–backward methods and establish the strong convergence theorems. Some theoretical applications are also discussed in Section 4. Furthermore, in Section 5, we provide two numerical examples in infinite-dimensional spaces as well as in a finite-dimensional setting, which illustrate the performance of the algorithms and offer a preliminary computational comparison with related methods. Finally, concluding remarks are given in Section 6.

2. Preliminaries

For all ϕ , φ , and η in a Hilbert space Ξ and m 1 , m 2 , m 3 ∈ [ 0 , 1 ] such that m 1 + m 2 + m 3 = 1 , the following equality and inequality hold; for more details, see [33,34].
(a)
‖ m 1 ϕ + m 2 φ + m 3 η ‖² = m 1 ‖ ϕ ‖² + m 2 ‖ φ ‖² + m 3 ‖ η ‖² − m 1 m 2 ‖ ϕ − φ ‖² − m 2 m 3 ‖ φ − η ‖² − m 3 m 1 ‖ η − ϕ ‖² .
(b)
‖ ϕ ± φ ‖² = ‖ ϕ ‖² ± 2 ⟨ ϕ , φ ⟩ + ‖ φ ‖² ≤ ‖ ϕ ‖² ± 2 ⟨ φ , ϕ ± φ ⟩ .
Definition 1.
A mapping S : Ξ → Ξ is called
(a) 
averaged, if S = ( 1 − p ) I + p h for some p ∈ ( 0 , 1 ) , where h : Ξ → Ξ is nonexpansive and I is the identity mapping;
(b) 
η-Lipschitzian, if ‖ S ( ϕ ) − S ( φ ) ‖ ≤ η ‖ ϕ − φ ‖ , ∀ ϕ , φ ∈ Ξ , η > 0 ;
(c) 
a contraction, if ‖ S ( ϕ ) − S ( φ ) ‖ ≤ θ ‖ ϕ − φ ‖ , ∀ ϕ , φ ∈ Ξ , θ ∈ ( 0 , 1 ) ;
(d) 
nonexpansive, if ‖ S ( ϕ ) − S ( φ ) ‖ ≤ ‖ ϕ − φ ‖ , ∀ ϕ , φ ∈ Ξ ;
(e) 
firmly nonexpansive, if ‖ S ( ϕ ) − S ( φ ) ‖² ≤ ⟨ ϕ − φ , S ( ϕ ) − S ( φ ) ⟩ , ∀ ϕ , φ ∈ Ξ ;
(f) 
ς-inverse strongly monotone (ς-ism), if there exists ς > 0 so that
⟨ S ( ϕ ) − S ( φ ) , ϕ − φ ⟩ ≥ ς ‖ S ( ϕ ) − S ( φ ) ‖² , ∀ ϕ , φ ∈ Ξ ;
(g) 
monotone, if
⟨ S ( ϕ ) − S ( φ ) , ϕ − φ ⟩ ≥ 0 , ∀ ϕ , φ ∈ Ξ .
Definition 2
([33]). A set-valued mapping Z : Ξ → 2^Ξ , whose graph is Graph ( Z ) = { ( η , ϕ ) ∈ Ξ × Ξ : ϕ ∈ Z ( η ) } , is called
(a) 
monotone, if ⟨ ϕ − φ , η − ζ ⟩ ≥ 0 , ∀ η , ζ ∈ Ξ , ϕ ∈ Z ( η ) , φ ∈ Z ( ζ ) ;
(b) 
maximal monotone, if Z is monotone and ( I + δ Z ) ( Ξ ) = Ξ , ∀ δ > 0 .
The resolvent of Z is defined by J δ Z = ( I + δ Z )⁻¹ , δ > 0 , where I is the identity operator.
Definition 3
([33]). The metric projection of Ξ onto D is the mapping which assigns to each point ϕ ∈ Ξ the unique nearest point P D ϕ in D , that is,
‖ ϕ − P D ϕ ‖ ≤ ‖ ϕ − φ ‖ , ∀ φ ∈ D .
Some properties of P D are summarized as follows:
⟨ ϕ − φ , P D ϕ − P D φ ⟩ ≥ ‖ P D ϕ − P D φ ‖² , ∀ ϕ , φ ∈ Ξ ,
and
⟨ φ − P D ϕ , ϕ − P D ϕ ⟩ ≤ 0 , ∀ ϕ ∈ Ξ , φ ∈ D .
For the details of following remarks, please see [33,35].
Remark 1.
(a) 
Every ς-ism mapping is monotone and ( 1 / ς )-Lipschitzian.
(b) 
Every averaged mapping is nonexpansive, but the converse need not be true in general.
(c) 
S is firmly nonexpansive if and only if I − S is firmly nonexpansive.
(d) 
If h 1 and h 2 are averaged, then the composition h 1 h 2 is averaged.
Remark 2.
(a) 
If Z is a maximal monotone mapping, then J δ Z is single-valued and firmly nonexpansive (hence nonexpansive).
(b) 
J δ Z is firmly nonexpansive if and only if
‖ J δ Z ϕ − J δ Z φ ‖² ≤ ‖ ϕ − φ ‖² − ‖ ( I − J δ Z ) ϕ − ( I − J δ Z ) φ ‖² , for all ϕ , φ ∈ Ξ .
(c) 
The operator I − J δ Z is nonexpansive and so it is demiclosed at zero.
(d) 
σ solves (MIP) ⇔ σ = J δ Z ( I − δ W ) σ .
Lemma 1
([36]). Suppose D ⊆ Ξ and S : D → D is a nonexpansive mapping with the properties
(a) 
Fix ( S ) ≠ ∅ ,
(b) 
the sequence { w m } ⇀ σ and lim m→∞ ‖ S w m − w m ‖ = 0 .
Then S σ = σ .
Lemma 2
([4]). If { w m } is a sequence of non-negative real numbers for which
w m+1 ≤ ( 1 − p m ) w m + p m q m , m ≥ 0 ,
where { p m } ⊂ ( 0 , 1 ) and { q m } is a sequence of real numbers satisfying
(a) 
lim m→∞ p m = 0 and ∑ m=1 ∞ p m = ∞ ,
(b) 
lim sup m→∞ q m ≤ 0 ,
then lim m→∞ w m = 0 .
Lemma 3
([37]). Let { w m } be a sequence in a Hilbert space Ξ for which a closed and convex subset D of Ξ exists such that
(a) 
lim m→∞ ‖ w m − ϕ ‖ exists for every ϕ ∈ D ,
(b) 
any weak cluster point of { w m } falls within D .
Then there exists ϕ ∈ D satisfying w m ⇀ ϕ .
Lemma 4
([38]). Let { w m } be a sequence of real numbers that does not decrease at infinity, in the sense that one can find a subsequence { w m_k } of { w m } satisfying w m_k < w m_k+1 for all k ≥ 0 . Consider the sequence of integers { Γ ( m ) } m ≥ m_0 defined by
Γ ( m ) = max { n ≤ m : w n ≤ w n+1 } .
Then { Γ ( m ) } m ≥ m_0 is a nondecreasing sequence verifying lim m→∞ Γ ( m ) = ∞ and, for all m ≥ m_0 ,
max { w Γ ( m ) , w m } ≤ w Γ ( m ) + 1 .

3. Main Contribution

The solution sets of (MIP) and (FPP) are denoted by Φ and Ω , respectively. To ensure the convergence of the proposed methods, we impose the following assumptions:
( A 1 )   W : Ξ → Ξ is a ς-inverse strongly monotone operator;
( A 2 )   Z : Ξ → 2^Ξ is a maximal monotone operator;
( A 3 )   S : Ξ → Ξ is nonexpansive;
( A 4 )   p m , q m ∈ ( 0 , 1 ) are such that lim m→∞ p m = 0 , ∑ m=1 ∞ p m = ∞ and inf m q m ( 1 − p m − q m ) > 0 ;
( A 5 )   { τ m } is a positive sequence such that ∑ m=1 ∞ τ m < ∞ and lim m→∞ τ m / p m = 0 ;
( A 6 )   the common solution set of (MIP) and (FPP) is denoted by Φ ∩ Ω , and Φ ∩ Ω ≠ ∅ .
Remark 3.
If w m+1 = w m = v m = u m in Algorithm 1, then from (7), we get
w m = J δ m Z ( w m − δ m W w m ) = ( I + δ m Z )⁻¹ ( w m − δ m W w m ) ,
which implies that
( w m − δ m W w m ) ∈ w m + δ m Z w m ⟹ 0 ∈ δ m W w m + δ m Z w m ⟹ 0 ∈ ( W + Z ) ( w m ) .
Furthermore, from (5), we get w m = ( 1 − p m − q m ) w m + q m S ( w m ) , which is the Mann-type iteration method. Hence, the sequence { w m } converges strongly to a fixed point of S.
Algorithm 1 Modified Mann-type inertial forward-backward iterative method-1
Choose e ∈ [ 0 , 1 ) and step sizes 0 < δ̄ < δ m < 2 ς . Pick the initial points w 0 and w 1 .
Iterative Step: For m ≥ 1 and iterates w m , w m−1 , select 0 < e m < ē m , where
ē m = min { τ m / ‖ w m − w m−1 ‖ , e } , if w m ≠ w m−1 ; ē m = e , otherwise.
Compute
u m = ( 1 − p m − q m ) w m + q m [ S ( w m ) + e m ( w m − w m−1 ) ] ,
v m = J δ m Z ( u m − δ m W u m ) ,
w m+1 = J δ m Z ( v m − δ m W v m ) .
If w m+1 = v m = w m = u m , then stop; otherwise, set m = m + 1 and return to the Iterative Step.
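To make the steps of Algorithm 1 concrete, the following Python sketch runs it on illustrative diagonal operators in R³ (our own choices, loosely echoing Example 1 of Section 5, not data from the paper); the selection e_m = ē_m / 2 is one admissible choice in ( 0 , ē m ).

```python
import numpy as np

# Illustrative data: W(w) = A*w is (1/3)-ism (largest entry of A is 3),
# Z(w) = B*w is maximal monotone, S(w) = w/2 is nonexpansive;
# the common solution set is the origin.
A = np.array([1.0, 2.0, 3.0])
B = np.array([1.0, 2.0, 1.0])
S = lambda w: w / 2.0
J = lambda v, d: v / (1.0 + d * B)        # resolvent J_{dZ} = (I + d Z)^{-1}

def algorithm1(w0, w1, iters=200, e=0.25):
    w_prev, w = np.asarray(w0, float), np.asarray(w1, float)
    for m in range(1, iters + 1):
        p = 1.0 / (m + 25)                 # p_m -> 0, sum p_m = infinity
        q = 0.125 - 1.0 / (m + 25)         # keeps q_m in (0, 1)
        tau = 1.0 / (m + 25) ** 2          # summable, tau_m / p_m -> 0
        d = 2.0 / 3.0 - 1.0 / (m + 10)     # step size 0 < d < 2*sigma = 2/3
        diff = np.linalg.norm(w - w_prev)
        e_bar = min(tau / diff, e) if diff > 0 else e
        e_m = 0.5 * e_bar                  # any selection in (0, e_bar)
        u = (1 - p - q) * w + q * (S(w) + e_m * (w - w_prev))
        v = J(u - d * A * u, d)
        w_prev, w = w, J(v - d * A * v, d)
    return w

w_star = algorithm1(w0=[10.0, -7.0, 1.9], w1=[1.9, 3.3, 2.4])
# w_star approaches the minimum-norm common solution, the origin
```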
Remark 4.
From (4), we have e m ‖ w m − w m−1 ‖ ≤ τ m , and hence e m ‖ w m − w m−1 ‖ / p m ≤ τ m / p m . Keeping ( A 5 ) in mind and taking the limit as m → ∞ , we obtain lim m→∞ e m ‖ w m − w m−1 ‖ / p m = 0 . Hence, there exists a constant L 1 such that e m ‖ w m − w m−1 ‖ / p m ≤ L 1 , that is, e m ‖ w m − w m−1 ‖ ≤ L 1 p m .
Theorem 1.
Suppose that the assumptions ( A 1 )–( A 6 ) hold. Then the sequence { w m } generated by Algorithm 1 converges strongly to w ∈ Φ ∩ Ω such that ‖ w ‖ = min { ‖ z ‖ : z ∈ Φ ∩ Ω } .
Proof. 
Let σ Φ Ω . It is given that W is ς -ism. Then in view of Remark 2(b) and inequality (b), we have
v m σ 2 = J δ m Z ( u m δ m W u m ) σ 2 u m δ m W u m σ + δ m W σ 2 ( I J δ m Z ) ( u m δ m W u m ) ( I J δ m Z ) ( σ δ m W σ ) 2 = u m σ δ m ( W u m W σ ) 2 u m v m δ m ( W u m W σ ) 2 u m σ 2 2 δ m W u m W σ , u m σ u m v m 2 + 2 δ m W u m W σ , u m v m u m σ 2 2 δ m ς W u m W σ 2 u m v m 2 + 2 δ m W u m W σ , u m v m .
Furthermore, we have
W u m W σ u m v m 2 ς 2 = W u m W σ 2 + u m v m 2 ς 2 1 ς W u m W σ , u m v m
or
2 δ m ς W u m W σ 2 + 2 δ m W u m W σ , u m v m = 2 δ m ς W u m W σ u m v m 2 ς 2 + 2 δ m ς u m v m 2 ς 2 .
By using (8) and (9), we get
v m σ 2 u m σ 2 2 δ m ς W u m W σ u m v m 2 ς 2 ( 2 ς δ m 2 ς ) u m v m 2
u m σ 2 .
Since J δ m Z ( I − δ m W ) is averaged, it is nonexpansive. From (7), (9) and (11), it follows that
w m + 1 σ 2 = J δ m Z ( v m δ m W v m ) σ 2 = J δ m Z ( I δ m W ) v m J δ m Z ( I δ m W ) σ 2 v m σ 2 u m σ 2 2 δ m ς W u m W σ u m v m 2 ς 2
( 2 ς δ m 2 ς ) u m v m 2
u m σ 2 .
Let x m = S w m + e m ( w m − w m−1 ) . Since S is nonexpansive, we have
x m σ = S w m + e m ( w m w m 1 ) σ S w m σ + e m w m w m 1 w m σ + e m w m w m 1 ,
and using (5), (14) and Remark 4, we have
u m σ ( 1 p m q m ) w m + q m x m σ ( 1 p m q m ) ( w m σ ) + q m ( x m σ ) + p m ( σ ) ( 1 p m q m ) w m σ + q m x m σ + p m σ ( 1 p m q m ) w m σ + q m w m σ + e m w m w m 1 + p m σ ( 1 p m ) w m σ + p m σ + e m w m w m 1 p m max w m σ , σ + L 1 max u m 1 σ , σ + L 1 max u 0 σ , σ + L 1 ,
where e m ‖ w m − w m−1 ‖ ≤ L 1 p m , which implies that { u m } is bounded. Hence, { v m } , { x m } and { w m } are also bounded. Since { x m } is bounded, there exists L 2 > 0 such that ‖ x m − σ ‖ ≤ L 2 ; then we have
x m σ 2 = S ( w m ) + e m ( w m w m 1 ) σ 2 S ( w m ) σ 2 + 2 e m w m w m 1 , x m σ S ( w m ) σ 2 + 2 e m w m w m 1 x m σ w m σ 2 + 2 L 2 e m w m w m 1 .
Now, we estimate
u m σ 2 ( 1 p m q m ) w m + q m x m σ 2 = ( 1 p m q m ) ( w m σ ) + q m ( x m σ ) + p m ( σ ) 2 = ( 1 p m q m ) w m σ 2 + q m x m σ 2 + p m σ 2 q m ( 1 p m q m ) w m x m 2 ( 1 p m q m ) w m σ 2 + q m w m σ 2 + 2 L 2 e m w m w m 1 + p m σ 2 q m ( 1 p m q m ) w m x m 2 ( 1 p m ) w m σ 2 + 2 L 2 e m w m w m 1 + p m σ 2 q m ( 1 p m q m ) w m x m 2 .
By using (13) and (17), we obtain
w m + 1 σ 2 ( 1 p m ) w m σ 2 + 2 L 2 e m w m w m 1 + p m σ 2 q m ( 1 p m q m ) w m x m 2 2 δ m ς W u m W σ u m v m 2 ς 2 ( 2 ς δ m 2 ς ) u m v m 2 .
There are two feasible cases:
Case I: Suppose { ‖ w m − σ ‖ } is monotonically decreasing. Then there exists a number L 3 such that ‖ w m+1 − σ ‖ ≤ ‖ w m − σ ‖ for all m ≥ L 3 , and the boundedness of { ‖ w m − σ ‖ } implies that it is convergent. Therefore, using (18), we have
2 δ m ς W u m W σ + u m v m 2 ς 2 + ( 2 ς δ m 2 ς ) u m v m 2 + q m ( 1 p m q m ) w m x m 2 w m σ 2 w m + 1 σ 2 + 2 L 2 e m w m w m 1 + p m σ 2 w m + 1 σ 2 .
By taking limit m , we get
lim m u m v m = 0 .
and
lim m w m x m = 0 .
Taking into account (20), (21) and x m = S w m + e m ( w m w m 1 ) , we have x m S w m = e m w m w m 1 0 . From (5), by using u m w m = q m ( x m w m ) p m w m , we have u m w m q m x m w m + p m w m 0 . Now, we can easily evaluate that v m w m 0 and
lim m S w m w m = 0 , lim m w m + 1 v m = 0 ,
and hence,
lim m w m + 1 w m w m + 1 v m + v m u m + u m w m 0 .
Boundedness of { w m } guarantees the existence of a subsequence { w m k } converging weakly to σ ¯ . It follows that { u m k } and { v m k } also converge weakly to σ ¯ . Implementing Lemma 1 to (22), we infer that σ ¯ F i x ( S ) . By (6), we have
u m v m δ m ( W u m W v m ) ( W + Z ) ( v m ) ,
where u m − v m → 0 , δ m is bounded, and W is ς-ism and hence ( 1 / ς )-Lipschitz continuous. Keeping in mind that W + Z is maximal monotone and its graph is closed in the weak–strong topology, and taking the limit along the subsequence { w m_k } of { w m } , which converges weakly to σ ¯ , we conclude 0 ∈ ( W + Z ) ( σ ¯ ) , i.e., σ ¯ ∈ Φ . Thus, σ ¯ ∈ Φ ∩ Ω .
To conclude, we derive the strong convergence of the sequence { w m } . Let U m = ( 1 − q m ) w m + q m x m ; then u m = ( 1 − p m ) U m − p m q m ( w m − x m ) . Using (16), we estimate
U m σ 2 = ( 1 q m ) w m + q m x m σ 2 ( 1 q m ) w m σ 2 + q m x m σ 2 ( 1 q m ) w m σ 2 + q m w m σ 2 + 2 L 2 e m w m w m 1 w m σ 2 + 2 L 2 e m w m w m 1 ,
and
u m σ 2 = ( 1 p m ) U m p m q m ( w m x m ) σ 2 ( 1 p m ) ( U m σ ) p m q m ( w m x m ) p m σ 2 ( 1 p m ) U m σ 2 2 p m q m ( w m x m ) + p m σ , u m σ ( 1 p m ) w m σ 2 + 2 L 2 e m w m w m 1 2 p m q m ( w m x m ) + p m σ , u m σ ( 1 p m ) w m σ 2 + 2 L 2 e m w m w m 1 2 p m q m ( w m x m ) + p m σ , u m σ ( 1 p m ) w m σ 2 2 p m [ q m ( w m x m ) + σ , u m σ + L 2 e m w m w m 1 p m ] .
From (14) and (25), we have
w m + 1 σ 2 ( 1 p m ) w m σ 2 2 p m [ q m ( w m x m ) , u m σ + σ , u m σ + L 2 e m w m w m 1 p m ] ,
from the property of projection operator, we have
lim inf m σ , u m σ min σ ¯ Φ Ω σ , σ ¯ σ 0 .
Employing Lemma 2, we get ‖ w m − σ ‖ → 0 , and we conclude that the sequence { w m } converges strongly to σ = P Φ ∩ Ω ( 0 ) . Again, from the property of the metric projection, we have
σ , σ ¯ σ 0 , σ Φ Ω ,
that is, ⟨ σ , σ ¯ ⟩ ≥ ‖ σ ‖² , i.e., ‖ σ ‖ ≤ ‖ σ ¯ ‖ , which implies that σ is the minimum-norm solution in Φ ∩ Ω .
Case II: If Case I does not hold, then the map Γ : ℕ → ℕ defined for all m ≥ m 0 by Γ ( m ) = max { k ≤ m : ‖ w k − σ ‖ ≤ ‖ w k+1 − σ ‖ } is increasing, lim m→∞ Γ ( m ) = ∞ , and
0 w Γ ( m ) σ w Γ ( m ) + 1 σ , m m 0 .
By using (18), we have
2 δ Γ ( m ) ς W u Γ ( m ) W σ + u Γ ( m ) v Γ ( m ) 2 ς 2 + ( 2 ς δ Γ ( m ) 2 ς ) u Γ ( m ) v Γ ( m ) 2 + q Γ ( m ) ( 1 p Γ ( m ) q Γ ( m ) ) w Γ ( m ) x Γ ( m ) 2 2 L 2 e Γ ( m ) w Γ ( m ) w Γ ( m ) 1 + p Γ ( m ) σ 2 w Γ ( m ) + 1 σ 2 .
By taking limit m , we get
u Γ ( m ) v Γ ( m ) 0 , w Γ ( m ) x Γ ( m ) 0 .
Following the similar steps used in the justification of Case I, we arrive at w Γ ( m ) u Γ ( m ) 0 and w Γ ( m ) v Γ ( m ) 0 , S ( w Γ ( m ) ) w Γ ( m ) 0 , as m . From (26) and (30), we get
0 w Γ ( m ) σ 2 q Γ ( m ) ( w Γ ( m ) x Γ ( m ) ) , u Γ ( m ) σ + σ , u Γ ( m ) σ + L 2 e Γ ( m ) w Γ ( m ) w Γ ( m ) 1 p Γ ( m ) .
By taking limit m , we get w Γ ( m ) σ 0 . By Lemma 4, we have
0 w m σ max { w m σ , w Γ ( m ) σ } w Γ ( m ) + 1 σ .
It follows that w m σ 0 , as m . This establishes the result.    □
An alternative and slightly modified version of Algorithm 1 is presented below.
Theorem 2.
Suppose that the assumptions ( A 1 )–( A 6 ) are valid. Then the sequence { w m } generated by Algorithm 2 converges strongly to w ∈ Φ ∩ Ω such that ‖ w ‖ = min { ‖ z ‖ : z ∈ Φ ∩ Ω } .
Algorithm 2 Modified Mann-type inertial forward-backward iterative method-2
Choose e ∈ [ 0 , 1 ) and step sizes 0 < δ̄ < δ m < 2 ς . Pick the initial points w 0 and w 1 .
Iterative Step: For m ≥ 1 and iterates w m , w m−1 , select 0 < e m < ē m , where
ē m = min { τ m / ‖ w m − w m−1 ‖ , e } , if w m ≠ w m−1 ; ē m = e , otherwise.
Compute
u m = ( 1 − p m − q m ) w m + q m S w m + e m ( w m − w m−1 ) ,
v m = J δ m Z ( u m − δ m W u m ) ,
w m+1 = J δ m Z ( v m − δ m W v m ) .
If w m+1 = v m = w m = u m , then stop; otherwise, set m = m + 1 and return to the Iterative Step.
Proof. 
First, we show that the sequences { u m } , { v m } , and { w m } are bounded. From (31) and Remark 4, it follows that
u m σ = ( 1 p m q m ) w m + q m S w m + e m ( w m w m 1 ) σ = ( 1 p m q m ) ( w m σ ) + q m ( S w m σ ) + p m ( σ ) + e m ( w m w m 1 ) ( 1 p m q m ) w m σ + q m S w m σ + p m σ + e m w m w m 1 ( 1 p m ) w m σ + p m σ + e m w m w m 1 ( 1 p m ) w m σ + p m σ + L 1 ( 1 p m ) u m 1 σ + p m σ + L 1 max u m 1 σ , σ + L 1 max u 0 σ , σ + L 1 ,
which implies that { u m } is bounded. In view of (11) and (12), we infer that { v m } and { w m } are also bounded. Let l m = ( 1 p m q m ) w m + q m S w m , then we have
u m σ 2 = l m + e m ( w m w m 1 ) σ 2 l m σ 2 + 2 u m σ , e m ( w m w m 1 ) l m σ 2 + 2 u m σ e m w m w m 1 = l m σ 2 + 2 L 4 e m w m w m 1 .
also
l m σ 2 = ( 1 p m q m ) w m + q m S w m σ 2 ( 1 p m q m ) ( w m σ ) + q m ( S w m σ ) + p m ( σ ) 2 ( 1 p m q m ) w m σ 2 + q m S w m σ 2 + p m σ 2 q m ( 1 p m q m ) w m S w m 2 . ( 1 p m ) w m σ 2 + p m σ 2 q m ( 1 p m q m ) w m S w m 2 .
From (35) and (36), we get
u m σ 2 = ( 1 p m ) w m σ 2 + p m σ 2 q m ( 1 p m q m ) w m S w m 2 + 2 L 4 e m w m w m 1 .
By using (13) and (37), we obtain
w m + 1 σ 2 ( 1 p m ) w m σ 2 + 2 L 4 e m w m w m 1 + p m σ 2 q m ( 1 p m q m ) w m S w m 2 2 δ m ς W u m W σ u m v m 2 ς 2 ( 2 ς δ m 2 ς ) u m v m 2 .
Considering the Case I as in the proof of Theorem 1, we obtain:
lim m u m v m = 0 , lim m w m S w m = 0 .
Following the same procedure as in the proof of Theorem 1, one can complete the proof.    □
Another modification of Algorithm 1 is presented below.
Theorem 3.
Suppose that the assumptions ( A 1 )–( A 6 ) hold. Then the sequence { w m } generated by Algorithm 3 converges strongly to w ∈ Φ ∩ Ω such that ‖ w ‖ = min { ‖ z ‖ : z ∈ Φ ∩ Ω } .
Algorithm 3 Modified Mann-type inertial forward-backward iterative method-3
Choose e ∈ [ 0 , 1 ) and step sizes 0 < δ̄ < δ m < 2 ς . Pick the initial points w 0 and w 1 .
Iterative Step: For m ≥ 1 and iterates w m , w m−1 , select 0 < e m < ē m , where
ē m = min { τ m / ‖ w m − w m−1 ‖ , e } , if w m ≠ w m−1 ; ē m = e , otherwise.
Compute
u m = ( 1 − p m − q m ) w m + q m [ w m + e m ( w m − w m−1 ) ] ,
v m = J δ m Z ( u m − δ m W u m ) ,
w m+1 = S J δ m Z ( v m − δ m W v m ) .
If w m+1 = v m = w m = u m , then stop; otherwise, set m = m + 1 and return to the Iterative Step.
Proof. 
Take σ ∈ Φ ∩ Ω and replace S by the identity operator I in (5); then { u m } , { v m } and { w m } are bounded. Denote y m = w m + e m ( w m − w m−1 ) ; then, using similar steps as in (17), we get
u m σ 2 ( 1 p m ) w m σ 2 + 2 L 2 e m w m w m 1 + p m σ 2 q m ( 1 p m q m ) w m y m 2 .
Denote z m = J δ m Z ( I δ m W ) v m , then from (40) and using the same steps used in (10), it can be concluded that
z m σ 2 u m σ 2 2 δ m ς W v m W σ v m u m 2 ς 2 2 ς δ m 2 ς v m u m 2 .
In view of (40)–(42), we get
w m + 1 σ 2 S z m σ 2 z m σ 2 ( 1 p m ) w m σ 2 + 2 L 2 e m w m w m 1 + p m σ 2 q m ( 1 p m q m ) w m y m 2 2 δ m ς W v m W σ v m u m 2 ς 2 2 ς δ m 2 ς v m u m 2 ,
Considering Case I as in the proof of Theorem 1, we obtain
u m v m 0 , w m y m 0 , m ,
and hence
u m y m = ( 1 p m q m ) w m + q m y m y m ( 1 p m ) w m y m + p m w m 0 .
Hence, as m , we get
z m v m v m u m 0 .
It is not difficult to show that
w m + 1 w m 0 , as m .
Employing (44)–(47), by taking m , we get
S z m z m = w m + 1 z m w m + 1 w m + w m y m + y m u m + u m v m + v m z m 0 .
Since σ ¯ ∈ ω w ( w m ) and { w m } is bounded, there exists a subsequence { w m_k } with w m_k ⇀ σ ¯ , and the same holds for { z m_k } . Thus, applying Lemma 1, we obtain σ ¯ ∈ Ω . The remainder of the proof can be completed by proceeding with the same steps as in the proof of Theorem 1. □

4. Theoretical Applications

Some theoretical applications of the proposed algorithms are discussed below for solving variational inequality and convex minimization problems in conjunction with the fixed point problem.

4.1. The Variational Inequality Problem

The variational inequality problem (VIP) for an operator W : Ξ → Ξ is to find φ ∈ D such that
⟨ W ( φ ) , ϕ − φ ⟩ ≥ 0 , ∀ ϕ ∈ D ,
where W is ( 1 / ς )-ism. Let Φ 1 be the solution set of the above-defined (VIP). Solving (VIP) is a special case of solving (MIP); in fact, the projection operator is the resolvent of the normal cone. Then, by replacing J δ m Z with the projection P D in Algorithms 1–3, we obtain three iterative methods, together with their convergence theorems, to approximate the common solution of (VIP) and (FPP).
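A minimal sketch of this specialization, under our own illustrative assumptions: D is the nonnegative orthant (so P_D is a componentwise clamp) and W(x) = x − a, which is 1-ism as the gradient of 0.5‖x − a‖²; the VIP solution over D is then P_D(a).

```python
import numpy as np

# Projected forward step: the resolvent is replaced by the projection P_D.
P_D = lambda v: np.maximum(v, 0.0)   # projection onto the nonnegative orthant
a = np.array([1.5, -2.0])
W = lambda x: x - a                  # 1-ism, gradient of 0.5 * ||x - a||^2

x = np.array([5.0, 5.0])
for _ in range(100):
    x = P_D(x - 0.5 * W(x))          # forward step with delta = 0.5, then project
# x approaches the VIP solution P_D(a) = [1.5, 0.0]
```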

4.2. The Convex Optimisation Problem

Let V 1 , V 2 : Ξ → ℝ ∪ { +∞ } be two proper, convex and lower semicontinuous functions such that V 1 is differentiable with a ς-Lipschitz continuous gradient. The convex minimization problem (COP) for V 1 and V 2 is to find ϕ ∈ Ξ such that
V 1 ( ϕ ) + V 2 ( ϕ ) ≤ V 1 ( φ ) + V 2 ( φ ) , ∀ φ ∈ Ξ .
Let the solution set of (COP) be denoted by Φ 2 . The (COP) is equivalent to determining ϕ ∈ Ξ satisfying
0 ∈ ∇ V 1 ( ϕ ) + ∂ V 2 ( ϕ ) ,
where ∇ V 1 is the gradient of V 1 and ∂ V 2 is the subdifferential of V 2 . By the Baillon–Haddad theorem, the ς-Lipschitz continuous gradient ∇ V 1 is ( 1 / ς )-ism, and ∂ V 2 is maximal monotone. Thus, by setting W = ∇ V 1 and Z = ∂ V 2 in Algorithms 1–3, we obtain three iterative methods, together with their convergence theorems, to approximate the common solution of (COP) and (FPP).
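With a smooth term and an ℓ1 term, the forward–backward step becomes the classical proximal gradient update. The sketch below uses illustrative choices of our own: the smooth part is 0.5‖x − b‖² (gradient x − b, 1-Lipschitz) and the nonsmooth part is λ‖x‖₁, whose resolvent is the soft-thresholding map.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1, i.e. the resolvent of t times its subdifferential
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Illustrative problem: minimize 0.5 * ||x - b||^2 + lam * ||x||_1.
b = np.array([3.0, -0.2, 0.5])
lam, delta = 1.0, 1.0
x = np.zeros_like(b)
for _ in range(50):
    # forward (gradient) step on the smooth term, backward (prox) step on the l1 term
    x = soft_threshold(x - delta * (x - b), delta * lam)
# closed-form minimizer: soft_threshold(b, lam) = [2.0, 0.0, 0.0]
```

This is exactly the forward step on the smooth operator followed by the resolvent of the maximal monotone subdifferential, as prescribed above.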

5. Numerical Experiments

Example 1.
(Finite dimensional) Let Ξ = ℝ³ denote the Hilbert space equipped with the standard inner product and norm. We define the operators W and Z and the mapping S as follows:
W ( η 1 , η 2 , η 3 ) = ( η 1 , 2 η 2 , 3 η 3 ) , Z ( η 1 , η 2 , η 3 ) = ( η 1 , 2 η 2 , η 3 ) , S ( η 1 , η 2 , η 3 ) = ( η 1 / 2 , η 2 / 2 , η 3 / 2 ) .
It can be verified that the operator W is ς-ism with ς = 1 / 3 , Z is maximal monotone, and S is nonexpansive. We choose τ m = 1 / ( m + 25 )² , δ m = 2 / 3 − 1 / ( m + 10 ) ∈ ( 0 , 2 / 3 ) and δ = 2 / 3 . For the numerical computation, we select e m randomly from the interval ( 0 , ē m ) , where
e ¯ m = min 1 ( m + 25 ) 2 w m w m 1 , e , if w m w m 1 , e , otherwise
We choose the parameters p m , q m , e, and the initial point under the following three different cases:
Case (a): w 0 = ( 10 , 70 , 19 ) , w 1 = ( 19 , 33 , 24 ) ; p m = 1 / ( m + 25 ) , q m = 1 / 8 − 1 / ( m + 25 ) , e = 0.25 .
Case (b): w 0 = ( 1 , 5 , 3 ) , w 1 = ( 2 , 4 , 7 ) ; p m = 1 / ( m + 100 ) , q m = 1 / 6 − 1 / ( m + 100 ) , e = 0.35 .
Case (c): w 0 = ( 1 / 3 , 5 / 7 , 3 / 2 ) , w 1 = ( 1 / 2 , 4 / 5 , 7 / 9 ) ; p m = 1 / ( m + 1 ) , q m = 1 / 2 − 1 / ( m + 1 ) , e = 0.75 .
We study the convergence of the proposed three schemes, together with Algorithm 10 of [32] and Algorithm 3.3 of [31], and present their numerical performance in Table 1. The algorithms are terminated when w m + 1 w m < 10 20 .
It is evident that our algorithms are effective and simple to execute. The convergence of { w m } to { 0 } = Φ ∩ Ω is shown in Figure 1, Figure 2 and Figure 3 for the parameter choices given in Cases (a)–(c). Our algorithms approach the solution in fewer steps and with less CPU time than Algorithm 3.3 of [31], and require fewer iterations than Algorithm 10 of [32].
Example 2.
(Infinite dimensional) Let Ξ = l 2 : = ϕ : = ( ϕ 1 , ϕ 2 , ϕ 3 , , ϕ m , ) , ϕ m R : m = 1 | ϕ m | 2 < , with inner product ϕ , ψ = m = 1 ϕ m ψ m and the norm ϕ = m = 1 | ϕ m | 2 1 / 2 . The mappings W, Z and S are expressed by
W ( ϕ ) : = ϕ 5 ; Z ( ϕ ) : = ϕ 3 ; S ( ϕ ) : = ϕ 2 , ϕ l 2 .
Clearly, W is 5-ism, Z is maximal monotone and S is nonexpansive. We choose τ m = 1 ( m + 10 ) 2 , δ m = 1 2 1 m + 25 , δ = 1 2 , and e m is randomly selected from ( 0 , e ¯ m ) , where
e ¯ m = min 1 ( m + 10 ) 2 w m w m 1 , e , if w m w m 1 , e , otherwise .
and the p m , q m , e and the initial values which are given below:
Case (a′): w 0 = ( (−1)^m / ( m + 1 ) ) m=1 ∞ , w 1 with m-th component 0 if m is odd and 1 / ( m + 1 )² otherwise ; p m = 1 / ( m + 25 ) , q m = 1 / 7 − 1 / ( m + 25 ) , e = 0.25 .
Case (b′): w 0 = ( 1 / ( m + 1 ) ) m=1 ∞ , w 1 = ( 1 / ( m + 1 )² ) m=1 ∞ ; p m = 1 / ( m + 100 ) , q m = 1 / 5 − 1 / ( m + 100 ) , e = 0.35 .
Case (c′): w 0 = ( (−1)^m / ( m + 1 ) ) m=1 ∞ , w 1 with m-th component 0 if m is odd and 1 / ( m + 1 )² otherwise ; p m = 1 / ( m + 1 ) , q m = 1 / 3 − 1 / ( m + 1 ) , e = 0.65 .
We study the convergence of the proposed three schemes, together with Algorithm 10 of [32] and Algorithm 3.3 of [31], and present their numerical performance in Table 2. The algorithms are terminated when w m + 1 w m < 10 20 .
It is evident that our algorithms are effective and simple to execute. The convergence of { w m } to { 0 } = Φ ∩ Ω is shown in Figure 4, Figure 5 and Figure 6 for the parameter choices given in Cases (a′)–(c′). Our algorithms approach the solution in fewer steps and with less CPU time than Algorithm 3.3 of [31], and require fewer iterations than Algorithm 10 of [32].

6. Conclusions

In this study, we introduced three accelerated modifications of forward–backward iterative algorithms for approximating a common solution of the monotone inclusion problem (MIP) and the fixed-point problem (FPP). The proposed schemes are structured so that Mann-type iterations are incorporated at the initial stage, in conjunction with fixed point iterations and an extrapolation component. These methods are not only easy to implement but also achieve faster convergence than existing approaches, as confirmed by numerical experiments. In addition, we demonstrated the applicability of the proposed methods to variational inequality and convex optimization problems. Future research directions include a detailed investigation of the convergence rates of the proposed algorithms in comparison with other established methods for solving (MIP) and (FPP).

Author Contributions

Conceptualization, M.D.; Methodology, M.D. and M.N.; Validation, M.N.; Investigation, I.A.-D. and E.A.; Writing—original draft, M.D.; Supervision, I.A.-D.; Project administration, I.A.-D.; Funding acquisition, I.A.-D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2502).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Baillon, J.B.; Bruck, R.E.; Reich, S. On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 1978, 4, 1–9. [Google Scholar]
  2. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef]
  3. Tan, K.K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 2, 301–308. [Google Scholar] [CrossRef]
  4. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65, 109–113. [Google Scholar] [CrossRef]
  5. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  6. Combettes, P.L.; Wajs, V. Signal recovery by proximal forward–backward splitting. SIAM Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
  7. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
  8. Duchi, J.; Singer, Y. Efficient online and batch learning using forward–backward splitting. J. Mach. Learn. Res. 2009, 10, 2899–2934. [Google Scholar]
  9. Nwawuru, F.O.; Narain, O.K.; Dilshad, M.; Ezeora, J.N. Splitting method involving two-step inertial for solving inclusion and fixed point problems with applications. Fixed Point Theory Algorithms Sci. Eng. 2025, 2025, 8. [Google Scholar] [CrossRef]
  10. Attouch, H.; Peypouquet, J.; Redont, P. Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 2018, 457, 1095–1117. [Google Scholar] [CrossRef]
  11. Bello Cruz, J.Y.; Millan, R.D. A variant of forward-backward splitting method for the sum of two monotone operators with new search strategy. Optimization 2015, 64, 1471–1486. [Google Scholar] [CrossRef]
  12. Dang, V.H.; Anh, P.K.; Le, D.M. Modified forward-backward splitting method for variational inclusions. 4OR-Q J. Oper. Res. 2021, 19, 127–151. [Google Scholar]
  13. Huang, Y.Y.; Dong, Y.D. New properties of forward-backward splitting and a practical proximal descent algorithm. Appl. Math. Comput. 2014, 237, 60–68. [Google Scholar]
  14. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  15. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454. [Google Scholar] [CrossRef]
  16. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef]
  17. Tseng, P. A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar]
  18. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar]
  19. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward–backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 42. [Google Scholar] [CrossRef]
  20. Dilshad, M.; Alamrani, F.M.; Alamer, A.; Alshaban, E.; Alshehri, M.G. Viscosity-type inertial iterative methods for variational inclusion and fixed point problems. AIMS Math. 2024, 9, 18553–18573. [Google Scholar] [CrossRef]
  21. Tang, Y.; Zhang, Y.; Gibali, A. New self-adaptive inertial-like proximal point methods for the split common null point problem. Symmetry 2021, 13, 2316. [Google Scholar] [CrossRef]
  22. Qin, X.; Wang, L.; Yao, J.-C. Inertial splitting method for maximal monotone mappings. J. Nonlinear Convex Anal. 2020, 21, 2325–2333. [Google Scholar]
  23. Tan, B.; Li, S. Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems. J. Nonlinear Var. Anal. 2020, 4, 337–355. [Google Scholar]
  24. Lorenz, D.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325. [Google Scholar] [CrossRef]
  25. Alakoya, T.O.; Ogunsola, O.J.; Mewomo, O.T. An inertial viscosity algorithm for solving monotone variational inclusion and common fixed point problems of strict pseudocontractions. Bol. Soc. Mat. Mex. 2023, 29, 31. [Google Scholar] [CrossRef]
  26. Malitsky, Y.; Tam, M.K. A forward–backward splitting method for monotone inclusions without cocoercivity. SIAM J. Optim. 2020, 30, 1451–1472. [Google Scholar]
  27. Moudafi, A.; Shehu, Y. Convergence of the forward–backward method for split null-point problems beyond coerciveness. J. Nonlinear Convex Anal. 2019, 20, 1659–1672. [Google Scholar]
  28. Reich, S.; Taiwo, A. Fast hybrid iterative schemes for solving variational inclusion problems. Math. Methods Appl. Sci. 2023, 46, 17177–17198. [Google Scholar]
  29. Shehu, Y.; Yao, J.C. Rate of convergence for inertial iterative method for countable family of certain quasinonexpansive mappings. J. Nonlinear Convex Anal. 2020, 21, 533–541. [Google Scholar]
  30. Thong, D.; Vinh, N. Inertial methods for fixed point problems and zero point problems of the sum of two monotone mappings. Optimization 2019, 68, 1037–1072. [Google Scholar] [CrossRef]
  31. Tang, Y.; Lin, H.; Gibali, A.; Cho, Y.J. Convergence analysis and applications of the inertial algorithm solving inclusion problems. Appl. Numer. Math. 2022, 175, 1–17. [Google Scholar] [CrossRef]
  32. Tang, Y.; Gibali, S. Resolvent-free method for solving monotone inclusions. Axioms 2023, 12, 257. [Google Scholar] [CrossRef]
  33. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; CMS Books in Mathematics; Springer: New York, NY, USA, 2011. [Google Scholar]
  34. Kreyszig, E. Introductory Functional Analysis with Applications; John Wiley & Sons: New York, NY, USA, 1978. [Google Scholar]
  35. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  36. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  37. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
  38. Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
Figure 1. Graphical illustration of ‖w_{m+1} − w_m‖ obtained from Algorithms 1–3, together with Algorithm 3.3 of [31] and Algorithm 10 of [32], using Case (a).
Figure 2. Graphical illustration of ‖w_{m+1} − w_m‖ obtained from Algorithms 1–3, together with Algorithm 3.3 of [31] and Algorithm 10 of [32], using Case (b).
Figure 3. Graphical illustration of ‖w_{m+1} − w_m‖ obtained from Algorithms 1–3, together with Algorithm 3.3 of [31] and Algorithm 10 of [32], using Case (c).
Figure 4. Graphical illustration of ‖w_{m+1} − w_m‖ obtained from Algorithms 1–3, together with Algorithm 3.3 of [31] and Algorithm 10 of [32], using Case (a′).
Figure 5. Graphical illustration of ‖w_{m+1} − w_m‖ obtained from Algorithms 1–3, together with Algorithm 3.3 of [31] and Algorithm 10 of [32], using Case (b′).
Figure 6. Graphical illustration of ‖w_{m+1} − w_m‖ obtained from Algorithms 1–3, together with Algorithm 3.3 of [31] and Algorithm 10 of [32], using Case (c′).
Table 1. Numerical results of all the algorithms with different initial values and parameters. Each cell reports Iter. / CPU (s).

| Case | Algorithm 1 | Algorithm 2 | Algorithm 3 | Algorithm 3.3 [31] | Algorithm 10 [32] |
|------|-------------|-------------|-------------|--------------------|-------------------|
| (a) | 19 / 1.18 × 10⁻⁵ | 21 / 1.12 × 10⁻⁵ | 18 / 1.19 × 10⁻⁵ | 29 / 2.1 × 10⁻⁵ | >100 / 4.7 × 10⁻⁶ |
| (b) | 16 / 1.33 × 10⁻⁵ | 22 / 1.34 × 10⁻⁵ | 19 / 1.3 × 10⁻⁵ | 32 / 2.4 × 10⁻⁵ | >100 / 1.95 × 10⁻⁵ |
| (c) | 16 / 1.15 × 10⁻⁵ | 20 / 1.17 × 10⁻⁵ | 21 / 1.34 × 10⁻⁵ | 39 / 2.15 × 10⁻⁵ | >100 / 5.2 × 10⁻⁶ |
Table 2. Numerical results of all the algorithms with different initial values and parameters. Each cell reports Iter. / CPU (s).

| Case | Algorithm 1 | Algorithm 2 | Algorithm 3 | Algorithm 3.3 [31] | Algorithm 10 [32] |
|------|-------------|-------------|-------------|--------------------|-------------------|
| (a′) | 41 / 1.38 × 10⁻⁵ | 39 / 1.31 × 10⁻⁵ | 20 / 1.31 × 10⁻⁵ | 70 / 2.17 × 10⁻⁵ | >100 / 8.6 × 10⁻⁶ |
| (b′) | 56 / 1.56 × 10⁻⁵ | 37 / 1.37 × 10⁻⁵ | 24 / 1.41 × 10⁻⁵ | 65 / 2.62 × 10⁻⁵ | >100 / 5.7 × 10⁻⁶ |
| (c′) | 41 / 1.34 × 10⁻⁵ | 40 / 1.33 × 10⁻⁵ | 28 / 1.29 × 10⁻⁵ | 42 / 2.21 × 10⁻⁵ | >100 / 8.3 × 10⁻⁶ |

Share and Cite

Dilshad, M.; Al-Dayel, I.; Alshaban, E.; Nasiruzzaman, M. Some Modified Mann-Type Inertial Forward–Backward Iterative Methods for Monotone Inclusion Problems. Mathematics 2025, 13, 4000. https://doi.org/10.3390/math13244000
