1. Introduction
In this work, we propose an algorithm to find a zero of the sum of two monotone operators in a Banach space $X$. More precisely, we aim to solve the monotone inclusion problem
$$0 \in (A + B)x, \tag{1}$$
where $A \colon X \rightrightarrows X^*$ is a maximal monotone operator, $B \colon X \to X^*$ is monotone and $L$-Lipschitz continuous, and the solution set $(A + B)^{-1}(0^*)$ is assumed to be nonempty.
We begin by reviewing known results in the special case where $X$ is a Hilbert space. For the inclusion problem (1), the classical forward–backward splitting method [1,2] applies when $B$ is $\frac{1}{L}$-cocoercive. Each iteration consists of an explicit (forward) step using $B$ followed by an implicit (backward) resolvent step with respect to $A$. More concretely, the method generates an iterative sequence via the update rule
$$x_{n+1} = (I + \lambda A)^{-1}(x_n - \lambda B x_n), \tag{2}$$
and is known to converge weakly to a solution of (1), provided that $B$ satisfies the $\frac{1}{L}$-cocoercivity condition and the step size $\lambda$ is chosen from the interval $\left(0, \frac{2}{L}\right)$.
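As a concrete Hilbert space illustration (our own toy instance in $\mathbb{R}^n$, not taken from the paper), the sketch below runs the forward–backward update with $A = \partial(\kappa\|\cdot\|_1)$, whose resolvent is componentwise soft-thresholding, and $Bx = x - b$, which is $1$-cocoercive with $L = 1$, so any step size $\lambda \in (0, 2)$ is admissible. The unique zero of $A + B$ is the soft-thresholded point $S_\kappa(b)$.

```python
import numpy as np

def soft(z, t):
    # resolvent of A = ∂(t·||·||_1): componentwise soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward(b, kappa, lam=1.5, iters=200):
    # solves 0 ∈ ∂(kappa·||x||_1) + (x - b); B x = x - b is 1-cocoercive (L = 1),
    # so the classical step size condition lam ∈ (0, 2/L) = (0, 2) applies
    x = np.zeros_like(b)
    for _ in range(iters):
        x = soft(x - lam * (x - b), lam * kappa)  # backward step applied to forward step
    return x

b = np.array([2.0, -1.2, 0.3, 0.0])
x = forward_backward(b, kappa=0.5)
print(np.allclose(x, soft(b, 0.5), atol=1e-8))  # closed-form solution S_κ(b); prints True
```

With $\lambda = 1$ a single iteration already lands on the solution for this particular $B$; the value $\lambda = 1.5$ is used only to exercise the iteration.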
To relax the cocoercivity constraint imposed on operator $B$, Tseng [3] developed a modified version of the forward–backward algorithm. This modified method requires only Lipschitz continuity of $B$, at the cost of an extra forward evaluation per iteration. Subsequently, Malitsky and Tam [4] proposed the forward–reflected–backward splitting method for addressing the monotone inclusion problem (1), whose iterative scheme is formulated as follows:
$$x_{n+1} = (I + \lambda A)^{-1}\bigl(x_n - 2\lambda B x_n + \lambda B x_{n-1}\bigr). \tag{3}$$
It has been established that the sequence $\{x_n\}$ generated by the iterative Formula (3) converges weakly to a solution of (1), provided that the step size $\lambda$ is selected from the interval $\left(0, \frac{1}{2L}\right)$.
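To see how merely Lipschitz (non-cocoercive) operators are covered, consider the following toy instance (ours, not from the paper): a skew-symmetric linear part makes $B$ monotone and $1$-Lipschitz but not cocoercive, and with $Ax = x$ the resolvent is a simple scaling. The forward–reflected–backward update (3) then converges for $\lambda \in (0, \frac{1}{2L})$:

```python
import numpy as np

K = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric: monotone, 1-Lipschitz, NOT cocoercive
c = np.array([1.0, -2.0])
B = lambda x: K @ x + c
lam = 0.4                                # admissible since lam < 1/(2L) = 0.5
res_A = lambda z: z / (1.0 + lam)        # resolvent (I + lam*A)^{-1} for A x = x

x_prev = np.zeros(2)
x = np.ones(2)
for _ in range(500):
    # forward-reflected-backward step: x_{n+1} = J_A(x_n - 2*lam*B x_n + lam*B x_{n-1})
    x, x_prev = res_A(x - 2 * lam * B(x) + lam * B(x_prev)), x

x_star = np.linalg.solve(np.eye(2) + K, -c)  # the fixed point solves 0 = x + Bx
print(np.allclose(x, x_star, atol=1e-8))     # prints True
```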
In another line of research, Cevher and Vũ [5] put forward the reflected–forward–backward splitting method for solving problem (1) under the sole assumption that $B$ is $L$-Lipschitz continuous. The update rule of this method is given by
$$x_{n+1} = (I + \lambda A)^{-1}\bigl(x_n - \lambda B(2x_n - x_{n-1})\bigr). \tag{4}$$
Notably, the weak convergence of the iterates produced by (4) is guaranteed when the step size $\lambda$ lies within the range $\left(0, \frac{\sqrt{2}-1}{L}\right)$. Recently, inertial techniques have also been incorporated into splitting algorithms to accelerate convergence. For instance, Shehu et al. [6] proposed an inertial outer-reflected forward–backward splitting method for solving monotone inclusions involving three operators (one of which is cocoercive) in Hilbert spaces, achieving both weak and strong convergence results. More recently, splitting schemes without cocoercivity have been further extended to multi-operator settings in Hilbert spaces. For instance, Cao et al. [7] proposed a forward–reflected–backward algorithm for finding zeros of the sum of three maximal monotone operators and a Lipschitz monotone operator, establishing weak and strong convergence under mild step size conditions.
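To make the reflected–forward–backward update (4) concrete, here is the same kind of toy Hilbert space instance (ours, not from the paper). Note that, unlike (3), only one evaluation of $B$ per iteration is needed, at the reflected point $2x_n - x_{n-1}$:

```python
import numpy as np

K = np.array([[0.0, 1.0], [-1.0, 0.0]])  # monotone and 1-Lipschitz, not cocoercive
c = np.array([1.0, -2.0])
B = lambda x: K @ x + c
lam = 0.3                                # a small step size proportional to 1/L
res_A = lambda z: z / (1.0 + lam)        # resolvent (I + lam*A)^{-1} for A x = x

x_prev = np.zeros(2)
x = np.ones(2)
for _ in range(500):
    # reflected-forward-backward step: B is evaluated once, at 2x_n - x_{n-1}
    x, x_prev = res_A(x - lam * B(2 * x - x_prev)), x

x_star = np.linalg.solve(np.eye(2) + K, -c)  # the fixed point solves 0 = x + Bx
print(np.allclose(x, x_star, atol=1e-8))     # prints True
```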
The theory of monotone operators in Banach spaces provides a natural framework for numerous applied problems. In partial differential equations, nonlinear elliptic and parabolic problems are often formulated as monotone inclusion problems in Sobolev spaces (see, e.g., Showalter [8]). In signal processing and imaging, variational models involving $\ell_1$ or total-variation penalties lead naturally to optimization problems in non-Hilbertian Banach spaces such as $\ell_1$ or $BV$ (see, e.g., Chambolle and Pock [9] and the compressed sensing framework of Candès et al. [10]). These applications motivate the development of splitting algorithms that can operate directly in Banach spaces without relying on Hilbertian structure.
In the framework of Banach spaces, research findings related to the forward–backward method and its extended forms remain relatively scarce; see [11,12,13,14,15]. Bello et al. [16] introduced the forward–reflected–backward splitting method, specifically designed for real 2-uniformly convex and uniformly smooth Banach spaces. Given a maximally monotone operator $A \colon X \rightrightarrows X^*$ and a monotone Lipschitz operator $B \colon X \to X^*$, their algorithm generates a sequence $\{x_n\}$ via
$$x_{n+1} = (J + \lambda_n A)^{-1}\bigl(J x_n - \lambda_n B x_n - \lambda_{n-1}(B x_n - B x_{n-1})\bigr), \tag{5}$$
where $J$ is the normalized duality mapping, $(J + \lambda_n A)^{-1}$ denotes the resolvent of $A$, and the step sizes $\lambda_n$ are chosen in an interval $[\underline{\lambda}, \overline{\lambda}]$ with $\underline{\lambda} > 0$ and $\overline{\lambda}$ bounded in terms of the Lipschitz constant $L$ and the 2-uniform convexity constant of $X$. Under mild conditions, they proved weak convergence of the iterates to a solution of the inclusion $0 \in (A + B)x$.
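This Banach space scheme can be exercised numerically on a finite-dimensional $\ell_p$ space, where Alber's formulas give $J$ and $J^{-1}$ in closed form. The instance below is our own hypothetical sketch, not the setting of [16]: we take $A = J$ (a strongly monotone choice) and an affine $B$ with a small skew-symmetric linear part, so the resolvent is $(J + \lambda J)^{-1}(z) = J^{-1}(z/(1+\lambda))$, and we monitor the residual of the optimality condition $0 = Jx + Bx$. The forward–reflected–backward step follows the standard form of the method, written here under these assumptions.

```python
import numpy as np

p, q = 1.5, 3.0                       # conjugate exponents: 1/p + 1/q = 1

def J(x):                             # duality mapping of l_p (Alber's formula)
    n = np.sum(np.abs(x)**p)**(1.0 / p)
    return np.zeros_like(x) if n == 0 else n**(2 - p) * np.sign(x) * np.abs(x)**(p - 1)

def Jinv(y):                          # inverse of J = duality mapping of l_q
    n = np.sum(np.abs(y)**q)**(1.0 / q)
    return np.zeros_like(y) if n == 0 else n**(2 - q) * np.sign(y) * np.abs(y)**(q - 1)

print(np.allclose(Jinv(J(np.array([0.5, -1.0, 2.0]))), [0.5, -1.0, 2.0]))  # J^{-1} ∘ J = id

K = 0.2 * np.array([[0., 1., 0.], [-1., 0., 1.], [0., -1., 0.]])  # skew: monotone, Lipschitz
c = np.array([0.3, -0.1, 0.2])
B = lambda x: K @ x + c               # hypothetical monotone Lipschitz operator
lam = 0.1

x_prev = np.zeros(3)
x = np.zeros(3)
for _ in range(20000):
    z = J(x) - lam * B(x) - lam * (B(x) - B(x_prev))  # forward-reflected step in X*
    x, x_prev = Jinv(z / (1.0 + lam)), x              # resolvent (J + lam*A)^{-1}, A = J

print(np.linalg.norm(J(x) + B(x)) < 1e-6)             # residual of 0 = Jx + Bx
```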
More recently, Huang et al. [17] proposed a new notion of $\alpha$-monotonicity for operators in smooth Banach spaces, which explicitly incorporates the duality mapping $J$. This definition (see Definition 2), given by
$$\langle u - v, x - y\rangle \ge \alpha \langle Jx - Jy, x - y\rangle,$$
better aligns with the geometry of Banach spaces and recovers the classical Hilbert space notion when $J$ is the identity. Using this framework, they established contractive properties of the resolvent of $\alpha$-monotone operators and, as an application, proved strong convergence with an $R$-linear rate for the forward–reflected–backward splitting algorithm when the sum $A + B$ satisfies a “strong-convexity-dominates-weak-convexity” condition (i.e., $\alpha + \beta > 0$, where $\alpha, \beta$ are the monotonicity constants of $A$ and $B$, respectively). To the best of our knowledge, no existing work has addressed the reflected–forward–backward splitting method in the setting of Banach spaces. A key challenge arises from the fact that the operators $A$ and $B$ map the Banach space $X$ into its dual space $X^*$. The analytical tools that are effective in Hilbert spaces cannot be directly transplanted to general Banach spaces. Furthermore, establishing the convergence of the reflected–forward–backward splitting method in Banach spaces proves more intricate than that of the forward–reflected–backward splitting method. A crucial underlying reason lies in the resolvent $(J + \lambda A)^{-1}$, which incorporates the duality mapping $J$. Unlike the Hilbert space scenario, where $J$ coincides with the identity operator $I$, the duality mapping $J$ exhibits nonlinear characteristics in general Banach spaces, posing additional obstacles to the convergence proof.
In the present work, we establish the first strong convergence result (with an $R$-linear rate) for the reflected–forward–backward splitting method applied to monotone inclusions in 2-uniformly convex and uniformly smooth Banach spaces. The convergence is guaranteed provided that the sum satisfies a “strong-convexity-dominates-weak-convexity” condition (i.e., $\alpha + \beta > 0$, where $\alpha, \beta$ are the monotonicity constants of $A$ and $B$ in the sense of Definition 2, respectively). This condition adapts naturally to the smooth geometry of the space via the duality mapping. Our primary contributions are the extension of the RFBS scheme to a Banach space setting and the establishment of its strong convergence under geometric conditions that are genuinely compatible with the nonlinear structure of the space. Key technical hurdles overcome in this extension include the following: (i) handling the resolvent $(J + \lambda A)^{-1}$, which is implicit in the nonlinear duality mapping $J$; (ii) replacing the Euclidean distance $\|x - y\|^2$ with the Bregman-type functional $\phi(x, y)$ and leveraging its properties in uniformly convex and smooth spaces; and (iii) deriving a step size condition that explicitly incorporates the modulus of convexity (through the 2-uniform convexity constant $\mu$), thereby capturing the intrinsic interplay between the algorithm’s parameters and the geometry of the underlying Banach space. We explicitly note that, unlike in Hilbert spaces, where weak convergence is known for general monotone operators, establishing even weak convergence for RFBS under merely monotone assumptions in Banach spaces remains an open question, due primarily to the nonlinearity of $J$. Our results therefore constitute a foundational step, proving strong convergence under strengthened conditions, while highlighting the need for further research to bridge the gap with Hilbert space theory. The derived step size condition, while sufficient for linear convergence, is conservative and reflects the intrinsic interaction between the algorithm and the space’s geometry via the modulus of convexity.
2. Preliminaries
A Banach space $X$ is said to be smooth if, for every pair of unit vectors $x, y \in S_X = \{u \in X : \|u\| = 1\}$, the directional derivative of the norm at $x$ in the direction $y$ exists; that is, the limit
$$\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}$$
exists.
On the other hand, $X$ is called strictly convex if the midpoint of any two distinct points on the unit sphere lies strictly inside the unit ball; equivalently, $\left\|\frac{x + y}{2}\right\| < 1$ whenever $x, y \in S_X$ and $x \neq y$.
The modulus of convexity of a Banach space $X$ is defined by
$$\delta_X(\varepsilon) = \inf\left\{1 - \left\|\frac{x + y}{2}\right\| : \|x\| \le 1,\ \|y\| \le 1,\ \|x - y\| \ge \varepsilon\right\}$$
for $\varepsilon \in [0, 2]$.
The space $X$ is called uniformly convex if $\delta_X(\varepsilon) > 0$ for every $\varepsilon \in (0, 2]$. Moreover, for a fixed exponent $q \ge 2$, the space $X$ is said to be $q$-uniformly convex if there exists a constant $c > 0$ such that
$$\delta_X(\varepsilon) \ge c\,\varepsilon^q \quad \text{for all } \varepsilon \in (0, 2].$$
It is a classical result in Banach space theory that every uniformly convex space is strictly convex.
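For orientation (a standard computation, not specific to this paper): in a Hilbert space $H$, the parallelogram law $\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2$ yields the modulus of convexity in closed form, and the elementary bound $1 - \sqrt{1 - t} \ge t/2$ for $t \in [0, 1]$ shows that $H$ is 2-uniformly convex with constant $c = 1/8$:

```latex
\delta_H(\varepsilon) \;=\; 1 - \sqrt{1 - \tfrac{\varepsilon^2}{4}} \;\ge\; \tfrac{\varepsilon^2}{8},
\qquad \varepsilon \in (0, 2].
```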
All symbols used above follow standard conventions in Banach space theory; for detailed definitions and properties, we refer the reader to Megginson’s monograph [18].
The normalized duality mapping $J \colon X \to 2^{X^*}$ is defined by
$$J(x) = \{x^* \in X^* : \langle x^*, x\rangle = \|x\|^2 = \|x^*\|^2\}$$
for all $x \in X$.
For a smooth Banach space $X$, following the work of Alber [19] and Kamimura and Takahashi [20], we introduce the mapping $\phi \colon X \times X \to \mathbb{R}$ given by
$$\phi(x, y) = \|x\|^2 - 2\langle Jy, x\rangle + \|y\|^2 \tag{6}$$
for all $x, y \in X$. Notably, $\phi$ coincides with the Bregman distance associated with the convex function $f(x) = \|x\|^2$ (see Bregman [21], Butnariu and Iusem [22], and Censor and Lent [23] for detailed properties of Bregman distances). In the special case where $X$ is a Hilbert space, the duality mapping $J$ reduces to the identity operator $I$, and thus $\phi(x, y) = \|x - y\|^2$ for all $x, y \in X$. It is well established that
$$(\|x\| - \|y\|)^2 \le \phi(x, y) \le (\|x\| + \|y\|)^2 \tag{7}$$
holds for all $x, y \in X$. Additionally, if $X$ is strictly convex, then the mapping $\phi$ is non-degenerate; i.e.,
$$\phi(x, y) = 0 \iff x = y. \tag{8}$$
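The two-sided bound relating $\phi$ to the norms is easy to check numerically. The sketch below (our own, in Python rather than the paper's MATLAB) uses Alber's Lyapunov functional $\phi(x, y) = \|x\|_p^2 - 2\langle Jy, x\rangle + \|y\|_p^2$ on a finite-dimensional $\ell_p$ space, an instance chosen only for this illustration, and samples random pairs:

```python
import numpy as np

p = 1.5
norm_p = lambda x: np.sum(np.abs(x)**p)**(1.0 / p)

def J(x):  # duality mapping of l_p (Alber's formula)
    n = norm_p(x)
    return np.zeros_like(x) if n == 0 else n**(2 - p) * np.sign(x) * np.abs(x)**(p - 1)

def phi(x, y):  # phi(x, y) = ||x||^2 - 2<Jy, x> + ||y||^2
    return norm_p(x)**2 - 2 * np.dot(J(y), x) + norm_p(y)**2

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    lo, hi = (norm_p(x) - norm_p(y))**2, (norm_p(x) + norm_p(y))**2
    ok &= (lo - 1e-9 <= phi(x, y) <= hi + 1e-9)  # sandwich bound, up to float slack
print(ok)  # prints True
```

The bound follows from Hölder's inequality, since $|\langle Jy, x\rangle| \le \|Jy\|_q \|x\|_p = \|y\|_p \|x\|_p$.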
Lemma 1 ([19,24]). Let X be a real smooth and uniformly convex Banach space. Then the following identities are satisfied for all $x, y, z \in X$:
(1) $\phi(x, y) + \phi(y, x) = 2\langle Jx - Jy, x - y\rangle$;
(2) $\phi(x, y) = \phi(x, z) + \phi(z, y) + 2\langle Jz - Jy, x - z\rangle$.
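Since the identities of Lemma 1 are used repeatedly later, a quick numerical check is reassuring. The sketch below verifies the two standard identities for $\phi$, our reading of the lemma, namely $\phi(x, y) + \phi(y, x) = 2\langle Jx - Jy, x - y\rangle$ and the three-point identity $\phi(x, y) = \phi(x, z) + \phi(z, y) + 2\langle Jz - Jy, x - z\rangle$, on random vectors in a finite-dimensional $\ell_p$ space:

```python
import numpy as np

p = 1.5
norm_p = lambda x: np.sum(np.abs(x)**p)**(1.0 / p)

def J(x):  # duality mapping of l_p (Alber's formula)
    n = norm_p(x)
    return np.zeros_like(x) if n == 0 else n**(2 - p) * np.sign(x) * np.abs(x)**(p - 1)

phi = lambda x, y: norm_p(x)**2 - 2 * np.dot(J(y), x) + norm_p(y)**2

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# identity (1): phi(x, y) + phi(y, x) = 2 <Jx - Jy, x - y>
lhs1 = phi(x, y) + phi(y, x)
rhs1 = 2 * np.dot(J(x) - J(y), x - y)

# identity (2), three-point: phi(x, y) = phi(x, z) + phi(z, y) + 2 <Jz - Jy, x - z>
lhs2 = phi(x, y)
rhs2 = phi(x, z) + phi(z, y) + 2 * np.dot(J(z) - J(y), x - z)

print(np.isclose(lhs1, rhs1), np.isclose(lhs2, rhs2))  # prints True True
```

Both identities follow by expanding the definition of $\phi$ and using $\langle Jy, y\rangle = \|y\|^2$; in a Hilbert space, identity (2) reduces to the classical law of cosines.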
Lemma 2 ([24]). Suppose X is a 2-uniformly convex and smooth Banach space. Then there exists a constant $\mu \ge 1$ such that
$$\frac{1}{\mu}\|x - y\|^2 \le \phi(x, y)$$
for all $x, y \in X$. This constant $\mu$ is referred to as the 2-uniform convexity constant of $X$.
The nonlinear duality mapping $J$ and the associated functional $\phi$ play a central role in extending Hilbert space arguments to the Banach space setting. In a Hilbert space, where $J$ coincides with the identity operator $I$, the functional $\phi(x, y)$ reduces to the squared Euclidean distance $\|x - y\|^2$. In a general smooth Banach space, $\phi$ retains key metric-like properties (as seen in (7) and (8)) while naturally incorporating the nonlinearity of $J$. This makes $\phi$ a suitable Lyapunov function for analyzing iterative algorithms. The identities in Lemma 1, which generalize the classical law of cosines, and the key inequality in Lemma 2, which links $\phi$ to the norm via the modulus of convexity, are fundamental tools that allow us to manipulate the resolvent step and ultimately establish convergence. The subsequent convergence analysis will rely heavily on these geometric properties of $\phi$.
Next, we recall the classical definition of $\alpha$-monotone operators in Banach spaces.
Definition 1 (Classical $\alpha$-monotonicity). An operator $A \colon X \rightrightarrows X^*$ (a multi-valued operator, denoted by ⇉) is said to be α-monotone for some $\alpha \in \mathbb{R}$ if
$$\langle u - v, x - y\rangle \ge \alpha \|x - y\|^2 \quad \text{for all } (x, u), (y, v) \in \operatorname{gra}(A),$$
where $\operatorname{gra}(A) = \{(x, u) \in X \times X^* : u \in Ax\}$ denotes the graph of $A$. In a smooth Banach space $X$, the normalized duality mapping $J$ is single-valued, a characteristic property of smooth spaces. Exploiting this feature, we introduce a revised notion of $\alpha$-monotone operators adapted to the setting of smooth Banach spaces.
Definition 2 ($\alpha$-monotonicity in smooth Banach spaces). Let X be a smooth Banach space. An operator $A \colon X \rightrightarrows X^*$ is called α-monotone ($\alpha \in \mathbb{R}$) if
$$\langle u - v, x - y\rangle \ge \alpha \langle Jx - Jy, x - y\rangle \quad \text{for all } (x, u), (y, v) \in \operatorname{gra}(A).$$
The scalar $\alpha$ is called the monotonicity constant of $A$. Specifically, $A$ is monotone if $\alpha = 0$; $A$ is strongly monotone if $\alpha > 0$; and $A$ is weakly monotone if $\alpha < 0$.
An operator $A$ is said to be maximally $\alpha$-monotone if it is $\alpha$-monotone and there exists no other $\alpha$-monotone operator $B$ such that $\operatorname{gra}(A)$ is a proper subset of $\operatorname{gra}(B)$ (i.e., $\operatorname{gra}(A) \subsetneq \operatorname{gra}(B)$).
For a Banach space $X$ that is strictly convex, smooth, and reflexive, Huang, Peng, and Tang [17] established that the set of maximally strongly monotone operators under the revised definition (Definition 2) is dense in the set of maximally strongly monotone operators under the classical definition (Definition 1), in the following sense (see Theorem 1). For convenience, we introduce the following notation: let $\mathcal{M}_1$ denote the collection of maximally strongly monotone operators defined via Definition 1, and let $\mathcal{M}_2$ denote the collection of maximally strongly monotone operators defined via Definition 2.
Theorem 1 ([17]). Let $A \in \mathcal{M}_1$ and suppose that $A^{-1}(0^*) \neq \emptyset$. Since A is strongly monotone, $A^{-1}(0^*)$ is a singleton (denoted by $\bar{x}$). Then there exists a sequence $\{A_n\} \subset \mathcal{M}_2$ such that $A_n^{-1}(0^*) \neq \emptyset$ (and each $A_n^{-1}(0^*)$ is also a singleton, denoted by $\bar{x}_n$). Furthermore, the sequence $\{\bar{x}_n\}$ converges strongly to $\bar{x}$; i.e., $\bar{x}_n \to \bar{x}$ as $n \to \infty$.
Remark 1. In their work [17], the authors explicitly constructed the sequence $\{A_n\}$ as $A_n = A + \epsilon_n J$, where $\{\epsilon_n\}$ is a sequence of positive scalars satisfying $\epsilon_n \to 0$ as $n \to \infty$. In this construction, the convergence rate is estimated as $\|\bar{x}_n - \bar{x}\| = O(\epsilon_n)$, which implies $\bar{x}_n \to \bar{x}$ as $n \to \infty$.
3. Main Results
To the best of our knowledge, no prior work has addressed the reflected–forward–backward splitting method in the setting of Banach spaces. In this paper, we establish the convergence of this method for monotone inclusion problems involving Lipschitz continuous operators in 2-uniformly convex Banach spaces.
In Hilbert spaces, where the duality mapping $J$ coincides with the identity $I$, the reflected–forward–backward update can be analyzed using the elementary identity $\|x - y\|^2 = \|x\|^2 - 2\langle x, y\rangle + \|y\|^2$. This identity is no longer valid when $J$ is nonlinear. Moreover, the resolvent $(J + \lambda A)^{-1}$ becomes implicit in $J$, and the standard monotonicity inequalities do not directly combine with the norm. To overcome these obstacles, we (i) replace the squared norm by the Bregman-type functional $\phi(x, y)$, which retains a “generalized Pythagorean” identity (Lemma 1); (ii) employ the modified monotonicity notion (Definition 2), which couples $A$ and $B$ with $J$ through the term $\langle Jx - Jy, x - y\rangle$; and (iii) exploit the 2-uniform convexity inequality (Lemma 2) to convert estimates involving $\phi$ back to norm estimates. These adaptations allow us to construct a Lyapunov sequence that contracts at an $R$-linear rate, thereby recovering strong convergence.
Theorem 2.
Let X be a real Banach space that is 2-uniformly convex (with constant $\mu$ as in Lemma 2) and uniformly smooth. Let $A \colon X \rightrightarrows X^*$ be maximally monotone and α-monotone, and let $B \colon X \to X^*$ be β-monotone and L-Lipschitz continuous, all in the sense of Definition 2, with $\alpha + \beta > 0$.
Given $x_0, x_1 \in X$, define the sequence $\{x_n\}$ by
$$x_{n+1} = (J + \lambda_n A)^{-1}\bigl(J x_n - \lambda_n B(2x_n - x_{n-1})\bigr), \tag{9}$$
where $\{\lambda_n\}$ is a non-increasing sequence satisfying the step size condition discussed in Remark 2, for some $m > 0$ and $\varepsilon > 0$. If $(A + B)^{-1}(0^*) \neq \emptyset$, then $\{x_n\}$ strongly converges to a point in $(A + B)^{-1}(0^*)$ at an R-linear rate.
Proof. Let $x_* \in (A + B)^{-1}(0^*)$. So, $-Bx_* \in Ax_*$. From (9) we have the following:
$$\frac{1}{\lambda_n}\bigl(J x_n - J x_{n+1}\bigr) - B(2x_n - x_{n-1}) \in A x_{n+1}, \tag{10}$$
together with the analogous inclusion (11) at the previous index. Using (10), (11), and the strong monotonicity of $A$, we obtain a first estimate (12). Taking advantage of the Lipschitz continuity and monotonicity of $B$, we obtain the bound (13). From the definition of $\phi$ in (6) and Lemma 1(2), we have the corresponding three-point expansion (14). Because $\alpha + \beta > 0$, it follows that the terms $\langle J x_{n+1} - J x_*, x_{n+1} - x_*\rangle$ can be bounded from below. Multiplying by $\alpha$ and $\beta$, respectively, gives the lower bounds (15) and (16). This step translates the monotonicity conditions, originally stated with the duality mapping, into estimates involving the Lyapunov function $\phi$, a crucial ingredient for the subsequent recursive inequality. Substituting (13) and (14) into (12), and considering (15) and (16), we obtain the mixed estimate (17), which contains both $\phi$-terms and squared norms. To unify the expression, we employ the characterization of 2-uniform convexity provided by Lemma 2:
$$\frac{1}{\mu}\|x - y\|^2 \le \phi(x, y).$$
Inserting these bounds into (17) converts every squared-norm term into a quantity comparable with $\phi$, yielding a recursive inequality for the Lyapunov sequence. In (14) we take some $m > 0$; since the step size condition holds, the resulting coefficient is positive. Let $q \in (0, 1)$ denote the resulting contraction factor; then, since $\{\lambda_n\}$ is non-increasing, we obtain
$$\phi(x_*, x_{n+1}) \le q\,\phi(x_*, x_n),$$
which establishes that $x_n \to x_*$ at an R-linear rate by Lemma 2. □
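For a quick sanity check of the scheme (9) (a toy sketch of ours, not the paper's experiment), take a finite-dimensional $\ell_p$ space with Alber's closed-form duality mapping, $A = J$ (a 1-strongly monotone choice in the sense of Definition 2, so with a merely monotone $B$ we have $\alpha + \beta = 1 > 0$), and an affine $B$ whose linear part is a small skew-symmetric matrix. The residual of the optimality condition $0 = Jx + Bx$ then decays to zero:

```python
import numpy as np

p, q = 1.5, 3.0                       # conjugate exponents; l_p is 2-uniformly convex for p in (1, 2]

def J(x):                             # duality mapping of l_p (Alber's formula)
    n = np.sum(np.abs(x)**p)**(1.0 / p)
    return np.zeros_like(x) if n == 0 else n**(2 - p) * np.sign(x) * np.abs(x)**(p - 1)

def Jinv(y):                          # inverse of J = duality mapping of l_q
    n = np.sum(np.abs(y)**q)**(1.0 / q)
    return np.zeros_like(y) if n == 0 else n**(2 - q) * np.sign(y) * np.abs(y)**(q - 1)

K = 0.2 * np.array([[0., 1., 0.], [-1., 0., 1.], [0., -1., 0.]])  # skew: 0-monotone, Lipschitz
c = np.array([0.3, -0.1, 0.2])
B = lambda x: K @ x + c               # hypothetical monotone Lipschitz operator
lam = 0.1                             # a small constant step size for this sketch

x_prev = np.zeros(3)
x = np.zeros(3)
for _ in range(20000):
    z = J(x) - lam * B(2 * x - x_prev)       # reflected-forward step, computed in X*
    x, x_prev = Jinv(z / (1.0 + lam)), x     # resolvent (J + lam*A)^{-1} for A = J

print(np.linalg.norm(J(x) + B(x)) < 1e-6)    # residual of 0 = Jx + Bx
```

At a fixed point, $(1 + \lambda)Jx_* = Jx_* - \lambda B x_*$, i.e., $Jx_* + Bx_* = 0$, so a vanishing residual indicates convergence to the zero of $A + B$ for this instance.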
Remark 2 (On the assumptions and step size condition). The step size condition in Theorem 2 is derived from the technical requirements of our proof, which involves reciprocal estimates between the norm $\|\cdot\|$ and the Lyapunov functional $\phi$. Here, $\mu$ is the 2-uniform convexity constant of the space (fixed), $m$ is chosen to satisfy the intermediate estimates of the proof, and $\varepsilon$ is a small positive number that ensures the step size interval is nonempty. The condition $\alpha + \beta > 0$ ensures the upper bound remains positive. The appearance of the 2-uniform convexity constant $\mu$ reflects the intrinsic geometric property of the underlying Banach space, as Lemma 2 establishes the fundamental link $\frac{1}{\mu}\|x - y\|^2 \le \phi(x, y)$. While sufficient for guaranteeing R-linear convergence, this bound is conservative. Investigating whether it can be substantially relaxed, possibly via alternative analytical techniques, remains an interesting open question.
Furthermore, the strong convergence result crucially relies on the assumption $\alpha + \beta > 0$ under Definition 2. Within our proof framework, which leverages the specific form $\langle Jx - Jy, x - y\rangle$ to connect monotonicity with the functional $\phi$, relaxing this to uniform or strict monotonicity (defined purely with respect to the norm) appears to be highly non-trivial. The principal difficulty stems from the nonlinearity of the duality mapping $J$ in general Banach spaces. Whether strong or weak convergence can be established under weaker monotonicity conditions is an important direction for future research.
Theorem 3. Let X be a real Banach space that is 2-uniformly convex (with constant $\mu$ as in Lemma 2) and uniformly smooth. Let $A \colon X \rightrightarrows X^*$ be maximally monotone and α-monotone ($\alpha > 0$) (in the sense of Definition 1), and let $B \colon X \to X^*$ be monotone and L-Lipschitz continuous.
For any $\delta > 0$, there exists a δ-strongly monotone operator $A_\delta$ (in the sense of Definition 2) such that, given initial points $x_0, x_1 \in X$, the sequence $\{x_n\}$ defined by
$$x_{n+1} = (J + \lambda_n A_\delta)^{-1}\bigl(J x_n - \lambda_n B(2x_n - x_{n-1})\bigr),$$
with a non-increasing step size sequence $\{\lambda_n\}$ satisfying the condition of Theorem 2 for some $m > 0$ and $\varepsilon > 0$, approximates the solution $\bar{x}$ of the original inclusion: for any $\epsilon > 0$ and all sufficiently large $n$, $\|x_n - \bar{x}\| \le \epsilon + O(\delta)$. Proof. Since $A \in \mathcal{M}_1$, it follows from Theorem 1 that there exists an operator $A_\delta \in \mathcal{M}_2$, which is δ-strongly monotone in the sense of Definition 2, such that the set of solutions $(A_\delta + B)^{-1}(0^*)$ is nonempty and a singleton. Specifically, let $\bar{x}_\delta$ denote this solution; then the error estimate $\|\bar{x}_\delta - \bar{x}\| = O(\delta)$ is valid. By virtue of Theorem 2, the iterative sequence $\{x_n\}$ converges strongly to $\bar{x}_\delta$ as $n \to \infty$. Consequently, for sufficiently large n, the inequality $\|x_n - \bar{x}\| \le \|x_n - \bar{x}_\delta\| + \|\bar{x}_\delta - \bar{x}\| \le \epsilon + O(\delta)$ is satisfied. □
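The perturbation behind Theorem 3 admits a one-line justification: a natural construction, consistent with Remark 1 and Definition 2 and adopted here as our assumption for this illustration, is $A_\delta = A + \delta J$. If $A$ is monotone, then for all $(x, u' + \delta Jx), (y, v' + \delta Jy) \in \operatorname{gra}(A_\delta)$ with $u' \in Ax$ and $v' \in Ay$,

```latex
\langle (u' + \delta Jx) - (v' + \delta Jy),\, x - y\rangle
  \;=\; \langle u' - v',\, x - y\rangle + \delta\,\langle Jx - Jy,\, x - y\rangle
  \;\ge\; \delta\,\langle Jx - Jy,\, x - y\rangle,
```

so $A_\delta$ is $\delta$-strongly monotone precisely in the sense of Definition 2.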
Remark 3.
Theorem 3 yields an approximate solution within a δ-neighborhood. By letting $\delta \to 0$, the corresponding sequence of approximate solutions converges strongly to an exact solution of the original problem. This shows that, by approximating a maximally monotone operator with a slightly perturbed strongly monotone one (in the sense of Definition 2), the reflected–forward–backward iteration can be made to converge arbitrarily close to a true solution of the original inclusion.
Remark 4. In this work, we have established the first strong convergence result, with an R-linear rate, for the reflected–forward–backward splitting (RFBS) method in the setting of 2-uniformly convex and uniformly smooth Banach spaces. The convergence is guaranteed under a novel monotonicity condition (Definition 2), which requires the combined monotonicity constant to satisfy $\alpha + \beta > 0$ and interacts naturally with the nonlinear duality mapping $J$.
4. Numerical Experiment
In this section, we present numerical experiments to verify the efficiency and effectiveness of the proposed algorithm. We mainly compare our method with the forward–reflected–backward splitting algorithm studied in [16], hereafter referred to as FRB. All numerical experiments were implemented in MATLAB R2020a. The simulations were conducted on a 64-bit Lenovo laptop equipped with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 8 GB of RAM.
Example 1 ([16]). Let $X = \ell_p$ ($1 < p \le 2$, so that $X$ is 2-uniformly convex), with the norm and inner product defined, respectively, by
$$\|x\|_p = \left(\sum_{i} |x_i|^p\right)^{1/p} \quad \text{and} \quad \langle x, y\rangle = \sum_{i} x_i y_i.$$
Observe that, for $x \in \ell_p$ with $x \neq 0$, the normalized duality mapping $J \colon \ell_p \to \ell_q$ (where $\frac{1}{p} + \frac{1}{q} = 1$) and its inverse $J^{-1} \colon \ell_q \to \ell_p$ are given (see, for example, Alber [25]) by
$$Jx = \|x\|_p^{2-p}\bigl(|x_1|^{p-2}x_1, |x_2|^{p-2}x_2, \ldots\bigr) \tag{21}$$
and
$$J^{-1}y = \|y\|_q^{2-q}\bigl(|y_1|^{q-2}y_1, |y_2|^{q-2}y_2, \ldots\bigr), \tag{22}$$
respectively. In particular, when $p = 2$, the mappings $J$ and $J^{-1}$, defined in (21) and (22), reduce to the identity operator. Now, define the operators $A, B \colon X \to X^*$ as in [16]. Then the operator $B$ is monotone and Lipschitz continuous with Lipschitz constant $L$, while $A$ is 1-strongly monotone on $X$. According to the definition of $A$ given above, the resolvent operator $(J + \lambda A)^{-1}$ is available in closed form.
Let $f \colon X \to \mathbb{R}$ be a continuous linear functional. By the Riesz representation theorem, there exists a unique $a \in \ell_q$ such that $f(x) = \langle a, x\rangle$ for all $x \in X$. Since $f$ is arbitrary, it is enough to prove that $x_n \to x_*$ as $n \to \infty$, which is equivalent to showing $\|x_n - x_*\|_p \to 0$.
In the numerical experiments, we adopt the stopping criterion $E_n := \|x_{n+1} - x_n\| < \varepsilon$, where $\varepsilon$ is a prescribed small positive constant. For a fair comparison, the same initial points $x_0$ and $x_1$ are used for both methods.
Table 1 reports the numerical performance of the FRB method and Algorithm (9) for different values of the step size parameter $\lambda$ and tolerance $\varepsilon$. It can be observed from Table 1 that, for both methods, decreasing the tolerance $\varepsilon$ results in an increase in both iteration numbers and CPU time, which is expected due to the higher accuracy requirement. In addition, increasing the step size parameter $\lambda$ generally reduces the number of iterations, indicating faster convergence. Although the two methods exhibit comparable iteration counts, Algorithm (9) consistently requires less CPU time than the FRB method for all tested parameters. This demonstrates that Algorithm (9) is computationally more efficient while maintaining similar convergence behavior. Furthermore, we use a log–log plot to illustrate that the error sequence $E_n$ converges to zero as the number of iterations increases, as shown in Figure 1.