Article

A Reflected–Forward–Backward Splitting Method for Monotone Inclusions Involving Lipschitz Operators in Banach Spaces

School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(2), 245; https://doi.org/10.3390/math14020245
Submission received: 26 November 2025 / Revised: 24 December 2025 / Accepted: 5 January 2026 / Published: 8 January 2026
(This article belongs to the Special Issue Nonlinear Functional Analysis: Theory, Methods, and Applications)

Abstract

The reflected–forward–backward splitting (RFBS) method is well-established for solving monotone inclusion problems involving Lipschitz continuous operators in Hilbert spaces, where it converges weakly under mild assumptions. Extending this method to Banach spaces presents significant challenges, primarily due to the nonlinearity of the duality mapping. In this paper, we propose and analyze an RFBS algorithm in the setting of real Banach spaces that are 2-uniformly convex and uniformly smooth. To the best of our knowledge, this work presents the first strong (R-linear) convergence result for the RFBS method in such Banach spaces, achieved under a newly adapted notion of strong monotonicity. Our results thus establish a foundational theoretical guarantee for RFBS in Banach spaces under strengthened monotonicity conditions, while highlighting the open problem of proving weak convergence for the general monotone case.

1. Introduction

In this work, we propose an algorithm to find a zero of the sum of two monotone operators in a Banach space X. More precisely, we aim to solve the monotone inclusion problem
$$\text{find } x \in X \ \text{ such that } \ 0 \in (A+B)(x), \tag{1}$$
where $A : X \rightrightarrows X^*$ is a maximal monotone operator, $B : X \to X^*$ is monotone and $L$-Lipschitz continuous, and the solution set $(A+B)^{-1}(0)$ is assumed to be nonempty.
We begin by reviewing known results in the special case where X is a Hilbert space. For the inclusion problem (1), the classical forward–backward splitting method [1,2] applies when $B$ is $\frac{1}{L}$-cocoercive. Each iteration consists of an explicit (forward) step using $B$ followed by an implicit (backward) resolvent step with respect to $A$. More concretely, the method generates an iterative sequence via the update rule
$$x_{k+1} = J_{\lambda A}\bigl(x_k - \lambda B(x_k)\bigr), \quad k \in \mathbb{N}, \tag{2}$$
and is known to converge weakly to a solution of (1), provided that $B$ satisfies the $\frac{1}{L}$-cocoercivity condition and the step size $\lambda$ is chosen from the interval $\left(0, \frac{2}{L}\right)$.
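As a concrete illustration (our own toy instance, not taken from the paper), the forward–backward update can be run in Python on the lasso-type problem $\min_x \frac{1}{2}\|Cx-b\|^2 + \rho\|x\|_1$: here $B(x) = C^\top(Cx - b)$ is the gradient of a smooth convex function, hence $\frac{1}{L}$-cocoercive with $L = \|C^\top C\|$, and the resolvent of $A = \rho\,\partial\|\cdot\|_1$ is componentwise soft-thresholding. The data `C`, `b`, and the weight `rho` are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent of t * (subdifferential of ||.||_1): componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(C, b, rho=0.1, n_iter=500):
    # Find a zero of A + B with A = rho * subdifferential of ||.||_1
    # and B(x) = C^T (Cx - b), the gradient of 0.5||Cx - b||^2.
    L = np.linalg.norm(C.T @ C, 2)                 # Lipschitz constant of B
    step = 1.0 / L                                 # step size inside (0, 2/L)
    x = np.zeros(C.shape[1])
    for _ in range(n_iter):
        x_half = x - step * (C.T @ (C @ x - b))    # forward (explicit) step
        x = soft_threshold(x_half, step * rho)     # backward (resolvent) step
    return x

rng = np.random.default_rng(0)
C = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = forward_backward(C, b)
```

At the returned point, $x = J_{\lambda A}(x - \lambda B(x))$ holds up to solver tolerance, i.e., the iterate is a fixed point of the update.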
To relax the cocoercivity constraint imposed on operator B, Tseng [3] developed a modified version of the forward–backward algorithm. This modified method only requires Lipschitz continuity of B, at the cost of an extra forward evaluation per iteration. Subsequently, Malitsky and Tam [4] proposed the forward–reflected–backward splitting method for addressing the monotone inclusion problem (1), whose iterative scheme is formulated as follows:
$$x_{k+1} = J_{\lambda A}\bigl(x_k - 2\lambda B(x_k) + \lambda B(x_{k-1})\bigr), \quad k \in \mathbb{N}. \tag{3}$$
It has been established that the sequence $\{x_k\}$ generated by the iterative Formula (3) converges weakly to a solution of (1), provided that the step size $\lambda$ is selected from the interval $\left(0, \frac{1}{2L}\right)$.
In another line of research, Cevher and Vũ [5] put forward the reflected–forward–backward splitting method for solving problem (1) under the sole assumption that B is L-Lipschitz continuous. The update rule of this method is given by
$$x_{k+1} = J_{\lambda A}\bigl(x_k - \lambda B(2x_k - x_{k-1})\bigr), \quad k \in \mathbb{N}. \tag{4}$$
Notably, the weak convergence of the iterates produced by (4) is guaranteed when the step size $\lambda$ lies within the range $\left(0, \frac{\sqrt{2}-1}{L}\right)$. Recently, inertial techniques have also been incorporated into splitting algorithms to accelerate convergence. For instance, Shehu et al. [6] proposed an inertial outer-reflected forward–backward splitting method for solving monotone inclusions involving three operators (one of which is cocoercive) in Hilbert spaces, achieving both weak and strong convergence results. More recently, splitting schemes without cocoercivity have been further extended to multi-operator settings in Hilbert spaces. For instance, Cao et al. [7] proposed a forward–reflected–backward algorithm for finding zeros of the sum of three maximal monotone operators and a Lipschitz monotone operator, establishing weak and strong convergence under mild step size conditions.
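To make the reflected–forward–backward scheme concrete, here is a small self-contained sketch (our construction, not the authors') of update (4) in $\mathbb{R}^2$, with $A = \partial\|\cdot\|_1$ (whose resolvent is soft-thresholding) and $B(x) = Mx$ for a skew-symmetric $M$. This $B$ is monotone and $1$-Lipschitz but not cocoercive, and one can check that the unique zero of $A + B$ is $x^* = 0$.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent J_{tA} of A = subdifferential of ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                  # skew-symmetric: monotone, L = 1, not cocoercive
L = 1.0
lam = 0.4 * (np.sqrt(2.0) - 1.0) / L         # step size inside (0, (sqrt(2)-1)/L)

x_prev = np.array([2.0, 3.0])                # x_{k-1}
x = np.array([2.0, 3.0])                     # x_k
for _ in range(400):
    y = x - lam * (M @ (2.0 * x - x_prev))   # reflected forward step: B(2x_k - x_{k-1})
    x_prev, x = x, soft_threshold(y, lam)    # backward (resolvent) step
# the iterates approach x* = 0, the unique zero of A + B
```

The reflection $2x_k - x_{k-1}$ is what allows a single evaluation of $B$ per iteration, in contrast with Tseng's method.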
The theory of monotone operators in Banach spaces provides a natural framework for numerous applied problems. In partial differential equations, nonlinear elliptic and parabolic problems are often formulated as monotone inclusion problems in Sobolev spaces (see, e.g., Showalter [8]). In signal processing and imaging, variational models involving $L^1$ or total-variation penalties lead naturally to optimization problems in non-Hilbertian Banach spaces such as $L^p$ or $BV$ (see, e.g., Chambolle and Pock [9] and the compressed sensing framework of Candès et al. [10]). These applications motivate the development of splitting algorithms that can operate directly in Banach spaces without relying on Hilbertian structure.
In the framework of Banach spaces, research findings related to the forward–backward method and its extended forms remain relatively scarce; see [11,12,13,14,15]. Bello et al. [16] introduced the forward–reflected–backward splitting method, specifically designed for real 2-uniformly convex and uniformly smooth Banach spaces. Given a maximally monotone operator $A : E \to 2^{E^*}$ and a monotone Lipschitz operator $B : E \to E^*$, their algorithm generates a sequence $\{x_n\}$ via
$$x_{n+1} = J_{\lambda_n}^{A} J^{-1}\bigl(Jx_n - \lambda_n Bx_n - \lambda_{n-1}(Bx_n - Bx_{n-1})\bigr), \quad n \ge 0, \tag{5}$$
where $J$ is the normalized duality mapping, $J_{\lambda_n}^{A}$ denotes the resolvent of $A$, and the step sizes $\lambda_n$ are chosen in an interval $\left[\epsilon, \frac{1-2\epsilon}{2\mu L}\right]$ with $\epsilon > 0$ and $\mu \ge 1$. Under mild conditions, they proved weak convergence of the iterates to a solution of the inclusion $0 \in (A+B)x$.
More recently, Huang et al. [17] proposed a new notion of α -monotonicity for operators in smooth Banach spaces, which explicitly incorporates the duality mapping J. This definition (see Definition 2), given by
$$\langle x - y, u - v\rangle \ge \alpha\langle x - y, Jx - Jy\rangle \quad \forall (x,u), (y,v) \in \operatorname{gra} A,$$
better aligns with the geometry of Banach spaces and recovers the classical Hilbert space notion when J is the identity. Using this framework, they established contractive properties of the resolvent of α-monotone operators and, as an application, proved strong convergence with an R-linear rate for the forward–reflected–backward splitting algorithm when the sum $A + B$ satisfies a "strong-convexity-dominates-weak-convexity" condition (i.e., $\alpha + \beta > 0$, where α, β are the monotonicity constants of A and B, respectively).

To the best of our knowledge, no existing work has addressed the reflected–forward–backward splitting method in the setting of Banach spaces. A key challenge arises from the fact that the operators A and B map the Banach space X into its dual space $X^*$. The analytical tools that are effectively applicable in Hilbert spaces cannot be directly transplanted or utilized in general Banach spaces. Furthermore, establishing the convergence of the reflected–forward–backward splitting method in Banach spaces proves more intricate than that of the forward–reflected–backward splitting method. A crucial underlying reason lies in the resolvent $J_\lambda^A = (J + \lambda A)^{-1}J$, which incorporates the duality mapping J. Unlike the Hilbert space scenario where J coincides with the identity operator I, the duality mapping J exhibits nonlinear characteristics in general Banach spaces, posing additional obstacles to the convergence proof.
In the present work, we establish the first strong convergence result (with an R-linear rate) for the reflected–forward–backward splitting method applied to monotone inclusions in 2-uniformly convex and uniformly smooth Banach spaces. The convergence is guaranteed provided that the sum $A + B$ satisfies a "strong-convexity-dominates-weak-convexity" condition (i.e., $\alpha + \beta > 0$, where α, β are the monotonicity constants of A and B in the sense of Definition 2, respectively). This condition adapts naturally to the smooth geometry of the space via the duality mapping. Our primary contributions are the extension of the RFBS scheme to a Banach space setting and the establishment of its strong convergence under geometric conditions that are genuinely compatible with the nonlinear structure of the space. Key technical hurdles overcome in this extension include the following: (i) handling the resolvent $J_\lambda^A = (J + \lambda A)^{-1}J$, which is implicit in the nonlinear duality mapping J; (ii) replacing the Euclidean distance with the Bregman-type functional $\phi(x, y) = \|x\|^2 - 2\langle x, Jy\rangle + \|y\|^2$ and leveraging its properties in uniformly convex and smooth spaces; and (iii) deriving a step size condition that explicitly incorporates the modulus of convexity μ, thereby capturing the intrinsic interplay between the algorithm's parameters and the geometry of the underlying Banach space. We explicitly note that, unlike in Hilbert spaces where weak convergence is known for general monotone operators, establishing even weak convergence for RFBS under merely monotone assumptions in Banach spaces remains an open question, due primarily to the nonlinearity of J. Our results therefore constitute a foundational step, proving strong convergence under strengthened conditions, while highlighting the need for further research to bridge the gap with Hilbert space theory.
The derived step size condition, while sufficient for linear convergence, is conservative and reflects the intrinsic interaction between the algorithm and the space’s geometry via the modulus of convexity.

2. Preliminaries

A Banach space X is said to be smooth if, for every pair of unit vectors $x, y \in S_X$, the directional derivative of the norm at x in the direction y exists; that is, the limit $\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}$ exists.
On the other hand, X is called strictly convex if the midpoint of any two distinct points on the unit sphere lies strictly inside the unit ball; equivalently, $\left\|\frac{x+y}{2}\right\| < 1$ whenever $x, y \in S_X$ and $x \neq y$.
The modulus of convexity of a Banach space X is defined by
$$\delta_X(\varepsilon) = \inf\left\{1 - \left\|\frac{x+y}{2}\right\| : \|x\| = \|y\| = 1,\ \|x - y\| \ge \varepsilon\right\},$$
for $0 < \varepsilon \le 2$.
The space X is called uniformly convex if $\delta_X(\varepsilon) > 0$ for every $\varepsilon \in (0, 2]$. Moreover, for a fixed exponent $q \in [2, \infty)$, the space X is said to be q-uniformly convex if there exists a constant $C > 0$ such that
$$\delta_X(\varepsilon) \ge C\varepsilon^q \quad \text{for all } \varepsilon \in (0, 2].$$
It is a classical result in Banach space theory that every uniformly convex space is strictly convex.
All symbols used above follow standard conventions in Banach space theory; for detailed definitions and properties, we refer the reader to Megginson’s monograph [18].
The normalized duality mapping $J : X \to 2^{X^*}$ is defined by
$$Jx = \{x^* \in X^* : \langle x, x^*\rangle = \|x\|^2 = \|x^*\|^2\}$$
for all $x \in X$.
For a smooth Banach space X, following the work of Alber [19] and Kamimura and Takahashi [20], we introduce the mapping $\phi : X \times X \to \mathbb{R}$ given by
$$\phi(x, y) = \|x\|^2 - 2\langle x, Jy\rangle + \|y\|^2 \tag{6}$$
for all $x, y \in X$. Notably, ϕ coincides with the Bregman distance associated with the convex function $\|\cdot\|^2$ (see Bregman [21], Butnariu and Iusem [22], and Censor and Lent [23] for detailed properties of Bregman distances). In the special case where X is a Hilbert space, the duality mapping J reduces to the identity operator I, and thus $\phi(x, y) = \|x - y\|^2$ for all $x, y \in X$. It is well established that
$$(\|x\| - \|y\|)^2 \le \phi(x, y) \le (\|x\| + \|y\|)^2 \tag{7}$$
holds for all $x, y \in X$. Additionally, if X is strictly convex, then the mapping ϕ is non-degenerate; i.e.,
$$\phi(x, y) = 0 \iff x = y. \tag{8}$$
Lemma 1
([19,24]). Let X be a real smooth and uniformly convex Banach space. Then the following identities are satisfied for all $x, y, z \in X$:
(1) $\phi(x, y) = \phi(x, z) + \phi(z, y) + 2\langle x - z, Jz - Jy\rangle$;
(2) $\phi(x, y) + \phi(y, x) = 2\langle x - y, Jx - Jy\rangle$.
Lemma 2
([24]). Suppose X is a 2-uniformly convex and smooth Banach space. Then there exists a constant $\mu \ge 1$ such that
$$\frac{1}{\mu}\|x - y\|^2 \le \phi(x, y)$$
for all $x, y \in X$. This constant μ is referred to as the 2-uniform convexity constant of X.
The nonlinear duality mapping J and the associated functional ϕ play a central role in extending Hilbert space arguments to the Banach space setting. In a Hilbert space, where J coincides with the identity operator I, the functional $\phi(x, y)$ reduces to the squared Euclidean distance $\|x - y\|^2$. In a general smooth Banach space, ϕ retains key metric-like properties (as seen in (7) and (8)) while naturally incorporating the nonlinearity of J. This makes ϕ a suitable Lyapunov function for analyzing iterative algorithms. The identities in Lemma 1, which generalize the classical law of cosines, and the key inequality in Lemma 2, which links ϕ to the norm via the modulus of convexity μ, are fundamental tools that allow us to manipulate the resolvent step $J_\lambda^A$ and ultimately establish convergence. The subsequent convergence analysis will heavily rely on these geometric properties of ϕ.
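These properties are easy to sanity-check numerically. The sketch below (our finite-dimensional illustration, with $p = 1.5$ chosen arbitrarily) evaluates ϕ on $\mathbb{R}^4$ with the $\ell^p$ norm, using the explicit duality-mapping formula $Jx = \|x\|_p^{2-p}\,\operatorname{sign}(x)|x|^{p-1}$, and verifies $\langle x, Jx\rangle = \|x\|^2$, the two-sided bound (7), and the non-degeneracy (8).

```python
import numpy as np

p = 1.5                                     # a 2-uniformly convex l^p exponent, 1 < p <= 2

def norm_p(x):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def J(x):
    # Normalized duality mapping on l^p: Jx = ||x||^{2-p} sign(x)|x|^{p-1}.
    return norm_p(x) ** (2.0 - p) * np.sign(x) * np.abs(x) ** (p - 1.0)

def phi(x, y):
    # Bregman-type functional phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2.
    return norm_p(x) ** 2 - 2.0 * np.dot(x, J(y)) + norm_p(y) ** 2

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert abs(np.dot(x, J(x)) - norm_p(x) ** 2) < 1e-9       # <x, Jx> = ||x||^2
    v = phi(x, y)
    assert (norm_p(x) - norm_p(y)) ** 2 - 1e-9 <= v <= (norm_p(x) + norm_p(y)) ** 2 + 1e-9
    assert abs(phi(x, x)) < 1e-9                              # phi(x, x) = 0
```

The bound (7) follows from Hölder's inequality together with $\|Jy\|_q = \|y\|_p$, which the duality-mapping formula makes explicit.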
Next, we recall the classical definition of α -monotone operators in Banach spaces.
Definition 1
(Classical α-monotonicity). A (possibly multi-valued) operator $A : X \rightrightarrows X^*$ is said to be α-monotone for some $\alpha \in \mathbb{R}$ if
$$\langle x - y, u - v\rangle \ge \alpha\|x - y\|^2 \quad \forall (x,u), (y,v) \in \operatorname{gra}(A),$$
where $\operatorname{gra}(A) = \{(x, u) \in X \times X^* : u \in A(x)\}$ denotes the graph of A.
In a smooth Banach space X, the normalized duality mapping $J : X \to X^*$ is single-valued—a characteristic property of smooth spaces. Exploiting this feature, we introduce a revised notion of α-monotone operators adapted to the setting of smooth Banach spaces.
Definition 2
(α-monotonicity in smooth Banach spaces). Let X be a smooth Banach space. An operator $A : X \rightrightarrows X^*$ is called α-monotone ($\alpha \in \mathbb{R}$) if
$$\langle x - y, u - v\rangle \ge \alpha\langle x - y, Jx - Jy\rangle \quad \forall (x,u), (y,v) \in \operatorname{gra}(A).$$
The scalar α is called the monotonicity constant of A. Specifically, A is monotone if $\alpha = 0$; A is strongly monotone if $\alpha > 0$; and A is weakly monotone if $\alpha < 0$.
An operator A is said to be maximally α-monotone if it is α-monotone and there exists no other α-monotone operator $B : X \rightrightarrows X^*$ such that $\operatorname{gra}(A)$ is a proper subset of $\operatorname{gra}(B)$ (i.e., $\operatorname{gra}(A) \subsetneq \operatorname{gra}(B)$).
For a Banach space X that is strictly convex, smooth, and reflexive, Huang, Peng, and Tang [17] established that the set of maximally strongly monotone operators under the revised definition (Definition 2) is dense in the set of maximally strongly monotone operators under the classical definition (Definition 1) in the following sense (see Theorem 1). For convenience, we introduce the following notations: let csm denote the collection of maximally strongly monotone operators defined via Definition 1; let nsm denote the collection of maximally strongly monotone operators defined via Definition 2.
Theorem 1
([17]). Let $A \in \mathrm{csm}$ and suppose that $A^{-1}(0) \neq \emptyset$. Since A is strongly monotone, $A^{-1}(0)$ is a singleton (denoted by $\{x^*\} = A^{-1}(0)$). Then there exists a sequence $\{A_n\}_{n=1}^{\infty} \subset \mathrm{nsm}$ such that $A_n^{-1}(0) \neq \emptyset$ (and each $A_n^{-1}(0)$ is also a singleton, denoted by $\{x_n\} = A_n^{-1}(0)$). Furthermore, the sequence $\{x_n\}$ converges strongly to $x^*$, i.e., $\|x_n - x^*\| \to 0$ as $n \to \infty$.
Remark 1.
In their work [17], the authors explicitly constructed the sequence $\{A_n\}$ as $A_n = A + \alpha_n J$, where $\{\alpha_n\}$ is a sequence of positive scalars satisfying $\alpha_n \to 0^+$ as $n \to \infty$. In this construction, the convergence rate is estimated as $\|x_n - x^*\| = O(\alpha_n)$, which implies $\|x_n - x^*\| \to 0$ as $n \to \infty$.
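As an illustration of this regularization (our toy example, not taken from [17]), consider finite-dimensional $\ell^p$ with $p = 1.5$ and the classically $1$-strongly monotone operator $A(x) = x - b$, whose zero is $x^* = b$. The unique zero of $A_n = A + \alpha_n J$ solves $x + \alpha_n J(x) = b$ and can be found by a simple fixed-point iteration; the observed error shrinks roughly in proportion to $\alpha_n$.

```python
import numpy as np

p = 1.5                                        # l^p exponent (an illustrative choice)

def J(x):
    # Normalized duality mapping on l^p.
    nx = np.sum(np.abs(x) ** p) ** (1.0 / p)
    return nx ** (2.0 - p) * np.sign(x) * np.abs(x) ** (p - 1.0)

b = np.array([1.0, 2.0])                       # A(x) = x - b, so A^{-1}(0) = {b}

def zero_of_regularized(alpha, n_iter=200):
    # Solve x + alpha*J(x) = b via the fixed-point iteration x <- b - alpha*J(x),
    # which is a contraction for small alpha.
    x = b.copy()
    for _ in range(n_iter):
        x = b - alpha * J(x)
    return x

errors = [np.linalg.norm(zero_of_regularized(a) - b) for a in (1e-1, 1e-2, 1e-3)]
# errors decrease roughly in proportion to alpha, matching ||x_n - x*|| = O(alpha_n)
```

Since the regularized zero $x_\alpha$ satisfies $x_\alpha - b = -\alpha J(x_\alpha)$, the error is exactly $\alpha\|J(x_\alpha)\|$, which explains the observed $O(\alpha_n)$ scaling.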

3. Main Results

To the best of our knowledge, no prior work has addressed the reflected–forward–backward splitting method in the setting of Banach spaces. In this paper, we establish the convergence of this method for monotone inclusion problems involving Lipschitz continuous operators in 2-uniformly convex Banach spaces.
In Hilbert spaces, where the duality mapping J coincides with the identity I, the reflected–forward–backward update can be analyzed using the elementary identity $\|x - y\|^2 = \|x\|^2 - 2\langle x, y\rangle + \|y\|^2$. This identity is no longer valid when J is nonlinear. Moreover, the resolvent $J_\lambda^A = (J + \lambda A)^{-1}J$ becomes implicit in J, and the standard monotonicity inequalities $\langle x - y, Bx - By\rangle \ge 0$ do not directly combine with the norm. To overcome these obstacles we (i) replace the squared norm by the Bregman-type functional $\phi(x, y) = \|x\|^2 - 2\langle x, Jy\rangle + \|y\|^2$, which retains a "generalized Pythagorean" identity (Lemma 1); (ii) employ the modified monotonicity notion (Definition 2) that couples with J through the term $\langle x - y, Jx - Jy\rangle$; and (iii) exploit the 2-uniform convexity inequality $\frac{1}{\mu}\|x - y\|^2 \le \phi(x, y)$ (Lemma 2) to convert estimates involving ϕ back to norm estimates. These adaptations allow us to construct a Lyapunov sequence $\{\phi(x^*, x_n)\}$ that contracts at an R-linear rate, thereby recovering strong convergence.
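Before the formal statement, the following sketch runs scheme (9) in a finite-dimensional $\ell^p$ space (a toy setup of our own, not an experiment from this paper). We take $A = J$, which is $1$-strongly monotone in the sense of Definition 2 and whose resolvent has the closed form $J_\lambda^A x = J^{-1}\bigl(\frac{Jx}{1+\lambda}\bigr)$, and $B(x) = Mx$ with a small skew-symmetric $M$ (monotone and Lipschitz); the constant step $0.05$ is simply a conservative choice, not the bound from Theorem 2.

```python
import numpy as np

p, q = 1.5, 3.0                                 # conjugate exponents: 1/p + 1/q = 1

def dual_map(v, r):
    # Normalized duality mapping on l^r: v -> ||v||_r^{2-r} sign(v)|v|^{r-1}.
    nv = np.sum(np.abs(v) ** r) ** (1.0 / r)
    if nv == 0.0:
        return np.zeros_like(v)
    return nv ** (2.0 - r) * np.sign(v) * np.abs(v) ** (r - 1.0)

J = lambda x: dual_map(x, p)                    # J : l^p -> l^q
J_inv = lambda g: dual_map(g, q)                # J^{-1} : l^q -> l^p

M = 0.1 * np.array([[0.0, 1.0],
                    [-1.0, 0.0]])               # B(x) = Mx: monotone (skew) and Lipschitz
lam = 0.05                                      # conservative constant step size

# With A = J, the resolvent (J + lam*A)^{-1} J reduces to x -> J^{-1}(Jx / (1 + lam)).
x_prev = np.array([1.0, 2.0])                   # x_{-1}
x = np.array([1.0, 2.0])                        # x_0
for _ in range(2000):
    g = J(x) - lam * (M @ (2.0 * x - x_prev))   # reflected forward step, taken in X*
    x_prev, x = x, J_inv(g / (1.0 + lam))       # backward (resolvent) step
# the iterates contract to x* = 0, the unique zero of J + M
```

Note that the forward step is carried out in the dual space: $Jx_n$ and the values of $B$ both live in $X^*$, which is exactly the structural point that distinguishes (9) from its Hilbert space counterpart.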
Theorem 2.  
Let X be a real Banach space that is 2-uniformly convex with modulus constant $\mu > 0$ and uniformly smooth. Let $A : X \rightrightarrows X^*$ be maximally monotone and α-monotone, and let $B : X \to X^*$ be β-monotone and L-Lipschitz continuous, both in the sense of Definition 2, with $\alpha + \beta > 0$.
Given $x_{-1}, x_0 \in X$, define the sequence $\{x_n\}$ by
$$x_{n+1} = J_{\lambda_n}^{A} J^{-1}\bigl(Jx_n - \lambda_n B(2x_n - x_{n-1})\bigr), \quad n \ge 0, \tag{9}$$
where $\{\lambda_n\}$ is a non-increasing sequence satisfying
$$\lambda_n \in \left[\varepsilon, \frac{1}{(2+\varepsilon)4mL\mu}\right],$$
for some $\varepsilon > 0$ and $m > \frac{2L\mu}{\alpha+\beta}$, with $\varepsilon < \frac{1}{(2+\varepsilon)4mL\mu}$.
If $(A+B)^{-1}(0) \neq \emptyset$, then $\{x_n\}$ strongly converges to a point in $(A+B)^{-1}(0)$ at an R-linear rate.
Proof. 
Let $x^* \in (A+B)^{-1}(0)$. So,
$$-Bx^* \in Ax^*. \tag{10}$$
From (9) we have the following:
$$\frac{1}{\lambda_n}\bigl(Jx_n - \lambda_n B(2x_n - x_{n-1}) - Jx_{n+1}\bigr) \in Ax_{n+1}. \tag{11}$$
Using (10), (11) and the strong monotonicity of A, we obtain
$$\bigl\langle Jx_{n+1} - Jx_n + \lambda_n\bigl(B(2x_n - x_{n-1}) - B(x^*)\bigr),\ x^* - x_{n+1}\bigr\rangle \ge \lambda_n\alpha\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle. \tag{12}$$
By Lemma 1(1), we have
$$2\langle Jx_{n+1} - Jx_n, x^* - x_{n+1}\rangle = \phi(x^*, x_n) - \phi(x^*, x_{n+1}) - \phi(x_{n+1}, x_n). \tag{13}$$
Taking advantage of the Lipschitz continuity and monotonicity of B, we obtain
$$\begin{aligned}
\langle B(2x_n - x_{n-1}) - Bx^*,\ x^* - x_{n+1}\rangle
&= \langle Bx_{n+1} - Bx^*,\ x^* - x_{n+1}\rangle + \langle B(2x_n - x_{n-1}) - Bx_{n+1},\ x^* - x_{n+1}\rangle \\
&\le -\beta\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle + \langle B(2x_n - x_{n-1}) - Bx_{n+1},\ x^* - x_{n+1}\rangle \\
&\le -\beta\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle + L\|2x_n - x_{n-1} - x_{n+1}\|\,\|x^* - x_{n+1}\| \\
&\le -\beta\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle + mL\|2x_n - x_{n-1} - x_{n+1}\|^2 + \frac{L}{m}\|x^* - x_{n+1}\|^2 \quad (m > 0) \\
&\le -\beta\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle + mL\bigl(\|x_n - x_{n-1}\| + \|x_n - x_{n+1}\|\bigr)^2 + \frac{L}{m}\|x^* - x_{n+1}\|^2 \\
&\le -\beta\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle + 2mL\|x_n - x_{n-1}\|^2 + 2mL\|x_n - x_{n+1}\|^2 + \frac{L}{m}\|x^* - x_{n+1}\|^2.
\end{aligned} \tag{14}$$
$$\lambda_n\alpha\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle = \frac{\lambda_n\alpha}{2}\bigl[\phi(x^*, x_{n+1}) + \phi(x_{n+1}, x^*)\bigr] \ge \frac{\lambda_n\alpha}{2}\phi(x^*, x_{n+1}). \tag{15}$$
Similarly, we have
$$\beta\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle \ge \frac{\beta}{2}\phi(x^*, x_{n+1}). \tag{16}$$
From the definition of ϕ in (6) and Lemma 1(2), we have
$$\langle x^* - x_{n+1}, Jx^* - Jx_{n+1}\rangle = \frac{1}{2}\bigl[\phi(x^*, x_{n+1}) + \phi(x_{n+1}, x^*)\bigr].$$
Because $\phi(\cdot,\cdot) \ge 0$, it follows that
$$\frac{1}{2}\bigl[\phi(x^*, x_{n+1}) + \phi(x_{n+1}, x^*)\bigr] \ge \frac{1}{2}\phi(x^*, x_{n+1}).$$
Multiplying by $\lambda_n\alpha$ and β, respectively, gives the lower bounds (15) and (16). This step translates the monotonicity conditions, originally stated with the duality mapping, into estimates involving the Lyapunov function ϕ, a crucial ingredient for the subsequent recursive inequality. Substituting (13) and (14) into (12), and considering (15) and (16), we have
$$\bigl[1 + \lambda_n(\alpha+\beta)\bigr]\phi(x^*, x_{n+1}) + \phi(x_{n+1}, x_n) - 4\lambda_n mL\|x_n - x_{n+1}\|^2 - \frac{2\lambda_n L}{m}\|x^* - x_{n+1}\|^2 \le \phi(x^*, x_n) + 4\lambda_n mL\|x_n - x_{n-1}\|^2. \tag{17}$$
After substituting (13)–(16) into (12), we obtain the mixed estimate (17), which contains both ϕ-terms and squared norms. To unify the expression, we employ the characterization of 2-uniform convexity provided by Lemma 2:
$$\|x^* - x_{n+1}\|^2 \le \mu\,\phi(x^*, x_{n+1}), \qquad \phi(x_{n+1}, x_n) \ge \frac{1}{\mu}\|x_n - x_{n+1}\|^2.$$
Inserting these bounds into (17) converts every squared-norm term into a quantity comparable with ϕ, yielding the following inequality:
$$\Bigl[1 + \lambda_n(\alpha+\beta) - \frac{2\lambda_n L\mu}{m}\Bigr]\phi(x^*, x_{n+1}) + \Bigl(\frac{1}{\mu} - 4\lambda_n mL\Bigr)\|x_n - x_{n+1}\|^2 \le \phi(x^*, x_n) + 4\lambda_n mL\|x_n - x_{n-1}\|^2. \tag{18}$$
In (14) we take some $m > \frac{2L\mu}{\alpha+\beta}$, and since $\lambda_n \in \bigl[\varepsilon, \frac{1}{(2+\varepsilon)4mL\mu}\bigr]$ we have $\frac{1}{\mu} - 4\lambda_n mL \ge (1+\varepsilon)\,4\lambda_n mL$. Let
$$\theta = \inf_n \min\left\{1 + \lambda_n(\alpha+\beta) - \frac{2\lambda_n L\mu}{m},\ \frac{\frac{1}{\mu} - 4\lambda_n mL}{4\lambda_n mL}\right\};$$
then $\theta > 1$. Since $\{\lambda_n\}$ is non-increasing, we obtain
$$\theta\Bigl(\phi(x^*, x_{n+1}) + 4\lambda_{n+1} mL\|x_n - x_{n+1}\|^2\Bigr) \le \phi(x^*, x_n) + 4\lambda_n mL\|x_n - x_{n-1}\|^2, \tag{19}$$
which establishes that $x_n \to x^*$ at an R-linear rate by Lemma 2. □
Remark 2
(On the assumptions and step size condition). The step size condition in Theorem 2, $\lambda_n \le \frac{1}{(2+\varepsilon)4mL\mu}$, is derived from the technical requirements of our proof, which involves reciprocal estimates between the norm $\|\cdot\|$ and the Lyapunov functional $\phi(\cdot,\cdot)$. Here μ is the 2-uniform convexity constant of the space (fixed), m is chosen to satisfy $m > 2L\mu/(\alpha+\beta)$, and ε is a small positive number that ensures the step size interval is nonempty. The condition $\varepsilon < \frac{1}{(2+\varepsilon)4mL\mu}$ ensures the upper bound remains positive. The appearance of the 2-uniform convexity constant μ reflects the intrinsic geometric property of the underlying Banach space, as Lemma 2 establishes the fundamental link $\frac{1}{\mu}\|x - y\|^2 \le \phi(x, y)$. While sufficient for guaranteeing R-linear convergence, this bound is conservative. Investigating whether it can be substantially relaxed, possibly via alternative analytical techniques, remains an interesting open question.
Furthermore, the strong convergence result crucially relies on the assumption $\alpha + \beta > 0$ under Definition 2. Within our proof framework, which leverages the specific form $\langle x - y, Jx - Jy\rangle$ to connect monotonicity with the functional ϕ, relaxing this to uniform or strict monotonicity (defined purely with respect to the norm) appears to be highly non-trivial. The principal difficulty stems from the nonlinearity of the duality mapping J in general Banach spaces. Whether strong or weak convergence can be established under weaker monotonicity conditions is an important direction for future research.
Theorem 3.
Let X be a real Banach space that is 2-uniformly convex with modulus constant $\mu > 0$ and uniformly smooth. Let $A : X \rightrightarrows X^*$ be maximally monotone and α-monotone ($\alpha > 0$) in the sense of Definition 1, and let $B : X \to X^*$ be monotone and L-Lipschitz continuous.
For any $\delta > 0$, there exists a δ-strongly monotone operator $A_\delta : X \rightrightarrows X^*$ (in the sense of Definition 2) such that, given initial points $x_{-1}, x_0 \in X$, the sequence $\{x_n\}$ is defined by
$$x_{n+1} = J_{\lambda_n}^{A_\delta} J^{-1}\bigl(Jx_n - \lambda_n B(2x_n - x_{n-1})\bigr), \quad n \ge 0, \tag{20}$$
with a non-increasing step size sequence $\{\lambda_n\}$ satisfying
$$\lambda_n \in \left[\varepsilon, \frac{1}{(2+\varepsilon)4mL\mu}\right],$$
for some $\varepsilon > 0$ and $m > \frac{2L\mu}{\delta}$, with $\varepsilon < \frac{1}{(2+\varepsilon)4mL\mu}$.
If $(A+B)^{-1}(0) \neq \emptyset$, then for any $x^* \in (A+B)^{-1}(0)$,
$$\|x_n - x^*\| \le \delta \quad \text{for all sufficiently large } n.$$
Proof. 
Since $x^* \in (A+B)^{-1}(0)$, it follows from Theorem 1 that there exists an operator $A_\delta$, which is δ-strongly monotone in the sense of Definition 2, such that the solution set $(A_\delta + B)^{-1}(0)$ is a nonempty singleton. Specifically, let $\{x_\delta\} = (A_\delta + B)^{-1}(0)$; then the error estimate $\|x_\delta - x^*\| \le \frac{\delta}{2}$ is valid. By virtue of Theorem 2, the iterative sequence $\{x_n\}$ converges strongly to $x_\delta$ as $n \to \infty$. Consequently, for sufficiently large n, the inequality $\|x_n - x^*\| \le \delta$ is satisfied. □
Remark 3.  
Theorem 3 yields an approximate solution within a δ-neighborhood. By letting $\delta \to 0^+$, the corresponding sequence of approximate solutions converges strongly to an exact solution of the original problem. This shows that by approximating a maximally monotone operator with a slightly perturbed strongly monotone one (in the sense of Definition 2), the reflected–forward–backward iteration can be made to converge arbitrarily close to a true solution of the original inclusion.
Remark 4.
In this work, we have established the first strong convergence result, with an R-linear rate, for the reflected–forward–backward splitting (RFBS) method in the setting of 2-uniformly convex and uniformly smooth Banach spaces. The convergence is guaranteed under a novel monotonicity condition (Definition 2), which requires the combined monotonicity constant α + β > 0 and interacts naturally with the nonlinear duality mapping J.

4. Numerical Experiment

In this section, we present numerical experiments to verify the efficiency and effectiveness of the proposed algorithm. We mainly compare our method with the forward–reflected–backward splitting algorithm studied in [16], which is hereafter referred to as FRB. All numerical experiments were implemented in MATLAB R2020a. The simulations were conducted on a 64-bit Lenovo laptop equipped with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 8 GB of RAM.
Example 1
([16]). Let $H = L^2([0,1])$, with the norm and inner product defined, respectively, by
$$\|x\|_2 = \left(\int_0^1 |x(t)|^2\,dt\right)^{1/2} \quad \text{and} \quad \langle x, y\rangle = \int_0^1 x(t)y(t)\,dt.$$
Observe that, for $1 < p \le 2$ with $\frac{1}{p} + \frac{1}{q} = 1$, the normalized duality mapping
$$J : L^p([0,1]) \to L^q([0,1])$$
and its inverse
$$J^{-1} : L^q([0,1]) \to L^p([0,1])$$
are given (see, for example, Alber [25]) by
$$J(f) = \|f\|_p^{2-p}\, f|f|^{p-2}, \quad \text{for all } f \in L^p([0,1]), \tag{21}$$
and
$$J^{-1}(g) = \|g\|_q^{2-q}\, g|g|^{q-2}, \quad \text{for all } g \in L^q([0,1]), \tag{22}$$
respectively.
In particular, when p = q = 2 , the mappings J and J 1 , defined in (21) and (22), reduce to the identity operator.
Now, define the operators $A, B : L^2([0,1]) \to L^2([0,1])$ by
$$(Bx)(t) = \int_0^1 \left(x(s) - \frac{2ts\,e^{t+s}}{e\sqrt{e^2 - 1}}\cos x(s)\right)ds + \frac{2te^t}{e\sqrt{e^2 - 1}}, \quad x \in L^2([0,1]),$$
and
$$(Ax)(t) = x(t), \quad t \in [0,1].$$
Then the operator B is monotone and Lipschitz continuous with Lipschitz constant $L = 2$, while A is 1-strongly monotone on $L^2([0,1])$. According to the definition of A given above, the resolvent operator
$$J_r^A : L^2([0,1]) \to L^2([0,1]), \quad r > 0,$$
is explicitly given by
$$(J_r^A x)(t) = \frac{1}{1+r}\,x(t).$$
Let $\psi \in H^*$ be a continuous linear functional. By the Riesz representation theorem, there exists a unique $f_\psi \in H$ such that
$$\langle \psi, x\rangle = \langle f_\psi, x\rangle \quad \text{for all } x \in H.$$
Since $(A+B)^{-1}(0) = \{0\}$, it is enough to prove that $x_n \to 0$ as $n \to \infty$, which is equivalent to showing
$$\|x_n\|_2 = \left(\int_0^1 |x_n(t)|^2\,dt\right)^{1/2} \to 0.$$
In the numerical experiments, we adopt the stopping criterion
$$\|x_n\|_2 \le \mathrm{Tol},$$
where Tol is a prescribed small positive constant. For a fair comparison, the initial points are chosen as $x_0(t) = e^t$ and $x_{-1}(t) = \cos(t)\,e^t$, respectively. Table 1 reports the numerical performance of the FRB method and Algorithm (9) for different values of the step size parameter $\lambda_n$ and tolerance Tol. It can be observed from Table 1 that, for both methods, decreasing the tolerance from $10^{-2}$ to $10^{-4}$ results in an increase in both iteration numbers and CPU time, which is expected due to the higher accuracy requirement. In addition, increasing the step size parameter $\lambda_n$ generally reduces the number of iterations, indicating faster convergence. Although the two methods exhibit comparable iteration counts, Algorithm (9) consistently requires less CPU time than the FRB method for all tested parameters. This demonstrates that Algorithm (9) is computationally more efficient while maintaining similar convergence behavior. Furthermore, we use a log–log plot to illustrate that the error sequence converges to zero as the number of iterations increases, as shown in Figure 1.
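To indicate how such an experiment can be reproduced, the sketch below discretizes Example 1 with trapezoidal quadrature on a uniform grid (the grid size, the constant step $\lambda_n \equiv 0.1$, and the assignment of the two initial functions are our own choices) and runs update (9), which in the Hilbert space $L^2([0,1])$ has $J = I$ and resolvent $(J_\lambda^A x)(t) = x(t)/(1+\lambda)$. The discretized iterates drive $\|x_n\|_2$ below the tolerance, consistent with $x_n \to 0$.

```python
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))                  # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / (n - 1)

c = np.e * np.sqrt(np.e ** 2 - 1.0)
K = 2.0 * np.outer(t * np.exp(t), t * np.exp(t)) / c   # kernel 2ts e^{t+s} / (e sqrt(e^2-1))

def B(x):
    # (Bx)(t) = int_0^1 [x(s) - K(t,s) cos x(s)] ds + 2t e^t / c
    return (w * x).sum() - K @ (w * np.cos(x)) + 2.0 * t * np.exp(t) / c

def l2_norm(x):
    return np.sqrt((w * x ** 2).sum())

lam = 0.1                                      # below (sqrt(2)-1)/L with L = 2
x_prev = np.exp(t)                             # one initial function, e^t
x = np.cos(t) * np.exp(t)                      # the other, cos(t) e^t
k = 0
while l2_norm(x) > 1e-4 and k < 10000:
    y = x - lam * B(2.0 * x - x_prev)          # reflected forward step
    x_prev, x = x, y / (1.0 + lam)             # resolvent of A = I
    k += 1
# stops once ||x_n||_2 <= 1e-4, approximating the zero x* = 0
```

Since $B(0) = 0$ analytically, the only discrepancy between the discretized fixed point and $x^* = 0$ is the quadrature error, which is far below the chosen tolerance.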

5. Conclusions

In this work, we developed and analyzed the reflected–forward–backward splitting (RFBS) method for solving monotone inclusion problems involving Lipschitz continuous operators in 2-uniformly convex and uniformly smooth Banach spaces. By introducing a Banach space-adapted notion of strong monotonicity and employing the Bregman-type functional associated with the nonlinear duality mapping, we established the first R-linear strong convergence result for RFBS in this non-Hilbertian setting. Our analysis reveals the intrinsic interplay between the geometry of the Banach space—captured through the modulus of convexity—and the algorithmic parameters, leading to a sufficient step size condition ensuring contraction of a carefully constructed Lyapunov sequence. In addition, we proposed an approximation strategy that replaces a general maximally monotone operator with a strongly monotone surrogate, enabling the RFBS iteration to produce approximate solutions of arbitrary precision. Numerical experiments further confirmed the computational efficiency of the proposed algorithm when compared with the forward–reflected–backward (FRB) method.
Despite these advances, an important theoretical question remains open: whether RFBS ensures even weak convergence in Banach spaces under mere monotonicity assumptions, mirroring the classical Hilbert space theory. Resolving this problem would significantly advance our understanding of splitting algorithms in Banach spaces, bridging a notable theoretical gap with the well-established Hilbertian theory.

Future Work

We acknowledge that the resulting bound in Theorem 2 is conservative, and some estimates in the proof are deliberately conservative to guarantee a clean contraction inequality. Whether the factor $\frac{1}{8m}$ (or the dependence on m itself) can be significantly relaxed—perhaps by a more refined treatment of the reflection step or by employing different Lyapunov functions—is an interesting and non-trivial open question. We regard this as a valuable direction for future research.

Author Contributions

Conceptualization, C.H. and J.P.; methodology, C.H.; software, L.Q.; validation, C.H., J.P., L.Q., and Y.T.; formal analysis, C.H.; investigation, L.Q. and Y.T.; resources, C.H.; data curation, L.Q.; writing—original draft preparation, C.H.; writing—review and editing, J.P. and Y.T.; visualization, L.Q.; supervision, J.P.; project administration, J.P. and Y.T.; funding acquisition, C.H., J.P., and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundations of China (12031003, 12571558, 12571491), the Guangzhou Education Scientific Research Project 2024 (202315829), the Jiangxi Provincial Natural Science Foundation (20224ACB211004), and the Postdoctoral Startup Foundation of Guangdong Province (62402153).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We thank the referees for their valuable comments and constructive suggestions, which have significantly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  2. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef]
  3. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Numer. Anal. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  4. Malitsky, Y.; Tam, M.K. A forward-backward splitting method for monotone inclusions without cocoercivity. SIAM J. Optim. 2020, 30, 1451–1472. [Google Scholar] [CrossRef]
  5. Cevher, V.; Vũ, B.C. A reflected forward-backward splitting method for monotone inclusions involving Lipschitzian operators. Set-Valued Var. Anal. 2021, 29, 163–174. [Google Scholar] [CrossRef]
  6. Shehu, Y.; Jolaoso, L.O.; Okeke, C.C.; Xu, R. Outer reflected forward-backward splitting algorithm with inertial extrapolation step. Optimization 2025, 74, 3901–3924. [Google Scholar] [CrossRef]
  7. Cao, Y.; Wang, Y.; Rehman, H.; Shehu, Y.; Yao, J.C. Convergence analysis of a new forward-reflected-backward algorithm for four operators without cocoercivity. J. Optim. Theory Appl. 2024, 203, 256–284. [Google Scholar] [CrossRef]
  8. Showalter, R.E. Monotone operators in Banach space and nonlinear partial differential equations. In Mathematical Surveys and Monographs; American Mathematical Society: Providence, RI, USA, 1997; Volume 49, p. xiv+278. [Google Scholar]
  9. Chambolle, A.; Pock, T. Thomas An introduction to continuous optimization for imaging. Acta Numer. 2016, 25, 161–319. [Google Scholar] [CrossRef]
  10. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef]
  11. Bello, A.U.; Chidume, C.E.; Alka, M. Self-adaptive forward-backward splitting algorithm for the sum of two monotone operators in Banach spaces. Fixed Point Theory Algorithms Sci. Eng. 2022, 25, 16. [Google Scholar] [CrossRef]
  12. Shehu, Y. Convergence results of forward-backward algorithm for sum of monotone operator in Banach spaces. Results Math. 2019, 74, 95. [Google Scholar] [CrossRef]
  13. Tang, Y.; Promkam, R.; Cholamjiak, P.; Sunthrayuth, P. Convergence results of iterative algorithms for the sum of two monotone operators in reflexive Banach spaces. Appl. Math. 2022, 67, 129–152. [Google Scholar] [CrossRef]
  14. Sunthrayuth, P.; Yang, J.; Cholamjiak, P. A new generalized forward-backward splitting method in reflexive Banach spaces. J. Nonlinear Convex Anal. 2022, 23, 1311–1333. [Google Scholar]
  15. Tuyen, T.M.; Promkam, R.; Sunthrayuth, P. Strong convergence of a generalized forward-backward splitting method in reflexive Banach spaces. Optimization 2022, 71, 1483–1508. [Google Scholar] [CrossRef]
  16. Bello, A.U.; Okeke, C.C.; Isyaku, M.; Omojola, M.T. Forward-reflected-backward splitting method without cocoercivity for the sum of maximal monotone operators in Banach spaces. Optimization 2023, 72, 2201–2222. [Google Scholar] [CrossRef]
  17. Huang, C.; Peng, J.G.; Tang, Y.C. On α-monotone operators and their resolvent in Banach spaces. arXiv 2025, arXiv:2510.12538. [Google Scholar]
  18. Megginson, R.E. An introduction to Banach space theory. In Graduate Texts in Mathematics; Springer: New York, NY, USA, 1998; Volume 183, p. xx+596. [Google Scholar]
  19. Alber, Y. Metric and generalized projection operators in Banach spaces: Properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Lecture notes in pure and applied mathematics; Dekker: New York, NY, USA, 1996; Volume 178, pp. 15–50. [Google Scholar]
  20. Kamimura, S.; Takahashi, W. Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 2002, 13, 938–945. [Google Scholar] [CrossRef]
  21. Bregman, L.M. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7, 200–217. [Google Scholar] [CrossRef]
  22. Butnariu, D.; Iusem, A.N. Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization; Kluwer Academic: Dordrecht, The Netherland, 2000. [Google Scholar]
  23. Censor, Y.; Lent, A. An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 1981, 34, 321–353. [Google Scholar] [CrossRef]
  24. Aoyama, K.; Kohsaka, F. Strongly relatively nonexpansive sequences generated by firmly nonexpansive-like mappings. Fixed Point Theory Appl. 2014, 95, 13. [Google Scholar] [CrossRef]
  25. Alber, Y. Generalized projection operators in Banach space: Properties and applications. Funct. Differ. Equ. 1993, 1, 1–21. [Google Scholar]
Figure 1. The error sequence versus the number of iterations with Tol = 1 × 10⁻⁴: (a) λn = 0.2; (b) λn = 0.4; (c) λn = 0.6.
Table 1. Numerical results of the FRB method and Algorithm (9) in terms of iteration numbers and CPU time (in seconds).

λn    Tol        Method          Iter    CPU
0.2   1 × 10⁻²   FRB             19      6.73
                 Algorithm (9)   20      3.24
      1 × 10⁻⁴   FRB             44      13.38
                 Algorithm (9)   44      5.08
0.4   1 × 10⁻²   FRB             11      3.68
                 Algorithm (9)   12      1.55
      1 × 10⁻⁴   FRB             24      6.35
                 Algorithm (9)   24      2.66
0.6   1 × 10⁻²   FRB             9       2.16
                 Algorithm (9)   10      1.06
      1 × 10⁻⁴   FRB             18      4.29
                 Algorithm (9)   20      2.34
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Huang, C.; Peng, J.; Qin, L.; Tang, Y. A Reflected–Forward–Backward Splitting Method for Monotone Inclusions Involving Lipschitz Operators in Banach Spaces. Mathematics 2026, 14, 245. https://doi.org/10.3390/math14020245

