Article

Converse Inertial Step Approach and Its Applications in Solving Nonexpansive Mapping

1
Faculty of Data Science, City University of Macau, Macao 999078, China
2
School of Mathematics and Statistics, Linyi University, Linyi 276000, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3722; https://doi.org/10.3390/math13223722
Submission received: 22 October 2025 / Revised: 15 November 2025 / Accepted: 18 November 2025 / Published: 20 November 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract

In spite of the great success of the inertial step approach (ISA) in various fields, we investigate the converse inertial step approach (CISA) for the first time. First, the classical Picard iteration for solving nonexpansive mappings converges weakly with CISA integration. Its analysis is based on the newly developed weak quasi-Fejér monotonicity under mild assumptions. We also establish $O(1/k^{\gamma})$ ($\gamma\in(0,1)$) and linear convergence rates under different assumptions. This extends the $O(1/k)$ convergence rate of the Krasnosel’skiĭ–Mann iteration. A generalized version of CISA is then studied. Second, combining CISA with the over-relaxed step approach for solving nonexpansive mappings leads to a new algorithm, which not only converges without restrictive assumptions but also allows inexact calculation in each iteration. Third, with CISA integration, a Backward–Forward splitting algorithm succeeds in accepting a larger step-size, and a Peaceman–Rachford splitting algorithm is guaranteed to converge.

1. Introduction

As is well known, the easy-to-implement inertial step approach (ISA) (the difference between the current and the last iterates, i.e., $x_k-x_{k-1}$) has played a significant role in accelerating iterative algorithms and has attracted great attention in various fields [1,2,3,4]. It was first introduced by Polyak in the heavy ball method [5] to accelerate the first-order method. Nesterov’s optimal gradient method [6], which improves the convergence rate of the first-order method in minimizing a convex function with Lipschitz continuous gradient from $O(1/k)$ to $O(1/k^2)$, is in the scheme of ISA. Very recently, based on a slight variant of ISA, Attouch and Peypouquet [7] further improved the convergence rate in [6] from $O(1/k^2)$ to $o(1/k^2)$.
Generally, we can observe that ISA can (nonmonotonically) accelerate existing convergent algorithms for solving averaged nonexpansive mappings [8], which aim at finding a fixed point of $(1-\lambda)I+\lambda T$, where $\lambda\in(0,1)$, $I:X\to X$ is the identity mapping, and $T:X\to X$ is a nonexpansive mapping, i.e.,
$$\|Tx-Ty\|\le\|x-y\|,\qquad\forall x,y\in X,$$
where X is a closed convex subset of a Hilbert space H equipped with inner product $\langle x,y\rangle$ and the induced norm $\|\cdot\|$. Mathematically, $x\in X$ is a fixed point of T if
$$x=Tx,\qquad x\in X.\qquad(1)$$
Throughout this paper, the fixed point set of T is denoted by $\operatorname{Fix}T$ and assumed to be nonempty. There are many applications of this problem [9,10,11,12]. Variational inequalities [13,14,15,16,17,18,19], equilibrium problems [20,21], and complementarity problems [22] can also be seen as special cases of this problem. The extreme case of the above averaged nonexpansive mappings with $\lambda=1$ is known as nonexpansive mappings [8], and many popular optimization algorithms arise as special applications of nonexpansive and averaged nonexpansive mappings, such as the Proximal Point algorithm (PPA) [23,24], Douglas–Rachford splitting (DRS) algorithm [25], Peaceman–Rachford splitting (PRS) algorithm [26], Primal–Dual (PD) algorithms [27,28,29,30], and the alternating direction method of multipliers (ADMM) [31,32,33,34,35]. To the best of our knowledge, there are no references on employing ISA for nonexpansive mappings. A possible reason is that any such attempt may result in failure. This motivates our study of the converse inertial step approach (CISA), which is defined to be the difference between the last and current iterates ($x_{k-1}-x_k$). In this paper, we introduce CISA, for the first time, to solve nonexpansive mappings (1), and study what benefits can be achieved.

1.1. Difficulties in Solving Nonexpansive Mappings

In practical computations, solving the subproblems involving nonexpansive mappings exactly is often prohibitively expensive or even impossible. This has motivated the development of inexact solution methods, where the subproblems are solved only up to a certain tolerance. However, integrating such inexactness introduces significant challenges. First, the propagation and accumulation of errors must be carefully controlled to ensure the overall convergence of the algorithm. Second, the choice of the inexactness criterion is critical; an overly strict criterion negates the computational benefits of inexact solving, while an overly loose one may lead to divergence or instability. Furthermore, for inertial-type algorithms, the interaction between the extrapolation step and the approximation errors creates additional complexity in the analysis. These difficulties highlight the need for a robust theoretical framework that can accommodate computationally feasible inexact computations without compromising convergence guarantees.

1.2. Related Work

It is well known that the standard Picard iteration $x_{k+1}=Tx_k$ for solving (1) may not converge; a counterexample is $T=-I$. In order to guarantee convergence, various modifications have been introduced. A popular one is the Krasnosel’skiĭ–Mann (KM) iteration [36]
$$(\forall k\in\mathbb{N})\qquad x_{k+1}=(1-\lambda_k)x_k+\lambda_k Tx_k,\qquad\lambda_k\in[0,1].$$
Under the assumption $\sum_{k=0}^{\infty}\lambda_k(1-\lambda_k)=\infty$, weak convergence of $\{x_k\}_{k\in\mathbb{N}}$ was established in [37]. Different from the KM iteration, Mann [38] proposed another modification: all past iterates $\{x_i\}_{0\le i\le k}$ are convexly combined with carefully selected coefficients to build convergent iterative schemes. Moreover, there is some equivalence between the KM iteration and Mann’s method [38]; see [37,39]. In order to relax the restrictive assumptions on the coefficients of the past iterates $\{x_i\}_{0\le i\le k}$, Combettes and Pennanen [40] improved Mann’s method by combining it with the KM iteration. As an extension, Combettes and Glaudin [41] established a unified algorithmic scheme by further combining the method in [40] with the m-layer algorithm [42] for quasi-nonexpansive mappings [8], under restrictive assumptions such as requiring any weak cluster point of the iterative sequence to be a fixed point of T. Moreover, for the algorithm in [41], there is no analysis of the convergence rate.
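To make the contrast concrete, here is a small self-contained sketch (ours, not from the paper) in which the Picard iteration fails for the nonexpansive map $T=-I$ while the KM iteration converges; the map and the parameter value $\lambda=0.7$ are illustrative assumptions.

```python
# Illustrative sketch: the Picard iteration x_{k+1} = T x_k oscillates for the
# nonexpansive map T = -I, while the Krasnosel'skii-Mann (KM) iteration
# x_{k+1} = (1 - lam) x_k + lam T x_k converges to the unique fixed point 0.

def T(x):
    # T = -I is nonexpansive: |Tx - Ty| = |x - y|; Fix T = {0}.
    return -x

def picard(x0, steps):
    x = x0
    for _ in range(steps):
        x = T(x)
    return x

def km(x0, lam, steps):
    x = x0
    for _ in range(steps):
        x = (1 - lam) * x + lam * T(x)
    return x

print(picard(1.0, 100))        # 1.0: the Picard iterates just flip sign forever
print(abs(km(1.0, 0.7, 100)))  # near 0: each KM step multiplies |x| by |1 - 2*0.7| = 0.4
```

With $\lambda_k\equiv 0.7$ the condition $\sum_k\lambda_k(1-\lambda_k)=\infty$ holds, matching the convergence assumption above.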
In order to accelerate the KM iteration, apart from ISA, the over-relaxed step approach (ORSA) (i.e., $\lambda_k\ge 1$ in the KM iteration) is an effective way. Corman and Yuan [43] first introduced ORSA to accelerate the convergence of PPA, under strong assumptions on the resolvent operator. ORSA has also been extended to the Augmented Lagrangian Method (ALM) and the Alternating Direction Method of Multipliers (ADMM). In [44,45], the authors extended the PRS algorithm to the sum of two maximal strongly monotone operators, using a larger relaxation parameter; this can be regarded as combining the PRS algorithm with ORSA. Very recently, Themelis and Patrinos [46] analyzed Douglas–Rachford splitting with ORSA based on the Douglas–Rachford envelope for the sum of a strongly convex function with Lipschitz continuous gradient and a nonconvex function. Note that all the above-mentioned references [43,44,45,46] require very restrictive assumptions and require exact computations in each iteration.

1.3. Contributions

We list in the following the contributions of this paper:
  • We show that the classical Picard iteration for solving nonexpansive mappings converges weakly with CISA integration. This leads to the newly proposed CISA algorithm, which uses only the last two iterates rather than all past iterates. We introduce a new framework of weak quasi-Fejér monotonicity (see Section 2 for more details) in the convergence analysis. Moreover, our assumptions are much more relaxed than those made in [41]. As a further extension, a generalized version of CISA (G-CISA) is presented.
  • We establish $O(1/k^{\gamma})$ ($\gamma\in(0,1)$) and linear convergence rates of the CISA algorithm under different assumptions. In particular, our analysis of linear convergence differs from the existing analysis [47,48] due to the special structure of CISA.
  • By combining CISA with ORSA, we develop the CISA-ORSA algorithm with a better numerical performance. The usual restrictive assumptions required in [43,44,45,46] are no longer needed in our analysis. Moreover, our CISA-ORSA algorithm allows inexact calculation in each iteration.
  • With CISA integration, the Backward–Forward splitting (BFS) algorithm [49] could accept a larger step-size and the Peaceman–Rachford splitting (PRS) algorithm [26] could be guaranteed to converge.

1.4. Organization

Section 2 presents some preliminaries and lemmas. Section 3 investigates CISA. We develop the CISA algorithm and analyze the impact of CISA on solving nonexpansive mappings and the corresponding convergence rates in Section 3.1. A general version of CISA (G-CISA) is studied in Section 3.2. Section 4 explores the relationship between CISA and ORSA. Section 5 combines CISA with the BFS algorithm and the PRS algorithm, respectively. Conclusions are made in Section 6.

2. Preliminaries

We first give some notation, definitions, and lemmas that are useful for the later analysis. Throughout this section, let S be a nonempty convex set in a Hilbert space.
Let $\gamma>0$ and let $\operatorname{prox}_{\gamma f}$ be the proximal mapping of $f:H\to\mathbb{R}$, defined as
$$\operatorname{prox}_{\gamma f}(x):=\arg\min_{y\in H}\Big\{f(y)+\frac{1}{2\gamma}\|y-x\|^2\Big\}.$$
Definition 1
(quasi-Fejér monotone [48]). The sequence $\{x_k\}_{k\in\mathbb{N}}$ in a Hilbert space is quasi-Fejér monotone with respect to S if it holds that
$$\|x_{k+1}-a\|\le\|x_k-a\|+\varepsilon_k,\qquad\forall k\in\mathbb{N},$$
for all $a\in S$ and some $\{\varepsilon_k\}_{k\in\mathbb{N}}$ satisfying $\varepsilon_k\ge 0$ and $\sum_{k=0}^{\infty}\varepsilon_k<\infty$.
Lemma 1
(Theorem 5.5 [8]). If the sequence  { x k } k N  in Hilbert space satisfies the following:
  • { x k } k N  is quasi-Fejér monotone with respect to S;
  • every weak cluster point of $\{x_k\}_{k\in\mathbb{N}}$ belongs to S;
then  { x k } k N  converges weakly to a point of S.
Lemma 2.
Suppose nonnegative sequences $\{a_k\}_{k\in\mathbb{N}}$ and $\{\varepsilon_k\}_{k\in\mathbb{N}}$ satisfy $a_{k+1}\le a_k+\varepsilon_k$ and $\sum_{k=0}^{\infty}\varepsilon_k<\infty$. Then, $\{a_k\}_{k\in\mathbb{N}}$ is convergent.
Denote by $\operatorname{dist}(x_k,a)$ the Euclidean distance between $x_k$ and a. For any $\alpha_k\in[0,1]$, let $y_k=x_k-\alpha_k(x_k-x_{k-1})$, as located in Figure 1. One can verify that
$$\operatorname{dist}(x_{k+1},a)\le\operatorname{dist}(y_k,a)\le(1-\alpha_k)\operatorname{dist}(x_k,a)+\alpha_k\operatorname{dist}(x_{k-1},a).$$
It is interesting to see that Figure 1 geometrically characterizes the iterative process of the exact CISA algorithm (7) in R 2 .
Figure 1. The point x k + 1 falls into the disk centered at a with a radius dist ( y k , a ) .
Motivated by Figure 1, we make the following extension of quasi-Fejér monotonicity, which is used for later analysis.
Definition 2
(weak quasi-Fejér monotone). The sequence $\{x_k\}_{k\in\mathbb{N}}$ in a Hilbert space is said to be weak quasi-Fejér monotone with respect to S if, for all $a\in S$, there exist sequences $\{\alpha_k\}_{k\in\mathbb{N}}\subseteq[0,1]$ and $\{\varepsilon_k\}_{k\in\mathbb{N}}\subseteq[0,\infty)$ with $\sum_{k=0}^{\infty}\varepsilon_k<\infty$ such that
$$\|x_{k+1}-a\|\le(1-\alpha_k)\|x_k-a\|+\alpha_k\|x_{k-1}-a\|+\varepsilon_k,\qquad\forall k\in\mathbb{N}.$$
Setting $\alpha_k\equiv 0$ reduces the definition of weak quasi-Fejér monotonicity to that of quasi-Fejér monotonicity. The following example gives a weak quasi-Fejér monotone sequence $\{x_k\}_{k\in\mathbb{N}}$ that is not quasi-Fejér monotone:
$$\|x_k-a\|=\operatorname{mod}(k,2)+1,\qquad\alpha_k=1-\operatorname{mod}(k,2)/2,$$
where $\operatorname{mod}(k,2)$ returns the remainder of k divided by 2.
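The example can be checked numerically. The following sketch (ours, assuming the reconstruction of the norms and coefficients above) verifies the weak quasi-Fejér inequality with $\varepsilon_k=0$, while the distances themselves jump from 1 to 2 infinitely often, so plain quasi-Fejér monotonicity with summable errors fails.

```python
# Numerical check (illustrative) of the example: d_k := ||x_k - a|| = (k mod 2) + 1
# and alpha_k = 1 - (k mod 2)/2. The weak quasi-Fejer inequality
# d_{k+1} <= (1 - alpha_k) d_k + alpha_k d_{k-1} holds with eps_k = 0,
# yet d_k increases by 1 infinitely often (not summable increments).

def d(k):          # ||x_k - a||
    return k % 2 + 1

def alpha(k):
    return 1 - (k % 2) / 2

weak_ok = all(d(k + 1) <= (1 - alpha(k)) * d(k) + alpha(k) * d(k - 1) + 1e-12
              for k in range(1, 1000))
increases = sum(1 for k in range(1, 1000) if d(k + 1) > d(k))

print(weak_ok)    # True: the weak quasi-Fejer inequality holds exactly
print(increases)  # about half of all k: the jumps 1 -> 2 never stop
```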
As a further extension, we propose in the following a generalized version of weak quasi-Fejér monotonicity. Throughout this paper, we assume $m\ge 2$.
Definition 3
(m weak quasi-Fejér monotone). The sequence $\{x_k\}_{k\in\mathbb{N}}$ in a Hilbert space is m weak quasi-Fejér monotone with respect to S if it holds that
$$\|x_{k+1}-a\|\le\sum_{i=k-m+1}^{k}\alpha_{k,i}\|x_i-a\|+\varepsilon_k,\qquad\sum_{i=k-m+1}^{k}\alpha_{k,i}=1,\quad\alpha_{k,i}\in[0,1],\qquad(3)$$
for all $a\in S$, with $\varepsilon_k\ge 0$ and $\sum_{k=0}^{\infty}\varepsilon_k<\infty$.
It is trivial to observe that the definition of weak quasi-Fejér monotonicity corresponds to that of 2 weak quasi-Fejér monotonicity. Similarly to Lemma 1, which clarifies the relation between quasi-Fejér monotonicity and weak convergence of $\{x_k\}_{k\in\mathbb{N}}$, the following lemma establishes the relation between m weak quasi-Fejér monotonicity and weak convergence of $\{x_k\}_{k\in\mathbb{N}}$.
Lemma 3.
If the sequence  { x k } k N  in Hilbert space satisfies the following:
  • { x k } k N  is m weak quasi-Fejér monotone with respect to S;
  • every weak cluster point of $\{x_k\}_{k\in\mathbb{N}}$ belongs to S;
  • for $1\le i\le m-1$,
$$\sum_{k=m-1}^{\infty}\big|\alpha_{k,k-i}-\alpha_{k-i,k-2i}\big|<\infty.\qquad(4)$$
Then,  { x k } k N  converges weakly to a point of S.
Proof. 
It follows from (3) that, for $a\in S$,
$$\|x_{k+1}-a\|\le\max_{k-m+1\le i\le k}\|x_i-a\|+\varepsilon_k\le\max_{1-m\le i\le 0}\|x_i-a\|+\sum_{i=0}^{k}\varepsilon_i.$$
Since $\sum_{k=0}^{\infty}\varepsilon_k<\infty$, the sequence $\{\|x_k-a\|\}_{k\in\mathbb{N}}$ is bounded. Again, (3) implies that
$$\|x_{k+1}-a\|\le\|x_k-a\|+\sum_{i=k-m+1}^{k-1}\alpha_{k,i}\big(\|x_i-a\|-\|x_k-a\|\big)+\varepsilon_k.\qquad(5)$$
Since we can verify that
$$\begin{aligned}
&\sum_{k=m-1}^{K}\Big(\sum_{i=k-m+1}^{k-1}\alpha_{k,i}\big(\|x_i-a\|-\|x_k-a\|\big)+\varepsilon_k\Big)\\
&\quad=\sum_{k=m-1}^{K}\Big(\sum_{i=1}^{m-1}\alpha_{k,k-i}\big(\|x_{k-i}-a\|-\|x_k-a\|\big)+\varepsilon_k\Big)\\
&\quad=\sum_{k=m-1}^{K}\Big(\sum_{i=1}^{m-1}\big(\alpha_{k-i,k-2i}\|x_{k-i}-a\|-\alpha_{k,k-i}\|x_k-a\|+(\alpha_{k,k-i}-\alpha_{k-i,k-2i})\|x_{k-i}-a\|\big)+\varepsilon_k\Big)\\
&\quad=\sum_{i=1}^{m-1}\sum_{j=0}^{i-1}\alpha_{m-1-i+j,\,m-1-2i+j}\|x_{m-1-i+j}-a\|-\sum_{i=1}^{m-1}\sum_{j=0}^{i-1}\alpha_{K-j,\,K-j-i}\|x_{K-j}-a\|\\
&\qquad+\sum_{k=m-1}^{K}\Big(\sum_{i=1}^{m-1}(\alpha_{k,k-i}-\alpha_{k-i,k-2i})\|x_{k-i}-a\|+\varepsilon_k\Big)<\infty,\qquad K\to\infty,\qquad(6)
\end{aligned}$$
where the last inequality follows from the boundedness of $\{\|x_k-a\|\}_{k\in\mathbb{N}}$ and $\sum_{k=0}^{\infty}\varepsilon_k<\infty$, as well as the assumption (4). Therefore, (5) and (6) yield that $\{x_k\}_{k\in\mathbb{N}}$ is quasi-Fejér monotone with respect to S. Hence, according to Lemma 1, $\{x_k\}_{k\in\mathbb{N}}$ converges weakly to a point of S.          □
Remark 1.
If we select
$$\alpha_{k,i}=\begin{cases}\alpha_k & i=k,\\ 1-\alpha_k & i=k-1,\\ 0 & k-m+1\le i<k-1,\end{cases}$$
then the assumption (4) becomes $\sum_{k=1}^{\infty}|\alpha_k-\alpha_{k-1}|<\infty$.
Finally, the following two facts will be used frequently in the later analysis. For $a,b\in H$, we have the following:
F1.
$$\|a+b\|^2\le\|a\|^2+2\langle b,\,a+b\rangle.$$
F2.
$$\|sa+tb\|^2=s(s+t)\|a\|^2+t(s+t)\|b\|^2-st\|a-b\|^2,\qquad\forall s,t\in\mathbb{R}.$$

3. Investigation of CISA

We first study the role of CISA in solving nonexpansive mappings, and then make a generalization of CISA.

3.1. CISA in Solving Nonexpansive Mapping

We first present in the following the CISA algorithm (Algorithm 1) for solving nonexpansive mappings (1):
Algorithm 1 CISA
Input: 
Initial points $x_{-1},x_0\in X$; sequence $\{\alpha_k\}\subseteq[0,1]$; error sequence $\{e_k\}$
  1:
for  k = 0 , 1 , 2 ,  do
  2:
    Step 1: Converse Inertial Extrapolation
  3:
$y_k=x_k-\alpha_k(x_k-x_{k-1})$.
  4:
    Step 2: Inexact Operator Evaluation
  5:
$x_{k+1}=T(y_k)+e_k$.
  6:
    Note:  e k represents computational error in evaluating T ( y k )
  7:
end for
Output: 
$x_k$
The scheme of the CISA algorithm reads
$$\big(x_{-1},x_0\in X\big)\qquad y_k=x_k-\alpha_k(x_k-x_{k-1}),\qquad x_{k+1}=Ty_k+e_k,\qquad(7)$$
where $\alpha_k\in[0,1]$ and $e_k$ characterizes the error in calculating $Ty_k$. In spite of the impossibility of applying ISA to nonexpansive mappings, we can show that CISA stably guarantees weak convergence for solving nonexpansive mappings. Our analysis, based on the newly developed weak quasi-Fejér monotonicity, requires only very mild assumptions on $\alpha_k$ and $e_k$ rather than the strong assumptions in [41]. In addition, each step uses only the last two iterates $x_k$ and $x_{k-1}$, greatly reducing the cost of storing all past iterates as in [41]. It is not difficult to verify that Figure 1 geometrically characterizes the iterative process of the exact CISA algorithm (7) ($e_k=0$) in $\mathbb{R}^2$ if $a\in\operatorname{Fix}T$.
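As a minimal illustration of iteration (7), the following sketch (ours; the map $T=-I$ and the constant parameter $\alpha_k\equiv 1/2$ are assumed for demonstration) runs exact CISA ($e_k=0$) on a map for which the plain Picard iteration merely oscillates.

```python
# Minimal sketch of the CISA iteration (7) on the nonexpansive map T = -I
# (the classical counterexample where Picard fails), with alpha_k = 1/2
# and exact evaluations (e_k = 0).

def T(x):
    return -x  # nonexpansive, Fix T = {0}

def cisa(x_prev, x0, alpha=0.5, steps=200):
    xm1, x = x_prev, x0
    for _ in range(steps):
        y = x - alpha * (x - xm1)   # converse inertial step
        xm1, x = x, T(y)            # x_{k+1} = T y_k  (e_k = 0)
    return x

print(abs(cisa(1.0, 1.0)))  # tends to 0, while Picard x_{k+1} = -x_k oscillates
```

Here $x_{k+1}=-\tfrac12(x_k+x_{k-1})$, whose characteristic roots have modulus $\sqrt{1/2}<1$, so the iterates contract toward the fixed point.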
Theorem 1.
Under the assumptions that
$$\sum_{k=0}^{\infty}\|e_k\|<\infty,\qquad(8a)$$
$$\sum_{k=0}^{\infty}\alpha_k(1-\alpha_k)=\infty,\qquad(8b)$$
$$\sum_{k=1}^{\infty}|\alpha_k-\alpha_{k-1}|<\infty,\qquad(8c)$$
the sequence { x k } k N generated by CISA algorithm (7) satisfies the following:
(i)
{ x k } k N  is bounded.
(ii)
{ x k } k N  is weak quasi-Fejér monotone with respect to FixT.
(iii)
every weak cluster point of $\{x_k\}_{k\in\mathbb{N}}$ belongs to FixT.
(iv)
{ x k } k N  converges weakly to a point of FixT.
Proof. 
(i) Suppose $x^*\in\operatorname{Fix}T$ is a fixed point of T. For $k\in\mathbb{N}$, we have
$$\begin{aligned}
\|x_{k+1}-x^*\|&=\big\|T\big((1-\alpha_k)x_k+\alpha_k x_{k-1}\big)-x^*+e_k\big\|\\
&\le\big\|T\big((1-\alpha_k)x_k+\alpha_k x_{k-1}\big)-x^*\big\|+\|e_k\|\\
&\le\big\|(1-\alpha_k)x_k+\alpha_k x_{k-1}-x^*\big\|+\|e_k\|
\end{aligned}\qquad(9)$$
$$\begin{aligned}
&\le(1-\alpha_k)\|x_k-x^*\|+\alpha_k\|x_{k-1}-x^*\|+\|e_k\|\\
&\le\max\big\{\|x_k-x^*\|,\|x_{k-1}-x^*\|\big\}+\|e_k\|\\
&\le\max\big\{\|x_0-x^*\|,\|x_{-1}-x^*\|\big\}+\sum_{i=0}^{k}\|e_i\|<\infty,
\end{aligned}\qquad(10)$$
where (9) holds since T is nonexpansive and $x^*\in\operatorname{Fix}T$, and the last inequality follows from the assumption (8a). That is, $\{x_k\}_{k\in\mathbb{N}}$ is bounded.
(ii) Weak quasi-Fejér monotone of { x k } k N directly follows from (10) together with the assumption (8a).
(iii) According to the nonexpansiveness of T, we have
$$\begin{aligned}
\|x_{k+1}-x^*\|^2&=\big\|T\big((1-\alpha_k)x_k+\alpha_k x_{k-1}\big)-x^*+e_k\big\|^2\\
&\overset{\mathrm{F1}}{\le}\big\|T\big((1-\alpha_k)x_k+\alpha_k x_{k-1}\big)-x^*\big\|^2+2\langle e_k,\,x_{k+1}-x^*\rangle\\
&\le\big\|(1-\alpha_k)x_k+\alpha_k x_{k-1}-x^*\big\|^2+2\langle e_k,\,x_{k+1}-x^*\rangle\\
&\overset{\mathrm{F2}}{=}(1-\alpha_k)\|x_k-x^*\|^2+\alpha_k\|x_{k-1}-x^*\|^2-\alpha_k(1-\alpha_k)\|x_k-x_{k-1}\|^2+2\langle e_k,\,x_{k+1}-x^*\rangle.
\end{aligned}\qquad(11)$$
Summing the inequalities (11) from $k=0$ to $K$ and rearranging gives the following inequality:
$$\begin{aligned}
\sum_{k=0}^{K}\alpha_k(1-\alpha_k)\|x_k-x_{k-1}\|^2
&\le\sum_{k=0}^{K}\Big[\big(\|x_k-x^*\|^2+\alpha_k\|x_{k-1}-x^*\|^2\big)-\big(\|x_{k+1}-x^*\|^2+\alpha_{k+1}\|x_k-x^*\|^2\big)\Big]\\
&\quad+\sum_{k=0}^{K}\Big[(\alpha_{k+1}-\alpha_k)\|x_k-x^*\|^2+2\langle e_k,\,x_{k+1}-x^*\rangle\Big]\\
&=\big(\|x_0-x^*\|^2+\alpha_0\|x_{-1}-x^*\|^2\big)-\big(\|x_{K+1}-x^*\|^2+\alpha_{K+1}\|x_K-x^*\|^2\big)\\
&\quad+\sum_{k=0}^{K}\Big[(\alpha_{k+1}-\alpha_k)\|x_k-x^*\|^2+2\langle e_k,\,x_{k+1}-x^*\rangle\Big].
\end{aligned}\qquad(12)$$
Notice that
$$\langle e_k,\,x_{k+1}-x^*\rangle\le\|e_k\|\cdot\|x_{k+1}-x^*\|.$$
According to the boundedness of { x k } k N , assumptions (8a) and (8c), we obtain
$$\sum_{k=0}^{\infty}\alpha_k(1-\alpha_k)\|x_k-x_{k-1}\|^2<\infty.\qquad(13)$$
Combining the assumption (8b) with (13) implies that
$$\liminf_{k\to\infty}\|x_k-x_{k-1}\|^2=0.\qquad(14)$$
Again, based on the nonexpansiveness of T, we have
$$\begin{aligned}
\|x_{k+1}-x_k\|^2&=\|Ty_k-Ty_{k-1}+e_k-e_{k-1}\|^2\\
&\overset{\mathrm{F1}}{\le}\|y_k-y_{k-1}\|^2+2\langle e_k-e_{k-1},\,x_{k+1}-x_k\rangle\\
&=\big\|(1-\alpha_k)(x_k-x_{k-1})+\alpha_{k-1}(x_{k-1}-x_{k-2})\big\|^2+2\langle e_k-e_{k-1},\,x_{k+1}-x_k\rangle\\
&\overset{\mathrm{F2}}{\le}(1-\alpha_k+\alpha_{k-1})\big[(1-\alpha_k)\|x_k-x_{k-1}\|^2+\alpha_{k-1}\|x_{k-1}-x_{k-2}\|^2\big]+2\langle e_k-e_{k-1},\,x_{k+1}-x_k\rangle\\
&\le(1-\alpha_k)\|x_k-x_{k-1}\|^2+\alpha_{k-1}\|x_{k-1}-x_{k-2}\|^2+|\alpha_k-\alpha_{k-1}|\big(\|x_k-x_{k-1}\|^2+\|x_{k-1}-x_{k-2}\|^2\big)\\
&\quad+2\big(\|e_k\|+\|e_{k-1}\|\big)\|x_{k+1}-x_k\|.
\end{aligned}\qquad(15)$$
We conclude that $\|x_k-x_{k-1}\|^2$ is convergent by substituting $a_k=\|x_k-x_{k-1}\|^2$ and $\varepsilon_k=\alpha_{k-1}\|x_{k-1}-x_{k-2}\|^2-\alpha_k\|x_k-x_{k-1}\|^2+|\alpha_k-\alpha_{k-1}|\big(\|x_k-x_{k-1}\|^2+\|x_{k-1}-x_{k-2}\|^2\big)+2\big(\|e_k\|+\|e_{k-1}\|\big)\|x_{k+1}-x_k\|$ into Lemma 2. Thus, according to (14), it holds that $\lim_{k\to\infty}\|x_k-x_{k-1}\|^2=0$. Therefore, we have
$$\lim_{k\to\infty}\|Tx_k-x_k\|\le\lim_{k\to\infty}\big(\|Tx_k-Ty_k\|+\|Ty_k-x_k\|\big)\le\lim_{k\to\infty}\big(\alpha_k\|x_k-x_{k-1}\|+\|x_{k+1}-x_k\|+\|e_k\|\big)=0.$$
Together with the boundedness of $\{x_k\}_{k\in\mathbb{N}}$, this implies that every weak cluster point of $\{x_k\}_{k\in\mathbb{N}}$ belongs to FixT.
(iv) According to Lemma 3 with m = 2, it follows from (ii) and (iii) that { x k } k N converges weakly to a point of FixT.          □
As shown in the following corollary, $\{y_k\}_{k\in\mathbb{N}}$ is also weak quasi-Fejér monotone and thus converges weakly to a point of FixT.
Corollary 1.
Under assumption (8a), the sequence { y k } k N generated by CISA algorithm (7) satisfies the following:
(i)
{ y k } k N  is bounded.
(ii)
{ y k } k N  is weak quasi-Fejér monotone.
Under additional assumptions (8b) and (8c), the following holds:
(iii)
Every weak cluster point of $\{y_k\}_{k\in\mathbb{N}}$ belongs to FixT.
(iv)
( y k ) k N  converges weakly to a point of FixT.
Proof. 
Observing that CISA algorithm (7) is equivalent to
$$y_{k+1}=(1-\alpha_{k+1})Ty_k+\alpha_{k+1}Ty_{k-1}+(1-\alpha_{k+1})e_k+\alpha_{k+1}e_{k-1},$$
we can prove (i) and (ii) as with Theorem 1.
Proof of (iii) follows from the fact that
$$\|Ty_k-y_k\|\le\|x_{k+1}-y_k\|+\|e_k\|\le\|x_{k+1}-x_k\|+\alpha_k\|x_k-x_{k-1}\|+\|e_k\|.$$
Proof of (iv) is similar to that in Theorem 1.          □
Remark 2.
Assumptions (8b) and (8c) are easy to satisfy. We list in the following two examples:
  • $\alpha_k\equiv\alpha\in(0,1)$;
  • $\alpha_k=\frac{1}{(k+1)^p}$, $p\in(0,1]$.
The sublinear convergence of the CISA algorithm (7) is then established.
Theorem 2.
Let $\{z_k\}_{k\in\mathbb{N}}$ be either $\{x_k\}_{k\in\mathbb{N}}$ or $\{y_k\}_{k\in\mathbb{N}}$ generated by CISA algorithm (7).
(i)
If $\alpha_k\equiv\alpha\in(0,1)$ and $\sum_{k=0}^{\infty}k\|e_k\|<\infty$, then
$$\|Tz_k-z_k\|^2=O\Big(\frac{1}{k}\Big).$$
(ii)
If $\alpha_k=\frac{1}{(k+1)^p}$ for $p\in(0,1]$ and $\sum_{k=0}^{\infty}k^{1-p}\|e_k\|<\infty$, then
$$\|Tz_k-z_k\|^2=\begin{cases}O\big(\frac{1}{\ln k}\big)& p=1;\\[2pt] O\big(\frac{1}{k^{1-p}}\big)& p\in[\frac12,1);\\[2pt] O\big(\frac{1}{k^{p}}\big)& p\in(0,\frac12).\end{cases}$$
Proof. 
Rearranging the inequality (15) gives
$$\|x_{k+1}-x_k\|^2+\alpha_k\|x_k-x_{k-1}\|^2\le\|x_k-x_{k-1}\|^2+\alpha_{k-1}\|x_{k-1}-x_{k-2}\|^2+2\delta\big(|\alpha_k-\alpha_{k-1}|+\|e_k\|+\|e_{k-1}\|\big),\qquad(16)$$
where $\delta=2\max_{k\in\mathbb{N}}\{\|x_k-x_{k-1}\|\}$.
Proof of (i). For the case $\alpha_k\equiv\alpha\in(0,1)$, repeatedly applying (16), we have
$$k\big(\|x_{k+1}-x_k\|^2+\alpha\|x_k-x_{k-1}\|^2\big)\le\sum_{i=0}^{k-1}\big(\|x_{i+1}-x_i\|^2+\alpha\|x_i-x_{i-1}\|^2\big)+2\delta\sum_{i=1}^{k}i\big(\|e_i\|+\|e_{i-1}\|\big)<\infty,\qquad(17)$$
where the last inequality follows from (13) and $\sum_{k=0}^{\infty}k\|e_k\|<\infty$. Then, it holds that, for $k\to\infty$,
$$\|x_{k+1}-x_k\|^2=O(1/k).\qquad(18)$$
Substituting (18) into the following two inequalities completes the proof of (i).
$$\begin{aligned}
\|Tx_k-x_k\|^2&=\|Tx_k-Ty_k+Ty_k-x_k\|^2=\|Tx_k-Ty_k+x_{k+1}-x_k-e_k\|^2\\
&\le 3\|Tx_k-Ty_k\|^2+3\|x_{k+1}-x_k\|^2+3\|e_k\|^2\\
&\le 3\alpha^2\|x_k-x_{k-1}\|^2+3\|x_{k+1}-x_k\|^2+3\|e_k\|^2.
\end{aligned}\qquad(19)$$
$$\begin{aligned}
\|Ty_k-y_k\|^2&=\|x_{k+1}-y_k-e_k\|^2=\|x_{k+1}-x_k+\alpha(x_k-x_{k-1})-e_k\|^2\\
&\le 3\alpha^2\|x_k-x_{k-1}\|^2+3\|x_{k+1}-x_k\|^2+3\|e_k\|^2.
\end{aligned}\qquad(20)$$
Proof of (ii). For $\alpha_k=\frac{1}{(k+1)^p}$ ($p\in(0,1]$), repeatedly applying (16) implies that
$$\begin{aligned}
&\Big(\sum_{i=1}^{k}\frac{1}{(i+1)^p}\Big)\|x_{k+1}-x_k\|^2+\frac{1}{(k+1)^p}\|x_k-x_{k-1}\|^2\\
&\quad\le\sum_{i=0}^{k-1}\Big(\frac{1}{(i+1)^p}\|x_{i+1}-x_i\|^2+\frac{1}{(i+1)^p}\|x_i-x_{i-1}\|^2\Big)\\
&\qquad+2\delta\sum_{i=1}^{k}i\,\frac{1}{(i+1)^p}\Big(\frac{1}{i^p}-\frac{1}{(i+1)^p}\Big)+2\delta\sum_{i=1}^{k}i\,\frac{1}{(i+1)^p}\big(\|e_i\|+\|e_{i-1}\|\big).
\end{aligned}$$
Dividing both sides of the above inequality by $\sum_{i=1}^{k}1/(i+1)^p$, we obtain that $\|x_{k+1}-x_k\|^2$ has the same order in $k$ as
$$\frac{\displaystyle\sum_{i=1}^{k}i\,\frac{1}{(i+1)^p}\Big(\frac{1}{i^p}-\frac{1}{(i+1)^p}\Big)}{\displaystyle\sum_{i=1}^{k}\frac{1}{(i+1)^p}}=\begin{cases}O\big(\frac{1}{\ln k}\big)& p=1;\\[2pt] O\big(\frac{1}{k^{1-p}}\big)& p\in[\frac12,1);\\[2pt] O\big(\frac{1}{k^{p}}\big)& p\in(0,\frac12),\end{cases}$$
when k . The proof is completed from (19) and (20).          □
Remark 3.
1. It was shown in (Theorem 1 [48]) that the inexact KM iteration converges at the same rate as in (i) of Theorem 2 under the assumption that $(k+1)\|e_k\|$ is summable. According to Theorem 2, this assumption can be relaxed: for example, if we only assume that $\|e_k\|$ is summable, selecting $\alpha_k=1/(k+1)$ will still lead to convergence (at a rate of $O(1/\ln k)$). 2. The constants in the sublinear rates depend on $\{\|e_k\|\}$ and the initial iterates; see also (13).
We now establish the linear convergence rate for CISA algorithm (7).
Theorem 3.
The sequence $\{x_k\}_{k\in\mathbb{N}}$ generated by CISA algorithm (7) with $\alpha_k\equiv\alpha$ and $e_k=0$, if it further satisfies
$$\mu\|x_k-x^*\|^2\le\|Tx_k-x_k\|^2,\qquad x^*\in\operatorname{Fix}T,\ \mu>0,\qquad(22)$$
converges to a point of FixT in finitely many steps, or converges linearly in the sense that there is at least one $i_k\in\{0,1,2,3\}$ such that
$$\|x_{k+1}-x^*\|^2+\|x_{k+2}-x^*\|^2\le\delta\big(\|x_{k-i_k}-x^*\|^2+\|x_{k-i_k+1}-x^*\|^2\big),\qquad(23)$$
where 0 < δ < 1 .
Proof. 
The technical proof is presented in Appendix A.          □
Remark 4.
The assumption (22) is called Metric sub-regularity [50] of T I . It was used in the analysis of linear convergence rate of the KM iteration in [47,48]. In linear programming, metric sub-regularity of the KKT solution mapping at a strictly complementary solution guarantees local linear convergence for various first-order optimization methods, such as the augmented Lagrangian method.

3.2. G-CISA: General Converse Inertial Step Approach

In this subsection, we extend CISA to a generalized version for solving nonexpansive mappings (1), denoted by the G-CISA algorithm:
$$\big(x_{1-m},\ldots,x_{-1},x_0\in X\big)\qquad y_k=\sum_{i=k-m+1}^{k}\alpha_{k,i}\,x_i,\qquad x_{k+1}=Ty_k+e_k,\qquad(24)$$
where the $\alpha_{k,i}$ satisfy $\sum_{i=k-m+1}^{k}\alpha_{k,i}=1$ with $\alpha_{k,i}\ge 0$, and $e_k$ stands for the error in calculating $Ty_k$. By rewriting $y_k$ as
$$y_k=x_k-\sum_{i=k-m+1}^{k-1}\alpha_{k,i}\,(x_k-x_i),$$
we see exactly the origin of the name general converse inertial step approach.
The G-CISA algorithm (24) can be compared with Anderson acceleration [51,52] and the Energy Direct Inversion in the Iterative Subspace (EDIIS) algorithm [53,54]. The main difference is that Anderson acceleration and EDIIS are based on an optimal choice of $\alpha_{k,i}$ to gain excellent numerical performance under strict assumptions, including that T is contractive and Lipschitz differentiable. As shown in the following, G-CISA for solving nonexpansive mappings stably converges under weak assumptions. Our analysis is based on the newly developed m weak quasi-Fejér monotonicity.
Theorem 4.
Under assumption (8a), the sequence { x k } k N generated by G-CISA algorithm (24) satisfies the following:
(i)
{ x k } k N  is bounded.
(ii)
{ x k } k N  is m weak quasi-Fejér monotone with respect to FixT.
Moreover, if there is an $\bar i\in\{0,1,\ldots,m-2\}$ such that
$$\sum_{k=0}^{\infty}\alpha_{k,k-\bar i}\,\alpha_{k,k-\bar i-1}=\infty\qquad(25)$$
and, for $0\le i\le m-1$,
$$\sum_{k=i}^{\infty}\big|\alpha_{k,k-i}-\alpha_{k-1,k-i-1}\big|<\infty,\qquad(26)$$
Then, the following holds:
(iii)
Every weak cluster point of $\{x_k\}_{k\in\mathbb{N}}$ belongs to FixT.
(iv)
{ x k } k N  converges weakly to a point of FixT.
Proof. 
See Appendix B for the proof.         □
Notice that G-CISA algorithm (24) can be reformulated as
$$y_{k+1}=\sum_{i=k-m+2}^{k+1}\alpha_{k+1,i}\big(Ty_{i-1}+e_{i-1}\big),\qquad k\in\mathbb{N}.$$
We have the following result similar to Corollary 1.
Corollary 2.
Under assumption (8a), the sequence ( y k ) k N generated by the G-CISA algorithm (24) satisfies the following:
(i)
( y k ) k N is bounded.
(ii)
( y k ) k N is m weak quasi-Fejér monotone.
Moreover, under assumptions (25) and (26), we have the following:
(iii)
every weak cluster point of $\{y_k\}_{k\in\mathbb{N}}$ belongs to FixT.
(iv)
( y k ) k N converges weakly to a point of FixT.
Remark 5.
Consider the coefficient matrix of the $x_k$,
$$C=\begin{pmatrix}\alpha_{0,1-m}&\alpha_{1,2-m}&\cdots&\alpha_{k,k-m+1}&\cdots\\ \alpha_{0,2-m}&\alpha_{1,3-m}&\cdots&\alpha_{k,k-m+2}&\cdots\\ \vdots&\vdots&&\vdots&\\ \alpha_{0,0}&\alpha_{1,1}&\cdots&\alpha_{k,k}&\cdots\end{pmatrix}.$$
Assumption (25) means that there are at least two adjacent rows such that their inner product is divergent. Assumption (26) means that the absolute difference between two adjacent rows is summable. We list in the following two matrices satisfying assumptions (25) and (26):
$$C=\begin{pmatrix}a_1&a_1&\cdots\\ a_2&a_2&\cdots\\ \vdots&\vdots&\\ a_m&a_m&\cdots\end{pmatrix},\qquad\sum_{i=1}^{m}a_i=1,\ a_i>0,$$
and the matrix whose $k$-th column has its first $m-1$ entries equal to $\frac{1}{(k+1)^p(m-1)}$ and last entry equal to $1-\frac{1}{(k+1)^p}$, with $0<p\le 1$.
Moreover, if we select α k , i as in Remark 1, assumptions (25) and (26) in Theorem 4 reduce to assumptions (8b) and (8c) in Theorem 1, respectively. It turns out that Theorem 1 corresponds to the special case of Theorem 4 with m = 2 .

4. Relation Between CISA and Over-Relaxed Step Approach (ORSA)

As briefly presented in the Introduction section, ORSA is proposed to achieve a better numerical performance under very restrictive assumptions; see [43,44,45,46] for examples. This section shows that in solving nonexpansive mapping with CISA integration, ORSA could converge under much more relaxed assumptions. Moreover, inexact calculation in each iteration is allowed.
We first present the general inexact version of the CISA-ORSA algorithm to solve nonexpansive mapping (1):
$$\big(x_{-1},x_0\in X\big)\qquad y_k=x_k-\alpha_k(x_k-x_{k-1}),\qquad x_{k+1}=(1-\lambda_k)y_k+\lambda_k Ty_k+e_k,\qquad(27)$$
where $\alpha_k\in[0,1]$, $\lambda_k\ge 1$, and $e_k$ denotes the error in calculating $Ty_k$. Notice that ORSA corresponds to the special case $\alpha_k\equiv 0$ and $e_k\equiv 0$. In particular, Boţ et al. [55] combined ISA with this scheme to solve averaged nonexpansive mappings in the exact case, i.e., with the setting $e_k\equiv 0$. The following theorem covers the inexact case with $\alpha_k\in[0,1]$ and $\lambda_k\ge 1$ under only mild assumptions.
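The scheme (27) can be sketched in a few lines. In the following toy example (ours; the map $T=-I$ and the constant parameters are illustrative assumptions), $\alpha=0.5$ keeps the constant-parameter condition of Remark 7 satisfied for $\lambda=1.1<9/8$, whereas plain over-relaxation without the converse inertial step ($\alpha=0$) diverges for the same $\lambda$.

```python
# Sketch of the CISA-ORSA iteration (27) with constant parameters and e_k = 0
# on the toy nonexpansive map T = -I.

def T(x):
    return -x

def cisa_orsa(x_prev, x0, alpha, lam, steps=300):
    xm1, x = x_prev, x0
    for _ in range(steps):
        y = x - alpha * (x - xm1)                # converse inertial step
        xm1, x = x, (1 - lam) * y + lam * T(y)   # over-relaxed step
    return x

print(abs(cisa_orsa(1.0, 1.0, alpha=0.5, lam=1.1)))  # converges toward 0
print(abs(cisa_orsa(1.0, 1.0, alpha=0.0, lam=1.1)))  # grows: x -> (1-2*1.1) x = -1.2 x
```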
Theorem 5.
Let T : X X be bounded. For k N , assume that
$\{\alpha_k\}$ is non-increasing in $[0,1]$,\qquad(28a)
$\{\lambda_k\}$ is non-decreasing and $\lambda_0\ge 1$,\qquad(28b)
$$\sum_{k=0}^{\infty}\|e_k\|<\infty,\qquad(28c)$$
$$\frac{(1-\lambda_k)(1+\alpha_k)^2}{\lambda_k}+\alpha_k(1-\alpha_k)\ge\delta>0.\qquad(28d)$$
Then, the sequence { x k } k N generated by CISA-ORSA algorithm (27) converges weakly to a point in FixT.
Proof. 
Let x * be a fixed point in FixT. It follows from the presentation of CISA-ORSA algorithm (27) and the nonexpansiveness of T that
$$\begin{aligned}
\|x_{k+1}-x^*\|^2&=\big\|(1-\lambda_k)(y_k-x^*)+\lambda_k(Ty_k-x^*)+e_k\big\|^2\\
&\overset{\mathrm{F1}}{\le}(1-\lambda_k)\|y_k-x^*\|^2+\lambda_k\|Ty_k-x^*\|^2-\lambda_k(1-\lambda_k)\|Ty_k-y_k\|^2+2\langle e_k,\,x_{k+1}-x^*\rangle\\
&\le\|y_k-x^*\|^2-\frac{1-\lambda_k}{\lambda_k}\|x_{k+1}-y_k-e_k\|^2+2\langle e_k,\,x_{k+1}-x^*\rangle\\
&=\|y_k-x^*\|^2-\frac{1-\lambda_k}{\lambda_k}\|x_{k+1}-y_k\|^2+\underbrace{2\langle e_k,\,Ty_k-x^*\rangle+\Big(\frac{1-\lambda_k}{\lambda_k}+2\Big)\|e_k\|^2}_{=:\ \epsilon_k}\\
&\overset{\mathrm{F2}}{\le}(1-\alpha_k)\|x_k-x^*\|^2+\alpha_k\|x_{k-1}-x^*\|^2-\alpha_k(1-\alpha_k)\|x_k-x_{k-1}\|^2\\
&\qquad-\frac{1-\lambda_k}{\lambda_k}\Big[(1+\alpha_k)\|x_{k+1}-x_k\|^2+\alpha_k(1+\alpha_k)\|x_k-x_{k-1}\|^2\Big]+\epsilon_k,\qquad(29)
\end{aligned}$$
where F2 is used twice in deriving the last inequality. According to assumptions (28a) and (28b), rearranging the above inequality (29) yields
$$\begin{aligned}
&\|x_{k+1}-x^*\|^2+\alpha_k\|x_k-x^*\|^2+\frac{(1-\lambda_{k+1})(1+\alpha_{k+1})}{\lambda_{k+1}}\|x_{k+1}-x_k\|^2\\
&\quad\le\|x_k-x^*\|^2+\alpha_{k-1}\|x_{k-1}-x^*\|^2+\frac{(1-\lambda_k)(1+\alpha_k)}{\lambda_k}\|x_k-x_{k-1}\|^2\\
&\qquad-\Big[\frac{(1-\lambda_k)(1+\alpha_k)^2}{\lambda_k}+\alpha_k(1-\alpha_k)\Big]\|x_k-x_{k-1}\|^2+\epsilon_k.\qquad(30)
\end{aligned}$$
Summing the above inequality over $k$ from 0 to $\infty$, we obtain
$$\sum_{k=0}^{\infty}\Big[\frac{(1-\lambda_k)(1+\alpha_k)^2}{\lambda_k}+\alpha_k(1-\alpha_k)\Big]\|x_k-x_{k-1}\|^2\le\|x_0-x^*\|^2+\alpha_{-1}\|x_{-1}-x^*\|^2+\frac{(1-\lambda_0)(1+\alpha_0)}{\lambda_0}\|x_0-x_{-1}\|^2+\sum_{k=0}^{\infty}\epsilon_k<\infty,\qquad(31)$$
where the last inequality holds true by the boundedness of T and assumption (28c):
$$\sum_{k=0}^{\infty}\epsilon_k\le\sum_{k=0}^{\infty}\Big(2\|Ty_k-x^*\|+\Big(\frac{1-\lambda_k}{\lambda_k}+2\Big)\|e_k\|\Big)\|e_k\|<\infty.$$
Then, it follows from assumption (28d) that
$$\lim_{k\to\infty}\|x_k-x_{k-1}\|^2=0.\qquad(32)$$
Substituting (32) and (28c) into the following inequality
$$\begin{aligned}
\|Tx_k-x_k\|&=\|Tx_k-Ty_k+Ty_k-y_k+y_k-x_k\|\\
&\le 2\|x_k-y_k\|+\frac{1}{\lambda_k}\|x_{k+1}-y_k\|+\frac{1}{\lambda_k}\|e_k\|\\
&\le\Big(2\alpha_k+\frac{\alpha_k}{\lambda_k}\Big)\|x_k-x_{k-1}\|+\frac{1}{\lambda_k}\|x_{k+1}-x_k\|+\frac{1}{\lambda_k}\|e_k\|
\end{aligned}$$
yields that
$$\lim_{k\to\infty}\|Tx_k-x_k\|=0.$$
That is, every weak cluster point of $\{x_k\}_{k\in\mathbb{N}}$ belongs to FixT. The proof of weak convergence is similar to that of (Lemma 2.39 [8]) and hence omitted.          □
Remark 6.
According to the proof of Theorem 5, if $e_k\equiv 0$, then the boundedness assumption on T can be removed.
Remark 7.
Suppose $\alpha_k\equiv\alpha\in(0,1)$ and $\lambda_k\equiv\lambda$; then $\{x_k\}_{k\in\mathbb{N}}$ generated by the CISA-ORSA algorithm (27) converges weakly to a point in FixT under the following simplified assumption:
$$\frac{(1+\alpha)^2}{(1+\alpha)^2-\alpha(1-\alpha)}>\lambda\ge 1.$$
Example 1.
Consider the problem of minimizing $f(x,y)=(x^2+y^2)^3$, which has a unique minimizer $x^*=(0,0)$. Define the nonexpansive mapping $T_f=2\operatorname{prox}_{\tau f}-I$ and set $\tau=10^{-2}$. We compare three convergence-guaranteed algorithms: PPA (with initial point $x_0=(3,1)$), the CISA algorithm (7), and the CISA-ORSA algorithm (27) (with the settings $e_k\equiv 0$, $T=T_f$, and initial points $x_{-1}=x_0=(3,1)$). We plot their convergence in Figure 2 and observe that, in this example, the CISA-ORSA algorithm (27) runs the fastest.
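A rough numerical reproduction of this example is sketched below (ours; we read $\tau=10^{-2}$, initial point $(3,1)$, and take $\alpha=1/2$ as illustrative values). Since $f$ is radial, the proximal point lies on the ray of the input and its radius $p$ solves $p+6\tau p^5=r$, which we compute by bisection; convergence near the minimizer is slow because $T_f\approx I$ there.

```python
# Rough reproduction of Example 1: minimize f(x,y) = (x^2 + y^2)^3 by running
# CISA on the nonexpansive reflection T_f = 2 prox_{tau f} - I, tau = 1e-2.
import math

TAU = 1e-2

def prox(v):
    # prox_{tau f} is radial: its radius p solves p + 6*tau*p^5 = r (monotone in p)
    r = math.hypot(v[0], v[1])
    if r == 0.0:
        return (0.0, 0.0)
    lo, hi = 0.0, r
    for _ in range(80):                      # bisection on p + 6*tau*p^5 = r
        p = 0.5 * (lo + hi)
        if p + 6.0 * TAU * p ** 5 < r:
            lo = p
        else:
            hi = p
    p = 0.5 * (lo + hi)
    return (p / r * v[0], p / r * v[1])

def T(v):                                    # reflection: 2 prox - I
    px, py = prox(v)
    return (2 * px - v[0], 2 * py - v[1])

def cisa(x0, alpha=0.5, steps=2000):
    xm1 = x = x0
    for _ in range(steps):
        y = (x[0] - alpha * (x[0] - xm1[0]), x[1] - alpha * (x[1] - xm1[1]))
        xm1, x = x, T(y)
    return x

x = cisa((3.0, 1.0))
print(math.hypot(x[0], x[1]))  # decreases toward 0 (slowly: T_f ~ I near the minimizer)
```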

5. Application

In this section, we combine CISA with Backward–Forward splitting (BFS) algorithm [49] and Peaceman–Rachford splitting (PRS) algorithm [26], respectively.

5.1. CISA-BFS Algorithm

We aim at solving the following separable optimization problem
$$\min_{x\in H}\ \theta_1(x)+\theta_2(x),\qquad(33)$$
where $\theta_i:H\to\mathbb{R}$ ($i=1,2$) are proper lower semi-continuous convex functions and $\theta_1$ is differentiable with an L-Lipschitz continuous gradient. This problem has been solved by [9,56,57] and has applications in [58,59]. One way to solve (33) is the BFS algorithm, presented as
$$x_{k+1}=\operatorname{prox}_{\gamma\theta_2}(x_k)-\gamma\nabla\theta_1\big(\operatorname{prox}_{\gamma\theta_2}(x_k)\big).$$
Its convergence is guaranteed if $0<\gamma<2/L$ (see more details in [49]). We now introduce the following CISA-BFS algorithm (Algorithm 2):
Algorithm 2 CISA-BFS
Input: 
Initial points $x_{-1},x_0\in H$; step-size $\gamma>0$; converse inertial parameter $\alpha\in(0,1)$
  1:
for  k = 0 , 1 , 2 , do
  2:
    Step 1: Converse Inertial Extrapolation
  3:
     y k = x k α ( x k x k 1 ) ,
  4:
    Step 2: Forward-Backward Update
  5:
     x k + 1 = prox γ θ 2 ( y k ) γ θ 1 prox γ θ 2 ( y k ) ,
  6:
end for
Output: 
x k + 1
We will show that the benefit of CISA-BFS Algorithm 2 is that the step-size $\gamma$ may be taken larger than $2/L$.
For convenience, let $T_1=I-\gamma\nabla\theta_1$ and $T_2=\mathrm{prox}_{\gamma\theta_2}$. Then, CISA-BFS Algorithm 2 can be rewritten as
$$(x_{-1},x_0\in\mathcal{H}),\qquad x_{k+1}=T_1T_2y_k.$$
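As a concrete toy illustration of Algorithm 2 (our own instance, not from the paper), take $\theta_1(x)=\frac12\|x-a\|^2$ (so $L=1$) and $\theta_2(x)=\mu\|x\|_1$, whose prox is soft-thresholding and whose minimizer is $\mathrm{soft}(a,\mu)$ in closed form. With $\alpha=0.5$, condition (37) of Theorem 6 below permits $\gamma$ up to about $2.118/L$, so we take $\gamma=2.1>2/L$; the minimizer is then recovered as $T_2$ applied to the limit point of the iteration.

```python
import numpy as np

A_VEC = np.array([2.0, -3.0])     # data for theta1(x) = 0.5*||x - A_VEC||^2, L = 1
MU, GAMMA, ALPHA = 0.5, 2.1, 0.5  # gamma exceeds the classical bound 2/L = 2

def soft(v, t):  # prox of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bfs_step(v):
    p = soft(v, GAMMA * MU)           # backward (prox) step: T2
    return p - GAMMA * (p - A_VEC)    # forward (gradient) step: T1

x_star = soft(A_VEC, MU)              # closed-form minimizer: [1.5, -2.5]

x_prev = x = np.array([3.0, -4.0])
for _ in range(200):
    y = x - ALPHA * (x - x_prev)      # converse inertial extrapolation
    x_prev, x = x, bfs_step(y)
sol = soft(x, GAMMA * MU)             # minimizer = T2 of the limit point

plain = np.array([3.0, -4.0])         # plain BFS at the same step-size
for _ in range(50):
    plain = bfs_step(plain)

print(np.abs(sol - x_star).max(), np.abs(soft(plain, GAMMA * MU) - x_star).max())
```

At this step-size the plain BFS iteration fails to converge in this instance (it settles into a two-point cycle), while CISA-BFS converges, consistent with the claim that CISA enlarges the admissible step-size range.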
We first study a special property of T 1 T 2 .
Lemma 4.
For any $x,y\in\mathcal{H}$, it holds that
$$\begin{aligned}\|T_1T_2x-T_1T_2y\|^2\le{}&\|x-y\|^2-s\,\|(2I-T_2-T_1T_2)x-(2I-T_2-T_1T_2)y\|^2\\&+2s\,\|(I-T_1T_2)x-(I-T_1T_2)y\|^2-(1-2s)\|(I-T_2)x-(I-T_2)y\|^2,\end{aligned}$$
where
$$s=1-\frac{2}{\gamma L}=\frac{\gamma L-2}{\gamma L}.\qquad(34)$$
Proof. 
Since $I-\frac{2}{L}\nabla\theta_1$ is nonexpansive, for all $x,y\in\mathcal{H}$, we have
$$\begin{aligned}\|T_1x-T_1y\|^2={}&\Big\|\Big[\big(1-\tfrac{\gamma L}{2}\big)I+\tfrac{\gamma L}{2}\big(I-\tfrac{2}{L}\nabla\theta_1\big)\Big]x-\Big[\big(1-\tfrac{\gamma L}{2}\big)I+\tfrac{\gamma L}{2}\big(I-\tfrac{2}{L}\nabla\theta_1\big)\Big]y\Big\|^2\\={}&\big(1-\tfrac{\gamma L}{2}\big)\|x-y\|^2+\tfrac{\gamma L}{2}\big\|\big(I-\tfrac{2}{L}\nabla\theta_1\big)x-\big(I-\tfrac{2}{L}\nabla\theta_1\big)y\big\|^2\\&-\big(1-\tfrac{\gamma L}{2}\big)\tfrac{\gamma L}{2}\big\|\tfrac{2}{L}\big(\nabla\theta_1(x)-\nabla\theta_1(y)\big)\big\|^2\\\le{}&\|x-y\|^2+s\,\|(I-T_1)x-(I-T_1)y\|^2.\qquad(35)\end{aligned}$$
As $T_2$ is firmly nonexpansive, it holds that
$$\|T_2x-T_2y\|^2\le\|x-y\|^2-\|(I-T_2)x-(I-T_2)y\|^2,\qquad\forall x,y\in\mathcal{H}.\qquad(36)$$
Then, we have
$$\begin{aligned}\|T_1T_2x-T_1T_2y\|^2&\overset{(35)}{\le}\|T_2x-T_2y\|^2+s\,\|(T_2-T_1T_2)x-(T_2-T_1T_2)y\|^2\\&\overset{(36)}{\le}\|x-y\|^2-\|(I-T_2)x-(I-T_2)y\|^2+s\,\|(T_2-T_1T_2)x-(T_2-T_1T_2)y\|^2\\&=\|x-y\|^2-s\,\|(2I-T_2-T_1T_2)x-(2I-T_2-T_1T_2)y\|^2\\&\quad+2s\,\|(I-T_1T_2)x-(I-T_1T_2)y\|^2-(1-2s)\|(I-T_2)x-(I-T_2)y\|^2,\end{aligned}$$
where the last equality is obtained by substituting $a=(I-T_2)x-(I-T_2)y$ and $b=(I-T_1T_2)x-(I-T_1T_2)y$ into the parallelogram identity $2(\|a\|^2+\|b\|^2)=\|a+b\|^2+\|a-b\|^2$. The proof is complete.         □
Now, we show that CISA-BFS Algorithm 2 converges with a larger step-size.
Theorem 6.
Let $S^*$ be the solution set of (33), assumed to be nonempty. The sequence $\{x_k\}_{k\in\mathbb{N}}$ generated by CISA-BFS Algorithm 2 converges weakly to a point in $S^*$ if the following condition holds:
$$\frac{2}{L}\le\gamma<\frac{2}{L}\cdot\frac{(1+\alpha)^2}{(1+\alpha)^2-\frac{1}{2}\alpha(1-\alpha)},\qquad\alpha\in(0,1).\qquad(37)$$
Proof. 
Let $x^*\in S^*$, so that $T_1T_2x^*=x^*$. Taking $y=x^*$ and $x=y_k$ in Lemma 4 yields
$$\begin{aligned}\|x_{k+1}-x^*\|^2\le{}&\|y_k-x^*\|^2+2s\,\|y_k-x_{k+1}\|^2-s\,\|(2I-T_2-T_1T_2)y_k-(I-T_2)x^*\|^2\\&-(1-2s)\|(I-T_2)y_k-(I-T_2)x^*\|^2,\qquad(38)\end{aligned}$$
where $s$ is defined in (34). Then, according to (37) and (38), we have
$$\|x_{k+1}-x^*\|^2\le\|y_k-x^*\|^2+2s\,\|y_k-x_{k+1}\|^2.$$
The remaining proof of the weak convergence of $\{x_k\}_{k\in\mathbb{N}}$ is similar to that of Theorem 5 with $e_k\equiv 0$, $\alpha_k\equiv\alpha$ and $(\lambda_k-1)/\lambda_k=2s$.          □
Remark 8.
Theorem 6 is not a corollary of Theorem 5. Indeed, in the case $\gamma>2/L$, we cannot rewrite $T_1T_2$ as
$$T_1T_2=(1-\lambda)I+\lambda T,$$
where $\lambda\ge 1$ and $T$ is a nonexpansive mapping. That is, CISA-BFS Algorithm 2 with $\gamma>2/L$ does not satisfy the conditions assumed in Theorem 5.
Remark 9.
Suppose $\theta_2=0$; then $T_2=I$ and CISA-BFS Algorithm 2 reduces to
$$(x_{-1},x_0\in\mathcal{H})\qquad y_k=x_k-\alpha(x_k-x_{k-1}),\qquad x_{k+1}=y_k-\gamma\nabla\theta_1(y_k).\qquad(39)$$
It follows from (38) and $T_2=I$ that
$$\|x_{k+1}-x^*\|^2\le\|y_k-x^*\|^2+s\,\|y_k-x_{k+1}\|^2.$$
Similar to the proof of Theorem 6, condition (37) can then be improved to
$$\frac{2}{L}\le\gamma<\frac{2}{L}\cdot\frac{(1+\alpha)^2}{(1+\alpha)^2-\alpha(1-\alpha)}.$$
Interestingly, based on an approach similar to (39), Alecsa, László, and Viorel [60] proposed a gradient method for minimizing nonconvex functions with Lipschitz continuous gradients, with the step-size range extended from $(0,1/L)$ to $(0,2/L)$. Compared with [60], CISA extends the step-size of the gradient method beyond $2/L$ in the convex case.
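The reduced scheme (39) is easy to test on a toy convex quadratic (our own instance, not from the paper or [60]): take $\theta_1(x)=\frac12 x^{\top}\mathrm{diag}(1,0.2)\,x$, so $L=1$. With $\alpha=0.5$ the improved condition allows $\gamma<2.25/L$; at $\gamma=2.1$, plain gradient descent diverges along the stiff coordinate, while the CISA iteration (39) still converges.

```python
import numpy as np

D = np.array([1.0, 0.2])   # Hessian diagonal of theta1(x) = 0.5 * x @ (D * x); L = 1
GAMMA, ALPHA = 2.1, 0.5    # step-size above 2/L, inside the CISA bound 2.25/L

def grad(x):
    return D * x

# CISA gradient method (39)
x_prev = x = np.array([1.0, 1.0])
for _ in range(300):
    y = x - ALPHA * (x - x_prev)        # converse inertial extrapolation
    x_prev, x = x, y - GAMMA * grad(y)  # gradient step at the extrapolated point

# plain gradient descent at the same step-size
z = np.array([1.0, 1.0])
for _ in range(50):
    z = z - GAMMA * grad(z)

print(np.linalg.norm(x), np.linalg.norm(z))
```

On the stiff coordinate, the CISA recursion $x_{k+1}=-0.55(x_k+x_{k-1})$ has characteristic roots of modulus $\sqrt{0.55}<1$, whereas the plain recursion multiplies the error by $-1.1$ at every step.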

5.2. CISA-PRS Algorithm

In this subsection, we consider the following separable convex optimization problem with a linear constraint:
$$\min_{x,y\in\mathcal{H}}\ \big\{\theta_1(x)+\theta_2(y)\ \big|\ Ax+By=b\big\},\qquad(40)$$
where $\theta_i:\mathcal{H}\to\mathbb{R}$ $(i=1,2)$ are proper lower semicontinuous convex functions, and $A$ and $B$ are linear operators. This model addresses optimization problems across various domains, including business optimization [61], sensor technology [62], and data science [63,64,65]. The classical Peaceman–Rachford splitting [26] method for solving (40) reads as follows:
$$\left\{\begin{aligned}x_{k+1}&\in\arg\min_{x\in\mathcal{H}}\ \theta_1(x)+\theta_2(y_k)-\lambda_k^{\top}(Ax+By_k-b)+\tfrac{\beta}{2}\|Ax+By_k-b\|^2,\\\lambda_{k+\frac12}&=\lambda_k-\beta(Ax_{k+1}+By_k-b),\\y_{k+1}&\in\arg\min_{y\in\mathcal{H}}\ \theta_1(x_{k+1})+\theta_2(y)-\lambda_{k+\frac12}^{\top}(Ax_{k+1}+By-b)+\tfrac{\beta}{2}\|Ax_{k+1}+By-b\|^2,\\\lambda_{k+1}&=\lambda_{k+\frac12}-\beta(Ax_{k+1}+By_{k+1}-b).\end{aligned}\right.$$
However, PRS is not guaranteed to converge in the general situation. To overcome this shortcoming, He et al. [66] proposed the following strictly contractive Peaceman–Rachford splitting method for (40), with a relaxation factor $r\in(0,1)$ in the dual updates:
$$\left\{\begin{aligned}x_{k+1}&\in\arg\min_{x\in\mathcal{H}}\ \theta_1(x)+\theta_2(y_k)-\lambda_k^{\top}(Ax+By_k-b)+\tfrac{\beta}{2}\|Ax+By_k-b\|^2,\\\lambda_{k+\frac12}&=\lambda_k-r\beta(Ax_{k+1}+By_k-b),\\y_{k+1}&\in\arg\min_{y\in\mathcal{H}}\ \theta_1(x_{k+1})+\theta_2(y)-\lambda_{k+\frac12}^{\top}(Ax_{k+1}+By-b)+\tfrac{\beta}{2}\|Ax_{k+1}+By-b\|^2,\\\lambda_{k+1}&=\lambda_{k+\frac12}-r\beta(Ax_{k+1}+By_{k+1}-b).\end{aligned}\right.$$
In this subsection, as an alternative modification that guarantees convergence, we propose the following CISA–Peaceman–Rachford splitting (CISA-PRS) algorithm (Algorithm 3) for solving (40):
Algorithm 3 CISA-PRS
Input: $(y_{-1},\lambda_{-1}),(y_0,\lambda_0)\in\mathcal{H}\times\mathcal{H}$; parameter $\beta>0$; sequence $\{\alpha_k\}\subset[0,1]$
1: for $k=0,1,2,\dots$ do
2:     Step 1: Converse Inertial Extrapolation
3:         $(\bar y_k,\bar\lambda_k)^{\top}=(y_k,\lambda_k)^{\top}-\alpha_k(y_k-y_{k-1},\lambda_k-\lambda_{k-1})^{\top}$,
4:     Step 2: Primal Update ($x$-subproblem)
5:         $x_{k+1}\in\arg\min_{x\in\mathcal{H}}\ \theta_1(x)+\theta_2(\bar y_k)-\bar\lambda_k^{\top}(Ax+B\bar y_k-b)+\tfrac{\beta}{2}\|Ax+B\bar y_k-b\|^2$,
6:     Step 3: Intermediate Dual Update
7:         $\lambda_{k+\frac12}=\bar\lambda_k-\beta(Ax_{k+1}+B\bar y_k-b)$,
8:     Step 4: Primal Update ($y$-subproblem)
9:         $y_{k+1}\in\arg\min_{y\in\mathcal{H}}\ \theta_1(x_{k+1})+\theta_2(y)-\lambda_{k+\frac12}^{\top}(Ax_{k+1}+By-b)+\tfrac{\beta}{2}\|Ax_{k+1}+By-b\|^2$,
10:     Step 5: Dual Update
11:         $\lambda_{k+1}=\lambda_{k+\frac12}-\beta(Ax_{k+1}+By_{k+1}-b)$,
12: end for
Output: the sequence $\{(x_k,y_k,\lambda_k)\}$
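To see Algorithm 3 run end to end, the sketch below applies it to a hypothetical scalar instance (our own choice of data and parameters): $\min\ \frac12(x-a)^2+\frac12(y-b)^2$ subject to $x-y=0$, i.e. $A=1$, $B=-1$, $b=0$, whose KKT solution is $x=y=(a+b)/2$ with multiplier $\lambda=(b-a)/2$. Both subproblems are scalar quadratics, so the argmins in Steps 2 and 4 reduce to closed-form expressions obtained from first-order optimality (our own derivation).

```python
# CISA-PRS (Algorithm 3) on: min 0.5*(x-a)^2 + 0.5*(y-b)^2  s.t.  x - y = 0.
A_DAT, B_DAT = 1.0, 3.0
BETA, ALPHA = 2.0, 0.3                  # our own parameter choices
x_star = (A_DAT + B_DAT) / 2            # KKT solution: x = y = 2, lambda = 1
lam_star = (B_DAT - A_DAT) / 2

y = lam = 0.0
y_prev = lam_prev = 0.0
for _ in range(100):
    # Step 1: converse inertial extrapolation on (y, lambda)
    y_bar = y - ALPHA * (y - y_prev)
    lam_bar = lam - ALPHA * (lam - lam_prev)
    y_prev, lam_prev = y, lam
    # Step 2: x-subproblem (scalar quadratic, closed form)
    x = (A_DAT + lam_bar + BETA * y_bar) / (1.0 + BETA)
    # Step 3: intermediate dual update
    lam_half = lam_bar - BETA * (x - y_bar)
    # Step 4: y-subproblem (scalar quadratic, closed form)
    y = (B_DAT - lam_half + BETA * x) / (1.0 + BETA)
    # Step 5: dual update
    lam = lam_half - BETA * (x - y)

print(x, y, lam)  # expect approximately (2.0, 2.0, 1.0)
```

For this strongly convex instance the iterates converge rapidly; the theorem below only guarantees a rate for the fixed-point residual in general.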
Define $z=(y,\lambda)^{\top}$ and $\|z\|_M^2=\langle z,Mz\rangle$ with $M$ symmetric positive semidefinite. Then CISA-PRS Algorithm 3 can be rewritten in short as
$$\bar z_k=z_k-\alpha_k(z_k-z_{k-1}),\qquad z_{k+1}=T\bar z_k.$$
As shown in [66], the above operator $T$ satisfies, for $k\in\mathbb{N}$,
$$\|T\bar z_k-z^*\|_M^2\le\|\bar z_k-z^*\|_M^2,\qquad\|T\bar z_{k+1}-T\bar z_k\|_M^2\le\|\bar z_{k+1}-\bar z_k\|_M^2,\qquad(41)$$
where $M=\frac12\begin{pmatrix}\beta B^{\top}B&B^{\top}\\B&\frac{1}{\beta}I\end{pmatrix}\succeq 0$. Moreover, according to [66] (Lemma 3.2), if $\|T\bar z_k-\bar z_k\|_M^2=0$, then $\big(x_{k+1},y_{k+1},\bar\lambda_k-\beta(Ax_{k+1}+B\bar y_k-b)\big)$ is a solution of (40). Therefore, it suffices to establish the convergence rate of $\|T\bar z_k-\bar z_k\|_M^2$.
Theorem 7.
Suppose the solution set of (40) is nonempty. Let $(\bar z_k)_{k\in\mathbb{N}}$ be generated by CISA-PRS Algorithm 3.
(i)
If $\alpha_k\equiv\alpha\in(0,1)$, then $\|T\bar z_k-\bar z_k\|_M^2=O(1/k)$.
(ii)
If $\alpha_k=1/(k+1)^p$ for $p\in(0,1]$, then
$$\|T\bar z_k-\bar z_k\|_M^2=\begin{cases}O\big(\frac{1}{\ln k}\big),&p=1,\\O\big(\frac{1}{k^{1-p}}\big),&p\in[\frac12,1),\\O\big(\frac{1}{k^{p}}\big),&p\in(0,\frac12).\end{cases}$$
Proof. 
The proof is similar to that of Theorem 2, with the norm replaced by the extended norm $\|\cdot\|_M$ satisfying (41). □

6. Conclusions

We introduce and investigate the converse inertial step approach (CISA) for solving nonexpansive mappings. The weak convergence of the CISA algorithm is first established based on a newly developed weak quasi-Fejér monotonicity under mild assumptions. We also establish sublinear and linear convergence rates under different assumptions. As a further extension, we then study a generalized version of CISA. The second contribution of this paper is the design of a more general inexact CISA-ORSA algorithm, which achieves better numerical performance without restrictive assumptions. Finally, with CISA integration, the BFS algorithm accepts a larger step-size and the PRS algorithm is guaranteed to converge. More applications of CISA are expected in the area of nonconvex optimization.
Based on the present study, several promising directions for future research emerge. First, extending the CISA framework to nonconvex optimization problems is of significant interest, as many modern applications in machine learning and signal processing involve nonconvex objectives. Second, investigating the applicability of CISA in real-time and online settings, such as adaptive control systems and online learning, where its convergence properties could be highly beneficial, represents a natural next step. Additionally, exploring stochastic variants of CISA and its integration with large-scale distributed computing frameworks are important avenues for further development.

Author Contributions

Conceptualization, G.Y. and T.Z.; Formal analysis, G.Y. and T.Z.; Writing—original draft, G.Y. and T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Theorem 3

Let $x^*\in\mathrm{Fix}\,T$. If there is a $\bar k$ satisfying $\|x_{\bar k}-x^*\|^2=0$, then $\{x_k\}_{k\in\mathbb{N}}$ converges to a point of $\mathrm{Fix}\,T$ in finitely many steps. So, we assume $\|x_k-x^*\|^2\ne 0$ for all $k\in\mathbb{N}$ in the following. From the nonexpansiveness of $T$ and $e_k\equiv 0$, we have
$$\|x_{k+1}-Tx_{k+1}\|^2\le\|(1-\alpha)x_k+\alpha x_{k-1}-x_{k+1}\|^2\overset{F2}{\le}(1+\alpha)\|x_{k+1}-x_k\|^2+\alpha(1+\alpha)\|x_k-x_{k-1}\|^2.\qquad(A1)$$
According to (11), for $\alpha_k\equiv\alpha$ and $e_k\equiv 0$, we have
$$\begin{aligned}\|x_{k+1}-x^*\|^2&\le(1-\alpha)\|x_k-x^*\|^2+\alpha\|x_{k-1}-x^*\|^2-\alpha(1-\alpha)\|x_k-x_{k-1}\|^2,\\\|x_{k+2}-x^*\|^2&\le(1-\alpha)\|x_{k+1}-x^*\|^2+\alpha\|x_k-x^*\|^2-\alpha(1-\alpha)\|x_{k+1}-x_k\|^2.\end{aligned}\qquad(A2)$$
Adding the above two inequalities together yields
$$\begin{aligned}\|x_{k+1}-x^*\|^2+\|x_{k+2}-x^*\|^2\le{}&\alpha\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2+(1-\alpha)\|x_{k+1}-x^*\|^2\\&-\alpha(1-\alpha)\big(\|x_k-x_{k-1}\|^2+\|x_{k+1}-x_k\|^2\big)\\\le{}&\alpha\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2+(1-\alpha)\|x_{k+1}-x^*\|^2\\&-\alpha(1-\alpha)\big(\alpha\|x_k-x_{k-1}\|^2+\|x_{k+1}-x_k\|^2\big)\\\overset{(A1)}{\le}{}&\alpha\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2+(1-\alpha)\|x_{k+1}-x^*\|^2-\frac{\alpha(1-\alpha)}{1+\alpha}\|x_{k+1}-Tx_{k+1}\|^2\\\overset{(22)}{\le}{}&\alpha\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2+(1-\alpha)\|x_{k+1}-x^*\|^2-\frac{\mu\alpha(1-\alpha)}{1+\alpha}\|x_{k+1}-x^*\|^2,\end{aligned}\qquad(A3)$$
where the second inequality holds as $\alpha\in(0,1)$. Choose $\mu_1\in\big(0,\frac{\mu\alpha(1-\alpha)}{1+\alpha}\big)$ and define
$$\tau_k=\frac{\|x_{k+1}-x^*\|^2}{\|x_k-x^*\|^2},\qquad s_{k+1}=\frac{1}{1+\min\big\{\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1,\ \frac{\mu_1}{\tau_{k+1}}\big\}}.\qquad(A4)$$
According to the above inequality (A3), we have
$$\begin{aligned}\Big(1+\frac{\mu\alpha(1-\alpha)}{1+\alpha}\Big)\|x_{k+1}-x^*\|^2+\|x_{k+2}-x^*\|^2&=\Big(1+\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1\Big)\|x_{k+1}-x^*\|^2+\Big(1+\frac{\mu_1}{\tau_{k+1}}\Big)\|x_{k+2}-x^*\|^2\\&\le\alpha\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2+(1-\alpha)\|x_{k+1}-x^*\|^2\\&\le\max\big\{\|x_k-x^*\|^2+\|x_{k+1}-x^*\|^2,\ \|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2\big\}.\end{aligned}$$
Multiplying both sides of the above inequality by $s_{k+1}$ yields
$$\|x_{k+1}-x^*\|^2+\|x_{k+2}-x^*\|^2\le s_{k+1}\max\big\{\|x_k-x^*\|^2+\|x_{k+1}-x^*\|^2,\ \|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2\big\}.\qquad(A5)$$
For $\tau\in(0,1)$, choose $G>0$ so that
$$(1-\tau)\frac{\alpha}{G-1+\alpha}+\Big(\frac{\alpha(1-\alpha)}{G-1+\alpha}+\alpha\Big)^2<1.$$
By (A2), we have
$$\tau_{k+1}\le(1-\alpha)+\alpha\,\frac{1}{\tau_k}.\qquad(A6)$$
We divide the proof of (23) into two cases.
Case 1: $\tau_{k+1}\le G$. According to (A5) and
$$s_{k+1}=\frac{1}{1+\min\big\{\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1,\ \frac{\mu_1}{\tau_{k+1}}\big\}}\le\frac{1}{1+\min\big\{\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1,\ \frac{\mu_1}{G}\big\}}<1,$$
we obtain (23) with the setting $i_k\in\{0,1\}$.
Case 2: $\tau_{k+1}>G$. From (A6), we obtain
$$\tau_k\le\frac{\alpha}{\tau_{k+1}-1+\alpha}<\frac{\alpha}{G-1+\alpha}.\qquad(A7)$$
We first assume $\|x_{k+1}-x^*\|^2\ge\|x_{k-1}-x^*\|^2$, which implies that
$$\max\big\{\|x_k-x^*\|^2+\|x_{k+1}-x^*\|^2,\ \|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2\big\}=\|x_k-x^*\|^2+\|x_{k+1}-x^*\|^2.\qquad(A8)$$
It then follows that
$$\|x_{k+1}-x^*\|^2+\|x_{k+2}-x^*\|^2\overset{(A5),(A8)}{\le}s_{k+1}\big(\|x_k-x^*\|^2+\|x_{k+1}-x^*\|^2\big)\overset{(A5)}{\le}s_{k+1}s_k\max\big\{\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2,\ \|x_{k-2}-x^*\|^2+\|x_{k-1}-x^*\|^2\big\}.$$
Thus, (23) holds true for $i_k\in\{1,2\}$ by noting that
$$s_{k+1}s_k\le\frac{1}{1+\min\big\{\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1,\ \frac{\mu_1}{\tau_k}\big\}}\overset{(A7)}{\le}\frac{1}{1+\min\big\{\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1,\ \frac{(G-1+\alpha)\mu_1}{\alpha}\big\}}<1.$$
Now, we consider the other case, $\|x_{k+1}-x^*\|^2<\|x_{k-1}-x^*\|^2$. It follows that
$$\max\big\{\|x_k-x^*\|^2+\|x_{k+1}-x^*\|^2,\ \|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2\big\}=\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2,\qquad(A9)$$
$$\frac{\|x_{k+1}-x^*\|^2}{\|x_{k-1}-x^*\|^2}=\tau_k\tau_{k-1}<1.\qquad(A10)$$
We divide the remainder of the proof into two cases.
1. $\tau_{k-1}\le G$. We have
$$\|x_{k+1}-x^*\|^2+\|x_{k+2}-x^*\|^2\overset{(A9)}{\le}s_{k+1}\big(\|x_{k-1}-x^*\|^2+\|x_k-x^*\|^2\big)\overset{(A5)}{\le}s_{k+1}s_{k-1}\max_{i=2,3}\big\{\|x_{k-i}-x^*\|^2+\|x_{k-i+1}-x^*\|^2\big\}.$$
This completes the proof of (23) with $i_k\in\{2,3\}$ by noting that
$$s_{k+1}s_{k-1}\le\frac{1}{1+\min\big\{\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1,\ \frac{\mu_1}{\tau_{k-1}}\big\}}\le\frac{1}{1+\min\big\{\frac{\mu\alpha(1-\alpha)}{1+\alpha}-\mu_1,\ \frac{\mu_1}{G}\big\}}<1.$$
2. $\tau_{k-1}>G$. Similarly to (A7), we have
$$\tau_{k-2}<\frac{\alpha}{G-1+\alpha}.\qquad(A11)$$
By observing that
$$\tau_{k+1}\tau_k\overset{(A6)}{\le}(1-\alpha)\tau_k+\alpha\overset{(A7)}{<}\frac{\alpha(1-\alpha)}{G-1+\alpha}+\alpha,\qquad\tau_{k-1}\tau_{k-2}\le(1-\alpha)\tau_{k-2}+\alpha<\frac{\alpha(1-\alpha)}{G-1+\alpha}+\alpha,$$
we obtain
$$\begin{aligned}\|x_{k+1}-x^*\|^2+\|x_{k+2}-x^*\|^2&=\tau_k\tau_{k-1}\|x_{k-1}-x^*\|^2+\tau_{k+1}\tau_k\tau_{k-1}\tau_{k-2}\|x_{k-2}-x^*\|^2\\&\le\|x_{k-1}-x^*\|^2+\Big(\frac{\alpha(1-\alpha)}{G-1+\alpha}+\alpha\Big)^2\|x_{k-2}-x^*\|^2\\&=\tau\|x_{k-1}-x^*\|^2+\Big((1-\tau)\tau_{k-2}+\Big(\frac{\alpha(1-\alpha)}{G-1+\alpha}+\alpha\Big)^2\Big)\|x_{k-2}-x^*\|^2\\&\le\tau\|x_{k-1}-x^*\|^2+\Big((1-\tau)\frac{\alpha}{G-1+\alpha}+\Big(\frac{\alpha(1-\alpha)}{G-1+\alpha}+\alpha\Big)^2\Big)\|x_{k-2}-x^*\|^2\\&\le\max\Big\{\tau,\ (1-\tau)\frac{\alpha}{G-1+\alpha}+\Big(\frac{\alpha(1-\alpha)}{G-1+\alpha}+\alpha\Big)^2\Big\}\cdot\big(\|x_{k-2}-x^*\|^2+\|x_{k-1}-x^*\|^2\big),\end{aligned}$$
where the first inequality follows from (A10) and (A11). We complete the proof of (23) with the setting $i_k=2$.

Appendix B. Proof of Theorem 4

(i). Let $x^*\in\mathrm{Fix}\,T$. Since $T$ is a nonexpansive mapping, we have
$$\begin{aligned}\|x_{k+1}-x^*\|&=\Big\|T\Big(\sum_{i=k-m+1}^{k}\alpha_{k,i}x_i\Big)+e_k-x^*\Big\|\le\Big\|T\Big(\sum_{i=k-m+1}^{k}\alpha_{k,i}x_i\Big)-x^*\Big\|+\|e_k\|\\&\le\Big\|\sum_{i=k-m+1}^{k}\alpha_{k,i}x_i-x^*\Big\|+\|e_k\|\le\sum_{i=k-m+1}^{k}\alpha_{k,i}\|x_i-x^*\|+\|e_k\|\\&\le\max_{k-m+1\le i\le k}\|x_i-x^*\|+\|e_k\|\le\max_{1-m\le i\le 0}\|x_i-x^*\|+\sum_{i=0}^{k}\|e_i\|.\qquad(A12)\end{aligned}$$
Hence, $\{x_k\}_{k\in\mathbb{N}}$ is bounded.
(ii). According to the above inequality (A12), $\{x_k\}_{k\in\mathbb{N}}$ is $m$-weakly quasi-Fejér monotone with respect to $\mathrm{Fix}\,T$.
(iii). Since $x^*$ is a fixed point of $T$, we have
$$\begin{aligned}\|x_{k+1}-x^*\|^2&=\Big\|T\Big(\sum_{i=k-m+1}^{k}\alpha_{k,i}x_i\Big)-x^*+e_k\Big\|^2\overset{F1}{\le}\Big\|T\Big(\sum_{i=k-m+1}^{k}\alpha_{k,i}x_i\Big)-Tx^*\Big\|^2+2\langle e_k,x_{k+1}-x^*\rangle\\&\le\Big\|\sum_{i=k-m+1}^{k}\alpha_{k,i}(x_i-x^*)\Big\|^2+2\langle e_k,x_{k+1}-x^*\rangle\\&=\sum_{i=k-m+1}^{k}\alpha_{k,i}\|x_i-x^*\|^2-\frac12\sum_{i,j=k-m+1}^{k}\alpha_{k,i}\alpha_{k,j}\|x_i-x_j\|^2+2\langle e_k,x_{k+1}-x^*\rangle,\qquad(A13)\end{aligned}$$
where the last equality holds due to an extended version of F2.
Notice that assumption (26) implies that
$$\sum_{k=m-1}^{\infty}\big|\alpha_{k,k-i}-\alpha_{k-i,k-2i}\big|<\infty,\qquad i\in\{1,2,\dots,m-1\}.\qquad(A14)$$
Then, similarly to the proof of (6), for $K\to\infty$, we have
$$\begin{aligned}&\sum_{k=m-1}^{K}\Big(\sum_{i=k-m+1}^{k}\alpha_{k,i}\|x_i-x^*\|^2-\|x_{k+1}-x^*\|^2\Big)\\&=\sum_{k=m-1}^{K}\Big(\sum_{i=k-m+1}^{k-1}\alpha_{k,i}\big(\|x_i-x^*\|^2-\|x_k-x^*\|^2\big)+\|x_k-x^*\|^2-\|x_{k+1}-x^*\|^2\Big)\\&=\sum_{k=m-1}^{K}\Big(\sum_{i=1}^{m-1}\big(\alpha_{k-i,k-2i}\|x_{k-i}-x^*\|^2-\alpha_{k,k-i}\|x_k-x^*\|^2+(\alpha_{k,k-i}-\alpha_{k-i,k-2i})\|x_{k-i}-x^*\|^2\big)\\&\qquad\qquad+\|x_k-x^*\|^2-\|x_{k+1}-x^*\|^2\Big)<\infty.\qquad(A15)\end{aligned}$$
Summing both sides of (A13) from $k=m-1$ to $k=K$ and then rearranging gives
$$\frac12\sum_{k=m-1}^{K}\sum_{i,j=k-m+1}^{k}\alpha_{k,i}\alpha_{k,j}\|x_i-x_j\|^2\le\sum_{k=m-1}^{K}\Big(\sum_{i=k-m+1}^{k}\alpha_{k,i}\|x_i-x^*\|^2-\|x_{k+1}-x^*\|^2\Big)+2\sum_{k=m-1}^{K}\langle e_k,x_{k+1}-x^*\rangle\overset{(A15)}{<}\infty,\qquad(A16)$$
for $K\to\infty$.
for K . According to assumption (25), it follows from (A16) that there is an i ¯ { 0 , 1 , , m 2 } satisfying
lim inf k x k i ¯ x k i ¯ 1 2 = 0 .
Without loss of generality, we assume i ¯ = 0 and then
lim inf k x k x k 1 2 = 0 .
We observe that
$$\begin{aligned}\|x_{k+1}-x_k\|^2&=\|Ty_k+e_k-Ty_{k-1}-e_{k-1}\|^2\\&\le\Big\|\sum_{i=k-m+1}^{k}\alpha_{k,i}x_i-\sum_{j=k-m}^{k-1}\alpha_{k-1,j}x_j\Big\|^2+2\langle e_k-e_{k-1},x_{k+1}-x_k\rangle\\&=\Big\|\sum_{i=k-m+1}^{k}\alpha_{k,i}(x_i-x_{i-1})+\sum_{j=k-m+1}^{k}(\alpha_{k,j}-\alpha_{k-1,j-1})x_{j-1}\Big\|^2+2\langle e_k-e_{k-1},x_{k+1}-x_k\rangle\\&\le\Big\|\sum_{i=k-m+1}^{k}\alpha_{k,i}(x_i-x_{i-1})\Big\|^2+2\sum_{j=k-m+1}^{k}(\alpha_{k,j}-\alpha_{k-1,j-1})\langle x_{j-1},y_k-y_{k-1}\rangle+2\langle e_k-e_{k-1},x_{k+1}-x_k\rangle\\&\le\sum_{i=k-m+1}^{k}\alpha_{k,i}\|x_i-x_{i-1}\|^2+2\sum_{j=k-m+1}^{k}(\alpha_{k,j}-\alpha_{k-1,j-1})\langle x_{j-1},y_k-y_{k-1}\rangle+2\langle e_k-e_{k-1},x_{k+1}-x_k\rangle\\&=\|x_k-x_{k-1}\|^2+\sum_{i=1}^{m-1}\alpha_{k,k-i}\big(\|x_{k-i}-x_{k-i-1}\|^2-\|x_k-x_{k-1}\|^2\big)\\&\qquad+2\sum_{j=k-m+1}^{k}(\alpha_{k,j}-\alpha_{k-1,j-1})\langle x_{j-1},y_k-y_{k-1}\rangle+2\langle e_k-e_{k-1},x_{k+1}-x_k\rangle.\end{aligned}$$
Then, by a proof similar to that of (6), it follows from the boundedness of $\{x_k\}_{k\in\mathbb{N}}$ and assumptions (8a) and (26) that
$$\sum_{k=m}^{\infty}\Big(\sum_{i=1}^{m-1}\alpha_{k,k-i}\big(\|x_{k-i}-x_{k-i-1}\|^2-\|x_k-x_{k-1}\|^2\big)+2\langle e_k-e_{k-1},x_{k+1}-x_k\rangle+2\sum_{j=k-m+1}^{k}(\alpha_{k,j}-\alpha_{k-1,j-1})\langle x_{j-1},y_k-y_{k-1}\rangle\Big)<\infty.\qquad(A18)$$
Setting $a_k=\|x_k-x_{k-1}\|^2$ and
$$\varepsilon_k=\sum_{i=1}^{m-1}\alpha_{k,k-i}\big(\|x_{k-i}-x_{k-i-1}\|^2-\|x_k-x_{k-1}\|^2\big)+2\langle e_k-e_{k-1},x_{k+1}-x_k\rangle+2\sum_{j=k-m+1}^{k}(\alpha_{k,j}-\alpha_{k-1,j-1})\langle x_{j-1},y_k-y_{k-1}\rangle$$
in Lemma 2 yields the convergence of $\|x_k-x_{k-1}\|^2$. Then, (A17) implies that
$$\lim_{k\to\infty}\|x_k-x_{k-1}\|^2=0.$$
Hence, every weak cluster point of $\{x_k\}_{k\in\mathbb{N}}$ belongs to $\mathrm{Fix}\,T$, since
$$\lim_{k\to\infty}\|Tx_{k+1}-x_{k+1}\|\le\lim_{k\to\infty}\Big(\Big\|\sum_{i=k-m+1}^{k}\alpha_{k,i}x_i-x_{k+1}\Big\|+\|e_k\|\Big)\le\lim_{k\to\infty}\Big(\sum_{i=k-m+1}^{k}\alpha_{k,i}\|x_i-x_{k+1}\|+\|e_k\|\Big)\le\lim_{k\to\infty}\Big(\sum_{i=1}^{m}\sum_{j=k-m+i}^{k}\|x_j-x_{j+1}\|+\|e_k\|\Big)=0.$$
(iv). The weak convergence of $\{x_k\}_{k\in\mathbb{N}}$ follows from Lemma 3.

References

1. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
2. Lorenz, D.A.; Pock, T. An Inertial Forward-Backward Algorithm for Monotone Inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
3. Ochs, P.; Chen, Y.; Brox, T.; Pock, T. iPiano: Inertial proximal algorithm for nonconvex optimization. SIAM J. Imaging Sci. 2014, 7, 1388–1419.
4. Tran-Dinh, Q. From Halpern’s fixed-point iterations to Nesterov’s accelerated interpretations for root-finding problems. Comput. Optim. Appl. 2024, 87, 181–218.
5. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
6. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k^2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
7. Attouch, H.; Peypouquet, J. The rate of convergence of Nesterov’s accelerated forward-backward method is actually faster than 1/k^2. SIAM J. Optim. 2016, 26, 1824–1834.
8. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; pp. 287–316.
9. Wang, H.; Du, J.; Su, H.; Sun, H. A linearly convergent self-adaptive gradient projection algorithm for sparse signal reconstruction in compressive sensing. AIMS Math. 2023, 8, 14726–14746.
10. Ge, L.; Niu, H.; Zhou, J. Convergence analysis and error estimate for distributed optimal control problems governed by Stokes equations with velocity-constraint. Adv. Appl. Math. Mech. 2022, 14, 33–55.
11. Sun, J.; Qu, W. DCA for sparse quadratic kernel-free least squares semi-supervised support vector machine. Mathematics 2022, 10, 2714.
12. Diao, Y.; Zhang, Q. Optimization of management mode of small- and medium-sized enterprises based on decision tree model. J. Math. 2021, 2021, 2815086.
13. Ceng, L.; Yuan, Q. Variational inequalities, variational inclusions and common fixed point problems in Banach spaces. Filomat 2020, 34, 2939–2951.
14. Ceng, L.; Yuan, Q. On a General Extragradient Implicit Method and Its Applications to Optimization. Symmetry 2020, 12, 124.
15. Ceng, L.; Yuan, Q. Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequalities Appl. 2019, 2019, 274.
16. Ceng, L.; Yuan, Q. Systems of variational inequalities with nonlinear operators. Mathematics 2019, 7, 338.
17. Ceng, L.; Yuan, Q. Hybrid Mann viscosity implicit iteration methods for triple hierarchical variational inequalities, systems of variational inequalities and fixed point problems. Mathematics 2019, 7, 142.
18. Ceng, L.; Yuan, Q. Triple hierarchical variational inequalities, systems of variational inequalities, and fixed point problems. Mathematics 2019, 7, 187.
19. Ceng, L.; Yuan, Q. Strong convergence of a new iterative algorithm for split monotone variational inclusion problems. Mathematics 2019, 7, 123.
20. Darvish, V.; Ogwo, G.; Oyewole, O.; Abass, H.; Ikramov, A. Inertial Iterative Method for Generalized Mixed Equilibrium Problem and Fixed Point Problem. Eur. J. Pure Appl. Math. 2024, 18, 6173.
21. Rahaman, M.; Islam, M.; Irfan, S.; Yao, J.; Zhao, X. Inertial subgradient splitting projection methods for solving equilibrium problems and applications. Numer. Algorithms 2024, 1–35.
22. Sun, H.; Sun, M.; Wang, Y. New global error bound for extended linear complementarity problems. J. Inequalities Appl. 2018, 2018, 258.
23. Rockafellar, R.T. Monotone Operators and the Proximal Point Algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
24. Sun, H.; Sun, M.; Zhang, B. An Inverse Matrix-Free Proximal Point Algorithm for Compressive Sensing. Sci. Asia 2018, 44, 311–318.
25. Eckstein, J.; Bertsekas, D. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
26. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
27. Chambolle, A.; Pock, T. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. J. Math. Imaging Vis. 2011, 40, 120–145.
28. Condat, L. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 2013, 158, 460–479.
29. Qu, Y.; He, H.; Zhang, T.; Han, D. Practical proximal primal-dual algorithms for structured saddle point problems. J. Glob. Optim. 2025, 1–29.
30. Sun, M.; Sun, H.; Wang, Y. Two proximal splitting methods for multi-block separable programming with applications to stable principal component pursuit. J. Appl. Math. Comput. 2018, 56, 411–438.
31. Glowinski, R.; Marrocco, A. Sur l’approximation, par éléments finis d’ordre 1, et la résolution, par pénalisation-dualité, d’une classe de problèmes de Dirichlet non linéaires. Rev. Fr. Autom. Inform. Rech. Opér. Anal. Numér. 1975, 9, 41–76.
32. Xue, B.; Du, J.; Sun, H.; Wang, Y. A linearly convergent proximal ADMM with new iterative format for BPDN in compressed sensing problem. AIMS Math. 2022, 7, 10513–10533.
33. Sun, M.; Sun, H. Improved proximal ADMM with partially parallel splitting for multi-block separable convex programming. J. Appl. Math. Comput. 2018, 58, 151–181.
34. Zhang, T. On the O(1/k^2) Ergodic Convergence of ADMM with Dual Step Size from 0 to 2. J. Oper. Res. Soc. China 2024, 1–12.
35. Zhang, T. Faster augmented Lagrangian method with inertial steps for solving convex optimization problems with linear constraints. Optimization 2025, 1–32.
36. Krasnosel’skiĭ, M. Two remarks on the method of successive approximations. Uspehi Mat. Nauk 1955, 10, 123–127.
37. Groetsch, C.W. A note on segmenting Mann iterates. J. Math. Anal. Appl. 1972, 40, 369–372.
38. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
39. Borwein, J.; Reich, S.; Shafrir, I. Krasnosel’ski-Mann iterations in normed spaces. Can. Math. Bull. 1992, 35, 21–28.
40. Combettes, P.L.; Pennanen, T. Generalized Mann iterates for constructing fixed points in Hilbert spaces. J. Math. Anal. Appl. 2002, 275, 521–536.
41. Combettes, P.L.; Glaudin, L.E. Quasinonexpansive Iterations on the Affine Hull of Orbits: From Mann’s Mean Value Algorithm to Inertial Methods. SIAM J. Optim. 2017, 27, 2356–2380.
42. Combettes, P.L. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53, 475–504.
43. Corman, E.; Yuan, X. A generalized proximal point algorithm and its convergence rate. SIAM J. Optim. 2014, 24, 1614–1638.
44. Dong, Y.; Fischer, A. A family of operator splitting methods revisited. Nonlinear Anal. 2010, 72, 4307–4315.
45. Monteiro, R.D.C.; Sim, C.K. Complexity of the relaxed Peaceman–Rachford splitting method for the sum of two maximal strongly monotone operators. Comput. Optim. Appl. 2018, 70, 763–790.
46. Themelis, A.; Patrinos, P. Douglas–Rachford splitting and ADMM for nonconvex optimization: Tight convergence results. SIAM J. Optim. 2020, 30, 149–181.
47. Borwein, J.M.; Li, G.; Tam, M. Convergence Rate Analysis for Averaged Fixed Point Iterations in Common Fixed Point Problems. SIAM J. Optim. 2017, 27, 1–33.
48. Liang, J.; Fadili, J.; Peyré, G. Convergence rates with inexact nonexpansive operators. Math. Program. 2014, 159, 403–434.
49. Attouch, H.; Peypouquet, J.; Redont, P. Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 2018, 457, 1095–1117.
50. Dontchev, A.L.; Rockafellar, R.T. Implicit Functions and Solution Mappings: A View from Variational Analysis; Springer: New York, NY, USA, 2009.
51. Anderson, D.G. Iterative procedures for nonlinear integral equations. J. ACM 1965, 12, 547–560.
52. Toth, A.; Kelley, C.T. Convergence analysis for Anderson acceleration. SIAM J. Numer. Anal. 2015, 53, 805–819.
53. Kudin, K.N.; Scuseria, G.E.; Cancès, E. A black-box self-consistent field convergence algorithm: One step closer. J. Chem. Phys. 2002, 116, 8255–8261.
54. Chen, X.; Kelley, C.T. Convergence of the EDIIS algorithm for nonlinear equations. SIAM J. Sci. Comput. 2019, 41, A365–A379.
55. Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
56. Sun, J.; Kong, L.; Zhou, S. Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications. J. Comput. Appl. Math. 2023, 422, 114872.
57. Jiang, T.; Zhang, Z.; Jiang, Z. A new algebraic technique for quaternion constrained least squares problems. Adv. Appl. Clifford Algebr. 2018, 28, 14.
58. Wen, R.; Fu, Y. Toeplitz matrix completion via a low-rank approximation algorithm. J. Inequalities Appl. 2020, 2020, 71.
59. Wang, G.; Zhang, D.; Vasiliev, V.; Jiang, T. A complex structure-preserving algorithm for the full rank decomposition of quaternion matrices and its applications. Numer. Algorithms 2022, 91, 1461–1481.
60. Alecsa, C.D.; László, S.C.; Viorel, A. A gradient type algorithm with backward inertial steps associated to a nonconvex minimization problem. Numer. Algorithms 2020, 84, 485–512.
61. Zhang, M.; Li, X. Understanding the relationship between coopetition and startups’ resilience: The role of entrepreneurial ecosystem and dynamic exchange capability. J. Bus. Ind. Mark. 2025, 40, 527–542.
62. Sun, L.; Shi, W.; Tian, X.; Li, J.; Zhao, B.; Wang, S.; Tan, J. A plane stress measurement method for CFRP material based on array LCR waves. NDT E Int. 2025, 151, 103318.
63. Meng, T.; Liu, R.; Cai, J.; Cheng, X.; He, Z.; Zhao, Z. Breaking structural symmetry of atomically dispersed Co sites for boosting oxygen reduction. Adv. Funct. Mater. 2025, e22046.
64. Rong, L.; Zhang, B.; Qiu, H.; Zhang, H.; Yu, J.; Yuan, Q.; Wu, L.; Chen, H.; Mo, Y.; Zou, X.; et al. Significant generational effects of tetracyclines upon the promoting plasmid-mediated conjugative transfer between typical wastewater bacteria and its mechanisms. Water Res. 2025, 287, 124290.
65. Liu, H.; Zhou, S.; Gu, W.; Zhuang, W.; Gao, M.; Chan, C.; Zhang, X. Coordinated planning model for multi-regional ammonia industries leveraging hydrogen supply chain and power grid integration: A case study of Shandong. Appl. Energy 2025, 377, 124456.
66. He, B.; Liu, H.; Wang, Z.; Yuan, X. A strictly contractive Peaceman–Rachford splitting method for convex programming. SIAM J. Optim. 2014, 24, 1011–1040.
Figure 2. Numerical comparison of PPA and CISA algorithm (7) ( α k = 0.3 , 0.5 , 0.8 ), where the ordinate is in log scale.
Share and Cite

Yan, G.; Zhang, T. Converse Inertial Step Approach and Its Applications in Solving Nonexpansive Mapping. Mathematics 2025, 13, 3722. https://doi.org/10.3390/math13223722