Abstract
We study a relaxed inertial forward–backward–half-forward splitting approach with variable step size for solving a monotone inclusion problem involving a maximal monotone operator, a cocoercive operator, and a monotone Lipschitz operator. The convergence of the sequence of iterates generated by discretising a continuous-time dynamical system is established under suitable conditions. Given the challenges associated with computing the resolvent of a composite operator, the proposed method is also employed to tackle the composite monotone inclusion problem, and a convergence analysis is conducted under certain conditions. To demonstrate the effectiveness of the algorithm, numerical experiments are performed on the image deblurring problem.
1. Introduction
In a (real) Hilbert space $\mathcal{H}$, we focus on resolving the monotone inclusion problem involving the sum of three operators: find $x \in \mathcal{H}$ such that

$0 \in Ax + Bx + Cx$, (1)

where $A: \mathcal{H} \to 2^{\mathcal{H}}$ is maximal monotone, $B: \mathcal{H} \to \mathcal{H}$ is monotone and L-Lipschitz continuous with $L > 0$, and $C: \mathcal{H} \to \mathcal{H}$ is $\beta$-cocoercive for some $\beta > 0$. Moreover, let $A + B + C$ be maximal monotone and assume that (1) has a solution, namely, $\operatorname{zer}(A + B + C) \neq \emptyset$.
Problem (1) captures numerous significant challenges in convex optimization, signal and image processing, saddle point problems, variational inequalities, partial differential equations, and similar fields. For example, see [1,2,3,4].
In recent years, many popular algorithms dealing with monotone inclusion problems involving the sum of three or more operators have been covered in the literature. Although traditional splitting algorithms [5,6,7,8] play an indispensable part in addressing monotone inclusions that involve the sum of two operators, they cannot be directly applied to problems beyond this setting. A generalized forward–backward splitting (GFBS) method [3] was designed to address the monotone inclusion problem: find $x \in \mathcal{H}$ such that

$0 \in \sum_{i=1}^{n} A_i x + Cx$, (2)
where $A_i: \mathcal{H} \to 2^{\mathcal{H}}$, $i \in \{1, \dots, n\}$, indicate maximal monotone operators, and C is the same as in (1). A subsequent work by Raguet and Landrieu [9] addressed (2) using a preconditioned generalized forward–backward splitting algorithm. An equivalent form of (2) can be expressed as the following monotone inclusion formulated in the product space:

$0 \in Ax + Cx + N_V(x)$, (3)
where A and C are the same as in (1), and $N_V$ denotes the normal cone of a closed vector subspace V. Two novel approaches for solving (3) are presented in [10], where the two methods are equivalent under appropriate conditions. Interestingly, $N_V$ can be extended to a maximal monotone operator. However, the resolvent in this case is no longer linear, necessitating more complicated work to establish convergence. In [11], Davis and Yin completed this work with a three-operator splitting method, which, in a sense, extends the GFBS method. It is a remarkable fact that the links between the above methods were precisely studied by Raguet [12], who also derived a new approach to solve (2) along with an extra maximal monotone operator M. Note that a further case of (3) occurs when $N_V$ is generalized to a maximal monotone operator and C is relaxed to a monotone Lipschitz continuous operator. A new approach [13] was established to deal with this case by means of computer-assisted techniques. In contrast to [13], two new schemes [14] were discovered for tackling the same problem by discretising a continuous-time dynamical system. If $N_V$ is replaced by a monotone Lipschitz operator B, (3) can be translated into (1). Concerning (1), a forward–backward–half-forward (FBHF) splitting method was derived by Briceño-Arias and Davis [15], who exploited the cocoercivity of C by utilizing it only once in every iteration with great ingenuity. See also [16,17,18] for recent advances in four-operator methods.
Designed as an acceleration method, the inertial scheme is a powerful approach in which each new iterate is defined by fully using the previous two iterates. The basic idea was first considered in [19] as the heavy ball method and was further developed in [20]. Later, Güler [21] and Alvarez et al. [22] generalized these accelerated schemes for addressing the proximal point algorithm and the maximal monotone inclusion problem, respectively. After that, numerous works involving inertial features were discussed and studied in [23,24,25,26,27,28,29].
Relaxation approaches, a principal ingredient in resolving monotone inclusions, offer greater flexibility to iterative schemes (see [4,30]). In particular, it makes sense to unite inertial and relaxation parameters in a way that enjoys the advantages of both. Motivated by the inertial proximal algorithm [31], a relaxed inertial proximal algorithm (RIPA) for finding a zero of a maximal monotone operator was reported by Attouch and Cabot [32], who also exploited RIPA to address non-smooth convex minimization problems and studied convergence rates in [33]. Further research addressed the more general problem of finding a zero of the sum of two operators, one of which is cocoercive [34]. Meanwhile, the idea of combining inertial effects and relaxation has also been used in the context of the Douglas–Rachford algorithm [35], Tseng's forward–backward–forward algorithm [36], and the alternating minimization algorithm [37].
This paper aims to develop a relaxed inertial forward–backward–half-forward (RIFBHF) scheme that extends the FBHF method [15] by combining inertial effects and relaxation parameters to solve (1). It is noteworthy that the FBHF method [15] was considered with an additional set constraint in the monotone inclusion (1); for simplicity, we only study (1). Specifically, the relaxed inertial algorithm is derived by discretising a continuous-time dynamical system [38], and its convergence is analysed under certain assumptions. We also discuss the relationship between the relaxation parameters and the inertial effects. Since evaluating the resolvent of a composite operator is generally challenging, solving the composite monotone inclusion is not straightforward. By drawing upon the primal–dual idea introduced in [39], the composite monotone inclusion can be reformulated equivalently in the form of (1), which can then be addressed by our scheme. A convex minimization problem is solved accordingly. Finally, numerical tests are designed to validate the effectiveness of the proposed algorithm.
The structure of the paper is outlined as follows. Section 2 provides an introduction to fundamental definitions and lemmas. Section 3 presents the development of a relaxed inertial forward–backward–half-forward splitting algorithm through discretisations of a continuous-time dynamical system, accompanied by a comprehensive convergence analysis. In Section 4, we apply the proposed algorithm to solve the composite monotone inclusion and the convex minimization problem. Section 5 presents several numerical experiments to demonstrate the effectiveness of the proposed algorithm. Finally, conclusions are given in the last section.
2. Preliminaries
In the following discussion, $\mathcal{H}$ and $\mathcal{G}$ are real Hilbert spaces equipped with the inner product $\langle \cdot, \cdot \rangle$ and the corresponding norm $\| \cdot \|$, and $\mathbb{N}$ represents the set of nonnegative integers. The symbols ⇀ and → signify weak and strong convergence, respectively. $\mathcal{H} \oplus \mathcal{G}$ denotes the Hilbert direct sum of $\mathcal{H}$ and $\mathcal{G}$. The set of proper lower semicontinuous convex functions from $\mathcal{H}$ to $(-\infty, +\infty]$ is denoted by $\Gamma_0(\mathcal{H})$. The following notations are used:
- The set of zeros of A is $\operatorname{zer} A = \{x \in \mathcal{H} : 0 \in Ax\}$.
- The domain of A is $\operatorname{dom} A = \{x \in \mathcal{H} : Ax \neq \emptyset\}$.
- The range of A is $\operatorname{ran} A = \{u \in \mathcal{H} : (\exists\, x \in \mathcal{H})\ u \in Ax\}$.
- The graph of A is $\operatorname{gra} A = \{(x, u) \in \mathcal{H} \times \mathcal{H} : u \in Ax\}$.
The definitions and lemmas that follow are among the most commonly encountered, as documented in the monograph [4].
Let $A: \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued map; then,
- A is characterized as monotone if it satisfies the inequality $\langle x - y, u - v \rangle \geq 0$ for all $(x, u)$ and $(y, v)$ belonging to $\operatorname{gra} A$.
- A is called maximal monotone if no other monotone operator exists for which its graph strictly encompasses the graph of A.
- A is called $\beta$-strongly monotone with $\beta > 0$ if for all $(x, u) \in \operatorname{gra} A$ and $(y, v) \in \operatorname{gra} A$, there holds that $\langle x - y, u - v \rangle \geq \beta \|x - y\|^2$.
- A single-valued operator $A: \mathcal{H} \to \mathcal{H}$ is said to be $\beta$-cocoercive with $\beta > 0$ if $\langle x - y, Ax - Ay \rangle \geq \beta \|Ax - Ay\|^2$ for all $x, y \in \mathcal{H}$.
- The resolvent of A is defined by $J_A = (\mathrm{Id} + A)^{-1}$.
Let $f \in \Gamma_0(\mathcal{H})$; the subdifferential of f, denoted by $\partial f$, is defined as $\partial f(x) = \{u \in \mathcal{H} : f(y) \geq f(x) + \langle u, y - x \rangle \ \text{for all}\ y \in \mathcal{H}\}$. It is well known that $\partial f$ is maximal monotone. The proximity operator of f is then defined as:

$\operatorname{prox}_f(x) = \arg\min_{y \in \mathcal{H}} \big\{ f(y) + \tfrac{1}{2}\|x - y\|^2 \big\}$.

The well-established relationship $\operatorname{prox}_{\gamma f} = J_{\gamma \partial f}$ holds for $\gamma > 0$. According to the Baillon–Haddad theorem, if $f: \mathcal{H} \to \mathbb{R}$ is a convex and differentiable function with a gradient that is Lipschitz continuous with constant $1/\beta$ for some $\beta > 0$, then $\nabla f$ is $\beta$-cocoercive. The following identity will be employed later:

$\|\lambda x + (1 - \lambda) y\|^2 = \lambda \|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$

for all $x, y \in \mathcal{H}$ and $\lambda \in \mathbb{R}$.
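As a concrete illustration of the proximity operator and of the relationship $\operatorname{prox}_{\gamma f} = J_{\gamma \partial f}$, consider $f = \mu\|\cdot\|_1$, whose prox is componentwise soft-thresholding; this prox reappears in the total variation experiments of Section 5. A minimal MATLAB check (our own example, not taken from the paper):

```matlab
% prox of f = mu*||.||_1 is componentwise soft-thresholding:
% prox_{mu f}(x) = sign(x) .* max(|x| - mu, 0)
soft = @(x, mu) sign(x) .* max(abs(x) - mu, 0);
p = soft([-2; 0.3; 1.5], 0.5);   % yields [-1.5; 0; 1.0]
```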
The subsequent two lemmas will play a crucial role in the convergence analysis of the proposed algorithm.
Lemma 1
([], Lemma 2.3). Assume $(\varphi_k)$, $(\delta_k)$, and $(\theta_k)$ are sequences in $[0, +\infty)$ such that for each $k \in \mathbb{N}$,

$\varphi_{k+1} \leq \varphi_k + \theta_k(\varphi_k - \varphi_{k-1}) + \delta_k$, with $\sum_{k \in \mathbb{N}} \delta_k < +\infty$,

and there exists a real number α satisfying $0 \leq \theta_k \leq \alpha < 1$ for all $k \in \mathbb{N}$. Thus, the following assertions hold:
- (i) $\sum_{k \geq 1} [\varphi_k - \varphi_{k-1}]_+ < +\infty$, where $[t]_+ = \max\{t, 0\}$;
- (ii) there exists $\varphi^* \in [0, +\infty)$ such that $\lim_{k \to \infty} \varphi_k = \varphi^*$.
Lemma 2
((Opial) []). Let $S$ be a nonempty subset of $\mathcal{H}$, and let $(x_k)_{k \in \mathbb{N}}$ be a sequence in $\mathcal{H}$ satisfying the following conditions:
- (1) for all $x \in S$, $\lim_{k \to \infty} \|x_k - x\|$ exists;
- (2) every weak sequential cluster point of $(x_k)_{k \in \mathbb{N}}$ belongs to S.
Then $(x_k)_{k \in \mathbb{N}}$ converges weakly to a point in S.
3. The RIFBHF Algorithm
We establish the RIFBHF algorithm from the perspective of discretisations of continuous-time dynamical systems. First, we pay attention to the second-order dynamical system of the FBHF method studied in [38]:
where the time-dependent coefficients indicated are Lebesgue measurable functions, and A, B, C, and L are as in (1). Let
Thereby, (4) is equivalent to
Note that the cocoercivity of an operator implies its Lipschitz continuity, which implies, in turn, that $B + C$ is Lipschitz continuous with Lipschitz constant $L + 1/\beta$. One can find that T is Lipschitz continuous by Proposition 1 in []. Therefore, by the Cauchy–Lipschitz theorem for absolutely continuous trajectories, it can be deduced that the trajectory of (4) exists and is unique.
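To fix ideas, the following LaTeX sketch records the form such a second-order system takes, following [38], with T the FBHF operator; the damping and relaxation coefficients $\gamma(t)$, $\lambda(t)$, the constant step size $\eta$, and the initial data $x_0$, $v_0$ are our own notation, not the paper's:

```latex
\ddot{x}(t) + \gamma(t)\,\dot{x}(t) + \lambda(t)\bigl(x(t) - T(x(t))\bigr) = 0,
\qquad x(0) = x_0,\quad \dot{x}(0) = v_0,
\quad\text{where}\quad
T = J_{\eta A}\bigl(\mathrm{Id} - \eta(B + C)\bigr)
  + \eta\Bigl(B - B\,J_{\eta A}\bigl(\mathrm{Id} - \eta(B + C)\bigr)\Bigr).
```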
Next, the trajectories of (5) are approximated at the time points $t_k$ using a discrete trajectory $(x^k)_{k \in \mathbb{N}}$. Specifically, we employ the central discretisation $\ddot{x}(t_k) \approx \frac{x^{k+1} - 2x^k + x^{k-1}}{h^2}$ and the backward discretisation $\dot{x}(t_k) \approx \frac{x^k - x^{k-1}}{h}$ with step size $h > 0$. Let $w^k$ be an extrapolation of $x^k$ and $x^{k-1}$; one gets
which implies
where and . Define that for all ; then, one gets the following RIFBHF iterative scheme for all :
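To make the update concrete, the following MATLAB sketch implements one plausible realization of (6) with constant parameters; the handles JA (mapping (gamma, u) to $J_{\gamma A}(u)$), B, and C, as well as all variable names, are our own stand-ins, whereas (6) itself allows variable $\gamma_k$, $\alpha_k$, and $\rho_k$:

```matlab
% One plausible realization of the RIFBHF iteration (6), with constant
% parameters gamma (step size), alpha (inertia), and rho (relaxation).
function x = rifbhf_sketch(JA, B, C, x0, gamma, alpha, rho, maxit)
    xold = x0;  x = x0;
    for k = 1:maxit
        w = x + alpha*(x - xold);                 % inertial extrapolation w^k
        y = JA(gamma, w - gamma*(B(w) + C(w)));   % backward (resolvent) step
        xold = x;
        x = (1 - rho)*w + rho*(y + gamma*(B(w) - B(y)));  % half-forward step on B, then relaxation
    end
end
```

With alpha = 0 and rho = 1, this collapses to the FBHF update in Remark 1(i) below.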
Remark 1.
The subsequent iterative algorithms can be regarded as specific instances of the proposed algorithm:
- (i)
- FBHF method [15]: assume $\alpha_k = 0$ and $\rho_k = 1$ when $k \in \mathbb{N}$,
- (ii)
- Inertial forward–backward–half-forward scheme [40]: assume $\rho_k = 1$ when $k \in \mathbb{N}$,
- (iii)
- Relaxed forward–backward–half-forward method: assume $\alpha_k = 0$ when $k \in \mathbb{N}$,
Furthermore, the convergence results of the proposed algorithm will be established. The convergence analysis relies heavily on the following properties.
Proposition 1.
Proof.
By definition, we get and such that and . Therefore, in view of the monotonicity of A and B, one has
and
Using (7) and (8), we obtain
Therefore, we obtain
Since C is cocoercive, one gets for all :
Similar to [], let for , allowing us to determine the widest interval for such that the second and third terms on the right-hand side of (11) are negative. □
Proposition 2.
Consider problem (1) and assume that $\operatorname{zer}(A + B + C) \neq \emptyset$. Suppose that , , and χ is defined as in Proposition 1. Assume is nondecreasing and satisfies . Let denote the sequence generated by (6). Then
- (i)
- where .
- (ii)
- Define that
Then
where .
Proof.
- (i)
- Proposition 1 leads to the following estimate. According to the Lipschitz continuity of B,
- (ii)
- It follows from the definition of and the Cauchy–Schwarz inequality that the first estimate holds. Simultaneously, we have a companion estimate. Now, by (17) and , we obtain a further bound. It follows that the claimed inequality holds. Letting , we obtain the conclusion. The proof is completed.
□
Furthermore, to ensure the convergence of (6), let and , following the idea of Boţ et al. []. Proposition 2(ii) implies that
Since , we have
Then, to ensure , the following holds:
Next, we establish the principal convergence theorem.
Theorem 1.
Proof.
For any , by (19), we have . This implies the existence of such that for every ,
As a result, the sequence is nonincreasing, and the bound for yields
which indicates that
Combining (20)–(22) and , we have
which indicates that . Since , this yields . Taking (17) and Lemma 1 into account, we observe that the limit exists.
Meanwhile,
which implies that . In addition, for every , we have
Due to , we deduce that
Assume is a weak limit point of the sequence . Since is bounded, there exists a subsequence that converges weakly to . In view of (23) and (24), and also converge weakly to . Next, since , we have . Therefore, utilizing (24), the fact that , and the Lipschitz continuity of , we conclude that . Due to the maximal monotonicity of $A + B$ and the cocoercivity of C, it follows that $A + B + C$ is maximal monotone, and its graph is closed in the weak–strong topology on $\mathcal{H} \times \mathcal{H}$ (see Proposition 20.37(ii) in [4]). As a result, . Following Lemma 2, we conclude that the sequence converges weakly to an element of $\operatorname{zer}(A + B + C)$. This completes the proof. □
Remark 2
(Inertia versus relaxation). Inequality (19) establishes a relationship between the inertial and relaxation parameters. Figure 1 displays the relationship between and for some given values of γ and ε; it has a graphical representation similar to Figure 1 in []. It is noteworthy that the expression of the upper bound for resembles that in ([], Remark 2) if . Assume that is constant; it follows from (19) that the upper bound of takes the form of with . Further, note that the relaxation parameter is a decreasing function of the inertial parameter α on the interval : for example, when , then . Of course, we can also get when , and is also decreasing on because of the limiting values as and as .

Figure 1.
Balance between and with , (left), and (right).
Remark 3.
The parameters for FBHF [15] and RIFBHF are given in Table 1, which shows that the two schemes have the same range of step sizes. Different from FBHF [15], RIFBHF introduces the relaxation parameter $\rho_k$ and the inertial parameter $\alpha_k$. Note that RIFBHF reduces to FBHF [15] if $\alpha_k = 0$ and $\rho_k = 1$.

Table 1.
The parameter selection ranges for FBHF [15] and RIFBHF.
Remark 4.
The existence of over-relaxation for RIFBHF deserves further discussion. If with for , we conclude that admits over-relaxation. In addition, observe that the over-relaxation in [] exists when for . Although the upper bounds of α for the two approaches are different, over-relaxation is possible for both methods when α lies in a small range.
4. Composite Monotone Inclusion Problem
The aim of this section is to use the proposed algorithm to solve a more generalized inclusion problem, which is outlined as follows: find $x \in \mathcal{H}$ such that

$0 \in Ax + L^* B(Lx) + Cx$, (25)

where $A: \mathcal{H} \to 2^{\mathcal{H}}$ and $B: \mathcal{G} \to 2^{\mathcal{G}}$ represent two maximal monotone operators, $C: \mathcal{H} \to \mathcal{H}$ is a β-cocoercive operator with $\beta > 0$, and $L: \mathcal{H} \to \mathcal{G}$ is a bounded linear operator. In addition, it is assumed that the solution set of (25) is nonempty.
The key to solving (25) is to evaluate the resolvent $J_{\gamma L^* B L}$ exactly. As we know, $J_{\gamma L^* B L}$ can be evaluated exactly using only the resolvent of the operator B, the linear operator L, and its adjoint $L^*$ when $L L^* = \nu\,\mathrm{Id}$ for some $\nu > 0$. However, this condition usually does not hold in the problems of interest, such as total variation regularization. To address this challenge, we introduce an efficient iterative algorithm to tackle (25) by combining the primal–dual approach [39] and (6). Specifically, drawing inspiration from the fully split primal–dual algorithm studied by Briceño-Arias and Combettes [39], we naturally rewrite (25) as the following problem, letting $\boldsymbol{\mathcal{K}} = \mathcal{H} \oplus \mathcal{G}$: find $(x, v) \in \boldsymbol{\mathcal{K}}$ such that

$(0, 0) \in M(x, v) + S(x, v) + N(x, v)$, (26)
where

$M : (x, v) \mapsto Ax \times B^{-1} v, \qquad S : (x, v) \mapsto (L^* v, -Lx), \qquad N : (x, v) \mapsto (Cx, 0)$. (27)
Notice that M is maximal monotone and S is monotone Lipschitz continuous with constant $\|L\|$, as stated in Proposition 2.7 of [39]. This implies that $M + S$ is also maximal monotone since S is skew-symmetric. The cocoercivity of N follows from the cocoercivity of C. Therefore, it follows from (6) and (27) that the convergence analysis below can deal with (26), which implies that (25) is also solved.
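For implementation, it is worth noting how the resolvent of γM splits; the first identity below is componentwise, and the second is the standard inverse-resolvent identity (see, e.g., [4]), so each iteration only requires the resolvents of A and B, applications of L and $L^*$, and one evaluation of C:

```latex
J_{\gamma M}(x, v) = \bigl(J_{\gamma A}\,x,\; J_{\gamma B^{-1}}\,v\bigr),
\qquad
J_{\gamma B^{-1}}(v) = v - \gamma\, J_{\gamma^{-1} B}\bigl(\gamma^{-1} v\bigr).
```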
Corollary 1.
Suppose that $A: \mathcal{H} \to 2^{\mathcal{H}}$ is maximal monotone; let $B: \mathcal{G} \to 2^{\mathcal{G}}$ be maximal monotone, and assume that $C: \mathcal{H} \to \mathcal{H}$ is β-cocoercive with $\beta > 0$. Let $L: \mathcal{H} \to \mathcal{G}$ be a nonzero bounded linear operator. Given initial data and , the iteration sequences are defined as follows:
where , , χ is defined in Proposition 1, and is nondecreasing such that . Assume that the sequence fulfils the condition
where . Then the iterative sequence generated by (28) converges weakly to a solution of (26).
Proof.
Using Proposition 2.7 in [39], we observe that M is maximal monotone and S is Lipschitz continuous with constant $\|L\|$. Considering the β-cocoercivity of C, it follows that N is also β-cocoercive. Additionally, for arbitrary , let
Therefore, using (27) and Proposition 2.7(iv) in [39], we can rewrite (28) in the following form:
which has the same structure as (6). Meanwhile, our assumptions guarantee that the conditions of Theorem 1 hold. Hence, according to Theorem 1, the sequence generated by (28) converges weakly to an element of $\operatorname{zer}(M + S + N)$. □
In the following, we apply the results of Corollary 1 to tackle the convex minimization problem.
Corollary 2.
Consider the convex optimization problem as follows:

$\min_{x \in \mathcal{H}} \; f(x) + g(Lx) + h(x)$, (29)

where $f \in \Gamma_0(\mathcal{H})$, $g \in \Gamma_0(\mathcal{G})$, $h: \mathcal{H} \to \mathbb{R}$ is convex and differentiable with a $(1/\beta)$-Lipschitz continuous gradient for some $\beta > 0$, and $L: \mathcal{H} \to \mathcal{G}$ is a bounded linear operator. For (29), given initial data and , iteration sequences are presented for :
where , , χ is defined in Proposition 1, and is nondecreasing such that . Assume that the sequence satisfies that
where . If the solution set of (29) is nonempty, then the sequence converges weakly to a minimizer of (29).
Proof.
According to [4], $\partial f$ and $\partial g$ are maximal monotone. In view of the Baillon–Haddad theorem, $\nabla h$ is β-cocoercive. Thus, solving (29) with our algorithm is equivalent to solving (25) under suitable qualification conditions, which gives

$A = \partial f$, $B = \partial g$, and $C = \nabla h$.
Therefore, it follows from the same arguments as the proof of Corollary 1 that we arrive at the conclusions of Corollary 2. □
5. Numerical Experiments
This section reports the feasibility and efficiency of (6); in particular, we discuss the impact of the parameters on (6). All experiments were conducted using MATLAB on a standard Lenovo machine equipped with an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (1.80 GHz boost). Our objective is to address the following constrained total variation (TV) minimization problem:

$\min_{z \in D} \; \tfrac{1}{2}\|Az - d\|_2^2 + \mu \|z\|_{TV}$, (31)

in which A represents the blurring matrix, z signifies the unknown original image, and D indicates a nonempty closed convex set representing the prior information regarding the deblurred images. Specifically, D is selected as a nonnegativity constraint set, $\mu > 0$ indicates the regularization parameter, $\|\cdot\|_{TV}$ denotes the total variation term, and d stands for the recorded blurred image data.
Notice that (31) can be equivalently written with the following structure:

$\min_{z} \; \iota_D(z) + \mu\,\varphi(Hz) + \tfrac{1}{2}\|Az - d\|_2^2$, (32)

where $\iota_D$ represents the indicator function, which equals 0 when $z \in D$ and $+\infty$ otherwise. The term $\|z\|_{TV}$ can be expressed as the composition $\varphi(Hz)$ of a convex function φ (either $\varphi = \|\cdot\|_1$ for the anisotropic total variation or the corresponding mixed $\ell_{2,1}$-norm for the isotropic total variation) with a first-order difference matrix H (refer to Section 4 in [41]), where H is written as:

$H = \begin{bmatrix} I \otimes D \\ D \otimes I \end{bmatrix}$,

where I denotes the identity matrix, D is the one-dimensional first-order difference matrix, and ⊗ is the Kronecker product. Consequently, it is evident that (32) constitutes a special instance of (29).
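For instance, H can be assembled in MATLAB with Kronecker products as follows; the size n, the matrix D, and the zeroed boundary row are our own illustrative choices, since boundary conventions vary:

```matlab
% First-order difference operator H for an n-by-n image (anisotropic TV).
n = 4;
e = ones(n, 1);
D = spdiags([-e, e], [0, 1], n, n);   % 1-D forward differences
D(n, :) = 0;                          % one common boundary convention
H = [kron(speye(n), D); kron(D, speye(n))];  % stacks both difference directions
```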
In order to assess the quality of the deblurred images, we employ the signal-to-noise ratio (SNR) [], which is defined by

$\mathrm{SNR} = 20 \log_{10} \dfrac{\|x\|_2}{\|x - \tilde{x}\|_2}$,

where x is the original image and $\tilde{x}$ is the deblurred image. Iterative algorithms are terminated according to the relative error between adjacent iterative steps. We choose the test image "Barbara" with a size of and use four typical image blurring scenarios as in Table 2. In addition, the range of the values in the original images is , and the norm of operator A in (31) is equal to 1 for the selected experiments. The cocoercivity coefficient is 1, and for the total variation, as estimated in [], where H is the linear operator. We terminate the iterative algorithms when the relative error falls below a prescribed tolerance or when the number of iterations reaches 1000.
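In MATLAB, the SNR above and the relative-error stopping test read as follows; the tolerance tol is kept symbolic since its value is not reproduced here:

```matlab
% SNR (in dB) between the original image x and the deblurred image xr,
% and the relative-error test between adjacent iterates xk, xkm1.
snr_db  = @(x, xr) 20*log10(norm(x(:)) / norm(x(:) - xr(:)));
stopped = @(xk, xkm1, tol) norm(xk(:) - xkm1(:)) / norm(xkm1(:)) < tol;
```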

Table 2.
Description of image blurring scenarios.
Prior to the comparisons, we run a test to display how the performance of RIFBHF is affected by different parameters. For simplicity, we set for all . In view of (19), the relationship between the inertial parameter and the relaxation parameter is presented as follows:
where , and . As noted above, the upper bound of is similar to that of ([], Remark 2) if . Firstly, we assume , , and , or and , for RIFBHF. The SNR values and the numbers of iterations used with various for RIFBHF are recorded in Table 3 and Table 4. Observe that the SNR values in image blurring scenarios 1 and 2 are best when , while the SNR values in image blurring scenarios 3 and 4 are best when . Therefore, we choose for image blurring scenarios 1 and 2 and for image blurring scenarios 3 and 4. For the case of in image blurring scenario 1, we further study the effect of the other parameters. We consider the development of the SNR value and of the normalized error of the objective function along the running time; here, f represents the current objective value, while denotes the optimal objective value. To approximate the optimal objective value, we take the function value obtained by running the tested algorithms for 5000 iterations. From Figure 2, one can observe that a larger results in a smaller error when and , and a larger also brings a smaller error when and . Of course, and also affect the value of . Meanwhile, one can conclude that a larger allows for a smaller error. It is worth noting that over-relaxation ( ) exists and enjoys a better effect. Figure 3 shows the development of the SNR for different parameters; the experimental results are similar to those in Figure 2.

Table 3.
SNR values and numbers of iterations when , , and , with different values of the parameter , for the Barbara image in the four blurring scenarios.

Table 4.
SNR values and numbers of iterations when , , and , with different values of the parameter , for the Barbara image in the four blurring scenarios.

Figure 2.
Behaviour of the error of objective value against running time for different parameters , , and .

Figure 3.
Behaviour of SNR against running time for different parameters , , and .
To further validate the rationality and efficiency of (30), the following algorithms and parameter settings are utilized:
- FBHF []: for four image blurring scenarios.
- PD: the first-order primal–dual splitting algorithm [42] with , , and for the four image blurring scenarios.
- RIFBHF: using and for image blurring scenarios 1 and 2 and and for image blurring scenarios 3 and 4.
Figure 4 plots the normalized error of the objective function for FBHF, PD, and RIFBHF along the running time. Note that PD appears to be the fastest algorithm. FBHF and RIFBHF perform almost identically in image blurring scenarios 1 and 2, while in scenarios 3 and 4 RIFBHF outperforms FBHF, which shows that our algorithm is competitive. Meanwhile, to succinctly illustrate the deblurring performance of the proposed algorithm, the deblurred results for image blurring scenario 4 are shown in Figure 5. The deblurred images generated by RIFBHF are visually better.

Figure 4.
Behaviour of the error of the objective value against running time for different image blurring scenarios, i.e., (a) scenario 1, (b) scenario 2, (c) scenario 3, and (d) scenario 4.

Figure 5.
(a) The original image of Barbara. (b) The blurred image of Barbara. (c) The deblurred image by FBHF. (d) The deblurred image by RIFBHF.
6. Conclusions
In this paper, we proposed the RIFBHF algorithm to solve (1). The proposed approach was deduced by discretising a continuous-time dynamical system, and a variable step size was introduced into the algorithm. Additionally, we studied the theoretical convergence properties of (6) under reasonable parameter conditions. Inspired by the primal–dual scheme, our approach tackled the composite monotone inclusion problem (25) and the composite convex optimization problem (29). Subsequently, we conducted numerical experiments focused on image deblurring to illustrate the effectiveness of the proposed technique.
Author Contributions
Methodology, C.Z.; formal analysis, G.Z. and Y.T.; writing—original draft preparation, C.Z.; writing—review and editing, G.Z. and Y.T. All authors have read and agreed to the published version of the manuscript.
Funding
Funding was provided to this project by the National Natural Science Foundation of China (11771193,12061045), the Jiangxi Provincial Natural Science Foundation (20224ACB211004), the Guangzhou Education Scientific Research Project 2024 (202315829), and the Guangzhou University Research Project (RC2023061).
Data Availability Statement
The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.
Acknowledgments
We express our thanks to the anonymous referees for their constructive suggestions, which significantly improved the presentation of this paper.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
- Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
- Raguet, H.; Fadili, J.; Peyré, G. A generalized forward-backward splitting. SIAM J. Imaging Sci. 2013, 6, 1199–1226. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: London, UK, 2017. [Google Scholar]
- Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710. [Google Scholar] [CrossRef]
- Lions, P.-L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
- Combettes, P.L.; Pesquet, J.-C. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574. [Google Scholar] [CrossRef]
- Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
- Raguet, H.; Landrieu, L. Preconditioning of a generalized forward-backward splitting and application to optimization on graphs. SIAM J. Imaging Sci. 2015, 8, 2706–2739. [Google Scholar] [CrossRef]
- Briceño-Arias, L.M. Forward-Douglas-Rachford splitting and forward-partial inverse method for solving monotone inclusions. Optimization 2015, 64, 1239–1261. [Google Scholar] [CrossRef]
- Davis, D.; Yin, W.T. A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 2017, 25, 829–858. [Google Scholar] [CrossRef]
- Raguet, H. A note on the forward-Douglas-Rachford splitting for monotone inclusion and convex optimization. Optim. Lett. 2019, 13, 717–740. [Google Scholar] [CrossRef]
- Ryu, E.K.; Vũ, B.C. Finding the forward-Douglas-Rachford-forward method. J. Optim. Theory Appl. 2020, 184, 858–876. [Google Scholar] [CrossRef]
- Rieger, J.; Tam, M.K. Backward-forward-reflected-backward splitting for three operator monotone inclusions. Appl. Math. Comput. 2020, 381, 125248. [Google Scholar] [CrossRef]
- Briceño-Arias, L.M.; Davis, D. Forward-backward-half forward algorithm for solving monotone inclusions. SIAM J. Optim. 2018, 28, 2839–2871. [Google Scholar] [CrossRef]
- Alves, M.M.; Geremia, M. Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng’s F-B four-operator splitting method for solving monotone inclusions. Numer. Algorithms 2019, 82, 263–295. [Google Scholar] [CrossRef]
- Giselsson, P. Nonlinear forward-backward splitting with projection correction. SIAM J. Optim. 2021, 31, 2199–2226. [Google Scholar] [CrossRef]
- Briceño-Arias, L.; Chen, J.; Roldán, F.; Tang, Y. Forward-partial inverse-half-forward splitting algorithm for solving monotone inclusions. Set-Valued Var. Anal. 2022, 30, 1485–1502. [Google Scholar] [CrossRef]
- Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Nesterov, Y. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR 1983, 269, 543–547. [Google Scholar]
- Güler, O. New proximal point algorithms for convex minimization. SIAM J. Optim. 1992, 2, 649–664. [Google Scholar] [CrossRef]
- Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
- Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2003, 14, 773–782. [Google Scholar] [CrossRef]
- Ochs, P.; Chen, Y.; Brox, T.; Pock, T. iPiano: Inertial proximal algorithm for nonconvex optimization. SIAM J. Imaging Sci. 2014, 7, 1388–1419. [Google Scholar] [CrossRef]
- Boţ, R.I.; Csetnek, E.R. A hybrid proximal-extragradient algorithm with inertial effects. Numer. Funct. Anal. Optim. 2015, 36, 951–963. [Google Scholar] [CrossRef]
- Chen, C.; Chan, R.H.; Ma, S.; Yang, J. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015, 8, 2239–2267. [Google Scholar] [CrossRef]
- Dong, Q.; Lu, Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
- Combettes, P.L.; Glaudin, L.E. Quasi-nonexpansive iterations on the affine hull of orbits: From Mann’s mean value algorithm to inertial methods. SIAM J. Optim. 2017, 27, 2356–2380. [Google Scholar] [CrossRef]
- Attouch, H.; Cabot, A. Convergence rates of inertial forward-backward algorithms. SIAM J. Optim. 2018, 28, 849–874. [Google Scholar] [CrossRef]
- Eckstein, J.; Bertsekas, D.P. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
- Attouch, H.; Peypouquet, J. Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators. Math. Program. 2019, 174, 391–432. [Google Scholar] [CrossRef]
- Attouch, H.; Cabot, A. Convergence of a relaxed inertial proximal algorithm for maximally monotone operators. Math. Program. 2020, 184, 243–287. [Google Scholar] [CrossRef]
- Attouch, H.; Cabot, A. Convergence rate of a relaxed inertial proximal algorithm for convex minimization. Optimization 2020, 69, 1281–1312. [Google Scholar] [CrossRef]
- Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward-backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598. [Google Scholar] [CrossRef]
- Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487. [Google Scholar] [CrossRef]
- Boţ, R.I.; Sedlmayer, M.; Vuong, P.T. A relaxed inertial forward-backward-forward algorithm for solving monotone inclusions with application to GANs. J. Mach. Learn. Res. 2023, 1–37. [Google Scholar]
- Tang, Y.; Yang, Y.; Peng, J. Convergence analysis of a relaxed inertial alternating minimization algorithm with applications. In Advanced Mathematical Analysis and Its Applications, 1st ed.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2023; 27p. [Google Scholar]
- Boţ, R.I.; Csetnek, E.R. Second order forward-backward dynamical systems for monotone inclusion problems. SIAM J. Control Optim. 2016, 54, 1423–1443. [Google Scholar] [CrossRef]
- Briceño-Arias, L.M.; Combettes, P.L. A monotone+skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 2011, 21, 1230–1250. [Google Scholar] [CrossRef]
- Zong, C.; Tang, Y.; Zhang, G. An accelerated forward-backward-half forward splitting algorithm for monotone inclusion with applications to image restoration. Optimization 2024, 73, 401–428. [Google Scholar] [CrossRef]
- Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Problems 2011, 27, 045009. [Google Scholar] [CrossRef]
- Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision 2011, 40, 120–145. [Google Scholar] [CrossRef]