Article

A New Approach for Proximal Split Minimization Problems

by Abdellatif Moudafi 1,* and André Weng-Law 2

1 Laboratoire d’Informatique et Systèmes (LIS UMR 7020 CNRS/AMU/UTLN), Aix-Marseille Université, 13288 Marseille, France
2 Laboratoire MEMIAD, Université des Antilles, Campus de Schoelcher, 97233 Schoelcher Cedex, France
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(1), 144; https://doi.org/10.3390/math13010144
Submission received: 6 September 2024 / Revised: 7 November 2024 / Accepted: 27 December 2024 / Published: 2 January 2025
(This article belongs to the Special Issue Applied Functional Analysis and Applications: 2nd Edition)

Abstract: We provide an alternative formulation of proximal split minimization problems, a very recently developed and appealing strategy that relies on an infimal post-composition approach. Forward–backward and Douglas–Rachford splitting then guide both the design and the analysis of the resulting split numerical methods. We establish global weak convergence and show that these algorithms can be equipped with relaxed and/or inertial steps, leading to improved convergence guarantees.

1. Introduction

Convex optimization has emerged as an effective and powerful tool for modeling a wide class of problems arising in various branches of the social, physical, engineering, and pure and applied sciences within a unified and general framework. In recent years, much attention has been given to developing efficient and implementable numerical methods, including the projection method and its variants, the auxiliary problem principle, the proximal point algorithm, and the descent framework, for solving variational inequalities and related optimization problems. It is well known that the projection method and its variants cannot be used to suggest and analyze iterative methods for solving variational inequalities in the presence of a nonlinear term. This fact motivated the development of another technique, which involves the use of proximal mappings associated with proper convex lower semicontinuous functions, the origin of which can be traced back to Martinet [1]. The resulting method, namely the proximal point algorithm, has been extended and generalized in different directions by using novel and innovative techniques and ideas, both for their own sake and for their applications, relying, for example, on the Bregman distance. Since it is in general difficult to evaluate the proximal mapping, one alternative is to decompose the given function into the sum of two (or more) functions whose proximal mappings are easier to evaluate than that of the original one, so as to yield an easily implementable algorithm. Such methods are known as splitting methods. This can lead to very efficient schemes, since one can treat each part of the original function independently. Splitting methods and related techniques have been analyzed and studied by many authors; for an excellent account, see [2] and the references therein. It is worth mentioning that accelerating the convergence of these algorithms is an important issue for applications; acceleration relies either on a second-order analysis in space [3] or on an approach initiated by Nesterov [4], based on a second-order analysis in time, which gives rise to the so-called FISTA method [5].
In this note, we will focus on the split feasibility problem, which has received much attention due to its applications in signal processing and image reconstruction [6], with particular progress in intensity-modulated radiation therapy [7]. Our purpose here is to study the more general case of proximal split minimization problems and to investigate the convergence properties of the associated numerical methods. More precisely, we are interested in finding a solution $\bar{x} \in H_1$ of the following problem:
$$\min_{x \in H_1} f(x) + g_\lambda(Ax), \tag{1}$$
where $H_1, H_2$ are two real Hilbert spaces, $f: H_1 \to \mathbb{R} \cup \{+\infty\}$ and $g: H_2 \to \mathbb{R} \cup \{+\infty\}$ are two proper convex lower semicontinuous functions, $A: H_1 \to H_2$ is a bounded linear operator, and $g_\lambda(y) = \min_{u \in H_2}\{g(u) + \frac{1}{2\lambda}\|u - y\|^2\}$ is the Moreau envelope of the function $g$ with parameter $\lambda$.
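For intuition, consider the standard one-dimensional example $g = |\cdot|$ (an illustration of ours, not taken from the paper): its Moreau envelope is the classical Huber function,
$$g_\lambda(y) = \min_{u \in \mathbb{R}}\Big\{ |u| + \frac{1}{2\lambda}(u - y)^2 \Big\} = \begin{cases} \dfrac{y^2}{2\lambda}, & |y| \le \lambda, \\[4pt] |y| - \dfrac{\lambda}{2}, & |y| > \lambda, \end{cases}$$
which is differentiable everywhere with a $\frac{1}{\lambda}$-Lipschitz gradient even though $g$ itself is nonsmooth; this smoothing effect is precisely what the algorithms below exploit.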
The first class of approaches includes primal algorithms such as the forward–backward algorithm (FB) and the Douglas–Rachford algorithm (DR), whose use depends essentially on the Lipschitz differentiability or proximability of the functions. In our context, $g_\lambda$ is differentiable with a Lipschitz-continuous gradient. More precisely, $\nabla g_\lambda = \frac{I - \mathrm{prox}_{\lambda g}}{\lambda}$, and an application of the FB algorithm leads to
$$x_{k+1} = \mathrm{prox}_{\mu_k f}\big(x_k - \mu_k A^*(I - \mathrm{prox}_{\lambda g})(Ax_k)\big).$$
The determination of the step size depends on the operator norm, whose computation (or at least estimation) is not an easy task. An approach to avoid this problem was proposed in [8].
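To make this dependence concrete, here is a minimal numerical sketch of the FB iteration above, assuming for illustration that $f = \|\cdot\|_1$ and $g = \frac{1}{2}\|\cdot - b\|^2$ (the data, functions, and all parameter values are our own choices, not taken from the paper); the admissible step size is derived from a power-method estimate of $\|A\|^2$, since $\nabla(g_\lambda \circ A)$ is $\|A\|^2/\lambda$-Lipschitz.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 60
A = rng.standard_normal((m, n))     # hypothetical data, for illustration only
b = rng.standard_normal(m)
lam = 1.0

# Power iteration to estimate ||A||^2: the admissible step size depends on it.
v = rng.standard_normal(n)
for _ in range(100):
    v = A.T @ (A @ v)
    v /= np.linalg.norm(v)
norm_A2 = v @ (A.T @ (A @ v))       # estimate of ||A||^2

def prox_f(x, t):                   # prox of t*||.||_1: soft thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_g(y, t):                   # prox of t*(1/2)||. - b||^2
    return (y + t * b) / (1.0 + t)

mu = lam / norm_A2                  # safe step: mu < 2*lam/||A||^2
x = np.zeros(n)
for k in range(500):
    Ax = A @ x
    grad = A.T @ (Ax - prox_g(Ax, lam)) / lam   # A^* grad g_lam(A x_k)
    x = prox_f(x - mu * grad, mu)
```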
To solve Problem (1) in Hilbert spaces, we consider in this paper an approach based on the notion of the infimal post-composition of a convex function by a linear operator. We first provide an equivalent formulation involving the infimal post-composition, and we then propose weakly convergent algorithms by means of both the FB and DR algorithms. These algorithms require the computation of the proximity operator of the infimal post-composition considered in [9]; see also [10,11]. Observe that in the context of image restoration, an explicit formula for this proximity operator was given in [9]; see also [12] for applications to image super-resolution, with examples in non-blind deblurring and interpolation as well as a polyphase implementation based on the Sherman–Morrison–Woodbury identity, and [10,13] in the context of the preconditioned alternating direction method of multipliers.
In order to provide the essential information, we assume the reader has some basic knowledge of convex analysis, as can be found, for example, in [14,15].

2. New Formulation of Proximal Split Minimization Problems

In what follows, we will propose an alternative formulation of Problem (1) that involves infimal post-composition. To this end, let us first recall this notion; see, for example [9,14].
Definition 1.
Given a function $f: H_1 \to \mathbb{R} \cup \{+\infty\}$ and a bounded linear operator $A: H_1 \to H_2$, the infimal post-composition of $f$ by $A$ is defined by
$$A \triangleright f : H_2 \to \mathbb{R} \cup \{+\infty\} : u \mapsto \inf_{x \in H_1;\, A(x) = u} f(x).$$
The infimal post-composition $A \triangleright f$ is exact at $u \in H_2$ if $\operatorname{argmin}_{x \in H_1;\, A(x) = u} f(x) \neq \emptyset$. Note that $\operatorname{dom}(A \triangleright f) \subset \operatorname{ran}(A)$.
Remember also that $\mathrm{prox}_{\gamma f, A}$, which is a generalization of the classical proximal mapping, is defined by
$$\mathrm{prox}_{\gamma f, A}(u) = \operatorname{argmin}_{x \in H_1}\Big( f(x) + \frac{1}{2\gamma}\|A(x) - u\|^2 \Big).$$
It is worth mentioning that the latter reduces to the classical proximity operator when $A = I$, namely $\mathrm{prox}_{\gamma f, \mathrm{Id}} = \mathrm{prox}_{\gamma f} := \operatorname{argmin}_{x \in H_1}\big( f(x) + \frac{1}{2\gamma}\|x - \cdot\|^2 \big)$, and that it has a closed expression in several instances; see, for example, [16]. The classical proximity operator was introduced by Moreau [17] and possesses several attractive properties; see, for example, [15].
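As a worked special case (our own observation, consistent with the isometric matrices discussed in the Conclusions): when $A^*A = I$, the generalized proximity operator reduces to a classical one. Indeed, expanding the square gives
$$\|A(x) - u\|^2 = \|x - A^*u\|^2 + \|u\|^2 - \|A^*u\|^2,$$
and since the last two terms do not depend on $x$, the minimizers coincide, so that
$$\mathrm{prox}_{\gamma f, A}(u) = \mathrm{prox}_{\gamma f}(A^*u).$$
This identity is used in the numerical sketches below.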
Following the approach developed in [9], we obtain the formulation below.
Problem 1.
Let $f: H_1 \to \mathbb{R} \cup \{+\infty\}$ and $g: H_2 \to \mathbb{R} \cup \{+\infty\}$ be two proper convex lower semicontinuous functions, and let $A: H_1 \to H_2$ be a bounded linear operator. We are interested in the following problems:
$$\inf_{x \in H_1} f(x) + g_\lambda(Ax), \tag{4}$$
and
$$\inf_{u \in H_2} g_\lambda(u) + (A \triangleright f)(u). \tag{5}$$
Throughout this paper, the solution sets of (4) and (5) will be denoted by $S_1$ and $S_2$, respectively. By means of Proposition 1 of [9], we directly infer the connection below between Problems (4) and (5).
Proposition 1.
The following statements hold true:
(i) $\inf_{x \in H_1} f(x) + g_\lambda(A(x)) = \inf_{u \in H_2} g_\lambda(u) + (A \triangleright f)(u)$.
(ii) Suppose that $\bar{x} \in S_1$; then $A(\bar{x}) \in S_2$.
(iii) Suppose that $\bar{u} \in S_2$; then $\operatorname{argmin}_{x \in H_1;\, A(x) = \bar{u}} f(x) \subset S_1$.
Solving the alternative formulation (5) of Problem (4) by means of proximal splitting algorithms requires the computation of the proximal mapping of the infimal post-composition $A \triangleright f$. This does not, of course, preclude the use of other algorithms; see, for instance, [18,19]. For this purpose, we first recall the following key proposition; see Proposition 2 of [9].
Proposition 2.
If we assume, in addition to the assumptions on $f$, $g$, and $A$, that $0 \in \operatorname{sri}(\operatorname{dom} f^* - \operatorname{ran} A^*)$, then we obtain the following properties:
(i) The infimal post-composition $A \triangleright f$ is a proper, convex, and lower semicontinuous function, and it is exact on $H_2$.
(ii) $\mathrm{prox}_{\gamma(A \triangleright f)}(u) = A\, \mathrm{prox}_{\gamma f, A}(u)$ for every $\gamma > 0$, a key property, with $\operatorname{dom}(\mathrm{prox}_{\gamma f, A}) = H_2$.
Here, $f^*(z) = \sup_{x \in H_1}(\langle x, z\rangle - f(x))$ stands for the conjugate of the function $f$, and $\operatorname{sri}$ and $\operatorname{ran}$ denote the strong relative interior and the range, respectively. This result is obtained thanks to the formula $(f^* \circ A^*)^* = A \triangleright f$.
The new Formulation (5), being an optimization problem involving the sum of two proper lower semicontinuous convex functions, can therefore be solved by using split proximal algorithms, if the proximity mapping of the function g is available.
Firstly, we can use the forward–backward algorithm, since the function $g_\lambda$ is differentiable. This, together with the expression of the proximity mapping of $A \triangleright f$, leads to an infimal post-composition FB algorithm that activates the proximity mappings of $A \triangleright f$ and $g$, and which can be described as follows:
FB algorithm: Let $z_0 \in H_2$, fix $\lambda > 0$, let $0 < \gamma < 2\lambda$, and compute for $n = 0, 1, \dots$
$$(\text{FB Algo})\qquad \begin{cases} u_n = \mathrm{prox}_{\lambda g}(z_n), \\ x_n = \mathrm{prox}_{\gamma f, A}\Big(\big(1 - \frac{\gamma}{\lambda}\big) z_n + \frac{\gamma}{\lambda} u_n\Big), \\ z_{n+1} = A x_n. \end{cases}$$
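Below is a minimal numerical sketch of (FB Algo), assuming for illustration an isometric $A$ (so that $\mathrm{prox}_{\gamma f, A} = \mathrm{prox}_{\gamma f} \circ A^*$, as observed after the definition of $\mathrm{prox}_{\gamma f, A}$) together with the toy choices $f = \|\cdot\|_1$ and $g = \frac{1}{2}\|\cdot - b\|^2$; all names, data, and parameter values are our own assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 50                                       # H1 = R^n, H2 = R^p
A, _ = np.linalg.qr(rng.standard_normal((p, n)))    # isometric: A.T @ A = I_n
b = rng.standard_normal(p)

lam, gam = 1.0, 1.0                                 # requires 0 < gam < 2*lam

prox_f  = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # f = ||.||_1
prox_g  = lambda y, t: (y + t * b) / (1.0 + t)                      # g = 0.5||. - b||^2
prox_fA = lambda u, t: prox_f(A.T @ u, t)           # prox_{t f, A} when A^*A = I

z = np.zeros(p)
for _ in range(300):
    u = prox_g(z, lam)                              # u_n = prox_{lam g}(z_n)
    x = prox_fA((1 - gam / lam) * z + (gam / lam) * u, gam)
    z = A @ x                                       # z_{n+1} = A x_n
    # relaxed variant (Remark 1 below): z_{n+1} = (1 - a_n) z_n + a_n * (A @ x)
```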
We present below a convergence result for the FB algorithm.
Theorem 1.
Suppose that $S_1 \neq \emptyset$ and that the qualification condition $0 \in \operatorname{sri}(\operatorname{dom} f^* - \operatorname{ran} A^*)$ holds true, and let $(z_n)_{n \in \mathbb{N}}$ be a sequence generated by the FB-Inf-post-composition algorithm. Then, $(z_n)_{n \in \mathbb{N}}$ converges weakly to some solution $\bar{u} \in S_2$, provided that the parameters satisfy $0 < \gamma < 2\lambda$. Moreover, every $\bar{x} \in \operatorname{argmin}_{x \in H_1;\, Ax = \bar{u}} f(x)$ solves the initial Problem (4).
Proof of Theorem 1. 
$S_1$ being assumed non-empty ensures that $S_2 \neq \emptyset$, thanks to Proposition 1 (ii). In our setting, we also have that $A \triangleright f$ is proper, convex, and lower semicontinuous and that $\mathrm{prox}_{\gamma(A \triangleright f)} = A\, \mathrm{prox}_{\gamma f, A}$. On the other hand, the FB algorithm can be written as
$$z_{n+1} = \mathrm{prox}_{\gamma(A \triangleright f)}\big(z_n - \gamma \nabla g_\lambda(z_n)\big). \tag{6}$$
Indeed,
$$z_{n+1} = A\, \mathrm{prox}_{\gamma f, A}\Big(\big(1 - \frac{\gamma}{\lambda}\big) z_n + \frac{\gamma}{\lambda}\, \mathrm{prox}_{\lambda g}(z_n)\Big),$$
and equivalently,
$$z_{n+1} = A\, \mathrm{prox}_{\gamma f, A}\Big(z_n - \frac{\gamma}{\lambda}\big(z_n - \mathrm{prox}_{\lambda g}(z_n)\big)\Big),$$
from which we infer that
$$z_{n+1} = \mathrm{prox}_{\gamma(A \triangleright f)}\big(z_n - \gamma \nabla g_\lambda(z_n)\big),$$
since $\nabla g_\lambda(z_n) = \frac{z_n - \mathrm{prox}_{\lambda g}(z_n)}{\lambda}$.
A direct application of Corollary 27.9 of [14] to the forward–backward iteration (6) implies that the sequence $(z_n)_{n \in \mathbb{N}}$ converges weakly to a solution $\bar{u} \in S_2$, and the fact that $\bar{x}$ solves (4) follows from the exactness of $A \triangleright f$ given by Proposition 2 (i) together with Proposition 1 (iii). □
Remark 1.
It is worth mentioning that relaxed and accelerated versions can be considered, for example, by replacing the third equation in the FB algorithm with $z_{n+1} = (1 - \alpha_n) z_n + \alpha_n A x_n$. The convergence result remains valid for $(\alpha_n)_{n \in \mathbb{N}}$ a sequence in $[0, \delta]$ verifying $\sum_{n \in \mathbb{N}} \alpha_n(\delta - \alpha_n) = +\infty$ with $\delta = \min\{1, \frac{\lambda}{\gamma}\} + \frac{1}{2}$. The algorithm can also be equipped with inertial steps, leading to improved convergence guarantees; see, for example, [5].
Secondly, we also solve Problem (5) by using the Douglas–Rachford algorithm together with the expression of the proximity operator of $A \triangleright f$. This leads to the following infimal post-composition DR algorithm:
$DR_1$ algorithm: Let $z_0 \in H_2$, fix $\lambda > 0$, let $\gamma > 0$, and compute for $n = 0, 1, \dots$
$$(\mathrm{DR}_1\ \text{Algo})\qquad \begin{cases} u_n = \frac{\lambda}{\lambda + \gamma} z_n + \frac{\gamma}{\lambda + \gamma}\, \mathrm{prox}_{(\lambda + \gamma) g}(z_n), \\ x_n = \mathrm{prox}_{\gamma f, A}(2 u_n - z_n), \\ z_{n+1} = A x_n + z_n - u_n. \end{cases}$$
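Under the same illustrative assumptions as the FB sketch above (isometric $A$, $f = \|\cdot\|_1$, $g = \frac{1}{2}\|\cdot - b\|^2$; reusing the definitions of `A`, `b`, `prox_g`, and `prox_fA` from that sketch, all of which are our own choices), the $DR_1$ iteration can be sketched as follows; note that, unlike (FB Algo), any $\gamma > 0$ is admissible.

```python
lam, gam = 1.0, 0.5                    # illustrative values; any gam > 0 works
z = np.zeros(p)
for _ in range(300):
    u = (lam / (lam + gam)) * z + (gam / (lam + gam)) * prox_g(z, lam + gam)
    x = prox_fA(2 * u - z, gam)        # x_n = prox_{gam f, A}(2 u_n - z_n)
    z = A @ x + z - u                  # z_{n+1} = A x_n + z_n - u_n
```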
We present below a convergence result for the $DR_1$ algorithm.
Theorem 2.
Suppose that $S_1 \neq \emptyset$ and that the qualification condition $0 \in \operatorname{sri}(\operatorname{dom} f^* - \operatorname{ran} A^*)$ holds true, and let $(x_n)_{n \in \mathbb{N}}$ be a sequence generated by the DR-Inf-post-composition algorithm. Then, the sequences $(u_n)_{n \in \mathbb{N}}$ and $(A x_n)_{n \in \mathbb{N}}$ converge weakly to some solution $\bar{u} \in S_2$. Moreover, every $\bar{x} \in \operatorname{argmin}_{x \in H_1;\, Ax = \bar{u}} f(x)$ solves the initial Problem (4).
Proof of Theorem 2. 
$S_1$ being assumed non-empty ensures that $S_2 \neq \emptyset$, thanks to Proposition 1 (ii). In our setting, we also have that the function $A \triangleright f$ is proper, convex, and lower semicontinuous and that $\mathrm{prox}_{\gamma(A \triangleright f)} = A\, \mathrm{prox}_{\gamma f, A}$. Now, the $DR_1$ algorithm can be reformulated as
$$z_{n+1} = \mathrm{prox}_{\gamma(A \triangleright f)}\big(2\, \mathrm{prox}_{\gamma g_\lambda} z_n - z_n\big) + z_n - \mathrm{prox}_{\gamma g_\lambda} z_n. \tag{7}$$
Indeed, the $DR_1$ algorithm can be written in the form
$$z_{n+1} = A\, \mathrm{prox}_{\gamma f, A}\Big(\frac{\lambda}{\lambda + \gamma} z_n - \frac{\gamma}{\lambda + \gamma}\big(z_n - 2\, \mathrm{prox}_{(\lambda + \gamma) g}(z_n)\big)\Big) + \frac{\gamma}{\lambda + \gamma}\big(z_n - \mathrm{prox}_{(\lambda + \gamma) g}(z_n)\big).$$
Using the following key formula (see Formula (42) in [20]),
$$\mathrm{prox}_{\gamma g_\lambda}(z) = \frac{\lambda}{\lambda + \gamma} z + \frac{\gamma}{\lambda + \gamma}\, \mathrm{prox}_{(\lambda + \gamma) g}(z), \tag{8}$$
we obtain
$$z_{n+1} = A\, \mathrm{prox}_{\gamma f, A}\big(2\, \mathrm{prox}_{\gamma g_\lambda} z_n - z_n\big) + z_n - \mathrm{prox}_{\gamma g_\lambda} z_n.$$
Therefore,
$$z_{n+1} = \mathrm{prox}_{\gamma(A \triangleright f)}\big(2\, \mathrm{prox}_{\gamma g_\lambda} z_n - z_n\big) + z_n - \mathrm{prox}_{\gamma g_\lambda} z_n.$$
By virtue of Corollary 27.4 of [14] applied to Algorithm (7), we obtain that the sequences $(u_n = \mathrm{prox}_{\gamma g_\lambda} z_n)$ and $(A x_n = A\, \mathrm{prox}_{\gamma f, A}(2 u_n - z_n))$ converge weakly to a solution $\bar{u} \in S_2$, and the fact that $\bar{x}$ solves (4) follows again from the exactness of the function $A \triangleright f$ given by Proposition 2 (i) together with Proposition 1 (iii). □
Remark 2.
(i) 
By swapping the activation of the proximity operators of $A \triangleright f$ and $g$, we can also design the following algorithm:
$DR_2$ algorithm: Let $z_0 \in H_2$, fix $\lambda > 0$, let $\gamma > 0$, and compute for $n = 0, 1, \dots$
$$\begin{cases} x_n = \mathrm{prox}_{\gamma f, A}(z_n), \\ u_n = \mathrm{prox}_{\gamma g_\lambda}(2 A x_n - z_n), \\ z_{n+1} = u_n + z_n - A x_n. \end{cases}$$
We can of course use Formula (8) again to express the proximity operator of $g_\lambda$ in terms of the function $g$. Moreover, the convergence result in Theorem 2 is also valid for a sequence generated by the $DR_2$ algorithm, by once again applying Corollary 27.4 of [14].
(ii) 
It is worth mentioning that Formula (8) follows easily by writing $(g_\lambda)_\gamma = g_{\lambda + \gamma}$.
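Formula (8) is also easy to check numerically. Below is a one-dimensional sanity check with $g = |\cdot|$ (a choice of ours, purely for illustration), comparing a brute-force grid minimization defining $\mathrm{prox}_{\gamma g_\lambda}$ with the closed form (8); the Moreau envelope of $|\cdot|$ is the Huber function recalled in the Introduction.

```python
import numpy as np

lam, gam, z = 0.7, 1.3, 2.1                  # arbitrary illustrative values

def g_lam(t):                                # Moreau envelope of g = |.| (Huber)
    return np.where(np.abs(t) <= lam, t**2 / (2 * lam), np.abs(t) - lam / 2)

# brute-force evaluation of prox_{gam g_lam}(z) over a fine grid
grid = np.linspace(-10.0, 10.0, 2_000_001)
brute = grid[np.argmin(g_lam(grid) + (grid - z) ** 2 / (2 * gam))]

# closed form (8): convex combination of the identity and prox_{(lam+gam) g}
prox_g = lambda t, s: np.sign(t) * np.maximum(np.abs(t) - s, 0.0)
closed = lam / (lam + gam) * z + gam / (lam + gam) * prox_g(z, lam + gam)

print(brute, closed)                         # both equal 0.8 up to grid resolution
```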

3. Numerical Experiments

In this section, we conduct some numerical experiments on two different minimization problems, so as to compare the proposed algorithms with other classical and recent methods.

3.1. Application to a Simple Split Feasibility Problem

We first consider the following theoretical problem, which has been investigated by Abbas et al. [21]:
$$x^* \in \operatorname{argmin}(g) \quad \text{such that} \quad A x^* \in \operatorname{argmin}(f), \tag{9}$$
where $f := \|\cdot\|_2$ is the Euclidean norm, while $g := \sum_{i=1}^{N} \max\{|x_i| - 1, 0\}$ and $A = \mathrm{Id}$.
It is obvious that $0$ belongs to $\operatorname{argmin}(g) \cap A^{-1}(\operatorname{argmin}(f))$, which is thus non-empty. Consequently, Problem (9) has the same set of solutions as the structured minimization problem
$$\min_{x \in \mathbb{R}^N} F(x) := f(x) + g_\lambda(x), \quad \text{for } \lambda > 0. \tag{10}$$
We can therefore compare the convergence of the algorithm investigated in [21] with the FB and $DR_1$ algorithms in this particular instance of Problem (4). As $A = \mathrm{Id}$, the generalized proximal mapping $\mathrm{prox}_{\gamma f, A}$ reduces to the classical proximal mapping $\mathrm{prox}_{\gamma f}$, which has a closed form. Figure 1 and Figure 2 compare the three methods through the convergence to $0$ of $F(x_n)$ and $\|x_{n+1} - x_n\|_2$. All the algorithms converge, in accordance with the theoretical results; moreover, as can be expected, the best convergence rate is achieved by the $DR_1$ algorithm.
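For reference, here is a sketch of this experiment (the starting point, parameter values, and iteration count are our own and need not match those used to produce the figures). With $A = \mathrm{Id}$, both proximal mappings have closed forms: $\mathrm{prox}_{\gamma f}$ is block soft thresholding for $f = \|\cdot\|_2$, and $\mathrm{prox}_{\gamma g}$ acts coordinatewise.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
lam, gam = 1.0, 1.0

def prox_f(x, t):                  # f = ||.||_2 (Euclidean norm, not squared)
    nx = np.linalg.norm(x)
    return np.zeros_like(x) if nx <= t else (1.0 - t / nx) * x

def prox_g(x, t):                  # g = sum_i max(|x_i| - 1, 0), coordinatewise
    s, a = np.sign(x), np.abs(x)
    return np.where(a <= 1.0, x, np.where(a <= 1.0 + t, s, s * (a - t)))

x = 5.0 * rng.standard_normal(N)   # arbitrary starting point
for _ in range(200):               # (FB Algo) with A = Id, so z_n = x_n
    u = prox_g(x, lam)
    x = prox_f((1 - gam / lam) * x + (gam / lam) * u, gam)

print(np.linalg.norm(x))           # tends to 0, the unique solution of (10)
```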

3.2. Application to Image Restoration

We now consider the classical inverse problem of image denoising and deblurring. For simplicity, we consider the case of square grayscale $N \times N$ images, which can be represented as elements of $\mathbb{R}^{N^2}$. Given an altered image $b \in \mathbb{R}^{N^2}$, we aim to compute a restored image $x$ by solving the following total variation denoising and deconvolution model (see, e.g., [9]):
$$\min_{x \in \mathbb{R}^{N^2}} F(x) := \frac{1}{2}\|L x - b\|_2^2 + \|x\|_2^2 + \lambda \|\nabla x\|_1, \tag{11}$$
where $L: \mathbb{R}^{N^2} \to \mathbb{R}^{N^2}$ is a blurring operator, $\nabla \in \mathcal{L}(\mathbb{R}^{N^2}, \mathbb{R}^{2N^2})$ (the space of linear operators from $\mathbb{R}^{N^2}$ to $\mathbb{R}^{2N^2}$) is the discrete gradient operator on $\mathbb{R}^{N^2}$, and $\lambda$ is a trade-off parameter between data fidelity and smoothness. $\|\nabla(\cdot)\|_1$ is the classical total variation regularizer introduced by Rudin, Osher, and Fatemi in their seminal paper [22]. Minimizing the total variation is known to reduce noise while preserving edges. This problem can be cast into Problem (4) by setting $f(x) := \frac{1}{2}\|L x - b\|_2^2 + \|x\|_2^2$, $g := \lambda\|\cdot\|_1$, and $A x := \nabla x$ for $x \in \mathbb{R}^{N^2}$. The explicit expression of the proximal mapping of the infimal post-composition (that is, $\mathrm{prox}_{\gamma(A \triangleright f)}(u) = A\, \mathrm{prox}_{\gamma f, A}(u)$), required by both the FB and $DR_1$ algorithms, is given in Proposition 5 of [9]. For simplicity, we consider here the case of a pure denoising process (that is, $L = \mathrm{Id}$) applied to an image altered by white Gaussian noise. The result is illustrated in Figure 3.
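As an illustration of the building blocks involved (the proximal mapping of the infimal post-composition itself requires Proposition 5 of [9] and is not reproduced here), the discrete gradient $\nabla$ and the proximity operator of $g = \lambda\|\cdot\|_1$ admit the following simple implementations; the forward-difference scheme and the boundary handling are our own conventions.

```python
import numpy as np

def grad(x):                       # discrete gradient: R^{N x N} -> R^{2 x N x N}
    g = np.zeros((2,) + x.shape)
    g[0, :-1, :] = x[1:, :] - x[:-1, :]    # vertical forward differences
    g[1, :, :-1] = x[:, 1:] - x[:, :-1]    # horizontal forward differences
    return g

def prox_g(y, t, lam):             # prox of t*lam*||.||_1: soft thresholding
    return np.sign(y) * np.maximum(np.abs(y) - t * lam, 0.0)
```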
Figure 4 and Figure 5 illustrate the behavior of the sequences generated by the two proposed algorithms and by other classical or recent methods (ADMM and the Chambolle–Pock algorithm introduced in [18]). As in the first example, the sequences generated by all the considered algorithms converge, as predicted by the theoretical results. Here again, the $DR_1$ algorithm outperforms the FB algorithm in terms of convergence rate.

4. Conclusions

In this paper, we propose a formulation of proximal split minimization problems relying on an infimal post-composition approach. This alternative formulation, together with a recent implementation of the proximity operator of the infimal post-composition introduced in [9], allows the development of additional algorithms based on the forward–backward algorithm as well as others based on the Douglas–Rachford splitting method. The corresponding global weak convergence results are derived from classical convergence results. A relaxed version has been suggested, and one could do better with versions based on an inertial technique, which has a remarkable acceleration effect. We expect that this note will become a further step towards developing other algorithms and applications.
Indeed, in image restoration applications, PnP (Plug-and-Play) approaches can make use of a pre-trained CNN (Convolutional Neural Network) denoiser to evaluate $\mathrm{prox}_{\gamma f, A}$, which is a Gaussian-like denoising subproblem, by splitting it into a data recovery term and a feature expression term. The data recovery term can be solved with the FFT (Fast Fourier Transform) thanks to an analytical solution (see [12,23] and the references therein), while the feature expression term, in this case, is nothing else than the proximity operator of $f$, which is very often available in closed form. It is worth mentioning that PnP methods have recently made significant progress, particularly with the incorporation of learning-based denoisers. Moreover, Convolutional Neural Networks (CNNs) have shown good performance through end-to-end training for image denoising and non-blind deblurring.
Finally, we would also like to stress that many algorithms and techniques in machine learning and data processing involve random projections, in general onto a lower-dimensional subspace. This naturally calls for the study of random isometric matrices $A \in \mathbb{R}^{p \times n}$, $n \le p$, such that $A^t A = I_n$ (for instance, real Haar random matrices [24], Parseval frames [25], and learned Stiefel matrices [26]). In this case, and for convex feasibility problems (i.e., $f = \delta_C$ and $g = \delta_Q$, the indicator functions of two non-empty closed convex sets $C$ and $Q$), we can easily check that the FB algorithm reduces to the following scheme:
Let $z_0 \in H_2$, fix $\lambda > 0$, let $0 < \gamma < 2\lambda$, and compute for $n = 0, 1, \dots$
$$z_{n+1} = A\, \mathrm{Proj}_C\Big( A^t\Big(\big(1 - \frac{\gamma}{\lambda}\big) z_n + \frac{\gamma}{\lambda}\, \mathrm{Proj}_Q(z_n)\Big)\Big).$$
On the other hand, the $DR_1$ algorithm takes the following form:
Let $z_0 \in H_2$, fix $\lambda > 0$, let $\gamma > 0$, and compute for $n = 0, 1, \dots$
$$\begin{cases} u_n = \frac{\lambda}{\lambda + \gamma} z_n + \frac{\gamma}{\lambda + \gamma}\, \mathrm{Proj}_Q(z_n), \\ z_{n+1} = A\, \mathrm{Proj}_C\big(A^t(2 u_n - z_n)\big) + z_n - u_n. \end{cases}$$
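A minimal sketch of these two reduced schemes, assuming for illustration that $C$ and $Q$ are Euclidean balls and that $A$ is a random isometry obtained from a QR factorization (the sets, radii, and parameters are all our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 20, 40
A, _ = np.linalg.qr(rng.standard_normal((p, n)))    # A.T @ A = I_n

c0, q0 = rng.standard_normal(n), rng.standard_normal(p)

def proj_ball(x, center, r=1.0):   # projection onto a closed Euclidean ball
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= r else center + (r / nd) * d

proj_C = lambda x: proj_ball(x, c0)                 # C = B(c0, 1) in H1
proj_Q = lambda y: proj_ball(y, q0)                 # Q = B(q0, 1) in H2

lam, gam = 1.0, 1.0
z = np.zeros(p)                    # reduced FB scheme
for _ in range(500):
    z = A @ proj_C(A.T @ ((1 - gam / lam) * z + (gam / lam) * proj_Q(z)))

w = np.zeros(p)                    # reduced DR1 scheme
for _ in range(500):
    u = lam / (lam + gam) * w + gam / (lam + gam) * proj_Q(w)
    w = A @ proj_C(A.T @ (2 * u - w)) + w - u
```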

Author Contributions

The conceptualization and writing of the original draft were done by A.M.; the software and validation, by A.W.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We would first like to thank the anonymous referees and the editor for their careful reading of the paper and for their valuable recommendations. The first author would also like to thank his research laboratory LIS (Laboratoire d’Informatique et Systèmes) and I&M team (Images et Modèles).

Conflicts of Interest

The authors declare that they have no conflicts of interest, and the manuscript has no associated data.

References

1. Martinet, B. Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inf. Rech. Oper. 1970, R-3, 154–159.
2. Eckstein, J. Splitting Methods for Monotone Operators with Application to Parallel Optimization. Doctoral Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1989.
3. Attouch, H.; Peypouquet, J.; Redont, P. Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 2018, 457, 1095–1117.
4. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
5. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
6. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
7. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
8. Moudafi, A.; Thakur, B.S. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110.
9. Briceño-Arias, L.M.; Pustelnik, N. Infimal post-composition approach for composite convex optimization applied to image restoration. Signal Process. 2024, 223, 109549.
10. Côté, F.D.; Psaromiligkos, I.N.; Gross, W.J. A theory of generalized proximity operator for ADMM. In Proceedings of the IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, QC, Canada, 14–16 November 2017; pp. 578–582.
11. Themelis, A.; Patrinos, P. Douglas–Rachford splitting and ADMM for nonconvex optimization: Tight convergence results. SIAM J. Optim. 2020, 30, 149–181.
12. Chan, S.H.; Wang, X.; Elgendy, O.A. Plug-and-Play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 2017, 3, 84–98.
13. Bredies, K.; Sun, H. A proximal point analysis of the preconditioned alternating direction method of multipliers. J. Optim. Theory Appl. 2017, 173, 878–907.
14. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer: Cham, Switzerland, 2017.
15. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer: Berlin/Heidelberg, Germany, 1998.
16. Burger, M.; Sawatzky, A.; Steidl, G. First order algorithms in variational image processing. In Splitting Methods in Communication, Imaging, Science, and Engineering; Glowinski, R., Osher, S., Yin, W., Eds.; Scientific Computation; Springer: Cham, Switzerland, 2016.
17. Moreau, J.-J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 1965, 93, 273–299.
18. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145.
19. Condat, L.; Kitahara, D.; Contreras, A.; Hirabayashi, A. Proximal splitting algorithms for convex optimization: A tour of recent advances, with new twists. SIAM Rev. 2023, 65, 375–435.
20. Moudafi, A.; Théra, M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1997, 94, 425–448.
21. Abbas, M.; AlShahrani, M.; Ansari, Q.H.; Iyiola, O.S.; Shehu, Y. Iterative methods for solving proximal split minimization problems. Numer. Algorithms 2018, 78, 193–215.
22. Rudin, L.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268.
23. Lv, T.; Pan, Z.; Wei, W.; Yang, G.; Song, J.; Wang, X.; Sun, L.; Li, Q.; Sun, X. Iterative deep neural networks based on proximal gradient descent for image restoration. PLoS ONE 2022, 17, e0276373.
24. Couillet, R.; Liao, Z. Random Matrix Methods for Machine Learning; Cambridge University Press: Cambridge, UK, 2022.
25. Hasannasab, M.; Hertrich, J.; Neumayer, S.; Plonka, G.; Setzer, S.; Steidl, G. Parseval proximal neural networks. J. Fourier Anal. Appl. 2020, 26, 59.
26. Chaudhry, A.; Khan, N.; Dokania, P.K.; Torr, P.H.S. Continual learning in low-rank orthogonal subspaces. In Proceedings of NeurIPS’20: 34th Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–12 December 2020; Article 830, pp. 9900–9911.
Figure 1. Profiles of $F(x_n)$ for different numerical schemes.
Figure 2. Profile of $\|x_{n+1} - x_n\|$ for different numerical schemes.
Figure 3. Application to total variation-based image denoising. Original image (left), altered input image (center), restored image (right) using $\lambda = 0.042$.
Figure 4. Profiles of $F(x_n)$ for different numerical schemes.
Figure 5. Evolution of the objective function values for different numerical schemes.