Abstract
A new accelerated algorithm for approximating the common fixed points of a countable family of G-nonexpansive mappings is proposed, and a weak convergence theorem based on our main results is established in the setting of Hilbert spaces with a symmetric directed graph G. As applications, we apply our results to solve classification and convex minimization problems. We also apply our proposed algorithm to estimate the weights connecting the hidden layer and the output layer in a regularized extreme learning machine. In numerical experiments, the proposed algorithm achieves higher accuracy on the testing set than FISTA-S, FISTA, and nAGA.
1. Introduction
Let H be a real Hilbert space with the norm $\|\cdot\|$ and C be a nonempty closed convex subset of H. A mapping $T: C \to C$ is said to be nonexpansive if it satisfies the following symmetric contractive-type condition:
$\|Tx - Ty\| \le \|x - y\|$
for all $x, y \in C$; see [1].
An element $x \in C$ is a fixed point of T if $Tx = x$, and $F(T)$ stands for the set of all fixed points of T.
Fixed point theory, i.e., the study of the conditions under which a map admits a fixed point, is an extensive area of research due to its numerous applications in many fields. It started with Banach’s work, which established the existence of a unique fixed point for a contraction using a classical theorem known as the Banach contraction principle; see [2]. The contraction principle of Banach has been expanded and generalized in various directions due to its applications in mathematics and other fields. One of the more recent generalizations is due to Jachymski.
Jachymski [3] introduced the structure of a graph on metric spaces in fixed point theory and obtained certain conditions for a self-mapping to be a Picard operator. Several authors [4,5,6,7] proved fixed point theorems for new types of contractions on metric spaces endowed with graphs. Aleomraninejad et al. [8] used the idea of Reich and Zaslavski [9] and proved a strong convergence theorem for G-contractive and G-nonexpansive mappings. On hyperbolic metric spaces, Alfuraidan and Khamsi [10] gave a definition of G-monotone nonexpansive multivalued mappings and proved the existence of fixed points for multivalued contraction mappings and monotone single-valued mappings. Later on, Alfuraidan [11] studied the existence of fixed points of G-monotone nonexpansive mappings and extended the results of Jachymski [3]. For approximating common fixed points of a finite family of G-nonexpansive mappings, Suantai et al. [12] used the shrinking projection with the parallel monotone hybrid method. They also used a graph to prove a strong convergence theorem in Hilbert spaces under specific conditions, and they then applied their iterative scheme to signal recovery.
In the past decade, algorithms for approximating fixed points of G-nonexpansive mappings without inertial techniques have been proposed by many researchers; see [3,8,10,11,12,13,14,15,16,17,18]. However, more efficient algorithms are needed for solving such problems. As a result, some accelerated fixed-point algorithms using inertial techniques have been proposed to improve convergence behavior; see [19,20,21,22,23,24,25,26,27]. Recently, Janngam and Wattanataweekul [28] proved a weak convergence theorem for a countable family of G-nonexpansive mappings in a Hilbert space by using a coordinate affine structure with an inertial technique. They also applied their method to image recovery.
Inspired by previous research described above, we introduce a new accelerated algorithm based on the concept of the inertial technique for finding a common fixed point of a family of G-nonexpansive mappings in Hilbert spaces. We employ our result to solve data classification and convex minimization problems and also compare our algorithm efficiency to that of FISTA-S, FISTA, and nAGA.
This paper is organized as follows: in Section 2, we give some terminology as well as some facts that will be useful in later sections. We state and prove the weak convergence of our algorithm in Section 3. As applications, we apply our method to solve convex minimization and data classification problems in Section 4 and provide numerical experiments on classification problems in Section 5. The last section of our paper, Section 6, is a summary.
2. Preliminaries
Let C be a nonempty subset of a real Banach space X. Let $\Delta = \{(x, x) : x \in C\}$, where Δ stands for the diagonal of the Cartesian product $C \times C$. Consider a directed graph G in which the set $V(G)$ of its vertices corresponds to C and the set $E(G)$ of its edges contains all loops, i.e., $\Delta \subseteq E(G)$. A directed graph G is said to have parallel edges if it contains two or more edges with both the same tail vertex and the same head vertex.
Assume that G does not have parallel edges. Then, G can be identified with the pair $(V(G), E(G))$. The conversion of a graph G is denoted by $G^{-1}$; it is the graph obtained from G by reversing the direction of its edges. Thus, we have
$E(G^{-1}) = \{(x, y) \in C \times C : (y, x) \in E(G)\}.$
Let us give some definitions of basic graph properties which are used in this paper (see [29] for more details).
Definition 1.
A graph G is said to be
- (i) symmetric if, for all $x, y \in C$, $(x, y) \in E(G)$ implies $(y, x) \in E(G)$;
- (ii) transitive if, for any $x, y, z \in C$ with $(x, y) \in E(G)$ and $(y, z) \in E(G)$, we have $(x, z) \in E(G)$;
- (iii) connected if there is a path between any two vertices of the graph G.
The definitions of G-contraction [3] and G-nonexpansive [13] mappings are given as follows.
Definition 2.
A mapping $T: C \to C$ is said to be
- (i) a G-contraction if
- (a) T is edge-preserving, i.e., $(x, y) \in E(G)$ implies $(Tx, Ty) \in E(G)$ for all $x, y \in C$;
- (b) there exists $\rho \in [0, 1)$ such that $\|Tx - Ty\| \le \rho\|x - y\|$ for all $(x, y) \in E(G)$, where ρ is called a contraction factor;
- (ii) G-nonexpansive if
- (a) T is edge-preserving;
- (b) $\|Tx - Ty\| \le \|x - y\|$ for all $(x, y) \in E(G)$.
Example 1.
Let . Suppose that if and only if or , where . Let be defined by
for all . Then, both S and T are G-nonexpansive but not nonexpansive (see [30] for more details).
Definition 3.
A mapping $T: C \to C$ is called G-demiclosed at 0 if, for any sequence $\{x_n\}$ in C with $x_n \rightharpoonup x$ and $\|x_n - Tx_n\| \to 0$, we have $Tx = x$.
To prove our main result, we need the definition of the coordinate affine of the graph G as follows.
Definition 4
([28]). Assume that X is a linear space and $G = (V(G), E(G))$ is a directed graph with $V(G) \subseteq X$. Then, $E(G)$ is said to be
- (i) left coordinate affine if $(\alpha x + \beta y, z) \in E(G)$ for all $(x, z), (y, z) \in E(G)$ and all α, β ≥ 0 with $\alpha + \beta = 1$;
- (ii) right coordinate affine if $(x, \alpha y + \beta z) \in E(G)$ for all $(x, y), (x, z) \in E(G)$ and all α, β ≥ 0 with $\alpha + \beta = 1$.
We say that $E(G)$ is coordinate affine if $E(G)$ is both left and right coordinate affine.
The results of the following lemmas can be used to prove our main theorem; see also [19,31,32].
Lemma 1
([31]). Let and be sequences of nonnegative real numbers such that and . Suppose that
Then, exists.
Lemma 2
([32]). For a real Hilbert space H, the following results hold:
(i) For any $x, y \in H$ and $\alpha \in [0, 1]$, $\|\alpha x + (1 - \alpha)y\|^2 = \alpha\|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2$.
(ii) For any $x, y \in H$, $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$.
Lemma 3
([19]). Let $\{a_n\}$ and $\{\theta_n\}$ be sequences of nonnegative real numbers such that
$a_{n+1} \le (1 + \theta_n)a_n + \theta_n a_{n-1}$ for all $n \ge 1$.
Then,
$a_{n+1} \le K \cdot \prod_{j=1}^{n}(1 + 2\theta_j),$
where $K = \max\{a_1, a_2\}$. If $\sum_{n=1}^{\infty}\theta_n < \infty$, then $\{a_n\}$ is bounded.
Let $\{x_n\}$ be a sequence in H. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to a point $x \in H$. Similarly, $x_n \to x$ will symbolize strong convergence. If there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup v$ for some $v \in H$, then v is called a weak cluster point of $\{x_n\}$. The set of all weak cluster points of $\{x_n\}$ is denoted by $\omega_w(x_n)$.
The following lemma was proved by Moudafi and Al-Shemas; see [33].
Lemma 4
([33]). Let $\{x_n\}$ be a sequence in a real Hilbert space H such that there exists a nonempty set $S \subseteq H$ satisfying:
(i) For any $z \in S$, $\lim_{n\to\infty}\|x_n - z\|$ exists.
(ii) Any weak cluster point of $\{x_n\}$ belongs to S.
Then, there exists $x^* \in S$ such that $x_n \rightharpoonup x^*$.
Let $\{T_n\}$ and τ be families of nonexpansive mappings of C into itself such that $\emptyset \ne F(\tau) \subseteq \bigcap_{n=1}^{\infty} F(T_n)$, where $F(\tau)$ is the set of all common fixed points of the mappings in τ. A sequence $\{T_n\}$ satisfies the NST-condition (I) with τ if, for any bounded sequence $\{x_n\}$ in C,
$\lim_{n\to\infty}\|x_n - T_n x_n\| = 0$ implies $\lim_{n\to\infty}\|x_n - T x_n\| = 0$
for all $T \in \tau$; see [34]. If $\tau = \{T\}$, a singleton, then $\{T_n\}$ satisfies the NST-condition (I) with T.
Example 2
([30]). Let . Define , where for all . Therefore, is a family of G-nonexpansive mappings and satisfies the NST-condition.
Let $A: H \to 2^H$ be a maximal monotone operator and $c > 0$. The resolvent of A is defined by $J_{cA} := (I + cA)^{-1}$, where I is the identity operator. If $A = \partial g$ for some $g \in \Gamma_0(H)$, where $\Gamma_0(H)$ stands for the set of proper lower semicontinuous convex functions from H into $(-\infty, +\infty]$, then $J_{cA} = (I + c\,\partial g)^{-1}$. The forward-backward operator of lower semicontinuous and convex functions f and g has the following definition:
A forward-backward operator T is defined by $T := \operatorname{prox}_{cg}(I - c\nabla f)$ for $c > 0$, where $\nabla f$ is the gradient operator of the function f and $\operatorname{prox}_{cg} := (I + c\,\partial g)^{-1}$ (see [35,36]). Moreau [37] defined the operator $\operatorname{prox}_{cg}$ as the proximity operator with respect to c and g. If $c \in (0, 2/L)$, then T is nonexpansive, where L is a Lipschitz constant of $\nabla f$.
We have the following remark for the definition of the proximity operator; see [38].
Remark 1.
Let $g: \mathbb{R}^n \to \mathbb{R}$ be given by $g(x) = \lambda\|x\|_1$. The proximity operator of g is evaluated by the following formula:
$\operatorname{prox}_{\lambda\|\cdot\|_1}(x) = \big(\operatorname{sign}(x_i)\max\{|x_i| - \lambda, 0\}\big)_{i=1}^{n},$
where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$ and $\lambda > 0$.
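The formula above is the familiar componentwise soft-thresholding operator. As a minimal illustrative sketch (the function name soft_threshold is our own choice, not from the paper), it can be evaluated in Python with NumPy as follows:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximity operator of g(x) = lam * ||x||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Example: prox of 0.5 * ||.||_1 applied to a small vector
x = np.array([1.2, -0.3, 0.0, 2.5])
print(soft_threshold(x, 0.5))  # -> [ 0.7  0.   0.   2. ]
```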
The following lemma was proved by Bussaban et al.; see [20].
Lemma 5.
Let H be a real Hilbert space and T be the forward-backward operator of f and g, where g is a proper lower semicontinuous convex function from H into $(-\infty, +\infty]$, and f is a convex differentiable function from H into $\mathbb{R}$ with gradient $\nabla f$ being L-Lipschitz continuous for some $L > 0$. If $T_n$ is the forward-backward operator of f and g with respect to $c_n \in (0, 2/L)$ such that $c_n \to c$ with $c \in (0, 2/L)$, then $\{T_n\}$ satisfies the NST-condition (I) with T.
3. Main Results
Let C be a nonempty closed and convex subset of a real Hilbert space H endowed with a directed graph $G = (V(G), E(G))$ such that $V(G) = C$. Let $\{T_n\}$ be a family of G-nonexpansive mappings of C into itself such that $F := \bigcap_{n=1}^{\infty} F(T_n) \ne \emptyset$.
The following proposition is useful for our main theorem.
Proposition 1.
Let and be such that , . Let $\{x_n\}$ be a sequence generated by Algorithm 1. Suppose $E(G)$ is symmetric, transitive, and right coordinate affine. Then, and for all $n \in \mathbb{N}$.
Algorithm 1: (ASA) An Accelerated S-algorithm
Proof.
We shall prove the results by using strong mathematical induction. From Algorithm 1, we obtain
Since and is edge preserving, we obtain Again, by Algorithm 1, we obtain
Since and is edge preserving, we obtain Next, we assume that and for all By Algorithm 1, we obtain
and
We now prove a weak convergence theorem for a countable family of G-nonexpansive mappings in a real Hilbert space using Algorithm 1.
Theorem 1.
Let C be a nonempty closed and convex subset of a real Hilbert space H endowed with a directed graph G such that $V(G) = C$ and $E(G)$ is symmetric, transitive, and right coordinate affine. Let $\{x_n\}$ be a sequence in H defined by Algorithm 1. Suppose that $\{T_n\}$ satisfies the NST-condition (I) with T such that $F(T) = \bigcap_{n=1}^{\infty}F(T_n) \ne \emptyset$ and the conditions of Proposition 1 hold. Then, $\{x_n\}$ converges weakly to a point in $F(T)$.
Proof.
Let By the definition of and we obtain
and
which implies that
By the definition of we obtain
It follows from (8) that
Applying Lemma 3, we obtain where Since we obtain that is bounded and so are and . Thus,
By Lemma 1 and (9), we obtain exists. By Lemma 2(i) and the definition of we obtain
Let From the boundedness of and (6), we obtain
Since and (10), we obtain
Since $\{T_n\}$ satisfies the NST-condition (I) with T, we obtain that $\|x_n - Tx_n\| \to 0$ as $n \to \infty$. Let $\omega_w(x_n)$ be the set of all weak cluster points of $\{x_n\}$. Thus, by the G-demiclosedness of T at 0, $\omega_w(x_n) \subseteq F(T)$. From Lemma 4, we conclude that $x_n \rightharpoonup x^*$ with $x^* \in F(T)$, as required. □
Corollary 1.
Let C be a nonempty closed and convex subset of a real Hilbert space H and let $\{T_n\}$ be a family of nonexpansive mappings of C into itself. Let the parameters be as in Theorem 1 and let $\{x_n\}$ be a sequence in H defined by Algorithm 1. Suppose that $\{T_n\}$ satisfies the NST-condition (I) with T such that $F(T) = \bigcap_{n=1}^{\infty}F(T_n) \ne \emptyset$. Then, $\{x_n\}$ converges weakly to a point in $F(T)$.
4. Applications
In the past decade, the extreme learning machine (ELM) [39], a learning algorithm for single-hidden layer feedforward networks (SLFNs), has been extensively studied in various research topics in machine learning and artificial intelligence, such as face classification, image segmentation, regression, and data classification problems. ELM has been shown to have an extremely fast learning speed and, in most cases, better performance than gradient-based learning methods such as backpropagation. The target of this model is to find the parameter β that solves the following minimization problem, called ordinary least squares (OLS):
$\min_{\beta}\ \|\mathbf{H}\beta - \mathbf{T}\|_2^2, \qquad (16)$
where $\|\cdot\|_2$ is the $l_2$-norm, $\mathbf{T}$ is the target matrix of the data, β is the weight matrix connecting the hidden layer and the output layer, and $\mathbf{H}$ is the hidden layer output matrix. In general mathematical modeling, there are several methods to estimate the solution of (16); in this case, the solution is obtained by $\beta = \mathbf{H}^{\dagger}\mathbf{T}$, where $\mathbf{H}^{\dagger}$ is the Moore–Penrose generalized inverse of $\mathbf{H}$. However, in a real situation, the number of unknown variables M may be much larger than the number of training data N, which can cause the network to overfit. On the other hand, the accuracy is low when the number of hidden nodes M is small. Thus, in order to improve (16), several regularization methods were introduced. The two classical standard techniques for improving (16) are subset selection and ridge regression (sometimes called Tikhonov regularization) [40].
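As a concrete illustration of the unregularized OLS step described above (before any regularization is applied), the short NumPy sketch below computes the minimum-norm least-squares weight via the Moore–Penrose pseudoinverse; the variable names and sizes are hypothetical choices of ours:

```python
import numpy as np

# Hypothetical sizes: N training samples, M hidden nodes, m output dimensions
N, M, m = 100, 50, 3
rng = np.random.default_rng(0)
H_mat = rng.standard_normal((N, M))   # stands in for the hidden layer output matrix
T_mat = rng.standard_normal((N, m))   # stands in for the target matrix

# OLS/ELM solution: beta = H^dagger T, the minimum-norm least-squares solution
beta = np.linalg.pinv(H_mat) @ T_mat
print(beta.shape)  # (M, m)
```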
In this paper, we focus on the following problem, called the least absolute shrinkage and selection operator (LASSO) [41]:
$\min_{\beta}\ \|\mathbf{H}\beta - \mathbf{T}\|_2^2 + \lambda\|\beta\|_1, \qquad (17)$
where $\lambda > 0$ is a regularization parameter. LASSO tries to retain the good features of both subset selection and ridge regression [41]. After the regularization methods and the original ELM were introduced for improving the performance of OLS, the regularized extreme learning machine [42] was proposed five years later and applied to solve regression problems. In general, (17) can be rewritten as the minimization of the sum of two convex functions, that is,
$\min_{x}\ f(x) + g(x), \qquad (18)$
where f is a smooth convex function with gradient having Lipschitz constant L and g is a convex smooth (or possibly non-smooth) function. By Fermat's rule, $x^*$ is a minimizer of $f + g$ if and only if $0 \in \nabla f(x^*) + \partial g(x^*)$, where $\nabla f$ is the gradient of f and $\partial g$ is the subdifferential of g (see [35] for more details). In fixed point theory, Parikh and Boyd [43] characterized the solutions of (18) as follows: $x^*$ is a minimizer of $f + g$ if and only if
$x^* = \operatorname{prox}_{cg}\big(x^* - c\nabla f(x^*)\big), \quad c > 0,$
where $\operatorname{prox}_{cg}$ is the proximity operator of cg, defined by $\operatorname{prox}_{cg} := (I + c\,\partial g)^{-1} = J_{c\,\partial g}$, the resolvent of $c\,\partial g$, and I is the identity operator. Problem (18) can be rewritten as a more general problem, called the problem of finding a zero of the sum of two operators: find $x^*$ such that
$0 \in (A + B)x^*, \qquad (19)$
where $A, B: H \to 2^H$ are two set-valued operators. In this case, we assume that A is a maximal monotone operator and B is an L-Lipschitz continuous operator. For convenience, (19) can also be rewritten as
$x^* = J_{cA}(I - cB)x^* =: Tx^*,$
where $c > 0$. It is also known that T is nonexpansive if $c \in (0, 2/L)$, where L is a Lipschitz constant of B.
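To connect this abstract splitting with the LASSO problem (17), the following Python sketch collects the smooth part, its gradient, a Lipschitz constant of the gradient, and the nonsmooth part. It is illustrative only: the function name lasso_parts is ours, and we assume the common scaling $f(\beta) = \tfrac{1}{2}\|\mathbf{H}\beta - \mathbf{T}\|_2^2$, which may differ from the scaling used in the paper.

```python
import numpy as np

def lasso_parts(H, T, lam):
    """Splitting of the LASSO objective, assuming f = 0.5*||H b - T||_2^2 and g = lam*||b||_1."""
    def f(b):
        return 0.5 * np.sum((H @ b - T) ** 2)
    def grad_f(b):
        return H.T @ (H @ b - T)
    # A Lipschitz constant of grad f: the largest eigenvalue of H^T H (squared spectral norm of H)
    L = np.linalg.norm(H, 2) ** 2
    def g(b):
        return lam * np.sum(np.abs(b))
    return f, grad_f, g, L

# Usage with the hypothetical H_mat, T_mat from the OLS sketch above:
# f, grad_f, g, L = lasso_parts(H_mat, T_mat, lam=1e-3)
```

With these pieces, the forward-backward operator $T = \operatorname{prox}_{cg}(I - c\nabla f)$ is the map applied at each step of the algorithms discussed next.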
We are interested in applying our proposed method to solve a convex minimization problem; we compare the convergence behavior of our proposed algorithm with that of other algorithms and give some applications to classification problems. Our proposed method will be used to solve (18). Over the past two decades, several algorithms have been introduced for solving problem (18). A simple and classical algorithm is the forward-backward algorithm (FBA), which was introduced by Lions and Mercier [21].
The forward-backward algorithm (FBA) is defined by
$x_{n+1} = \operatorname{prox}_{c_n g}\big(x_n - c_n\nabla f(x_n)\big), \quad n \ge 1,$
where $x_1 \in H$ and $\{c_n\}$ is a sequence in $(0, 2/L)$, with L a Lipschitz constant of $\nabla f$. A technique for improving the speed and convergence behavior of such algorithms was first introduced by Polyak [44] by adding an inertial step, the so-called inertial technique. Since then, many authors have employed the inertial technique to accelerate their algorithms for various kinds of problems; see [19,20,22,23,24,25,26]. The performance of FBA can be improved using iterative methods with inertial steps, as described below.
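Before turning to the inertial variants, a minimal sketch of the plain forward-backward iteration applied to the LASSO objective (reusing our own variable names and the 1/2-scaling assumed above) might look as follows:

```python
import numpy as np

def forward_backward(H, T, lam, n_iter=500):
    """Plain FBA for 0.5*||H b - T||_2^2 + lam*||b||_1 with a constant step size c in (0, 2/L)."""
    L = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the gradient
    c = 1.0 / L                                   # any fixed c in (0, 2/L) is admissible
    b = np.zeros((H.shape[1],) + T.shape[1:])
    for _ in range(n_iter):
        z = b - c * (H.T @ (H @ b - T))           # forward (gradient) step
        b = np.sign(z) * np.maximum(np.abs(z) - c * lam, 0.0)  # backward (proximal) step
    return b
```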
A fast iterative shrinkage-thresholding algorithm (FISTA) [25] is defined by
$x_n = \operatorname{prox}_{\frac{1}{L}g}\big(y_n - \tfrac{1}{L}\nabla f(y_n)\big),$
$t_{n+1} = \dfrac{1 + \sqrt{1 + 4t_n^2}}{2}, \qquad \theta_n = \dfrac{t_n - 1}{t_{n+1}},$
$y_{n+1} = x_n + \theta_n(x_n - x_{n-1}),$
where $t_1 = 1$, $y_1 = x_0 \in H$, and $\theta_n$ is the inertial step size. Beck and Teboulle [25] applied FISTA to image recovery and proved its convergence rate. The inertial step size of FISTA was first introduced by Nesterov [45].
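For comparison with the plain FBA sketch above, the following compact Python sketch implements the classical FISTA update with the Nesterov step-size sequence $t_{n+1} = (1 + \sqrt{1 + 4t_n^2})/2$; variable names and the 1/2-scaling of the smooth term are our own assumptions:

```python
import numpy as np

def fista(H, T, lam, n_iter=500):
    """FISTA for 0.5*||H b - T||_2^2 + lam*||b||_1 with the Beck-Teboulle step-size rule."""
    L = np.linalg.norm(H, 2) ** 2
    y = np.zeros((H.shape[1],) + T.shape[1:])
    b_prev = y.copy()
    t = 1.0
    for _ in range(n_iter):
        z = y - (H.T @ (H @ y - T)) / L                         # forward step at the extrapolated point
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # proximal (soft-thresholding) step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0       # Nesterov step-size update
        y = b + ((t - 1.0) / t_next) * (b - b_prev)             # inertial extrapolation
        b_prev, t = b, t_next
    return b
```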
A fast iterative shrinkage-thresholding algorithm-Siteration (FISTA-S) [27] is defined by
where the control parameters are chosen as in [27]. Bussaban et al. [27] applied FISTA-S to image recovery and proved a weak convergence theorem.
A new accelerated proximal gradient algorithm (nAGA) [26] is defined by
where the step sizes and inertial parameters are chosen as in [26]. The nAGA was introduced, together with a convergence theorem, by Verma and Shukla [26]. This method was used to solve nonsmooth convex minimization problems with sparsity-inducing regularizers in the multitask learning framework.
Theorem 2.
Let H be a Hilbert space, A be a maximal monotone operator on H, and B be an L-Lipschitz continuous operator on H. Let $\{c_n\}$ be a sequence in $(0, 2/L)$ such that $c_n \to c \in (0, 2/L)$. Define $T_n := J_{c_n A}(I - c_n B)$ and $T := J_{cA}(I - cB)$. Suppose that $(A + B)^{-1}(0) \ne \emptyset$. Let $\{x_n\}$ be a sequence in H defined by Algorithm 1. Then, $\{x_n\}$ converges weakly to a point in $(A + B)^{-1}(0)$.
Proof.
Using Proposition 26.1(iv) (see [35]), we have that $T_n$ and T are nonexpansive mappings such that $F(T) = (A + B)^{-1}(0)$. Then, the proof is completed by Theorem 1 and Lemma 5. □
The convergence of Algorithm 2 is obtained by using our main result.
Algorithm 2: (FBASA) A forward-backward accelerated S-algorithm
Theorem 3.
Let f be a smooth convex function with gradient having a Lipschitz constant L, and let g be a convex function. Let $\{c_n\}$ be a sequence in $(0, 2/L)$ such that $c_n$ converges to $c \in (0, 2/L)$, and let $\{x_n\}$ be a sequence generated by Algorithm 2, where the control parameters are the same as in Algorithm 1. Then,
(i) , where and
(ii) $\{x_n\}$ converges weakly to a point in $\operatorname{Argmin}(f + g)$.
Proof.
We know that T and $T_n$ are nonexpansive operators for all n; see [35]. Then, $\{T_n\}$ satisfies the NST-condition (I) with T by using Lemma 5. We obtain the desired result immediately from Theorem 1 by putting $E(G) = H \times H$, the complete graph, on H. □
5. Numerical Experiments
The data classification problem is one of the most important applications of convex minimization. We illustrate the process of reformulating the data classification problem in machine learning as a convex minimization problem.
We first present the basic idea of extreme learning machines for the data classification problem and then use our algorithm to solve this problem in numerical experiments. Moreover, the performances of Algorithm 2, FISTA-S, FISTA, and nAGA are compared.
Extreme learning machine (ELM). Let $R := \{(x_i, t_i) : x_i \in \mathbb{R}^n,\ t_i \in \mathbb{R}^m,\ i = 1, 2, \ldots, N\}$ be a training set of N distinct samples, where $x_i$ is the input data and $t_i$ is the target. A standard SLFN with activation function G (for instance, sigmoid) and M hidden nodes can be written as
$o_i = \sum_{j=1}^{M}\beta_j G(\langle w_j, x_i\rangle + b_j), \quad i = 1, 2, \ldots, N,$
where $\beta_j$ is the weight vector connecting the j-th hidden node and the output nodes, $w_j$ is the weight vector connecting the input nodes and the j-th hidden node, and $b_j$ is the threshold of the j-th hidden node. The objective of a standard SLFN is to approximate these N samples with zero error, that is, there exist $\beta_j$, $w_j$, and $b_j$ such that
$\sum_{j=1}^{M}\beta_j G(\langle w_j, x_i\rangle + b_j) = t_i, \quad i = 1, 2, \ldots, N.$
We can derive a simple matrix equation from the above N equations as follows:
$\mathbf{H}\beta = \mathbf{T},$
where $\mathbf{H} = [G(\langle w_j, x_i\rangle + b_j)]_{N \times M}$ is the hidden layer output matrix, $\beta = [\beta_1, \ldots, \beta_M]^T$, and $\mathbf{T} = [t_1, \ldots, t_N]^T$. A standard SLFN aims to estimate $\beta_j$, $w_j$, and $b_j$ to solve this system, whereas an ELM finds only β, with $w_j$ and $b_j$ chosen at random.
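A short sketch of how the hidden layer output matrix $\mathbf{H}$ is typically formed in an ELM, with randomly chosen input weights and biases and a sigmoid activation, is given below. This is our own illustrative code (the function name elm_hidden_matrix and the uniform sampling range are assumptions), not the authors' implementation:

```python
import numpy as np

def elm_hidden_matrix(X, M, seed=0):
    """Build the N x M hidden layer output matrix H with random weights/biases and sigmoid activation."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, M))  # random input-to-hidden weights w_j
    b = rng.uniform(-1.0, 1.0, size=M)                # random hidden-node biases b_j
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid applied to <w_j, x_i> + b_j

# Example: 150 samples with 4 features (e.g., Iris-sized data), 100 hidden nodes
X = np.random.default_rng(1).standard_normal((150, 4))
H = elm_hidden_matrix(X, M=100)
print(H.shape)  # (150, 100)
```

The output weight β is then obtained by applying one of the algorithms above (e.g., Algorithm 2 or FISTA) to the regularized problem (17) with this $\mathbf{H}$ and the target matrix $\mathbf{T}$.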
In the experiments on classification problems, we employ model (17) as the convex minimization problem to be solved. We set $f(\beta) = \|\mathbf{H}\beta - \mathbf{T}\|_2^2$ and $g(\beta) = \lambda\|\beta\|_1$. Next, we use the Iris dataset to classify iris plant types and the Heart Disease UCI dataset to identify heart patients; the datasets are detailed as follows:
Iris dataset [46]. This dataset has three classes of 50 examples, each of which represents a different variety of iris plant. The purpose is to identify each iris plant species based on the length of its sepals and petals.
Heart Disease UCI dataset [47]. Although there are 76 attributes in the original dataset, all published experiments only use 14 of them. Data on patients with heart disease are provided in this dataset. We divide the data into two classes based on the predicted attributes.
The datasets were graciously provided by https://archive.ics.uci.edu (accessed in April 2020).
All control parameters are set to the values shown in Table 1, where $\mathbf{H}$ is the hidden layer output matrix of the training data and the activation function is the sigmoid function. Each dataset is split into training and testing sets, as indicated in Table 2. We evaluated the accuracy of the output data by
$\text{accuracy} = \dfrac{\text{number of correctly predicted data}}{\text{number of all data}} \times 100.$
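For completeness, the accuracy measure above can be computed as in the following sketch, which assumes a one-hot encoding of the targets (the exact encoding used in the experiments may differ):

```python
import numpy as np

def accuracy(outputs, targets):
    """Percentage of samples whose predicted class (argmax of the output) matches the target class."""
    pred = np.argmax(outputs, axis=1)   # predicted class per sample
    true = np.argmax(targets, axis=1)   # true class per sample (one-hot targets assumed)
    return 100.0 * np.mean(pred == true)

# Usage (hypothetical): accuracy(H_test @ beta, T_test)
```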
Table 1.
Parameters of each method.
Table 2.
Training and testing sets of the Iris and Heart Disease UCI datasets.
From the results in Table 3, we conclude that, with the same number of hidden nodes M, the proposed learning algorithm achieves high performance in terms of accuracy. The weight computed by Algorithm 2 converges to the optimal weight faster and yields better accuracy than those computed by FISTA-S, FISTA, and nAGA.
Table 3.
Performance comparison using different methods at the 400th iteration.
6. Conclusions
We introduced and studied an algorithm for approximating the common fixed points of a countable family of G-nonexpansive mappings and proved the weak convergence of our algorithm. In addition, we gave an application of our result to solving data classification and convex minimization problems. Finally, our numerical experiments show that the proposed algorithm provides higher accuracy than FISTA-S, FISTA, and nAGA.
In future work, we aim to find new models and methods for data prediction and classification of real datasets in medical science and create new innovations for health care service.
Author Contributions
Conceptualization, R.W.; Formal analysis, K.J. and R.W.; Investigation, K.J.; Methodology, R.W.; Supervision, R.W.; Validation, R.W.; Writing—original draft, K.J.; Writing—review and editing, R.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The first author was supported by Fundamental Fund 2022, Chiang Mai University, Thailand, under the supervision of Suthep Suantai. The second author would like to thank Ubon Ratchathani University, Thailand.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Berinde, V. A Modified Krasnosel'skiǐ–Mann Iterative Algorithm for Approximating Fixed Points of Enriched Nonexpansive Mappings. Symmetry 2022, 14, 123.
- Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fund. Math. 1922, 3, 133–181.
- Jachymski, J. The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 2008, 136, 1359–1373.
- Bojor, F. Fixed point of ψ-contraction in metric spaces endowed with a graph. Ann. Univ. Craiova Math. Comput. Sci. Ser. 2010, 37, 85–92.
- Chifu, C.; Petruşel, G. Generalized contractions in metric spaces endowed with a graph. Fixed Point Theory Appl. 2012, 2012, 161.
- Acar, Ö.; Altun, I. Multivalued F-contractive mappings with a graph and some fixed point results. Publ. Math. Debr. 2016, 88, 305–317.
- Acar, Ö.; Aydi, H.; De la Sen, M. New Fixed Point Results via a Graph Structure. Mathematics 2021, 9, 1013.
- Aleomraninejad, S.M.A.; Rezapour, S.; Shahzad, N. Some fixed point results on a metric space with a graph. Topol. Appl. 2012, 159, 659–663.
- Reich, S.; Zaslavski, A.J. Convergence of inexact iterative schemes for nonexpansive set-valued mappings. Fixed Point Theory Appl. 2010, 2010, 518243.
- Alfuraidan, M.R.; Khamsi, M.A. Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph. Fixed Point Theory Appl. 2015, 2015, 44.
- Alfuraidan, M.R. Fixed points of monotone nonexpansive mappings with a graph. Fixed Point Theory Appl. 2015, 2015, 49.
- Suantai, S.; Kankam, K.; Cholamjiak, P.; Cholamjiak, W. A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. 2021, 40, 145.
- Tiammee, J.; Kaewkhao, A.; Suantai, S. On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 2015, 187.
- Tripak, O. Common fixed points of G-nonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 2016, 87.
- Sridarat, P.; Suparatulatorn, R.; Suantai, S.; Cho, Y.J. Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 2019, 42, 2361–2380.
- Glowinski, R.; Le Tallec, P. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics; SIAM: Philadelphia, PA, USA, 1989.
- Haubruge, S.; Nguyen, V.H.; Strodiot, J.J. Convergence analysis and applications of the Glowinski–Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1998, 97, 645–673.
- Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229.
- Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
- Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30.
- Lions, P.L.; Mercier, B. Splitting Algorithms for the Sum of Two Nonlinear Operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
- Janngam, K.; Suantai, S. An accelerated forward-backward algorithm with applications to image restoration problems. Thai J. Math. 2021, 19, 325–339.
- Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Two modifications of the inertial Tseng extragradient method with self-adaptive step size for solving monotone variational inequality problems. Demonstr. Math. 2020, 53, 208–224.
- Gebrie, A.G.; Wangkeeree, R. Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 2020, 53, 332–351.
- Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
- Verma, M.; Shukla, K. A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recognit. Lett. 2018, 95, 98–103.
- Bussaban, L.; Kaewkhao, A.; Suantai, S. Inertial S-iteration forward-backward algorithm for a family of nonexpansive operators with applications to image restoration problems. Filomat 2021, 35, 771–782.
- Janngam, K.; Wattanataweekul, R. An Accelerated Fixed-Point Algorithm with an Inertial Technique for a Countable Family of G-Nonexpansive Mappings Applied to Image Recovery. Symmetry 2022, 14, 662.
- Johnsonbaugh, R. Discrete Mathematics; Pearson: Hoboken, NJ, USA, 1997.
- Suantai, S.; Donganont, M.; Cholamjiak, W. Hybrid Methods for a Countable Family of G-Nonexpansive Mappings in Hilbert Spaces Endowed with Graphs. Mathematics 2019, 7, 936.
- Tan, K.K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
- Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11.
- Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34.
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin, Germany, 2017.
- Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
- Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math. 1962, 255, 2897–2899.
- Beck, A. First-Order Methods in Optimization; SIAM: Philadelphia, PA, USA, 2017; pp. 129–177. ISBN 978-1-61197-498-0.
- Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
- Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V.H. Winston: Washington, DC, USA, 1977.
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Methodol. 1996, 58, 267–288.
- Martínez-Martínez, J.M.; Escandell-Montero, P.; Soria-Olivas, E.; Martín-Guerrero, J.D.; Magdalena-Benedito, R.; Gómez-Sanchis, J. Regularized extreme learning machine for regression problems. Neurocomputing 2011, 74, 3716–3721.
- Parikh, N.; Boyd, S. Proximal Algorithms. Found. Trends Optim. 2014, 1, 127–239.
- Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
- Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
- Dua, D.; Karra Taniskidou, E. UCI Machine Learning Repository; University of California, Irvine: Irvine, CA, USA, 2017.
- Lichman, M. UCI Machine Learning Repository; University of California, Irvine: Irvine, CA, USA, 2013.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).