Abstract
In this paper, we propose a new accelerated common fixed-point algorithm for two countable families of G-nonexpansive mappings. Weak convergence results are obtained in the context of directed graphs in real Hilbert spaces. As applications, we apply the obtained results to solving some convex minimization problems and employ our proposed algorithm to solve data classification problems on the Breast Cancer, Heart Disease, and Ionosphere data sets. Moreover, we compare the performance of our proposed algorithm with other algorithms in the literature and show that our algorithm has better convergence behavior than the others.
Keywords:
classification problems; convex minimization; coordinate affine; forward–backward algorithm; G-nonexpansive
MSC:
47H09; 90C25; 65K10
1. Introduction
The Banach contraction mapping principle [1] unquestionably plays a significant role in the literature on fixed-point theory, even though it is just one of many cornerstone results in the field. In fact, metric fixed-point theory is thought to have its roots in this principle, which is one of the fundamental results of mathematical analysis. This fact has been a strong motivation for introducing other mappings that satisfy specific contractive conditions; see [2,3,4]. In 2004, Ran and Reurings [5] extended Banach's fixed-point theorem to partially ordered sets and applied this result to solve linear and nonlinear matrix equations. In 2008, Jachymski [6] presented the notion of a single-valued G-contraction on complete metric spaces endowed with a graph and proved a fixed-point theorem which extends the results of [5]; he called such mappings Banach G-contractions. The Banach G-contraction was subsequently extended in various ways by many authors; see [7,8,9,10]. In the past decade, many researchers have introduced algorithms for finding fixed points of G-nonexpansive mappings; see [11,12,13,14]. Recently, Janngam et al. [15,16,17] introduced fixed-point algorithms in Hilbert spaces with directed graphs and applied these results to classification and image recovery.
Fixed-point theory has been applied to solve various problems in science, engineering, economics, physics, and data science, such as signal/image processing [18,19,20,21,22] and intensity-modulated radiation therapy treatment planning [23,24].
In the field of image processing, the image restoration problem is an interesting and important topic. The least absolute shrinkage and selection operator (LASSO) model can be used to convert this problem into an optimization problem, and several optimization and fixed-point methods are available for it; see [25,26,27,28,29] for more detail. The fast iterative shrinkage-thresholding algorithm (FISTA) is one of the most widely used approaches for resolving image restoration problems. Beck and Teboulle [30] demonstrated that FISTA, with its inertial step technique, has a faster convergence rate than previous methods in the literature.
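For concreteness, the following is a minimal Python sketch of FISTA applied to the LASSO problem $\min_x \frac{1}{2}\|Ax-b\|_2^2+\lambda\|x\|_1$; the helper names, default parameters, and fixed iteration count are our own choices rather than anything prescribed in [30].

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximity operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5 * ||Ax - b||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient A^T(Ax - b)
    x_prev = np.zeros(A.shape[1])
    y, t = x_prev.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                    # forward (gradient) step at the extrapolated point
        x = soft_threshold(y - grad / L, lam / L)   # backward (proximal) step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev) # inertial (momentum) step
        x_prev, t = x, t_next
    return x
```

The inertial extrapolation in the last line of the loop is precisely what distinguishes FISTA from the plain forward–backward iteration.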
From this perspective, the primary purpose of this study is to construct an accelerated algorithm for finding the common fixed points of two countable families of G-nonexpansive mappings in real Hilbert spaces with graphs, based on the idea of the inertial technique. We apply this result to solve convex minimization and data classification problems. Moreover, we compare our algorithm's performance with that of other algorithms.
The structure of the paper is as follows. In Section 2, we provide fundamental concepts of fixed-point theory. In Section 3, we present an inertial modified S-algorithm and prove a weak convergence theorem. In Section 4, convex minimization and classification problems are discussed. Some numerical experiments on classification problems are given in Section 5. Finally, we provide the discussions and conclusions.
2. Preliminaries
Let $H$ be a real Hilbert space with the norm $\|\cdot\|$ and let $C$ be a nonempty closed convex subset of $H$. A mapping $T$ of $C$ into itself is called nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in C$. For a mapping $T$ of $C$ into itself, we denote by $F(T)$ the set of all fixed points of $T$, that is, $F(T)=\{x\in C: Tx=x\}$.
Let $G=(V(G),E(G))$ be a directed graph such that the set of its vertices $V(G)$ corresponds to $C$ and $\Delta$ is the diagonal of $C\times C$ such that $\Delta\subseteq E(G)$, where $E(G)$ is the set of its edges. When two or more edges in a directed graph $G$ connect the same ordered pair of vertices, the edges are said to be parallel.
Assume that $G$ has no parallel edges. Consequently, $G$ can be identified with the pair $(V(G),E(G))$. The graph $G^{-1}$ is obtained from $G$ by reversing the direction of its edges; that is,
$E(G^{-1})=\{(x,y)\in C\times C:(y,x)\in E(G)\}.$
We now recall the definitions of the graph properties that will be used in this work; see [31].
Definition 1.
A graph $G=(V(G),E(G))$ is said to be
- (i) Connected if there is a path between every pair of vertices;
- (ii) Symmetric if $(x,y)\in E(G)$, then $(y,x)\in E(G)$ for all $x,y\in V(G)$;
- (iii) Transitive if $(x,y)\in E(G)$ and $(y,z)\in E(G)$, then $(x,z)\in E(G)$ for all $x,y,z\in V(G)$.
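As a quick illustration of items (ii) and (iii), the following sketch (with function names of our own choosing) tests a finite edge set for symmetry and transitivity.

```python
def is_symmetric(edges):
    """Definition 1 (ii): (x, y) in E(G) implies (y, x) in E(G)."""
    return all((y, x) in edges for (x, y) in edges)

def is_transitive(edges):
    """Definition 1 (iii): (x, y) and (y, z) in E(G) imply (x, z) in E(G)."""
    return all((x, z) in edges
               for (x, y) in edges for (u, z) in edges if u == y)

# The diagonal of {0, 1} plus one symmetric pair is symmetric and transitive.
E = {(0, 0), (1, 1), (0, 1), (1, 0)}
print(is_symmetric(E), is_transitive(E))  # True True
```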
Definition 2.
Let $G=(V(G),E(G))$ be a directed graph. A mapping $T:C\to C$ is said to be
- (i) a G-contraction [6] if
- (a) $T$ preserves edges of $G$, that is, if $(x,y)\in E(G)$, then $(Tx,Ty)\in E(G)$;
- (b) there exists $c\in(0,1)$ such that for any $x,y\in C$, if $(x,y)\in E(G)$, then $\|Tx-Ty\|\le c\|x-y\|$, where $c$ is a contraction factor;
- (ii) G-nonexpansive [13] if
- (a) $T$ preserves edges of $G$;
- (b) $\|Tx-Ty\|\le\|x-y\|$ whenever $(x,y)\in E(G)$, for all $x,y\in C$.
Example 1
([11]). An explicit directed graph $G$ and two mappings $S$ and $T$ of $C$ into itself are constructed in [11], and it is shown there that both $S$ and $T$ are G-nonexpansive but not nonexpansive; see [11] for the explicit formulas.
We write ⇀ and → to denote weak and strong convergence, respectively. A mapping $T:C\to C$ is said to be G-demiclosed at 0 if, for any sequence $\{x_n\}$ in $C$ with $x_n\rightharpoonup x$ and $(x_n,x)\in E(G)$ such that $\|x_n-Tx_n\|\to 0$, we have $x\in F(T)$.
The following definition is necessary for our algorithm to be well defined.
Definition 3
([17]). Assume that $G=(V(G),E(G))$ with $V(G)=C$. Then, $E(G)$ is called
- (i) Right coordinate affine if for any $(x,y_1),(x,y_2)\in E(G)$, then $(x,\gamma y_1+\eta y_2)\in E(G)$ for all $\gamma,\eta\in\mathbb{R}$ with $\gamma+\eta=1$;
- (ii) Left coordinate affine if for any $(x_1,y),(x_2,y)\in E(G)$, then $(\gamma x_1+\eta x_2,y)\in E(G)$ for all $\gamma,\eta\in\mathbb{R}$ with $\gamma+\eta=1$.
If $E(G)$ is right and left coordinate affine, then $E(G)$ is coordinate affine.
Our main result will be proved using the following lemma.
Lemma 1
([32]). Let $\{a_n\}$ and $\{b_n\}$ be sequences of non-negative real numbers satisfying the inequality
$a_{n+1}\le a_n+b_n$
for all $n\in\mathbb{N}$. If $\sum_{n=1}^{\infty}b_n<\infty$, then $\lim_{n\to\infty}a_n$ exists.
Lemma 2
([33]). Let $x,y\in H$ and $\gamma\in[0,1]$. Then,
- (i) $\|x+y\|^{2}\le\|x\|^{2}+2\langle y,x+y\rangle$;
- (ii) $\|\gamma x+(1-\gamma)y\|^{2}=\gamma\|x\|^{2}+(1-\gamma)\|y\|^{2}-\gamma(1-\gamma)\|x-y\|^{2}$.
Lemma 3
([34]). Let $\{a_n\}$ and $\{b_n\}$ be sequences of non-negative real numbers satisfying the inequality
$a_{n+1}\le(1+b_n)a_n+b_na_{n-1}$
for all $n\in\mathbb{N}$. Then, the following inequality holds:
$a_{n+1}\le K\prod_{j=1}^{n}(1+2b_j),$
where $K=\max\{a_1,a_2\}.$ Moreover, if $\sum_{n=1}^{\infty}b_n<\infty$, then $\{a_n\}$ is bounded.
We say that $x\in H$ is a weak cluster point of $\{x_n\}$ if there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k}\rightharpoonup x$. The set of all weak cluster points of $\{x_n\}$ is denoted by $\omega_w(x_n)$.
To prove our main convergence result, we need the following Opial’s lemma.
Lemma 4
([35]). Let $\{x_n\}$ be a sequence in $H$ such that there exists a nonempty set $S\subseteq H$. If, for any $v\in S$, $\lim_{n\to\infty}\|x_n-v\|$ exists and $\omega_w(x_n)\subseteq S$, then there exists $v\in S$ such that $\{x_n\}$ weakly converges to $v$.
Definition 4
([36]). Let $\{T_n\}$ and φ be two families of nonexpansive mappings of $C$ into itself. Suppose that $\emptyset\neq F(\varphi)\subseteq\bigcap_{n=1}^{\infty}F(T_n)$, where $F(\varphi)$ stands for the set of all common fixed points of each $T\in\varphi$. The sequence $\{T_n\}$ satisfies the NST-condition (I) with φ if
$\lim_{n\to\infty}\|z_n-T_nz_n\|=0 \implies \lim_{n\to\infty}\|z_n-Tz_n\|=0$
for all bounded sequences $\{z_n\}$ in $C$ and all $T\in\varphi$. A sequence $\{T_n\}$ satisfies the NST-condition (I) with $S$ if $\varphi=\{S\}$.
Example 2
([37]). In [37], a family $\{T_n\}$ of G-nonexpansive mappings of $C$ into itself is constructed that satisfies the NST-condition (I) with a family ψ; see [37] for more details.
Definition 5
([20,38]). Let $f,g:H\to\mathbb{R}\cup\{+\infty\}$ be proper lower semi-continuous convex functions with $f$ differentiable. A forward–backward operator $T$ of $f$ and $g$ is defined by
$T=\mathrm{prox}_{cg}(I-c\nabla f),$
where $c>0$ and
$\mathrm{prox}_{cg}(x)=\operatorname*{argmin}_{y\in H}\Bigl\{g(y)+\frac{1}{2c}\|y-x\|^{2}\Bigr\}.$
This operator was introduced by Moreau [39] and is known as the proximity operator with respect to $c$ and the function $g$. If $c\in(0,2/L)$, then $T$ is a nonexpansive mapping, where $L$ is a Lipschitz constant of $\nabla f$.
For the definition of the proximity operator, we have the following remark; see [40].
Remark 1
([40]). Let $f(x)=\lambda\|x\|_1$ with $\lambda>0$. The proximity operator of $f$ is evaluated by
$\bigl(\mathrm{prox}_{f}(x)\bigr)_i=\mathrm{sign}(x_i)\max\{|x_i|-\lambda,0\},$
that is, componentwise soft-thresholding.
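For $g=\lambda\|\cdot\|_1$, the backward (proximal) step of Definition 5 therefore reduces to the soft-thresholding of Remark 1. A minimal sketch, with helper names of our own choosing:

```python
import numpy as np

def prox_l1(x, lam):
    """prox_{lam * ||.||_1}(x): componentwise soft-thresholding (Remark 1)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward(x, grad_f, c, lam):
    """One application of T = prox_{c g}(I - c * grad f) with g = lam * ||.||_1.

    T is nonexpansive whenever 0 < c < 2/L, where L is a Lipschitz
    constant of grad f (Definition 5).
    """
    return prox_l1(x - c * grad_f(x), c * lam)
```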
Bussaban et al. [41] proved the following lemma.
Lemma 5.
Let $f$ be a convex differentiable function from $H$ into $\mathbb{R}$ with gradient $\nabla f$ being $L$-Lipschitz continuous for some $L>0$, and let $g$ be a proper lower semi-continuous convex function from $H$ into $\mathbb{R}\cup\{+\infty\}$. Let $T$ be the forward–backward operator of $f$ and $g$ with respect to $c\in(0,2/L)$. Then, $\{T_n\}$ satisfies the NST-condition (I) with $T$ if $T_n$ is the forward–backward operator of $f$ and $g$ with respect to $c_n\in[a,b]\subset(0,2/L)$ such that $c_n\to c$.
3. Main Results
In this section, we introduce a new modified S-algorithm (Algorithm 1) with an inertial term, and then we prove a weak convergence theorem showing that the sequence defined by Algorithm 1 converges to a common fixed point of two families of G-nonexpansive mappings in Hilbert spaces with graphs.
Throughout this section, let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$ and let $G=(V(G),E(G))$, where $V(G)=C$ and $E(G)$ is convex, right coordinate affine, symmetric, and transitive. Let $T,S:C\to C$ be G-nonexpansive mappings with $F(T)\cap F(S)\neq\emptyset$. Let $\{T_n\}$ and $\{S_n\}$ be families of G-nonexpansive mappings of $C$ into itself such that $\bigcap_{n=1}^{\infty}F(T_n)=F(T)$ and $\bigcap_{n=1}^{\infty}F(S_n)=F(S)$. We also let $\mathcal{F}=F(T)\cap F(S)$.
To prove the weak convergence result of Algorithm 1, the following tools are needed.
Proposition 1.
Let $x_0,x_1\in C$ be such that $(x_0,z),(x_1,z)\in E(G)$ for all $z\in\mathcal{F}$. Suppose that $E(G)$ is right coordinate affine, symmetric, and transitive. Let a sequence $\{x_n\}$ be generated by Algorithm 1. Then, $(x_n,z)$ and $(y_n,z)$ are in $E(G)$ for all $n\in\mathbb{N}$.
Proof.
We shall use strong mathematical induction to prove our result. In order to prove this, we use Algorithm 1 to obtain
Since is edge-preserving and , we have Using Algorithm 1, we obtain
Since and are edge-preserving and we have
Algorithm 1 (IMSA). An inertial modified S-algorithm.
1: Initial. Take arbitrary $x_0,x_1\in C$, and $\{\alpha_n\},\{\beta_n\}\subset(0,1)$ and $\{\theta_n\}\subset[0,\infty)$ such that $\sum_{n=1}^{\infty}\theta_n<\infty$ and $0<a\le\alpha_n,\beta_n\le b<1$.
2: Step 1. Compute $y_n$ and $z_n$:
Step 2. Compute the inertial step:
For all $n\in\mathbb{N}$, we assume that $(x_n,z)$ and $(x_{n-1},z)$ are in $E(G)$. Using Algorithm 1 together with the edge-preserving, coordinate affine, and transitive properties of $E(G)$, we obtain that $(x_{n+1},z)$ and $(y_{n+1},z)$ are in $E(G)$, which completes the induction. □
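To illustrate the overall shape of an inertial S-type scheme for two families $\{T_n\}$ and $\{S_n\}$, here is a minimal Python sketch; the concrete update formulas below (an inertial extrapolation followed by two relaxed evaluations of $T_n$ and $S_n$) are our illustrative assumptions and should not be read as the exact steps of Algorithm 1.

```python
def imsa(T_seq, S_seq, x0, x1, alpha, beta, theta, n_iter=100):
    """Hypothetical sketch of an inertial modified S-type iteration.

    T_seq, S_seq : callables T_seq(n, x) and S_seq(n, x) evaluating T_n x and S_n x.
    alpha, beta, theta : callables giving the control parameters at step n, assumed
    to satisfy sum_n theta(n) < infinity and 0 < a <= alpha(n), beta(n) <= b < 1.
    """
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        w = x + theta(n) * (x - x_prev)                # inertial step
        z = (1 - beta(n)) * w + beta(n) * T_seq(n, w)  # relaxed T_n step (assumed form)
        x_next = (1 - alpha(n)) * T_seq(n, w) + alpha(n) * S_seq(n, z)  # assumed S-type step
        x_prev, x = x, x_next
    return x
```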
Lemma 6.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $\{x_n\}$ be a sequence generated by Algorithm 1 for arbitrary $x_0,x_1\in C$ and $z\in\mathcal{F}$. Then, $\lim_{n\to\infty}\|x_n-z\|$ exists.
Proof.
Let . By Proposition 1, we have Then
and
We obtain from (6) that
It follows from Lemma 3 that $\|x_{n+1}-z\|\le K\prod_{j=1}^{n}(1+2\theta_j)$, where $K=\max\{\|x_1-z\|,\|x_2-z\|\}$. Hence, $\{x_n\}$ is a bounded sequence. Moreover, $\{y_n\}$ and $\{z_n\}$ are bounded. Therefore,
Applying Lemma 1 and (7), the conclusion of Lemma 6 holds. □
Lemma 7.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $\{x_n\}$ be a sequence generated by Algorithm 1 for arbitrary $x_0,x_1\in C$ and $z\in\mathcal{F}$. Then, $\lim_{n\to\infty}\|x_n-T_nx_n\|=0$ and $\lim_{n\to\infty}\|x_n-S_nx_n\|=0$.
Proof.
Let . Applying Lemma 2 together with G-nonexpansiveness of , we have
It implies that, for ,
Next, we shall show that
Taking the lim inf yields
Since , we obtain
Then
Since as and (5), we obtain
This together with (14) yields
Finally, we shall show that
In order to show this, we consider the following
Since and (12), the above inequality leads to
We now prove the weak convergence of Algorithm 1 to a common fixed point of two families of G-nonexpansive mappings in Hilbert spaces.
Theorem 1.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $\{x_n\}$ be a sequence generated by Algorithm 1 for arbitrary $x_0,x_1\in C$. Suppose that $\{T_n\}$ and $\{S_n\}$ satisfy the NST-condition (I) with $T$ and $S$, respectively. Then, $\{x_n\}$ converges weakly to a point in $\mathcal{F}$.
Proof.
Let $z\in\mathcal{F}$ be such that $(x_0,z),(x_1,z)\in E(G)$. Then, $\lim_{n\to\infty}\|x_n-z\|$ exists, as proven in Lemma 6. By Lemma 7, and since $\{T_n\}$ and $\{S_n\}$ satisfy the NST-condition (I) with $T$ and $S$, respectively, we therefore obtain $\lim_{n\to\infty}\|x_n-Tx_n\|=\lim_{n\to\infty}\|x_n-Sx_n\|=0$.
Since $T$ and $S$ are G-demiclosed at 0, we obtain $\omega_w(x_n)\subseteq\mathcal{F}$. We conclude from Lemma 4 that $\{x_n\}$ converges weakly to a point of $\mathcal{F}$, as required. □
4. Applications
In 2004, Huang et al. [42] proposed the extreme learning machine (ELM), a feedforward neural network-based machine learning technique. Standard ELM employs the structure of a single-hidden-layer feedforward neural network (SLFN); see [43] for more detail. Only the weight vector between the hidden and output nodes needs to be determined in the original ELM, because the hidden nodes can be assigned randomly [42]. Training can therefore be completed considerably more quickly, since far fewer parameters need to be updated than in traditional SLFNs. Fast learning times, easy implementation, and little human involvement are some of the benefits of ELM; see [44]. On the other hand, its unstable results necessitate many experiments to identify the best ELM design; see [45] for more details. ELM is employed in a variety of areas, including computational intelligence and pattern recognition.
We first recall some basics of ELM for data classification problems and then apply our obtained results to the convex minimization problem.
Let $D=\{(x_k,t_k):x_k\in\mathbb{R}^n,\ t_k\in\mathbb{R}^m,\ k=1,2,\dots,N\}$ be a training set of $N$ distinct samples, where $x_k$ is an input datum and $t_k$ is a target. Let $\sigma$ denote the activation function. Then, ELM with $M$ hidden nodes can be represented by the following mathematical model:
$o_j=\sum_{i=1}^{M}\beta_i\,\sigma(\langle w_i,x_j\rangle+b_i),\qquad j=1,\dots,N,$
where $\beta_i$ is the weight vector that connects the $i$-th hidden node and the output nodes, $w_i$ is the weight vector that connects the $i$-th hidden node and the input nodes, and $b_i$ is the $i$-th hidden node's threshold.
Standard SLFNs with $M$ hidden nodes can approximate these $N$ samples without error; in other words, there exist $\beta_i$, $w_i$, and $b_i$, $i=1,\dots,M$, such that
$\sum_{i=1}^{M}\beta_i\,\sigma(\langle w_i,x_j\rangle+b_i)=t_j,\qquad j=1,\dots,N.$
The above equations can be written compactly as
$\mathbf{H}\beta=\mathbf{T}.$
For the model $\mathbf{H}\beta=\mathbf{T}$, we aim to estimate the parameter $\beta$ by solving the minimization problem known as ordinary least squares (OLS),
$\min_{\beta}\|\mathbf{H}\beta-\mathbf{T}\|_2^2,$  (22)
where $\beta=[\beta_1,\dots,\beta_M]^{\mathsf T}$ is the output weight, $\mathbf{T}=[t_1,\dots,t_N]^{\mathsf T}$ is the target data, $\mathbf{H}$ is the hidden layer output matrix, $N$ is the number of training data, and $M$ is the number of unknown variables.
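As an illustration of how $\mathbf{H}$ can be assembled and (22) solved in practice, here is a minimal sketch; the helper names, the sigmoid activation, and the fixed random seed are our own choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_hidden_matrix(X, M, seed=0):
    """H[j, i] = sigmoid(<w_i, x_j> + b_i) for randomly drawn w_i and b_i."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], M))   # input-to-hidden weights w_i
    b = rng.standard_normal(M)                 # hidden-node thresholds b_i
    return sigmoid(X @ W + b)

# OLS estimate of the output weights in (22), computed by least squares
# (equivalently, the Moore-Penrose inverse H^+ applied to T):
# H = elm_hidden_matrix(X_train, M=100)
# beta, *_ = np.linalg.lstsq(H, T_train, rcond=None)
```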
There are several ways to estimate the solution of Equation (22) using mathematical models; the output weight can be obtained in different ways, see [42,46,47,48]. The solution is obtained from $\beta=\mathbf{H}^{\dagger}\mathbf{T}$ when the Moore–Penrose generalized inverse $\mathbf{H}^{\dagger}$ of $\mathbf{H}$ exists. However, in realistic situations the number of unknown variables $M$ is substantially larger than the number of training data $N$, which might cause the network to become overfitted, whereas accuracy is low when there are few hidden nodes $M$. Subset selection and ridge regression are the two classical methods for improving (22); see [49] for more detail. One well-known model for estimating the output weight $\beta$ is the least absolute shrinkage and selection operator (LASSO) [50],
$\min_{\beta}\bigl\{\|\mathbf{H}\beta-\mathbf{T}\|_2^2+\lambda\|\beta\|_1\bigr\},$  (23)
where $\lambda>0$ is a regularization parameter. LASSO maintains the beneficial features of both ridge regression and subset selection; that is, regression analysis using LASSO improves the predictability and interpretability of the statistical model by performing both variable selection and regularization. Regularization techniques were later combined with the original ELM to enhance OLS performance. More generally, we can rewrite (23) as the minimization of a sum of the following form:
$\min_{x\in H}\{f(x)+g(x)\},$  (24)
where $f,g:H\to\mathbb{R}\cup\{+\infty\}$ are proper lower semi-continuous convex functions such that $f$ is differentiable. Let $\Omega$ be the set of all solutions of the problem (24).
We consider the convex minimization problem (24). It is known that $x^{*}$ is a solution of problem (24) if and only if $x^{*}=\mathrm{prox}_{cg}\bigl(x^{*}-c\nabla f(x^{*})\bigr)$, where $c>0$; see [20] for more detail.
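This characterization follows from the first-order optimality condition; writing $\mathrm{prox}_{cg}=(I+c\,\partial g)^{-1}$, one has, for any $c>0$,
\begin{align*}
x^{*}\in\Omega &\iff 0\in\nabla f(x^{*})+\partial g(x^{*})\\
&\iff x^{*}-c\nabla f(x^{*})\in x^{*}+c\,\partial g(x^{*})\\
&\iff x^{*}=\mathrm{prox}_{cg}\bigl(x^{*}-c\nabla f(x^{*})\bigr).
\end{align*}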
Several methods have been proposed to solve the convex minimization problem (24). Polyak [51] was the first to present a method for accelerating algorithms, providing improved convergence behavior by including an inertial step. Since then, numerous authors have employed the inertial technique to speed up the convergence rate of their algorithms for various problems; see [30,34,41,52,53,54,55,56].
The fast iterative shrinkage–thresholding algorithm (FISTA) [30], which performs an inertial step, is one of the most well-known forward–backward-type algorithms. It is defined by
$x_k=\mathrm{prox}_{\frac{1}{L}g}\Bigl(y_k-\frac{1}{L}\nabla f(y_k)\Bigr),$
$t_{k+1}=\frac{1+\sqrt{1+4t_k^{2}}}{2},$
$y_{k+1}=x_k+\frac{t_k-1}{t_{k+1}}(x_k-x_{k-1}),$
where $y_1=x_0\in\mathbb{R}^n$, $t_1=1$, $L$ is a Lipschitz constant of $\nabla f$, and $\frac{t_k-1}{t_{k+1}}$ is the inertial step size, which was introduced by Nesterov [57]. Beck and Teboulle [30] introduced FISTA and proved the convergence rate of this algorithm; they also applied these results to the image restoration problem.
Recently, Bussaban et al. [41] introduced the parallel inertial S-iteration forward–backward algorithm (PISFBA). They proved a weak convergence theorem for PISFBA and applied this method to solve regression and data classification problems.
Finally, we construct Algorithm 2 to solve the convex minimization problem (24) by applying Algorithm 1. Let $T_n=\mathrm{prox}_{c_ng_1}(I-c_n\nabla f_1)$ and $S_n=\mathrm{prox}_{s_ng_2}(I-s_n\nabla f_2)$, where $f_1,f_2$ and $g_1,g_2$ are proper lower semi-continuous convex functions such that $f_1,f_2$ are differentiable and $\nabla f_1,\nabla f_2$ are Lipschitz continuous with constants $L_1$ and $L_2$, respectively.
Algorithm 2 (FBIMSA). A forward–backward inertial modified S-algorithm.
1: Initial. Take arbitrary $x_0,x_1\in H$, where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\theta_n\}$ are the same as in Algorithm 1.
2: Step 1. Compute $y_n$ and $z_n$:
Step 2. Compute the inertial step:
Then, set $n:=n+1$ and go back to Step 1.
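Combining the pieces above, the following hedged end-to-end sketch shows how an Algorithm 2-type scheme could be instantiated for the ELM–LASSO model (23). It reuses the illustrative helpers prox_l1, elm_hidden_matrix, and imsa from the earlier sketches; the step sizes, parameter schedules, and inner update pattern are our assumptions, not the published parameter choices.

```python
import numpy as np
# Assumes prox_l1, elm_hidden_matrix, and the illustrative imsa sketch defined earlier.

def fbimsa_elm(X, T_target, M=100, lam=1e-3, n_iter=300):
    """Hypothetical FBIMSA-type scheme for min ||H beta - T||_2^2 + lam * ||beta||_1."""
    H = elm_hidden_matrix(X, M)
    L = 2.0 * np.linalg.norm(H, 2) ** 2        # Lipschitz constant of the gradient
    grad = lambda b: 2.0 * H.T @ (H @ b - T_target)
    c, s = 1.0 / L, 1.8 / L                    # two step sizes in (0, 2/L); our choice

    def fb(step):
        # forward-backward operator of Definition 5 with g = lam * ||.||_1
        return lambda n, b: prox_l1(b - step * grad(b), step * lam)

    beta0 = np.zeros((M, T_target.shape[1]))
    return imsa(fb(c), fb(s), beta0, beta0.copy(),
                alpha=lambda n: 0.9, beta=lambda n: 0.9,
                theta=lambda n: 1.0 / n**2, n_iter=n_iter)
```

Test predictions can then be taken as the row-wise argmax of elm_hidden_matrix(X_test, M) @ beta; the fixed seed in the sketch regenerates the same hidden weights, a detail a production implementation would handle by storing W and b explicitly.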
In the next theorem, we use the result of the convergence theorem of Algorithm 1 to obtain the convergence theorem of Algorithm 2.
Theorem 2.
Let a sequence $\{x_n\}$ be generated by Algorithm 2. Then, $\{x_n\}$ converges weakly to a point of $\Omega$, where $\Omega=\operatorname*{argmin}(f_1+g_1)\cap\operatorname*{argmin}(f_2+g_2)$.
Proof.
Let $T_n$ and $S_n$ be the forward–backward operators of $f_1,g_1$ and $f_2,g_2$ with respect to $c_n\in(0,2/L_1)$ and $s_n\in(0,2/L_2)$, respectively, where $c_n\to c$ and $s_n\to s$. Then, $T_n$ and $S_n$ are nonexpansive operators for all $n\in\mathbb{N}$. Similarly, we set $T$ and $S$ to be the forward–backward operators of $f_1,g_1$ and $f_2,g_2$ with respect to $c\in(0,2/L_1)$ and $s\in(0,2/L_2)$, respectively. Then, $T$ and $S$ are nonexpansive operators. Thus, by Proposition 26.1 in [38], we know that $F(T)=\operatorname*{argmin}(f_1+g_1)$ and $F(S)=\operatorname*{argmin}(f_2+g_2)$. It is derived from Lemma 5 that $\{T_n\}$ and $\{S_n\}$ satisfy the NST-condition (I) with $T$ and $S$, respectively. Applying Theorem 1 with $G$ the complete graph on $H$ (that is, $E(G)=H\times H$), we obtain the required result directly. □
5. Numerical Experiments
This section will present the basic ELM model and its fundamental supervised classification versions. We also give the result of data classification using each method.
For solving the convex minimization problem (24), we use the LASSO model (23) with the sigmoid activation function; the parameters of our algorithm and of the other algorithms are chosen accordingly.
All control parameters are set to the values shown in Table 1, where $\mathbf{H}$ is the hidden layer output matrix of the training data and $I$ is the number of iterations. We use the accuracy of the output data to measure the performance of each method, which is calculated by
$\mathrm{accuracy}=\frac{\text{number of correctly classified samples}}{\text{total number of samples}}\times 100.$
Table 1.
Selected parameters of each method.
Next, we use the Breast Cancer, Heart Disease UCI, and Ionosphere data sets for classification; they are detailed as follows:
Wisconsin Breast Cancer data set [58]: This data set was created by W.H. Wolberg (General Surgery Department, University of Wisconsin Clinical Sciences Center), W.N. Street, and O.L. Mangasarian (Computer Sciences Department, University of Wisconsin). It contains 2 classes, 569 observations, and 30 attributes.
Heart Disease UCI [59]: This data set contains 76 attributes; however, all published studies use only a subset of 14 of them. The data indicate the presence of heart disease in patients, and our goal is to divide the data into two categories.
Ionosphere data set [60]: This radar data set, from the Ionosphere collection, was gathered by a system near Goose Bay, Labrador. It consists of 351 observations and 34 attributes. Radar returns showing evidence of structure in the ionosphere are considered "good"; "bad" returns are those whose signals pass through the ionosphere without such evidence.
We set up the training and testing data as in Table 2.
Table 2.
Data sets of Breast Cancer, Heart Disease UCI, and Ionosphere, 70% of training and 30% of testing of each data set.
We performed the experiments in order to compare the performance of each studied algorithm, namely Algorithm 2, PISFBA, and FISTA. For each data set, we choose the number of hidden nodes M and the number of iterations I as follows: the number of hidden nodes M depends on the characteristics of each data set, and the number of iterations for each data set is selected to achieve the highest performance for each studied algorithm.
The following numerical experiments are obtained by each algorithm and each data set under the control sequences in Table 1 and the selected parameters for each data set in Table 3.
Table 3.
Number of hidden nodes and iterations for each data set.
In Table 4, we use acc.Train and acc.Test to represent the accuracy of training and testing, respectively.
Table 4.
Performance comparison using different methods.
We observe from Table 4 that our proposed algorithm, Algorithm 2, has a higher performance than PISFBA and FISTA in terms of training and testing accuracy on each data set. We can therefore conclude from our experiments that Algorithm 2 classifies the selected data sets with higher accuracy than PISFBA and FISTA.
Remark 2.
Limitations of the proposed algorithm and its applications.
Our proposed algorithm, Algorithm 2, guarantees weak convergence in the setting of real Hilbert spaces under the control sequences $\{\alpha_n\}$ and $\{\beta_n\}$ together with the inertial parameter $\theta_n$ such that $\sum_{n=1}^{\infty}\theta_n<\infty$ and $0<a\le\alpha_n,\beta_n\le b<1$. For applications of Algorithm 2, we have to choose $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\theta_n\}$ under the above restrictions. However, in finite-dimensional Euclidean spaces, we obtain strong convergence of Algorithm 2. Another limitation of Algorithm 2 is the computation of the Lipschitz constant of $\nabla f$ when $f(\beta)=\|\mathbf{H}\beta-\mathbf{T}\|_2^2$. In the case of big data sets, this computation may be difficult because of the large dimension of the matrix $\mathbf{H}$.
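For $f(\beta)=\|\mathbf{H}\beta-\mathbf{T}\|_2^2$, the Lipschitz constant of $\nabla f$ equals $2\lambda_{\max}(\mathbf{H}^{\mathsf T}\mathbf{H})=2\|\mathbf{H}\|_2^2$. When $\mathbf{H}$ is too large for a full singular value decomposition, a few power iterations give a cheap estimate; a minimal sketch, where the iteration cap and tolerance are our choices:

```python
import numpy as np

def lipschitz_estimate(H, n_iter=50, tol=1e-8):
    """Estimate L = 2 * lambda_max(H^T H) by power iteration on H^T H."""
    v = np.random.default_rng(0).standard_normal(H.shape[1])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = H.T @ (H @ v)               # one application of H^T H
        lam_new = np.linalg.norm(w)     # eigenvalue estimate for the unit vector v
        if lam_new == 0.0:              # H = 0: the gradient is constant
            return 0.0
        v = w / lam_new
        if abs(lam_new - lam) < tol * max(lam_new, 1.0):
            break
        lam = lam_new
    return 2.0 * lam
```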
6. Discussions
In this work, we propose a new accelerated common fixed-point algorithm, Algorithm 2, and employ it to solve data classification problems on the Breast Cancer, Heart Disease, and Ionosphere data sets. A convergence theorem for the proposed algorithm is established under the control conditions $\sum_{n=1}^{\infty}\theta_n<\infty$ and $0<a\le\alpha_n,\beta_n\le b<1$. From our experiments, Algorithm 2 has a higher performance than PISFBA and FISTA. The performance of our proposed algorithm depends on the inertial parameter $\theta_n$; we note that choosing $\theta_n$ close to 1 yields a higher performance of Algorithm 2. We also observe that the performance of Algorithm 2 depends on the number of hidden nodes and the characteristics of the data sets. Future research will focus on finding new methods or techniques that increase the performance of algorithms for the classification of big real data sets of NCDs of patients from the Sriphat Medical Center, Faculty of Medicine, Chiang Mai University, Thailand.
7. Conclusions
We introduced and proved a weak convergence theorem for an inertial modified S-algorithm (IMSA) for finding a common fixed point of two countable families of G-nonexpansive mappings. Firstly, we proved the weak convergence of IMSA. Secondly, we proposed a new forward–backward inertial modified S-algorithm (FBIMSA) for solving the convex minimization problem. Finally, we applied the proposed algorithm to solve data classification problems on the Breast Cancer, Heart Disease, and Ionosphere data sets. The numerical results demonstrated the advantages of the proposed algorithm.
Author Contributions
Conceptualization, S.S.; Formal analysis, K.J. and S.S.; Investigation, K.J.; Methodology, S.S.; Supervision, S.S.; Validation, S.S.; Writing—original draft, K.J.; Writing—review and editing, S.S. All authors have read and agreed to the published version of the manuscript.
Funding
NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183].
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
All data were obtained from https://archive.ics.uci.edu (accessed on 23 September 2022).
Acknowledgments
This research has also received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (grant number B05F640183) and Chiang Mai University.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 1922, 3, 133–181. [Google Scholar] [CrossRef]
- Branciari, A. A fixed point theorem for mappings satisfying a general contractive condition of integral type. Int. J. Math. Math. Sci. 2002, 29, 531–536. [Google Scholar] [CrossRef]
- Suzuki, T. A generalized Banach contraction principle that characterizes metric completeness. Proc. Am. Math. Soc. 2008, 136, 1861–1869. [Google Scholar] [CrossRef]
- Zhang, X. Common fixed point theorems for some new generalized contractive type mappings. J. Math. Anal. Appl. 2007, 333, 780–786. [Google Scholar] [CrossRef]
- Ran, A.C.M.; Reurings, M.C.B. A fixed point theorem in partially ordered sets and some applications to matrix equations. Proc. Am. Math. Soc. 2004, 132, 1435–1443. [Google Scholar] [CrossRef]
- Jachymski, J. The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 2008, 136, 1359–1373. [Google Scholar] [CrossRef]
- Thianwan, T.; Yambangwai, D. Convergence analysis for a new two-step iteration process for G-nonexpansive mappings with directed graphs. Fixed Point Theory Appl. 2019, 2019, 44. [Google Scholar] [CrossRef]
- Bojor, F. Fixed point of ψ-contraction in metric spaces endowed with a graph. Anna. Univ. Crai. Math. Comp. Sci. Ser. 2010, 37, 85–92. [Google Scholar]
- Aleomraninejad, S.M.A.; Rezapour, S.; Shahzad, N. Some fixed point result on a metric space with a graph. Topol. Appl. 2012, 159, 659–663. [Google Scholar] [CrossRef]
- Tiammee, J.; Suantai, S. Coincidence point theorems for graph-preserving multi-valued mappings. Fixed Point Theory Appl. 2014, 2014, 70. [Google Scholar] [CrossRef]
- Sridarat, P.; Suparaturatorn, R.; Suantai, S.; Cho, Y.J. Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 2019, 42, 2361–2380. [Google Scholar] [CrossRef]
- Tripak, O. Common fixed points of G-nonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 2016, 87. [Google Scholar] [CrossRef]
- Tiammee, J.; Kaewkhao, A.; Suantai, S. On Browder’s convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 2015, 187. [Google Scholar] [CrossRef]
- Suantai, S.; Kankam, K.; Cholamjiak, P.; Cholamjiak, W. A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. 2021, 40, 145. [Google Scholar] [CrossRef]
- Janngam, K.; Wattanataweekul, R. A new accelerated fixed-point algorithm for classification and convex minimization problems in Hilbert spaces with directed graphs. Symmetry 2022, 14, 1059. [Google Scholar] [CrossRef]
- Janngam, K.; Wattanataweekul, R. An accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings applied to image recovery. Symmetry 2022, 14, 662. [Google Scholar] [CrossRef]
- Wattanataweekul, R.; Janngam, K. An accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings with applications to image recovery. J. Inequal. Appl. 2022, 2022, 68. [Google Scholar] [CrossRef]
- Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
- Cholamjiak, P.; Shehu, Y. Inertial forward-backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435. [Google Scholar] [CrossRef]
- Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
- Suantai, S.; Eiamniran, N.; Pholasa, N.; Cholamjiak, P. Three-step projective methods for solving the split feasibility problems. Mathematics 2019, 7, 712. [Google Scholar] [CrossRef]
- Suantai, S.; Kesornprom, S.; Cholamjiak, P. Modified proximal algorithms for finding solutions of the split variational inclusions. Mathematics 2019, 7, 708. [Google Scholar] [CrossRef]
- Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365. [Google Scholar] [CrossRef] [PubMed]
- Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple set split feasibility problem and its applications. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef]
- Ben-Tal, A.; Nemirovski, A. Lectures on Modern Convex Optimization, Analysis, Algorithms, and Engineering Applications; MPS/SIAM Ser. Optim.; SIAM: Philadelphia, PA, USA, 2001. [Google Scholar]
- Bioucas-Dias, J.; Figueiredo, M. A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004. [Google Scholar] [CrossRef] [PubMed]
- Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1998, 20, 33–61. [Google Scholar] [CrossRef]
- Donoho, D.L.; Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. J. Am. Statist. Assoc. 1995, 90, 1200–1224. [Google Scholar] [CrossRef]
- Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
- Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
- Johnsonbaugh, R. Discrete Mathematics; Pearson: Hoboken, NJ, USA, 1997. [Google Scholar]
- Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef]
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009. [Google Scholar]
- Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378. [Google Scholar] [CrossRef]
- Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11. [Google Scholar]
- Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34. [Google Scholar]
- Suantai, S.; Donganont, M.; Cholamjiak, W. Hybrid methods for a countable family of G-nonexpansive mappings in Hilbert spaces endowed with graphs. Mathematics 2019, 7, 936. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
- Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899. [Google Scholar]
- Beck, A. First-Order Methods in Optimization; SIAM: Philadelphia, PA, USA, 2017; pp. 129–177. ISBN 978-1-61197-498-0. [Google Scholar]
- Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30. [Google Scholar] [CrossRef]
- Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
- Ding, S.; Xu, X.; Nie, R. Extreme learning machine and its applications. Neural Comput. Appl. 2014, 25, 549–556. [Google Scholar] [CrossRef]
- Wang, Z.; Xin, J.; Sun, P.; Lin, Z.; Yao, Y.; Gao, X. Improved lung nodule diagnosis accuracy using lung CT images with uncertain class. Comput. Methods Programs Biomed. 2018, 162, 197–209. [Google Scholar] [CrossRef]
- Silitonga, A.S.; Shamsuddin, A.H.; Mahlia, T.M.I.; Milano, J.; Kusumo, F.; Siswantoro, J.; Dharma, S.; Sebayang, A.H.; Masjuki, H.H.; Ong, H.C. Biodiesel synthesis from Ceiba pentandra oil by microwave irradiation-assisted transesterification: ELM modeling and optimization. Renew. Energy 2020, 146, 1278–1291. [Google Scholar] [CrossRef]
- Huang, G.-B.; Chen, L.; Siew, C.-K. Universal approximation using incremental constructive feedforward networks with random hidden nodes. Trans. Neural Netw. 2006, 17, 879–892. [Google Scholar] [CrossRef]
- Widrow, B.; Greenblatt, A.; Kim, Y.; Park, D. The no-prop algorithm: A new learning algorithm for multilayer neural networks. Neural Netw. 2013, 37, 182–188. [Google Scholar] [CrossRef]
- Brunton, S.L.; Kutz, J.N. Singular Value Decomposition (SVD). In Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control; Cambridge University Press: Cambridge, UK, 2019; pp. 3–46. [Google Scholar]
- Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V.H. Winston: Washington, DC, USA, 1977. [Google Scholar]
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
- Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
- Janngam, K.; Suantai, S. An accelerated forward-backward algorithm with applications to image restoration problems. Thai. J. Math. 2021, 19, 325–339. [Google Scholar]
- Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Two modifications of the inertial Tseng extra gradient method with self-adaptive step size for solving monotone variational inequality problems. Demonstr. Math. 2020, 53, 208–224. [Google Scholar] [CrossRef]
- Gebrie, A.G.; Wangkeeree, R. Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 2020, 53, 332–351. [Google Scholar] [CrossRef]
- Yatakoat, P.; Suantai, S.; Hanjing, A. On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications. Adv. Contin. Discrete Models 2022, 2022, 25. [Google Scholar] [CrossRef]
- Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547. [Google Scholar]
- Mangasarian, O.L.; Wolberg, W.H. Cancer diagnosis via linear programming. SIAM News 1990, 23, 1–18. [Google Scholar]
- Lichman, M. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/datasets/Heart+Disease (accessed on 20 April 2020).
- Dua, D.; Graff, C. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/datasets/ionosphere (accessed on 23 September 2022).
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).