Article

An Inertial Modified S-Algorithm for Convex Minimization Problems with Directed Graphs and Its Applications in Classification Problems

Kobkoon Janngam and Suthep Suantai
1 Graduate Ph.D. Degree Program in Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3 Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4442; https://doi.org/10.3390/math10234442
Submission received: 4 November 2022 / Revised: 21 November 2022 / Accepted: 21 November 2022 / Published: 24 November 2022

Abstract: In this paper, we propose a new accelerated common fixed-point algorithm for two countable families of G-nonexpansive mappings. Weak convergence results are obtained in the context of directed graphs in real Hilbert spaces. As applications, we apply the obtained results to solving some convex minimization problems and employ our proposed algorithm to solve the data classification of the Breast Cancer, Heart Disease and Ionosphere data sets. Moreover, we compare the performance of our proposed algorithm with that of other algorithms in the literature and show that it has better convergence behavior than the others.

1. Introduction

The Banach contraction mapping principle [1] unquestionably plays a significant role in the literature on fixed-point theory, despite being just one of many cornerstone results in the field. In fact, metric fixed-point theory is thought to have its roots in this idea, which is one of the fundamental results of mathematical analysis. This fact has been a strong motivation for introducing other mappings that satisfy specific contractive conditions; see [2,3,4]. In 2004, Ran and Reurings [5] extended Banach's fixed-point theorem to partially ordered sets and applied this result to solve linear and nonlinear matrix equations. In 2008, Jachymski [6] presented the notion of a single-valued G-contraction on complete metric spaces endowed with graphs and proved a fixed-point theorem which extends the results of [5]. He called such mappings Banach G-contractions. The Banach G-contraction was subsequently extended in various ways by many authors; see [7,8,9,10]. In the past decade, many researchers have introduced algorithms for finding fixed points of G-nonexpansive mappings; see [11,12,13,14]. Recently, Janngam et al. [15,16,17] introduced fixed-point algorithms in Hilbert spaces with directed graphs and applied these results to classification and image recovery.
Fixed-point theory has been applied to solve various problems in science, engineering, economics, physics, and data science, such as signal/image processing, see [18,19,20,21,22], and intensity-modulated radiation therapy treatment planning, see [23,24].
In the field of image processing, the image restoration problem is an interesting and important topic. The least absolute shrinkage and selection operator (LASSO) model can be used to convert this problem into an optimization problem, for which several optimization and fixed-point methods are available; see [25,26,27,28,29] for more detail. The fast iterative shrinkage-thresholding algorithm (FISTA) is one of the most widely used approaches for solving image restoration problems. Beck and Teboulle [30] demonstrated that FISTA, which uses an inertial step technique, has a faster convergence rate than previous methods in the literature.
From this perspective, the primary purpose of this study is to construct an accelerated algorithm for finding common fixed points of two countable families of G-nonexpansive mappings in real Hilbert spaces with graphs, based on the inertial technique. This result is applied to convex minimization and data classification problems. Moreover, we compare our algorithm's performance with that of other algorithms.
The structure of the paper is as follows. In Section 2, we provide fundamental ideas about fixed-point theorems. In Section 3, we present an inertial modified S-algorithm and prove a weak-convergence theorem. In Section 4, convex minimization and classification problems are discussed. Moreover, some numerical experiments on classification problems are also given in Section 5. Finally, we provide the conclusions and discussions.

2. Preliminaries

Let $H$ be a real Hilbert space with norm $\|\cdot\|$ and let $C$ be a nonempty closed convex subset of $H$. A mapping $T$ of $C$ into itself is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. For a mapping $T$ of $C$ into itself, we denote by $F(T)$ the set of all fixed points of $T$, that is, $F(T) := \{x \in C : Tx = x\}$.
Let $G$ be a directed graph such that the set $V(G)$ of its vertices coincides with $C$ and $\Delta \subseteq E(G)$, where $\Delta$ is the diagonal of $C \times C$ and $E(G)$ is the set of its edges. When two or more edges in a directed graph $G$ connect the same ordered pair of vertices, the edges are said to be parallel.
Assume that $G$ has no parallel edges. Consequently, $G$ can be identified with the pair $(V(G), E(G))$. The graph obtained from $G$ by reversing the direction of its edges is denoted by $G^{-1}$; that is,
$E(G^{-1}) = \{(u, v) \in C \times C : (v, u) \in E(G)\}.$
Here, we recall the definitions of the graph properties that will be used in this work; see [31].
Definition 1.
A graph $G = (V(G), E(G))$ is said to be:
(i) Connected if there is a path between every pair of vertices;
(ii) Symmetric if $(u, v) \in E(G)$ implies $(v, u) \in E(G)$ for all $u, v \in V(G)$;
(iii) Transitive if $(u, v) \in E(G)$ and $(v, w) \in E(G)$ imply $(u, w) \in E(G)$ for all $u, v, w \in V(G)$.
Definition 2.
Let $G = (V(G), E(G))$ be a directed graph. A mapping $T : C \to C$ is said to be:
(i) G-contraction [6] if
(a) $T$ preserves edges of $G$, that is, if $(u, v) \in E(G)$, then $(Tu, Tv) \in E(G)$;
(b) there exists $c \in (0, 1)$ such that for any $u, v \in V(G)$, if $(u, v) \in E(G)$, then $\|Tu - Tv\| \le c\|u - v\|$, where $c$ is a contraction factor;
(ii) G-nonexpansive [13] if
(a) $T$ preserves edges of $G$;
(b) $\|Tu - Tv\| \le \|u - v\|$ whenever $(u, v) \in E(G)$, for all $u, v \in V(G)$.
Example 1
([11]). Let $C = [0, 2] \subset \mathbb{R}$ and let $G = (V(G), E(G))$ be a directed graph such that $V(G) = C$ and $(u, v) \in E(G)$ if and only if $0.5 \le u, v \le 1.7$. Let $S$ and $T$ be mappings of $C$ into itself given by
$Su = 1 + \tfrac{2}{3}\arcsin(u - 1) \quad \text{and} \quad Tu = 1 + \tfrac{1}{3}\tan(u - 1),$
for all $u \in C$. It is shown in [11] that both $S$ and $T$ are G-nonexpansive but not nonexpansive.
We write ⇀ and → to denote weak and strong convergence, respectively. A mapping $T : C \to C$ is said to be G-demiclosed at 0 if, for any $\{u_n\} \subseteq C$ with $(u_n, u_{n+1}) \in E(G)$ such that $u_n \rightharpoonup u \in C$ and $Tu_n \to 0$, we have $Tu = 0$.
The following definition is necessary for our algorithm to be well defined.
Definition 3
([17]). Assume that $\Upsilon := \bigcap_{n=1}^{\infty} F(T_n)$ and $\Upsilon \times \Upsilon \subseteq E(G)$. Then, $E(G)$ is called:
(i) Right coordinate affine if for any $(p, q), (p, n) \in E(G)$, we have $\gamma(p, q) + \xi(p, n) \in E(G)$ for all $\gamma, \xi \in \mathbb{R}$ with $\gamma + \xi = 1$;
(ii) Left coordinate affine if for any $(p, q), (m, q) \in E(G)$, we have $\gamma(p, q) + \xi(m, q) \in E(G)$ for all $\gamma, \xi \in \mathbb{R}$ with $\gamma + \xi = 1$.
If $E(G)$ is both right and left coordinate affine, then $E(G)$ is called coordinate affine.
Our main result will be proved using the following lemma.
Lemma 1
([32]). Let $\{x_n\}$, $\{y_n\}$ and $\{\zeta_n\}$ be sequences of non-negative real numbers satisfying the inequality
$x_{n+1} \le (1 + \zeta_n)x_n + y_n$
for all $n \ge 1$. If $\sum_{n=1}^{\infty}\zeta_n < \infty$ and $\sum_{n=1}^{\infty} y_n < \infty$, then $\lim_{n\to\infty} x_n$ exists.
Lemma 2
([33]). Let $m, n \in H$ and $\xi \in [0, 1]$. Then,
(i) $\|\xi m + (1 - \xi)n\|^2 = \xi\|m\|^2 + (1 - \xi)\|n\|^2 - \xi(1 - \xi)\|m - n\|^2$;
(ii) $\|m \pm n\|^2 = \|m\|^2 \pm 2\langle m, n\rangle + \|n\|^2$.
Lemma 3
([34]). Let $\{u_n\}$ and $\{\mu_n\}$ be sequences of non-negative real numbers satisfying the inequality
$u_{n+1} \le (1 + \mu_n)u_n + \mu_n u_{n-1}$
for all $n \ge 1$. Then, the following inequality holds:
$u_{n+1} \le M \cdot \prod_{j=1}^{n}(1 + 2\mu_j),$
where $M = \max\{u_1, u_2\}$. Moreover, if $\sum_{n=1}^{\infty}\mu_n < \infty$, then $\{u_n\}$ is bounded.
We say that $v \in C$ is a weak cluster point of $\{u_n\}$ if there is a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ such that $u_{n_k} \rightharpoonup v$; the set of all weak cluster points of $\{u_n\}$ is denoted by $\omega_w(u_n)$.
To prove our main convergence result, we need the following Opial’s lemma.
Lemma 4
([35]). Let $\{u_n\}$ be a sequence in $H$ and let $\Upsilon$ be a nonempty subset of $H$. If, for any $p \in \Upsilon$, $\lim_{n\to\infty}\|u_n - p\|$ exists and $\omega_w(u_n) \subseteq \Upsilon$, then there exists $v \in \Upsilon$ such that $\{u_n\}$ converges weakly to $v$.
Definition 4
([36]). Let $\{S_n\}$ and $\varphi$ be two families of nonexpansive mappings of $C$ into itself. Suppose that $\emptyset \ne F(\varphi) \subseteq \bigcap_{n=1}^{\infty} F(S_n)$, where $F(\varphi)$ stands for the set of all common fixed points of each $S \in \varphi$. The sequence $\{S_n\}$ satisfies the NST-condition (I) with $\varphi$ if
$\lim_{n\to\infty}\|S_n u_n - u_n\| = 0 \;\Longrightarrow\; \lim_{n\to\infty}\|S u_n - u_n\| = 0$
for all bounded sequences $\{u_n\} \subseteq C$ and all $S \in \varphi$. A sequence $\{S_n\}$ satisfies the NST-condition (I) with $S$ if $\varphi = \{S\}$.
Example 2
([37]). Define $T_n = \beta_n I + (1 - \beta_n)T$, where $T \in \psi$ and $0 < a \le \beta_n \le b < 1$ for all $n \ge 1$. Then, $T_n$ is G-nonexpansive and $\{T_n\}$ satisfies the NST-condition (I) with $\psi$; see [37] for more details.
Definition 5
([20,38]). Let $f, g : \mathbb{R}^n \to (-\infty, +\infty]$ be proper lower semi-continuous convex functions with $f$ differentiable. The forward–backward operator $T$ is defined by
$T := \mathrm{prox}_{\mu g}(I - \mu\nabla f),$
where $\mu > 0$ and
$\mathrm{prox}_{\mu g}x := \mathrm{argmin}_{y \in H}\Big\{g(y) + \tfrac{1}{2\mu}\|y - x\|^2\Big\}.$
This operator was introduced by Moreau [39] and is known as the proximity operator with respect to $\mu$ and the function $g$. If $\mu \in (0, \tfrac{2}{L})$, then $T$ is a nonexpansive mapping, where $L$ is a Lipschitz constant of $\nabla f$.
For the definition of the proximity operator, we have the following remark; see [40].
Remark 1
([40]). Let $f : \mathbb{R}^n \to \mathbb{R}$ be such that $f(x) = \mu\|x\|_1$. The proximity operator of $f$ is given by
$\mathrm{prox}_{\mu\|\cdot\|_1}(x) = \big(\mathrm{sign}(x_i)\max(|x_i| - \mu, 0)\big)_{i=1}^{n}.$
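For illustration, here is a minimal NumPy sketch of this proximity operator (componentwise soft thresholding); the function name prox_l1 and the sample input are our own.

```python
import numpy as np

def prox_l1(x, mu):
    """Proximity operator of mu*||.||_1: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

# small worked example: entries with |x_i| <= mu are set to zero,
# the others are shrunk towards zero by mu
x = np.array([1.5, -0.2, 0.7])
print(prox_l1(x, 0.5))  # [ 1.  -0.   0.2]
```

Composing this operator with a gradient step, as in Definition 5, gives the forward–backward operator used in the rest of the paper.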
Bussaban et al. [41] proved the following lemma.
Lemma 5.
Let $f$ be a convex differentiable function from $H$ into $\mathbb{R}$ whose gradient $\nabla f$ is $L$-Lipschitz continuous for some $L > 0$, and let $g$ be a proper lower semi-continuous convex function from $H$ into $\mathbb{R} \cup \{\infty\}$. Let $T$ be the forward–backward operator of $f$ and $g$ with respect to $a$. Then, $\{T_n\}$ satisfies the NST-condition (I) with $T$ if $T_n$ is the forward–backward operator of $f$ and $g$ with respect to $a_n$ such that $a_n \to a$, with $a, a_n \in (0, 2/L)$.

3. Main Results

In this section, we introduce a new modified S-algorithm with an inertial term (Algorithm 1) and then prove a weak convergence theorem: the sequence $\{x_n\}$ defined by Algorithm 1 converges weakly to a common fixed point of two families of G-nonexpansive mappings in Hilbert spaces with graphs.
Throughout this section, let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $G = (V(G), E(G))$, where $V(G) = C$ and $E(G)$ is convex, right coordinate affine, symmetric, and transitive. Let $T, S : C \to C$ be G-nonexpansive mappings with $F(T) \cap F(S) \ne \emptyset$. Let $\{T_n\}$ and $\{S_n\}$ be families of G-nonexpansive mappings of $C$ into itself such that $F(T) \subseteq \bigcap_{n=1}^{\infty} F(T_n)$ and $F(S) \subseteq \bigcap_{n=1}^{\infty} F(S_n)$. We also let $\mathcal{F} = \bigcap_{n=1}^{\infty} F(T_n) \cap \bigcap_{n=1}^{\infty} F(S_n)$.
To prove the weak convergence result of Algorithm 1, the following tools are needed.
Proposition 1.
Let $\breve{v} \in \mathcal{F}$ and $y_0, x_1 \in C$ be such that $(\breve{v}, y_0), (\breve{v}, x_1) \in E(G)$. Suppose that $E(G)$ is right coordinate affine, symmetric, and transitive. Let the sequence $\{x_n\}$ be generated by Algorithm 1. Then, $(\breve{v}, z_n)$, $(\breve{v}, y_n)$, $(\breve{v}, x_n)$ and $(x_n, x_{n+1})$ are in $E(G)$ for all $n \ge 1$.
Proof. 
We shall use strong mathematical induction to prove our result. By Algorithm 1, we obtain
$(\breve{v}, z_1) = \big(\breve{v}, (1 - \beta_1)x_1 + \beta_1 T_1 x_1\big) = (1 - \beta_1)(\breve{v}, x_1) + \beta_1(\breve{v}, T_1 x_1).$
Since $T_n$ is edge-preserving and $(\breve{v}, x_1) \in E(G)$, we have $(\breve{v}, z_1) \in E(G)$. Using Algorithm 1, we obtain
$(\breve{v}, y_1) = \big(\breve{v}, (1 - \alpha_1)T_1 x_1 + \alpha_1 S_1 z_1\big) = (1 - \alpha_1)(\breve{v}, T_1 x_1) + \alpha_1(\breve{v}, S_1 z_1).$
Since $T_n$ and $S_n$ are edge-preserving and $(\breve{v}, z_1) \in E(G)$, we have $(\breve{v}, y_1) \in E(G)$.
Algorithm 1. (IMSA) An inertial modified S-algorithm.
1: Initial. Take arbitrary $y_0, x_1 \in C$ and set $n = 1$; choose $\beta_n \in [a, b] \subset (0, 1)$ and $\varrho_n \ge 0$ such that $\sum_{n=1}^{\infty}\varrho_n < \infty$ and $\alpha_n \to 1$.
2: Step 1. Compute $y_n$ and $z_n$:
$z_n = (1 - \beta_n)x_n + \beta_n T_n x_n, \qquad y_n = (1 - \alpha_n)T_n x_n + \alpha_n S_n z_n.$
Step 2. Compute the inertial step:
$x_{n+1} = y_n + \varrho_n(y_n - y_{n-1}).$
Then, set $n := n + 1$ and go back to Step 1.
For all $k < n$, we assume that $(\breve{v}, z_k), (\breve{v}, y_k)$ and $(\breve{v}, x_k) \in E(G)$. We obtain from Algorithm 1 that
$(\breve{v}, z_{k+1}) = \big(\breve{v}, (1 - \beta_{k+1})x_{k+1} + \beta_{k+1}T_{k+1}x_{k+1}\big) = (1 - \beta_{k+1})(\breve{v}, x_{k+1}) + \beta_{k+1}(\breve{v}, T_{k+1}x_{k+1}),$ (1)
$(\breve{v}, y_{k+1}) = \big(\breve{v}, (1 - \alpha_{k+1})T_{k+1}x_{k+1} + \alpha_{k+1}S_{k+1}z_{k+1}\big) = (1 - \alpha_{k+1})(\breve{v}, T_{k+1}x_{k+1}) + \alpha_{k+1}(\breve{v}, S_{k+1}z_{k+1})$ (2)
and
$(\breve{v}, x_{k+1}) = \big(\breve{v}, y_k + \varrho_k(y_k - y_{k-1})\big) = \big(\breve{v}, (1 + \varrho_k)y_k - \varrho_k y_{k-1}\big) = (1 + \varrho_k)(\breve{v}, y_k) - \varrho_k(\breve{v}, y_{k-1}).$ (3)
By (1)–(3), the edge-preserving property of $T_n$ and $S_n$, and the fact that $E(G)$ is right coordinate affine, it follows that $(\breve{v}, x_{k+1}), (\breve{v}, z_{k+1})$ and $(\breve{v}, y_{k+1}) \in E(G)$. By strong mathematical induction, we have $(\breve{v}, x_n), (\breve{v}, z_n), (\breve{v}, y_n) \in E(G)$ for all $n \in \mathbb{N}$. Since $E(G)$ is symmetric, it is easy to see that $(x_n, \breve{v}) \in E(G)$. Since $E(G)$ is transitive and $(x_n, \breve{v}), (\breve{v}, x_{n+1}) \in E(G)$, we obtain $(x_n, x_{n+1}) \in E(G)$, as required.    □
Lemma 6.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $\{y_n\}$ be a sequence generated by Algorithm 1 and $(\breve{v}, y_0), (\breve{v}, x_1) \in E(G)$ for arbitrary $y_0, x_1 \in C$ and $\breve{v} \in \mathcal{F}$. Then, $\lim_{n\to\infty}\|\breve{v} - y_n\|$ exists.
Proof. 
Let $\breve{v} \in \mathcal{F}$. By Proposition 1, we have $(\breve{v}, z_n), (\breve{v}, x_n), (\breve{v}, y_n) \in E(G)$. Then
$\|\breve{v} - z_n\| = \|\breve{v} - \beta_n T_n x_n - (1 - \beta_n)x_n\| \le (1 - \beta_n)\|\breve{v} - x_n\| + \beta_n\|\breve{v} - T_n x_n\| \le (1 - \beta_n)\|\breve{v} - x_n\| + \beta_n\|\breve{v} - x_n\| = \|\breve{v} - x_n\|$ (4)
and
$\|\breve{v} - y_n\| = \|\breve{v} - \alpha_n S_n z_n - (1 - \alpha_n)T_n x_n\| \le (1 - \alpha_n)\|\breve{v} - T_n x_n\| + \alpha_n\|\breve{v} - S_n z_n\| \le (1 - \alpha_n)\|\breve{v} - x_n\| + \alpha_n\|\breve{v} - z_n\|$ (5)
$\le (1 - \alpha_n)\|\breve{v} - x_n\| + \alpha_n\|\breve{v} - x_n\| = \|\breve{v} - x_n\|.$ (6)
We obtain from (6) that
$\|\breve{v} - y_n\| \le \|\breve{v} - x_n\| = \|\breve{v} - y_{n-1} - \varrho_{n-1}(y_{n-1} - y_{n-2})\| \le \|\breve{v} - y_{n-1}\| + \varrho_{n-1}\|y_{n-2} - y_{n-1}\| \le (1 + \varrho_{n-1})\|\breve{v} - y_{n-1}\| + \varrho_{n-1}\|\breve{v} - y_{n-2}\|.$ (7)
It follows from Lemma 3 that $\|\breve{v} - y_n\| \le K \cdot \prod_{j=1}^{n}(1 + 2\varrho_j)$, where $K = \max\{\|\breve{v} - y_1\|, \|\breve{v} - y_2\|\}$. Hence, $\{y_n\}$ is a bounded sequence. Moreover, $\{x_n\}$ and $\{z_n\}$ are bounded. Therefore,
$\sum_{n=1}^{\infty}\varrho_n\|y_n - y_{n-1}\| < \infty.$ (8)
Applying Lemma 1 and (7), the conclusion of Lemma 6 holds.    □
Lemma 7.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $\{y_n\}$ be a sequence generated by Algorithm 1 and $(\breve{v}, y_0), (\breve{v}, x_1) \in E(G)$ for arbitrary $y_0, x_1 \in C$ and $\breve{v} \in \mathcal{F}$. Then, $\lim_{n\to\infty}\|T_n x_n - x_n\| = \lim_{n\to\infty}\|S_n x_n - x_n\| = 0$.
Proof. 
Let $\breve{v} \in \mathcal{F}$. Applying Lemma 2 together with the G-nonexpansiveness of $T_n$, we have
$\|\breve{v} - z_n\|^2 = \|\breve{v} - \beta_n T_n x_n - (1 - \beta_n)x_n\|^2 = \|\beta_n(\breve{v} - T_n x_n) + (1 - \beta_n)(\breve{v} - x_n)\|^2 = \beta_n\|\breve{v} - T_n x_n\|^2 + (1 - \beta_n)\|\breve{v} - x_n\|^2 - \beta_n(1 - \beta_n)\|T_n x_n - x_n\|^2 \le \|\breve{v} - x_n\|^2 - \beta_n(1 - \beta_n)\|T_n x_n - x_n\|^2.$
This implies that, for $n \ge 1$,
$\beta_n(1 - \beta_n)\|T_n x_n - x_n\|^2 \le \|\breve{v} - x_n\|^2 - \|\breve{v} - z_n\|^2.$ (9)
Next, we shall show that
$\lim_{n\to\infty}\|T_n x_n - x_n\| = 0.$
In order to do this, we know from Lemma 6 that $\lim_{n\to\infty}\|\breve{v} - y_n\|$ exists. Call it $a$. From (6), we have
$\|\breve{v} - y_n\| \le \|\breve{v} - x_n\|.$
Taking the limit inferior yields
$a \le \liminf_{n\to\infty}\|\breve{v} - x_n\|.$ (10)
It follows from (8) and
$\|\breve{v} - x_{n+1}\| \le \|\breve{v} - y_n\| + \varrho_n\|y_{n-1} - y_n\|$
that
$\limsup_{n\to\infty}\|\breve{v} - x_n\| \le a.$ (11)
Using (10) and (11), we have
$\lim_{n\to\infty}\|\breve{v} - x_n\| = a.$ (12)
Since $\|\breve{v} - z_n\| \le \|\breve{v} - x_n\|$, we obtain
$\limsup_{n\to\infty}\|\breve{v} - z_n\| \le \limsup_{n\to\infty}\|\breve{v} - x_n\| = a.$ (13)
Then
$\limsup_{n\to\infty}\|\breve{v} - z_n\| \le a.$ (14)
Since $\alpha_n \to 1$ as $n \to \infty$ and (5), we obtain
$a \le \liminf_{n\to\infty}\|\breve{v} - z_n\|.$
This together with (14) yields
$\lim_{n\to\infty}\|\breve{v} - z_n\| = a.$ (15)
Combining expressions (9), (12) and (15), we obtain
$\lim_{n\to\infty}\|T_n x_n - x_n\| = 0.$ (16)
Finally, we shall show that
$\lim_{n\to\infty}\|S_n x_n - x_n\| = 0.$
In order to show this, we consider the following:
$\|\breve{v} - y_n\|^2 = \|\alpha_n(\breve{v} - S_n z_n) + (1 - \alpha_n)(\breve{v} - T_n x_n)\|^2 = \alpha_n\|\breve{v} - S_n z_n\|^2 + (1 - \alpha_n)\|\breve{v} - T_n x_n\|^2 - \alpha_n(1 - \alpha_n)\|T_n x_n - S_n z_n\|^2 \le \|\breve{v} - x_n\|^2 - \alpha_n(1 - \alpha_n)\|T_n x_n - S_n z_n\|^2.$
Since $\lim_{n\to\infty}\|\breve{v} - y_n\| = a$ and (12), the above inequality leads to
$\lim_{n\to\infty}\|T_n x_n - S_n z_n\| = 0.$ (17)
Now,
$\|x_n - z_n\| \le \beta_n\|T_n x_n - x_n\|$
implies by (16) that
$\lim_{n\to\infty}\|x_n - z_n\| = 0.$ (18)
Using (16), (17) and (18), we have
$\|S_n x_n - x_n\| \le \|S_n x_n - S_n z_n\| + \|S_n z_n - T_n x_n\| + \|T_n x_n - x_n\|$
and so
$\lim_{n\to\infty}\|S_n x_n - x_n\| = 0,$
as required.    □
We now prove the weak convergence of Algorithm 1 to a common fixed point of two families of G-nonexpansive mappings in Hilbert spaces.
Theorem 1.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $\{x_n\}$ be a sequence generated by Algorithm 1 and $(\breve{v}, y_0), (\breve{v}, x_1) \in E(G)$ for arbitrary $y_0, x_1 \in C$ and $\breve{v} \in \mathcal{F}$. Suppose that $\{T_n\}$ and $\{S_n\}$ satisfy the NST-condition (I) with $T$ and $S$, respectively. Then, $\{x_n\}$ converges weakly to a point in $\mathcal{F}$.
Proof. 
Let $\breve{v} \in \mathcal{F}$ be such that $(\breve{v}, y_0), (\breve{v}, x_1) \in E(G)$. Then, $\lim_{n\to\infty}\|\breve{v} - y_n\|$ exists, as proven in Lemma 6. By Lemma 7, and since $\{T_n\}$ and $\{S_n\}$ satisfy the NST-condition (I) with $T$ and $S$, respectively, we have
$\lim_{n\to\infty}\|Tx_n - x_n\| = 0 \quad \text{and} \quad \lim_{n\to\infty}\|Sx_n - x_n\| = 0.$
Since $I - T$ and $I - S$ are G-demiclosed at 0, we obtain $\omega_w(x_n) \subseteq F(T) \cap F(S)$. We conclude from Lemma 4 that $\{x_n\}$ converges weakly to some $\breve{v} \in F(T) \cap F(S)$, as required.    □
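To make the iteration of Algorithm 1 concrete, the following is a minimal NumPy sketch in which the families {T_n} and {S_n} are passed as callables; the particular parameter choices (β_n = 0.5, α_n = n/(n+1), ϱ_n = 1/n²) are illustrative assumptions satisfying the conditions of the algorithm, not the settings used in the experiments.

```python
import numpy as np

def imsa(T, S, x1, y0, n_iter=100):
    """Sketch of Algorithm 1 (IMSA); T(n, x) and S(n, x) play the roles of T_n x and S_n x."""
    x, y_prev = np.asarray(x1, dtype=float), np.asarray(y0, dtype=float)
    for n in range(1, n_iter + 1):
        beta = 0.5                 # beta_n in [a, b] subset of (0, 1)
        alpha = n / (n + 1.0)      # alpha_n -> 1
        rho = 1.0 / n ** 2         # rho_n >= 0 with a summable series
        z = (1 - beta) * x + beta * T(n, x)
        y = (1 - alpha) * T(n, x) + alpha * S(n, z)
        x, y_prev = y + rho * (y - y_prev), y   # inertial step x_{n+1} = y_n + rho_n (y_n - y_{n-1})
    return x

# toy usage: T_n x = S_n x = (x + 2)/2 is nonexpansive with fixed point 2
T = S = lambda n, x: (x + 2.0) / 2.0
print(imsa(T, S, x1=[10.0], y0=[10.0]))   # approaches [2.]
```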

4. Applications

In 2004, Huang et al. [42] proposed the extreme learning machine (ELM), a feedforward neural network-based machine learning technique. Standard ELM employs the structure of a single-hidden-layer feedforward neural network (SLFN), which allows such networks to be used much more effectively; see [43] for more detail. Only the weight vector between the hidden and output nodes needs to be determined in the original ELM, because the hidden nodes can be chosen randomly [42]. Training can therefore be completed considerably more quickly, since far fewer parameters need to be updated than with traditional SLFNs. Fast learning, easy implementation, and little human involvement are some of the benefits of ELM; see [44]. On the other hand, its results can be unstable, so many experiments are needed to identify the best ELM design; see [45] for more details. ELM is employed in a variety of areas, including computational intelligence and pattern recognition.
Let us give some basic knowledge of ELM for data classification problems. After that, we apply our obtained results to the convex minimization problem.
Let $\{(x_k, t_k) : x_k \in \mathbb{R}^n, t_k \in \mathbb{R}^m, k = 1, 2, \ldots, N\}$ be a training set of $N$ distinct samples, where $x_k = [x_{k1}, x_{k2}, \ldots, x_{kn}]$ is an input datum and $t_k = [t_{k1}, t_{k2}, \ldots, t_{km}]$ is a target. Let $W(x)$ be the activation function. An ELM with $M$ hidden nodes can be represented by the following mathematical model:
$\sum_{j=1}^{M}\rho_j W(\langle w_j, x_i\rangle + d_j) = o_i, \quad i = 1, \ldots, N,$ (19)
where $o_i$ is the network output for the $i$-th input, $\rho_j = [\rho_{j1}, \rho_{j2}, \ldots, \rho_{jm}]^T$ is the weight vector connecting the $j$-th hidden node and the output nodes, $w_j = [w_{j1}, w_{j2}, \ldots, w_{jn}]^T$ is the weight vector connecting the input nodes and the $j$-th hidden node, and $d_j$ is the threshold of the $j$-th hidden node.
Standard SLFNs with $M$ hidden nodes can approximate these $N$ samples with zero error. In other words, $\sum_{i=1}^{N}\|t_i - o_i\| = 0$; that is, there exist $\rho_j, w_j, d_j$ such that
$\sum_{j=1}^{M}\rho_j W(\langle w_j, x_i\rangle + d_j) = t_i, \quad i = 1, \ldots, N.$ (20)
The above equations can be written compactly as
$H\rho = T,$ (21)
where
$H = \begin{bmatrix} W(\langle w_1, x_1\rangle + d_1) & \cdots & W(\langle w_M, x_1\rangle + d_M) \\ \vdots & \ddots & \vdots \\ W(\langle w_1, x_N\rangle + d_1) & \cdots & W(\langle w_M, x_N\rangle + d_M) \end{bmatrix},$
$\rho = \begin{bmatrix}\rho_1^T \\ \vdots \\ \rho_M^T\end{bmatrix}_{M \times m}, \qquad T = \begin{bmatrix}t_1^T \\ \vdots \\ t_N^T\end{bmatrix}_{N \times m}.$
For the model $H\rho = T$, we aim to estimate the parameter $\rho$ by solving the minimization problem known as ordinary least squares (OLS),
$\min_{\rho}\|H\rho - T\|_2^2,$ (22)
where $\|x\|_2 = \sqrt{\sum_{i=1}^{n}|x_i|^2}$, $T \in \mathbb{R}^{N \times m}$ is the target data, $\rho \in \mathbb{R}^{M \times m}$ is the output weight, $H \in \mathbb{R}^{N \times M}$ is the hidden layer output matrix, $N$ is the number of training data, and $M$ is the number of unknown variables.
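To make the construction of the hidden layer output matrix H and the OLS estimate (22) concrete, here is a small NumPy sketch, assuming a sigmoid activation W and randomly drawn w_j and d_j as in the ELM recipe above; the function names and the toy data are ours.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def elm_hidden_matrix(X, M, seed=0):
    """Build the N x M hidden-layer output matrix H with random weights w_j and thresholds d_j."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], M))   # columns are the w_j
    d = rng.standard_normal(M)                 # thresholds d_j
    return sigmoid(X @ W + d)

# toy data: 3 samples with 2 features, one output, M = 5 hidden nodes
X = np.array([[0.1, 0.2], [0.4, 0.8], [0.9, 0.3]])
T = np.array([[1.0], [0.0], [1.0]])
H = elm_hidden_matrix(X, M=5)
rho = np.linalg.pinv(H) @ T   # ordinary least squares via the Moore-Penrose inverse, rho = H^+ T
```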
There are several ways to estimate the solution of Equation (22) using mathematical models, and the output weight $\rho$ can be obtained in different ways; see [42,46,47,48]. The solution is $\rho = H^{\dagger}T$ when the Moore–Penrose generalized inverse $H^{\dagger}$ of $H$ exists. However, in realistic situations the number of unknown variables $M$ is substantially larger than the number of training data $N$, which might cause the network to become overfitted, whereas the accuracy is low when the number of hidden nodes $M$ is small. Subset selection and ridge regression are the two classical methods for improving (22); see [49] for more detail. One well-known model for estimating the output weight $\rho$ is the least absolute shrinkage and selection operator (LASSO) [50],
$\min_{\rho}\|H\rho - T\|_2^2 + \lambda\|\rho\|_1,$ (23)
where $\lambda$ is a regularization parameter. The LASSO maintains the beneficial features of both ridge regression and subset selection; that is, regression analysis using LASSO improves the predictability and interpretability of the statistical model by performing both variable selection and regularization. Some years later, regularization techniques were combined with the original ELM to enhance OLS performance. More generally, we can rewrite (23) as the minimization of a sum of two functions:
$\min_{x \in H}\big(f(x) + g(x)\big),$ (24)
where $f, g : H \to (-\infty, \infty]$ are proper lower semi-continuous functions such that $f$ is differentiable. Let $S := \mathrm{argmin}(f + g)$ be the set of all solutions of problem (24).
We consider the convex minimization problem (24). It is known that $\breve{v}$ is a solution of problem (24) if and only if $\breve{v} = T\breve{v}$, where $T = \mathrm{prox}_{\mu g}(I - \mu\nabla f)$ and $\mu > 0$; see [20] for more detail.
Several methods have been proposed to solve the convex minimization problem (24). Polyak [51] was the first to present a method for accelerating algorithms and providing an improved convergence behavior by including an inertial step. Since then, numerous authors have employed the inertial technique to speed up the convergence rate of their algorithms to solve various problems; see [30,34,41,52,53,54,55,56].
The fast iterative shrinkage–thresholding algorithm (FISTA) [30], which performs an inertial step, is one of the most well-known forward–backward-type algorithms. It is defined by
$y_n = Tx_n,$
$t_{n+1} = \dfrac{1 + \sqrt{1 + 4t_n^2}}{2},$
$\theta_n = \dfrac{t_n - 1}{t_{n+1}},$
$x_{n+1} = y_n + \theta_n(y_n - y_{n-1}),$
where $n \ge 1$, $T := \mathrm{prox}_{\frac{1}{L}g}(I - \frac{1}{L}\nabla f)$, $x_1 = y_0 \in \mathbb{R}^n$, $t_1 = 1$, and $\theta_n$ is the inertial step size introduced by Nesterov [57]. Beck and Teboulle [30] introduced FISTA and proved the convergence rate of this algorithm. They also applied these results to the image restoration problem.
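A minimal NumPy sketch of the FISTA iteration above for the LASSO objective (23), reusing the soft-thresholding operator of Remark 1; taking the step size 1/L with L = 2‖H‖² is our assumption, consistent with the setting of Section 5.

```python
import numpy as np

def fista_lasso(H, T, lam, n_iter=200):
    """Sketch of FISTA for min ||H rho - T||_2^2 + lam*||rho||_1 (not the authors' code)."""
    L = 2.0 * np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the gradient
    soft = lambda v, m: np.sign(v) * np.maximum(np.abs(v) - m, 0.0)
    x = y_prev = np.zeros((H.shape[1], T.shape[1]))
    t = 1.0
    for _ in range(n_iter):
        grad = 2.0 * H.T @ (H @ x - T)
        y = soft(x - grad / L, lam / L)                 # y_n = T x_n (forward-backward step)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        x = y + ((t - 1.0) / t_next) * (y - y_prev)     # x_{n+1} = y_n + theta_n (y_n - y_{n-1})
        y_prev, t = y, t_next
    return x
```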
Recently, Bussaban et al. [41] introduced the parallel inertial S-iteration forward–backward algorithm (PISFBA). It is defined by
$y_n = x_n + \theta_n(x_n - x_{n-1}),$
$z_n = (1 - \beta_n)x_n + \beta_n T_n x_n,$
$x_{n+1} = (1 - \alpha_n)T_n y_n + \alpha_n T_n z_n,$
where $n \ge 1$, $x_0 = x_1 \in H$, $0 < q < \alpha_n \le 1$, $0 < s < \beta_n < r < 1$ and $\sum_{n=1}^{\infty}\theta_n\|x_n - x_{n-1}\| < \infty$. They proved a weak convergence theorem for PISFBA and applied this method to solve regression and data classification problems.
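For comparison, a minimal NumPy sketch of PISFBA with T_n passed as a callable; the parameter choices mirror Table 1 of Section 5 and are otherwise our own assumptions.

```python
import numpy as np

def pisfba(T, x1, n_iter=200):
    """Sketch of PISFBA; T(n, x) stands for the forward-backward operator T_n applied to x."""
    x_prev = x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        alpha = beta = 0.9 * n / (n + 1.0)
        diff = np.linalg.norm(x - x_prev)
        theta = 1.0 / (2.0 ** n * diff) if diff > 0 else 0.0  # keeps sum theta_n*||x_n - x_{n-1}|| finite
        y = x + theta * (x - x_prev)                          # inertial step
        z = (1 - beta) * x + beta * T(n, x)
        x_prev, x = x, (1 - alpha) * T(n, y) + alpha * T(n, z)
    return x
```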
Finally, we construct Algorithm 2 to solve the convex minimization problem (24) by applying Algorithm 1. Let $T_n = \mathrm{prox}_{\mu_n g_1}(I - \mu_n\nabla f_1)$ and $S_n = \mathrm{prox}_{\kappa_n g_2}(I - \kappa_n\nabla f_2)$, where $\mu_n \in (0, 2/L_1)$, $\kappa_n \in (0, 2/L_2)$ and $f_i, g_i : H \to (-\infty, \infty]$, $i = 1, 2$, are proper lower semi-continuous functions such that the $f_i$ are differentiable and the $\nabla f_i$ are Lipschitz continuous with constants $L_i > 0$.
Algorithm 2. (FBIMSA) A forward–backward inertial modified S-algorithm.
1: Initial. Take arbitrary $y_0, x_1 \in C$ and set $n = 1$, where $\beta_n$, $\alpha_n$ and $\varrho_n$ are the same as in Algorithm 1.
2: Step 1. Compute $y_n$ and $z_n$:
$z_n = (1 - \beta_n)x_n + \beta_n\,\mathrm{prox}_{\mu_n g_1}(I - \mu_n\nabla f_1)x_n,$
$y_n = (1 - \alpha_n)\,\mathrm{prox}_{\mu_n g_1}(I - \mu_n\nabla f_1)x_n + \alpha_n\,\mathrm{prox}_{\kappa_n g_2}(I - \kappa_n\nabla f_2)z_n.$
Step 2. Compute the inertial step:
$x_{n+1} = y_n + \varrho_n(y_n - y_{n-1}).$
Then, set $n := n + 1$ and go back to Step 1.
In the next theorem, we use the result of the convergence theorem of Algorithm 1 to obtain the convergence theorem of Algorithm 2.
Theorem 2.
Let a sequence $\{x_n\}$ be generated by Algorithm 2. Then, $x_n \rightharpoonup \breve{v} \in S$, where $S := \mathrm{argmin}(f_1 + g_1) \cap \mathrm{argmin}(f_2 + g_2)$.
Proof. 
Let $T_n = \mathrm{prox}_{\mu_n g_1}(I - \mu_n\nabla f_1)$ and $S_n = \mathrm{prox}_{\kappa_n g_2}(I - \kappa_n\nabla f_2)$, where $\mu_n \in (0, 2/L_1)$ and $\kappa_n \in (0, 2/L_2)$. Then, $T_n$ and $S_n$ are nonexpansive operators for all $n$. Similarly, let $T$ and $S$ be the forward–backward operators of $f_1, g_1$ and $f_2, g_2$ with respect to $\mu$ and $\kappa$, respectively, where $\mu \in (0, 2/L_1)$ and $\kappa \in (0, 2/L_2)$; then $T$ and $S$ are nonexpansive operators. Thus, $T = \mathrm{prox}_{\mu g_1}(I - \mu\nabla f_1)$ and $S = \mathrm{prox}_{\kappa g_2}(I - \kappa\nabla f_2)$. By Proposition 26.1 in [38], we know that $\bigcap_{n=1}^{\infty}F(T_n) = \mathrm{argmin}(f_1 + g_1)$ and $\bigcap_{n=1}^{\infty}F(S_n) = \mathrm{argmin}(f_2 + g_2)$. It follows from Lemma 5 that $\{T_n\}$ and $\{S_n\}$ satisfy the NST-condition (I) with $T$ and $S$, respectively. Applying Theorem 1 with the complete graph $G$ on $\mathbb{R}^n$, that is, $E(G) = \mathbb{R}^n \times \mathbb{R}^n$, we obtain the required result directly. □
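As an illustration of Theorem 2, here is a minimal NumPy sketch of Algorithm 2 in the common setting of Section 5, where f_1 = f_2 = ‖Hρ − T‖² and g_1 = g_2 = λ‖ρ‖₁, so both forward–backward operators reduce to a gradient step followed by soft thresholding; the step size 1/L and the control sequences follow Table 1, and everything else is an illustrative assumption.

```python
import numpy as np

def fbimsa_lasso(H, T, lam, n_iter=400):
    """Sketch of Algorithm 2 (FBIMSA) for min ||H rho - T||_2^2 + lam*||rho||_1."""
    L = 2.0 * np.linalg.norm(H, 2) ** 2
    mu = 1.0 / L                                                      # step size in (0, 2/L)
    soft = lambda v, m: np.sign(v) * np.maximum(np.abs(v) - m, 0.0)
    fb = lambda v: soft(v - mu * 2.0 * H.T @ (H @ v - T), mu * lam)   # prox_{mu*g}(I - mu*grad f)
    x = y_prev = np.zeros((H.shape[1], T.shape[1]))
    for n in range(1, n_iter + 1):
        beta, alpha = 0.99, n / (n + 1.0)
        rho_n = n / (n + 1.0)                    # inertial parameter for n <= I, as in Table 1
        z = (1 - beta) * x + beta * fb(x)
        y = (1 - alpha) * fb(x) + alpha * fb(z)
        x, y_prev = y + rho_n * (y - y_prev), y  # inertial step
    return x
```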

5. Numerical Experiments

This section will present the basic ELM model and its fundamental supervised classification versions. We also give the result of data classification using each method.
For solving the convex minimization problem (24), we use the LASSO model with a sigmoid activation function $W(x)$. For our algorithm, we set $f_1(\rho) = f_2(\rho) = \|H\rho - T\|_2^2$ and $g_1(\rho) = g_2(\rho) = \lambda\|\rho\|_1$. For the other algorithms, we set $f(\rho) = \|H\rho - T\|_2^2$ and $g(\rho) = \lambda\|\rho\|_1$.
The control parameters of all methods are set as shown in Table 1, where $L = 2\|H_1\|^2$, $H_1$ is the hidden layer output matrix of the training data, and $I$ is the number of iterations. We use the accuracy of the output data to measure the performance of each method, which is calculated by
$\mathrm{accuracy} = \dfrac{\text{correctly predicted data}}{\text{all data}} \times 100.$
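A small sketch of this accuracy measure, assuming predicted and true class labels are available as arrays; decoding the network output Hρ into labels (e.g., by argmax over one-hot targets) is our assumption.

```python
import numpy as np

def accuracy(pred_labels, true_labels):
    """accuracy = correctly predicted data / all data * 100."""
    pred, true = np.asarray(pred_labels), np.asarray(true_labels)
    return 100.0 * np.mean(pred == true)

# e.g., with one-hot targets: labels = np.argmax(H_test @ rho, axis=1)
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 75.0
```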
Next, we use the Breast Cancer, Heart Disease UCI and Ionosphere data sets for classification, which are described as follows:
Wisconsin Breast Cancer data set [58]: This data set was created by W.H. Wolberg (General Surgery Department, University of Wisconsin, Clinical Sciences Center), W.N. Street, and O.L. Mangasarian (Computer Sciences Department, University of Wisconsin). It contains 2 classes, 569 observations, and 30 attributes.
Heart Disease UCI [59]: This data set contains 76 attributes. However, all published studies use only a subset of 14 of them. This data set shows the patient’s presence of heart disease. Our goal is to divide the data into two categories.
Ionosphere data set [60]: This radar data set, from the Ionosphere collection, was gathered by a system near Goose Bay, Labrador. It consists of 351 observations and 34 attributes. Radar returns showing evidence of some type of structure in the ionosphere are considered "good"; "bad" returns are those that do not, i.e., their signals pass through the ionosphere.
The training and testing data of each set are given in Table 2.
We performed experiments to compare the performance of the studied algorithms, namely Algorithm 2, PISFBA, and FISTA. For each data set, the number of hidden nodes M depends on the characteristics of the data, and the number of iterations I is selected to achieve the highest performance of each studied algorithm.
The following numerical results are obtained by each algorithm on each data set under the control sequences in Table 1 and the parameters selected for each data set in Table 3.
In Table 4, we use acc.Train and acc.Test to represent the accuracy of training and testing, respectively.
We observe from Table 4 that our proposed algorithm, Algorithm 2, has a higher performance than PISFBA and FISTA in terms of the accuracy of training and testing of each data set. So, we can conclude from our experiments that Algorithm 2 can be used for data classifications of the selected data sets with higher accuracy compared to PISFBA and FISTA.
Remark 2.
Limitations of the proposed algorithm and its applications.
Our proposed algorithm, Algorithm 2, guarantees weak convergence in real Hilbert spaces under the control sequences $\{\alpha_n\}$ and $\{\beta_n\}$, together with the inertial parameter $\varrho_n$, such that $\alpha_n \to 1$, $\beta_n \in [a, b] \subset (0, 1)$ and $\varrho_n \ge 0$ with $\sum_{n=1}^{\infty}\varrho_n < \infty$. For applications of Algorithm 2, we have to choose $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\varrho_n\}$ under the above restrictions. However, in finite-dimensional Euclidean spaces, we obtain strong convergence of Algorithm 2. Another limitation of Algorithm 2 is the computation of the Lipschitz constant of $\nabla f$ when $f(\rho) = \|H\rho - T\|_2^2$. For big data sets, this computation may be difficult because of the large dimension of the matrix $H$.

6. Discussions

In this work, we propose a new accelerated common fixed-point algorithm, Algorithm 2, and employ it to solve the data classification of the Breast Cancer, Heart Disease and Ionosphere data sets. A convergence theorem of the proposed algorithm is established under the control conditions $\alpha_n \to 1$, $\beta_n \in [a, b] \subset (0, 1)$ and $\varrho_n \ge 0$ with $\sum_{n=1}^{\infty}\varrho_n < \infty$. In our experiments, Algorithm 2 has a higher performance than PISFBA and FISTA. The performance of our proposed algorithm depends on the inertial parameter $\varrho_n$: we note that if we choose $\varrho_n$ close to 1, then we obtain a higher performance of Algorithm 2. We also observe that the performance of Algorithm 2 depends on the number of hidden nodes and on the characteristics of the data sets. Future research will focus on finding new methods or techniques that increase the performance of algorithms for the classification of big real data sets of NCD patients from the Sriphat Medical Center, Faculty of Medicine, Chiang Mai University, Thailand.

7. Conclusions

We introduced an inertial modified S-algorithm (IMSA) for finding a common fixed point of two countable families of G-nonexpansive mappings and proved its weak convergence. We then proposed a new forward–backward inertial modified S-algorithm (FBIMSA) for solving the convex minimization problem. Finally, we applied the proposed algorithm to the data classification of the Breast Cancer, Heart Disease and Ionosphere data sets. The numerical results demonstrated the advantages of the proposed algorithm.

Author Contributions

Conceptualization, S.S.; Formal analysis, K.J. and S.S.; Investigation, K.J.; Methodology, S.S.; Supervision, S.S.; Validation, S.S.; Writing—original draft, K.J.; Writing—review and editing, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data were obtained from https://archive.ics.uci.edu (accessed on 23 September 2022).

Acknowledgments

This research has also received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (grant number B05F640183) and Chiang Mai University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 1922, 3, 133–181.
2. Branciari, A. A fixed point theorem for mappings satisfying a general contractive condition of integral type. Int. J. Math. Math. Sci. 2002, 29, 531–536.
3. Suzuki, T. A generalized Banach contraction principle that characterizes metric completeness. Proc. Am. Math. Soc. 2008, 136, 1861–1869.
4. Zhang, X. Common fixed point theorems for some new generalized contractive type mappings. J. Math. Anal. Appl. 2007, 333, 780–786.
5. Ran, A.C.M.; Reurings, M.C.B. A fixed point theorem in partially ordered sets and some applications to matrix equations. Proc. Am. Math. Soc. 2004, 132, 1435–1443.
6. Jachymski, J. The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 2008, 136, 1359–1373.
7. Thianwan, T.; Yambangwai, D. Convergence analysis for a new two-step iteration process for G-nonexpansive mappings with directed graphs. Fixed Point Theory Appl. 2019, 2019, 44.
8. Bojor, F. Fixed point of ψ-contraction in metric spaces endowed with a graph. Anna. Univ. Crai. Math. Comp. Sci. Ser. 2010, 37, 85–92.
9. Aleomraninejad, S.M.A.; Rezapour, S.; Shahzad, N. Some fixed point result on a metric space with a graph. Topol. Appl. 2012, 159, 659–663.
10. Tiammee, J.; Suantai, S. Coincidence point theorems for graph-preserving multi-valued mappings. Fixed Point Theory Appl. 2014, 2014, 70.
11. Sridarat, P.; Suparaturatorn, R.; Suantai, S.; Cho, Y.J. Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 2019, 42, 2361–2380.
12. Tripak, O. Common fixed points of G-nonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 2016, 87.
13. Tiammee, J.; Kaewkhao, A.; Suantai, S. On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 2015, 187.
14. Suantai, S.; Kankam, K.; Cholamjiak, P.; Cholamjiak, W. A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. 2021, 40, 145.
15. Janngam, K.; Wattanataweekul, R. A new accelerated fixed-point algorithm for classification and convex minimization problems in Hilbert spaces with directed graphs. Symmetry 2022, 14, 1059.
16. Janngam, K.; Wattanataweekul, R. An accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings applied to image recovery. Symmetry 2022, 14, 662.
17. Wattanataweekul, R.; Janngam, K. An accelerated common fixed point algorithm for a countable family of G-nonexpansive mappings with applications to image recovery. J. Inequal. Appl. 2022, 2022, 68.
18. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
19. Cholamjiak, P.; Shehu, Y. Inertial forward-backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435.
20. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
21. Suantai, S.; Eiamniran, N.; Pholasa, N.; Cholamjiak, P. Three-step projective methods for solving the split feasibility problems. Mathematics 2019, 7, 712.
22. Suantai, S.; Kesornprom, S.; Cholamjiak, P. Modified proximal algorithms for finding solutions of the split variational inclusions. Mathematics 2019, 7, 708.
23. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
24. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple set split feasibility problem and its applications. Inverse Probl. 2005, 21, 2071–2084.
25. Ben-Tal, A.; Nemirovski, A. Lectures on Modern Convex Optimization, Analysis, Algorithms, and Engineering Applications; MPS/SIAM Ser. Optim.; SIAM: Philadelphia, PA, USA, 2001.
26. Bioucas-Dias, J.; Figueiredo, M. A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004.
27. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1998, 20, 33–61.
28. Donoho, D.L.; Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. J. Am. Statist. Assoc. 1995, 90, 1200–1224.
29. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
30. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
31. Johnsonbaugh, R. Discrete Mathematics; Pearson: Hoboken, NJ, USA, 1997.
32. Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
33. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
34. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
35. Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11.
36. Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34.
37. Suantai, S.; Donganont, M.; Cholamjiak, W. Hybrid methods for a countable family of G-nonexpansive mappings in Hilbert spaces endowed with graphs. Mathematics 2019, 7, 936.
38. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2017.
39. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899.
40. Beck, A. First-Order Methods in Optimization; Tel-Aviv University: Tel Aviv-Yafo, Israel, 2017; pp. 129–177. ISBN 978-1-61197-498-0.
41. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30.
42. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
43. Ding, S.; Xu, X.; Nie, R. Extreme learning machine and its applications. Neural Comput. Appl. 2014, 25, 549–556.
44. Wang, Z.; Xin, J.; Sun, P.; Lin, Z.; Yao, Y.; Gao, X. Improved lung nodule diagnosis accuracy using lung CT images with uncertain class. Comput. Methods Programs Biomed. 2018, 162, 197–209.
45. Silitonga, A.S.; Shamsuddin, A.H.; Mahlia, T.M.I.; Milano, J.; Kusumo, F.; Siswantoro, J.; Dharma, S.; Sebayang, A.H.; Masjuki, H.H.; Ong, H.C. Biodiesel synthesis from Ceiba pentandra oil by microwave irradiation-assisted transesterification: ELM modeling and optimization. Renew. Energy 2020, 146, 1278–1291.
46. Huang, G.-B.; Chen, L.; Siew, C.-K. Universal approximation using incremental constructive feedforward networks with random hidden nodes. Trans. Neural Netw. 2006, 17, 879–892.
47. Widrow, B.; Greenblatt, A.; Kim, Y.; Park, D. The no-prop algorithm: A new learning algorithm for multilayer neural networks. J. Comput. Graph. Stat. 2013, 17, 182–188.
48. Brunton, S.L.; Kutz, J.N. Singular Value Decomposition (SVD). In Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control; Cambridge University Press: Cambridge, UK, 2019; pp. 3–46.
49. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V.H. Winston: Washington, DC, USA, 1977.
50. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B Methodol. 1996, 58, 267–288.
51. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
52. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
53. Janngam, K.; Suantai, S. An accelerated forward-backward algorithm with applications to image restoration problems. Thai J. Math. 2021, 19, 325–339.
54. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Two modifications of the inertial Tseng extragradient method with self-adaptive step size for solving monotone variational inequality problems. Demonstr. Math. 2020, 53, 208–224.
55. Gebrie, A.G.; Wangkeeree, R. Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 2020, 53, 332–351.
56. Yatakoat, P.; Suantai, S.; Hanjing, A. On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications. Adv. Contin. Discrete Models 2022, 2022, 25.
57. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
58. Mangasarian, O.L.; Wolberg, W.H. Cancer diagnosis via linear programming. SIAM News 1990, 23, 1–18.
59. Lichman, M. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/datasets/Heart+Disease (accessed on 20 April 2020).
60. Dua, D.; Graff, C. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/datasets/ionosphere (accessed on 23 September 2022).
Table 1. Selected parameters of each method.
Algorithm 2: $\alpha_n = \frac{n}{n+1}$, $\beta_n = 0.99$, $c = 1/L$, $\varrho_n = \frac{n}{n+1}$ if $1 \le n \le I$, and $\frac{1}{2^n}$ otherwise.
PISFBA: $\alpha_n = \beta_n = \frac{0.9n}{n+1}$, $c = 1/L$, $\theta_n = \frac{1}{2^n\|x_n - x_{n-1}\|}$ if $x_n \ne x_{n-1}$, and 0 otherwise.
FISTA: $t_1 = 1$, $t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}$, $\theta_n = \frac{t_n - 1}{t_{n+1}}$.
Table 2. Data sets of Breast Cancer, Heart Disease UCI, and Ionosphere; 70% of each data set is used for training and 30% for testing.
Data Set | Features | Training Set | Testing Set
Breast Cancer | 14 | 478 | 205
Heart Disease UCI | 14 | 213 | 90
Ionosphere | 34 | 205 | 146
Table 3. Number of hidden nodes and iterations for each data set.
Data Set | Number of Hidden Nodes (M) | Number of Iterations (I)
Breast Cancer | 100 | 400
Heart Disease UCI | 350 | 500
Ionosphere | 50 | 100
Table 4. Performance comparison using different methods.
Data Set | Algorithm 2 (acc.Train / acc.Test) | PISFBA (acc.Train / acc.Test) | FISTA (acc.Train / acc.Test)
Breast Cancer | 97.11 / 97.46 | 96.11 / 95.89 | 92.64 / 93.25
Heart Disease UCI | 78.34 / 79.01 | 72.54 / 73.84 | 69.64 / 68.52
Ionosphere | 93.98 / 94.09 | 90.54 / 90.71 | 91.33 / 91.71
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
