Article

A Novel Orthogonal Extreme Learning Machine for Regression and Classification Problems

1 Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
2 Public Security Information Department, Liaoning Police College, Dalian 116036, China
3 Information Science and Technology School, Dalian Maritime University, Dalian 116026, China
* Authors to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1284; https://doi.org/10.3390/sym11101284
Submission received: 18 August 2019 / Revised: 7 October 2019 / Accepted: 10 October 2019 / Published: 14 October 2019

Abstract

An extreme learning machine (ELM) is an innovative learning algorithm for single hidden layer feed-forward neural networks; essentially, it only needs to find the optimal output weight that minimizes the output error, via least squares regression from the hidden layer to the output layer. Focusing on the output weight, we introduce an orthogonal constraint on the output weight matrix and propose a novel orthogonal extreme learning machine (NOELM) based on the idea of column-by-column optimization, whose main characteristic is that the optimization of the complex output weight matrix is decomposed into optimizing the single column vectors of the matrix. The complex orthogonal Procrustes problem is transformed into a simple least squares regression with an orthogonal constraint, which preserves more information from the ELM feature space to the output subspace; these properties give NOELM stronger regression analysis and discrimination ability. Experiments show that NOELM outperforms ELM and OELM in training time, testing time and accuracy.

1. Introduction

An extreme learning machine (ELM) is an innovative learning algorithm for single hidden layer feed-forward neural networks (SLFNs for short), proposed by Huang et al. [1], that is characterized by internal parameters generated randomly without tuning. In essence, the ELM is a special artificial neural network model whose input weights are generated randomly and fixed, so that the unique least-squares solution of the output weight can be obtained [1], which yields good performance [2,3,4]. Conventional models suffer from poor convergence, limited generalization, over-fitting, local minima and tedious parameter adjustment, all of which make the ELM comparatively superior [1,5]. The learning process of the ELM is relatively simple. Firstly, some internal parameters of the hidden layer are generated randomly, such as the input weights connecting the input layer and the hidden layer, the number of hidden layer neurons, etc., which are fixed during the whole process. Secondly, a non-linear mapping function is selected to map the input data to the feature space, and by comparing the real outputs with the expected outputs, the key parameter (the output weight connecting the hidden layer and the output layer) can be obtained directly, without iterative tuning. Hence, its training speed is considerably faster than that of conventional algorithms [6].
Due to the good performance of the ELM, it is used widely in regression and classification. To meet higher requirements, researchers have optimized and improved the ELM and proposed many better algorithms based on it. Di Wang et al. combined the local-weighted jackknife prediction with the regularized ELM and proposed a novel conformal regressor (LW-JP-RELM), which complements the ELM with interval predictions satisfying a given level of confidence [7]. To improve generalization performance, Ying Yin et al. proposed enhancing the ELM by Markov boundary-based feature selection, using feature interaction and mutual information to reduce the number of features and construct a more compact network, whose generalization was improved greatly [8]. Ding et al. reformulated an optimization extreme learning machine with a new regularization parameter, which is bounded between 0 and 1 and easier to interpret than the error penalty parameter C, and could achieve better generalization performance [9]. To address the sensitivity of the ELM to ill-conditioned data, Yildirim and Özkale proposed two novel algorithms based on the ELM, ridge regression and almost unbiased ridge regression, and also gave three criteria for selecting the regularization parameter, which greatly improved the generalization and stability of the ELM [10]. Besides, there are more effective algorithms based on the ELM, such as the Distributed Generalized Regularized ELM (DGR-ELM) [11], Self-Organizing Map ELM (SOM-ELM) [12], Data and Model Parallel ELM (DMP-ELM) [13], Genetic Algorithm ELM (GA-ELM) [14], Jaya optimization with mutation ELM (MJaya-ELM) [15], etc.
Whether with the simple ELM or with more complex ELM-based algorithms, one must essentially find the optimal solution of the two key parameters of the ELM: the number of hidden layer neurons and the output weights. From the input layer to the output layer, the ELM essentially learns the output weights via least squares regression analysis [16]. Therefore, many algorithms beyond those mentioned above are still proposed based on least squares regression, and their main work is to find an optimal transformation matrix so as to minimize the sum-of-squares error. Among these strategies [17,18], introducing an orthogonal constraint into the optimization problem is often required and is widely employed in classification and subspace learning. Nie et al. showed that the performance of least squares discriminant and regression analysis with an orthogonal constraint is much better than without it [19,20]. After introducing the orthogonal constraint into the ELM, the optimization problem becomes an unbalanced Procrustes problem, which is hard to solve. Yong Peng et al. pointed out that the unbalanced Procrustes problem can be transformed into a balanced Procrustes problem, which is relatively simple [16]. Motivated by this research, in this paper we focus on the output weight and propose a novel orthogonal extreme learning machine (NOELM) to solve the unbalanced Procrustes problem; its main contribution is that the optimization of the complex matrix is decomposed into optimizing the single column vectors of the matrix, reducing the complexity of the algorithm.
The remainder of the paper is organized as follows. Section 2 reviews briefly the basic ELM model. In Section 3, the model formulation and the iterative optimization method are detailed. The convergence and complexity analysis is presented in Section 4. In Section 5, the experiments are conducted to show the performances of NOELM. Finally, Section 6 concludes the paper.

2. Extreme Learning Machine

Mathematically, given $N$ discrete samples $\{(x_i, y_i)\}_{i=1}^{N}$, where $N$ is the sample number, $x_i \in \mathbb{R}^n$ is the input vector and $y_i \in \mathbb{R}^m$ is the expected output vector of the $i$-th sample, $y_i = [y_{i1},\ y_{i2},\ \ldots,\ y_{im}]^T$, $i = 1, 2, 3, \ldots, N$. For a selected activation function, if the real output of the SLFNs equals the expected output $y_j$, the mathematical representation of the SLFNs is as follows:
$$f(x_j) = \sum_{i=1}^{L} \beta_i\, g(\omega_i, x_j, b_i) = y_j, \tag{1}$$
where $\omega_i = [\omega_{i1}, \ldots, \omega_{in}]^T$ is the input weight connecting the input layer to the $i$-th hidden layer neuron, $b_i$ is the bias of the $i$-th hidden layer neuron, $\beta_i = [\beta_{i1}, \ldots, \beta_{im}]^T$ is the output weight connecting the $i$-th hidden layer neuron and the output layer, and $L$ is the number of hidden layer neurons, as shown in Figure 1.
Equation (1) can be compactly rewritten as
$$H\beta = Y, \tag{2}$$
where
$$H = \begin{bmatrix} g(\omega_1, x_1, b_1) & \cdots & g(\omega_L, x_1, b_L) \\ \vdots & \ddots & \vdots \\ g(\omega_1, x_N, b_1) & \cdots & g(\omega_L, x_N, b_L) \end{bmatrix}_{N \times L}, \tag{3}$$
$$\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \qquad Y = \begin{bmatrix} y_1^T \\ \vdots \\ y_N^T \end{bmatrix}_{N \times m}. \tag{4}$$
So, based on the theory of ELM, the optimal solution of Equation (2) is as follows,
$$\beta = H^{\dagger}Y, \tag{5}$$
where $H^{\dagger}$ is the Moore–Penrose inverse of the matrix $H$, $H^{\dagger} = (H^TH)^{-1}H^T$. To further improve the model precision, regularization is introduced into the ELM, and the optimization problem is transformed as follows:
$$\min:\ \frac{1}{2}\|H\beta - Y\|^2 + \frac{C}{2}\|\beta\|^2, \tag{6}$$
where C is the regularization parameter, which is used to balance the empirical risk and structural risk. Based on the Karush–Kuhn–Tucker condition, the optimal solution of β is obtained:
$$\beta = (H^TH + CI)^{-1}H^TY. \tag{7}$$
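As a concrete illustration of Equations (3)-(7), the following minimal sketch (written in Python/NumPy rather than the authors' Matlab code; the function names, the sigmoid choice and the default value of C are illustrative assumptions) fits a regularized ELM in closed form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, Y, L, C=1.0, seed=0):
    """Minimal regularized ELM fit, a sketch of Eqs. (3)-(7)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W_in = rng.uniform(-1.0, 1.0, size=(n, L))   # random input weights, kept fixed
    b = rng.uniform(-1.0, 1.0, size=L)           # random biases
    H = sigmoid(X @ W_in + b)                    # hidden-layer output matrix, Eq. (3)
    # Closed-form output weights, Eq. (7); the unregularized Eq. (5) is np.linalg.pinv(H) @ Y
    beta = np.linalg.solve(H.T @ H + C * np.eye(L), H.T @ Y)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return sigmoid(X @ W_in + b) @ beta          # Eq. (2): H @ beta approximates Y
```

Only the output weight is learned; the random hidden layer is never tuned, which is exactly what gives the ELM its training-speed advantage.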

3. Novel Orthogonal Extreme Learning Machine (NOELM)

After the orthogonal constraint is introduced into the ELM shown in Figure 1, the optimization problem is transformed as follows:
$$J(\beta_{L+1}) = \min_{\beta_{L+1}^T\beta_{L+1} = I} \|H_{L+1}\beta_{L+1} - Y\|^2, \tag{8}$$
where $H_{L+1} \in \mathbb{R}^{N \times (L+1)}$ and $\beta_{L+1} \in \mathbb{R}^{(L+1) \times m}$ are the output matrix and the output weight of the hidden layer, and $Y \in \mathbb{R}^{N \times m}$. Because of the orthogonal constraint, the input samples are mapped into an orthogonal subspace, where their metric structure can be preserved.
Assuming $L > m$, problem (8) is an unbalanced orthogonal Procrustes problem, which is difficult to solve directly because of the orthogonal constraint [16]. In this paper, an improved method is proposed to optimize problem (8) based on the following lemma.
Lemma 1
[[21], Theorem 3.1]. If $\beta_{L+1}^* = [\rho_1^*, \ldots, \rho_m^*]$ is the optimal solution of problem (8) and its orthogonal complement is $B_{L+1}^*$, then $H_{L+1}^T[\,Y,\ H_{L+1}B_{L+1}^*\,]$ is symmetric and positive semi-definite, and
$$\|H_{L+1}\rho_j^* - \hat{y}_j\| = \min_{\rho_j \perp \tilde{\beta}_j^*,\ \|\rho_j\|_2 = 1} \|H_{L+1}\rho_j - \hat{y}_j\|, \tag{9}$$
where $\tilde{\beta}_j^* = [\rho_1^*, \ldots, \rho_{j-1}^*, \rho_{j+1}^*, \ldots, \rho_m^*]$ and $\hat{y}_j$ is the $j$-th column vector of $Y$.
The proof of Lemma 1 is simple and can be found in the literature [21]. Motivated by Lemma 1, a local transformation is applied to Equation (8): we relax the $j$-th column $\rho_j$ ($j \le m$) and fix the others, $\tilde{\beta}_j = [\rho_1, \ldots, \rho_{j-1}, \rho_{j+1}, \ldots, \rho_m]$; then the equation can be transformed into
$$J(\rho_j) = \min_{\rho_j \perp \tilde{\beta}_j,\ \|\rho_j\|_2 = 1} \|H_{L+1}\rho_j - \hat{y}_j\|^2. \tag{10}$$
If $\rho_j^*$ is the optimal solution of Equation (10), the approximation $\beta_{L+1}$ can be improved by replacing $\rho_j$ with $\rho_j^*$, and obviously the modified $\beta_{L+1}^* = [\rho_1, \ldots, \rho_{j-1}, \rho_j^*, \rho_{j+1}, \ldots, \rho_m]$ is still orthogonal.
Solving the constrained problem (10) directly is a little difficult, so the orthogonal complement $B_{L+1}$ of $\beta_{L+1}$ can be used to simplify Equation (10). Set $P_{L+1} = [\rho_j, B_{L+1}]$; since $\rho_j \perp \tilde{\beta}_j$, $P_{L+1}$ is the orthogonal complement of $\tilde{\beta}_j$. So, in the constrained problem (10), the condition $\rho_j \perp \tilde{\beta}_j$ can be represented in another form, $\rho_j = P_{L+1}x$, where $x \in \mathbb{R}^{n_L}$ is a unit vector ($n_L$ being the number of columns of $P_{L+1}$). Thus, problem (10) can be transformed into the following form with a quadratic equality constraint:
$$J(x) = \min_{\|x\|_2 = 1} \|H_{L+1}P_{L+1}x - \hat{y}_j\|^2. \tag{11}$$
Clearly, after obtaining the optimal solution $x^*$ of problem (11), the solution of problem (10) is $\rho_j^* = P_{L+1}x^*$. If the orthogonal complement of $\beta_{L+1}^*$ is $B_{L+1}^*$, then $B_{L+1}^* = P_{L+1}W$, where $W$ is the orthogonal complement of $x^*$; it can be constructed easily using the Householder reflection $I - 2\omega\omega^T$ with $\|\omega\| = 1$, which satisfies $(I - 2\omega\omega^T)x^* = -\mathrm{sign}(x_1^*)e_1$, where $x_1^*$ is the first component of $x^*$. Indeed, partitioning $P_{L+1}(I - 2\omega\omega^T)$, $\rho_j^*$ and $B_{L+1}^*$ can be picked out from the following equation:
$$P_{L+1}(I - 2\omega\omega^T) = [\,-\mathrm{sign}(x_1^*)\rho_j^*,\ B_{L+1}^*\,]. \tag{12}$$
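A small NumPy sketch of this partitioning step is given below (the helper name is hypothetical; it assumes x_star is the unit-norm solution of problem (11) and that P has orthonormal columns, and it uses the standard sign convention for the Householder vector):

```python
import numpy as np

def householder_split(P, x_star):
    """Recover rho_j^* = P x^* and its complement B^* within col(P), as in Eq. (12)."""
    s = np.sign(x_star[0]) if x_star[0] != 0 else 1.0
    u = x_star.astype(float).copy()
    u[0] += s                                  # omega proportional to x^* + sign(x_1^*) e_1
    u /= np.linalg.norm(u)
    M = P - 2.0 * np.outer(P @ u, u)           # P (I - 2 omega omega^T)
    rho_star = -s * M[:, 0]                    # first column equals -sign(x_1^*) rho_j^*
    B_star = M[:, 1:]                          # remaining columns span the complement of rho_j^* in col(P)
    return rho_star, B_star
```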
To solve problem (11), first rewrite Equation (11) in the general form
$$J(x) = \min_{\|x\|_2 = 1} \|Ax - y\|^2, \tag{13}$$
where $A = H_{L+1}P_{L+1}$ and $y = \hat{y}_j$. $\|Ax - y\|^2$ can be expanded as
$$\|Ax - y\|^2 = \|Ax\|^2 + \|y\|^2 - 2\,\mathrm{trace}(x^TA^Ty) \le \|A\|^2 + \|y\|^2 - 2\,\mathrm{trace}(x^TA^Ty). \tag{14}$$
As can be seen from Equations (13) and (14), the parameters $A$ and $y$ are fixed, so the minimization of $J(x)$ is transformed, approximately, into the maximization of $\mathrm{trace}(x^TA^Ty)$, as shown in Equation (15), where $W = A^Ty$:
$$x = \{X : X = \arg\max \mathrm{trace}(X^TW)\}. \tag{15}$$
Let the singular value decomposition (SVD) of $W$ be
$$W = U\,\mathrm{diag}(\Sigma_k,\ O_{s-k})\,V^T, \tag{16}$$
where $x \in \mathbb{R}^{n_L}$, $s = n_L$, $\Sigma_k = \mathrm{diag}(\sigma_1, \ldots, \sigma_k)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_k > 0$, $k = \mathrm{rank}(W)$, and $U$ and $V$ are orthogonal.
Setting $X^* = U^TXV$, we have
$$\mathrm{trace}(X^TW) = \mathrm{trace}\!\left(X^{*T}\mathrm{diag}(\Sigma_k,\ O_{s-k})\right). \tag{17}$$
So,
$$x = \{X : X = UX^*V^T,\ X^* = \arg\max \mathrm{trace}(X^{*T}\mathrm{diag}(\Sigma_k,\ O_{s-k}))\}. \tag{18}$$
As noted above, $x \in \mathbb{R}^{n_L}$ is a unit vector. Partitioning $X^*$,
$$X^* = \begin{bmatrix} X_{11} \\ X_{21} \end{bmatrix}, \qquad X_{11} \in \mathbb{R}^{k},\ X_{21} \in \mathbb{R}^{s-k}. \tag{19}$$
Since $x \in \mathbb{R}^{n_L}$, it follows that
$$\max \mathrm{trace}\!\left(X^{*T}\mathrm{diag}(\Sigma_k,\ O_{s-k})\right) = \max \mathrm{trace}(X_{11}^T\Sigma_k). \tag{20}$$
Note that $X^* = [x_{ij}]$ is a unit orthogonal vector with $\|X^*\| = 1$, so $-1 \le x_{ij} \le 1$. Based on Equation (20), the maximum is attained when $x_{ij} = 1$ for $i = j$; then $X_{11} = I_k$, $X_{21} = O_{s-k}$, and $k = 1$. Hence, with $G = O_{s-k}$,
$$x = \left\{X : X = U\begin{bmatrix} I_k \\ G \end{bmatrix}V^T\right\}. \tag{21}$$
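Since $W = A^Ty$ is a single column here, $k = 1$ and Equation (21) collapses to the first left singular vector of $W$, i.e., $x = W/\|W\|$. A brief sketch (illustrative names; it assumes $W \neq 0$) checks that the two routes agree:

```python
import numpy as np

def solve_trace_max(A, y):
    """Approximate solution of problem (13): maximize trace(x^T A^T y) over unit vectors x."""
    W = A.T @ y
    # Route 1: explicit SVD as in Eqs. (16)-(21); for a nonzero vector, k = rank(W) = 1
    U, sigma, Vt = np.linalg.svd(W.reshape(-1, 1), full_matrices=True)
    x_svd = U[:, 0] * Vt[0, 0]
    # Route 2: the closed form the SVD collapses to
    x_direct = W / np.linalg.norm(W)
    assert np.allclose(x_svd, x_direct)
    return x_direct
```

The closed form is what makes the per-column update cheap: no iterative solver is needed for each relaxed column.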
Based on the analysis above, the novel optimization of objective problem (8) is now presented; its details are as follows (Algorithm 1):
Algorithm 1: Optimization of objective problem (8)
Basic information: training samples $\{(x_i, y_i)_{i=1}^N \mid x_i \in \mathbb{R}^n,\ y_i \in \mathbb{R}^m\}$
Initialization: set thresholds $\tau$ and $\eta$
S1. Generate the input weights $w$ and the bias vector $b$;
S2. Calculate the output matrix of the hidden layer $H$ based on Equation (3);
S3. Calculate the orthogonal $\beta$ of $\mathrm{span}(H^TY)$ and its orthogonal complement $B_{L+1}$, then $r_0 = \|H\beta - Y\|$;
S4. While $j = 1, 2, \ldots, m$
S5.    Relax the $j$-th columns $\rho_j$ and $\hat{y}_j$ from the matrices $\beta$ and $Y$, respectively, and fix the rest;
S6.    Set $P = [\rho_j, B_{L+1}]$, then solve $x = \arg\min_{\|x\| = 1} \|HPx - \hat{y}_j\|^2$;
S7.    Set $A = HP$ and $y = \hat{y}_j$, then $W = A^Ty$. By SVD, $W = U\,\mathrm{diag}(\Sigma_k,\ O_{s-k})V^T$, so as to obtain $U$ and $V$;
S8.    Based on Equation (21), $x = U[I_k;\ G]V^T$;
S9.    Calculate the vector $u = (x + se_1)/\|x + se_1\|$, $s = \mathrm{sign}(x_1)$;
S10.   Partition $P(I - 2uu^T) = [-s\rho_j^*,\ B_{L+1}^*]$ so as to obtain $\rho_j^*$ and $B_{L+1}^*$, then replace $\rho_j$ of $\beta$ by $\rho_j^*$ to obtain $\beta^*$, and set $B_{L+1} = B_{L+1}^*$ and $\beta = \beta^*$;
End While
S11. Calculate $r_1 = \|H\beta^* - Y\|$; if $(r_0 - r_1) < \tau r_1$ or $r_1 \le \eta$, terminate; otherwise, set $r_0 = r_1$, take $B_{L+1}$ as the new orthogonal complement of $\beta^*$, and go to step S4.
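Putting the pieces together, the following end-to-end sketch of Algorithm 1 (Python/NumPy rather than the authors' Matlab; the bias column appended to H, the way the complement is initialized, and the stopping defaults are assumptions, not the paper's exact implementation) illustrates the column-by-column update:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noelm_train(X, Y, L, tau=1e-4, eta=1e-3, max_sweeps=50, seed=0):
    """Sketch of Algorithm 1. X: N x n inputs, Y: N x m targets, L hidden neurons."""
    rng = np.random.default_rng(seed)
    N, n = X.shape
    m = Y.shape[1]
    W_in = rng.uniform(-1.0, 1.0, size=(n, L))               # S1: random input weights and biases
    b = rng.uniform(-1.0, 1.0, size=L)
    H = np.hstack([sigmoid(X @ W_in + b), np.ones((N, 1))])  # S2: N x (L+1); the bias column is an assumption
    beta, _ = np.linalg.qr(H.T @ Y)                          # S3: orthonormal basis of span(H^T Y), rank m assumed
    U_full, _, _ = np.linalg.svd(beta, full_matrices=True)
    B = U_full[:, m:]                                        # orthogonal complement of beta
    r0 = np.linalg.norm(H @ beta - Y)
    for _ in range(max_sweeps):
        for j in range(m):                                   # S4: sweep the columns of beta
            P = np.hstack([beta[:, [j]], B])                 # S6: P = [rho_j, B]
            W = (H @ P).T @ Y[:, j]                          # S7: W = A^T y with A = H P (assumed nonzero)
            x = W / np.linalg.norm(W)                        # S8: SVD solution of Eq. (21), k = 1
            s = np.sign(x[0]) if x[0] != 0 else 1.0
            u = x.copy(); u[0] += s; u /= np.linalg.norm(u)  # S9: Householder vector
            M = P - 2.0 * np.outer(P @ u, u)                 # S10: P (I - 2 u u^T) = [-s rho_j^*, B^*]
            beta[:, j] = -s * M[:, 0]
            B = M[:, 1:]
        r1 = np.linalg.norm(H @ beta - Y)
        if (r0 - r1) < tau * r1 or r1 <= eta:                # S11: stopping test (one reading of the condition)
            break
        r0 = r1
    return W_in, b, beta
```

Each column update touches only the small matrices $P$ and $W$, which is the source of the complexity reduction discussed in Section 4.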

4. Convergence and Complexity Analysis

Considering the convergence of the algorithm, let $\{\beta^{i,j}\}$ be the sequence of $\beta^*$ generated during the iterations, which converges to $\bar{\beta}$; then its orthogonal complement $B^{i,j}$ also converges to $\bar{B}$, where $i$ is the iteration number and $j$ indexes the operation of relaxing the $j$-th column from the original matrix. It follows that $P^{i,j} = [\rho_j^{i,j},\ B^{i,j}]$ converges to $\bar{P} = [\bar{\rho}_j,\ \bar{B}]$.
Based on the equations above, it is known that $\rho_j^*$ is the optimal solution of Equation (10). If $\|H\bar{\rho}_j - \hat{y}_j\| - \|H\rho_j^* - \hat{y}_j\| = \eta$, then for $i$ large enough, $P^{i,j}$ and $\bar{P}$ satisfy
$$\|P^{i,j} - \bar{P}\| = \|[\rho_j^{i,j},\ B^{i,j}] - [\bar{\rho}_j,\ \bar{B}]\| < \frac{\eta}{4\|H\|_2}. \tag{22}$$
Set $\rho_j^* = \bar{P}x$ with $\|x\| = 1$, and $\hat{\rho}_j = P^{i,j}x$; then, based on Equation (22),
$$\|\hat{\rho}_j - \rho_j^*\| = \|P^{i,j}x - \bar{P}x\| \le \|P^{i,j} - \bar{P}\|\,\|x\| < \frac{\eta}{4\|H\|_2}. \tag{23}$$
Based on Equations (10) and (22), it also holds for the first columns of $P^{i,j}$ and $\bar{P}$ that
$$\|\rho_j^{i,j} - \bar{\rho}_j\| \le \|P^{i,j} - \bar{P}\| < \frac{\eta}{4\|H\|_2}. \tag{24}$$
Based on Equations (23) and (24), it follows that
$$\begin{aligned} f(\beta^{i,j}) - f(\beta^{i+1,j}) &= \|H\beta^{i,j} - Y\| - \|H\beta^{i+1,j} - Y\| \\ &= \|H\rho_j^{i,j} - \hat{y}_j\| - \min_{\rho \perp \tilde{\beta}_j^{i+1,j},\ \|\rho\| = 1} \|H\rho - \hat{y}_j\| \\ &\ge \|H\rho_j^{i,j} - \hat{y}_j\| - \|H\hat{\rho}_j - \hat{y}_j\| \\ &= \|(H\bar{\rho}_j - \hat{y}_j) + H(\rho_j^{i,j} - \bar{\rho}_j)\| - \|(H\rho_j^* - \hat{y}_j) + H(\hat{\rho}_j - \rho_j^*)\| \\ &\ge \|H\bar{\rho}_j - \hat{y}_j\| - \|H(\rho_j^{i,j} - \bar{\rho}_j)\| - \|H\rho_j^* - \hat{y}_j\| - \|H(\hat{\rho}_j - \rho_j^*)\| \\ &= \big(\|H\bar{\rho}_j - \hat{y}_j\| - \|H\rho_j^* - \hat{y}_j\|\big) - \|H(\rho_j^{i,j} - \bar{\rho}_j)\| - \|H(\hat{\rho}_j - \rho_j^*)\| \\ &\ge \eta - \frac{\eta}{4} - \frac{\eta}{4} \ge \frac{\eta}{2}. \end{aligned}$$
So $f(\beta^{i,j}) - f(\beta^{i+1,j}) \ge \frac{\eta}{2}$; based on the derivation of the inequality above, it can be deduced that $f(\beta^{1,j}) > f(\beta^{2,j}) > \cdots > f(\beta^{n,j})$. By the same method and analysis, it can also be obtained that $f(\beta^{i,1}) > f(\beta^{i,2}) > \cdots > f(\beta^{i,m})$. So $f(\beta^{1,1}) > f(\beta^{1,2}) > \cdots > f(\beta^{2,1}) > \cdots > f(\beta^{3,1}) > \cdots$; that is, the sequence $\{f(\beta^{i,j})\}$ is monotonically decreasing, and when $i \to \infty$, $f(\beta^{i,j}) - f(\beta^{i+1,j}) \to 0$. In a word, the novel algorithm monotonically decreases the objective shown in Equation (8).
It is known that the complexity of the ELM derives from the calculation of the output weights $\beta$; or rather, it is mainly spent on calculating the inverse of the matrix $H^TH + CI$. In most cases, the number of hidden layer neurons $L$ is much smaller than the training sample size $N$ ($L \ll N$), so the complexity is less than that of the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM), which need to calculate the inverse of an $N \times N$ matrix [16]. The complexity of ELM and OELM is $O(L^3)$ and $O(t(NL^2 + L^3))$, respectively. As for the complexity of the novel algorithm proposed in this paper, its main calculation comes from the loop. In each iteration, it needs to find the optimal solution of one column relaxed from $\beta$, and during this it needs to perform an SVD on the $m \times 1$ matrix $A^Ty$, whose complexity is $O(m^2)$; the complexity of updating $\beta$ once is then $O(m)$. So the complexity of the proposed algorithm is $O(tm^3)$, where $t$ is the number of updates of $\beta$. In real applications, regardless of classification or regression, the output dimension is much smaller than the number of hidden layer neurons and the training sample size.
As we know, $(H\beta)^T = Y^T$, so $\beta^Th(x_i)^T = y_i$. Consider the Euclidean distance between any two data points $y_i$ and $y_j$: because of the orthogonal constraint $\beta^T\beta = I$, we have $\|y_i - y_j\| = \|h(x_i) - h(x_j)\|$. It is known that $h(x_i)$ is a point in the ELM feature space, $\|h(x_i) - h(x_j)\|$ is the distance in the ELM space, and $\|y_i - y_j\|$ is the distance in the output subspace. From this analysis, the novel ELM with orthogonal constraints is superior in maintaining the metric structure from first to last.

5. Performance Evaluation

To test the performance of the novel algorithm proposed in this paper, it is compared with other learning algorithms on classification problems (EMG for Gestures, Avila and Ultrasonic Flowmeter) and regression problems (Auto price, Breast cancer, Boston housing, etc.) from the University of California Irvine (UCI) machine learning repository [22], as shown in Table 1. These learning algorithms include ELM [1], OELM [16] and I-ELM [23,24]; their activation function is the sigmoid function, and the number of hidden layer neurons is set to three times the input dimension. For I-ELM, the initial number of hidden layer neurons is set to zero. In the experiments, the key parameters such as the input weights, the biases, etc., are generated randomly from $[-1, 1]$; all samples are normalized into $[-1, 1]$, and the outputs of the regression problems are normalized into $[0, 1]$ [25]. All simulations are run in the Matlab R2016a environment.
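For reproducibility, the setup just described might look as follows in Python (a sketch; the use of training-set statistics for scaling and the function name are assumptions not stated in the paper):

```python
import numpy as np

def prepare_data(X_train, X_test, y_train, y_test, regression=True):
    """Scale inputs to [-1, 1] and regression targets to [0, 1]; set L = 3 * input dimension."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)                 # guard against constant features
    X_tr = 2.0 * (X_train - lo) / span - 1.0
    X_te = 2.0 * (X_test - lo) / span - 1.0
    if regression:
        y_lo, y_hi = y_train.min(), y_train.max()
        y_tr = (y_train - y_lo) / (y_hi - y_lo)
        y_te = (y_test - y_lo) / (y_hi - y_lo)
    else:
        y_tr, y_te = y_train, y_test
    L = 3 * X_train.shape[1]                               # hidden neurons: three times the input dimension
    return X_tr, X_te, y_tr, y_te, L
```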
In the classification problems, ELM and OELM are selected for comparison with NOELM. The experimental results are shown in Figure 2 and Figure 3. Figure 2 shows the convergence property of NOELM: at first the convergence rate is high and the objective value falls rapidly; when it reaches about 0.8, it falls slowly until it becomes stable. During the whole process, the number of iterations does not vary significantly; the maximum is no more than 20 and the minimum is only about 5, so the novel algorithm is reasonably efficient. Figure 3 shows the comparison of training time and classification rate. Owing to its low complexity, the traditional ELM has the shortest training time. The complexity of NOELM is less than that of OELM, so its training time is shorter than OELM's; it is longer than that of ELM because of the iterations, but the difference is not larger than 0.05. Although NOELM is not the best in terms of training time, its classification rate is better than that of the other two, reaching up to 0.9.
In the regression problems, ELM, OELM and I_ELM are selected for comparison with NOELM; the experimental results are shown in Table 2. As mentioned above, the number of hidden layer neurons is determined by the input dimension, so the hidden layer sizes of ELM, OELM and NOELM are fixed, while I_ELM dynamically increases its hidden layer neurons. Analyzing Table 2, compared with I_ELM the network complexity of NOELM is a little lower and its structure is more compact; NOELM is a little worse than I_ELM on some datasets, but the difference is not large and is fully acceptable. As for the training and testing accuracy in Table 3 and Table 4, compared with ELM and OELM, the performance of NOELM is better and it has better stability. Because of the characteristics of I_ELM, which constructs a more compact network, I_ELM is slightly superior in training and testing accuracy on some datasets, and this is just the weak point of NOELM and other related algorithms. However, by introducing the orthogonal constraints and improving the algorithm, NOELM greatly narrows this gap, and its performance is also acceptable.

6. Conclusions

In this paper, referring to the idea of OELM, the orthogonal constraint is introduced into the ELM and a novel orthogonal ELM (NOELM) is proposed, which is theoretically a special supervised learning algorithm. In contrast with OELM, the main characteristic and contribution is to transform the complex unbalanced orthogonal Procrustes problem into a simple least squares problem with an orthogonal constraint on a single vector, and to optimize the single column vectors of the output weight matrix so as to obtain the optimal solution of the whole matrix. Compared with ELM and OELM, NOELM achieves a better neural network, with a fast convergence rate and higher training and testing accuracy. Although NOELM is a little weaker than I_ELM in some aspects, the gap is very narrow and the result is still acceptable.

Author Contributions

L.C. proposed the original idea of the research and wrote some parts of the research. H.Z. carried out the experiments and analyzed the experiments result. H.L. gave related guidance.

Funding

This work was partially supported by the Fundamental Research Funds for the Central Universities (Nos. 3132019205 and 3132019354), by the Liaoning Provincial Natural Science Foundation of China (Grant No. 20170520196), and by the Scientific Research Funds of the Liaoning Provincial Education Department (Grant Nos. JYT2019LQ01 and JYT2019LQ02).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  2. Deo, R.C.; Şahin, M. Application of the Artificial Neural Network model for prediction of monthly Standardized Precipitation and Evapotranspiration Index using hydrometeorological parameters and climate indices in eastern Australia. Atmos. Res. 2015, 161–162, 65–81. [Google Scholar] [CrossRef]
  3. Acharya, N.; Singh, A.; Mohanty, U.C.; Nair, A.; Chattopadhyay, S. Performance of general circulation models and their ensembles for the prediction of drought indices over India during summer monsoon. Nat. Hazards 2013, 66, 851–871. [Google Scholar] [CrossRef]
  4. Deo, R.C.; Tiwari, M.K.; Adamowski, J.F.; Quilty, J.M. Forecasting effective drought index using a wavelet extreme learning machine (W-ELM) model. Stoch. Environ. Res. Risk Assess. 2017, 31, 1211–1240. [Google Scholar] [CrossRef]
  5. Huang, G.-B.; Chen, L. Convex incremental extreme learning machine. Neurocomputing 2007, 70, 3056–3062. [Google Scholar] [CrossRef]
  6. Zhou, Z.; Chen, J.; Zhu, Z. Regularization incremental extreme learning machine with random reduced kernel for regression. Neurocomputing 2018, 321, 72–81. [Google Scholar] [CrossRef]
  7. Wang, D.; Wang, P.; Shi, J. A fast and efficient conformal regressor with regularized extreme learning machine. Neurocomputing 2018, 304, 1–11. [Google Scholar] [CrossRef]
  8. Yin, Y.; Zhao, Y.; Zhang, B.; Li, C.; Guo, S. Enhancing ELM by Markov Boundary based feature selection. Neurocomputing 2017, 261, 57–69. [Google Scholar] [CrossRef]
  9. Ding, X.-J.; Lan, Y.; Zhang, Z.-F.; Xu, X. Optimization extreme learning machine with ν regularization. Neurocomputing 2017, 261, 11–19. [Google Scholar]
  10. Yildirim, H.; Özkale, M.R. The performance of ELM based ridge regression via the regularization parameters. Expert Syst. Appl. 2019, 134, 225–233. [Google Scholar] [CrossRef]
  11. Inaba, F.K.; Salles, E.O.T.; Perron, S.; Caporossi, G. DGR-ELM–Distributed Generalized Regularized ELM for classification. Neurocomputing 2018, 275, 1522–1530. [Google Scholar] [CrossRef]
  12. Miche, Y.; Akusok, A.; Veganzones, D.; Björk, K.-M.; Séverin, E.; du Jardin, P.; Termenon, M.; Lendasse, A. SOM-ELM—Self-Organized Clustering using ELM. Neurocomputing 2015, 165, 238–254. [Google Scholar] [CrossRef]
  13. Ming, Y.; Zhu, E.; Wang, M.; Ye, Y.; Liu, X.; Yin, J. DMP-ELMs: Data and model parallel extreme learning machines for large-scale learning tasks. Neurocomputing 2018, 320, 85–97. [Google Scholar] [CrossRef]
  14. Krishnan, G.S.; S., S.K. A novel GA-ELM model for patient-specific mortality prediction over large-scale lab event data. Appl. Soft Comput. 2019, 80, 525–533. [Google Scholar] [CrossRef]
  15. Nayak, D.R.; Zhang, Y.; Das, D.S.; Panda, S. MJaya-ELM: A Jaya algorithm with mutation and extreme learning machine based approach for sensorineural hearing loss detection. Appl. Soft Comput. 2019, 83, 105626. [Google Scholar] [CrossRef]
  16. Peng, Y.; Kong, W.; Yang, B. Orthogonal extreme learning machine for image classification. Neurocomputing 2017, 266, 458–464. [Google Scholar] [CrossRef]
  17. Peng, Y.; Lu, B.-L. Discriminative manifold extreme learning machine and applications to image and EEG signal classification. Neurocomputing 2016, 174, 265–277. [Google Scholar] [CrossRef]
  18. Peng, Y.; Wang, S.; Long, X.; Lu, B.-L. Discriminative graph regularized extreme learning machine and its application to face recognition. Neurocomputing 2015, 149, 340–353. [Google Scholar] [CrossRef]
  19. Zhao, H.; Wang, Z.; Nie, F. Orthogonal least squares regression for feature extraction. Neurocomputing 2016, 216, 200–207. [Google Scholar] [CrossRef]
  20. Nie, F.; Xiang, S.; Liu, Y.; Hou, C.; Zhang, C. Orthogonal vs. uncorrelated least squares discriminant analysis for feature extraction. Pattern Recognit. Lett. 2012, 33, 485–491. [Google Scholar] [CrossRef]
  21. Zhang, Z.; Du, K. Successive projection method for solving the unbalanced Procrustes problem. Sci. China Ser. A 2006, 49, 971–986. [Google Scholar] [CrossRef]
  22. Bache, K.; Lichman, M. UCI Machine Learning Repository. University of California, School of Information and Computer Sciences: Irvine, CA, USA, 2013. Available online: http://archive.ics.uci.edu/ml (accessed on 11 October 2019).
  23. Xu, Z.; Yao, M.; Wu, Z.; Dai, W. Incremental Regularized Extreme Learning Machine and It’s Enhancement. Neurocomputing 2015, 174, 134–142. [Google Scholar] [CrossRef]
  24. Huang, G.-B.; Chen, L.; Siew, C.-K. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Netw. 2006, 17, 879–892. [Google Scholar] [CrossRef] [PubMed]
  25. Ying, L. Orthogonal incremental extreme learning machine for regression and multiclass classification. Neural Comput. Appl. 2016, 27, 111–120. [Google Scholar] [CrossRef]
Figure 1. The Architecture of ELM Model.
Figure 2. Convergence property of novel orthogonal ELM (NOELM).
Figure 3. Comparison of training time and classification rate of ELM, OELM and NOELM.
Table 1. The specification of the datasets.

| Datasets | Training | Testing | Attributes | Class |
|---|---|---|---|---|
| Avila | 5000 | 2000 | 10 | 12 |
| Electro-Myo-Graphic data (EMG) for Gestures | 10000 | 2000 | 6 | 8 |
| Ultrasonic Flowmeter | 112 | 69 | 33 | 4 |
| Stock | 450 | 500 | 9 | - |
| Abalone | 2000 | 1177 | 8 | - |
| Auto price | 80 | 79 | 14 | - |
| Auto-Miles Per Gallon (MPG) | 320 | 78 | 8 | - |
| Breast cancer | 100 | 94 | 32 | - |
| Boston housing | 250 | 256 | 13 | - |
| California housing | 8000 | 12640 | 8 | - |
| Census house (8L) | 10000 | 12784 | 8 | - |
Table 2. Comparison of the network complexity and training time.

| Datasets | ELM Nodes | ELM Time (s) | OELM Nodes | OELM Time (s) | I_ELM Nodes | I_ELM Time (s) | NOELM Nodes | NOELM Time (s) |
|---|---|---|---|---|---|---|---|---|
| Auto price | 42 | 0.0325 | 42 | 0.0677 | 50 | 0.0374 | 42 | 0.0241 |
| Breast cancer | 96 | 1.0217 | 96 | 2.1285 | 66 | 0.2324 | 96 | 0.7568 |
| Boston housing | 39 | 0.0453 | 39 | 0.0944 | 100 | 0.5672 | 39 | 0.0336 |
| Auto-MPG | 24 | 0.8835 | 24 | 1.8406 | 76 | 0.8173 | 24 | 0.6544 |
| Stock | 27 | 0.6392 | 27 | 1.3317 | 97 | 0.8039 | 27 | 0.4735 |
| Abalone | 24 | 0.4836 | 24 | 1.0075 | 40 | 0.3237 | 24 | 0.3582 |
| California housing | 24 | 0.4547 | 24 | 0.9473 | 69 | 6.0856 | 24 | 0.3368 |
| Census house (8L) | 24 | 0.7667 | 24 | 1.5973 | 57 | 5.2479 | 24 | 0.5679 |
Table 3. Comparison of the average training and testing Root Mean Square Error.

| Datasets | ELM Train | ELM Test | OELM Train | OELM Test | I_ELM Train | I_ELM Test | NOELM Train | NOELM Test |
|---|---|---|---|---|---|---|---|---|
| Auto price | 0.1283 | 0.1297 | 0.1141 | 0.1212 | 0.0997 | 0.1089 | 0.1056 | 0.1161 |
| Breast cancer | 0.13182 | 0.1499 | 0.1163 | 0.1340 | 0.1132 | 0.1219 | 0.1070 | 0.1245 |
| Boston housing | 0.1695 | 0.1708 | 0.14122 | 0.1502 | 0.1403 | 0.1353 | 0.1243 | 0.1379 |
| Auto-MPG | 0.1513 | 0.1584 | 0.1291 | 0.1394 | 0.1321 | 0.1363 | 0.1159 | 0.1279 |
| Stock | 0.1380 | 0.1423 | 0.1195 | 0.1245 | 0.1197 | 0.1227 | 0.1084 | 0.1138 |
| Abalone | 0.1327 | 0.1339 | 0.1171 | 0.1179 | 0.1109 | 0.1125 | 0.1077 | 0.1082 |
| California housing | 0.2555 | 0.2574 | 0.2265 | 0.2280 | 0.1993 | 0.2035 | 0.2092 | 0.2103 |
| Census house (8L) | 0.1439 | 0.1489 | 0.1254 | 0.1286 | 0.1017 | 0.1023 | 0.1143 | 0.1164 |
Table 4. Comparison of the standard deviation of training and testing Root Mean Square Error.

| Datasets | ELM Train | ELM Test | OELM Train | OELM Test | I_ELM Train | I_ELM Test | NOELM Train | NOELM Test |
|---|---|---|---|---|---|---|---|---|
| Auto price | 0.0033 | 0.0234 | 0.0024 | 0.0215 | 0.0031 | 0.0196 | 0.0018 | 0.0204 |
| Breast cancer | 0.0099 | 0.0209 | 0.0088 | 0.0188 | 0.0085 | 0.0167 | 0.0082 | 0.0176 |
| Boston housing | 0.0130 | 0.0183 | 0.0085 | 0.0148 | 0.0126 | 0.0135 | 0.0059 | 0.0126 |
| Auto-MPG | 0.0142 | 0.0179 | 0.0101 | 0.0141 | 0.0134 | 0.0162 | 0.0077 | 0.0118 |
| Stock | 0.0161 | 0.0172 | 0.0125 | 0.0131 | 0.0147 | 0.0158 | 0.0103 | 0.0106 |
| Abalone | 0.0058 | 0.0066 | 0.0053 | 0.0059 | 0.0049 | 0.0056 | 0.0050 | 0.0054 |
| California housing | 0.0047 | 0.0061 | 0.0068 | 0.0082 | 0.0027 | 0.0038 | 0.0080 | 0.0094 |
| Census house (8L) | 0.0034 | 0.0038 | 0.0026 | 0.0043 | 0.0006 | 0.0028 | 0.0033 | 0.0046 |
