Open Access Article
Entropy 2015, 17(2), 818-840; doi:10.3390/e17020818

Distributed Extreme Learning Machine for Nonlinear Learning over Network

Department of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
*
Author to whom correspondence should be addressed.
Received: 26 December 2014 / Revised: 20 January 2015 / Accepted: 6 February 2015 / Published: 12 February 2015
(This article belongs to the Special Issue Recent Advances in Chaos Theory and Complex Networks)

Abstract

Distributed data collection and analysis over a network are ubiquitous, especially over a wireless sensor network (WSN). To our knowledge, the data model used in most distributed algorithms is linear. In real applications, however, the linearity of systems is not always guaranteed. In nonlinear cases, the single hidden layer feedforward neural network (SLFN) with radial basis function (RBF) hidden neurons can approximate any continuous function and may therefore serve as the nonlinear learning system. However, because of the communication cost, training the neural network directly with distributed versions of the conventional algorithms is usually prohibitive. Fortunately, based on the theorems provided in the extreme learning machine (ELM) literature, we only need to compute the output weights of the SLFN. Computing the output weights is itself a linear learning problem, although the input-output mapping of the overall SLFN is still nonlinear. Using a distributed algorithm to cooperatively compute the output weights of the SLFN, we obtain a distributed extreme learning machine (dELM) for nonlinear learning in this paper. The dELM is applied to regression and classification problems to demonstrate its effectiveness and advantages.
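Since the abstract reduces ELM training to a linear solve for the SLFN output weights, a minimal centralized sketch may help fix ideas before the distributed (diffusion) version. The function names, the Gaussian RBF form of the hidden neurons, and the pseudoinverse solve below are illustrative assumptions, not the authors' code.

```python
# Minimal ELM sketch: random RBF hidden layer, least-squares output weights.
# All identifiers (rbf_hidden_layer, elm_train, n_hidden, gamma) are hypothetical.
import numpy as np

def rbf_hidden_layer(X, centers, gamma):
    """Gaussian RBF activations of each sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def elm_train(X, T, n_hidden=50, gamma=1.0, rng=np.random.default_rng(0)):
    """Fix random RBF centers, then solve the linear problem for output weights."""
    centers = rng.uniform(X.min(), X.max(), size=(n_hidden, X.shape[1]))
    H = rbf_hidden_layer(X, centers, gamma)   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T              # least-squares output weights
    return centers, beta

def elm_predict(X, centers, beta, gamma=1.0):
    return rbf_hidden_layer(X, centers, gamma) @ beta

# Toy regression example: learn sin(x) from noisy samples.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).standard_normal(X.shape)
centers, beta = elm_train(X, T)
print(np.mean((elm_predict(X, centers, beta) - T) ** 2))  # training MSE
```

In the distributed setting described in the paper, the same linear output-weight problem would be solved cooperatively over the network (e.g., with diffusion LMS or RLS, as the keywords suggest) rather than by the single pseudoinverse above.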
Keywords: distributed learning; extreme learning machine; nonlinear learning; diffusion; least-mean square (LMS); recursive least squares (RLS); regression; classification
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Huang, S.; Li, C. Distributed Extreme Learning Machine for Nonlinear Learning over Network. Entropy 2015, 17, 818-840.
