Smooth Function Approximation by Deep Neural Networks with General Activation Functions

Department of Statistics, Seoul National University, Seoul 08826, Korea
* Author to whom correspondence should be addressed.
Entropy 2019, 21(7), 627; https://doi.org/10.3390/e21070627
Received: 4 June 2019 / Revised: 21 June 2019 / Accepted: 25 June 2019 / Published: 26 June 2019
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)

Abstract

There has been growing interest in the expressivity of deep neural networks. However, most existing work on this topic focuses on specific activation functions such as the ReLU or the sigmoid. In this paper, we investigate the approximation ability of deep neural networks with a broad class of activation functions that includes most of the frequently used ones. We derive the depth, width, and sparsity of a deep neural network required to approximate any Hölder smooth function up to a given approximation error for this large class of activation functions. Based on this approximation error analysis, we derive the minimax optimality of deep neural network estimators with general activation functions in both regression and classification problems.
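For reference, the Hölder smoothness condition mentioned above is the standard one; the paper's exact radius and norm conventions may differ slightly, but the usual definition of the Hölder class of smoothness α > 0 and radius R on the unit cube is

$$
\mathcal{H}^{\alpha,R}\big([0,1]^d\big) = \Big\{ f:[0,1]^d \to \mathbb{R} \;:\; \|f\|_{\mathcal{H}^{\alpha}([0,1]^d)} \le R \Big\},
$$

where, writing $\lfloor \alpha \rfloor$ for the largest integer strictly smaller than or equal to the integer part of α,

$$
\|f\|_{\mathcal{H}^{\alpha}([0,1]^d)}
= \sum_{|\beta| \le \lfloor \alpha \rfloor} \big\|\partial^{\beta} f\big\|_{\infty}
+ \sum_{|\beta| = \lfloor \alpha \rfloor} \sup_{x \ne y}
\frac{\big|\partial^{\beta} f(x) - \partial^{\beta} f(y)\big|}{\|x-y\|_{\infty}^{\,\alpha - \lfloor \alpha \rfloor}}.
$$

The approximation results summarized in the abstract bound the network depth, width, and number of nonzero parameters needed to approximate any member of this class to a prescribed error.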
Keywords: function approximation; deep neural networks; activation functions; Hölder continuity; convergence rates
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Ohn, I.; Kim, Y. Smooth Function Approximation by Deep Neural Networks with General Activation Functions. Entropy 2019, 21, 627.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
