Open Access Article

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

Department of Mathematics, Texas A&M University, College Station, TX 77843, USA
Mathematics 2019, 7(10), 992; https://doi.org/10.3390/math7100992
Received: 29 September 2019 / Revised: 15 October 2019 / Accepted: 16 October 2019 / Published: 18 October 2019
(This article belongs to the Special Issue Computational Mathematics, Algorithms, and Data Processing)
This article concerns the expressive power of depth in neural nets with ReLU activations and a bounded width. We are particularly interested in the following questions: What is the minimal width $w_{\min}(d)$ so that ReLU nets of width $w_{\min}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to represent convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3$.
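To make the key observation concrete, here is a minimal sketch (not the paper's exact construction): a convex function is the pointwise maximum of its tangent affine functions, and since max(a, b) = a + ReLU(b - a) uses only ReLU operations, folding in one tangent line per layer while carrying the input forward yields a deep, narrow ReLU approximant. This mirrors why width $d+1$ suffices for convex functions. The target function and tangency points below are illustrative choices for $d = 1$.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def convex_relu_approx(x, slopes, intercepts):
    """Evaluate max_i (slopes[i]*x + intercepts[i]) using only ReLU steps,
    mimicking a deep, narrow ReLU net: one affine piece is folded in per layer."""
    out = slopes[0] * x + intercepts[0]
    for m, b in zip(slopes[1:], intercepts[1:]):
        out = out + relu(m * x + b - out)  # equals max(out, m*x + b)
    return out

# Target: the convex function f(x) = x^2 on [0, 1], approximated from below
# by the maximum of tangent lines at hypothetically chosen tangency points.
xs = np.linspace(0.0, 1.0, 201)
tangent_pts = np.linspace(0.0, 1.0, 8)
slopes = 2.0 * tangent_pts        # f'(t) = 2t
intercepts = -tangent_pts**2      # tangent at t: y = 2t*x - t^2
approx = convex_relu_approx(xs, slopes, intercepts)
print("max error:", np.max(np.abs(xs**2 - approx)))  # shrinks as tangents are added

In this sketch each loop iteration plays the role of one hidden layer, and the only quantities carried between layers are the input $x$ and the running maximum, which is the informal reason a width of about $d+1$ can suffice at the cost of depth.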
Keywords: Deep Neural Nets; ReLU Networks; Approximation Theory
MDPI and ACS Style

Hanin, B. Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations. Mathematics 2019, 7, 992.
