Abstract
This article concerns the expressive power of depth in neural nets with ReLU activations and a bounded width. We are particularly interested in the following questions: What is the minimal width $w_{\min}(d)$ so that ReLU nets of width $w_{\min}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to representing convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3$.
1. Introduction
Over the past several years, neural nets, particularly deep nets, have become the state-of-the-art in a remarkable number of machine learning problems, from mastering Go to image recognition/segmentation and machine translation (see the review article [] for more background). Despite all their practical successes, a robust theory of why they work so well is in its infancy. Much of the work to date has focused on the problem of explaining and quantifying the expressivity (the ability to approximate a rich class of functions) of deep neural nets [,,,,,,,,,]. Expressivity can be seen as an effect of both depth and width. It has been known since at least the work of Cybenko [] and Hornik-Stinchcombe-White [] that if no constraint is placed on the width of a hidden layer, then a single hidden layer is enough to approximate essentially any function. The purpose of this article, in contrast, is to investigate the “effect of depth without the aid of width.” More precisely, for each $d \ge 1$, we would like to estimate:
$$w_{\min}(d) := \min\big\{ w \in \mathbb{N} \ : \ \text{for every } f \in C([0,1]^d) \text{ and every } \varepsilon > 0, \text{ there exists a ReLU net } \mathcal{N} \text{ with hidden layer width } w \text{ such that } \sup_{x \in [0,1]^d} |f(x) - f_{\mathcal{N}}(x)| < \varepsilon \big\}.$$
Here, $\mathbb{N} = \{1, 2, \ldots\}$ denotes the natural numbers, and ReLU is the so-called “rectified linear unit,” which is the most popular non-linearity used in practice (see (4) for the exact definition). In Theorem 1, we prove that $w_{\min}(d) \le d + 2$. This raises two questions:
- Q1. Is the estimate $w_{\min}(d) \le d + 2$ in the previous line sharp?
- Q2. How efficiently can ReLU nets of a given width $w$ approximate a given continuous function of $d$ variables?
A priori, it is not clear how to estimate $w_{\min}(d)$ and whether it is even finite. One of the contributions of this article is to provide reasonable bounds on $w_{\min}(d)$ (see Theorem 1). Moreover, we also provide quantitative estimates on the corresponding rate of approximation. On the subject of Q1, we will prove in forthcoming work with M. Sellke [] that, in fact, $w_{\min}(d) = d + 1$. When $d = 1$, the lower bound $w_{\min}(1) \ge 2$ is simple to check, and the upper bound follows, for example, from Theorem 3.1 in []. The main results in this article, however, concern Q1 and Q2 for convex functions. For instance, we prove in Theorem 1 that:
$$w_{\min}^{\mathrm{conv}}(d) \le d + 1,$$
where:
$$w_{\min}^{\mathrm{conv}}(d) := \min\big\{ w \in \mathbb{N} \ : \ \text{every convex } f \in C([0,1]^d) \text{ can be approximated arbitrarily well in the sup norm by ReLU nets with hidden layer width } w \big\}.$$
This illustrates a central point of the present paper: the convexity of the ReLU activation makes ReLU nets well-adapted to representing convex functions on $[0,1]^d$.
Theorem 1 also addresses Q2 by providing quantitative estimates on the depth of a ReLU net with width $d + 1$ that approximates a given convex function. We provide similar depth estimates for arbitrary continuous functions on $[0,1]^d$, but this time for nets of width $d + 3$. Several of our depth estimates are based on the work of Balázs-György-Szepesvári [] on max-affine estimators in convex regression.
In order to prove Theorem 1, we must understand which functions can be exactly computed by a ReLU net. Such functions are always piecewise affine, and we prove in Theorem 2 the converse: every piecewise affine function on $[0,1]^d$ can be exactly represented by a ReLU net with hidden layer width at most $d + 3$. Moreover, we prove that the depth of the network that computes such a function is bounded by the number of affine pieces it contains. This extends the results of Arora-Basu-Mianjy-Mukherjee (e.g., Theorem 2.1 and Corollary 2.2 in []).
Convex functions again play a special role. We show that every convex function on $[0,1]^d$ that is piecewise affine with $N$ pieces can be represented exactly by a ReLU net with width $d + 1$ and depth $N$.
2. Statement of Results
To state our results precisely, we set notation and recall several definitions. For $d \ge 1$ and a continuous function $f : [0,1]^d \to \mathbb{R}$, write:
$$\|f\|_{C^0} := \sup_{x \in [0,1]^d} |f(x)|.$$
Further, denote by:
$$\omega_f(\varepsilon) := \sup\big\{ |f(x) - f(y)| \ : \ x, y \in [0,1]^d, \ \|x - y\| \le \varepsilon \big\}$$
the modulus of continuity of $f$, whose value at $\varepsilon$ is the maximum that $f$ can change when its argument moves by at most $\varepsilon$. Note that, by the definition of a continuous function, $\omega_f(\varepsilon) \to 0$ as $\varepsilon \to 0^+$. Next, given $d, w, n \ge 1$, we define a feed-forward neural net with ReLU activations, input dimension $d$, hidden layer width $w$, depth $n$, and output dimension 1 to be any member of the finite-dimensional family of functions:
$$\mathrm{ReLU} \circ A_n \circ \mathrm{ReLU} \circ A_{n-1} \circ \cdots \circ \mathrm{ReLU} \circ A_1 \qquad (4)$$
that map $[0,1]^d$ to $\mathbb{R}_+ = [0, \infty)$. In (4),
$$A_1 : \mathbb{R}^d \to \mathbb{R}^w, \qquad A_j : \mathbb{R}^w \to \mathbb{R}^w \ \ (2 \le j \le n - 1), \qquad A_n : \mathbb{R}^w \to \mathbb{R}$$
are affine transformations, and for every $m \ge 1$:
$$\mathrm{ReLU}(t_1, \ldots, t_m) := \big( \max\{0, t_1\}, \ldots, \max\{0, t_m\} \big).$$
We often denote such a net by $\mathcal{N}$ and write:
$$f_{\mathcal{N}} : [0,1]^d \to \mathbb{R}_+$$
for the function it computes. Our first result contrasts both the width and depth required to approximate continuous, convex, and smooth functions by ReLU nets.
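To make the family (4) concrete, here is a minimal NumPy sketch (our own illustration; the helper names are not from the paper) that evaluates a ReLU net of prescribed input dimension, hidden layer width, and depth as the composition $\mathrm{ReLU} \circ A_n \circ \cdots \circ \mathrm{ReLU} \circ A_1$.

```python
import numpy as np

def relu(t):
    # Coordinate-wise ReLU, applied after every affine map as in (4).
    return np.maximum(t, 0.0)

def relu_net(x, layers):
    """Evaluate ReLU o A_n o ... o ReLU o A_1 at x, where `layers` is a list
    of affine maps (W, b). The hidden layer width is the row dimension of the
    intermediate W's."""
    h = np.asarray(x, dtype=float)
    for W, b in layers:
        h = relu(W @ h + b)
    return h

# Example: input dimension d = 2, hidden layer width w = 3, depth n = 3, output dimension 1.
rng = np.random.default_rng(0)
d, w = 2, 3
layers = [(rng.standard_normal((w, d)), rng.standard_normal(w)),   # A_1 : R^d -> R^w
          (rng.standard_normal((w, w)), rng.standard_normal(w)),   # A_2 : R^w -> R^w
          (rng.standard_normal((1, w)), rng.standard_normal(1))]   # A_3 : R^w -> R
print(relu_net([0.3, 0.7], layers))   # nonnegative output, since the last operation is a ReLU
```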
Theorem 1.
Let $d \ge 1$, and let $f : [0,1]^d \to \mathbb{R}_+$ be a positive function. We have the following three cases:
- 1. (f is continuous)
- There exists a sequence $\{\mathcal{N}_k\}_{k \ge 1}$ of feed-forward neural nets with ReLU activations, input dimension $d$, hidden layer width $d + 2$, and output dimension 1 such that:
$$\lim_{k \to \infty} \|f - f_{\mathcal{N}_k}\|_{C^0} = 0. \qquad (5)$$
In particular, $w_{\min}(d) \le d + 2$. Moreover, write $\omega_f$ for the modulus of continuity of $f$ and fix $\varepsilon > 0$. There exists a feed-forward neural net $\mathcal{N}_\varepsilon$ with ReLU activations, input dimension $d$, hidden layer width $d + 3$, output dimension 1, and:
$$\operatorname{depth}(\mathcal{N}_\varepsilon) = \frac{2 \cdot d!}{\varepsilon^{d}} \qquad (6)$$
such that:
$$\|f - f_{\mathcal{N}_\varepsilon}\|_{C^0} \le \omega_f\big(\sqrt{d}\, \varepsilon\big). \qquad (7)$$
- 2. (f is convex)
- There exists a sequence $\{\mathcal{N}_k\}_{k \ge 1}$ of feed-forward neural nets with ReLU activations, input dimension $d$, hidden layer width $d + 1$, and output dimension 1 such that:
$$\lim_{k \to \infty} \|f - f_{\mathcal{N}_k}\|_{C^0} = 0. \qquad (8)$$
Hence, $w_{\min}^{\mathrm{conv}}(d) \le d + 1$. Further, there exists $C > 0$ such that if $f$ is both convex and Lipschitz with Lipschitz constant $L$, then the nets $\mathcal{N}_k$ in (8) can be taken to satisfy:
$$\operatorname{depth}(\mathcal{N}_k) = k, \qquad \|f - f_{\mathcal{N}_k}\|_{C^0} \le C\, L\, d^{3/2}\, k^{-2/d}. \qquad (9)$$
- 3. (f is smooth)
- There exists a constant $K$ depending only on $d$ and a constant $C$ depending only on the maximum of the first $K$ derivatives of $f$ such that for every $k \ge 1$, the width $d + 2$ nets $\mathcal{N}_k$ in (5) can be chosen so that:
$$\operatorname{depth}(\mathcal{N}_k) = k, \qquad \|f - f_{\mathcal{N}_k}\|_{C^0} \le \frac{C}{k}. \qquad (10)$$
The main novelty of Theorem 1 is the width estimate $d + 1$ and the quantitative depth estimates (9) for convex functions, as well as the analogous estimates (6) and (7) for continuous functions. Let us briefly explain the origin of the other estimates. The relation (5) and the corresponding estimate $w_{\min}(d) \le d + 2$ are a combination of the well-known fact that ReLU nets with one hidden layer can approximate any continuous function and a simple procedure by which a ReLU net with input dimension $d$ and a single hidden layer of width $n$ can be replaced by another ReLU net that computes the same function but has depth $n$ and width $d + 2$. For these width $d + 2$ nets, we are unaware of how to obtain quantitative estimates on the depth required to approximate a fixed continuous function to a given precision. At the expense of changing the width of our ReLU nets from $d + 2$ to $d + 3$, however, we furnish the estimates (6) and (7). On the other hand, using Theorem 3.1 in [], when $f$ is sufficiently smooth, we obtain the depth estimates (10) for width $d + 2$ ReLU nets. Indeed, since we are working on the compact set $[0,1]^d$, the smoothness classes from [] reduce to classes of functions that have sufficiently many bounded derivatives.
Our next result concerns the exact representation of piecewise affine functions by ReLU nets. Instead of measuring the complexity of such a function by its Lipschitz constant or modulus of continuity, we measure the complexity of a piecewise affine function by the minimal number of affine pieces needed to define it.
Theorem 2.
Let $d \ge 1$, and let $f : [0,1]^d \to \mathbb{R}$ be the function computed by some ReLU net with input dimension $d$, output dimension 1, and arbitrary width. There exist affine functions $g_\alpha, h_\beta : [0,1]^d \to \mathbb{R}$ such that $f$ can be written as the difference of positive convex functions:
$$f = g - h, \qquad g := \max_{1 \le \alpha \le N} g_\alpha, \qquad h := \max_{1 \le \beta \le M} h_\beta. \qquad (11)$$
Moreover, there exists a feed-forward neural net $\mathcal{N}$ with ReLU activations, input dimension $d$, hidden layer width $d + 3$, output dimension 1, and:
$$\operatorname{depth}(\mathcal{N}) = 2(N + M) \qquad (12)$$
that computes $f$ exactly. Finally, if $f$ is convex (and hence, $h$ vanishes), then the width of $\mathcal{N}$ can be taken to be $d + 1$, and the depth can be taken to be $N$.
The fact that the function computed by a ReLU net can be written as in (11) follows from Theorem 2.1 in []. The novelty in Theorem 2 is therefore the uniform width estimate $d + 3$ in the representation of any function computed by a ReLU net and the width estimate $d + 1$ for convex functions. Theorem 2 will be used in the proof of Theorem 1.
3. Relation to Previous Work
This article is related to several strands of prior work:
- Theorems 1 and 2 are “deep and narrow” analogs of the well-known “shallow and wide” universal approximation results (e.g., Cybenko [] and Hornik-Stinchcombe-White []) for feed-forward neural nets. Those articles show that essentially any scalar function on the d-dimensional unit cube can be approximated arbitrarily well by a feed-forward neural net with a single hidden layer of arbitrary width. Such results hold for a wide class of nonlinear activations, but they are not particularly illuminating from the point of view of understanding the expressive advantages of depth in neural nets.
- The results in this article complement the work of Liao-Mhaskar-Poggio [] and Mhaskar-Poggio [], who considered the advantages of depth for representing certain hierarchical or compositional functions by neural nets with both ReLU and non-ReLU activations. Their results (e.g., Theorem 1 in [] and Theorem 3.1 in []) give width bounds for approximation by both shallow nets and certain deep hierarchical nets.
- Theorems 1 and 2 are also quantitative analogs of Corollary 2.2 and Theorem 2.4 in the work of Arora-Basu-Mianjy-Mukherjee []. Their results give bounds on the depth of a ReLU net needed to compute exactly a piecewise linear function of d variables. However, except when $d = 1$, they do not obtain an estimate on the number of neurons in such a network and hence cannot bound the width of the hidden layers.
- Our results are related to Theorems II.1 and II.4 of Rolnick-Tegmark [], which are themselves extensions of Lin-Rolnick-Tegmark []. Their results give lower bounds on the total size (number of neurons) of a neural net (with non-ReLU activations) that approximates sparse multivariable polynomials. Their bounds do not imply a control on the width of such networks that depends only on the number of variables, however.
- This work was inspired in part by questions raised in the work of Telgarsky [,,]. In particular, in Theorems 1.1 and 1.2 of [], Telgarsky constructed interesting examples of sawtooth functions that can be computed efficiently by deep, width-2 ReLU nets but cannot be well approximated by shallower networks with a similar number of parameters.
- Theorems 1 and 2 are quantitative statements about the expressive power of depth without the aid of width. This topic, usually without considering bounds on the width, has been taken up by many authors. We refer the reader to [,] for several interesting quantitative measures of the complexity of functions computed by deep neural nets.
- Finally, we refer the reader to the interesting work of Yarotsky [], which provides bounds on the total number of parameters in a ReLU net needed to approximate a given class of functions (mainly balls in various Sobolev spaces).
4. Proof of Theorem 2
Proof of Theorem 2.
We first treat the case:
$$f = g = \max_{1 \le \alpha \le N} g_\alpha$$
when $f$ is convex. We seek to show that $f$ can be exactly represented by a ReLU net with input dimension $d$, hidden layer width $d + 1$, and depth $N$. Our proof relies on the following observation.
Lemma 1.
Fix $d \ge 1$, let $g : \mathbb{R}_{\ge 0}^d \to \mathbb{R}$ be an arbitrary function, and let $\ell : \mathbb{R}^d \to \mathbb{R}$ be affine. Define an invertible affine transformation $T : \mathbb{R}^{d+1} \to \mathbb{R}^{d+1}$ by:
$$T(x_1, \ldots, x_d, x_{d+1}) := \big(x_1, \ldots, x_d, \ x_{d+1} - \ell(x_1, \ldots, x_d)\big).$$
Then, the image of the graph of $g$ under:
$$T^{-1} \circ \mathrm{ReLU} \circ T$$
is the graph of $\max\{g, \ell\}$, viewed as a function on $\mathbb{R}_{\ge 0}^d$.
Proof.
We have $T^{-1}(x_1, \ldots, x_d, x_{d+1}) = (x_1, \ldots, x_d, \ x_{d+1} + \ell(x_1, \ldots, x_d))$. Hence, for each $x \in \mathbb{R}_{\ge 0}^d$, we have:
$$T^{-1} \circ \mathrm{ReLU} \circ T\, \big(x, g(x)\big) = T^{-1}\big(x, \ \max\{0, \ g(x) - \ell(x)\}\big) = \big(x, \ \max\{g(x), \ell(x)\}\big),$$
where we used that $\mathrm{ReLU}(x) = x$, since the coordinates of $x$ are nonnegative.
□
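A quick numerical check of Lemma 1 (our own sketch; the particular $g$, $\ell$, and helper names are arbitrary illustrative choices, not from the paper): applying $T^{-1} \circ \mathrm{ReLU} \circ T$ to a point $(x, g(x))$ of the graph of $g$ with $x \ge 0$ returns $(x, \max\{g(x), \ell(x)\})$.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

# Illustrative choices: a function g and an affine map ell on R^2.
g   = lambda x: np.sin(3.0 * x[0]) + x[1] ** 2
ell = lambda x: 0.5 * x[0] - 0.25 * x[1] + 0.1

def T(z):       # T(x, x_{d+1}) = (x, x_{d+1} - ell(x))
    return np.append(z[:-1], z[-1] - ell(z[:-1]))

def T_inv(z):   # T^{-1}(x, x_{d+1}) = (x, x_{d+1} + ell(x))
    return np.append(z[:-1], z[-1] + ell(z[:-1]))

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=2)               # nonnegative inputs, as in the lemma
    image = T_inv(relu(T(np.append(x, g(x)))))      # push the graph point through the layer
    assert np.allclose(image, np.append(x, max(g(x), ell(x))))
print("image of the graph of g is the graph of max{g, ell}")
```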
We now construct a neural net that computes $f = \max_{1 \le \alpha \le N} g_\alpha$. We note that the construction is potentially applicable to the study of avoiding sets (see the work of Shang []). Define invertible affine functions $T_1, \ldots, T_N : \mathbb{R}^{d+1} \to \mathbb{R}^{d+1}$ by:
$$T_\alpha(x, x_{d+1}) := \big(x, \ x_{d+1} - g_\alpha(x)\big), \qquad x \in \mathbb{R}^d, \ \alpha = 1, \ldots, N,$$
and set:
$$H_\alpha := T_\alpha^{-1} \circ \mathrm{ReLU} \circ T_\alpha.$$
Further, define:
$$P := e_{d+1}^{\ast},$$
where $e_{d+1}$ is the $(d+1)$-st standard basis vector, so that $P$ is the linear map from $\mathbb{R}^{d+1}$ to $\mathbb{R}$ that maps $(x_1, \ldots, x_{d+1})$ to $x_{d+1}$. Finally, set:
$$\mathcal{N} := P \circ H_N \circ H_{N-1} \circ \cdots \circ H_1 \circ \mathrm{ReLU} \circ E,$$
where $E : \mathbb{R}^d \to \mathbb{R}^{d+1}$, $E(x) := (x, 0)$, maps $x$ to the graph of the zero function. Note that the ReLU in this initial layer acts linearly (indeed, as the identity), since the coordinates of $x \in [0,1]^d$ and the appended zero are nonnegative. With this notation, repeatedly using Lemma 1, we find that:
$$f_{\mathcal{N}}(x) = \max\big\{0, \ g_1(x), \ g_2(x), \ \ldots, \ g_N(x)\big\} = \max_{1 \le \alpha \le N} g_\alpha(x) = f(x) \qquad \text{for all } x \in [0,1]^d,$$
where the second equality uses that $f$ is positive.
The net $\mathcal{N}$ therefore has input dimension $d$, hidden layer width $d + 1$, depth $N$, and computes $f$ exactly.
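The construction above is easy to trace numerically. The sketch below (our own code, with hypothetical affine pieces) folds in one layer per affine piece while carrying only the $d$ input coordinates and one graph coordinate, and checks that the result is $\max\{0, g_1(x), \ldots, g_N(x)\}$ on $[0,1]^d$.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

d, N = 2, 5
rng = np.random.default_rng(2)
# Hypothetical affine pieces g_alpha(x) = a_alpha . x + b_alpha (illustrative only).
A, b = rng.standard_normal((N, d)), rng.standard_normal(N)

def convex_net(x):
    # Width-(d+1) state: the d input coordinates plus one graph coordinate,
    # initialised on the graph of the zero function, as in the construction above.
    top = 0.0
    for alpha in range(N):                    # one layer per affine piece
        ell = A[alpha] @ x + b[alpha]
        top = relu(top - ell) + ell           # H_alpha: running max -> max(running max, g_alpha)
        x = relu(x)                           # unchanged, since x >= 0 on [0,1]^d
    return top

for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=d)
    assert np.isclose(convex_net(x), max(0.0, float(np.max(A @ x + b))))
print("width-(d+1) net reproduces max{0, g_1, ..., g_N} on [0,1]^d")
```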
Next, consider the general case when $f$ is given by:
$$f = g - h = \max_{1 \le \alpha \le N} g_\alpha - \max_{1 \le \beta \le M} h_\beta,$$
as in (11). For this situation, we use a different way of computing maxima using ReLU nets.
Lemma 2.
There exists a ReLU net $\mathcal{M}$ with input dimension 2, hidden layer width 2, output dimension 1, and depth 2 such that:
$$f_{\mathcal{M}}(x, y) = \max\{x, y\} \qquad \text{for all } x \in \mathbb{R}, \ y \ge 0.$$
Proof.
Set $\sigma(t) := \mathrm{ReLU}(t) = \max\{0, t\}$ and define:
$$f_{\mathcal{M}}(x, y) := \sigma\big( \sigma(x - y) + \sigma(y) \big).$$
We have for each $x \in \mathbb{R}$ and $y \ge 0$:
$$\sigma\big( \sigma(x - y) + \sigma(y) \big) = \sigma\big( \max\{x - y, 0\} + y \big) = \sigma\big( \max\{x, y\} \big) = \max\{x, y\},$$
as desired. □
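A one-line numerical check of Lemma 2 (our sketch): the width-2 expression $\sigma(\sigma(x - y) + \sigma(y))$ agrees with $\max\{x, y\}$ whenever $y \ge 0$; the restriction $y \ge 0$ is what allows the net to get by with width 2.

```python
import numpy as np

sigma = lambda t: np.maximum(t, 0.0)    # ReLU

rng = np.random.default_rng(3)
x = rng.uniform(-5.0, 5.0, size=10000)  # the first argument may be any real number
y = rng.uniform(0.0, 5.0, size=10000)   # the second argument must be nonnegative
assert np.allclose(sigma(sigma(x - y) + sigma(y)), np.maximum(x, y))
print("sigma(sigma(x - y) + sigma(y)) == max(x, y) for y >= 0")
```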
We now describe how to construct a ReLU net with input dimension $d$, hidden layer width $d + 3$, output dimension 1, and depth as in (12) that exactly computes $f$. We use width $d$ to copy the input $x$, width 2 to compute the successive maxima of the affine functions defining the positive convex functions $g$ and $h$ using the net from Lemma 2 above, and width 1 as memory in which we store $g(x)$ while computing $h(x)$. The final layer computes the difference $f(x) = g(x) - h(x)$. □
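The following sketch (ours, with hypothetical affine pieces chosen so that $g$ and $h$ are positive on $[0,1]^d$) traces this width-$(d+3)$ layout for $d = 2$: $d$ coordinates copy the input, two coordinates run the successive maxima of Lemma 2, and one coordinate stores $g(x)$ while $h(x)$ is computed.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

d, N, M = 2, 4, 3
rng = np.random.default_rng(4)
# Hypothetical affine pieces; the constants are shifted so that g, h >= 0 on [0,1]^2.
Ag, bg = rng.uniform(-1.0, 1.0, (N, d)), rng.uniform(2.0, 3.0, N)
Ah, bh = rng.uniform(-1.0, 1.0, (M, d)), rng.uniform(2.0, 3.0, M)

def running_max(x, A, b):
    # Width-2 channel: fold in one affine piece at a time via the net of Lemma 2.
    m = relu(A[0] @ x + b[0])
    for j in range(1, len(b)):
        m = relu(relu((A[j] @ x + b[j]) - m) + relu(m))   # = max(A[j].x + b[j], m), since m >= 0
    return m

def general_net(x):
    x = relu(x)                        # d coordinates: copy of the input (x >= 0)
    memory = running_max(x, Ag, bg)    # stored in the width-1 memory coordinate: g(x)
    hval = running_max(x, Ah, bh)      # computed afterwards in the width-2 channel: h(x)
    return memory - hval               # final affine layer outputs f(x) = g(x) - h(x)

for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=d)
    assert np.isclose(general_net(x), np.max(Ag @ x + bg) - np.max(Ah @ x + bh))
print("width-(d+3) layout reproduces f = g - h on [0,1]^d")
```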
5. Proof of Theorem 1
Proof of Theorem 1.
We begin by showing (8) and (9). Suppose $f : [0,1]^d \to \mathbb{R}_+$ is convex, and fix $\varepsilon > 0$. A simple discretization argument shows that there exists a piecewise affine convex function $g$ such that $\|f - g\|_{C^0} \le \varepsilon$. By Theorem 2, $g$ can be exactly represented by a ReLU net with hidden layer width $d + 1$. This proves (8). In the case that $f$ is Lipschitz, we use the following, a special case of Lemma 4.1 in [].
Proposition 1.
Suppose $f : [0,1]^d \to \mathbb{R}$ is convex and Lipschitz with Lipschitz constant $L$. Then, for every $k \ge 1$, there exist $k$ affine maps $A_1, \ldots, A_k : [0,1]^d \to \mathbb{R}$ such that:
$$\sup_{x \in [0,1]^d} \Big| f(x) - \max_{1 \le j \le k} A_j(x) \Big| \le C\, L\, d^{3/2}\, k^{-2/d},$$
where $C > 0$ is a universal constant.
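As a simple illustration of max-affine approximation in the spirit of Proposition 1 (this tangent-plane sketch is ours and is not the near-optimal construction of []): for a smooth convex $f$ on $[0,1]^2$, the maximum of $k$ supporting hyperplanes placed on a grid already converges uniformly to $f$ as $k$ grows.

```python
import numpy as np

# Convex, Lipschitz test function on [0,1]^2 and its gradient (illustrative choice).
f    = lambda X: np.log(np.exp(X[..., 0]) + np.exp(2.0 * X[..., 1]))
grad = lambda c: np.array([np.exp(c[0]), 2.0 * np.exp(2.0 * c[1])]) / (np.exp(c[0]) + np.exp(2.0 * c[1]))

def max_affine(X, centers):
    # Maximum of the supporting hyperplanes of f at the given centers; by convexity,
    # this is a piecewise affine lower bound for f.
    planes = [f(c) + (X - c) @ grad(c) for c in centers]
    return np.max(np.stack(planes, axis=0), axis=0)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200)), axis=-1).reshape(-1, 2)
for m in (2, 4, 8):
    centers = np.stack(np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, m)), axis=-1).reshape(-1, 2)
    err = np.max(f(grid) - max_affine(grid, centers))
    print(f"k = {m * m:3d} affine pieces: sup-norm error ~ {err:.4f}")
```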
Combining this result with Theorem 2 proves (9). We turn to checking (5) and (10). We need the following observation, which seems to be well known but is not written down in the literature.
Lemma 3.
Let $\mathcal{N}$ be a ReLU net with input dimension $d$, a single hidden layer of width $n$, and output dimension 1. There exists another ReLU net $\widetilde{\mathcal{N}}$ that computes the same function as $\mathcal{N}$, but has input dimension $d$ and $n$ hidden layers of width $d + 2$.
Proof.
Denote by $h_1, \ldots, h_n$ the affine functions computed by the neurons in the hidden layer of $\mathcal{N}$, so that:
$$f_{\mathcal{N}}(x) = \mathrm{ReLU}\Big( \sum_{j=1}^{n} c_j\, \mathrm{ReLU}\big(h_j(x)\big) + b \Big)$$
for some $c_1, \ldots, c_n, b \in \mathbb{R}$. Let $B > 0$ be sufficiently large so that:
$$B + \sum_{i=1}^{j} c_i\, \mathrm{ReLU}\big(h_i(x)\big) \ge 0 \qquad \text{for all } x \in [0,1]^d \ \text{and} \ j = 1, \ldots, n.$$
The affine transformations computed by the hidden layers of $\widetilde{\mathcal{N}}$ are then:
$$\widetilde{A}_1(x) := \big(x, \ h_1(x), \ B\big)$$
and, for $j = 2, \ldots, n$:
$$\widetilde{A}_j(x, t, s) := \big(x, \ h_j(x), \ s + c_{j-1}\, t\big),$$
followed by a final affine map $(x, t, s) \mapsto s + c_n t + b - B$.
We are essentially using width $d$ to copy in the input variable, width 1 to compute each $h_j(x)$, and width 1 to store the output. □
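Here is a minimal sketch of this bookkeeping (our code, with an arbitrary single-hidden-layer net): $d$ coordinates copy the input, one coordinate evaluates the current hidden unit, and one coordinate accumulates the running sum, offset by a constant $B$ large enough that it stays nonnegative through every ReLU.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

d, n = 2, 7
rng = np.random.default_rng(5)
W, v = rng.uniform(-1.0, 1.0, (n, d)), rng.uniform(-1.0, 1.0, n)   # hidden-unit affine maps h_j
c, b0 = rng.uniform(-1.0, 1.0, n), rng.uniform(-1.0, 1.0)          # output-layer weights

def shallow(x):
    # Single hidden layer of width n, with a terminal ReLU as in (4).
    return relu(c @ relu(W @ x + v) + b0)

B = 100.0   # large enough that every partial sum below stays nonnegative on [0,1]^d

def deep_narrow(x):
    # Width d + 2 state: (copy of x, current hidden unit, running sum + B).
    acc = B
    for j in range(n):
        unit = relu(W[j] @ x + v[j])    # width-1 "compute" coordinate
        acc = relu(acc + c[j] * unit)   # width-1 "memory" coordinate, kept >= 0 by the offset
        x = relu(x)                     # width-d copy of the input (x >= 0 on [0,1]^d)
    return relu(acc + b0 - B)           # final layer removes the offset

for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=d)
    assert np.isclose(shallow(x), deep_narrow(x))
print("depth-n, width-(d+2) net matches the width-n shallow net on [0,1]^d")
```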
Recall that positive continuous functions on $[0,1]^d$ can be arbitrarily well approximated by smooth functions and hence by ReLU nets with a single hidden layer (see, e.g., Theorem 3.1 in []). The relation (5) therefore follows from Lemma 3. Similarly, by Theorem 3.1 in [], if $f$ is smooth, then there exist $K$ depending only on $d$ and a constant $C$ depending only on the maximum value of the first $K$ derivatives of $f$ such that:
$$\inf_{\mathcal{N}} \|f - f_{\mathcal{N}}\|_{C^0} \le \frac{C}{n},$$
where the infimum is over ReLU nets with a single hidden layer of width n. Combining this with Lemma 3 proves (10).
It remains to prove (6) and (7). To do this, fix a positive continuous function $f$ with modulus of continuity $\omega_f$. Recall that the volume of the unit $d$-simplex is $1/d!$, and fix $\varepsilon > 0$. Consider the partition:
$$[0,1]^d = \bigcup_{i=1}^{K} T_i, \qquad K = d! \cdot \lceil 1/\varepsilon \rceil^d,$$
of $[0,1]^d$ into $K$ copies of $\varepsilon$ times the standard $d$-simplex. Here, each $T_i$ denotes a single scaled copy of the unit simplex. To create this partition, we first sub-divide $[0,1]^d$ into at most $\lceil 1/\varepsilon \rceil^d$ cubes of side length at most $\varepsilon$. Then, we subdivide each such smaller cube into $d!$ copies of the standard simplex (which has volume $1/d!$) rescaled to have side length $\varepsilon$. Define $f_\varepsilon$ to be a piecewise linear approximation to $f$ obtained by setting $f_\varepsilon$ equal to $f$ on the vertices of the $T_i$'s and taking $f_\varepsilon$ to be affine on their interiors. Since the diameter of each $T_i$ is at most $\sqrt{d}\, \varepsilon$, we have:
$$\|f - f_\varepsilon\|_{C^0} \le \omega_f\big(\sqrt{d}\, \varepsilon\big).$$
Since $f_\varepsilon$ is a positive, piecewise affine function whose number of affine pieces is controlled by $K$, applying Theorem 2 to $f_\varepsilon$ produces a ReLU net with input dimension $d$, hidden layer width $d + 3$, and depth proportional to the number of pieces, which yields (6) and (7). □
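A small check of this approximation step in $d = 2$ (our code; the test function is an arbitrary smooth choice with Lipschitz constant at most $\sqrt{13}$, so that $\omega_f(t) \le \sqrt{13}\, t$): the piecewise linear interpolant on the Kuhn triangulation with mesh $\varepsilon$ differs from $f$ by at most $\omega_f(\sqrt{2}\, \varepsilon)$.

```python
import numpy as np

# Illustrative continuous function on [0,1]^2; its gradient norm is at most sqrt(13),
# so omega_f(t) <= sqrt(13) * t.
f = lambda x, y: np.sin(3.0 * x) * np.cos(2.0 * y)
L = np.sqrt(13.0)

def kuhn_interpolant(x, y, eps):
    # Piecewise linear interpolation of f on the subdivision of [0,1]^2 into squares
    # of side eps, each split into d! = 2 rescaled copies of the standard simplex.
    i, j = np.floor(x / eps), np.floor(y / eps)
    u, v = x / eps - i, y / eps - j
    x0, y0 = i * eps, j * eps
    f00, f11 = f(x0, y0), f(x0 + eps, y0 + eps)
    if u >= v:   # simplex with local vertices (0,0), (1,0), (1,1)
        return f00 + u * (f(x0 + eps, y0) - f00) + v * (f11 - f(x0 + eps, y0))
    else:        # simplex with local vertices (0,0), (0,1), (1,1)
        return f00 + v * (f(x0, y0 + eps) - f00) + u * (f11 - f(x0, y0 + eps))

rng = np.random.default_rng(6)
eps = 0.05
pts = rng.uniform(0.0, 1.0 - 1e-9, size=(20000, 2))
err = max(abs(f(px, py) - kuhn_interpolant(px, py, eps)) for px, py in pts)
print(f"sup error ~ {err:.4f} <= sqrt(13) * sqrt(2) * eps = {L * np.sqrt(2.0) * eps:.4f}")
```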
6. Conclusions
We considered in this article the expressive power of ReLU networks with bounded hidden layer widths. In particular, we showed that ReLU networks of width $d + 2$ and arbitrary depth are capable of arbitrarily good approximation of any scalar continuous function of $d$ variables. We showed further that this bound can be reduced to $d + 1$ in the case of convex functions, and we gave quantitative rates of approximation in all cases. Our results show that deep ReLU networks, even at moderate width, are universal function approximators. Our work leaves open the question of whether such function representations can be learned by (stochastic) gradient descent from a random initialization. We will take up this topic in future work.
Funding
This research was funded by NSF Grants DMS-1855684 and CCF-1934904.
Acknowledgments
It is a pleasure to thank Elchanan Mossel and Leonid Hanin for many helpful discussions. This paper originated while I attended EM’s class on deep learning []. In particular, I would like to thank him for suggesting proving quantitative bounds in Theorem 2 and for suggesting that a lower bound can be obtained by taking piecewise linear functions with many different directions. He also pointed out that the width estimates for continuous functions in Theorem 1 were sub-optimal in a previous draft. I would also like to thank Leonid Hanin for detailed comments on several previous drafts and for useful references to results in approximation theory. I am also grateful to Brandon Rule and Matus Telgarsky for comments on an earlier version of this article. I am also grateful to BR for the original suggestion to investigate the expressivity of neural nets of width two. I would also like to thank Max Kleiman-Weiner for useful comments and discussion. Finally, I thank Zhou Lu for pointing out a serious error in what used to be Theorem 3 in a previous version of this article. I have removed that result.
Conflicts of Interest
The author declares no conflict of interest.
References
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Arora, R.; Basu, A.; Mianjy, P.; Mukherjee, A. Understanding deep neural networks with Rectified Linear Units. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Liao, Q.; Mhaskar, H.; Poggio, T. Learning functions: When is deep better than shallow. arXiv 2016, arXiv:1603.00988v4.
- Lin, H.; Rolnick, D.; Tegmark, M. Why does deep and cheap learning work so well? arXiv 2016, arXiv:1608.08225v3.
- Mhaskar, H.; Poggio, T. Deep vs. shallow networks: An approximation theory perspective. Anal. Appl. 2016, 14, 829–848.
- Poole, B.; Lahiri, S.; Raghu, M.; Sohl-Dickstein, J.; Ganguli, S. Exponential expressivity in deep neural networks through transient chaos. Adv. Neural Inf. Process. Syst. 2016, 29, 3360–3368.
- Raghu, M.; Poole, B.; Kleinberg, J.; Ganguli, S.; Sohl-Dickstein, J. On the expressive power of deep neural networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 2847–2854.
- Telgarsky, M. Representation benefits of deep feedforward networks. arXiv 2015, arXiv:1509.08101.
- Telgarsky, M. Benefits of depth in neural networks. In JMLR: Workshop and Conference Proceedings, New York, NY, USA, 19 June 2016; Volume 49, pp. 1–23.
- Telgarsky, M. Neural networks and rational functions. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 3387–3393.
- Yarotsky, D. Error bounds for approximations with deep ReLU networks. Neural Netw. 2017, 94, 103–114.
- Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314.
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
- Hanin, B.; Sellke, M. Approximating continuous functions by ReLU nets of minimal width. arXiv 2017, arXiv:1710.11278.
- Balázs, G.; György, A.; Szepesvári, C. Near-optimal max-affine estimators for convex regression. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, San Diego, CA, USA, 9–12 May 2015; Volume 38, pp. 56–64.
- Rolnick, D.; Tegmark, M. The power of deeper networks for expressing natural functions. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Shang, Y. A combinatorial necessary and sufficient condition for cluster consensus. Neurocomputing 2016, 216, 611–616.
- Mossel, E. Mathematical Aspects of Deep Learning. Available online: http://elmos.scripts.mit.edu/mathofdeeplearning/mathematical-aspects-of-deep-learning-intro/ (accessed on 10 September 2019).
© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).