Neural Network Models for Empirical Finance

This paper presents an overview of the procedures involved in prediction with machine learning models, with special emphasis on deep learning. We study suitable objective functions for prediction in high-dimensional settings and discuss the role of regularization methods in alleviating the problem of overfitting. We also review other features of machine learning methods, such as the selection of hyperparameters, the role of the architecture of a deep neural network in model prediction, and the importance of using different optimization routines for model selection. The review also considers the issue of model uncertainty and presents state-of-the-art methods for constructing prediction intervals using ensemble methods, such as the bootstrap and Monte Carlo dropout. These methods are illustrated in an out-of-sample empirical forecasting exercise that compares the performance of machine learning methods against conventional time series models for different financial indices. The results are confirmed in an asset allocation context.


Introduction
Statistical science has changed a great deal in the past ten years, and it is continuing to change, in response to technological advances in science and industry. The world is awash with big and complicated data, and researchers are trying to make sense of it. While traditionally scientists fit a few statistical models by hand, they now use sophisticated computational tools to search through a large number of models, looking for meaningful patterns and accurate predictions. Standard statistical models have been extended in many ways. Models now allow for more predictors than observations, accommodate nonlinear relationships and interactions between the predictors, and, in particular, handle the presence of strong correlations (multicollinearity). One of the main advantages of these novel models based on machine learning techniques is the gain in predictive performance when compared to standard statistical models, together with their ease of use due to the availability of toolboxes and off-the-shelf routines that make their implementation straightforward, even in large dimensions characterized by many covariates and increasingly complex datasets.
Nowadays, machine learning (ML) technology is widespread: from web searches to content filtering on social networks to recommendations on e-commerce websites. ML identifies objects in images, transcribes speech into text, matches news items, posts, or products with users' interests, and selects the relevant results of a search, making use of a class of techniques called deep learning. Deep learning allows computational models that are composed of multiple processing layers to learn representations of big, complex datasets, uncovering intricate structure within them. These methods have dramatically improved the state of the art in speech recognition, visual object recognition, object detection, and many other domains, such as drug discovery and genomics, and are increasingly present in consumer products, such as cameras, smartphones, and computerized personal assistants. For example, Apple's Siri, Amazon's Alexa, Google Now, and Microsoft's Cortana employ deep neural networks to recognize, understand, and answer human questions.
A defining characteristic of machine learning models is their ability to accommodate a large set of potential predictor variables and different functional forms. The definition of machine learning is often context-specific. We use the term to describe a diverse collection of high-dimensional models for statistical prediction, combined with regularization methods for model selection based on a variety of penalty functions. The development of algorithms to implement the optimization procedures in an efficient manner is also an intrinsic part of this novel methodology. Machine learning was initially developed for prediction; this is particularly relevant in empirical finance, in which the object of interest is usually the prediction of an asset return or its conditional volatility. The literature on machine learning for empirical finance modeling has grown enormously in recent years; see, for example, Chinco et al. (2019). These authors apply the Least Absolute Shrinkage and Selection Operator (lasso) to make rolling one-minute-ahead return forecasts using the entire cross-section of lagged returns as candidate predictors. The lasso increases both the out-of-sample fit and forecast-implied Sharpe ratios. This out-of-sample success comes from identifying predictors that are unexpected, short-lived, and sparse. Another recent influential study on empirical finance modeling is Gu et al. (2020). These authors perform a comparative analysis of machine learning methods for measuring asset risk premiums. They study a set of candidate models that includes linear regression, generalized linear models with penalization, and dimension reduction via principal components regression and partial least squares, and compare these methods against machine learning methods such as regression trees (including boosted trees and random forests) and neural networks. The study demonstrates large economic gains to investors who use forecasts from regression trees and neural networks, in some cases doubling the performance of leading regression-based strategies from the literature. A more general treatment of the topic can be found in Friedman (1994), which provides an early unifying review across the relevant disciplines (applied mathematics, statistics, engineering, artificial intelligence, and connectionism); LeCun et al. (2015), which provides a general overview of deep learning; and Goodfellow et al. (2016), which provides a thorough textbook treatment.
Our aim in this paper is to present an overview of machine learning methods that complements the work of Gu et al. (2020). Rather than surveying the full range of methods for prediction in high-dimensional settings, we focus on feedforward neural networks and, in particular, on deep learning models. Our objective is to explain, in detail, the optimization problem that characterizes prediction in machine learning models. Overfitting is an important concern in these models, owing to the estimation of a large number of parameters. To correct for this, regularization methods are introduced and their properties discussed at length. We distinguish different types of penalty functions in a mean square prediction error setting and discuss the properties of methods such as the lasso, elastic net, and ridge regression. We also place particular emphasis on the role of the tuning parameters that determine the quality of the predictions, such as the width and depth of a neural network, the constant characterizing the contribution of the penalty function to the optimization problem, and the effect of other hyperparameters that are fine-tuned through cross-validation, model dropout (e.g., Smyl 2020), and other optimization methods. In contrast to Gu et al. (2020), our overview is more focused on the understanding of the underlying mechanisms necessary to implement an artificial neural network. In this overview, we are also concerned with recent topics that have gained significant attention in the deep learning literature, such as the optimality of the architecture (e.g., Calvo-Pardo et al. 2020) and the measurement of the uncertainty around model predictions. We discuss, in some detail, the choice of bootstrap methods, see Tibshirani (1996) for a simulation-based review of the topic, and the Monte Carlo dropout of Smyl (2020).
Finally, in the same spirit as Chinco et al. (2019) and, more specifically, Gu et al. (2020), we also propose an application of these methods that illustrates their relevance in empirical finance. Whereas these authors highlight the advantages of using regression trees and neural networks for asset pricing (measuring the risk premium on risky assets) as compared to linear regression and techniques based on dimension reduction, our empirical exercise performs a comparative study against conventional time series models that are widely used for empirical finance modeling. In particular, we present a forecasting exercise for the conditional mean and volatility of asset returns for three U.S. financial indices. Our objective in this section is twofold. First, we aim to assess the predictive performance of a modern deep neural network model and then compare it against a traditional time series model that carries out a transitory-permanent decomposition of the asset price. The permanent component captures the trend of the log-price, and the transitory component models the log-returns on the financial indices. The transitory component also accommodates the presence of conditional heteroscedasticity by fitting a GARCH(1,1) model. The statistical comparison of predictive performance is carried out by implementing a Diebold and Mariano (1995) test of predictive accuracy. The results of the empirical analysis provide overwhelming evidence in favor of the neural network model for the three financial indices. Second, as in Gu et al. (2020), we add economic significance to the comparison. To do this, we compare the Sharpe ratios between optimal portfolios constructed from a combination of the three financial indices. The optimal combination is obtained using Markowitz's (1952) mean-variance and minimum-variance portfolios as the investor's objective functions. Portfolio performance is assessed by estimating an out-of-sample Sharpe ratio. The results confirm the above findings on the outperformance of machine learning models over sophisticated time series models in terms of economic performance.
The paper is structured as follows. Section 2 discusses the choice of suitable objective functions in machine learning problems. Section 3 presents recent advances in deep learning and focuses on deep neural networks. Section 4 studies the role of uncertainty in machine learning models and discusses recent advances in the analysis of uncertainty using these novel procedures. Section 5 presents an empirical comparative study of these methods for modeling the conditional mean and volatility of financial returns for several financial indices. Section 6 summarizes the contributions of the study.

The Objective Function in Machine Learning Problems: Minimization Versus Regularization
This section lays out the optimization problem that is common in the machine learning literature. Machine learning describes a diverse collection of high-dimensional models for statistical prediction, combined with regularization methods for model selection and mitigation of overfitting. The high-dimensional nature of machine learning enhances the flexibility of the methodology relative to more traditional econometric prediction techniques. However, with enhanced flexibility comes a higher propensity to overfit the data. Therefore, it is necessary to consider objective functions that penalize the excessive parametrization of the model. The final goal of machine learning methods is to achieve an approximately optimal specification at a manageable computational cost. In this section, we describe candidate optimization functions for supervised and unsupervised machine learning problems and then discuss the role of regularization.

Unsupervised Learning
ML algorithms can be broadly categorized as unsupervised or supervised. Unsupervised learning algorithms aim at uncovering useful properties of the structure of the input dataset, i.e., there is no y. Given that the true data generating process (DGP) p_data(X) is unknown, the goal is to learn p_data(X), or some useful properties of it, from a random sample of i = 1, ..., N realizations of input data only, {X_i}, on the basis of which the empirical distribution \hat{p}_data(X) obtains. Letting p_model(X; θ) be a parametric family of probability distributions indexed by θ that estimates the unknown true p_data(X), unsupervised learning corresponds to finding the parameter vector θ that minimizes the dissimilarity/distance between p_model(X; θ) and \hat{p}_data(X):

θ_ML = arg min_θ D_KL(\hat{p}_data || p_model),

where θ_ML is the maximum likelihood estimator and D_KL(\hat{p}_data || p_model) denotes the Kullback-Leibler divergence. To obtain this, note that θ_ML = arg max_θ p_model(X; θ) with p_model(X; θ) = ∏_{i=1}^N p_model(X_i; θ), which, after taking logs and dividing by N, is equivalent to

θ_ML = arg max_θ (1/N) ∑_{i=1}^N log p_model(X_i; θ) = arg max_θ E_{X∼\hat{p}_data}[log p_model(X; θ)]

by the analogy principle.
The cross-entropy in the above expression is simply −E_{X∼\hat{p}_data}[log p_model(X; θ)]: since log \hat{p}_data(X) does not depend on θ, minimizing D_KL is equivalent to minimizing the cross-entropy, or 'empirical risk minimization'; e.g., the mean squared error is the cross-entropy between the empirical distribution and a Gaussian model. In machine learning (ML), the cross-entropy is called the 'cost function', J(θ), while, in statistics, it is called the 'loss function', l(θ) ≡ L[\hat{p}_data(X), p_model(X; θ)]. Examples of popular unsupervised deep learning models, not necessarily parametric, are k-means clustering, auto-encoders, and generative adversarial networks (GANs).
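The equivalence between cross-entropy minimization and maximum likelihood can be made concrete with a short numerical sketch (Python, with arbitrary sample values; not part of the original exposition). For a Gaussian model with unit variance, the average negative log-likelihood is an affine transformation of the MSE, so both are minimized at the sample mean:

```python
import math

def gaussian_nll(theta, xs, sigma=1.0):
    """Average negative log-likelihood: the cross-entropy between the
    empirical distribution of xs and a N(theta, sigma^2) model."""
    n = len(xs)
    return sum(0.5 * math.log(2 * math.pi * sigma**2)
               + (x - theta)**2 / (2 * sigma**2) for x in xs) / n

def mse(theta, xs):
    """Mean squared error of the constant prediction theta."""
    return sum((x - theta)**2 for x in xs) / len(xs)

xs = [1.0, 2.0, 2.5, 4.0]
# Both criteria are minimized at the sample mean, the ML estimator.
theta_ml = sum(xs) / len(xs)
```

With sigma = 1, gaussian_nll(θ) = 0.5·log(2π) + 0.5·mse(θ), so the two objectives share the same minimizer.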

Supervised Learning
Supervised learning methods aim to develop a computational relationship (formula/algorithm) between P inputs (predictors, features, explanatory or independent variables), X = {x_1, ..., x_P}, and K outputs (dependent or response variables), y = {y_1, ..., y_K}, for determining/predicting/estimating values of y, given only the values of X, in the presence of unobserved/uncontrolled quantities z = {z_1, ..., z_U}:

y_k = g_k(x_1, ..., x_P; z_1, ..., z_U),

where g_k(·) denotes a functional form relating the observed input variables and the unobserved variables to the dependent variables. In order to reflect the uncertainty associated with the unobserved inputs z, the above relationship is replaced by a statistical model:

y_k = f_k(x_1, ..., x_P) + ε_k.

By construction, this model satisfies E_ε[y_k | x_1, ..., x_P] = f_k(x_1, ..., x_P), with E_ε[·|·] denoting the conditional expectation evaluated under the distribution function of the error term conditional on the information set. For simplicity, we drop the k subscript, which amounts to assuming that there are separate models for each output k, ignoring that they depend on the same set of input variables. To the extent that the error term ε is a random variable, the output variable y becomes a random variable.
Specifying a set of observed input values X specifies a distribution of output values y, the mean of which is the target function f(X). (In practice, strategies that treat the K outputs as a joint system often improve accuracy.) The input and output variables can be real or categorical, but categories can always be converted into 'indicators' or 'dummies' that are real-valued. More specifically, supervised learning algorithms aim to obtain a useful approximation \hat{f}(X) to the true (unknown) 'target' function f(X) in (3) by modifying (under constraints) the input/output relationship \hat{f}(X) that they produce, in response to the differences {y_i − \hat{y}_i} (errors) between the predicted \hat{y}_i = \hat{f}(X_i) and realized y_i system outputs:

\hat{f}(X) ∈ arg min_g (1/N) ∑_{i=1}^N L[y_i, g(X_i)],     (4)

where L(·, ·) is the 'loss function', a measure of distance (error) between y_i and \hat{y}_i. Cross-entropy minimization therefore corresponds to mean squared error (MSE) minimization when the model is hypothesized to be Gaussian with mean g(X; θ). In addition, this example shows that optimally choosing the parameter vector θ = θ_ML, which characterizes \hat{f}(X) = g(X; θ), is equivalent to solving (4) when L[y_i, \hat{y}_i] = [y_i − \hat{y}_i]^2. Therefore, approximating/learning the unknown function f(X) corresponds to estimating the unknown true conditional probability p(y|X), once we conjecture a parameterization p_model(y|X; θ) for it. Popular supervised deep learning models, not necessarily parametric, are support vector machines (SVMs) based on kernel methods, k-nearest neighbor regression, and decision trees.
Notice that (4) is the available-sample {y_i, X_i}_{i=1}^N analog to solving for the global prediction error in (3):

f*(X) ∈ arg min_g E_X E_ε L[y, g(X)],

where the outer expectation is taken under p_data(X), the unknown true data generating process.
As an example, replace L[y_i, \hat{y}_i] = [y_i − \hat{y}_i]^2 in (6) in order to obtain the standard expressions for the bias-variance trade-off in the Mean Squared Error (MSE):

MSE(\hat{f}) = σ_ε^2 + Bias[\hat{f}(X)]^2 + Var[\hat{f}(X)],

where σ_ε^2 is the variance of the noise ε, and MSE(\hat{f}) denotes the MSE of \hat{f}(X) averaged over all training samples of size N that could be realized from the system with probabilities governed by p_data(X) and F_ε(ε). The squared bias, Bias[\hat{f}(X)]^2 = {f(X) − E[\hat{f}(X)]}^2, measures the square of the difference between the target function f(X) and the average approximation value at a particular sample point. Problem (6) defines the target performance measure for prediction in supervised learning/function approximation: as new input-only observations become available, collected in a prediction or test sample {y_i, X_i}_{i=1}^{N''}, we want to predict (estimate) a likely output value using \hat{f}(X_i), such that \hat{y}_i = \hat{f}(X_i), where \hat{f}(X) was obtained from (4) exploiting the available sample {y_i, X_i}_{i=1}^{N'}. Computing (1/N'') ∑_{i=1}^{N''} L[y_i, \hat{y}_i] then allows the researcher to evaluate the out-of-sample performance of the algorithm/function approximation \hat{f}(X), showing that accurate approximation and future prediction are one and the same objective. As future data are unavailable, the standard practice is to divide the available sample {y_i, X_i}_{i=1}^N into two disjoint parts: a training/learning sample {y_i, X_i}_{i=1}^{N'} in (4), from which \hat{f}(X) obtains, and a prediction/test sample {y_i, X_i}_{i=1}^{N''}, on which the out-of-sample predictive performance of \hat{f}(X) is evaluated, so that N = N' + N''.
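The training/test split described above can be sketched as follows (an illustrative Python fragment; the toy data, function names, and split fraction are our own):

```python
import random

def train_test_split(data, test_frac=0.25, seed=0):
    """Split {y_i, X_i} into disjoint training and test samples, N = N' + N''."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_test = int(len(data) * test_frac)
    test = [data[i] for i in idx[:n_test]]
    train = [data[i] for i in idx[n_test:]]
    return train, test

def out_of_sample_mse(f_hat, test):
    """Average squared loss on the held-out test sample."""
    return sum((y - f_hat(x))**2 for x, y in test) / len(test)

# toy noise-free DGP: y = 2x; OLS through the origin on the training sample
data = [(x, 2.0 * x) for x in range(20)]
train, test = train_test_split(data)
slope = sum(y * x for x, y in train) / sum(x * x for x, y in train)
```

Here the out-of-sample MSE is (essentially) zero because the target is noise-free and linear; with noise or a misspecified model, the test-sample loss is what reveals overfitting.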
More complex forms of the unknown target function f(X) naturally call for bigger training samples N' in order to obtain better representations/approximations \hat{f}(X). However, this comes at the expense of increasing the chances of \hat{f}(X) 'overfitting'. Overfitting happens when a model that represents the training data very well represents unseen data very poorly in the 'prediction/test phase'. (Yet another goal of supervised learning is interpretation, as opposed to prediction: there, interest lies in the structural form of the approximating function obtained from (4) in order to understand the mechanism that produced the data. The identification of the input variables that are most relevant to explain the variation in output, or the nature of that dependence and how it changes with changes in other inputs, are instead the primary objectives, and the aim is to understand how the system works.) The reason for overfitting lies in the 'curse of dimensionality' that the complexity of the unknown target function creates: as the number of input variables P upon which f(X) depends increases, the necessary sample size for accurately approximating f(X) grows exponentially (the number of points per input dimension scales only as N^{1/P}), rendering all training samples very sparsely populated. Note that this is the case even if we set ε = 0 in (3), converting (4) into an interpolation problem; i.e., reducing the MSPE to an MSE-only problem still requires a large enough training sample for the approximation to be accurate. An intuitive way to understand why is as follows. Suppose that we have a sample of size N with which we are trying to approximate a function of N variables, f(x_1, ..., x_N). If Kolmogorov's conjecture were right, then we could instead approximate a degree-N polynomial function of just one variable, say x_1: f(x_1, ..., x_N) = g(x_1) = ∑_{i=1}^N a_i x_1^i, and problem (4) would reduce to a parametric least squares (OLS) solution. Because there are N normal equations (one for each coefficient) in N unknowns (sample observations), we would obtain a unique solution \hat{a}, corresponding to a 'perfect fit' of the sample/training data. If one more sample observation were then collected, {y_{N+1}, x_1^{N+1}}, and we wanted to test the predictive ability of \hat{f}(X) = ∑_{i=1}^N \hat{a}_i x_1^i, then almost with probability one the prediction error [y_{N+1} − \hat{y}_{N+1}]^2 will be very big, indicating 'overfitting'. In big data problems, where P > N (or is close to N), overfitting means that the approximation obtained from (4) will almost surely perform poorly on unseen data, i.e., in (6).
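The overfitting mechanism in the polynomial example can be reproduced numerically. The sketch below (Python; the noisy sample values are arbitrary) fits the unique degree-(N-1) polynomial exactly through N training points via Lagrange interpolation, achieving a perfect in-sample fit, and then evaluates it on one new observation, where the squared prediction error explodes:

```python
def lagrange_interp(points, x):
    """Evaluate the unique degree-(N-1) polynomial through N points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# training sample: noisy observations of the flat target f(x) = 0
points = [(0, 0.0), (1, 0.5), (2, -0.4), (3, 0.3), (4, -0.2), (5, 0.1)]
in_sample_errors = [(lagrange_interp(points, xi) - yi)**2 for xi, yi in points]

# perfect in-sample fit, but a huge error on one new observation at x = 6
new_x, new_y = 6, 0.0
pred_error = (lagrange_interp(points, new_x) - new_y)**2
```

The interpolant passes exactly through every training point, yet its extrapolated value at x = 6 is far from the target, so the out-of-sample squared error is orders of magnitude larger than the (zero) training error.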
Because N' is finite, problem (4) does not have a unique solution. Therefore, one must restrict the set of admissible functions to a smaller set G than the set of all possible functions g(X). To see the effect of restricting the class of admissible functions in (4), denote by f*(X) ∈ arg min_g E_X E_ε L[y, g(X)] and by f*_G(X) ∈ arg min_{g∈G} E_X E_ε L[y, g(X)] the best approximations in the unrestricted and restricted classes of functions, respectively, both in terms of out-of-sample performance. The difference in out-of-sample performance between the solution \hat{f}(X) from (4) and f*(X) (the 'excess test error' E) can then be decomposed as follows:

E = {E[\hat{f}] − E[f*_G]} + {E[f*_G] − E[f*]} = estimation error + approximation error.
The approximation error increases the more restrictive the class of functions G is, unless the true unknown target function f(X) happens to belong to G, in which case f*_G(X) = f*(X). The estimation error depends on how good the algorithm/approximation \hat{f}(X) is, as well as on how well the selected class of functions G can represent the complexity of the unknown target function f(X).
'Universal approximators' for the class of all continuous target functions f(X) are classes of functions G = {g(X) : g(X) = ∑_{z=1}^Z a_z b(X|γ_z), γ_z ∈ R^q} that could exactly represent f(X) if the sample size were not finite, i.e., f(X) = ∑_{z=1}^∞ a*_z b(X|γ_z) for some set of expansion coefficient values {a*_z}_{z=1}^∞, and that nonetheless approximate well with a small number Z of coefficients. Therefore, universal approximators minimize both the approximation error and the estimation error, minimizing the out-of-sample performance difference E between the solution from (4) and f*(X): if the training sample size were infinite, \hat{f}(X) would converge to f*_G(X) = f*(X), and therefore E would converge to zero. Choosing Z then corresponds to 'model selection': as entries {a_z}_{z=1}^Z are added, the approximation is able to better fit the training data, increasing the variance component of (6) but decreasing the bias. The bias decreases because adding entries enlarges the function space spanned by the approximation \hat{f}(X). With a finite sample size, the goal is to choose a small Z that keeps both the variance and the bias small, so that (6) can be expected to remain small.
Examples of function classes that are universal approximators, beyond feed-forward neural networks (described below), are radial basis functions, tensor product methods, and regression trees. Regression trees and their extension, random forests, are 'tree-structured' methods that are commonly used for flexibly estimating regression functions where out-of-sample performance is important. 'Tree-structured' methods have dictionaries of the form {1_{X∈R}}_R, where 1_{·} is an indicator function and R represents subregions of the space of all possible values of X ∈ R^P, R ⊆ R^P. The most common example is 1_{X∈R} = ∏_{p=1}^P 1_{u_p ≤ x_p ≤ v_p}, with the 2P coefficients {u_p, v_p}_{p=1}^P representing the respective lower and upper limits of the region (hyper-rectangle) on each input x_p axis. Usually, only Z disjoint regions are chosen, {R_z}_{z=1}^Z, so that X ∈ R_z implies \hat{f}(X) = a_z, meaning that inputs X in the same region have the same 'approximation' value a_z (with an obvious abuse of notation, but with a similar interpretation). Recursive partitioning tree-structured methods are also universal approximators in the sense defined previously. Choosing the optimal number of regions Z is a formidable combinatorial optimization problem, but recursive partitioning provides an approximate solution when employing greedy optimization strategies. This effectively results in sequentially splitting the initial sample {y_i, X_i}_{i=1}^{N'}, starting with the single covariate x_p that minimizes the mean squared error of the resulting subsamples (or leaves). Considering one different covariate at a time, the mean squared error is therefore sequentially reduced. However, too many subsamples (a very deep tree) would correspond to a very large Z, which risks overfitting. Therefore, in practice, a very deep tree is estimated and then pruned (or regularized) to a sparser tree, using cross-validation to select the optimal depth.
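One greedy step of the recursive partitioning just described can be sketched as follows (illustrative Python; a single covariate and a single split, with each leaf predicting its own mean):

```python
def best_split(sample):
    """One greedy step of recursive partitioning: scan candidate split points
    on a single covariate and keep the cut minimizing the total squared error
    of the two resulting leaves (each leaf predicts its own mean)."""
    def sse(ys):
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum((y - m)**2 for y in ys)

    xs = sorted({x for x, _ in sample})
    best_cut, best_err = None, float("inf")
    for cut in xs[1:]:                      # split as {x < cut} vs {x >= cut}
        left = [y for x, y in sample if x < cut]
        right = [y for x, y in sample if x >= cut]
        err = sse(left) + sse(right)
        if err < best_err:
            best_cut, best_err = cut, err
    return best_cut, best_err

# a step-function target is recovered exactly by a single split
sample = [(0, 1.0), (1, 1.0), (2, 1.0), (3, 5.0), (4, 5.0)]
cut, err = best_split(sample)
```

A full tree applies this step recursively within each leaf; pruning then removes splits that do not improve cross-validated error.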

Regularization Methods
In general, the choice of the set of admissible functions G is based on considerations outside the data, and it is usually done by the choice of a learning method. (The classes of functions g(X) = ∑_{m=1}^M a_m b(X|γ_m), γ_m ∈ R^q, are commonly known as 'dictionaries'; the choice of a learning method selects a particular dictionary. Examples of dictionaries that are universal approximators are feed-forward neural networks, radial basis functions, recursive partitioning tree-structured methods, and tensor product methods. See Friedman (1994) or Athey and Imbens (2019) for further details.) Choosing a learning method can be modeled as adding a penalty term λΩ[g(X)] to restrict solutions to (4):

\hat{f}(X; λ) ∈ arg min_g (1/N') ∑_{i=1}^{N'} L[y_i, g(X_i)] + λΩ[g(X)],     (7)

where λ (the 'regularization parameter') modulates the strength of the penalty functional Ω[·] over all possible functions g(X). The choice of a penalty functional is made on the basis of 'outside the data' information about the unknown target f(X), e.g., on the basis of a prior over the class of models g(X), Pr[g(X)]. A natural choice for \hat{f}(X) would then be the function that is most probable given the data,

\hat{f}(X) ∈ arg max_g Pr[g(X) | {y_i, X_i}_{i=1}^{N'}],     (9)

which is known as the maximum a posteriori probability (MAP) estimate. According to Bayes' theorem, the probability of a model given the training data is proportional to the likelihood that the training data have been generated by the model times the probability of the model: Pr[g(X) | {y_i, X_i}] ∝ Pr[{y_i, X_i} | g(X)] Pr[g(X)]. Substituting this expression into (9), taking logs, and discarding terms not involving g(X) yields

\hat{f}(X) ∈ arg min_g {−log Pr[{y_i, X_i} | g(X)] − log Pr[g(X)]},
which coincides with (7) if L(·, ·) is the quadratic loss function and λΩ[g(X)] = −2σ² log Pr[g(X)]. The quantity λΩ[g(X)] naturally captures that reductions in the noise variance σ² lead to increasing weight on the training-data part Pr[{y_i, X_i} | g(X)] in determining the approximation \hat{f}(X), relative to the prior Pr[g(X)]. For example, restricting g(X) ∈ G, as above, can be achieved through the penalty, in which case (4) reduces to parameter learning, \hat{f}(X; λ) = g(X; \hat{θ}, λ), where θ = {a_z, γ_z}_{z=1}^Z. Additional parametric or non-parametric penalty terms can be added to (7), with the result of further restricting the solutions to the approximation subspace of G that respects that particular penalty. By the addition of a penalty term (or 'regularization'), the aim is to improve the out-of-sample performance of the approximation \hat{f}(X; λ), reducing its chances to 'overfit', without affecting its training error. Non-parametric penalties can take the form of the norm of the gradient of the functions in the class, with larger values of λ penalizing functions that oscillate more (i.e., that are 'less smooth'). Parametric penalties would, instead, penalize functions g(X) not in a particular parametric family, converting (4) into an equivalent parameter estimation problem:

\hat{θ}_λ ∈ arg min_θ (1/N') ∑_{i=1}^{N'} L[y_i, g(X_i; θ)] + λΩ[θ],     (10)

where the penalty function Ω[θ] admits different forms that are widely used in the recent ML literature: (i) 'ridge' (L2 regularization): Ω[θ] = ∑_{j=1}^q θ_j², penalizing approximations with large parameter values; (ii) 'subset selection': Ω[θ] = ∑_{j=1}^q 1_{θ_j ≠ 0}, which penalizes approximations with a large number of parameters (requiring combinatorial optimization); (iii) 'bridge': Ω[θ] = ∑_{j=1}^q |θ_j|^v, which coincides with 'ridge' when v = 2 and is a continuous approximation of the subset selection penalty as v → 0; when v = 1, L1 regularization obtains, akin to the 'least absolute shrinkage and selection operator', popularly known as LASSO; and (iv) 'weight decay': Ω[θ] = ∑_{j=1}^q (θ_j/w)² / (1 + (θ_j/w)²), which approaches 'ridge' as w → ∞ and subset selection as w → 0.
Smaller values of v and w privilege approximations with a small number of parameters. (v) '(Stochastic) gradient descent': when searching for the value \hat{θ}_λ that minimizes (10), a high value of λ privileges 'τ-paths' θ_{τ+1} = θ_τ − ε ∇_θ J[θ_τ] that reach \hat{θ}_λ in the least possible number of steps τ, each of which depends on the step size ε, or 'learning rate'. Because ε governs the strength of the gradient ∇_θ J[θ_τ] in the updating of θ_τ, choosing λ is equivalent to the choice of ε, a free hyperparameter to be 'fine-tuned' or optimized during training.
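The contrast between the L2 and L1 penalties above can be illustrated in the textbook special case of a single coefficient with a unit-variance regressor, where both penalized solutions are explicit transformations of the OLS estimate (an illustrative Python sketch, not a general solver): ridge shrinks proportionally, while LASSO soft-thresholds and can set the coefficient exactly to zero.

```python
def soft_threshold(theta_ols, lam):
    """LASSO (L1) solution for a single coefficient with a unit-variance
    regressor: shrink the OLS estimate toward zero by lam and set small
    coefficients exactly to zero."""
    if theta_ols > lam:
        return theta_ols - lam
    if theta_ols < -lam:
        return theta_ols + lam
    return 0.0

def ridge_shrink(theta_ols, lam):
    """Ridge (L2) solution in the same setting: proportional shrinkage,
    never exactly zero for a nonzero OLS estimate."""
    return theta_ols / (1.0 + lam)
```

This sparsity-inducing behavior of the L1 penalty is what makes the lasso a variable selection device as well as a shrinkage device.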
When, instead of using all available N' observations in the training sample, we randomly subsample from {y_i, X_i} and form a 'minibatch', the resulting gradient is called a 'stochastic gradient descent (SGD)' gradient. SGD can be combined with 'momentum', where the size of the updating step depends on how large an exponentially decaying moving average of past gradients is:

v_{τ+1} = α v_τ − ε ∇_θ J[θ_τ],   θ_{τ+1} = θ_τ + v_{τ+1}.

Momentum then adds another hyperparameter α, with larger values of α ∈ (0, 1) corresponding to a higher reliance on previous gradients, leading to a larger step size when updating. Current optimization methods, like AdaGrad, RMSProp, or Adam, supplement SGD (with or without 'momentum') to allow the learning rate to 'adapt', shrinking or expanding according to the entire history of gradients. ('Early stopping' the number of training iterations ('epochs') over the learning sample, once the out-of-sample performance of the approximation starts to deteriorate, can be shown to be equivalent to L2 regularization (Goodfellow et al. 2016). Similarly, 'dropout', when applied to neural network (NN) methods, has been shown to be equivalent to L2 regularization with a penalty strength parameter λ that is inversely proportional to the precision of the prior of a deep Gaussian process characterizing the NN parameters (Gal and Ghahramani 2016).)
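The momentum update can be sketched on a simple quadratic objective (illustrative Python; the learning rate, decay α, objective, and step count are arbitrary choices):

```python
def sgd_momentum(grad, theta0, lr=0.1, alpha=0.9, steps=300):
    """Gradient descent with momentum: the step is an exponentially
    decaying moving average of past gradients, weighted by alpha."""
    theta, v = theta0, 0.0
    for _ in range(steps):
        v = alpha * v - lr * grad(theta)   # accumulate past gradients
        theta = theta + v                  # move along the averaged direction
    return theta

# minimize J(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3)
theta_star = sgd_momentum(lambda t: 2.0 * (t - 3.0), theta0=0.0)
```

With a minibatch, `grad` would be evaluated on a random subsample at each step, turning this into SGD with momentum.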
For example, Adam combines RMSProp and momentum. Momentum is incorporated directly through exponential decay rates ρ1, ρ2 ∈ [0, 1) for the first two moment estimates, s1 and s2, of the gradient ∇_θ J[θ_τ], initialized at the origin, s1 = s2 = 0:

s1 ← ρ1 s1 + (1 − ρ1) ∇_θ J[θ_τ],   s2 ← ρ2 s2 + (1 − ρ2) ∇_θ J[θ_τ] ⊙ ∇_θ J[θ_τ].

Subsequently, the bias-corrected updates of the first and second moments, \hat{s}1 = s1/(1 − ρ1^τ) and \hat{s}2 = s2/(1 − ρ2^τ), are used to update the parameters:

θ_{τ+1} = θ_τ − ε \hat{s}1 / (√\hat{s}2 + δ),

for a small constant δ. An alternative optimization method exponentially decays the average of the squared gradient, so that the updating can converge even faster. For example, RMSProp uses an exponentially decaying average with decay rate ρ ∈ [0, 1) that discards history from the extreme past and employs the squared gradient, initialized at the origin, s = 0, with update s ← ρ s + (1 − ρ) ∇_θ J[θ_τ] ⊙ ∇_θ J[θ_τ]. Back-propagation is the method for computing the gradient of the cost function in (10), which itself is a function of the gradients of the loss function and penalty terms. Those gradients are computed 'backwards', as dictated by the chain rule of calculus, since they are compositions of functions of the parameters θ. Once those gradients are computed, SGD or other optimization algorithms exploit them to perform the learning/approximation.
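The Adam recursion can likewise be sketched in a few lines (illustrative Python for a scalar parameter; the hyperparameter values follow common defaults and the quadratic objective is arbitrary):

```python
import math

def adam(grad, theta0, lr=0.01, rho1=0.9, rho2=0.999, delta=1e-8, steps=1000):
    """Adam: keep exponentially decaying averages of the gradient (s1) and
    its square (s2), correct their initialization bias, and scale the step
    by the square root of the second-moment estimate."""
    theta, s1, s2 = theta0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(theta)
        s1 = rho1 * s1 + (1 - rho1) * g            # first moment
        s2 = rho2 * s2 + (1 - rho2) * g * g        # second moment
        s1_hat = s1 / (1 - rho1**t)                # bias correction
        s2_hat = s2 / (1 - rho2**t)                # bias correction
        theta = theta - lr * s1_hat / (math.sqrt(s2_hat) + delta)
    return theta

# minimize J(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3)
theta_star = adam(lambda t: 2.0 * (t - 3.0), theta0=0.0)
```

Dividing by the root of the second-moment estimate makes the effective step size roughly invariant to the scale of the gradient, which is the 'adaptive' behavior described above.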
Finally, 'bagging' ('bootstrap aggregating') is also a powerful regularization method that can combine parametric and non-parametric penalties. It involves creating B different datasets from the training sample of size N' by sampling with replacement N_B = N' observations, and solving (7) on each of the B different training datasets to obtain \hat{f}_b(X; λ), b = 1, ..., B. The B-ensemble predictor is then (1/B) ∑_{b=1}^B \hat{f}_b(X; λ). Because sampling is done with replacement, each dataset b, for b = 1, ..., B, is with high probability missing some of the observations from the original dataset, which results in different approximations \hat{f}_b(X; λ) that make different errors in the test sample. Those errors will tend to cancel out if sampling is random, improving the out-of-sample performance of the B-ensemble model relative to its members.
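A minimal sketch of bagging (illustrative Python, using the sample mean as the 'model' fitted on each bootstrap replicate):

```python
import random

def bagged_predict(sample, n_models=200, seed=0):
    """Bagging with the simplest possible model: each bootstrap replicate b
    predicts the mean of its resampled dataset, and the ensemble prediction
    averages the B replicates."""
    rng = random.Random(seed)
    n = len(sample)
    preds = []
    for _ in range(n_models):
        # resample with replacement: each replicate misses some observations
        boot = [sample[rng.randrange(n)] for _ in range(n)]
        preds.append(sum(boot) / n)
    return sum(preds) / n_models

sample = [1.0, 2.0, 3.0, 4.0, 5.0]
ensemble = bagged_predict(sample)
```

With a richer, higher-variance base learner (such as a deep regression tree), the averaging step is where the variance reduction, and hence the regularization effect, comes from; this is the mechanism behind random forests.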
How is λ determined? Because choosing the strength of the penalty λ determines the solution approximation \hat{f}(X; λ) to (7) (and hence (10)), this choice is referred to as 'model selection'. Ideally, one would like to choose the λ that maximizes the out-of-sample performance of \hat{f}(X; λ):

\hat{λ} ∈ arg min_λ (1/N'') ∑_{i=1}^{N''} L[y_i, \hat{f}(X_i; λ)].     (11)

However, different 'splittings' of the available sample into complementary learning and test subsamples, N = N' + N'', are going to provide different values of \hat{λ}. To avoid the computational burden associated with computing \hat{λ} for all possible assignments (N choose N'') and then minimizing the average over these replications, this process is instead approximated by dividing the available sample of size N into K disjoint subsamples of approximately equal size N/K. Each subsample, denoted N_k for k = 1, ..., K, is used as the 'test sample' in (11), while the complement sample N − N_k is used as the training sample in (7) to fit the model. By doing so, we obtain K different approximations \hat{f}_k(X; λ), each of which is evaluated once on the test sample N_k. Averaging the results over the K folds in (11) and solving for λ returns \hat{λ}_K, as determined by 'K-fold' cross-validation.
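K-fold cross-validation can be sketched as follows (illustrative Python; the shrinkage 'fit' rule and the noise-free toy sample are our own constructions):

```python
def k_fold_cv(sample, fit, loss, lambdas, k=5):
    """K-fold cross-validation: split the sample into K disjoint folds,
    train on K-1 folds, evaluate on the held-out fold, average the losses,
    and pick the penalty strength with the smallest average loss."""
    folds = [sample[i::k] for i in range(k)]        # K disjoint subsamples
    scores = {}
    for lam in lambdas:
        total = 0.0
        for i in range(k):
            test = folds[i]
            train = [obs for j in range(k) if j != i for obs in folds[j]]
            model = fit(train, lam)
            total += sum(loss(model, obs) for obs in test) / len(test)
        scores[lam] = total / k
    return min(scores, key=scores.get)

# toy example: 'fit' is a shrunken mean, loss is squared error
fit = lambda train, lam: sum(y for _, y in train) / (len(train) * (1 + lam))
loss = lambda m, obs: (obs[1] - m) ** 2
sample = [(x, 5.0) for x in range(20)]              # constant target, no noise
best_lam = k_fold_cv(sample, fit, loss, lambdas=[0.0, 0.5, 1.0])
```

With a noise-free constant target, shrinkage only hurts, so cross-validation correctly selects the unpenalized model; with noisy data, an intermediate λ is typically chosen.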

Neural Networks for Prediction
This section analyzes artificial neural networks, arguably the most powerful modeling device in machine learning and the preferred approach for complex machine learning problems, such as computer vision, natural language processing, pattern recognition, and biomedical diagnosis (see Schmidhuber (2015) and LeCun et al. (2015) for overviews of the topic). Artificial neural networks are divided into shallow and deep networks, depending on the number of hidden layers used to predict the output. The flexibility of neural networks with several layers draws from their ability to incorporate nonlinear interactions between the predictors; such networks are denominated deep neural networks or, more generally, deep learning methods. The complexity of these methods entails, by construction, a lack of interpretability and transparency in disentangling the relationship between the predictors and the output.
Our analysis focuses on traditional feedforward networks. These consist of an input layer of predictor variables, one or more hidden layers that interact and nonlinearly transform the predictors, and an output layer that aggregates the hidden layers into an ultimate outcome prediction. Deep learning builds on feedforward neural networks (NNs) or multi-layer perceptrons (MLPs) in order to learn unknown target functions of increasing complexity. MLPs are then compositions of single-layer/shallow NNs, each hidden unit of which (or 'neuron') is fully connected to the hidden units of the subsequent layer, to capture the fact that information flows forward from the inputs X to the output y. Thus, artificial neural networks, or MLPs, are similar to biological neural networks: they are collections of connected units called neurons. An artificial neuron receives inputs from other neurons, computes the weighted sum of the inputs, and maps the sum via an activation function to the neurons in the next layer, and so on until it reaches the last layer or output. Accordingly, the network is free of cycles or feedback connections that pass information backward.
Single-layer/shallow NNs are universal approximators (Hornik 1991; Cybenko 1989) with dictionaries of functions of the form {b(X|γ_1)}, where s(·) is a vector-valued 'activation function' (i.e., applied unit-wise) mapping the output from the single hidden layer, and θ_1 = (w_2, b_2; b_1, W_1) ≡ (a; γ_1). Popular choices for the activation function include: (i) rectified linear units (ReLu), s(h) = max{0, h}; and (ii) maxout units, where the number of hidden units z in layer l, Z_l, is divided into groups of k values, {(z_1, ..., z_k), ..., (z_{Z_l−k+1}, ..., z_{Z_l})}, with G_i = {(i − 1)k + 1, ..., ik} the set of indices into the inputs for group i, each group passing forward only its maximum. All of the activation functions s(·) have in common that a certain threshold must be overcome for information to be passed forward, much like neurons in the human brain, which need to receive a certain amount of stimuli in order to be activated. The threshold hurdle creates a nonlinearity that allows artificial NNs to learn nonlinear and non-convex unknown target functions f(X).
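The two activation functions named above can be sketched directly; the function names are ours.

```python
import numpy as np

def relu(h):
    """ReLu: s(h) = max{0, h}; the zero threshold gates information flow."""
    return np.maximum(0.0, h)

def maxout(h, k):
    """Maxout: the layer's units are split into groups of k and each group
    passes forward only its maximum."""
    return np.asarray(h).reshape(-1, k).max(axis=1)
```

Both zero out or discard part of the pre-activation signal, which is the thresholding behavior that creates the nonlinearity discussed in the text.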
Single-layer NNs are also known as 'three-layer' networks, where the inputs X form the first layer. The second or 'hidden' layer is h_1 = s(W_1 X + b_1), parameterized by (b_1, W_1, s(·)), and the third corresponds to the output layer, y = w_2 s(h_1) + b_2 ∈ R. A deep NN (DNN) is constructed by adding hidden layers, each subsequent one taking as inputs the output of the previous ones. For example, a 'four-layer' NN that adds one hidden layer to a 'three-layer' NN (or shallow/single-layer NN), rather than simply taking a linear combination of the dictionary entries of single-layered NNs, {b(X|γ_1)}, results in a richer collection of functions represented by the composed dictionary. Adding hidden layers then adds parameters, increasing the variance and reducing the bias. The overall effect on performance (i.e., on generalization/test error) will depend on how well the resulting dictionary matches the unknown target function f(X). Additionally, although why over-parameterized DNNs perform well in terms of generalization/test error remains an open question in the deep learning literature, original contributions due to Pascanu et al. (2013) and Montufar et al. (2014) show that deeper ReLu architectures have more flexibility to express the behavior of the unknown target function, relative to equally sized single-layer/shallow architectures. 8 MLPs that allow information to flow backwards are called recurrent neural networks and they are discussed in Goodfellow et al. (2016).
An incipient strand of the literature (e.g., Arora et al. 2019; Allen-Zhu et al. 2019), building on the Rademacher complexity of both the function class being approximated and the dataset, shows that the dictionaries of deeper architectures can better capture interactions between the units of different layers through the compositions of functions that they can represent.
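The layer-by-layer construction described above can be sketched as a forward pass; this is a minimal illustration with names of our choosing, not the paper's implementation.

```python
import numpy as np

def dnn_forward(x, weights, biases, s=lambda h: np.maximum(0.0, h)):
    """Forward pass of a feedforward NN: each hidden layer applies an affine
    map followed by the activation s(.); the final layer is a linear readout."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):   # hidden layers: h_l = s(W_l h_{l-1} + b_l)
        h = s(W @ h + b)
    return weights[-1] @ h + biases[-1]           # linear output layer
```

With a single hidden layer this reproduces the 'three-layer' network y = w_2 s(W_1 X + b_1) + b_2; appending further (W, b) pairs adds hidden layers exactly as described.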
Generally, a DNN approximation f(·) : R^P → R of size Z = ∑_{l=1}^L Z_l, with L ∈ N hidden layers and Z_l ∈ N nodes per layer l, composes affine maps and activations layer by layer, where s(·) : R^{Z_{L−1}} → R^{Z_L} is the vector-valued activation function that maps the output from the previous hidden layer, and the architecture hyperparameters [Z, L, {Z_l}_{l=1}^L] and training hyperparameters [ε, λ, α] are to be learned and/or 'fine tuned' by the optimization algorithm. Approximating the unknown target function f(X) with a DNN is then equivalent to parameter estimation, where it is standard practice to 'cross-validate' the choice of hyperparameters [Z, L, {Z_l}_{l=1}^L] and [ε, λ, α] before estimating the parameters that characterize the restricted class of functions/models represented by the dictionary {b(X|γ_L) : γ_L = (b_1, ..., b_L, W_1, ..., W_L)}, augmented by the output layer weights and bias, (w_{L+1}, b_{L+1}). In deep learning, standard choices are: (i) a cross-entropy cost/loss function L[·, ·]; (ii) a ReLu activation function s(·), which naturally leads to sparse settings whereby a large portion of hidden units are not activated, thus having zero output (LeCun et al. 2015); (iii) SGD, usually combined with momentum α, as the optimization method; and (iv) a network architecture size, depth, and nodes per layer, [Z, L, {Z_l}_{l=1}^L], as well as a learning rate ε, that depend on the characteristics of the dataset {y_i, X_i}_{i=1}^N. Performance is then assessed on the test sample. In practice, 'tuning' or optimizing the hyperparameters is a daunting task in terms of processing time and computational capacity; e.g., merely determining the optimal depth (number of layers L) and nodes per layer ({Z_l}_{l=1}^L) for architectures of a given size Z involves solving an NP-hard combinatorial optimization problem, because L, {Z_l} ∈ N, i.e., are integer-valued (Judd 1990). Yet, in Calvo-Pardo et al.
(2020), we show that recent advances in combinatorial optimization software (RStudio) can be exploited to optimally allocate hidden units ({Z_l}_{l=1}^L) within ('width') and across ('depth', L) layers in deep architectures of a given size Z = ∑_{l=1}^L Z_l. Adopting the lower bound of Montufar et al. (2014) on the maximal number of linear regions that a ReLu DNN can approximate as the maximization criterion, we effectively solve (12) in two stages. The first-stage optimization (13) solves for the optimal depth L and the number of hidden units per layer (or optimal width, layer-wise) {Z_l}_{l=1}^L, given the network architecture size Z = ∑_{l=1}^L Z_l. 9 The outcome of the first stage is an optimal deep network architecture in the sense of maximizing the expressive power of the approximation f(X; Λ_L) within the restricted class of functions generated by the dictionary {b(X|γ_L) : γ_L = (b_1, ..., b_L, W_1, ..., W_L)}. The second-stage optimization (14) proceeds just as in (12), but takes as given the optimal values of the hyperparameters (L, {Z_l}_{l=1}^L) from the first stage (13). Rather than engaging in time- and computer-intensive 'fine tuning' of the whole set of hyperparameters [Z, L, {Z_l}_{l=1}^L; ε, λ, α] while training the deep architecture to estimate/learn θ_L, as in (12), proceeding in two stages considerably saves on runtime and memory while improving performance, as we show in the next section. Finally, notice that, the first stage being conditional on the architecture size, bigger and more complex datasets {y_i, X_i}_{i=1}^N will naturally summon architectures with more hidden units, Z.
Deep neural networks have become so powerful because of (i) the availability of the large datasets necessary to 'train' them, and because of rapid improvements in (ii) computational power 10 and (iii) optimization algorithms and software. Deep neural networks are characterized by a large number of parameters that need to be 'optimized' during 'training'; this is called 'fine-tuning' or 'optimally fitting a neural network' to the 'training sample'. The backpropagation optimization algorithm informs the machine of how it should change the internal parameters used to compute the representation in each layer from the representation in the previous layer. Software optimization methods (e.g., Adam, 9 The first-stage optimization (13) is a constrained combinatorial optimization problem in which {μ_l}_{l=1}^L ∈ R^L is the collection of L Lagrange multipliers associated with the L − 1 constraints Z_l ≥ P, l = 1, ..., L − 1, and with the constraint L > 0, because the constraint on the architecture size Z = ∑_{l=1}^L Z_l is incorporated into the maximand. Since L, {Z_l}_{l=1}^L ∈ N, we also solve in two stages to reduce the computational burden: in the first stage of (13), the hidden units are optimally allocated for a given depth L, while, in the second stage of (13), the optimal depth is sought for a given allocation of hidden units. 10 Particularly of graphics processing units (GPUs), suited to performing the linear algebra operations at the root of 'fitting' neural networks; e.g., Google DeepMind optimized a deep neural network using 176 GPUs for 40 days to beat the best human players in the game of Go.
Adagrad, RMSProp) that implement SGD or any of its variants allow for substantial gains in the time and computational power needed when training models with millions of parameters, and are nowadays often paired with step-size 'adaptive regularization'. It is also now standard practice to regularize while optimizing (e.g., via 'weight decay', 'dropout', or 'batch normalization') to prevent overfitting and improve the performance of DNNs 'out-of-sample'. 'Batch normalization' (Ioffe and Szegedy 2015; not to be mistaken for 'minibatch regularization') is a method of adaptive reparameterization that is best suited for training very deep models that involve the composition of several functions or layers. By normalizing the output of each layer before forwarding it as input to the next layer, the unwanted effect of many composed functions changing simultaneously is removed, allowing the gradient to update the parameters under the assumption that the other layers do not change. As a result, it allows the use of higher learning rates, ε, which are less sensitive to the initialization of parameters. Concretely, the normalization involves standardizing the output units h_zl in each layer l = 1, ..., L over a minibatch B, with δ ≈ 10^−8 set to prevent division by zero.
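The minibatch standardization step can be sketched as below; the optional rescale parameters `gamma` and `beta` are our addition, following common practice rather than the text.

```python
import numpy as np

def batch_norm(H, gamma=1.0, beta=0.0, delta=1e-8):
    """Normalize a layer's outputs over a minibatch (rows = examples,
    columns = units h_zl): zero mean and unit variance per unit, then an
    optional rescale; delta guards against division by zero."""
    mu = H.mean(axis=0)
    sd = np.sqrt(H.var(axis=0) + delta)
    return gamma * (H - mu) / sd + beta
```

Because each unit's output is re-centered and re-scaled before being forwarded, the next layer sees inputs on a stable scale regardless of how the earlier layers' parameters moved during the update.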
Another recent methodology introducing randomness into deep neural networks is 'dropout'. This method discards a small but random portion of the neurons during each training iteration to prevent neurons from co-adapting, providing a powerful regularization method (Srivastava et al. 2014). The intuition is that, since several neurons are likely to model the same nonlinear relationship simultaneously, discarding a random fraction of them forces each to perform well regardless of which other hidden units are in the model.
With dropout, each input and hidden unit z in layer l = 1, ..., L, h_zl, is pre-multiplied by a random variable r_zl ∼ F(r_zl), giving h̃_zl = r_zl · h_zl, ∀(z, l), prior to being fed forward to the activation function of the next layer, h_{z,l+1} = s_z(∑_{z=1}^{Z_l} w_{z,l+1} h̃_zl + b_{z,l+1}), ∀z = 1, ..., Z_{l+1}. For any layer l, r_l is then a vector of independent random variables, r_l = [r_1l, ..., r_{Z_l,l}] ∈ R^{Z_l}. Standard choices for the probability distribution F(r_l) are (i) the Normal, i.e., F(r_l) = N(1, I), or (ii) the Bernoulli, in which case each r_zl has probability p of being 1 (and 1 − p of being 0). The vector r_l is sampled and multiplied element-wise with the outputs of that layer, h_zl, to create the thinned outputs, h̃_zl, which are then used as input to the next layer. When this process is applied at each layer l = 1, ..., L, it amounts to sampling a sub-network from a larger network. In the ML literature, common choices for p are 0.8 for the input layer, l = 1, and 0.5 for the units in the hidden layers, l = 2, ..., L.
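The Bernoulli variant and the test-time weight scaling discussed next can be sketched as follows; function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5):
    """Bernoulli dropout: each unit output h_zl is kept with probability p
    (r_zl = 1) and zeroed otherwise, producing the thinned outputs fed to
    the next layer; applying this at every layer samples a sub-network."""
    r = rng.binomial(1, p, size=np.shape(h))
    return r * np.asarray(h)

def scale_test_weights(W, p=0.5):
    """At test time the trained weights are scaled down, W_l <- p * W_l."""
    return p * np.asarray(W)
```

Scaling by p at test time makes the deterministic network's expected pre-activations match those of the randomly thinned training network.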
During learning, the derivatives of the loss function are backpropagated through the sub-network. At test time, the weights are scaled down as W̃_l = pW_l, l = 1, ..., L, resulting in a DNN (without dropout) that allows for approximate inference. The approximation is actually exact for many classes of models without nonlinear hidden units, such as the softmax regression classifier, regression networks with conditionally normal outputs, or deep networks with hidden layers without nonlinearities. This efficient test-time procedure is an approximate model combination that (i) scales down the weights of the trained neural network, (ii) works well with other distributed-representation models, e.g., restricted Boltzmann machines, and (iii) acts as a regularizer. Beyond the MLPs discussed, an array of alternative architectures has been proposed, including convolutional and recurrent NNs, which target specific data structures, such as vision tasks and sequential data, respectively. See Goodfellow et al. (2016) for a detailed textbook treatment.

Uncertainty and Deep Learning
Neural networks are widely used in prediction tasks due to their unrivaled performance and flexibility in modeling complex unknown functions of the data. Although these methods provide accurate predictions, the development of tools for estimating the uncertainty around their predictions is still in its infancy. As explained in Hüllermeier and Waegeman (2020) and Pearce et al. (2018), out-of-sample pointwise accuracy is not enough: the predictions of deep neural network models need to be supported by measures of uncertainty that shed light on the reliability of the predictions. Recent literature in machine learning has focused on the construction of algorithms to measure the uncertainty around the predictions of neural network methods. The first subsection reviews methods for assessing the uncertainty regarding the model predictions.

Uncertainty in Model Prediction
Despite their unrivaled success in prediction and forecasting tasks, deep learning models struggle to convey the uncertainty or degree of statistical confidence/reliability associated with their forecasts. Some recent contributions in the ML literature have made progress in the provision of prediction intervals for the point forecasts of deep learning models trained with dropout. For example, Gal and Ghahramani (2016) show that a NN of arbitrary depth and nonlinearities, with dropout applied before every hidden layer and a parametric L_2 penalty, minimizes the Kullback-Leibler divergence between an approximate (variational) distribution q(θ), over matrices θ = (W_1, ..., W_L) with columns randomly set to zero, and the intractable posterior of a deep Gaussian process, p(θ|y; X). In the resulting objective, each term in the sum over N is approximated by Monte Carlo integration with a single sample θ_b ∼ q(θ), giving an unbiased estimate of log p(y_i|X_i; θ); in the second term, l denotes the prior length-scale and τ the model precision, i.e., p(y|X; θ) = N(ŷ(X; θ), τ^−1 I). Sampling B sets of vectors of realizations from the Bernoulli distribution, {[r_l^b]}_{b=1}^B with [r_l^b] = [r_zl^b]_{z=1}^{Z_l}, equivalent to the binary variables in the dropout case, gives {W_1^b, ..., W_L^b}_{b=1}^B, with which the first two moments of the predictive distribution p(y_i|X_i; θ) are estimated by moment-matching. The first moment, (1/B) ∑_{b=1}^B ŷ(X; W_1^b, ..., W_L^b), is known as Monte Carlo (MC) dropout and, in practice, corresponds to performing B stochastic forward passes through the NN and averaging the results (model averaging). The second moment equals the sample variance of the B stochastic forward passes plus the inverse model precision, providing a measure of the uncertainty attached to the deep NN point forecast.
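The moment-matching step can be sketched as follows; `stochastic_predict` stands in, hypothetically, for one forward pass through the trained network with dropout left on.

```python
import numpy as np

def mc_dropout_forecast(stochastic_predict, x, B=200, inv_tau=0.0):
    """MC dropout: B stochastic forward passes through the trained network
    with dropout active. The mean of the passes is the point forecast
    (model averaging); the sample variance plus the inverse model precision
    1/tau estimates the predictive uncertainty."""
    passes = np.array([stochastic_predict(x) for _ in range(B)])
    return passes.mean(axis=0), passes.var(axis=0) + inv_tau
```

Because each pass samples a different dropout mask, the spread of the B forecasts reflects the epistemic uncertainty of the network, while 1/τ accounts for the irreducible noise.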
Under the assumption that the approximation error is negligible, the predictive variance can be estimated using a consistent estimator of σ_e^2 under homoscedasticity of the error term; see also Smyl (2020) and Kendall and Gal (2017). An alternative approach to MC dropout for estimating the uncertainty about the predictions is to use bootstrap methods; see Tibshirani (1996). Bootstrap procedures provide a reliable way to obtain predictive intervals for the output variable. We proceed to explain how the bootstrap works in a DNN context. Let {y_i, X_i}_{i=1}^N be a sample of N observations of the output and the set of covariates. Applying the naive bootstrap proposed by Efron (1979) to this multivariate dataset, we generate a bootstrapped dataset {y_i*, X_i*}_{i=1}^N by sampling with replacement from the original dataset. Repeating this procedure B times yields B bootstrapped samples. Each bootstrap sample is fitted with a single neural network in order to obtain an empirical distribution of bootstrap predictions f(X^(b); θ^(b)), with θ^(b) the set of bootstrap parameters for b = 1, ..., B. In this context, a suitable bootstrap prediction interval for y_i at an α significance level is [q̂_{α/2}, q̂_{1−α/2}], with q̂_α the empirical α-quantile obtained from the bootstrap distribution of f(X_i; θ^(b)), for b = 1, ..., B.
Alternatively, under the assumption that the error is normally distributed, we can refine the empirical predictive interval using the critical value from the Normal distribution, with f(X_i; θ^(b)) the pointwise prediction of the model and z_{1−α/2} the critical value of a N(0, 1) distribution at the α significance level. Under homoscedasticity of the error term ε_i, the aleatoric uncertainty σ_e^2 is estimated from the test sample, with θ̂ the set of parameter estimates obtained from the original sample; the epistemic uncertainty is estimated from the bootstrap samples. This bootstrap prediction interval can be further refined by exploiting the average prediction in (18). In this case, the variance of the predictor is σ̂^2 = σ̂_θ^2(X_i) + σ̂_e^2, where σ̂_e^2 = (1/N) ∑_{i=1}^N (y_i − f̂(X_i))^2. This expression assumes that the covariance between the predictions from the different bootstrap samples is zero.
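Both interval constructions can be sketched as below; this is an illustrative implementation under the stated assumptions, with our own function signature.

```python
import numpy as np
from statistics import NormalDist

def bootstrap_interval(boot_preds, alpha=0.05, sigma2_e=None):
    """Prediction interval from B bootstrap predictions f(X_i; theta_b).
    Without sigma2_e: the empirical [q_{alpha/2}, q_{1-alpha/2}] interval.
    With sigma2_e: the normal refinement around the average prediction,
    variance = epistemic (across bootstraps) + aleatoric (sigma2_e)."""
    boot_preds = np.asarray(boot_preds)
    if sigma2_e is None:
        return (np.quantile(boot_preds, alpha / 2),
                np.quantile(boot_preds, 1 - alpha / 2))
    z = NormalDist().inv_cdf(1 - alpha / 2)       # z_{1-alpha/2}
    half = z * np.sqrt(boot_preds.var(ddof=1) + sigma2_e)
    m = boot_preds.mean()
    return m - half, m + half
```

The quantile version makes no distributional assumption; the normal version trades that robustness for a smoother interval that separates the two sources of uncertainty.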

Causal Inference and Interpretability
A recent area of interest in machine learning is the development of methods that add interpretability to the outputs of these models. A typical example is assessing the causal relationship between the input and output variables. Recent progress in this direction has been made by Belloni et al. (2014); Farrell (2015); Athey and Imbens (2019); and Farrell et al. (2019).
The goal of interpretation tasks is to use the structural form of the approximating function f(X) to try to understand the mechanism that produced the data {y_i, X_i}_{i=1}^N. Interest then lies in the identification of the input variables that are most relevant to the variation in the output, the nature of the dependence of the output on the most relevant inputs, or how that dependence changes with changes in the values of other inputs. Conducting valid inference rests on the amount of correct information learned about the system (i.e., minimizing the bias at the expense of increasing the variance), rather than just prediction accuracy (where some bias is optimally traded off against the resulting reduction in the variance). Although the two goals are often in conflict, which limits the inferential abilities of ML methods, this is not always the case. Athey and Imbens (2019) note that one way to perform valid (causal) inference would be to adapt the 'out-of-sample' performance objective in ML cost/loss functions to control for confounders or to discover treatment effect heterogeneity, as is standard in the model-based statistics and econometrics literatures. Allen-Zhu et al. (2019), within the ML literature, and Farrell (2015), within the econometrics literature, obtain nonasymptotic bounds; the latter also obtains conditions for valid two-step causal inference after first-step deep learning estimation. A survey of the differences between the two literatures and of recent progress in integrating them is provided in Athey and Imbens (2019).

Empirical Application
The aim of this section is to illustrate the suitability of machine learning methods for prediction in empirical finance. We follow a structure similar to that of Gu et al. (2020). These authors perform a comparative study of machine learning methods for the canonical problem of empirical asset pricing: measuring asset risk premiums. Gu et al. (2020) consider neural network models and regression trees with the aim of identifying the best-performing methods against more conventional methods based on linear regression models and ordinary least squares. In a similar spirit, we conduct a prediction exercise to compare the suitability of feedforward neural networks against conventional time series models for the conditional mean and volatility of asset returns.
We consider the monthly prices of the S&P, the Dow Jones, and the Nasdaq indices, from February 1972 until 30-07-2020. The out-of-sample forecasting accuracy is compared against a GARCH(1, 1) benchmark; the comparison is conducted in terms of out-of-sample mean squared prediction error (MSPE) and in terms of optimal portfolio allocation using out-of-sample Sharpe ratios. In order to obtain the out-of-sample forecasts, a fixed rolling window approach with 50 steps is applied. Thus, the period from 30-06-2016 (included) onwards is used for out-of-sample evaluation.
First, the asset prices are transformed into log returns, and standard stationarity tests are applied to the resulting series. We conduct the Dickey-Fuller test allowing for a maximum of 10 lags; the unit root null hypothesis is rejected at the 0.01 significance level in all cases. Additionally, we also perform the KPSS test and fail to reject the null hypothesis of stationarity in all cases at the 0.1 significance level.
Following the recent literature on deep learning and time series forecasting focused on enhancing the forecasting accuracy of DNNs through time series decomposition (see Smyl 2020; Hansen and Nelson 2003; Méndez-Jiménez and Cárdenas-Montes 2018, among others), the present paper couples the MC dropout approach of Gal and Ghahramani (2016) with time series decomposition. In this framework, one usually identifies a trend component T_t, a seasonal component Ψ_t, and a random component Ξ_t. Assuming additive decomposition, the time series can be modeled as X_t = Ξ_t + Ψ_t + T_t. Figure 1 reports an additive decomposition of the analyzed time series. 11
Based on the algorithm of Smyl (2020), the present paper fits and forecasts the trend component using an exponential smoothing model, and the random component using either a DNN or a GARCH(1,1) model. 12 When a GARCH(1,1) is fitted, the final forecast is the sum of the individual forecasts, Ξ̂_{t+1} + T̂_{t+1}. When the DNN model is considered, B stochastic forward passes are performed in order to forecast the random component Ξ_{t+1}; to each of these stochastic forward passes the forecasted trend T̂_{t+1} is added, and B point forecasts {X̂_{t+1}^b}_{b=1}^B are obtained. The point forecast of the log prices is the mean X̄_{t+1} over the B forward passes.
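The recombination step can be sketched as follows; the simple exponential smoothing below is an illustrative stand-in for the paper's exponential smoothing model, and the smoothing constant `alpha` is our assumption.

```python
import numpy as np

def ses_forecast(trend, alpha=0.3):
    """One-step-ahead simple exponential smoothing of the trend component
    (an illustrative stand-in for the exponential smoothing model)."""
    level = trend[0]
    for x in trend[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def recombine(trend_hat, random_passes):
    """Add the trend forecast to each of the B stochastic forward passes on
    the random component; the point forecast is the mean over the passes."""
    x_b = trend_hat + np.asarray(random_passes)   # B point forecasts
    return x_b.mean(), x_b
```

Keeping the B recombined paths, rather than only their mean, is what later allows prediction intervals to be read off the forecast distribution.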
For each time series analyzed, a neural network with three hidden layers of 50 nodes each is fitted, trained with the Adam optimizer with learning rate 0.001, an exponential decay rate for the first moment estimates (β_1) equal to 0.900, and an exponential decay rate for the second moment estimates (β_2) equal to 0.999. We also consider a dropout rate of 0.1 across all layers and 300 epochs. The input layer comprises the multivariate time series with its lagged values (up to k = 10). Additionally, in order to ensure the proper training of the network, the input data Ξ_{t−k}, for k = 1, ..., 10, are normalized to guarantee that the regressors have zero mean and unit standard deviation. 11 The seasonal component is not reported, as its magnitude was approximately 0, with the highest value observed being 3e−04. 12 As a robustness exercise, we also consider a GARCH(1,1) fitted directly to the time series X_t.
As mentioned earlier, a fixed rolling window is implemented in order to obtain 50 one-step-ahead forecasts. We first evaluate the performance of the proposed approach against a GARCH(1, 1) model in terms of MSPE using the Diebold-Mariano (DM) test (1995), with the null hypothesis of equal predictive accuracy against the alternative that the DNN approach is more accurate, with i = 1, 2, 3 indexing the three time series analyzed and j = 1, 2 defining the two alternative methodologies used to predict with a GARCH(1,1) model: a first methodology that decomposes the analyzed time series and combines the point forecast from a GARCH(1,1) with exponential smoothing, and a second methodology that does not decompose the time series and forecasts directly with a GARCH(1,1). The results of the predictive ability test are as follows. For the S&P index, the DM_1 test statistic is 2.3970 with a p-value of 0.0102, and the DM_2 test statistic is 2.5262 with a p-value of 0.0074. For the Dow Jones index, the DM_1 test statistic is 2.4729 with a p-value of 0.0084, and the DM_2 test statistic is 2.0435 with a p-value of 0.0232. For the Nasdaq index, the DM_1 test statistic is 2.7578 with a p-value of 0.0041, and the DM_2 test statistic is 3.9139 with a p-value of 0.0001. The reported p-values show the outperformance of the DNN approach against the GARCH(1,1) benchmarks.
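The DM statistic on squared-error losses can be sketched as below; this simplified version omits the serial-correlation correction, which is consistent with one-step-ahead (h = 1) forecasts but is an assumption of ours.

```python
import numpy as np

def dm_statistic(e1, e2):
    """Diebold-Mariano statistic on the squared-error loss differential
    d_t = e1_t**2 - e2_t**2; a positive value indicates that the first
    forecast is less accurate than the second."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))
```

Under the null of equal predictive accuracy the statistic is asymptotically N(0, 1), so the one-sided p-values reported above follow from comparing it with standard normal quantiles.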
In order to further validate the out-of-sample performance of the proposed approach, the present paper compares MC dropout against a GARCH(1,1) benchmark in terms of portfolio returns for a given optimal strategy. In particular, we consider a mean-variance portfolio, where ω ∈ R^3 is the vector of portfolio weights invested in the three indices considered, Σ̂ ∈ R^{3×3} is the estimated covariance matrix, x̂ ∈ R^3 is the vector of expected returns, and 1 ∈ R^3 is a vector of ones. The covariance matrix is built from diag(σ̂), the diagonal matrix of estimated standard deviations, and P̂, the correlation matrix. 13 The present paper considers two portfolio strategies: the mean-variance and minimum-variance portfolios (the latter is obtained by imposing x̂ = 0 in the constrained minimization in (22)). Holding the portfolio ω_t^strategy for a time ∆t gives the out-of-sample return at t + ∆t; imposing ∆t = 1, the rolling window approach used to evaluate the out-of-sample performance of a given strategy is as follows. At time t, the one-step-ahead conditional mean and volatility of the three stocks are forecasted using either a GARCH(1, 1) or a DNN model. We construct the dynamic covariance matrix Σ̂_{t+1} from estimates of the conditional variances and covariances over rolling windows. Based on the forecasted X̂_{t+1} and Σ̂_{t+1}, the constrained minimization in (22) is solved and the weights ω_t^strategy computed. The return of the portfolio at t + 1 is the weighted mean of the observed returns of the three stocks at t + 1, with weights ω_t^strategy: Υ_{t+1} = ω_t^strategy′ x_{t+1}. By implementing a fixed rolling window forecasting exercise, the above procedure is repeated 50 times to obtain 50 out-of-sample returns Υ_{t+1} from either the GARCH(1, 1) or the DNN model. This allows us to estimate the out-of-sample Sharpe ratio, with Ῡ_p the mean return of the portfolio, Υ_rf the risk-free rate (assumed
equal to 0), and σ̂_p the portfolio standard deviation. Figure 2 reports the cumulative returns of the four different strategies considered. One can notice that a portfolio strategy (either mean-variance or minimum-variance) based on DNN forecasts outperforms the corresponding strategy based on GARCH(1,1) forecasts. In particular, the annualized Sharpe ratios of the mean-variance and minimum-variance portfolios obtained from the DNN-forecasted return and volatility are 0.6777 and 0.7562, respectively; the annualized Sharpe ratios obtained from GARCH(1,1) forecasts are 0.2686 for the mean-variance and 0.3175 for the minimum-variance portfolio.
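The weight computation and Sharpe ratio can be sketched as follows; since the closed forms in (22) are garbled in this extraction, the unconstrained sketch below uses the standard mean-variance solution ω ∝ Σ^{-1}x̂ (normalized to sum to one) as an assumption, with a vector of ones recovering the minimum-variance portfolio.

```python
import numpy as np

def mv_weights(Sigma, x):
    """Mean-variance weights proportional to Sigma^{-1} x, normalized so
    that 1'omega = 1 (allows short selling, consistent with footnote 13);
    passing a vector of ones for x gives the minimum-variance portfolio."""
    raw = np.linalg.solve(Sigma, x)
    return raw / raw.sum()

def sharpe_ratio(returns, rf=0.0):
    """Out-of-sample Sharpe ratio: mean excess return over its standard
    deviation, with the risk-free rate assumed constant."""
    r = np.asarray(returns) - rf
    return r.mean() / r.std(ddof=1)
```

With uncorrelated assets of equal variance and equal expected returns, the weights collapse to the equally weighted portfolio, a useful sanity check for any implementation.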
The above results extend some of the empirical findings in Gu et al. (2020). These authors compare the forecasting performance of ReLu DNNs against linear models and tree-based approaches, also in terms of out-of-sample portfolio returns. Gu et al. (2020), based on the out-of-sample forecasts of the individual stock returns, construct a zero-net-investment portfolio, which buys the stocks with the highest and sells those with the lowest expected returns, and a value-weighted portfolio. By comparing the out-of-sample returns of the portfolio strategies exploiting the forecasts of the competing models, they show that portfolio strategies based on NN forecasts dominate those based on forecasts of both linear models and tree-based algorithms. While Gu et al. (2020) show that ReLu DNNs can be used to define portfolio strategies based only on the forecasted conditional means of the asset returns, the present paper, by considering the minimum-variance and mean-variance portfolios, improves upon their results, showing that optimal portfolio allocation strategies can also be constructed from ReLu DNNs' forecasted conditional volatilities, or from a combination of the conditional means and conditional volatilities of stock returns. 13 The constrained minimization in (22) allows for short selling but not for leverage.

Conclusions
We frame our paper within a recent literature on machine learning for empirical finance, such as Chinco et al. (2019) and Gu et al. (2020). In contrast to these studies, we present an overview of the procedures involved in prediction with machine learning models, with special emphasis on deep learning. We study suitable loss functions for classification and prediction, regularization methods, learning algorithms for model selection, and optimal architectures of deep neural networks. The paper also analyzes modern methods for constructing prediction intervals in deep neural networks and provides a gentle introduction to causal inference.
Empirically, we illustrate the relevance of machine learning methods for financial forecasting and portfolio allocation and assess their performance against traditional time series models using statistical and economic performance measures. In line with the empirical findings of Gu et al. (2020), we find overwhelming evidence in favor of machine learning techniques, in particular, deep learning methods.

Figure 1. Monthly returns and trend and random component decompositions.

Figure 2. Out-of-sample cumulative returns of the four portfolio strategies analyzed: in black the minimum-variance portfolio from the DNN, in green the minimum-variance portfolio obtained from GARCH forecasts, in blue the mean-variance portfolio obtained from a DNN, in red the mean-variance portfolio constructed from GARCH forecasts.

Common examples are L[ŷ_i, y_i] = |ŷ_i − y_i|, which, plugged into (4), corresponds to selecting the median of the conditional distribution: formally, the f(X) = Med_{y,X∼p̂_data}[y|X] that minimizes the Mean Absolute Error (MAE); or L[ŷ_i, y_i] = [ŷ_i − y_i]^2, which selects the f(X) = E_{y,X∼p̂_data}[y|X] that minimizes the Mean Squared Error (MSE) in (4). Alternatively stated, given a random sample of i = 1, ..., N realizations {y_i, X_i} constituting the empirical distribution p̂_data(y, X), the goal of supervised learning is to learn to predict y from X, estimating p(y|X). Letting p_model(y|X; θ) be a parametric family of probability distributions indexed by θ that estimates the unknown true p(y|X), supervised learning corresponds to finding the parameter vector θ that minimizes the dissimilarity/distance between p_model(y|X; θ) and p̂_data(y|X). Similarly, upper bounds on the maximal number of linear regions of a function approximated by a network architecture with rectified linear units of size Z have recently been characterized by Raghu et al.