# Are Markets Truly Efficient? Experiments Using Deep Learning Algorithms for Market Movement Prediction


## Abstract


## 1. Introduction

## 2. Neural Networks

#### 2.1. Working of a Neural Network

#### 2.2. Calibration via Backpropagation

## 3. Data Engineering

#### 3.1. Sourcing and Formatting the Data

#### 3.2. Feature Engineering

#### 3.3. Statistical Features of the Dataset

## 4. Experiments

#### 4.1. Look Back Period (L) and Prediction Function (M)

#### 4.2. Forecast Period (F)

#### 4.3. Training Set Size (N)

#### 4.4. Training and Validation

The models are trained on an `h2o` cluster; each episode of training and testing takes less than 30 s.

#### 4.5. Prediction Models

The deep learning model is implemented in the `R` programming language using the deep learning package from `h2o.ai` (see http://h2o.ai). Nesterov accelerated gradient descent is applied with mini-batch size equal to 1 by default (i.e., online stochastic gradient descent). We also implemented the model in TensorFlow using Python and obtained similar results. The model is a fully connected, feed-forward network with no CNN (convolutional neural net) or RNN (recurrent neural net) features. Standard backpropagation is applied.
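As a rough illustration, the forward pass of such a fully connected, feed-forward network for the binary up/down prediction can be sketched in a few lines of numpy. This is a minimal sketch, not the authors' exact `h2o` configuration: the hidden-layer sizes and ReLU activations here are assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """Forward pass through a fully connected feed-forward net.

    Hidden layers use ReLU; the single output node uses a sigmoid,
    giving the probability that the index moves up.
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)
    return sigmoid(a @ weights[-1] + biases[-1])

rng = np.random.default_rng(0)
# Input dimension = C * L = 19 percentiles x 30 lagged days;
# the two hidden layers of 64 nodes are assumed for illustration.
layer_sizes = [19 * 30, 64, 64, 1]
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

x = rng.normal(size=(5, 19 * 30))   # five example feature rows
p_up = forward(x, weights, biases)  # probabilities in (0, 1)
assert p_up.shape == (5, 1)
```

Training by backpropagation would then minimize the cross-entropy between `p_up` and the 0/1 movement labels.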

#### 4.6. Prediction Performance

- Forecast period average accuracy (FPAA): In this measure we calculate, for each forecast period, the ratio of the number of days on which our model predicts the index movement correctly (i.e., predicts that the index went up when it actually went up, and vice versa) to the total number of observations in that forecast period. We then take the ratio of the number of forecast periods in which the model was over 50% accurate to the total number of forecast periods. For example, suppose over three forecast periods our model was accurate 60%, 60% and 30% of the time. Then the FPAA would be 66.66%, as the model exceeded 50% accuracy in two of the three forecast periods. This gives a point-in-time measure of model performance and is labeled as average accuracy in each of the runs discussed below. It indicates whether identical trades over the forecast period F will lead to a gain on average. FPAA is a new measure that we developed for this paper.
- Overall accuracy (OA): In this measure we focus on daily accuracy. We take the individual day-level predictions, compare them to the actuals to generate a confusion matrix for each experiment, and calculate accuracy from that matrix. This gives a rolling historical measure of model performance: the fraction of correctly predicted days across all days in all experiments.
- Precision, Recall, F1, AUC: These are standard measures for prediction algorithms based on a binary confusion matrix. We recap these here, where $TP$ is the number of true positives; $FP$ is false positives; $TN$ is true negatives; and $FN$ is false negatives. Precision is $TP/(TP+FP)$; Recall is $TP/(TP+FN)$; and $F1$ is the harmonic mean of precision and recall, i.e., $2/(1/Precision+1/Recall)$. The area under the ROC curve (AUC) measures the tradeoff between the false positive rate ($FPR=FP/(FP+TN)$) and the true positive rate ($TPR=Recall$). AUC is one of the most comprehensive measures of prediction performance.
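The measures above follow directly from their definitions and can be sketched as below. The function and variable names are illustrative only; the FPAA example reproduces the three-period case from the text (accuracies of 60%, 60%, and 30% over periods of $F=10$ days).

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """OA: fraction of correctly predicted days across all days."""
    return float(np.mean(y_true == y_pred))

def fpaa(y_true, y_pred, F, threshold=0.5):
    """FPAA: fraction of forecast periods (blocks of F days) in which
    the daily accuracy exceeds `threshold`."""
    n_periods = len(y_true) // F
    hits = 0
    for k in range(n_periods):
        block = slice(k * F, (k + 1) * F)
        if np.mean(y_true[block] == y_pred[block]) > threshold:
            hits += 1
    return hits / n_periods

def precision_recall_f1(y_true, y_pred):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN), F1 = harmonic mean."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 / (1 / precision + 1 / recall)
    return precision, recall, f1

# Three forecast periods of F = 10 days with accuracies 60%, 60%, 30%:
y_true = np.ones(30, dtype=int)
y_pred = np.concatenate([
    np.repeat([1, 0], [6, 4]),   # 60% correct
    np.repeat([1, 0], [6, 4]),   # 60% correct
    np.repeat([1, 0], [3, 7]),   # 30% correct
])
assert abs(fpaa(y_true, y_pred, F=10) - 2 / 3) < 1e-12  # FPAA = 66.66%
```

AUC would additionally require the model's predicted probabilities, not just the 0/1 predictions, so it is omitted from this sketch.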

#### 4.7. Identifying the Sources of Market Efficiency

- In-sample: Using 5000 observations, starting from the first day of the sample, we train the neural net and then test it on four sets: (i) a randomly chosen set of 10 observations from the training sample; (ii) a randomly chosen set of 30 observations from the training sample; (iii) a randomly chosen set of 1000 observations from the training sample; and (iv) the entire training sample treated as the test sample. We then roll forward 20 days and repeat this experiment. This gives us 423 such experiments, each with four accuracy values (one from each test set) for “OA” and for “FPAA”.
- Stationary out-of-sample: Here we try to maintain stationarity in the sample by bifurcating a block of 5000 consecutive observations into separate training and test groups drawn from the same sample period: (i) randomly select 4990 observations for training and keep the remaining 10 for testing; (ii) randomly select 4970 observations for training and keep the remaining 30 for testing; (iii) randomly select 4000 observations for training and keep the remaining 1000 for testing. As in the previous case, we roll forward 20 days and repeat this experiment.
- Nonstationary out-of-sample: Starting from the first observation, pick a block of 5000 consecutive observations and train the deep learning model. Then, (i) test the model on the next 10 observations; (ii) test the model on the next 30 observations; (iii) test the model on the next 1000 observations. Roll forward 20 days and repeat this experiment.
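The rolling-window design shared by the three cases can be sketched as follows, shown here for the nonstationary variant (window sizes as given above; the function name is illustrative).

```python
def experiment_windows(n_obs, train_size=5000, step=20, test_size=10):
    """Index ranges for the rolling experiments: train on `train_size`
    consecutive observations, test on the next `test_size` observations
    (the nonstationary case), then roll the window forward `step` days."""
    windows = []
    start = 0
    while start + train_size + test_size <= n_obs:
        windows.append(((start, start + train_size),
                        (start + train_size, start + train_size + test_size)))
        start += step
    return windows

# Small example: 120 observations, train on 50, test on the next 10,
# roll forward by 20 -> four experiments.
w = experiment_windows(120, train_size=50, step=20, test_size=10)
assert len(w) == 4
assert w[1] == ((20, 70), (70, 80))
```

The in-sample and stationary out-of-sample cases would replace the "next `test_size` observations" slice with draws from within the training block itself.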

## 5. Empirical Results

## 6. Concluding Discussion

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References


**Figure 1.** A simple neural network with two hidden layers of two nodes each, four inputs, and a single output node.

**Figure 2.** Depiction of the source data, prices and returns. The top panel shows the sparse data on prices and the lower panel shows the data converted to returns, using the formula ${R}_{t}=({S}_{t}-{S}_{t-1})/{S}_{t-1}$, where $R,S$ stand for returns and stock prices, respectively. One row of data is lost when converting prices to returns. Note that for the index, we convert the data into the sign of the movement, where the value is 1 if the index moved up, else it is 0. The “NA” values show days for which there is no data observation.
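The conversion depicted in Figure 2 can be sketched as follows, using the standard return convention $R_t=(S_t-S_{t-1})/S_{t-1}$. This is a minimal sketch with illustrative names; the handling of the sparse “NA” days is omitted.

```python
import numpy as np

def to_returns(prices):
    """Daily returns R_t = (S_t - S_{t-1}) / S_{t-1}; one row is lost."""
    prices = np.asarray(prices, dtype=float)
    return (prices[1:] - prices[:-1]) / prices[:-1]

def to_labels(index_prices):
    """Index movement label: 1 if the index moved up, else 0."""
    return (to_returns(index_prices) > 0).astype(int)

# Example: 100 -> 110 is a +10% day (label 1); 110 -> 99 is a -10% day (label 0).
assert np.allclose(to_returns([100, 110, 99]), [0.10, -0.10])
assert list(to_labels([100, 110, 99])) == [1, 0]
```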

**Figure 3.** Depiction of the returns for each day in percentiles, where 500 stock returns are reduced to 19 values for the following percentiles: 1, 2, 3, 5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 85, 90, 95, 97, 98, and 99. Note that for the index, the values remain the sign of the movement, where the value is 1 if the index moved up, else it is 0, just as shown in Figure 2.

**Figure 4.** Data for prediction analysis after rearrangement. The label is the sign of the S&P movement, shown in the last column. The feature set comprises L sets of $C=19$ percentiles for L lagged days.
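The rearrangement shown in Figures 3 and 4, reducing each day's cross-section of stock returns to $C=19$ percentiles and stacking $L$ lagged days of percentiles per label, can be sketched as below. The names are illustrative and NaN handling is simplified.

```python
import numpy as np

PERCENTILES = [1, 2, 3, 5, 10, 15, 20, 30, 40, 50,
               60, 70, 80, 85, 90, 95, 97, 98, 99]

def daily_percentiles(returns_matrix):
    """Reduce each day's cross-section of stock returns to C = 19
    percentiles; NaNs mark days on which a stock did not trade."""
    # nanpercentile returns shape (19, n_days); transpose to (n_days, 19).
    return np.nanpercentile(returns_matrix, PERCENTILES, axis=1).T

def lagged_features(percentile_rows, labels, L=30):
    """Stack L lagged days of percentiles as the feature row for each label."""
    X, y = [], []
    for t in range(L, len(percentile_rows)):
        X.append(percentile_rows[t - L:t].ravel())
        y.append(labels[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
rets = rng.normal(0, 0.01, size=(40, 500))   # 40 days x 500 stocks
pct = daily_percentiles(rets)                # one 19-value row per day
X, y = lagged_features(pct, labels=rng.integers(0, 2, 40), L=30)
assert pct.shape == (40, 19)
assert X.shape == (10, 30 * 19)              # each row: L * C = 570 features
```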

**Figure 5.** The distribution of daily S&P 500 index returns from 1963 to 2016. The mean return is $0.00031$ and the standard deviation is $0.01$. The skewness and kurtosis are $-0.62$ and $20.68$, respectively.

**Figure 8.** Forecast period average accuracy, where the look back period is $L=30$ days and the forecast period is $F=10, 30$ days. Forecast period average accuracy is 64% for $F=10$ days and 77% for $F=30$ days, both well over 50%, as can be seen from the number of points that lie above the $0.5$ level on the y-axis. The overall accuracy is slightly over 50%: 53.9% for $F=10$ days and 52.7% for $F=30$ days.

**Figure 9.** Histograms of the results from all experiments, where the test data was in-sample (IS). The four plots are, reading left to right, top to bottom, the cases $F=10, 30, 1000, 5000$ days, respectively. The red line is at the 52.7% accuracy cutoff.

**Figure 10.** Histograms of the results from all experiments for the stationary out-of-sample experiment (case OS). The three plots are the cases $F=10, 30, 1000$ days, respectively. The red line is at the 52.7% accuracy cutoff.

**Figure 11.** Histograms of the results from all experiments for the nonstationary out-of-sample experiment (case NS). The three plots are the cases $F=10, 30, 1000$ days, respectively. The red line is at the 52.7% accuracy cutoff.

**Table 1.** Accuracy levels for three cases of experiments on stock market predictability. We report the overall accuracy (OA) and forecast period average accuracy (FPAA). The first number in each cell is the OA and the second is the FPAA, computed at a threshold of 50%, i.e., $FPAA=1$ if the percentage of correct forecasts in the prediction period (F) is greater than 0.5, else $FPAA=0$. We may also compute FPAA at a threshold of 0.527, i.e., the baseline percentage of times the stock market rises, but we get identical results for $F=10, 30$ and slightly lower values for $F=1000$. We also report how far (in standard deviations) the OA is from the threshold of 0.527. In no case is the predictive power significant. In addition, standard metrics such as precision, recall, AUC, and F1 scores are provided.

Experimental Case | $F=10$ | $F=30$ | $F=1000$ | $F=5000$
---|---|---|---|---
**$A_1$. In-sample (IS)** | | | |
(OA, FPAA) | (0.580, 0.579) | (0.556, 0.638) | (0.525, 0.882) | (0.522, 0.927)
No. of $\sigma$s away from 0.527 | 0.31 | 0.30 | 0.12 | 0.31
(Precision, Recall) | (0.570, 0.983) | (0.545, 0.986) | (0.525, 0.991) | (0.526, 0.999)
(AUC, F1) | (0.528, 0.702) | (0.517, 0.696) | (0.506, 0.688) | (0.505, 0.689)
**$A_2$. Stationary (OS)** | | | |
(OA, FPAA) | (0.592, 0.622) | (0.551, 0.619) | (0.524, 0.910) | -
No. of $\sigma$s away from 0.527 | 0.38 | 0.25 | 0.17 | -
(Precision, Recall) | (0.580, 0.978) | (0.541, 0.984) | (0.525, 0.999) | -
(AUC, F1) | (0.540, 0.707) | (0.513, 0.692) | (0.501, 0.688) | -
**$A_3$. Nonstationary (NS)** | | | |
(OA, FPAA) | (0.582, 0.589) | (0.552, 0.667) | (0.534, 0.887) | -
No. of $\sigma$s away from 0.527 | 0.36 | 0.28 | 0.26 | -
(Precision, Recall) | (0.573, 0.981) | (0.546, 0.992) | (0.535, 0.999) | -
(AUC, F1) | (0.526, 0.705) | (0.508, 0.699) | (0.502, 0.696) | -

**Table 2.** Forecast metrics as the forecast period varies over $F=\{5,10,15,20,25,30\}$ days. These results are for the nonstationary out-of-sample (NS) case. Metrics correspond to those reported in Table 1.

Metric | $F=5$ | $F=10$ | $F=15$ | $F=20$ | $F=25$ | $F=30$
---|---|---|---|---|---|---
OA | 0.605 | 0.582 | 0.567 | 0.565 | 0.556 | 0.552
FPAA | 0.648 | 0.587 | 0.702 | 0.641 | 0.697 | 0.667
Precision | 0.594 | 0.573 | 0.557 | 0.558 | 0.549 | 0.546
Recall | 0.986 | 0.981 | 0.987 | 0.986 | 0.989 | 0.992
AUC | 0.546 | 0.526 | 0.517 | 0.517 | 0.515 | 0.508
F1 | 0.710 | 0.705 | 0.702 | 0.703 | 0.699 | 0.699

**Table 3.** Prediction accuracy for various machine learning models. We report the overall accuracy (OA) and FPAA. The training sample is of size $N=5000$, with a look back period of $L=30$ days and a forecast period of $F=10$ days. We also report how far (in standard deviations) the OA is from the threshold of 0.527, shown in parentheses next to the OA value. In no case is the predictive power significant.

Prediction Model | OA | FPAA
---|---|---
Deep Learning | 0.582 (0.36) | 0.589
Logistic Regression | 0.517 (0.06) | 0.667
Decision Tree | 0.502 (0.15) | 0.633
Random Forest | 0.491 (0.24) | 0.598
Linear Discriminant Analysis | 0.509 (0.11) | 0.658
k Nearest Neighbors | 0.511 (0.10) | 0.645
Naive Bayes | 0.497 (0.19) | 0.618
Support Vector Machine | 0.517 (0.07) | 0.678

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Das, S.R.; Mokashi, K.; Culkin, R. Are Markets Truly Efficient? Experiments Using Deep Learning Algorithms for Market Movement Prediction. *Algorithms* **2018**, *11*, 138.
https://doi.org/10.3390/a11090138
