# Deep Learning for Stock Market Prediction


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Tree-Based Models

#### 2.2. Artificial Neural Networks

For a single neuron, the output is computed as

$$z = f\left(\sum_{i=1}^{n} w_i X_i\right)$$

where $X_1, X_2, \dots, X_n$ are the inputs, $w_1, w_2, \dots, w_n$ are the corresponding weights, $n$ is the number of inputs to the final node, $f$ is the activation function, and $z$ is the output.

$$h_t = \tanh(W_h h_{t-1} + W_x x_t),$$

$$y_t = W_y h_t,$$

where $y_t$, $h_t$, $x_t$, and $W_h$ are the output vector, hidden-layer vector, input vector, and hidden-state weight matrix, respectively.

The forget gate decides which information is discarded from the cell state: the output $h_{t-1}$ at the prior time $(t-1)$ and the input $x_t$ at the current time $(t)$ are fed into a sigmoid function $\sigma$. All $W$ and $b$ below are weight matrices and bias vectors that must be learned during training, and $f_t$ defines how much information is remembered or forgotten:

$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$

The input gate decides which new information is stored in the cell state. The value $i_t$ determines how much new information the cell state needs to remember, and a tanh layer produces a candidate vector $\hat{C}_t$ from the prior output $h_{t-1}$ and the current input $x_t$ to be added to the cell state:

$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$

$$\hat{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$$

The cell state $C_t$ is then updated by combining the retained part of the old state with the new candidate information:

$$C_t = f_t \times C_{t-1} + i_t \times \hat{C}_t$$

The output gate decides which information the cell state outputs. The value of $o_t$ lies between 0 and 1 and indicates how much of the cell-state information to output, and $h_t$ is the LSTM block's output at time $t$ [45]:

$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$$

$$h_t = o_t \times \tanh(C_t)$$
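The gate equations above can be combined into a single forward step. The following is a minimal NumPy sketch; the matrix shapes, dictionary keys, and function names are illustrative, not taken from the paper's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM forward step. W maps gate name -> matrix of shape
    (hidden, hidden + inputs); b maps gate name -> bias of shape (hidden,)."""
    concat = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ concat + b["f"])     # forget gate
    i_t = sigmoid(W["i"] @ concat + b["i"])     # input gate
    c_hat = np.tanh(W["c"] @ concat + b["c"])   # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat            # cell-state update
    o_t = sigmoid(W["o"] @ concat + b["o"])     # output gate
    h_t = o_t * np.tanh(c_t)                    # block output
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```

Because $o_t \in (0,1)$ and $\tanh(C_t) \in (-1,1)$, every component of the output $h_t$ is strictly inside $(-1, 1)$.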

## 3. Research Data

## 4. Evaluation Measures

#### 4.1. Mean Absolute Percentage Error

$$\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$$

where $A_t$ is the actual value and $F_t$ is the forecast value. The absolute difference between the two is divided by $A_t$, summed over all forecasts, divided by the number of samples $n$, and finally multiplied by 100 to give a percentage error.

#### 4.2. Mean Absolute Error

$$\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\left|A_t - F_t\right|$$

where $A_t$ is the true value, $F_t$ is the predicted value, and $n$ is the number of samples; the absolute differences are summed over all forecasts and divided by $n$.

#### 4.3. Relative Root Mean Square Error

$$\mathrm{rRMSE} = \frac{1}{\bar{A}}\sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(A_t - F_t\right)^2}$$

where $A_t$ is the observed value, $F_t$ is the predicted value, $n$ is the number of samples, and $\bar{A}$ is the mean of the observed values.

#### 4.4. Mean Squared Error

$$\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}\left(A_t - F_t\right)^2$$

where $A_t$ is the observed value, $F_t$ is the predicted value, and $n$ is the number of samples.
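The four measures can be sketched in a few lines of NumPy. This is a minimal sketch; in particular, normalizing the rRMSE by the mean observed value is an assumption, since several variants of relative RMSE exist:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(100.0 * np.mean(np.abs((a - f) / a)))

def mae(actual, forecast):
    """Mean absolute error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(a - f)))

def rrmse(actual, forecast):
    """Relative RMSE: RMSE divided by the mean observed value (one common variant)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((a - f) ** 2)) / np.mean(a))

def mse(actual, forecast):
    """Mean squared error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean((a - f) ** 2))

actual = [100.0, 102.0, 98.0, 101.0]
forecast = [99.0, 103.0, 97.0, 100.0]
print(mape(actual, forecast), mae(actual, forecast), mse(actual, forecast))
```

Note that MAPE and rRMSE are scale-free, which is why they are the natural measures for comparing errors across stock groups whose price levels differ by orders of magnitude.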

## 5. Results

- Decision Tree always ranks lowest among the tree-based predictors because it is the only non-ensemble method (average MAPE: 2.07, 2.70, 2.18, and 1.41)
- For Diversified Financials and Petroleum, the best average performance belongs to the Adaboost regressor (average MAPE: 1.59 and 2.22)
- For Non-metallic Minerals and Basic Metals, there is close competition between the Adaboost, Gradient Boosting, and XGBoost regressors
- Decision Tree has the lowest runtime, and the Adaboost regressor is the fastest ensemble predictor (0.009 ms and 1.308 ms per sample, respectively)
- The runtime of XGBoost is considerably higher than that of the other tree-based models (by up to 65%)
- Considering accuracy, goodness of fit, and runtime together, the Adaboost regressor is the best tree-based model

- ANN generally ranks lowest among the neural models for forecasting (average MAPE: 3.86, 5.52, 4.67, and 3.17)
- The LSTM model significantly outperforms the RNN, with lower error values (for example, in Diversified Financials, MAPE: 0.60 versus 1.85)
- The average runtime of LSTM is noticeably larger than that of RNN (80.902 ms versus 20.630 ms per sample, roughly four times more)

- According to MAPE and RRMSE, the models predict future values for Metals and Diversified Financials better than for the other two groups
- The deep learning methods (RNN and LSTM) show a strong ability to predict stock market prices, owing to their large number of training epochs and their use of values from several preceding days
- Based on RRMSE and MSE, the deep learning methods produce the best-fitting curves, with the smallest spread of residuals around them
- The average runtime of the deep learning models is high compared to the other models
- LSTM is clearly the best model for predicting all stock market groups, with the lowest errors and the best fit; its main drawback is its long runtime

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Asadi, S.; Hadavandi, E.; Mehmanpazir, F.; Nakhostin, M.M. Hybridization of evolutionary Levenberg–Marquardt neural networks and data pre-processing for stock market prediction. Knowl.-Based Syst.
**2012**, 35, 245–258. [Google Scholar] [CrossRef] - Akhter, S.; Misir, M.A. Capital markets efficiency: Evidence from the emerging capital market with particular reference to Dhaka stock exchange. South Asian J. Manag.
**2005**, 12, 35. [Google Scholar] - Miao, K.; Chen, F.; Zhao, Z. Stock price forecast based on bacterial colony RBF neural network. J. Qingdao Univ. (Nat. Sci. Ed.)
**2007**, 2, 210–230. [Google Scholar] - Lehoczky, J.; Schervish, M. Overview and History of Statistics for Equity Markets. Annu. Rev. Stat. Its Appl.
**2018**, 5, 265–288. [Google Scholar] [CrossRef] - Aali-Bujari, A.; Venegas-Martínez, F.; Pérez-Lechuga, G. Impact of the stock market capitalization and the banking spread in growth and development in Latin American: A panel data estimation with System GMM. Contaduría y Administración
**2017**, 62, 1427–1441. [Google Scholar] [CrossRef][Green Version] - Naeini, M.P.; Taremian, H.; Hashemi, H.B. Stock market value prediction using neural networks. In Proceedings of the 2010 international conference on computer information systems and industrial management applications (CISIM), Krakow, Poland, 8–10 October 2010; pp. 132–136. [Google Scholar]
- Qian, B.; Rasheed, K. Stock market prediction with multiple classifiers. Appl. Intell.
**2007**, 26, 25–33. [Google Scholar] [CrossRef] - Shah, D.; Isah, H.; Zulkernine, F. Stock market analysis: A review and taxonomy of prediction techniques. Int. J. Financ. Stud.
**2019**, 7, 26. [Google Scholar] [CrossRef][Green Version] - Olivas, E.S. Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques: Algorithms, Methods, and Techniques; IGI Global: Hershey, PA, USA, 2009. [Google Scholar]
- Ballings, M.; Poel, D.V.D.; Hespeels, N.; Gryp, R. Evaluating multiple classifiers for stock price direction prediction. Expert Syst. Appl.
**2015**, 42, 7046–7056. [Google Scholar] [CrossRef] - Aldin, M.M.; Dehnavi, H.D.; Entezari, S. Evaluating the employment of technical indicators in predicting stock price index variations using artificial neural networks (case study: Tehran Stock Exchange). Int. J. Bus. Manag.
**2012**, 7, 25. [Google Scholar] - Tsai, C.-F.; Lin, Y.-C.; Yen, D.C.; Chen, Y.-M. Predicting stock returns by classifier ensembles. Appl. Soft Comput.
**2011**, 11, 2452–2459. [Google Scholar] [CrossRef] - Cavalcante, R.C.; Brasileiro, R.C.; Souza, V.L.; Nobrega, J.P.; Oliveira, A.L. Computational intelligence and financial markets: A survey and future directions. Expert Syst. Appl.
**2016**, 55, 194–211. [Google Scholar] [CrossRef] - Selvin, S.; Vinayakumar, R.; Gopalakrishnan, E.; Menon, V.K.; Soman, K. Stock price prediction using LSTM, RNN and CNN-sliding window model. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 13–16 September 2017; pp. 1643–1647. [Google Scholar]
- Sachdeva, A.; Jethwani, G.; Manjunath, C.; Balamurugan, M.; Krishna, A.V. An Effective Time Series Analysis for Equity Market Prediction Using Deep Learning Model. In Proceedings of the 2019 International Conference on Data Science and Communication (IconDSC), Bangalore, India, 1–2 March 2019; pp. 1–5. [Google Scholar]
- Guo, T.; Xu, Z.; Yao, X.; Chen, H.; Aberer, K.; Funaya, K. Robust online time series prediction with recurrent neural networks. In Proceedings of the 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), Montreal, QC, Canada, 17–19 October 2016; pp. 816–825. [Google Scholar]
- Chen, P.-A.; Chang, L.-C.; Chang, F.-J. Reinforced recurrent neural networks for multi-step-ahead flood forecasts. J. Hydrol.
**2013**, 497, 71–79. [Google Scholar] [CrossRef] - Long, J.; Chen, Z.; He, W.; Wu, T.; Ren, J. An integrated framework of deep learning and knowledge graph for prediction of stock price trend: An application in Chinese stock exchange market. Appl. Soft Comput.
**2020**, 106205. [Google Scholar] [CrossRef] - Pang, X.; Zhou, Y.; Wang, P.; Lin, W.; Chang, V. An innovative neural network approach for stock market prediction. J. Supercomput.
**2018**, 11, 1–21. [Google Scholar] [CrossRef] - Kelotra, A.; Pandey, P. Stock Market Prediction Using Optimized Deep-ConvLSTM Model. Big Data
**2020**, 8, 5–24. [Google Scholar] [CrossRef] [PubMed][Green Version] - Bouktif, S.; Fiaz, A.; Awad, M. Augmented Textual Features-Based Stock Market Prediction. IEEE Access
**2020**, 8, 40269–40282. [Google Scholar] [CrossRef] - Zhong, X.; Enke, D. Predicting the daily return direction of the stock market using hybrid machine learning algorithms. Financ. Innov.
**2019**, 5, 4. [Google Scholar] [CrossRef] - Das, S.R.; Mishra, D.; Rout, M. Stock market prediction using Firefly algorithm with evolutionary framework optimized feature reduction for OSELM method. Expert Syst. Appl. X
**2019**, 4, 100016. [Google Scholar] [CrossRef] - Hoseinzade, E.; Haratizadeh, S. CNNpred: CNN-based stock market prediction using a diverse set of variables. Expert Syst. Appl.
**2019**, 129, 273–285. [Google Scholar] [CrossRef] - Kumar, K.; Haider, M.T.U. Blended computation of machine learning with the recurrent neural network for intra-day stock market movement prediction using a multi-level classifier. Int. J. Comput. Appl.
**2019**, 1–17. [Google Scholar] [CrossRef] - Chung, H.; Shin, K.-S. Genetic algorithm-optimized multi-channel convolutional neural network for stock market prediction. Neural Comput. Appl.
**2020**, 2, 7897–7914. [Google Scholar] [CrossRef] - Sim, H.S.; Kim, H.I.; Ahn, J.J. Is deep learning for image recognition applicable to stock market prediction? Complexity
**2019**, 2019. [Google Scholar] [CrossRef] - Wen, M.; Li, P.; Zhang, L.; Chen, Y. Stock Market Trend Prediction Using High-Order Information of Time Series. IEEE Access
**2019**, 7, 28299–28308. [Google Scholar] [CrossRef] - Rekha, G.; D Sravanthi, B.; Ramasubbareddy, S.; Govinda, K. Prediction of Stock Market Using Neural Network Strategies. J. Comput. Theor. Nanosci.
**2019**, 16, 2333–2336. [Google Scholar] [CrossRef] - Lee, J.; Kim, R.; Koh, Y.; Kang, J. Global stock market prediction based on stock chart images using deep Q-network. IEEE Access
**2019**, 7, 167260–167277. [Google Scholar] [CrossRef] - Liu, G.; Wang, X. A numerical-based attention method for stock market prediction with dual information. IEEE Access
**2018**, 7, 7357–7367. [Google Scholar] [CrossRef] - Baek, Y.; Kim, H.Y. ModAugNet: A new forecasting framework for stock market index value with an overfitting prevention LSTM module and a prediction LSTM module. Expert Syst. Appl.
**2018**, 113, 457–480. [Google Scholar] [CrossRef] - Chung, H.; Shin, K.-S. Genetic algorithm-optimized long short-term memory network for stock market prediction. Sustainability
**2018**, 10, 3765. [Google Scholar] [CrossRef][Green Version] - Chen, L.; Qiao, Z.; Wang, M.; Wang, C.; Du, R.; Stanley, H.E. Which artificial intelligence algorithm better predicts the Chinese stock market? IEEE Access
**2018**, 6, 48625–48633. [Google Scholar] [CrossRef] - Zhou, X.; Pan, Z.; Hu, G.; Tang, S.; Zhao, C. Stock market prediction on high-frequency data using generative adversarial nets. Math. Probl. Eng.
**2018**, 2018. [Google Scholar] [CrossRef][Green Version] - Chong, E.; Han, C.; Park, F.C. Deep learning networks for stock market analysis and prediction: Methodology, data representations, and case studies. Expert Syst. Appl.
**2017**, 83, 187–205. [Google Scholar] [CrossRef][Green Version] - Long, W.; Lu, Z.; Cui, L. Deep learning-based feature engineering for stock price movement prediction. Knowl.-Based Syst.
**2019**, 164, 163–173. [Google Scholar] [CrossRef] - Moews, B.; Herrmann, J.M.; Ibikunle, G. Lagged correlation-based deep learning for directional trend change prediction in financial time series. Expert Syst. Appl.
**2019**, 120, 197–206. [Google Scholar] [CrossRef][Green Version] - Garcia, F.; Guijarro, F.; Oliver, J.; Tamošiūnienė, R. Hybrid fuzzy neural network to predict price direction in the German DAX-30 index. Technol. Econ. Dev. Econ.
**2018**, 24, 2161–2178. [Google Scholar] [CrossRef] - Cervelló-Royo, R.; Guijarro, F. Forecasting stock market trend: A comparison of machine learning algorithms. Financ. Mark. Valuat.
**2020**, 6, 37–49. [Google Scholar] - Papageorgiou, K.I.; Poczeta, K.; Papageorgiou, E.; Gerogiannis, V.C.; Stamoulis, G. Exploring an Ensemble of Methods that Combines Fuzzy Cognitive Maps and Neural Networks in Solving the Time Series Prediction Problem of Gas Consumption in Greece. Algorithms
**2019**, 12, 235. [Google Scholar] [CrossRef][Green Version] - Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
- Nabipour, M.; Nayyeri, P.; Jabani, H.; Mosavi, A. Deep learning for Stock Market Prediction. arXiv
**2020**, arXiv:2004.01497. [Google Scholar] - Amari, S. The Handbook of Brain Theory and Neural Networks; MIT press: Cambridge, MA, USA, 2003. [Google Scholar]
- Lin, T.; Horne, B.G.; Tino, P.; Giles, C.L. Learning long-term dependencies in NARX recurrent neural networks. IEEE Trans. Neural Netw.
**1996**, 7, 1329–1338. [Google Scholar] - TSETMC. Available online: www.tsetmc.com (accessed on 4 April 2020).
- Nosratabadi, S.; Mosavi, A.; Duan, P.; Ghamisi, P. Data science in economics. arXiv
**2020**, arXiv:2003.13422. [Google Scholar] - Kara, Y.; Boyacioglu, M.A.; Baykan, Ö.K. Predicting direction of stock price index movement using artificial neural networks and support vector machines: The sample of the Istanbul Stock Exchange. Expert Syst. Appl.
**2011**, 38, 5311–5319. [Google Scholar] [CrossRef] - Patel, J.; Shah, S.; Thakkar, P.; Kotecha, K. Predicting stock market index using fusion of machine learning techniques. Expert Syst. Appl.
**2015**, 42, 2162–2172. [Google Scholar] [CrossRef] - Patel, J.; Shah, S.; Thakkar, P.; Kotecha, K. Predicting stock and stock price index movement using trend deterministic data preparation and machine learning techniques. Expert Syst. Appl.
**2015**, 42, 259–268. [Google Scholar] [CrossRef] - Matloff, N. Statistical Regression and Classification: From Linear Models to Machine Learning; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017. [Google Scholar]

| Indicator | Formula |
|---|---|
| Simple n-day moving average | $\frac{C_t + C_{t-1} + \dots + C_{t-n+1}}{n}$ |
| Weighted 14-day moving average | $\frac{n C_t + (n-1) C_{t-1} + \dots + C_{t-n+1}}{n + (n-1) + \dots + 1}$ |
| Momentum | $C_t - C_{t-n+1}$ |
| Stochastic K% | $\frac{C_t - LL_{t \dots t-n+1}}{HH_{t \dots t-n+1} - LL_{t \dots t-n+1}} \times 100$ |
| Stochastic D% | $\frac{K_t + K_{t-1} + \dots + K_{t-n+1}}{n}$ |
| Relative strength index (RSI) | $100 - \frac{100}{1 + \sum_{i=1}^{n-1} UP_{t-i} \big/ \sum_{i=1}^{n-1} DW_{t-i}}$ |
| Signal | $\mathrm{Signal}(n)_t = \mathrm{MACD}_t \times \frac{2}{n+1} + \mathrm{Signal}(n)_{t-1} \times \left(1 - \frac{2}{n+1}\right)$ |
| Larry William's R% | $\frac{HH_{t \dots t-n+1} - C_t}{HH_{t \dots t-n+1} - LL_{t \dots t-n+1}} \times 100$ |
| Accumulation/Distribution (A/D) oscillator | $\frac{H_t - C_t}{H_t - L_t}$ |
| Commodity channel index (CCI) | $\frac{M_t - SM_t}{0.015\, D_t}$ |

where:

- $n$ is the number of days
- $C_t$ is the closing price at time $t$
- $L_t$ and $H_t$ are the low and high prices at time $t$, respectively
- $LL_{t \dots t-n+1}$ and $HH_{t \dots t-n+1}$ are the lowest low and highest high prices in the last $n$ days, respectively
- $UP_t$ and $DW_t$ are the upward and downward price changes at time $t$, respectively
- $\mathrm{EMA}(K)_t = \mathrm{EMA}(K)_{t-1} \times \left(1 - \frac{2}{K+1}\right) + C_t \times \frac{2}{K+1}$
- Moving average convergence divergence: $\mathrm{MACD}_t = \mathrm{EMA}(12)_t - \mathrm{EMA}(26)_t$
- $M_t = \frac{H_t + L_t + C_t}{3}$
- $SM_t = \frac{\sum_{i=0}^{n-1} M_{t-i}}{n}$
- $D_t = \frac{\sum_{i=0}^{n-1} \left|M_{t-i} - SM_t\right|}{n}$
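As a concrete illustration, the first few indicators can be computed directly from raw price lists. The function names are our own; each takes the full price history and the window length n:

```python
import numpy as np

def sma(close, n):
    """Simple n-day moving average of the last n closing prices."""
    return float(np.mean(close[-n:]))

def wma(close, n):
    """Weighted moving average: weight n for today, n-1 for yesterday, ..., 1."""
    weights = np.arange(n, 0, -1)                 # n, n-1, ..., 1 (newest first)
    closes = np.asarray(close[-n:], float)[::-1]  # reorder so newest is first
    return float(np.sum(weights * closes) / weights.sum())

def momentum(close, n):
    """Momentum: C_t - C_{t-n+1}."""
    return float(close[-1] - close[-n])

def stochastic_k(close, low, high, n):
    """Stochastic K%: today's close positioned within the n-day low-high range."""
    ll, hh = float(min(low[-n:])), float(max(high[-n:]))
    return 100.0 * (float(close[-1]) - ll) / (hh - ll)

close = [1.0, 2.0, 3.0, 4.0]
print(sma(close, 4), wma(close, 4), momentum(close, 4))
```

For a steadily rising series the WMA sits above the SMA, since the weighting scheme emphasizes the most recent closes.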

| Feature | Max | Min | Mean | Standard Deviation |
|---|---|---|---|---|
| **Diversified Financials** | | | | |
| SMA | 6969.46 | 227.5 | 1471.201 | 1196.926 |
| WMA | 3672.226 | 119.1419 | 772.5263 | 630.0753 |
| MOM | 970.8 | −1017.8 | 21.77033 | 126.5205 |
| STCK | 99.93224 | 0.159245 | 53.38083 | 19.18339 |
| STCD | 96.9948 | 14.31843 | 53.34332 | 15.28929 |
| RSI | 68.96463 | 27.21497 | 50.18898 | 6.471652 |
| SIG | 310.5154 | −58.4724 | 16.64652 | 51.62368 |
| LWR | 99.84076 | 0.06776 | 46.61917 | 19.18339 |
| ADO | 0.99986 | 0.000682 | 0.504808 | 0.238426 |
| CCI | 270.5349 | −265.544 | 14.68813 | 101.8721 |
| **Basic Metals** | | | | |
| SMA | 322,111.5 | 7976.93 | 69,284.11 | 60,220.95 |
| WMA | 169,013.9 | 4179.439 | 36,381.48 | 31,677.51 |
| MOM | 39,393.8 | −20,653.8 | 1030.265 | 4457.872 |
| STCK | 98.47765 | 1.028891 | 54.64576 | 16.41241 |
| STCD | 90.93235 | 12.94656 | 54.64294 | 13.25043 |
| RSI | 72.18141 | 27.34428 | 49.8294 | 6.113667 |
| SIG | 12,417.1 | −4019.14 | 803.5174 | 2155.701 |
| LWR | 98.97111 | 1.522349 | 45.36526 | 16.43646 |
| ADO | 0.999141 | 0.00097 | 0.498722 | 0.234644 |
| CCI | 264.6937 | −242.589 | 23.4683 | 99.14922 |
| **Non-metallic Minerals** | | | | |
| SMA | 15,393.62 | 134.15 | 1872.483 | 2410.316 |
| WMA | 8081.05 | 69.72762 | 985.1065 | 1272.247 |
| MOM | 1726.5 | −2998.3 | 49.21097 | 264.0393 |
| STCK | 100.00 | 0.154268 | 54.71477 | 20.2825 |
| STCD | 96.7883 | 13.15626 | 54.68918 | 16.37712 |
| RSI | 70.89401 | 24.07408 | 49.67247 | 6.449379 |
| SIG | 848.558 | −127.47 | 37.36441 | 123.9744 |
| LWR | 99.84573 | −2.66648 | 45.28523 | 20.2825 |
| ADO | 0.998941 | 0.00036 | 0.501229 | 0.238008 |
| CCI | 296.651 | −253.214 | 20.06145 | 101.9735 |
| **Petroleum** | | | | |
| SMA | 1,349,138 | 16,056.48 | 243,334.2 | 262,509.8 |
| WMA | 707,796.4 | 8580.536 | 127,839.1 | 138,101 |
| MOM | 227,794 | −136,467 | 4352.208 | 26,797.25 |
| STCK | 100.00 | 0.253489 | 53.78946 | 22.0595 |
| STCD | 95.93565 | 2.539517 | 53.83312 | 17.46646 |
| RSI | 75.05218 | 23.26627 | 50.02778 | 6.838486 |
| SIG | 71,830.91 | −33,132 | 3411.408 | 11,537.98 |
| LWR | 99.74651 | −1.8345 | 46.23697 | 22.02162 |
| ADO | 0.999933 | 0.000288 | 0.498381 | 0.239229 |
| CCI | 286.7812 | −284.298 | 14.79592 | 101.8417 |

| Model | Parameters | Value(s) |
|---|---|---|
| Decision Tree | Number of Trees (ntrees) | 1 |
| Bagging | Number of Trees (ntrees) | 50, 100, 150, 200, 250, 300, 350, 400, 450, 500 |
| | Max Depth | 10 |
| Random Forest | Number of Trees (ntrees) | 50, 100, 150, 200, 250, 300, 350, 400, 450, 500 |
| | Max Depth | 10 |
| Adaboost | Number of Trees (ntrees) | 50, 100, 150, 200, 250, 300, 350, 400, 450, 500 |
| | Max Depth | 10 |
| | Learning Rate | 0.1 |
| Gradient Boosting | Number of Trees (ntrees) | 50, 100, 150, 200, 250, 300, 350, 400, 450, 500 |
| | Max Depth | 10 |
| | Learning Rate | 0.1 |
| XGBoost | Number of Trees (ntrees) | 50, 100, 150, 200, 250, 300, 350, 400, 450, 500 |
| | Max Depth | 10 |
| | Learning Rate | 0.1 |

| Model | Parameters | Value(s) |
|---|---|---|
| Artificial neural network (ANN) | Number of Neurons | 500 |
| | Activation Function | ReLU |
| | Optimizer | Adam (${\beta}_{1}=0.9,{\beta}_{2}=0.999$) |
| | Learning Rate | 0.01 |
| | Epochs | 100, 200, 500, 1000 |
| Recurrent neural network (RNN) | Number of Neurons | 500 |
| | Activation Function | tanh |
| | Optimizer | Adam (${\beta}_{1}=0.9,{\beta}_{2}=0.999$) |
| | Learning Rate | 0.0001 |
| | Training Days (ndays) | 1, 2, 5, 10, 20, 30 |
| | Epochs (w.r.t. ndays) | 100, 200, 300, 500, 800, 1000 |
| Long short-term memory (LSTM) | Number of Neurons | 200 |
| | Activation Function | tanh |
| | Optimizer | Adam (${\beta}_{1}=0.9,{\beta}_{2}=0.999$) |
| | Learning Rate | 0.0005 |
| | Training Days (ndays) | 1, 2, 5, 10, 20, 30 |
| | Epochs (w.r.t. ndays) | 50, 50, 70, 100, 200, 300 |
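The LSTM configuration listed above can be sketched with Keras. The paper does not publish its code, so the layer arrangement, input shaping, and variable names here are assumptions based on the listed hyperparameters:

```python
from tensorflow import keras  # assumes TensorFlow 2.x is installed

ndays, n_features = 10, 10  # lookback window (ndays) and number of indicators

# LSTM predictor per the table: 200 neurons, tanh activation,
# Adam(beta_1=0.9, beta_2=0.999) with learning rate 0.0005.
model = keras.Sequential([
    keras.layers.Input(shape=(ndays, n_features)),
    keras.layers.LSTM(200, activation="tanh"),
    keras.layers.Dense(1),  # next-day price
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=5e-4, beta_1=0.9, beta_2=0.999),
    loss="mse",
)
# Training would follow with model.fit(X_train, y_train, epochs=...),
# where X_train is shaped (samples, ndays, n_features).
```

The key design point carried over from the table is that the RNN and LSTM consume a window of ndays past feature vectors per sample, whereas the ANN sees only a flat feature vector.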

| Prediction Models | Parameters | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|---|
| | *ntrees* | | | | |
| Decision Tree | 1 | 1.29 | 23.05 | 0.0235 | 4948.07 |
| Bagging | 400 | 0.92 | 15.80 | 0.0142 | 1403.24 |
| Random Forest | 300 | 0.92 | 15.51 | 0.0141 | 1290.91 |
| Adaboost | 250 | 0.91 | 15.09 | 0.0132 | 912.51 |
| Gradient Boosting | 300 | 1.02 | 19.19 | 0.0203 | 4312.09 |
| XGBoost | 100 | 0.88 | 14.86 | 0.0120 | 804.97 |
| | *epochs* | | | | |
| ANN | 1000 | 1.01 | 16.07 | 0.0146 | 1107.02 |
| | *ndays* | | | | |
| RNN | 1 | 1.59 | 14.70 | 0.0242 | 362.26 |
| LSTM | 5 | 0.43 | 4.46 | 0.0065 | 48.06 |

| Prediction Models | Parameters | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|---|
| | *ntrees* | | | | |
| Decision Tree | 1 | 1.52 | 25.93 | 0.0250 | 2893.88 |
| Bagging | 150 | 1.10 | 18.31 | 0.0160 | 1320.55 |
| Random Forest | 500 | 1.12 | 18.39 | 0.0171 | 1322.25 |
| Adaboost | 400 | 1.11 | 19.56 | 0.0164 | 1687.85 |
| Gradient Boosting | 300 | 1.14 | 19.51 | 0.0199 | 1781.31 |
| XGBoost | 150 | 1.14 | 19.81 | 0.0162 | 1724.65 |
| | *epochs* | | | | |
| ANN | 1000 | 1.41 | 23.35 | 0.0208 | 2614.08 |
| | *ndays* | | | | |
| RNN | 10 | 1.66 | 14.75 | 0.0243 | 423.14 |
| LSTM | 2 | 0.54 | 5.21 | 0.0076 | 72.71 |

| Prediction Models | Parameters | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|---|
| | *ntrees* | | | | |
| Decision Tree | 1 | 1.66 | 28.94 | 0.0298 | 4715.42 |
| Bagging | 150 | 1.45 | 24.00 | 0.0215 | 2146.32 |
| Random Forest | 500 | 1.47 | 24.46 | 0.0216 | 2317.71 |
| Adaboost | 400 | 1.39 | 23.91 | 0.0198 | 2494.78 |
| Gradient Boosting | 350 | 1.35 | 23.05 | 0.0142 | 2002.51 |
| XGBoost | 300 | 1.45 | 24.12 | 0.0202 | 2056.23 |
| | *epochs* | | | | |
| ANN | 1000 | 2.27 | 39.69 | 0.0322 | 7156.56 |
| | *ndays* | | | | |
| RNN | 10 | 1.77 | 15.21 | 0.0263 | 468.32 |
| LSTM | 30 | 0.55 | 6.02 | 0.0077 | 91.66 |

| Prediction Models | Parameters | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|---|
| | *ntrees* | | | | |
| Decision Tree | 1 | 2.09 | 34.00 | 0.0382 | 5129.32 |
| Bagging | 250 | 1.88 | 31.47 | 0.0283 | 3219.30 |
| Random Forest | 300 | 1.86 | 31.36 | 0.0279 | 3246.80 |
| Adaboost | 200 | 1.58 | 25.63 | 0.0251 | 2122.81 |
| Gradient Boosting | 500 | 1.74 | 28.00 | 0.0322 | 3356.01 |
| XGBoost | 500 | 1.77 | 31.07 | 0.0257 | 3600.53 |
| | *epochs* | | | | |
| ANN | 1000 | 4.12 | 65.38 | 0.0556 | 18,866.04 |
| | *ndays* | | | | |
| RNN | 5 | 1.91 | 16.98 | 0.0280 | 528.71 |
| LSTM | 10 | 0.57 | 6.84 | 0.0087 | 131.16 |

| Prediction Models | Parameters | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|---|
| | *ntrees* | | | | |
| Decision Tree | 1 | 2.28 | 41.29 | 0.0451 | 11,051.93 |
| Bagging | 100 | 2.24 | 37.61 | 0.0349 | 4997.20 |
| Random Forest | 50 | 2.24 | 37.28 | 0.0349 | 4755.32 |
| Adaboost | 300 | 1.83 | 28.83 | 0.0301 | 3146.31 |
| Gradient Boosting | 200 | 1.97 | 35.95 | 0.0390 | 8759.44 |
| XGBoost | 500 | 2.03 | 35.37 | 0.0305 | 5534.65 |
| | *epochs* | | | | |
| ANN | 1000 | 5.05 | 85.46 | 0.0696 | 29,483.87 |
| | *ndays* | | | | |
| RNN | 10 | 1.95 | 19.09 | 0.0307 | 644.50 |
| LSTM | 20 | 0.61 | 7.06 | 0.0112 | 150.86 |

| Prediction Models | Parameters | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|---|
| | *ntrees* | | | | |
| Decision Tree | 1 | 2.80 | 49.12 | 0.0571 | 14,227.06 |
| Bagging | 100 | 2.56 | 42.43 | 0.0388 | 5916.19 |
| Random Forest | 450 | 2.57 | 42.66 | 0.0393 | 6008.33 |
| Adaboost | 450 | 2.01 | 33.25 | 0.0309 | 4340.09 |
| Gradient Boosting | 350 | 2.17 | 39.10 | 0.0385 | 8573.37 |
| XGBoost | 500 | 2.30 | 39.30 | 0.0358 | 6406.16 |
| | *epochs* | | | | |
| ANN | 1000 | 5.66 | 126.69 | 0.0790 | 42,701.88 |
| | *ndays* | | | | |
| RNN | 20 | 1.96 | 19.47 | 0.0314 | 668.82 |
| LSTM | 10 | 0.75 | 7.25 | 0.0113 | 170.14 |

| Prediction Models | Parameters | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|---|
| | *ntrees* | | | | |
| Decision Tree | 1 | 2.83 | 48.39 | 0.0587 | 12,924.43 |
| Bagging | 350 | 3.21 | 54.37 | 0.0467 | 8803.66 |
| Random Forest | 50 | 3.18 | 54.06 | 0.0465 | 8799.45 |
| Adaboost | 350 | 2.33 | 37.63 | 0.0374 | 5369.06 |
| Gradient Boosting | 500 | 2.54 | 43.59 | 0.0485 | 9354.03 |
| XGBoost | 400 | 2.48 | 42.85 | 0.0378 | 6306.78 |
| | *epochs* | | | | |
| ANN | 1000 | 7.48 | 126.69 | 0.0994 | 54,940.25 |
| | *ndays* | | | | |
| RNN | 20 | 2.11 | 20.20 | 0.0322 | 1355.35 |
| LSTM | 10 | 0.77 | 10.03 | 0.0121 | 376.82 |

| Prediction Models | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|
| Decision Tree | 2.07 | 35.82 | 0.0396 | 7984.30 |
| Bagging | 1.91 | 32.00 | 0.0288 | 3973.92 |
| Random Forest | 1.91 | 31.96 | 0.0288 | 3962.97 |
| Adaboost | 1.59 | 26.27 | 0.0248 | 2867.63 |
| Gradient Boosting | 1.70 | 29.91 | 0.0318 | 5662.68 |
| XGBoost | 1.72 | 29.63 | 0.0255 | 3776.28 |
| ANN | 3.86 | 69.05 | 0.0530 | 22,409.96 |
| RNN | 1.85 | 17.20 | 0.0281 | 635.85 |
| LSTM | 0.60 | 6.70 | 0.0093 | 148.77 |

| Prediction Models | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|
| Decision Tree | 2.70 | 7613.54 | 0.0528 | 502,831,775.59 |
| Bagging | 2.62 | 6640.41 | 0.0397 | 212,982,692.85 |
| Random Forest | 2.62 | 6649.18 | 0.0400 | 212,239,589.62 |
| Adaboost | 2.22 | 5279.15 | 0.0362 | 163,264,613.31 |
| Gradient Boosting | 2.26 | 6402.08 | 0.0403 | 305,274,334.62 |
| XGBoost | 2.33 | 5947.22 | 0.0363 | 175,385,973.35 |
| ANN | 5.52 | 14,045.78 | 0.0753 | 1,123,371,989.92 |
| RNN | 3.40 | 4097.20 | 0.0596 | 57,606,535.91 |
| LSTM | 1.18 | 1653.79 | 0.0198 | 8,175,371.29 |

| Prediction Models | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|
| Decision Tree | 2.18 | 52.75 | 0.0456 | 22,287.11 |
| Bagging | 2.12 | 47.88 | 0.0331 | 13,333.59 |
| Random Forest | 2.12 | 47.89 | 0.0331 | 13,045.77 |
| Adaboost | 1.84 | 41.31 | 0.0305 | 11,798.23 |
| Gradient Boosting | 1.78 | 43.26 | 0.0339 | 15,155.18 |
| XGBoost | 1.86 | 42.15 | 0.0312 | 10,815.16 |
| ANN | 4.67 | 100.28 | 0.0662 | 98,705.75 |
| RNN | 5.23 | 44.18 | 0.0875 | 9227.55 |
| LSTM | 1.52 | 16.94 | 0.0228 | 1289.28 |

| Prediction Models | MAPE | MAE | rRMSE | MSE |
|---|---|---|---|---|
| Decision Tree | 1.41 | 1159.46 | 0.0274 | 11,082,872.18 |
| Bagging | 1.36 | 1046.64 | 0.0207 | 5,314,782.99 |
| Random Forest | 1.36 | 1043.30 | 0.0207 | 5,192,173.88 |
| Adaboost | 1.18 | 862.73 | 0.0191 | 3,361,111.64 |
| Gradient Boosting | 1.16 | 960.52 | 0.0212 | 7,029,319.85 |
| XGBoost | 1.21 | 963.42 | 0.0191 | 4,619,506.50 |
| ANN | 3.17 | 2441.71 | 0.0420 | 31,250,640.68 |
| RNN | 1.48 | 663.45 | 0.0238 | 1,434,974.44 |
| LSTM | 0.54 | 272.95 | 0.0077 | 225,333.35 |

**Tree-based models**

| Models | Decision Tree | Bagging | Random Forest | Adaboost | Gradient Boosting | XGBoost |
|---|---|---|---|---|---|---|
| Average runtime per sample (ms) | 0.009 | 1.399 | 1.316 | 1.308 | 1.483 | 2.373 |

**ANN-based models**

| Models | ANN | RNN | LSTM |
|---|---|---|---|
| Average runtime per sample (ms) | 20.088 | 20.630 | 80.902 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Nabipour, M.; Nayyeri, P.; Jabani, H.; Mosavi, A.; Salwana, E.; S., S. Deep Learning for Stock Market Prediction. *Entropy* **2020**, *22*, 840.
https://doi.org/10.3390/e22080840
