# Forecasting Purpose Data Analysis and Methodology Comparison of Neural Model Perspective


## Abstract


## 1. Introduction

## 2. Data Acquisition

## 3. Forecasting Methodology

#### 3.1. MLP and DNN Model Methodology

where $b_j$ ($j = 0, 1, 2, \ldots, n$) is the bias on the $j$th unit; $w_{ij}$ ($i = 0, 1, 2, \ldots, m$; $j = 0, 1, 2, \ldots, n$) are the connection weights between the layers of the model; $h(\cdot)$ is the transfer function of the hidden layer; $m$ is the number of input nodes; and $n$ is the number of hidden nodes. The MLP model performs a nonlinear functional mapping from the past observations $(y_{k-1}, y_{k-2}, \ldots, y_{k-p})$ to the future value $y_k$, i.e.,

$$y_k = f(y_{k-1}, y_{k-2}, \ldots, y_{k-p}, \mathbf{w}) + \varepsilon_k,$$

where $\mathbf{w}$ is the vector of all connection weights and biases, $f$ is the function determined by the network structure and weights, and $\varepsilon_k$ is the error term.
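To make the mapping concrete: before training, the univariate series must be turned into lagged input/target pairs $(y_{k-1}, \ldots, y_{k-p}) \to y_k$. A minimal sketch of that windowing step (the helper name `make_lagged_pairs` and the sample values are illustrative, not from the paper):

```python
import numpy as np

def make_lagged_pairs(series, p):
    # Build (y_{k-p}, ..., y_{k-1}) -> y_k pairs from a univariate series.
    # p is the number of past observations fed to the network.
    X, y = [], []
    for k in range(p, len(series)):
        X.append(series[k - p:k])   # p past observations (network input)
        y.append(series[k])         # next value (forecast target)
    return np.array(X), np.array(y)

series = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
X, y = make_lagged_pairs(series, p=3)
# X[0] is [1., 2., 3.] and y[0] is 4.0
```

With $p = 3$, a series of length 6 yields three training pairs; each row of `X` is one input vector for the MLP, and the matching entry of `y` is the value the network learns to forecast.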

#### 3.2. Implementation and Avoiding the Overfitting Problem

**Algorithm 1.** MLP and DNN training with dropout, in Python. The `sigmoid` helpers, the data (`x`, `y`), and the layer dimensions (`input_dim`, `hidden_dim`, `output_dim`) are assumed to be defined elsewhere; the error convention `layer_3 − y[i]` is used consistently in the backward pass and the weight update.

```python
# Step 1: Initialization.
# x: input nodes, y: output nodes; weights drawn at random, normalized to [0, 1).
alpha = 0.1                                # learning rate
dropout_percent, do_dropout = (0.2, True)
synapse_0 = np.random.random((input_dim, hidden_dim))
synapse_1 = np.random.random((hidden_dim, hidden_dim))
synapse_2 = np.random.random((hidden_dim, output_dim))

for j in range(500):
    for i in range(len(x)):
        # Step 2: Forward propagation through the three weight matrices.
        layer_0 = x[i]
        layer_1 = sigmoid(np.dot(layer_0, synapse_0))
        layer_2 = sigmoid(np.dot(layer_1, synapse_1))
        layer_3 = sigmoid(np.dot(layer_2, synapse_2))

        # Step 3: Inverted dropout on the hidden layers: zero out a random
        # fraction of units and rescale the survivors by 1/(1 - p).
        if do_dropout:
            layer_1 *= np.random.binomial([np.ones(hidden_dim)],
                                          1 - dropout_percent)[0] * (1.0 / (1 - dropout_percent))
            layer_2 *= np.random.binomial([np.ones(hidden_dim)],
                                          1 - dropout_percent)[0] * (1.0 / (1 - dropout_percent))

        # Step 4: Back-propagate the error (convention: error = output - target).
        layer_3_error = layer_3 - y[i]
        layer_3_delta = layer_3_error * sigmoid_output_to_derivative(layer_3)
        layer_2_error = layer_3_delta.dot(synapse_2.T)
        layer_2_delta = layer_2_error * sigmoid_output_to_derivative(layer_2)
        layer_1_error = layer_2_delta.dot(synapse_1.T)
        layer_1_delta = layer_1_error * sigmoid_output_to_derivative(layer_1)

        # Step 5: Gradient-descent weight update (signs match Step 4's convention).
        synapse_2 -= alpha * np.reshape(layer_2, (-1, 1)) * layer_3_delta
        synapse_1 -= alpha * np.reshape(layer_1, (-1, 1)) * layer_2_delta
        synapse_0 -= alpha * np.reshape(layer_0, (-1, 1)) * layer_1_delta
```
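Algorithm 1 calls `sigmoid` and `sigmoid_output_to_derivative` without defining them. A minimal sketch of these helpers, assuming the standard logistic activation (the function names follow the listing; the implementations are the usual textbook forms, not taken verbatim from the paper):

```python
import numpy as np

def sigmoid(z):
    # Logistic activation: squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_output_to_derivative(output):
    # Derivative of the sigmoid expressed in terms of its *output*:
    # if s = sigmoid(z), then ds/dz = s * (1 - s). This is why the
    # backward pass in Algorithm 1 can reuse the stored layer activations.
    return output * (1.0 - output)
```

Expressing the derivative through the activation's output (rather than its input) avoids recomputing the forward pass during backpropagation, which is the reason the deltas in Step 4 multiply the layer errors by `sigmoid_output_to_derivative(layer_k)` directly.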

## 4. Analysis and Results

#### 4.1. Optimal Parameters

#### 4.2. Analysis and Results

## 5. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References


**Figure 2.** Comparison of recent yearly data by x = 0 values from 20%, 10%, 8%, 6%, respectively. (**a**) MLP (1-HL); (**b**) DNN (3-HL).

**Figure 3.** Comparison of recent yearly data by x = 0 values from 20%, 13%, 6%, respectively. (**a**) MLP (1-HL); (**b**) DNN (3-HL).

| # of Input | # of HL | Mean Square Error (MSE) | # of HL | MSE |
|---|---|---|---|---|
| **Weekly** | | | | |
| 77 | 115 | 0.005 | 35 | 0.004 |
| | 77 | 0.005 | 19 | 0.005 |
| 50 | 75 | 0.007 | 25 | 0.005 |
| | 50 | 0.006 | 12 | 0.006 |
| 31 | 46 | 0.004 | 15 | 0.001 |
| | 31 | 0.002 | 7 | 0.003 |
| 23 | 34 | 0.003 | 11 | 0.002 |
| | 23 | 0.001 | 5 | 0.003 |
| **Monthly** | | | | |
| 17 | 25 | 0.021 | 8 | 0.022 |
| | 17 | 0.017 | 4 | 0.029 |
| 11 | 16 | 0.005 | 5 | 0.004 |
| | 11 | 0.003 | 2 | 0.006 |
| 7 | 10 | 0.003 | 3 | 0.018 |
| | 7 | 0.006 | 1 | 0.222 |

| # of Input | # of HL | Mean Square Error (MSE) | # of HL | MSE |
|---|---|---|---|---|
| **Weekly** | | | | |
| 77 | 345 | 0.006 | 105 | 0.004 |
| | 231 | 0.005 | 57 | 0.005 |
| 50 | 225 | 0.007 | 75 | 0.005 |
| | 150 | 0.006 | 36 | 0.005 |
| 31 | 138 | 0.004 | 45 | 0.001 |
| | 93 | 0.003 | 21 | 0.001 |
| 23 | 102 | 0.003 | 33 | 0.001 |
| | 69 | 0.002 | 15 | 0.002 |
| **Monthly** | | | | |
| 17 | 75 | 0.021 | 24 | 0.018 |
| | 51 | 0.022 | 12 | 0.023 |
| 11 | 48 | 0.007 | 15 | 0.006 |
| | 33 | 0.007 | 6 | 0.008 |
| 7 | 30 | 0.002 | 9 | 0.018 |
| | 21 | 0.001 | 3 | 0.022 |

| Forecasting Horizon (Weeks) | Model | Mean | MAE | MSE |
|---|---|---|---|---|
| 77 | MLP (1-HL) | 0.194 | 0.136 | 0.021 |
| | DNN (3-HL) | 0.07 | 0.053 | 0.004 |
| 50 | MLP (1-HL) | 0.097 | 0.529 | 0.005 |
| | DNN (3-HL) | 0.046 | 0.034 | 0.001 |
| 31 | MLP (1-HL) | 0.055 | 0.034 | 0.002 |
| | DNN (3-HL) | 0.034 | 0.027 | 0.001 |
| 23 | MLP (1-HL) | 0.033 | 0.027 | 0.001 |
| | DNN (3-HL) | 0.031 | 0.023 | 0.001 |

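The comparison tables report Mean, MAE, and MSE for each model and horizon. For reference, the usual definitions of these metrics, sketched in numpy (the sample arrays are illustrative, and "Mean" is assumed here to denote the mean signed forecast error):

```python
import numpy as np

# Standard forecast-error metrics, as commonly defined.
def mean_error(y_true, y_pred):
    # Mean signed error: positive means the model over-forecasts on average.
    return np.mean(y_pred - y_true)

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the miss, in the data's units.
    return np.mean(np.abs(y_pred - y_true))

def mse(y_true, y_pred):
    # Mean squared error: penalizes large misses more heavily than MAE.
    return np.mean((y_pred - y_true) ** 2)

y_true = np.array([1.0, 2.0, 3.0])   # illustrative targets
y_pred = np.array([1.1, 1.9, 3.2])   # illustrative forecasts
```

Because MSE squares each residual, a single large error dominates it, while MAE weights all misses linearly; reporting both, as the tables do, shows whether a model's errors are uniformly small or small on average with occasional large misses.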

| Forecasting Horizon (Months) | Model | Mean | MAE | MSE |
|---|---|---|---|---|
| 17 | MLP (1-HL) | 0.185 | 0.130 | 0.017 |
| | DNN (3-HL) | 0.064 | 0.046 | 0.003 |
| 11 | MLP (1-HL) | 0.096 | 0.070 | 0.005 |
| | DNN (3-HL) | 0.010 | 0.033 | 0.001 |
| 7 | MLP (1-HL) | 0.072 | 0.054 | 0.003 |
| | DNN (3-HL) | 0.033 | 0.025 | 0.001 |


| Forecasting Horizon (Weeks) | Model | Mean | MAE | MSE |
|---|---|---|---|---|
| 77 | MLP (1-HL) | 0.132 | 0.132 | 0.018 |
| | DNN (3-HL) | 0.139 | 0.139 | 0.020 |
| 50 | MLP (1-HL) | 0.070 | 0.070 | 0.005 |
| | DNN (3-HL) | 0.079 | 0.079 | 0.006 |
| 31 | MLP (1-HL) | 0.044 | 0.044 | 0.002 |
| | DNN (3-HL) | 0.040 | 0.040 | 0.001 |
| 23 | MLP (1-HL) | 0.026 | 0.026 | 0.001 |
| | DNN (3-HL) | 0.011 | 0.012 | 0.001 |


| Forecasting Horizon (Months) | Model | Mean | MAE | MSE |
|---|---|---|---|---|
| 17 | MLP (1-HL) | 0.128 | 0.128 | 0.017 |
| | DNN (3-HL) | 0.135 | 0.135 | 0.018 |
| 11 | MLP (1-HL) | 0.071 | 0.071 | 0.005 |
| | DNN (3-HL) | 0.081 | 0.081 | 0.006 |
| 7 | MLP (1-HL) | 0.028 | 0.028 | 0.001 |
| | DNN (3-HL) | 0.011 | 0.011 | 0.001 |


© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Lee, S.; Jeong, T.
Forecasting Purpose Data Analysis and Methodology Comparison of Neural Model Perspective. *Symmetry* **2017**, *9*, 108.
https://doi.org/10.3390/sym9070108
