# Increasing Neurons or Deepening Layers in Forecasting Maximum Temperature Time Series?


## Abstract


## 1. Introduction

## 2. Data and Methods

## 3. Results

#### 3.1. Effect of the Number of Parameters

#### 3.2. Effect of the Number of Hidden Layers

## 4. Summary and Conclusions

## Supplementary Materials

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest


**Figure 2.** Schematic of the artificial neural network (ANN) with a total of 49 trainable parameters for (**a**) 1 hidden layer; (**b**) 3 hidden layers; and (**c**) 5 hidden layers.

**Figure 3.** The rate of change of the mean root mean square error (RMSE) of the 1-hidden-layer ANN with 49, 113, 169, 353, and 1001 trainable parameters for the test data at all 55 stations in South Korea.

**Figure 4.** The rate of change of the mean RMSE of the 3-hidden-layer ANN with 49, 113, 169, 353, and 1001 trainable parameters for the test data at all 55 stations in South Korea.

**Figure 5.** The rate of change of the mean RMSE of the 5-hidden-layer ANN with 49, 113, 169, 353, and 1001 trainable parameters for the test data at all 55 stations in South Korea.
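The figures above rank models by the RMSE of their maximum-temperature forecasts. As a reminder of the metric, here is a minimal stand-alone sketch; the `rmse` helper and the sample temperatures are illustrative, not values from the paper:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and forecast values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

# Hypothetical observed vs. forecast daily maximum temperatures (deg C)
obs = [31.2, 33.5, 29.8, 30.1]
fc = [30.5, 34.0, 30.6, 29.4]
print(round(rmse(obs, fc), 3))  # prints 0.684
```

A lower RMSE means the forecasts sit closer, on average, to the observed maxima; the panel comparisons in Figures 6–10 are simply differences of this quantity between architectures.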

**Figure 6.** Panel (**a**) shows the RMSE of the test data of the 1-hidden-layer model with 49 parameters; panel (**b**) shows the RMSE difference between the 1-hidden-layer and the 3-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better, and vice versa); and panel (**c**) shows the RMSE difference between the 1-hidden-layer and the 5-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better; >0 indicates the opposite).

**Figure 7.** Panel (**a**) shows the RMSE of the test data of the 1-hidden-layer model with 113 parameters; panel (**b**) shows the RMSE difference between the 1-hidden-layer and the 3-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better, and vice versa); and panel (**c**) shows the RMSE difference between the 1-hidden-layer and the 5-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better; >0 indicates the opposite).

**Figure 8.** Panel (**a**) shows the RMSE of the test data of the 1-hidden-layer model with 169 parameters; panel (**b**) shows the RMSE difference between the 1-hidden-layer and the 3-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better, and vice versa); and panel (**c**) shows the RMSE difference between the 1-hidden-layer and the 5-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better; >0 indicates the opposite).

**Figure 9.** Panel (**a**) shows the RMSE of the test data of the 1-hidden-layer model with 353 parameters; panel (**b**) shows the RMSE difference between the 1-hidden-layer and the 3-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better, and vice versa); and panel (**c**) shows the RMSE difference between the 1-hidden-layer and the 5-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better; >0 indicates the opposite).

**Figure 10.** Panel (**a**) shows the RMSE of the test data of the 1-hidden-layer model with 1001 parameters; panel (**b**) shows the RMSE difference between the 1-hidden-layer and the 3-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better, and vice versa); and panel (**c**) shows the RMSE difference between the 1-hidden-layer and the 5-hidden-layer models (a difference <0 indicates that the 1-hidden-layer model performs better; >0 indicates the opposite).

| Number of Parameters | Number of Hidden Layers | Structure |
|---|---|---|
| 49 | 1 | 6-6-1 |
| 49 | 3 | 6-3-3-3-1 |
| 49 | 5 | 6-5-1-1-1-1-1 |
| 113 | 1 | 6-14-1 |
| 113 | 3 | 6-6-5-5-1 |
| 113 | 5 | 6-4-4-4-4-4-1 |
| 169 | 1 | 6-21-1 |
| 169 | 3 | 6-7-7-7-1 |
| 169 | 5 | 6-6-6-4-4-6-1 |
| 353 | 1 | 6-44-1 |
| 353 | 3 | 6-11-11-11-1 |
| 353 | 5 | 6-8-8-8-8-8-1 |
| 1001 | 1 | 6-125-1 |
| 1001 | 3 | 6-20-20-20-1 |
| 1001 | 5 | 6-18-17-16-9-10-1 |
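The parameter totals in the table can be checked directly: a fully connected layer with `fan_in` inputs and `fan_out` neurons contributes `(fan_in + 1) * fan_out` weights and biases. A minimal sketch, where `trainable_params` is an illustrative helper rather than code from the paper:

```python
def trainable_params(structure):
    """Count weights + biases of a fully connected feed-forward network.

    `structure` lists the layer widths, e.g. [6, 6, 1] for the
    1-hidden-layer model with six inputs, six hidden neurons, and one
    output. Each dense layer adds (fan_in + 1) * fan_out parameters,
    where the +1 accounts for the bias term.
    """
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(structure, structure[1:]))

# Each depth variant in the table reaches the same parameter budget:
assert trainable_params([6, 6, 1]) == 49
assert trainable_params([6, 3, 3, 3, 1]) == 49
assert trainable_params([6, 5, 1, 1, 1, 1, 1]) == 49
assert trainable_params([6, 125, 1]) == 1001
assert trainable_params([6, 18, 17, 16, 9, 10, 1]) == 1001
```

Holding this budget fixed while varying depth is what lets the comparison isolate the effect of deepening layers from that of simply adding parameters.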

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Tran, T.T.K.; Lee, T.; Kim, J.-S.
Increasing Neurons or Deepening Layers in Forecasting Maximum Temperature Time Series? *Atmosphere* **2020**, *11*, 1072.
https://doi.org/10.3390/atmos11101072
