Article

Bootstrapped Holt Method with Autoregressive Coefficients Based on Harmony Search Algorithm

1 Department of Statistics, Faculty of Arts and Science, Giresun University, Giresun 28200, Turkey
2 Department of Statistics, Faculty of Arts and Science, Marmara University, Istanbul 34722, Turkey
* Author to whom correspondence should be addressed.
Forecasting 2021, 3(4), 839-849; https://doi.org/10.3390/forecast3040050
Submission received: 20 September 2021 / Revised: 29 October 2021 / Accepted: 3 November 2021 / Published: 4 November 2021
(This article belongs to the Special Issue Feature Papers of Forecasting 2021)

Abstract

Exponential smoothing methods are classical and powerful time series forecasting methods. In these methods, the smoothing parameters are fixed over time, and they should be estimated with efficient optimization algorithms; a suitable exponential smoothing method should be chosen according to the components of the time series. The Holt method can produce successful forecasting results for time series that have a trend. In this study, the Holt method is modified by using time-varying smoothing parameters instead of parameters that are fixed over time. The smoothing parameters are obtained for each observation from first-order autoregressive models. The parameters of the autoregressive models are estimated by using a harmony search algorithm, and the forecasts are obtained with a subsampling bootstrap approach. The main contribution of the paper is to introduce time-varying smoothing parameters defined by autoregressive equations and to use the bootstrap method within an exponential smoothing method. Real-world time series are used to demonstrate the forecasting performance of the proposed method.

1. Introduction

Exponential smoothing methods were introduced in the late 1950s [1,2,3], and they are among the most successful forecasting methods in the literature. Many exponential smoothing methods exist, such as simple exponential smoothing, the Holt method, and the Holt-Winters method, and each is suited to a different situation. If the data have no trend and no seasonality, simple exponential smoothing is used for forecasting. If the data have a linear trend and no seasonality, the Holt method is used. If the data have both trend and seasonality, the Holt-Winters method is used. Later, the damped trend model was proposed by [4] for series whose trend should not be extrapolated linearly. Exponential smoothing methods are popular in the literature because their forecasting accuracy is often superior to that of more complicated approaches, as shown in large forecasting competitions [5,6,7]. In addition to these methods, [8] proposed a simple modification of exponential smoothing named the ATA method, which in recent years has proven effective and simple to use compared with complex approaches.
Moreover, refs. [9,10] developed state-of-the-art surveys and guidelines for the application of the exponential smoothing methodology. Ref. [11] proposed a uniformly-sampled autoregressive moving-average model for a second-order linear stochastic system. Ref. [12] introduced an optimal procedure for the Boolean Kalman filter over a finite horizon. Ref. [13] presented a general benchmarking framework applicable to computational intelligence algorithms for solving forecasting problems. Ref. [14] proposed a new enhanced optimization model, based on a bagged echo state network improved by a differential evolution algorithm, to estimate energy consumption. Ref. [15] introduced a two-stage Bayesian optimization framework for scalable and efficient inference in state-space models.
The method proposed by [2] is one of the most effective exponential smoothing methods for forecasting data with a trend. The Holt method has a forecasting equation and two smoothing equations, one for the level of the series and one for the slope of the trend, as given in Equations (1)–(3).
x̂_{n+1} = l̂_n + b̂_n   (1)
l̂_n = λ_1 x_n + (1 − λ_1)(l̂_{n−1} + b̂_{n−1})   (2)
b̂_n = λ_2 (l̂_n − l̂_{n−1}) + (1 − λ_2) b̂_{n−1}   (3)
In Equations (1)–(3), λ_1 and λ_2 are the smoothing parameters of the mean level and the slope, respectively, and they take values between zero and one. The initial level and slope values are obtained by applying simple linear regression to the series. In addition, the trend and level update formulas are based on only one lag.
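As a concrete illustration, the recursions in Equations (1)–(3) can be sketched in a few lines of Python; the linear-regression initialization follows the description above, while the smoothing-parameter values in the usage note are illustrative only.

```python
import numpy as np

def holt_forecast(x, lam1, lam2, h=1):
    """Classical Holt linear-trend forecast (Equations (1)-(3))."""
    t = np.arange(1, len(x) + 1)
    slope, intercept = np.polyfit(t, x, 1)  # initial values from simple linear regression
    level, trend = intercept, slope
    for obs in x:
        prev_level = level
        level = lam1 * obs + (1 - lam1) * (prev_level + trend)    # Eq. (2): level update
        trend = lam2 * (level - prev_level) + (1 - lam2) * trend  # Eq. (3): trend update
    return level + h * trend                                      # Eq. (1): h-step-ahead forecast
```

For a perfectly linear series the recursions track the line exactly, e.g. `holt_forecast(np.arange(1.0, 11.0), 0.5, 0.5)` returns 11.0.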
In this study, the Holt method is modified by using time-varying smoothing parameters instead of parameters that are fixed over time, and the smoothing parameters of the mean level and the slope are obtained for each observation from first-order autoregressive models. The parameters of the autoregressive models are estimated by using the harmony search algorithm (HSA). With these contributions, the proposed method eliminates the initial-parameter determination problem. Moreover, the forecasts of the proposed method are obtained from the sampling distributions of the forecasts.
The proposed method is applied to Istanbul Stock Exchange data sets between the years 2000 and 2017 with different test-set lengths, and the obtained results are compared with those of many methods in the literature. Brief information on HSA is given in Section 2. The proposed method is introduced in Section 3, and the implementation results are given in Section 4. The final section presents the conclusions and discussion.

2. Harmony Search Algorithm

HSA was proposed by [16]. It is a heuristic algorithm that imitates the way musicians improvise: the members of an orchestra search, through the notes they play, for the melody with the best harmony. Just as a chromosome in the genetic algorithm or a particle in particle swarm optimization represents a solution, a harmony in the harmony memory represents a solution in HSA. Each musician corresponds to a decision variable, and each note in a musician's memory corresponds to a different candidate value of that decision variable; each harmony consists of one note per decision variable. HSA repeatedly checks whether a newly improvised solution vector is better than the worst solution in memory. The HSA is given below in steps in Algorithm 1.
Algorithm 1 The algorithm of HSA
Step 1. Determination of parameters to be used in HSA:
 • HM: harmony memory;
 • HMS: harmony memory size;
 • HMCR: harmony memory considering rate;
 • PAR: pitch adjusting rate;
 • n: the number of decision variables.
Step 2. Creation of the harmony memory.
 HM for HSA is generated as in Equation (4).
HM = [ x_11     x_12     …  x_1n
       x_21     x_22     …  x_2n
       ⋮         ⋮            ⋮
       x_HMS,1  x_HMS,2  …  x_HMS,n ] = [x_1; x_2; …; x_HMS]   (4)
 Here, x_ij, i = 1, 2, …, HMS; j = 1, 2, …, n, is a note value and is generated randomly.
 Each solution vector is denoted by x_i, i = 1, 2, …, HMS, so the memory holds HMS solution vectors. The representation of the first solution vector is given in Equation (5).
x_1 = (x_11, x_12, …, x_1n)   (5)
Step 3. Calculation of objective function values.
 The objective function values are calculated for each solution vector generated randomly as given in Equation (6).
[x_1; x_2; …; x_HMS] → [f(x_1); f(x_2); …; f(x_HMS)]   (6)
Step 4. Improvement of a new harmony.
 HMCR, a value between 0 and 1, is the probability of selecting a value from the existing values in the HM, while (1 − HMCR) is the probability of drawing a random value from the possible value range. The new harmony is obtained with the help of Equation (7).
x_j^new = { x_ij for a randomly selected i ∈ {1, 2, …, HMS},   if rnd < HMCR
          { a random value in [min_i(x_ij), max_i(x_ij)],      otherwise       (7)
 Whether a toning (pitch adjustment) process is applied to each decision variable selected with probability HMCR is decided by the PAR parameter, as given in Equation (8).
pitch adjustment of x_j^new = { Yes,  if rnd < PAR
                              { No,   otherwise       (8)
 In Equation (8), rnd is generated from U(0, 1). If this random number is smaller than PAR, the value is changed to a neighboring value: if x_j^new is the k-th value in the candidate-value vector of that variable, its new value is the (k + m)-th value, where m ∈ {…, −2, −1, 1, 2, …} is the neighboring index.
Step 5. Updating the harmony memory.
 If the new harmony vector is better than the worst vector in the H M , the worst vector is removed from the memory, and the new harmony vector is included in the HM instead of the removed vector.
Step 6. Stop condition check.
 Steps 4 and 5 are repeated until the termination criterion is met. Typical values reported in the literature for HMCR and PAR are 0.7–0.95 and 0.05–0.7, respectively [17].
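A minimal, generic implementation of Steps 1–6 might look as follows. The continuous pitch adjustment with a bandwidth `bw` is a common variant in the HSA literature rather than a detail fixed by the text above, and all parameter defaults are illustrative.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimise f over box constraints with a basic harmony search sketch.

    f      : objective function to minimise
    bounds : list of (low, high) pairs, one per decision variable
    """
    n = len(bounds)
    # Steps 2-3: fill the harmony memory with random vectors and evaluate them.
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        # Step 4: improvise a new harmony, one decision variable at a time.
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                   # memory consideration
                val = hm[random.randrange(hms)][j]
                if random.random() < par:                # pitch adjustment
                    val += random.uniform(-bw, bw) * (hi - lo)
                    val = min(max(val, lo), hi)
            else:                                        # random selection
                val = random.uniform(lo, hi)
            new.append(val)
        # Step 5: replace the worst harmony if the new one is better.
        worst = max(range(hms), key=lambda i: cost[i])
        c = f(new)
        if c < cost[worst]:
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]
```

For example, `harmony_search(lambda v: sum(x * x for x in v), [(-5, 5), (-5, 5)])` converges toward the origin on the two-dimensional sphere function.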

3. Proposed Method

Although the Holt method is an efficient forecasting method, it has some obvious problems that need to be resolved. The first is the determination of the initial trend and level values. The second is that the trend and level update formulas are based on only one lag. To avoid these problems and increase the forecasting performance of the Holt method, the advantages and innovations of the proposed method are listed below:
  • The smoothing parameters are varied from observation to observation using first-order autoregressive equations;
  • The optimal parameters of the Holt method are determined with HSA;
  • The forecasts are obtained by the Sub-sampling Bootstrap method.
The algorithm of the proposed method is also given in Algorithm 2.
Algorithm 2 The algorithm of the proposed method
Step 1. Determine the parameters of the training process:
 • the number of observations in the test set: n_test;
 • HMS;
 • HMCR;
 • PAR;
 • the number of bootstrap samples: nbst;
 • the bootstrap sample size: bss.
Step 2. Select bootstrap samples from the training set randomly.
 Steps 2.1 and 2.2 are repeated nbst times; x*_{t,j} denotes the j-th bootstrap time series.
Step 2.1. Select the starting point of the block (spb) as an integer drawn from a discrete uniform distribution on [1, n_train − bss + 1].
Step 2.2. Create bootstrap time series as given in Equation (9).
x*_{t,j} = (x_spb, x_spb+1, …, x_spb+bss−1),   j = 1, 2, …, nbst   (9)
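In Python, Steps 2.1–2.2 amount to drawing random contiguous blocks from the training series (function and variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def subsample_bootstrap(x_train, nbst, bss):
    """Draw nbst contiguous blocks of length bss from the training series
    (Steps 2.1-2.2 and Equation (9))."""
    n_train = len(x_train)
    samples = []
    for _ in range(nbst):
        spb = int(rng.integers(0, n_train - bss + 1))  # starting point of the block
        samples.append(x_train[spb:spb + bss])
    return samples
```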
Step 3. Apply simple linear regression to the bootstrap time series x*_{t,j} to determine the initial bounds for the level (L_0) and trend (B_0) parameters, using Equations (10)–(12).
X = [ 1  1  …  1
      1  2  …  bss ]ᵀ   (a bss × 2 design matrix)   (10)
Y = (x_spb, x_spb+1, …, x_spb+bss−1)ᵀ   (11)
β̂ = (β̂_0, β̂_1)ᵀ = (XᵀX)⁻¹ XᵀY   (12)
The initial level is searched in L_0 ∈ (β̂_0/2, 2β̂_0) and the initial trend in B_0 ∈ (β̂_1/2, 2β̂_1).
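Equations (10)–(12) are an ordinary least-squares fit of a straight line to one bootstrap block; a sketch in Python (the function name is illustrative, and the interval form assumes positive coefficients, as written above):

```python
import numpy as np

def initial_bounds(block):
    """Fit Y = b0 + b1*t by OLS (Equations (10)-(12)) and return the
    search intervals (b0/2, 2*b0) for L0 and (b1/2, 2*b1) for B0."""
    bss = len(block)
    X = np.column_stack([np.ones(bss), np.arange(1, bss + 1)])           # Eq. (10)
    beta, *_ = np.linalg.lstsq(X, np.asarray(block, float), rcond=None)  # Eq. (12)
    b0, b1 = beta
    return (b0 / 2, 2 * b0), (b1 / 2, 2 * b1)
```

For a block that lies exactly on the line 5 + 2t, the returned intervals are (2.5, 10) for the level and (1, 4) for the trend.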
Step 4. HSA is used to obtain the optimal parameters of the Holt method with autoregressive coefficients for each bootstrap time series. Steps 4.1 to 4.4 are repeated for each bootstrap time series.
Step 4.1. Generate the initial positions of HSA. The positions of a harmony are L_0, B_0, λ_1(0), λ_2(0), φ_11, φ_12, φ_21 and φ_22.
L_0 and B_0 are generated from U(β̂_0/2, 2β̂_0) and U(β̂_1/2, 2β̂_1), respectively. λ_1(0), λ_2(0), φ_11 and φ_21 are generated from U(0, 1), while φ_12 and φ_22 are generated from U(−1, 1). The creation of the harmony memory for the proposed method is given in Equation (13), and the parameters that correspond to the k-th harmony are given in Table 1.
HM = [ x_1^1    x_2^1    …  x_8^1
       x_1^2    x_2^2    …  x_8^2
       ⋮         ⋮            ⋮
       x_1^HMS  x_2^HMS  …  x_8^HMS ]   (13)
Step 4.2. According to the initial positions of each harmony, the fitness function is calculated. The root mean square error (RMSE) is used as the fitness function, calculated as given in Equation (14).
f_i = RMSE_i = sqrt( (1/bss) Σ_{t=1}^{bss} (x*_{t,j} − x̂_{t,j})² ),   i = 1, 2, …, HMS   (14)
 In Equation (14), x̂_{t,j} is the output for the j-th bootstrap time series under the i-th harmony, obtained by using Equations (15)–(19).
λ_1(t) = φ_11 + φ_12 λ_1(t−1)   (15)
λ_2(t) = φ_21 + φ_22 λ_2(t−1)   (16)
L_t = λ_1(t) x*_{t,j} + (1 − λ_1(t))(L_{t−1} + B_{t−1})   (17)
B_t = λ_2(t)(L_t − L_{t−1}) + (1 − λ_2(t)) B_{t−1}   (18)
x̂_{t+1,j} = L_t + B_t   (19)
 Obtain RMSE values for each harmony, and save the best harmony which has the smallest RMSE.
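Putting Equations (14)–(19) together, the fitness of one harmony on one bootstrap block can be computed as below. The parameter order follows Table 1; the clipping of the AR(1)-generated weights into (0, 1) is an added safeguard for illustration, not something specified above.

```python
import numpy as np

def fitness(params, block):
    """RMSE of the Holt method with AR(1) smoothing parameters
    (Equations (14)-(19)) for one harmony on one bootstrap block."""
    L, B, lam1, lam2, p11, p12, p21, p22 = params  # Table 1 order
    errs = []
    pred = L + B  # one-step-ahead forecast of the first observation
    for x in block:
        errs.append(x - pred)
        # Eqs. (15)-(16): AR(1) updates, clipped to keep the weights in (0, 1).
        lam1 = min(max(p11 + p12 * lam1, 1e-6), 1 - 1e-6)
        lam2 = min(max(p21 + p22 * lam2, 1e-6), 1 - 1e-6)
        L_prev = L
        L = lam1 * x + (1 - lam1) * (L + B)          # Eq. (17)
        B = lam2 * (L - L_prev) + (1 - lam2) * B     # Eq. (18)
        pred = L + B                                 # Eq. (19)
    return float(np.sqrt(np.mean(np.square(errs))))  # Eq. (14)
```

A harmony whose updates reproduce a linear block exactly yields an RMSE of zero.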
Step 4.3. Improvise a new harmony.
 HMCR is the probability that the value of a decision variable is selected from the current harmony memory; with probability (1 − HMCR), the new decision variable is instead drawn randomly from the solution space X. The new harmony x_i' is obtained as in Equation (20).
x_i' = { x_i ∈ {x_i^1, x_i^2, …, x_i^HMS},   if rnd < HMCR
       { x_i ∈ X,                            otherwise       (20)
 After this step, each decision variable is evaluated to determine whether a pitch adjustment is applied; this is governed by the PAR parameter, the pitch adjusting rate. The adjusted harmony is produced as given in Equation (21).
x_i' = { x_i + rnd(0, 1) · bw,   if rnd < PAR
       { x_i,                    otherwise       (21)
 Here, bw is a randomly selected bandwidth, and rnd(0, 1) is a random number generated between 0 and 1.
Step 4.4. Harmony memory update.
 In this step, the comparison between the newly created harmonies and the worst harmonies in the memory is made in terms of the values of the objective functions. If the newly created harmony vector is better than the worst harmony, the worst harmony vector is removed from the memory, and the new harmony vector is substituted for it.
 Calculate the RMSE value of each harmony for the j-th bootstrap time series, and keep the best harmony, i.e., the one with the minimum RMSE for that series.
Step 5. Calculate the forecasts for test data by using the best harmony for each bootstrap sample and their statistics.
 The forecast obtained from the updated equations for the j-th bootstrap time series at time t is denoted by F_t^j. The forecasts and their statistics are calculated as in Table 2. In addition, the flowchart of the proposed method is given in Figure 1.
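Given the matrix of bootstrap forecasts, the summary statistics of Table 2 reduce to row-wise medians and dispersions. Here `np.std` with `ddof=1` is used as the dispersion measure, which is one plausible reading of the SE column in Table 2.

```python
import numpy as np

def aggregate_forecasts(F):
    """F has shape (ntest, nbst): the forecast of each bootstrap sample at
    each test time.  Returns, per test time, the median forecast and the
    sample standard deviation across bootstrap samples (cf. Table 2)."""
    median = np.median(F, axis=1)
    spread = np.std(F, axis=1, ddof=1)
    return median, spread
```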

4. Applications

To evaluate the performance of the proposed method, it was applied to Istanbul Stock Exchange (BIST) data sets observed daily between the years 2000 and 2017, with test-set lengths of 10 and 20. The proposed method is compared with the ATA method proposed by [8], the Holt method, the fuzzy regression functions approach (FF) proposed by [18], the random walk (RW), multilayer perceptron artificial neural networks (MLP-ANN), and the adaptive neural-fuzzy inference system (ANFIS) proposed by [19]. For a fair comparison, both statistical and computational intelligence forecasting methods were used: the random walk as a simple benchmark; the Holt and ATA methods as statistical forecasting methods; and MLP-ANN, ANFIS, and FF as computational intelligence methods. In the analysis, the number of bootstrap samples and the bootstrap sample size were both set to 100 for each data set. The RMSE and MAPE criteria were used to compare the methods. The mean absolute percentage error (MAPE) is one of the most widely used measures of forecast accuracy, owing to its scale-independency and interpretability [20], while RMSE is considered an excellent general-purpose error metric for numerical predictions [21]. Table 3 gives all analysis results for each data set for the RMSE criterion when the length of the test set is 10.
In Table 3, the proposed method is the best of all compared methods for 59% of the series in terms of the RMSE criterion when the test-set length is 10. To obtain an overall comparison, we rank the methods for each analyzed time series and compute average rank values: for each series, the method with the lowest RMSE receives rank 1. The average rank values for the RMSE criterion when the length of the test set is 10 are shown in Figure 2.
From Figure 2, it is seen that the proposed method has the minimum average rank among all methods, so it is the best method for the RMSE criterion when the length of the test set is 10. In addition, Table 4 gives all analysis results for each data set for the MAPE criterion, defined in Equation (22), when the length of the test set is 10.
MAPE = (1/n_test) Σ_{t=1}^{n_test} |X_t − X̂_t| / X_t   (22)
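Equation (22) in code form:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error (Equation (22))."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)))
```

For example, `mape([100.0, 200.0], [110.0, 180.0])` returns 0.1, i.e., a 10% mean absolute percentage error.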
In Table 4, the proposed method is the best for 39% of the series in terms of the MAPE criterion when the test-set length is 10. Looking at the rank evaluation results for the MAPE criterion given in Figure 3, the proposed method is in third place among all methods.
Table 5 gives all analysis results for each data set for the RMSE criterion when the length of the test set is 20. In Table 5, the proposed method is the best for 61% of the series, and in many of the remaining series it is the second-best method. The rank evaluation results for the RMSE criterion when the length of the test set is 20 are given in Figure 4. In addition, Table 6 gives all analysis results for each data set for the MAPE criterion when the length of the test set is 20.
When the analysis results given in Table 6 are examined, even in the analyses in which the proposed method is not the best method, the proposed method often appears to be either the second-best or third-best method. We examine rank values to verify and highlight these results given in Figure 5.
Considering the average rank obtained from all methods, it can be said that the proposed method for the MAPE criterion has more successful results than other methods. As a final comment, when all analysis results are examined, it can be said from both average rank results and analysis results that the proposed method is a more successful method than other methods used in the comparison.

5. Conclusions and Discussion

Although the Holt method is a traditional time series forecasting method, it has some known problems, such as the determination of the initial trend and level values and the fact that the trend and level update formulas are based on only one lag. In this study, to overcome these problems, the parameters of the Holt method are optimized by using HSA, the smoothing parameters are allowed to vary over time through first-order autoregressive equations, and the forecasting performance is improved by using the subsampling bootstrap method.
When the classical Holt method and the proposed method are compared, it is clear that the time-varying smoothing parameters and HSA provide important improvements in the forecasting results: the proposed method produces smaller RMSE values than the classical Holt method in about 70% of all analyses. The proposed method needs more computation time than the classical Holt method because it uses the bootstrap and HSA, as expected. However, its computation time is close to that of the computational intelligence forecasting methods and is not a problem for today's personal computers; for a BIST series, it is about three minutes.
In future studies, different artificial intelligence optimization techniques can be used to determine the optimal parameters of the Holt method, or the forecasts can be obtained by different bootstrap methods.

Author Contributions

Conceptualisation, E.E. and E.B.; methodology, E.E., U.Y. and E.B.; software, E.E. and E.B.; validation, E.E., U.Y. and E.B.; formal analysis, E.B.; investigation, E.E., U.Y. and E.B.; writing—original draft preparation, E.E. and E.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data set is available at https://datastore.borsaistanbul.com/. The access date is 1 November 2020.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brown, R.G. Statistical Forecasting for Inventory Control; McGraw-Hill: New York, NY, USA, 1959. [Google Scholar]
  2. Holt, C.E. Forecasting Seasonals and Trends by Exponentially Weighted Averages (O.N.R. Memorandum No. 52); Carnegie Institute of Technology Pittsburgh: Pittsburgh, PA, USA, 1957. [Google Scholar]
  3. Winters, P.R. Forecasting sales by exponentially weighted moving averages. Manag. Sci. 1960, 6, 324–342. [Google Scholar] [CrossRef]
  4. Gardner, E.S., Jr.; McKenzie, E. Forecasting trends in time series. Manag. Sci. 1985, 31, 1237–1246. [Google Scholar] [CrossRef]
  5. Makridakis, S.G.; Fildes, R.; Hibon, M.; Parzen, E. The forecasting accuracy of major time series methods. J. R. Stat. Soc. Ser. D Stat. 1985, 34, 261–262. [Google Scholar]
  6. Makridakis, S.; Hibon, M. The M3-competition: Results, conclusions and implications. Int. J. Forecast. 2000, 16, 451–476. [Google Scholar] [CrossRef]
  7. Koning, A.J.; Franses, P.H.; Hibon, M.; Stekler, H.O. The M3 competition: Statistical tests of the results. Int. J. Forecast. 2005, 21, 397–409. [Google Scholar] [CrossRef]
  8. Yapar, G.; Selamlar, H.T.; Capar, S.; Yavuz, I. ATA method. Hacet. J. Math. Stat. 2019, 48, 1838–1844. [Google Scholar] [CrossRef]
  9. Gardner, E.S., Jr. Exponential smoothing. The state of the art. J. Forecast. 1985, 4, 1–28. [Google Scholar] [CrossRef]
  10. Gardner, E.S., Jr. Exponential smoothing: The state of the art—Part II. Int. J. Forecast. 2006, 22, 637–666. [Google Scholar] [CrossRef]
  11. Pandit, S.M.; Wu, S.M. Exponential smoothing as a special case of a linear stochastic system. Oper. Res. 1974, 22, 868–879. [Google Scholar] [CrossRef]
  12. Imani, M.; Braga-Neto, U.M. Optimal finite-horizon sensor selection for Boolean Kalman Filter. In Proceedings of the 2017 51st Asilomar Conference on Signals, Systems, and Computers, IEEE, Pacific Grove, CA, USA, 29 October–1 November 2017; pp. 1481–1485. [Google Scholar]
  13. Oprea, M. A general framework and guidelines for benchmarking computational intelligence algorithms applied to forecasting problems derived from an application domain-oriented survey. Appl. Soft Comput. 2020, 89, 106103. [Google Scholar] [CrossRef]
  14. Hu, H.; Wang, L.; Peng, L.; Zeng, Y.R. Effective energy consumption forecasting using enhanced bagged echo state network. Energy 2020, 193, 116778. [Google Scholar] [CrossRef]
  15. Imani, M.; Ghoreishi, S.F. Two-Stage Bayesian Optimization for Scalable Inference in State-Space Models. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–12. [Google Scholar] [CrossRef]
  16. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  17. Geem, Z.W. Optimal cost design of water distribution networks using harmony search. Eng. Optim. 2006, 38, 259–277. [Google Scholar] [CrossRef]
  18. Turkşen, I.B. Fuzzy functions with LSE. Appl. Soft Comput. 2008, 8, 1178–1188. [Google Scholar] [CrossRef]
  19. Jang, J.S. Anfis: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  20. Kim, S.; Kim, H. A new metric of absolute percentage error for intermittent demand forecasts. Int. J. Forecast. 2016, 32, 669–679. [Google Scholar] [CrossRef]
  21. Neill, S.P.; Hashemi, M.R. Ocean Modelling for Resource Characterization. Fundam. Ocean Renew. Energy 2018, 193–235. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the proposed method.
Figure 2. The average rank values of each method for RMSE criterion when the length of the test set is 10.
Figure 3. The average rank values of each method for MAPE criterion when the length of the test set is 10.
Figure 4. The average rank values of each method for RMSE criterion when the length of the test set is 20.
Figure 5. The average rank values of each method for MAPE criterion when the length of the test set is 20.
Table 1. The parameters corresponding to kth harmony.
x_1^k | x_2^k | x_3^k | x_4^k | x_5^k | x_6^k | x_7^k | x_8^k
L_0 | B_0 | λ_1(0) | λ_2(0) | φ_11 | φ_12 | φ_21 | φ_22
Table 2. Forecasts for bootstrap samples.
Time (t) \ Bootstrap sample | 1 | 2 | … | nbst | Median | Standard deviation
1 | F_1^1 | F_1^2 | … | F_1^nbst | F̂_1 | SE(F̂_1)
2 | F_2^1 | F_2^2 | … | F_2^nbst | F̂_2 | SE(F̂_2)
⋮ | ⋮ | ⋮ | … | ⋮ | ⋮ | ⋮
n_test | F_ntest^1 | F_ntest^2 | … | F_ntest^nbst | F̂_ntest | SE(F̂_ntest)
Table 3. All analysis results for each data set for RMSE criterion when the length of the test set is 10.
Data | ATA | Holt | FF | RW | MLP-ANN | ANFIS | PP
BIST2000 | 279.79 | 296.17 | 310.42 | 286.15 | 343.9 | 619.21 | 278.82
BIST2001 | 204.84 | 237.69 | 272.31 | 206.5 | 1106.89 | 710.82 | 189.75
BIST2002 | 325.08 | 319.78 | 357 | 331.87 | 620.78 | 399.13 | 332.13
BIST2003 | 354.79 | 355.55 | 380.82 | 349.79 | 1859.21 | 420.75 | 328.25
BIST2004 | 315.62 | 315.79 | 390.15 | 325.69 | 1807.8 | 641.43 | 313.7
BIST2005 | 316.75 | 315.36 | 328.84 | 342.69 | 2071.98 | 559.2 | 304.98
BIST2006 | 354.03 | 348.58 | 352.07 | 356.81 | 423.98 | 389.3 | 346.98
BIST2007 | 768.29 | 734.55 | 673.14 | 734.14 | 897.02 | 550.97 | 728.92
BIST2008 | 283.99 | 277.2 | 256.98 | 253.67 | 444.74 | 340.41 | 260.52
BIST2009 | 505.05 | 483.8 | 558.06 | 551.97 | 3117.96 | 736.78 | 473.4
BIST2010 | 577.68 | 594.9 | 583.52 | 591.88 | 725.15 | 588.36 | 576.4
BIST2011 | 697.64 | 710.04 | 849.68 | 726.51 | 733.88 | 1037.87 | 737.83
BIST2012 | 355.5 | 350.46 | 368.26 | 358.17 | 3237.45 | 406.68 | 358.15
BIST2013 | 1905.64 | 1898.61 | 2105.05 | 1922.14 | 4369.35 | 2104.39 | 1871.45
BIST2014 | 1068.36 | 1025.18 | 1177.56 | 1059.97 | 2631.25 | 1435.01 | 1036.6
BIST2015 | 772.84 | 767.71 | 758.89 | 779.07 | 1080.69 | 714.69 | 751.41
BIST2016 | 431.86 | 433.52 | 450.67 | 434.01 | 520.1 | 424.34 | 652.25
BIST2017 | 861.26 | 869.23 | 1113.74 | 911.21 | 3777.62 | 1283.75 | 827.15
Table 4. All analysis results for each data set for the MAPE criterion when the length of the test set is 10.
Data | ATA | Holt | FF | RW | MLP-ANN | ANFIS | PP
BIST2000 | 0.0222 | 0.0233 | 0.0268 | 0.0236 | 0.0293 | 0.0507 | 0.0223
BIST2001 | 0.011 | 0.0124 | 0.0161 | 0.0112 | 0.0818 | 0.0506 | 0.0103
BIST2002 | 0.0253 | 0.0241 | 0.0267 | 0.0256 | 0.043 | 0.0287 | 0.0256
BIST2003 | 0.0163 | 0.0163 | 0.0178 | 0.0162 | 0.1008 | 0.0208 | 0.0154
BIST2004 | 0.0099 | 0.01 | 0.0129 | 0.0103 | 0.0735 | 0.0241 | 0.0099
BIST2005 | 0.0068 | 0.0069 | 0.0069 | 0.0074 | 0.0519 | 0.0126 | 0.0066
BIST2006 | 0.0068 | 0.0066 | 0.007 | 0.0067 | 0.0082 | 0.0076 | 0.0073
BIST2007 | 0.0098 | 0.01 | 0.0087 | 0.0095 | 0.0138 | 0.008 | 0.0095
BIST2008 | 0.0082 | 0.0075 | 0.0073 | 0.0071 | 0.0154 | 0.0093 | 0.0075
BIST2009 | 0.0067 | 0.0066 | 0.0077 | 0.0076 | 0.0595 | 0.0114 | 0.0071
BIST2010 | 0.006 | 0.0064 | 0.0058 | 0.0063 | 0.0085 | 0.0068 | 0.0061
BIST2011 | 0.0113 | 0.0116 | 0.0137 | 0.0118 | 0.0316 | 0.0165 | 0.0119
BIST2012 | 0.004 | 0.0039 | 0.004 | 0.0039 | 0.041 | 0.0042 | 0.004
BIST2013 | 0.022 | 0.0219 | 0.0254 | 0.0223 | 0.0608 | 0.0258 | 0.0216
BIST2014 | 0.0092 | 0.009 | 0.0101 | 0.0094 | 0.0304 | 0.0118 | 0.0089
BIST2015 | 0.0083 | 0.0082 | 0.0078 | 0.0082 | 0.0109 | 0.0087 | 0.0082
BIST2016 | 0.0049 | 0.0048 | 0.0047 | 0.0048 | 0.0051 | 0.0046 | 0.0062
BIST2017 | 0.0052 | 0.0053 | 0.0076 | 0.0058 | 0.0318 | 0.0092 | 0.0049
Table 5. All analysis results for each data set for the RMSE criterion when the length of the test set is 20.
Data | ATA | Holt | FF | RW | MLP-ANN | ANFIS | PP
BIST2000 | 680.61 | 680.33 | 713.87 | 682.74 | 2868.94 | 825.58 | 681.58
BIST2001 | 315.19 | 326.20 | 372.36 | 312.96 | 1030.82 | 540.36 | 296.32
BIST2002 | 388.51 | 389.17 | 390.47 | 393.48 | 392.16 | 432.21 | 383.70
BIST2003 | 313.25 | 339.08 | 456.83 | 311.18 | 2201.77 | 558.18 | 288.38
BIST2004 | 329.12 | 329.30 | 366.48 | 335.16 | 1479.79 | 554.62 | 319.35
BIST2005 | 426.84 | 415.74 | 496.57 | 433.66 | 2940.74 | 632.79 | 463.17
BIST2006 | 539.71 | 551.20 | 581.55 | 547.72 | 742.07 | 625.98 | 556.77
BIST2007 | 814.90 | 783.40 | 789.45 | 774.91 | 854.08 | 660.30 | 762.16
BIST2008 | 575.72 | 571.80 | 589.64 | 542.31 | 766.02 | 624.59 | 541.21
BIST2009 | 492.91 | 510.09 | 518.55 | 516.25 | 2794.96 | 623.04 | 492.17
BIST2010 | 867.04 | 921.85 | 885.97 | 850.14 | 1193.33 | 965.97 | 864.93
BIST2011 | 757.81 | 728.63 | 849.50 | 790.69 | 1141.08 | 772.13 | 774.14
BIST2012 | 592.96 | 564.85 | 605.32 | 544.81 | 5641.93 | 1224.80 | 517.44
BIST2013 | 1687.26 | 1680.69 | 1888.99 | 1709.07 | 2453.56 | 1821.80 | 1669.36
BIST2014 | 1318.63 | 1315.91 | 1323.78 | 1315.91 | 1936.51 | 1610.11 | 1318.91
BIST2015 | 1242.98 | 1263.71 | 1223.85 | 1225.07 | 2322.70 | 1189.75 | 1213.33
BIST2016 | 650.22 | 662.26 | 648.96 | 599.52 | 699.81 | 728.46 | 604.62
BIST2017 | 1010.73 | 1011.04 | 1165.70 | 1031.37 | 2981.64 | 1134.55 | 833.03
Table 6. All analysis results for each data set for the MAPE criterion when the length of the test set is 20.
Data | ATA | Holt | FF | RW | MLP-ANN | ANFIS | PP
BIST2000 | 0.0540 | 0.0547 | 0.0615 | 0.0557 | 0.3091 | 0.0748 | 0.0546
BIST2001 | 0.0176 | 0.0182 | 0.0212 | 0.0175 | 0.0746 | 0.0355 | 0.0178
BIST2002 | 0.0261 | 0.0263 | 0.0275 | 0.0272 | 0.0260 | 0.0311 | 0.0269
BIST2003 | 0.0145 | 0.0147 | 0.0219 | 0.0146 | 0.1216 | 0.0242 | 0.0144
BIST2004 | 0.0104 | 0.0101 | 0.0121 | 0.0108 | 0.0587 | 0.0184 | 0.0107
BIST2005 | 0.0087 | 0.0082 | 0.0097 | 0.0091 | 0.0744 | 0.0134 | 0.0096
BIST2006 | 0.0098 | 0.0102 | 0.0109 | 0.0102 | 0.0143 | 0.0124 | 0.0105
BIST2007 | 0.0113 | 0.0108 | 0.0106 | 0.0104 | 0.0127 | 0.0095 | 0.0105
BIST2008 | 0.0180 | 0.0175 | 0.0193 | 0.0164 | 0.0223 | 0.0183 | 0.0167
BIST2009 | 0.0076 | 0.0079 | 0.0080 | 0.0080 | 0.0529 | 0.0097 | 0.0074
BIST2010 | 0.0101 | 0.0112 | 0.0103 | 0.0099 | 0.0129 | 0.0125 | 0.0100
BIST2011 | 0.0118 | 0.0110 | 0.0131 | 0.0124 | 0.0188 | 0.0117 | 0.0121
BIST2012 | 0.0063 | 0.0061 | 0.0064 | 0.0058 | 0.0728 | 0.0142 | 0.0056
BIST2013 | 0.0180 | 0.0179 | 0.0206 | 0.0182 | 0.0278 | 0.0202 | 0.0180
BIST2014 | 0.0121 | 0.0121 | 0.0122 | 0.0122 | 0.0203 | 0.0151 | 0.0120
BIST2015 | 0.0140 | 0.0145 | 0.0133 | 0.0135 | 0.0285 | 0.0129 | 0.0134
BIST2016 | 0.0065 | 0.0065 | 0.0059 | 0.0058 | 0.0068 | 0.0067 | 0.0058
BIST2017 | 0.0073 | 0.0073 | 0.0088 | 0.0077 | 0.0246 | 0.0087 | 0.0065
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Bas, E.; Egrioglu, E.; Yolcu, U. Bootstrapped Holt Method with Autoregressive Coefficients Based on Harmony Search Algorithm. Forecasting 2021, 3, 839-849. https://doi.org/10.3390/forecast3040050
