Article

Evaluating the Effectiveness of Modern Forecasting Models in Predicting Commodity Futures Prices in Volatile Economic Times

1 Doctoral School of Management and Business Administration, Hungarian University of Agriculture and Life Sciences, Kaposvári Campus, 7400 Kaposvár, Hungary
2 Department of Statistics, Finances and Controlling, Széchenyi István University, 9026 Győr, Hungary
3 Department of Investment, Finance and Accounting, Hungarian University of Agriculture and Life Sciences, Kaposvári Campus, 7400 Kaposvár, Hungary
* Author to whom correspondence should be addressed.
Risks 2023, 11(2), 27; https://doi.org/10.3390/risks11020027
Submission received: 28 December 2022 / Revised: 9 January 2023 / Accepted: 17 January 2023 / Published: 22 January 2023
(This article belongs to the Special Issue New Advance of Risk Management Models)

Abstract

The paper seeks to answer the question of which forecasting techniques give the most accurate results in the commodity futures market and how price forecasting can contribute to risk management. Two families of models (decision trees, artificial intelligence) were used to produce estimates for 2018 and 2022 over 21- and 125-day horizons. The main findings of the study are that estimation accuracy is higher in a calm economic environment (an error of 1.5% vs. 4%), and that the AI-based estimation methods provide the most accurate estimates for both time horizons. These models provide the most accurate forecasts over short and medium time periods. Incorporating these forecasts into enterprise risk management (ERM) can significantly help to hedge purchase prices. Artificial intelligence-based models are becoming increasingly widely available and can achieve significantly better accuracy than other approximations.

1. Introduction

Forecasting commodity prices can affect the performance of companies. The turnover of trading firms and firms producing goods is affected by future prices. The costs of users of commodities are also determined by the prices of these products. The magnitude of profits naturally affects the value of firms (Carter et al. 2017). Commodity prices should be taken into account when planning revenues and costs. Hedging strategies should be developed to reduce profit volatility. Hedging the risk of price changes also has costs. Therefore, more accurate forecasting of commodity price movements is a value-adding factor in the above considerations.
Changes in commodity prices also affect the cost of credit (Donders et al. 2018). Of course, the volatility of a firm’s profits also affects the perception of credit risk. Higher volatility leads to higher financing costs. Proper forecasting of commodity price movements can support the design of a strategy to reduce profit volatility.
A variety of methods are used to forecast the prices of different commodities and shares. These are usually grouped into three main categories: the first includes traditional statistical methods, the second artificial intelligence-based methods, and the third so-called hybrid methods (Kim and Won 2018; Vidal and Kristjanpoller 2020; Zolfaghari and Gholami 2021). In our paper, we focus only on the group of predictive models based on artificial intelligence, which includes the following algorithms: ANNs (Artificial Neural Networks), DNNs (Deep Neural Networks), GAs (Genetic Algorithms), SVM (Support Vector Machine), and FNNs (Fuzzy Neural Networks). Artificial intelligence-based models have several advantages over traditional statistical models because of their ability to capture complexity and their much higher predictive accuracy. Because of their learning ability, AI-based models can recognise patterns in the data, such as non-linear movements. Prices exhibit non-stationary and non-linear movements that traditional statistical models are unable to detect, and AI-based methodologies have taken the lead in this area over time.
In their study, Gonzalez Miranda and Burgess (1997) modelled the implied volatility of IBEX35 index options using a multi-layer perceptron neural network over the period November 1992 to June 1994. Their results show that forecasting with nonlinear NNs generally produces results that dominate forecasts from traditional linear methods. This is because the NN takes into account potentially complex nonlinear relationships that traditional linear models cannot handle well.
Hiransha et al. (2018) produced forecasts of stock price movements on the NSE and NYSE. They based their analysis on the following models: multi-layer perceptron, RNN, LSTM (Long-Short-Term Memory) and CNN (Convolutional Neural Network). In their empirical analysis, CNN performed the best. The results were also compared with the outputs of the ARIMA method, and in this comparison, CNN was again the optimal choice.
Ormoneit and Neuneier (1996) used a multilayer perceptron and a density-estimating neural network to predict the volatility of the DAX index for the period January 1983 to May 1991. Comparing the two models, they concluded that the density-estimating neural network outperformed the perceptron method without a specific target distribution.
Hamid and Iqbal (2004) applied the ANN methodology to predict the volatility of S&P 500 index futures. From their empirical analysis, they concluded that ANNs’ forecasts are better than implied volatility estimation models.
Ou and Wang (2009) conducted research on trend forecasting of the Hang Seng index using tree-based classification, K-nearest neighbor (KNN), SVM, Bayesian clustering and neural network models. The final result of the analysis showed that SVM is able to outperform the other predictive methods.
Ballings et al. (2015) compared the AdaBoost, Random Forest, Kernel factory, SVM, KNN, logistic regression and ANN methods using stock price data from European companies. They tried to predict stock price trajectories one year ahead. The final result showed that Random Forest was the best performer.
Nabipour et al. (2020) compared the predictive ability of nine different machine learning and two deep learning algorithms (Recurrent Neural Network, Long-Short-Term Memory) on stock data of financial, oil, non-metallic mineral and metallic materials companies on the Tehran Stock Exchange. They concluded that RNN and LSTM outperformed all other predictive models.
Long et al. (2020) used machine learning (Random Forest, Adaptive Boosting), bi-directional deep learning (BiLSTM) and other neural network models to investigate the predictability of Chinese stock prices. BiLSTM was able to achieve the highest performance, far outperforming the other forecasting methods.
Fischer and Krauss (2018) examined data from the S&P500 index between 1992 and 2015. Random Forest, logistic regression and LSTM were used for the forecasts. Their final conclusion was that the long short-term memory algorithm gave the best results.
Nelson et al. (2017) applied the multi-layer perceptron, Random Forest, and LSTM models to Brazilian stock market data to answer the question of which of the three models is the most accurate predictor. They concluded that the LSTM was the most accurate.
Nikou et al. (2019) analysed the daily price movements of the iShares MSCI UK exchange-traded fund over the period January 2015 to June 2018. ANN, Support Vector Machine (SVM), Random Forest and LSTM models were used to generate the predicted values. LSTM obtained the best score, while SVM was the second-most accurate.
Recent research (van der Lugt and Feelders 2019; Hajiabotorabi et al. 2019) comparing the predictive ability of ANNs and RNNs concluded that RNNs can outperform traditional neural networks. Also prominent among these methods is the long-short-term memory (LSTM) model, which has been applied to a wide range of sequential datasets. This model variant has the advantage of showing high adaptability in the analysis of time series (Petersen et al. 2019).
In their research, Thi Kieu Tran et al. (2020) demonstrated that the temporal impact of past information is not taken into account by ANNs for predicting time series, and therefore deep learning methods (DNN) have recently been increasingly used. A prominent group of these are Recurrent Neural Networks (RNNs), which have the advantage of providing feedback in their architecture.
Kaushik and Giri (2020) compared LSTM, vector autoregression (VAR) and SVM for predicting exchange rate changes. Their analysis revealed that the LSTM model outperformed both SVM and VAR methods in forecasting.
Basak et al. (2019) used XGBoost, logistic regression, SVM, ANN and Random Forest to predict stock market trends. The results showed that Random Forest outperformed the others.
Siami-Namini et al. (2018) examined data from the S&P500 and Nikkei 225 indices in their study. The final conclusion was that the superiority of LSTM over ARIMA prevailed.
Liu (2019) focused on the prediction of the S&P500 index and Apple stock price in his study. He concluded that over a longer forecasting time horizon, LSTM and SVM outperform the GARCH model.
Based on the above, the LSTM is considered to be quite good in terms of predictive performance, but it has a serious shortcoming, namely that it cannot represent the multi-frequency characteristics of time series, and therefore it does not allow the frequency domain of the data to be modelled. To overcome this problem, Zhang et al. (2017) proposed the use of Fourier transform to extract time-frequency information. In their research, they combined this method with a neural network model; however, these two types are mutually exclusive since the information content of the time domain is not included in the frequency domain and the information of the frequency domain does not appear in the time domain. This ambiguity is addressed by the wavelet transform (WT). This and the ARIMA model were compared by Skehin et al. (2018) with respect to FAANG (Facebook, Apple, Amazon, Netflix, Google) stocks listed on the NASDAQ. They concluded that in all cases except Apple, ARIMA outperformed WT-LSTM for the next-day stock price prediction.
Liang et al. (2019) investigated the predictive performance of the traditional LSTM and the LSTM model augmented with wavelet transform on S&P500 index data. Their work demonstrated that WT-LSTM can outperform the traditional long-short-term memory method.
Liang et al. (2022) studied the evolution of the gold price. They propose a novel decomposition-based model to predict the price. First, the series is decomposed into sub-layers of different frequencies. Then, a joint long short-term memory, convolutional neural network and convolutional block attention module (LSTM-CNN-CBAM) model produces a forecast for each sub-layer. The last step is to aggregate the partial results. Their results show that the collaboration between LSTM, CNN and CBAM can enhance the modelling capability and improve prediction accuracy. In addition, the ICEEMDAN decomposition algorithm can further improve the accuracy of the prediction, performing better than other decomposition methods.
Nickel is becoming an increasingly important raw material as electromobility develops. Ozdemir et al. (2022) discuss the medium- and long-term price forecast of nickel in their study. They employ two advanced deep learning architectures, namely LSTM and GRU. The MAPE criterion is used to evaluate the forecasting performance. For both models, their forecasting ability has been demonstrated. In addition to the prediction capability, the speed of the calculations is tested. When processing high resolution data, speed can be an important factor. The study found that GRU networks were 33% faster than LSTM networks.
For copper price prediction, Luo et al. (2022) propose a two-phase prediction architecture. The first phase is the initial forecasting phase. The second phase is error correction. In the first phase, factors that could affect the price of copper are selected. After selecting the three most influential factors, a GALSTM model is developed. This is necessary to make the initial forecasts. A 30-year historical data series is then used to validate the model.
Companies are diverse in terms of financial risk (Ali et al. 2022). A more accurate price forecasting model can contribute to better enterprise risk management. It can help reduce earnings volatility. It can also support the development of a more effective hedging strategy.
The study aims to test modern forecasting techniques. Two families of models (decision trees, artificial intelligence) are used to produce estimates. The question is which of the tested techniques gives more accurate forecasting results. The tests are carried out for eight commodities across the categories of oil, gas, and precious metals.

2. Data and Methods

The most commonly used metrics in the literature for evaluating the predictive models and assessing their accuracy are root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) (Nti et al. 2020).
(a)
Root mean squared error (RMSE): this performance indicator shows an estimation of the residuals between actual and predicted values.
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$
where $\hat{y}_i$ is the estimated value produced by the model, $y_i$ is the actual value, and $n$ is the number of observations.
(b)
Mean Absolute Error (MAE): this indicator measures the average magnitude of the error in a set of predictions.
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$
(c)
Mean Absolute Percentage Error (MAPE): this indicator measures the average magnitude of the error in a set of predictions and shows the deviations in percentage.
$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$$
The forecasts are more reliable and accurate when these indicators take lower values. It is important to note that RMSE penalizes larger deviations more heavily due to squaring, so this metric can give more extreme values than MAE. RMSE and MAE are expressed in the units of the underlying price, while MAPE is expressed as a percentage (deviations relative to the actual value). For this reason, MAPE can be used to compare different instruments because it does not depend on the nominal size of the price. As our study examined instruments from around the world and the effects of two different negative economic events, we used the MAPE indicator for comparability in the overall assessment of the models.
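As a quick illustration, the three error metrics above can be computed directly from their definitions; this is a minimal NumPy sketch with made-up price values, not data from the study:

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean squared error: square root of the mean squared residual."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    """Mean absolute error: average magnitude of the residuals."""
    return np.mean(np.abs(y - y_hat))

def mape(y, y_hat):
    """Mean absolute percentage error, expressed as a percentage."""
    return np.mean(np.abs((y - y_hat) / y)) * 100

# Illustrative actual and predicted prices
y = np.array([100.0, 102.0, 101.0, 105.0])
y_hat = np.array([101.0, 101.0, 103.0, 104.0])

print(rmse(y, y_hat))  # ~1.32: the single 2-unit miss is penalised most here
print(mae(y, y_hat))   # 1.25
print(mape(y, y_hat))  # ~1.23%: scale-free, comparable across instruments
```

Note that MAE never exceeds RMSE, and the gap widens with large deviations, which is one reason the scale-free MAPE is used here for cross-instrument comparison.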
  • Support Vector Machine (SVM)
The SVM is used to predict time series where the behaviour of the variables is not constant or where classical methods are not justified due to high complexity. SVRs (Support Vector Regressions) are a subtype of SVM used to predict future price paths. SVM is able to eliminate irrelevant and high-variance data in the predictive process and improve the accuracy of the prediction. SVM is based on structural risk minimization, taken from statistical learning theory, and can be used in financial data modelling without strong distributional assumptions. SVM performs a linear classification of the data that seeks to maximize the margin. The optimal fit is found using quadratic programming, a well-known method for solving constrained optimization problems. Prior to the linear classification, the data are mapped by a function Φ into a higher-dimensional space so that the algorithm can classify highly complex data. The algorithm thus uses a nonlinear mapping to convert the input data to a higher dimension and a linear hyperplane to separate it there (Nikou et al. 2019).
The decision boundary is defined in Equation (4), where the input vectors $x_i \in \mathbb{R}^d$ are mapped into a high-dimensional feature space, $\Phi(x_i) \in H$, and $\Phi$ is induced by the kernel function $K(x_i, x_j)$.
$$f(x) = \operatorname{sgn}\left(\sum_{i=1}^{n} \alpha_i y_i K(x, x_i) + b\right)$$
SVMs convert non-separable classes into separable ones using linear, non-linear, sigmoid, radial basis and polynomial kernel functions. The kernel functions are shown in Equations (5)–(7), where $\gamma$ is the constant of the radial basis function and $d$ is the degree of the polynomial function. The two adjustable parameters of the sigmoid function are the slope $\alpha$ and the intercept constant $c$.
$$\text{RBF: } K(x_i, x_j) = \exp\left(-\gamma \left\| x_i - x_j \right\|^2\right)$$
$$\text{Polynomial: } K(x_i, x_j) = \left(x_i \cdot x_j + 1\right)^d$$
$$\text{Sigmoid: } K(x_i, x_j) = \tanh\left(\alpha\, x_i^{T} x_j + c\right)$$
SVMs are often very efficient in high-dimensional spaces, including cases where the number of dimensions is larger than the number of samples. However, when the number of features greatly exceeds the number of samples, the choice of regularisation terms and kernel functions becomes crucial to avoid overfitting (Nabipour et al. 2020). The hyperparameters of the SVM model can be found in Table 1.
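For illustration, the three kernels in Equations (5)–(7) can be written out directly in NumPy; the values of γ, d, α and c below are arbitrary examples, not the hyperparameters of Table 1:

```python
import numpy as np

def rbf_kernel(xi, xj, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * ||xi - xj||^2)."""
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def poly_kernel(xi, xj, d=3):
    """Polynomial kernel of degree d: (xi . xj + 1)^d."""
    return (np.dot(xi, xj) + 1.0) ** d

def sigmoid_kernel(xi, xj, alpha=0.01, c=0.0):
    """Sigmoid kernel: tanh(alpha * xi^T xj + c)."""
    return np.tanh(alpha * np.dot(xi, xj) + c)

xi = np.array([1.0, 2.0])
xj = np.array([2.0, 0.0])
print(rbf_kernel(xi, xi))   # 1.0: identical points have maximal similarity
print(poly_kernel(xi, xj))  # (2 + 1)^3 = 27.0
```

Each kernel returns a scalar similarity, so replacing the dot product with any of them changes the geometry of the separating hyperplane without altering the rest of the algorithm.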
  • Random Forest (RF)
Random Forest (RF) is a combination of several decision trees, developed to achieve better prediction performance than a single decision tree. Each decision tree in RF is built on a bootstrap sample using binary recursive partitioning (BRP). In the BRP algorithm, a random subset of input variables is selected and all possible splits on these variables are evaluated. The resulting best split is used to create a binary partition. This process is repeated recursively within each successive partition and terminates when the partition size becomes equal to 1. Two important fine-tuning parameters are used in modelling RF: one is the number of trees in the ensemble (p), and the other is the number of input variables to be sampled at each split (k). Each decision tree in RF is learned from a random sample of the data set (Ismail et al. 2020).
To build the RF model, three parameters must be defined beforehand: the number of trees (n), the number of variables (K) and the maximum depth of the decision trees (J). Learning sets ($D_i$, $i = 1, \dots, n$) and variable sets ($V_i$, $i = 1, \dots, n$) of the decision trees are created by random sampling with replacement, known as bootstrapping. Each decision tree of depth J produces a weak learner $\tau_i$ from its set of learning samples and variables. The hyperparameters of the RF model can be found in Table 2. These weak learners are then used to predict the test data, yielding an ensemble of n trees $\{\tau_i\}_{i=1}^{n}$. For a new sample, the RF prediction can be defined as follows (Park et al. 2022):
$$\hat{R}(x) = \frac{1}{n}\sum_{i=1}^{n} \hat{r}_i(x)$$
where $\hat{r}_i(x)$ is the predicted value of $\tau_i$.
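A toy sketch of the ensemble-averaging idea above: each weak learner (here a depth-one regression stump standing in for a full decision tree) is fitted on a bootstrap sample, and the forest prediction averages them. The data and parameters are invented for illustration, not the Table 2 configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Depth-1 regression tree: one split threshold minimising squared error."""
    best = (np.inf, None, y.mean(), y.mean())
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def random_forest(x, y, n_trees=50):
    """Bootstrap n_trees stumps; prediction is the ensemble average."""
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(x), len(x))  # sampling with replacement
        trees.append(fit_stump(x[idx], y[idx]))
    return lambda q: np.mean([tree(q) for tree in trees], axis=0)

# Noisy step function: levels ~1.0 below x=5 and ~3.0 above
x = np.linspace(0, 10, 60)
y = np.where(x < 5, 1.0, 3.0) + rng.normal(0, 0.1, 60)
forest = random_forest(x, y)
print(forest(np.array([2.0, 8.0])))  # close to [1.0, 3.0]
```

Averaging over many bootstrapped trees reduces the variance of any single tree, which is the core reason RF outperforms a lone decision tree.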
  • Extreme Gradient Boost (XGBoost)
XGBoost is a model based on decision trees. Compared to other tree-based models, XGBoost can achieve higher estimation accuracy and much faster learning due to parallelisation and distributed computation. Other advantages of the XGBoost method are that it uses regularisation to prevent overfitting, has built-in cross-validation capability, and handles missing data natively. The XGBoost model is an ensemble of multiple classification and regression trees (CART). XGBoost performs binary splitting and generates a decision tree by segmenting a subset of the data set using all predictors, creating two sub-nodes at each split. The XGBoost model with multiple CARTs can be defined as follows:
$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i),\quad f_k \in F,\quad F = \left\{ f(x) = w_{q(x)} \right\} \;\left(q: \mathbb{R}^m \to T,\; w \in \mathbb{R}^T\right)$$
where $K$ is the number of trees and $F$ is the space of all possible CARTs. Each $f_k$ corresponds to an independent tree structure and its leaf weights. The objective function of the XGBoost model can be defined as follows:
$$\mathrm{Obj} = \sum_{i} l\left(y_i, \hat{y}_i\right) + \sum_{k} \Omega\left(f_k\right),\quad \Omega(f) = \gamma T + \frac{1}{2}\lambda \left\| w \right\|^2$$
where $l$ is a loss function measuring the difference between $y_i$ and $\hat{y}_i$, and $\Omega(f_k)$ is a regularisation term that prevents overfitting by penalising the complexity of the model. Assuming that $\hat{y}_i^{(t)}$ is the predicted value at iteration $t$, the objective function can be written as (Han et al. 2023):
$$\mathrm{Obj}^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega\left(f_t\right)$$
The hyperparameters of the XGBoost model can be found in Table 3.
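To make the regularised objective above concrete, the following sketch evaluates it for a hand-made two-tree ensemble with squared loss; γ and λ are illustrative values, not the Table 3 settings:

```python
import numpy as np

def squared_loss(y, y_hat):
    """Elementwise squared-error loss l(y, y_hat)."""
    return (y - y_hat) ** 2

def omega(leaf_weights, gamma=1.0, lam=1.0):
    """Regularisation term: gamma * T + 0.5 * lambda * ||w||^2,
    where T is the number of leaves and w the leaf weights."""
    T = len(leaf_weights)
    return gamma * T + 0.5 * lam * np.sum(np.asarray(leaf_weights) ** 2)

def objective(y, y_hat, trees_leaf_weights, gamma=1.0, lam=1.0):
    """Obj = sum_i l(y_i, y_hat_i) + sum_k Omega(f_k)."""
    loss = np.sum(squared_loss(np.asarray(y), np.asarray(y_hat)))
    reg = sum(omega(w, gamma, lam) for w in trees_leaf_weights)
    return loss + reg

y = [1.0, 2.0, 3.0]
y_hat = [1.1, 1.8, 3.2]
# Two toy trees with 2 and 3 leaves respectively
trees = [[0.5, -0.5], [0.2, 0.0, -0.2]]
print(objective(y, y_hat, trees))  # 0.09 + 2.25 + 3.04 = 5.38
```

The γT term discourages deep trees with many leaves, while the ½λ‖w‖² term shrinks leaf weights, which is how XGBoost trades fit against model complexity.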
  • Gated Recurrent Unit (GRU)
GRU is a type of recurrent neural network (RNN) that can provide outstanding performance in predicting time series. It is similar to the other neural network model (LSTM) discussed in more detail in the next subchapter, but GRU has lower computational power requirements, which can greatly improve learning efficiency.
It has the same input and output structure as a simple RNN. The internal structure of the GRU unit contains only two gates: the update gate $z_t$ and the reset gate $r_t$. The update gate $z_t$ determines how much of the previous memory is kept at the current time step, and the reset gate $r_t$ determines how the new input information is combined with the previous memory value. Unlike the LSTM algorithm, the update gate $z_t$ can both forget and select the memory contents, which improves computational performance and reduces runtime requirements. The GRU cell can be defined by the following equations:
$$z_t = \sigma\left(W_z h_{t-1} + U_z x_t\right)$$
$$r_t = \sigma\left(W_r h_{t-1} + U_r x_t\right)$$
$$\tilde{h}_t = \tanh\left(W_0 \left(r_t \odot h_{t-1}\right) + U_0 x_t\right)$$
$$h_t = z_t \odot \tilde{h}_t + \left(1 - z_t\right) \odot h_{t-1}$$
where $\sigma(\cdot)$ is the logistic sigmoid function, i.e., $\sigma(x) = 1/(1+e^{-x})$; $h_{t-1}$ is the hidden state of the neuron at the previous time step. $W_z$ and $U_z$ are the weight matrices of the update gate, $W_r$ and $U_r$ are the weight matrices of the reset gate, and $W_0$ and $U_0$ are the weight matrices of the candidate output. $x_t$ is the input value at time $t$, and $\tilde{h}_t$ and $h_t$ are the candidate state and the hidden-layer output at time $t$ (Xiao et al. 2022). The hyperparameters of the GRU model can be found in Table 4.
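The GRU update equations above map line-for-line onto a single forward step; the NumPy sketch below uses small random matrices in place of trained weights, purely to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W_0, U_0):
    """One GRU forward step following the four update equations."""
    z_t = sigmoid(W_z @ h_prev + U_z @ x_t)              # update gate
    r_t = sigmoid(W_r @ h_prev + U_r @ x_t)              # reset gate
    h_tilde = np.tanh(W_0 @ (r_t * h_prev) + U_0 @ x_t)  # candidate state
    return z_t * h_tilde + (1.0 - z_t) * h_prev          # new hidden state

n_in, n_hid = 3, 4
W = {g: rng.normal(0, 0.1, (n_hid, n_hid)) for g in ("z", "r", "0")}
U = {g: rng.normal(0, 0.1, (n_hid, n_in)) for g in ("z", "r", "0")}

h = np.zeros(n_hid)
for x_t in rng.normal(0, 1, (5, n_in)):  # a 5-step toy input sequence
    h = gru_step(x_t, h, W["z"], U["z"], W["r"], U["r"], W["0"], U["0"])
print(h.shape)  # (4,)
```

Because the new state is a convex combination of the previous state and the tanh-bounded candidate, the hidden values always stay in (−1, 1), which is part of what makes GRU training stable.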
  • Long-Short-Term Memory (LSTM)
LSTM is a type of recurrent neural network (RNN) often used in sequential data research. Long-term memory refers to the learned weights, and short-term memory to the internal states of the cells. LSTM was created to solve the vanishing-gradient problem of RNNs, the main change being the replacement of the middle layer of the RNN by a block (the LSTM block). The main feature of LSTM is its ability to learn long-term dependencies, which is impossible for plain RNNs. To predict the data associated with the next time point, the weight values of the network must be updated, which requires retaining information from the initial time interval. A plain RNN can only learn a limited number of short-term dependencies and cannot learn long-term ones in a time series; LSTM can handle them adequately. The structure of the LSTM model is a set of recurrent subnets, called memory blocks. Each block contains one or more self-recurrent memory cells and three multiplicative units (input, output, and forget gates) that perform continuous write, read, and reset operations on the cell (Ortu et al. 2022). The LSTM model is defined by the following equations:
$$\text{Input gate: } I_t = \sigma\left(X_t W_{xi} + H_{t-1} W_{hi} + b_i\right)$$
$$\text{Forget gate: } F_t = \sigma\left(X_t W_{xf} + H_{t-1} W_{hf} + b_f\right)$$
$$\text{Gated unit: } \tilde{C}_t = \tanh\left(X_t W_{xc} + H_{t-1} W_{hc} + b_c\right)$$
$$C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t$$
$$\text{Output gate: } O_t = \sigma\left(X_t W_{xo} + H_{t-1} W_{ho} + b_o\right)$$
where $h$ is the number of hidden units, $X_t$ is the mini-batch input at time $t$, $H_{t-1}$ is the hidden state from the previous period, and $\sigma$ is the sigmoid function. $W_{xi}$ and $W_{hi}$ are the weight matrices of the input gate and $b_i$ its offset term; $W_{xf}$ and $W_{hf}$ are the weight matrices of the forget gate and $b_f$ its offset term. $\tilde{C}_t$ is the candidate memory cell, $W_{xc}$ and $W_{hc}$ are the weight matrices of the gated unit, and $b_c$ is its offset term. $C_t$ is the new cell state at the current time and $C_{t-1}$ the cell state at the previous time. $W_{xo}$ and $W_{ho}$ are the weight matrices of the output gate and $b_o$ its offset term (Dai et al. 2022).
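A minimal NumPy sketch of one LSTM forward step follows the five equations above; the standard hidden-state update $H_t = O_t \odot \tanh(C_t)$, implicit in the text, is added explicitly, and the weights are random stand-ins rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM forward step. p holds the weight matrices W_x*, W_h*
    and offset terms b_* for the four gates/units."""
    i_t = sigmoid(x_t @ p["W_xi"] + h_prev @ p["W_hi"] + p["b_i"])      # input gate
    f_t = sigmoid(x_t @ p["W_xf"] + h_prev @ p["W_hf"] + p["b_f"])      # forget gate
    c_tilde = np.tanh(x_t @ p["W_xc"] + h_prev @ p["W_hc"] + p["b_c"])  # candidate cell
    c_t = f_t * c_prev + i_t * c_tilde                                  # new cell state
    o_t = sigmoid(x_t @ p["W_xo"] + h_prev @ p["W_ho"] + p["b_o"])      # output gate
    h_t = o_t * np.tanh(c_t)                                            # hidden state
    return h_t, c_t

n_in, n_hid = 3, 4
p = {}
for g in "ifco":  # input, forget, candidate, output
    p[f"W_x{g}"] = rng.normal(0, 0.1, (n_in, n_hid))
    p[f"W_h{g}"] = rng.normal(0, 0.1, (n_hid, n_hid))
    p[f"b_{g}"] = np.zeros(n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(0, 1, (5, n_in)):  # a 5-step toy input sequence
    h, c = lstm_step(x_t, h, c, p)
print(h.shape)  # (4,)
```

The additive cell update $C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t$ is the key design choice: gradients flow through the sum largely unattenuated, which is what mitigates the vanishing-gradient problem.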
The hyperparameters of the models are specified in Table 5. In order to make an even more accurate comparison, we tried to harmonize the hyperparameters of algorithms belonging to the same main type.
In the study, a total of eight commodities were included: Brent oil, copper, crude oil, gold, natural gas, palladium, platinum and silver. The data was downloaded from Yahoo Finance using Python. An appropriate database size is important in forecasting (Hewamalage et al. 2022). The complete database contains the futures price data for the period 1 January 2010 to 31 August 2022, which was split into two parts: the first focuses on the interval between 1 January 2010 and 31 August 2018, the second on the interval between 1 January 2014 and 31 August 2022. We included the commodity market instruments considered to be the most liquid, with turnover that stands out among other assets. For both studies, we split the datasets into learning and validation samples in the proportion of approximately 94% and 6%. In the first estimation, the learning database covers the period between 1 January 2010 and 28 February 2018 (98 months), while the validation interval covers 1 March 2018 to 31 August 2018 (6 months). In the second estimation (2022), the learning database covers the period between 1 January 2014 and 28 February 2022 (98 months), while the validation interval covers 1 March 2022 to 31 August 2022 (6 months). The descriptive statistics of the commodity market for the full dataset are presented in Table 6.
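The chronological split described above can be reproduced on a synthetic daily series; the cut-off dates follow the first (2018) estimation, while the price path itself is simulated rather than downloaded:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulated daily futures prices over the first study window
dates = pd.date_range("2010-01-01", "2018-08-31", freq="D")
prices = pd.Series(100 + rng.normal(0, 1, len(dates)).cumsum(), index=dates)

# Learning sample: 1 Jan 2010 - 28 Feb 2018; validation: 1 Mar - 31 Aug 2018
train = prices.loc[:"2018-02-28"]
valid = prices.loc["2018-03-01":]

share = len(valid) / len(prices)
print(f"train: {len(train)} days, valid: {len(valid)} days "
      f"({share:.1%} validation)")
```

The validation window comes out at roughly 6% of the observations, matching the approximately 94%/6% split described above; a chronological (rather than random) split is essential for time series so that no future information leaks into training.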
The two periods (2018 and 2022) reflect a significantly different general economic situation, which is not reflected in the descriptive statistics for the whole period. To obtain a more accurate picture and to assess the performance of the forecasting algorithms, it is important to know the period for which the forecast is made. This is shown in the following two tables (Table 7 and Table 8).
The two tables above show that not only are the average and median prices higher in 2022, but the standard deviation is also several times higher than in 2018. The relative standard deviation values are likewise 2–3 times higher. In such an environment, the accuracy of the forecast is reduced, but its importance for enterprise risk management is increased.
According to the correlation matrix (Table 9), Brent and crude oil (0.8344), gold and silver (0.8023) move together, while the other commodities show negligible correlation and no common movement.

3. Results and Discussion

The basis for the evaluation of the results is the MAPE indicator, which is scale-independent and therefore suitable for comparing both over time and across instruments. The results are calculated for forecast periods of 21 and 125 days based on five different forecasting algorithms for robustness. The forecast was made for both 2018 and 2022, the reason being that the macroeconomic environment had changed significantly by 2022, due to the Russian-Ukrainian war, among other factors. The different forecasting methods are described in detail in the methodology chapter.
The results of the Support Vector Machine (SVM) forecast are presented in Table 10, which shows the mean absolute percentage error (MAPE) values per commodity for 2018 and 2022, for 21- and 125-day forecast horizons.
The average error of the SVM-based estimation ranges from 4.36% to 4.96% for the selected commodities in the sample, with no significant difference between the forecast horizons and the years under study. Naturally, the longer the time interval, the worse the forecast accuracy, but not significantly so. On a product-by-product basis, more substantial differences are visible. For the two types of oil, but especially for crude oil, the 125-day forecast error is a multiple of the 21-day error. For natural gas, the 125-day error for 2022 is more than three times that of 2018, reflecting a significant increase in pricing uncertainty. No other change in the forecasts is as pronounced, with only silver showing a 1.5-fold increase in its error rate.
A special feature of the Random Forest (RF) decision tree is that it uses several types of decision trees, so it employs a very different methodological approach than the Support Vector Machine (SVM) that preceded it. The estimation results are presented in Table 11, in the same structure as the SVM.
For this forecasting procedure, there is already a significant difference between the overall mean errors for 2018 and 2022. For the 21-day forecast, the sample MAPEs for 2018 and 2022 are significantly different (p = 0.004), while for the 125-day forecast the difference is not statistically confirmed. Random Forest gives forecasts that are orders of magnitude more accurate than SVM for the four-week horizon. However, for 2022, and especially the 125-day forecast, the RF shows a substantial error. For natural gas, the error is much higher over the six-month horizon, exceeding 20%. It is also important to highlight the positive results: the gold price forecast shows an error of less than 1% in three out of four cases, which is outstanding, and relatively accurate forecasts are also obtained for copper and silver.
The Extreme Gradient Boost (XGBoost) prediction can be classified in the same family of decision trees as Random Forest (RF). The results of XGBoost are shown in Table 12.
Comparing the results of XGBoost with RF, very similar values can be seen, without any significant difference. For the 21-day forecast, the average difference between the years 2018 and 2022 is significant (p = 0.003), i.e., on average XGBoost was able to give a more accurate forecast in 2018 than in 2022, with less noise. As with the RF, the longer-term natural gas forecast contains the largest error, and precious metals show the lowest MAPE values. Díaz et al. (2020) found that decision trees (random forest and gradient boosting regression trees) provide a more reliable prediction of the copper price than linear methods, but the random walk process still performs better.
The results of the Gated Recurrent Unit (GRU) forecast are shown in Table 13. The GRU model can be classified as a family of neural networks, i.e., the prediction is based on an algorithm using artificial intelligence. This is a very different concept to decision trees.
The results of the Gated Recurrent Unit (GRU) confirm expectations, with the average MAPE value being the lowest in all categories compared to the previous models (SVM, RF, XGBoost). The 21- and 125-day predictions for 2018 are outstandingly good, with an average sample error of around 1.3%. The t-tests show, for the first time, that the 2018 error rates are statistically lower (p = 0.002 and p = 0.047) than the 2022 values for both time horizons (21 and 125 days). Based on these results, the GRU-based estimator produces very good forecasts for the four-week period in a relatively smooth economic period free of turbulence. The forecasts for 2022 are also the most accurate so far, improving on the previous models by more than 1 percentage point over the 125-day horizon. The forecast error for natural gas, which exceeded 20% for the decision trees, is reduced to 11% for GRU, although this is still substantial. It is also interesting that for the neural network-based estimation the trend reversed, with the longer time horizon producing a lower average error, although the two results are not significantly different. As before, the prediction of precious metals shows high accuracy. Ozdemir et al. (2022) used GRU and LSTM models to produce long-term forecasts of the nickel price from 2022 to 2031. The MAPE of the two estimation methods is very similar (about 7%), but the GRU models required on average 33% less computation time.
The Long Short-Term Memory (LSTM) belongs to the same family of neural network models as the Gated Recurrent Unit (GRU) algorithm, and is designed to provide reliable estimates primarily over longer time scales. The results of the LSTM are shown in Table 14.
The LSTM results, like those of the GRU, are very convincing, but they do not improve on the GRU's overall sample averages; the differences amount to only a few tenths of a percentage point. At the 21-day horizon, the 2018 values are significantly lower (p = 0.005); the same holds at the 125-day horizon at the 10% significance level (p = 0.087). Busari and Lim (2021) used LSTM and GRU methodologies, among others, to forecast spot oil prices. Their forecast time horizon is 6 days and their training-validation split is 75-25%. They report a forecasting accuracy (MAPE) of around 10-11%, compared to our 21-day forecast errors of below 2% for 2018 and 6.8-7% for 2022.
For natural gas, which is considered critical, the MAPE of the 125-day estimate for 2022 is 13.37%, higher than the 11.4% for GRU. In another study of natural gas forecasting, GRU likewise outperformed LSTM models (Wang et al. 2021). Their MAPE values are higher than our own results, but the authors used weekly data. The most striking MAPE improvement was between the hybrid GRU (PSO-GRU) and LSTM, with a difference of more than 1 percentage point. The advantage of LSTM should appear in long-range forecasts, yet on the longer 125-day horizon it beat GRU in only 3 of 8 cases for 2018 and 2 of 8 cases for 2022. This does not rule out the possibility that LSTM would perform better on an even longer horizon, but no conclusive data are available to test this for 2022.

4. Conclusions

The study aimed to answer the question of how accurately the futures prices of eight selected commodities (including oil, gas and precious metals) can be forecast with different models (decision trees, neural networks) in different economic environments, and to what extent these forecasts can be used for corporate risk management. The examined period was six months of 2022 (March to August), characterized by inflationary pressure, the Russian-Ukrainian war and global chip shortages, while the control period was six months in 2018, before the COVID-19 epidemic, which is considered a calm economic environment.
Enterprise risk management has different time horizons depending on the exposure to risk. For commodities, we assumed production lead times and warehousing, and therefore did not look at the very short term. Two periods were defined: a short period of one month (21 days) and a medium period (125 days). These are the time periods over which inventory management can plan. In raw material replenishment, enterprise risk management comes to the fore in at least two respects: the necessary stock should be available at the right place and time, and it should be purchased at the best possible price. Forecasting algorithms of varying complexity serve this second aspect as a decision support tool. The most accurate forecast possible helps achieve the so-called perfect timing that all investors (in this case, the purchasing department) desire. Purchasing at the best possible price means a lower cost price and therefore higher profits and a market advantage over competitors.
The results show that forecast accuracy is higher in calmer economic environments, since forecasting is easier in a less volatile setting (see the descriptive statistics for 2018 and 2022). More importantly, artificial intelligence-based neural networks also produce better results on commodity markets than decision trees and the other approaches examined. For 2022, the MAPE (Mean Absolute Percentage Error), i.e., the average difference between the model estimate and the real data, is around 4%; in the control period, it is around 1.5%. This difference roughly matches the increase in the standard deviation and relative standard deviation. Due to the Russian-Ukrainian war, oil and especially gas prices displayed the worst accuracy, followed by palladium, while precious metals price forecasts showed the highest accuracy. We can also conclude that in the calmer economic period (supported by the average MAPE values as well), shorter forecast periods yielded a more accurate estimate, while the same cannot be said of the more volatile period (2022). Overall, the GRU model achieved the most accurate estimate, slightly outperforming even the LSTM algorithm, while the SVM produced the weakest performance for the instruments and periods examined.
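The MAPE metric used throughout the comparison can be sketched as follows; note that the paper's tables report it as a fraction (0.04 corresponds to 4%). The price series in the example is hypothetical:

```python
# MAPE as reported in Tables 10-14: the mean absolute deviation of the
# forecast from the actual price, expressed as a fraction of the actual.
import numpy as np

def mape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)))

# Hypothetical illustration: a forecast that overshoots by 2% every day
actual = np.array([100.0, 102.0, 101.0, 99.0])
forecast = actual * 1.02
print(round(mape(actual, forecast), 4))  # 0.02
```

One caveat worth noting: MAPE is undefined when an actual price is zero and unstable near it, which is relevant here because WTI crude briefly traded negative in 2020 (see the −37.63 minimum in Table 6); the 2018 and 2022 evaluation windows avoid that episode.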
Our study is limited insofar as the models were only used individually and not combined, although hybrid models may have several advantages over the traditional approach. Liang et al. (2022) used different hybrid neural models to predict spot and forward gold prices; their results show that hybrid models provide more accurate estimates than LSTM models. The implementation of an error-correction hybrid model for copper price forecasting has produced significant MAPE improvements of up to 1 percentage point or more (Luo et al. 2022). The study also does not cover the use of independent variables such as technical analysis tools, whose application could aid the training of the models and thus the recognition of past technical levels.

Author Contributions

Conceptualization, L.V., T.T. and T.B.; methodology, L.V.; formal analysis, T.B.; data curation, L.V.; writing—original draft preparation, L.V., T.T. and T.B.; writing—review and editing, L.V., T.T. and T.B.; funding acquisition, T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used in the present study are publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ali, Shoukat, Ramiz ur Rehman, Wang Yuan, Muhammad Ishfaq Ahmad, and Rizwan Ali. 2022. Does Foreign Institutional Ownership Mediate the Nexus between Board Diversity and the Risk of Financial Distress? A Case of an Emerging Economy of China. Eurasian Business Review 12: 553–81.
2. Ballings, Michel, Dirk Van den Poel, Nathalie Hespeels, and Ruben Gryp. 2015. Evaluating multiple classifiers for stock price direction prediction. Expert Systems with Applications 42: 7046–56.
3. Basak, Suryoday, Saibal Kar, Snehanshu Saha, Luckyson Khaidem, and Sudeepa Roy Dey. 2019. Predicting the direction of stock market prices using tree-based classifiers. The North American Journal of Economics and Finance 47: 552–67.
4. Busari, Ganiyu Adewale, and Dong Hoon Lim. 2021. Crude Oil Price Prediction: A Comparison between AdaBoost-LSTM and AdaBoost-GRU for Improving Forecasting Performance. Computers & Chemical Engineering 155: 107513.
5. Carter, David A., Daniel A. Rogers, Betty J. Simkins, and Stephen D. Treanor. 2017. A review of the literature on commodity risk management. Journal of Commodity Markets 8: 1–17.
6. Dai, Yeming, Qiong Zhou, Mingming Leng, Xinyu Yang, and Yanxin Wang. 2022. Improving the Bi-LSTM model with XGBoost and attention mechanism: A combined approach for short-term power load prediction. Applied Soft Computing 130: 109632.
7. Díaz, Juan D., Erwin Hansen, and Gabriel Cabrera. 2020. A Random Walk through the Trees: Forecasting Copper Prices Using Decision Learning Methods. Resources Policy 69: 101859.
8. Donders, Pablo, Mauricio Jara, and Rodrigo Wagner. 2018. How sensitive is corporate debt to swings in commodity prices? Journal of Financial Stability 39: 237–58.
9. Fischer, Thomas, and Christopher Krauss. 2018. Deep learning with long short-term memory networks for financial market predictions. European Journal of Operational Research 270: 654–69.
10. Gonzalez Miranda, F., and Andrew Neil Burgess. 1997. Modelling market volatilities: The neural network perspective. The European Journal of Finance 3: 137–57.
11. Hajiabotorabi, Zeinab, Aliyeh Kazemi, Faramarz Famil Samavati, and Farid Mohammad Maalek Ghaini. 2019. Improving DWT-RNN model via B-spline wavelet multiresolution to forecast a high-frequency time series. Expert Systems with Applications 138: 112842.
12. Hamid, Shaikh A., and Zahid Iqbal. 2004. Using neural networks for forecasting volatility of S&P 500 Index futures prices. Journal of Business Research 57: 1116–25.
13. Han, Yechan, Jaeyun Kim, and David Enke. 2023. A machine learning trading system for the stock market based on N-period Min-Max labeling using XGBoost. Expert Systems with Applications 211: 118581.
14. Hewamalage, Hansika, Klaus Ackermann, and Christoph Bergmeir. 2022. Forecast Evaluation for Data Scientists: Common Pitfalls and Best Practices. Data Mining and Knowledge Discovery.
15. Hiransha, M., Gopalakrishnan E. A., Vijay Krishna Menon, and Soman K. P. 2018. NSE stock market prediction using deep-learning models. Procedia Computer Science 132: 1351–62.
16. Ismail, Mohd Sabri, Mohd Salmi Md Noorani, Munira Ismail, Fatimah Abdul Razak, and Mohd Almie Alias. 2020. Predicting next day direction of stock price movement using machine learning methods with persistent homology: Evidence from Kuala Lumpur Stock Exchange. Applied Soft Computing 93: 106422.
17. Kaushik, Manav, and Arun Kumar Giri. 2020. Forecasting Foreign Exchange Rate: A Multivariate Comparative Analysis between Traditional Econometric, Contemporary Machine Learning & Deep Learning Techniques. arXiv arXiv:2002.10247.
18. Kim, Ha Young, and Chang Hyun Won. 2018. Forecasting the volatility of stock price index: A hybrid model integrating LSTM with multiple GARCH-type models. Expert Systems with Applications 103: 25–37.
19. Liang, Xiaodan, Zhaodi Ge, Liling Sun, Maowei He, and Hanning Chen. 2019. LSTM with Wavelet Transform Based Data Preprocessing for Stock Price Prediction. Mathematical Problems in Engineering 2019: 1340174.
20. Liang, Yanhui, Yu Lin, and Qin Lu. 2022. Forecasting gold price using a novel hybrid model with ICEEMDAN and LSTM-CNN-CBAM. Expert Systems with Applications 206: 117847.
21. Liu, Yang. 2019. Novel volatility forecasting using deep learning–long short term memory recurrent neural networks. Expert Systems with Applications 132: 99–109.
22. Long, Jiawei, Zhaopeng Chen, Weibing He, Taiyu Wu, and Jiangtao Ren. 2020. An integrated framework of deep learning and knowledge graph for prediction of stock price trend: An application in Chinese stock exchange market. Applied Soft Computing, 106205.
23. Luo, Hongyuan, Deyun Wang, Jinhua Cheng, and Qiaosheng Wu. 2022. Multi-step-ahead copper price forecasting using a two-phase architecture based on an improved LSTM with novel input strategy and error correction. Resources Policy 79: 102962.
24. Nabipour, Mojtaba, Pooyan Nayyeri, Hamed Jabani, S. Shahab, and Amir Mosavi. 2020. Predicting stock market trends using machine learning and deep learning algorithms via continuous and binary data; a comparative analysis on the Tehran stock exchange. IEEE Access 8: 150199–150212.
25. Nelson, David M., Adriano C. Pereira, and Renato A. de Oliveira. 2017. Stock market’s price movement prediction with LSTM neural networks. Paper presented at the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, May 14–19; pp. 1419–26.
26. Nikou, Mahla, Gholamreza Mansourfar, and Jamshid Bagherzadeh. 2019. Stock price prediction using DEEP learning algorithm and its comparison with machine learning algorithms. Intelligent Systems in Accounting, Finance and Management 26: 164–74.
27. Nti, Isaac Kofi, Adebayo Felix Adekoya, and Benjamin Asubam Weyori. 2020. A systematic review of fundamental and technical analysis of stock market predictions. Artificial Intelligence Review 53: 3007–57.
28. Ormoneit, Dirk, and Ralph Neuneier. 1996. Experiments in predicting the German stock index DAX with density estimating neural networks. Paper presented at the IEEE/IAFE 1996 Conference on Computational Intelligence for Financial Engineering (CIFEr), New York, NY, USA, March 24–26; pp. 66–71.
29. Ortu, Marco, Nicola Uras, Claudio Conversano, Silvia Bartolucci, and Giuseppe Destefanis. 2022. On technical trading and social media indicators for cryptocurrency price classification through deep learning. Expert Systems with Applications 198: 116804.
30. Ou, Phichhang, and Hengshan Wang. 2009. Prediction of stock market index movement by ten data mining techniques. Modern Applied Science 3: 28–42.
31. Ozdemir, Ali Can, Kurtuluş Buluş, and Kasım Zor. 2022. Medium- to long-term nickel price forecasting using LSTM and GRU networks. Resources Policy 78: 102906.
32. Park, Hyun Jun, Youngjun Kim, and Ha Young Kim. 2022. Stock market forecasting using a multi-task approach integrating long short-term memory and the random forest framework. Applied Soft Computing 114: 108106.
33. Petersen, Niklas Christoffer, Filipe Rodrigues, and Francisco Camara Pereira. 2019. Multi-output bus travel time prediction with convolutional LSTM neural network. Expert Systems with Applications 120: 426–35.
34. Siami-Namini, Sima, Neda Tavakoli, and Akbar Siami Namin. 2018. A comparison of ARIMA and LSTM in forecasting time series. Paper presented at the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, December 17–20; pp. 1394–401.
35. Skehin, Tom, Martin Crane, and Marija Bezbradica. 2018. Day ahead forecasting of FAANG stocks using ARIMA, LSTM networks and wavelets. Paper presented at the 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2018), Dublin, Ireland, December 6–7.
36. Thi Kieu Tran, Trang, Taesam Lee, Ju-Young Shin, Jong-Suk Kim, and Mohamad Kamruzzaman. 2020. Deep learning-based maximum temperature forecasting assisted with meta-learning for hyperparameter optimization. Atmosphere 11: 487.
37. van der Lugt, Bart J., and Ad J. Feelders. 2019. Conditional forecasting of water level time series with RNNs. In International Workshop on Advanced Analysis and Learning on Temporal Data. Cham: Springer, pp. 55–71.
38. Vidal, Andrés, and Werner Kristjanpoller. 2020. Gold volatility prediction using a CNN-LSTM approach. Expert Systems with Applications 157: 113481.
39. Wang, Jun, Junxing Cao, Shan Yuan, and Ming Cheng. 2021. Short-Term Forecasting of Natural Gas Prices by Using a Novel Hybrid Method Based on a Combination of the CEEMDAN-SE-and the PSO-ALS-Optimized GRU Network. Energy 233: 121082.
40. Xiao, Haohan, Zuyu Chen, Ruilang Cao, Yuxin Cao, Lijun Zhao, and Yunjie Zhao. 2022. Prediction of shield machine posture using the GRU algorithm with adaptive boosting: A case study of Chengdu Subway project. Transportation Geotechnics 37: 100837.
41. Zhang, Liheng, Charu Aggarwal, and Guo-Jun Qi. 2017. Stock Price Prediction via Discovering Multi-Frequency Trading Patterns. Paper presented at the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining—KDD ’17, Halifax, NS, Canada, August 13–17.
42. Zolfaghari, Mehdi, and Samad Gholami. 2021. A hybrid approach of adaptive wavelet transform, long short-term memory and ARIMA-GARCH family models for the stock index prediction. Expert Systems with Applications 182: 115149.
Table 1. Hyperparameters of SVM model.

Model   Parameter   Value
SVR     Kernel      RBF
        C           100
        Gamma       0.1

Source: own editing.
Table 2. Hyperparameters of RF model.

Model   Parameter         Value
RFR     Max depth         10
        Number of trees   100

Source: own editing.
Table 3. Hyperparameters of XGBoost model.

Model     Parameter         Value
XGBoost   Max depth         10
          Number of trees   100

Source: own editing.
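The Table 1 and Table 2 configurations map directly onto scikit-learn estimators; the XGBoost configuration of Table 3 is analogous. The authors' exact feature windowing and scaling are not described here, so the lagged-price setup below is only an illustrative assumption:

```python
# Tables 1-2 mapped onto scikit-learn estimators. The toy regression
# (predict tomorrow's price from today's) is an assumed setup, not the
# authors' documented pipeline.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

svr = SVR(kernel="rbf", C=100, gamma=0.1)                   # Table 1
rf = RandomForestRegressor(max_depth=10, n_estimators=100,  # Table 2
                           random_state=0)
# Table 3 (XGBoost) would be configured analogously:
# xgboost.XGBRegressor(max_depth=10, n_estimators=100)

rng = np.random.default_rng(1)
prices = 70 + np.cumsum(rng.normal(0, 1, 200))   # synthetic price walk
X, y = prices[:-1].reshape(-1, 1), prices[1:]    # lag-1 feature/target
for model in (svr, rf):
    model.fit(X[:150], y[:150])                  # train on the first 150 days
    pred = model.predict(X[150:])                # predict the remaining 49
    print(type(model).__name__, pred.shape)
```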
Table 4. Hyperparameters of GRU model.

Model   Parameter                   Value
GRU     Hidden layers               2
        Hidden layer neuron count   150
        Batch size                  32
        Epochs                      100
        Activation                  tanh
        Learning rate               0.001
        Optimizer                   Adam

Source: own editing.
Table 5. Hyperparameters of LSTM model.

Model   Parameter                   Value
LSTM    Hidden layers               2
        Hidden layer neuron count   150
        Batch size                  32
        Epochs                      100
        Activation                  tanh
        Learning rate               0.001
        Optimizer                   Adam

Source: own editing.
Table 6. Descriptive statistics of variables.

Commodity     N      Average   Median    StD      Rsd    Min       Max
Brent oil     3151     77.39     73.08    26.01   0.34     19.33    127.98
Copper        3187      3.20      3.11     0.67   0.21      1.94      4.93
Crude oil     3188     70.99     68.78    22.95   0.32    −37.63    123.70
Gold          3186   1442.47   1335.55   248.22   0.17   1050.80   2051.50
Natural gas   3188      3.41      3.09     1.21   0.35      1.48      9.68
Palladium     3153   1118.43    790.50   649.64   0.58    407.95   2985.40
Platinum      3169   1194.00   1062.20   317.81   0.27    595.90   1905.70
Silver        3185     21.45     18.96     6.54   0.30     11.73     48.58

Source: own editing.
Table 7. Descriptive statistics of variables (1 January 2018–31 August 2018).

Commodity     N     Average   Median    StD     Rsd    Min       Max
Brent oil     168     72.08     72.72    4.46   0.06     62.59     79.80
Copper        168      3.02      3.07    0.18   0.06      2.56      3.29
Crude oil     169     66.42     66.36    3.58   0.05     59.19     74.15
Gold          168   1290.65   1311.45   49.77   0.04   1176.20   1362.40
Natural gas   169      2.84      2.82    0.18   0.06      2.55      3.63
Palladium     164    981.96    974.77   56.11   0.06    845.20   1112.35
Platinum      164    907.36    909.65   66.87   0.07    768.70   1029.30
Silver        168     16.24     16.40    0.69   0.04     14.42     17.55

Source: own editing.
Table 8. Descriptive statistics of variables (1 January 2022–31 August 2022).

Commodity     N     Average   Median    StD      Rsd    Min       Max
Brent oil     168    104.01    105.12    10.83   0.10     78.98    127.98
Copper        168      4.20      4.36     0.46   0.11      3.21      4.93
Crude oil     168    100.12    100.02    10.88   0.11     76.08    123.70
Gold          167   1841.90   1836.20    76.61   0.04   1699.50   2040.10
Natural gas   168      6.56      6.66     1.78   0.27      3.72      9.68
Palladium     167   2158.80   2129.00   255.40   0.12   1769.20   2979.90
Platinum      167    963.41    958.30    69.72   0.07    826.40   1152.50
Silver        167     22.32     22.36     2.28   0.10     17.76     26.89

Source: own editing.
Table 9. Correlation between variables.

              Brent Oil   Copper   Crude Oil   Gold     Natural Gas   Pallad.   Platin.   Silver
Brent oil     1
Copper        0.3061      1
Crude oil     0.8344      0.3162   1
Gold          0.1298      0.2756   0.1404      1
Natural gas   0.0882      0.0681   0.1077      0.0127   1
Palladium     0.2448      0.4210   0.2525      0.3867   0.0547        1
Platinum      0.2461      0.4222   0.2459      0.6026   0.0603        0.5830    1
Silver        0.2116      0.4094   0.2211      0.8023   0.0409        0.4624    0.6486    1

Source: own editing.
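A Table 9-style matrix of pairwise correlations can be produced in one call; the series below are synthetic stand-ins, since the paper's underlying price data are not reproduced here:

```python
# Pairwise correlation matrix in the style of Table 9, on synthetic
# random-walk series (not the paper's actual price data).
import numpy as np

rng = np.random.default_rng(2)
brent = 70 + np.cumsum(rng.normal(0, 1, 500))
crude = brent + rng.normal(0, 3, 500)          # closely tied to Brent
gold = 1300 + np.cumsum(rng.normal(0, 5, 500))

corr = np.corrcoef(np.vstack([brent, crude, gold]))
print(np.round(corr, 3))   # 3x3, ones on the diagonal
```

As in Table 9, the strongest off-diagonal entry is the Brent/WTI pair, since the second series is constructed as noise around the first.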
Table 10. Support Vector Machine (SVM) forecast results.

                  21 Days             125 Days
SVM           2018      2022      2018      2022
Brent oil     0.0366    0.0815    0.0257    0.0511
Copper        0.0186    0.0227    0.0244    0.0203
Crude oil     0.0292    0.1023    0.0224    0.0836
Gold          0.0180    0.0155    0.0149    0.0100
Natural gas   0.0160    0.0369    0.0150    0.1231
Palladium     0.0234    0.0716    0.0269    0.0423
Platinum      0.0718    0.0283    0.0925    0.0286
Silver        0.1355    0.0223    0.1435    0.0374
Average       0.0436    0.0476    0.0456    0.0496

Source: own editing.
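The "Average" row of Table 10 is the simple mean of the eight commodity MAPE values, which can be verified directly from the tabulated figures:

```python
# Recomputing the Table 10 "Average" row (21-day SVM forecasts) from
# the eight per-commodity MAPE values printed in the table.
import numpy as np

svm_21d = {
    "2018": [0.0366, 0.0186, 0.0292, 0.0180, 0.0160, 0.0234, 0.0718, 0.1355],
    "2022": [0.0815, 0.0227, 0.1023, 0.0155, 0.0369, 0.0716, 0.0283, 0.0223],
}
for year, vals in svm_21d.items():
    print(year, round(float(np.mean(vals)), 4))  # 0.0436 and 0.0476
```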
Table 11. Random Forest (RF) forecast results.

                  21 Days             125 Days
RFR           2018      2022      2018      2022
Brent oil     0.0183    0.0712    0.0183    0.0405
Copper        0.0106    0.0243    0.0140    0.0211
Crude oil     0.0280    0.0608    0.0222    0.0419
Gold          0.0085    0.0146    0.0074    0.0105
Natural gas   0.0168    0.0471    0.0169    0.2354
Palladium     0.0161    0.0664    0.0189    0.0420
Platinum      0.0102    0.0302    0.0159    0.0232
Silver        0.0125    0.0277    0.0127    0.0211
Average       0.0151    0.0428    0.0158    0.0545

Source: own editing.
Table 12. Extreme Gradient Boost (XGBoost) forecast results.

                  21 Days             125 Days
XGBoost       2018      2022      2018      2022
Brent oil     0.0198    0.0716    0.0183    0.0408
Copper        0.0113    0.0261    0.0142    0.0212
Crude oil     0.0266    0.0607    0.0230    0.0414
Gold          0.0080    0.0152    0.0069    0.0107
Natural gas   0.0157    0.0427    0.0167    0.2105
Palladium     0.0163    0.0653    0.0188    0.0426
Platinum      0.0098    0.0303    0.0158    0.0233
Silver        0.0113    0.0280    0.0118    0.0204
Average       0.0148    0.0425    0.0157    0.0514

Source: own editing.
Table 13. Gated Recurrent Unit (GRU) forecast results.

                  21 Days             125 Days
GRU           2018      2022      2018      2022
Brent oil     0.0180    0.0691    0.0187    0.0387
Copper        0.0107    0.0226    0.0134    0.0202
Crude oil     0.0192    0.0680    0.0181    0.0393
Gold          0.0085    0.0169    0.0065    0.0105
Natural gas   0.0155    0.0435    0.0139    0.1140
Palladium     0.0136    0.0611    0.0176    0.0400
Platinum      0.0086    0.0287    0.0118    0.0231
Silver        0.0096    0.0263    0.0112    0.0250
Average       0.0130    0.0420    0.0139    0.0389

Source: own editing.
Table 14. Long Short-Term Memory (LSTM) forecast results.

                  21 Days             125 Days
LSTM          2018      2022      2018      2022
Brent oil     0.0167    0.0732    0.0176    0.0413
Copper        0.0134    0.0212    0.0156    0.0225
Crude oil     0.0198    0.0699    0.0191    0.0405
Gold          0.0085    0.0170    0.0064    0.0108
Natural gas   0.0238    0.0472    0.0228    0.1337
Palladium     0.0135    0.0619    0.0176    0.0424
Platinum      0.0087    0.0288    0.0114    0.0229
Silver        0.0153    0.0244    0.0178    0.0188
Average       0.0150    0.0429    0.0160    0.0416

Source: own editing.
Share and Cite

Vancsura, L.; Tatay, T.; Bareith, T. Evaluating the Effectiveness of Modern Forecasting Models in Predicting Commodity Futures Prices in Volatile Economic Times. Risks 2023, 11, 27. https://doi.org/10.3390/risks11020027