
BACE and BMA Variable Selection and Forecasting for UK Money Demand and Inflation with Gretl

1 Faculty of Finance and Management, WSB University in Torun, ul. Młodzieżowa 31a, 87-100 Toruń, Poland
2 Faculty of Economic Sciences and Management, Nicolaus Copernicus University, ul. Gagarina 13a, 87-100 Toruń, Poland
* Author to whom correspondence should be addressed.
Econometrics 2020, 8(2), 21; https://doi.org/10.3390/econometrics8020021
Received: 27 September 2019 / Revised: 27 April 2020 / Accepted: 20 May 2020 / Published: 22 May 2020
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)

Abstract

In this paper, we apply Bayesian averaging of classical estimates (BACE) and Bayesian model averaging (BMA) as automatic modeling procedures for two well-known macroeconometric models: UK demand for narrow money and long-term inflation. Empirical results verify the correctness of the BACE and BMA selection and exhibit similar or better forecasting performance compared with a non-pooling approach. As a benchmark, we use Autometrics, an algorithm for automatic model selection. Our study is implemented in easy-to-use gretl packages, which support parallel processing, automate numerical calculations, and allow for efficient computations.
Keywords: model uncertainty; Bayesian pooling; MPI; model averaging

1. Introduction

In this paper we consider two procedures for variable selection and forecasting in linear dynamic single-equation models: Bayesian averaging of classical estimates (BACE), introduced by Sala-i-Martin et al. (2004), and Bayesian model averaging (BMA) (see Raftery et al. 1997). The BACE and BMA methods are natural extensions of standard Bayesian inference in which one does not make inference using a single model but instead adopts a pooling approach with combined estimation and prediction. As a benchmark we use the Autometrics procedure, a non-pooling method based on the general-to-specific approach with a multiple-path search algorithm, implemented in OxMetrics software (see Doornik 2009; Doornik and Hendry 2013). We present empirical results for two non-trivial UK macroeconometric models: demand for narrow money (see Krolzig and Hendry 2001) and long-term inflation (see Hendry 2001).
Over the last two decades, we can observe a growing number of publications related to Bayesian model averaging in many fields of science, such as engineering, medicine, biology, and sociology (see Fragoso et al. 2018). Not surprisingly, this type of approach has also long been used in economics and remains important, especially for the identification of the sources of economic growth (Fernández et al. 2001a, 2001b; Błażejowski et al. 2019). The ideas stated in this paper were reviewed, among others, by Hoeting et al. (1999) and Wasserman (2000). A comprehensive study of the use of model averaging in economics, from both the frequentist and Bayesian perspectives, is given in Steel (2019).
The BACE approach is an approximation of BMA; it is not purely Bayesian but relies on the Schwarz approximation to compute the Bayes factor (see Ley and Steel 2009). Nevertheless, both the BACE and BMA approaches account for model uncertainty, which often requires the consideration of many possible linear combinations of variables and leads to a large model space that needs to be explored, demanding intensive computational effort. As an approximation, BACE is usually faster than the standard BMA approach; nevertheless, obtaining the output in a reasonable amount of time still remains a significant challenge, especially in time-series modeling and forecasting. Raftery et al. (1997) showed that standard variable selection procedures lead to different estimates and conflicting conclusions about the main questions of interest. Moreover, econometric models that are firmly based on economic theory do not always work well for forecasting. The BACE and BMA approaches combine the knowledge obtained from many possible models and account for uncertainty by averaging the parameter estimates from different specifications. Consequently, both methods can better identify significant determinants of a dependent variable and generate more accurate forecasts without any specific prior knowledge.
In BACE and BMA we can penalize large dynamic models using different model prior assumptions that put higher probabilities on more parsimonious models. This type of approach does not cover all possible solutions. One potential alternative is assigning lower prior probabilities over lag lengths, such as the Minnesota prior (see Doan et al. 1984). We also assume stability of the relation between the dependent and independent variables over time; as a consequence, all slope parameters and other posterior characteristics in our BACE and BMA packages are time invariant. In the case of time-varying parameters, it is possible to employ dynamic model averaging, as presented in Drachal (2018) and Raftery et al. (2010).
The remainder of this paper is structured as follows. In Section 2, we discuss some aspects of BACE. Section 3 briefly outlines Bayesian model averaging for dynamic linear regression models, along with brief information about the implementation of both packages in gretl. In Section 4, we provide basic information about Autometrics, a PcGive module for automatic model selection. Section 5 presents empirical results for two selected UK macroeconometric models: demand for narrow money and long-term inflation. In this section, we also compare the variable selection strategies and forecasting performance of BACE and BMA with those of Autometrics. Additionally, in Section 6 we analyze the robustness and computational run-times of BACE and BMA. Finally, we conclude the paper in Section 7.

2. The BACE Method

In Sala-i-Martin et al. (2004), the authors proposed averaging parameter estimates using a technique, Bayesian averaging of classical estimates, that enables the measurement of the importance of particular potential regressors. In this method, parameter estimates are obtained by applying ordinary least squares (OLS) and are then averaged across all possible combinations of models. The BACE approach is not purely Bayesian but relies on the Schwarz approximation to compute the Bayes factor. This approach is an alternative to the familiar and earlier-applied BMA technique, from which it differs, e.g., by using non-informative prior assumptions on the regression parameters1. A full discussion comparing BACE and BMA is presented in Ley and Steel (2009).
Among the many articles that have applied the BACE technique are  van Dijk (2004) for US inflation and  Białowolski et al. (2014) for gross domestic product, inflation, and unemployment in Poland. Jones and Schneider (2006) used BACE analysis to verify the human capital effect on economic growth, while  Mapa and Briones (2007) and Simo-Kengne (2016) used BACE analysis to obtain variables associated with economic growth.  Cuaresma and Doppelhofer (2007) extended the BACE approach by allowing for the uncertainty of nonlinear threshold effects to identify determinants of long-term economic growth. Bergh and Karlsson (2010) applied BACE to investigate the relation between government size and the control of economic freedom and globalization for a panel of rich countries. In an empirical investigation of industrial production forecasting, Feldkircher (2012) focused on the forecasting performance resulting from model averaging by measuring the root-mean-square error (RMSE). Albis and Mapa (2014) used BACE to verify misspecification issues in vector autoregressive models for artificial data.
Let us consider the following dynamic linear regression model $M_j$ ($j = 1, 2, \ldots, K$):
$$y = X_j \beta_j + \epsilon,$$
where $y$ is a $(T \times 1)$ vector of observations and $X_j$ is a $(T \times k_j)$ matrix with $X_j = [Y_j \; Z_j]$, where $Y_j$ is a $(T \times k_j^y)$ matrix containing $k_j^y$ lagged values of the dependent variable, while $Z_j$ is a $(T \times k_j^z)$ matrix of exogenous variables. $\beta_j = [\beta_j^{y\prime} \; \beta_j^{z\prime}]'$ is a vector of unknown parameters, where $\beta_j^y \in \mathbb{R}^{k_j^y}$ and $\beta_j^z \in \mathbb{R}^{k_j^z}$, and $\epsilon$ is a $(T \times 1)$ vector of errors that are assumed to be normally distributed (i.e., $\epsilon \sim N(0_T, \sigma^2 I_T)$), where $N(\mu, \Sigma)$ denotes a normal distribution with location $\mu$ and covariance $\Sigma$.
From the OLS estimates (see Sala-i-Martin et al. 2004), we can calculate an approximation of the posterior probability of model $M_j$, i.e., $\Pr(M_j \mid y)$, using the following formula:
$$\Pr(M_j \mid y) \approx \frac{\Pr(M_j)\, T^{-k_j/2}\, \mathrm{SSE}_j^{-T/2}}{\sum_{i=1}^{2^K} \Pr(M_i)\, T^{-k_i/2}\, \mathrm{SSE}_i^{-T/2}},$$
where $\mathrm{SSE}_j$ and $\mathrm{SSE}_i$ are the OLS sums of squared errors, $2^K$ denotes the total number of potential combinations of the $K$ independent variables, and $k_j$ and $k_i$ are the numbers of regression parameters in $\beta_j$ and $\beta_i$. Here $T^{-k_j/2}\, \mathrm{SSE}_j^{-T/2} \propto p(y \mid M_j)$, where $p(y \mid M_j)$ denotes the density of the marginal distribution of $y$ conditional on model $M_j$.
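The approximation above is easy to reproduce outside gretl. The following Python sketch (our own illustration, not code from the BACE package; all function and variable names are ours) enumerates every subset of a small set of candidate regressors, always keeps an intercept, and converts the Schwarz-approximated marginal likelihoods into posterior model weights, working in logs for numerical stability:

```python
import itertools
import numpy as np

def bace_weights(y, X, theta=0.5):
    """Approximate posterior probabilities over all 2^K subsets of the
    columns of X (an intercept is always included), using
    Pr(M_j|y) ∝ Pr(M_j) * T^(-k_j/2) * SSE_j^(-T/2), computed in logs."""
    T, K = X.shape
    models, logw = [], []
    for subset in itertools.chain.from_iterable(
            itertools.combinations(range(K), r) for r in range(K + 1)):
        Xj = np.column_stack([np.ones(T)] + [X[:, i] for i in subset])
        kj = Xj.shape[1]
        beta, *_ = np.linalg.lstsq(Xj, y, rcond=None)
        sse = np.sum((y - Xj @ beta) ** 2)
        # binomial model prior with inclusion probability theta
        log_prior = len(subset) * np.log(theta) + (K - len(subset)) * np.log(1 - theta)
        logw.append(log_prior - 0.5 * kj * np.log(T) - 0.5 * T * np.log(sse))
        models.append(subset)
    logw = np.array(logw)
    w = np.exp(logw - logw.max())       # stabilize before normalizing
    return models, w / w.sum()
```

For more than roughly 20 candidate variables, exhaustive enumeration of the $2^K$ subsets becomes the binding constraint, which is why run-times and parallelization are discussed later in the paper.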
Prior probabilities $\Pr(M_j)$ and $\Pr(M_i)$ of models $M_j$ and $M_i$ are binomially distributed; that is,
$$\Pr(M_j) = \theta^{k_j} (1 - \theta)^{K - k_j}, \quad \theta \in [0, 1].$$
The binomial distribution implies that we only need to specify a prior expected model size $E(\Xi) = K\theta$, where $E(\Xi) \in (0, K]$. For example, if we define the value of $E(\Xi)$, then our BACE package will automatically produce the prior inclusion probability for all competing models. If $\theta = 0.5$, then the prior expected model size is equal to half the number of potential regressors, and the model prior distribution is uniform, $\Pr(M_i) = 2^{-K}$, reflecting a lack of prior knowledge about the models.
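As a quick illustration of the prior (a minimal helper of our own, not part of the package), the binomial model prior and its normalization over the model space can be written as:

```python
from math import comb

def model_prior(k, K, expected_size):
    """Prior probability of one particular model that includes k of the K
    candidate regressors, under a binomial prior with inclusion
    probability theta = expected_size / K."""
    theta = expected_size / K
    return theta ** k * (1 - theta) ** (K - k)
```

With $E(\Xi) = K/2$, i.e., $\theta = 0.5$, every one of the $2^K$ models receives the same prior probability $2^{-K}$, and summing the prior over all $\binom{K}{k}$ models of each size $k$ gives exactly one.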
Using BACE, we can also easily evaluate the mean and variance of the posterior distribution of the regression parameters $\beta$ over the whole model space (see Leamer 1978; Sala-i-Martin et al. 2004):
$$E(\beta \mid y) \approx \sum_{i=1}^{2^K} \Pr(M_i \mid y)\, \hat{\beta}_i,$$
$$\mathrm{Var}(\beta \mid y) \approx \sum_{i=1}^{2^K} \Pr(M_i \mid y)\, \mathrm{Var}(\beta_i \mid y, M_i) + \sum_{i=1}^{2^K} \Pr(M_i \mid y) \left[ \hat{\beta}_i - E(\beta \mid y) \right]^2,$$
where $\hat{\beta}_i = E(\beta_i \mid y, M_i)$ and $\mathrm{Var}(\beta_i \mid y, M_i)$ are the OLS estimates of $\beta_i$ from model $M_i$.
Another useful and popular characteristic of the BACE approach is the posterior inclusion probability (PIP), defined as the posterior probability that the variable $x_i$ is relevant in explaining the dependent variable (see Leamer 1978; Mitchell and Beauchamp 1988). In our case, the PIP is calculated as the sum of the posterior model probabilities of all models that include a specific variable:
$$\Pr(\beta_i \neq 0 \mid y) = \sum_{j:\, x_i \in M_j} \Pr(M_j \mid y).$$
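Given posterior model weights, the averaged mean, the averaged variance, and the PIP of a single coefficient are plain weighted sums. A compact sketch (our own illustrative helper; the gretl packages compute these quantities internally):

```python
import numpy as np

def average_posterior(weights, betas, variances, included):
    """Model-averaged mean, variance, and PIP for one coefficient.
    weights: Pr(M_i|y); betas, variances: per-model OLS estimates
    (0 when the variable is excluded); included: 0/1 indicators."""
    w = np.asarray(weights, float)
    b = np.asarray(betas, float)
    mean = np.sum(w * b)
    # within-model variance plus between-model spread of the estimates
    var = np.sum(w * np.asarray(variances, float)) + np.sum(w * (b - mean) ** 2)
    pip = np.sum(w * np.asarray(included, float))
    return mean, var, pip
```

For example, two models with weights 0.6 and 0.4, where only the first includes the variable (estimate 1.0, variance 0.1), give an averaged mean of 0.6 and a PIP of 0.6.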
For model averaging, the Bayesian pooling strategy can also provide useful information about future observations of the dependent variable on the basis of the whole model space:
$$E(y^f \mid y) \approx \sum_{i=1}^{2^K} \Pr(M_i \mid y)\, E(y^f \mid y, M_i),$$
$$\mathrm{Var}(y^f \mid y) \approx \sum_{i=1}^{2^K} \Pr(M_i \mid y)\, \mathrm{Var}(y^f \mid y, M_i) + \sum_{i=1}^{2^K} \Pr(M_i \mid y) \left[ E(y^f \mid y, M_i) - E(y^f \mid y) \right]^2,$$
where $E(y^f \mid y)$ and $\mathrm{Var}(y^f \mid y)$ denote the mean and variance of the future observations $y^f$.

3. The BMA Method

Another model building strategy is Bayesian model averaging, in which we make inference based on the full posterior distribution. From the Bayesian perspective, uncertainty is a natural part of the decision-making process and can therefore be easily incorporated into model selection rules (Koop 2003; Zellner 1971). Among the many seminal papers on Bayesian model averaging are Hoeting et al. (1999) and Fernández et al. (2001a, 2001b). The most recent detailed overview is presented in Steel (2019).
Once again, we are dealing with the problem of which model and which variables are the most appropriate for analyzing the dependencies, but in this case we use a natural and explicit way of combining prior information with the data, without any approximation of the marginal data density or Bayes factors. We consider two variants of the BMA framework: in the first, we impose stationarity conditions on the autoregressive parameters; the second is without restrictions.
Let us first consider the restricted variant:
$$y = X_j \beta_j + \varepsilon,$$
where $y$ is a vector of $T$ observations, $X_j$ is a $(T \times k_j)$ matrix, $\beta_j$ is a $(k_j \times 1)$ vector of parameters, and $\varepsilon$ is a $(T \times 1)$ vector with normal distribution $N(0, \sigma^2 I_T)$, where $\sigma^2$ is the variance of the random error $\varepsilon$ and $I_T$ is an identity matrix of size $T$. Moreover, $X_j = [Y_j \; Z_j]$, where $Y_j$ is a $(T \times k_j^y)$ matrix containing $k_j^y$ lagged values of the dependent variable, while $Z_j$ is a $(T \times k_j^z)$ matrix of exogenous variables. Furthermore, $\beta_j = [\beta_j^{y\prime} \; \beta_j^{z\prime}]'$ is a vector of unknown parameters, where $\beta_j^y \in \Gamma \subset \mathbb{R}^{k_j^y}$, $\beta_j^z \in \mathbb{R}^{k_j^z}$, and $\Gamma$ is the stationarity region for the parameters of the autoregressive process. We also assume that we observe the initial values $y^{(0)}$.
Let us consider a prior density of the following form:
$$p(\beta_j, h \mid M_j) = p(\beta_j \mid h, M_j)\, p(h),$$
where:
$$p(\beta_j \mid h, M_j) \propto f_N(\beta_j \mid \underline{\beta}_j, h^{-1} \underline{V}_j)\, I(\beta_j^y \in \Gamma)$$
and $f_N(\beta_j \mid \mu, \Sigma)$ denotes the multivariate normal density with mean $\mu$ and covariance matrix $\Sigma$, $I(A)$ is the indicator function, $\underline{\beta}_j$ is a $k_j$-vector of prior means for the regression coefficients, and $\underline{V}_j$ is a $k_j \times k_j$ positive definite prior covariance matrix of the form:
$$\underline{V}_j = g_j (X_j' X_j)^{-1}.$$
For the error precision $h$, defined as $h = 1/\sigma^2$, we use the noninformative prior:
$$p(h) \propto h^{-1}, \quad h > 0.$$
The factor of proportionality $g_j$ ($j = 1, 2, \ldots, K$) is part of the so-called g-prior introduced in Zellner (1986). In our research we use the benchmark prior recommended by Fernández et al. (2001a):
$$g_j = \begin{cases} 1/K^2 & \text{for } T \leq K^2, \\ 1/T & \text{for } T > K^2. \end{cases}$$
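This choice can be expressed as a one-line rule (a trivial helper of our own); for instance, with $T = 86$ observations and $K = 20$ candidate variables, as in the UKM1 application, it yields $g = 1/400$:

```python
def benchmark_g(T, K):
    """Benchmark g-prior factor of Fernandez et al. (2001a):
    g = 1/K^2 when T <= K^2, and 1/T otherwise."""
    return 1.0 / K ** 2 if T <= K ** 2 else 1.0 / T
```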
Assuming the prior structure in Equation (10), we obtain the following joint posterior density:
$$p(\beta_j, h \mid y, M_j) = c_j \cdot f_{NG}(\beta_j, h \mid \overline{\beta}_j, \overline{V}_j, \overline{s}_j^{-2}, \overline{v}_j)\, I(\beta_j^y \in \Gamma),$$
where $c_j$ is a normalizing constant and $f_{NG}$ is the normal-gamma density (see Koop et al. 2007). In our case, the constant $c_j$ plays an important role in obtaining the Bayes factor between competing models and can be obtained by Monte Carlo simulation.
Using the properties of the normal-gamma density, Equation (15) leads to:
$$p(\beta_j \mid h, y, M_j) \propto f_N(\beta_j \mid \overline{\beta}_j, h^{-1} \overline{V}_j)\, I(\beta_j^y \in \Gamma),$$
$$p(h \mid y) = f_G(h \mid \overline{s}_j^{-2}, \overline{v}_j),$$
where
$$\overline{V}_j = (\underline{V}_j^{-1} + X_j' X_j)^{-1},$$
$$\overline{\beta}_j = \overline{V}_j (\underline{V}_j^{-1} \underline{\beta}_j + X_j' X_j \hat{\beta}_j),$$
and $\overline{v}_j = T$. We also have:
$$\hat{\beta}_j = (X_j' X_j)^{-1} X_j' y,$$
$$s_j^2 = \frac{(y - X_j \hat{\beta}_j)' (y - X_j \hat{\beta}_j)}{v_j},$$
$$\overline{v}_j \overline{s}_j^2 = v_j s_j^2 + (\hat{\beta}_j - \underline{\beta}_j)' \left[ \underline{V}_j + (X_j' X_j)^{-1} \right]^{-1} (\hat{\beta}_j - \underline{\beta}_j),$$
where $v_j = T - k_j$.
The marginal data density $p(y \mid M_j)$, as well as the posterior means and standard deviations of the regression coefficients, can be calculated numerically using Monte Carlo integration by sampling from the posterior distribution in Equation (15). In our case, we first draw the error precision $h$ from Equation (17) and then draw $\beta_j$ from Equation (16). We accept only those candidate values that lie in the stationarity region for the parameters of the autoregressive process. The constant $c_j$ can be calculated as the inverse of the acceptance ratio, i.e., the inverse of the fraction of draws accepted in the Monte Carlo simulation.
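The sampling scheme can be sketched in Python as follows (an illustration under our own naming and the usual gamma parameterization with mean $1/\overline{s}^2$; the actual implementation in the BMA_ADL package is written in Hansl). A draw is retained only if the autoregressive coefficients imply stationarity, which we check through the eigenvalues of the companion matrix; the inverse of the resulting acceptance rate estimates $c_j$:

```python
import numpy as np

def is_stationary(ar_coefs):
    """True if the AR(p) process with the given coefficients is stationary,
    i.e. all eigenvalues of its companion matrix lie inside the unit circle."""
    p = len(ar_coefs)
    if p == 0:
        return True
    C = np.zeros((p, p))
    C[0, :] = ar_coefs
    if p > 1:
        C[1:, :-1] = np.eye(p - 1)
    return bool(np.all(np.abs(np.linalg.eigvals(C)) < 1))

def sample_restricted_posterior(beta_bar, V_bar, s2_bar, v_bar, n_ar,
                                ndraws=2000, seed=0):
    """Accept-reject sampling from the truncated normal-gamma posterior.
    Returns the accepted beta draws and the acceptance ratio; its inverse
    estimates the normalizing constant c_j."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(ndraws):
        # h | y ~ Gamma with mean 1/s2_bar and v_bar degrees of freedom
        h = rng.gamma(v_bar / 2.0, 2.0 / (v_bar * s2_bar))
        # beta | h, y ~ N(beta_bar, V_bar / h), then truncated to Gamma region
        beta = rng.multivariate_normal(beta_bar, V_bar / h)
        if is_stationary(beta[:n_ar]):
            kept.append(beta)
    return np.array(kept), len(kept) / ndraws
```

When the posterior mass of the autoregressive coefficients sits well inside the stationarity region, the acceptance ratio is close to one and the truncation is nearly costless; near the unit root, many draws are rejected and $c_j$ grows accordingly.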
The posterior probability of any variant of regression model $M_j$ can be calculated by the following formula, which is crucial for Bayesian model averaging:
$$\Pr(M_j \mid y) = \frac{\Pr(M_j)\, p(y \mid M_j)}{\sum_{i=1}^{2^K} \Pr(M_i)\, p(y \mid M_i)},$$
where $\Pr(M_1), \Pr(M_2), \ldots, \Pr(M_{2^K})$ denote the prior probabilities of the competing models (see Equation (3)). Other characteristics, such as the posterior inclusion probabilities (PIP) and the mean and variance of the posterior distribution of the regression parameters $\beta$ over the whole model space, can be calculated in the same manner as described for the BACE method.
In the case of the BMA variant without stationarity restrictions on the parameters of the autoregressive process, we assume that $\beta_j^y \in \mathbb{R}^{k_j^y}$, and Equations (11), (15) and (16) take the following forms, while the other equations remain unchanged:
$$p(\beta_j \mid h, M_j) = f_N(\beta_j \mid \underline{\beta}_j, h^{-1} \underline{V}_j),$$
$$p(\beta_j, h \mid y, M_j) = f_{NG}(\beta_j, h \mid \overline{\beta}_j, \overline{V}_j, \overline{s}_j^{-2}, \overline{v}_j),$$
$$p(\beta_j \mid h, y, M_j) = f_N(\beta_j \mid \overline{\beta}_j, h^{-1} \overline{V}_j).$$

BACE and BMA in Gretl

In order to perform Bayesian averaging of classical estimates, we used the BACE 2.0 package2. This code implements an automatic BACE procedure that is available in the gretl3 program as open-source software. In the procedure's main window, we can specify, among others, the following parameters: the list of independent variables, the prior distribution over the model space, the number of out-of-sample forecasts, and general parameters for the Monte Carlo simulation. As a result, the BACE package prints basic posterior characteristics, such as PIPs and the posterior means of the coefficients together with their standard errors. In addition, the package presents rankings of the most probable specifications according to their explanatory power and generates forecasts of the dependent variable.
In this paper we also use a software package that implements Bayesian model averaging for autoregressive distributed lag models, BMA_ADL ver. 0.9, in gretl4. The BMA_ADL package and its output are similar to those of the BACE package. Although the two packages are similar, there is a principal difference between them: in BMA_ADL, we draw samples from the posterior distribution of the slope parameters $\beta$ and check the roots of the characteristic polynomial of the autoregressive process, although this feature can be explicitly switched off by the user. Detailed information about the package can be found in Błażejowski and Kwiatkowski (2020).
Since the exploration of many possible models demands intensive computational effort, we run BACE and BMA_ADL through the Message Passing Interface (MPI)5. This is especially useful for a large number of explanatory variables, where the computational complexity exceeds the computing power of a single modern PC, because the packages then perform parallel computations through MPI.
Both packages are written in gretl's internal scripting language, Hansl (see Cottrell and Lucchetti 2019b), with an easy-to-use graphical user interface (GUI). Therefore, they can be treated as automatic model selection procedures and should be useful for users who are not familiar with model averaging.

4. Autometrics

The Autometrics procedure for automatic model selection is built on the basis of the PcGets module implemented in OxMetrics software and is fully described in Doornik (2009) and Doornik and Hendry (2013). It is based on the general-to-specific approach with a multiple-path search (reduction) algorithm. The assumption underlying empirical model selection is the discovery of the local data generating process (LDGP), which is intended to explain the relations that occur in the real world in the space of available variables. One needs to select a model from the set of potential specifications, ensuring that the model is congruent and encompasses the LDGP. The crucial issue here is the test-based reduction strategy (see Desboulets 2018).
The procedure of selecting variables in Autometrics proceeds in the following stages. The starting point for the automatic procedure is the general unrestricted model (GUM), which should remain congruent and include all potentially relevant variables according to sample size and theoretical or empirical considerations. This increases the chance that the LDGP is nested in the GUM and can be discovered. Variables with strong importance and influence should remain in the model, while less significant ones should not be retained. Mis-specification tests are applied to ensure the congruence postulate and to avoid a badly specified GUM. Additionally, an encompassing test is used to evaluate whether a small model can explain the larger one within which it is nested. Such a procedure can be treated as a progressive research strategy of reduction and defines a partial order over the considered model specifications (see Hendry et al. 2008).
Autometrics takes a tree-search approach to examining the whole model space, using three strategies of path evaluation: pruning, bunching, and chopping (see Hendry and Doornik 2014). Before the main multi-path reduction of variables is launched, a pre-search is initiated to eliminate the most insignificant and irrelevant variables. During reductions and simplifications, diagnostic tests are employed to ensure congruence and to find a valid specification. A terminal model is obtained at the end of each branch of the tree-search algorithm. Moreover, Autometrics also takes into consideration the following modeling issues: cointegration, functional form, economic theory, and data accuracy.
Applications of Autometrics or PcGets can be found, for example, in Clements and Hendry (2008); Hendry (2001) for UK inflation; Hendry (2011) for consumers' expenditures; Castle et al. (2012) for US real interest rates; Ericsson and Kamin (2009) for Argentine broad money demand; Marczak and Proietti (2016) for industrial production; Kamarudin and Ismail (2016) for a water quality index; and Ackah and Asomani (2015) for renewable energy.

5. Empirical Results

In this section, we analyze the BACE and BMA results for the data used in two well-known dynamic macroeconometric models: the model for M1 money demand in the UK (UKM1) proposed in Hendry and Ericsson (1991), and the long-term UK inflation model introduced in Hendry (2001). We focus on the modeling and forecasting of narrow money and inflation in the UK using the BACE and BMA methods along with the Autometrics program6. We analyze the estimation results using standard posterior characteristics, such as the posterior inclusion probabilities, the posterior means of the regression parameters, and the posterior standard deviations, as defined in Section 2. We compare forecasting accuracies using three measures, namely, the root-mean-square error (RMSE), the mean absolute percentage error (MAPE), and Theil's U coefficient (see Theil 1966, pp. 33–36) divided into three factors: the bias proportion UM (measuring the difference between the averages of the actual and predicted values), the regression proportion UR (evaluating the slope coefficient from a regression of changes in actual values on changes in predicted values), and the disturbance proportion UD (measuring the proportion of forecast error associated with the random disturbance). The first two factors represent systematic error and should be 0, while the disturbance proportion is the unsystematic element and should equal 1.
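These accuracy measures follow standard formulas; the following standalone Python sketch (our own code, not from the paper) computes RMSE, MAPE, and the three Theil proportions, which sum to one by construction:

```python
import numpy as np

def forecast_accuracy(actual, forecast):
    """RMSE, MAPE (in %), and Theil's decomposition of the MSE into bias
    (UM), regression (UR), and disturbance (UD) proportions (Theil 1966)."""
    a = np.asarray(actual, float)
    f = np.asarray(forecast, float)
    err = f - a
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / a))
    r = np.corrcoef(a, f)[0, 1]
    um = (f.mean() - a.mean()) ** 2 / mse        # bias proportion
    ur = (f.std() - r * a.std()) ** 2 / mse      # regression proportion
    ud = (1.0 - r ** 2) * a.var() / mse          # disturbance proportion
    return rmse, mape, (um, ur, ud)
```

The identity UM + UR + UD = 1 holds exactly when population (ddof = 0) moments are used, as NumPy's `std` and `var` do by default.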
Based on the methods discussed here, we also consider two additional models associated with the BACE and BMA output. Barbieri and Berger (2004) introduced the median probability model, which is defined as the model consisting of those variables whose overall posterior probability of inclusion is greater than or equal to 0.5. According to the authors, the median probability model considerably outperforms the most probable model in terms of predictive accuracy. Therefore, it seems reasonable to include this model in our analysis and compare its forecasting performance.

5.1. Modeling and Forecasting Demand for Narrow Money in the UK: UKM1

We based the first empirical illustration on the UKM1 model proposed in Hendry and Ericsson (1991), in the following form:
$$\Delta(m - p)_t = -0.69\, \Delta p_t - 0.17\, \Delta(m - p - y)_{t-1} - 0.63\, R_{n,t} - 0.093\, (m - p - y)_{t-1} + 0.023,$$
where small letters indicate the log-transformed variables defined as follows7:
  • $M_t$: nominal narrow money, the M1 aggregate, in £ million,
  • $Y_t$: real total final expenditure (TFE) in 1985 prices, in £ million,
  • $P_t$: deflator of TFE,
  • $R_{n,t}$: net interest rate of the cost of holding money (calculated as the difference between the three-month interest rate and a learning-adjusted own interest rate).
The data for the UK narrow money M1 aggregate are quarterly8 and span from 1964:3 to 1989:2. Figure 1 presents plots of the time series used in the analysis.
Model (27) was later replicated as an unrestricted autoregressive distributed lag (ADL) model in PcGets (see Krolzig and Hendry 2001, p. 29). In their paper, narrow money was measured in nominal terms instead of real terms, so the ADL representation of the general unrestricted model (GUM) was defined as follows:
$$m_t = \sum_{s=1}^{4} \alpha_s m_{t-s} + \sum_{s=0}^{4} \beta_s p_{t-s} + \sum_{s=0}^{4} \gamma_s y_{t-s} + \sum_{s=0}^{4} \delta_s R_{n,t-s} + \mathit{const} + \varepsilon_t.$$
After reduction9, they obtained the following empirical model:
$$\hat{m}_t = 0.67\, m_{t-1} + 0.21\, m_{t-4} + 0.33\, p_t - 0.20\, p_{t-3} + 0.13\, y_t - 0.58\, R_{n,t} - 0.34\, R_{n,t-2}.$$
In our research, following Krolzig and Hendry (2001), we estimated the GUM in the form shown in Equation (28) using the sample 1964:1–1985:2 ($T = 86$) and used the last 4 years (1985:3–1989:2) for forecasting purposes. We compared the variable selection and forecasting accuracy of BACE and BMA with those of Autometrics, an alternative automatic model selection procedure. Table 1 presents the estimation and variable selection results for UKM1 in the ADL form of Equation (28). The model space consists of $2^{20} = 1{,}048{,}576$ specifications that must be considered. The total number of variables is 20, including current values of the explanatory variables and their lags (up to order 4), lagged values of the dependent variable (up to order 4), and a constant.
According to the results in Table 1, the variables used in the BACE analysis can be divided into three groups: high-probability determinants ($m_{t-1}$, $R_{n,t}$, $p_t$) with $PIP \geq 2/3$, medium-probability determinants ($m_{t-2}$, $m_{t-4}$, $p_{t-1}$, $y_{t-1}$) with $1/3 \leq PIP < 2/3$, and low-probability determinants (the remaining variables) with $PIP < 1/3$. The top four most probable variables are the same as those selected by Autometrics, although only three of them are classified as highly probable determinants (one variable, $y_{t-1}$, is close to being highly probable). This discrepancy between the two selections can be explained by the fact that the Autometrics model, which is the most probable one in BACE, holds only 1.53% of the total posterior probability mass (see Table 2).
In the case of BMA with stationarity restrictions, we get similar results, although there are some slight changes. Again, the top four most probable variables are the same as those selected by BACE and Autometrics; however, only two of them ($m_{t-1}$, $R_{n,t}$) can be classified as highly probable, while the group of medium-probability determinants includes $p_t$, $p_{t-1}$, and $y_{t-1}$. The most likely model is again the one selected by Autometrics, although in this case the posterior probability of the top model is higher than in BACE and equals 8.09% (see Table 3). The BMA procedure without stationarity restrictions points to $m_{t-1}$ and $R_{n,t}$ as highly probable variables, while $p_t$ and $y_{t-1}$ are medium probable. The most probable model is still the same as in BMA with stationarity restrictions, but models $(M_2)$ and $(M_3)$ have swapped places in the ranking (see Table 4). In the vast majority of cases, the BMA PIP coefficients take lower values than those of BACE. Consequently, we find little difference in the posterior mean and variance of the regression parameters between BACE and BMA. Nevertheless, it is difficult to state clearly which point estimates are closer to the results obtained by Autometrics; however, the BACE ranking is more consistent with the Autometrics output. It seems that both BMA methods prefer more parsimonious specifications that do not include some variables found to be important in BACE and Autometrics.
In the next step, we compared the accuracy of the forecasts. Table 5 presents the BACE, BMA (with and without stationarity restrictions), and Autometrics forecasts of nominal narrow money in the UK for the period from 1985:3 to 1989:2, which covers 16 quarters. The second column includes the logs of the actual values of the dependent variable. The next columns contain the weighted averages of the individual model forecasts and errors for BACE, BMA, and Autometrics, respectively. Additionally, we include results for the so-called median probability models introduced by Barbieri and Berger (2004). The five bottom rows of the table contain well-known measures of forecast error: RMSE, MAPE, and Theil's U coefficients.
The accuracy measures indicate that the BACE forecasts are relatively close to the real values of nominal narrow money in the UK. For BACE, RMSE is 0.0224 and MAPE is 0.17%, while RMSE and MAPE for BMA with stationarity restrictions are 0.0592 and 0.43%, respectively. Forecasts generated by BMA without stationarity restrictions have slightly lower forecast errors than forecasts from BMA with stationarity restrictions. In the other cases, i.e., for Autometrics and the median probability models, the results are in the ranges of 0.0994–0.1018 and 0.72–0.74%, respectively, while both median BMA approaches give exactly equal results. This means that the RMSE calculated for BACE is two and a half times smaller than the RMSE resulting from BMA and almost five times smaller than that for Autometrics. We see almost the same pattern for the MAPE measure, where BACE returns the smallest errors. The last conclusion from these results is that the median probability models do not outperform the mixture of models in terms of predictive performance.
For all methods, the largest factor of the forecast error is the bias proportion, which has a considerable impact on forecast accuracy, although for BACE it is the smallest. This is clearly reflected in Figure 2, which shows the actual and forecasted values of nominal narrow money. In this figure, the bias in the forecasts generated by BMA, Autometrics, and the two other methods grows substantially as the forecast horizon increases.
One potential explanation of this observation is that the forecasts in Autometrics and the median probability models are generated by only one model. According to the BACE and BMA (with stationarity restrictions) results, the use of the single model $(M_1)$ leaves 98.5% and 91.9% of the total posterior probability mass unaccounted for. In contrast, BACE and BMA calculate forecasts from the whole model space and account for the mixture of all considered specifications, weighted by their posterior probabilities.

5.2. Modeling and Forecasting Long-Term UK Inflation

In the second empirical example, we used the long-term UK inflation model developed in Hendry (2001) for the years 1875–1991 ($T = 117$). The data set10 used in this research is described in Table 6, and Figure 3 presents the time-series plots (small letters indicate log-transformed variables).
The final specification, after a number of variable transformations and model pre-reduction, was as follows11 (see Hendry 2001):
$$\Delta p_t = f(\Delta p_{t-1}, y_{t-1}^d, m_{t-1}^d, n_{t-1}^d, U_{t-1}^d, S_{t-1}, R_{l,t-1}, \Delta p_{e,t}, \Delta p_{e,t-1}, \Delta U_{r,t-1}, \Delta w_{t-1}, \Delta c_{t-1}, \Delta m_{t-1}, \Delta n_{t-1}, \Delta R_{s,t-1}, \Delta R_{l,t-1}, \Delta p_{o,t-1}, I_{d,t}, \pi_{t-1}^{*}; \varepsilon_t),$$
where $\pi_t^{*} = 0.25\, e_{r,t} - 0.675\, (c - p)_t^{*} - 0.075\, (p_o - p)_t + 0.11\, I_{2,t} + 0.25$, $(c - p)_t^{*} = c_t - p_t + 0.006 \times (\mathit{trend} - 69.5) + 2.37$, and $I_d$ is a combination of year indicator dummies. The model space consists of $2^{20} = 1{,}048{,}576$ linear combinations that must be considered. After reduction at a 1% significance level, the specification in Equation (30) was reduced to the following empirical model (see Hendry 2001):
$$\Delta \hat{p}_t = 0.18\, y_{t-1}^d + 0.19\, \Delta m_{t-1} - 0.83\, S_{t-1} + 0.62\, \Delta R_{s,t-1} + 0.19\, \pi_{t-1}^{*} + 0.27\, \Delta p_{e,t} + 0.04\, I_{d,t} + 0.04\, \Delta p_{o,t-1} + 0.27\, \Delta p_{t-1}.$$
The results in Table 7 show that BACE, BMA, and Autometrics identify the same set of significant determinants of UK inflation as in Hendry (2001). Moreover, both BMA procedures, with and without stationarity restrictions, give exactly the same results. This can be explained by the fact that the dependent variable $\Delta p_t$ is far from the non-stationary region, so imposing stationarity restrictions does not result in rejecting any of the draws from the posterior. Hereafter, we formulate comments without distinguishing the restricted and unrestricted cases. BACE and BMA indicate that the following variables are highly probable: $\pi_{t-1}^{*}$, $I_{d,t}$, $\Delta p_{e,t}$, $S_{t-1}$, $\Delta p_{t-1}$, $y_{t-1}^d$, $\Delta R_{s,t-1}$, $\Delta m_{t-1}$, $\Delta p_{o,t-1}$. Autometrics selects the same set, reducing model (30) at the 1% significance level.
Table 8, Table 9 and Table 10 present the BACE and BMA posterior probabilities and coefficient estimates for the top 10 models. In the case of BACE, the most probable model ( $M_1$ ) has a posterior probability of 21.9%, while the second model in the ranking ( $M_2$ ) has a probability of 6.4%; for the remaining models, the posterior probability does not exceed 4.7%. Although the posterior probability of the highest-ranked model ( $M_1$ ) is more than three times that of the second model ( $M_2$ ), inference based only on $M_1$ leaves out 78.1% of the posterior probability mass. As a consequence, the averaged coefficient means differ slightly from the Autometrics estimates. The situation is similar for BMA, although there the highest-ranked model ( $M_1$ ) is even more strongly preferred by the data, with a posterior probability of 32.66%; the second model in the ranking ( $M_2$ ) is roughly half as likely.
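The posterior quantities reported in the tables follow the usual model-averaging identities: a variable's PIP is the summed posterior probability of the models containing it, and its averaged coefficient weights each model's estimate by that model's posterior probability (with a zero contribution from models excluding the variable). A minimal sketch of this bookkeeping, using hypothetical model weights and coefficients rather than any output from the paper:

```python
# Illustrative sketch: posterior inclusion probabilities (PIP) and
# model-averaged coefficient means from a set of models with posterior
# weights. All model weights and coefficients below are hypothetical.

def model_average(models):
    """models: list of (posterior_prob, {variable: coefficient}) pairs.
    Returns (pip, avg) dicts keyed by variable name."""
    pip, avg = {}, {}
    for prob, coeffs in models:
        for var, beta in coeffs.items():
            pip[var] = pip.get(var, 0.0) + prob
            # models excluding a variable contribute a zero coefficient,
            # so the average is taken over the whole model space
            avg[var] = avg.get(var, 0.0) + prob * beta
    return pip, avg

models = [
    (0.6, {"x1": 0.5, "x2": 0.2}),  # most probable model
    (0.4, {"x1": 0.4}),             # second model, excludes x2
]
pip, avg = model_average(models)
# x1 appears in both models, so pip["x1"] = 1.0;
# avg["x1"] = 0.6 * 0.5 + 0.4 * 0.4 = 0.46
```

This is also why inference based on the top model alone can be misleading: the remaining probability mass shifts the averaged means away from any single model's estimates.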
Table 11 presents detailed information about the UK inflation predictions from BACE, BMA, Autometrics, and the median probability models. The table includes actual values, forecast values, and forecast standard errors, as well as accuracy measures, for the 10-year period from 1982 to 1991. The actual and forecast values of UK inflation are plotted in Figure 4. For BACE, the RMSE is 0.0179 and the MAPE is 26.06%; for BMA, the RMSE is 0.0175 and the MAPE is 25.85%; for Autometrics, they are 0.0151 and 25.06%, respectively. The forecast errors of the median probability models are the largest of all the methods. Thus BACE, BMA, and Autometrics generate forecasts of almost the same quality, but the sources of their errors differ: for BACE and BMA, the largest component of forecast error is the bias proportion, whereas for Autometrics it is the disturbance proportion.
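The accuracy measures follow standard definitions: RMSE, MAPE, and Theil's decomposition of the mean squared forecast error into bias (UM), regression (UR), and disturbance (UD) proportions (Theil 1966), which sum to one. A sketch of these formulas, with our own illustrative numbers rather than the values in the tables:

```python
import math

def forecast_diagnostics(actual, forecast):
    """RMSE, MAPE (in percent), and Theil's bias/regression/disturbance
    proportions of the MSE. Population moments are used throughout."""
    n = len(actual)
    errors = [f - a for a, f in zip(actual, forecast)]
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mape = sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n * 100
    ma = sum(actual) / n
    mf = sum(forecast) / n
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual) / n)
    sf = math.sqrt(sum((f - mf) ** 2 for f in forecast) / n)
    r = sum((a - ma) * (f - mf)
            for a, f in zip(actual, forecast)) / (n * sa * sf)
    um = (mf - ma) ** 2 / mse          # bias proportion (UM)
    ur = (sf - r * sa) ** 2 / mse      # regression proportion (UR)
    ud = (1 - r * r) * sa ** 2 / mse   # disturbance proportion (UD)
    return rmse, mape, um, ur, ud

# toy series, not data from the paper
rmse, mape, um, ur, ud = forecast_diagnostics(
    [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 4.1])
```

A large UM signals a systematic over- or under-prediction, while a large UD means the errors are mostly unsystematic noise, which is the preferred situation.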
One explanation of the equivalent forecasting performance is that, according to the results in Table 8, Table 9 and Table 10, the top 10 most probable specifications share almost the same set of the nine most probable variables and jointly cover over 50% of the posterior probability mass, including the second-ranked model ( $M_2$ ), which coincides with the Autometrics selection.

6. Robustness and Run Time Analysis

6.1. Robustness

In order to confirm the empirical findings for variable and model selection obtained by BACE and BMA, we performed a robustness analysis under different prior model assumptions. Following the philosophy proposed in Osiewalski and Steel (1993), we set different variants of the prior average model size in order to penalize large models. In Section 5, the prior average model size was set to E ( Ξ ) = K / 2 (where K is the total number of independent variables), which expresses no preference for any specification: all possible models are equally probable a priori. As robustness scenarios, we re-estimated the models in Equations (28) and (30) under three competing prior model sizes: E ( Ξ ) = K / 4 , E ( Ξ ) = K / 5 , and E ( Ξ ) = K / 8 (the most restrictive case). Table 12 and Table 13 present the BACE estimates, Table 14 and Table 15 show the results for BMA with stationarity restrictions, and Table 16 and Table 17 the results for BMA without stationarity restrictions.
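The link between the prior average model size and the penalty on large models can be made explicit. Under the usual independent-inclusion prior, each of the K regressors enters with probability θ, so E(Ξ) = Kθ: the choice E(Ξ) = K/2 gives θ = 1/2 and a uniform prior over all 2^K models, while smaller values of E(Ξ) down-weight large specifications. A short sketch in our own notation (not code from the gretl packages):

```python
# Sketch: with independent inclusion probability theta per regressor,
# the prior mean model size is E(Xi) = K * theta. The prior probability
# of a specific model with k included regressors is then
# theta**k * (1 - theta)**(K - k).

def prior_model_prob(k, K, mean_size):
    """Prior probability of any single model with k of K regressors."""
    theta = mean_size / K
    return theta ** k * (1 - theta) ** (K - k)

K = 20
# E(Xi) = K/2 -> theta = 1/2: every model gets the same prior weight 2**-K
assert prior_model_prob(5, K, K / 2) == prior_model_prob(15, K, K / 2)
# E(Xi) = K/8 -> theta = 1/8: a 5-variable model is a priori far more
# likely than a 15-variable one, i.e., large models are penalized
assert prior_model_prob(5, K, K / 8) > prior_model_prob(15, K, K / 8)
```
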
According to the BACE results presented in Table 12 and Table 13, there are no substantial differences in the output between E ( Ξ ) = K / 4 , E ( Ξ ) = K / 5 , and  E ( Ξ ) = K / 8 . A similar conclusion holds for the BMA results in Table 14, Table 15, Table 16 and Table 17. Moreover, comparing the results in Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17 with those in Table 1 and Table 7 reveals that the observed differences are negligible.

6.2. BACE and BMA Run Times

Table 18, Table 19 and Table 20 present computational timings12 of the BACE and BMA analyses, conducted in two general variants, with and without forecasting, for the two empirical examples considered earlier, i.e., UKM1 and inflation. For the BACE analysis, the total number of iterations of the MC3 sampling algorithm is 500,000, while for BMA it is 150,000. The difference in the total number of MC3 iterations between the BACE and BMA gretl packages stems from the different ways they employ MPI parallel computations, which results in different rates of convergence.
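The MC3 algorithm itself can be sketched as a Metropolis random walk over the model space: propose a neighbor of the current model by flipping the inclusion of one randomly chosen regressor, and accept the move with probability min(1, posterior odds). The following is a stylized illustration only, not the packages' actual hansl implementation; `log_post` is a toy stand-in for a model's log marginal likelihood:

```python
import math
import random

def mc3(log_post, K, iters, seed=0):
    """Stylized MC3 sampler over the 2**K model space.
    Models are frozensets of regressor indices."""
    rng = random.Random(seed)
    model = frozenset()                 # start from the empty model
    visits = {}
    for _ in range(iters):
        j = rng.randrange(K)            # flip one regressor in or out
        cand = model ^ {j}
        accept = min(1.0, math.exp(log_post(cand) - log_post(model)))
        if rng.random() < accept:
            model = cand
        visits[model] = visits.get(model, 0) + 1
    # visit frequencies approximate posterior model probabilities
    return visits

# toy log-"marginal likelihood" that peaks at the model {0, 1}
toy = lambda m: -float(len(m ^ {0, 1}))
freq = mc3(toy, K=5, iters=20000)
```

With a long enough chain, the relative visit frequencies of the models converge to their posterior probabilities, which is what both packages average over; parallelization then amounts to running and pooling several such chains.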
Increasing the number of CPUs decreases the run times of both packages more or less linearly. The largest speed-up occurs for BACE on UKM1, with an almost proportional ratio. Both packages slow down when forecasting, but again with different ratios. For BACE, simulation times increase by approximately 10–20%, regardless of the number of CPUs. For BMA, run times are multiplied by a factor ranging from 12.64 to 26.07 in the restricted case and from 32.33 to 825.29 in the unrestricted case. Such a large increase in computational time is related to the need to generate predictive distributions step by step in order to compute dynamic forecasts. The longer BMA run times are also related to the stationarity restrictions on the autoregressive parameters. We believe that we will be able to improve the speed of the BMA computations in the future.

7. Conclusions

In this paper, we discussed the use of the model averaging methods BACE and BMA as tools for variable selection and forecasting in dynamic econometric modeling. Empirical examples based on well-known, non-trivial macroeconometric models confirmed that the BACE and BMA procedures, by calculating a PIP value for each independent variable, correctly indicate the determinants of the dependent process. The robustness analysis confirms the stability of variable selection in our examples.
Moreover, BACE and BMA take model uncertainty into account and generate forecasts that are reasonably close to, or more accurate than, those of Autometrics or the median probability model. The UK inflation forecasts generated by BACE, BMA, and Autometrics have similar RMSE and MAPE values, but the demand for narrow money forecasts generated by BACE and BMA have several times smaller errors than those generated by Autometrics. A similar conclusion also applies to the median probability models. Forecasts generated by averaging over all predictive models can thus outperform forecasts from single models. This advantage over non-pooling inference is particularly evident when there is no single dominant model but rather many competing specifications with relatively low explanatory power.

Supplementary Materials

The Supplementary Materials are available at https://www.mdpi.com/2225-1146/8/2/21/s1.

Author Contributions

This is a collaborative project, and all authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Center of Science, Poland, grant number 2016/21/B/HS4/01970.

Acknowledgments

We would like to thank Mark F.J. Steel for the invitation to contribute to the Special Issue on Bayesian and Frequentist Model Averaging. We are also grateful to Riccardo Lucchetti for all inspiring discussions and getting us access to the haavelmo machine located at Ancona, Italy.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADL: Autoregressive Distributed Lag
BACE: Bayesian Averaging of Classical Estimates
BMA: Bayesian Model Averaging
DGP: Data Generating Process
GDP: Gross Domestic Product
GUI: Graphical User Interface
GUM: General Unrestricted Model
LDGP: Local Data Generating Process
MAPE: Mean Absolute Percentage Error
MPI: Message Passing Interface
MC3: Markov Chain Monte Carlo Model Composition
RMSE: Root-Mean-Square Error
TFE: Total Final Expenditure
UKM1: Model for M1 Money Demand in the UK

References

  1. Ackah, Ishmael, and McCmari Asomani. 2015. Empirical Analysis of Renewable Energy Demand in Ghana with Autometrics. International Journal of Energy Economics and Policy 5: 754–58. [Google Scholar]
  2. Albis, Manuel Leonard F., and Dennis S. Mapa. 2014. Bayesian Averaging of Classical Estimates in Asymmetric Vector Autoregressive (AVAR) Models. MPRA Paper 55902. Munich: University Library of Munich. [Google Scholar]
  3. Barbieri, Maria Maddalena, and James O. Berger. 2004. Optimal predictive model selection. The Annals of Statistics 32: 870–97. [Google Scholar] [CrossRef]
  4. Bergh, Andreas, and Martin Karlsson. 2010. Government size and growth: Accounting for economic freedom and globalization. Public Choice 142: 195–213. [Google Scholar] [CrossRef]
  5. Białowolski, Piotr, Tomasz Kuszewski, and Bartosz Witkowski. 2014. Bayesian averaging of classical estimates in forecasting macroeconomic indicators with application of business survey data. Empirica 41: 53–68. [Google Scholar] [CrossRef]
  6. Błażejowski, Marcin, Jacek Kwiatkowski, and Jakub Gazda. 2019. Sources of Economic Growth: A Global Perspective. Sustainability 11: 275. [Google Scholar] [CrossRef]
  7. Błażejowski, Marcin, Paweł Kufel, and Jacek Kwiatkowski. 2020. Model simplification and variable selection: A Replication of the UK inflation model by Hendry (2001). Journal of Applied Econometrics, forthcoming. [Google Scholar] [CrossRef]
  8. Błażejowski, Marcin, and Jacek Kwiatkowski. 2018. Bayesian Averaging of Classical Estimates (BACE) for gretl. Gretl Working Papers 6. Ancona, Italy: Dipartimento di Scienze Economiche e Sociali, Universita’ Politecnica delle Marche (I). [Google Scholar]
  9. Błażejowski, Marcin, and Jacek Kwiatkowski. 2020. Bayesian Model Averaging for Autoregressive Distributed Lag (BMA_ADL) in gretl. MPRA Paper 98387. Munich: University Library of Munich. [Google Scholar]
  10. Castle, Jennifer L., Jurgen A. Doornik, and David F. Hendry. 2012. Model selection when there are multiple breaks. Journal of Econometrics 169: 239–46. [Google Scholar] [CrossRef]
  11. Clements, Michael P., and David F. Hendry. 2008. Forecasting Annual UK Inflation Using an Econometric Model over 1875–1991. In Frontiers of Economics and Globalization. Bingley: Emerald Publishing, vol. 3, pp. 3–39. [Google Scholar] [CrossRef]
  12. Cottrell, Allin, and Riccardo Lucchetti. 2019a. Gretl + MPI. October. Available online: http://ricardo.ecn.wfu.edu/~cottrell/gretl/gretl-mpi.pdf (accessed on 27 April 2020).
  13. Cottrell, Allin, and Riccardo Lucchetti. 2019b. A Hansl Primer. December. Available online: http://ricardo.ecn.wfu.edu/pub/gretl/manual/PDF/hansl-primer-a4.pdf (accessed on 27 April 2020).
  14. Cottrell, Allin, and Riccardo Lucchetti. 2020. Gretl User’s Guide. January. Available online: http://ricardo.ecn.wfu.edu/pub/gretl/gretl-guide.pdf (accessed on 27 April 2020).
  15. Cuaresma, Jesus Crespo, and Gernot Doppelhofer. 2007. Nonlinearities in cross-country growth regressions: A Bayesian Averaging of Thresholds (BAT) approach. Journal of Macroeconomics 29: 541–54. [Google Scholar] [CrossRef]
  16. Desboulets, Loann David Denis. 2018. A Review on Variable Selection in Regression Analysis. Econometrics 6: 45. [Google Scholar] [CrossRef]
  17. Doan, Thomas, Robert Litterman, and Christopher Sims. 1984. Forecasting and conditional projection using realistic prior distributions. Econometric Reviews 3: 1–100. [Google Scholar] [CrossRef]
  18. Doornik, Jurgen A. 2009. Autometrics. In The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry. Number 9780199237197 in OUP Catalogue. Edited by Jennifer Castle and Neil Shephard. Oxford: Oxford University Press. [Google Scholar] [CrossRef]
  19. Doornik, Jurgen A., and David F. Hendry. 2013. Empirical Econometric Modelling using PcGive: Volume I. London: Timberlake Consultants Press. [Google Scholar]
  20. Drachal, Krzysztof. 2018. Dynamic Model Averaging in Economics and Finance with fDMA: A Package for R. Warsaw: Faculty of Economic Sciences, University of Warsaw. [Google Scholar]
  21. Ericsson, Neil R., and Steven B. Kamin. 2009. Constructive Data Mining: Modelling Argentine Broad Money Demand. In The Methodology and Practice of Econometrics. Oxford: Oxford University Press, pp. 412–40. [Google Scholar] [CrossRef]
  22. Feldkircher, Martin. 2012. Forecast combination and Bayesian model averaging: A prior sensitivity analysis. Journal of Forecasting 31: 361–76. [Google Scholar] [CrossRef]
  23. Fernández, Carmen, Eduardo Ley, and Mark F. J. Steel. 2001a. Benchmark Priors for Bayesian Model Averaging. Journal of Econometrics 100: 381–427. [Google Scholar] [CrossRef]
  24. Fernández, Carmen, Eduardo Ley, and Mark F. J. Steel. 2001b. Model uncertainty in cross-country growth regressions. Journal of Applied Econometrics 16: 563–76. [Google Scholar] [CrossRef]
  25. Fragoso, Tiago M., Wesley Bertoli, and Francisco Louzada. 2018. Bayesian Model Averaging: A Systematic Review and Conceptual Classification. International Statistical Review 86: 1–28. [Google Scholar] [CrossRef]
  26. Hendry, David F. 1995. Dynamic Econometrics. Oxford: Oxford University Press. [Google Scholar] [CrossRef]
  27. Hendry, David F. 2001. Modelling UK Inflation, 1875–1991. Journal of Applied Econometrics 16: 255–75. [Google Scholar] [CrossRef]
  28. Hendry, David F. 2011. Revisiting UK consumers’ expenditure: Cointegration, breaks and robust forecasts. Applied Financial Economics 21: 19–32. [Google Scholar] [CrossRef]
  29. Hendry, David F. 2015. Introductory Macro-Econometrics: A New Approach. London: Timberlake Consultants. [Google Scholar]
  30. Hendry, David F., and Jurgen A. Doornik. 2014. Empirical Model Discovery and Theory Evaluation: Automatic Selection Methods in Econometrics. Cambridge: MIT Press. [Google Scholar]
  31. Hendry, David F., and Neil R. Ericsson. 1991. Modeling the demand for narrow money in the United Kingdom and the United States. European Economic Review 35: 833–81. [Google Scholar] [CrossRef]
  32. Hendry, David F., Massimiliano Marcellino, and Grayham E. Mizon. 2008. Special issue on encompassing. Oxford Bulletin of Economics and Statistics 70: 711–938. [Google Scholar] [CrossRef]
  33. Hendry, David F., and Bent Nielsen. 2012. Econometric Modeling: A Likelihood Approach. Princeton: Princeton University Press. [Google Scholar]
  34. Hoeting, Jennifer A., David Madigan, Adrian E. Raftery, and Chris T. Volinsky. 1999. Bayesian model averaging: A tutorial. Statistical Science 14: 382–401. [Google Scholar] [CrossRef]
  35. Jones, Garett, and W. Joel Schneider. 2006. Intelligence, human capital, and economic growth: A Bayesian Averaging of Classical Estimates (BACE) approach. Journal of Economic Growth 11: 71–93. [Google Scholar] [CrossRef]
  36. Kamarudin, Nur Azulia, and Suzilah Ismail. 2016. Model selection approaches of water quality index data. Global Journal of Pure and Applied Mathematics 12: 1821–29. [Google Scholar]
  37. Koop, Gary. 2003. Bayesian Econometrics. Chichester: John Wiley & Sons Ltd. [Google Scholar]
  38. Koop, Gary, Dale J. Poirier, and Justin L. Tobias. 2007. Bayesian Econometric Methods. New York: Cambridge University Press. [Google Scholar]
  39. Krolzig, Hans-Martin, and David F. Hendry. 2001. Computer automation of general-to-specific model selection procedures. Journal of Economic Dynamics and Control 25: 831–66. [Google Scholar] [CrossRef]
  40. Leamer, Edward. 1978. Specification Searches. Hoboken: John Wiley & Sons. [Google Scholar]
  41. Ley, Eduardo, and Mark F. J. Steel. 2009. On the effect of prior assumptions in Bayesian model averaging with applications to growth regression. Journal of Applied Econometrics 24: 651–74. [Google Scholar] [CrossRef]
  42. Mapa, Dennis S., and Kristine Joy S. Briones. 2007. Robustness procedures in economic growth regression models. Philippine Review of Economics 44: 71–84. [Google Scholar]
  43. Marczak, Martyna, and Tommaso Proietti. 2016. Outlier detection in structural time series models: The indicator saturation approach. International Journal of Forecasting 32: 180–202. [Google Scholar] [CrossRef]
  44. Mitchell, Toby J., and John. J. Beauchamp. 1988. Bayesian variable selection in linear regression. Journal of the American Statistical Association 83: 1023–32. [Google Scholar] [CrossRef]
  45. Osiewalski, Jacek, and Mark F. J. Steel. 1993. Una perspectiva bayesiana en selección de modelos. Cuadernos Económicos de ICE 55: 327–51. [Google Scholar]
  46. Raftery, Adrian E., Miroslav Kárný, and Pavel Ettler. 2010. Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill. Technometrics 52: 52–66. [Google Scholar] [CrossRef]
  47. Raftery, Adrian E., David Madigan, and Jennifer A. Hoeting. 1997. Bayesian model averaging for linear regression models. Journal of the American Statistical Association 92: 179–91. [Google Scholar] [CrossRef]
  48. Sala-i-Martin, Xavier, Gernot Doppelhofer, and Ronald I. Miller. 2004. Determinants of Long-Term Growth: A Bayesian Averaging of Classical Estimates (BACE) Approach. American Economic Review 94: 813–35. [Google Scholar] [CrossRef]
  49. Simo-Kengne, Beatrice D. 2016. What Explains the Recent Growth Performance in Sub-Saharan Africa? Results from a Bayesian Averaging of Classical Estimates (BACE) Approach. ERSA Working Paper. Cape Town, South Africa: Economic Research Southern Africa. [Google Scholar]
  50. Steel, Mark F. J. 2019. Model Averaging and Its Use in Economics. Journal of Economic Literature. forthcoming. [Google Scholar]
  51. Theil, Henri. 1966. Applied Economic Forecasting. Amsterdam: North-Holland. [Google Scholar]
  52. van Dijk, Dick. 2004. Forecasting US Inflation Using Model Averaging. In Econometric Society 2004 Australasian Meetings. Cleveland: Econometric Society. [Google Scholar]
  53. Wasserman, Larry. 2000. Bayesian model selection and model averaging. Journal of Mathematical Psychology 44: 92–107. [Google Scholar] [CrossRef] [PubMed]
  54. Zellner, Arnold. 1971. An Introduction to Bayesian Inference in Econometrics. New York: John Wiley & Sons. [Google Scholar]
  55. Zellner, Arnold. 1986. On Assessing Prior Distributions and Bayesian Regression Analysis with g-Prior Distributions. In Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti. Edited by Prem K. Goel and Arnold Zellner. Amsterdam: North-Holland. [Google Scholar]
1.
BACE is implicitly based on a fixed Zellner g-prior, whereas in the BMA framework the g-prior can be set explicitly.
2.
The BACE 2.0 package is available at http://ricardo.ecn.wfu.edu/gretl/cgi-bin/gretldata.cgi?opt=SHOW_FUNCS and was developed by co-authors (see Błażejowski and Kwiatkowski 2018).
3.
Gretl is an open-source software for econometric analysis and is available at http://gretl.sf.net.
4.
The BMA_ADL package for gretl is available in Supplementary Materials along with scripts to replicate all analysis.
5.
MPI is a standard that supports running a given program simultaneously on several CPU cores, so it enables a very flexible type of parallelism for Monte Carlo integration (see Cottrell and Lucchetti 2019a, 2020).
6.
We used gretl version 2019d-git and PcGive version 14.2 with Ox Professional version 7.20 on a PC machine running under Debian GNU/Linux 64 bits.
7.
The exogeneity of the variables used in the UKM1 model is discussed in (Hendry and Nielsen 2012, pp. 266–67; Hendry 1995, pp. 605–6; Hendry 2015, pp. 127–33), and the results show that modeling the demand for narrow money in the UK as a single equation is, in general, valid.
8.
9.
The authors understand ‘reduction’ as a structured path of eliminating insignificant variables based on t-statistics, together with pre-search analysis and encompassing tests.
10.
All series are freely available from the Journal of Applied Econometrics Data Archive at http://qed.econ.queensu.ca/jae/2001-v16.3/hendry. The exogeneity of the variables used in this model is discussed in (Hendry 2001, p. 261; Hendry 2015, p. 150).
11.
The full replication of this model using the BACE approach, together with a detailed discussion on variable selection strategy and discovering the reduction path, is presented in Błażejowski et al. (2020).
12.
All computations were performed on the so-called haavelmo machine (located at the Dipartimento di Scienze Economiche e Sociali (DiSES), Ancona, Italy), which consists of 20 hyper-threaded Intel® Xeon® E5-2640 v4 CPUs @ 2.40 GHz with 256 GB of operational memory, running 64-bit Debian GNU/Linux.
Figure 1. Time-series used in the model for M1 money demand in the UK (UKM1).
Figure 2. Actual values and forecasts of the logs of M1 for the period from 1985:3 to 1989:2.
Figure 3. Time-series used in the long-term UK inflation model.
Figure 4. Actual and forecast values (expressed in logs) of differences in UK inflation for the period 1982–1991.
Table 1. Bayesian averaging of classical estimates (BACE), Bayesian model averaging (BMA), and Autometrics estimates for narrow money demand model (28).
Variable | BACE: PIP / Avg. Mean / Avg. Std. Dev. | BMA (restricted): PIP / Avg. Mean / Avg. Std. Dev. | BMA (unrestricted): PIP / Avg. Mean / Avg. Std. Dev. | Autometrics: Coeff. / Std. Error
$m_{t-1}$ | 1.0000 / 0.7656 / 0.1233 | 1.0000 / 0.8334 / 0.0963 | 1.0000 / 0.8286 / 0.1003 | 0.8710 / 0.0221
$m_{t-2}$ | 0.3590 / 0.0712 / 0.1186 | 0.1991 / 0.0357 / 0.0858 | 0.2138 / 0.0389 / 0.0894 | -
$m_{t-3}$ | 0.1511 / -0.0101 / 0.0594 | 0.0565 / -0.0013 / 0.0284 | 0.0663 / -0.0020 / 0.0309 | -
$m_{t-4}$ | 0.4107 / 0.0659 / 0.1003 | 0.1463 / 0.0170 / 0.0539 | 0.1759 / 0.0212 / 0.0601 | -
$p_t$ | 0.6676 / 0.1590 / 0.1624 | 0.6164 / 0.1026 / 0.1164 | 0.6485 / 0.1129 / 0.1217 | 0.1140 / 0.0163
$p_{t-1}$ | 0.3773 / 0.0804 / 0.2056 | 0.3521 / 0.0593 / 0.1490 | 0.3225 / 0.0545 / 0.1487 | -
$p_{t-2}$ | 0.2991 / -0.0623 / 0.1754 | 0.1928 / -0.0266 / 0.1236 | 0.2009 / -0.0296 / 0.1285 | -
$p_{t-3}$ | 0.2995 / -0.0559 / 0.1274 | 0.1643 / -0.0231 / 0.0823 | 0.1718 / -0.0249 / 0.0869 | -
$p_{t-4}$ | 0.2138 / -0.0200 / 0.0757 | 0.1085 / -0.0081 / 0.0442 | 0.1161 / -0.0099 / 0.0488 | -
$y_t$ | 0.2289 / 0.0182 / 0.0582 | 0.2021 / 0.0197 / 0.0523 | 0.2244 / 0.0212 / 0.0548 | -
$y_{t-1}$ | 0.6495 / 0.1174 / 0.1198 | 0.5603 / 0.0866 / 0.0966 | 0.5449 / 0.0839 / 0.0965 | 0.1272 / 0.0203
$y_{t-2}$ | 0.2425 / -0.0285 / 0.0896 | 0.1373 / -0.0093 / 0.0609 | 0.1447 / -0.0075 / 0.0609 | -
$y_{t-3}$ | 0.1693 / -0.0048 / 0.0538 | 0.1127 / 0.0040 / 0.0382 | 0.1019 / 0.0031 / 0.0370 | -
$y_{t-4}$ | 0.2150 / 0.0207 / 0.0570 | 0.2164 / 0.0229 / 0.0540 | 0.2061 / 0.0217 / 0.0530 | -
$Rn_t$ | 0.9980 / -0.5192 / 0.1127 | 0.9975 / -0.5063 / 0.0945 | 0.9968 / -0.5099 / 0.0965 | -0.5053 / 0.0666
$Rn_{t-1}$ | 0.2530 / -0.0625 / 0.1407 | 0.1018 / -0.0191 / 0.0829 | 0.1044 / -0.0208 / 0.0856 | -
$Rn_{t-2}$ | 0.2798 / -0.0669 / 0.1416 | 0.1037 / -0.0191 / 0.0782 | 0.1107 / -0.0227 / 0.0870 | -
$Rn_{t-3}$ | 0.1204 / 0.0045 / 0.0513 | 0.0508 / 0.0015 / 0.0281 | 0.0623 / 0.0021 / 0.0323 | -
$Rn_{t-4}$ | 0.1149 / -0.0031 / 0.0384 | 0.0605 / -0.0034 / 0.0278 | 0.0528 / -0.0032 / 0.0274 | -
const | 0.2435 / -0.1639 / 0.3790 | 0.1435 / -0.1022 / 0.3150 | 0.1482 / -0.1042 / 0.3171 | -
BMA (restricted) denotes the variant with stationarity restrictions imposed on the autoregressive parameters, while BMA (unrestricted) denotes the variant without these restrictions.
Table 2. BACE posterior probabilities and coefficient estimates for the top 10 models of nominal narrow money demand in the UK.
Model M j M 1 M 2 M 3 M 4 M 5 M 6 M 7 M 8 M 9 M 10
P ( M j y ) 1.53%0.64%0.60%0.59%0.46%0.44%0.38%0.37%0.33%0.32%
m t 1 0.87100.86220.66910.72240.67600.87260.86890.89730.70560.7169
R n t −0.5057−0.4758−0.5634−0.5531−0.5807−0.4843−0.4948−0.5489−0.5749−0.6301
p t 0.1140 0.33330.12070.39830.11260.11550.22080.09830.2786
y t 1 0.12720.13530.12750.13330.1329 0.26990.10290.17650.0996
m t 4 0.2070 0.1941
p t 1 0.1198
m t 2 0.1431 0.17610.1853
p t 3 −0.2071
p t 2 −0.2685 −0.1251 −0.1826
R n t 2 −0.3563 −0.3143
const −0.6718
y t 2 −0.1408
y t 0.1255
Table 3. BMA (with stationary restrictions) posterior probabilities and coefficient estimates for the top 10 models of nominal narrow money demand in the UK.
Model M j M 1 M 2 M 3 M 4 M 5 M 6 M 7 M 8 M 9 M 10
P ( M j y ) 8.09%3.55%2.82%2.06%1.35%1.30%1.19%1.18%1.07%0.91%
m t 1 0.87100.86200.87250.89160.72280.86870.86450.88420.88610.8546
R n t −0.5059−0.4756−0.4843−0.4912−0.5527−0.4955−0.4530−0.5132−0.4635−0.4463
p t 0.11410.13550.11260.09860.12080.1157 0.0961 0.1423
y t 1 0.1273 0.13340.2695 0.1581
p t 1 0.1200 0.1178 0.1020
m t 2 0.1425
p t 2 0.1249
const −0.4988
y t 2 -0.1402
y t 0.1256 0.1329
y t 4 0.1083 0.1132
Table 4. BMA (without stationary restrictions) posterior probabilities and coefficient estimates for the top 10 models of nominal narrow money demand in the UK.
Model M j M 1 M 2 M 3 M 4 M 5 M 6 M 7 M 8 M 9 M 10
P ( M j y ) 7.06%3.28%3.14%2.18%1.42%1.16%0.97%0.95%0.88%0.87%
m t 1 0.87100.87250.86210.89140.72290.88780.88620.89910.88340.8687
R n t −0.5051−0.4844−0.4763−0.4925−0.5528−0.4995−0.4630−0.5396−0.4997−0.4948
p t 0.11400.1126 0.09870.12060.1018 0.17740.10510.1157
y t 1 0.1273 0.1354 0.1332 0.1012 0.2710
p t 1 0.1199 0.1019
y t 0.1256
m t 2 0.1427
y t 2 0.1158−0.1418
y t 3 0.1118
y t 4 0.1085 0.1131
p t 3 −0.0831
Table 5. Forecasting results of m in the UK based on BACE, BMA, Autometrics, and median probability models.
Date | Actual | BACE: Fcast. / SE | BMA (restricted): Fcast. / SE | BMA (unrestricted): Fcast. / SE | Autometrics: Fcast. / SE | Median BACE: Fcast. / SE | Median BMA (restricted): Fcast. / SE | Median BMA (unrestricted): Fcast. / SE
1985:3 | 10.966 | 10.971 / 0.0167 | 10.969 / 0.0159 | 10.969 / 0.0160 | 10.967 / 0.0140 | 10.967 / 0.0140 | 10.967 / 0.0151 | 10.967 / 0.0151
1985:4 | 11.006 | 11.020 / 0.0185 | 11.017 / 0.0235 | 11.018 / 0.0237 | 11.013 / 0.0186 | 11.013 / 0.0186 | 11.013 / 0.0218 | 11.013 / 0.0218
1986:1 | 11.070 | 11.073 / 0.0212 | 11.066 / 0.0309 | 11.067 / 0.0312 | 11.058 / 0.0214 | 11.058 / 0.0214 | 11.058 / 0.0274 | 11.058 / 0.0274
1986:2 | 11.123 | 11.127 / 0.0247 | 11.116 / 0.0383 | 11.118 / 0.0387 | 11.103 / 0.0233 | 11.103 / 0.0233 | 11.103 / 0.0326 | 11.103 / 0.0326
1986:3 | 11.186 | 11.174 / 0.0288 | 11.160 / 0.0456 | 11.162 / 0.0464 | 11.143 / 0.0247 | 11.143 / 0.0247 | 11.143 / 0.0372 | 11.143 / 0.0372
1986:4 | 11.216 | 11.218 / 0.0340 | 11.199 / 0.0533 | 11.202 / 0.0546 | 11.178 / 0.0256 | 11.178 / 0.0256 | 11.178 / 0.0412 | 11.178 / 0.0412
1987:1 | 11.281 | 11.265 / 0.0403 | 11.241 / 0.0620 | 11.245 / 0.0638 | 11.215 / 0.0264 | 11.215 / 0.0264 | 11.216 / 0.0452 | 11.216 / 0.0452
1987:2 | 11.340 | 11.311 / 0.0468 | 11.282 / 0.0707 | 11.287 / 0.0732 | 11.250 / 0.0269 | 11.250 / 0.0269 | 11.251 / 0.0491 | 11.251 / 0.0491
1987:3 | 11.377 | 11.351 / 0.0534 | 11.318 / 0.0790 | 11.323 / 0.0823 | 11.280 / 0.0273 | 11.280 / 0.0273 | 11.282 / 0.0526 | 11.282 / 0.0526
1987:4 | 11.421 | 11.398 / 0.0600 | 11.359 / 0.0875 | 11.365 / 0.0917 | 11.316 / 0.0276 | 11.316 / 0.0276 | 11.318 / 0.0559 | 11.318 / 0.0559
1988:1 | 11.471 | 11.438 / 0.0662 | 11.395 / 0.0957 | 11.402 / 0.1009 | 11.347 / 0.0278 | 11.347 / 0.0278 | 11.350 / 0.0591 | 11.350 / 0.0591
1988:2 | 11.512 | 11.477 / 0.0730 | 11.431 / 0.1041 | 11.438 / 0.1104 | 11.377 / 0.0280 | 11.377 / 0.0280 | 11.380 / 0.0619 | 11.380 / 0.0619
1988:3 | 11.538 | 11.507 / 0.0801 | 11.457 / 0.1124 | 11.465 / 0.1198 | 11.398 / 0.0281 | 11.398 / 0.0281 | 11.401 / 0.0642 | 11.401 / 0.0642
1988:4 | 11.555 | 11.539 / 0.0872 | 11.484 / 0.1208 | 11.493 / 0.1294 | 11.419 / 0.0282 | 11.419 / 0.0282 | 11.423 / 0.0661 | 11.423 / 0.0661
1989:1 | 11.602 | 11.571 / 0.0937 | 11.512 / 0.1287 | 11.522 / 0.1387 | 11.442 / 0.0283 | 11.442 / 0.0283 | 11.446 / 0.0680 | 11.446 / 0.0680
1989:2 | 11.640 | 11.600 / 0.1003 | 11.538 / 0.1364 | 11.549 / 0.1478 | 11.463 / 0.0283 | 11.463 / 0.0283 | 11.468 / 0.0697 | 11.468 / 0.0697
RMSE | | 0.0224 | 0.0592 | 0.05317 | 0.1018 | 0.1018 | 0.0995 | 0.0995
MAPE | | 0.17% | 0.43% | 0.39% | 0.74% | 0.74% | 0.72% | 0.72%
UM (bias) | | 49.5% | 64.4% | 63.6% | 67.3% | 67.3% | 67.4% | 67.4%
UR (regression) | | 40.0% | 34.0% | 34.4% | 31.9% | 31.8% | 31.9% | 31.8%
UD (disturbance) | | 10.5% | 1.6% | 2.0% | 0.8% | 0.8% | 0.8% | 0.8%
BMA (restricted) denotes the variant with stationarity restrictions imposed on the autoregressive parameters, while BMA (unrestricted) denotes the variant without these restrictions.
Table 6. List of variables and their definitions used in the long-term UK inflation model.
Variable | Definition
$Y_t$ | real GDP, £ million, 1985 prices
$P_t$ | implicit deflator of GDP (1985 = 1)
$M_t$ | nominal broad money, £ million
$R_{s,t}$ | three-month treasury bill rate, fraction p.a.
$R_{l,t}$ | long-term bond interest rate, fraction p.a.
$R_{n,t}$ | opportunity cost of money measure
$N_t$ | nominal National Debt, £ million
$U_t$ | unemployment
$Wpop_t$ | working population
$U_{r,t}$ | unemployment rate, fraction
$L_t$ | employment
$K_t$ | gross capital stock
$W_t$ | wages
$H_t$ | normal hours (from 1920)
$P_{e,t}$ | world prices (1985 = 1)
$E_t$ | annual-average effective exchange rate
$P_{nni,t}$ | deflator of net national income (1985 = 1)
$P_{cpi,t}$ | consumer price index (1985 = 1)
$P_{o,t}$ | commodity price index, $
$m^d_t$ | money excess demand
$y^d_t$ | GDP excess demand
$S_t$ | short–long spread
$n^d_t$ | excess demand for debt
$e_{r,t}$ | real exchange rate
$\pi^*_t$ | profit markup
$U^d_t$ | excess demand for labor
$p_{o,t}$ | commodity prices in Sterling
$C_t$ | nominal unit labor costs
Table 7. BACE, BMA, and Autometrics estimates for the long-term UK inflation model (30).
Variable | BACE: PIP / Avg. Mean / Avg. Std. Dev. | BMA (restricted): PIP / Avg. Mean / Avg. Std. Dev. | BMA (unrestricted): PIP / Avg. Mean / Avg. Std. Dev. | Autometrics: Coeff. / Std. Error
$I_{d,t}$ | 1.00 / 0.0380 / 0.0015 | 1.00 / 0.0379 / 0.0014 | 1.00 / 0.0379 / 0.0014 | 0.0377 / 0.0015
$\Delta p_{e,t}$ | 1.00 / 0.2612 / 0.0248 | 1.00 / 0.2617 / 0.0236 | 1.00 / 0.2617 / 0.0236 | 0.2608 / 0.0247
$S_{t-1}$ | 1.00 / -0.9786 / 0.1060 | 1.00 / -0.9696 / 0.1024 | 1.00 / -0.9696 / 0.1024 | -0.9234 / 0.0997
$y^d_{t-1}$ | 1.00 / 0.1898 / 0.0381 | 1.00 / 0.1875 / 0.0352 | 1.00 / 0.1875 / 0.0352 | 0.1872 / 0.0330
$\Delta p_{t-1}$ | 1.00 / 0.2818 / 0.0353 | 1.00 / 0.2800 / 0.0322 | 1.00 / 0.2800 / 0.0322 | 0.2638 / 0.0264
$\pi^*_{t-1}$ | 0.99 / -0.1674 / 0.0295 | 1.00 / -0.1684 / 0.0281 | 1.00 / -0.1684 / 0.0281 | -0.1778 / 0.0273
$\Delta R_{s,t-1}$ | 0.99 / 0.6896 / 0.1273 | 0.99 / 0.6903 / 0.1199 | 0.99 / 0.6903 / 0.1199 | 0.6723 / 0.1182
$\Delta p_{o,t-1}$ | 0.99 / 0.0492 / 0.0111 | 0.99 / 0.0489 / 0.0106 | 0.99 / 0.0490 / 0.0106 | 0.0487 / 0.0110
$\Delta m_{t-1}$ | 0.99 / 0.1531 / 0.0325 | 0.99 / 0.1575 / 0.0309 | 0.99 / 0.1575 / 0.0309 | 0.1732 / 0.0293
$U^d_{t-1}$ | 0.71 / -0.0548 / 0.0443 | 0.60 / -0.0472 / 0.0449 | 0.60 / -0.0472 / 0.0449 | -
$n^d_{t-1}$ | 0.20 / 0.0006 / 0.0016 | 0.12 / 0.0004 / 0.0013 | 0.12 / 0.0004 / 0.0013 | -
$R_{l,t-1}$ | 0.15 / 0.0060 / 0.0217 | 0.09 / 0.0038 / 0.0170 | 0.09 / 0.0039 / 0.0170 | -
$\Delta p_{e,t-1}$ | 0.12 / 0.0030 / 0.0134 | 0.07 / 0.0017 / 0.0099 | 0.07 / 0.0018 / 0.0100 | -
$\Delta n_{t-1}$ | 0.12 / 0.0014 / 0.0063 | 0.06 / 0.0007 / 0.0044 | 0.06 / 0.0007 / 0.0044 | -
const | 0.11 / 0.0001 / 0.0007 | 0.07 / 0.0001 / 0.0005 | 0.07 / 0.0001 / 0.0005 | -
$\Delta w_{t-1}$ | 0.10 / -0.0001 / 0.0132 | 0.05 / 0.0001 / 0.0083 | 0.05 / 0.0001 / 0.0083 | -
$\Delta U_{r,t-1}$ | 0.10 / -0.0009 / 0.0230 | 0.05 / -0.0001 / 0.0164 | 0.05 / -0.0001 / 0.0164 | -
$m^d_{t-1}$ | 0.10 / -0.0001 / 0.0043 | 0.05 / -0.0001 / 0.0028 | 0.05 / 0.0000 / 0.0029 | -
$\Delta c_{t-1}$ | 0.09 / 0.0003 / 0.0104 | 0.05 / 0.0001 / 0.0066 | 0.05 / 0.0001 / 0.0066 | -
$\Delta R_{l,t-1}$ | 0.09 / 0.0012 / 0.0839 | 0.05 / 0.0017 / 0.0591 | 0.05 / 0.0017 / 0.0590 | -
The first nine variables, from $I_{d,t}$ to $\Delta m_{t-1}$, are those retained in Hendry’s model (31).
BMA (restricted) denotes the variant with stationarity restrictions imposed on the autoregressive parameters, while BMA (unrestricted) denotes the variant without these restrictions.
Table 8. BACE posterior probabilities and coefficient estimates for the top 10 models of UK inflation.
Model M j M 1 M 2 M 3 M 4 M 5 M 6 M 7 M 8 M 9 M 10
P ( M j y ) 21.93%6.42%4.63%3.28%3.06%3.01%2.80%2.57%2.41%2.41%
I d . t 0.03820.03770.03810.03830.03790.03780.03790.03800.03820.0382
Δ p e . t 0.26390.26080.26190.26100.25810.25830.26230.26350.26340.2635
S t 1 −0.9935−0.9234−1.0122−1.0002−0.9979−0.9607−0.9866−0.9896−0.9873−0.9946
y t 1 d 0.17880.18720.20690.17680.18580.22670.18290.18560.17850.1790
Δ p t 1 0.29240.26380.29220.28370.28350.26760.28850.29300.29470.2952
π t 1 * −0.1618−0.1778−0.1701−0.1642−0.1690−0.1875−0.1545−0.1619−0.1627−0.1620
Δ R s . t 1 0.71490.67230.66050.69370.72480.59960.71650.71460.69980.7171
Δ p o . t 1 0.04820.04870.05130.04790.04850.05320.04870.04880.04780.0480
Δ m t 1 0.15550.17320.14680.14850.14820.15790.14700.14650.15470.1562
U t 1 d −0.0790 −0.0710−0.0806−0.0814 −0.0716−0.0744−0.0833−0.0794
n t 1 d 0.0026 0.0038
Δ p e . t 1 0.0254
Δ n t 1 0.0123
R l . t 1 0.0253
const 0.0009
Δ U r . t 1 −0.0262
Δ w t 1 −0.0028
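The posterior model probabilities P(M_j|y) reported above can be approximated from per-model OLS output. The following is an illustrative Python sketch (not the paper's gretl code), assuming the BIC-style weighting of Sala-i-Martin et al. with equal prior model probabilities:

```python
# Illustrative BACE model weights from OLS fits:
# P(M_j | y) ∝ P(M_j) * n^(-k_j/2) * SSE_j^(-n/2), normalized over models.
import numpy as np

def bace_weights(sse, k, n, prior=None):
    """sse: residual sums of squares; k: numbers of regressors; n: sample size."""
    sse = np.asarray(sse, dtype=float)
    k = np.asarray(k, dtype=float)
    prior = np.ones_like(sse) if prior is None else np.asarray(prior, float)
    # work in logs to avoid underflow for large n
    logw = np.log(prior) - 0.5 * k * np.log(n) - 0.5 * n * np.log(sse)
    logw -= logw.max()
    w = np.exp(logw)
    return w / w.sum()

# Toy example with three hypothetical candidate models
w = bace_weights(sse=[0.021, 0.024, 0.030], k=[5, 6, 7], n=100)
print(w.round(3))  # weights sum to 1
```

Note that the better-fitting, more parsimonious model receives almost all of the weight here, because the degrees-of-freedom penalty grows with log(n).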
Table 9. BMA (with stationarity restrictions) posterior probabilities and coefficient estimates for the top 10 models of UK inflation.
Model M_j | M_1 | M_2 | M_3 | M_4 | M_5 | M_6 | M_7 | M_8 | M_9 | M_10
P(M_j|y) | 32.66% | 17.63% | 3.74% | 3.24% | 2.95% | 2.80% | 2.39% | 1.94% | 1.92% | 1.90%
I_d,t | 0.0381 | 0.0377 | 0.0378 | 0.0381 | 0.0373 | 0.0383 | 0.0379 | 0.0379 | 0.0375 | 0.0380
Δp_e,t | 0.2638 | 0.2609 | 0.2585 | 0.2616 | 0.2583 | 0.2610 | 0.2580 | 0.2621 | 0.2604 | 0.2636
S_t−1 | −0.9948 | −0.9224 | −0.9602 | −1.0122 | −0.9228 | −1.0017 | −0.9973 | −0.9864 | −0.9241 | −0.9898
y^d_t−1 | 0.1787 | 0.1874 | 0.2264 | 0.2067 | 0.1942 | 0.1767 | 0.1860 | 0.1825 | 0.2002 | 0.1855
Δp_t−1 | 0.2927 | 0.2638 | 0.2676 | 0.2925 | 0.2614 | 0.2838 | 0.2831 | 0.2888 | 0.2687 | 0.2930
π*_t−1 | −0.1615 | −0.1782 | −0.1875 | −0.1699 | −0.1595 | −0.1639 | −0.1690 | −0.1544 | −0.1758 | −0.1619
ΔR_s,t−1 | 0.7163 | 0.6718 | 0.5991 | 0.6610 | 0.6844 | 0.6944 | 0.7237 | 0.7165 | 0.6779 | 0.7140
Δp_o,t−1 | 0.0482 | 0.0488 | 0.0534 | 0.0514 | 0.0496 | 0.0478 | 0.0486 | 0.0487 | 0.0499 | 0.0488
Δm_t−1 | 0.1554 | 0.1730 | 0.1576 | 0.1466 | 0.1517 | 0.1484 | 0.1484 | 0.1471 | 0.1515 | 0.1466
U^d_t−1 | −0.0793 | −0.0711 | −0.0807 | −0.0812 | −0.0718 | −0.0746
n^d_t−1 | 0.0038 | 0.0026
Δp_e,t−1 | 0.0254
Δn_t−1 | 0.0124
R_l,t−1 | 0.0527 | 0.0251
const | 0.0018 | 0.0008
For variables that enter only some of the top 10 models, only the estimates of the including models are listed.
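The "with stationarity restrictions" variant retains only models whose autoregressive part is stationary. A hedged Python sketch of such a check (illustrative only, assuming the restriction is enforced through the roots of the AR lag polynomial):

```python
# Stationarity check for an AR(p) lag polynomial 1 - phi_1*z - ... - phi_p*z^p:
# the process is stationary iff all roots lie outside the unit circle.
import numpy as np

def is_stationary(phi):
    """phi: [phi_1, ..., phi_p] of y_t = phi_1*y_{t-1} + ... + phi_p*y_{t-p} + e_t."""
    phi = np.atleast_1d(np.asarray(phi, float))
    # np.roots expects coefficients ordered from highest degree down
    coeffs = np.concatenate([-phi[::-1], [1.0]])
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_stationary(0.2818))       # AR(1) with the Δp_t−1 scale seen above -> True
print(is_stationary(1.05))         # explosive AR(1) -> False
print(is_stationary([0.5, -0.3]))  # an AR(2) example
```

In the AR(1) case this reduces to the familiar |φ| < 1 condition.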
Table 10. BMA (without stationarity restrictions) posterior probabilities and coefficient estimates for the top 10 models of UK inflation.
Model M_j | M_1 | M_2 | M_3 | M_4 | M_5 | M_6 | M_7 | M_8 | M_9 | M_10
P(M_j|y) | 32.66% | 17.63% | 3.74% | 3.24% | 2.95% | 2.80% | 2.39% | 1.94% | 1.92% | 1.90%
I_d,t | 0.0381 | 0.0377 | 0.0378 | 0.0381 | 0.0373 | 0.0383 | 0.0379 | 0.0379 | 0.0375 | 0.0380
Δp_e,t | 0.2638 | 0.2609 | 0.2585 | 0.2616 | 0.2583 | 0.2610 | 0.2580 | 0.2621 | 0.2604 | 0.2636
S_t−1 | −0.9948 | −0.9224 | −0.9602 | −1.0122 | −0.9228 | −1.0017 | −0.9973 | −0.9864 | −0.9241 | −0.9898
y^d_t−1 | 0.1787 | 0.1874 | 0.2264 | 0.2067 | 0.1942 | 0.1767 | 0.1860 | 0.1825 | 0.2002 | 0.1855
Δp_t−1 | 0.2927 | 0.2638 | 0.2676 | 0.2925 | 0.2614 | 0.2838 | 0.2831 | 0.2888 | 0.2687 | 0.2930
π*_t−1 | −0.1615 | −0.1782 | −0.1875 | −0.1699 | −0.1595 | −0.1639 | −0.1690 | −0.1544 | −0.1758 | −0.1619
ΔR_s,t−1 | 0.7163 | 0.6718 | 0.5991 | 0.6610 | 0.6844 | 0.6944 | 0.7237 | 0.7165 | 0.6779 | 0.7140
Δp_o,t−1 | 0.0482 | 0.0488 | 0.0534 | 0.0514 | 0.0496 | 0.0478 | 0.0486 | 0.0487 | 0.0499 | 0.0488
Δm_t−1 | 0.1554 | 0.1730 | 0.1576 | 0.1466 | 0.1517 | 0.1484 | 0.1484 | 0.1471 | 0.1515 | 0.1466
U^d_t−1 | −0.0793 | −0.0711 | −0.0807 | −0.0812 | −0.0718 | −0.0746
n^d_t−1 | 0.0038 | 0.0026
Δp_e,t−1 | 0.0254
Δn_t−1 | 0.0124
R_l,t−1 | 0.0527 | 0.0251
const | 0.0018 | 0.0008
For variables that enter only some of the top 10 models, only the estimates of the including models are listed.
Table 11. BACE, BMA, Autometrics, and median probability models forecasting results for Δ p t in the UK.
Date | Actual | BACE | BMA (restricted) | BMA (unrestricted) | Autometrics | Median BACE | Median BMA (restricted) | Median BMA (unrestricted)
Each model column reports Fcast./SE.
1982 | 0.0681 | 0.0467/0.0124 | 0.0469/0.0118 | 0.0469/0.0118 | 0.0487/0.0110 | 0.0457/0.0107 | 0.0457/0.0116 | 0.0457/0.0116
1983 | 0.0551 | 0.0432/0.0120 | 0.0438/0.0125 | 0.0438/0.0125 | 0.0474/0.0114 | 0.0412/0.0111 | 0.0413/0.0121 | 0.0413/0.0121
1984 | 0.0527 | 0.0417/0.0126 | 0.0423/0.0132 | 0.0423/0.0132 | 0.0475/0.0114 | 0.0386/0.0112 | 0.0385/0.0125 | 0.0385/0.0125
1985 | 0.0529 | 0.0484/0.0120 | 0.0489/0.0126 | 0.0489/0.0126 | 0.0536/0.0114 | 0.0456/0.0112 | 0.0455/0.0120 | 0.0455/0.0120
1986 | 0.0259 | 0.0349/0.0131 | 0.0357/0.0136 | 0.0357/0.0136 | 0.0419/0.0114 | 0.0314/0.0112 | 0.0313/0.0127 | 0.0313/0.0127
1987 | 0.0495 | 0.0173/0.0142 | 0.0183/0.0151 | 0.0183/0.0151 | 0.0254/0.0114 | 0.0140/0.0112 | 0.0139/0.0142 | 0.0139/0.0142
1988 | 0.0626 | 0.0637/0.0142 | 0.0650/0.0154 | 0.0650/0.0154 | 0.0735/0.0114 | 0.0599/0.0112 | 0.0596/0.0143 | 0.0596/0.0143
1989 | 0.0744 | 0.0740/0.0129 | 0.0749/0.0141 | 0.0749/0.0141 | 0.0821/0.0114 | 0.0710/0.0112 | 0.0707/0.0133 | 0.0707/0.0133
1990 | 0.0769 | 0.0484/0.0132 | 0.0492/0.0141 | 0.0492/0.0141 | 0.0550/0.0114 | 0.0461/0.0112 | 0.0461/0.0136 | 0.0461/0.0136
1991 | 0.0604 | 0.0366/0.0129 | 0.0373/0.0139 | 0.0373/0.0139 | 0.0418/0.0114 | 0.0347/0.0112 | 0.0346/0.0134 | 0.0346/0.0134
RMSE | | 0.0179 | 0.0175 | 0.0175 | 0.0151 | 0.0196 | 0.0197 | 0.0197
MAPE | | 26.06% | 25.85% | 25.85% | 25.06% | 28.31% | 28.41% | 28.41%
UM (bias) | | 54.5% | 51.3% | 51.3% | 21.9% | 65.0% | 65.3% | 65.3%
UR (regression) | | 0.3% | 0.3% | 0.3% | 1.2% | 0.2% | 0.2% | 0.2%
UD (disturbance) | | 45.2% | 48.3% | 48.3% | 76.9% | 34.8% | 34.5% | 34.5%
BMA (restricted) denotes the variant in which stationarity restrictions are imposed on the autoregressive parameters, while BMA (unrestricted) denotes the variant without these restrictions.
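The evaluation measures in Table 11 are standard. Below is a minimal Python sketch of RMSE, MAPE, and Theil's MSE decomposition into bias (UM), regression (UR), and disturbance (UD) proportions, applied to the actual series and the BACE forecasts from the table; the paper's exact decomposition conventions may differ slightly:

```python
# RMSE, MAPE, and Theil's MSE decomposition (bias / regression / disturbance).
import numpy as np

def forecast_diagnostics(actual, fcast):
    a, f = np.asarray(actual, float), np.asarray(fcast, float)
    err = f - a
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / a))            # as a decimal; assumes actual != 0
    r = np.corrcoef(f, a)[0, 1]
    sa, sf = a.std(), f.std()                  # population standard deviations
    um = (f.mean() - a.mean()) ** 2 / mse      # bias proportion
    ur = (sf - r * sa) ** 2 / mse              # regression proportion
    ud = (1 - r ** 2) * sa ** 2 / mse          # disturbance proportion
    return {"RMSE": rmse, "MAPE": mape, "UM": um, "UR": ur, "UD": ud}

# Actual inflation and the BACE forecasts from Table 11 (1982-1991)
actual = [0.0681, 0.0551, 0.0527, 0.0529, 0.0259,
          0.0495, 0.0626, 0.0744, 0.0769, 0.0604]
bace = [0.0467, 0.0432, 0.0417, 0.0484, 0.0349,
        0.0173, 0.0637, 0.0740, 0.0484, 0.0366]
print(forecast_diagnostics(actual, bace))
```

By construction the three proportions UM, UR, and UD sum to one, so they split the mean squared error into interpretable sources.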
Table 12. BACE coefficient estimates of the UKM1 model for different average prior model size assumptions.
Variable | E(Ξ)=K/8 | E(Ξ)=K/5 | E(Ξ)=K/4
Each prior column reports PIP | Avg. Mean | Avg. Std. Dev.
m_t−1 | 1.00 | 0.7695 | 0.1224 | 1.00 | 0.7676 | 0.1224 | 1.00 | 0.7734 | 0.1220
m_t−2 | 0.35 | 0.0687 | 0.1161 | 0.35 | 0.0681 | 0.1157 | 0.34 | 0.0664 | 0.1144
m_t−3 | 0.12 | −0.0074 | 0.0506 | 0.13 | −0.0073 | 0.0512 | 0.13 | −0.0068 | 0.0510
m_t−4 | 0.39 | 0.0607 | 0.0962 | 0.40 | 0.0632 | 0.0976 | 0.38 | 0.0584 | 0.0950
p_t | 0.67 | 0.1562 | 0.1581 | 0.67 | 0.1582 | 0.1607 | 0.66 | 0.1541 | 0.1593
p_t−1 | 0.37 | 0.0783 | 0.2002 | 0.37 | 0.0793 | 0.2028 | 0.37 | 0.0801 | 0.2049
p_t−2 | 0.28 | −0.0598 | 0.1657 | 0.29 | −0.0628 | 0.1711 | 0.31 | −0.0657 | 0.1752
p_t−3 | 0.29 | −0.0548 | 0.1207 | 0.29 | −0.0547 | 0.1219 | 0.27 | −0.0491 | 0.1172
p_t−4 | 0.19 | −0.0178 | 0.0682 | 0.19 | −0.0179 | 0.0693 | 0.19 | −0.0174 | 0.0680
y_t | 0.22 | 0.0180 | 0.0555 | 0.22 | 0.0183 | 0.0561 | 0.23 | 0.0188 | 0.0570
y_t−1 | 0.65 | 0.1155 | 0.1173 | 0.64 | 0.1143 | 0.1171 | 0.64 | 0.1148 | 0.1178
y_t−2 | 0.22 | −0.0269 | 0.0860 | 0.22 | −0.0259 | 0.0856 | 0.23 | −0.0269 | 0.0878
y_t−3 | 0.15 | −0.0031 | 0.0488 | 0.15 | −0.0030 | 0.0498 | 0.15 | −0.0024 | 0.0493
y_t−4 | 0.20 | 0.0195 | 0.0544 | 0.20 | 0.0192 | 0.0540 | 0.20 | 0.0185 | 0.0534
Rn_t | 0.99 | −0.5208 | 0.1111 | 0.99 | −0.5187 | 0.1143 | 0.99 | −0.5195 | 0.1115
Rn_t−1 | 0.23 | −0.0553 | 0.1334 | 0.24 | −0.0601 | 0.1395 | 0.23 | −0.0554 | 0.1334
Rn_t−2 | 0.27 | −0.0650 | 0.1400 | 0.27 | −0.0658 | 0.1406 | 0.26 | −0.0610 | 0.1354
Rn_t−3 | 0.10 | 0.0034 | 0.0435 | 0.10 | 0.0032 | 0.0436 | 0.10 | 0.0040 | 0.0453
Rn_t−4 | 0.10 | −0.0030 | 0.0341 | 0.10 | −0.0026 | 0.0346 | 0.09 | −0.0028 | 0.0340
const | 0.23 | −0.1519 | 0.3652 | 0.22 | −0.1503 | 0.3661 | 0.22 | −0.1487 | 0.3626
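The prior-model-size settings E(Ξ) = K/8, K/5, and K/4 in Tables 12-17 correspond, under an independent-inclusion (binomial) model prior, to prior inclusion probabilities 1/8, 1/5, and 1/4 for each regressor. An illustrative check in Python, assuming K = 20 candidate regressors as in the UKM1 table:

```python
# Binomial model prior: each of K variables enters independently with
# probability theta = E(model size) / K, so a model with k regressors has
# prior probability theta^k * (1 - theta)^(K - k).
from math import comb

def model_prior(k, K, expected_size):
    """Prior probability of one specific model with k regressors."""
    theta = expected_size / K
    return theta ** k * (1 - theta) ** (K - k)

K = 20  # candidate regressors (hypothetical count matching the UKM1 table)
for es in (K / 8, K / 5, K / 4):
    # expected model size recovered as sum over sizes k of k * C(K,k) * prior
    mean_size = sum(k * comb(K, k) * model_prior(k, K, es) for k in range(K + 1))
    print(f"theta={es / K:.3f}, E(size)={mean_size:.2f}")
```

The loop simply confirms that the binomial mean K·θ reproduces the assumed expected model size.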
Table 13. BACE coefficient estimates of UK inflation for different average prior model size assumptions.
Variable | E(Ξ)=K/8 | E(Ξ)=K/5 | E(Ξ)=K/4
Each prior column reports PIP | Avg. Mean | Avg. Std. Dev.
I_d,t | 1.00 | 0.0380 | 0.0015 | 1.00 | 0.0380 | 0.0015 | 1.00 | 0.0380 | 0.0015
Δp_e,t | 1.00 | 0.2612 | 0.0248 | 1.00 | 0.2612 | 0.0248 | 1.00 | 0.2612 | 0.0248
S_t−1 | 1.00 | −0.9786 | 0.1060 | 1.00 | −0.9786 | 0.1060 | 1.00 | −0.9786 | 0.1060
y^d_t−1 | 1.00 | 0.1898 | 0.0381 | 1.00 | 0.1898 | 0.0381 | 1.00 | 0.1898 | 0.0381
Δp_t−1 | 1.00 | 0.2818 | 0.0352 | 1.00 | 0.2818 | 0.0352 | 1.00 | 0.2818 | 0.0352
π*_t−1 | 0.99 | −0.1674 | 0.0295 | 0.99 | −0.1673 | 0.0295 | 1.00 | −0.1674 | 0.0295
ΔR_s,t−1 | 0.99 | 0.6896 | 0.1272 | 0.99 | 0.6897 | 0.1272 | 0.99 | 0.6896 | 0.1273
Δp_o,t−1 | 0.99 | 0.0492 | 0.0111 | 0.99 | 0.0492 | 0.0111 | 0.99 | 0.0492 | 0.0111
Δm_t−1 | 0.99 | 0.1532 | 0.0325 | 0.99 | 0.1532 | 0.0325 | 0.99 | 0.1532 | 0.0325
U^d_t−1 | 0.71 | −0.0549 | 0.0443 | 0.71 | −0.0549 | 0.0443 | 0.71 | −0.0549 | 0.0443
n^d_t−1 | 0.20 | 0.0006 | 0.0016 | 0.20 | 0.0006 | 0.0016 | 0.20 | 0.0006 | 0.0016
R_l,t−1 | 0.15 | 0.0059 | 0.0216 | 0.15 | 0.0059 | 0.0216 | 0.15 | 0.0059 | 0.0217
Δp_e,t−1 | 0.12 | 0.0030 | 0.0133 | 0.12 | 0.0030 | 0.0133 | 0.12 | 0.0030 | 0.0133
Δn_t−1 | 0.12 | 0.0014 | 0.0063 | 0.12 | 0.0014 | 0.0063 | 0.12 | 0.0014 | 0.0063
const | 0.11 | 0.0001 | 0.0007 | 0.11 | 0.0001 | 0.0007 | 0.11 | 0.0001 | 0.0007
Δw_t−1 | 0.10 | −0.0001 | 0.0130 | 0.10 | −0.0001 | 0.0130 | 0.10 | −0.0001 | 0.0130
ΔU_r,t−1 | 0.09 | −0.0009 | 0.0229 | 0.09 | −0.0009 | 0.0229 | 0.10 | −0.0009 | 0.0229
m^d_t−1 | 0.09 | −0.0001 | 0.0042 | 0.10 | −0.0001 | 0.0042 | 0.09 | −0.0001 | 0.0042
Δc_t−1 | 0.09 | 0.0003 | 0.0102 | 0.09 | 0.0003 | 0.0102 | 0.09 | 0.0003 | 0.0103
ΔR_l,t−1 | 0.09 | 0.0011 | 0.0831 | 0.09 | 0.0011 | 0.0832 | 0.09 | 0.0011 | 0.0833
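The "Avg. Mean" and "Avg. Std. Dev." columns pool model-specific results by posterior model probability, with a coefficient counted as zero in models excluding the variable. A hedged Python sketch of the standard pooling formulas, using illustrative toy numbers rather than values from the tables:

```python
# Model-averaged posterior moments of a coefficient:
#   E[b|y]   = sum_j P(M_j|y) * E[b|M_j,y]
#   Var[b|y] = sum_j P(M_j|y) * (Var[b|M_j,y] + E[b|M_j,y]^2) - E[b|y]^2
import numpy as np

def averaged_moments(weights, means, sds):
    """weights: P(M_j|y); means/sds: per-model coefficient mean and std dev
    (use 0.0 for models that exclude the variable)."""
    w = np.asarray(weights, float)
    m = np.asarray(means, float)
    s = np.asarray(sds, float)
    w = w / w.sum()                                  # normalize the weights
    avg_mean = np.sum(w * m)
    avg_var = np.sum(w * (s ** 2 + m ** 2)) - avg_mean ** 2
    return avg_mean, np.sqrt(avg_var)

# Toy example: the variable enters two of three models
mean, sd = averaged_moments([0.5, 0.3, 0.2], [0.10, 0.12, 0.0], [0.02, 0.03, 0.0])
print(round(mean, 4), round(sd, 4))
```

The variance formula adds the between-model spread of the means to the within-model variances, which is why averaged standard deviations can exceed any single model's standard error.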
Table 14. BMA (with stationarity restrictions) coefficient estimates of the UKM1 model for different average prior model size assumptions.
Variable | E(Ξ)=K/8 | E(Ξ)=K/5 | E(Ξ)=K/4
Each prior column reports PIP | Avg. Mean | Avg. Std. Dev.
m_t−1 | 1.00 | 0.8318 | 0.0966 | 1.00 | 0.8292 | 0.0978 | 1.00 | 0.8298 | 0.0980
m_t−2 | 0.20 | 0.0356 | 0.0857 | 0.20 | 0.0362 | 0.0865 | 0.20 | 0.0364 | 0.0863
m_t−3 | 0.06 | −0.0011 | 0.0287 | 0.07 | −0.0014 | 0.0291 | 0.06 | −0.0013 | 0.0287
m_t−4 | 0.15 | 0.0178 | 0.0550 | 0.16 | 0.0196 | 0.0572 | 0.16 | 0.0190 | 0.0570
p_t | 0.64 | 0.1069 | 0.1170 | 0.65 | 0.1090 | 0.1178 | 0.65 | 0.1091 | 0.1184
p_t−1 | 0.33 | 0.0556 | 0.1450 | 0.33 | 0.0553 | 0.1483 | 0.33 | 0.0529 | 0.1455
p_t−2 | 0.20 | −0.0279 | 0.1233 | 0.20 | −0.0286 | 0.1254 | 0.20 | −0.0275 | 0.1245
p_t−3 | 0.15 | −0.0216 | 0.0789 | 0.15 | −0.0223 | 0.0797 | 0.15 | −0.0211 | 0.0783
p_t−4 | 0.11 | −0.0083 | 0.0447 | 0.10 | −0.0082 | 0.0449 | 0.10 | −0.0085 | 0.0442
y_t | 0.18 | 0.0173 | 0.0498 | 0.21 | 0.0199 | 0.0526 | 0.19 | 0.0178 | 0.0505
y_t−1 | 0.60 | 0.0911 | 0.0951 | 0.59 | 0.0906 | 0.0953 | 0.59 | 0.0907 | 0.0952
y_t−2 | 0.13 | −0.0074 | 0.0585 | 0.13 | −0.0072 | 0.0592 | 0.14 | −0.0069 | 0.0594
y_t−3 | 0.12 | 0.0041 | 0.0396 | 0.11 | 0.0034 | 0.0377 | 0.10 | 0.0030 | 0.0363
y_t−4 | 0.19 | 0.0191 | 0.0497 | 0.17 | 0.0176 | 0.0478 | 0.19 | 0.0200 | 0.0505
Rn_t | 1.00 | −0.5090 | 0.0925 | 1.00 | −0.5081 | 0.0944 | 1.00 | −0.5080 | 0.0945
Rn_t−1 | 0.09 | −0.0173 | 0.0758 | 0.10 | −0.0199 | 0.0829 | 0.10 | −0.0192 | 0.0822
Rn_t−2 | 0.11 | −0.0212 | 0.0823 | 0.12 | −0.0220 | 0.0837 | 0.12 | −0.0224 | 0.0846
Rn_t−3 | 0.05 | 0.0012 | 0.0274 | 0.05 | 0.0014 | 0.0276 | 0.05 | 0.0016 | 0.0287
Rn_t−4 | 0.06 | −0.0030 | 0.0267 | 0.06 | −0.0026 | 0.0256 | 0.07 | −0.0033 | 0.0278
const | 0.14 | −0.0985 | 0.3076 | 0.14 | −0.0934 | 0.2985 | 0.14 | −0.0998 | 0.3097
Table 15. BMA (with stationarity restrictions) coefficient estimates of UK inflation for different average prior model size assumptions.
Variable | E(Ξ)=K/8 | E(Ξ)=K/5 | E(Ξ)=K/4
Each prior column reports PIP | Avg. Mean | Avg. Std. Dev.
I_d,t | 1.00 | 0.0379 | 0.0014 | 1.00 | 0.0379 | 0.0014 | 1.00 | 0.0379 | 0.0014
Δp_e,t | 1.00 | 0.2617 | 0.0238 | 1.00 | 0.2617 | 0.0238 | 1.00 | 0.2617 | 0.0238
S_t−1 | 1.00 | −0.9687 | 0.1024 | 1.00 | −0.9686 | 0.1024 | 1.00 | −0.9685 | 0.1024
y^d_t−1 | 1.00 | 0.1877 | 0.0353 | 1.00 | 0.1879 | 0.0354 | 1.00 | 0.1879 | 0.0354
Δp_t−1 | 1.00 | 0.2797 | 0.0324 | 1.00 | 0.2796 | 0.0324 | 1.00 | 0.2794 | 0.0324
π*_t−1 | 1.00 | −0.1685 | 0.0280 | 1.00 | −0.1687 | 0.0281 | 1.00 | −0.1687 | 0.0281
ΔR_s,t−1 | 0.99 | 0.6893 | 0.1196 | 0.99 | 0.6889 | 0.1201 | 0.99 | 0.6889 | 0.1198
Δp_o,t−1 | 0.99 | 0.0490 | 0.0106 | 0.99 | 0.0489 | 0.0107 | 0.99 | 0.0489 | 0.0106
Δm_t−1 | 0.99 | 0.1575 | 0.0310 | 0.99 | 0.1577 | 0.0308 | 0.99 | 0.1577 | 0.0308
U^d_t−1 | 0.59 | −0.0465 | 0.0449 | 0.59 | −0.0463 | 0.0449 | 0.59 | −0.0460 | 0.0449
n^d_t−1 | 0.12 | 0.0004 | 0.0013 | 0.12 | 0.0004 | 0.0013 | 0.12 | 0.0004 | 0.0013
R_l,t−1 | 0.10 | 0.0042 | 0.0176 | 0.09 | 0.0039 | 0.0171 | 0.09 | 0.0040 | 0.0172
Δp_e,t−1 | 0.07 | 0.0016 | 0.0096 | 0.07 | 0.0017 | 0.0097 | 0.07 | 0.0017 | 0.0097
Δn_t−1 | 0.06 | 0.0008 | 0.0045 | 0.07 | 0.0008 | 0.0047 | 0.07 | 0.0008 | 0.0047
const | 0.07 | 0.0001 | 0.0005 | 0.07 | 0.0001 | 0.0006 | 0.07 | 0.0001 | 0.0006
Δw_t−1 | 0.05 | 0.0001 | 0.0086 | 0.05 | −0.0003 | 0.0165 | 0.05 | −0.0001 | 0.0163
ΔU_r,t−1 | 0.06 | −0.0003 | 0.0167 | 0.05 | 0.0002 | 0.0070 | 0.06 | 0.0001 | 0.0092
m^d_t−1 | 0.05 | <0.0000 | 0.0028 | 0.05 | 0.0001 | 0.0085 | 0.05 | <0.0000 | 0.0028
Δc_t−1 | 0.05 | 0.0002 | 0.0068 | 0.05 | 0.0022 | 0.0619 | 0.05 | 0.0023 | 0.0611
ΔR_l,t−1 | 0.05 | 0.0016 | 0.0591 | 0.05 | <0.0001 | 0.0029 | 0.05 | 0.0002 | 0.0069
Table 16. BMA (without stationarity restrictions) coefficient estimates of the UKM1 model for different average prior model size assumptions.
Variable | E(Ξ)=K/8 | E(Ξ)=K/5 | E(Ξ)=K/4
Each prior column reports PIP | Avg. Mean | Avg. Std. Dev.
m_t−1 | 1.00 | 0.8320 | 0.0982 | 1.00 | 0.8332 | 0.0973 | 1.00 | 0.8315 | 0.0982
m_t−2 | 0.20 | 0.0360 | 0.0861 | 0.20 | 0.0363 | 0.0865 | 0.21 | 0.0374 | 0.0875
m_t−3 | 0.06 | −0.0013 | 0.0281 | 0.07 | −0.0014 | 0.0292 | 0.06 | −0.0013 | 0.0277
m_t−4 | 0.16 | 0.0193 | 0.0573 | 0.15 | 0.0175 | 0.0545 | 0.15 | 0.0181 | 0.0554
p_t | 0.64 | 0.1102 | 0.1203 | 0.65 | 0.1089 | 0.1180 | 0.65 | 0.1112 | 0.1202
p_t−1 | 0.33 | 0.0547 | 0.1486 | 0.33 | 0.0521 | 0.1451 | 0.33 | 0.0523 | 0.1472
p_t−2 | 0.20 | −0.0286 | 0.1258 | 0.18 | −0.0247 | 0.1176 | 0.20 | −0.0264 | 0.1243
p_t−3 | 0.16 | −0.0231 | 0.0809 | 0.16 | −0.0229 | 0.0806 | 0.17 | −0.0240 | 0.0849
p_t−4 | 0.11 | −0.0098 | 0.0477 | 0.11 | −0.0097 | 0.0473 | 0.11 | −0.0093 | 0.0482
y_t | 0.20 | 0.0191 | 0.0517 | 0.20 | 0.0195 | 0.0522 | 0.20 | 0.0189 | 0.0514
y_t−1 | 0.56 | 0.0857 | 0.0945 | 0.57 | 0.0880 | 0.0956 | 0.57 | 0.0873 | 0.0952
y_t−2 | 0.13 | −0.0066 | 0.0575 | 0.14 | −0.0070 | 0.0591 | 0.13 | −0.0071 | 0.0591
y_t−3 | 0.11 | 0.0039 | 0.0376 | 0.11 | 0.0030 | 0.0377 | 0.11 | 0.0030 | 0.0371
y_t−4 | 0.20 | 0.0205 | 0.0511 | 0.19 | 0.0191 | 0.0493 | 0.20 | 0.0199 | 0.0500
Rn_t | 1.00 | −0.5099 | 0.0926 | 1.00 | −0.5087 | 0.0936 | 1.00 | −0.5089 | 0.0960
Rn_t−1 | 0.09 | −0.0177 | 0.0773 | 0.09 | −0.0181 | 0.0792 | 0.09 | −0.0186 | 0.0814
Rn_t−2 | 0.11 | −0.0221 | 0.0848 | 0.10 | −0.0192 | 0.0790 | 0.11 | −0.0210 | 0.0821
Rn_t−3 | 0.05 | 0.0013 | 0.0265 | 0.06 | 0.0015 | 0.0287 | 0.06 | 0.0016 | 0.0299
Rn_t−4 | 0.06 | −0.0029 | 0.0267 | 0.06 | −0.0028 | 0.0258 | 0.06 | −0.0031 | 0.0268
const | 0.14 | −0.0996 | 0.3097 | 0.14 | −0.0956 | 0.3035 | 0.13 | −0.0898 | 0.2958
Table 17. BMA (without stationarity restrictions) coefficient estimates of UK inflation for different average prior model size assumptions.
Variable | E(Ξ)=K/8 | E(Ξ)=K/5 | E(Ξ)=K/4
Each prior column reports PIP | Avg. Mean | Avg. Std. Dev.
I_d,t | 1.00 | 0.0379 | 0.0014 | 1.00 | 0.0379 | 0.0014 | 1.00 | 0.0379 | 0.0014
Δp_e,t | 1.00 | 0.2617 | 0.0238 | 1.00 | 0.2617 | 0.0238 | 1.00 | 0.2617 | 0.0238
S_t−1 | 1.00 | −0.9687 | 0.1024 | 1.00 | −0.9686 | 0.1024 | 1.00 | −0.9685 | 0.1024
y^d_t−1 | 1.00 | 0.1877 | 0.0353 | 1.00 | 0.1879 | 0.0354 | 1.00 | 0.1879 | 0.0354
Δp_t−1 | 1.00 | 0.2797 | 0.0324 | 1.00 | 0.2796 | 0.0324 | 1.00 | 0.2794 | 0.0324
π*_t−1 | 1.00 | −0.1685 | 0.0280 | 1.00 | −0.1687 | 0.0281 | 1.00 | −0.1687 | 0.0281
ΔR_s,t−1 | 0.99 | 0.6893 | 0.1196 | 0.99 | 0.6889 | 0.1201 | 0.99 | 0.6889 | 0.1198
Δp_o,t−1 | 0.99 | 0.0490 | 0.0106 | 0.99 | 0.0489 | 0.0107 | 0.99 | 0.0489 | 0.0106
Δm_t−1 | 0.99 | 0.1575 | 0.0310 | 0.99 | 0.1577 | 0.0308 | 0.99 | 0.1577 | 0.0308
U^d_t−1 | 0.59 | −0.0465 | 0.0449 | 0.59 | −0.0463 | 0.0449 | 0.59 | −0.0460 | 0.0449
n^d_t−1 | 0.12 | 0.0004 | 0.0013 | 0.12 | 0.0004 | 0.0013 | 0.12 | 0.0004 | 0.0013
R_l,t−1 | 0.10 | 0.0042 | 0.0176 | 0.09 | 0.0039 | 0.0171 | 0.09 | 0.0040 | 0.0172
Δp_e,t−1 | 0.07 | 0.0016 | 0.0096 | 0.07 | 0.0017 | 0.0097 | 0.07 | 0.0017 | 0.0097
Δn_t−1 | 0.06 | 0.0008 | 0.0045 | 0.07 | 0.0008 | 0.0047 | 0.07 | 0.0008 | 0.0047
const | 0.07 | 0.0001 | 0.0005 | 0.07 | 0.0001 | 0.0006 | 0.07 | 0.0001 | 0.0006
Δw_t−1 | 0.05 | 0.0001 | 0.0086 | 0.05 | −0.0003 | 0.0165 | 0.05 | −0.0001 | 0.0163
ΔU_r,t−1 | 0.06 | −0.0003 | 0.0167 | 0.05 | 0.0002 | 0.0070 | 0.06 | 0.0001 | 0.0092
m^d_t−1 | 0.05 | <0.0000 | 0.0028 | 0.05 | 0.0001 | 0.0085 | 0.05 | <0.0000 | 0.0028
Δc_t−1 | 0.05 | 0.0002 | 0.0068 | 0.05 | 0.0022 | 0.0619 | 0.05 | 0.0023 | 0.0611
ΔR_l,t−1 | 0.05 | 0.0016 | 0.0591 | 0.05 | <0.0001 | 0.0029 | 0.05 | 0.0002 | 0.0069
Table 18. Run times of BACE gretl package.
CPUs | UKM1, without forecasts | UKM1, with forecasts | UK inflation, without forecasts | UK inflation, with forecasts
Each cell reports Nrep / Run time.
1 | 5×10^5 / 147 | 5×10^5 / 169 | 5×10^5 / 112 | 5×10^5 / 128
4 | 5×10^5 / 128 | 5×10^5 / 143 | 5×10^5 / 49 | 5×10^5 / 54
20 | 5×10^5 / 23 | 5×10^5 / 28 | 5×10^5 / 15 | 5×10^5 / 17
CPUs denotes the total number of processors used in the simulation experiment, Nrep the total number of iterations of the MC3 sampling algorithm, and Run time the computational time in seconds.
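Nrep counts iterations of the MC3 (Markov chain Monte Carlo model composition) sampler, which explores the model space by toggling one regressor at a time. The following is a minimal illustrative Python sketch with a BIC-style marginal-likelihood approximation; the paper's gretl packages implement the full method with MPI parallelization:

```python
# Minimal MC3 sampler over OLS models: propose flipping one variable's
# inclusion, accept with the ratio of approximate marginal likelihoods,
# and estimate posterior inclusion probabilities from visit frequencies.
import numpy as np

rng = np.random.default_rng(0)

def log_ml(y, X, included):
    """BIC-style log marginal likelihood of an OLS model."""
    n = len(y)
    Xs = X[:, included] if included.any() else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    sse = np.sum((y - Xs @ beta) ** 2)
    return -0.5 * Xs.shape[1] * np.log(n) - 0.5 * n * np.log(sse)

def mc3(y, X, nrep=2000):
    K = X.shape[1]
    inc = rng.random(K) < 0.5          # random starting model
    cur = log_ml(y, X, inc)
    visits = np.zeros(K)
    for _ in range(nrep):
        j = rng.integers(K)            # toggle one regressor
        prop = inc.copy()
        prop[j] = ~prop[j]
        new = log_ml(y, X, prop)
        if np.log(rng.random()) < new - cur:   # Metropolis acceptance
            inc, cur = prop, new
        visits += inc
    return visits / nrep               # estimated inclusion probabilities

# Toy data: only the first two of five candidates matter
n, K = 200, 5
X = rng.standard_normal((n, K))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(n)
print(mc3(y, X).round(2))
```

Because each iteration is an independent OLS fit, chains parallelize naturally across processors, which is what the CPU columns of the run-time tables exploit.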
Table 19. Run times of BMA gretl package (with stationarity restrictions).
CPUs | UKM1, without forecasts | UKM1, with forecasts | UK inflation, without forecasts | UK inflation, with forecasts
Each cell reports Nrep / Run time.
1 | 1.5×10^5 / 10,554 | 1.5×10^5 / 275,165 | 1.5×10^5 / 1457 | 1.5×10^5 / 35,996
4 | 1.5×10^5 / 2470 | 1.5×10^5 / 56,294 | 1.5×10^5 / 380 | 1.5×10^5 / 6829
20 | 1.5×10^5 / 1169 | 1.5×10^5 / 14,771 | 1.5×10^5 / 136 | 1.5×10^5 / 3044
CPUs denotes the total number of processors used in the simulation experiment, Nrep the total number of iterations of the MC3 sampling algorithm, and Run time the computational time in seconds.
Table 20. Run times of BMA gretl package (without stationarity restrictions).
CPUs | UKM1, without forecasts | UKM1, with forecasts | UK inflation, without forecasts | UK inflation, with forecasts
Each cell reports Nrep / Run time.
1 | 1.5×10^5 / 353 | 1.5×10^5 / 291,328 | 1.5×10^5 / 81 | 1.5×10^5 / 32,095
4 | 1.5×10^5 / 167 | 1.5×10^5 / 65,862 | 1.5×10^5 / 54 | 1.5×10^5 / 6556
20 | 1.5×10^5 / 103 | 1.5×10^5 / 16,630 | 1.5×10^5 / 55 | 1.5×10^5 / 1778
CPUs denotes the total number of processors used in the simulation experiment, Nrep the total number of iterations of the MC3 sampling algorithm, and Run time the computational time in seconds.