Sustainable Approach to the Normalization Process of the UK’s Monetary Policy

It has been more than a decade since central banks, in the face of the global financial crisis, implemented a set of unconventional initiatives that included a rapid and significant decrease in their main interest rates and an unprecedented balance sheet policy. Thus far, they have not returned their monetary policy to the pre-crisis framework and have not implemented a normalization process. Currently, a trend of using econometric models in monetary policy for forecasting purposes can be observed. Among these models, Bayesian vector autoregression (BVAR) models are increasingly being used by central banks. The main aim of this study was to conduct an empirical verification of the BVAR model's usefulness for short-term prediction, which could then support a sustainable (orderly) normalization process for the UK's monetary policy. This study verifies the research hypothesis that the BVAR model might be a useful tool in the Bank of England's decision-making process regarding the normalization of its monetary policy. Additionally, cause-and-effect analysis, the observation method, the document analysis method, and the synthesis method were employed. The conducted research indicates that a large BVAR model has significant predictive value for short-term forecasting.


Introduction
The end of the first decade of the 21st century triggered the start of a new period in central banking: a period of non-standard and unconventional monetary policy instruments, used with unprecedented scope and scale. Monetary authorities' actions, undertaken in response to the spread of global financial instability, have included unprecedented interventions that reduced main interest rates to historical lows (zero or, in some cases, even negative levels), a huge expansion of central banks' balance sheets, and changes in their systems of communication with stakeholders [1,2].
After more than a decade of extraordinary monetary policy, the further stimulation of domestic banking sectors seems to be inconvenient, even for central banks themselves. First of all, monetary authorities need to return to a more traditional approach to conducting monetary policy (normality), because too long a period of unconventional interventions could undermine their ability to maintain price stability. Secondly, they are afraid that the current environment of low interest rates could regenerate the factors that contributed to instability at the beginning of the 21st century, thereby leading to another period of destabilization in the near future. Thirdly, non-standard monetary policy, conducted over a long period of time, can undermine monetary authorities' credibility, which is one of the most important attributes of modern central banks. All of these considerations suggest that the process of monetary policy normalization for modern central banks seems to be necessary and fully justified.

Literature Review
A sustainable approach to the normalization process of monetary policy is based on a procedure of how to exit non-standard monetary policy and return to the pre-global financial crisis framework. In a broader perspective, it is a process related to the withdrawal of unconventional instruments and the stabilization of banking sector conditions [4]. Assumptions of the normalization process are therefore precisely defined by monetary authorities in their exit strategies. In this approach, the normalization process of monetary policy is a result of three successive events [8]:
• The completion of the non-standard monetary policy of the largest central banks implemented during the global financial crisis;
• The adoption of an "exit strategy" by central banks;
• Determining the conditions for normalization, i.e., how to return to classic monetary policy.
Therefore, exit strategies are a core of the normalization process. Nevertheless, this process is much broader and also includes other actions, initiatives, and instruments aimed at stabilizing the monetary policy of central banks, provided that the objectives of price stability and financial system stability are achieved.
Monetary policy normalization is a long-term process and, in its assumptions, includes several simple operations, relating to [9,10]:
• Halting extraordinary interventions;
• Downsizing and normalizing central banks' balance sheets;
• Selling purchased assets, if necessary;
• Raising short-term interest rates.
However, in practice, uncertainty about the perspectives of economic activity, inflation rate, or the functioning of the monetary transmission mechanism may complicate the implementation of the exit strategy. The normalization process should, therefore, be developed to promote sustainable economic growth in the long-term, while it also should be flexible enough to respond immediately to a changing macroeconomic environment [11].
Modern central banks now face many challenges in the area of new monetary policy objectives and tools. It is extremely important to precisely define the assumptions of the normalization process of modern monetary policy, in particular when setting the aims to be achieved, based on sustainable development. The normalization process is thus much more difficult for central banks than the entry strategy, i.e., the implementation of unconventional instruments. A lack of previous experience in carrying out normalizing actions is another barrier, as is the fear that the exit assumptions cannot be implemented smoothly. It is also difficult for central banks to predict the potential consequences of normalization, which additionally slows down their practical movement towards the new normal.
Implementation in a timely way is a key aspect of a sustainable normalization process. This is why, for many central banks, the most important question is when they should stop monetary easing (expansionary monetary policy), start tightening (restrictive monetary policy), and then implement the adopted exit strategies. Some of them indicate that this should be the moment when certain decisions are made about:
• The end of an increase in the asset portfolio of the monetary authority (tapering purchases);
• An increase in the main interest rates and the start of adjusting the asset portfolio to the new steady state.
These are two separate decisions. However, both of them should primarily depend on the current economic and financial conditions; they should not be implemented at a predetermined time. Therefore, it is not possible to uniquely and precisely identify an appropriate start time for the normalization process. The completion of monetary policy easing should take place when there are justified grounds to assume that economies will return to a growth path towards a higher level of resource use, that inflation will move towards the target, and that this path will be maintained despite the withdrawal of non-standard monetary policy instruments [12]. However, it is extremely difficult to determine when exactly macroeconomic variables reach this path and will be able to stay on it, even when monetary easing and asset purchase programs end [13]. Therefore, central banks are looking for statistical or econometric methods to predict these variables as a basis for their normalizing decisions.
This prediction is very important, as both a too-early and a too-late normalization process may generate profits and losses [14]. Therefore, the question arises of whether it is more beneficial for monetary authorities to implement an exit strategy early, when the first signs of economic recovery appear, or whether it is better to extend the normalization process until financial system stability is fully restored. The rapid implementation of the normalization process counteracts the excessive expansion of the central bank balance sheet and the growing inflation risk. On the other hand, it may weaken economic growth (or even cause a recession), undoing a positive effect of non-standard instrument implementation [13].
Conversely, late normalization, when central banks' assets have expanded to a very high level, generates significant costs. An increase in the size and changes in the structure of central banks' balance sheets may affect their ability to begin a smooth normalization process, or social confidence that this process will be implemented at an appropriate time without the risk of inflationary pressure [15]. Moreover, central banks are concerned that the higher the value of their balance sheet total and the longer the maturity of the instruments held in their assets, the greater the negative consequences of normalization will be [4].
In the view of central banks, the timing of the end of non-standard monetary policy instruments is primarily determined by their effectiveness and the degree of achievement of the assumed objectives. For example, B. S. Bernanke and V. Reinhart have suggested adopting a desirable (targeted) level of a specific indicator (for example, a level of government bond yields) that a central bank will try to achieve by implementing non-standard tools [16,17]. Then, the achievement of this objective implies the need to withdraw specific instruments, giving rise to normalization actions. A. Belke emphasizes in turn that no specific date should be indicated, and that an exit should not be made "as early as possible"; instead, one should try to indicate the conditions that need to be met to trigger an exit [18]. This means that forecasts obtained through econometric prediction models might be crucial in central banks' decision-making processes.
In modern monetary policy, there is an observed trend of using econometric models for constructing forecasts of macroeconomic variables, assessing the consequences of monetary policy instruments and actions undertaken, simulating economic reactions to specific shocks, and developing alternative monetary policy scenarios [19]. Central banks, in the process of selecting and constructing forecasting models for monetary policy, take into account analytical needs, research objectives, and the tasks that these models will have to fulfill [20].
Economic modeling used in monetary policy is a kind of consensus between theoretical and empirical cohesion. The model used by central banks (which is also the ideal model) should [21]:
• be characterized by a high degree of theoretical coherence;
• enable conducting economic analysis of specific issues;
• be consistent with empirical data.
In central banks, the following macroeconometric models are most often used in the forecasting process [20]:
• Dynamic factor models (DFM models);
• Dynamic Stochastic General Equilibrium models (DSGE models);
• Vector autoregression models (VAR models).
There are many advantages and arguments for using the above-presented forecasting models in monetary policy. First of all, they represent the three main types of macroeconometric models used by central banks in forecasting. Secondly, they differ in the ways in which they use General Equilibrium Theory as a source of information for model formulation and statistical inference. Thirdly, these models adopt different approaches to gathering information, in particular when the main aim is to obtain useful predictions about complex phenomena from relatively few data. Nevertheless, many methodological studies indicate that a combination of these models is most fruitful, especially when they are varied [22].
Dynamic factor models (DFM) are used in short-term macroeconomic forecasting. Central banks apply them as a tool to support the short-term forecasting of key macroeconomic variables [23][24][25]. The essence of dynamic factor models is the aggregation of a large number of potential explanatory variables into several mutually independent factors, which are used to predict the selected variable [26]. However, one of the significant difficulties of dynamic factor models is their specification. In particular, this refers to the identification of the number of factors included in forecasting, as well as the number of possible lags occurring in the forecasting equation. The conducted literature studies indicate many approaches to determining model specifications for prognostic purposes. Bai and Ng (2002) [27] as well as Breitung and Eickmeier (2005) [25] argue that the identification of the optimal number of factors should be made on the basis of modified information criteria. In turn, Artis, Banerjee, and Marcellino (2005) [28] as well as Matheson (2005) [29] propose that the selection of an exact specification of the prognostic equation should be based on the standard Bayesian Information Criterion. However, previously conducted research reveals that most often the number of factors and lags is set ad hoc at an arbitrary and usually small level [23,30].
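The factor-aggregation step described above can be sketched with a minimal, PCA-based illustration. This is not any specific central bank's model: the data, dimensions, and function names below are invented for the example, and a full DFM would add factor dynamics and a forecasting equation on top of the extracted factors.

```python
import numpy as np

def extract_factors(X, k):
    """Extract k static factors from a (T x n) data panel via PCA.

    Illustrates the DFM idea: many observed series are compressed into
    a few common factors that would then enter a small forecasting
    equation for the target variable.
    """
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each series
    cov = Xc.T @ Xc / len(Xc)                   # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    loadings = eigvecs[:, ::-1][:, :k]          # top-k eigenvectors
    return Xc @ loadings                        # (T x k) common factors

# Simulated panel: 30 observed series driven by 2 latent factors
rng = np.random.default_rng(0)
common = rng.normal(size=(200, 2))
X = common @ rng.normal(size=(2, 30)) + 0.1 * rng.normal(size=(200, 30))
F = extract_factors(X, k=2)
print(F.shape)  # (200, 2)
```

The extracted factors would then serve as the few regressors in the prognostic equation, which is how DFMs avoid estimating one coefficient per observed series.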
Dynamic Stochastic General Equilibrium models (DSGE) are much younger forecasting models that are widely used in monetary policy. They constitute a monetary policy tool able to describe the monetary policy transmission mechanism, starting from a monetary impulse (monetary policy decisions) through to the reaction of the main macroeconomic variables [31]. Scientific research on the use of dynamic stochastic general equilibrium models in the monetary policy of central banks has been conducted for over 20 years. These studies focus on the models' theoretical construction, specification, and estimation. Most central banks implementing direct inflation targeting have constructed, or are in the process of constructing, their own DSGE model adapted to the specifics of their national economy.
The third most frequently used macroeconometric model in the prognostic process of monetary policy is the vector autoregression model (VAR model). Vector autoregression models have been commonly used in macroeconomics for structural analysis and forecasting since the groundbreaking work of Sims (1980) [32]. An important feature of VAR models is their flexibility, which allows the identification of complex dependencies between macroeconomic variables. However, estimating their large number of parameters consumes degrees of freedom and thus forces the adoption of wide confidence intervals for the estimated coefficients. Therefore, VAR models should be used in studies that take into account a small number of variables. On the other hand, small variable sets can lead to the problem of omitted-variable bias (OVB), significantly weakening structural analysis and forecasting effectiveness (see [33][34][35]).
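As a point of reference for both the flexibility and the parameter cost of VAR models, the sketch below estimates a small VAR(1) by ordinary least squares on simulated data. Each equation carries n + 1 coefficients, so parameter counts grow quickly with the number of variables; all names and values are illustrative, not taken from the paper.

```python
import numpy as np

# Simulate a stationary 3-variable VAR(1): Y_t = A Y_{t-1} + e_t
rng = np.random.default_rng(1)
T, n = 300, 3
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.1],
                   [0.1, 0.0, 0.3]])
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(scale=0.1, size=n)

# Stack regressors: a constant plus one lag of every variable,
# then estimate all equations jointly by least squares
X = np.column_stack([np.ones(T - 1), Y[:-1]])
B = np.linalg.lstsq(X, Y[1:], rcond=None)[0]   # (n+1) x n coefficients
A_hat = B[1:].T                                # estimated lag matrix
print(np.round(A_hat, 2))
```

With 3 variables and 1 lag this is only 12 parameters; a 28-variable system with the same lag length would already need 812, which is the degrees-of-freedom problem the text refers to.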
The use of vector autoregression models is very wide. In the 1990s, J. A. Bikker, a Dutch econometrician, used them for forecasting and modeling national economies [36], as well as for analyzing dependencies between the economies of different countries [37]. F. Canova studied relations between the production value of Germany, the U.S., and Japan [38]. Together with J. Pina, he also applied them to currency policy [39]. In turn, J. E. Sturm and J. de Haan (1995) analyzed connections between the real budget deficit and the real growth of national income [40]. Finally, F. C. Bagliano and C. A. Favero (1998) used VAR models to analyze the monetary transmission mechanism in the U.S. [41].
However, the imperfections of all the above models used in central banks' forecasting processes justify the search for new methods of variable forecasting in monetary policy that take into account the assumptions of the sustainable development of modern economies.
Against this background, in recent years, research has been undertaken on a new approach to vector autoregression models: Bayesian vector autoregression models (BVAR models). They are a useful tool for forecasting macroeconomic variables. The main argument confirming the legitimacy of adopting the BVAR methodology in central banks' forecasting processes is the fact that, in the case of large models, i.e., those with a large number of explanatory variables (roughly 30 or more), the Bayesian approach avoids the over-parameterization problems that would occur in a classic VAR model estimated on such a large set of variables. The second argument for the BVAR model is the possibility of minimizing the impact of over-fitting. This problem occurs when a statistical model has too many parameters relative to the size of the research sample on which it was built [39,42]. Considering the above, Bańbura, Giannone, and Reichlin (2010) indicate that it is the appropriate tool for large, dynamic macroeconometric models [43]. An important advantage of BVAR models is their objectivity and flexibility. They give researchers an opportunity to introduce prior information into the model in a completely transparent manner. This allows building a model that takes into account not only the stochastic behavior of economic variables, but also the uncertainty associated with existing dependencies within the analyzed economic system. This kind of flexibility leads to a better model in terms of economic forecasting compared to structural models with more restrictions, as well as a more accurate model than traditional VAR models with coefficients estimated by the least squares method, which are burdened by the previously mentioned over-fitting problems.
Therefore, the BVAR model enables the characterization of the future path of economic variables in probabilistic terms. The objectivity of this type of model is also an important advantage, because the researcher can easily reproduce forecasts.
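The over-parameterization and over-fitting arguments above can be illustrated with a stylized shrinkage calculation. Under a Gaussian prior with mean `prior_mean` and a single tightness parameter `lam`, the posterior mean of the regression coefficients is a ridge-like compromise between the OLS estimate and the prior. A full BVAR scales the tightness per coefficient (as in the Minnesota-style priors discussed later), so this is a simplified sketch, not the paper's estimator.

```python
import numpy as np

def posterior_mean(X, Y, prior_mean, lam):
    """Posterior mean of coefficients under an isotropic Gaussian prior.

    lam = 0 reproduces OLS; larger lam pulls the estimate towards
    prior_mean, which is the shrinkage that tames over-fitting in
    over-parameterized systems.
    """
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k),
                           X.T @ Y + lam * prior_mean)

# Short sample, many regressors: the setting where OLS over-fits
rng = np.random.default_rng(2)
T, k = 40, 10
X = rng.normal(size=(T, k))
beta = np.zeros(k); beta[0] = 1.0            # true model is sparse
Y = X @ beta + rng.normal(scale=0.5, size=T)

prior = np.zeros(k)                          # shrink towards zero
b_ols = posterior_mean(X, Y, prior, lam=0.0)
b_bayes = posterior_mean(X, Y, prior, lam=50.0)
# Shrinkage strictly reduces the norm of the coefficient vector
print(np.linalg.norm(b_bayes) < np.linalg.norm(b_ols))  # True
```

The same algebra, applied equation by equation with coefficient-specific tightness, is what makes a 30-variable BVAR estimable on samples where unrestricted OLS would be hopelessly noisy.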
On the other hand, BVAR models are not without flaws. Ciccarelli and Rebucci (2003) conclude that a main limitation of BVAR models is that the correct identification and selection of variables is extremely important for the final results [44]. In many cases, the selection of variables is based on incomplete or incorrect information, which hinders and subjectivizes decisions regarding the adoption of a particular set of parameters. Another disadvantage of BVAR models is their lack of economic interpretation. A further criticism of BVAR models (also referring to classic VAR models) is that they do not explicitly include long-term dependencies between variables, even though cointegration theory postulates that certain variables follow a common path over time, or at least do not diverge continuously, i.e., that they are cointegrated.
The first evidence of the use of BVAR models in forecasting macroeconomic variables relates to the work of the team of econometricians from the Federal Reserve Bank of Minneapolis (later referred to as the Minneapolis school, with its Minneapolis prior): Doan, Litterman, and Sims (1984) [45]; Todd (1984) [46]; and Litterman (1984, 1985) [47,48]. Doan, Litterman, and Sims tried to improve the prognostic results obtained in VAR models by estimating them using the Bayesian approach, which takes into account any previous information that may be available to a researcher. Their results pointed to the lack of economic interpretation of Bayesian vector autoregression models; on the other hand, they provided detailed characteristics of the dynamic statistical dependencies of a set of economic variables [45]. Litterman (1985) also emphasized the simplicity of using these models, which at the same time generate precise forecasts at a fraction of the financial outlay required by alternative prediction methods giving similar results. Moreover, he pointed out that BVAR models generate not only a point forecast but a multidimensional probability distribution for the future results of the economy, which seemed more realistic than those obtained by other methods. However, in terms of inflation forecasting, the effectiveness of BVAR models was slightly less impressive: in Litterman's research, the inflation forecast errors turned out to be, on average, twice as large as the errors of other methods [48]. In turn, Todd (1984) confirmed that BVAR models represent a better approach to forecasting, giving a researcher greater flexibility in reflecting the real nature of phenomena as well as objectivity in the model building process [46].
In the early 1990s, Artis and Zhang (1990) used the Bayesian approach in vector autoregression models for studies of G7 group of countries. In the majority of the analyzed countries, the inflation forecast was as accurate as the forecasts of output and the balance of payments. Therefore, the authors confirmed that the method they used was as effective as the forecasts developed by traditional methods applied by the IMF in World Economic Outlook [49].
Further research on the effectiveness of BVAR models was carried out by Kadiyala and Karlsson (1997) [50], as well as Dua and Ray (1995) [51]. Comparing the obtained forecasts with those from VAR and ARIMA models, they indicated that the BVAR model generates the most accurate short- and long-term forecasts, as well as correctly predicting the direction of changes in the analyzed variables [51].
Bayesian vector autoregression models are also increasingly used by central banks [52][53][54]. Kasuya and Tanemura (2000) confirmed earlier achievements, recognizing the Bayesian vector autoregression model as more effective compared to the VAR model [52]. Alvarez, Ballabriga, and Jareno (1998) estimated the BVAR model for the Spanish economy, comparing the results to those obtained from the VAR model. They concluded that, in predicting price variables, the superiority of the BVAR model over the other models is obvious, and the differences between them are quite significant [53]. Kenny, Meyler, and Quinn (1998) used the BVAR model to forecast inflation in Ireland. Their results confirmed a significant improvement of forecasts obtained by the use of the Bayesian approach [54].
A modern approach to the use of BVAR models is represented by Kapetanios et al. (2012) [6] and Lenza, Pill, and Reichlin (2010) [7]. Their results confirm the validity of BVAR models in forecasting the macroeconomic variables used in monetary authorities' decision-making processes. This study uses the methodology developed by Kapetanios et al. (2012) [6] and Lenza, Pill, and Reichlin (2010) [7] and their experience in the construction of macroeconomic forecasts, from the point of view of the normalization process of the Bank of England's monetary policy.

Materials and Methods
Based on the above-presented arguments, the adoption of the BVAR model's methodology in the process of forecasting macroeconomic variables for monetary policy objectives seems to be fully justified. The BVAR model adopted in the empirical research was developed in accordance with the methodology proposed by Kapetanios et al. (2012) [6] and Lenza, Pill, and Reichlin (2010) [7]; however, it was adapted to the specifics of the UK market. The general BVAR model takes the following form [55]:

Yt = θ0 + θ1 Yt−1 + . . . + θp Yt−p + et (1)

where: Yt represents a large vector of random variables included in a large data set at time t, i.e., (y1t, y2t, . . . , ynt)'; et represents an n-dimensional white-noise error term; θ0 represents an n-dimensional vector of constants; θ1 . . . θp represent n × n autoregressive parameter matrices.
The explanatory (endogenous) variables included in the model are a set of macroeconomic indicators: general economic and financial market parameters of the United Kingdom. The research covered 28 indicators (see Appendix A) characteristic of the UK economy, which also illustrate the specificity of its financial markets. These variables are persistent, which is a strong argument for including them in the constructed BVAR model [41,48].
Kapetanios et al. argue that, in general, simple autoregression and random walk models generate good-quality forecasts for macroeconomic and financial variables [6]. Therefore, in the conducted research, the method of forecasting based on the random walk process with drift [56] was chosen for each variable in the BVAR model, in accordance with previous scientific studies analyzed in the literature [57]. Thus, the prior of the adopted model is centered on:

Yt = θ0 + Yt−1 + et (2)

The elements on the diagonal of θ1 tend towards 1, while the other components of the matrices θ1 . . . θp tend toward 0, which indicates that the first lag is the most important predictor in each equation of the BVAR model. In other words, the expected value of the matrix θ1 is the identity matrix, E[θ1] = In. In accordance with the adopted research methodology, the BVAR model was used to verify forecasts for the UK economy. In the model, the endogenous variables were a set of general economic and financial parameters, including the consumer price index, domestic production volume, interest rates, treasury securities yields, monetary aggregates, the unemployment rate, real estate prices, oil prices, share prices, a consumer confidence indicator, exchange rates, etc. Thus, the selection of the adopted variables in the BVAR models was not accidental or random; it was made fully on the basis of the methodology developed and presented by Kapetanios et al. (2012) [6].
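The random-walk-with-drift centering described above can be made concrete by constructing the implied prior mean of the coefficient matrices: the first-own-lag block is centered on the identity matrix and all remaining lag coefficients on zero. The dimensions below are hypothetical, not the 28-variable UK system.

```python
import numpy as np

# Prior mean implied by a random-walk-with-drift centering:
# E[theta_1] = I_n and E[theta_2] = ... = E[theta_p] = 0.
n, p = 4, 2                               # hypothetical: 4 variables, 2 lags
prior_mean = np.zeros((n, n * p))         # one row per equation,
                                          # columns = stacked lag blocks
prior_mean[:, :n] = np.eye(n)             # first-lag block = identity

print(prior_mean)
```

In a full Minnesota-style setup, this mean would be paired with prior variances that tighten on longer lags and cross-variable coefficients, so that the data must speak loudly to move an estimate away from the random-walk benchmark.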
The BVAR models for the UK included mainly monthly data (only for one variable, WB2, were quarterly data used; see Appendix A). The analyzed models contained data for 101 periods. The analyses covered the period from January 2005 to June 2013, assuming that the last 5 years of collected data (i.e., until June 2018) would form a basis for the evaluation of the model's predictions. The adopted research period resulted from the fact that it was the earliest period for which all the data needed for the calculations were available. In the case of some explanatory variables, their logarithmized values were used. For the remaining variables, which, by definition, are indicators, their real values were assumed (see Appendix A). The Gretl software was used to perform the econometric calculations [55].
The macroeconomic and financial variables used in the adopted BVAR models included data for the United Kingdom from the databases of the Bank of England, Federal Reserve Economic Data, World Development Indicators, Bloomberg, and OECD.
Finally, nine variables and two indicators were included in the analysis. Based on the Schwarz information criterion, a lag order of one was adopted (SC = −45.45). In order to determine the accuracy of forecasts for 12 periods (1 year), forecasts for the period between February 2013 and January 2014 were determined. Table 1 presents the results of their evaluation. The adopted model delivers good annual predictions for all variables. For WB18, the RMSE value is slightly higher, which indicates differences between the actual values and the forecast.
Nevertheless, these differences are not large (the Theil's U statistics are relatively low); see Table 1. For WB14, the differences between the forecasts and actual values are significant, although there are not many of them; it can be noticed that the forecast anticipated changes in the adopted data that in fact did not occur. Figure 1 illustrates the differences between the actual values and the annual forecast of the BVAR model for the United Kingdom. The obtained results for the 1-year forecasts, presented in detail, confirmed that the model might be used for the prediction of financial and macroeconomic variables in the normalization process of central banks' monetary policy. The model's high ability to predict variables in the short term shows the need for further in-depth research on its ability in the medium and long term.
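The accuracy measures referenced around Table 1 can be computed as follows. The paper does not state which variant of Theil's U it reports; the sketch below uses the U1 convention (bounded between 0 and 1, with 0 meaning a perfect forecast), and the two series are invented for illustration.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared forecast error."""
    return np.sqrt(np.mean((actual - forecast) ** 2))

def theil_u1(actual, forecast):
    """Theil's U1 statistic: RMSE scaled by the root mean squares of
    both series, so it lies in [0, 1] with 0 for a perfect forecast."""
    den = np.sqrt(np.mean(forecast ** 2)) + np.sqrt(np.mean(actual ** 2))
    return rmse(actual, forecast) / den

# Invented 5-period actual and forecast series
actual = np.array([1.0, 1.2, 1.1, 1.3, 1.4])
forecast = np.array([1.0, 1.1, 1.2, 1.3, 1.5])
print(round(rmse(actual, forecast), 3))
print(round(theil_u1(actual, forecast), 3))
```

Low U1 values of this kind are what the text means by "Theil's U-statistics are relatively low": the forecast errors are small relative to the scale of the series themselves.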

Discussion and Conclusions
The normalization process does not occur automatically and requires coordinated and well-planned actions to be undertaken by monetary authorities. Their scope largely depends on the economic and financial conditions in a specific country. Monetary policy tightening is difficult, and the consequences of too-late normalization can be significant. Lowering interest rates when economic activity slows and inflation decreases is relatively easy; however, raising interest rates is much more difficult when the economy is moving towards recovery. This is all the more important when the signals of sustainable economic growth are not completely clear. Thus, it should be noted that monetary policy easing during the financial crisis required extraordinary courage from central banks; the normalization phase, however, is expected to require even more courage from them.
From the point of view of a sustainable monetary policy normalization process, an effective method of forecasting the macroeconomic variables particularly important for central banks seems to be a key tool facilitating the decision-making process regarding the identification of the time, pace, and methods of normalization. It gives monetary authorities an instrument through which they might be able to anticipate changes in macroeconomic and financial variables, also taking into account the consequences of specific monetary policy instruments and decisions. The Bayesian vector autoregression model presented in this study is finding an increasing number of supporters among central banks as an effective method of forecasting. The conducted research indicates that the large BVAR model, given the high forecasting ability identified in the research, has significant predictive value in short-term forecasting. Based on this model, central banks may predict future short-term macroeconomic conditions and decide about their further operations: whether it is justified to continue extraordinary monetary policy or whether it is better to start its withdrawal. At the same time, this means that monetary authorities may base their decision-making process on forecasts and, through them, make decisions about key issues, such as when and with which instruments they should begin moving their monetary policy towards the new normal. They may also use the large BVAR model to predict what consequences specific normalizing activities may cause and decide whether their implementation is rational.
The results presented in this paper for the United Kingdom are only a part of a wider research project aimed at verifying the accuracy of BVAR models in monetary policy and then determining forecasts of macroeconomic variables, which will form a basis for central bank decision-making processes regarding the normalizing activities of their monetary policy. The obtained results for the one-year prediction confirm the legitimacy of using the BVAR model for this purpose. At the same time, they indicate its considerable precision and accuracy in short-term forecasting, with a high degree of objectivity and flexibility left to the researcher. Thus, for short-term forecasting, they confirm the adopted research hypothesis that the Bayesian vector autoregression model might be a useful tool in the decision-making process of monetary authorities regarding the normalization of the Bank of England's monetary policy. At the same time, they are a basis for further in-depth research into whether the BVAR model can be used in medium- and long-term forecasting. The results presented in this paper have also confirmed the results obtained in the preliminary research conducted by the author for the Euro area and the European Central Bank, which indicated that only short-term predictions might be useful for the central bank of the Eurozone [5].
The obtained results constitute a significant contribution to the economic sciences. Indeed, they point out that large BVAR models might be a significant tool in the sustainable normalization process of modern central banks. Therefore, the results are innovative, confirming the value of short-term predictions in analyzing conditions for starting the implementation of normalizing activities. The conducted research can also be a reference point for further in-depth research for other countries and central banks that implemented non-standard and unconventional monetary policies-in particular, the United States, Japan, and some countries from the European Union outside the euro area (for example, Sweden).
Finally, some additional questions arise regarding the undertaken issue, which could be a background for the author's further research:
• Why are only short-term forecasts significant?
• Are there any possibilities to use longer forecasting horizons, i.e., medium- or long-term?
• What is the final list of variables that should be included in the large BVAR model?
• What are the results for other major central banks that also implemented non-standard and unconventional monetary policy instruments and that are now trying to exit them?
Nevertheless, it should be noted that the main limitation of the current research is that the list of variables can be expanded, which may impact the obtained results. Moreover, this list should be adapted to the specifics of the selected country, so it is almost impossible to indicate one commonly used list of variables that should be included in the BVAR model for different countries and central banks.
Funding: This research received no external funding.

Conflicts of Interest:
The author declares no conflict of interest.