Does More Expert Adjustment Associate with Less Accurate Professional Forecasts?

Professional forecasters can rely on an econometric model to create their forecasts, but it is usually unknown to what extent they adjust the model-based forecast. In this paper we show, under just two simple assumptions, that it is possible to estimate the persistence and variance of the deviation of their forecasts from the forecasts of an econometric model. A key feature of the data that facilitates our estimates is that we have forecast updates for the same forecast target. An illustration to consensus forecasters who give forecasts for GDP growth, inflation and unemployment for a range of countries and years suggests that the more a forecaster deviates from the prediction of an econometric model, the less accurate the forecasts are.

When the predictions of professional forecasters are averaged, the resulting consensus forecast is quite often reasonably accurate. At times of crises or turning points, however, they can be inaccurate altogether. The latter may be due to joint behavior, where herding is sometimes seen; see Laster et al. (1999). Frequently studied forecasters are those collected in the Survey of Professional Forecasters (SPF) 1 and those of Consensus Economics 2. In this paper we study the behavior of the consensus forecasters.
Despite an abundance of studies on professional forecasters, there is much less research on what professional forecasters actually do when they create their forecasts. They may or may not look at each other, and they may or may not use similar sources of information. In the present paper, we address the potential consequences of whether they rely on an econometric model. Specifically, our research question concerns the link between potential deviations from an individual-specific econometric model and forecast accuracy.
There is some evidence in the literature that more deviation from a model forecast, and hence a larger adjustment, associates with lower accuracy; see Franses (2014) for a recent survey. In this paper we study this conjecture for the forecasts of the consensus forecasters for various years, for three key variables (GDP growth, inflation and unemployment), and for a range of countries. Our basic finding is that there is much evidence that more expert adjustment associates with lower forecast accuracy.
1 https://www.philadelphiafed.org/research-and-data/real-time-center/survey-of-professional-forecasters/.
2 https://www.consensuseconomics.com/.
Our paper proceeds as follows. In the next section we outline how we can construct measures of the persistence and the variance of expert adjustment from the observed forecasts. For many forecasters, we have a range of quotes for the same target variable. These forecast updates allow us to arrive at our estimates, for which we need two key assumptions. The first assumption is that econometric model-based forecasts are updated only once a year, while the expert-adjusted forecast updates concern sequential months. If this holds true, then the observed updates are informative about the adjustment process. Our second assumption is that the adjustment process can be described by a simple first-order autoregression. Other model assumptions could be made too, but these would make the estimation process more complicated. Section 2 outlines our approach. Section 3 considers the forecasts, and we first present detailed results for the USA. Section 4 presents the results for other countries in summary form. Section 5 concludes with our general finding that more deviation from the forecast of an econometric model associates with a deterioration of forecast accuracy.

Persistence and Variance of Adjustment
Consider a forecaster who gives a forecast F for variable y in year T. This forecast is given in each month j of the years T − 1 and T. Therefore, there are 24 forecasts for each year T. Note that January forecasts are special, as these address a new calendar year for the first time. Therefore, out of the 24 monthly forecasts created across the two years, we may view 22 of them as useful updates.
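To make this indexing concrete, the following small sketch (illustrative only, not code from the study) enumerates the 24 monthly forecast positions for a target year and drops the two special January quotes:

```python
# Positions t = 1..24 index the monthly forecasts for target year T across
# years T-1 and T. The two January quotes (t = 1 and t = 13) are special
# and do not count as useful updates, leaving 22 updates per target year.
# Illustrative sketch; the numbering convention follows the text.

def update_positions():
    """Return the positions t of the 22 useful forecast updates."""
    return [t for t in range(1, 25) if t not in (1, 13)]
```

Calling `update_positions()` gives the positions 2, ..., 12, 14, ..., 24.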
To create a measure of the persistence of forecast adjustment, we rely on the updates, as we explain next. A key assumption of our study is that a monthly forecast by a professional forecaster is the sum of an econometric model-based forecast and added intuition or expertise. We thus assume that
$$F_{j|T} = M_{j|T} + A_{j|T},$$
where $F_{j|T}$ is the published forecast for year $T$ given in month $j$, $M_{j|T}$ is the econometric model-based forecast, and $A_{j|T}$ is the expert adjustment. Empirical evidence summarized in the survey in Franses (2014) supports that this assumption holds for a wide range of macroeconomic and business forecasts. Another conclusion from that survey is that in practice it is rare to observe both the finally adjusted forecasts and the model forecasts at the same time. When both are observed, one could simply evaluate the added value of intuition or expertise by comparing the signs and sizes of the adjustments with out-of-sample forecast performance; see Fildes et al. (2009) and Franses et al. (2011).
To elicit the sign and size of the added judgment, we assume that model forecasts for annually observed variables are not updated each and every month, but are created only once a year. Therefore, a plausible assumption is that
$$F_{j|T} = M_{T-1|T} + A_{j|T},$$
where $F_{j|T}$ refers to the 24 forecasts for year $T$ created in the 24 months of years $T-1$ and $T$, where $M_{T-1|T}$ is the model forecast for year $T$ created in year $T-1$, and where $A_{j|T}$ refers to the monthly adjustment of this model-based forecast. Note that the model forecast $M_{T-1|T}$ is thus made only once, in year $T-1$. If we carry on with that assumption, then the forecast updates in year $T-1$ are given by
$$F_{j|T} - F_{j-1|T} = (M_{T-1|T} + A_{j|T}) - (M_{T-1|T} + A_{j-1|T}),$$
which becomes
$$F_{j|T} - F_{j-1|T} = A_{j|T} - A_{j-1|T}.$$
Therefore, we also have for year $T$ that
$$F_{j|T} - F_{j-1|T} = A_{j|T} - A_{j-1|T}.$$
Due to the special nature of January 3, we have for each year 11 useful updates which only involve the adjustments, and these run from February to December in years $T-1$ and $T$. With these, we can derive the properties of the adjustments from the forecast updates.
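The cancellation of the model forecast in the updates can be verified with a short simulation (a sketch under our notation; all numbers are made up for illustration):

```python
import numpy as np

# Build monthly forecasts as F_{j|T} = M_{T-1|T} + A_{j|T}, with the model
# forecast M fixed within the year, and check that month-on-month forecast
# updates equal the updates of the adjustment alone: M differences out.
# All numbers here are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)
M = 2.5                        # model forecast, created once in year T-1
A = rng.normal(0.0, 0.3, 12)   # monthly adjustments, January..December
F = M + A                      # published monthly forecasts

updates_F = np.diff(F)         # F_{j|T} - F_{j-1|T}: 11 updates, Feb..Dec
updates_A = np.diff(A)         # A_{j|T} - A_{j-1|T}
```

The two arrays of updates coincide exactly, which is why the updates are informative about the adjustment process alone.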
To save notation, we denote a forecast update as $u_t = A_t - A_{t-1}$, where $t$ is at the monthly frequency and denotes the $t$-th forecast for a given year, in chronological order. Recall that although we have 24 forecasts for each year, only 22 of them are updates; therefore $t$ runs over $2, \ldots, 12, 14, \ldots, 24$. If we assume covariance stationarity across a given year's forecast adjustments, we can write $\gamma_0$ for the variance of $A_t$, $\gamma_1$ for the first-order autocovariance of $A_t$ and $\gamma_2$ for the second-order autocovariance of $A_t$, and we have
$$V(u_t) = 2\gamma_0 - 2\gamma_1, \quad \mathrm{cov}(u_t, u_{t-1}) = 2\gamma_1 - \gamma_0 - \gamma_2.$$
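These two moment identities can be checked numerically. The sketch below simulates a long stationary adjustment path (an AR(1) with arbitrary parameters serves as the stationary example) and compares the sample moments of the updates with the right-hand sides:

```python
import numpy as np

# Check the moment identities for the updates u_t = A_t - A_{t-1}:
#   V(u_t)            = 2*g0 - 2*g1
#   cov(u_t, u_{t-1}) = 2*g1 - g0 - g2
# on a long simulated stationary adjustment path. The AR(1) process and its
# parameters below are arbitrary illustrative choices.

def autocov(x, k):
    """Sample autocovariance of x at lag k (common-mean convention)."""
    m = x.mean()
    return float(np.mean((x[k:] - m) * (x[: len(x) - k] - m)))

rng = np.random.default_rng(1)
n, rho = 200_000, 0.5
eps = rng.normal(0.0, 1.0, n)
a = np.empty(n)
a[0] = eps[0]
for t in range(1, n):
    a[t] = rho * a[t - 1] + eps[t]

u = np.diff(a)                                   # the updates u_t
g0, g1, g2 = (autocov(a, k) for k in (0, 1, 2))  # moments of A_t
```

Up to simulation error, `autocov(u, 0)` matches `2*g0 - 2*g1` and `autocov(u, 1)` matches `2*g1 - g0 - g2`.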
The final assumption that we need concerns the time series properties of $A_t$. We propose that a first-order autoregressive process is not unreasonable. Therefore, suppose
$$A_t = \rho A_{t-1} + \varepsilon_t,$$
where $\varepsilon_t$ is a white noise process with variance $\sigma^2$. Hence, the first-order autocorrelation of the forecast updates is
$$\frac{\mathrm{cov}(u_t, u_{t-1})}{V(u_t)} = \frac{2\gamma_1 - \gamma_0 - \gamma_2}{2\gamma_0 - 2\gamma_1} = -\frac{1-\rho}{2}.$$
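A quick algebra check of this result: under the AR(1) assumption the autocovariances satisfy $\gamma_k = \rho^k \gamma_0$, and the function below simply evaluates the autocorrelation formula under that restriction (a sketch, with $\rho$ values chosen arbitrarily):

```python
# Under A_t = rho*A_{t-1} + eps_t, the autocovariances satisfy
# gamma_k = rho**k * gamma_0, so the first-order autocorrelation of the
# updates u_t = A_t - A_{t-1} should reduce to -(1 - rho)/2.

def update_autocorr(rho):
    """First-order autocorrelation of u_t implied by an AR(1) adjustment."""
    g0, g1, g2 = 1.0, rho, rho**2        # gamma_k / gamma_0
    return (2 * g1 - g0 - g2) / (2 * g0 - 2 * g1)

for r in (-0.5, 0.0, 0.5, 0.9):
    assert abs(update_autocorr(r) - (-(1 - r) / 2)) < 1e-12
```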
When $-1 < \rho < 1$, this first-order autocorrelation is negative, which is also found in, for example, Clements (1997). From the sample first-order autocorrelation of the updates, say $\hat{r}_1$, we can thus obtain the estimate $\hat{\rho} = 1 + 2\hat{r}_1$, and from the sample variance of the updates, which satisfies $V(u_t) = 2\sigma^2/(1+\rho)$, we can obtain $\hat{\sigma}^2$, the estimated variance of the shocks to the adjustments. Finally, to examine how persistence in adjustment and the variance of the shocks to adjustment relate to forecast performance, we run the regression
$$\mathrm{RMSPE}_T = \mu + \beta \hat{\rho}_T + \gamma \hat{\sigma}^2_T + v_T, \quad (1)$$
where we have a $\hat{\rho}_T$ and a $\hat{\sigma}^2_T$ for each of the years, where $\mathrm{RMSPE}$ is the root mean squared prediction error for each of the forecasted years 4,5, $\mu$ is an intercept and $v_T$ is an error term. Given the results in Franses (2014), we expect that more adjustment does not associate with better forecast performance, and hence we hypothesize that $\beta$ and $\gamma$ are positive and significant.
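Our reading of these estimation steps can be sketched as follows (hypothetical code, not the authors' implementation): $\hat{\rho}$ follows from inverting the update-autocorrelation formula, $\hat{\sigma}^2$ from inverting $V(u_t) = 2\sigma^2/(1+\rho)$, and regression (1) is ordinary least squares:

```python
import numpy as np

# Sketch of the estimation steps as we read them (not the authors' code).
# From a year's forecast updates u: invert r1 = -(1 - rho)/2 to get rho_hat,
# and invert V(u) = 2*sigma^2/(1 + rho) to get sigma2_hat. Then regress
# RMSPE_T on (rho_hat_T, sigma2_hat_T) across years, as in regression (1).

def estimate_rho_sigma2(u):
    """Estimate (rho, sigma^2) of the AR(1) adjustment from updates u."""
    u = np.asarray(u, dtype=float)
    m = u.mean()
    r1 = np.mean((u[1:] - m) * (u[:-1] - m)) / np.mean((u - m) ** 2)
    rho = 1.0 + 2.0 * r1                    # from r1 = -(1 - rho)/2
    sigma2 = np.var(u) * (1.0 + rho) / 2.0  # from V(u) = 2*sigma^2/(1+rho)
    return rho, sigma2

def regression_1(rmspe, rho, sigma2):
    """OLS for RMSPE_T = mu + beta*rho_T + gamma*sigma2_T + v_T."""
    X = np.column_stack([np.ones(len(rmspe)), rho, sigma2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(rmspe, dtype=float), rcond=None)
    return coef  # (mu_hat, beta_hat, gamma_hat)
```

With the paper's sample sizes (22 updates per year, 23 years), these moment-based estimates are of course noisy; the point of the sketch is only to make the two inversion steps explicit.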

Forecasting Three Key Variables for the USA
First, we consider in detail the results for forecasting real GDP growth for the USA. We have data for the years 1996-2018, which are the years to be forecasted; this involves just 23 observations. Due to this small sample size, we adopt a 10% significance level in our statistical analysis.
Each month there are between 20 and 40 forecasters, and these can vary over the months. In our analysis, we analyze only those forecasters who give forecasts in more than 80% of all months in which a forecast could have been made. For real GDP growth in the USA, we therefore analyze the forecasts of 11 professional forecasters; see the first column of Table 1, which also reports the estimation results for regression (1). We see from the estimation results that $\hat{\rho}$ contributes significantly in two of the 11 cases (JP Morgan and Georgia State University), but with an unexpected negative sign. Further, we see that $\hat{\sigma}^2$ contributes significantly and positively in nine of the 11 cases. For real GDP growth forecasts, we thus learn that large shocks to adjustment associate with lower forecast accuracy.
It could be that our results for GDP are driven by the revision process for this variable. Due to revisions that become available throughout the year, forecasters may change their forecasts. To examine whether our findings for GDP are robust, we also consider two other key macroeconomic variables. Table 2 presents the results for regression (1), but now for inflation. We see that $\hat{\rho}$ contributes significantly in two of the 11 cases, now with the expected positive sign. The last two columns of Table 2 indicate that the contribution of $\hat{\sigma}^2$ is significant and positive in 10 of the 11 cases. Finally, and again for the USA, Table 3 reports the estimation results for (1) for unemployment. Here we see that $\hat{\rho}$ never contributes significantly, while $\hat{\sigma}^2$ does so, positively, in seven of the 11 cases.
4 By relying on the RMSPE measure we make an additional assumption, namely that the forecasters work under squared error loss. Extensions to alternative loss functions would be an interesting topic for further research.

Further Results
To see to what extent the results for the USA in the previous section are representative for professional forecasters in other countries, we now turn to the analysis of the professional forecasters in the Eurozone, France, Germany, Italy, Japan, the Netherlands, Norway, Spain, Sweden, Switzerland and the UK, again for the variables real GDP growth, inflation and unemployment. A summary of the results appears in Tables 4-6, respectively.
The results in Table 4 for real GDP growth suggest that it is mainly $\hat{\sigma}^2$ that contributes positively to lower forecast accuracy, namely in 78.6% of the 84 cases. Table 5 presents similar results for forecasting inflation, where the fraction of cases with a positive contribution of $\hat{\sigma}^2$ is 54.1%. This percentage decreases even further for forecasting unemployment, as we can see from the bottom row of Table 6, where it is just 19.6%.
Table 4. Results for regression model (1) for other countries or regions, real GDP growth. The counts concern the number of cases with 10% significant estimation results.
Table 5. Results for regression model (1) for other countries or regions, inflation. The counts concern the number of cases with 10% significant estimation results.

Conclusions
With two assumptions, one on the model forecast updates and one on the time series properties of the adjustment to model-based forecasts, we could elicit the persistence and variance of such adjustment for professional forecasters. In our analysis of the effects of these two estimated variables on forecast accuracy, we learned that a larger variance associates with lower forecast accuracy. Given the literature, this outcome could be expected. We find that the effects mainly concentrate on forecasting real GDP growth, which could in part be due to GDP revisions. On the other hand, we do find similar results for inflation and unemployment, although there the evidence is less strong.
An obvious limitation of our study is that we had to make two key assumptions. On the other hand, without any assumptions it seems not possible to study the behavior of the professional forecasters when they create their quotes.
Author Contributions: The authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.