Article

New York FED Staff Nowcasts and Reality: What Can We Learn about the Future, the Present, and the Past?  †

by Boriss Siliverstovs 1,2

1 Monetary Policy Department, Bank of Latvia, K. Valdemara iela 2A, LV-1050 Riga, Latvia
2 KOF Swiss Economic Institute, ETH Zurich, Leonhardstrasse 21, 8092 Zürich, Switzerland

† The paper was presented at the 2nd Vienna Workshop on Forecasting and at the 21st IWH-CIREQ-GW Macroeconometric Workshop. The author is grateful to two anonymous reviewers as well as workshop participants for their comments. The views are solely those of the author and under no circumstances represent those of Latvijas Banka.
Econometrics 2021, 9(1), 11; https://doi.org/10.3390/econometrics9010011
Submission received: 14 December 2020 / Revised: 21 February 2021 / Accepted: 3 March 2021 / Published: 6 March 2021
(This article belongs to the Special Issue on Economic Forecasting)

Abstract

We assess the forecasting performance of the nowcasting model developed at the New York FED. We show that the observation made earlier in the literature regarding a striking difference in a model's predictive ability across business cycle phases also applies here. During expansions, the nowcasting model's forecasts are at best as good as those of the historical mean model, whereas during recessionary periods there are very substantial gains, corresponding to a reduction in MSFE of about 90% relative to the benchmark model. We show how the asymmetry in relative forecasting performance can be verified by the use of such recursive measures of relative forecast accuracy as the Cumulated Sum of Squared Forecast Error Difference (CSSFED) and the Recursive Relative Mean Squared Forecast Error based on Rearranged observations (R2MSFE(+R)). Ignoring these asymmetries results in a biased judgement of the relative forecasting performance of the competing models over the sample as a whole, as well as during economic expansions, when the forecasting accuracy of a more sophisticated model relative to naive benchmark models tends to be overstated. Hence, care needs to be exercised when ranking models by their forecasting performance without taking into consideration the state of the economy.

1. Introduction

The outbreak of the Great Financial Crisis about a decade ago significantly spurred the quest for reliable forecasting of economic conditions, not only in the distant future but also for a reliable assessment of the current health of the economy. Forecasting academics and practitioners responded by developing econometric models that specifically aim at forecasting GDP growth in the current or, at most, the next quarter. This process of forecasting the present, the recent past, or the near future was naturally labelled "nowcasting" (Banbura et al. 2011).
Among the recent contributions to the nowcasting literature, the project initiated and maintained at the Federal Reserve Bank of New York (FRBNY) is worth mentioning. The model, described in the academic contribution of Bok et al. (2018), is promoted to the general public in a series of online blogs. The initial online announcement of the model and its regularly published nowcasts was made in 2016 (Aarons et al. 2016). The subsequent blog entry (Giannone et al. 2017) describes, in a Q&A format accessible to the general public, what nowcasting is and how the assessment of current economic conditions is carried out by means of a purely data-driven approach. Striving for model transparency, the developers even went as far as putting the underlying code in an online code repository (Adams et al. 2018), such that anyone interested in nowcasting can go through the code and, if needed, adapt it for nowcasting economic conditions in a country or region of their choice. More importantly, as initially announced, the nowcasts have been made public on a dedicated website in a forecast-as-you-go fashion since the inception of the project in 2016 (https://www.newyorkfed.org/research/policy/nowcast, accessed on 21 February 2021). In the most recent addition to the project, Adams et al. (2019) made publicly available an archive of nowcasts simulated backward for the period from 2002 to 2015. As a result, the sequence of nowcasts for every quarter from 2002 until the most recent one is available for analysis.
For our purposes, this most recent blog entry is important, as the sample for which nowcasts are available extends far enough into the past to include the Great Financial Crisis, and it covers one complete business cycle including both expansion and recession phases. In this study, we intend to verify the conclusions of Chauvet and Potter (2013) on the asymmetric forecasting performance of state-of-the-art macroeconometric models during expansionary and recessionary phases of the business cycle. This asymmetry displays itself in the fact that absolute forecast errors during recessions tend to be larger than during expansions, i.e., forecast accuracy tends to decrease during economic downturns. Moreover, they find that during expansions a simple univariate benchmark model that utilizes only its own past delivers forecast accuracy comparable with that produced by very sophisticated models drawing information from various economic and financial indicators that are often available much earlier than official GDP releases. Since this aspect of the nowcasting performance of the model developed at the Federal Reserve Bank of New York is addressed neither in the online blog entries nor in the published paper (Bok et al. 2018), it shapes the contribution of our study to the nowcasting literature: namely, we examine whether the conclusions of Chauvet and Potter (2013), reached for other types of models, can be generalized to the model in question.
Compared to the models utilized in Chauvet and Potter (2013), where for each quarter the accuracy of a single one- and two-step ahead forecast was evaluated, the output of the NY FED Nowcasting model for each targeted quarter comprises a sequence of about 20 weekly nowcasts. These weekly sequences provide us with additional information, helping us to address the question of how far ahead one can forecast on a weekly rather than a quarterly time scale.
Willingly or not, by making the historical record of model nowcasts publicly available, Adams et al. (2019) provide a benchmark that other forecasters may be tempted to use in order to compare the nowcasting performance of their models, e.g., see Babii et al. (2019, p. 21) and Cimadomo et al. (2020). Interestingly enough, Bok et al. (2018) do not provide a formal comparison of the forecasting performance of their model with commonly used univariate benchmark models. This constitutes an additional motivation for our study, where we specifically evaluate the forecasting performance of the NY FED Nowcasting model relative to univariate benchmark models for the full period and across business cycle phases. Our results can be informative for those studies that use the NY FED Nowcasting model predictions of US GDP growth as a benchmark. More generally, our research is related to studies such as Cai et al. (2019) and Alessi et al. (2014), where the actual forecasting experience at policy-making institutions such as the ECB and the FRBNY is scrutinized.
The rest of the paper is organized as follows. In Section 2, a review of the relevant literature is provided. The NY FED Nowcasting model and its output are detailed in Section 3. A description of the benchmark models and the forecast evaluation metrics used for model comparison is provided in Section 4 and Section 5, respectively. The accuracy of the nowcasting performance of the model against different releases of GDP data (advance, second, final, and latest), for the full sample as well as separately for the periods of economic downturn (the Great Recession) and upturn, is reported in Section 6. The final section concludes.

2. Literature Review

Instabilities in the forecasting performance of macroeconometric models have long been acknowledged in the literature. For example, Rossi (2013), in a comprehensive review of the relevant literature, points out several stylized facts. First, the predictive strength of variables varies substantially over time, such that excellent predictive performance in the past does not warrant similar forecasting excellence in the near future, let alone the distant one. Second, empirical validation of models based on their in-sample performance often serves as a poor approximation of their out-of-sample forecasting ability.
While there are many potential reasons for the unstable predictive ability of macroeconometric models, one explanation that seems obvious is the presence of business cycles that at more or less regular intervals shake up individual countries, whole regions, or even spread all over the globe. Economic dynamics are quite different during recessions than during expansions; hence, it is natural to expect that forecasting performance varies with the state of the business cycle.
Rossi (2013) only briefly mentions business cycles as a possible explanation of forecasting instability, but a more thorough investigation of this topic is provided in Chauvet and Potter (2013), who evaluated the predictive ability of several of the most widely used macroeconometric models using US GDP growth as an example. The list of these models includes a structural DSGE model, reduced-form VARs estimated using either Bayesian or frequentist approaches, a dynamic factor model with a Markov-switching mechanism, and a cumulative-depth-of-recession model. The conclusions reached in Chauvet and Potter (2013) are surprisingly uniform across these very diverse models. First, the forecasting accuracy of the models worsened in recessions, i.e., on average, forecast errors tended to be larger in economic downturns than during upturns. Second, during expansions, the forecasting performance of highly sophisticated models was matched by that of a simple univariate autoregressive benchmark model.
Siliverstovs (2020a) extends the analysis of Chauvet and Potter (2013) to a different class of models, namely, models that combine data observed at heterogeneous frequencies: quarterly GDP growth and several monthly economic and financial indicators, such as industrial production, sentiment indices, labor-market and housing statistics, a stock market index, and interest rates, that are commonly used for assessing current economic conditions in the US. In particular, Siliverstovs (2020a) re-examines the forecasting performance of a multiple-indicator U-MIDAS-type model suggested in Carriero et al. (2015). The model generalises the unrestricted MIDAS model suggested in Foroni et al. (2015) in several directions by allowing more than one skip-sampled explanatory variable, optional inclusion of stochastic volatility, and Bayesian estimation of model parameters. The adopted mixed-frequency setup makes it possible to monitor changes in forecast accuracy as more information is incorporated into the forecasting model from one month to the next, in contrast to Chauvet and Potter (2013), where forecasts were made once per quarter.
Siliverstovs (2020a) shows that the at first glance impressive reduction in RMSFE over the benchmark AR(2) model of up to 22% reported in Carriero et al. (2015), when evaluated over the whole forecast sample from 1985Q1 until 2011Q3, is mainly driven by a few observations during recessions, with the most prominent contribution traced to observations during the Great Recession. Evaluation of the model's forecasting performance during NBER recessions and expansions indicates that, during expansions, the performance of this model is closely matched by that of the benchmark model, conforming with the conclusion of Chauvet and Potter (2013). At the same time, it is worthwhile pointing out that during recessions the improvement over the benchmark model is dramatic: up to almost 60% in terms of RMSFE. All in all, ignoring the asymmetry in the forecasting performance of a more sophisticated model over a benchmark model results in a biased assessment of model forecasting performance. The predictive ability of the former model tends to be overstated during expansions, which last longer than recessions, but at the same time it is severely understated during the rather rare recessions, when prevailing economic distress makes the demand for accurate forecasts more acute.
The findings of Chauvet and Potter (2013) and Siliverstovs (2020a), reported for a single time series (US GDP growth), were extended in Siliverstovs and Wochner (2021) to each time series in the Stock–Watson dataset comprising more than 200 US macroeconomic variables. The aim of this exercise was to replicate the study of Stock and Watson (2002) on a more recent data vintage, but to evaluate the forecasting performance of the diffusion-index model separately for NBER expansions and recessions, in a similar way as in Chauvet and Potter (2013).
Siliverstovs and Wochner (2021) confirm that there are systematic differences in forecasting accuracy across the business cycle phases, both in absolute terms and relative to the benchmark models. During expansions, the diffusion-index models and the benchmark models generally display similar forecasting performance. However, the more sophisticated model tends to yield substantial forecasting gains around turning points relative to the benchmark models. Quite often, such forecasting gains outweigh its relative losses during economic upturns, such that when the models are judged on the basis of their average forecasting performance over the whole forecast evaluation period, the performance of the more sophisticated model is overstated, making it appear better in normal times than it really is. The other side of the coin is that its performance during recessions tends to be understated.

3. Nowcasting Framework

3.1. Model

The NY FED Nowcasting model is similar to the one introduced in Giannone et al. (2008). This dynamic factor model conveniently accommodates features of the data that a forecaster faces when making forecasts in real time. These features include mixed-frequency data, i.e., GDP data available at the quarterly frequency and auxiliary economic and financial data that are often released at the monthly or even higher frequency. A data set of auxiliary indicators can be unbalanced both at the beginning and at the end of the sample. Missing data at the beginning of the sample arise most often because some time series start later than others. Missing data at the current edge are due to differences in the release timing of different indicators during a month and to different publication lags of these indicators. For example, indicators released at the end of the current month can have the latest available observation either for the current month, the previous month, or even for a month further back in the past. In fact, the nowcasting framework developed in Giannone et al. (2008) proved very robust to the challenges posed by the above characteristics of economic data. In the case of nowcasting GDP growth in Switzerland, the model, once coded at the end of 2009, ran reliably without a single breakdown during weekly nowcasting exercises at the KOF Swiss Economic Institute (ETH Zurich). The model is described in Siliverstovs and Kholodilin (2012) and the track record of its nowcasting performance in real time squared is documented in Siliverstovs (2012) and Siliverstovs (2017).

3.2. Timing

Since the nowcasting project of the NY FED is ongoing, we have to truncate the information flow to reflect data availability at the time of writing. More specifically, we evaluate the accuracy of nowcasts over the period from 2002Q1 until 2020Q2. For every quarter in this sample, we collected a sequence of 21 nowcasts released at a weekly frequency. For several quarters, the number of weekly nowcasts exceeds 21. We chose to concentrate on these 21 weekly forecasts because this yields an equal number of forecasts for each quarter, which makes the measures of forecast accuracy that we compute at each forecast origin comparable across quarters. The limiting factor was quarter 2018Q1, for which the sequence of nowcasts started one week later than usual and comprises exactly 21 weekly forecasts. The first nowcast in each sequence is released 20 weeks ahead of the week when the advance GDP estimate for the targeted quarter is published. The second nowcast precedes the advance GDP release by 19 weeks, and so on. The release of the last nowcast for the targeted quarter coincides with the release of the advance GDP estimate for that quarter, which typically takes place at the end of the first month following the end of the quarter in question. Given such weekly releases, we label nowcasts by their forecast origin, measuring the distance in the number of weeks preceding the week when the final nowcast for each quarter was released.
For example, the sequence of nowcasts for quarter 2009Q3, the first quarter after the end of the Great Recession, is shown in Figure 1 together with the second estimate of GDP growth in this quarter. There are several GDP releases (advance, second, and final, sequentially published at the end of the first, second, and third months of the following quarter) that can be compared with the nowcasts. In addition, one can compare nowcasts with GDP growth estimates from the vintage (released on 30 September 2020) that was available to us at the time of writing this manuscript. In the main text, we describe the results with respect to the second estimates of GDP growth. (In Appendix A, we verify the robustness of our results by evaluating forecast accuracy for the other versions of GDP releases: advance, final, and latest; see Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6.) This sequence of weekly nowcasts illustrates the benefits of nowcasting very well. We can observe a gradual improvement in the outlook as more data become incorporated into the nowcasting model. Starting from a rather pessimistic nowcast of −1.76% released on 29 May 2009, each subsequent week brought largely positive news, pushing nowcasts up until the final nowcast of 5.1% was made on 30 October 2009.
In the course of 2020, the unfolding COVID-19 pandemic brought about new challenges for the world economy and for forecast practitioners, who were forced to forecast unprecedented swings in GDP growth. In Figure 2, we present the nowcast sequence generated by the NY FED nowcasting model. As can be seen, the earliest nowcasts, made at the beginning of March 2020, did not signal a severe recession for the US economy. It is only since the middle of April and in the course of May, when the enforced restrictions crippled the economy, that the model sent a strong negative signal, which turned out to be very close to the GDP estimates published much later. For example, according to the second GDP release on 31 July 2020, the US economy shrank by 31.7% at an annualized rate. Starting from the end of May, the model continuously signaled an improving outlook for the US economy.
When assessing the model's predictive accuracy, we group nowcasts by their forecast origin. In doing so, we can track how nowcast precision evolves as more information about the relevant quarter becomes available over time. Tracking nowcast accuracy also makes it possible to determine how far ahead in the future one can forecast more accurately using extraneous information from various economic and financial indicators than with, for example, naive benchmark models that use information from past GDP data only.
We further group each weekly forecast origin by forecast horizon. We distinguish between two forecast horizons, h = 1 and h = 2, depending on the distance in quarters between the targeted quarter and the latest quarter for which an official estimate of GDP growth was already released. For example, recall the sequence of forecasts displayed in Figure 1. For the forecast made on 29 May for 2009Q3, the GDP growth estimate for 2009Q1 was already available; hence, this forecast is labelled a two-step ahead forecast (h = 2). Meanwhile, for the forecast made on 4 September for the same quarter 2009Q3, the advance GDP estimate for 2009Q2 was already available; hence, this nowcast is labelled a one-step ahead forecast (h = 1). Similarly, in Figure 2, we distinguish between two- and one-step ahead forecasts made before and after the release of the advance GDP estimate for 2020Q1 on 29 April 2020.
Such a breakdown of nowcasts into one- and two-step ahead forecasts is helpful when we compare their forecasting accuracy with that of the benchmark models discussed in the next section; a small sketch of the labelling rule follows below. Forecasts from the benchmark models are made only once per quarter, whenever a release of the advance GDP estimate takes place. For example, a two-step ahead forecast for 2009Q3 from a benchmark model was made on 29 April, when the data for 2009Q1 were released. Consequently, a one-step ahead forecast for 2009Q3 from a benchmark model was made on 31 July, when the data for 2009Q2 were released.
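The labelling rule can be summarized in a one-line helper. This is a hypothetical illustration in R (the function name and date inputs are ours, for exposition only), not code from the nowcasting project:

```r
# h = 2 before the advance GDP estimate for the preceding quarter is out,
# h = 1 afterwards (see the 2009Q3 example above).
label_horizon <- function(nowcast_date, prev_quarter_advance_release) {
  ifelse(nowcast_date < prev_quarter_advance_release, 2L, 1L)
}

# 2009Q3 nowcasts of 29 May and 4 September; the advance estimate for 2009Q2
# was released on 31 July 2009:
label_horizon(as.Date(c("2009-05-29", "2009-09-04")), as.Date("2009-07-31"))
#> [1] 2 1
```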
In order to put the challenges posed by the COVID-19 pandemic into perspective, we show GDP growth outturns (second estimates) as well as model forecasts at three selected forecast origins, i.e., 20, 10, and 0 weeks preceding advance GDP releases, in Figure 3. The shaded areas indicate the recessionary periods in our sample. The first is the Great Financial Crisis (2007Q4–2009Q2), and the second recessionary period spans the last two quarters in our sample, 2020Q1 and 2020Q2, with reported negative GDP growth. The expansionary period is correspondingly defined as 2002Q1–2007Q3 and 2009Q3–2019Q4.

4. Benchmark Models

In this section, we discuss the choice of a benchmark model against which one can compare the predictive accuracy of the factor model. A standard model that is routinely used as a benchmark in forecasting exercises for US GDP is an autoregressive model of order two, AR(2):
$$y_t = \alpha_0 + \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + \varepsilon_t.$$
For example, this benchmark model was used in Chauvet and Potter (2013) and Carriero et al. (2015). An alternative benchmark model is the historical-mean model (HMM):
$$y_t = \alpha_0 + \varepsilon_t,$$
which uses the average GDP growth rate as the forecast for the upcoming two quarters. As argued in Siliverstovs (2020a), this very simple model provides forecasts of US GDP growth that during NBER expansions match the predictive accuracy not only of an autoregressive model, but also of the mixed-frequency model of Carriero et al. (2015).
Both benchmark models are estimated using recursively expanding windows that start in 1970Q1. At each forecast origin, one- and two-step ahead forecasts from the benchmark models are made using the real-time GDP vintage that was historically available. Please note that two-step ahead forecasts from the AR(2) model are obtained iteratively. For the sake of brevity, we refer to the autoregressive and historical-mean models as ARM and HMM, respectively, and the NY FED Nowcasting model as DFM.
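To make the recursive scheme concrete, the following minimal sketch in R (our illustration, not the NY FED or published replication code) produces one- and two-step ahead forecasts from both benchmark models; it treats `y` as a plain vector of quarterly GDP growth rates and `first_origin` as a hypothetical index of the first forecast origin:

```r
# Minimal sketch of the two benchmark models with recursively expanding
# estimation windows; y is a numeric vector of quarterly GDP growth rates.
benchmark_forecasts <- function(y, first_origin = 40) {
  stopifnot(first_origin >= 3)
  origins <- first_origin:(length(y) - 2)
  out <- data.frame(origin = origins,
                    hmm_h1 = NA_real_, hmm_h2 = NA_real_,
                    arm_h1 = NA_real_, arm_h2 = NA_real_)
  for (i in seq_along(origins)) {
    t0  <- origins[i]                # last observation available at this origin
    ytr <- y[1:t0]                   # expanding estimation window
    # HMM: the historical mean is the forecast at both horizons
    out$hmm_h1[i] <- out$hmm_h2[i] <- mean(ytr)
    # AR(2): OLS fit; the two-step forecast is obtained iteratively
    b  <- coef(lm(ytr[3:t0] ~ ytr[2:(t0 - 1)] + ytr[1:(t0 - 2)]))
    f1 <- b[1] + b[2] * ytr[t0] + b[3] * ytr[t0 - 1]    # h = 1
    out$arm_h1[i] <- f1
    out$arm_h2[i] <- b[1] + b[2] * f1 + b[3] * ytr[t0]  # h = 2 (iterated)
  }
  out
}
```

Note that the sketch uses a single final data vintage purely for exposition, whereas in the paper the benchmark models are re-estimated at each origin on the historically available real-time GDP vintage.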

5. Forecast Accuracy Evaluation Metrics

In this section, we present traditional measures of relative forecasting performance, such as the (Root) Mean Squared Forecast Error ((R)MSFE) and its relative counterparts, that deliver point estimates, as well as a more recent measure of relative forecasting performance due to Welch and Goyal (2008), the Cumulated Sum of Squared Forecast Error Difference (CSSFED), which allows one to determine the influential observations that contribute most to relative forecast accuracy. Finally, we base our analysis on an innovative measure of relative forecasting accuracy based on rearranged observations, suggested in Siliverstovs (2020b). Siliverstovs (2020b) proposes to use this metric to gauge the leverage of influential observations directly on the relative (R)MSFE, in a similar way as the CSSFED allows one to sort out the effect of influential observations on the difference in (Root) Mean Squared Forecast Errors. In order to distinguish the newly introduced and traditional measures of relative forecast accuracy, we label those based on rearranged observations R2MSFE(+R) and R3MSFE(+R), denoting the Recursive Relative MSFE and the Recursive Relative Root MSFE (both based on rearranged observations), respectively.
By complementing our analysis with these recursive measures of forecast accuracy, we address the main shortcoming of measures such as the (Root) Mean Squared Forecast Error. In terms of this metric, the model ranking is based on comparing average values of squared forecast errors, which are not informative about whether one model should be preferred because it systematically produces lower (squared) forecast errors, and is therefore genuinely better than its competitor, or whether the results are driven by a limited number of observations that artificially boost the difference in the reported (Root) Mean Squared Forecast Errors.

5.1. Traditional Measures of Point Forecast Accuracy

Models’ predictive accuracy is evaluated using the following accuracy measures of point forecasts: the Mean Squared Forecast Error (MSFE)
$$\text{MSFE} = \frac{\sum_{t=1}^{T} \left( y_t - \hat{y}_t \right)^2}{T},$$
with T standing for the number of observations in the forecast evaluation period; the relative MSFE (rMSFE)
$$\text{rMSFE}_{1/2} = \frac{\text{MSFE}_1}{\text{MSFE}_2} - 1;$$
and the Cumulated Sum of Squared Forecast Error Difference (CSSFED)
$$\text{CSSFED}_{[\underline{\tau}, \overline{\tau}]} = \sum_{t=\underline{\tau}}^{\overline{\tau}} \left[ \hat{e}_{1,t}^{2} - \hat{e}_{2,t}^{2} \right].$$
The MSFE and rMSFE are point estimates of forecast accuracy and represent the typical yardstick for comparing models' predictive accuracy in terms of average squared forecast errors. In contrast, the CSSFED, introduced in Welch and Goyal (2008), is the cumulated sequence of the differences in squared forecast errors, which allows one to dissect the models' relative forecasting performance observation by observation.
There are a number of interesting patterns that this sequence can take, and these patterns can reveal the nature of how and when one model dominates another in terms of forecasting accuracy. For example, a continuous upward or downward trend indicates that the first model tends to produce systematically larger or smaller (squared) forecast errors, respectively, than the second model does. Hovering around a horizontal line indicates that neither model produces smaller forecast errors in a systematic way. Naturally, breaks in the trend slope, i.e., situations when an initially positive slope changes to negative, indicate reversals or instabilities in the relative forecasting performance, thoroughly discussed in Rossi (2013). Finally, jumps in the CSSFED sequence indicate an unusually large discrepancy in (squared) forecast errors in a given period, which can have a disproportionately large leverage on the calculated MSFE of one model or on the relative ranking of two models based on their MSFEs or rMSFEs.
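For concreteness, these measures reduce to a few lines of R. This is a sketch under the assumption that `e1` and `e2` are forecast-error vectors of the two competing models over the same evaluation sample (the names `e_dfm` and `e_hmm` in the comment are hypothetical):

```r
# Point and recursive accuracy measures of this section.
msfe   <- function(e) mean(e^2)                      # Mean Squared Forecast Error
rmsfe  <- function(e1, e2) msfe(e1) / msfe(e2) - 1   # relative MSFE
cssfed <- function(e1, e2) cumsum(e1^2 - e2^2)       # observation-by-observation

# A persistent downward drift of the CSSFED favours model 1; an isolated jump
# flags a single influential observation:
# plot(cssfed(e_dfm, e_hmm), type = "s"); abline(h = 0, lty = 2)
```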
In Bayesian econometrics, there is a natural counterpart of the CSSFED, referred to as the Cumulated Sum of Logarithmic Score Difference (CSLSD) or the Cumulative Log Predictive Bayes Factor. The significance of using recursive metrics such as the CSLSD for model comparison was emphasized in Geweke and Amisano (2010), who state that this metric "… shows how individual observations contribute to the evidence in favour of one model over another. For example, it may show that a few observations are pivotal in the evidence strongly favouring one model over another." This conclusion naturally extends to the CSSFED.

5.2. R2MSFE(+R)/R3MSFE(+R)

The R2MSFE(+R) and R3MSFE(+R) are extensions of the CSSFED metric to recursive estimation of the relative MSFE and can be derived straightforwardly from
$$\text{rMSFE} = \frac{\text{MSFE}_1}{\text{MSFE}_2} - 1 = \frac{\text{MSFE}_1 - \text{MSFE}_2}{\text{MSFE}_2}.$$
Expanding the MSFE terms and cancelling the number of observations T results in
$$\text{rMSFE} = \frac{\sum_{t=1}^{T} e_{1,t}^{2} - \sum_{t=1}^{T} e_{2,t}^{2}}{\sum_{t=1}^{T} e_{2,t}^{2}} = \frac{\sum_{t=1}^{T} \left[ e_{1,t}^{2} - e_{2,t}^{2} \right]}{\sum_{t=1}^{T} e_{2,t}^{2}}.$$
Please note that the expression in the numerator corresponds to the Cumulated Sum of Squared Forecast Error Difference (CSSFED) introduced in Welch and Goyal (2008); see Equation (5).
As it stands, the rMSFE is a point estimate of the models' relative forecasting performance computed over the whole forecast evaluation sample. However, as argued above, the relative forecasting performance may change over time, and point estimates like the rMSFE are not informative about these changes. Hence, in order to gauge how the relative forecasting performance depends on separate observations, one needs a recursive version, similar to the CSSFED measure of individual observations' contributions to wedges in the forecast accuracy of the competing models.
To this end, Siliverstovs (2020b) suggests a recursively computed relative MSFE that exposes the leverage of individual observations on the relative MSFE. It is computed recursively over a sequence of observations rearranged in ascending order of the absolute value of the squared forecast error difference, $|e_{1,j}^{2} - e_{2,j}^{2}|$, such that $j_i < j_k$ whenever $|e_{1,j_i}^{2} - e_{2,j_i}^{2}| < |e_{1,j_k}^{2} - e_{2,j_k}^{2}|$:
$$\text{R2MSFE}_{[j_1, \ldots, j_i, \ldots, j_T]} = \frac{\sum_{j=j_1}^{j_T} \left[ e_{1,j}^{2} - e_{2,j}^{2} \right]}{\sum_{j=j_1}^{j_T} e_{2,j}^{2}} = \frac{\sum_{j=j_1}^{j_T} e_{1,j}^{2}}{\sum_{j=j_1}^{j_T} e_{2,j}^{2}} - 1.$$
Analogously, the R3MSFE(+R) is defined as
$$\text{R3MSFE}_{[j_1, \ldots, j_i, \ldots, j_T]} = \sqrt{\frac{\sum_{j=j_1}^{j_T} e_{1,j}^{2}}{\sum_{j=j_1}^{j_T} e_{2,j}^{2}}} - 1,$$
which can also be computed recursively using squared forecast errors arranged by the absolute value of the squared forecast error difference, $|e_{1,j}^{2} - e_{2,j}^{2}|$.
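A compact R sketch of both rearranged measures follows (ours, not the original implementation of Siliverstovs (2020b); `phase` is an assumed factor whose levels, e.g., expansion, GFC, COVID, define the within-sub-period rearrangement used later in Section 6.2):

```r
# R2MSFE(+R) and R3MSFE(+R): cumulate squared errors over observations sorted
# in ascending order of |SFED| = |e1^2 - e2^2|, optionally within sub-periods.
r2msfe_r <- function(e1, e2, phase = NULL) {
  d <- e1^2 - e2^2
  if (is.null(phase)) {
    ord <- order(abs(d))                   # global rearrangement
  } else {
    # within-sub-period rearrangement; the sub-periods themselves are kept
    # in the order of the factor levels of `phase`
    ord <- unlist(lapply(split(seq_along(d), phase),
                         function(ix) ix[order(abs(d[ix]))]))
  }
  ratio <- cumsum(e1[ord]^2) / cumsum(e2[ord]^2)
  list(r2msfe = ratio - 1,                 # recursive relative MSFE
       r3msfe = sqrt(ratio) - 1)           # recursive relative root MSFE
}
```

By construction, the final element of `r2msfe` coincides with the full-sample rMSFE, which is what makes the recursive sequence directly comparable to the point estimates reported later in Table 2.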

6. Results

In this section, the results of the forecasting competition between the nowcasting model developed at the NY FED and the simple benchmark models are presented. In the main text, we report the results based on second GDP releases. We verify the robustness of the conclusions using alternative GDP releases (advance, final, and latest), which are reported in Appendix A. This section is divided into two parts. The first part discusses the predictive ability of the models in terms of the traditional measures based on squared forecast errors averaged over the full evaluation sample or its recessionary and expansionary sub-samples. The second part applies the recursive measures of forecast accuracy, dissecting differences in predictive ability observation by observation.

6.1. Point Estimates of the Relative Forecasting Accuracy

The point estimates of the forecast accuracy (MSFE and relative MSFE) are reported in Table 1 and Table 2, respectively. These two tables are organized in the following way. The left panel reports the measures of the forecasting accuracy for the pre-COVID period, 2002Q1–2019Q4. In the left panel of Table 1, we report the MSFE for the full sample as well as separately for the expansionary period (2002Q1–2007Q3 and 2009Q3–2019Q4) and the period of the Great Financial Crisis (2007Q4–2009Q2). The left panel of Table 2 correspondingly contains the derived relative MSFEs of the DFM and ARM with respect to the benchmark HMM model. The right panel of each table contains the nominal and relative MSFEs for the full sample at our disposal (2002Q1–2020Q2) and the two recessionary periods (2007Q4–2009Q2 and 2020Q1–2020Q2). Since the expansionary period for the full sample is the same as for the pre-COVID sample, the relevant column was omitted from the right panel in Table 1 and Table 2.
The results presented in this way allow us to disentangle the effect of extending the forecasting exercise with the two COVID quarters on the nominal and relative forecast accuracy measures reported for the full sample, i.e., averaging across expansionary and recessionary quarters, as well as for the case when one is interested in differences in the models’ predictive ability across the expansionary and recessionary phases.
First, we address differences in MSFEs brought about by extending the sample by the COVID recessionary period. The evolution of MSFE at each forecast origin for each of the three models under scrutiny is shown for the pre-COVID and full samples in the left and right panels in Figure 4, respectively. Upon comparing these two plots, it becomes evident that the average squared forecast error substantially increased at every forecast origin and for every model. We also observe that at the earlier forecast origins the relative model ranking has changed. In the pre-COVID period, the univariate benchmark models were characterized by lower MSFE than the NY FED model. In the full sample, this advantage in forecast accuracy of the benchmark models disappeared.
One more detail deserves attention. In the pre-COVID period, the MSFE of the DFM showed a clear downward trend, implying increasing forecast accuracy as more information was incorporated into the model. In the full sample, this pattern is no longer observed. In fact, the most accurate predictions are the forecasts made about 7–12 weeks before advance GDP releases. Forecasts made at shorter forecast horizons are characterized by increasing MSFE values. An explanation for this observation can be found in Figure 2, where the sequence of nowcasts for 2020Q2 is presented. One can observe that, at the forecast origins of 7–12 weeks, the nowcasts are very close to the GDP outturn, whereas this is not the case for nowcasts made either earlier or later. In short, this example illustrates that a single data point can have a rather large influence on measures of forecast accuracy based on averages of squared forecast errors.
The comparative forecasting performance of the benchmark models deserves special mention. As can be seen in Figure 4, when evaluated over the full sample (either with or without the COVID period), the ARM produces lower MSFE values than the HMM. At first glance, this observation should support the choice of the autoregressive model as the harder-to-beat benchmark. However, when one examines the relative MSFE ARM/HMM reported in Table 2, it becomes evident that during the expansionary phase the MSFE values of the ARM are up to 20% higher than those of the HMM. It is only during recessions that the autoregressive model's gains in forecast accuracy overcompensate for its losses relative to the historical-mean model during expansions. This implies that the HMM is the harder-to-beat benchmark during expansions, which take the lion's share of observations in our sample. This is the main reason why the relative measures of forecast accuracy in this study are reported with respect to the historical-mean model.
Motivated by this conclusion, we present the evolution of the rMSFE DFM/HMM for the samples without and with the COVID observations in Figure 5. The overall conclusion that can tentatively be made is very comforting for the NY FED nowcasting model. The reduction in MSFE, compared to that of the HMM, is up to 55% for the pre-COVID forecast evaluation sample and about 80% for the full sample under scrutiny.
Another dimension for the analysis of the predictive ability of the NY FED nowcasting model is to compare the MSFE values for the expansionary and recessionary periods. Chauvet and Potter (2013) observe that during recessions it is harder to make forecasts in the sense that forecast errors tend to be larger than those observed during expansions. The corresponding MSFE values are shown in Figure 6. In the left panel of the figure, we report the evolution of the MSFE for expansionary (2002Q1–2007Q3 and 2009Q3–2019Q4) and recessionary samples (2007Q4–2009Q2) in the pre-COVID period. In the right panel of the figure, we report the evolution of the MSFE for the expansionary (2002Q1–2007Q3 and 2009Q3–2019Q4) and recessionary samples (2007Q4–2009Q2 and 2020Q1–2020Q2) in the full period.
For the pre-COVID sample, one can observe a pattern that largely conforms to the observation made by Chauvet and Potter (2013). At the earlier forecast origins, the forecasts are less precise during the GFC than during the expansionary phases. At the same time, for the forecasts made less than three weeks ahead of the advance GDP estimate releases, the forecasting accuracy during the GFC and expansions is very similar. However, the latter observation can no longer be confirmed when one compares the MSFEs computed for both the GFC and COVID recessionary periods with the MSFE computed for the expansionary period. As can be seen from the right panel of Figure 6, at all forecast origins, forecasts of GDP growth during the recessions are less precise than those during the expansions.
More importantly, given such large differences in nominal measures of forecast accuracy during recessions and expansions, it is worthwhile verifying whether there are noticeable differences in the relative measures. In Figure 7, we plot the evolution of the relative MSFE of the NY FED model and the historical mean model, rMSFE DFM/HMM, separately for the recessionary and expansionary periods. The left panel shows the results for the pre-COVID sample and the right panel those for the sample extended with the COVID recessionary period.
In both panels of Figure 7, one can observe a very pronounced asymmetry in the relative forecast accuracy of the DFM and HMM. During expansions, the HMM produces more precise forecasts at forecast origins more than four weeks ahead of advance GDP releases. Only when forecasts are made less than four weeks ahead of advance GDP releases does the forecast accuracy of the two models become very similar. Given the timing of a typical advance GDP release, this corresponds to forecast origins at the end of the last month of a targeted quarter and during the weeks of the first month after the end of the targeted quarter. This observation is consistent with that made by Chauvet and Potter (2013), i.e., simple univariate models are robust forecasting devices during expansions; see also Siliverstovs (2020a) for an assessment of the predictive ability of the model of Carriero et al. (2015) for US GDP growth during expansions and recessions. At the same time, during economic crisis periods, the NY FED nowcasting model produces much more accurate forecasts than the historical mean model.
At this point, it is instructive to compare the rMSFE DFM/HMM shown in Figure 5 for the full sample (excluding or including the COVID recession) with the values of rMSFE DFM/HMM shown in Figure 7. As can be seen, the advantages of the more sophisticated model over the very simple benchmark model reported for the full sample are brought about by rather few observations during economic crises, be it only the Great Financial Crisis or both the GFC and the COVID pandemic. Hence, when one ignores this asymmetry in the forecasting performance of the models across business cycle phases, the forecasting ability of a more sophisticated model tends to be severely overstated during expansions. In this sense, recessions serve as the breadwinner for forecasters devoted to developing evolved models, a point made by Siliverstovs and Wochner (2019, 2021) after a comprehensive and systematic evaluation of the forecastability of more than 200 US time series during expansions and recessions.

6.2. Recursive Estimates of the Relative Forecasting Accuracy

In this section, we present an analysis of the relative forecasting accuracy of the NY FED dynamic factor model (DFM) and the historical mean model (HMM). The main focus of our analysis centers on Figures 9 and 10, representing the CSSFED DFM/HMM and R2MSFE(+R) recursive measures. The auxiliary plots depicting the SFED DFM/HMM in its natural temporal ordering and rearranged by its absolute value within each sub-period (expansion, GFC, and COVID) are presented in Figures 8 and 11, respectively.
For the sake of brevity and without loss of generality, we concentrate on the forecasts made at three selected forecast origins, namely 20, 10, and 0 weeks ahead of advance GDP estimate releases. Figure 8 depicts the SFED DFM/HMM computed for each observation in the out-of-sample forecast evaluation period. As can be seen, there are rather few observations for which we observe substantial differences in forecast accuracy at the 20-week forecast origin, and fewer still at the other two forecast origins. Please note that the largest SFED in our sample occurs during the COVID pandemic, in 2020Q2. As expected, we also observe substantial differences in forecast accuracy between these two models during the GFC.
The raw plots of the SFED DFM/HMM are informative in pointing out that the difference in the models' forecasting accuracy varies from observation to observation and tends to be more pronounced during periods of economic distress. However, a simple operation, cumulation, makes these differences informative about changes in the relative ranking of the models based on their forecasting performance. The resulting cumulated sums of the SFED DFM/HMM are shown in the respective panels of Figure 9. Points above the zero line indicate that up until that observation the DFM produced, on average, higher squared forecast errors than the HMM. Points below the zero line indicate the opposite.
As for the forecasts made at the 20-week origin (see the upper panel of Figure 9), we can conclude that, based on the evidence from all but one observation (2020Q2), the forecast accuracy of the HMM was superior to that of the DFM. It was only the latest observation in our forecast evaluation sample that was the game changer, reversing the conclusion in favour of the DFM over the HMM. This is a good example of how one observation can lead to a complete overhaul of the models' relative ranking based on their average forecasting performance. From the middle panel of Figure 9, we can infer that the conclusion on the superior average forecasting ability of the HMM over the DFM was reversed much earlier, i.e., during the GFC period. In any case, for the 10- and 0-week forecast origins, the observation in 2020Q2 strongly reinforces the evidence of the superior average forecasting accuracy of the NY FED nowcasting model that first surfaced during the GFC.
Last but not least, we conclude the analysis of relative predictive ability using the R2MSFE(+R) of Siliverstovs (2020b). The R2MSFE(+R) allows one to track the evolution of the rMSFE DFM/HMM directly, as its computation adds observations one by one in order of increasing intensity, measured in terms of the absolute value of the SFED, |SFEDt|. One option is to report the R2MSFE(+R) based on observations reordered purely by the magnitude of |SFEDt| in ascending order. We, however, apply a slight modification in rearranging the observations, capitalising on the knowledge of the expansionary and recessionary phases of the business cycle in our data. We present in Figure 10 the R2MSFE(+R) based on observations rearranged by the magnitude of |SFEDt| in ascending order within each of the three sub-samples: the expansions (2002Q1–2007Q3 and 2009Q3–2019Q4), the GFC period (2007Q4–2009Q2), and the COVID period (2020Q1–2020Q2). The main advantage of presenting the results in this way is that they are directly comparable to the numerical results reported in Table 2. The underlying SFEDs, rearranged in ascending order according to their modulus within the three sub-periods (expansion, GFC, and COVID), are shown in Figure 11.
The R2MSFE(+R) sequences calculated from the forecasts at the three selected forecast origins are shown in Figure 10. These sequences visually display the asymmetry in the relative forecasting ability of the dynamic factor and historical mean models across the sub-samples. As for the expansionary period, we can clearly observe that the HMM, on average, produces lower squared forecast errors than its sophisticated counterpart at the forecast origins 20 and 10 weeks ahead of advance GDP releases. The last point in the red sequence in the upper and middle panels of the figure corresponds to the rMSFE reported in Column (3) of Table 2. These values indicate that the MSFE of the DFM is 71.8% and 28.0% higher than that of the HMM for the forecasts made 20 and 10 weeks before the releases of advance GDP estimates, respectively. We can infer from the lower panel of the figure that the rMSFE DFM/HMM is very close to zero for the forecasts released during the same week when advance GDP estimates are published. In fact, the corresponding entry of −0.022 in Table 2 indicates that the MSFE of the DFM is only about 2% lower than that of the HMM during expansions.
The sequence of green dots shows how the relative MSFE changes if we add the observations from the GFC period. The last green dot corresponds to the value of rMSFE reported in Column (1) of Table 2. For the 20-week-ahead forecasts, we can read off the corresponding value of 0.165, which indicates that the HMM average forecast accuracy is superior to that of the DFM even when the observations from the GFC are taken into account. However, for the shorter forecast horizon of 10 weeks, the corresponding entry is −0.308, indicating a change in the models' relative ranking brought about by the observations during the GFC period. This perfectly illustrates how excessive gains in forecasting ability during the Great Recession, accrued by the more sophisticated model, can overcompensate for the forecast accuracy losses during much longer periods of economic expansion. As for the forecasts released during the same week as the advance GDP releases, the relevant value of −0.546 in Table 2 signals a reduction of about 55% in MSFE brought about by the DFM.
Finally, the blue dots indicate how the rMSFE DFM/HMM changes if the two remaining observations (2020Q1–2020Q2) are added to the sample. The last dots in these sequences correspond to the entries in Column (7) of Table 2 for the DFM. As discussed above, the addition of these observations changes the relative ranking of the models for the 20-week-ahead forecasts, now indicating a reduction of about 5.4% in MSFE brought about by the DFM, and substantially lowers the relative MSFE for the forecasts made at shorter forecast horizons. The corresponding reductions in MSFE are 81.8% and 68.6% relative to the HMM.

7. Conclusions

In this study, we analyze the predictive performance of the dynamic factor model (DFM) developed and maintained at the NY FED. In contrast to many forecasting exercises that are carried out on past historical data vintages, this project publishes forecasts in a do-it-as-you-go manner and, therefore, is void of data snooping biases. This fact allows us to evaluate the genuine forecasting ability of the econometric model in question, not least during the unfolding COVID-19 pandemic, when uncertainty about current economic conditions is substantially elevated compared to tranquil times and the demand for an accurate assessment of the current state of the economy is especially acute.
The dataset that we analyze comprises US quarterly GDP growth forecasts made in real time squared since 2016 and forecasts calculated backwards for 2002–2015 using historical data vintages. We summarize the nominal and relative accuracy of the DFM forecasts made at 21 weekly forecast origins. The earliest forecast origin precedes the release of the advance GDP estimate by 20 weeks, the next by 19 weeks, and so on, until the week when the advance GDP estimate for the targeted quarter is released. The DFM forecast accuracy is compared with that of two benchmark models: a historical mean model (HMM) and an autoregressive model of order two (ARM).
The main contribution to the forecasting literature is that we analyze the DFM predictive performance over the whole period as well as separately during its expansionary and recessionary sub-periods. The recessionary sub-period includes two distinct episodes: the Great Financial Crisis (2007Q4–2009Q2) and the first two quarters of the unfolding COVID-19 pandemic (2020Q1–2020Q2). In doing so, we intend to verify whether the conclusions of Chauvet and Potter (2013) on the asymmetric predictive ability of a wide range of modern macroeconometric models also apply to the NY FED nowcasting model.
Our main conclusion is that we indeed observe a very strong variation in the forecasting ability of the DFM across business cycle phases. This conclusion is supported when one examines the predictive performance during the pre-COVID period, when there is only one recessionary episode (the GFC) in our sample, and it is further reinforced when the latest observations from the COVID pandemic period are included in the analysis. As for the expansionary period, we find that at longer forecast horizons the accuracy of the DFM predictions is inferior to that of the historical mean model. It is only at forecast origins less than four weeks ahead of advance GDP releases that the sophisticated and the naive benchmark models deliver similar forecasting accuracy. By contrast, the DFM delivers superior forecast accuracy during the recessionary quarters.
Our analysis also demonstrates that the widespread practice of reporting measures of forecast accuracy based on average squared forecast errors (and their differences) over longer periods that include both expansionary and recessionary phases of business cycles is prone to deliver a biased assessment of the models' nominal and relative forecasting ability. Typically, since the relative gains in forecasting accuracy of a more sophisticated model during recessions significantly outweigh its relative losses during expansions, averaging across both expansions and recessions tends to artificially exaggerate the predictive ability of a more sophisticated model relative to naive benchmarks, both for the period as a whole and for its expansionary sub-sample.
In order to avoid such misrepresentation of the results of a forecasting exercise, it is advisable to complement the measures of forecasting ability reported for the whole sample with results reported for more homogeneous sub-samples, e.g., expansions and recessions. Additional information regarding the models' relative forecasting ability can be provided by recursive measures of forecast accuracy such as the CSSFED of Welch and Goyal (2008) and the R2MSFE(+R) of Siliverstovs (2020b). These recursive measures dissect the relative predictive performance of the competing models observation by observation and facilitate gauging the leverage of one or a few observations on the models' relative ranking.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All computations were performed in R (R Core Team 2012). The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

The tables presented in this appendix replicate the results shown in Table 1 and Table 2 in the main text but rely on forecast accuracy metrics computed for the advance, final, and latest releases of GDP estimates.
Table A1. Forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
Advance GDP Release

                     2002Q1–2019Q4                                                2002Q1–2020Q2
        Full Sample         Boom                 Bust                    Full Sample            Bust
Weeks   DFM   HMM   ARM    DFM   HMM   ARM    DFM    HMM    ARM      DFM    HMM    ARM     DFM     HMM     ARM
20      4.69  4.05  3.95   3.32  1.85  2.11   17.36  24.51  21.05    20.59  21.95  21.66   145.31  167.13  162.86
19      4.28  4.05  3.95   3.01  1.85  2.11   16.04  24.51  21.05    19.47  21.95  21.66   138.31  167.13  162.86
18      4.22  4.05  3.95   2.97  1.85  2.11   15.75  24.51  21.05    19.51  21.95  21.66   138.93  167.13  162.86
17      3.84  4.05  3.95   3.04  1.85  2.11   11.32  24.51  21.05    18.53  21.95  21.66   130.37  167.13  162.86
16      3.72  4.05  3.95   2.94  1.85  2.11   11.00  24.51  21.05    18.36  21.95  21.66   129.69  167.13  162.86
15      3.26  4.05  3.95   2.86  1.85  2.11    6.97  24.51  21.05    12.20  21.95  21.66    79.68  167.13  162.86
14      3.28  4.05  4.01   2.93  1.85  2.17    6.56  24.51  21.05    12.28  21.95  21.71    79.81  167.13  162.86
13      3.33  4.03  3.46   2.99  1.85  2.14    6.51  24.25  15.74    11.30  21.89  19.41    71.27  166.63  144.12
12      3.20  4.03  3.52   2.89  1.85  2.20    6.06  24.25  15.74     3.72  21.89  19.46     9.65  166.63  144.12
11      2.98  4.03  3.52   2.73  1.85  2.20    5.28  24.25  15.74     3.46  21.89  19.46     8.73  166.63  144.12
10      2.71  4.03  3.52   2.50  1.85  2.20    4.66  24.25  15.74     3.34  21.89  19.46     9.41  166.63  144.12
 9      2.52  4.03  3.52   2.37  1.85  2.20    3.93  24.25  15.74     3.20  21.89  19.46     9.15  166.63  144.12
 8      2.45  4.03  3.52   2.32  1.85  2.20    3.62  24.25  15.74     3.70  21.89  19.46    13.62  166.63  144.12
 7      2.25  4.03  3.52   2.11  1.85  2.20    3.60  24.25  15.74     3.40  21.89  19.46    12.77  166.63  144.12
 6      2.12  4.03  3.52   1.97  1.85  2.20    3.47  24.25  15.74     5.19  21.89  19.46    28.46  166.63  144.12
 5      2.04  4.03  3.52   1.91  1.85  2.20    3.22  24.25  15.74     6.26  21.89  19.46    37.67  166.63  144.12
 4      1.72  4.03  3.52   1.77  1.85  2.20    1.34  24.25  15.74     6.49  21.89  19.46    40.63  166.63  144.12
 3      1.70  4.03  3.52   1.72  1.85  2.20    1.54  24.25  15.74     6.36  21.89  19.46    39.93  166.63  144.12
 2      1.66  4.03  3.52   1.71  1.85  2.20    1.26  24.25  15.74     6.55  21.89  19.46    41.54  166.63  144.12
 1      1.64  4.03  3.52   1.67  1.85  2.20    1.34  24.25  15.74     6.54  21.89  19.46    41.76  166.63  144.12
 0      1.65  4.03  3.52   1.68  1.85  2.20    1.35  24.25  15.74     6.83  21.89  19.46    44.02  166.63  144.12
Note: The table entries are the MSFEs computed for the advance GDP release. The MSFEs are computed for each model in each forecast round measured in terms of the approximate number of weeks preceding the release of the advance GDP estimate. The left panel reports the results for the pre-COVID sample, whereas the right panel refers to the full sample available. The defined expansionary period (BOOM) covers 2002Q1–2007Q3 and 2009Q3–2019Q4, and, since they are identical for both the left and right panels, the corresponding results are reported only in the left panel.
Table A2. Relative forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
Advance GDP Release

                 2002Q1–2019Q4                                      2002Q1–2020Q2
          Full Sample          Boom               Bust             Full Sample          Bust
Weeks   DFM/HMM  ARM/HMM  DFM/HMM  ARM/HMM  DFM/HMM  ARM/HMM  DFM/HMM  ARM/HMM  DFM/HMM  ARM/HMM
          (1)      (2)      (3)      (4)      (5)      (6)      (7)      (8)      (9)      (10)
20       0.157   −0.026    0.796    0.139   −0.292   −0.141   −0.062   −0.013   −0.131   −0.026
19       0.056   −0.026    0.629    0.139   −0.346   −0.141   −0.113   −0.013   −0.172   −0.026
18       0.041   −0.026    0.608    0.139   −0.357   −0.141   −0.111   −0.013   −0.169   −0.026
17      −0.052   −0.026    0.643    0.139   −0.538   −0.141   −0.156   −0.013   −0.220   −0.026
16      −0.081   −0.026    0.590    0.139   −0.551   −0.141   −0.164   −0.013   −0.224   −0.026
15      −0.196   −0.026    0.546    0.139   −0.716   −0.141   −0.444   −0.013   −0.523   −0.026
14      −0.191   −0.012    0.580    0.172   −0.732   −0.141   −0.441   −0.011   −0.522   −0.026
13      −0.172   −0.140    0.619    0.158   −0.732   −0.351   −0.484   −0.113   −0.572   −0.135
12      −0.205   −0.127    0.564    0.189   −0.750   −0.351   −0.830   −0.111   −0.942   −0.135
11      −0.260   −0.127    0.478    0.189   −0.782   −0.351   −0.842   −0.111   −0.948   −0.135
10      −0.327   −0.127    0.352    0.189   −0.808   −0.351   −0.847   −0.111   −0.944   −0.135
 9      −0.373   −0.127    0.282    0.189   −0.838   −0.351   −0.854   −0.111   −0.945   −0.135
 8      −0.392   −0.127    0.256    0.189   −0.851   −0.351   −0.831   −0.111   −0.918   −0.135
 7      −0.441   −0.127    0.139    0.189   −0.851   −0.351   −0.845   −0.111   −0.923   −0.135
 6      −0.475   −0.127    0.064    0.189   −0.857   −0.351   −0.763   −0.111   −0.829   −0.135
 5      −0.495   −0.127    0.031    0.189   −0.867   −0.351   −0.714   −0.111   −0.774   −0.135
 4      −0.572   −0.127   −0.046    0.189   −0.945   −0.351   −0.703   −0.111   −0.756   −0.135
 3      −0.578   −0.127   −0.072    0.189   −0.936   −0.351   −0.709   −0.111   −0.760   −0.135
 2      −0.587   −0.127   −0.078    0.189   −0.948   −0.351   −0.701   −0.111   −0.751   −0.135
 1      −0.594   −0.127   −0.098    0.189   −0.945   −0.351   −0.701   −0.111   −0.749   −0.135
 0      −0.591   −0.127   −0.092    0.189   −0.944   −0.351   −0.688   −0.111   −0.736   −0.135
Note: Table entries are rMSFE of the DFM or ARM with respect to the benchmark HMM computed for the advance GDP release. The rMSFEs are computed using the corresponding entries in Table A1. For additional information, please see the notes in that table.
Table A3. Forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
Final GDP Release

                     2002Q1–2019Q4                                                2002Q1–2020Q2
        Full Sample         Boom                 Bust                    Full Sample            Bust
Weeks   DFM   HMM   ARM    DFM   HMM   ARM    DFM    HMM    ARM      DFM    HMM    ARM     DFM     HMM     ARM
20      5.95  5.02  4.94   4.20  2.51  2.78   22.18  28.31  24.93    20.49  21.50  21.23   138.11  158.63  154.47
19      5.69  5.02  4.94   4.09  2.51  2.78   20.61  28.31  24.93    19.55  21.50  21.23   131.26  158.63  154.47
18      5.64  5.02  4.94   4.07  2.51  2.78   20.24  28.31  24.93    19.60  21.50  21.23   131.76  158.63  154.47
17      5.33  5.02  4.94   4.20  2.51  2.78   15.81  28.31  24.93    18.70  21.50  21.23   123.41  158.63  154.47
16      5.20  5.02  4.94   4.10  2.51  2.78   15.41  28.31  24.93    18.52  21.50  21.23   122.69  158.63  154.47
15      4.58  5.02  4.94   3.97  2.51  2.78   10.22  28.31  24.93    12.52  21.50  21.23    74.31  158.63  154.47
14      4.63  5.02  4.99   4.06  2.51  2.85    9.87  28.31  24.93    12.63  21.50  21.29    74.46  158.63  154.47
13      4.62  5.00  4.60   4.05  2.52  2.95    9.91  28.06  19.91    11.65  21.45  19.20    66.50  158.15  136.59
12      4.39  5.00  4.65   3.87  2.52  3.01    9.18  28.06  19.91     4.87  21.45  19.26    12.08  158.15  136.59
11      4.04  5.00  4.65   3.69  2.52  3.01    7.24  28.06  19.91     4.48  21.45  19.26    10.17  158.15  136.59
10      3.61  5.00  4.65   3.19  2.52  3.01    7.57  28.06  19.91     4.19  21.45  19.26    11.43  158.15  136.59
 9      3.39  5.00  4.65   3.07  2.52  3.01    6.34  28.06  19.91     4.22  21.45  19.26    12.50  158.15  136.59
 8      3.30  5.00  4.65   3.02  2.52  3.01    5.91  28.06  19.91     4.29  21.45  19.26    13.46  158.15  136.59
 7      2.99  5.00  4.65   2.79  2.52  3.01    4.89  28.06  19.91     3.90  21.45  19.26    11.96  158.15  136.59
 6      3.04  5.00  4.65   2.87  2.52  3.01    4.54  28.06  19.91     5.58  21.45  19.26    25.16  158.15  136.59
 5      2.88  5.00  4.65   2.74  2.52  3.01    4.13  28.06  19.91     6.46  21.45  19.26    33.34  158.15  136.59
 4      2.67  5.00  4.65   2.64  2.52  3.01    2.88  28.06  19.91     6.74  21.45  19.26    36.36  158.15  136.59
 3      2.65  5.00  4.65   2.61  2.52  3.01    2.97  28.06  19.91     6.63  21.45  19.26    35.64  158.15  136.59
 2      2.60  5.00  4.65   2.66  2.52  3.01    2.04  28.06  19.91     6.76  21.45  19.26    36.33  158.15  136.59
 1      2.57  5.00  4.65   2.63  2.52  3.01    2.04  28.06  19.91     6.75  21.45  19.26    36.47  158.15  136.59
 0      2.57  5.00  4.65   2.63  2.52  3.01    1.97  28.06  19.91     6.99  21.45  19.26    38.49  158.15  136.59
Note: Table entries are the MSFEs computed for the final GDP release. The MSFEs are computed for each model at each forecast round, indexed by the approximate number of weeks remaining until the release of the advance GDP estimate. The 2002Q1–2019Q4 columns report results for the pre-COVID sample, whereas the 2002Q1–2020Q2 columns refer to the full available sample. The expansionary period (BOOM) covers 2002Q1–2007Q3 and 2009Q3–2019Q4; since the BOOM results are identical for both samples, they are reported only once, under the pre-COVID sample.
Table A4. Relative forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
Final GDP Release
2002Q1–2019Q4: Full Sample | Boom | Bust; 2002Q1–2020Q2: Full Sample | Bust
Weeks | DFM/HMM (1)  ARM/HMM (2) | DFM/HMM (3)  ARM/HMM (4) | DFM/HMM (5)  ARM/HMM (6) | DFM/HMM (7)  ARM/HMM (8) | DFM/HMM (9)  ARM/HMM (10)
20 | 0.185  −0.017 | 0.672  0.107 | −0.216  −0.119 | −0.047  −0.013 | −0.129  −0.026
19 | 0.134  −0.017 | 0.626  0.107 | −0.272  −0.119 | −0.091  −0.013 | −0.173  −0.026
18 | 0.123  −0.017 | 0.619  0.107 | −0.285  −0.119 | −0.088  −0.013 | −0.169  −0.026
17 | 0.062  −0.017 | 0.672  0.107 | −0.442  −0.119 | −0.130  −0.013 | −0.222  −0.026
16 | 0.035  −0.017 | 0.630  0.107 | −0.455  −0.119 | −0.139  −0.013 | −0.227  −0.026
15 | −0.089  −0.017 | 0.579  0.107 | −0.639  −0.119 | −0.417  −0.013 | −0.532  −0.026
14 | −0.079  −0.006 | 0.616  0.132 | −0.651  −0.119 | −0.413  −0.010 | −0.531  −0.026
13 | −0.075  −0.080 | 0.610  0.173 | −0.647  −0.290 | −0.457  −0.104 | −0.579  −0.136
12 | −0.123  −0.070 | 0.537  0.195 | −0.673  −0.290 | −0.773  −0.102 | −0.924  −0.136
11 | −0.193  −0.070 | 0.465  0.195 | −0.742  −0.290 | −0.791  −0.102 | −0.936  −0.136
10 | −0.278  −0.070 | 0.265  0.195 | −0.730  −0.290 | −0.805  −0.102 | −0.928  −0.136
 9 | −0.323  −0.070 | 0.218  0.195 | −0.774  −0.290 | −0.803  −0.102 | −0.921  −0.136
 8 | −0.341  −0.070 | 0.198  0.195 | −0.789  −0.290 | −0.800  −0.102 | −0.915  −0.136
 7 | −0.402  −0.070 | 0.107  0.195 | −0.826  −0.290 | −0.818  −0.102 | −0.924  −0.136
 6 | −0.393  −0.070 | 0.140  0.195 | −0.838  −0.290 | −0.740  −0.102 | −0.841  −0.136
 5 | −0.425  −0.070 | 0.088  0.195 | −0.853  −0.290 | −0.699  −0.102 | −0.789  −0.136
 4 | −0.467  −0.070 | 0.049  0.195 | −0.897  −0.290 | −0.686  −0.102 | −0.770  −0.136
 3 | −0.471  −0.070 | 0.037  0.195 | −0.894  −0.290 | −0.691  −0.102 | −0.775  −0.136
 2 | −0.480  −0.070 | 0.056  0.195 | −0.927  −0.290 | −0.685  −0.102 | −0.770  −0.136
 1 | −0.486  −0.070 | 0.043  0.195 | −0.927  −0.290 | −0.686  −0.102 | −0.769  −0.136
 0 | −0.487  −0.070 | 0.044  0.195 | −0.930  −0.290 | −0.674  −0.102 | −0.757  −0.136
Note: Table entries are the rMSFEs of the DFM or ARM relative to the benchmark HMM, computed for the final GDP release. The rMSFEs are computed from the corresponding entries in Table A3; for additional information, see the notes to that table.
Table A5. Forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
Latest GDP Release
2002Q1–2019Q4: Full Sample | Boom | Bust; 2002Q1–2020Q2: Full Sample | Bust
Weeks | DFM  HMM  ARM | DFM  HMM  ARM | DFM  HMM  ARM | DFM  HMM  ARM | DFM  HMM  ARM
20 | 6.38  5.73  5.44 | 3.65  2.41  2.45 | 31.75  36.55  33.16 | 20.90  22.18  21.71 | 145.50  164.98  160.81
19 | 6.11  5.73  5.44 | 3.57  2.41  2.45 | 29.71  36.55  33.16 | 19.95  22.18  21.71 | 138.29  164.98  160.81
18 | 6.03  5.73  5.44 | 3.52  2.41  2.45 | 29.28  36.55  33.16 | 19.97  22.18  21.71 | 138.75  164.98  160.81
17 | 5.78  5.73  5.44 | 3.71  2.41  2.45 | 25.05  36.55  33.16 | 19.14  22.18  21.71 | 130.55  164.98  160.81
16 | 5.60  5.73  5.44 | 3.54  2.41  2.45 | 24.72  36.55  33.16 | 18.90  22.18  21.71 | 129.88  164.98  160.81
15 | 5.00  5.73  5.44 | 3.57  2.41  2.45 | 18.22  36.55  33.16 | 12.93  22.18  21.71 | 80.48  164.98  160.81
14 | 5.02  5.73  5.47 | 3.64  2.41  2.49 | 17.91  36.55  33.16 | 13.00  22.18  21.74 | 80.65  164.98  160.81
13 | 4.99  5.70  4.97 | 3.53  2.40  2.52 | 18.54  36.28  27.74 | 12.00  22.12  19.56 | 73.16  164.48  142.61
12 | 4.74  5.70  5.01 | 3.38  2.40  2.56 | 17.35  36.28  27.74 | 5.21  22.12  19.60 | 18.38  164.48  142.61
11 | 4.37  5.70  5.01 | 3.30  2.40  2.56 | 14.31  36.28  27.74 | 4.80  22.12  19.60 | 15.62  164.48  142.61
10 | 4.04  5.70  5.01 | 2.84  2.40  2.56 | 15.19  36.28  27.74 | 4.60  22.12  19.60 | 17.30  164.48  142.61
 9 | 3.80  5.70  5.01 | 2.77  2.40  2.56 | 13.39  36.28  27.74 | 4.61  22.12  19.60 | 17.92  164.48  142.61
 8 | 3.71  5.70  5.01 | 2.75  2.40  2.56 | 12.62  36.28  27.74 | 4.68  22.12  19.60 | 18.62  164.48  142.61
 7 | 3.53  5.70  5.01 | 2.79  2.40  2.56 | 10.44  36.28  27.74 | 4.42  22.12  19.60 | 16.22  164.48  142.61
 6 | 3.26  5.70  5.01 | 2.64  2.40  2.56 | 9.04  36.28  27.74 | 5.80  22.12  19.60 | 28.61  164.48  142.61
 5 | 3.12  5.70  5.01 | 2.52  2.40  2.56 | 8.62  36.28  27.74 | 6.69  22.12  19.60 | 36.78  164.48  142.61
 4 | 3.00  5.70  5.01 | 2.48  2.40  2.56 | 7.88  36.28  27.74 | 7.07  22.12  19.60 | 40.20  164.48  142.61
 3 | 2.98  5.70  5.01 | 2.44  2.40  2.56 | 7.97  36.28  27.74 | 6.94  22.12  19.60 | 39.47  164.48  142.61
 2 | 2.88  5.70  5.01 | 2.51  2.40  2.56 | 6.26  36.28  27.74 | 7.02  22.12  19.60 | 39.57  164.48  142.61
 1 | 2.89  5.70  5.01 | 2.54  2.40  2.56 | 6.08  36.28  27.74 | 7.05  22.12  19.60 | 39.57  164.48  142.61
 0 | 2.90  5.70  5.01 | 2.57  2.40  2.56 | 5.95  36.28  27.74 | 7.31  22.12  19.60 | 41.55  164.48  142.61
Note: Table entries are the MSFEs computed for the latest GDP release. The MSFEs are computed for each model at each forecast round, indexed by the approximate number of weeks remaining until the release of the advance GDP estimate. The 2002Q1–2019Q4 columns report results for the pre-COVID sample, whereas the 2002Q1–2020Q2 columns refer to the full available sample. The expansionary period (BOOM) covers 2002Q1–2007Q3 and 2009Q3–2019Q4; since the BOOM results are identical for both samples, they are reported only once, under the pre-COVID sample.
Table A6. Relative forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
Latest GDP Release
2002Q1–2019Q4: Full Sample | Boom | Bust; 2002Q1–2020Q2: Full Sample | Bust
Weeks | DFM/HMM (1)  ARM/HMM (2) | DFM/HMM (3)  ARM/HMM (4) | DFM/HMM (5)  ARM/HMM (6) | DFM/HMM (7)  ARM/HMM (8) | DFM/HMM (9)  ARM/HMM (10)
20 | 0.114  −0.051 | 0.515  0.017 | −0.131  −0.093 | −0.058  −0.021 | −0.118  −0.025
19 | 0.066  −0.051 | 0.481  0.017 | −0.187  −0.093 | −0.101  −0.021 | −0.162  −0.025
18 | 0.052  −0.051 | 0.463  0.017 | −0.199  −0.093 | −0.100  −0.021 | −0.159  −0.025
17 | 0.010  −0.051 | 0.541  0.017 | −0.315  −0.093 | −0.137  −0.021 | −0.209  −0.025
16 | −0.023  −0.051 | 0.468  0.017 | −0.324  −0.093 | −0.148  −0.021 | −0.213  −0.025
15 | −0.127  −0.051 | 0.484  0.017 | −0.502  −0.093 | −0.417  −0.021 | −0.512  −0.025
14 | −0.123  −0.044 | 0.511  0.034 | −0.510  −0.093 | −0.414  −0.020 | −0.511  −0.025
13 | −0.124  −0.128 | 0.471  0.047 | −0.489  −0.236 | −0.457  −0.116 | −0.555  −0.133
12 | −0.168  −0.121 | 0.408  0.066 | −0.522  −0.236 | −0.765  −0.114 | −0.888  −0.133
11 | −0.233  −0.121 | 0.374  0.066 | −0.606  −0.236 | −0.783  −0.114 | −0.905  −0.133
10 | −0.291  −0.121 | 0.180  0.066 | −0.581  −0.236 | −0.792  −0.114 | −0.895  −0.133
 9 | −0.333  −0.121 | 0.151  0.066 | −0.631  −0.236 | −0.792  −0.114 | −0.891  −0.133
 8 | −0.350  −0.121 | 0.142  0.066 | −0.652  −0.236 | −0.789  −0.114 | −0.887  −0.133
 7 | −0.380  −0.121 | 0.160  0.066 | −0.712  −0.236 | −0.800  −0.114 | −0.901  −0.133
 6 | −0.428  −0.121 | 0.097  0.066 | −0.751  −0.236 | −0.738  −0.114 | −0.826  −0.133
 5 | −0.453  −0.121 | 0.049  0.066 | −0.762  −0.236 | −0.698  −0.114 | −0.776  −0.133
 4 | −0.473  −0.121 | 0.031  0.066 | −0.783  −0.236 | −0.681  −0.114 | −0.756  −0.133
 3 | −0.478  −0.121 | 0.015  0.066 | −0.780  −0.236 | −0.686  −0.114 | −0.760  −0.133
 2 | −0.495  −0.121 | 0.046  0.066 | −0.828  −0.236 | −0.683  −0.114 | −0.759  −0.133
 1 | −0.493  −0.121 | 0.058  0.066 | −0.833  −0.236 | −0.681  −0.114 | −0.759  −0.133
 0 | −0.492  −0.121 | 0.068  0.066 | −0.836  −0.236 | −0.670  −0.114 | −0.747  −0.133
Note: Table entries are the rMSFEs of the DFM or ARM relative to the benchmark HMM, computed for the latest GDP release. The rMSFEs are computed from the corresponding entries in Table A5; for additional information, see the notes to that table.

References

  1. Aarons, Grant, Daniele Caratelli, Domenico Giannone, Argia Sbordone, and Andrea Tambalotti. 2016. Just Released: Introducing the New York Fed Staff Nowcast. Federal Reserve Bank of New York, Liberty Street Economics (blog). Available online: https://libertystreeteconomics.newyorkfed.org/2016/04/just-released-introducing-the-frbny-nowcast.html (accessed on 21 February 2021).
  2. Adams, Patrick, Brandyn Bok, Daniele Caratelli, Domenico Giannone, Eric Qian, Argia Sbordone, Camilla Schneier, and Andrea Tambalotti. 2018. Opening the Toolbox: Computer Code for the Nowcast. Federal Reserve Bank of New York, Liberty Street Economics (blog). Available online: http://libertystreeteconomics.newyorkfed.org/2018/08/opening-the-toolbox-the-nowcasting-code-on-github.html (accessed on 21 February 2021).
  3. Adams, Patrick, Domenico Giannone, Eric Qian, and Argia Sbordone. 2019. Just Released: Historical Reconstruction of the New York Fed Staff Nowcast, 2002–15. Federal Reserve Bank of New York, Liberty Street Economics (blog). Available online: https://libertystreeteconomics.newyorkfed.org/2019/07/just-released-historical-reconstruction-of-the-new-york-fed-staff-nowcast-2002-15.html (accessed on 21 February 2021).
  4. Alessi, Lucia, Eric Ghysels, Luca Onorante, Richard Peach, and Simon Potter. 2014. Central Bank Macroeconomic Forecasting During the Global Financial Crisis: The European Central Bank and Federal Reserve Bank of New York Experiences. Journal of Business & Economic Statistics 32: 483–500.
  5. Babii, Andrii, Eric Ghysels, and Jonas Striaukas. 2019. Machine Learning Time Series Regressions with an Application to Nowcasting. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3503191 (accessed on 21 February 2021).
  6. Banbura, Marta, Domenico Giannone, and Lucrezia Reichlin. 2011. Nowcasting. In Oxford Handbook of Economic Forecasting. Edited by M. P. Clements and D. F. Hendry. Oxford: Oxford University Press, chp. 7, pp. 193–224.
  7. Bok, Brandyn, Daniele Caratelli, Domenico Giannone, Argia Sbordone, and Andrea Tambalotti. 2018. Macroeconomic Nowcasting and Forecasting with Big Data. Annual Review of Economics 10: 615–43.
  8. Cai, Michael, Marco Del Negro, Marc P. Giannoni, Abhi Gupta, Pearl Li, and Erica Moszkowski. 2019. DSGE forecasts of the lost recovery. International Journal of Forecasting 35: 1770–89.
  9. Carriero, Andrea, Todd E. Clark, and Massimiliano Marcellino. 2015. Realtime nowcasting with a Bayesian mixed frequency model with stochastic volatility. Journal of the Royal Statistical Society: Series A (Statistics in Society) 178: 837–62.
  10. Chauvet, Marcelle, and Simon Potter. 2013. Forecasting output. In Handbook of Economic Forecasting. Edited by G. Elliott and A. Timmermann. Amsterdam: North Holland, vol. 2, pp. 1–56.
  11. Cimadomo, Jacopo, Domenico Giannone, Michele Lenza, Andrej Sokol, and Francesca Monti. 2020. Nowcasting with Large Bayesian Vector Autoregressions. Working Paper Series 2453. Frankfurt: European Central Bank.
  12. Foroni, Claudia, Massimiliano Marcellino, and Christian Schumacher. 2015. Unrestricted mixed data sampling (MIDAS): MIDAS regressions with unrestricted lag polynomials. Journal of the Royal Statistical Society: Series A (Statistics in Society) 178: 57–82.
  13. Geweke, John, and Gianni Amisano. 2010. Comparing and evaluating Bayesian predictive distributions of asset returns. International Journal of Forecasting 26: 216–30.
  14. Giannone, Domenico, Argia Sbordone, and Andrea Tambalotti. 2017. Hey, Economist! How Do You Forecast the Present? Federal Reserve Bank of New York, Liberty Street Economics (blog). Available online: http://libertystreeteconomics.newyorkfed.org/2017/06/hey-economist-how-do-you-forecast-the-present.html (accessed on 21 February 2021).
  15. Giannone, Domenico, Lucrezia Reichlin, and David Small. 2008. Nowcasting: The real-time informational content of macroeconomic data. Journal of Monetary Economics 55: 665–76.
  16. R Core Team. 2012. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing. ISBN 3-900051-07-0.
  17. Rossi, Barbara. 2013. Advances in Forecasting under Instability. In Handbook of Economic Forecasting. Edited by G. Elliott and A. Timmermann. Amsterdam: Elsevier, vol. 2, chp. 21, pp. 1203–324.
  18. Siliverstovs, Boriss. 2012. Keeping a Finger on the Pulse of the Economy: Nowcasting Swiss GDP in Real-Time Squared. KOF Working Papers 12-302. Zürich: KOF Swiss Economic Institute, ETH Zurich.
  19. Siliverstovs, Boriss. 2017. Dissecting models’ forecasting performance. Economic Modelling 67: 294–99.
  20. Siliverstovs, Boriss. 2020a. Assessing nowcast accuracy of US GDP growth in real time: The role of booms and busts. Empirical Economics 58: 7–27.
  21. Siliverstovs, Boriss. 2020b. Gauging the Effect of Influential Observations on Measures of Relative Forecast Accuracy in the Post-COVID-19 Era: An Application to Nowcasting Euro Area GDP Growth. Unpublished manuscript.
  22. Siliverstovs, Boriss, and Daniel Wochner. 2019. Recessions as Breadwinner for Forecasters State-Dependent Evaluation of Predictive Ability: Evidence from Big Macroeconomic US Data. KOF Working Papers 19-463. Zürich: KOF Swiss Economic Institute, ETH Zurich.
  23. Siliverstovs, Boriss, and Daniel Wochner. 2021. State-dependent evaluation of predictive ability. Journal of Forecasting 40: 547–74.
  24. Siliverstovs, Boriss, and Konstantin A. Kholodilin. 2012. Assessing the real-time informational content of macroeconomic data releases for now-/forecasting GDP: Evidence for Switzerland. Jahrbücher für Nationalökonomie und Statistik 232: 429–44.
  25. Stock, James H., and Mark W. Watson. 2002. Macroeconomic forecasting using diffusion indexes. Journal of Business and Economic Statistics 20: 147–62.
  26. Welch, Ivo, and Amit Goyal. 2008. A comprehensive look at the empirical performance of equity premium prediction. Review of Financial Studies 21: 1455–508.
Figure 1. Sequence of weekly nowcasts for 2009Q3.
Figure 2. Sequence of weekly nowcasts for 2020Q2.
Figure 3. GDP growth: actual (second release) and forecasts at selected forecast origins. In the legend, 20 and 10 denote the number of weeks ahead of the release of the advance GDP estimate; the remaining forecast sequence, labelled 0, corresponds to forecasts made during the same week in which the advance GDP estimate was published.
Figure 4. MSFEs reported for the pre-COVID and full samples.
Figure 5. Relative MSFEs (DFM/HMM) reported for the pre-COVID and full samples.
Figure 6. MSFEs (DFM) in expansions/recessions, reported for the pre-COVID and full samples.
Figure 7. rMSFEs (DFM/HMM) in expansions/recessions, reported for the pre-COVID and full samples.
Figure 8. SFED (DFM/HMM) computed at selected forecast origins: 20, 10, and 0 weeks ahead of advance GDP estimate releases.
Figure 9. CSSFED (DFM/HMM) computed at selected forecast origins: 20, 10, and 0 weeks ahead of advance GDP estimate releases.
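Figures 8 and 9 plot the squared forecast error difference (SFED) and its running total, the CSSFED. The sketch below, again in R with placeholder error series e_dfm and e_hmm, assumes the orientation used by Welch and Goyal (2008) for their cumulative SSE difference, under which positive values favour the DFM; the sign convention is an assumption made here, not taken from the paper.

    # Placeholder forecast errors of the two models at one fixed origin;
    # replace with the actual error series for a given forecast round.
    e_dfm <- c(0.5, -0.3, 2.0, -0.4, 0.2)
    e_hmm <- c(0.8, -0.5, 6.5, -0.6, 0.3)

    sfed   <- e_hmm^2 - e_dfm^2  # SFED, as plotted in Figure 8
    cssfed <- cumsum(sfed)       # CSSFED, its cumulated sum (Figure 9)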
Figure 10. R2MSFE(+R) (DFM/HMM) computed at selected forecast origins: 20, 10, and 0 weeks ahead of advance GDP estimate releases.
Figure 11. SFED (DFM/HMM) rearranged by its modulus, computed at selected forecast origins: 20, 10, and 0 weeks ahead of advance GDP estimate releases.
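Figures 10 and 11 use the same ingredients after rearranging observations: sorting by the modulus of the SFED pushes the most influential observations to one end of the sample, and the relative MSFE is then recomputed recursively over the rearranged sequence. Continuing the previous sketch, with the ascending ordering assumed here for illustration:

    # Rearrange observations by |SFED|; Figure 11 plots sfed[ord].
    ord <- order(abs(sfed))

    # Recursive relative MSFE over the rearranged sample, R2MSFE(+R),
    # as plotted in Figure 10; values below zero favour the DFM.
    r2msfe_r <- cumsum(e_dfm[ord]^2) / cumsum(e_hmm[ord]^2) - 1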
Table 1. Forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
2002Q1–2019Q4: Full Sample | Boom | Bust; 2002Q1–2020Q2: Full Sample | Bust
Weeks | DFM  HMM  ARM | DFM  HMM  ARM | DFM  HMM  ARM | DFM  HMM  ARM | DFM  HMM  ARM
20 | 5.60  4.80  4.71 | 3.79  2.21  2.48 | 22.34  28.92  25.45 | 20.44  21.60  21.32 | 140.63  161.66  157.42
19 | 5.23  4.80  4.71 | 3.55  2.21  2.48 | 20.84  28.92  25.45 | 19.39  21.60  21.32 | 133.79  161.66  157.42
18 | 5.17  4.80  4.71 | 3.52  2.21  2.48 | 20.46  28.92  25.45 | 19.42  21.60  21.32 | 134.29  161.66  157.42
17 | 4.79  4.80  4.71 | 3.61  2.21  2.48 | 15.75  28.92  25.45 | 18.46  21.60  21.32 | 125.67  161.66  157.42
16 | 4.66  4.80  4.71 | 3.51  2.21  2.48 | 15.35  28.92  25.45 | 18.28  21.60  21.32 | 124.94  161.66  157.42
15 | 4.06  4.80  4.71 | 3.40  2.21  2.48 | 10.21  28.92  25.45 | 12.24  21.60  21.32 | 76.08  161.66  157.42
14 | 4.12  4.81  4.76 | 3.49  2.21  2.54 | 9.90  28.92  25.45 | 12.34  21.60  21.37 | 76.26  161.66  157.42
13 | 4.18  4.78  4.29 | 3.56  2.21  2.57 | 9.96  28.66  20.22 | 11.42  21.54  19.19 | 68.21  161.17  139.23
12 | 3.98  4.78  4.35 | 3.41  2.21  2.64 | 9.27  28.66  20.22 | 4.49  21.54  19.25 | 12.25  161.17  139.23
11 | 3.62  4.78  4.35 | 3.20  2.21  2.64 | 7.56  28.66  20.22 | 4.09  21.54  19.25 | 10.53  161.17  139.23
10 | 3.31  4.78  4.35 | 2.83  2.21  2.64 | 7.77  28.66  20.22 | 3.92  21.54  19.25 | 11.75  161.17  139.23
 9 | 3.09  4.78  4.35 | 2.71  2.21  2.64 | 6.62  28.66  20.22 | 3.91  21.54  19.25 | 12.52  161.17  139.23
 8 | 3.02  4.78  4.35 | 2.67  2.21  2.64 | 6.21  28.66  20.22 | 4.08  21.54  19.25 | 14.21  161.17  139.23
 7 | 2.71  4.78  4.35 | 2.44  2.21  2.64 | 5.22  28.66  20.22 | 3.68  21.54  19.25 | 12.70  161.17  139.23
 6 | 2.71  4.78  4.35 | 2.47  2.21  2.64 | 4.97  28.66  20.22 | 5.39  21.54  19.25 | 26.46  161.17  139.23
 5 | 2.57  4.78  4.35 | 2.36  2.21  2.64 | 4.50  28.66  20.22 | 6.31  21.54  19.25 | 34.80  161.17  139.23
 4 | 2.31  4.78  4.35 | 2.22  2.21  2.64 | 3.08  28.66  20.22 | 6.55  21.54  19.25 | 37.77  161.17  139.23
 3 | 2.30  4.78  4.35 | 2.20  2.21  2.64 | 3.20  28.66  20.22 | 6.44  21.54  19.25 | 37.06  161.17  139.23
 2 | 2.20  4.78  4.35 | 2.20  2.21  2.64 | 2.29  28.66  20.22 | 6.53  21.54  19.25 | 37.81  161.17  139.23
 1 | 2.18  4.78  4.35 | 2.17  2.21  2.64 | 2.31  28.66  20.22 | 6.52  21.54  19.25 | 37.97  161.17  139.23
 0 | 2.17  4.78  4.35 | 2.16  2.21  2.64 | 2.24  28.66  20.22 | 6.77  21.54  19.25 | 40.03  161.17  139.23
Note: Table entries are the MSFEs computed for the second GDP release. The MSFEs are computed for each model at each forecast round, indexed by the approximate number of weeks remaining until the release of the advance GDP estimate. The 2002Q1–2019Q4 columns report results for the pre-COVID sample, whereas the 2002Q1–2020Q2 columns refer to the full available sample. The expansionary period (BOOM) covers 2002Q1–2007Q3 and 2009Q3–2019Q4; since the BOOM results are identical for both samples, they are reported only once, under the pre-COVID sample.
Table 2. Relative forecast accuracy, 2002Q1–2019Q4/2002Q1–2020Q2.
2002Q1–2019Q4: Full Sample | Boom | Bust; 2002Q1–2020Q2: Full Sample | Bust
Weeks | DFM/HMM (1)  ARM/HMM (2) | DFM/HMM (3)  ARM/HMM (4) | DFM/HMM (5)  ARM/HMM (6) | DFM/HMM (7)  ARM/HMM (8) | DFM/HMM (9)  ARM/HMM (10)
20 | 0.165  −0.019 | 0.718  0.123 | −0.227  −0.120 | −0.054  −0.013 | −0.130  −0.026
19 | 0.089  −0.019 | 0.608  0.123 | −0.279  −0.120 | −0.102  −0.013 | −0.172  −0.026
18 | 0.075  −0.019 | 0.594  0.123 | −0.293  −0.120 | −0.101  −0.013 | −0.169  −0.026
17 | −0.002  −0.019 | 0.637  0.123 | −0.455  −0.120 | −0.145  −0.013 | −0.223  −0.026
16 | −0.030  −0.019 | 0.590  0.123 | −0.469  −0.120 | −0.154  −0.013 | −0.227  −0.026
15 | −0.155  −0.019 | 0.539  0.123 | −0.647  −0.120 | −0.434  −0.013 | −0.529  −0.026
14 | −0.144  −0.009 | 0.581  0.148 | −0.658  −0.120 | −0.429  −0.011 | −0.528  −0.026
13 | −0.126  −0.103 | 0.610  0.164 | −0.652  −0.295 | −0.470  −0.109 | −0.577  −0.136
12 | −0.167  −0.091 | 0.543  0.194 | −0.676  −0.295 | −0.792  −0.106 | −0.924  −0.136
11 | −0.242  −0.091 | 0.447  0.194 | −0.736  −0.295 | −0.810  −0.106 | −0.935  −0.136
10 | −0.308  −0.091 | 0.280  0.194 | −0.729  −0.295 | −0.818  −0.106 | −0.927  −0.136
 9 | −0.353  −0.091 | 0.227  0.194 | −0.769  −0.295 | −0.819  −0.106 | −0.922  −0.136
 8 | −0.369  −0.091 | 0.208  0.194 | −0.783  −0.295 | −0.811  −0.106 | −0.912  −0.136
 7 | −0.434  −0.091 | 0.101  0.194 | −0.818  −0.295 | −0.829  −0.106 | −0.921  −0.136
 6 | −0.433  −0.091 | 0.116  0.194 | −0.827  −0.295 | −0.750  −0.106 | −0.836  −0.136
 5 | −0.463  −0.091 | 0.067  0.194 | −0.843  −0.295 | −0.707  −0.106 | −0.784  −0.136
 4 | −0.518  −0.091 | 0.006  0.194 | −0.893  −0.295 | −0.696  −0.106 | −0.766  −0.136
 3 | −0.519  −0.091 | −0.003  0.194 | −0.889  −0.295 | −0.701  −0.106 | −0.770  −0.136
 2 | −0.539  −0.091 | −0.007  0.194 | −0.920  −0.295 | −0.697  −0.106 | −0.765  −0.136
 1 | −0.544  −0.091 | −0.020  0.194 | −0.920  −0.295 | −0.697  −0.106 | −0.764  −0.136
 0 | −0.546  −0.091 | −0.022  0.194 | −0.922  −0.295 | −0.686  −0.106 | −0.752  −0.136
Note: Table entries are the rMSFEs of the DFM or ARM relative to the benchmark HMM, computed for the second GDP release. The rMSFEs are computed from the corresponding entries in Table 1; for additional information, see the notes to that table.