Article

History Marginalization Improves Forecasting in Variational Recurrent Neural Networks

by Chen Qiu 1,2, Stephan Mandt 3 and Maja Rudolph 4,*
1 Bosch Center for AI, 71272 Renningen, Germany
2 Department of Computer Science, TU Kaiserslautern, 67653 Kaiserslautern, Germany
3 Department of Computer Science, University of California, Irvine, CA 92697, USA
4 Bosch Center for AI, Pittsburgh, PA 15222, USA
* Author to whom correspondence should be addressed.
Academic Editors: Eric Nalisnick and Dustin Tran
Entropy 2021, 23(12), 1563; https://doi.org/10.3390/e23121563
Received: 30 September 2021 / Revised: 18 November 2021 / Accepted: 19 November 2021 / Published: 24 November 2021
(This article belongs to the Special Issue Probabilistic Methods for Deep Learning)
Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.
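To make the mechanism described in the abstract concrete, the following is a minimal PyTorch sketch of one inference step in the spirit of VDM: K samples of the previous latent state are each propagated through a shared recurrent cell, and every resulting hidden state parameterizes one Gaussian component of the mixture posterior over the current latent state. This is an illustrative sketch under our own assumptions, not the authors' implementation; all names (`VDMInferenceStep`, `num_components`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

class VDMInferenceStep(nn.Module):
    """One inference step of a VDM-style mixture posterior (illustrative sketch).

    The approximate posterior over z_t is a K-component Gaussian mixture:
    each component comes from propagating one sample of z_{t-1} through a
    shared recurrent cell, which approximately marginalizes the latent history.
    """

    def __init__(self, x_dim, z_dim, h_dim, num_components):
        super().__init__()
        self.K = num_components
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)
        self.loc = nn.Linear(h_dim, z_dim)        # component mean
        self.log_scale = nn.Linear(h_dim, z_dim)  # component log-std
        self.logit = nn.Linear(h_dim, 1)          # unnormalized mixture weight

    def forward(self, x_t, z_prev, h_prev):
        # x_t: (B, x_dim); z_prev: (B, K, z_dim); h_prev: (B, K, h_dim)
        B = x_t.shape[0]
        # Pair the current observation with each of the K history samples.
        x_rep = x_t.unsqueeze(1).expand(-1, self.K, -1)
        inp = torch.cat([x_rep, z_prev], dim=-1).reshape(B * self.K, -1)
        h_t = self.rnn(inp, h_prev.reshape(B * self.K, -1)).reshape(B, self.K, -1)

        # Each propagated history yields one mixture component.
        locs = self.loc(h_t)                                    # (B, K, z_dim)
        scales = self.log_scale(h_t).exp()
        weights = self.logit(h_t).squeeze(-1).softmax(dim=-1)   # (B, K)

        # Draw one z_t per component so all K histories stay alive
        # for the next step (stratified across components).
        z_t = locs + scales * torch.randn_like(locs)
        return z_t, h_t, (weights, locs, scales)
```

Unrolling this step over a sequence keeps K latent trajectories in parallel; the mixture weights then let the model place posterior mass on several distinct dynamic modes instead of averaging them.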
Keywords: sequential latent variable models; time series forecasting; variational inference
MDPI and ACS Style

Qiu, C.; Mandt, S.; Rudolph, M. History Marginalization Improves Forecasting in Variational Recurrent Neural Networks. Entropy 2021, 23, 1563. https://doi.org/10.3390/e23121563

AMA Style

Qiu C, Mandt S, Rudolph M. History Marginalization Improves Forecasting in Variational Recurrent Neural Networks. Entropy. 2021; 23(12):1563. https://doi.org/10.3390/e23121563

Chicago/Turabian Style

Qiu, Chen, Stephan Mandt, and Maja Rudolph. 2021. "History Marginalization Improves Forecasting in Variational Recurrent Neural Networks" Entropy 23, no. 12: 1563. https://doi.org/10.3390/e23121563

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.