Open Access Article (Feature Paper)
Entropy 2019, 21(4), 349; https://doi.org/10.3390/e21040349

Reduction of Markov Chains Using a Value-of-Information-Based Approach

1. Advanced Signal Processing and Automated Target Recognition Branch, US Naval Surface Warfare Center—Panama City Division, Panama City, FL 32407, USA
2. Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
3. Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
4. Computational NeuroEngineering Laboratory (CNEL), University of Florida, Gainesville, FL 32611, USA
* Author to whom correspondence should be addressed.
Received: 18 February 2019 / Revised: 24 March 2019 / Accepted: 25 March 2019 / Published: 30 March 2019
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)
PDF [3935 KB, uploaded 17 April 2019]
Abstract

In this paper, we propose an approach to obtain reduced-order models of Markov chains. Our approach is composed of two information-theoretic processes. The first is a means of comparing pairs of stationary chains on different state spaces, which is done via the negative, modified Kullback–Leibler divergence defined on a model joint space. Model reduction is achieved by solving a value-of-information criterion with respect to this divergence. Optimizing the criterion leads to a probabilistic partitioning of the states in the high-order Markov chain. A single free parameter that emerges through the optimization process dictates both the partition uncertainty and the number of state groups. We provide a data-driven means of choosing the 'optimal' value of this free parameter, which sidesteps the need to know the number of state groups in an arbitrary chain a priori.
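The aggregation scheme described in the abstract can be illustrated with a small sketch. This is not the paper's exact algorithm: the function name `voi_aggregate`, the choice of the Kullback–Leibler divergence between transition rows as the distortion, and the Gibbs-style assignment update p(c|s) ∝ p(c) exp(−β·KL) are assumptions made for illustration. Here β plays the role of the single free parameter, trading partition uncertainty against fidelity (larger β yields harder partitions).

```python
import numpy as np

def voi_aggregate(P, n_groups, beta, n_iters=100, seed=0):
    """Softly partition the states of a Markov chain with transition
    matrix P into n_groups groups.  Illustrative sketch only: the
    distortion is the KL divergence between a state's transition row
    and a group's aggregated row, and beta (the free parameter) trades
    partition uncertainty against fidelity."""
    rng = np.random.default_rng(seed)
    n, eps = P.shape[0], 1e-12
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()
    # Random soft assignment q[s, c] = p(group c | state s).
    q = rng.random((n, n_groups))
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        pc = pi @ q                                # group weights p(c)
        rows = (q * pi[:, None]).T @ P             # aggregated rows, (k, n)
        rows /= rows.sum(axis=1, keepdims=True) + eps
        # KL(P[s] || rows[c]) for every state/group pair.
        kl = np.array([[np.sum(P[s] * np.log((P[s] + eps) / (rows[c] + eps)))
                        for c in range(n_groups)] for s in range(n)])
        # Gibbs-style update: p(c|s) proportional to p(c) exp(-beta * KL).
        q = (pc[None, :] + eps) * np.exp(-beta * kl)
        q /= q.sum(axis=1, keepdims=True)
    return q
```

On a small chain with a clear block structure (states whose transition rows are nearly identical), taking the argmax of the returned soft assignment at a sufficiently large β recovers the blocks; sweeping β traces out partitions of varying coarseness, which is the role the free parameter plays in the abstract.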
Keywords: Markov chains; value of information; aggregation; model reduction; dynamics reduction; information theory
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Sledge, I.J.; Príncipe, J.C. Reduction of Markov Chains Using a Value-of-Information-Based Approach. Entropy 2019, 21, 349.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Entropy EISSN 1099-4300, published by MDPI AG, Basel, Switzerland.