Open Access Article
Entropy 2018, 20(2), 51; https://doi.org/10.3390/e20020051

Minimising the Kullback–Leibler Divergence for Model Selection in Distributed Nonlinear Systems

1 Australian Centre for Field Robotics, The University of Sydney, Sydney NSW 2006, Australia
2 Complex Systems Research Group, The University of Sydney, Sydney NSW 2006, Australia
3 Centre for Autonomous Systems, University of Technology Sydney, Ultimo NSW 2007, Australia
* Author to whom correspondence should be addressed.
Received: 21 December 2017 / Revised: 17 January 2018 / Accepted: 18 January 2018 / Published: 23 January 2018
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
Abstract

The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems.
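To make the idea of scoring candidate coupling directions by transfer entropy concrete, the following self-contained Python sketch simulates two unidirectionally coupled Lorenz systems and compares a crude binned transfer-entropy estimate in both directions. This is a minimal illustration, not the authors' implementation: the coupling form, the quantile-binned plug-in estimator, and all parameter values (eps, lag, n_bins, step size) are assumptions made for illustration, whereas the paper derives its scoring function analytically from the KL-divergence decomposition.

```python
import numpy as np

def coupled_lorenz(n_steps=60000, dt=0.005, eps=2.0,
                   sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Euler-integrate two Lorenz systems where system X drives system Y
    through its x-component (hypothetical coupling form, for illustration)."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(3)
    y = rng.standard_normal(3)
    xs, ys = np.empty(n_steps), np.empty(n_steps)
    for t in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        dy = np.array([sigma * (y[1] - y[0]) + eps * (x[0] - y[0]),  # X -> Y coupling
                       y[0] * (rho - y[2]) - y[1],
                       y[0] * y[1] - beta * y[2]])
        x = x + dt * dx
        y = y + dt * dy
        xs[t], ys[t] = x[0], y[0]
    return xs[5000:], ys[5000:]  # drop the initial transient

def binned_te(source, target, lag=5, n_bins=8):
    """Crude plug-in transfer entropy TE(source -> target) in nats, from
    quantile-binned histograms of (target future, target past, source past)."""
    s_past, t_past = source[:-lag], target[:-lag]
    t_future = target[lag:]
    def discretise(v):
        edges = np.quantile(v, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
        return np.digitize(v, edges)
    a, b, c = discretise(t_future), discretise(t_past), discretise(s_past)
    def entropy(*cols):
        _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))
    # TE = H(future, past) + H(past, source) - H(past) - H(future, past, source)
    return entropy(a, b) + entropy(b, c) - entropy(b) - entropy(a, b, c)

xs, ys = coupled_lorenz()
print("score for edge X -> Y:", binned_te(xs, ys))  # expected to score higher
print("score for edge Y -> X:", binned_te(ys, xs))
```

In a full structure-learning setting, a score of this kind would be evaluated for each candidate edge (or parent set) of the DAG and passed to a search subroutine, as outlined in the abstract.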
Keywords: Kullback–Leibler divergence; model selection; information theory; transfer entropy; stochastic interaction; nonlinear systems; complex networks; state space reconstruction
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Cliff, O.M.; Prokopenko, M.; Fitch, R. Minimising the Kullback–Leibler Divergence for Model Selection in Distributed Nonlinear Systems. Entropy 2018, 20, 51.


