Open Access Article
Entropy 2011, 13(8), 1518-1532; doi:10.3390/e13081518

A Risk Profile for Information Fusion Algorithms

K.P. Nelson, B.J. Scannell and H. Landau

Raytheon Integrated Defense Systems, 235 Presidential Way, Woburn, MA 01801, USA
Raytheon Integrated Defense Systems, 2461 S. Clark St., Suite 1000, Arlington, VA 22202, USA
Author to whom correspondence should be addressed.
Received: 17 May 2011 / Revised: 4 August 2011 / Accepted: 11 August 2011 / Published: 17 August 2011
(This article belongs to the Special Issue Tsallis Entropy)


E.T. Jaynes, originator of the maximum entropy interpretation of statistical mechanics, emphasized that there is an inevitable trade-off between the conflicting requirements of robustness and accuracy for any inferencing algorithm. This is because robustness requires discarding information in order to reduce the sensitivity to outliers. The principle of nonlinear statistical coupling, which is an interpretation of the Tsallis entropy generalization, can be used to quantify this trade-off. The coupled-surprisal, −ln_κ(p) ≡ −(p^κ − 1)/κ, is a generalization of the Shannon surprisal, or logarithmic scoring rule, given a forecast p of a true event by an inferencing algorithm. The coupling parameter κ = 1 − q, where q is the Tsallis entropy index, is the degree of nonlinear coupling between statistical states. Positive (negative) values of nonlinear coupling decrease (increase) the surprisal information metric and thereby bias the risk in favor of decisive (robust) algorithms relative to the Shannon surprisal (κ = 0). We show that translating the average coupled-surprisal to an effective probability is equivalent to using the generalized mean of the true-event probabilities as a scoring rule. The metric is used to assess the robustness, accuracy, and decisiveness of a fusion algorithm. We use a two-parameter fusion algorithm to combine input probabilities from N sources: the generalized mean parameter α varies the degree of smoothing, and raising the result to a power N^β, with β between 0 and 1, provides a model of correlation between the sources.
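To make the scoring rule concrete, the sketch below implements the coupled-surprisal and its translation into an effective probability, which reduces algebraically to the generalized (power) mean of the true-event probabilities with exponent κ. It also includes one plausible reading of the two-parameter fusion rule; the function names (coupled_surprisal, effective_probability, fuse) and the exact placement of the N^β exponent are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def coupled_surprisal(p, kappa):
    """Coupled-surprisal -ln_kappa(p) = -(p**kappa - 1)/kappa.
    Recovers the Shannon surprisal -ln(p) in the limit kappa -> 0."""
    p = np.asarray(p, dtype=float)
    if kappa == 0.0:
        return -np.log(p)
    return -(p**kappa - 1.0) / kappa

def effective_probability(true_event_probs, kappa):
    """Translate the average coupled-surprisal back into a probability.
    Algebraically this equals the generalized mean of the true-event
    probabilities, (mean(p**kappa))**(1/kappa); geometric mean at kappa = 0."""
    p = np.asarray(true_event_probs, dtype=float)
    avg = coupled_surprisal(p, kappa).mean()
    if kappa == 0.0:
        return np.exp(-avg)                      # geometric mean
    return (1.0 - kappa * avg) ** (1.0 / kappa)  # = generalized mean

def fuse(source_probs, alpha, beta):
    """Hypothetical two-parameter fusion of N source distributions.
    source_probs: array of shape (N sources, H hypotheses).
    The generalized mean with exponent alpha smooths across sources;
    raising to the power N**beta (beta in [0, 1]) models correlation."""
    P = np.asarray(source_probs, dtype=float)
    N = P.shape[0]
    if alpha == 0.0:
        mean_alpha = np.exp(np.log(P).mean(axis=0))        # geometric mean
    else:
        mean_alpha = np.mean(P**alpha, axis=0) ** (1.0 / alpha)
    fused = mean_alpha ** (N**beta)
    return fused / fused.sum()                             # renormalize over hypotheses

# Scoring a set of forecasts assigned to events that actually occurred:
forecasts = [0.9, 0.7, 0.95, 0.6]
print(effective_probability(forecasts, kappa=-0.5))  # robust metric, penalizes overconfidence
print(effective_probability(forecasts, kappa=0.0))   # Shannon surprisal / geometric mean
print(effective_probability(forecasts, kappa=0.5))   # decisive metric, tolerant of errors
```

Note that with α = 0 and β = 1 the fusion step becomes the normalized product of the N source probabilities, i.e., independent Bayesian fusion, while β = 0 treats the sources as fully redundant; intermediate β interpolates between these two correlation extremes.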
Keywords: Tsallis entropy; proper scoring rules; information fusion; machine learning
This is an open access article distributed under the Creative Commons Attribution License (CC BY 3.0).

Share & Cite This Article

MDPI and ACS Style

Nelson, K.P.; Scannell, B.J.; Landau, H. A Risk Profile for Information Fusion Algorithms. Entropy 2011, 13, 1518-1532.

