Open Access Article
Entropy 2016, 18(2), 51; doi:10.3390/e18020051

Classification Active Learning Based on Mutual Information

J. Sourati, M. Akcakaya, J.G. Dy, T.K. Leen and D. Erdogmus

1. Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA
2. Department of Electrical and Computer Engineering, University of Pittsburgh, 3700 O’Hara Street, Pittsburgh, PA 15261, USA
3. National Science Foundation, 4201 Wilson Boulevard, Arlington, VA 22230, USA
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed.
Academic Editors: Badong Chen and Jose C. Principe
Received: 15 December 2015 / Revised: 28 January 2016 / Accepted: 1 February 2016 / Published: 5 February 2016
(This article belongs to the Special Issue Information Theoretic Learning)

Abstract

Selecting a subset of samples to label from a large pool of unlabeled data points, such that a sufficiently accurate classifier is obtained using a reasonably small training set, is a challenging yet critical problem. It is challenging because solving it involves cumbersome combinatorial computations, and critical because labeling is an expensive and time-consuming task, so the number of required labels should be kept as small as possible. While information-theoretic objectives, such as the mutual information (MI) between the labels, have been used successfully in sequential querying, it is not straightforward to generalize these objectives to batch mode: the evaluation and optimization of functions that are trivial in individual querying settings become intractable for many objectives when multiple queries must be selected at once. In this paper, we develop a framework with efficient ways of evaluating and maximizing the MI between labels as an objective for batch-mode active learning. The proposed framework reduces the computational complexity from a cost whose polynomial order grows with the batch size, when no approximation is applied, down to linear cost. Its performance is evaluated on data sets from several fields, showing that the proposed framework leads to efficient active learning on most of them.
Keywords: active learning; mutual information; submodular maximization; classification
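
The abstract couples an MI objective with submodular maximization (see the keywords), where a greedy selection rule makes batch querying tractable. As a rough illustration of that idea, and not the authors' algorithm, the following minimal Python sketch greedily builds a batch by maximizing a log-determinant kernel score, a standard submodular, entropy-like surrogate; the RBF kernel, the surrogate, and all function names are assumptions for illustration only.

```python
# Minimal sketch: greedy batch selection under a submodular surrogate.
# Illustrative only -- not the method from Sourati et al. (2016).
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_batch(X_pool, batch_size, gamma=1.0, noise=1e-2):
    """Greedily pick a batch maximizing a log-det (entropy-like) score.

    log det of a kernel matrix is a submodular score (the differential
    entropy of a Gaussian, up to constants), used here as an assumed
    stand-in for the paper's MI objective.
    """
    selected = []
    for _ in range(batch_size):
        best_i, best_score = None, -np.inf
        for i in range(len(X_pool)):
            if i in selected:
                continue
            idx = selected + [i]
            K = rbf_kernel(X_pool[idx], X_pool[idx], gamma)
            K += noise * np.eye(len(idx))   # regularize for numerical stability
            score = np.linalg.slogdet(K)[1] # log-determinant of the kernel matrix
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

# Usage: pick 5 points to query from a random 2-D pool.
rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 2))
print(greedy_batch(pool, batch_size=5))
```

The design point this sketch illustrates is the one the abstract's complexity claim rests on: instead of scoring every size-k batch (a cost that grows polynomially in the pool size with the batch size in the exponent), a greedy rule scores candidates one at a time, at linear cost per round, and for monotone submodular objectives the classical Nemhauser et al. result guarantees the greedy batch achieves at least a (1 − 1/e) fraction of the optimal score.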
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Sourati, J.; Akcakaya, M.; Dy, J.G.; Leen, T.K.; Erdogmus, D. Classification Active Learning Based on Mutual Information. Entropy 2016, 18, 51.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
