
Mixture of Experts with Entropic Regularization for Data Classification

Department of Engineering Science, Andres Bello University, Santiago 7500971, Chile
Department of Engineering Informatics, Catholic University of Temuco, Temuco 4781312, Chile
Department of Computer Sciences, Pontifical Catholic University of Chile, Santiago 7820436, Chile
Author to whom correspondence should be addressed.
Entropy 2019, 21(2), 190;
Received: 4 January 2019 / Revised: 4 February 2019 / Accepted: 15 February 2019 / Published: 18 February 2019
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
Today, there is growing interest in automatic classification across a variety of tasks, such as weather forecasting, product recommendation, intrusion detection, and people recognition. "Mixture-of-experts" is a well-known classification technique: a probabilistic model consisting of local expert classifiers weighted by a gating network, typically based on softmax functions, that learns complex patterns in the data. In this scheme, each data point tends to be handled by a single expert; as a result, training can be misguided on real datasets in which complex data need to be explained by multiple experts. In this work, we propose a variant of the regular mixture-of-experts model in which the classification cost is regularized by the Shannon entropy of the gating network's output in order to avoid a "winner-takes-all" behavior of the gating network. Experiments on several real datasets show the advantage of our approach, with improvements in mean accuracy of 3–6% on some of them. In future work, we plan to embed feature selection into this model.
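The core idea of the abstract can be sketched as follows: the training objective combines the usual classification loss with a term rewarding higher Shannon entropy in the gating distribution, discouraging winner-takes-all gate outputs. This is a minimal illustrative sketch, not the paper's implementation; the function names and the regularization weight `lam` are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gate_entropy(gate_probs, eps=1e-12):
    # Shannon entropy of the gating distribution, per sample.
    return -(gate_probs * np.log(gate_probs + eps)).sum(axis=-1)

def regularized_loss(class_loss, gate_probs, lam=0.1):
    # Classification loss minus lam * mean gate entropy:
    # minimizing this objective favors higher-entropy gates,
    # i.e., it penalizes winner-takes-all gating.
    return class_loss - lam * gate_entropy(gate_probs).mean()

# A uniform gate over 4 experts has maximal entropy log(4), so at equal
# classification loss it yields a lower regularized objective than a
# near-one-hot gate.
p_uniform = softmax(np.zeros((1, 4)))
p_peaked = softmax(np.array([[10.0, 0.0, 0.0, 0.0]]))
```

Under this formulation, the gradient of the entropy term pushes gate logits toward a flatter distribution, so multiple experts keep receiving learning signal for the same data point.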
Keywords: mixture-of-experts; regularization; entropy; classification

MDPI and ACS Style

Peralta, B.; Saavedra, A.; Caro, L.; Soto, A. Mixture of Experts with Entropic Regularization for Data Classification. Entropy 2019, 21, 190.
