Maximum Entropy Learning with Deep Belief Networks
Abstract
Conventionally, the maximum likelihood (ML) criterion is applied to train a deep belief network (DBN). We present a maximum entropy (ME) learning algorithm for DBNs, designed specifically to handle limited training data. Maximizing the entropy of the parameters in the DBN yields better generalization, less bias toward the training data distribution, and more robustness to over-fitting than ML learning. Results on text classification and object recognition tasks demonstrate that the ME-trained DBN outperforms the ML-trained DBN when training data is limited.
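The contrast between the ML and ME criteria can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a hypothetical objective in which a log-likelihood term is augmented by the Shannon entropy of the (normalized) parameter vector, so that flatter, less data-committed parameters score higher — the intuition the abstract describes. The function names, the normalization scheme, and the weight `lam` are all assumptions for illustration.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def me_objective(log_likelihood, params, lam=0.1):
    """Hypothetical ME-style objective: data log-likelihood plus an
    entropy regularizer on the normalized parameter magnitudes.
    A larger entropy term favors flatter parameters, i.e. less
    bias toward the (possibly limited) training data."""
    p = np.abs(params) / np.sum(np.abs(params))
    return log_likelihood + lam * entropy(p)

# With the likelihood held fixed, a flat parameter vector is
# preferred over a sharply peaked one under this objective.
flat = np.full(4, 0.25)
peaked = np.array([0.97, 0.01, 0.01, 0.01])
print(me_objective(0.0, flat) > me_objective(0.0, peaked))
```

Setting `lam = 0` recovers a pure ML-style objective, making the two criteria directly comparable in this toy setting.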
Lin, P.; Fu, S.-W.; Wang, S.-S.; Lai, Y.-H.; Tsao, Y. Maximum Entropy Learning with Deep Belief Networks. Entropy 2016, 18, 251.