Average Contrastive Divergence for Training Restricted Boltzmann Machines
Abstract: This paper studies the contrastive divergence (CD) learning algorithm and proposes a new algorithm for training restricted Boltzmann machines (RBMs). We show that CD is a biased estimator of the log-likelihood gradient and analyze this bias. We then propose a new learning algorithm, average contrastive divergence (ACD), for training RBMs; it is an improvement on the traditional CD algorithm. Experimental results show that the new algorithm approximates the log-likelihood gradient more closely than traditional CD and outperforms it in practice.
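To ground the discussion, the standard CD-1 update that the paper takes as its baseline can be sketched as follows. This is a minimal illustration of CD-1 for a Bernoulli RBM, not the paper's ACD method; all names (`cd1_update`, `W`, `b`, `c`) and the learning-rate value are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.1, rng=None):
    """One CD-1 parameter update for a Bernoulli RBM (illustrative sketch).

    W: (n_visible, n_hidden) weight matrix; b: visible bias; c: hidden bias.
    v0: batch of binary visible vectors, shape (batch, n_visible).
    """
    rng = rng or np.random.default_rng(0)
    # Positive phase: hidden unit probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: a single Gibbs step (the source of CD's bias).
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # CD-1 approximation of the log-likelihood gradient.
    batch = v0.shape[0]
    dW = (v0.T @ ph0 - v1.T @ ph1) / batch
    db = (v0 - v1).mean(axis=0)
    dc = (ph0 - ph1).mean(axis=0)
    return W + lr * dW, b + lr * db, c + lr * dc
```

Because the negative phase uses only one Gibbs step rather than a sample from the model distribution, the resulting gradient estimate is biased, which is the property the paper analyzes.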
Ma, X.; Wang, X. Average Contrastive Divergence for Training Restricted Boltzmann Machines. Entropy 2016, 18, 35.