Open Access Article

A New Improved Learning Algorithm for Convolutional Neural Networks

School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Processes 2020, 8(3), 295; https://doi.org/10.3390/pr8030295
Received: 24 January 2020 / Revised: 14 February 2020 / Accepted: 26 February 2020 / Published: 4 March 2020
(This article belongs to the Special Issue Neural Computation and Applications for Sustainable Energy Systems)
The back-propagation (BP) algorithm is usually used to train convolutional neural networks (CNNs) and has made great progress in image classification. It updates the weights by gradient descent, and the farther a sample is from the target, the greater its contribution to the weight update. However, the influence of samples that are classified correctly but lie close to the classification boundary is diminished. This paper defines the classification confidence as the degree to which a sample belongs to its correct category and divides the samples of each category into danger and safe samples according to a dynamic classification confidence threshold. A new learning algorithm is then presented that penalizes the loss function with the danger samples rather than with all samples, so that the CNN pays more attention to danger samples and learns effective information more accurately. The experimental results on the MNIST dataset and three sub-datasets of CIFAR-10 showed that, for the MNIST dataset, the accuracy of the non-improved CNN reached 99.246%, while that of PCNN reached 99.3%; for the three sub-datasets of CIFAR-10, the accuracies of the non-improved CNN are 96.15%, 88.93%, and 94.92%, respectively, while those of PCNN are 96.44%, 89.37%, and 95.22%, respectively.
Keywords: convolutional neural networks; loss function; MNIST; CIFAR-10
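
The abstract describes the method only at a high level. The following minimal PyTorch sketch illustrates one plausible reading of it, assuming that the classification confidence is the softmax probability assigned to the true class, that the dynamic threshold is supplied per training step, and that the penalty is an extra cross-entropy term over the danger samples weighted by an assumed hyperparameter lam. It is a sketch of the idea, not the authors' implementation.

import torch
import torch.nn.functional as F

def penalized_loss(logits, targets, threshold, lam=0.5):
    """Cross-entropy plus an extra penalty computed only on 'danger' samples.

    logits:    (N, C) raw network outputs
    targets:   (N,) ground-truth class indices
    threshold: scalar classification-confidence threshold (assumed to be
               updated dynamically elsewhere during training)
    lam:       weight of the penalty term (assumed hyperparameter)
    """
    probs = F.softmax(logits, dim=1)
    # classification confidence: probability assigned to the correct class
    confidence = probs.gather(1, targets.unsqueeze(1)).squeeze(1)

    # danger samples: low confidence, i.e. close to the classification boundary
    danger = confidence < threshold

    base_loss = F.cross_entropy(logits, targets)
    if danger.any():
        # penalize the loss with the danger samples only, so the network
        # pays more attention to them
        penalty = F.cross_entropy(logits[danger], targets[danger])
        return base_loss + lam * penalty
    return base_loss

# Illustrative usage with random data:
# logits = torch.randn(32, 10); targets = torch.randint(0, 10, (32,))
# loss = penalized_loss(logits, targets, threshold=0.9)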
MDPI and ACS Style

Yang, J.; Zhao, J.; Lu, L.; Pan, T.; Jubair, S. A New Improved Learning Algorithm for Convolutional Neural Networks. Processes 2020, 8, 295.

