Open Access Article
Algorithms 2018, 11(3), 28; https://doi.org/10.3390/a11030028

Modified Convolutional Neural Network Based on Dropout and the Stochastic Gradient Descent Optimizer

Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University, Jixie Building 405 of West Campus, Huaxi District, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Received: 25 December 2017 / Revised: 5 March 2018 / Accepted: 5 March 2018 / Published: 7 March 2018
(This article belongs to the Special Issue Advanced Artificial Neural Networks)

Abstract

After analyzing the problems convolutional neural networks (CNNs) encounter in extracting convolutional features, this study proposes a modified CNN algorithm based on dropout and the stochastic gradient descent (SGD) optimizer (MCNN-DS) to improve the feature recognition rate and reduce the time cost of CNNs. The MCNN-DS has a quadratic CNN structure and adopts the rectified linear unit as the activation function to avoid the vanishing-gradient problem and accelerate convergence. To address overfitting, the algorithm inserts dropout layers into the fully connected and output layers and uses an SGD optimizer to minimize the cross-entropy loss. This study used the MNIST, HCL2000, and EnglishHand datasets as benchmark data and analyzed the performance of the SGD optimizer under different learning parameters; the proposed algorithm exhibited good recognition performance when the learning rate was in [0.05, 0.07]. The performances of WCNN, MLP-CNN, SVM-ELM, and MCNN-DS were compared. Statistical results showed the following: (1) on the MNIST benchmark, MCNN-DS achieved a high recognition rate of 99.97%, and its time cost was merely 21.95% of that of MLP-CNN and 10.02% of that of SVM-ELM; (2) compared with SVM-ELM, MCNN-DS improved the average recognition rate by 2.35% on the HCL2000 benchmark at only 15.41% of the time cost; (3) on the EnglishHand test set, the lowest recognition rate of the algorithm was 84.93%, the highest was 95.29%, and the average was 89.77%.
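The abstract describes the architecture only at a high level. As a rough illustration, the sketch below wires up a small CNN with ReLU activations, dropout before the fully connected and output layers, and an SGD optimizer minimizing cross-entropy, in PyTorch. The layer sizes, kernel sizes, dropout rate, the class name MCNN_DS, and the reading of "quadratic CNN structure" as two convolution/pooling stages are all assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of the MCNN-DS idea from the abstract: ReLU activations,
# dropout on the fully connected and output layers, SGD minimizing cross
# entropy. All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class MCNN_DS(nn.Module):  # hypothetical name, after the paper's acronym
    def __init__(self, num_classes: int = 10, dropout_p: float = 0.5):
        super().__init__()
        # Two convolution/pooling stages; reading "quadratic CNN structure"
        # as a two-stage structure is an assumption.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),
            nn.ReLU(),                 # ReLU avoids saturating gradients
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout_p),     # dropout before the fully connected layer
            nn.Linear(64 * 7 * 7, 256),  # assumes 28x28 inputs (MNIST-sized)
            nn.ReLU(),
            nn.Dropout(dropout_p),     # dropout before the output layer
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = MCNN_DS()
# Learning rate taken from the [0.05, 0.07] range the abstract reports as effective.
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()  # cross entropy, as minimized in the paper

# One illustrative training step on random MNIST-shaped data.
x = torch.randn(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```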
Keywords: convolutional neural network; activation function; dropout; SGD optimizer

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Yang, J.; Yang, G. Modified Convolutional Neural Network Based on Dropout and the Stochastic Gradient Descent Optimizer. Algorithms 2018, 11, 28.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
