Open Access | Feature Paper | Article

Improving Classification Performance of Softmax Loss Function Based on Scalable Batch-Normalization

1 School of Communication & Information Engineering, Shanghai University, Shanghai 200444, China
2 Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(8), 2950; https://doi.org/10.3390/app10082950
Received: 26 March 2020 / Revised: 14 April 2020 / Accepted: 21 April 2020 / Published: 24 April 2020
(This article belongs to the Special Issue Advanced Intelligent Imaging Technology II)
Convolutional neural networks (CNNs) have achieved great success on computer vision tasks, especially image classification. With improvements in network structures and loss functions, image classification performance has risen steadily. The classic softmax + cross-entropy loss, computed from the output probability of the ground-truth class, has been the norm for training neural networks for years; the network's weights are then updated from the gradient of this loss. However, after several epochs of training, the back-propagated errors usually become almost negligible. To address this, we propose adding batch normalization with an adjustable scale after the network output to alleviate the vanishing gradient problem in deep learning. Experimental results show that our method significantly improves the final classification accuracy across different network structures and also outperforms many other improved classification losses.
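The abstract describes the method only at a high level, so the following is a minimal sketch of the idea: batch-normalize the classifier's output logits and multiply them by an adjustable scale before the standard softmax cross-entropy. PyTorch is an assumption (the page does not state the framework), and the class name ScaledBNSoftmaxLoss and the scale value 10.0 are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class ScaledBNSoftmaxLoss(nn.Module):
    """Hypothetical sketch: cross-entropy over batch-normalized, rescaled logits.

    BatchNorm (affine disabled) keeps the logits in a controlled range, and the
    explicit scale `s` sets how sharp the softmax is, so its gradients do not
    saturate late in training.
    """

    def __init__(self, num_classes: int, scale: float = 10.0):
        super().__init__()
        # Normalize each logit dimension over the batch; with affine=False the
        # magnitude is governed solely by the adjustable scale below.
        self.bn = nn.BatchNorm1d(num_classes, affine=False)
        self.scale = scale
        self.ce = nn.CrossEntropyLoss()

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        return self.ce(self.scale * self.bn(logits), targets)


# Usage: drop-in replacement for a plain nn.CrossEntropyLoss() in a training loop.
if __name__ == "__main__":
    criterion = ScaledBNSoftmaxLoss(num_classes=10, scale=10.0)
    logits = torch.randn(32, 10, requires_grad=True)  # raw network outputs
    targets = torch.randint(0, 10, (32,))             # ground-truth class indices
    loss = criterion(logits, targets)
    loss.backward()  # gradients flow through the BN layer and the scale
```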
Keywords: convolutional neural network; loss function; gradient descent

MDPI and ACS Style

Zhu, Q.; He, Z.; Zhang, T.; Cui, W. Improving Classification Performance of Softmax Loss Function Based on Scalable Batch-Normalization. Appl. Sci. 2020, 10, 2950. https://doi.org/10.3390/app10082950

