Article

ReLU Neural Networks and Their Training

1 Institute of AI for Industries, Nanjing 211100, China
2 Faculty of Engineering, University of Toyama, Toyama-shi 930-8555, Japan
* Authors to whom correspondence should be addressed.
Mathematics 2026, 14(1), 39; https://doi.org/10.3390/math14010039
Submission received: 24 October 2025 / Revised: 8 December 2025 / Accepted: 17 December 2025 / Published: 22 December 2025
(This article belongs to the Special Issue New Advances and Challenges in Neural Networks and Applications)

Abstract

Among various activation functions, the Rectified Linear Unit (ReLU) has become the most widely adopted due to its computational simplicity and effectiveness in mitigating the vanishing-gradient problem. In this work, we investigate the advantages of employing ReLU as the activation function and establish its theoretical significance. Our analysis demonstrates that ReLU-based neural networks possess the universal approximation property. In addition, we provide a theoretical explanation for the phenomenon of neuron death in ReLU-based neural networks. We further validate the effectiveness of this explanation through empirical experiments.
Keywords: ReLU; neural network; training of neural networks; datasets
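The neuron-death ("dying ReLU") phenomenon mentioned in the abstract can be illustrated with a minimal NumPy sketch. This example is not taken from the paper; the pre-activation values are hypothetical and serve only to show that ReLU outputs zero and passes back zero gradient whenever its input is negative, so a neuron that stays negative on all inputs stops updating.

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def relu_grad(x):
    # Sub-gradient of ReLU: 0 for negative pre-activations, 1 for positive ones.
    return (x > 0).astype(float)

# Hypothetical pre-activations of one neuron across a mini-batch.
# Because every value is negative, the output and the gradient are both zero,
# so no weight update flows back through this neuron ("dead" neuron).
pre_activations = np.array([-3.2, -0.7, -1.5, -0.1])
print(relu(pre_activations))       # [0. 0. 0. 0.]
print(relu_grad(pre_activations))  # [0. 0. 0. 0.]
```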

Share and Cite

MDPI and ACS Style

Luo, G.; Wang, X.; Zhao, W.; Tao, S.; Tang, Z. ReLU Neural Networks and Their Training. Mathematics 2026, 14, 39. https://doi.org/10.3390/math14010039


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
