Open Access Review
A Survey of Loss Functions in Deep Learning
by Caiyi Li 1,2, Kaishuai Liu 1,2 and Shuai Liu 1,2,*

Prof. Shuai Liu (M'22, SM'25) is a Distinguished Member of the China Computer Federation and a Lifetime Member of the Chinese Association for Artificial Intelligence. He is a full-time professor with the Institute of Interdisciplinary Studies, Hunan Normal University, and director of the disciplinary research team on education informatization and intelligence. His main research domains include computer vision, multimodal information processing, and educational informatization and intelligence. He has published more than 80 papers in respected journals and conferences, with more than 7000 citations and a Google Scholar h-index of 44. He serves as an editor for journals such as Information Fusion and the International Journal of Pattern Recognition and Artificial Intelligence.
1 School of Educational Science, Hunan Normal University, Changsha 410081, China
2 Institute of Interdisciplinary Studies, Hunan Normal University, Changsha 410081, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2417; https://doi.org/10.3390/math13152417
Submission received: 16 June 2025 / Revised: 20 July 2025 / Accepted: 21 July 2025 / Published: 27 July 2025
Abstract
Deep learning (DL), as a cutting-edge technology in artificial intelligence, has significantly impacted fields such as computer vision and natural language processing. The loss function determines the convergence speed and accuracy of a DL model and has a crucial impact on algorithm quality and model performance. However, most existing studies focus on improving loss functions for specific problems and lack a systematic summary and comparison, especially for computer vision and natural language processing tasks. Therefore, this paper reclassifies and summarizes the loss functions used in DL and proposes a new category of metric loss. Furthermore, it conducts a fine-grained division of regression loss, classification loss, and metric loss, elaborating on existing problems and improvements. Finally, emerging trends toward compound loss and generative loss are anticipated. This paper provides a new perspective on the division of loss functions and a systematic reference for researchers in the DL field.
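To make the three categories concrete, the following is a minimal sketch (assuming PyTorch, which is not prescribed by the survey) of one representative loss from each family: mean squared error for regression, cross-entropy for classification, and triplet margin loss as a metric loss. The tensors are hypothetical placeholders used only for illustration.

import torch
import torch.nn as nn

# Regression loss: penalizes the squared difference between predictions
# and continuous-valued targets.
pred = torch.randn(8, 1)
target = torch.randn(8, 1)
mse = nn.MSELoss()(pred, target)

# Classification loss: compares predicted class logits with integer labels.
logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
ce = nn.CrossEntropyLoss()(logits, labels)

# Metric loss: shapes an embedding space so each anchor lies closer to its
# positive sample than to its negative sample by at least a margin.
anchor = torch.randn(8, 64)
positive = torch.randn(8, 64)
negative = torch.randn(8, 64)
triplet = nn.TripletMarginLoss(margin=1.0)(anchor, positive, negative)

print(f"MSE: {mse:.4f}  Cross-entropy: {ce:.4f}  Triplet: {triplet:.4f}")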