Article

Delving into Unsupervised Hebbian Learning from Artificial Intelligence Perspectives

1 Department of Neuroscience, College of Biomedicine, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Kowloon, Hong Kong 999077, China
2 CityU Shenzhen Research Institute, 8 Yuexing 1st Road, Shenzhen Hi-Tech Industrial Park, Nanshan District, Shenzhen 518057, China
* Authors to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2025, 7(4), 143; https://doi.org/10.3390/make7040143
Submission received: 25 August 2025 / Revised: 30 September 2025 / Accepted: 6 November 2025 / Published: 11 November 2025
(This article belongs to the Section Learning)

Abstract

Unsupervised Hebbian learning is a biologically inspired algorithm designed to extract representations from input images, which can subsequently support supervised learning. It presents a promising alternative to traditional artificial neural networks (ANNs). Many attempts have focused on enhancing Hebbian learning by incorporating more biologically plausible components. In contrast, we draw inspiration from recent advances in ANNs to rethink and further improve Hebbian learning in three interconnected aspects. First, we investigate the issue of overfitting in Hebbian learning and emphasize the importance of selecting an optimal number of training epochs, even in unsupervised settings. In addition, we discuss the risks and benefits of anti-Hebbian learning for model performance, and our visualizations reveal that synapses resembling the input images do not necessarily reflect effective learning. Then, we explore the impact of different activation functions on Hebbian representations, highlighting the benefits of properly utilizing negative values. Furthermore, motivated by the success of large pre-trained language models, we propose a novel approach for leveraging unlabeled data from other datasets. Unlike conventional pre-training in ANNs, experimental results demonstrate that merging trained synapses from different datasets leads to improved performance. Overall, our findings offer fresh perspectives on enhancing the future design of Hebbian learning algorithms.
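To make the setting concrete, the core idea behind unsupervised Hebbian learning can be illustrated with a minimal Oja-style update, in which synapses are strengthened in proportion to the correlation between input and output, with a decay term that keeps the weights bounded. This is a generic sketch of the technique, not the authors' specific algorithm; the learning rate, layer sizes, and toy data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_epoch(W, X, lr=0.01):
    """One epoch of a simple Oja-style unsupervised Hebbian update.

    W: (n_hidden, n_features) synaptic weight matrix
    X: (n_samples, n_features) inputs (e.g., flattened images)
    """
    for x in X:
        y = W @ x  # hidden activations
        # Hebbian term (outer product of output and input) minus
        # an Oja decay term that prevents unbounded weight growth
        W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W

# Toy usage: learn synapses from random stand-in "images"
X = rng.normal(size=(200, 64))
W = rng.normal(scale=0.1, size=(16, 64))
W = hebbian_epoch(W, X)
```

The learned rows of `W` can then be visualized as image-like filters or fed to a supervised read-out layer; an anti-Hebbian variant would subtract, rather than add, the correlation term for selected units.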
Keywords: Hebbian learning; overfitting; anti-Hebbian learning; activation functions; hybrid synapses

Share and Cite

MDPI and ACS Style

Lin, W.; Piao, Z.; Fung, C.C.A. Delving into Unsupervised Hebbian Learning from Artificial Intelligence Perspectives. Mach. Learn. Knowl. Extr. 2025, 7, 143. https://doi.org/10.3390/make7040143

AMA Style

Lin W, Piao Z, Fung CCA. Delving into Unsupervised Hebbian Learning from Artificial Intelligence Perspectives. Machine Learning and Knowledge Extraction. 2025; 7(4):143. https://doi.org/10.3390/make7040143

Chicago/Turabian Style

Lin, Wei, Zhixin Piao, and Chi Chung Alan Fung. 2025. "Delving into Unsupervised Hebbian Learning from Artificial Intelligence Perspectives" Machine Learning and Knowledge Extraction 7, no. 4: 143. https://doi.org/10.3390/make7040143

APA Style

Lin, W., Piao, Z., & Fung, C. C. A. (2025). Delving into Unsupervised Hebbian Learning from Artificial Intelligence Perspectives. Machine Learning and Knowledge Extraction, 7(4), 143. https://doi.org/10.3390/make7040143
