Hey Guillermo, I truly appreciate your comment on this article. Writing it was as much a learning process for me, and I may not have fully understood every concept contained therein. Nevertheless, I will try to answer your question. Definitions seem to be part of the answer.

I found a topic page on ScienceDirect about Hebbian learning, and there is far more information on the subject there:
https://www.sciencedirect.com/topics/engineering/hebbian-learning

One of the top articles says the following: “Hebbian learning is widely accepted in the fields of psychology, neurology, and neurobiology. It is one of the fundamental premises of neuroscience. The LMS (least mean square) algorithm of Widrow and Hoff is the world’s most widely used adaptive algorithm, fundamental in the fields of signal processing, control systems, communication systems, pattern recognition, and artificial neural networks. These learning paradigms are very different. Hebbian learning is unsupervised. LMS learning is supervised. However, a form of LMS can be constructed to perform unsupervised learning and, as such, LMS can be used in a natural way to implement Hebbian learning. Combining the two paradigms creates a new unsupervised learning algorithm, Hebbian-LMS.”
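To make that quoted distinction concrete, here is a minimal sketch of the two update rules (my own toy illustration, not from the article) in Python with NumPy. The Hebbian update only needs the input and output activity, while the LMS update also needs a desired target value:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)         # presynaptic activity (input vector)
w = rng.normal(size=5)         # synaptic weights
eta = 0.01                     # learning rate

# Hebbian update: unsupervised, driven only by correlated activity.
y = w @ x                      # postsynaptic activity
w_hebb = w + eta * y * x       # "cells that fire together wire together"

# LMS (Widrow-Hoff) update: supervised, driven by the error to a target d.
d = 1.0                        # desired output (a teacher/label signal)
w_lms = w + eta * (d - y) * x  # nudge the weights toward the target
```

The only difference between the two update lines is the teacher signal `d`, which as far as I understand it is exactly what makes LMS supervised and the plain Hebbian rule unsupervised.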

As such, it could be thought possible to synthesise ideas from these different areas (as attempted here).

I remember having to deal with PCA in my machine learning classes, but keep in mind that I am very much a newbie (doing a BA and just starting out), so to a large extent I am regurgitating information here. Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components.
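As a rough sketch of what that definition means in practice (again my own toy example, with made-up data), PCA boils down to centring the data, diagonalising its covariance matrix, and projecting onto the resulting orthogonal directions:

```python
import numpy as np

# Toy data: 100 observations of 3 possibly correlated variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=100)  # make two columns correlated

# Center the data, then diagonalise its covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort directions by explained variance (largest first) and project.
order = np.argsort(eigenvalues)[::-1]
components = eigenvectors[:, order]  # the orthogonal transformation
scores = Xc @ components             # linearly uncorrelated principal components
```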

Hebbian theory itself is a neuroscientific theory claiming that an increase in synaptic efficacy arises from a presynaptic cell’s repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process.

In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning.

PCA, on the other hand, is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations.

Returning to your question: why is Hebbian learning included in the definition of unsupervised learning?

The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become ‘associated’ so that activity in one facilitates activity in the other.

As such, there may be something to be said for this associative assumption. When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.

PCA is a statistical procedure, and Hebbian learning is a theoretical framework. As such, Hebbian learning is a theoretical framework from neuroscience that some (certainly not all) unsupervised learning algorithms are based on.
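One concrete example that bridges the two, as far as I understand it, is Oja’s rule: a Hebbian-style update with a normalising term, whose weight vector tends towards the first principal component of the data. The sketch below is my own toy illustration of that idea, reusing the correlated data from the PCA example above:

```python
import numpy as np

# Oja's rule: a Hebbian update with a decay term that keeps the weights bounded.
# On centred data, the weight vector tends towards the first principal component.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=1000)
Xc = X - X.mean(axis=0)

w = rng.normal(size=3)
eta = 0.01
for _ in range(20):                  # a few passes over the data
    for x in Xc:
        y = w @ x                    # postsynaptic activity
        w += eta * y * (x - y * w)   # Hebbian term minus normalisation

w /= np.linalg.norm(w)  # compare with the top eigenvector of the covariance matrix
```

So at least in this special case, a purely Hebbian, unsupervised update ends up computing something very close to what PCA computes statistically.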

Does this make sense? I would love to hear your reply, and I might of course be wrong. This is very much a learning process for me, as I mentioned in the article.

Again, thank you kindly for commenting on my article, Guillermo.
