Hyperdimensional Computing

An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors

Comments as I read this.

This paper is a more persuasive introduction to ideas of brain organization that I first became aware of in a much earlier paper by Kanerva. Accordingly, new ideas:

distribution of information in address bus
Kanerva argues that addresses must be redundant for robustness. I would add (and perhaps he does later) that different people will code different concept pairs at different Hamming distances. (But he soon shows that almost all pairs of random vectors are separated by a normalized Hamming distance very near 0.5; a sketch checking this follows these notes.)
Combinatorics
His presentation is better and more concrete.
Wow
“For example, a point C half-way between unrelated points A and B is very closely related to both, and another half-way point D can be unrelated to the first, C.”
autoassociative
I now understand ‘autoassociative’, which I think he described earlier without naming it. It is clearly significant.
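Two of the claims above are easy to check numerically: that almost all pairs of random vectors sit at normalized Hamming distance 0.5, and the half-way point behavior in the quote. A minimal sketch in Python with numpy; the dimension (10,000, as in the paper), the sample count, and the seed are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000

# 1. Almost all pairs of random vectors are ~0.5 apart (normalized Hamming).
vecs = rng.integers(0, 2, size=(100, d), dtype=np.uint8)
dists = [np.mean(vecs[i] != vecs[j])
         for i in range(100) for j in range(i + 1, 100)]
print(f"pairwise: mean={np.mean(dists):.4f}, std={np.std(dists):.4f}")
# mean ~0.5000, std ~0.005; the spread shrinks as 1/(2*sqrt(d))

# 2. The half-way points of the quote. A and B differ on a set S of ~d/2
# positions; C takes half of S from B, and D takes the complementary half
# from A, so both are half-way between A and B.
A, B = vecs[0], vecs[1]
S = np.flatnonzero(A != B)
half = S[: len(S) // 2]
C, D = A.copy(), B.copy()
C[half] = B[half]          # C agrees with A outside `half`, with B inside it
D[half] = A[half]          # D is the complementary half-way point
print("d(C,A) =", np.mean(C != A))   # ~0.25: closely related to A
print("d(C,B) =", np.mean(C != B))   # ~0.25: closely related to B
print("d(C,D) =", np.mean(C != D))   # ~0.5: C and D are unrelated
```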
The main question I have (while only 20% into the paper) is whether there is any topology to the vector space: is there correlation between the bits that code closely related concepts? Low-dimensional vector spaces encourage such topology; high-dimensional spaces need it much less. Still, I think there is topology.

The reasons he gives for randomness (sec. 4) are unpersuasive. They do partly convince me that the question is not important and that randomness is convenient to theorize about. I think he means merely to show that the theory has great explanatory power even without topology.

Section 5.3 is a sugar-coated presentation of the mathematician’s abstract vector space, contextualized for the current purposes.

Section 6.3 proposes a scheme for representing small sets of represented items. Such sets cannot be large. If we are asked to name all the English letters, almost anyone will recite the alphabet. I think this is a linked list, but with about 6 links, not 26: ("abcdefg" "hijklmnop" "qrs" "tuv" "wxyz"). I also have mental pointers from letters into these 5 groups. As I consult a dictionary for a word, I am slightly aware of this structure as I consider which of two letters comes first in the alphabet. If they are in two different groups, then I have some total ordering of those groups accessible to me. If they are in the same group, I may have to recite that group. (A toy sketch of this procedure follows.)
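A toy sketch of that lookup procedure (my own construction, not from the paper), using the groups listed above:

```python
# Letters carry pointers into their group; the groups have an accessible
# total order; only a same-group comparison forces reciting the group.
GROUPS = ["abcdefg", "hijklmnop", "qrs", "tuv", "wxyz"]
GROUP_OF = {c: g for g, letters in enumerate(GROUPS) for c in letters}

def comes_first(x, y):
    gx, gy = GROUP_OF[x], GROUP_OF[y]
    if gx != gy:                      # different groups: group order settles it
        return x if gx < gy else y
    group = GROUPS[gx]                # same group: "recite" (scan) the group
    return x if group.index(x) < group.index(y) else y

print(comes_first("m", "c"))   # different groups -> 'c'
print(comes_first("k", "n"))   # same group, recite "hijklmnop" -> 'k'
```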


Stimulated ideas:

A random mix of the bits from two addresses, each bit keeping its position within the address, is unlikely to be between any other two ‘populated’ addresses. Might the brain exploit this? (A sketch below checks this.)
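A minimal sketch of this, reading ‘between’ in the Hamming sense (the two distances add up exactly); the dimension, population size, and seed are my own choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10_000, 50
pop = rng.integers(0, 2, size=(n, d), dtype=np.uint8)   # 'populated' addresses

# Z mixes X and Y bitwise, each bit keeping its position.
X, Y = pop[0], pop[1]
take_from_x = rng.integers(0, 2, size=d, dtype=bool)
Z = np.where(take_from_x, X, Y)

def ham(a, b):
    return int(np.sum(a != b))

def between(a, z, b):
    # z lies on a shortest Hamming path from a to b iff the distances add exactly
    return ham(a, z) + ham(z, b) == ham(a, b)

print(between(X, Z, Y))   # True by construction
# How many *other* stored pairs is Z between? Expect zero.
others = [(i, j) for i in range(n) for j in range(i + 1, n) if (i, j) != (0, 1)]
print(sum(between(pop[i], Z, pop[j]) for i, j in others))
```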

Variation on several of Kanerva’s comments: you can have many points in this hyperspace, with the property that the midpoint between any two of them is close to both but close to no others! Furthermore, the original population need not be specially chosen; a population using random normal deviates as coordinates suffices.
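A sketch of the normal-deviate version; dimension, population size, and seed are again my own choices:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 10_000, 1_000
pop = rng.normal(size=(n, d))           # coordinates are random normal deviates

mid = (pop[0] + pop[1]) / 2             # midpoint of two stored points
dists = np.linalg.norm(pop - mid, axis=1)
print(f"to its two parents: {dists[0]:.1f}, {dists[1]:.1f}")   # ~sqrt(d/2) ~ 71
print(f"to nearest non-parent: {dists[2:].min():.1f}")         # ~sqrt(3d/2) ~ 122
```

The parents sit near sqrt(d/2) while every other point sits near sqrt(3d/2), so the midpoint is unambiguously close to its two parents and to nothing else.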

I have a hunch, but little evidence, that this pattern may be replicated a few times, as different memory systems or ‘address spaces’: word memory, face memory, food memory. Address spaces can link to each other, of course. See ‘horse’ here.


Non-localized comments.

Kanerva’s earlier papers turned some readers off because it sounded like he was saying that the brain was just like a computer; this was unpopular and kept many from exploring his ideas then. This paper suggests, somewhat seriously, that real computers should perhaps be built this way. I think this may be a very clever rhetorical trick.

Many years ago I wrote a program to seek a pair of words, from a list of many thousand 64-bit words, that differed by no more than 4 bits. It sounds quadratic in the list size, but I cut that cost by roughly the square root of the list size.
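One standard trick that yields such a speedup (not necessarily the method I used then): split each 64-bit word into 5 chunks; two words differing in at most 4 bits must, by pigeonhole, agree exactly on at least one chunk, so full comparisons are needed only within equal-chunk buckets. A sketch:

```python
from collections import defaultdict
import random

CHUNKS = [(0, 13), (13, 13), (26, 13), (39, 13), (52, 12)]   # (shift, width)

def near_pairs(words, max_dist=4):
    found = set()
    for shift, width in CHUNKS:
        mask = (1 << width) - 1
        buckets = defaultdict(list)
        for idx, w in enumerate(words):
            buckets[(w >> shift) & mask].append(idx)
        for group in buckets.values():            # compare only within buckets
            for a in range(len(group)):
                for b in range(a + 1, len(group)):
                    i, j = group[a], group[b]
                    if bin(words[i] ^ words[j]).count("1") <= max_dist:
                        found.add((i, j))
    return found

# Demo: random words plus one planted close pair.
random.seed(3)
words = [random.getrandbits(64) for _ in range(10_000)]
words.append(words[0] ^ 0b1011)        # differs from words[0] in 3 bits
print(near_pairs(words))               # {(0, 10000)}
```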

Hyperdimensionality invokes the same Hilbert space magic as ultra-wideband radio.