Abstract:
We consider two aspects on the efficiency of Kanerva´s sparse distributed memory (SDM). First, it has been suggested that in certain situations it would make sense to use different activation probabilities for writing and reading in SDM. However, here we model such a situation and find that, at least approximately, it is optimal to use the same probabilities for writing and reading. Second, and more important, we investigate the scaling up of SDM, in connection with some observations made by Sjodin (1997). It is shown that the original SDM (here in Jaeckel´s version) does not scale up if the reading address is disturbed, but that this can be remedied by using a kind of SDM with sparse address vectors, showing that SDM could well be used as a clean-up memory in computing with large patterns