Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that is able to retrieve a piece of data from only a tiny sample of itself. Such memories are very effective in de-noising or removing interference from the input, and can be used to determine whether a given input is “known” or “unknown”.
In artificial neural networks, examples include the variational autoencoder, the denoising autoencoder, and the Hopfield network.
In reference to computer memory, the idea of associative memory is also referred to as Content-addressable memory (CAM).
The net is said to recognize a “known” vector if it produces a pattern of activation on the output units that is the same as one of the vectors stored in it.
Background
Traditional memory
Traditional memory stores data at a unique address and can recall the data upon presentation of the complete unique address.
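The contrast with content-addressable recall can be sketched with an ordinary key-value store, where nothing short of the complete, exact address retrieves the data (the addresses and values below are made up for illustration):

```python
# Traditional (address-based) memory sketch: data is stored at unique
# addresses and recalled only by presenting the complete address.
memory = {0x10: "banana", 0x20: "monkey"}

# The full, exact address recalls the data.
print(memory[0x10])  # banana

# A partial or corrupted address fails outright rather than degrading
# gracefully -- the property autoassociative memories improve on.
print(memory.get(0x11))  # None
```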
Autoassociative memory
Autoassociative memories are capable of retrieving a piece of data upon presentation of only partial information from that piece of data. Hopfield networks[1] have been shown[2] to act as autoassociative memory, since they are capable of remembering data by observing a portion of that data.
Iterative Autoassociative Net
In some cases, an auto-associative net does not reproduce a stored pattern the first time around, but if the result of the first showing is input to the net again, the stored pattern is reproduced.[3] There are three further kinds: the recurrent linear auto-associator,[4] the Brain-State-in-a-Box net,[5] and the discrete Hopfield net. The Hopfield network is the most well-known example of an autoassociative memory.
Hopfield Network
Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, and they have been shown to act as autoassociative memories, since they are capable of remembering data by observing a portion of that data.[2]
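As a concrete illustration (not drawn from the cited sources), a discrete Hopfield net can be sketched in a few lines of NumPy. The patterns, sizes, and function names below are illustrative. Weights are set by a Hebbian outer-product rule, and recall repeatedly thresholds the net input and feeds the result back in, as in the iterative nets described above:

```python
# Minimal Hopfield-style autoassociative recall, assuming bipolar (+1/-1)
# patterns; synchronous updates are used here for brevity, although the
# classic discrete Hopfield net updates units asynchronously.
import numpy as np

def train(patterns):
    """Hebbian outer-product rule with zeroed diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, x, steps=10):
    """Threshold the net input repeatedly until a fixed point (or step limit)."""
    x = x.copy()
    for _ in range(steps):
        new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(new, x):
            break
        x = new
    return x

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                   [1, 1, 1, 1, -1, -1, -1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]           # corrupt one bit of the first pattern
print(recall(W, noisy))        # recovers stored[0] exactly
```

Presented with the corrupted vector, the net settles back to the stored pattern, which is the autoassociative behaviour described above.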
Heteroassociative memory
Heteroassociative memories, on the other hand, can recall an associated piece of data from one category upon presentation of data from another category. For example, associative recall may transform the pattern “banana” into the different pattern “monkey.”[6]
Bidirectional associative memory (BAM)
Bidirectional associative memories (BAM)[7] are artificial neural networks that have long been used for performing heteroassociative recall.
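A minimal sketch of heteroassociative recall in the style of Kosko's BAM follows, assuming bipolar pattern pairs; the pairs and names below are illustrative, not taken from the cited paper:

```python
# BAM sketch: a correlation matrix links an X layer to a Y layer, and
# recall bounces activations between the two layers until they stabilize.
import numpy as np

def train_bam(pairs):
    """Correlation matrix W = sum_k x_k y_k^T over the stored pairs."""
    return sum(np.outer(x, y) for x, y in pairs).astype(float)

def recall_bam(W, x, steps=10):
    """Alternate forward (X -> Y) and backward (Y -> X) threshold passes."""
    for _ in range(steps):
        y = np.where(x @ W >= 0, 1, -1)       # forward pass
        x_new = np.where(W @ y >= 0, 1, -1)   # backward pass
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

pairs = [(np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
         (np.array([-1, -1, 1, 1]), np.array([-1, 1, 1]))]
W = train_bam(pairs)
_, y = recall_bam(W, pairs[0][0])
print(y)   # the Y-pattern associated with the first X-pattern
```

Presenting an X-layer pattern retrieves its associated Y-layer pattern, i.e. a pattern from a different category, which is the heteroassociative behaviour described above.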
Example
For example, the sentence fragments presented below are sufficient for most English-speaking adults to recall the missing information.
- "To be or not to be, that is _____."
- "I came, I saw, _____."
Many readers will realize the missing information is in fact:
- "To be or not to be, that is the question."
- "I came, I saw, I conquered."
This demonstrates the capability of autoassociative networks to recall a whole pattern from some of its parts.
References
- ^ Hopfield, J.J. (1 April 1982). "Neural networks and physical systems with emergent collective computational abilities". Proceedings of the National Academy of Sciences of the United States of America. 79 (8): 2554–8. Bibcode:1982PNAS...79.2554H. doi:10.1073/pnas.79.8.2554. PMC 346238. PMID 6953413.
- ^ a b Coppin, Ben (2004). Artificial Intelligence Illuminated. Jones & Bartlett Learning. ISBN 978-0-7637-3230-1.
- ^ Jugal, Kalita (2014). "Pattern Association or Associative Networks" (PDF). CS 5870: Introduction to Artificial Neural Networks. University of Colorado.
- ^ Thomas, M.S.C.; McClelland, J.L. (2008). "Connectionist models of cognition" (PDF). In Sun, R. (ed.). The Cambridge handbook of computational psychology. Cambridge University Press. pp. 23–58. CiteSeerX 10.1.1.144.6791. doi:10.1017/CBO9780511816772.005. ISBN 9780521674102.
- ^ Golden, Richard M. (1986-03-01). "The "Brain-State-in-a-Box" neural model is a gradient descent algorithm". Journal of Mathematical Psychology. 30 (1): 73–80. doi:10.1016/0022-2496(86)90043-X. ISSN 0022-2496.
- ^ Hirahara, Makoto (2009), "Associative Memory", in Binder, Marc D.; Hirokawa, Nobutaka; Windhorst, Uwe (eds.), Encyclopedia of Neuroscience, Berlin, Heidelberg: Springer, p. 195, doi:10.1007/978-3-540-29678-2_392, ISBN 978-3-540-29678-2
- ^ Kosko, B. (1988). "Bidirectional Associative Memories" (PDF). IEEE Transactions on Systems, Man, and Cybernetics. 18 (1): 49–60. doi:10.1109/21.87054.