Associative memory (AM) is an important part of the theory of neural networks. Although the Hebbian learning rule is commonly used to model associative memory, its linear outer-product form easily leads to spurious states. In this work, nonlinear function constitution and dynamic synapses are proposed to suppress spurious states in associative memory neural networks. The model of the dynamic connection weights and the updating scheme for the neuron states are presented. Nonlinear function constitution generalizes the conventional Hebbian learning rule into a nonlinear outer-product method. The simulation results show that both nonlinear function constitution and dynamic synapses can effectively enlarge the attractive basin. Compared with existing memory models, an associative memory neural network with nonlinear function constitution can both enlarge the attractive basin and increase the storage capacity. Owing to dynamic synapses, the attractive basins of the stored patterns are further enlarged while the attractive basins of the spurious states are diminished; however, the storage capacity is decreased by using dynamic synapses.
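The contrast between the linear outer-product rule and a nonlinear variant can be sketched in a small Hopfield-style simulation. The abstract does not specify the paper's nonlinear function, so `tanh` below is an illustrative stand-in, and all sizes are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 3  # neurons and stored patterns (illustrative sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Conventional Hebbian learning: linear outer-product rule.
W_linear = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W_linear, 0.0)

# A nonlinear outer-product variant: pass the summed outer products
# through a saturating function (tanh is an assumed stand-in here,
# not the paper's specific "nonlinear function constitution").
W_nonlinear = np.tanh(patterns.T @ patterns / np.sqrt(N))
np.fill_diagonal(W_nonlinear, 0.0)

def recall(W, probe, steps=10):
    """Synchronous sign-threshold updates of the neuron states."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt a stored pattern and try to recall it with both rules.
probe = patterns[0].copy()
flip = rng.choice(N, size=5, replace=False)
probe[flip] *= -1
recovered_lin = recall(W_linear, probe)
recovered_non = recall(W_nonlinear, probe)
```

With only 3 patterns in 64 neurons, both rules are well below capacity, so a probe 5 bits away from a stored pattern typically falls back into its attractive basin; the paper's point is that the nonlinear rule widens those basins relative to the linear one.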
This paper concentrates on Kanerva’s Sparse Distributed Memory (SDM) as a kind of artificial neural net and associative memory. SDM captures some basic properties of human long-term memory. SDM may be regarded as a three-layered feed-forward neural net: input layer neurons only copy the input vectors, hidden layer neurons have radial basis functions, and output layer neurons have linear basis functions. The hidden layer is initialized randomly in the basic SDM algorithm. The aim of the paper is to study the behaviour of Kanerva’s model on real input data (large input vectors, correlated data). A modification of the basic model is introduced and tested.
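The three-layer reading of SDM can be sketched as follows: a randomly initialized hidden layer of hard-location addresses, a radial-basis-style activation by Hamming distance, and a linear output layer summing counters. The dimensions and activation radius below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100   # address/data dimension in bits (illustrative)
m = 500   # number of hard locations, i.e. hidden-layer neurons
r = 45    # Hamming-radius activation threshold (assumed value)

# Hidden layer: randomly initialized hard-location addresses.
hard_addresses = rng.integers(0, 2, size=(m, n))
# Counters feeding the linear output layer.
counters = np.zeros((m, n), dtype=int)

def activate(address):
    """Radial-basis-style selection: locations within Hamming radius r."""
    dists = np.count_nonzero(hard_addresses != address, axis=1)
    return dists <= r

def write(address, data):
    """Increment/decrement the counters of all activated locations."""
    counters[activate(address)] += np.where(data == 1, 1, -1)

def read(address):
    """Linear output layer: sum counters of activated locations, threshold."""
    sums = counters[activate(address)].sum(axis=0)
    return (sums >= 0).astype(int)

# Store a pattern at its own address, then recall from a noisy cue.
data = rng.integers(0, 2, size=n)
write(data, data)
cue = data.copy()
noise = rng.choice(n, size=5, replace=False)
cue[noise] ^= 1
recalled = read(cue)
```

Because the cue activates a set of hard locations that largely overlaps the set touched by the write, the summed counters recover the stored bits despite the noise, which is the autoassociative behaviour the abstract describes.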
A unifying picture of the hermeneutical approach to schizophrenia is given by combining the philosophical and the experimental/computational approaches. Computational models of associative learning and recall in the cortico-hippocampal system help to understand the circuits of normal and pathological behavior.