Computational Models Of Learning And Memory

Models of the neural basis of learning and memory (also known as computational models or neural networks) are computer programs that simulate the performance of various brain structures in learning and memory, based on the theorized functions of those structures. Such models can produce results that match behavioral findings from human and animal studies. They test the ability of existing theories to reproduce behavioral results and also make novel predictions that lead to new experiments.

The basic structure of a neural network is a set of units (or nodes) that represent neurons in the brain. The connections between these units represent the synaptic connections between neurons. The connections are plastic and can change their strength according to some learning algorithm. The nodes are grouped into layers with both forward and backward connections. The backward connections may be recurrent, whereby active units feed their activity back onto other active units and form a pattern of activity for each input. In this way, inputs can enter the network, cause changes in connections within the layers, and produce an output that represents the learned behavior.
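As a minimal sketch of this architecture (the layer sizes, learning rate, and Hebbian rule here are illustrative assumptions, not drawn from any particular model), the units can be represented as vectors of activity and the synapses as a plastic weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network: input units project to output units
# through a plastic weight matrix; weights change with a Hebbian rule.
n_in, n_out = 4, 3
weights = rng.normal(0.0, 0.1, size=(n_out, n_in))

def step(x, lr=0.1):
    """Propagate one input pattern and apply a Hebbian weight update."""
    global weights
    y = np.tanh(weights @ x)          # output-layer activity
    weights += lr * np.outer(y, x)    # strengthen co-active connections
    return y

x = np.array([1.0, 0.0, 1.0, 0.0])
before = weights.copy()
y = step(x)
# Connections involving the active input units have changed strength.
assert not np.allclose(before, weights)
```

Repeated presentations of the same input strengthen the same connections, so the network's response to that input grows: a toy version of learning through synaptic plasticity.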

Models can be applied to learning and memory at different levels of specificity. At the most general level, psychological models propose a mathematical formula as a mechanism for learning but do not tie this mechanism to any particular brain region. At an intermediate level, qualitative models consist of a central concept or metaphor that attempts to capture the essence of a brain region’s function. Finally, neural network models apply mechanisms to particular brain regions based on the known circuitry thought to be responsible for a particular learning task.

1. Psychological Models

The most influential psychological model of learning and memory is the Rescorla–Wagner (1972) model. This model proposed a mathematical equation which could account for learning simple associative tasks such as classical conditioning. In classical conditioning, a neutral stimulus (the conditioned stimulus or CS) is paired with a response-evoking stimulus (the unconditioned stimulus or US). Early in training, a reflexive response is elicited by the US while the CS elicits no response. Following repeated pairings of the CS and US, the CS begins to elicit a learned response (the conditioned response or CR).

The Rescorla–Wagner (1972) model assumes that the change in association between the CS and the US is a function of the difference between the US and an animal’s expectation of the US given all CSs present on the trial. Because the discrepancy, or ‘error,’ between the animal’s expectations and what actually occurs drives learning in this theory, the theory is referred to as an ‘error-correcting’ learning rule.

With this single, simple error-correction mechanism, the Rescorla–Wagner model can account for many phenomena of classical conditioning, including simple acquisition, discrimination learning, blocking, and conditioned inhibition. However, while the model accounts for many learning phenomena, it does not address where in the brain its proposed mechanism acts.
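The Rescorla–Wagner rule is compact enough to simulate directly. The following sketch (the function name and parameter values are illustrative) applies the trial-level update ΔV = αβ(λ − ΣV) and reproduces blocking: pretraining one CS leaves little prediction error to support learning about a second CS added later.

```python
# Hypothetical implementation of the Rescorla-Wagner trial-level update;
# the function name and parameter values are illustrative.
def rw_update(V, present, us, alpha=0.3, beta=1.0, lam=1.0):
    """One trial: change each present CS's strength by alpha*beta*error."""
    expectation = sum(V[cs] for cs in present)
    error = (lam if us else 0.0) - expectation   # prediction error
    for cs in present:
        V[cs] += alpha * beta * error
    return V

# Blocking: pretrain A alone, then train the compound AB.
V = {"A": 0.0, "B": 0.0}
for _ in range(50):
    rw_update(V, ["A"], us=True)       # A comes to predict the US fully
for _ in range(50):
    rw_update(V, ["A", "B"], us=True)  # little error remains on AB trials
# B acquires almost no associative strength: prior learning to A blocks B.
```

Because all CSs present on a trial share a single error term, whatever one stimulus already predicts is unavailable to support learning about the others, which is exactly the blocking result.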

2. Neural Network Models

Psychological models such as the Rescorla–Wagner rule have been incorporated into subsequent neural network models, allowing the mechanism of error correction to be implemented within the context of specific neural circuits.

Neural network models have been put forth to address various kinds of learning and memory: motor learning in the cerebellum; declarative memory, spatial learning, and classical conditioning in the hippocampus; and conditioned fear in the amygdala.

3. Motor Learning And The Cerebellum

Motor learning is best understood through work on classical eyeblink conditioning in the rabbit. In classical eyeblink conditioning, a tone CS is paired with a corneal air puff US. The US elicits a reflexive eyeblink. Following repeated pairings of the tone and air puff, the tone begins to elicit a conditioned eyeblink response. Not only is a conditioned eyeblink elicited, but it is timed so that maximal eyelid closure occurs at about the time of the air puff arrival. In other words, not only is it learned that the tone predicts the air puff, but that the tone predicts the air puff’s arrival at a specific point in time.

The acquisition of a conditioned eyeblink has been found to involve the cerebellum and brainstem structures (for review see Anderson and Steinmetz 1994). All of the cerebellar models of eyeblink conditioning discussed here have a foundation in the earlier theories of Marr (1969) and Albus (1971), who proposed roles for the cerebellum in motor learning and in the error correction of ongoing movements. The cerebellum has two distinct input pathways: the mossy and parallel fiber system, and the climbing fiber system. These two inputs converge both on Purkinje cells in the cerebellar cortex and on neurons in the deep cerebellar nuclei. Marr and Albus proposed that these two systems are involved in motor learning, with their convergence altering Purkinje cell responsivity to parallel fiber inputs. Marr (1969) proposed that Purkinje cell responsivity would be facilitated by conjunctive activity in the parallel and climbing fiber systems, while Albus (1971) proposed that it would be reduced. Albus’s (1971) prediction was demonstrated by Ito and Kano (1982), who discovered the phenomenon of long-term depression (LTD), in which simultaneous activity in the parallel and climbing fiber systems causes a reduction of Purkinje cell responsivity to parallel fiber inputs. This change in Purkinje cell responsivity is the basis for many computational models of cerebellar function in motor learning.

As mentioned previously, the cerebellum and brainstem structures are necessary for eyeblink conditioning. The tone information is transmitted to the cerebellum via the mossy and parallel fiber pathway, while the air puff information is transmitted via the climbing fiber pathway. The convergence of tone and air puff information in the cerebellum causes an increase in activity in the deep cerebellar interpositus nucleus, which is associated with the conditioned response. Models of eyeblink conditioning have attempted to simulate the action of the cerebellum through a variety of mechanisms.

Some cerebellar models work on the error-correction principle of the Rescorla–Wagner (1972) model (Gluck et al. 1995). In this model, the cerebellar CR output is fed back to the inferior olive, which is the source of the climbing fiber input that conveys air puff information to the cerebellum. This feedback is inhibitory, so as learning proceeds, the cerebellum increasingly suppresses the air puff signal before it enters the cerebellum. In this way, the error signal of the air puff is reduced as the response to the tone is learned.

Other cerebellar models are more concerned with the real-time aspects of the conditioned response. Not only is a conditioned eyeblink learned, but the response is well timed with respect to the arrival of the US air puff, and models account for this timing through various mechanisms. Some simulate the real-time characteristics of conditioned eyeblinks with a feedback loop similar in nature to the error-correction loop to the inferior olive (Gluck et al. 1995): cerebellar CR output is returned to the pontine nuclei, the source of the mossy fiber inputs, so that the convergence of CS and CR information allows for the proper timing of the CR. Another model produces well-timed CRs through recurrent connections within the cerebellar cortex between granule, Golgi, and Purkinje cells (Buonomano and Mauk 1994). A third type of real-time cerebellar model is based on tapped delay lines (Desmond and Moore 1988), in which the well-timed CR is due to a series of synapses arranged so that the CS signal’s arrival at the Purkinje cells is delayed until the proper time to coincide with the US.
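The tapped-delay-line idea can be illustrated with a small simulation (the chain length, learning rate, and update rule are illustrative assumptions, not Desmond and Moore’s actual implementation): CS onset activates a chain of units, one per time step, and error-correcting learning strengthens only the tap that is co-active with the US, so the learned response peaks at the trained interval.

```python
import numpy as np

# Hypothetical tapped-delay-line sketch: CS onset starts a chain of units,
# each becoming active one time step later; error-correcting learning
# strengthens only the tap co-active with the US, so the learned response
# peaks at the trained CS-US interval. All values are illustrative.
n_taps = 20
us_step = 12                  # the US arrives 12 time steps after CS onset
w = np.zeros(n_taps)          # synaptic strength of each delayed tap

for _ in range(30):           # training trials
    for t in range(n_taps):   # sweep through the delay line on each trial
        us = 1.0 if t == us_step else 0.0
        w[t] += 0.5 * (us - w[t])   # strengthen the tap paired with the US

# The learned response profile over time peaks at the US arrival step.
assert int(np.argmax(w)) == us_step
```

Because each tap is active at a fixed delay after CS onset, associative strength accumulates only where CS-driven activity coincides with the US, and the timing of the response falls out of the anatomy of the delay line rather than any explicit clock.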

A more physiologically accurate model of the cerebellum was put forth by Fiala et al. (1996), who modeled cerebellar timing of the conditioned eyeblink response using a model of the metabotropic glutamate receptor in cerebellar Purkinje cells. Metabotropic glutamate receptors elicit slow responses by way of phosphoinositide hydrolysis and calcium release, which bridge the time interval between the CS and US onsets. Temporal correlation of the metabotropic responses and climbing fiber inputs produces phosphorylation of AMPA receptors and calcium-dependent potassium channels. Phosphorylation of the AMPA receptors leads to their long-term depression (LTD), while phosphorylation of the calcium-dependent potassium channels reduces the baseline membrane potential and the Purkinje cell firing rate in the CS–US interval. The decrease in Purkinje cell firing releases interpositus cells from inhibition and allows them to fire at the proper time. Overall, the Purkinje cells are responsible for timing the conditioned response while the interpositus is responsible for calibrating the strength of the response.

All of these cerebellar models are based on the same anatomy and connections and can simulate most of the same set of trial-level and real-time characteristics of eyeblink conditioning through various mechanisms. The question which remains is whether all of these mechanisms are physiologically realistic and may be operating in parallel in the cerebellum, or whether only one of these proposed mechanisms is correct. In this way, computational models are driving future empirical studies that should further advance the understanding of the exact mechanisms of motor learning in the cerebellum.

4. Hippocampal Models

Another brain region which has been modeled by many investigators is the hippocampal region. The hippocampal region consists of the hippocampus, dentate gyrus, subiculum, and entorhinal cortex. The hippocampus is thought to be involved in many forms of learning including declarative and episodic memory (the remembering of facts and events), spatial learning, and some forms of classical conditioning.

One of the earliest and most influential models of hippocampal region function was proposed by Marr (1971). Based on the hippocampal anatomy and physiology, Marr sought to infer the information processing capacity of the region. Marr’s basic idea was to distinguish separable roles in memory for the archicortex (including the hippocampus) from the neocortex. He assumed that the chief role of the neocortex was to store large complex event memories. The role of the hippocampus was to be a separate processor which could rapidly store event memories and then allow gradual transfer of this pattern to the neocortex.

Marr’s model consisted of two layers of cells. Inputs cause activity in the first layer of cells (A), which projects onto the second layer (B). The B cells in turn project back to the A cells. All connections between cells are modifiable, but they are simplified to allow only binary on or off values. A stored pattern can be retrieved if, when part of it is presented to the A cells, the evoked activity in the B cells feeds back to complete the original firing pattern on the A cells.

The network described by Marr is a form of autoassociator. An autoassociator network learns to associate an input pattern with an identical output pattern. An autoassociator requires three basic features: (a) a high degree of internal recurrency among the principal cells; (b) strong, sparse synapses from external afferents, which could function as forcing synapses; and (c) plasticity at the synapses between co-active cells. These requirements are satisfied by the known physiology of the CA3 region of the hippocampus.
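These features can be illustrated with a toy autoassociator (a deliberately simplified sketch; the patterns, threshold, and network size are illustrative): binary patterns are stored by Hebbian outer-product learning on recurrent connections, and a partial cue is completed in one recurrent pass.

```python
import numpy as np

# Hypothetical sketch of Marr-style autoassociation: binary (0/1) patterns
# are stored with a Hebbian outer-product rule on recurrent connections,
# and a partial cue is completed through the recurrent weights.
def store(patterns):
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)       # potentiate synapses between co-active cells
    np.fill_diagonal(W, 0)        # no self-connections
    return np.clip(W, 0, 1)       # binary on/off synapses, as in Marr's model

def complete(W, cue, theta=0.5):
    """One recurrent pass: units whose recurrent input exceeds theta turn on."""
    return (W @ cue > theta).astype(float)

p = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
q = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
W = store([p, q])

cue = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)  # partial version of p
assert np.array_equal(complete(W, cue), p)              # pattern completed
```

The recurrency supplies the feedback path, the Hebbian rule supplies the plasticity between co-active cells, and the cue plays the role of the strong external afferents, so each of the three requirements above maps onto one element of the sketch.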

Fast temporary storage in an autoassociator is an important component of an episodic or declarative memory system (Alvarez and Squire 1994, Hasselmo et al. 1996, Hasselmo and Wyble 1997, Murre 1996). It is generally assumed in these models that a relatively small temporary store in the hippocampus interacts with a relatively large long-term storage neocortical system.

For example, in a network model of hippocampal function in episodic memory (Hasselmo and Wyble 1997), a stimulus enters the neocortex via the sensory system and subsequently activates cells in the entorhinal cortex. The entorhinal cortex funnels multimodal sensory inputs to various portions of the hippocampus by forming a very compressed, or fused, representation of the inputs. The dentate gyrus serves as a pattern separator and forms less overlapping representations of the entorhinal activity patterns. These differentiated patterns of activity are sent to the CA3 region of the hippocampus, which serves as a recurrent autoassociator that encodes and retrieves the features. The CA1 region receives inputs directly from the entorhinal cortex and forms representations of these compressed inputs; these CA1 representations are then compared to the recalled CA3 representations. Through these interactions among the entorhinal cortex, dentate gyrus, CA3, and CA1, specific episodic memories can be formed and accurately retrieved. Presentation of a portion of the original input can lead to retrieval of the entire memory through recall of the correct episodic representation by CA3.
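The pattern-separation step attributed to the dentate gyrus can be sketched as a sparse expansion (the layer sizes, random projection, and winner-take-all rule are illustrative assumptions, not the Hasselmo and Wyble implementation): projecting two overlapping input patterns into a much larger layer and keeping only the most active units typically yields representations that overlap less than the inputs do.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dentate-gyrus-like pattern separation: project inputs into a
# much larger layer and keep only the k most active units (sparse coding).
def separate(x, W, k=5):
    h = W @ x
    out = np.zeros_like(h)
    out[np.argsort(h)[-k:]] = 1.0    # winner-take-all: k most active units
    return out

def overlap(a, b):
    """Cosine similarity between two patterns."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

n_in, n_dg = 20, 200
W = rng.normal(size=(n_dg, n_in))    # random divergent projection

x1 = (rng.random(n_in) < 0.5).astype(float)
x2 = x1.copy()
flip = rng.choice(n_in, 3, replace=False)
x2[flip] = 1.0 - x2[flip]            # x2 differs from x1 in only 3 inputs

y1, y2 = separate(x1, W), separate(x2, W)
# The sparse expanded codes are typically far less overlapping than the inputs.
print(overlap(x1, x2), overlap(y1, y2))
```

The expansion from 20 to 200 units mirrors the divergence from entorhinal cortex to dentate gyrus, and the sparsity of the winning set is what drives similar inputs toward dissimilar codes.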

The hippocampus in turn feeds back to the neocortex and initiates activation patterns there. It may activate new cell populations, which are then added to the representation, or it may allow connections to form between active cells in the neocortex. The hippocampus may be required to present memories to the neocortex repeatedly, over some period of time, to allow the neocortex to integrate new knowledge without overwriting the old. This process is termed memory consolidation. Over time, as consolidation occurs, the sensory input is able to activate these neocortical cells directly, without hippocampal intervention, in order to retrieve the stored memory.

Spatial memory is a hippocampal-dependent task in rats (O’Keefe and Nadel 1978), and many models of hippocampal processing in spatial learning have been based on autoassociative models of the hippocampal region. One strategy is to define spatial maps as sets of complex associations representing places (McNaughton and Nadel 1990). In any one place there are many views, depending on which direction the animal is facing. The hippocampal autoassociator would be able to map from one of these views to the full representation of the current place. Given this interpretation, spatial learning need not differ from other types of hippocampal learning.

The hippocampus is also thought to be involved in classical conditioning tasks that require learning about nonreinforced configurations of stimuli, contextual information, or relationships that span short delays. Several recent computational models have focused on possible information-processing roles for the hippocampal region (e.g., Eichenbaum et al. 1992, Gluck and Myers 1993, Myers et al. 1995, Schmajuk and DiCarlo 1991, Sutherland and Rudy 1989). Most of these models assume that while the hippocampus is required for some complicated forms of stimulus association, the neocortex and cerebellum are sufficient for simpler stimulus–stimulus associations such as those that underlie classical conditioning.

The key idea of one of these models, the Gluck and Myers (1993) cortico-hippocampal model, is that the hippocampal region is able to facilitate learning by adapting representations in two ways. First, it is assumed to compress, or make more similar, representations of stimuli that co-occur; second, it is assumed to differentiate, or make less similar, representations of stimuli that are to be mapped to different responses.

This model fits nicely with the hypothesized functions of the entorhinal cortex and dentate gyrus mentioned previously: entorhinal activity patterns are compressions of all the sensory inputs to the hippocampal system, and the dentate gyrus separates these inputs into distinct patterns of activity.

More recent computational models have not only dealt with the hippocampal region in general, but attempted to differentiate the function of the hippocampus from the entorhinal cortex (Eichenbaum et al. 1994, Myers et al. 1995). In this way, these models have served to drive new empirical studies that test the predictions of the effects of different types of selective lesions within the hippocampal region.

It is striking that, while the hippocampus is thought to be involved in a variety of apparently diverse memory types, the autoassociator mechanism can account for each of these memory functions. In this way, the modeling work has produced a more unified understanding of hippocampal function, one consistent across spatial learning, episodic memory, and stimulus representation, that may not have been apparent from the diverse empirical data.

5. Conditioned Fear And The Amygdala

Another brain region which has been modeled is the amygdala. The amygdala is thought to be involved in emotional memories, especially conditioned fear responses. The amygdala, specifically the lateral nucleus, receives sensory inputs from all modalities as well as highly processed sensory information from the association cortices and hippocampus. The fear circuit is best understood for auditory conditioned stimuli. Sensory information reaches the amygdala from the thalamus via two parallel pathways: a direct connection from the extralemniscal areas of the thalamus, and an indirect connection via primary and secondary auditory cortices.

Armony et al. (1995) implemented a model of the conditioned fear network. The architecture of the network is modular: the relevant structures are represented by interconnected modules based on the anatomical connectivity of the amygdala. In the model, the input patterns represent a range of tone frequencies. As a result of training with the input patterns, each unit develops a receptive field; that is, it responds only to a subset of input patterns centered around a best frequency. To simulate conditioning, all the input patterns are presented, with one pattern chosen as the CS and paired with the US. This pairing results in some units showing frequency-specific retuning of their receptive fields, which matches empirical results.
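Frequency-specific retuning of this kind can be sketched in a few lines (the tuning curve, learning rule, and all values are illustrative, not Armony et al.’s implementation): a unit starts with a receptive field peaked away from the CS frequency, and US-driven potentiation of the CS input shifts its best frequency to the CS.

```python
import numpy as np

# Illustrative retuning of a frequency receptive field toward a CS.
freqs = np.arange(10)                         # a range of tone frequencies
best = 3                                      # pre-training best frequency
w = 0.9 * np.exp(-0.5 * (freqs - best) ** 2)  # initial Gaussian-like field

cs = 5                                        # CS frequency paired with the US
for _ in range(20):                           # conditioning trials
    response = w[cs]
    w[cs] += 0.2 * (1.0 - response)           # US-driven potentiation at the CS

# After pairing, the unit's best frequency has shifted to the CS.
assert int(np.argmax(w)) == cs
```

The retuning here is purely a consequence of the paired input being strengthened more than its neighbors, which is the qualitative behavior the Armony model reproduces and which has been observed physiologically.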

6. Future Physiological Models

A criticism of the computational models described above is that while the network nodes capture some of the response characteristics of real neurons, they are too simplistic in their input–output functions to represent the full range of physiological complexity in real neurons. Obviously, many factors such as neurotransmitter types, different receptor types, and membrane dynamics play an essential role in the responsivity and plasticity of neurons. The computational models previously described do not represent these features. Another class of models addresses these issues by modeling the biophysical, structural, and synaptic characteristics of individual neurons (e.g., Fiala et al. 1996). While these physiological models can simulate the action of individual neurons, they do not deal with large numbers of neurons in network connectivity. Future computational models should attempt to account for both network properties of learning and memory while maintaining physiologically precise mechanisms.


Bibliography:

  1. Albus J S 1971 A theory of cerebellar function. Mathematical Biosciences 10: 25–61
  2. Anderson B J, Steinmetz J E 1994 Cerebellar and brainstem circuits involved in classical eyeblink conditioning. Reviews in Neuroscience 5: 251–73
  3. Alvarez P, Squire L 1994 Memory consolidation and the medial temporal lobe: A simple network model. Proceedings of the National Academy of Sciences 91: 7041–5
  4. Armony J L, Servan-Schreiber D, Cohen J D, LeDoux J E 1995 An anatomically constrained neural network model of fear conditioning. Behavioral Neuroscience 109: 246–57
  5. Buonomano D V, Mauk M D 1994 Neural network model of the cerebellum: Temporal discrimination and the timing of motor responses. Neural Computation 6: 38–55
  6. Desmond J E, Moore J W 1988 Adaptive timing in neural networks: The conditioned response. Biological Cybernetics 58: 405–15
  7. Eichenbaum H, Cohen N J, Otto T, Wible C 1992 Memory representation in the hippocampus: Functional domain and functional organization. In: Squire L R, Lynch G, Weinberger N M, McGaugh J L (eds.) Memory Organization and Locus of Change. Oxford University Press, Oxford, UK
  8. Eichenbaum H, Otto T, Cohen N 1994 Two functional components of the hippocampal memory system. Behavioral and Brain Science 17: 449–518
  9. Fiala J, Grossberg S, Bullock D 1996 Metabotropic glutamate receptor activation in cerebellar Purkinje cells as a substrate for adaptive timing of the classically conditioned eye-blink response. Journal of Neuroscience 16: 3760–74
  10. Gluck M A, Myers C E 1993 Hippocampal mediation of stimulus representation: A computational theory. Hippocampus 3: 491–516
  11. Gluck M A, Myers C E, Thompson R F 1995 A computational model of the cerebellum and motor-reflex conditioning. In: Zornetzer S F, Davis J L, Lau C (eds.) An Introduction to Neural and Electronic Networks. Academic Press, San Diego, CA
  12. Hasselmo M E, Wyble B 1997 Simulation of the effects of scopolamine on free recall and recognition in a network model of the hippocampus. Behavioral and Brain Research 89: 1–34
  13. Hasselmo M E, Wyble B, Wallenstein G 1996 Encoding and retrieval of episodic memories: Role of cholinergic and GABAergic modulation in the hippocampus. Hippocampus 6: 693–708
  14. Hinton G 1989 Connectionist learning procedures. Artificial Intelligence 40: 185–234
  15. Ito M, Kano M 1982 Long-lasting depression of parallel-fiber-Purkinje cell transmission induced by conjunctive stimulation of parallel fibers and climbing fibers in the cerebellar cortex. Neuroscience Letters 33: 253–8
  16. Marr D 1969 A theory of cerebellar cortex. Journal of Physiology 202: 437–70
  17. Marr D 1971 Simple memory: A theory for archicortex. Proceedings of the Royal Society, London, Series B 262: 23–81
  18. McNaughton B, Nadel L 1990 Hebb–Marr networks and the neurobiological representation of action in space. In: Gluck M, Rumelhart D (eds.) Neuroscience and Connectionist Theory. Lawrence Erlbaum, Hillsdale, NJ
  19. Murre J 1996 Tracelink: A model of amnesia and consolidation of memory. Hippocampus 6: 675–84
  20. Myers C E, Gluck M A, Granger R 1995 Dissociation of hippocampal and entorhinal function in associative learning: A computational approach. Psychobiology 23: 116–38
  21. O’Keefe J, Nadel L 1978 The Hippocampus as a Cognitive Map. Clarendon Press, Oxford, UK
  22. Rescorla R, Wagner A 1972 A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Black A, Prokasy W (eds.) Classical Conditioning II: Current Research and Theory. Appleton-Century-Crofts, New York
  23. Schmajuk N A, DiCarlo J J 1991 A neural network approach to hippocampal function in classical conditioning. Behavioral Neuroscience 105: 82–110
  24. Sutherland R J, Rudy J W 1989 Configural association theory: The role of the hippocampal formation in learning, memory and amnesia. Psychobiology 17: 129–44