Models Of Cognitive Functions Research Paper


The field of cognitive neuropsychology centers on two coupled goals: to use patterns of cognitive deficits in brain-damaged patients to inform theories and models of how cognitive processes are carried out by the brain, and to apply existing models to explain the specific deficits of individual patients in order to design more effective strategies for remediating those deficits. The roots of this effort can be traced back to the pioneering work of Broca, Wernicke, and Lichtheim in the mid- to late nineteenth century. These neurologists attempted to decompose complex cognitive functions, such as language, into the joint operation of multiple functional ‘centers’ with specific patterns of connectivity between them. Damage either to the centers themselves or to the pathways between them was thought to give rise to distinct patterns of cognitive deficits. Indeed, the traditional scheme used today to categorize patterns of language impairment into distinct clinical syndromes—such as Broca’s aphasia, Wernicke’s aphasia, and transcortical sensory or motor aphasia—derives from the Wernicke–Lichtheim model of the organization of the language system developed in the late nineteenth century (see Fig. 1).




1. Box-And-Arrow Modeling

Although the form of explanation offered by these so-called ‘diagram makers’ was roundly criticized in the early twentieth century, it is echoed in modern-day cognitive neuropsychology in the form of box-and-arrow information-processing models (see Fig. 2). While the functions ascribed to the centers or components are far more specific than in the nineteenth century, the same underlying explanatory logic is applied: the patterns of performance of brain-damaged patients are explained by positing one or more ‘lesions’ to the so-called functional architecture of the cognitive system. Typically, the predictions of the model, both in normal operation and under damage, have consisted of verbal descriptions based on fairly general notions about how the various modules would operate and interact. While these types of predictions may suffice for capturing the more general characteristics of normal and impaired cognitive functioning, they become increasingly unreliable as the model is elaborated to account for more detailed phenomena.





Two recent trends have increased the usefulness of box-and-arrow theorizing within cognitive neuropsychology. First, with improvements in techniques for structural lesion localization in patients and for functional brain imaging in both patients and normal subjects, there has been a more concerted effort to situate the components and pathways in specific brain regions. This is important because information on neuroanatomic localization places strong constraints on how components participate in various tasks and on how the system must be damaged to account for the performance of specific patients.

The second, and perhaps more important, trend has been the development of working computer simulations of cognitive models that can both reproduce the characteristics of normal performance and exhibit the appropriate deficits when damaged in a manner analogous to brain damage. Computational modeling makes it possible to demonstrate the sufficiency of the underlying theory in accounting for the phenomena by making the behavior of a detailed cognitive model explicit. A working simulation guarantees that the underlying theory is neither vague nor internally inconsistent, and the behavior of the simulation can be used to generate specific predictions of the theory.

One example of computational modeling based on box-and-arrow theorizing is the work of Coltheart and co-workers (Coltheart et al. 1993) in simulating a dual-route model of word reading. In the model, one pathway from print to sound applies grapheme-phoneme correspondence rules (e.g., B at the beginning of a word is pronounced /b/), while the other uses memorized whole-word correspondences. (These are the two pathways in Fig. 2 from the ‘Orthographic input buffer’ to the ‘Phonological output buffer’ that bypass the ‘Cognitive System.’) The rule route is effective for regular words (e.g., MINT) and for pronounceable but meaningless pseudowords (e.g., RINT); however, the lexical route is needed to pronounce exception words (e.g., YACHT, PINT) whose pronunciations violate the rules. In Coltheart and co-workers’ implementation, the rule route consists of a collection of template-matching rules that operate left-to-right on the input string and generate single phonemes at fixed intervals. The lexical route is a version of a highly influential model of word recognition developed by Rumelhart and McClelland (1982), known as the Interactive Activation model, which contains a separate processing unit for each word in the vocabulary. Damage to the lexical route yields a reading pattern in which exception words are regularized (e.g., PINT read to rhyme with MINT), analogous to patients with acquired surface dyslexia (Patterson et al. 1985). Conversely, damage to the rule route causes impaired reading of pseudowords relative to words, corresponding to acquired phonological dyslexia (Beauvois and Derouesne 1979).
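The division of labor between the two routes, and the two dyslexic patterns produced by damaging them, can be sketched in a few lines of Python. The toy lexicon, the rule table, and the phoneme notation below are invented for illustration; Coltheart and co-workers' actual implementation uses a far larger rule set and vocabulary.

```python
# Lexical route: memorized whole-word pronunciations (handles exceptions).
# Pronunciations use a made-up phoneme code: PINT is /paInt/, MINT is /mInt/.
LEXICON = {"mint": "mInt", "pint": "paInt", "yacht": "jQt"}

# Rule route: grapheme-phoneme correspondences applied left to right.
GPC_RULES = {"m": "m", "i": "I", "n": "n", "t": "t", "r": "r", "p": "p"}

def rule_route(word):
    """Pronounce by rule alone; exception words come out 'regularized'."""
    return "".join(GPC_RULES.get(ch, "?") for ch in word)

def read_aloud(word, lexical_damaged=False, rule_damaged=False):
    """Lexical lookup wins for known words; the rule route handles
    pseudowords. Damaging a route simulates the corresponding dyslexia."""
    if not lexical_damaged and word in LEXICON:
        return LEXICON[word]
    if not rule_damaged:
        return rule_route(word)
    return None  # neither route available

print(read_aloud("pint"))                        # correct exception: 'paInt'
print(read_aloud("rint"))                        # pseudoword by rule: 'rInt'
# Surface dyslexia: with the lexical route lesioned, PINT is regularized
# to rhyme with MINT.
print(read_aloud("pint", lexical_damaged=True))  # 'pInt'
```

With the rule route lesioned instead, known words survive via the lexicon but pseudowords like RINT fail, the phonological dyslexia pattern.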

2. Connectionist Modeling

Although substantial progress has been made within the framework of box-and-arrow theorizing, many researchers have come to believe that, in order to capture the full range of cognitive and neuropsychological phenomena, a formalism is needed that is based more closely on the style of computation employed by the brain. One such formalism that is widely used is connectionist modeling (see, e.g., McClelland et al. 1986, McLeod et al. 1998, Rumelhart et al. 1986).

In connectionist models—sometimes called neural networks or parallel distributed processing systems—cognitive processes take the form of cooperative and competitive interactions among large numbers of simple, neuron-like processing units (Fig. 3). Typically, each unit has a real-valued activity level, roughly analogous to the firing rate of a neuron. Unit interactions are governed by weighted connections that encode the long-term knowledge of the system and are learned gradually through experience. The activity of some of the units encodes the input to the system; the resulting activity of other units encodes the system’s response to that input. The patterns of activity of the remaining units constitute learned, internal representations that mediate between inputs and outputs. Units and connections generally are not considered to be in one-to-one correspondence with actual neurons and synapses. Rather, connectionist systems attempt to capture the essential computational properties of the vast ensembles of real neuronal elements found in the brain through simulations of smaller networks of units. In this way, the approach is distinct from computational neuroscience (Sejnowski et al. 1988), which aims to model the detailed neurophysiology of relatively small groups of neurons. Although the connectionist approach uses physiological data to guide the search for underlying principles, it tends to focus more on overall system function or behavior, attempting to determine what principles of brain-style computation give rise to the cognitive phenomena observed in human behavior.
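The computation performed by a single unit is simple to state: its activity is a smooth function of the weighted sum of its inputs. The sketch below uses a sigmoid squashing function, one common choice (particular models differ in their exact activation functions):

```python
import math

def unit_activation(inputs, weights, bias=0.0):
    """Activity of one connectionist unit: a sigmoid of the weighted sum
    of its inputs, loosely analogous to a neuron's firing rate (bounded
    between 0 and 1)."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Zero net input gives the midpoint activity of 0.5; strong positive or
# negative net input drives the unit toward 1 or 0.
print(unit_activation([0.0, 0.0], [1.0, 1.0]))  # 0.5
print(unit_activation([1.0], [5.0]))            # near 1
print(unit_activation([1.0], [-5.0]))           # near 0
```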


The simplest type of connectionist system is a feed-forward network, in which information flows unidirectionally from input units to output units, typically via one or more layers of hidden units (so called because they are not visible to the environment). Such networks are useful in many contexts but have a limited ability to process time-varying information. For such information, recurrent networks, which permit any pattern of interconnection among the units, are more appropriate. In one common type of recurrent network, termed an attractor network, unit activities gradually settle to a stable pattern in response to a fixed input. Recurrent networks can also learn to process sequences of inputs and/or to produce sequences of outputs. For example, in a simple recurrent network (Elman 1990), the internal representation generated for each element in a sequence is made available as an additional input to provide context for processing subsequent elements. Critically, the internal representations themselves adapt so as to encode this context information effectively, enabling the system to learn to represent and retain relevant information at multiple time scales.
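The context mechanism of a simple recurrent network can be illustrated with hand-picked weights (in a real model these would be learned; the sizes and values below are purely illustrative). The hidden pattern produced at each step is copied back as context for the next step, so earlier inputs leave a trace in later processing:

```python
import math

def srn_step(x, context, W_in, W_ctx):
    """One step of an Elman-style simple recurrent network: each hidden
    unit sums weighted current input plus weighted previous hidden
    activity (the 'context'), then applies a sigmoid."""
    h = []
    for j in range(len(W_in)):
        net = sum(W_in[j][i] * x[i] for i in range(len(x)))
        net += sum(W_ctx[j][k] * context[k] for k in range(len(context)))
        h.append(1.0 / (1.0 + math.exp(-net)))
    return h

def run_sequence(seq, W_in, W_ctx, n_hidden):
    """Process a sequence, feeding each hidden pattern back as context."""
    context = [0.0] * n_hidden
    for x in seq:
        context = srn_step(x, context, W_in, W_ctx)
    return context

W_in = [[1.0], [-1.0]]            # 1 input unit -> 2 hidden units
W_ctx = [[0.5, 0.0], [0.0, 0.5]]  # 2 context units -> 2 hidden units

# Two sequences that differ only in their FIRST element end in different
# hidden states: the network retains information about earlier inputs.
a = run_sequence([[1.0], [0.0]], W_in, W_ctx, 2)
b = run_sequence([[0.0], [0.0]], W_in, W_ctx, 2)
```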

An issue of central relevance in the study of cognition is the nature of the underlying representation of information. Connectionist models divide roughly into two classes in this regard. In localist models, such as the Interactive Activation model mentioned earlier, each unit corresponds to a distinct, familiar entity such as a letter, word, concept, or proposition. By contrast, in distributed models, such entities are encoded not by individual units but by alternative patterns of activity over the same group of units, so that each unit participates in representing many entities. Both localist and distributed models are ‘connectionist’ in the sense that the system’s knowledge is encoded in terms of weights on connections between units.
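The localist/distributed contrast can be made concrete with toy activity patterns (the words and patterns below are invented for illustration, not drawn from any published model). In a localist code, distinct words share no units; in a distributed code, similarity between words is carried by the overlap of their patterns:

```python
# Localist: one dedicated unit per word (one-hot patterns).
localist = {"cat": [1, 0, 0], "cot": [0, 1, 0], "dog": [0, 0, 1]}

# Distributed: every word is a pattern over the SAME four units, so each
# unit participates in representing several words.
distributed = {"cat": [1, 0, 1, 1],
               "cot": [1, 1, 1, 0],
               "dog": [0, 1, 0, 1]}

def overlap(p, q):
    """Number of units active in both patterns."""
    return sum(a * b for a, b in zip(p, q))

# Localist codes are all-or-none: distinct words never overlap.
print(overlap(localist["cat"], localist["cot"]))        # 0
# Distributed codes are graded: CAT shares more units with the similar
# word COT than with the dissimilar word DOG.
print(overlap(distributed["cat"], distributed["cot"]))  # 2
print(overlap(distributed["cat"], distributed["dog"]))  # 1
```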

Because localist models specify the form and content of representations, they tend to de-emphasize the role of learning. With a distributed model, by contrast, there is greater emphasis on the ability of the system to learn effective internal representations. Thus, instead of attempting to stipulate the specific form and content of the knowledge required for performance in a domain, the approach instead stipulates the tasks the system must perform, including the nature of the relevant information in the environment, but then leaves it up to learning to develop the necessary internal representations and processes.

Learning in a connectionist system involves modifying the values of weights on connections between units in response to feedback on the behavior of the network. A variety of specific learning procedures are employed in connectionist research; most that have been applied to cognitive domains, such as back-propagation (Rumelhart et al. 1986), take the form of error correction: change each weight in a way that reduces the discrepancy between the correct response to each input and the one actually generated by the system. Although it is unlikely that the brain implements back-propagation in any direct sense, there are more biologically plausible procedures that are computationally equivalent (see, e.g., O’Reilly 1996).
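A minimal instance of error-correcting learning is the delta rule for a single sigmoid unit, sketched below learning logical OR (the task, learning rate, and epoch count are illustrative choices; back-propagation extends the same error-reducing update to multi-layer networks):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(patterns, epochs=2000, lr=1.0):
    """Delta-rule learning for one sigmoid unit: each weight is nudged in
    the direction that reduces the discrepancy between the target and the
    actual output for each training pattern."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in patterns:
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            delta = (target - out) * out * (1.0 - out)  # error x slope
            w[0] += lr * delta * x[0]
            w[1] += lr * delta * x[1]
            b += lr * delta
    return w, b

# Learn logical OR from its four input-output examples.
OR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(OR)
```

After training, thresholding the unit's output at 0.5 reproduces OR on all four patterns.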

In an early application of error-correcting learning, Rumelhart and McClelland (1986) showed that a single network could learn to generate the past-tense forms of both regular verbs (e.g., BAKE => ‘baked’) and irregular verbs (e.g., TAKE => ‘took’), thereby obviating the need for dual rule-based and exception mechanisms (Pinker 1999), analogous to those in the dual-route reading models mentioned earlier. Although aspects of the approach were strongly criticized (Pinker and Prince 1988), many of the specific limitations of the model have been addressed in subsequent simulation work (see, e.g., MacWhinney and Leinbach 1991, Plunkett and Marchman 1993, 1996). Of particular interest is recent work by Joanisse and Seidenberg (1999) showing that damage either to phonological or to semantic representations within a single processing system can account for the observation of selective impairments in performance on regular vs. irregular verbs following Parkinson’s disease vs. Alzheimer’s disease, respectively (Ullman et al. 1997).

A similar line of progress has taken place in the domain of English word reading. An early connectionist model (Seidenberg and McClelland 1989) provided a good account of word reading but was poor at pronouncing word-like pseudowords (e.g., MAVE; Besner et al. 1990). A more recent series of simulations (Plaut et al. 1996) showed that the limitations of this preliminary model stemmed from its use of poorly structured orthographic and phonological representations. By contrast, networks with more appropriate representations were able to learn to pronounce both regular and exception words, and yet also pronounce pseudowords as well as skilled readers. Moreover, damage to semantic representations in such networks gave rise to surface dyslexia, in which patients produce regularization errors to exception words. In closely related work (Hinton and Shallice 1991, Plaut and Shallice 1993), damage to the pathway between orthography and phonology, combined with secondary damage to the semantic pathway, yielded the complementary pattern of deep dyslexia—often viewed as a severe form of phonological dyslexia—in which patients are extremely poor at pronouncing pseudowords and make semantic errors in reading words aloud (e.g., misreading RIVER as ‘ocean’; see Coltheart et al. 1980). With the addition of an attentional mechanism, fully recurrent networks have also been used to account for the interaction of both perceptual and lexical/semantic factors in the reading errors of neglect dyslexic patients (Mozer and Behrmann 1990). In fact, such networks have been applied to a wide range of neuropsychological phenomena, including selective impairments in face recognition (Farah et al. 1993), visual object recognition (Humphreys et al. 1992), spatial attention (Cohen et al. 1994), semantic memory (Farah and McClelland 1991), anomia and aphasia (Dell et al. 1997), spelling (Brown and Ellis 1994), and executive control (Cohen and Servan-Schreiber 1992).
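In simulations of this kind, "damage" is typically implemented by removing units or zeroing connection weights, with more severe lesions destroying a larger fraction of the pathway. A hypothetical utility along these lines (the weight matrix and the particular zeroing scheme are illustrative, not taken from any of the cited models):

```python
import random

def lesion(weights, severity, seed=0):
    """Simulate damage to a pathway by zeroing a randomly chosen fraction
    of its connection weights; 'severity' is the proportion destroyed.
    Seeded for reproducibility."""
    rng = random.Random(seed)
    coords = [(i, j) for i, row in enumerate(weights)
              for j in range(len(row))]
    n_destroy = int(round(severity * len(coords)))
    destroyed = set(rng.sample(coords, n_destroy))
    return [[0.0 if (i, j) in destroyed else w
             for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

# An arbitrary 2 x 4 weight matrix standing in for one pathway.
W = [[0.8, -0.3, 0.5, 1.2],
     [-0.7, 0.9, -1.1, 0.4]]

mild = lesion(W, 0.25)    # 2 of 8 connections removed
severe = lesion(W, 1.0)   # the whole pathway destroyed
```

The graded nature of such lesions is what lets a single model reproduce patients with differing severities of the same underlying deficit.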

One of the main attractions of distributed connectionist models is their ability to discover the structure implicit in ensembles of events and experiences. Accomplishing this, however, requires making only very small changes in response to each input so that the resulting weight values reflect the long-term experience of the system. Attempts to teach such networks the idiosyncratic properties of specific events one after the other do not generally succeed since the changes made in learning each new case produce ‘catastrophic interference’ with what was stored previously in the weights (McCloskey and Cohen 1989). McClelland et al. (1995) observed, however, that catastrophic interference does not occur if continued training of old knowledge is interleaved with the training of new knowledge. They proposed that the brain employs two complementary learning systems: a cortical system for gradual learning using highly overlapping distributed representations, and a subcortical, hippocampal-based system for rapid learning using much sparser, less-overlapping representations. On their account, stored instances in the hippocampus provide the training input for past experience that must be interleaved with ongoing experience to prevent interference in cortex. The argument was that learning in cortex and in distributed networks are similarly constrained, so that the strengths and limitations of structure-sensitive learning in networks explained why the brain employs two complementary learning systems in hippocampus and neocortex.
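The sequential-versus-interleaved contrast can be demonstrated with even a single sigmoid unit whose two associations share a connection weight. This is a deliberately minimal caricature of the argument, not McClelland et al.'s actual simulations; the two associations and all parameters are invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def train_on(items, w, b, epochs=3000, lr=0.5):
    """Error-correcting updates on ONLY the given items; because the
    weights are shared, training one item can disturb another."""
    for _ in range(epochs):
        for x, t in items:
            g = t - predict(w, b, x)
            for i in range(len(w)):
                w[i] += lr * g * x[i]
            b += lr * g
    return w, b

old = [([1.0, 0.0], 1.0)]   # previously acquired association
new = [([1.0, 1.0], 0.0)]   # new association overlapping the old one

# Sequential: learn 'old', then train on 'new' alone. The shared weights
# are dragged away from the old association (catastrophic interference).
w, b = train_on(old, [0.0, 0.0], 0.0)
w, b = train_on(new, w, b)
seq_old = predict(w, b, [1.0, 0.0])   # now far from its target of 1

# Interleaved: rehearse the old item alongside the new one; both survive.
w, b = train_on(old, [0.0, 0.0], 0.0)
w, b = train_on(old + new, w, b)
mix_old = predict(w, b, [1.0, 0.0])
mix_new = predict(w, b, [1.0, 1.0])
```

On McClelland et al.'s account, the hippocampus supplies exactly this kind of rehearsal of stored instances for the slow-learning cortical system.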

Although fully recurrent networks are capable of learning to exhibit complex temporal behavior, for reasons of efficiency it is more common to apply simple recurrent networks in temporal domains. For example, Elman (1991) demonstrated that a simple recurrent network could learn the structure of an English-like grammar, involving number agreement and variable verb argument structure across multiple levels of embedding, by repeatedly attempting to predict the next word in processing sentences. St. John and McClelland (1990) also showed how such networks can learn to develop a representation of sentence meaning by attempting to answer queries about thematic role assignments throughout the course of processing a sentence. There have, however, been relatively few attempts at applying simple recurrent networks to neuropsychological phenomena.

3. Future Directions

In many ways, the application of computational modeling to understanding normal and impaired cognition is still in its infancy. Only a small fraction of the relevant behavioral phenomena have been addressed in any detail by existing models. Certainly considerable fruitful work remains in applying existing methods to a broader range of empirical issues. Even so, it seems clear that existing computational frameworks have a number of limitations which hamper their broader application. This is particularly true with regard to the application of connectionist networks to complex temporal domains, such as language, reasoning, and problem solving. While there have been some promising initial steps in these areas, substantial development of the computational methodology itself is likely to be necessary before satisfactory models will be possible.

4. Summary

Researchers interested in human cognitive processes have long used computer simulations to try to identify the principles of cognition. The strategy has been to build computational models that embody a set of principles and then examine how well the models capture human performance in cognitive tasks. A number of formalisms have been used to model cognitive processing in normal individuals. Those based on general principles of neural computation—including connectionist or neural-network models—have proven most effective at capturing the effects of brain damage on cognition. Considerable work remains, however, in extending such models to address more complex temporal phenomena.

Bibliography:

  1. Beauvois M-F, Derouesne J 1979 Phonological alexia: Three dissociations. Journal of Neurology, Neurosurgery, and Psychiatry 42: 1115–24
  2. Besner D, Twilley L, McCann R S, Seergobin K 1990 On the connection between connectionism and data: Are a few words necessary? Psychological Review 97(3): 432–46
  3. Brown G D A, Ellis N C (eds.) 1994 Handbook of Normal and Disturbed Spelling. Wiley, New York
  4. Cohen J D, Romero R D, Servan-Schreiber D, Farah M J 1994 Mechanisms of spatial attention: The relation of macro-structure to microstructure in parietal neglect. Journal of Cognitive Neuroscience 6(4): 377–87
  5. Cohen J D, Servan-Schreiber D 1992 Context, cortex, and dopamine: A connectionist approach to behavior and biology in schizophrenia. Psychological Review 99(1): 45–77
  6. Coltheart M, Curtis B, Atkins P, Haller M 1993 Models of reading aloud: Dual-route and parallel-distributed-processing approaches. Psychological Review 100(4): 589–608
  7. Coltheart M, Patterson K, Marshall J C (eds.) 1980 Deep Dyslexia. Routledge and Kegan Paul, London
  8. Dell G S, Schwartz M F, Martin N, Saffran E M, Gagnon D A 1997 Lexical access in normal and aphasic speakers. Psychological Review 104: 801–38
  9. Elman J L 1990 Finding structure in time. Cognitive Science 14(2): 179–211
  10. Elman J L 1991 Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning 7: 195–225
  11. Farah M J, McClelland J L 1991 A computational model of semantic memory impairment: Modality-specificity and emergent category-specificity. Journal of Experimental Psychology: General 120(4): 339–57
  12. Farah M J, O’Reilly R C, Vecera S P 1993 Dissociated overt and covert recognition as an emergent property of a lesioned neural network. Psychological Review 100(4): 571–88
  13. Hinton G E, Shallice T 1991 Lesioning an attractor network: Investigations of acquired dyslexia. Psychological Review 98(1): 74–95
  14. Howard D, Franklin S 1988 Missing the Meaning? MIT Press, Cambridge, MA
  15. Humphreys G W, Freeman T, Muller H J 1992 Lesioning a connectionist model of visual search: Selective effects on distractor grouping. Canadian Journal of Psychology 46: 417–60
  16. Joanisse M F, Seidenberg M S 1999 Impairments in verb morphology after brain injury: A connectionist model. Proceedings of the National Academy of Sciences, USA 96: 7592–7
  17. Lichtheim L 1885 On aphasia. Brain 7: 433–84
  18. MacWhinney B, Leinbach J 1991 Implementations are not conceptualizations: Revising the verb learning model. Cognition 40: 121–53
  19. McClelland J L, McNaughton B L, O’Reilly R C 1995 Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review 102: 419–57
  20. McClelland J L, Rumelhart D E, PDP Research Group (eds.) 1986 Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models. MIT Press, Cambridge, MA
  21. McCloskey M, Cohen N J 1989 Catastrophic interference in connectionist networks: The sequential learning problem. In: Bower G H (ed.) The Psychology of Learning and Motivation. Academic Press, New York, Vol. 24, pp. 109–65
  22. McLeod P, Plunkett K, Rolls E T 1998 Introduction to Connectionist Modelling of Cognitive Processes. Oxford University Press, Oxford, UK
  23. Mozer M C, Behrmann M 1990 On the interaction of selective attention and lexical knowledge: A connectionist account of neglect dyslexia. Journal of Cognitive Neuroscience 2(2): 96–123
  24. O’Reilly R C 1996 Biologically plausible error-driven learning using local activation differences: The generalized re-circulation algorithm. Neural Computation 8(5): 895–938
  25. Patterson K, Coltheart M, Marshall J C (eds.) 1985 Surface Dyslexia. Erlbaum, Hillsdale, NJ
  26. Pinker S 1999 Words and Rules: The Ingredients of Language. Basic Books, New York
  27. Pinker S, Prince A 1988 On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition 28: 73–193
  28. Plaut D C 1997 Structure and function in the lexical system: Insights from distributed models of naming and lexical decision. Language and Cognitive Processes 12: 767–808
  29. Plaut D C, McClelland J L, Seidenberg M S, Patterson K 1996 Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review 103: 56–115
  30. Plaut D C, Shallice T 1993 Deep dyslexia: A case study of connectionist neuropsychology. Cognitive Neuropsychology 10(5): 377–500
  31. Plunkett K, Marchman V A 1993 From rote learning to system building: Acquiring verb morphology in children and connectionist nets. Cognition 48(1): 21–69
  32. Plunkett K, Marchman V A 1996 Learning from a connectionist model of the acquisition of the English past tense. Cognition 61(3): 299–308
  33. Rumelhart D E, Hinton G E, Williams R J 1986 Learning representations by back-propagating errors. Nature 323: 533–6
  34. Rumelhart D E, McClelland J L 1982 An interactive activation model of context effects in letter perception: Part 2. The contextual enhancement effect and some tests and extensions of the model. Psychological Review 89: 60–94
  35. Rumelhart D E, McClelland J L 1986 On learning the past tenses of English verbs. In: McClelland J L, Rumelhart D E, PDP Research Group (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models. MIT Press, Cambridge, MA, pp. 216–71
  36. Rumelhart D E, McClelland J L, PDP Research Group (eds.) 1986 Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, Cambridge, MA
  37. Seidenberg M S, McClelland J L 1989 A distributed, developmental model of word recognition and naming. Psychological Review 96: 523–68
  38. Sejnowski T J, Koch C, Churchland P S 1988 Computational neuroscience. Science 241: 1299–1306
  39. St. John M F, McClelland J L 1990 Learning and applying contextual constraints in sentence comprehension. Artificial Intelligence 46: 217–57
  40. Ullman M T, Corkin S, Coppola M, Hickok G, Growdon J H, Koroshetz W J, Pinker S 1997 A neural dissociation within language: Evidence that the mental dictionary is part of declarative memory and that grammatical rules are processed by the procedural system. Journal of Cognitive Neuroscience 9: 266–76