Cognitive Modeling
Cognitive science is a genuinely interdisciplinary field, which owes its existence to the insight that, in different disciplines, interesting research was based on the common assumption that cognition could be regarded as computation. It follows that if cognition is computation, theories of cognition should be specified in terms of representations and the computational steps performed on them. Thus, cognitive modeling follows naturally from the basic tenet of cognitive science.
Cognition has been addressed by philosophy for at least 2,500 years, by psychology since well into the nineteenth century, and by artificial intelligence since the mid-twentieth century—anyone would be ill-advised to mistake cognitive science as the only science of cognition. In fact, it overlaps considerably with cognitive psychology and parts of several other disciplines. Cognitive modeling is a unifying methodology for the whole field of cognitive science.
Cognitive modeling combines research methods of vastly different origin. The first group consists of techniques for the formal analysis of tasks and systems, drawn mostly from philosophy, logic, theoretical linguistics, mathematics, physics, and the foundations of computer science. The second group consists of the empirical methods used predominantly in experimental psychology and in neuroscience, which serve to test models for cognitive adequacy. Finally, the third group consists of the programming techniques developed in artificial intelligence, which are used to build working computer models. As a whole, the methodology combines formal and empirical analysis with constructive synthesis.
To identify cognitive modeling with computer simulation would be wrong for two reasons. First, it would ignore that building the computer model is just one, albeit essential, part of the methodology. Second, a computer simulation counts as successful if it merely produces the same kind of results (a commercial chess program, for instance), whereas a cognitive model (e.g., of a human cognitive function) must demonstrably arrive at the same results through the same kinds of computations.
1. Cognitive Modeling As Second-Order Model Construction
1.1 Epistemological Perspective
Cognition comprises sophisticated means of a system’s adaptation to its environment, notably planning, which in turn draws on anticipation of the results of actions. Anticipation rests on learning and, in its most advanced forms, on episodic memory. Planning and decision making rely on mental representations of the system’s environment (world model) and of the system as an actor in it (the system’s self-model). The construction of these models is constrained by the interaction between system and environment. In the case of organisms, the necessities of evolution have ensured that internal world models are sufficiently realistic to be adaptive and ensure the survival of the species. In technical cognitive systems (e.g., autonomous robots), the need also arises to represent important features of the environment. Cognition therefore implies modeling the environment, the system itself, and other systems (as in discourse models used in communication).
Science in general aims at constructing models. Cognitive science attempts to model cognition in biological as well as technical systems. And since model building (in the sense discussed above) is an essential part of cognition, scientific models of cognition are about how cognitive systems construct models of their environment and of themselves. In short, cognitive modeling amounts to second-order modeling (and it is important to keep both levels well apart).
1.2 General Characteristics Of Models
A model is a mapping from an empirical domain (a set of elements and certain relations defined between them) to another one, the model domain (often a numerical one, as in measurement). Modeling is constrained on both sides. The empirical domain, as viewed for modeling, is a highly reduced abstraction, and the model domain (a formal system) often includes relations that are not relevant to the model at all.
Scientific models are abstractions in the sense that the empirical domain grasped by the model comprises only part of the objects (elements) and their relations. Which ones to select depends on the epistemological interest, i.e., on the theoretical perspective as well as on intended applications. For instance, a typical driver’s map of some region ignores geology, climate, biology, etc., and concentrates exclusively on roads, abstracting even there from most of the details. In cognitive modeling, we usually focus on certain aspects of mental representations (e.g., of a memory trace), and some relation or relations, which may be as different as association (linking two elements) and entailment (the semantic relation of logical implication between two statements).
On the model side, we must be careful to define which of the many known relations in the formal system being used to model the empirical domain is part of the model. This is well known from psychological scaling, where only a few of the relations known to hold among real numbers (if those have been chosen as the model domain) are valid model characteristics. For example, only the ordering relation (‘<’) may be valid for an ordinal scale, while differences and ratios between numbers are meaningless. Likewise, the computer programs typically chosen in cognitive modeling exhibit a wealth of parameters and other details of data structures and control flow, some of which are certainly irrelevant as an aspect of the model. With computer programs, however, it is much more difficult to analyze just which relations are valid parts of the model (see Sect. 5.2).
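The measurement-theoretic point can be illustrated with a small sketch (in Python, with invented rank assignments): any monotone re-coding of an ordinal scale preserves the valid order comparisons, while differences between scale values change and therefore carry no empirical meaning.

```python
# Two ways of assigning numbers to the same ordinal data (values are invented).
ranks_a = {"soft": 1, "medium": 2, "hard": 3}
ranks_b = {"soft": 1, "medium": 10, "hard": 100}  # a monotone re-coding

items = ["soft", "medium", "hard"]

# Order comparisons agree under any monotone re-coding ...
order_a = [ranks_a[x] < ranks_a[y] for x in items for y in items]
order_b = [ranks_b[x] < ranks_b[y] for x in items for y in items]
assert order_a == order_b

# ... but differences do not, so they are not part of the model.
diff_a = ranks_a["hard"] - ranks_a["medium"]  # 1
diff_b = ranks_b["hard"] - ranks_b["medium"]  # 90
```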
2. Model Domains For Cognitive Modeling
The traditional framework of cognitive modeling has been defined by Marr (1982) and Newell (1982). They distinguish an abstract level of cognitive theory (the knowledge level) from the level of description typically chosen in cognitive modeling (symbol level, or algorithmic level). The level below that (implementation level) is considered as irrelevant to cognitive modeling. This does not exclude the construction of computational models for specific implementations, however. Such models have been used with much success as mediators between psychological and neurobiological analyses of brain functioning.
2.1 Symbol-Processing Approaches
According to Newell and Simon, cognitive processes are symbol transformations on arbitrary complex symbol structures (i.e., mental representations). Accordingly, the classical approach to cognitive modeling aims to construct programs that manipulate symbol structures of compositional semantics by means of algorithms taken from artificial intelligence (e.g., heuristic search). This approach adheres closely to the Turing Machine model of computation.
This approach provides considerable degrees of freedom regarding how to go about constructing models. In practice, most of the work has made use of a production system architecture, or of a declarative knowledge base coupled with an inference algorithm. Therefore, the if–then rules of production systems and logical statements (e.g., the Horn clauses used in Prolog) are among the most widely used formalisms for cognitive modeling. Semantic networks, or frames, with inheritance (the is a relation) are another well-known example of this approach.
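How the ‘is a’ relation supports inheritance in a semantic network can be shown with a minimal sketch (the concepts, properties, and the `lookup` helper are hypothetical illustrations, not a formalism from the literature): a property not stored with a concept is looked up along its is-a chain.

```python
isa = {"canary": "bird", "bird": "animal"}      # the "is a" relation
properties = {
    "canary": {"color": "yellow"},
    "bird": {"can_fly": True},
    "animal": {"alive": True},
}

def lookup(concept, prop):
    """Follow the is-a chain upward until the property is found."""
    while concept is not None:
        if prop in properties.get(concept, {}):
            return properties[concept][prop]
        concept = isa.get(concept)              # climb one level
    return None

flies = lookup("canary", "can_fly")             # inherited from "bird"
```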
2.2 Connectionist Approaches
These approaches are different with respect to the algorithmic level. Simple elements or ‘nodes’ (which may be regarded as abstract neurons) are connected in a more or less pre-specified way, the connectionist network’s architecture. Each element’s output is a function of its inputs integrated over time, and is passed on to other nodes that are connected with it. Two groups of connectionist models can be distinguished according to the semantics of representation employed: parallel distributed processing (PDP) and localist networks. In the latter each node is a representation of something (e.g., a concept), whereas in PDP it is the vector of activation values taken over a number of nodes that has representative character. This aspect of PDP models has been highlighted as pertaining to a ‘sub-symbolic’ level by Smolensky (1988), who also stresses that artificial neural networks define a computational architecture that is nearer to symbol processing than to biological neural networks.
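The basic computation of a single node can be sketched as follows (a generic logistic unit with arbitrary illustrative weights, not any particular published network): each node squashes the weighted sum of its inputs, and a PDP-style representation is the resulting activation vector over several nodes, not any single node’s value.

```python
import math

def node_output(inputs, weights, bias=0.0):
    """One node's output: a squashed weighted sum of its inputs."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))         # logistic activation

# A distributed representation: the pattern of activation over three nodes.
pattern = [node_output([1.0, 0.0], w)
           for w in ([2.0, -1.0], [-1.0, 2.0], [0.5, 0.5])]
```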
2.3 Nonlinear Dynamics And Other Approaches
Connectionist models, relying on differential equations rather than logic, paved the way to simulations of nonlinear dynamic systems (imported from physics) as models of cognition.
Purely descriptive mathematical models have also been used in cognitive science, of course, but they do not take the form of an implemented computer program, and hence cannot be considered to be at the heart of cognitive modeling, but rather to be part of the formal analyses typically executed to arrive at sound specifications for cognitive models.
3. Cognitive Modeling And Cognitive Architectures
3.1 Unified And Modular Theories Of Cognition
The process of cognitive modeling makes use of computational architectures, as we have seen for symbol processing and connectionist frameworks. As a special case, the framework may be a general theory about the architecture of the human mind, usually called a cognitive architecture.
To rely on a general cognitive architecture is to assume that all cognitive processes instantiate the same principle (e.g., firing a production rule). The present state of cognitive science casts doubt on the reasonableness of this assumption. The functional organization of the human brain is such that cognitive functions may be highly specialized, neuroanatomically focused, not open to introspection, and working in parallel with other cognitive processes (Kolb and Whishaw 1990).
3.2 General Cognitive Architectures
All these architectures take the form of a production system. This computational architecture, developed in the 1960s, comprises knowledge in the form of production rules (if–then rules), contained in a permanent memory, plus a working memory of unlimited capacity and a rule interpreter for control. Several lines of development have led to a number of systems that claim to be both a unified theory of human cognition and a program development environment for cognitive modeling, among them SOAR (Laird et al. 1987, Newell 1990), ACT, known best in the versions ACT* (Adaptive Control of Thought; Anderson 1983) and its revision, ACT-R (Atomic Components of Thought; Anderson and Lebiere 1998), and others such as CAPS, EPIC, or PRODIGY. Apart from being production systems at heart, all these architectures differ markedly. For instance, SOAR relies on productions only, whereas ACT* also has a declarative memory (a spreading activation network of unlimited capacity). Both address learning, but differently (see Johnson (1998) for a more extensive comparison of ACT and SOAR). ACT-R (following EPIC in this respect) now also includes a number of perceptual and motor components. This means that at least some of these architectures have grown beyond a uniform approach.
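The recognize–act cycle shared by these architectures can be sketched schematically (this toy interpreter is purely illustrative and stands for neither ACT nor SOAR; the rules and facts are invented): productions whose conditions match the contents of working memory fire and add new elements, until no rule can fire any more.

```python
# Permanent memory: if-then rules as (conditions, additions).
rules = [
    ({"goal: add", "a = 2", "b = 3"}, {"sum = 5"}),
    ({"sum = 5"}, {"done"}),
]
# Working memory: the current set of facts.
working_memory = {"goal: add", "a = 2", "b = 3"}

changed = True
while changed:                                  # run to quiescence
    changed = False
    for conditions, additions in rules:
        if conditions <= working_memory and not additions <= working_memory:
            working_memory |= additions         # fire the rule
            changed = True
```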
The advantages of modeling within a cognitive architecture are obvious. Much of the work of modeling has already been done, and programmers can use special predefined functions. At least the architectures of long-standing tradition (ACT and SOAR) boast a number of successful empirical tests and applications. Drawbacks are that the modeler is confined to a specific architecture and its means, and that these architectures have grown so big that it becomes difficult to assess an architecture’s relevance to some specific small-scale model.
3.3 Alternative Computational Frameworks
Cognitive modeling may be based on almost any computational model. These may be specialized (tailored) processing architectures of artificial intelligence, such as the blackboard architecture (Hayes-Roth 1985), case-based reasoning, or architectures for autonomous social agents. Beyond symbol processing, we find connectionist models of the PDP (see McClelland (1999) for an overview) or the localist orientation (Page 2000). Arbitrary combinations of different approaches (hybrid systems) may also be employed.
It may be advantageous to base one’s model of a specific cognitive function on a computational framework that is ideally suited for the task, and do without the massive overhead of the big cognitive architectures. The usual drawback is that one cannot build on the long work of others. COGENT (Cooper and Fox 1998) promises to ease the work of individual model building by providing, in modern object-oriented programming style, a toolkit for cognitive modeling with a user-friendly graphical interface for development.
At present, it is difficult to recommend a particular computational framework for cognitive modeling. What works best, or what is easier to develop, depends heavily on one’s own experience, and on the cognitive function one wants to model.
4. Cognitive Modeling Produces Theories
The computer programs resulting from cognitive modeling have the status of a well-formulated theory about cognition. They have the advantage of being explicit (no computer would execute a ‘magic’ command) and fully specified (for the same reason), which means that theories cannot focus on certain issues while leaving others in the dark—something that is just too easily done on paper. As soon as the program has been implemented successfully on a computer, and produces the expected output, this is proof of the logical consistency of the theory and positive evidence (proof being impossible) for its adequacy.
4.1 Generative Theories As Compared With Scientific Laws
The most important characteristic of cognitive modeling is that it results in generative theories: computer programs that not only explain, but actually produce, the cognitive phenomena in question. This is very different from ‘natural laws’ stating, for example, that in our universe, nothing can travel with a speed beyond that of light.
Models in psychology and other behavioral and social sciences usually take the form of a system of equations for computing the value of some variables given the value of some other variables. These models, however, do not give a detailed explanation (as in an algorithm, i.e., a step-by-step computation without any ‘magical’ operations) of how the resulting values are arrived at. Being generative is the main advantage of cognitive models.
4.2 Validation Strategies In Cognitive Modeling
Cognitive modeling is more than just constructing a computer program that produces data more or less indistinguishable from ‘real data’ as gathered in psychological experiments. As a methodology, it includes testing and comparing models. This is where the pitfalls of cognitive modeling lie: such models do not lend themselves easily to the falsification strategy (Popper 1968) usually recommended in science.
First, anyone who has constructed a model that seems to work well is reluctant to focus on its weaknesses (but this is typical of all science: no one should misinterpret Popper as obliging individual scientists to falsify their own theories; it is sufficient that there are other scientists around to do that). Second, the programs that are the result of cognitive modeling are often highly complex, and their relation to empirical data cannot be tested exhaustively, but only for specific cases. Third, there is a fundamental problem: it is always the case that many models (each consisting of a representational and an operational, or process, part) may fit the same data.
Criteria for the empirical assessment of models are: good fit to data in essential aspects (e.g., difficulty of tasks is reflected in computational effort in the model; the model produces preferences, or errors, of the kind found in human subjects, etc.), and formal qualities (e.g., the fewer parameters on which the model depends, the better: a variant of Occam’s razor). Typically, some empirical studies give rise to the model, and further empirical studies are used to test it.
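One standard way to make the parsimony criterion concrete (a general tool of statistical model selection, not mentioned in the text above; the numbers are invented) is an information criterion such as AIC, which penalizes each free parameter: a slightly worse-fitting model with far fewer parameters can still be preferred.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: smaller values are better."""
    return 2 * n_params - 2 * log_likelihood

aic_rich = aic(log_likelihood=-100.0, n_params=12)  # better fit, 12 parameters
aic_lean = aic(log_likelihood=-103.0, n_params=3)   # worse fit, 3 parameters
# aic_lean < aic_rich: the leaner model wins on balance.
```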
Formal analyses of the tasks used in the domain (e.g., analyzing the syntactic structure and complexity of sentences in computational psycholinguistics) can and should be used not only to find hints as to how a model could be constructed, but also to exclude classes of models.
4.3 An Evaluation Of Cognitive Modeling
Cognitive modeling is the research methodology which follows from the basic tenet of cognitive science that the essential aspect of cognition is that it is a computational process. Its main assets are that it produces theories which are explicit, complete, consistent, and generative. Formal analyses of the domain as well as empirical studies are required both as a prerequisite for the construction of models, and as the means of validating these models, e.g., to test their generalizability. Cognitive science since the 1970s has demonstrated that this is an extremely successful research strategy.
For further information, the reader is referred to Scarborough and Sternberg (1999) for an excellent collection of approaches to cognitive modeling.
Bibliography:
- Anderson J R 1983 The Architecture of Cognition. Harvard University Press, Cambridge, MA
- Anderson J R, Lebiere C 1998 Atomic Components of Thought. Lawrence Erlbaum Associates, Hillsdale, NJ
- Cooper R, Fox J 1998 COGENT: A visual design environment for cognitive modeling. Behavior Research Methods, Instruments and Computers 30: 553–64
- Hayes-Roth B 1985 A blackboard architecture for control. Artificial Intelligence 26: 251–321
- Johnson T R 1998 A comparison between ACT-R and SOAR. In: Schmid U, Krems J F, Wysotzki F (eds.) Mind Modelling: A Cognitive Science Approach to Reasoning, Learning and Discovery. Pabst, Berlin, pp. 17–37
- Kolb B, Whishaw I Q 1990 Fundamentals of Human Neuropsychology. Freeman, San Francisco
- Laird J E, Newell A, Rosenbloom P S 1987 SOAR: An architecture for general intelligence. Artificial Intelligence 33: 1–64
- Marr D 1982 Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. Freeman, San Francisco
- McClelland J L 1999 Cognitive modeling: Connectionist. In: Wilson R A, Keil F C (eds.) The MIT Encyclopedia of the Cognitive Sciences. MIT Press, Cambridge, MA, pp. 137–44
- Newell A 1982 The knowledge level. Artificial Intelligence 18: 87–127
- Newell A 1990 Unified Theories of Cognition. Harvard University Press, Cambridge, MA
- Page M 2000 Connectionist modelling in psychology: A localist manifesto. Behavioral and Brain Sciences 23: 443–512
- Popper K R 1968 The Logic of Scientific Discovery, 5th edn. Hutchinson, London
- Scarborough D, Sternberg S (eds.) 1999 Methods, Models, and Conceptual Issues. MIT Press, Cambridge, MA
- Smolensky P 1988 On the proper treatment of connectionism. Behavioral and Brain Sciences 11: 1–74