SOAR is a computational theory of human cognition that takes the form of a general cognitive architecture (Laird et al. 1987, Newell 1990, Rosenbloom et al. 1992). SOAR (not an acronym) is a major exemplar of the architectural approach to cognition, which attempts the unification of a range of cognitive phenomena with a single set of mechanisms, and addresses a number of significant methodological and theoretical issues common to all computational cognitive theories (Anderson and Lebiere 1998, Newell 1990, Pylyshyn 1984). SOAR is also characterized by a set of specific theoretical commitments shaped primarily by attempting to satisfy the functional requirements for supporting human-level intelligence, manifest in SOAR's parallel existence as a state-of-the-art artificial intelligence system (Laird et al. 1987). This focus on functionality, and its attendant theoretical commitments, is what makes SOAR both distinctive and controversial in cognitive psychology. SOAR represents the last major work of Allen Newell, one of the founders of modern cognitive science and artificial intelligence, and a pioneer in the development of architectures as a class of cognitive theory.
1. Multiple Constraints On Mind And Computational Theories Of Cognition
Newell (1980a, 1990) described the human mind as a solution to a set of functional constraints (e.g., exhibit adaptive (goal-oriented) behavior, use language, operate with a body of many degrees of freedom) and a set of constraints on construction (a neural system, grown by embryological processes, arising through evolution). The structure of SOAR is shaped primarily by three of the functional constraints: (a) exhibiting flexible, goal-driven behavior, (b) learning continuously from experience, and (c) exhibiting real-time cognition (elementary cognitive behavior must be evident within about a second).
The emergence of computational models of cognition in information processing psychology (and artificial intelligence) represented a significant theoretical advance by providing the first proposals for physical systems that could, in principle, satisfy the functional constraints of exhibiting intelligence (Newell et al. 1958, Newell and Simon 1972). However, they raised a set of difficult methodological and theoretical issues that cognitive science still grapples with today. Among these issues are: (a) the problem of irrelevant specification (in a complex computer program, which of the myriad aspects of the program carry theoretical content, and which are irrelevant implementation details?) (Reitman 1965); (b) the problem of too many degrees of freedom (an unconstrained computer program can be modified to fit any data pattern); and (c) the problem of identifiability (any sufficiently general proposal for processing schemes or representations can mimic the input/output characteristics of any other general processing or representation scheme) (Anderson 1978, Pylyshyn 1973).
2. SOAR As A Confluence Of Five Major Technical Ideas In Cognitive Science
SOAR can be seen as a confluence of five major technical ideas in cognitive science, which, taken together, are intended to address the three functional constraints summarized above, as well as the fundamental methodological issues concerning computational models.
2.1 Physical Symbol Systems
SOAR is a physical symbol system. The physical symbol system hypothesis asserts that physical symbol systems are the only class of systems that can in principle satisfy the constraint of supporting intelligent, flexible behavior. Physical symbol systems are a reformulation of Turing universal computation (Church 1936, Turing 1936) that identifies symbol processing as a key feature of intelligent computation. The requirement is that the system be capable of manipulating and composing symbols and symbol structures—physical patterns with associated processes that give the patterns the power to denote either external entities or other internal symbol structures (Newell 1980a, 1990, Simon 1996). The key to the universality of Turing machines and physical symbol systems is their programmability: content can be added to the systems (in the form of programs) to change their behavior, yielding indefinitely many response functions.
2.2 Cognitive Architectures
SOAR is a cognitive architecture. A cognitive architecture is a theory about the fixed computational structure of cognition (Anderson and Lebiere 1998, Newell 1990, Pylyshyn 1984). Computational systems that are programmable must have some kind of fixed structure that processes the variable content: a set of primitive processes, memories, and control structures. The theoretical status of this underlying structure has not always been clear in cognitive models. For example, when a cognitive model is programmed in Lisp, the theorist intends to make some theoretical claims about the program (e.g., that the steps of the program correspond in some way to the cognitive steps of the human performing the task), but probably intends to make no theoretical claims about Lisp as the architecture that executes the program (e.g., the fact that unused memory structures are reclaimed via a garbage collection process is theoretically irrelevant).
A cognitive architecture explicitly specifies a fixed set of processes, memories, and control structures that are capable of encoding content and executing programs. Cognitive models for specific tasks can be developed in such architectures by programming them. The theoretical status of various parts of a programmed implementation is now considerably clarified: what counts is the structure of the architecture (not its particular implementation), and the cognitive model's program, which makes a set of specific commitments about the form and content of knowledge used in a specific task. Thus, implemented cognitive architectures go a long way toward solving the irrelevant specification problem.
Cognitive architectures, especially those with temporal mappings and integrated learning mechanisms, can also address the degrees of freedom problem and identifiability problem in four ways. First, to the extent that architectures have a constrained temporal mapping, the space of possible programs that yield both the required functionality and temporal profile is considerably reduced. Second, to the extent that architectures have learning components that can acquire new knowledge (e.g., about a specific task), the form of that knowledge is no longer freely under control of the theorist. Third, to the extent that architectures are programmable (and are also constrained by a temporal mapping or learning mechanism), they permit a single set of processing assumptions to be applied to a diverse range of tasks, constraining that theory by a broader range of data. Fourth, to the extent that cognitive architectures are comprehensive and include some perceptual and motor components, they can be used to provide closed-loop models of complete tasks, so that no explanatory power need be ascribed to anything external to the model.
2.3 Production Systems
All long-term memory in SOAR is held in the form of productions (Anderson 1993, Newell 1973). Each production is a condition-action pair. The conditions form access paths and the actions form the memory contents. Productions continuously match against a declarative working memory that contains the momentary task context, and matching productions put their contents (actions) back into the working memory. Productions are the lowest level of elementary memory access available in SOAR, and Newell's (1990) temporal mapping onto human cognition places them approximately at the 10 ms level. This mapping provides strong constraints on the shape of cognitive models built in SOAR, which must operate in real time.
SOAR’s productions form a recognition memory. Such recognition memories have a number of features that make them attractive as models of human memory: they are associational in nature (access is via the contents of working memory); they are fine-grained and independent (which makes them a good match for continuous, incremental learning mechanisms); they are dynamic (a production system by itself defines a computationally complete system that can yield behavior; other processes are not needed to access or execute the memory structures); and they are cognitively impenetrable (their contents and structure may not be arbitrarily searched over, examined, or modified, but only accessed via automatic association). All of these properties place them in sharp contrast to memories in digital computers, which are static structures (not processes), freely addressable by location.
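The condition-action structure described above can be sketched in a few lines of code. This is a minimal illustration of the idea of a recognition memory, not SOAR's actual implementation: the string-based working-memory elements and the rule names are assumptions made for the example.

```python
# A minimal sketch of a production (recognition) memory. The representation
# (sets of strings) and the rule names are illustrative assumptions only.
from dataclasses import dataclass
from typing import FrozenSet, Set

@dataclass(frozen=True)
class Production:
    """A condition-action pair: conditions form the access path,
    actions form the memory contents."""
    name: str
    conditions: FrozenSet[str]  # elements that must be in working memory
    actions: FrozenSet[str]     # elements added when the production fires

def recognize(productions, working_memory: Set[str]) -> Set[str]:
    """Fire every production whose conditions match the current working
    memory (conceptually in parallel), returning the union of their actions."""
    retrieved: Set[str] = set()
    for p in productions:
        if p.conditions <= working_memory:
            retrieved |= p.actions
    return retrieved

rules = [
    Production("propose", frozenset({"state:blocked"}),
               frozenset({"operator:step-aside"})),
    Production("prefer", frozenset({"operator:step-aside"}),
               frozenset({"best:step-aside"})),
]

wm = {"state:blocked"}
wm |= recognize(rules, wm)  # first match cycle retrieves the proposed operator
wm |= recognize(rules, wm)  # second cycle retrieves the preference for it
```

Note how access is purely associational: the only way to reach a production's contents is for working memory to satisfy its conditions, which is the contrast with location-addressable computer memory drawn in the text.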
3. Search In Problem Spaces Supported By A Two-Level Automatic/Deliberate Control Structure
SOAR achieves all cognition by search in problem spaces, and architecturally supports this by a flexible, two-level recognize–decide–act control structure. Problem spaces are based in part on the idea that search in combinatoric spaces is the fundamental process for the attainment of difficult tasks. The nature of such search is seen most easily in tasks like chess that have a well-defined set of operators and states. A search space consists of a set of (generated) representational states and operators that transition between states.
Problem spaces as realized in SOAR extend the standard notion of search in an important direction: problem spaces are taken to be the fundamental way that humans accomplish all cognitive tasks, including routine (i.e., well-practiced) tasks. SOAR is, therefore, one realization of the problem-space hypothesis (Newell 1980b), which asserts that all deliberate cognitive activity occurs in problem spaces. The key to this move lies in the role of knowledge in problem spaces: problem spaces freely admit of any amount of knowledge for guiding search, executing operators, or formulating the space initially in response to a task. Because SOAR provides a set of mechanisms (described next) that support this kind of knowledge use, behavior in SOAR spans the well-known continuum between knowledge-intensive processing (little search) and knowledge-lean processing (much search) (Newell 1990).
Supporting knowledge-driven search places strong functional demands on the architecture’s control structure: at any step in the problem-solving process— selecting the next operator, generating the next state, etc.—any relevant knowledge must be brought to bear. There are two parts to the solution to this problem: the mechanisms for appropriate indexing of the knowledge, and the mechanisms for retrieving and applying the relevant knowledge during search. The indexing concerns learning, discussed below.
For retrieving and applying the knowledge during search, SOAR relies on a two-level control structure that separates the automatic access of knowledge via the productions from the deliberate level of problem solving. Each cognitive step is accomplished by a recognize–decide–act cycle. In the recognize phase, all productions that match the current state fire, producing new content in the working memory. Part of this retrieved content is about what the system should do next—the possible operators to try in the current state, the relative desirability of these operators (e.g., operator A is better than operator B), and so on. Next, in the decide phase, a fixed (domain independent) decision procedure sorts out these preferences in working memory to determine if they converge on a consistent decision. In the event that this processing clearly determines the next step, the decision procedure places in working memory an assertion about what that step should be. In the act phase, that step is taken (by additional production rule firings): the move to the next state in internal problem space search, or the release of motor intentions in external interaction. If it is not clear what to do next (e.g., several operators have been proposed, but no knowledge is evoked to prefer one option to another, or there are conflicts in the retrieved knowledge), an impasse has arisen, and the decision procedure records in working memory the type of the impasse, and sets a subgoal of resolving that impasse. In this way, SOAR’s problem solving gives rise automatically to a cascade of subgoals whenever the knowledge delivered by the recognition memory is insufficient for the current task.
The critical feature of this control structure is its run-time, least-commitment nature: each local decision in the problem space is made at execution time by assembling whatever relevant bits of knowledge can be retrieved (by automatic match) at that moment. Decisions are not fixed in advance, and there are no architectural barriers to the kinds of knowledge that can be brought to bear on the decisions.
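The decide phase of this cycle can be sketched schematically: the preferences retrieved by the recognition memory either converge on a single next operator, or an impasse arises and a subgoal is set. The preference vocabulary below (a simple "better than" relation) is a simplified assumption for illustration; SOAR's actual preference language is richer.

```python
# A schematic sketch of SOAR's decide phase. The preference vocabulary
# (a bare 'better than' relation) is a simplified illustrative assumption.
from typing import Set, Tuple

def decide(proposed: Set[str], better_than: Set[Tuple[str, str]]):
    """proposed: candidate operators retrieved by the recognition memory.
    better_than: pairs (a, b) asserting 'operator a is better than b'.
    Returns ('decision', operator) or ('impasse', reason)."""
    if not proposed:
        return ("impasse", "no operators proposed")
    dominated = {b for (_, b) in better_than}
    candidates = proposed - dominated
    if len(candidates) == 1:
        # The preferences converge: take this step.
        return ("decision", next(iter(candidates)))
    # Tie or conflict: record the impasse and subgoal to resolve it.
    return ("impasse", f"tie among {len(candidates)} operators")

# Knowledge converges: A is preferred to B, so A is selected.
assert decide({"A", "B"}, {("A", "B")}) == ("decision", "A")
# No preference knowledge was retrieved: an impasse, triggering a subgoal.
assert decide({"A", "B"}, set())[0] == "impasse"
```

The fixed, domain-independent character of the procedure is the point: all task knowledge enters through the proposed operators and preferences, never through the decision logic itself.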
3.1 Continuous, Impasse-Driven Learning
SOAR continuously acquires new knowledge in its long-term memory through an experience-based learning mechanism called chunking (Laird et al. 1987, Rosenbloom and Newell 1983). This mechanism generates new productions in the long-term memory by preserving the results of problem solving that occurred in response to impasses. The conditions of the new production consist of aspects of the working memory state just before the impasse, and the actions of the production consist of the new knowledge that resolved the impasse (e.g., an assertion that one of the proposed operators is to be preferred to the other in the current situation). Upon encountering a similar situation in the future, the production will automatically match and retrieve the knowledge that permits SOAR to avoid the impasse. Thus, chunking is a mechanism that converts problem solving into recognition memory, continuously moving SOAR from knowledge-lean to knowledge-rich processing.
Chunking in SOAR has two important functional properties. First, it begins to provide a solution to the knowledge-indexing problem raised earlier. The system assembles its own indices out of the contents of working memory in a way that is directly aimed at making the knowledge retrievable when it is relevant to the immediate demands of the task at hand. Second, learning permeates all aspects of cognition in SOAR. Chunking applies to all kinds of impasses, so any problem space function is open to learning improvements: problem-space formulation, operator generation, operator selection, and so on.
4. Major Architectural Implications And Specific Domains Of Application
SOAR can be used as a theory in multiple ways (Newell 1990). Qualitative predictions can be drawn from SOAR as a verbal theory, without actually running detailed computer simulations. These qualitative predictions can be both domain-general (cutting across all varieties of cognitive behavior) and domain-specific. The theory can also be applied to specific domains by developing detailed computational models of a task; this involves programming SOAR by adding domain-specific production rules to its long-term memory, and generating behavioral traces.
4.1 Domain-Independent Predictions
A principal prediction of a theory of human cognition is that humans are intelligent; the only way to clearly make that prediction is to demonstrate it operationally. SOAR makes this prediction only to the extent that the system has been demonstrated to exhibit intelligent behavior. As a state-of-the-art AI system that has been applied to difficult tasks (ranging from algorithm design to scheduling problems), SOAR makes the prediction to a greater degree than other psychological theories.
SOAR makes a number of general predictions related to long-term memory and skill (Newell 1990). These include the prediction that procedural skill transfer is essentially by identical elements, and will usually be highly specific (Singley and Anderson 1989, Thorndike 1903); the bias of Einstellung will occur—the preservation of learned skill when it is no longer useful (Luchins 1942); the encoding specificity principle (Tulving 1983) holds; and recall will generally take place by a generate-and-recognize process (Kintsch 1970). The best known of SOAR’s general predictions is the power law of practice, which relates the time to do a task to the number of times the task has been performed (Newell and Rosenbloom 1981, Snoddy 1926).
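The power law of practice mentioned above has a standard quantitative form, relating the time to perform a task to the amount of practice:

```latex
% Power law of practice: T(N) is the time to perform the task on trial N,
% with B and \alpha fitted constants.
T(N) = B\,N^{-\alpha}
% Generalized form, with asymptote A and prior practice E:
T(N) = A + B\,(N + E)^{-\alpha}
```

On log-log axes the simple form is a straight line of slope $-\alpha$, which is the signature pattern reported across practice data (Newell and Rosenbloom 1981).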
4.2 Domain-Specific Predictions
SOAR models have been constructed across a range of task domains, and the behavior of the models has been compared with human data on those tasks. One area that has received considerable attention is human-computer interaction (HCI). Some of the successes in this area, such as a detailed model of transcription typing (John 1988), are a result of SOAR inheriting the results of the GOMS theory (Goals, Operators, Methods, and Selection rules), a theory developed in HCI to predict the time it takes expert users to do routine tasks (Card et al. 1983). (GOMS can be seen at one level as a specialization of SOAR, missing features such as learning and impassing.) Other SOAR HCI models depend crucially on SOAR's real-time interruptibility (a function of the two-level control structure) and on SOAR's learning mechanism. SOAR models have been developed of real-time interaction and learning in video games (John et al. 1994), novice-to-expert transitions in computer menu navigation (Howes and Young 1997), and a programmer's interaction with a text editor (Altmann and John 1999), among others.
SOAR models have also been developed of problem solving (Newell 1990), sentence processing (Lewis 2000), concept acquisition (Miller and Laird 1996), and interaction with educational micro-worlds (Miller et al. 1999). In all SOAR models (as with any cognitive model), the explanatory power is shared to varying degrees by both the content posited by the theorist for the particular task and the architectural mechanisms. For example, in the sentence processing model, SOAR’s control structure and learning mechanism, coupled with the real-time constraint, lead directly to a theory of ambiguity resolution that yields a novel explanation of apparent modularity effects and their malleability (Lewis 1996a, Newell 1990), but the architecture provides little apparent constraint on the choice of grammatical theory, which also plays a role in the empirical predictions (Lewis 1996b). Similarly, the general theory of episodic indexing of attention events embodied in the text editor model depends critically on SOAR’s continuous chunking mechanism (Altmann and John 1999), while the specific behavioral traces are a function, in part, of task strategies that could be accommodated by alternative architectures.
5. Critiques Of SOAR, And Future Directions
Critiques of SOAR fall into three major classes: critiques of specific models built within SOAR, critiques of the architecture itself, and critiques of the general methodological approach of building comprehensive architectural theories. For example, specific empirical critiques have been made of SOAR models of the Sternberg memory search task (Lewandowsky 1992) and immediate reaction tasks (Cooper and Shallice 1995). The theoretical challenge is understanding the extent to which the empirical problems can be resolved within the existing architecture, or whether they point back to problems in the architecture itself (Newell 1992b). (The fact that the latter is a real possibility demonstrates that the architectural approach has made some headway on the identifiability and degrees of freedom problems.)
At the architectural level, nearly every major assumption of SOAR has been challenged in the literature (see the multiple book review in BBS for a range of assessments; Newell 1992a). Many of these architectural-level criticisms have been aimed at the uniformity assumptions in SOAR (all tasks as problem spaces, all long-term memory as productions, all learning as chunking), which appear at first to run strikingly against the prevailing mode of theorizing in cognitive psychology and cognitive neuroscience, a mode that emphasizes functional specialization and distinctions over computational generality. The evaluation of SOAR in light of these concerns is not always transparent, however. For example, the analysis of SOAR's implications for modularity (particularly in language processing) revealed that SOAR is not only consonant with, but even predicts, many of Fodor's diagnostics of modular systems (Lewis 1996a, Newell 1990).
Finally, the general approach to cognitive theory that SOAR embraces has come under sharp criticism (most notably by Cooper and Shallice 1995) for not living up to the promise of addressing the methodological concerns identified above, and for not yielding theories with deep empirical coverage that clearly gain their explanatory power from general architectural mechanisms. To the extent that these critiques depend on practice with the SOAR theory specifically, their implications for the broader approach are insecure. Other architectural theories (e.g., ACT (Anderson and Lebiere 1998) and EPIC (Meyer et al. 1995)) exist in the field, and each has adopted somewhat different ways of dealing with these methodological issues that may or may not leave them susceptible to the same criticisms.
The evolution of SOAR as a theory, and its broader role in cognitive science, is likely to proceed along two fronts. First, SOAR will remain an important source of ideas for developing theories of complex cognition, even for those theorists who do not embrace the architecture whole cloth, or who reject the architectural methodology. A harbinger of this can be seen in cognitive neuroscience: as researchers begin to tackle the problem of understanding the nature of 'executive' processes and their realization in the brain, models like SOAR can provide concrete proposals for a set of functionally sufficient mechanisms for the control of deliberate cognition (see the volume on working memory and executive control edited by Miyake and Shah 1999 for evidence of such interaction). Second, SOAR will continue to evolve as a unified set of mechanisms itself, informed in part by the continued application of SOAR to difficult AI problems, and in part by the continued construction and empirical evaluation of detailed models of cognitive tasks that focus on unique aspects of the architecture.
Bibliography:
- Altmann E M, John B E 1999 Episodic indexing: A model of memory for attention events. Cognitive Science 23(2): 117–56
- Anderson J R 1978 Arguments concerning representations for mental imagery. Psychological Review 85(4): 249–77
- Anderson J R 1993 Rules of the Mind. Erlbaum, Hillsdale, NJ
- Anderson J R, Lebiere C 1998 The Atomic Components of Thought. Erlbaum, Hillsdale, NJ
- Card S K, Moran T P, Newell A 1983 The Psychology of Human–Computer Interaction. Erlbaum, Hillsdale, NJ
- Church A 1936 An unsolvable problem of elementary number theory. The American Journal of Mathematics 58: 345–63
- Cooper R, Shallice T 1995 SOAR and the case for unified theories of cognition. Cognition 55(2): 115–49
- Howes A, Young R M 1997 The role of cognitive architecture in modelling the user: SOAR’s learning mechanism. Human– Computer Interaction 12: 311–43
- John B E 1988 Contributions To Engineering Models of Human–Computer Interaction. Carnegie Mellon University, Pittsburgh, PA
- John B E, Vera A H, Newell A 1994 Toward real-time GOMS: A model of expert behavior in a highly interactive task. Behavior and Information Technology 13: 255–67
- Kintsch W 1970 Models for free recall and recognition. In: Norman D A (ed.) Models of Human Memory. Academic Press, New York
- Laird J E, Newell A, Rosenbloom P S 1987 SOAR: An architecture for general intelligence. Artificial Intelligence 33: 1–64
- Lewandowsky S 1992 Unified cognitive theory: Having one’s apple pie and eating it. Behavioral and Brain Sciences 15(3): 449–50
- Lewis R L 1996a Architecture matters: What SOAR has to say about modularity. In: Steier D M, Mitchell T M (eds.) Mind Matters: Contributions to Cognitive and Computer Science in Honor of Allen Newell. Lawrence Erlbaum Associates, Mahwah, NJ
- Lewis R L 1996b Interference in short-term memory: The magical number two (or three) in sentence processing. Journal of Psycholinguistic Research 25(1): 93–115
- Lewis R L 2000 Specifying architectures for language processing: Process, control, and memory in parsing and interpretation. In: Crocker M W, Pickering M, Clifton C Jr (eds.) Architectures and Mechanisms for Language Processing. Cambridge University Press, Cambridge, UK
- Luchins A S 1942 Mechanization in problem solving. Psychological Monographs 54(6): no. 28
- Meyer D E, Kieras D E, Lauber E, Schumacher E H, Glass J, Zurbriggen E, Gmeindl L, Apfelblat D 1995 Adaptive Executive Control: Flexible Human Multiple-task Performance Without Pervasive Immutable Response-selection Bottlenecks. University of Michigan, Ann Arbor, MI
- Miller C S, Laird J E 1996 Accounting for graded performance within a discrete search framework. Cognitive Science 20: 499–537
- Miller C S, Lehman J F, Koedinger K R 1999 Goals and learning in micro-worlds. Cognitive Science 23(3): 305–36
- Miyake A, Shah P (eds.) 1999 Models of Working Memory: Mechanisms of Active Maintenance and Executive Control. Cambridge University Press, Cambridge, UK
- Newell A 1973 Production systems: Models of control structures. In: Chase W G (ed.) Visual Information Processing. Academic Press, New York
- Newell A 1980a Physical symbol systems. Cognitive Science 4: 135–83
- Newell A 1980b Reasoning, problem solving and decision processes: The problem space as a fundamental category. In: Nickerson R (ed.) Attention and Performance VIII. Erlbaum, Hillsdale, NJ
- Newell A 1990 Unified Theories of Cognition. Harvard University Press, Cambridge, MA
- Newell A 1992a Precis of unified theories of cognition. Behavioral and Brain Sciences 15(3): 425–92
- Newell A 1992b SOAR as a unified theory of cognition: Issues and explanations. Behavioral and Brain Sciences 15(3): 464–92
- Newell A, Rosenbloom P 1981 Mechanisms of skill acquisition and the law of practice. In: Anderson J R (ed.) Cognitive Skills and Their Acquisition. Erlbaum, Hillsdale, NJ
- Newell A, Shaw J C, Simon H A 1958 Elements of a theory of human problem solving. Psychological Review 65: 151–66
- Newell A, Simon H A 1972 Human Problem Solving. Prentice- Hall, Englewood Cliffs, NJ
- Pylyshyn Z W 1973 What the mind’s eye tells the mind’s brain: A critique of mental imagery. Psychological Bulletin 80(1): 1–24
- Pylyshyn Z W 1984 Computation and Cognition. Bradford MIT Press, Cambridge, MA
- Reitman W 1965 Cognition and Thought. Wiley, New York
- Rosenbloom P S, Laird J E, Newell A (eds.) 1992 The SOAR Papers: Research on Integrated Intelligence. MIT Press, Cambridge, MA
- Rosenbloom P S, Newell A 1983 The chunking of goal hierarchies: A generalized model of practice. In: Michalski R S, Carbonell J, Mitchell T (eds.) Machine Learning: An Artificial Intelligence Approach II. Morgan Kaufman, Los Altos, CA
- Simon H A 1996 The patterned matter that is mind. In: Steier D M, Mitchell T M (eds.) Mind Matters: Contributions to Cognitive and Computer Science in Honor of Allen Newell. Lawrence Erlbaum Associates, Mahwah, NJ
- Singley M K, Anderson J R 1989 The Transfer of Cognitive Skill. Harvard University Press, Cambridge, MA
- Snoddy G S 1926 Learning and stability. Journal of Applied Psychology 20: 1–36
- Thorndike E L 1903 Educational Psychology. Lemke and Buechner, New York
- Tulving E 1983 Elements of Episodic Memory. Oxford University Press, New York
- Turing A M 1936 On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42: 230–65