Cognitive Science Research Paper


Cognitive science (CS) is a young discipline that emerged from a research program started in 1975. It partially overlaps with its mother disciplines: psychology, artificial intelligence, linguistics, philosophy, anthropology, and the neurosciences. By no means the only discipline dedicated to the study of cognition, cognitive science is unique in its basic tenet that cognitive processes are computations, a perspective which allows for direct comparison of natural and artificial intelligence, and emphasizes a methodology that integrates formal and empirical analyses with computational synthesis. Computer simulations as generative theories of cognition have therefore become the hallmark of CS methodology.



Today, CS is an internationally established field. The dominant tradition of the early years, close to artificial intelligence and its symbol-processing architectures, has been enriched by alternative computational architectures (e.g., artificial neural networks) and by the recognition that natural, especially human, cognition rests on biological as well as on social and cultural foundations. CS studies cognitive systems, which may be organisms, machines, or any combination of these acting in an environment that may be open and dynamically changing. Cognition in CS denotes a class of advanced control mechanisms that allow for sophisticated adaptation to changing needs (e.g., learning and planning) through computations operating on mental representations. Cognition typically coexists with simpler regulatory mechanisms, like reflexes. CS recognizes that cognition in biological systems is implemented in brain processes, but emphasizes the importance of analyses at the functional level, with cognitive neuroscience relating both domains. Applications of cognitive science may be found in human–computer interaction and in the design of software and information systems, as well as in human factors engineering, health care, and, most notably, in education.

1. Cognition And The Cognitive Science Approach

Although reflection on the mind dates back at least to Plato, the term ‘cognition,’ etymologically based on ancient Greek gignoskein and Latin cognoscere, is relatively recent. It surfaces in nineteenth-century psychology, which exclusively dealt with the phenomenology of consciousness, e.g., in Spencer’s characterization of the interrelations of human feelings. At about the same time, the triad of thinking, feeling, and willing of eighteenth-century Vermögenspsychologie became the well-known taxonomy of the mind, dividing it into cognition, emotion, and volition.




We all have an intuitive understanding of what ‘cognition’ refers to, and there is common agreement that thinking, memory, and language, and ‘the use or handling of knowledge’ (Gregory 1987, p. 149) are correctly subsumed under that term. On the other hand, it is difficult to define the term strictly. Minimalist approaches (e.g., Searle 1990) would reserve its use for the contents of consciousness, whereas a maximalist approach has been taken by Maturana and Varela (1980, p. 13), who claim that ‘living as a process is a process of cognition’ (cf., Boden 2000, for a critique). The prevailing opinion, however, considers unconscious processes as well as conscious ones (Neisser 1967, Norman 1981) without attributing cognitive abilities to every living organism (e.g., a tree) and, indeed, without confining the use of the term to biological systems. Still, ‘cognition’ continues to be a rather ill-defined term, which even as ambitious a project as The MIT Encyclopedia of the Cognitive Sciences (Wilson and Keil 1999) has not dared to treat in an article of its own.

The hallmark of the CS approach to cognition is to identify cognitive processes with computation: cognition is information processing. Not every computation is cognition, however, which means that these computational processes must be characterized (Newell and Simon 1976), and further restrictions must be named, such as referential content or ‘intentionality’ (Fodor 1975), or system complexity (Smolensky 1988). The rest of this section will sketch the way towards a computational theory of mind.

1.1 Formal Approaches In Philosophy, Computing, And Neuroscience

The philosophical tradition on which CS draws may be traced from the invention of number systems to medieval algebra (Raimundus Lullus) and on to Descartes and Leibniz, and from the Aristotelian syllogisms to Frege’s seminal work on formal logic. Apart from being a history of the development of formal systems, this philosophical tradition can be seen as an analysis of thinking and reasoning aimed at separating the content of an argument from its form. The syllogisms of Aristotle, which continued to constitute the core of logic, virtually unchanged, over more than 2,000 years, are the first milestone: in analyzing an argument such as ‘All human beings will die. Socrates is a human being. Therefore, Socrates will die’ as ‘p → q, p ∴ q’ (modus ponens) in the Aristotelian tradition (combined with the formal advancements of Lullus and Descartes), the specific content of the argument is separated from the form of reasoning, and it becomes clear that the latter is sufficient for warranting a true conclusion, provided that the input to this logical vehicle consists of premises that are true. An important generalization about reasoning had been found. It was Leibniz in the late seventeenth century who advocated the use of formal reasoning in the hope that all fruitless discussions might be ended just by formalizing the arguments and computing the conclusions—hopelessly optimistic from our view (there would be endless debate on how to formalize the premises), but instrumental for an account of thinking as symbol processing.
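Leibniz’s dream of ‘computing the conclusions’ can be made concrete in a short sketch: a purely syntactic forward-chainer that applies modus ponens to propositional rules without regard to what the symbols mean (a minimal illustration in Python; the predicate strings are hypothetical, not drawn from any historical system).

```python
def forward_chain(facts, rules):
    """Apply modus ponens exhaustively: from 'p -> q' and p, derive q.

    The procedure never inspects what a proposition means; it only
    matches symbol strings -- which is exactly the point of separating
    the form of reasoning from its content.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# 'All human beings will die' rendered as a conditional rule.
RULES = [("human(Socrates)", "mortal(Socrates)")]
derived = forward_chain({"human(Socrates)"}, RULES)
print(derived)  # contains 'mortal(Socrates)'
```

Any set of premises that is true yields only true conclusions here, regardless of subject matter, illustrating why form alone ‘warrants’ the conclusion.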

Frege’s (1879/1967) reformulation of logic laid the foundations of modern logical semantics and marks the beginning of the modern tradition that led to CS. The theory of symbol processing is formalized as a general theory of computation in the works of Gödel, Turing, Church, and Post between 1931 and 1943, soon to become the foundation of computer science and, especially, artificial intelligence. Logical positivism (as epitomized in Wittgenstein’s Tractatus, written during World War I) and logical semantics (foremost, Tarski’s work in the 1920s) constitute the philosophical legacy on which CS could draw.

The invention of computing machinery in the 1940s (Zuse, with Babbage as an isolated forerunner in the nineteenth century) was instrumental in promoting a computational perspective on cognition. A major step was the founding of artificial intelligence (AI) in 1956. The work done in AI, notably on human problem solving by Newell and Simon, may well be regarded as CS at a time when the term was not yet around.

The symbol processing tradition was complemented in the neurosciences by the invention of the formal neuron (an abstraction from biological neurons; McCulloch and Pitts 1943), as well as by analog approaches to computing and self-regulation and general systems theory (cybernetics; Wiener 1948). The detection of representational functions of single neurons in the visual system (receptive fields; Hubel and Wiesel 1962) paved the way for a new, scientific approach to the concept of mental representation, while the development of artificial neural networks, notably the perceptron (Rosenblatt 1958), showed a possible way to combine these new discoveries with the idea of computation.

1.2 Other Disciplines: Linguistics And Anthropology

Linguistics took a major step forward when researchers such as Jakobson and Troubetzkoy succeeded in discovering the common abstract structure behind the phonemes utilized in different languages. Linguistic structuralism, later applied to syntax by Harris and Chomsky, became another driving force in the development of CS, as did the structuralism developed in ethnology by Lévi-Strauss and in cognitive anthropology. In addition, the analysis of formal languages (Chomsky 1959a) provided the link to computer science.

1.3 The Rise of Cognitive Psychology

Nineteenth-century psychology has only recently been rediscovered, but failed to contribute to CS’s early development because it had been all but extinguished by behaviorism from about 1915 to 1960. Only the later behaviorists considered internal variables (notably Hull), or even memory and mental representation (Tolman). It was information theory (developed by Shannon 1948), or rather its insufficiency to explain central psychological phenomena such as the memory span, which brought G. A. Miller in the mid-1950s to reconsider human information processing. His collaboration with Noam Chomsky, and especially the latter’s poignant critique of behaviorist approaches to language (Chomsky 1959b), served to reorient psychological research towards issues of internal storage and processing, towards a psychology that no longer ignored mind and consciousness, yet was careful to stay within the limits of scientific rigor (Miller et al. 1960). Cognitive psychology underwent a rapid development and in 1967, Ulric Neisser wrote the first textbook of the new field, coining its name. Gardner (1985) names further influential researchers in psychology, among them Bruner (notably his work on strategies) and Jean Piaget. Although Piaget’s numerous monographs on cognitive development did not become available in English before the 1960s, he is certainly a forerunner of CS who always insisted on the importance of formal principles for explaining cognitive development.

1.4 The Origin Of Cognitive Science

A state-of-the-art report on CS by the Alfred P. Sloan Foundation concludes that ‘What has brought the field into existence is a common research objective: to discover the representational and computational capacities of the mind and their structural and functional representation in the brain’ (Sloan Foundation 1978, p. 6). It would not be wrong to say that the Sloan Foundation acted as midwife at the birth of CS. Its committee diagnosed convergent approaches visible across disciplines and went one step beyond that diagnosis to unite what were still very different approaches. Institutionalization followed soon: a journal first (Cognitive Science, in 1977), then, two years later, a society and a yearly conference and, still later, doctoral programs and research projects, all evolving into a flourishing new field.

2. The Basic Tenet: Cognition As Computation

The modern idea of computation was formulated in the 1930s. It owes much to Hilbert’s program for the complete axiomatization of mathematics, limited by Gödel’s (1931) proof that not all truths about a formal system are provable with the means provided by that very same system. Some have tried to use this result against the idea that cognition could be computation, overlooking that there may well be truths about the human mind that the mind itself fails to arrive at, let alone prove formally.

The best-known definitions of computation rely on recursive functions (Church 1941), logical productions (Post 1943), or the abstract machine designed by Turing (1936). All these approaches have been proved to be equivalent in scope, and especially the Turing machine may be considered the direct forerunner of today’s computers. The common core is the idea of a formal system that uses symbols, i.e., variables and operators combined to form symbolic expressions that are manipulated according to fixed rules and to the internal state of the system.
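This common core—symbols manipulated by fixed rules plus an internal state—fits in a few lines of code. The following toy Turing machine (an illustrative sketch, not Turing’s original formulation) uses just three transition rules to increment a binary number:

```python
def run_tm(tape_str, rules, state="carry", halt="halt"):
    """Run a Turing machine given as a transition table.

    rules maps (state, symbol) -> (symbol_to_write, head_move, next_state).
    The tape is a dict from position to symbol; '_' marks blank cells.
    """
    tape = dict(enumerate(tape_str))
    pos = len(tape_str) - 1          # head starts on the rightmost cell
    while state != halt:
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

# Binary increment: propagate a carry leftward from the least significant bit.
INC_RULES = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1",  0, "halt"),   # absorb the carry
    ("carry", "_"): ("1",  0, "halt"),   # new most significant bit
}

print(run_tm("1011", INC_RULES))  # 1100
```

The machine knows nothing about numbers; it only rewrites symbols according to its table and internal state, which is precisely the sense in which all the equivalent formalisms define computation.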

2.1 Physical Symbol Systems

When they received the Turing Award for their ground-breaking work in AI, Newell and Simon expanded the theory of symbol processing and coined the Physical Symbol Systems Hypothesis (PSSH): ‘A physical symbol system has the necessary and sufficient means for general intelligent action’ (Newell and Simon 1976, p. 117).

A physical symbol system is a formal system. Like all formal systems, it has an ‘alphabet’ of (at least two) arbitrarily defined symbols, as well as operators to create and transform symbol structures (symbolic expressions) of arbitrary complexity from the elementary symbols of the alphabet according to syntactic rules. This system is ‘physical’ in the sense that it has been implemented in a suitable way. One such way is the encoding of the symbols as levels of voltages in an array of transistors, and of the operations as hard-wired connections between transistors; this is how it is done in semiconductor chips. A different way would be to encode the symbols as ‘spikes’ (action potentials) of neurons. Other ways are possible, but those two already show how the theory can be applied to organisms as well as to technical systems. The point is that such a system needs a physical implementation in order to function in the real world and become more than just an idea. The kind of implementation is arbitrary, however, because the system functions according to its symbolic expressions and syntactic rules, completely independently of its implementation.

Symbols are arbitrary signs, but they designate objects or processes, including processes in the system itself. Their semantics is defined either by reference to an object (in the sense that depending on the respective symbolic expression, the system exerts an influence on the object or is influenced by it), or by the symbolic expression being executable as a kind of program.

Physical symbol systems may hold a large number of symbol structures, which means that they need a symbol store, or memory, which is unlimited according to Newell and Simon. In fact, physical symbol systems are Turing-equivalent computing devices and, therefore, the PSSH is equivalent to the notion of cognition being computation. The PSSH cannot be proven, nor can it be refuted formally. It gained plausibility, however, through empirical studies of human problem solving and its simulation (Newell and Simon 1972). The PSSH has integrated AI into computer science through the common reference to the theory of automata, a platform which gives a foundation to CS as well. The distinction between functional and implementation levels, elaborated by Newell (1982) and Marr (1982), enabled CS as a science of biological as well as technical cognitive systems.

2.2 Philosophical Foundations: Functionalism And The Computational Theory Of Mind

Mental states have been analyzed as ‘intentional attitudes’ in the philosophy of mind, consisting of a propositional content (e.g., P = the sun is shining) and an attitude that characterizes one’s own relation to that proposition (e.g., I wish that P would become true). Fodor (1975) developed this approach further, arriving at a ‘language of thought’ that treats the propositional content as data and the intentional relation as an algorithmic one.

If we accept these as the elements of a ‘language of thought,’ then the question arises of how mental states relate to brain states: a well-known problem in philosophy. Following Putnam (1960), Fodor and others conceptualize the relation between brain and mental states as being parallel to the relation between a computer (i.e., the hardware) and a program running on that computer: the mind as the software of the brain. This approach is known as the computational theory of mind. It fits well with the PSSH, and it soon became the dominant framework in CS. However, it addresses (potentially) conscious thought only, ignoring lower cognitive processes.

2.3 Achievements And Drawbacks Of The Classical Symbol-Processing Approach

From the twenty-first-century perspective, the PSSH and the computational theory of mind together constitute the classical period of CS, spanning the decade from 1975 to 1985. Within that period, cognitive modeling emerges as CS’s characteristic methodology. Its applications include the following fields.

2.3.1 Problem Solving. The former general model of problem solving as heuristic search (Newell and Simon 1972) was enlarged by recognizing the importance of domain-specific knowledge, which became the foundation of an AI technology (knowledge-based systems or ‘expert systems’; Hayes-Roth et al. 1983, Buchanan and Shortliffe 1984) and inspired much psychological research on expertise (e.g., Ericsson and Smith 1991).

2.3.2 Cognitive Architectures. Rule-based architectures evolved into ambitious models of human cognition in general, comprising memory, problem solving, learning, and some natural language processing. The best-known of these systems are SOAR (Laird et al. 1987, Newell 1990) and the impressive series of ACT, ACT*, and ACT-R frameworks, all developed by John Anderson (Anderson 1976, 1983, Anderson and Lebiere 1998).

2.3.3 Natural Language Processing. From the late 1950s on, theoretical linguistics has been dominated by Noam Chomsky. His theories (notably Chomsky 1981) are framed as theories of human linguistic competence and have inspired CS research on human parsing (Frazier 1987, Mitchell 1994) and on language acquisition (Pinker 1984).

2.3.4 Computers And Education. Knowledge-based systems have been built for purposes of instruction, so-called ‘intelligent tutorial systems’ (Psotka et al. 1988). Although their cost–benefit ratio turned out to be unsuitable for general education, they have been used with success for the training of specialists.

The main drawback of research during the classical period of CS was that it excluded many important issues that could well have been covered by assuming mental representation and cognitive algorithms. The original objective for AI, stated by Simon (1955)—to address problems whose solution by a human being would lead us to attribute intelligence to that person—led CS and AI largely to ignore problems that do not seem to require intelligence in people. However, these problems turned out to be the real ‘tough nuts,’ e.g., navigation and other skilled action.

The computer technology of that period was still mainframe oriented, and interactive and graphic technologies were scarcely developed. This state of the art did not encourage researchers to model real-time agent–environment interaction, although there were some exceptions, e.g., Winograd (1972).

The ‘methodological solipsism,’ as advocated by Fodor (1980) in connection with his theory of mental representation, and the dominance of logic-based approaches in AI (especially during the 1980s) made CS researchers believe that all interesting aspects of cognition happened within a single symbol system.

In Fodor’s ‘language of thought,’ content is defined as a truthful representation of (a part of?) the world, as in Tarski semantics and similar approaches, which unduly constrains mental representation and ignores its constructive nature.

To summarize, CS research in fact fell short of the scope even its original symbol-processing framework provided.

3. Alternative Computational Frameworks

From a very abstract viewpoint, a Turing machine is all one ever needs to compute. However, different architectures or virtual machines may make some computations easy and others difficult. Classical CS had adopted symbol-processing frameworks. There is a large gap, however, between a functional specification and the way in which a brain is built. The new connectionist movement, which surfaced in the mid-1980s, attempted to bridge the gap with a framework that was all but forgotten at that time.

3.1 Artificial Neural Networks: Connectionism

McCulloch and Pitts (1943) presented the biological neuron as an abstract computing device. Hebb (1949) added a number of hypotheses, most of which have since turned out to be correct (e.g., that enduring changes in the transmission efficiency of certain synapses are the neurophysiological basis of memory). Rosenblatt (1958) built a simple artificial neural network (ANN) that could be used for pattern recognition, the ‘perceptron.’ Soon, connectionism (the name adopted for this line of research) flourished. Its boom came to an end, however, when perceptrons ran into trouble with certain distinctions (such as the XOR function), and their limitations were mathematically proven (Minsky and Papert 1969). From then on, only a few researchers, mostly in biology and biophysics, continued the connectionist tradition.
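Both Rosenblatt’s learning procedure and the kind of limitation Minsky and Papert proved are easy to demonstrate (a minimal pure-Python sketch, not Rosenblatt’s original formulation): error-driven weight updates let a single threshold unit learn the linearly separable AND function, while no setting of its weights can ever capture XOR.

```python
def train_perceptron(samples, epochs=25, lr=0.1):
    """Rosenblatt-style perceptron: a threshold unit trained by
    error-driven updates (w += lr * error * input)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            w[0] += lr * (target - y) * x1
            w[1] += lr * (target - y) * x2
            b += lr * (target - y)
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_net = train_perceptron(AND)  # linearly separable: learning converges
xor_net = train_perceptron(XOR)  # not linearly separable: always misclassifies
```

No matter how long the XOR network trains, a single linear threshold cannot separate the two classes; it takes a hidden layer, as described below, to overcome this.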

The sudden rebirth of connectionism started with the discovery of an architectural change—the introduction of ‘hidden layers’ in ANNs—that overcame the limitations of the perceptron (Rumelhart, McClelland and the PDP group 1986). After a period of heated dispute, connectionism has now been integrated into mainstream CS. Numerous ANN architectures have been developed, and hybrid connectionist–symbolic systems have also been constructed.

3.2 Distributed Representations As Subsymbols

At the heart of the dispute over connectionism was the issue of mental computation. Symbolic and ‘localist’ connectionist architectures (see Page 2000 for an overview) maintain that variables (or the nodes of an ANN) can be interpreted as being meaningful. In ‘parallel distributed processing’ (PDP), however, entities are represented by patterns, usually by a ‘feature vector’ containing the activation values of formal neurons. Smolensky (1988) claimed that only the subsymbolic approach (referring to the elements of a feature vector) grasps the essence of cognition, whereas symbol processing approaches were confined to a mere approximation. To the contrary, Fodor and Pylyshyn (1988) argued that connectionism either was just an implementation of symbol processing (hence banal), or inadequate for modeling productivity and systematicity in natural language. The debate was never resolved, but it is generally recognized that suitability for cognitive modeling is more important than an abstract decision about which framework is ‘better,’ and that, in fact, both frameworks share important characteristics, being computational as well as representational.

3.3 Beyond Connectionism: Nonlinear Dynamics

Since the early 1990s, another computational framework has been claimed to be useful for cognitive modeling: the theory of nonlinear dynamic systems, initially known as ‘chaos theory’ (Port and van Gelder 1995). Fine-grained analyses of movement (e.g., in phonetics) and of developmental changes are the intended area of application. Although this renders a more precise picture of how cognitive processes are implemented (just as the neural models used in biology are more detailed than connectionist accounts), it could well be that these characteristics are less important on the functional level (Eliasmith 2000).

4. The ‘New’ Cognitive Science: Interacting Cognitive Systems

The technical cognitive systems of classical AI and CS were systems that were only cognitive at their functional level. This is never the case in biological systems: animals live, move and eat, and reproduce. All this basic behavior is made possible without cognition in the sense of thoughtful decision (in lower animals, at least). Built-in physiological regulations, reflexes, instincts and species-specific behaviors serve to achieve the necessary adaptation. Learning in its most primitive form (classical conditioning) has been demonstrated in some worms and, of course, higher species; categorization of stimuli (the forerunner of concepts) in birds and mammals; episodic memory at least in some mammals, especially in apes; full-fledged language in humans only.

Cognition has evolved because of the adaptive value of learning (and the cultural tradition that is based upon it) and thinking (in mental simulation of an act and its consequences). But cognition came late; it has to coexist with those basic regulatory mechanisms mentioned above. In the human species, most of an individual’s knowledge, and even specialized cognitive processes, e.g., reading and writing, have to be learned and trained within the culture that provides it. Sensorimotor processes (e.g., driving in a city) rely heavily on a system’s interaction with its environment and with special tools (like a car). Also, we are embedded in a social world: many cognitive processes, foremost language, are acquired through social interaction. All this cannot be modeled adequately by a single symbol-processing system that scarcely (if at all) interacts with its environment and with other cognitive systems. In CS, this paradigm shift occurred gradually, starting in the mid-1980s. Having extended its scope since its classical period, modern CS appears as the science of cognitive systems in a fuller sense, as described above.

4.1 Situated Cognition

Using the methodology of field studies, Suchman (1987), Hutchins (1995a, 1995b) and others demonstrated how much people rely on cues and representations provided by the environment. While traditional accounts of planning rely exclusively on mental representations and processes, human planning can be shown to rely on maps, road signs, guidance by other people, and other external sources. The importance of external representations in problem solving has been repeatedly demonstrated (Zhang 1997).

As is so often the case, ‘situated cognition’ was introduced as an alternative to the ‘old’ paradigm of CS. Vera and Simon (1993), however, insisted that CS from its very start (e.g., Newell and Simon 1976) does not exclude situatedness, but rather emphasizes its necessity. Indeed, to realize that cognition is situated can also solve the problem of symbol grounding: mental representations arise and get their meaning in the context of acting in the world (a solution already envisioned by Newell 1980). Situated cognition, it seems, highlights a formerly neglected aspect of CS.

Situatedness also includes the body, not only in the basic sense that cognition coexists with other, more basic processes in organisms. Rather, mental representations are in many ways influenced by the cognitive representation of the body, as Lakoff and Johnson (1980) have shown for metaphors.

4.2 The Social Aspects Of Cognition

Culture, and social action, shape a lot of our thoughts and cognitive skills: ‘Sociality and culture are made possible by cognitive capacities, contribute to the ontogenetic and phylogenetic development of these capacities, and provide specific inputs to cognitive processes’ (Sperber and Hirschfeld 1999, p. cxi). By focusing on a single individual concerned with a single task only, CS had followed the model provided by experimental psychology, its most prominent source of empirical data. The recent change of view in favor of social and cultural factors was mainly due to work in an applied CS area where social interaction is central: education. Lave (1988) gives numerous demonstrations of cultural influences on the solution of apparently context-free tasks in mathematics. More generally, culture-oriented theories of development and learning (beyond sociobiology as in Lumsden and Wilson 1981) have been advanced by Cole (1991) and Tomasello et al. (1993).

Because of the sheer amount of detail provided in cultures, especially in modern industrial ones with libraries and a high degree of specialization in work, it is very difficult to integrate cultural aspects in cognitive theories. The problem is eased when analyses focus on narrowly defined tasks.

A related field is research on groups (e.g., in computer-supported cooperative work (CSCW); Olson and Olson 1997) and on teams of experts (Hinsz et al. 1997). Here, as well as in knowledge engineering (Strube et al. 1996), the task itself comprises the exchange of knowledge and, at least to a certain degree, the development of ‘shared knowledge’ or ‘shared mental models’ (Lewis and Sycara 1993).

4.3 A New Paradigm: Autonomous Social Agents And Mixed Groups

According to the now dominant paradigm in CS, cognitive systems are conceived as autonomous social agents, situated in a complex dynamic environment. This view’s resemblance to the shift in AI instigated by new architectures in robotics (Brooks 1991) and the development of intelligent agents (see Wooldridge and Jennings 1995 for an overview) is no accident.

Agent approaches emphasize the complex action control needed for an agent (robot or organism) that pursues its own goals, which are always many, and potentially conflicting (Maes 1990). Because of restricted resources, agents in the real world cannot be fully rational; ‘bounded rationality’ (Simon 1955) has instead been claimed to be a universal principle of human thought (Gigerenzer et al. 1999).

Distributed AI (Bond and Gasser 1988) studies cooperation and competition among agents. The biennial RoboCup contest (Kitano et al. 1997) epitomizes this line of research, having robot or simulated agent teams playing soccer games against each other. The relevance for CS lies in the development of integrated architectures that comprise cognitive and more primitive (so-called ‘reactive’) regulation, as well as social interaction. This fits well with the current emphasis on situated cognition and with applied problems of CS, e.g., in the field of office automation and cooperation in mixed groups that comprise human workers as well as technical systems, as in the case of air traffic (Hutchins 1995b).

5. Achievements And Present State

CS in the twenty-first century has grown from an innovative interdisciplinary field into an academic discipline of its own, albeit one still at an early stage of institutionalization. CS has been evolving differently, however, in different parts of the world. It has a solid infrastructure of departments, graduate programs, etc., in the UK and North America, in France, and in some other countries, but remains at an early stage of institutionalization in Germany, for instance, and is even less established in many other countries. In the USA, cognitive neuroscience has split away from cognitive science, at least organizationally. In other countries such as France, however, it would be impossible to imagine CS without brain research and neuroscience. The necessity and importance of real-world applications of CS in education, industry, and other fields is generally recognized, but CS still lacks a well-developed professional profile. On the other hand, CS has an impressive record of success in research, an infrastructure of international and national academic societies (foremost the Cognitive Science Society, founded in 1979), international and national conferences, and dedicated CS journals (Cognitive Science, founded in 1977, and Cognitive Science Quarterly, founded in 2000). An analysis of publications in Cognitive Science in 1977–95 found evidence not only for a dominance of psychologists and computer scientists in the CS community, but also for CS ‘as a discipline of its own […] becoming increasingly more common’ (Schunn et al. 1998, p. 117). CS, it seems, is slowly but steadily evolving.

Bibliography:

  1. Anderson J R 1976 Language, Memory, and Thought. Erlbaum, Hillsdale, NJ
  2. Anderson J R 1983 The Architecture of Cognition. Harvard University Press, Cambridge, MA
  3. Anderson J R, Lebiere C 1998 Atomic Components of Thought. Erlbaum, Hillsdale, NJ
  4. Boden M A 2000 Autopoiesis and life. Cognitive Science Quarterly 1: 135–46
  5. Bond A H, Gasser L (eds.) 1988 Readings in Distributed Artificial Intelligence. Kaufmann, San Mateo, CA
  6. Brooks R A 1991 Intelligence without representation. Artificial Intelligence 47: 139–59
  7. Buchanan B G, Shortliffe E H 1984 Rule-based Expert Systems. Addison-Wesley, Reading, MA
  8. Chomsky N 1959a On certain formal properties of grammars. Information and Control 2: 137–67
  9. Chomsky N 1959b Review of B F Skinner, Verbal Behavior. Language 35: 26–58
  10. Chomsky N 1981 Lectures on Government and Binding. Foris, Dordrecht, The Netherlands
  11. Church A 1941 The Calculi of Lambda-Conversion. Princeton University Press, Princeton, NJ
  12. Cole M 1991 A cultural theory of development: what does it imply about the application of scientific research? Special issue: culture and learning. Learning and Instruction 1: 187–200
  13. Eliasmith C 2000 Is the brain analog or digital? The solution and its consequences for cognitive science. Cognitive Science Quarterly 1: 147–70
  14. Ericsson K A, Smith J (eds.) 1991 Toward a General Theory of Expertise: Prospects and Limits. Cambridge University Press, Cambridge, UK
  15. Fodor J A 1975 The Language of Thought. Crowell, New York
  16. Fodor J A 1980 Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences 3: 63–73
  17. Fodor J A, Pylyshyn Z W 1988 Connectionism and cognitive architecture: a critical analysis. Cognition 28: 3–71
  18. Frazier L 1987 Sentence processing: a tutorial review. In: Coltheart M (ed.) The Psychology of Reading. Attention and Performance. Erlbaum, Hove, UK, Vol. 12, pp. 559–86
  19. Frege G 1879/1967 Begriffsschrift: a formula language, modeled upon that of arithmetic, for pure thought. In: van Heijenoort J (ed.) From Frege to Gödel: a Source Book on Mathematical Logic, 1879–1931. Harvard University Press, Cambridge, MA, pp. 5–82
  20. Gardner H 1985 The Mind’s New Science: a History of the Cognitive Revolution. Basic Books, New York
  21. Gigerenzer G, Todd P M, ABC Research Group 1999 Simple Heuristics That Make Us Smart. Oxford University Press, New York
  22. Gödel K 1931 Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. Monatshefte für Mathematik und Physik 38: 173–98
  23. Gregory R L (ed.) 1987 The Oxford Companion to the Mind. Oxford University Press, Oxford, UK
  24. Hayes-Roth F, Waterman D A, Lenat D B 1983 Building Expert Systems. Addison-Wesley, Reading, MA
  25. Hebb D O 1949 The Organization of Behavior. Wiley, New York
  26. Hinsz V B, Tindale R S, Vollrath D A 1997 The emerging conceptualization of groups as information processors. Psychological Bulletin 121: 43–64
  27. Hubel D H, Wiesel T N 1962 Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. Journal of Physiology 160: 106–54
  28. Hutchins E 1995a Cognition in the Wild. MIT Press, Cambridge, MA
  29. Hutchins E 1995b How a cockpit remembers its speeds. Cognitive Science 19: 265–88
  30. Kitano H, Asada M, Kuniyoshi Y, Noda I, Osawa E, Matsubara H 1997 RoboCup: a challenge problem for AI. AI Magazine 18: 73–85
  31. Laird J E, Newell A, Rosenbloom P S 1987 SOAR: an architecture for general intelligence. Artificial Intelligence 33: 1–64
  32. Lakoff G, Johnson M 1980 Metaphors We Live By. Chicago University Press, Chicago
  33. Lave J 1988 Cognition in Practice: Mind, Mathematics and Culture in Everyday Life. Cambridge University Press, Cambridge, UK
  34. Lewis C M, Sycara K P 1993 Reaching informed agreement in multispecialist cooperation. Group Decision and Negotiation 2: 279–99
  35. Lumsden C J, Wilson E O 1981 Genes, Minds and Culture. Harvard University Press, Cambridge, MA
  36. Maes P (ed.) 1990 Designing Autonomous Agents. Theory and Practice from Biology to Engineering and Back. MIT Press, Cambridge, MA
  37. Marr D 1982 Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. Freeman, San Francisco
  38. Maturana H R, Varela F J 1980 Autopoiesis and Cognition: the Realisation of the Living. Reidel, Dordrecht, The Netherlands
  39. McCulloch W S, Pitts W 1943 A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115–33
  40. Miller G A, Galanter E, Pribram K H 1960 Plans and the Structure of Behavior. Holt, New York
  41. Minsky M, Papert S 1969 Perceptrons. MIT Press, Cambridge, MA
  42. Mitchell D C 1994 Sentence parsing. In: Gernsbacher M A (ed.) Handbook of Psycholinguistics. Academic Press, San Diego, CA, pp. 375–409
  43. Neisser U 1967 Cognitive Psychology. Prentice-Hall, Englewood Cliffs, NJ
  44. Newell A 1980 Physical symbol systems. Cognitive Science 4: 135–83
  45. Newell A 1982 The knowledge level. Artificial Intelligence 18: 87–127
  46. Newell A 1990 Unified Theories of Cognition. Harvard University Press, Cambridge, MA
  47. Newell A, Simon H A 1972 Human Problem Solving. Prentice-Hall, Englewood Cliffs, NJ
  48. Newell A, Simon H A 1976 Computer science as empirical enquiry: symbols and search. Communications of the ACM 19: 113–26
  49. Norman D A 1981 Categorization of action slips. Psychological Review 88: 1–15
  50. Olson G M, Olson J S 1997 Research on computer supported cooperative work. In: Helander M, Landauer T K, Prabhu P V (eds.) Handbook of Human–Computer Interaction, 2nd rev. edn. Elsevier, Amsterdam, pp. 1433–56
  51. Page M 2000 Connectionist modelling in psychology: a localist manifesto. Behavioral and Brain Sciences 23
  52. Pinker S 1984 Language Learnability and Language Development. Harvard University Press, Cambridge, MA
  53. Port R, van Gelder T J 1995 Mind as Motion: Explorations in the Dynamics of Cognition. MIT Press, Cambridge, MA
  54. Post E 1943 Formal reductions of the general combinatorial decision problem. American Journal of Mathematics 65: 197–268
  55. Psotka J, Massey D L, Mutter S A (eds.) 1988 Intelligent Tutoring Systems: Lessons Learned. Erlbaum, Hillsdale, NJ
  56. Putnam H 1960 Minds and machines. In: Hook S (ed.) Dimensions of Mind. New York University Press, New York, pp. 138–64
  57. Rosenblatt F 1958 The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65: 386–408
  58. Rumelhart D E, McClelland J L & the PDP Group 1986 Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge, MA
  59. Schunn C D, Crowley K, Okada T 1998 The growth of multidisciplinarity in the Cognitive Science Society. Cognitive Science 22: 107–30
  60. Searle J R 1990 Consciousness, explanatory inversion, and cognitive science. Behavioral and Brain Sciences 13: 585–95
  61. Shannon C 1948 A mathematical theory of communication. Bell System Technical Journal 27: 379–423, 623–56
  62. Simon H A 1955 A behavioral model of rational choice. Quarterly Journal of Economics 69: 99–118
  63. Sloan Foundation 1978 Cognitive Science 1978. Report of the State of the Art Committee. Alfred P. Sloan Foundation, New York
  64. Smolensky P 1988 On the proper treatment of connectionism. Behavioral and Brain Sciences 11: 1–23
  65. Sperber D, Hirschfeld L 1999 Culture, cognition and evolution. In: Wilson R A, Keil F C (eds.) The MIT Encyclopedia of the Cognitive Sciences. MIT Press, Cambridge, MA, pp. cxi–cxxxii
  66. Strube G, Janetzko D, Knauff M 1996 Cooperative construction of expert knowledge: the case of knowledge engineering. In: Baltes P B, Staudinger U M (eds.) Interactive Minds. Cambridge University Press, Cambridge, UK, pp. 366–93
  67. Suchman L A 1987 Plans and Situated Actions: the Problem of Human–Machine Communication. Cambridge University Press, New York
  68. Tomasello M, Kruger A C, Ratner H H 1993 Cultural learning. Behavioral and Brain Sciences 16: 495–511
  69. Turing A 1936 On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society (Series 2) 42: 230–65; 43: 544–46 (addendum)
  70. Vera A H, Simon H A 1993 Situated action: a symbolic interpretation. Cognitive Science 17: 7–48
  71. Wiener N 1948 Cybernetics. Wiley, New York
  72. Wilson R A, Keil F C (eds.) 1999 The MIT Encyclopedia of the Cognitive Sciences. MIT Press, Cambridge, MA
  73. Winograd T 1972 Understanding natural language. Cognitive Psychology 3(1)
  74. Wooldridge M, Jennings N R 1995 Intelligent agents: theory and practice. Knowledge Engineering Review 10: 115–52
  75. Zhang J J 1997 The nature of external representations in problem solving. Cognitive Science 21: 179–217
