Philosophy of Cognitive Science Research Paper

Cognitive science emerged as a distinct field in the mid-1950s under the influence of two important developments. First was the construction of digital computers and their capacity to perform operations that apparently require intelligent thought. Second was Noam Chomsky’s idea that linguistic capacities involve knowledge of, and conformity to, unconscious grammatical rules, and that at least some of this knowledge is innate. The first development suggested that the mind’s operations could be understood on the model of a computer implementing an internally represented program (Turing 1950). The second suggested that mental capacities are best described in intentional terms like ‘knowledge,’ ‘belief,’ and ‘following a rule.’ Both developments involved a rejection of behaviorism (Skinner 1953). In its extreme forms behaviorism held that human (and other animal) mental capacities are best understood in terms of causal connections between stimuli and responses, and that learning is best understood as the modification of such connections under reinforcement. Behaviorists tended to reject questions about the internal structures and processes that mediate stimuli and responses, and they viewed intentional concepts as meaningless and/or unscientific.

Some mental states are attributed to a person, e.g., beliefs, memories, perceptions, and desires, while others are properly attributed only to subpersonal parts or faculties, e.g., the language faculty and the visual system. Both kinds of mental states, and the processes involving them, are intentional. ‘Intentionality’ refers to the fact that mental states represent. For example, the thought that New York is tropical represents New York as being tropical. It has long been observed that a thought can represent what does not exist (e.g., thoughts about Santa Claus) and misrepresent what does exist. Intentional mental states possess semantic properties: they refer and are evaluable as true or false. The thought that New York is tropical refers to New York and is false. Semantic features are shared by natural language expressions and by other kinds of representations (e.g., maps and pictures). For example, the belief that New York is tropical and the sentence ‘New York is tropical’ possess the same semantic content. It is widely held that the semantic features of nonmental items are ultimately derived from intentional mental states. Many theories in cognitive science assume that mental processes consist of mental states that are connected by virtue of their semantic contents. Such processes include ones that are ‘rational’ in that they tend to produce true beliefs or promote survival. Mental information processing consists of sequences of rationally related mental states.

Most cognitive scientists assume that psychological states are part of the natural physical order. However, it is usually thought that cognitive science will, like other special sciences, employ its own taxonomy of states, and in particular one that differs from that of neurophysiology. The received view is that psychological states are certain kinds of functional states. A functional state or property is one that is individuated in terms of a causal role. For example, what makes something a carburetor is its causal role in taking gas and air as inputs and producing a mixture of the two as output. In the case of psychological states the causal role involves causal connections to other psychological states, to stimuli, and to behavior. An important feature of functionalist accounts is that physically different kinds of structures can realize a given functional state by satisfying its functional specification. Thus carburetors can be made out of metal or plastic, and minds out of brains or computers.
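
The point about multiple realizability can be made vivid with a small sketch. The following Python fragment is purely illustrative (the class names are invented, not from the source): the carburetor role is specified as an interface, and physically different implementations satisfy it equally well.

```python
# A minimal sketch of multiple realizability, using the text's carburetor
# example. All names here are illustrative.
from typing import Protocol


class Carburetor(Protocol):
    """The functional role: take gas and air in, return a mixture."""
    def mix(self, gas: float, air: float) -> tuple: ...


class MetalCarburetor:
    def mix(self, gas: float, air: float) -> tuple:
        return (gas, air)          # one physical realization of the role


class PlasticCarburetor:
    def mix(self, gas: float, air: float) -> tuple:
        return (gas, air)          # a physically different realization


def run_engine(c: Carburetor) -> tuple:
    # The rest of the system is sensitive only to the causal role,
    # not to what realizes it -- the functionalist point.
    return c.mix(1.0, 14.7)


print(run_engine(MetalCarburetor()))
print(run_engine(PlasticCarburetor()))
```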

There are three big issues in the philosophy of mind. How is rational thinking possible? How can the mind represent the world? What is consciousness? Cognitive science has much to say about the first and has given new twists to the second and third. Below are discussions of these and related issues.

(a) Is cognitive science possible? There are philosophical traditions which claim that there can be no science of the mind. The main contemporary objection revolves around the idea that intentionality and rationality are normative categories and that this disqualifies them from being the subject of scientific laws and causal explanations. Ruminations along these lines can be found in Wittgenstein (1953) and Ryle (1949), and were later taken up by philosophers as diverse as Kripke (1982), Davidson (1980), McDowell (1994), and Quine (1960). Davidson’s arguments have been especially influential among philosophers. He argues (in essays collected in Davidson 1980) that in attributing beliefs, thoughts, desires, and other propositional attitudes to one another we are engaged in a project of interpretation. Further, he claims that interpretation is guided by a holistic ‘principle of charity.’ This principle dictates that when attributing mental states we ought, other things being equal, to maximize the subject’s rationality. Attributions guided by charity are holistic, since whether or not a belief or preference is rational depends on other beliefs and preferences. The normativity of rationality makes intentional states normative as well. Davidson claims that interpretation is so different from the way physical properties are assigned as to make it impossible for there to be any strict laws connecting intentional psychological states with physical states or with each other. But while Davidson’s (and other such) arguments have been influential, it is far from clear that they are sound, or even whether, if sound, they really would undermine the employment of intentional vocabulary in scientific laws and explanations. In any case, cognitive scientists persist in employing intentional concepts in formulating explanations and theories. For example, there is a lively area of cognitive science concerning how people engage in everyday logical and statistical inference that proposes hypotheses concerning causes of errors, selection of conclusions, and so forth (Johnson-Laird 1983). The mental states and processes studied are, of course, intentional ones. The issues of whether the mind (and which aspects of mentality) can be scientifically studied, and whether there are laws involving intentional states, are more likely to be resolved by the success (or lack thereof) of the theories produced in the cognitive sciences than by philosophical argument.

Most cognitive scientists think that a science of the mind, like other sciences, contains laws which explain psychological capacities and processes. The question arises of how such laws are related to biological laws. According to Fodor (1975), psychological laws are neurophysiologically implemented. On this view psychology is autonomous, in that its taxonomy and laws are specific to it, but every instance of a psychological law is also an instance of a more basic law or causal mechanism. If this is right then various kinds of mechanisms can implement psychological laws, so psychology is not restricted to humans but can apply to Martians and computers. Cognitive scientists disagree about exactly how much can be learned about psychology by studying neurophysiology. ‘Top down’ theorists (Fodor 1981) think that very little beyond a crude neural geography of mental capacities can be learned from the neural sciences. At the other extreme, ‘bottom up’ theorists think quite a lot can be learned about the mind, and perhaps even that neurophysiology should replace cognitive psychology.

(b) What is the status of folk psychology? The mental states people attribute to one another (and sometimes to animals) include belief, knowledge, memory, desire, perception, and emotion. Normal humans are capable of attributing mental states with reliability and of employing them in explaining other mental states and behaviors. Such explanations seem to conform to general principles like ‘if a person wants q and believes that doing A is the only way to get q, then unless she has a reason not to do A she will intend to do A,’ and also to somewhat more specific principles like ‘if a person sees a friend coming then, unless she has some special reason not to, she will greet him.’ The collection of such principles has come to be known as ‘folk psychology.’ One issue concerning folk psychology is whether it is approximately true. Fodor (1981, 1987) argues that it is, and that it provides the appropriate starting place for a deeper theory of the mind. In contrast, Churchland (1995) argues that folk psychological explanations are often mistaken or vacuous, and predicts that a developed science of the mind will abandon it in much the same way that physics abandoned folk physics. At the current stage of development many cognitive theories do employ folk psychological concepts (or refinements of them) and produce theories that sometimes explain folk psychological regularities.

(c) Are there unconscious and subpersonal mental states? Folk psychology recognizes that there are unconscious mental states and processes. For example, most of a person’s memories are not present to consciousness, and the mental processes a person engages in when driving a car might well be unconscious. Psychoanalytic theory, which to some extent has been appropriated by folk psychology, recognizes unconscious thoughts and desires that are not normally accessible to consciousness, at least not without therapeutic assistance. Theories in cognitive science go quite a bit further, often positing intentional states that are in principle inaccessible to consciousness and are properly attributed only to parts of the mind. For example, some psycholinguistic theories posit a language module that cognizes grammatical rules formulated in terms of concepts that are unknown to the person but that in some way guide the comprehension and production of her speech.

Some functional structures engage in mental processes relatively independently of others. Such structures are said to be cognitively encapsulated modules. The mental processing engaged in by many modules is not accessible to consciousness, although the end product of the processing may be. For example, there is evidence that the mind contains a ‘face detecting’ module that is able to determine whether or not a person has seen a particular face previously. The picture of the mind that emerges is one in which the mind consists of many modules, each dedicated to a particular task (e.g., face recognition, speech production, character trait attribution), together with a general reasoner which receives information from the modules and whose operations are (partly) consciously accessible. There are disagreements concerning the extent to which cognitive capacities are modularized. Some (e.g., Pinker 1997) seem to think that the mind is massively modularized, while others (Fodor 1983) ascribe much more importance to general reasoning capacities.

(d) What are propositional attitudes? Belief, knowledge, desire, and other folk psychological mental states are said to be propositional attitudes. The reason is the widely held view that the ‘that’ clauses used in attributing such states refer to propositions. There are various views about the nature of propositions, but they all have in common that propositions are the part of the meaning of a sentence that is, or determines, the conditions under which the sentence is true. The question arises of exactly what it is to have a propositional attitude, for example, the belief that New York is tropical. This question is often seen (e.g., Fodor 1981) as having two parts: (i) what is it to have a belief? (ii) what is it for a belief to express a particular proposition? The most widely accepted answer to the first question is that a belief is a functional state, i.e., a state with a certain causal role. One approach to the second question is that a belief expresses the proposition that p in virtue of involving a representation whose content is that of the intentional state. So to believe that New York is tropical is to be in a functional state (believing) that involves a representation with the intentional content that New York is tropical. This provides a nice explanation of why beliefs are truth evaluable, are about things, can be involved in inferences, and so on: these are features of the representations that beliefs contain. The view that the mind contains representations is usually called ‘the representational theory of mind,’ or RTM. It should be cautioned that RTM is restricted to explicit beliefs. Implicit beliefs (e.g., the belief you had prior to reading this sentence that no giraffe is bigger than the Empire State Building) are dispositions to form explicit beliefs under suitable circumstances. This account of propositional attitudes is the beginning of an account of how mental states can have intentional content. It has its dissenters. Some philosophers (Dennett 1981) suggest that there may be no internal representation with the same content as the ‘that’ clause we use in attributing a belief, but that it may still be appropriate to attribute the belief because of the person’s behavioral (including linguistic) dispositions, which may themselves involve states that represent at the subpersonal level.

(e) What is the nature of mental representations? Cognitive science is up to its neck (and beyond) in representations. Some representations are involved in propositional attitudes, others in relatively high-level unconscious cognitive systems like the language faculty, and others in lower level systems that may implement the higher level ones. There are various views about the nature of these representations. One widely held view (Fodor 1975) is that most mental representations, at least those involved in propositional attitudes, belong to an internal language of thought, ‘Mentalese.’ Mentalese contains basic expressions (predicates, names, connectives, etc.) and rules of combination. The rules allow for the construction of a potential infinity of complex expressions from a finite basic vocabulary. There are important differences between a natural language like English and Mentalese. The most important difference (as will be further discussed in item (g)) is that Mentalese syntax is logical form. Whereas English can be used for communication, Mentalese is used for thinking. Understanding a natural language can be accounted for in terms of processes that ‘translate’ natural language into Mentalese. But understanding Mentalese cannot be understood in the same way, on pain of regress. Mentalese is not literally ‘understood’; rather, it is the language in which thinking and other mental processes take place. Whereas natural languages must be learned, many proponents of Mentalese think that it is in some sense innate.
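
The claim that a finite basic vocabulary plus recursive rules of combination yields a potential infinity of complex expressions can be made concrete with a toy sketch. The following Python fragment is illustrative only; the data types stand in for Mentalese names, predication, and a connective.

```python
# A toy 'language of thought': finitely many basic expressions plus
# recursive rules of combination generate unboundedly many complex ones.
# All names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Name:                 # a basic expression: a name
    symbol: str


@dataclass(frozen=True)
class Predication:          # rule: apply a predicate to a name
    predicate: str
    subject: Name


@dataclass(frozen=True)
class Conjunction:          # rule: join any two expressions
    left: object
    right: object


ny = Name("NewYork")
p = Predication("Tropical", ny)        # TROPICAL(NEW-YORK)
# The rules apply to their own outputs, so there is no longest expression:
q = Conjunction(p, Conjunction(p, p))
print(q)
```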

(f) What is thinking? Perception and rational thinking involve mental states that are semantically related. For example, a person sees McGwire swing at a ball and the ball go over the centerfield fence. He thinks that it is home run number 62 and then that McGwire has broken Maris’ home run record. On the representational theory of mind the process running from the visual perception to the thought that the home run record has been broken consists of many representations, e.g., a visual representation of shapes and colors, a perception that the ball is going over the fence, and so on. These are not arbitrarily related but are related to each other by virtue of their semantic features. These semantic features involve relations to things external to the mind, e.g., to McGwire, to the property of being a home run, and so on. How does the mind ‘know’ to go from a representation that refers to one thing to a representation that refers to a related thing, if it has access only to the representations and not to their references? A closely related question is how the mind can engage in reasoning which leads it from some true representations (say, about light striking the retinas) to other true representations (say, about the scene in front of the eyes). The computational theory of mind suggests answers to these questions. It says that mental processes like thinking and perceiving are computations on mental representations. A computer is a device which is able to follow a program for manipulating representations on the basis of their syntax. So any relation which can be reduced to or encoded in syntactic relations can, in principle, be computed. Logical relations, e.g., logical implication, can be reduced to syntactic relations among sentences whose syntactic forms are logical forms (as is the case for Mentalese); computation can then account for logical inference. By encoding semantic features in syntax, the computer can manipulate representations in ways that respect their semantics. Researchers in computer science have shown how many tasks that apparently require intelligence can be reduced to computations.
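
How purely syntactic manipulation can respect semantic relations is easy to illustrate. The following Python sketch (the representations and the single rule are invented for illustration) applies modus ponens by matching only on the shape of the representations, never their meanings, yet every derived item is true if the premises are.

```python
# A toy syntax-driven inference engine: the rule inspects only the *form*
# of the representations (tuples beginning with 'if'), yet truth is
# preserved. All representations are illustrative.
def closure(premises):
    """Repeatedly apply modus ponens: from p and ('if', p, q), add q."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if isinstance(s, tuple) and s[0] == 'if' \
               and s[1] in derived and s[2] not in derived:
                derived.add(s[2])
                changed = True
    return derived


premises = {
    'ball_over_fence',
    ('if', 'ball_over_fence', 'home_run_62'),
    ('if', 'home_run_62', 'record_broken'),
}
print(closure(premises))   # derives 'home_run_62' and 'record_broken'
```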

According to the computational theory of mind (CTM), the mind is a computer and mental processes consist of computations on Mentalese representations. In this way the mind is able to manipulate representations that are semantically related. Of course there is a big difference between a computer and a mind. The computer is programmed by a human programmer who supplies interpretations for the symbols it manipulates. If CTM is correct, the ‘program’ the mind implements is a product of the structure of the brain, and that, presumably, is at least partly a product of evolution. The interpretation of the symbols that the mind manipulates is not provided by a programmer. Exactly what determines the semantic features of mental representations is discussed in item (i).

Not every cognitive scientist is impressed by CTM as an account of mental processes. Some go along with the idea that mental processes involve a kind of computation but conceive of computation very differently from the classical CTM account: they suggest that the mind has a connectionist architecture. This view is discussed in item (g). The most famous philosophical objection to CTM is due to John Searle (1980). Suppose that understanding a language, e.g., Chinese, involves following a program. Searle argues that this cannot be correct, since a person who knows no Chinese could nevertheless follow the program’s instructions just as a computer does. The program is written so that questions in Chinese are input and answers in Chinese are output. Searle observes that although the man implements the program, he clearly has no clue as to the meanings of the Chinese symbols. There have been many replies to this objection. Perhaps the most convincing from proponents of CTM is that implementing a program is necessary but not sufficient for language understanding: the language must also be translated into the person’s Mentalese. Of course this still leaves open the question of what it is for a symbol of Mentalese to represent, which is discussed in item (i).

A different worry about CTM is whether it can account for certain kinds of reasoning, specifically inductive inference. Inductive reasoning involves considering various hypotheses and coming up with the one that is best supported by the evidence. We employ it in producing explanations, identifying causes, and so forth. For example, Sherlock Holmes solved a case when he realized that the fact that the dog did not bark was evidence that the intruder was known to the dog. The worry, which is sometimes called ‘the frame problem’ (Pylyshyn 1987), is that no computational program can realistically perform this kind of task. In inductive reasoning almost any bit of information may be relevant. We seem to have the ability to survey a great deal of what we know and come up with what is evidentially relevant. But a program that operated by surveying a vast number of representations, evaluating each for relevance, would seem to be completely impractical: there are too many computations to be performed. One response is to think that CTM may provide good accounts of the information processing that occurs in mental modules but that it is not very good at accounting for the mental capacities of a general reasoner. Thus those who like CTM are attracted to the view that the mind is massively modular.
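
The scale of the difficulty can be indicated with a back-of-the-envelope calculation (the numbers are illustrative, not from the source): if any combination of background beliefs might bear on a hypothesis, the number of candidate evidence sets grows exponentially with the size of the belief base.

```python
# An illustrative calculation of why exhaustive relevance-checking is
# impractical: subsets of a belief base grow exponentially.
beliefs = 100                # a tiny belief base by human standards
subsets = 2 ** beliefs       # candidate combinations to survey
print(f"{subsets:.3e}")      # about 1.268e+30 combinations
```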

(g) Classical or connectionist architecture? On the classical account of computation discussed in item (f), the architecture of the mind is that of a classical computer (or a system of such computers) and mental processes are operations on language-like representations in virtue of their syntactic forms. There is an alternative account of cognitive structure and computation called ‘connectionist architecture.’ Roughly, a connectionist system consists of a network of nodes joined together in a pattern of connections. Each node can be activated (or activated to a certain degree) and can receive signals from, and send signals to, the nodes it is connected to. Whether or not a signal travels via a connection from node A to node B depends on the weight of the connection. Some nodes are activated by external stimuli (input nodes) and others send signals outside the network (output nodes). At any time the state of the network is determined by the weights of the connections and the activations of the nodes. Signals that activate the input nodes are propagated throughout the system to the output nodes. Thus connectionist systems can be thought of as computing certain outputs given certain inputs. Further, a connectionist network can be ‘trained’ by altering the connection weights depending on whether or not a given output is ‘appropriate’ for a given input. Connectionist networks have been constructed that perform a number of tasks by such training, e.g., recognizing letters of the alphabet, ‘reading’ text, and recognizing faces.
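
A minimal instance of this kind of training can be sketched in a few lines. The following Python fragment is an illustrative single-layer network (not any particular model from the literature); it adjusts connection weights until the output is ‘appropriate’ for each input, here learning inclusive ‘or.’

```python
# A minimal connectionist sketch: two input nodes, one output node,
# weighted connections, and training by weight adjustment (the classic
# perceptron rule). The task, logical OR, is illustrative.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                    # the 'appropriate' outputs
w1 = w2 = bias = 0.0
rate = 0.1

for epoch in range(20):                   # training: adjust the weights
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = t - out                   # was the output appropriate?
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

for x1, x2 in inputs:                     # the trained network computes OR
    print((x1, x2), 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)
```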

Connectionist cognitive architecture is apparently quite different from a classical computer. There is no ‘executive’ that is following a program. Computations are performed not on sentences but on the totality of connections among nodes. Although the state of a connectionist system can be thought of as a representation, say as representing that the cat is on the mat, unlike a sentence of Mentalese it need not contain any parts that correspond to ‘the cat,’ ‘is on,’ and ‘the mat.’ Proponents of connectionism think that it provides a model of mental states and processes that is more plausible than classical accounts. One reason is that some find it difficult to believe in Mentalese. Another is that it seems natural for connectionist systems to implement vague concepts, since a connectionist system can be trained to respond in a graded manner. The holism of connectionist representation is also appealing to some, and there are suggestions that its holistic features may help with the frame problem mentioned in item (f). Finally, connectionist networks are reminiscent of assemblies of neurons and so strike some as biologically realistic. Proponents of classicism point out that connectionism is more than reminiscent of behaviorism. Like behaviorism it is an associationist psychology that construes mental processes in terms of establishing and modifying associations. Although it is in a sense holistic, it is far from clear how that helps to account for inductive inference. In fact, critics of connectionism set up a dilemma (Fodor and Pylyshyn 1988, Fodor and McLaughlin 1990): either the connectionist architecture implements a classical architecture, in which case it is not really an alternative to classicism, or it fails to account for essential features of thought. These features are productivity and systematicity. Productivity is the fact that once a thinker has basic concepts she is able to produce a potential infinity of novel thoughts involving those concepts. Systematicity is the feature that any thinker who can think a thought can think related thoughts that apparently have the same components. For example, if one can think that Jack loves Jill then one can also think that Jill loves Jack. These features are easily accounted for by classicism, since the thoughts correspond to syntactically structured representations. But a connectionist system can be capable of being in a state that represents that Jack loves Jill without being capable of being in a state that represents that Jill loves Jack, since it need not have parts that correspond to ‘Jack,’ ‘loves,’ and ‘Jill.’ If the connectionist system does have such parts then it is merely implementing a classical system.

(h) What are concepts? Concepts play an important role in the cognitive sciences. Thoughts (beliefs, memories, desires, etc.) are composed of concepts, and so what mental processes a thinker can engage in depends on what concepts she possesses. Developmental psychology is interested in how people acquire concepts and in whether some concepts are innate. There are various views about the nature of concepts. Advocates of RTM think of concept tokens as representations, but there is a wide diversity of views on what makes a particular representation a particular concept, say the concept horse. One view is that a concept is something like a definition. For example, the concept horse may be the definition is a large land mammal that has been domesticated for riding. To possess the concept horse is to know the definition. This view has come under much deserved criticism. One problem is that not all concepts can have definitions without circularity. Even more serious is that most words (and the concepts they are associated with) do not seem to have definitions at all. There are horses that are not large, and there are large animals that have been domesticated for riding (e.g., elephants) that are not horses. Another view is that concepts are prototypes. A prototype consists of a core exemplar (a representation of something that is a paradigm example of the concept) together with a similarity metric that determines how close something is to the paradigm. For example, the concept bird might consist of the representation robin and a metric that makes eagles pretty good birds and penguins pretty bad ones. But while there is evidence that thinkers do judge instances of a concept to be better or worse examples, the account faces some of the same difficulties as the definition account. A somewhat more general approach takes the inferences that a thinker is disposed to make concerning thoughts containing a concept to individuate the concept. Such ‘conceptual role’ theories of concepts face a dilemma: either all of the inferences involving the concept are individuative of it (holism) or only some are (molecularism). If the first, then as our beliefs change so do our concepts, and it is unlikely that two people ever share the same concept. If the second, then the question arises of what makes some inferences concept constituting. There are arguments in the philosophical literature (Quine 1960, Fodor and LePore 1992) that there is no principled distinction between the two, but also some proposals (Peacocke 1992) for how to make the distinction. Finally, there is the view that concepts are expressions in Mentalese that are individuated by their syntax and by their reference (Fodor 1998). This view allows thinkers with very different beliefs to share the same concept. But it also allows for the bizarre possibility of someone possessing the concept horse while believing that horses are edible fruits.
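
The prototype idea, a stored exemplar plus a similarity metric, is easy to make concrete. In the following Python sketch the feature vectors and the metric are invented for illustration; no claim is being made about the actual psychology of the bird concept.

```python
# A sketch of the prototype view of concepts: a core exemplar plus a
# similarity metric. The features (can_fly, sings, small_size) and all
# numbers are illustrative.
ROBIN = (1.0, 1.0, 0.8)        # the core exemplar for BIRD


def similarity(instance, prototype=ROBIN):
    """Graded closeness to the paradigm, between 0 and 1."""
    distance = sum(abs(a - b) for a, b in zip(instance, prototype))
    return 1 - distance / len(prototype)


eagle = (1.0, 0.0, 0.2)        # a pretty good bird
penguin = (0.0, 0.0, 0.3)      # a pretty bad one
print(similarity(eagle), similarity(penguin))
```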

Cognitive theories that posit innate knowledge are also committed to the innateness of the concepts that constitute that knowledge. Some cognitive scientists go much further and claim that many of our concepts are innate. One reason for this is the difficulty of accounting for how concepts can be learned. As Fodor (1980) observed, they cannot be learned by testing various hypotheses about them, since the formulation of the hypotheses already requires possessing the concept. At one time Fodor thought that this line of argument showed that even the concept carburetor is innate. Fodor has since moderated his view, but there is no consensus concerning how concepts are acquired.

(i) How does the mind represent the world? What makes a component of a mental state a representation is that it possesses semantic properties, e.g., it refers, has a truth-value, and so on. But what exactly determines that a given representation possesses a certain semantic property? The Cartesian tradition generally held that intentionality is a distinct and basic feature of mental substance. But most philosophers of cognitive science who think that there are mental representations think that whatever determines semantic features has to be within the realm of natural science. On the view, once widely held in philosophy, that concepts are images, the answer to this question is that resemblance makes for representation. But even cognitive scientists who posit mental images do not think that these literally resemble their references. The two views that are currently most widely advocated are informational semantics and teleological semantics, or some combination of the two (Millikan 1984, Fodor 1987, 1990, Loewer 1998). Simplified, informational semantics says that the fact that a certain state carries certain information under certain circumstances, or is reliably caused by certain properties under certain circumstances, determines its semantic properties. Teleological semantics says that the semantic properties of a representation are determined by its biological function. A simplified combined view is that the function of carrying certain information under certain circumstances determines a representation’s semantic properties. For example, it is not implausible that there is a certain system in a frog’s brain with the function of being in a particular state R if and only if a fly is nearby when circumstances are normal (e.g., in a pond, good light, etc.). If the circumstances are normal then an occurrence of R in the frog’s brain carries the information (to other parts of the frog’s brain) that a fly is present. This kind of account fits very nicely with the view that the mind is a kind of information processor. But whether it can be developed so as to provide a plausible account of the semantics of the mental representations involved in human thought is a big and very open question.
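
The covariation idea can be illustrated with a toy simulation. In the following Python sketch all probabilities are invented for illustration: when ‘normal circumstances’ obtain, tokenings of the state R carry the information that a fly is present; outside them the correlation, and with it the information, degrades.

```python
# A toy simulation of informational semantics: state R is reliably caused
# by flies under normal circumstances, so a tokening of R then carries
# the information that a fly is nearby. All numbers are illustrative.
import random


def trial(normal: bool):
    fly = random.random() < 0.3          # is a fly nearby?
    if normal:
        r = fly                          # reliable covariation
    else:
        r = random.random() < 0.5        # mere noise
    return fly, r


def p_fly_given_r(normal: bool, n: int = 10000):
    samples = [trial(normal) for _ in range(n)]
    tokenings = [fly for fly, r in samples if r]
    return sum(tokenings) / len(tokenings)


print(p_fly_given_r(normal=True))        # ~1.0: R indicates a fly
print(p_fly_given_r(normal=False))       # ~0.3: little information
```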

Bibliography:

  1. Block N 1995 On a confusion about a function of consciousness. Behavioral and Brain Sciences 18: 227–47
  2. Chomsky N 1957 Syntactic Structures. Mouton, The Hague
  3. Chomsky N 1959 A Review of Skinner’s Verbal Behavior. Language 35: 26–58
  4. Churchland P M 1995 The Engine of Reason, The Seat of the Soul. MIT Press, Cambridge, MA
  5. Davidson D 1980 Essays on Actions and Events. Oxford University Press, Oxford, UK
  6. Dennett D 1981 Brainstorms. MIT Press, Cambridge, MA
  7. Dennett D 1991 Consciousness Explained. Little, Brown, Boston, MA
  8. Descartes R 1641/1970 Meditations on first philosophy. In: Haldane E S, Ross G R T (trans.) The Philosophical Works of Descartes. Cambridge University Press, Cambridge, UK, Vol. 1, pp. 131–200
  9. Fodor J A 1975 The Language of Thought. Crowell, New York
  10. Fodor J A 1981 RePresentations: Philosophical Essays on the Foundations of Cognitive Science. MIT Press, Cambridge, MA
  11. Fodor J A 1983 The Modularity of Mind. MIT Press, Cambridge, MA
  12. Fodor J A 1987 Psychosemantics. MIT Press, Cambridge, MA
  13. Fodor J A 1990 A Theory of Content: And Other Essays. MIT Press, Cambridge, MA
  14. Fodor J A 1998 Concepts: Where Cognitive Science Went Wrong. Oxford University Press, Oxford, UK
  15. Fodor J A, LePore E 1992 Holism: A Shopper’s Guide. Blackwell, Oxford, UK
  16. Fodor J A, McLaughlin B 1990 Connectionism and the problem of systematicity. Cognition 35(2): 183–204
  17. Fodor J A, Pylyshyn Z 1988 Connectionism and cognitive architecture: a critical analysis. In: Pinker S, Mehler J (eds.) Connections and Symbols. MIT Press, Cambridge, MA
  18. Johnson-Laird P 1983 Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge University Press, Cambridge, UK
  19. Kosslyn S M 1980 Image and Mind. Harvard University Press, Cambridge, MA
  20. Kripke S A 1982 Wittgenstein on Rules and Private Language. Harvard University Press, Cambridge, MA
  21. Loewer B 1998 Guide to naturalizing semantics. In: Hale B, Wright C (eds.) A Companion to the Philosophy of Language. Blackwell, Oxford, UK
  22. Loewer B, Rey G (eds.) 1991 Meaning in Mind: Fodor and His Critics. Blackwell, Oxford, UK
  23. Marr D 1982 Vision. W H Freeman, San Francisco, CA
  24. Millikan R 1984 Language, Thought, and Other Biological Categories. MIT Press, Cambridge, MA
  25. McDowell J 1994 Mind and World. Harvard University Press, Cambridge, MA
  26. Nagel T 1979 Mortal Questions. Cambridge University Press, Cambridge, UK
  27. Peacocke C 1992 A Study of Concepts. MIT Press, Cambridge, MA
  28. Pinker S 1997 How the Mind Works. Norton, New York
  29. Pylyshyn Z W 1984 Computation and Cognition. MIT Press, Cambridge, MA
  30. Pylyshyn Z W 1987 The Robot’s Dilemma: The Frame Problem in Artificial Intelligence. Ablex, Norwood, NJ
  31. Rey G 1997 Contemporary Philosophy of Mind. Blackwell, Cambridge, MA
  32. Ryle G 1949 The Concept of Mind. Hutchinson’s University Library, London
  33. Quine W 1960 Word and Object. Technology Press of MIT, Cambridge, MA
  34. Searle J 1980 Minds, brains, and programs. Behavioral and Brain Sciences 3: 417–57
  35. Searle J R 1992 The Rediscovery of the Mind. MIT Press, Cambridge, MA
  36. Skinner B F 1953 Science and Human Behavior. Macmillan, New York
  37. Smolensky P 1988 On the proper treatment of connectionism. Behavioral and Brain Sciences 11: 1–74
  38. Turing A 1950 Computing machinery and intelligence. Mind 59: 433–60
  39. Wittgenstein L 1953 Philosophical Investigations [trans. Anscombe G E M]. Macmillan, New York