Biological Psychology of Language Research Paper

Language is a means of communicating information from one individual to others. It is a code that human societies have developed for the expression of meaning. Across all societies, the code involves words, as well as rules for linking words together (syntax). Words are arbitrary collections of sounds, produced by articulatory gestures (or manual signing, or writing) that convey particular meanings; a typical speaker acquires a vocabulary of at least 30,000 words in his or her native language. Sentences are combinations of words, governed by the syntax of the language, that encode more complex and, in many cases, novel messages. Comprehension entails recovery of the meaning intended by the speaker (or signer, or writer); production entails translating the message the speaker (or signer, or writer) wishes to convey into a series of words, constrained by the syntax of the language. Most children acquire the spoken words and syntax of their language community within a short period of time, and seemingly effortlessly. Once acquired, language comprehension and production processes appear automatic; people cannot help but understand what they hear in their native language, and they usually produce coherent sentences with little apparent effort, even when they are engaged in other tasks.

Language is a uniquely human capacity. Although there is evidence of learned responses to specific calls in primates, and of brain asymmetries in chimpanzees that may foreshadow the asymmetrical representation of language in the human brain (Gannon, Holloway, Broadfield, & Braun, 1998), other animals do not exhibit communicative behavior that compares with the richness, structure, and combinatorial capacity of human language. Hence the investigation of language function and of language-brain relationships requires the use of human subjects. In recent years, the development of methods for monitoring brain activity as people engage in cognitive tasks has provided a means of studying language-brain relationships in normal individuals. These techniques complement the study of disruptions to language function caused by brain damage—the aphasias—which for some time served as the primary means of investigating the neural substrates of language. The breakdown patterns observed in aphasia will be the focus of much of this research paper. We will also review evidence from studies that examine language activity in the normal brain.

We begin with an overview of relevant brain anatomy, historical treatment of aphasic disorders, and techniques currently being used to investigate brain-behavior relationships. Following this, we present a more detailed discussion of three specific content areas: semantics (the representation of meaning), spoken language comprehension, and spoken language production.

Brain and Language: A Brief Introduction

Brain Anatomy

The human brain is a very complex structure composed of billions of nerve cells (neurons) and their interconnections, which occur at synapses, or points of contact between one neuron and another. The vast majority of these synapses are formed between axons, which may travel short or long distances between neurons, and processes called dendrites, which extend from neuronal cell bodies. Such connections govern the patterns of activation among neurons and, thereby, the physical and mental activity of the organism. While the connectivity patterns are to some extent genetically determined, a major characteristic of the nervous system is the plasticity of neural connections. Organisms must learn to modify their behavior as a function of their experience; the child’s acquisition of the language s/he hears is an especially relevant example.

Insofar as language capability is concerned, the key structures reside in the cerebral cortex—the mantle of cells six layers deep that is the topmost structure of the brain. This tissue has an undulating appearance, composed of hills (gyri) and valleys (fissures or sulci) between them; this pattern reflects the folding of the cortex to fit within the limited space in the skull. In the majority of individuals, whether right- or left-handed, it is the left hemisphere of the cerebral cortex that has primary responsibility for language function.

Each hemisphere is divided into four lobes: the frontal, parietal, occipital, and temporal lobes. The occipital lobe has primary responsibility for visual function, which extends into neighboring areas of the temporal and parietal lobes. The frontal lobe is concerned with motor programming, including speech, as well as high-level cognitive functions. For example, sequential behavior is the province of prefrontal cortex, situated anterior to the frontal motor and premotor areas, which occupy the more posterior portions of the frontal lobe. Auditory processing is performed by areas of the temporal lobe, much of which is also concerned with language. There is evidence that inferior portions of the temporal lobe support semantic functions, whereas structures buried in the medial portions of the temporal lobe (the hippocampus, the amygdala) are concerned with mechanisms of memory and emotion. The parietal lobe is concerned with tactile and other sensory experiences of the body, and has an important role in spatial and attentional functions; it is involved in language, as well.

In the early years of the twentieth century, a neuroanatomist named Brodmann examined cortical tissue under the microscope and assigned numbers to areas that appeared different both with respect to types of neurons and their densities. A map of Brodmann’s areas can be found in Figure 21.1. To a large extent, these histological differences are associated with differences in function. The Brodmann numerical scheme is still in use today; for example, functional imaging studies of the human brain employ these numerical designations. Some areas have also been given names. For example, the deep fissure separating the temporal from the frontal and parietal lobes is known as the lateral or Sylvian fissure; areas bordering this fissure in the left hemisphere are essential to language, and the language zone of the left hemisphere is sometimes referred to as the perisylvian area. Another map containing names of regions that are relevant to a discussion of language is presented in Figure 21.2.

Although it has long been known that there are functional differences between the two cerebral hemispheres, it was not until the 1960s that corresponding differences in structure were identified. For example, Geschwind and Levitsky (1968) found that an area called the planum temporale, at the juncture of the temporal and parietal lobes, is generally larger in the left hemisphere than the right; this area is involved in language. However, it has recently been shown that chimpanzee brains contain the same asymmetry (Gannon et al., 1998), a finding that undermines what appeared to be a strong relationship between structural enlargement and functional capability. It may be, however, that this asymmetry foreshadows the dedication of this area to language function.

The cortical role in language depends critically on the receipt of information from lower brain centers, including the thalamus. The thalamus is an egg-shaped structure at the top of the brain stem, divided into regions called nuclei. Thalamic nuclei send projections to areas of the cortex and receive inputs from these areas. For example, auditory input from the medial geniculate nucleus of the thalamus projects to the temporal lobe; fibers from the lateral geniculate nucleus carry visual input to the occipital lobe. An extensive network of white matter, consisting of axons, underlies the six cell layers of the cortex, connecting cortical regions as well as carrying information to and from subcortical regions. The two hemispheres of the cerebral cortex are connected by a massive fiber tract, the corpus callosum; severing this connection is known as the split-brain operation, which is performed to relieve the spread of epileptic discharges that fail to respond to pharmacological intervention (e.g., Gazzaniga, 1983). There is evidence that other structures, such as the basal ganglia, which are located beneath the cerebral cortex, also contribute to language function (e.g., A. R. Damasio, Damasio, Rizzo, Varney, & Gersch, 1982; Naeser et al., 1982). The basal ganglia project to areas of the frontal cortex and are essential to motor activity. The motor aspects of speech production depend on the integrity of connections from motor areas in the frontal cortex to nuclei in the brain stem, which innervate the articulatory musculature.

The nature of the connection patterns is also important. Both hemispheres receive input from both ears and both eyes. However, in the case of vision, the left hemisphere is sensitive to the right half of space (or visual field), which projects from the left half of each retina—and vice versa for the visual projection to the right hemisphere. Thus, if the visual area of the left hemisphere is lesioned, vision from the right visual field is interrupted. The receipt of visual information from both eyes (but from the same visual field) is essential for depth vision. Cases where part or all of a hemifield is lost are termed hemianopias. If the corpus callosum is severed (or the posterior part of it lesioned), the left hemisphere will receive information from only the right visual field, and the right hemisphere from only the left visual field. The connectivity pattern is different for the auditory system, where fibers cross over at several levels from right to left and vice versa; however, the projection to each hemisphere from the ear on the opposite side dominates, reflecting the larger number of fibers that each receives from the contralateral ear. In the case of sensory and motor function, the left hemisphere receives input and exerts motor control over the right half of the body, and the right hemisphere does the same for the left. Because certain language areas lie in close proximity to left hemisphere motor cortex, some aphasic patients will have motor weakness on the right, particularly affecting the upper limb. Lesions of left temporal cortex, which also result in aphasia, may disrupt fibers carrying information to visual cortex, resulting in defects in the right visual field.

Historical Background to the Classification of the Aphasias

Systematic observation of the relationships between brain damage and language dysfunction began in the middle of the nineteenth century. The first steps in this direction are generally credited to a French physician, Paul Broca, who also had an interest in anthropology. Prior to his work, the strongest claims about mind-brain relationships were made by phrenologists, who took bumps on the skull to reflect the development of cortical areas beneath. Phrenologists attempted to localize such functions as spirituality and parental love; they also made claims about language, assigning it to the anterior tips of both frontal lobes, an area that lies just behind the eye. (The basis for this assignment was the acquaintance of phrenology’s founder, Gall, with an articulate schoolmate who had protruding eyes!)

Broca was properly skeptical about these pseudoscientific views, believing that functions must be directly related to the convolutions of the cerebral cortex. He observed a patient (Monsieur Leborgne) with a severe speech production impairment, whose output was limited to a single monosyllable (“tan”); in contrast, the gentleman’s comprehension of language appeared to be well preserved. Leborgne passed away, and Broca was able to examine his brain. He found an area of damage in the left inferior frontal lobe and later confirmed this observation in another patient with an articulatory impairment. Broca observed several additional patients with similar problems, although he did not always have information about the underlying neuropathology. In 1865, he postulated that the left frontal lobe was the substrate for the “faculty of articulate language.” He speculated that this function of the left frontal lobe was related to the left hemisphere’s control of the usually dominant right hand (Broca, 1865). Broca also hypothesized that the right hemisphere was responsible for language in left-handers, a conjecture that later proved to be incorrect, although there is evidence that language is less completely lateralized to the left hemisphere in left-handers (e.g., Rasmussen & Milner, 1977).

Several years later, a German physician, Carl Wernicke, observed a different form of language impairment, characterized by a severe comprehension deficit. In contrast to Broca’s cases, these patients spoke fluently, although their output was often uninterpretable. They frequently produced paraphasias (substituted words or speech segments), sometimes uttered nonwords (neologisms), and relied heavily on pronouns or general terms such as “place” and “thing.” Wernicke was able to examine the brain of one of the patients, and found a lesion in the superior temporal gyrus on the left. The area of damage was located in what would now be called the auditory association area, at the junction of the left temporal and parietal lobes. (Association cortex typically borders primary sensory cortex, which receives input from subcortical nuclei; association areas are involved in the further processing of sensory information.) As auditory comprehension and speech production were both compromised by the lesion, Wernicke (1874) speculated that this region contained “auditory word images” acquired in learning the language. In addition to their role in interpreting speech input, he suggested that these images guided the articulatory functions that were localized in Broca’s area. He further speculated that the anatomical connection between Wernicke’s and Broca’s areas would allow the listener to repeat the speech of others; damage to this pathway, he predicted, should result in a disorder in which production would be error-prone, although comprehension would be preserved. The existence of this disorder (conduction aphasia) was subsequently confirmed, although the locus of damage that gives rise to it is still debated (H. Damasio & Damasio, 1989; Dronkers, Redfern, & Knight, 2000).

Wernicke’s early theory of brain-language relationships was elaborated by Leopold Lichtheim (1885), whose model is represented in Figure 21.3. Lichtheim added a concept center, which would serve as the seat of understanding for the listener (hence its connection to Wernicke’s area) as well as the source of the messages ultimately expressed by the speaker (it is also connected to Broca’s area). Although he referred to it as a center, he believed that this information was diffusely represented in the brain. Lichtheim’s model predicted several disorders not yet observed. Among these were the transcortical aphasias (sensory and motor), in which repetition was better preserved than spontaneous production due to the isolation of Broca’s or Wernicke’s area from the concept center. Transcortical sensory aphasia also involved a comprehension problem, due to the disconnection between Wernicke’s area and conceptual information. Lichtheim also predicted a receptive disorder—word deafness—resulting from the disconnection of Wernicke’s area from auditory input, and a production disorder—aphemia—resulting from damage to the pathway from Broca’s area to brain centers concerned with motor implementation of spoken output. All of these patterns were later reported. The Wernicke-Lichtheim approach, which relied heavily on connections among brain centers, came to be known as connectionism (not to be confused with current computer models of cognitive function, which use the same term). Table 21.1 shows the taxonomy of aphasia syndromes, which took shape under this approach and is still in use today.

Connectionist aphasiology has always drawn opposition. One of the early critics was Sigmund Freud, who published a monograph on aphasia in 1891. Freud argued that the connectionist view was too simplistic to account for so complex a function as human language. He also showed that it could not explain the symptoms of several cases of aphasia that had been reported in the literature (Freud, 1891/1953). Contemporary language scientists would agree; the connectionist approach focused on word comprehension and production, completely ignoring sentence-level processes. (Famed neurologist Arnold Pick stood apart from this tradition and made outstanding contributions to the study of sentence-level breakdown; see Pick, 1913). Most of those who studied aphasia in the early to mid-twentieth century adopted a more holistic approach to language-brain relationships (e.g., Head, 1926; Marie, 1906). However, the center-based approach, with emphasis on the importance of connections between areas responsible for particular functions, was revived in the 1960s, largely through the efforts of neurologist Norman Geschwind. He published two celebrated papers that explained a range of cognitive disturbances, in animals and man, in terms of the severing of connections between brain regions (Geschwind, 1965). Insofar as language was concerned, he favored the approach initiated by Wernicke and elaborated by Lichtheim. At about the same time, Schuell and her colleagues at the University of Minnesota conducted large group studies of aphasics, which led them to conclude that the language area of the brain was not differentiated with respect to function—that it operated as a whole (Schuell, Jenkins, & Carroll, 1962). One major difference between these approaches is that Geschwind was looking primarily at cases with restricted lesions, whereas the Minnesota study enrolled patients irrespective of the nature of their lesions and likely included individuals with extensive brain damage.

To a large extent, these polar views—componential or modular versus holistic—continue to characterize the debate among aphasia researchers, and in a somewhat different guise, among language researchers in general. Are language tasks delegated to distinct processing components that operate to a large degree independently of one another (the modular view) or does the system operate with a good deal of interaction? Is there a separate syntactic component, or are syntactic functions subsumed by the lexicon? We will try to address such questions wherever we can. It should be appreciated, however, that in many instances the answers are, at best, provisional. In view of the complexity of language processes, it is not surprising that many issues remain controversial. (To sample contemporary arguments against the modular approach, see Dick et al., 2001, and Van Orden, Pennington, & Stone, 2001.)

Methods Used to Study Brain-Language Relationships

Localizing Brain Lesions

Although encased in the protective skull, the brain is nevertheless susceptible to a wide variety of disorders. Disorders of the circulatory system are the primary cause of aphasic impairments. Most are caused by disruption of the left middle cerebral artery, which supplies the lateral surface of the left hemisphere, including brain regions concerned with language function. Circulatory disorders include those that block an artery (ischemia), hemorrhages involving rupture of a blood vessel, and aneurysms, in which a weak arterial wall ultimately allows blood to leak into brain tissue—a major cause of aphasia in younger individuals. Other disorders include tumors, infections, degenerative diseases (such as Alzheimer’s), and traumatic brain injuries produced by falls, bullets, and vehicular accidents.

Until the early 1970s, techniques for imaging the brain in vivo were rudimentary; radiological techniques did not have much sensitivity. To gain precise information about the location of brain damage, it was necessary to examine the brain after the patient had died, or to rely on descriptions of tissue removed at surgery. The development of the CT scan (CT is an acronym for computer-assisted tomography) improved matters considerably. This is an X-ray procedure that is enhanced by the use of computers; it is possible to acquire images of successive slices through the brain, and due to differences in the density of neural tissue (denser tissue absorbs more radiation), to localize the damage rather precisely. Magnetic resonance imaging (MRI), also developed in the 1970s, led to further advances in brain imaging. One advantage of this technique is that it does not utilize radiation. Instead, a magnetic field is imposed that causes certain atoms (usually hydrogen, a major component of water) to orient in a particular direction; subsequent imposition of a radio frequency signal causes the atoms to spin, giving off radio waves that are registered by a computer. Again, it is possible to examine serial slices of brain tissue. The signal varies with the content of the tissue and yields sharp images that are becoming increasingly refined in their degree of resolution. These techniques have provided a great deal of information on lesion sites, which can then be related to the nature of the language impairment. Computerized methods for compiling lesion data across the brains of patients with similar deficits are proving particularly useful (e.g., Dronkers et al., 2000).

Although aphasia research has yielded much informative data, it must be acknowledged that the evidence is problematic in some respects. The injuries result from disease or trauma, accidents of nature that afford no control over the locus or size of the lesion. Moreover, the damage is often extensive, which makes it difficult to isolate the regions responsible for specific language functions. For example, the lesion that results in chronic Broca’s aphasia extends well beyond the area identified by Broca (Mohr et al., 1978), and it has been claimed that the same applies to Wernicke’s aphasia (Dronkers et al., 2000). Some cases of Broca’s and Wernicke’s aphasias have lesions that completely spare the classical Broca’s and Wernicke’s areas (Dronkers et al.). Furthermore, the delineation of these areas is complicated by the fact that human brains differ in the size and precise localization of particular areas (e.g., Uylings, Malofeeva, Bogolepova, Amunts, & Zilles, 1999).

Some aphasia researchers have tried to interpret aphasic deficits as purely subtractive, taking the residual behavior to represent normalcy minus the damaged component (e.g., Caramazza, 1984). This view is theoretically appealing, but increasingly untenable. Patients with language disorders struggle to communicate. In doing so, they often employ compensatory strategies, which speech pathologists strongly encourage in their attempts to rehabilitate aphasic disorders. This introduces another source of variability: Do different behavior patterns reflect different deficits, or different ways of compensating for the same underlying impairment?

It is also becoming clear that, in some cases at least, regions of the right hemisphere, normally thought to be little involved in basic language functions, provide support for residual language in aphasia. For example, functional imaging studies have shown increased brain activity in areas of the right hemisphere that are homologous to damaged language areas on the left (e.g., Cardebat et al., 1994; Weiller et al., 1995). Other studies have shown that patients whose language has improved subsequent to left hemisphere damage become worse as a result of subsequent right hemisphere lesions (e.g., Basso, Gardelli, Grassi, & Mariotti, 1989). These considerations should be kept in mind as we review the data; where appropriate, we will refer to them explicitly.

There are other reasons to be cautious when drawing inferences about lesion-deficit relations. A functional deficit, even if consistently related to the same locus of injury, may not directly reflect the localization of the impaired function. Much of the brain’s activity depends on connections between sets of neurons, and the deficit may reflect disruption of connectivity patterns as opposed to localization of the function at the site of damage per se. This point is supported by metabolic imaging studies of brain-damaged patients (see next subsection), which have shown hypometabolic changes at sites remote from the structural lesion, and, in some cases, changes in regions of the brain that show no evidence of focal brain damage on CT or MRI (e.g., Breedin, Saffran, & Coslett, 1994).

Imaging Brain Metabolism With Positron Emission Tomography

The imaging methods discussed previously provide static images of brain structures. With positron emission tomography (PET), it has become possible to explore the physiological effects of a structural lesion, for example, by measuring regional metabolism of glucose, the major energy source used by the brain. PET (and the lesser-used SPECT) are methods that localize and quantify the radiation arising from positron-emitting isotopes, which are injected into the bloodstream and which accumulate in different brain regions in proportion to the metabolic activity of those regions and the demands of this activity for greater cerebral blood flow (for a readable introduction, see Metter, 1987). The use of PET in studies of functional brain activity is discussed next. PET has also been used productively to measure resting-state activity in brains damaged by stroke or other neurological insult. As noted above, such studies have revealed that the areas of brain that are metabolically altered by a structural brain lesion far exceed the boundaries of the structural lesion. Furthermore, the metabolic maps provide a very different picture of function-lesion correlations in patients with aphasia (Metter; Metter et al., 1990).

Imaging Functional Brain Activity

A major innovation in cognitive neuroscience has been the extension of PET and MRI methods to the measurement of regional activation associated with ongoing cognitive behavior. What follows is a brief overview; for further details, the reader is referred to Friston (1997), Rugg (1999), and the references therein.

As implied earlier, there is a close coupling between changes in the activity level of a region of brain and changes in its blood supply, such that increased activity leads to an increase in blood supply. In so-called cognitive activation studies with PET, images are acquired while the subject performs two tasks: an experimental task and a control task that ideally differs from the experimental task with respect to only a single cognitive operation. Computerized methods are then used to subtract the activation pattern in the control state from that induced by the experimental state. Regions that achieve above-threshold activation after the subtraction are taken to subserve the cognitive operation(s) of interest.
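The subtraction logic can be sketched in a few lines. The toy voxel grid, activity values, and threshold below are invented for illustration; a real PET analysis involves spatial normalization and statistical modeling that are omitted here.

```python
import numpy as np

# Hypothetical sketch of the subtraction method: each "image" is a small
# 3-D array of regional activity values, one per condition.
rng = np.random.default_rng(0)
shape = (4, 4, 4)  # toy voxel grid

control = rng.normal(loc=100.0, scale=2.0, size=shape)  # control task
experimental = control.copy()
experimental[1, 2, 3] += 15.0  # one region is more active in the experimental task

# Subtract the control activation pattern from the experimental one and
# keep voxels whose difference exceeds an (arbitrary) threshold.
difference = experimental - control
threshold = 5.0
active_voxels = np.argwhere(difference > threshold)

print(active_voxels)  # -> [[1 2 3]]
```

Only the voxel that differs between conditions survives the subtraction, which is the sense in which the residual activation is attributed to the single cognitive operation distinguishing the two tasks.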

Functional MRI (fMRI) takes advantage of a hemodynamic effect called BOLD, for blood oxygenation level dependent. It happens that oxygen flows to activated brain areas in excess of the amount needed, so that the oxygen content of blood is higher when it leaves a highly active area compared with a less active one. Dynamic changes in the ratio of oxygenated to nonoxygenated blood as a cognitive task is performed thus provide an index of the changes of brain activity in areas of interest. Signals obtained from certain MRI measures are sensitive to these changes in blood oxygenation and, by extension, regional brain activity.

fMRI has a number of advantages over PET, not the least of which is that it does not require injection of radioactive compounds. This aspect of PET limits the number of scans that can safely be obtained from a single subject, which generally necessitates the pooling of data across subjects. fMRI, in contrast, can be used with single subjects. Moreover, the images can be acquired over shorter time periods (seconds, as opposed to minutes in the case of PET), and have better spatial resolution. On the other hand, fMRI suffers from artifacts introduced by movements, including small head movements such as occur during speech. This has limited fMRI research on speech production; however, methods for correcting such artifacts are continually evolving, and we can expect to see more such studies in the future. We can also expect more research on the application of fMRI methods to brain-injured populations. As it stands, the hemodynamic models related to BOLD are of questionable validity when applied to patients with cerebrovascular alterations due to stroke or trauma.

Many fMRI experiments employ the same blocked designs and subtraction logic as are used with PET. This approach has been criticized, in that the results are heavily dependent on the choice of the control task. Indeed, it is arguable whether the idealized single-component difference between experimental and control tasks is ever, in reality, achieved (e.g., Friston, 1997). Other methods currently in use include parametric designs, in which the difficulty of a task is systematically varied and regions are identified that show a corresponding increase in activation; and designs in which trials, rather than blocks of trials, constitute the unit of analysis (Zarahn, Aguirre, & D’Esposito, 1997). Some of these newer methods take advantage of fMRI’s sensitivity to transient signal change in order to examine the dynamic response to a sensory event, similar to the electrophysiological ERP technique, discussed next.
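The parametric logic can be illustrated with simulated data: difficulty is varied across blocks, and voxels are identified whose activation rises with difficulty. The difficulty levels, voxel counts, and noise values below are invented; this is not an fMRI analysis pipeline, only a sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(1)
difficulty = np.arange(1.0, 9.0)  # eight parametric difficulty levels
n_voxels = 5

# Simulated activation per (block, voxel): only voxel 2 scales with difficulty;
# the rest are noise.
signal = rng.normal(scale=0.1, size=(len(difficulty), n_voxels))
signal[:, 2] += 2.0 * difficulty

# Correlate each voxel's activation with the difficulty manipulation.
def corr_with_difficulty(col):
    return np.corrcoef(difficulty, col)[0, 1]

correlations = np.array(
    [corr_with_difficulty(signal[:, v]) for v in range(n_voxels)]
)
best = int(np.argmax(correlations))
print(best)  # index of the voxel whose response tracks difficulty
```

A region showing this kind of monotonic relationship to task difficulty is taken to participate in the manipulated operation, without requiring the single-component control task that the subtraction method demands.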

Electrophysiological Methods

Electrophysiological methods have been used both to record events in the brain and, by introducing current, to interfere with brain activity. Potentials temporally linked to sensory stimuli and recorded from the scalp—event-related potentials, or ERPs—have proved extremely useful for mapping the time course of cognitive operations; but as the source of the current is difficult to specify, this approach is less useful for the localization of brain activity (see Kutas, Federmeier, & Sereno, 1999, for a more extensive treatment of this topic). More precise localization data have been acquired from electrode grids placed on the cortex prior to surgical intervention, usually in cases of intractable epilepsy (e.g., Nobre, Allison, & McCarthy, 1994). In some cases, electrodes have been used to apply current to brain regions, which disrupts ongoing brain activity. In the 1950s, such studies were used to map general brain functions (e.g., Penfield & Roberts, 1959); more recently, the technique has been used to identify brain areas associated with specific language functions (e.g., Boatman, Lesser, & Gordon, 1995; Ojemann & Mateer, 1979).

Recently, ERP techniques have been used to investigate aphasic disorders. One advantage of this approach is that it does not require a response on the part of the subject. Overt responses are often delayed or hesitant in patients with aphasia; in some cases, the patient may even say “yes” when he or she means “no,” or vice versa. The ERP methodology utilizes standard electrophysiological responses, such as the one generated by semantic anomaly (e.g., Kutas & Hillyard, 1980). This signal is called the N400—N because it involves a negative change from the baseline of ongoing electrical activity, and 400 because it occurs approximately 400 msec after the anomalous word (e.g., after “socks,” given the sentence “He spread his warm bread with socks”). The amplitude of the N400 is related to the difficulty of integrating the word into the sentence context; in the examples below, it is smaller for sentence 1 than for sentence 2 (Hagoort, Brown, & Osterhout, 1999). There is evidence that the N400 is reduced in aphasics with severe comprehension deficits (Swaab, Brown, & Hagoort, 1997).

  1. The girl put the sweet in her mouth after the lesson.
  2. The girl put the sweet in her pocket after the lesson.

Magnetic Stimulation and Recording

Electrical activity in the brain generates magnetic changes that can be recorded from the surface of the skull (magnetoencephalography, or MEG; see Rugg, 1999, for more details). MEG constrains the source of the activity more tightly than ERP does, in that the strength of the magnetic field falls off more sharply with distance than that of the electrical field. This technique is now being applied in attempts to localize brain activity related to language function (e.g., Levelt, Praamstra, Meyer, Helenius, & Salmelin, 1998).

The application of a magnetic field to points on the skull (transcranial magnetic stimulation, or TMS), which induces electrical interference, has also been used to disrupt the brain’s electrical activity in the cortex below. This technique can be used to help localize brain activity in relation to ongoing tasks (e.g., Coslett & Monsul, 1994).

Inferences From Patterns of Language Breakdown

In addition to the use of patient data to localize language function in the brain, the patterns of language breakdown in aphasia are of interest for their bearing on the functional organization of language. For example, if it could be shown convincingly that syntactic processes are disrupted independently of lexical functions, and vice versa, it would lend support to the theory that these capacities constitute separate components of the language system. Over the past 25 years or so, this has been the enterprise of the field known as cognitive neuropsychology. Cognitive neuropsychologists study the fractionation of cognitive functions in cases of brain damage, with the aim of informing models of the functional architecture of human cognition. This work extends well beyond language, to research in perception, memory, attention, action, and so forth, but a good deal of this effort has focused on the language-processing system.

Much of this research involves the detailed study of individual cases. There are several reasons for this. One is that characterization of the deficit involves extensive examination, often with tasks devised specifically for the patient. These studies often take months to complete, and would be difficult to conduct with a large group of subjects. The second has to do with variability. The classical syndrome descriptors (e.g., Broca’s aphasia, Wernicke’s aphasia) tolerate a wide range of variation. For example, many patients considered to be Broca’s aphasics exhibit agrammatic production (reduction in syntactic complexity; omission of grammatical morphemes), but not all of them do, at least not to a degree that is clinically apparent. A third reason is that there are some disorders of considerable theoretical interest that are quite rare; examples of those to be discussed include word deafness and semantic dementia. If not studied as single cases, many of these disorders could not be investigated at all. Of course, it is risky to make generalizations on the basis of a single instance, and in the vast majority of cases the patterns have been replicated in other patients. It is also comforting to note that recent studies of brain activation in normal subjects have confirmed many of the functions assigned to particular areas on the basis of lesion data.

Computational Models and Simulated Lesions

The area of language study known as psycholinguistics aims to explain language performance in terms of transformations of the language code that are effected at particular processing stages. Researchers interested in the cognitive neuroscience of language take as their ultimate goal the specification of how these encoding and decoding operations are related to specific areas of the brain.

Until relatively recently, the models took the form of box-and-arrow diagrams representing how information flowed from one stage of processing to the next. Over the past decade, there has been an increasing interest in computational models that characterize the processing of information in enough detail that they can be implemented on a computer and experiments can be conducted as computer simulations. These models can also be lesioned—that is, they can be altered in some way (e.g., by increasing noise levels, weakening connections, etc.) to simulate effects of brain damage. (See Saffran, Dell, & Schwartz, 2000, for discussion of several such models.) Although vastly simplified in relation to the real language system, computational models seek to capture basic principles of neural function, in that the elements that comprise the model receive activation and pass this activation on to other units. Some models employ feedback as well as feed-forward activation, as feedback appears to be a widespread feature of neural systems. There are some that contain inhibitory as well as excitatory connections between units. One important class of models starts out with random connections from the layer receiving the input to the layer of units that generates output; the model is then trained by strengthening connections that produce the correct output. These are termed parallel distributed processing (PDP) models (e.g., McClelland & Rumelhart, 1986; see Plaut & Shallice, 1993, for one application to the effects of brain damage). In these models, the information about the relationship between input and output units is distributed across elements in a so-called hidden layer (or in some cases, layers) of units, which is intermediate between input and output. For example, consider the fact that there is no consistent relationship between semantics and phonology; cats and dogs share some similarities, but the sounds of the words that denote these entities are completely different.
As a result of the inconsistent mapping between semantics and phonology, the relationship between them must be represented in a hidden layer. In other models (called localist models), the modeler specifies the connections among elements.
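The training scheme just described can be sketched in a few lines of code. The following toy simulation is our own illustration, not any of the published models cited here; all of the "semantic" and "phonological" feature vectors are invented. Starting from random connections, a small network with one hidden layer learns an arbitrary mapping from meaning-like input patterns to unrelated sound-like output patterns by adjusting connection strengths to reduce output error:

```python
import numpy as np

# Toy PDP-style sketch (our own illustration, not a published model):
# a hidden layer learns an arbitrary mapping from "semantic" input
# patterns to "phonological" output patterns.
rng = np.random.default_rng(0)

# Invented feature vectors. "Cat" and "dog" overlap semantically,
# but their phonological targets are unrelated: the mapping is arbitrary.
X = np.array([[1., 1., 0., 0.],   # cat  (semantic features)
              [1., 1., 1., 0.],   # dog
              [0., 0., 1., 1.],   # cup
              [1., 0., 0., 1.]])  # pen
Y = np.array([[1., 0., 0.],       # cat  (phonological features)
              [0., 1., 1.],       # dog
              [1., 1., 0.],       # cup
              [0., 0., 1.]])      # pen

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Start with random connections, as the text describes.
W1 = rng.normal(0.0, 0.5, (4, 10))    # input -> hidden layer
W2 = rng.normal(0.0, 0.5, (10, 3))    # hidden layer -> output

lr = 0.5
for _ in range(20000):                # train: adjust connections to reduce error
    H = sigmoid(X @ W1)               # hidden-layer activations
    O = sigmoid(H @ W2)               # output activations
    dO = (O - Y) * O * (1.0 - O)      # error signal at the output layer
    dH = (dO @ W2.T) * H * (1.0 - H)  # error propagated back to the hidden layer
    W2 -= lr * H.T @ dO               # strengthen/weaken hidden -> output weights
    W1 -= lr * X.T @ dH               # strengthen/weaken input -> hidden weights

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

After training, the outputs approximate Y. Because cat and dog overlap in their input features but share nothing in their targets, no direct input-output correlation could do the work; the knowledge linking meaning to sound resides in the hidden-layer weights.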

The attempt to model the effects of brain damage provides another way of testing the adequacy of computational models (in addition to simulating data from psycholinguistic experiments). In other words, it should be possible to take a model capable of generating normal language patterns and damage it so that it produces abnormal patterns that are actually observed following injury to the brain (e.g., Saffran et al., 2000; Haarmann, Just, & Carpenter, 1997). It is also possible that these efforts to simulate effects of lesions will yield insights into the nature of pathological states, and even contribute to approaches to remediating these disorders (Plaut, 1996). We will provide some examples of computer-based lesion studies as we go along.

The Semantic Representation of Objects

In 1972, psychologist Endel Tulving introduced the term semantic memory to denote the compendium of information that represents one’s knowledge of the world as derived from both linguistic and nonlinguistic sources. Tulving was interested in distinguishing this store of general knowledge from episodic or autobiographical memory, which preserves information about an individual’s personal experiences. That tigers have stripes, that cars have engines, that Egypt is in Africa—these facts are among the contents of semantic memory, whereas personal remembrances such as the site of one’s last vacation or the details of one’s most recent restaurant meal are entered in episodic memory. In this section, we consider how the brain represents one particular aspect of semantic memory—the knowledge that allows one to generate and understand words and pictures. Our major source of evidence will be individuals whose store of semantic knowledge is severely compromised by brain damage.

Semantic Disorders

Semantic Dementia

In the syndrome that has come to be known as semantic dementia, there is progressive erosion of semantic memory with relative sparing of other cognitive functions (Snowden, Goulding, & Neary, 1989; and for earlier cases that conform to this description, Schwartz, Marin, & Saffran, 1979, and Warrington, 1975). Patients initially complain of inability to remember the names of people, places, and things. Formal testing confirms a word retrieval deficit, often accompanied by impairment in word comprehension. As the disorder progresses, most patients also lose the ability to answer questions about real or depicted objects (e.g., regarding their color, size, or country of origin) or to indicate which two of three pictured objects are more closely related (e.g., horse, cow, bear). In other words, the semantic impairment affects nonverbal as well as verbal concepts. On the other hand, the ability of these patients to handle and use objects in practical tasks is generally far better than what their naming and matching performance would predict. This is presumed to reflect their preserved sensorimotor knowledge or practical problem solving (Hodges, Bozeat, Lambon Ralph, Patterson, & Spatt, 2000; and for critical discussion, Buxbaum, Schwartz, & Carew, 1997).

Semantic dementia is the manifestation of a degenerative brain disease (cause unknown) that targets the temporal lobes, in most cases predominantly the left (Hodges, Patterson, Oxbury, & Funnell, 1992). Radiological investigation with CT or MRI often reveals focal atrophy in the anterior and inferior temporal regions of one hemisphere (usually the left) or both. A quantitative analysis of gray-matter volumetric changes in 6 cases revealed that the atrophy begins in the temporal pole (Brodmann’s area [BA] 38) and spreads anteriorly into the adjacent ventromedial frontal region, and posteriorly into the inferior and medial temporal gyri (Mummery et al., 2000). In support of this anterior locus, Breedin, Saffran, and Coslett (1994) found SPECT abnormalities that were maximal in the anterior inferior temporal lobes in a semantic dementia patient who exhibited no structural brain changes on MRI. Other functional imaging studies with SPECT or PET have described hypometabolism outside the regions of atrophy, most notably in the temporo-occipito-parietal area known to be important for object identification and naming (e.g., Mummery et al., 1999). It is likely that this posterior hypometabolism reflects disruption of connections from the damaged anterior temporal lobes (Mummery et al., 2000).

Remarkably, the neuropathology in semantic dementia spares the classical anterior and posterior language zones (the regions damaged in Broca’s and Wernicke’s aphasias, respectively). As a result, aspects of language processing, including word repetition and grammatical encoding, remain largely intact (Schwartz et al., 1979; Breedin & Saffran, 1999). Also spared is the neural substrate for the formation of episodic memories, in the medial temporal lobes and hippocampus. Thus, unlike Alzheimer’s disease, in which day-to-day memory loss is often one of the earliest symptoms, semantic dementia leaves recent autobiographical memory well preserved long into the course of the disease (Snowden, Griffiths, & Neary, 1994). Eventually, however, the degenerative process invades other areas and a general dementia sets in, rendering the individual incapable of caring for him- or herself. Autopsy studies of brain tissue reveal a spectrum of non-Alzheimer’s pathological changes, including, in many cases, those indicative of Pick’s disease (Hodges, Garrard, & Patterson, 1998).

The loss of verbal and nonverbal concepts in semantic dementia occurs gradually, with specific features lost before more general ones. This can be shown by asking subjects to name objects aloud, match words to pictures, or answer probe questions regarding the physical or other characteristics of objects. Until late in the clinical course, errors are mostly within category, such as naming a fork a “spoon” or a cow a “horse” (e.g., Schwartz et al., 1979). We saw something similar in the drawings of a patient who was formerly an artist. Her early depictions of named objects were generally accurate with respect to category-level information, but lacked identifying detail (Figure 21.4).


When asked to define words, semantic dementia patients provide little information about an object’s perceptual characteristics (Lambon Ralph, Graham, & Patterson, 1999). Other than this, most semantic dementia patients demonstrate no striking selectivity in their semantic loss. There are, however, patients whose semantic impairment affects some types of entities more than others. The most impressive instances of selective impairment are the disorders that have been termed category specific. We turn to these next.

Disproportionate Impairment for Living Things

This condition was first described in detail by Warrington and Shallice (1984). They studied four patients, three of whom were suffering the aftereffects of herpes encephalitis, which generally includes dense amnesia (reflecting damage to medial temporal and inferior frontal lobe structures) along with semantic impairment; the fourth patient had semantic dementia. The disproportionate impact on living things emerged clearly on a definitions test. For example, one patient defined a compass as “tools for telling direction you are going,” whereas a snail was “an insect animal.” Another defined submarine as a “ship that goes underneath the sea,” but a spider as “person looking for things, he was a spider for a nation or country.” Warrington and Shallice’s patients were also impaired in their knowledge of foods, a category that includes manufactured items (e.g., bread, pizza) as well as biological entities such as fruits and vegetables. There were also indications of impairment on certain categories of man-made things, such as gemstones, fabrics, and musical instruments.

Warrington and Shallice’s study was followed by a number of others demonstrating similar deficits involving living things and foodstuffs in patients with damage to the temporal lobes (see Saffran & Schwartz, 1994, for a review of cases). The claim that these impairments represent the loss of knowledge for certain categories of object did not go unchallenged, however. In some cases, living and non-living categories were not matched for frequency or familiarity, and there are a few patients whose category differences disappeared when these factors were adequately controlled (Funnell & Sheridan, 1992; Stewart, Parkin, & Hunkin, 1992). This can be a particular problem with animals, which tend to be rated as less familiar than artifacts. On the other hand, foods are more familiar, yet they pattern with animals. Other factors that could contribute to the difficulty of living things include visual complexity and similarity in form, which are generally greater for living things than for artifacts (e.g., Gaffan & Heywood, 1993; Humphreys, Lamote, & Lloyd-Jones, 1995). In most cases, however, control of these factors through stimulus selection (e.g., Funnell & De Mornay Davies, 1997) or statistical analyses (e.g., Farah, Meyer, & McMullen, 1996) has not eliminated category-specific deficits for living things. Moreover, the factors that render living things more difficult cannot explain the occurrence of the opposite pattern—greater impairment on man-made objects than living things.

Disproportionate Impairment for Artifacts

This pattern was described in two case studies by Warrington and McCarthy (1983, 1987), and subsequently in patients studied by Behrmann and Leiberthal (1989), Hillis and Caramazza (1991), and Sacchett and Humphreys (1992). The subjects of these reports were aphasics with left hemisphere lesions. Three of the four cases had lesions involving frontoparietal cortex; in the fourth case (the patient reported by Hillis and Caramazza) the lesion involved the left temporal lobe and the basal ganglia, which project to the frontal lobe. Because Warrington and McCarthy’s patients (VER and YOT) were severely aphasic, they could be tested only on word comprehension. On a word-to-picture matching test, YOT scored 67% correct on artifacts, 86% correct on living things, and 83% on food; VER scored 58% correct on artifacts and 88% on food. YOT also proved to be impaired on body parts, scoring only 22% on this highly familiar category. Tested on picture naming, CW (Sacchett & Humphreys) and JJ (Hillis & Caramazza) scored 35% and 45%, respectively, on artifacts and body parts and 95% and 92%, respectively, on living things. Breedin, Martin, and Saffran (1994) have demonstrated a similar pattern in patients with left frontoparietal lesions using a word-similarity judgment task. One consistent finding is that the decrement on artifacts is associated with impaired performance on body parts.

The Weighted-Features Account of Category-Specific Disorders

How can we account for these category-specific semantic disorders? We have already mentioned the possibility that the brain organizes knowledge according to semantic category: animals in one network, foods in another, tools in a third, and so on. (See Caramazza & Shelton, 1998, for a proposal along these lines.) Warrington and her colleagues (Warrington & Shallice, 1984; Warrington & McCarthy, 1987) have taken a different stance, hypothesizing that category specificity in semantic breakdown reflects the properties that figure most importantly in the representations of objects. Warrington and Shallice pointed out that, unlike most plants and animals, artifacts

have clearly defined functions. The evolutionary development of tool use has led to finer and finer functional differentiations of artifacts for an increasing range of purposes. Individual inanimate objects have specific functions that are designed for activities appropriate to their function . . . jar, jug and vase are identified in terms of their function, namely, to hold a particular type of object, but the sensory features of each can vary considerably. By contrast, functional attributes contribute minimally to the identification of living things (e.g., lion, tiger and leopard), whereas sensory attributes provide the definitive characteristics (e.g., plain, striped, or spotted). (p. 849)

The idea here is that perceptual properties are more heavily weighted in differentiating representations of living things, whereas functional information figures more importantly in the representations of artifacts. The perceptual properties of living things are, of course, intrinsic to the entities and largely immutable, whereas, in the case of artifacts, many properties are free to vary. The range of objects that currently serve as radios, for example, necessitates that they be defined in terms of their function as opposed to their shape, color, or composition. The differential weighting of perceptual information in the case of living things was confirmed by Farah and McClelland (1991), who asked subjects to count the number of visual and functional descriptors in dictionary definitions of living and non-living entities. Visual properties dominated in both sets, but more so (a ratio of nearly 8:1) in the case of living things compared with artifacts (1.4:1).

The relative-weighting account does not deny that artifacts may have distinctive visual properties. However, it predicts that the loss of perceptual properties should be particularly damaging to the representations of living things, which are largely distinguished from one another by their physical characteristics. In contrast, artifacts are differentiated in terms of function, as well as by the manner in which they are manipulated. Body parts may pattern with artifacts because they, too, are differentiated by their functions, or possibly as a consequence of their roles in the utilization of these objects. In contrast, manufactured foods would be expected to pattern with living things: Foods serve the same basic function and are in large part distinguished by their sensory properties, such as color, shape, and taste.

The differential-weighting account is consistent with a model of semantic memory in which information about an object is distributed across a number of brain subsystems specialized for a particular type of knowledge. Allport (1985) outlined a network model consisting of subsystems that are specialized for particular types of information (visual, tactile, action-orientated). Information about a particular object (e.g., a telephone) is distributed across these subsystems, which are linked to one another by associations among co-occurring features. As a result, activation of features of an object in one subsystem will automatically activate other features of the object in other subsystems. On the assumption that these subsystems are anatomically distinct, it should be possible to disrupt them independently.
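The differential-weighting idea lends itself to a simple demonstration. In the following toy sketch (our own illustration, not a published model; all concepts and feature labels are invented), living things are distinguished from their category neighbors by perceptual features and artifacts by functional ones. Deleting the perceptual feature units collapses the animals into identical representations while leaving the artifacts distinct, and a functional lesion does the reverse:

```python
# Toy demonstration of the weighted-features account (our own sketch;
# all concepts and feature labels are invented). Living things are
# distinguished from category neighbors by perceptual features,
# artifacts by functional ones.
PERCEPTUAL = {"striped", "spotted"}
FUNCTIONAL = {"pounds", "cuts"}

concepts = {
    "tiger":   {"animal", "four-legged", "striped"},
    "leopard": {"animal", "four-legged", "spotted"},
    "hammer":  {"tool", "has-handle", "pounds"},
    "saw":     {"tool", "has-handle", "cuts"},
}

def lesion(feature_units_lost):
    """Delete one class of feature units from every stored representation."""
    return {name: feats - feature_units_lost for name, feats in concepts.items()}

def still_distinct(store, names):
    """Can the members of a category still be told apart?"""
    reps = [frozenset(store[n]) for n in names]
    return len(set(reps)) == len(reps)

results = {}
for lost, label in [(PERCEPTUAL, "perceptual"), (FUNCTIONAL, "functional")]:
    store = lesion(lost)
    results[label] = (still_distinct(store, ["tiger", "leopard"]),
                      still_distinct(store, ["hammer", "saw"]))
    print(f"{label} lesion: animals distinct={results[label][0]}, "
          f"artifacts distinct={results[label][1]}")
# perceptual lesion: animals distinct=False, artifacts distinct=True
# functional lesion: animals distinct=True, artifacts distinct=False
```

Note that neither lesion abolishes knowledge of a domain wholesale: category-level information ("animal," "tool") survives, while the features that individuate category members are lost, which is just the within-category confusion pattern described for the patients.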

Warrington and McCarthy (1987) speculated that there might even be a finer-grained differentiation within the semantic system, such that some perceptual features (e.g., shape) figure more heavily in the representations of animals, whereas others (e.g., color, taste) are more salient in the distinctions among fruits and vegetables. As the experience of objects rests on their sensory and sensorimotor properties, it is reasonable to assume that these various characteristics are experienced through different modalities and that they are registered in different subsystems. As Shallice (1988) has put it,

such a developmental process can be viewed as one in which individual units (neurons) within the network come to be most influenced by input from particular input channels, and, in turn, come to have most effect on particular output channels. Complementarily, an individual concept will come to be most strongly represented in the activity of those units that correspond to the pattern of input-output pathways most required in the concept’s identification and use. The sets of units that are most critical for a related group of categories would then come to form semimodules. . . . The capacity to distinguish between members of a particular category would depend on whether there are sufficient neurons preserved in the relevant partially specialised region to allow the network to respond . . . differentially to the different items in the category. (pp. 302–303)

Although this position has sometimes been formulated in terms of distinct visual and verbal semantic systems (e.g., McCarthy & Warrington, 1988), “visual semantic” and “verbal semantic” can be conceptualized as “partially specialized subregions,” to use the terminology suggested by Shallice.

The Semantic Representation of Objects

It is possible to account for a number of neuropsychological phenomena within the framework of a distributed model. As we said, disproportionate impairment of living things (and foodstuffs) would reflect damage—or lack of access—to perceptual properties, which are heavily weighted in the representations of these entities. Worse performance on artifacts would reflect the loss of functional or perhaps action-based sensorimotor information (see Buxbaum & Saffran, 1998). The model also allows for the selective disruption of linkages between attribute domains, as well as damage to connections between specific domains and input and output systems. The literature contains descriptions of disorders of the latter type. For example, McCarthy and Warrington (1988) studied a patient (TOB) who was impaired on living things when queried verbally, but who was able to describe living things adequately when provided with pictures. Asked to define the word dolphin, for example, TOB said “a fish or a bird,” but when shown a picture he responded, “lives in water . . . trained to jump up and come out . . . In America during the war years they started to get this particular animal to go through to look into ships.” We have recently tested a patient with a similar deficit. She could not, for example, define the meaning of the word candle, responding that it had something to do with food (from can, perhaps, or candy), but when shown a picture she said, “You put them on the table at dinner, and they provide light.” The same patient performed at normal levels on an associative matching test with pictures but was severely impaired when the same items were presented as words. The model could account for this pattern by disruption of the linkages between lexical representations (word forms in Figure 21.5) and semantic information. Based on Warrington and McCarthy’s (1987) suggestion of finer-grained distinctions, one should also see patients with deficits selective to animals or foods.
Such patients have been reported; for example, Hart and Gordon (1992) described a patient who was impaired on animals but not fruits and vegetables, and Hart, Berndt, and Caramazza (1985) have reported the opposite pattern.

Does the living-things deficit go along with poor processing of perceptual features, as the weighted-features model would have it? This issue has been investigated in a number of different studies, with mixed results. Some have been favorable to the model (e.g., Breedin, Martin, et al., 1994; De Renzi & Lucchelli, 1994; Farah, Hammond, Mehta, & Ratcliff, 1989; Forde, Francis, Riddoch, Rumiati, & Humphreys, 1997; Gainotti & Silveri, 1996; Moss, Tyler, & Jennings, 1997), while others have found no difference between perceptual and other features (e.g., Barbarotto, Capitani, Spinnler, & Travelli, 1995; Caramazza & Shelton, 1998; Funnell & De Mornay Davies, 1996). Moreover, the positive findings are not as strong as they could be, in that the loss of perceptual information has generally been restricted to living things (Caramazza & Shelton, 1998). To explain why this feature deficit does not apply to artifacts as well, proponents of the model have proposed that because the features of man-made things are often closely related to their functions, it may be possible to generate perceptual properties for objects whose functional properties are retained (Moss et al., 1997; see also De Renzi & Lucchelli, 1994).

Despite these mixed findings, the weighted-features account, in our view, still merits serious consideration. For one thing, the anatomical locus of the living-things deficit is consistent with impaired processing of perceptual features. These patients tend to have damage in the inferior temporal cortex bilaterally (Breedin, Saffran, et al., 1994; Gainotti & Silveri, 1996; and note that herpes simplex encephalitis preferentially strikes at inferior and medial temporal cortices). The affected area borders on the region of visual association cortex that is concerned with the recognition of objects; and information from other sensory association areas converges on the anterior inferotemporal cortex on its way to medial structures such as the hippocampus. The model also makes sense from an evolutionary perspective. The need to know about the world in which we live is not unique to humans. Although language vastly expands the means for acquiring information, we, like other animals, learn about the world through visual and other sensory input.

Imaging Semantics in the Normal Brain

Recently, functional imaging techniques have been used in association with semantic tasks to identify brain regions involved in semantic operations. The neurologically intact participant is asked to name objects aloud or subvocally, to generate items from particular categories (e.g., names of animals), or to answer probe questions, at the same time that his or her brain activity is being imaged by PET or fMRI. One question addressed in such studies is whether semantic judgments to pictures and words activate a common substrate. The findings are that they do, and that the substrate is distributed within and around the left temporal lobe, specifically the temporoparietal junction, temporal-occipital junction (fusiform gyrus; BA 37), middle temporal gyrus, and inferior frontal gyrus (BA 11/47; Vandenberghe, Price, Wise, Josephs, & Frackowiak, 1996). This corresponds closely to the lesion sites in semantic dementia, except that the anterior temporal lobe is not part of the activated network (see Murtha, Chertkow, Beauregard, & Evans, 1999). This raises questions as to whether anterior temporal atrophy affects semantic storage directly (as suggested by Breedin, Saffran, et al., 1994, among others) or indirectly, by interrupting connections to components of the semantic network located farther back in the temporal and occipital lobes. A third possibility, argued by H. Damasio, Grabowski, Tranel, Hichwa, and Damasio (1996), is that the left anterior temporal lobe plays a key role in mediating between semantics and the mental lexicon, such that damage to this area disrupts not semantics but lexical-phonological retrieval (for opposing arguments, see Murtha et al., 1999).

Other activation studies have sought to specify the particular functions of regions in this distributed network, by varying properties of the primary and baseline tasks. One finding is that the temporal-occipital area (fusiform gyrus; BA 37) is involved in the processing of semantics (Murtha et al., 1999), and not low-level perceptual processing (Kanwisher, Woods, Iacoboni, & Mazziotta, 1997). Support for this comes from a study by Beauregard and colleagues (1997), who described left fusiform activity during passive viewing of animal names but not abstract words. The suggestion from this study, and from others reporting enhanced fusiform activation during the processing of living entities (Perani et al., 1995), is that the left fusiform area is an important component of the circuitry involved in the processing of animate entities or visual semantic features.

As to the neural circuitry of inanimate entities, a study by A. Martin, Wiggs, Ungerleider, and Haxby (1996) comports well with the lesion evidence and the weighted-features account. These investigators examined silent and oral naming of animals and tools against a baseline task that involved the viewing of nonsense figures. In this study, both types of objects generated activity in the fusiform gyrus bilaterally; however, tools selectively activated left middle temporal areas and the left premotor area. The premotor area activated in tool naming was also active in a previous study in which subjects imagined grasping objects with the right hand (A. Martin, Haxby, Lalonde, & Ungerleider, 1995). The implication is that sensorimotor circuits involved in grasp programming are activated during the naming of tools, and, hence, that grasp information is part of the semantic representation of tools (see also Grafton, Fadiga, Arbib, & Rizzolatti, 1997). It should be noted that not all neuroimaging studies of tools have described premotor activation. However, there is convergent evidence that whereas the network activated by animals has a bilateral distribution, the network for tools is restricted to the left hemisphere (Cappa, Perani, Schnur, Tettamanti, & Fazio, 1998; Perani et al., 1995; and for lesion evidence, Tranel, Damasio, & Damasio, 1997).

Recent studies have also shed light on why the left prefrontal region (BA 44, 45, 46, 47) is frequently activated in semantic tasks. It appears that these areas are not, as once thought, involved in the storage of semantic information (e.g., Petersen, Fox, Posner, Mintun, & Raichle, 1988). Rather, prefrontal activation in semantic tasks varies as a function of task difficulty and is more likely related to control processes, such as working memory (Murtha et al., 1999) and competitive selection (Thompson-Schill, D’Esposito, Aguirre, & Farah, 1997).

The Comprehension of Spoken Language

The comprehension of spoken input is a complex process, involving (a) analysis of speech sounds via the extraction of spectral and temporal cues from the speech signal; (b) use of the products of this analysis to access entries in the internal lexicon and ultimately the meanings of words; (c) syntactic analysis if the input is sentential; and (d) interpretation of the meaning of the sentence, a process that requires the integration of several different forms of information (lexical, syntactic, and semantic).

Speech Perception and Word Deafness

Speech poses a number of problems for the listener. Much of the information is carried by rapid changes in the speech signal—the cues that differentiate consonants, for example. Also, the information in the speech stream is transient; although readers can reexamine a word (and there is evidence that they do; see, e.g., Altmann, Garnham, & Dennis, 1992), listeners cannot, particularly if the current word is followed (and thereby overwritten) by others. There are additional difficulties, identified in the literature as the segmentation and invariance problems. The segmentation problem refers to the fact that there are often no spaces—no silent gaps—to signal the boundaries between the words of an utterance. There is evidence that consistency in the stress patterns of words may be utilized for this purpose; for example, English generally places stress on the first syllable of nouns, a pattern that infants become familiar with during the first year of life (Jusczyk, Cutler, & Redanz, 1993). The invariance problem refers to the variability of the signals associated with a given phoneme, which reflect the influence of the phonemes that surround it (coarticulation). This variation is evident in spectrographic displays of speech stimuli, where it can be seen, for example, that the sound associated with the /b/ in about is not the same as that of the /b/ in table. Although speech perception has been studied extensively, there is no general agreement on how the human auditory system copes with these complexities (see Miller & Eimas, 1995, for a review).
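The stress-based solution to the segmentation problem can be illustrated computationally. The sketch below is a deliberately simplified toy, not a model from the literature: it assumes the listener has already identified syllables and their stress, and posits a word boundary before each stressed syllable, in line with the generalization that English nouns tend to carry initial stress.

```python
def segment_by_stress(syllables):
    """Group syllables into candidate words, starting a new
    word at each stressed syllable.

    `syllables` is a list of (text, stressed) pairs, a stand-in
    for the continuous speech stream, which contains no pauses
    between words.
    """
    words, current = [], []
    for text, stressed in syllables:
        if stressed and current:
            # A stressed syllable signals a likely word onset.
            words.append("".join(current))
            current = []
        current.append(text)
    if current:
        words.append("".join(current))
    return words

# "DOCtor VISits PAtients" heard as one unbroken stream:
stream = [("doc", True), ("tor", False),
          ("vis", True), ("its", False),
          ("pa", True), ("tients", False)]
print(segment_by_stress(stream))  # ['doctor', 'visits', 'patients']
```

The heuristic fails, of course, on words with non-initial stress (e.g., *giraffe*), which is why stress can only be one of several converging segmentation cues.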

There is also no consensus on the mechanisms that underlie the ability to identify spoken words. Some investigators claim that words are recognized on the basis of auditory properties alone (e.g., Klatt, 1989); others maintain that word perception is phonetically based, or that it utilizes abstract phonological representations, or relies on the analysis of syllabic units (see Miller & Eimas, 1995, for a summary of these views). Across languages, word recognition may depend on different aspects of auditory input; for example, some languages (Thai, Mandarin Chinese) utilize tonal information, although most do not.

There is experimental evidence that contextual information is influential in the perception of speech. For example, partial phonological information is more likely to be filled in by the perceiver if the absent phoneme (replaced by a cough or noise) is part of a word as opposed to a nonword (Ganong, 1980; Warren, 1970). This suggests that there is feedback from partially activated lexical representations to prelexical stages of analysis of the input signal; some models of speech perception incorporate such effects (e.g., McClelland & Elman, 1986), but others manage to accommodate this result without adopting this assumption (Miller & Eimas, 1995). There is also evidence that contextual information from other words in the sentence facilitates word recognition. Listeners are quicker to recognize a previously identified target word in a sentence context if the syntax is correct and the sentence is semantically coherent (e.g., Marslen-Wilson & Tyler, 1980). On the other hand, it has also been shown that word recognition does not require either full or accurate input; remarkably, the identification of a word is seldom impeded by errors on the part of the speaker or partial masking by noise (e.g., Miller & Eimas). It appears that words are activated in parallel on the basis of partial information (e.g., hearing the sound “sih” will activate city, citizen, silly, simple, etc.) and some words can be recognized before they are completed (the cohort theory; Marslen-Wilson & Welsh, 1978) although the presence of activated neighbors can also slow recognition of a given word (Luce, Pisoni, & Goldinger, 1990).
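The parallel-activation idea behind cohort theory can be sketched as prefix matching over phoneme sequences. The mini-lexicon and the rough transcriptions below are illustrative assumptions (capital "I" stands for the lax vowel in *sit*); the real recognition process operates over acoustic-phonetic input, not letter strings.

```python
# Hypothetical mini-lexicon: word -> rough phonemic transcription.
LEXICON = {
    "city": "sIti",
    "citizen": "sItIzEn",
    "silly": "sIli",
    "simple": "sImpEl",
}

def cohort(heard_so_far):
    """Return every word whose phoneme sequence is still
    consistent with the input received so far."""
    return [word for word, phonemes in LEXICON.items()
            if phonemes.startswith(heard_so_far)]

# Hearing "sih" activates many candidates in parallel; each
# additional phoneme narrows the cohort.
print(cohort("sI"))     # ['city', 'citizen', 'silly', 'simple']
print(cohort("sIt"))    # ['city', 'citizen']
print(cohort("sItIz"))  # ['citizen'] -- recognized before the word ends
```

The last call illustrates the recognition point: once the cohort shrinks to a single member, the word can be identified before it is acoustically complete.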

Whatever the nature of the mechanisms for speech perception and lexical access, it is clear that they are supported by portions of the temporal lobe—the left temporal lobe, in particular. A portion of the superior temporal gyrus (Brodmann’s area 41, or Heschl’s gyrus), which extends medially into the Sylvian fissure, is the location of A1—primary auditory cortex, the brain region that receives input from earlier processing stations in the auditory pathway. Primary auditory cortex is surrounded by auditory association cortex, where the incoming signals undergo additional processing and identification. Evidence from functional imaging studies indicates that the left temporal lobe is more sensitive than the right when responding to auditory stimuli of brief temporal duration, an important characteristic of speech (e.g., Fitch, Miller, & Tallal, 1997). It is sometimes suggested that the left hemisphere’s dominant role in language function is an outgrowth of its capacity to process rapid changes in auditory signals (J. Schwartz & Tallal, 1980).

Lesions in the left temporal auditory association area (Wernicke’s area), which lies posterior and lateral to A1, give rise to an array of deficits that include impaired comprehension of spoken language as well as disturbances in production (see later discussion of Wernicke’s aphasia). More rare are cases in which the impairment is limited to speech perception. This disorder, known as pure word deafness, results from smaller lesions in the left temporal lobe, or in some cases from damage to the temporal lobes bilaterally (e.g., Takahashi et al., 1992; Yaqub, Gascon, Al-Nosha, & Whitaker, 1988). Both types of lesion are likely to cut off auditory input to the left temporal lobe; the input pathways include fibers from the thalamus (medial geniculate nucleus) and from the homologous area in the right hemisphere that projects to the left temporal lobe via the corpus callosum. One illustration of the effect of bilateral lesions is the case reported by Praamstra, Hagoort, Maasen, and Crul (1991). This patient initially manifested Wernicke’s aphasia as a result of a left temporal lesion; several years later, he suffered a right temporal lesion, which produced word deafness.

Patients with pure word deafness retain the ability to speak and to understand written language; although they continue to perceive spoken language as such, they have great difficulty comprehending speech and repeating what is said to them. As an English-speaking word-deaf patient remarked to one of us, it seemed to him that people were speaking in a foreign language, and that his ears were disconnected from his voice (Saffran, Marin, & Yeni-Komshian, 1976). Word-deaf patients retain the ability to perceive vowels, which are long lasting and constant in form; but they perform poorly on tests of phoneme discrimination and identification that involve consonants. Consonants involve signals that undergo changes in frequency, and some include components that are very brief in duration.

Although word-deaf patients are severely impaired under most conditions, their comprehension of spoken language improves somewhat if they are allowed to read lips, or if other contextual information is provided (e.g., Saffran et al., 1976; Shindo, Kaga, & Tanaka, 1991). These effects suggest that top-down processes (information fed back from word representations) can be used to disambiguate a signal that is noisy or degraded. Some of these patients have no difficulty identifying nonspeech sounds, such as those produced by animals or musical instruments (e.g., Saffran et al.). Failure to recognize nonspeech stimuli is termed auditory agnosia, a condition generally associated with right temporal lobe lesions (e.g., Fujii et al., 1990). For a recent review and case summaries, see Simons and Lambon Ralph (1999).

Lexical Comprehension and Wernicke’s Aphasia

To comprehend a word, it is necessary for the input signal to contact the appropriate entry in the mental lexicon. The lexical entry provides access to the word’s meaning and to its grammatical properties (whether it is a noun or a verb; if a verb, whether it is transitive or intransitive, etc.), information that is required for the computation of syntactic structure.

Word comprehension failure is a cardinal feature of the syndrome known as Wernicke’s aphasia. These patients typically have large left temporal lobe lesions including not just the classical Wernicke’s area (posterior part of superior temporal gyrus) but also the posterior middle temporal gyrus and underlying white matter (Dronkers et al., 2000). Recent evidence suggests that a lesion restricted to Wernicke’s area will not produce a chronic Wernicke’s aphasia (Basso, Lecours, Moraschini, & Vanier, 1985; Dronkers et al.).

Wernicke’s aphasia is far more common than word deafness, and its impact on language functions is more extensive. The comprehension problem is not limited to spoken language; reading comprehension is usually affected as well, although there are cases in which the patient does much better with printed than spoken input (e.g., Ellis, Miller, & Sin, 1983; Heilman, Rothi, Campanella, & Wolfson, 1979; Hillis, Boatman, Hart, & Gordon, 1999). In addition, there are deficits in language production, written as well as spoken. These patients tend to have difficulty finding the right words. Instead, they may substitute words that are related in meaning, or they may rely on pronouns and general terms such as place and thing. Their production may also be riddled with nonwords (neologisms). In extreme cases, speech is reduced to semantic or neologistic jargon, which is difficult if not impossible to comprehend.

The nature of the word comprehension deficit in Wernicke’s aphasia is not well understood. Although it was earlier thought that the deficit reflects impaired phoneme perception (e.g., Luria, 1966), it is now recognized that there is little correlation between phoneme perception deficits and auditory comprehension impairments in aphasics (Blumstein, 1994). For example, Blumstein and her colleagues have demonstrated that patients with preserved phoneme discrimination may nevertheless be impaired in identifying speech sounds (Blumstein, Cooper, Zurif, & Caramazza, 1977; Blumstein, Tartter, Nigro, & Statlender, 1984), implying that comprehension may falter as a consequence of the speech input’s failing to contact phonemic representations. The evidence is less than definitive, however, since the phonemic identification task requires matching the spoken input to a printed representation, and it is not clear that all the patients tested in this way have been capable of meeting the task demands.

Another likely locus for impaired word comprehension is in the access to semantic representations. Word comprehension is most often assessed by means of word-picture matching tests. Wernicke’s aphasics perform poorly on such tests, and they have particular difficulty when the foils are phonologically similar to the target or belong to the same semantic category. The latter effect implicates semantic processing: If the patient were simply unable to perceive the speech sounds or map them onto a phonemic representation, the semantic similarity of the foils would not matter. That it does matter indicates that the perceived word is not accessing the full set of semantic features that distinguish one category member from another; recall that the same pattern was evident in semantic dementia.

One factor that differentiates at least some aphasics from semantic dementia patients is the aphasics’ relatively well preserved performance on tests that utilize pictorial material exclusively. A task of this nature is the Pyramids and Palm Trees test developed by Howard and Patterson (1992), in which the subject is required to match one of two pictures to a third on the basis of conceptual similarity (e.g., a palm or pine tree to a pyramid). The same task can be administered using word stimuli. Patients who do well on the picture version of the test but poorly on comparable word-based assessments clearly have difficulty accessing meaning from words. The neurological basis for such word-only semantic deficits has not been established. However, Hart and Gordon (1990) performed an anatomical study on 3 aphasic patients with unusually pure semantic comprehension deficits, manifested on tests with spoken and written words and with pictures. Lesion overlap was found in the posterior temporal (BA 22, 21) and inferior parietal (BA 39, 40) regions (Hart & Gordon). It is possible that lesions here disrupt pathways between regions concerned with phonemic or lexical aspects of word recognition and those where semantic information is stored.

The fact that Wernicke’s patients tend to perform better on picture-word matching tests when the foils are unrelated to the target suggests that they retain some knowledge of the meaning of the word. Other tasks provide additional evidence along these lines. One paradigm used to demonstrate partial preservation of semantic information depends on activation mediated by relationships among words, or priming. The subject hears or sees a word (the prime) that bears a relationship to a second word (the target); the task entails a response to the target, such as lexical decision (deciding whether it is a word or not) or pronunciation (if the word is written). Presentation of a semantically related prime normally speeds the response to the target word, in comparison to a prime that bears no relationship to the target. Milberg and Blumstein and their colleagues have shown that Wernicke’s aphasics who perform poorly on word comprehension tasks demonstrate semantic priming effects on tasks such as lexical decision (e.g., Milberg & Blumstein, 1981; Milberg, Blumstein, & Dworetzky, 1988).

Phonological and Word Processing: Conclusions

It can be concluded that the left temporal lobe (superior temporal gyrus in particular) has special responsibility for the perception of speech and for contact with stored lexical information (phonemic and semantic). Evidence cited earlier suggests that semantic information is distributed over extensive areas of the brain; however, it seems likely that associations between the phonological specifications for words and their meanings are supported by structures in the left temporal lobe. It is interesting that damage to this region generally does not produce total loss of comprehension ability. What is compromised is the specificity of the comprehension process: Patients are prone to semantic error, and may show less selectivity to phonological information that is off target. These are properties that might be predicted of a degraded lexical network. What cannot be ascertained with any certainty at present is the extent to which these response characteristics reflect the behavior of residual left hemisphere functions, dependence on right hemisphere mechanisms, or both. As noted earlier, there is evidence that recovery from aphasia sometimes depends on right hemisphere structures, as subsequent damage to the right hemisphere returns the patient to prerecovery levels of language performance (e.g., Basso et al., 1989). The recent use of functional imaging has uncovered cases in which the right temporal lobe shows greater activation in subjects with left temporal lesions, compared to normal subjects (e.g., Cardebat et al., 1994; Weiller et al., 1995).

Sentence Comprehension

The lexical representation of a word is presumed also to specify the grammatical information needed to compute sentence-level syntactic structure (e.g., whether the word is a noun or verb, and if a verb, whether it is transitive or intransitive). This information is used to parse the word string into constituent phrases (noun phrase, verb phrase) that are then related to one another in a way that specifies structural information, such as which nouns go with which verbs; what is the subject of the main verb, the direct object of the embedded verb; and so on.

These early operations—sometimes designated the first-pass parse (Frazier, 1990)—are in the service of recovering the underlying message. Subsequent operations (the second-pass parse) interpret the nouns in relation to the verb, in order to ascertain who did what to whom. These operations are highly complex, in that they entail integration of information recovered from the lexical entry of the verb (e.g., what arguments it assigns) with the structural relations given in the first-pass parse. The importance of the structural information is readily conveyed by the difference between these sentences: John gave Mary the broccoli and Mary gave John the broccoli. Both sentences contain the same words, but the structural positions occupied by the nouns—and hence the meaning of the two sentences—are different. In the first, John is the subject and hence the agent of the exchange action; in the second, John is the direct object and hence the recipient. It must be appreciated that the mapping between syntactic arguments and thematic roles is different for different verbs; in the case of receive, for example, the sentential subject is the recipient, not the agent; in the case of pass, the sentential subject can also be the theme (what passes; as in The broccoli passed from John to Mary).
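The verb-specific character of this mapping can be made concrete with a toy lexicon. The entries, position labels, and role names below are illustrative assumptions chosen to match the examples in the text; they are not a claim about how the lexicon is actually represented.

```python
# Hypothetical mini-lexicon: each verb pairs syntactic argument
# positions with the thematic roles it assigns to them.
VERB_LEXICON = {
    # give: subject is the agent; indirect object is the recipient.
    "gave": {"subject": "agent",
             "indirect_object": "recipient",
             "direct_object": "theme"},
    # receive: the subject is the recipient, not the agent.
    "received": {"subject": "recipient",
                 "direct_object": "theme",
                 "from_object": "source"},
}

def thematic_roles(verb, arguments):
    """Map each parsed syntactic argument (the output of the
    first-pass parse) onto the thematic role the verb assigns."""
    mapping = VERB_LEXICON[verb]
    return {mapping[position]: filler
            for position, filler in arguments.items()}

# "John gave Mary the broccoli"
print(thematic_roles("gave", {"subject": "John",
                              "indirect_object": "Mary",
                              "direct_object": "the broccoli"}))
# {'agent': 'John', 'recipient': 'Mary', 'theme': 'the broccoli'}

# "Mary received the broccoli from John": the same subject
# position now carries a different role.
print(thematic_roles("received", {"subject": "Mary",
                                  "direct_object": "the broccoli",
                                  "from_object": "John"}))
# {'recipient': 'Mary', 'theme': 'the broccoli', 'source': 'John'}
```

The contrast between the two calls shows why thematic role assignment cannot be read directly off structural position: the same parse configuration yields different role assignments depending on the verb's lexical entry.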

The ease of recovering thematic role information depends, in part, on constituent structure, and it is made more difficult when there is a delay between the occurrence of a word and the information that specifies its thematic role. For example, in the object relative sentence The man that Tom’s sister Mary gave the broccoli to was named John, many words intervene between when the man appears and when it can be assigned the role of recipient.

There are also instances in which the structure of the sentence is temporarily ambiguous. Consider a sentence containing a reduced relative clause that is not marked by a relative pronoun, for example, The defendant examined by the lawyer turned out to be guilty, where defendant might initially be taken as the subject of the verb. This ambiguity can only be resolved by information that comes later in the sentence. An issue much debated in the sentence comprehension literature concerns the degree of independence of syntactic and semantic processing—specifically, whether early syntactic processing is influenced by the meaning of the verb. Some linguists and psycholinguists favor autonomous syntactic processing, at least in the early recovery of constituent structure (e.g., Frazier, 1990), although a good deal of the recent evidence supports interaction (e.g., Trueswell, Tanenhaus, & Garnsey, 1994). The case for interaction is strengthened by recent ERP studies that focus on a component of the ERP waveform (the N400) that is sensitive to semantic processing during sentence (and discourse) comprehension. The fact that this component is present within 200 ms of the first word and increases in amplitude with successive words is taken as evidence that semantic processing operates early and incrementally across a sentence (Brown, Hagoort, & Kutas, 2001).

Sentence Comprehension Disorders

Not surprisingly, patients who are impaired at the single-word level (e.g., Wernicke’s aphasics) are also impaired in comprehending sentences. Of greater interest is the performance of patients who do relatively well on single-word comprehension. The group whose sentence processing performance has attracted most interest is that of agrammatic Broca’s aphasics. These are patients whose primary deficit is in producing sentences; their output is characterized by simple and fragmented phrase structure and the omission and substitution of closed-class elements. By closed-class elements we mean free-standing function words (e.g., to, the, is) and bound affixes (e.g., -ing, -ed). In contrast to nouns, verbs, and adjectives, this segment of the vocabulary does not expand over the lifetime, hence the designation closed class. The nature of agrammatic speech is discussed in detail shortly; for present purposes, the important point is that agrammatic Broca’s aphasics typically demonstrate good comprehension of single words, particularly concrete nouns. They also do well with sentence comprehension, but only when the sentences and picture choices are semantically constrained. To understand what is meant by this, compare the following example of a nonreversible sentence, example number 3, with the semantically reversible sentence, example number 4. Whereas the lexical content constrains the meaning of number 3, in that apples cannot eat and boys are unlikely to be red, it does not constrain the meaning of number 4. An individual who was not sensitive to—or failed to utilize—the syntactic structure of number 4 would have difficulty determining which person was kissing the other, and which of the two happened to be tall.

  3. The apple that the boy ate was red.
  4. The boy that the girl kissed was tall.

In 1976, Caramazza and Zurif demonstrated that agrammatic Broca’s aphasics had difficulty understanding semantically reversible sentences such as number 4, although they performed quite well on semantically constrained sentences such as number 3. This finding has been replicated many times since. The comprehension pattern defined by good performance on semantically constrained sentences but poor performance on semantically reversible sentences has come to be known as receptive agrammatism.

Receptive agrammatism is most apparent with sentences like number 4, which, in addition to being reversible, also violates standard word order. In English, the agent generally precedes the verb and the recipient or patient follows it. In object relatives such as sentence 4 and passive sentences such as The boy was kissed by the girl, the recipient of the action (the boy) does not occupy the postverbal position, as it does in the canonical active (The girl kissed the boy). Object relatives and passives pose serious problems for receptive agrammatics.

The combination of agrammatic production and the apparent failure to use syntactic information in sentence comprehension gave rise to the notion of a central syntactic impairment in Broca’s aphasia that affected receptive and expressive language processing in similar ways (e.g., Berndt & Caramazza, 1980). It was suggested, for example, that receptive agrammatism reflected an insensitivity to closed-class elements that paralleled the patients’ difficulty in retrieving these elements in sentence production (e.g., Bradley, Garrett, & Zurif, 1980; for a more recent version of this hypothesis, see Pulvermuller, 1995). One implication of this view was that some or all aspects of syntactic knowledge were represented in the area of the left frontal lobe that is damaged in Broca’s aphasia. However, this hypothesis was soon challenged by other findings.

First, there were reports of patients who exhibited expressive agrammatism without receptive agrammatism (e.g., Miceli, Mazzucchi, Menn, & Goodglass, 1983; Nespoulous et al., 1988). Second, Linebarger, Schwartz, and Saffran (1983; see also Linebarger, 1990, 1995) found that some patients with expressive and receptive agrammatism showed preserved sensitivity to a wide range of grammatical violations, including some that involved noncanonical sentence structures (e.g., *John was finally kissed Louise) and the types of closed-class elements that were absent from their speech (e.g., the passive morphology in the example just given). Testing in other laboratories confirmed these results (e.g., Shankweiler, Crain, Gorrell, & Tuller, 1989; Wulfeck, 1988) and ruled out the possibility that success in the grammaticality judgment task might be achieved without benefit of a syntactic analysis, for example, by simply rejecting unusual prosodic patterns created by the omission or addition of sentence constituents. (For evidence against this interpretation, see Berndt, Salasoo, Mitchum, & Blumstein, 1988.)

Word monitoring is another paradigm that has been used to investigate syntactic processing in aphasics. In this task, the subject hears a word that subsequently recurs in the context of a sentence. The instructions are to press a button when the word reappears. This method is highly sensitive to syntactic and semantic constraints; response latencies are shorter for words that appear in semantically anomalous but syntactically well-formed sentences (example 6) as compared to scrambled word strings (example 7), and they are shorter still for sentences that are semantically coherent (example 5; Marslen-Wilson & Tyler, 1980). Word monitoring is also sensitive to syntactic violations. For example, subjects take longer to recognize dog in example number 8 than in number 9.

  5. Normal: The bishop placed the crown on the king’s head.
  6. Anomalous: The shelf kept the crown on the apartment’s church.
  7. Scrambled: Shelf on the church crown the apartment’s the kept.

(probe is crown)

The monitoring task has the virtue of simplicity: All the subject has to do is detect the word target; conscious deliberation as to grammaticality or plausibility is not required. Some agrammatic patients tested on this task have shown normal patterns of sensitivity to grammatical violations (Tyler, 1992; Tyler, Ostrin, Cooke, & Moss, 1995).

  8. *He continued to struggle the dog but he couldn’t break free.
  9. He continued to struggle with the dog but he couldn’t break free.

Note the paradox: Agrammatics demonstrate sensitivity to structural constraints in monitoring and grammaticality judgment tasks, yet they fail to use structural information to guide sentence interpretation. To explain this paradox, we and our colleagues have suggested that the patients are impaired in utilizing the products of the first-pass parse to form accurate, verb-specific linkages between syntactic arguments and thematic roles. We termed this the mapping hypothesis (e.g., Linebarger, 1995; M. F. Schwartz, Linebarger, & Saffran, 1985; M. F. Schwartz, Linebarger, Saffran, & Pate, 1987). A different but related formulation was proposed by Grodzinsky (1990, 2000). According to Chomsky’s (1981) government and binding theory, some sentences are derived by movement of constituents from canonical positions to other positions in the sentence. This movement leaves behind a trace (t), which co-indexes the empty position with the element that was moved. In example 10, the trace (t1) co-indexes the boy with the empty direct object, which establishes its thematic role as the recipient of the kissing action:

  10. The boy1 was kissed t1 by the girl.

Grodzinsky claimed that receptive agrammatism stems from inability to represent or utilize traces. As evidence he cited the fact that patients perform well on sentences that lack traces (e.g., active voice sentences in English) and poorly on those that contain them (e.g., passives and object relatives). While this generalization holds for many patients, there are also many exceptions; Berndt, Mitchum, and Haendiges (1996) found that across a number of studies, approximately one third of the patients performed poorly on actives, and another third performed equally poorly on actives and passives. Moreover, it has been shown that patients retain their sensitivity to traces in grammaticality judgment tasks, where they detect violations that illegally fill a gap that marks the presence of a trace (Linebarger, 1995). Nevertheless, the debate goes on; for the most recent statement of the trace-deletion hypothesis, see Grodzinsky (2000); for opposing arguments, see the discussion that accompanies that article.

The mapping-deficit hypothesis gave rise to efforts to rehabilitate receptive agrammatism by focusing on the relations between structural positions and thematic roles. These efforts have yielded mixed results (see reviews in Fink, 2001; Marshall, 1995) and a renewed appreciation for all that is required for mapping to be accomplished successfully. For one thing, the patient must retain access to the relevant information about verbs—their argument structure and their mapping requirements. This is clearly a problem for some patients (Breedin & Martin, 1996). Moreover, because thematic role assignment is an integrative process, with multiple forms of information contributing, it is demanding of computational resources. Conceivably, then, the deficit in sentence comprehension results from reduced resource capacity.

The resource account has been forcefully argued based on the finding that neurologically intact individuals show performance decrements similar to those of aphasics (albeit less severe) under experimental conditions that restrict resources (e.g., rapid serial visual presentation [RSVP] of sentence materials; Miyake, Carpenter, & Just, 1994). Moreover, the affected resource has been equated with “working memory for comprehension” and invoked also to explain why certain neurologically intact individuals (including healthy older adults) have trouble understanding syntactically complex sentences, such as object relatives (Just & Carpenter, 1992). On the other hand, it has been shown that Alzheimer’s patients with marked reduction in this working memory capacity do not show the aphasic comprehension pattern. Comprehension in these patients is systematically related to the number of propositions expressed in the sentence but not to the syntactic complexity, whereas in aphasics it is affected by both factors. Caplan and Waters (1999) argue that the contrasting comprehension patterns in these two populations result from different deficits: The aphasics are deficient in a resource dedicated to the computation of syntactic structure, whereas the problem in the Alzheimer’s patients reflects a general working memory limitation. Whether this account will stand up to further testing remains to be seen.

Neuroanatomy of Sentence Comprehension

Aphasia research demonstrates that both the posterior and anterior language areas are involved in language comprehension. To this point, functional imaging studies have corroborated the involvement of these areas, but have done little to elucidate their functions further.

Auditory stimuli activate superior temporal cortex in both cerebral hemispheres (Habib & Demonet, 1996; Demonet et al., 1992). The region activated by words as well as nonwords generally includes BA 22, 41, and 42 bilaterally (e.g., Binder et al., 1997). The left superior temporal lobe is activated by spoken stimuli in a language (Tamil) unfamiliar to the subjects (who were French), indicating that this area may be involved in prelexical processing of speech input (Mazoyer et al., 1993; but see Naatanen et al., 1997, for evidence of a language-specific response in the left temporal lobe).

Studies using electric current to disrupt operations carried out by the stimulated brain area have confirmed the importance of left temporal lobe structures in speech perception and comprehension. Boatman and colleagues (1995) examined such effects in three patients with indwelling subdural electrode arrays, implanted prior to surgery for intractable epilepsy. Three types of tasks were administered, with and without electrical stimulation: phoneme discrimination (e.g., pata); phoneme identification (matching a consonant-vowel syllable to an array of four written choices); and word and sentence comprehension. Stimulation sites in the left superior temporal gyrus elicited several different patterns: comprehension impaired, but discrimination and identification spared; comprehension and identification impaired, but discrimination spared; discrimination, identification, and comprehension all impaired. In each patient, the sites where stimulation disrupted all three functions were located more anteriorly than the sites where comprehension alone was impaired.

Sentence comprehension has been examined using functional imaging techniques. Several PET studies have shown greater regional cerebral blood flow (rCBF) in Broca’s area as a function of increased syntactic complexity (Caplan, Alpert, & Waters, 1998; Stromswold, Caplan, Alpert, & Rauch, 1996). However, in an fMRI study by Just, Carpenter, Keller, Eddy, and Thulborn (1996), activity increased in both Broca’s and Wernicke’s areas as syntactic complexity increased, and the same was true (albeit at a lower level) for the right hemisphere homologues of these two areas. A more recent fMRI study describes right perisylvian activation during a grammaticality judgment task that required repair of the anomalous element, but not otherwise (Meyer, Friederici, & von Cramon, 2000). This unexpected evidence for a right hemisphere contribution to sentence processing is sure to be followed up and clarified in the next generation of functional imaging studies.

Another recent fMRI study, this one involving sentence reading, found activation in Broca’s and Wernicke’s areas, along with a region in the anterior temporal lobe (Bavelier et al., 1997) that was also active in an earlier PET study involving sentence materials (Mazoyer et al., 1993). It has been suggested that this anterior temporal area plays a role in morphosyntactic processing (Dronkers, Wilkins, Van Valin, Redfern, & Jaeger, 1994).

Sentence Comprehension: Conclusions

Despite extensive research, the contributions to sentence comprehension of Wernicke’s and Broca’s areas remain obscure. The patient studies just reviewed indicate that agrammatic Broca’s aphasics, who tend to have large frontoparietal lesions, still manage to perform well on syntactic processing tasks that are not resource demanding (i.e., grammaticality judgment tasks and comprehension of sentences with canonical word order). It may be that these more automatic syntactic tasks are supported by circuitry in and around Wernicke’s area, in which case the various resource accounts of comprehension deficit in Broca’s patients become increasingly plausible. Certainly, the proximity of Broca’s area to the dorsolateral prefrontal structures known to play a role in executive working memory (BA 46, 9) lends credence to the idea that the role of Broca’s area in syntactic processing is related to temporary information storage or manipulation (Caplan et al., 1998; Miyake et al., 1994). In the domain of visual processing, there is evidence that prefrontal cortex operates in tandem with more posterior brain regions to sustain activation across a delay (e.g., Goldman-Rakic, 1995; Smith et al., 1995). Our suspicion is that Broca’s area plays a similar role with respect to the language-processing regions in the temporal lobes.

The Production of Spoken Language

Lexical Retrieval

To a first approximation, one can conceptualize the production of a sentence as comprehension in reverse. The speaker’s task is to formulate a message that specifies the thematic content of the sentence, select the words and the syntactic form suitable to express this content, order the words in a manner dictated by the syntax, and encode this in a phonetic form for articulation.

It occasionally happens in normal speech that the selection of words from the mental lexicon (i.e., lexical retrieval) goes awry, such that the wrong word is uttered (examples 11, 12), or the right word at the wrong time (example 13), or with the wrong pronunciation (14). At other times, lexical retrieval comes up short, leading to the effortful search known as the tip-of-the-tongue (TOT) state.

  11. It’s a far cry from the twenty-five dollar (cents: semantic error)
  12. You look all set for an exhibition. (expedition: formal error)
  13. I left the briefcase in my cigar. (word exchange error)
  14. jepartment (department: sound error)

Close scrutiny of TOTs and speech errors has given rise to an influential theory of production that incorporates two stages of lexical retrieval, each associated with a controlling structure. The controlling structures are often conceptualized as frames with slots that receive the lexical representations (Bock & Levelt, 1994; Dell, 1986; Garrett, 1975; MacKay, 1972; Shattuck-Hufnagel, 1979). The first stage of lexical retrieval ends with selection of an abstract (prephonological) word form that is specified semantically and syntactically (Kempen & Huijbers, 1983). This is known as the lemma. Lemma retrieval is controlled by a syntactic frame; selected lemmas are inserted into slots marked for subject noun, main verb, and so on. The second stage of lexical retrieval adds phonological form information. This stage is controlled by frames that specify the phonological structure of a word or phrase; selected segments are inserted into slots marked for syllable onset consonant, medial vowel, and the like.

The two-stage theory explains TOTs as instances in which the lemma is retrieved but phonological retrieval fails. The speaker knows what she wants to say and can supply a definition or synonym. If the language is one that marks nouns for grammatical gender, the speaker in TOT may retain access to this syntactic feature despite being unable to report anything about how the word sounds (Badecker, Miozzo, & Zanuttini, 1995; Vigliocco, Antonini, & Garrett, 1997). However, retrieval of phonology is not always completely blocked. In one third to one half of experimentally elicited TOTs, speakers demonstrate partial access, in that they accurately report the first sound or letter of the sought-after word, its length or stress pattern, or words that sound similar to the target (see Brown, 1991, for review). This indicates that a word’s phonology is represented in piecemeal fashion and that second-stage retrieval involves multiple selection acts and a process of assembly. This is consistent with phonological speech errors, wherein individual phonological units, typically phonemes, undergo substitution (see example 14), addition (example 15), or deletion (16), or movement to a new location in the word or phrase (17–19).

  15. winnding (winning: sound addition)
  16. tremenly (tremendously: sound deletion)
  17. lork yibrary (York library: sound exchange)
  18. leading list (reading list: sound anticipation)
  19. beef needle (beef noodle: sound perseveration)

A question much debated in the speech production literature is whether the two stages of lexical retrieval are informationally encapsulated (i.e., modular). They are rightly considered modular if semantic-syntactic information does not influence phonological retrieval and phonological information does not influence lemma retrieval. An influential model of this type is described in Levelt, Roelofs, and Meyer (1999). The alternative to the modular model is one that postulates interactive activation (Dell, 1986; Dell & Reich, 1981; Harley, 1984; Houghton, 1990; Stemberger, 1985). The hallmark of interactive activation models is their nonmodularity; because activation spreads continuously and bidirectionally, early stages of processing are influenced by information from later stages, and vice versa (McClelland & Rumelhart, 1981).

Do semantic and phonological information sources interact during word retrieval? There is evidence on both sides. Although most word substitution errors bear either a semantic (example 11) or phonological (12) relation to the target, the frequency of mixed (semantic plus phonological) word substitution errors (e.g., pelican for penguin) is significantly higher, for both normals and aphasics, than the modular model predicts it should be (Dell & Reich, 1981; Harley, 1984; N. Martin, Weisberg, & Saffran, 1989; N. Martin, Gagnon, Schwartz, Dell, & Saffran, 1996). Whereas the mixed error evidence favors the interactive model, however, experiments on the time course of semantic and phonological retrieval in lexical access show conclusively that the interaction, if it exists at all, must be limited. At the earliest points, processing appears to be exclusively semantic; however, just prior to articulation, the retrieval of phonological information completely dominates that of semantic information (Levelt et al., 1991; Schriefers, Meyer, & Levelt, 1990).
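The chance baseline at issue can be made concrete with a small arithmetic sketch. All numbers below are invented for illustration, not data from the cited studies: if the two retrieval stages were independent, mixed errors would occur only as coincidences, at a rate given by the product of the two component probabilities.

```python
# Illustrative numbers only -- not data from the cited studies.
p_semantic = 0.05        # rate of semantic word substitutions
p_form_by_chance = 0.02  # chance that such a substitution also shares phonology

# Under a modular (independence) account, mixed errors are coincidences:
expected_mixed = p_semantic * p_form_by_chance   # = 0.001

# An observed mixed-error rate well above this product (say, 0.004, i.e.,
# four times the chance expectation) favors interaction between the stages.
observed_mixed = 0.004
ratio = observed_mixed / expected_mixed          # = 4.0
```

The empirical argument in the studies cited above has exactly this shape: observed mixed-error rates reliably exceed the independence product.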

In response to these findings, Dell and colleagues have proposed an interactive activation model of retrieval in which interactivity is combined with a two-step selection process (Dell, Schwartz, Martin, Saffran, & Gagnon, 1997; see also Dell & O’Seaghdha, 1991). In effect, the model represents a compromise between strictly modular accounts and more fully interactive ones.

An Interactive Activation Model of Lexical Retrieval

Like other interactive activation models of lexical processing, the two-step interactive activation model (henceforth, 2-IA) has lexical knowledge represented in a layered network of units or nodes. Nodes are not repositories for stored information, but rather simple, neuron-like devices that collect, summate, and transmit activation. An important distinction among connectionist models is whether they use a local or distributed style of representation. In localist models, nodes stand in one-to-one correspondence with psychologically meaningful properties. Such is the case in the 2-IA model, where the top level in the network represents semantic features, the middle level represents known lemmas, and the bottom level represents phonemes (Figure 21.6).

In this model, a concept is represented as a collection of semantic features. When these features are turned on (e.g., by a to-be-named picture), activation spreads freely for some period of time. It spreads to lemmas, which send a portion of their activation down to phonemes and up to the semantic features. From the phonemes, activation spreads back up to lemmas. This reciprocal feedback from phonemes to lemmas, occurring prior to lemma selection, is what makes the model interactive; information about the target’s phonology (step 2 information) is entering into selection at the first (semantic-syntactic) stage of lexical retrieval. After some time (assumed to vary with speech rate), the most highly activated lemma is selected by the syntactic frame (a single noun frame, in the case of naming). The second step happens when, after another period of activation spread, the most activated onset, vowel, and coda phonemes are selected. This ends the trial. If the model selects cat at step 1 and /k/, /æ/ and /t/ at step 2, it has performed correctly. Otherwise it has made an error.

All connections in the 2-IA model are excitatory and bidirectional. At each step in time, a node’s activation level is determined by what it was on the prior time step, the rate at which it dissipates activation (the decay factor), and how much activation it is receiving from other nodes. The latter is determined by the strength of its connections, as well as random noise that is added to the model to simulate a variety of probabilistic influences. To simplify matters, all connections in the model are assigned the same weights, so that all transmit activation with the same strength. Similarly, all nodes lose activation at the same (decay) rate. The weight and decay parameters are preset; there is no learning in the model.

We have seen that the model’s feedback connections cause it to behave interactively. What makes the model quasi-modular is what happens at each of the selection steps. First, the selected node is linked to the controlling frame (syntactic at step 1, phonological at step 2) and its activation level is set to zero (postselection inhibition). Next, the selected node is given a strong activation jolt by the controlling frame, to get the next stage of encoding started. On account of these jolts, which operate in top-down fashion, semantic influences predominate early and phonological influences late in a trial, just as the experimental evidence requires (Dell & O’Seaghdha, 1991).


Let us now consider errors. Misselections at step 1 can give rise to semantic, formal, mixed, or unrelated errors. Semantic errors (dog for cat) are encouraged by the overlap in semantic features (see Figure 21.6). (The model does not specify the content of the features, but the overlap is intended to represent the fact that the concepts dog and cat, for example, share features such as “animate,” “animal,” “pet,” etc.) The features activated by a picture of a cat will also partially activate the semantic neighbors of cat, namely dog and rat. This sets up a competition among the semantically related lemmas and, because the system is noisy, the target will sometimes lose out, resulting in a semantic error.

Formal errors (mat for cat) are encouraged by bottom-up feedback to lemmas from the primed phonemes (/a/, /t/). The effect of this feedback is to activate the phonological neighbors of cat, thereby setting the stage for a formal error to be generated at step 1.

Neighbors of cat that are related to it semantically and phonologically (i.e., mixed competitors) benefit from both top-down and bottom-up activation. This confers an advantage on these mixed neighbors over those that are purely semantic or purely phonological. As we noted earlier, it has consistently been found that semantic and formal influences are not independent (i.e., mixed errors are more likely than semantic errors that happen to be phonologically related or phonological errors that happen to be semantically related; e.g., del Viso, Igoa, & García-Albea, 1991; Dell & Reich, 1981; N. Martin et al., 1989). Bottom-up feedback is the model’s way of explaining this mixed error effect.

Unrelated competitors benefit from neither top-down nor bottom-up activation. Still, in a noisy system, they will sometimes be selected, thereby creating an unrelated error.

Misselections at step 2 give rise to sound errors. Most of these are neologisms (e.g., dat for cat), but word errors may also arise at this step, when the substituted phoneme happens by chance to create another word (rat; mat).

This simple model can handle a number of facts about speech errors. We have already discussed how it explains the mixed error effect. Now consider formal errors: The speech error literature shows that such errors nearly always respect the syntactic category of the target (Fay & Cutler, 1977). In the model, this results from having formal errors arise at lemma selection, which is controlled by the syntactic frame. Moreover, the model’s interactivity explains why having many phonological neighbors has a protective effect against TOT states, formal errors, and phonological errors (Gordon, 2001; Harley & Bown, 1998; Vitevitch & Sommers, 2001; Vitevitch, 1997, 2001): Bottom-up feedback from the activated neighbors helps the target accumulate activation faster and more effectively than might otherwise be the case.

Finally, it has been shown that when the model is implemented on a computer, it is possible to set the parameters (i.e., connection weight, decay, noise, etc.) so that the model’s output closely matches the naming patterns of normal speakers performing the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996). By naming pattern we mean the proportion of correct responses and various types of error that are produced. Simulated lesions, created by altering only one or two of the model’s parameters away from the normal setting, produce a diverse array of naming patterns closely matching the data of individual aphasic subjects. Moreover, the type of lesion that the model assigned to the patients turns out to be predictive of a number of other aspects of their behavior (Dell et al., 1997; Foygel & Dell, 2000).
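The logic of these lesioning simulations can be conveyed with a deliberately stripped-down, hypothetical sketch. This is not the implemented 2-IA model: the lexicon, input strengths, and parameter values are all invented, and the full spreading dynamics are collapsed into one noisy choice per level. The point is only that changing a single global parameter reshapes the whole distribution of response types.

```python
import random

# Invented three-word lexicon; each word is an (onset, vowel, coda) triple.
LEXICON = {"cat": ("k", "ae", "t"),
           "dog": ("d", "aw", "g"),
           "mat": ("m", "ae", "t")}
SLOTS = tuple({LEXICON[w][i] for w in LEXICON} for i in range(3))

def trial(weight, noise, rng):
    """One trial of naming 'cat': noisy lemma choice, then phoneme choices."""
    # Step 1: the target gets full semantic input, its semantic neighbor
    # ("dog") partial input, and the unrelated word ("mat") none.
    lemma_act = {"cat": 1.0, "dog": 0.6, "mat": 0.0}
    for w in lemma_act:
        lemma_act[w] += rng.gauss(0, noise)
    lemma = max(lemma_act, key=lemma_act.get)
    # Step 2: the selected lemma primes its phonemes with strength `weight`;
    # all other phonemes carry only noise.
    phon_act = {p: rng.gauss(0, noise) for slot in SLOTS for p in slot}
    for p in LEXICON[lemma]:
        phon_act[p] += weight
    return lemma, tuple(max(slot, key=lambda p: phon_act[p]) for slot in SLOTS)

def naming_pattern(weight, noise=0.05, n=2000, seed=1):
    """Tally correct, semantic, and other (mostly nonword) responses."""
    rng = random.Random(seed)
    counts = {"correct": 0, "semantic": 0, "other": 0}
    for _ in range(n):
        lemma, sounds = trial(weight, noise, rng)
        if (lemma, sounds) == ("cat", LEXICON["cat"]):
            counts["correct"] += 1
        elif (lemma, sounds) == ("dog", LEXICON["dog"]):
            counts["semantic"] += 1
        else:
            counts["other"] += 1
    return counts
```

With these invented settings, an "intact" network (weight near 1.0) names almost perfectly, whereas lowering the single lemma-to-phoneme weight leaves lemma selection accurate but lets noise intrude at phoneme selection, shifting the error distribution toward nonwords. Fitting such one- or two-parameter "lesions" to individual patients is what the simulations described above exploit.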

We believe that the many accomplishments of the model constitute strong evidence for the correctness of 2-IA architecture for lexical retrieval. Not all agree with this, however. For lively debate and discussion, the reader should consult Ruml and Caramazza (2000) and the rejoinder in Dell and colleagues (2000), as well as Rapp and Goldrick (2000).

Disorders of Lexical Retrieval

Lexical retrieval failures are ubiquitous in aphasia. On picture naming tests, the standard for measuring lexical retrieval, aphasics routinely score below normal. Differences across aphasia subtypes, which are marked in conversational speech, are reduced in picture naming, with semantic and phonological errors occurring in all groups (Howard, Patterson, Franklin, Orchard-Lisle, & Morton, 1984). Severity effects are well documented: Patients who score lower on naming tests tend to produce errors that are more remote, semantically and phonologically, from the targets (Dell et al., 1997; Schuell & Jenkins, 1961; Schwartz & Brecher, 2000). On the other hand, individual patients of comparable severity may exhibit distinctive error patterns, featuring predominant or exclusive occurrence of one type of error or another. In-depth study of such patients has yielded important insights into the nature of lexical retrieval and the adequacy of current models. What follows is a brief and selective review.

Semantic Errors

Semantic impairment such as is evident on lexical comprehension tasks also predisposes to semantic errors in naming and other production tasks (Gainotti, Silveri, Villa, & Miceli, 1984). Patient KE, studied by Hillis, Rapp, Romani, and Caramazza (1990), provides a particularly clear example. Following a left hemisphere stroke, KE evinced significant difficulties in comprehension and production of single words. On a comprehension test involving auditory word-picture match, he chose the semantic foil on 40% of trials. He produced semantic errors at approximately the same rate in picture naming (38%) and on a variety of other lexical tasks (e.g., 32% semantic errors in written naming; 35% in oral reading). This pattern, and the fact that the same items elicited semantic errors in naming and comprehension tasks, indicates a common source for the semantic errors, presumably in the semantic system (see also Allport & Funnell, 1981; Howard & Orchard-Lisle, 1984).

Caramazza and colleagues have reported on other patients who produced high rates of semantic errors in naming, but who had intact lexical comprehension (Caramazza & Hillis, 1990; Rapp, Benzing, & Caramazza, 1997). Like KE, these patients made no formal or phonological errors in naming. The 2-IA model has difficulty explaining this pure semantic error pattern in patients with intact semantics, at least with the simulated lesions that have thus far been entertained. For example, lesioning the model in a way that restricts the spread of activation from semantics to lemmas promotes formal as well as semantic errors (on account of the feedback from phonemes), and lesioning it in a way that limits the spread of activation from lemmas to phonemes promotes phonological (nonword) errors. (For analysis and discussion, see Foygel & Dell, 2000; Rapp & Goldrick, 2000). The alternative model invoked by Caramazza and Hillis assumes that semantic representations directly activate phonological word forms (lexemes), and that they do so to a degree proportional to their shared semantics. For example, the semantic representation of chair activates the phonological forms for table, sofa, couch, and the like, in addition to that for chair. Normally, chair will be the most activated, and consequently will be produced. However, the argument goes, brain damage may render particular lexemes resistant to retrieval; and when that happens, another of the activated semantic cohort is likely to be substituted. In this way, the model accounts for the pure semantic naming pattern in patients with intact comprehension. However, the model has no explanation for formal errors, our next topic.

Formal Errors

Throughout much of the history of aphasia studies, formal errors were looked upon as another form of phonological distortion, in the same category as nonword errors. As these errors began to take on importance in psycholinguistic production theories (Fay & Cutler, 1977), they attracted more interest from aphasiologists as well. The close scrutiny paid off with evidence that in certain patients the frequency of formal errors is greater than chance (i.e., greater than the frequency of word errors that happen to be phonologically related to the target or of phonological errors that happen by chance to be words). Moreover, compared to a corpus of words generated from random phoneme sequences, the formal errors gathered from patients are more likely to be nouns and to have a higher frequency of occurrence (Gagnon, Schwartz, Martin, Dell, & Saffran, 1997). This is clear indication that formal errors, at least in some patients, arise at the lexical stage and not from postlexical phonological substitution.

There have been several case studies published of patients who produce formal errors in naming at rates high enough to rule out chance occurrence (Best, 1996; Blanken, 1990, 1998; N. Martin & Saffran, 1992). NC, the patient reported by N. Martin and Saffran, was subsequently the subject of a modeling study in which his naming and repetition patterns were simulated with the computer-implemented 2-IA model (N. Martin, Dell, Saffran, & Schwartz, 1994; N. Martin, Saffran, & Dell, 1996). This study showed that when the model was lesioned by increasing the rate of activation decay throughout the network, the pattern of errors closely matched the high-formals pattern that NC produced early in his clinical course. Over time, NC’s performance improved and the naming pattern shifted in the direction of the normal pattern (more semantic than formal errors). This recovery pattern also was simulated, by shifting the value of the decay parameter closer to the normal setting. The reason a 2-IA decay impairment promotes formal errors is that it allows operations occurring late in the retrieval interval (i.e., activation of phoneme nodes and phoneme-to-word feedback) to exert a more substantial influence on the character of errors than operations occurring earlier. For naming, this entails that formals are favored over semantic errors. This contrasts with a connection strength lesion, which limits the extent to which higher nodes prime lower nodes and additionally limits the feedback from lower to higher nodes. With network-wide lesions of connection strength, the rate of formal errors is lower, relative to semantic errors and nonwords.

Acquired Tip-of-the-Tongue Phenomena

Persons with aphasia frequently profess to knowing the word that names a given picture or that meets a definition, while at the same time being unable to say it. What is it that the patient actually knows in this state? One possibility is that he or she has in mind the preverbal concept that is appropriate to the picture or definition. Another possibility is that the patient has a specific word in mind and thus is in a state akin to TOT. The fact that first-phoneme cueing has a facilitative effect on word retrieval for many aphasic patients strongly supports the latter account.

An early TOT elicitation study, performed with patients from all the major aphasic categories, found that Broca’s, conduction, and Wernicke’s aphasics reliably succeeded in identifying the first letter of words they were unable to access in naming. The conduction group exhibited such partial knowledge more often than the others, whereas the anomic group did not exceed the chance rate for first-letter identification (Goodglass, Kaplan, Weintraub, & Ackerman, 1976). Case studies have shown that individual anomic patients vary in the type and amount of information that remains accessible to them. A patient reported by Badecker and colleagues (1995), who was a native Italian speaker, was unable to report anything at all about the phonological forms of words he failed to access in naming and sentence completion tasks. On the other hand, this patient was invariably able to report the grammatical gender of the words that eluded him, a clear indication that he had accessed the corresponding lemma. A French-speaking patient, in addition to being able to report grammatical gender, often provided spelling information (e.g., first letter) and, amazingly, the alternative meaning for a name that happened to be a homophone (Henaff Gonon, Bruckert, & Michel, 1989). An example from English would be if a patient failed to name a picture of a pad (tablet) but reported that the elusive word was slang for apartment. Anomic patient GM, studied in Lambon Ralph, Sage, and Roberts (2000), correctly reported the number of syllables of unavailable words and whether they were compounds, but he was unable to provide first letter or sound information.

It is apparent, then, that anomic and other patients are frequently in a condition in which they access less than the complete phonological specification of the target word. This is more likely to happen with low-frequency words than with high-frequency words. Frequency is known to operate at the level of phonological retrieval (Jescheniak & Levelt, 1994), and it is consistent with this that patients access more phonological information in connection with high-frequency targets than with low-frequency ones (Kay & Ellis, 1987).

For some patients, successful retrieval of target phonology is also subject to semantic influences. For example, GM, the anomic patient studied by Lambon Ralph and colleagues (2000), produced more omissions in naming when the picture was preceded by a semantically related word (semantic priming) and when naming trials were blocked by semantic category. Other patients are susceptible to miscuing, such that phonological retrieval is suppressed when a name is cued by the first sound of a semantic relative (e.g., picture of a tiger cued with the sound /l/; Howard & Orchard-Lisle, 1984). Miscuing induces some patients to make semantic substitutions (lion), which, if their semantics are intact, they promptly reject as incorrect (Lambon Ralph et al., 2000).

These acquired TOT phenomena strongly support models of lexical access that distinguish lemma retrieval from phonological retrieval and that allow phonological activation to begin even before lemma selection has been finalized (the cascading activation assumption; see McClelland, 1979). Without lemmas, it is hard to explain how access to grammatical features can be preserved in the absence of phonology. Without cascading activation, miscuing would be a mystery: Why would hearing /l/ induce the patient to say lion unless the phonology of lion had already been primed by the picture of the tiger? On a more basic level, these TOT phenomena demonstrate convincingly that one can activate partial information about a word’s pronunciation. This point is critical to the understanding of neologisms, our next topic.

Target-Related Neologisms

In an influential paper, Ellis and colleagues (1983; see also Miller & Ellis, 1987) showed that the neologisms produced by a Wernicke’s aphasic patient (RD) had much in common with TOT phenomena. Relative to the target words, RD’s neologisms contained the correct phonemes and the correct number of syllables more often than would be expected by chance. Also, the probability of his being able to produce a word correctly was strongly dependent on frequency of usage; neologisms occurred much more often to low-frequency words. These seminal findings have been replicated in larger studies as well (e.g., Gagnon et al., 1997; Schwartz, Wilshire, Gagnon, & Polansky, 2002).

Our group has analyzed hundreds of neologisms generated by fluent aphasic patients tested on the PNT. These errors turn out to be graded with respect to target overlap: In some cases, error and target share many phonemes; in other cases, they share few. In general, there is a strong effect of severity, such that patients with low correctness scores in naming are likely to make more neologisms—and more remote neologisms—than are those with high correctness scores (Schwartz & Brecher, 2000; Schwartz et al., 2002). In addition, certain patients produce a disproportionate number of such errors. The Wernicke’s aphasic patient, RD, is one such case. Most others described in the literature are individuals with conduction aphasia (e.g., Caplan, Vanier, & Baker, 1986; Pate, Saffran, & Martin, 1987).

According to the 2-IA model, neologisms arise when—on the second retrieval step—one or more incorrect phonemes are selected. Such errors are encouraged by weak connections between lemmas and phonemes; very weak connections create a high rate of neologisms, including many that are remote from the target. (In runs of the model, extreme weakness causes phonemes to be selected at random.) Such a locus for neologisms is consistent with the finding that the neologism rate is higher for words that are low in frequency (see previous discussion) and that occupy sparse phonological neighborhoods (Gordon, 2002; Vitevitch, 2002).

Weak phonological connections constitute the model’s explanation for TOT states as well as for neologisms. Why do some patients with hypothesized weak connections mostly omit responses and engage in TOT search, whereas others produce errors containing mixtures of correct and incorrect phonemes? A popular explanation centers on monitoring. The notion is that the omitters are capable of monitoring their internal speech for the quality of phonological access and therefore can suppress inaccuracies prior to or during articulation. The neologism producers, in contrast, do not routinely monitor their internal speech or suppress incipient errors. The spontaneous speech of Wernicke’s aphasia often evolves from neologistic to anomic—that is, to speech that contains mostly generic words (e.g., thing, place) and word-finding gaps (Kertesz & Benson, 1970). Presumably, the underlying retrieval problem persists while the ability to monitor recovers. A recent study of naming recovery described a patient who showed a decline in neologisms over time, in conjunction with a rise in omissions. His rate of self-corrections also increased over time but only for neologisms; there was no change in his ability to self-correct semantic errors (Schwartz & Brecher, 2000). It appears, then, that a complete account of lexical retrieval breakdown will have to include metalinguistic abilities such as self-monitoring and error detection. An important finding in this regard is that monitoring abilities in patients are not predicted by their performance on comprehension tests (Nickels & Howard, 1995), as some accounts would have predicted (e.g., Levelt, 1983).

The evidence reviewed so far supports the view that neologisms arise from faulty access to lexical phonology, when a wholly or partially deficient representation is filled in with substituted material. At one time, the substituted material was attributed to a neologism-producing device (Butterworth, 1979), but a more satisfactory account is that the substituted segments are constituents of the semantic and phonological neighbors activated in the course of lemma selection. This explanation is supported by a number of lines of evidence, including the miscuing phenomenon and the fact that the speed and accuracy of phonological access are influenced by the density of the target’s phonological neighborhood (Vitevitch, 2001). Additional evidence on this point is provided in O’Seaghdha and Marin (1997) and Peterson and Savoy (1998).

This evidence notwithstanding, there is reason to believe that neologisms sometimes arise subsequent to lexical retrieval, at a point at which the retrieved phonemes are inserted into the structural frames that control selection and that specify consonant-vowel structure, syllable boundaries, and stress pattern across units as large as a phonological or syntactic phrase. (For differing versions of this process, see Dell, 1986; Garrett, 1982; Levelt et al., 1999; Shattuck-Hufnagel, 1979; and for arguments supporting a postlexical locus for at least some neologisms, see Buckingham, 1977, 1985, 1987; Butterworth, 1979; Ellis, 1985; Kohn, 1993; Kohn & Smith, 1994, 1995).

A postlexical locus has long been proposed for normal sound errors in order to account for such errors’ key properties: (a) the phonotactic regularity of errors, (b) the fact that consonants substitute for consonants and vowels for vowels, (c) syllable structure preservation in errors that move around, and (d) the fact that movement errors typically span a distance corresponding to a phonological or syntactic phrase.

In the aphasia literature, the postlexical account has been invoked primarily to explain why certain patients with conduction aphasia show a striking uniformity in the rate and characteristics of their neologisms across all types of production tasks, and why errors occur also in nonwords, which presumably are not lexically represented (Caplan et al., 1986; Pate et al., 1987).

Production models are now being designed that can account for many of the aforementioned characteristics of errors in terms of the architecture of the lexicon and the processing characteristics of lexical retrieval (e.g., Dell, Juliano, & Govindjee, 1993; Hartley & Houghton, 1996; Levelt et al., 1999; Vousden, Brown, & Harley, 2000). It remains to be seen whether such models will obviate the need for a postlexical phonological encoding stage.

Neuroanatomy of Lexical Retrieval

Indefrey and Levelt (2000) recently reviewed 58 studies that investigated the cerebral localization for word production. The majority of these involved PET or fMRI, but some utilized other techniques, including lesion analysis and event-related electrical and magnetic cortical activity (ERP, MEG). Indefrey and Levelt’s meta-analysis began with an analysis of the word-production tasks utilized in the various studies: picture naming, verb and noun generation, word reading, word repetition, and others. A primary distinction was drawn between task-specific lead-in processes and the core component processes of word production. The core components they recognize include all those we have discussed here (e.g., lemma selection, phonological encoding-assembly), as well as the subsequent processes of phonetic encoding and articulation. For each core component, Indefrey and Levelt identified brain regions that were reliably activated in experimental tasks sharing that component and not reliably activated in tasks that do not share that component. The following is a summary of the results of this meta-analysis:

Lemma Selection

Activation of a lemma by semantics is a process shared by picture naming and word generation but not necessarily word reading. The evidence implicates the left middle temporal gyrus—especially the midportion—as a critical locus for the lexical processes up to and including lemma selection.

Lexeme Selection

The term lexeme refers to the lexical-phonological specification that some models postulate as the sole lexical level (e.g., Caramazza & Miozzo, 1997) and others postulate in addition to the lemma (e.g., Levelt et al., 1999). Indefrey and Levelt (2000), who adopt the latter position, maintain that lexeme selection takes place in picture naming, word generation, and word reading, but not in pseudoword reading. The regional activations that conform to this pattern are those in the left posterior superior and middle temporal gyri (corresponding in all or part to Wernicke’s area), as well as the left thalamus.

Phonological Encoding-Assembly

This component is considered present in all production tasks. Although no single region was activated in all the reviewed studies, the regions that came closest to fulfilling the requirement were the left posterior inferior frontal gyrus (Broca’s area) and the left middle superior temporal gyrus.

Phonetic Encoding and Articulation

To isolate these peripheral components of word production, the authors looked for areas that were activated in all overt pronunciation tasks that used silent tasks as the control condition, but that were not activated in silent tasks or tasks that controlled for articulation. This pattern was matched by a number of areas known to be involved in motor planning and control: parts of the pre- and postcentral gyri of both hemispheres, right supplementary motor area (SMA), and left and medial cerebellum.

Grammatical Encoding

What processes of mind and brain enable the ordinary speaker to structure words grammatically and in accordance with the intended message? The question is made more compelling—and the answers more elusive—by the fact that these processes operate outside of conscious awareness and beyond the reach of our introspection. In this section, we outline a theory of grammatical encoding that addresses key issues but also has many unresolved details. The theory has its roots in studies of spontaneous speech errors conducted throughout the 1970s and 1980s (e.g., Baars, Motley, & MacKay, 1975; Dell, 1986; Fromkin, 1971; Garrett, 1975, 1980, 1982; MacKay, 1987; Shattuck-Hufnagel, 1979; Stemberger, 1985). It continues to be refined and elaborated through behavioral experiments (see Bock & Levelt, 1994; Levelt, 1989, for reviews), aphasia studies (Berndt, 2001; Garrett, 1982; Saffran, 1982; Schwartz, 1987), and computational modeling (e.g., Dell, 1986; Dell, Chang, & Griffin, 1999; Roelofs, 1992; Stemberger, 1985). Our treatment of the theory follows closely that of Bock and Levelt (1994). Readers interested in learning more should consult that reference, as well as Levelt’s 1989 book, Speaking.


The outlines of the theory are shown in Figures 21.7 and 21.8, which illustrate production of the sentence She was handing him the broccoli. The steps to encoding begin with the message, which expresses the intended meaning of the sentence, including the structure of the event (i.e., what the action is, who or what plays the role of agent, theme, etc.; Figure 21.7). Next come the two ordered procedures that together constitute grammatical encoding. The first procedure (functional processing) assigns predicate-argument relations; the second procedure (positional processing) builds constituent structure (Figure 21.8).


The lexical entities that participate in functional processing are the lemmas of our previous discussion. The predicate-argument structure constitutes the frame that selects lemmas and assigns each to an argument role. In the type of speech error known as the word exchange (see Example 13), two lemmas are each assigned the other’s role. This interpretation of word exchanges is made clearer by considering the exchange error in Example 20, which involves pronouns.

  20. Intended: She handed him the broccoli.

      Uttered: He handed her the broccoli.

In this error, the lemma for feminine pronoun singular and the lemma for masculine pronoun singular have been misassigned: The feminine pronoun fills the dative (indirect object) slot, and the masculine pronoun fills the nominative (subject) slot. In English, such misassignments emerge as exchanges of serial position, but this is true only because English uses fixed word order to express syntactic functions. In languages in which word order is freer and syntactic functions are marked by affixes, the exchanged nouns would carry the misassigned case (as the pronouns do in this example) regardless of how they were ordered. This is important because the functional frame is not concerned with serial order.


Assigning serial order is the business of the positional-processing stage, and it happens through the building of constituent structure. The entities that participate in positional processing are phrase fragments such as those shown in Figure 21.9 and the lexically specified arguments that attach to these fragments (Lapointe & Dell, 1989). When everything goes as planned, the nominative argument attaches to the slotted terminal branch of the Subject-NP fragment, the verb to the slotted terminal branch of the VP fragment, and so on. However, sometimes things go awry and the arguments attach to the wrong fragments. This is the presumed mechanism for the type of speech error known as the stranding exchange:

  21. Target: I’m not in the mood for reading.

      Error: I’m not in the read for mooding.

The features of stranding exchanges are telling: First, the exchanging entities are not words (as in Example 13) but rather stem morphemes. (In this example, reading is morphologically decomposed into the stem, read, and the affix, -ing.) Critically, the affix is stranded in its targeted location. Second, stranding exchanges—unlike word exchanges—do not respect grammatical class (e.g., in Example 21, a noun stem has exchanged with a verb stem), nor are the exchanging elements semantically related. The most important constraint on stranding exchanges is proximity; the interacting elements almost always occupy adjacent phrases in the same clause. Yet they can be separated by one or more closed-class words (such as for in this example); and closed-class words do not participate in stranding exchanges (i.e., one never sees errors like I’m not in the mood read for-ing). Apparently, closed-class words are invisible to the mechanism that inserts stems in fragments (and that is responsible for stranding exchanges). This is a key point, and we return to it later in this research paper.

The stem morphemes that participate in positional processing are equivalent to the phonologically specified lexemes discussed in the previous section. (This is supported by evidence that the entities that participate in stranding exchanges are often phonologically related; see Garrett, 1975; Lapointe & Dell, 1989.) Selecting a lexeme for insertion into a fragment slot triggers the process of phonological encoding, whereby the segments of the selected lexeme are selected and ordered. Among the types of phonological error that can arise at phonological encoding are so-called movement errors such as those in previous Examples 11 through 13. The existence of such errors indicates that more than one fragment is phonologically encoded at a time; the fact that the errors involve adjacent stems indicates that the span of anticipatory encoding is quite limited.

Disorders of Grammatical Encoding

Classical Agrammatism

Classical agrammatism has historically been considered a defining symptom of Broca’s aphasia, along with slow, poorly articulated, and dysprosodic speech. Classical agrammatism refers to the simplification and fragmentation of constituent structure and to the tendency to omit closed-class words. Persons with Broca’s aphasia exhibit agrammatism in spontaneous speech, event picture description, and sentence repetition. Table 21.2 reproduces sample picture descriptions we obtained from 10 mild-to-moderate agrammatic patients.



To a first approximation, the fragmentation of sentence structure can be understood as a breakdown in the retrieval and coordination of phrase fragments. But what about the omission of function words? Earlier we noted that frequently used words are not subject to TOTs and phonological error as often as infrequent words; the same is true for short words, compared to long ones. Since function words are among the shortest and most frequent words in the language, their omission from agrammatic speech is unlikely to be due to failure of lexical-phonological retrieval. In support of this, some agrammatic patients have been reported to produce function words normally in single-word production tasks but to omit them in the context of phrase and sentence production (Nespoulous et al., 1988). This suggests that syntactic influences are at play, but beyond this, there is little agreement.

The next section puts forward a view that ascribes closed-class omissions to the positional-processing stage of grammatical encoding. We must acknowledge at the outset, however, that the evidence to be reviewed is not complete, and the arguments not unassailable. In particular, it is now widely acknowledged that fluent aphasics, as well as nonfluent agrammatics, have problems producing closed-class elements in connected speech (e.g., Haarmann & Kolk, 1992; Caplan & Hanna, 1998), and whether this manifests primarily in omissions or in substitutions is affected by the speaker’s language (Menn & Obler, 1990) and by the task he or she is given (Hofstede & Kolk, 1994); moreover, the probability that a given closed-class element will be produced correctly is in part a function of its semantic and prosodic salience (Goodglass, 1973; Kean, 1979). Psychologist Herman Kolk argues that the best way to reconcile these diverse findings is to assume that aphasics of all kinds suffer from a reduced capacity for producing sentences and that they adapt to this impairment in various ways, depending on the particular task they are given and on other characteristics of their aphasia. In patients for whom speaking is effortful, i.e., most Broca’s aphasics, such adaptation takes the form of omission of the elements that carry the least semantic weight, namely those of the closed-class vocabulary (Kolk & Heeschen, 1992; Kolk & Van Grunsven, 1985; see also Dick et al., 2001).

The Closed-Class Hypothesis

The closed-class hypothesis is a highly influential account of closed-class omissions that relates this symptom to the computational distinctiveness of the closed-class vocabulary at the positional-processing stage. Several lines of evidence support this computational distinctiveness; for example, closed-class elements do not participate in exchanges arising at this level, and these elements are not subject to phonological error (but see Dell, 1990; Ellis et al., 1983). As formulated by Garrett (1975, 1980), the hypothesis explains these facts by proposing that closed-class elements are not retrieved from the lexicon in the way that open-class elements are, but instead come phonologically prepackaged on the terminal branches of positional frames. Because exchanges arise when lexically retrieved elements are assigned to the wrong slots, and closed-class words are not among the elements so assigned, they are not subject to exchange errors. Furthermore, because they are phonologically prepackaged, closed-class words are not subject to the substitutions, additions, and so on that happen at phonological assembly. On the other hand, if one assumes that the essential problem in agrammatism is generating positional frames, then their status as essential constituents of such frames would be expected to confer vulnerability on the closed-class elements.

Garrett’s hypothesis was subsequently expanded and modified by Lapointe (Lapointe, 1983, 1985), based on evidence that free and bound closed-class elements behave differently in agrammatic speech. Lapointe’s analysis of published cases of English- and Italian-speaking agrammatic patients revealed that the tendency to omit was restricted to freestanding function words; bound affixes, in contrast, were more likely to undergo substitution. Earlier studies had failed to notice the substitutions because English agrammatics tend to overuse bare verb stems—and this tendency lends itself to characterization as omission of the inflectional affix. Lapointe’s analysis treats this instead as substitution of the infinitive. Infinitives are among the least marked (i.e., least complex) verb forms in English, in terms of the semantic notions they encode. The markedness hierarchy is central to Lapointe’s account of verb-phrase production in agrammatism and provides an explanation for some of the across-language differences that one observes. For example, in Italian, the infinitive meaning is not expressed by the bare stem but rather by an affix. Accordingly, Italian agrammatics do not produce the bare verb stem, but they do substitute the infinitive affix for other, more marked, affixes.

Lapointe proposed that only affixes come fully specified on the terminal branches of phrase fragments. Function words, in contrast, are specified more abstractly and are subject to a separate retrieval process. The retrieval of phrase fragments and function words, according to Lapointe, utilizes a shared resource that is of limited supply in agrammatism. Consequently, the search for phrase fragments often terminates with the least marked fragments being retrieved and exhausts the resource capacity before the function words can be retrieved.

Additional evidence now suggests that function words are not inherent constituents of phrase fragments. The evidence comes from experiments in structural priming conducted by Bock and her colleagues with normal speakers. The essential observation behind structural priming is that speakers tend to repeat the structure of a sentence heard or spoken previously—even when the sentences differ in lexical and message-level content (Bock, 1986, 1989; Bock & Loebell, 1990; Bock, Loebell, & Morey, 1992). Analyses suggest that the effect of the prior exposure is to enhance the availability of the prime sentence’s constituent structure (e.g., whether it contains a prepositional phrase), making it more likely that the relevant phrase fragments will subsequently be repeated. The key finding (for present purposes) is that strong structural priming can be obtained even when the sentences differ in function word content (e.g., different prepositions). If the function words were an inherent part of the phrase fragments, fragments with different prepositions should prime one another less well than do those with identical prepositions; however, this is not the case (Bock, 1989).

Interestingly, structural priming can also be produced in agrammatic aphasic patients (Hartsuiker & Kolk, 1998; Saffran & Martin, 1997), in whom it yields complex structures of the sort that these patients rarely produce spontaneously (e.g., passives, datives). This evidence is consistent with a resource account of agrammatic production. It also shows that constituent structure can be primed independently of any lexical content—including the function words—because priming with a transitive passive, such as The boy was hit by a rock, often produces an NP-V-PP structure (as is appropriate to a passive) that contains the wrong preposition, affix, or both.

Fractional Agrammatism

In certain conditions, omission and substitution of closed-class elements can be observed in patients who speak fluently and without the syntactic fragmentation evident in Table 21.2 (e.g., Haarmann & Kolk, 1992; Miceli et al., 1983, Case 2). The reverse pattern has also been described: A patient’s speech is fragmented but rich in closed-class vocabulary (Berndt, 1987; Saffran, Schwartz, & Marin, 1980a). To reconcile these findings with the original or revised version of the closed-class hypothesis, the syntactic fragmentation must be attributed to a separate resource decrement (Lapointe, 1985) or to a syntactic process that is independent of closed-class retrieval (Saffran et al., 1980a; Schwartz, 1987). The next section spells out the argument for syntactic deficits at the function assignment stage.

Functional Processing Deficits

In 1980 our group demonstrated for the first time that patients with agrammatic Broca’s aphasia have difficulty using word order to express the predicate-argument structure of a sentence (Saffran, Schwartz, & Marin, 1980b). Asked to describe simple events (e.g., a boy pushing a girl) or locative states (e.g., a pencil in the sink), our patients not infrequently began with the wrong noun and sometimes produced complete subject-object reversals. We replicated the effect in a sentence anagram task that required no speaking at all.

This word-order deficit, now well established, emerges only with specially designed materials—namely, those that exclude nonsyntactic strategies, such as mentioning the animate or most salient entity first or ordering the entities in accordance with their left-right placement in the picture (Chatterjee, Maher, & Heilman, 1995; Deloche & Seron, 1981; Menn et al., 1998; Saffran et al., 1980b). It has also been reported in a few fluent aphasic patients, whose production of constituent structures and closed-class elements was largely if not entirely intact (Caramazza & Miceli, 1991; R. Martin & Blossom-Stach, 1986). Such cases establish definitively that the word-order deficit is not a consequence of positional processing impairment; rather, it reflects impairment at the prior stage of predicate-argument assignment.

Earlier we noted that the verb plays a central role in predicate-argument assignment. It should not be surprising, then, that the semantic and syntactic properties of verbs (and other predicates, such as prepositions) have been found to influence the word-order problem (Jones, 1984; Saffran et al., 1980b). It has been known for some time that agrammatism is associated with impaired verb access (e.g., Miceli, Silveri, Villa, & Caramazza, 1984; Saffran et al., 1980a; Zingeser & Berndt, 1990) and that the verb-access deficit can arise at multiple levels (e.g., Berndt, Mitchum, Haendiges, & Sandson, 1997). Elucidating the relationship between verb impairments and grammatical encoding deficits remains an active area of research (Breedin & Martin, 1996; Breedin, Saffran & Schwartz, 1998; Gordon & Dell, 2002).

If agrammatics’ syntactic fragmentation stems from functional processing deficits involving verb use and predicate-argument assignment, this would explain why syntactic fragmentation and closed-class omissions do not invariably co-occur. This account of syntactic fragmentation receives support from the speech samples in Table 21.2, where much of the struggle seems to arise in connection with finding the correct verb and conveying the functional roles of the three participants.

Neuroanatomy of Grammatical Encoding

Classical agrammatism is part of the clinical picture of Broca’s aphasia; consequently, the search for the anatomical substrate of grammatical encoding has focused on Broca’s area. Modern lesion studies have not been encouraging in this regard: They have shown that agrammatic Broca’s aphasia does not result from a lesion restricted to Broca’s area, but rather requires extension into adjacent frontal and parietal areas overlying the insula (i.e., frontal and parietal operculum) and to the insula itself (Dronkers et al., 2000; Mohr et al., 1978). More problematic still is the evidence that agrammatic Broca’s aphasia can be produced by posterior lesions that completely spare Broca’s area (Basso et al., 1985; Dronkers et al., 2000; Willmes & Poeck, 1993). A lesion correlation study focused specifically on agrammatism found that the responsible lesions, when localized, were distributed widely around the left perisylvian area (Vanier & Caplan, 1990). On one hand, this is not surprising; given the complexity of grammatical encoding and the evidence that it fractionates at multiple levels, it is unlikely to be localized to a single site. On the other hand, in our review of the functional imaging literature relating to sentence comprehension, Broca’s area was consistently found to activate in response to the syntactic load of the sentence. Could it be that this area is critical for syntactic processing in comprehension but not in production? This seems unlikely prima facie, given its location in close proximity to motor cortex. Moreover, in the only functional imaging study to date that utilized a speech production design, the results implicated the left Rolandic operculum, caudally adjacent to Broca’s area, as important for grammatical encoding (Indefrey et al., 2001).

At this time there is no simple way to reconcile the discrepant findings from lesion and metabolic imaging studies. However, it is important to remember that structural damage can produce far-reaching metabolic abnormality, so that the fact that a lesion is localized outside Broca’s area does not necessarily mean that the neural tissue there is functioning normally (Metter, 1987).

Contemporary neuroscience is increasingly coming to the view that Broca’s area is both too big and too small to serve as the neurological substrate for grammatical encoding. It is too big in the sense that it is structurally and functionally decomposable; it is too small in the sense that any one of its functions probably can only be understood in the context of a spatially distributed network of structures. An interesting characterization of the neural network for grammatical encoding has recently been put forward by Ullman and colleagues. Their declarative-procedural model of language maps the lexical-syntactic distinction onto the declarative and procedural memory systems of the brain. The following quote, taken from a recent review, concerns the procedural system as it relates to grammatical encoding (Ullman, 2001; references contained in the quoted passages are omitted here; our additions appear within brackets).

[Procedural memory] has been implicated in learning new, and controlling well-established, motor and cognitive skills. Learning and remembering these procedures is largely implicit . . . The [procedural memory] system is rooted in portions of the frontal cortex (including Broca’s area and the supplementary motor area), the basal ganglia, parietal cortex and the dentate nucleus of the cerebellum . . . [Procedural memory] is important for learning or processing skills that involve action sequences. The execution of these skills seems to be guided in real time by the posterior parietal cortex, which is densely connected to frontal regions. Inferior parietal regions might serve as a repository for knowledge of skills, including information about stored sequences [phrase fragments?].

…Procedural memory subserves the implicit learning and use of a symbol-manipulating grammar across subdomains that include syntax, morphology and possibly phonology (how sounds are combined). The system might be especially important in grammatical-structure building—that is, in the sequential and hierarchical combination of stored forms (‘walk’ + ‘-ed’) and abstract representations into complex structures . . . One or more circuits between the basal ganglia and particular frontal regions might subserve grammatical processing and perhaps even finer distinctions, such as morphology versus syntax. From this point of view, the frontal cortex and basal ganglia are ‘domain general,’ in that they subserve non-linguistic and linguistic processes, but contain parallel, ‘domain-specific’ circuits. (Ullman, 2001, p. 718)

The declarative-procedural model of language has so far been thoroughly tested only in the domain of past-tense processing (regular past-tense formation is thought to involve the procedural system, irregular past-tense formation the declarative system). Nevertheless, the model has attracted much attention on account of its admirable attempt to reconcile the theoretical description of language behavior with basic neurobiological principles. We expect to see more of this in the coming years.

Conclusion

The history of the biological characterization of language can be divided into phases. The early history was dominated by a simple and elegant neurological theory that reduced language to sensory and motor images. The second phase, which began in the mid-1970s and took linguistic theory as a jumping-off point, evolved techniques of behavioral experimentation to arrive at a richly detailed characterization of how neurologically intact speakers represent and use language, and how localized brain damage affects these psycholinguistic processes. We have now entered a third phase, characterized by the advent of models that simulate psycholinguistic stages and processes via the combined computations of neuron-like elements, and by the growth of functional imaging technology capable of localizing the neural networks that perform such computations. Computational modeling and functional brain imaging are the tools of cognitive neuroscience. As they mature, and as their availability and influence spread, they are likely to provide new insights into the enduring mysteries surrounding language: What computational specializations did the human brain evolve for language? How did these emerge in phylogenesis? How do these neural specializations shape first-language learning, and how are they shaped by it? And what kind of plasticity is available to the brain that has experienced injury to these specialized mechanisms? When these questions are answered, we will have arrived at a truly comprehensive understanding of the biology of human language.

Bibliography:

  1. Allport, D. A. (1985). Distributed memory, modular subsystems, and dysphasia. In S. K. Newman & R. Epstein (Eds.), Current perspectives in dysphasia (pp. 32–60). Edinburgh, Scotland: Churchill Livingstone.
  2. Allport, D. A., & Funnell, E. (1981). Components of the mental lexicon. Philosophical Transactions of the Royal Society of London, 295B, 397–410.
  3. Altmann, G. T. M., Garnham, A., & Dennis, Y. (1992). Avoiding the garden path: Eye movements in context. Journal of Memory and Language, 31, 685–712.
  4. Baars, B. J., Motley, M. T., & MacKay, D. G. (1975). Output editing for lexical status in artificially elicited slips of the tongue. Journal of Verbal Learning and Verbal Behavior, 14, 382–391.
  5. Badecker, W., Miozzo, M., & Zanuttini, R. (1995). The two-stage model of lexical retrieval: Evidence from a case of anomia with selective preservation of grammatical gender. Cognition, 57, 193–216.
  6. Barbarotto, R., Capitani, E., Spinnler, H., & Travelli, C. (1995). Slowly progressive semantic impairment with category specificity. Neurocase, 1, 107–119.
  7. Basso, A., Gardelli, M., Grassi, M. P., & Mariotti, M. (1989). The role of the right hemisphere in recovery from aphasia: Two case studies. Cortex, 25, 555–556.
  8. Basso, A., Lecours, A. R., Moraschini, S., & Vanier, M. (1985). Anatomoclinical correlations of the aphasias as defined through computerized tomography: Exceptions. Brain and Language, 26, 201–229.
  9. Bavelier, D., Corina, D., Jezzard, P., Padmanabhan, S., Clark, V. P., & Karni, A. P. (1997). Sentence reading: A functional MRI study at 4 Tesla. Journal of Cognitive Neuroscience, 9, 664–686.
  10. Beauregard, M., Chertkow, H., Bub, D., Murtha, S., Dixon, R., & Evans, A. (1997). The neural substrate for concrete, abstract, and emotional word lexica: A positron emission tomography study. Journal of Cognitive Neuroscience, 9, 441–461.
  11. Behrmann, M., & Leiberthal, T. (1989). Category-specific treatment of lexical-semantic deficit: A single case study of global aphasia. British Journal of Disorders of Communication, 24, 281–299.
  12. Berndt, R. S. (1987). Symptom co-occurrence and dissociation in the interpretation of agrammatism. In M. Coltheart, G. Sartori, & R. Job (Eds.), The cognitive neuropsychology of language (pp. 221–233). London: Erlbaum.
  13. Berndt, R. S. (2001). More than just words: Sentence production in aphasia. In R. S. Berndt (Ed.), Handbook of neuropsychology (2nd ed., Vol. 3, pp. 173–187). Amsterdam: Elsevier.
  14. Berndt, R. S., & Caramazza, A. (1980). A redefinition of the syndrome of Broca’s aphasia: Implications for a neuropsychological model of language. Applied Linguistics, 1, 225–287.
  15. Berndt, R. S., Mitchum, C., & Haendiges, A. (1996). Comprehension of reversible sentences in “agrammatism”: A meta-analysis. Cognition, 58, 289–308.
  16. Berndt, R. S., Mitchum, C., Haendiges, A., & Sandson, J. (1997). Verb retrieval in aphasia: 1. Characterizing single word impairments. Brain and Language, 56, 68–106.
  17. Berndt, R. S., Salasoo, A., Mitchum, C., & Blumstein, S. E. (1988). The role of intonation cues in aphasic patients’ performance of the grammaticality judgment task. Brain and Language, 34, 65–97.
  18. Best, W. M. (1996). When racquets are baskets but baskets are biscuits, where do the words come from? A single-case study of formal paraphasia. Cognitive Neuropsychology, 3, 369–409.
  19. Binder, J. R., Frost, J. A., Hammeke, T. A., Cox, R. W., Rao, S. M., & Prieto, T. (1997). Human brain language areas identified by functional magnetic resonance imaging. Journal of Neuroscience, 17, 353–362.
  20. Blanken, G. (1990). Formal paraphasias: A single case study. Brain and Language, 38, 534–554.
  21. Blanken, G. (1998). Lexicalisation in speech production: Evidence from form-related word substitutions in aphasia. Cognitive Neuropsychology, 15, 321–360.
  22. Blumstein, S. E. (1994). Impairments of speech production and speech perception in aphasia. Philosophical Transactions of the Royal Society of London, 346B, 29–36.
  23. Blumstein, S. E., Cooper, W. E., Zurif, E. B., & Caramazza, A. (1977). The perception and production of voice-onset time in aphasia. Neuropsychologia, 15, 371–383.
  24. Blumstein, S. E., Tartter, V. C., Nigro, G., & Statlender, S. (1984). Acoustic cues for the perception of place of articulation in aphasia. Brain and Language, 22, 128–149.
  25. Boatman, D., Lesser, R., & Gordon, B. (1995). Auditory speech processing in the left temporal lobe: An electrical interference study. Brain and Language, 51, 269–290.
  26. Bock, J. K. (1986). Syntactic persistence in language production. Cognitive Psychology, 18, 355–387.
  27. Bock, J. K. (1989). Closed-class immanence in sentence production. Cognition, 31, 163–186.
  28. Bock, J. K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 945–984). San Diego: Academic Press.
  29. Bock, J. K., & Loebell, H. (1990). Framing sentences. Cognition, 35, 1–39.
  30. Bock, J. K., Loebell, H., & Morey, R. (1992). From conceptual roles to structural relations: Bridging the syntactic cleft. Psychological Review, 99, 150–171.
  31. Bradley, D. C., Garrett, M. F., & Zurif, E. B. (1980). Syntactic deficits in Broca’s aphasia. In D. Caplan (Ed.), Biological studies of mental processes (pp. 269–286). Cambridge: MIT Press.
  32. Breedin, S. D., & Martin, R. C. (1996). Patterns of verb impairment in aphasia: Analysis of four cases. Cognitive Neuropsychology, 13, 51–91.
  33. Breedin, S. D., Martin, N., & Saffran, E. M. (1994). Category-specific semantic impairments: An infrequent occurrence? Brain and Language, 47, 383–386.
  34. Breedin, S. D., & Saffran, E. M. (1999). Sentence processing in the face of semantic loss: A case study. Journal of Experimental Psychology: General, 128, 547–565.
  35. Breedin, S. D., Saffran, E. M., & Coslett, H. B. (1994). Reversal of the concreteness effect in a patient with semantic dementia. Cognitive Neuropsychology, 11, 617–660.
  36. Breedin, S. D., Saffran, E. M., & Schwartz, M. F. (1998). Semantic factors in verb retrieval: An effect of complexity. Brain and Language, 63, 1–31.
  37. Broca, P. (1865). Sur le siège de la faculté du langage articulé. Bulletin d’Anthropologie, 5, 377–393.
  38. Brown, A. S. (1991). A review of the tip-of-the-tongue experience. Psychological Bulletin, 109, 204–223.
  39. Brown, C. M., Hagoort, P., & Kutas, M. (2001). Postlexical integration processes in language comprehension: Evidence from brain imaging research. In M. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 881–896). Cambridge: MIT Press.
  40. Buckingham, H. D. (1977). The conduction theory and neologistic jargon. Language and Speech, 20, 174–184.
  41. Buckingham, H. D. (1985). Perseveration in aphasia. Edinburgh, Scotland: Churchill Livingstone.
  42. Buckingham, H. D. (1987). Phonemic paraphasias and psycholinguistic production models for neologistic jargon. Aphasiology, 1, 381–400.
  43. Butterworth, B. (1979). Hesitation and the production of verbal paraphasias and neologisms in jargon aphasia. Brain and Language, 8, 133–161.
  44. Buxbaum, L. J., & Saffran, E. M. (1998). Knowing “how” vs. “what for”: A new dissociation. Brain and Language, 65, 73–86.
  45. Buxbaum, L. J., Schwartz, M. F., & Carew, T. G. (1997). The role of semantic memory in object use. Cognitive Neuropsychology, 14, 219–254.
  46. Caplan, D., Alpert, N., & Waters, G. (1998). Effects of syntactic structure and propositional number on patterns of regional cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541–552.
  47. Caplan, D., & Hanna, J. E. (1998). Sentence production by aphasic patients in a constrained task. Brain and Language, 63, 184–218.
  48. Caplan, D., Vanier, M., & Baker, E. (1986). A case study of reproduction conduction aphasia: 1. Word production. Cognitive Neuropsychology, 3, 99–128.
  49. Caplan, D., & Waters, G. S. (1999). Verbal working memory and sentence comprehension. Behavioral and Brain Sciences, 22, 77–94.
  50. Cappa, S. F., Perani, D., Schnur, T., Tettamanti, M., & Fazio, F. (1998). The effects of semantic category and knowledge type on lexical-semantic access: A PET study. Neuroimage, 8, 350–359.
  51. Caramazza, A. (1984). The logic of neuropsychological research and the problem of patient classification in aphasia. Brain and Language, 21, 9–20.
  52. Caramazza, A., & Hillis, A. E. (1990). Where do semantic errors come from? Cortex, 26(1), 95–122.
  53. Caramazza, A., & Miceli, G. (1991). Selective impairment of thematic role assignment in sentence processing. Brain and Language, 41, 402–436.
  54. Caramazza, A., & Miozzo, M. (1997). The relation between syntactic and phonological knowledge in lexical access: Evidence from the “tip-of-the-tongue” phenomenon. Cognition, 64, 309–343.
  55. Caramazza, A., & Shelton, J. R. (1998). Domain-specific knowledge systems in the brain: The animate-inanimate distinction. Journal of Cognitive Neuroscience, 10, 1–34.
  56. Caramazza, A., & Zurif, E. B. (1976). Dissociation of algorithmic and heuristic processes in sentence comprehension: Evidence from aphasia. Brain and Language, 3, 572–582.
  57. Cardebat, D., Demonet, J.-F., Celsis, P., Puel, M., Viallard, G., & Marc-Vergnes, J.-P. (1994). Right temporal compensatory mechanisms in a deep dysphasic patient: A case report with activation study by PET. Neuropsychologia, 32, 97–103.
  58. Chatterjee, A., Maher, L. M., & Heilman, K. M. (1995). Spatial characteristics of thematic role representation. Neuropsychologia, 33, 643–648.
  59. Chomsky, N. (1981). Lectures on government and binding. Dordrecht, Holland: Foris.
  60. Coslett, H. B., & Monsul, N. (1994). Reading and the right hemisphere: Evidence from transcranial magnetic stimulation. Brain and Language, 46, 198–211.
  61. Damasio, A. R., Damasio, H., Rizzo, M., Varney, N., & Gersch, F. (1982). Aphasia with nonhemorrhagic lesions in the basal ganglia and internal capsule. Archives of Neurology, 39, 15–20.
  62. Damasio, H., & Damasio, A. R. (1989). Lesion analysis in neuropsychology. New York: Oxford University Press.
  63. Damasio, H., Grabowski, T. J., Tranel, D., Hichwa, R. D., & Damasio, A. R. (1996). A neural basis for lexical retrieval. Nature, 380, 499–505.
  64. De Renzi, E., & Lucchelli, F. (1994). Are semantic systems separately represented in the brain? The case of living category impairment. Cortex, 30, 3–25.
  65. del Viso, S., Igoa, J. M., & García-Albea, J. E. (1991). On the autonomy of phonological encoding: Evidence from slips of the tongue in Spanish. Journal of Psycholinguistic Research, 20, 161–185.
  66. Dell, G. S. (1986). A spreading activation theory of retrieval in sentence production. Psychological Review, 93, 283–321.
  67. Dell, G. S. (1990). Effects of frequency and vocabulary type on phonological speech errors. Language and Cognitive Processes, 5, 313–349.
  68. Dell, G. S., Chang, F., & Griffin, Z. M. (1999). Connectionist models of language production: Lexical access and grammatical encoding. Cognitive Science, 23, 517–542.
  69. Dell, G. S., Juliano, C., & Govindjee, A. (1993). Structure and content in language production: A theory of frame constraints in phonological speech errors. Cognitive Science, 17, 149–195.
  70. Dell, G. S., & O’Seaghdha, P. (1991). Mediated and convergent lexical priming in language production: A comment on Levelt et al. Psychological Review, 98, 604–614.
  71. Dell, G. S., & Reich, P. A. (1981). Stages in sentence production: An analysis of speech error data. Journal of Verbal Learning and Verbal Behavior, 20, 611–629.
  72. Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. (1997). Lexical access in aphasic and non-aphasic speakers. Psychological Review, 104, 811–838.
  73. Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. (2000). The role of computational models in neuropsychological investigations of language: Reply to Ruml and Caramazza. Psychological Review, 107, 635–645.
  74. Deloche, G., & Seron, X. (1981). Sentence understanding and knowledge of the world: Evidence from a sentence-picture matching task performed by aphasic patients. Brain and Language, 14, 57–69.
  75. Demonet, J.-F., Chollet, F., Ramsay, S., Cardebat, D., Nespoulous, J.-L., Wise, R. J. S., Rascol, A., & Frackowiak, R. S. J. (1992). The anatomy of phonological and semantic processing in normal subjects. Brain, 115, 1753–1768.
  76. Dick, F., Bates, E., Wulfeck, B., Utman, J. A., Dronkers, N., & Gernsbacher, M. A. (2001). Language deficits, localization, and grammar: Evidence for a distributive model of language breakdown in aphasic patients and neurologically intact individuals. Psychological Review, 108, 759–788.
  77. Dronkers, N. F., Redfern, B. B., & Knight, R. T. (2000). The neural architecture of language disorders. In M. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 949–960). Cambridge: Bradford/MIT Press.
  78. Dronkers, N. F., Wilkins, D. P., Van Valin, R. D. J., Redfern, B. B., & Jaeger, J. J. (1994). A reconsideration of the brain areas involved in the disruption of morphosyntactic comprehension. Brain and Language, 47, 461–463.
  79. Ellis, A. W. (1985). The production of spoken words: A cognitive neuropsychological perspective. In A. W. Ellis (Ed.), Progress in the psychology of language (pp. 107–145). Hillsdale, NJ: Erlbaum.
  80. Ellis, A. W., Miller, D., & Sin, G. (1983). Wernicke’s aphasia and normal language processing: A case study in cognitive neuropsychology. Cognition, 15, 111–114.
  81. Farah, M., Hammond, K. H., Mehta, Z., & Ratcliff, G. (1989). Category-specificity and modality-specificity in semantic memory. Neuropsychologia, 27, 193–200.
  82. Farah, M., & McClelland, J. (1991). A computational model of semantic memory impairment: Modality specificity and emergent category specificity. Journal of Experimental Psychology: General, 120, 339–357.
  83. Farah, M., Meyer, M. M., & McMullen, P. A. (1996). The living/nonliving dissociation is not an artifact: Giving an a priori implausible hypothesis a strong test. Cognitive Neuropsychology, 13, 137–154.
  84. Fay, D., & Cutler, A. (1977). Malapropisms and the structure of the mental lexicon. Linguistic Inquiry, 8(3), 505–520.
  85. Fitch, R. H., Miller, S., & Tallal, P. (1997). Neurobiology of speech perception. Annual Review of Neuroscience, 20, 331–353.
  86. Forde, E. M. E., Francis, D., Riddoch, J. J., Rumiati, R. I., & Humphreys, G. W. (1997). On the links between visual knowledge and naming: A single case study of a patient with a category-specific impairment for living things. Cognitive Neuropsychology, 14, 403–458.
  87. Foygel, D., & Dell, G. S. (2000). Models of impaired lexical access in speech production. Journal of Memory and Language, 43, 182–216.
  88. Frazier, L. (1990). Exploring the architecture of the language processing system. In G. T. M. Altmann (Ed.), Cognitive models of speech processing (pp. 409–433). Cambridge: MIT Press.
  89. Freud, S. (1953). On aphasia. New York: International Universities Press. (Original work published 1891)
  90. Friston, K. J. (1997). Imaging cognitive anatomy. Trends in Cognitive Sciences, 1, 21–27.
  91. Fromkin, V. A. (1971). The non-anomalous nature of anomalous utterances. Language, 47(1), 26–52.
  92. Fujii, T., Fukatsu, R., Watabe, S., Ohnuma, A., Teramura, K., Kimura, I., Saso, S., & Kogure, K. (1990). Auditory sound agnosia without aphasia following a right temporal lobe lesion. Cortex, 26, 263–268.
  93. Funnell, E., & De Mornay Davies, P. (1996). JBR: A reassessment of concept familiarity and a category-specific disorder for living things. Neurocase, 2, 461–474.
  94. Funnell, E., & Sheridan, J. (1992). Categories of knowledge? Unfamiliar aspects of living and nonliving things. Cognitive Neuropsychology, 9, 135–154.
  95. Gaffan, D., & Heywood, C. A. (1993). A spurious category-specific visual agnosia for living things in normal humans and nonhuman primates. Journal of Cognitive Neuroscience, 5, 118–128.
  96. Gagnon, D., Schwartz, M., Martin, N., Dell, G., & Saffran, E. M. (1997). The origins of form-related paraphasias in aphasic naming. Brain and Language, 59, 450–472.
  97. Gainotti, G., & Silveri, M. C. (1996). Cognitive and anatomical locus of lesion in a patient with a category-specific semantic impairment for living things. Cognitive Neuropsychology, 13, 357–389.
  98. Gainotti, G., Silveri, C., Villa, G., & Miceli, G. (1984). Anomia with and without lexical comprehension disorders. Brain and Language, 29, 18–33.
  99. Gannon, P. J., Holloway, R. L., Broadfield, D. C., & Braun, A. R. (1998). Asymmetry of chimpanzee planum temporale: Humanlike pattern of Wernicke’s brain language area homolog. Science, 279, 220–222.
  100. Ganong, W. F. (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception and Performance, 6, 110–125.
  101. Garrett, M. F. (1975). The analysis of sentence production. In G. H. Bower (Ed.), The psychology of learning and motivation (pp. 133–175). London: Academic Press.
  102. Garrett, M. F. (1980). Levels of processing in sentence production. In B. Butterworth (Ed.), Language production (Vol. 1, pp. 177–220). London: Academic Press.
  103. Garrett, M. F. (1982). Production of speech: Observations from normal and pathological language. In A. Ellis (Ed.), Normality and pathology in cognitive functions (pp. 19–76). London: Academic Press.
  104. Gazzaniga, M. S. (1983). Right hemisphere language following brain bisection: A 20 year perspective. American Psychologist, 38, 525–537.
  105. Geschwind, N. (1965). Disconnection syndromes in animals and man. Brain, 88, 237–294, 585–644.
  106. Geschwind, N., & Levitsky, W. (1968). Human brain: Left-right asymmetries in temporal speech region. Science, 161, 186–187.
  107. Goldman-Rakic, P. (1995). Architecture of the prefrontal cortex and the central executive. In J. Grafman, K. J. Holyoak, & F. Boller (Eds.), Structure and functions of the human prefrontal cortex (Annals of the New York Academy of Sciences, Vol. 769, pp. 71–84). New York: New York Academy of Sciences.
  108. Goodglass, H. (1973). Studies on the grammar of aphasics. In H. Goodglass & S. Blumstein (Eds.), Psycholinguistics and aphasia. Baltimore: Johns Hopkins University Press.
  109. Goodglass, H., Kaplan, E., Weintraub, S., & Ackerman, N. (1976). The “tip-of-the-tongue” phenomenon in aphasia. Cortex, 12, 145–153.
  110. Gordon, J. K. (2001). Phonological neighborhood effects in aphasic speech errors: Spontaneous and structured contexts. Manuscript submitted for publication.
  111. Gordon, J. K., & Dell, G. S. (2002). Learning to divide the labour between syntax and semantics: A connectionist account of deficits in light and heavy verb production. Brain and Cognition, 48, 376–381.
  112. Grafton, S. T., Fadiga, L., Arbib, M. A., & Rizzolatti, G. (1997). Premotor cortex activation during observation and naming of familiar tools. Neuroimage, 6, 231–236.
  113. Grodzinsky, Y. (1990). Theoretical perspectives on language deficits. Cambridge: MIT Press.
  114. Grodzinsky, Y. (2000). The neurology of syntax: Language use without Broca’s area. Behavioral and Brain Sciences, 23, 1–21.
  115. Haarmann, H. J., Just, M. A., & Carpenter, P. A. (1997). Aphasic sentence comprehension as a resource deficit: A computational approach. Brain and Language, 59, 76–120.
  116. Haarmann, H. J., & Kolk, H. H. J. (1992). The production of grammatical morphology in Broca’s and Wernicke’s aphasics: Speed and accuracy factors. Cortex, 28, 97–102.
  117. Habib, M., & Demonet, J.-F. (1996). Cognitive neuroanatomy of language: The contribution of functional neuroimaging. Aphasiology, 10, 217–234.
  118. Hagoort, P., Brown, C. M., & Osterhout, L. (1999). The neurocognition of syntactic processing. In C. M. Brown & P. Hagoort (Eds.), The neurocognition of language (pp. 273–305). Oxford: Oxford University Press.
  119. Harley, T. A. (1984). A critique of top-down independent levels models of speech production: Evidence from non-plan-internal speech errors. Cognitive Science, 8, 191–219.
  120. Harley, T. A., & Bown, H. E. (1998). What causes a tip-of-the-tongue state? Evidence for lexical neighborhood effects in speech production. British Journal of Psychology, 89, 151–174.
  121. Hart, J., Berndt, R. S., & Caramazza, A. (1985). Category-specific naming deficit following cerebral infarction. Nature, 316, 439–440.
  122. Hart, J., & Gordon, B. (1990). Delineation of single-word semantic comprehension deficits in aphasia, with anatomical correlation. Annals of Neurology, 27, 226–231.
  123. Hart, J., & Gordon, B. (1992). Neural subsystems for object knowledge. Nature, 359, 60–64.
  124. Hartley, T., & Houghton, G. (1996). A linguistically constrained model of short-term memory for nonwords. Journal of Memory and Language, 35, 1–31.
  125. Hartsuiker, R. J., & Kolk, H. H. J. (1998). Syntactic facilitation in agrammatic sentence production. Brain and Language, 62, 221–254.
  126. Head, H. (1926). Aphasia and kindred disorders of speech. Cambridge: Cambridge University Press.
  127. Heilman, K. M., Rothi, L., Campanella, D., & Wolfson, S. (1979). Wernicke’s and global aphasia without alexia. Archives of Neurology, 36, 129–133.
  128. Henaff Gonon, M., Bruckert, R., & Michel, F. (1989). Lexicalization in an anomic patient. Neuropsychologia, 27, 391–407.
  129. Hillis, A. E., & Caramazza, A. (1991). Category-specific naming and comprehension impairment: A double dissociation. Brain, 114, 2081–2094.
  130. Hillis, A. E., Boatman, D., Hart, J., & Gordon, B. (1999). Making sense out of jargon: A neurolinguistic and computational account of jargon aphasia. Neurology, 53, 1813–1824.
  131. Hillis, A. E., Rapp, B., Romani, C., & Caramazza, A. (1990). Selective impairments of semantics in lexical processing. Cognitive Neuropsychology, 7, 191–243.
  132. Hodges, J. R., Bozeat, S., Lambon Ralph, M., Patterson, K., & Spatt, J. (2000). The role of conceptual knowledge in object use: Evidence from semantic dementia. Brain, 123, 1913–1925.
  133. Hodges, J. R., Garrard, P., & Patterson, K. (1998). Semantic dementia. In A. Kertesz & D. G. Munoz (Eds.), Pick’s disease and Pick complex (pp. 83–104). New York: Wiley-Liss.
  134. Hodges, J. R., Patterson, K., Oxbury, S., & Funnell, E. (1992). Semantic dementia: Progressive fluent aphasia with temporal lobe atrophy. Brain, 115, 1783–1806.
  135. Hofstede, B. T. M., & Kolk, H. H. J. (1994). The effects of task variation on the production of grammatical morphology in Broca’s aphasia: A multiple case study. Brain and Language, 46, 278–328.
  136. Houghton, G. (1990). The problem of serial order: A neural network model of sequence learning and recall. In R. Dale, C. Mellish, & M. Zock (Eds.), Current research in natural language generation (pp. 287–319). London: Academic Press.
  137. Howard, D., & Orchard-Lisle, V. (1984). On the origin of semantic errors in naming: Evidence from the case of a global aphasic. Cognitive Neuropsychology, 1, 163–190.
  138. Howard, D., & Patterson, K. (1992). Pyramids and palm trees: A test of semantic access from pictures and words. Bury St. Edmunds, UK: Thames Valley Test Company.
  139. Howard, D., Patterson, K. E., Franklin, S., Orchard-Lisle, V. M., & Morton, J. (1984). Consistency and variability in picture naming by aphasic patients. In F. C. Rose (Ed.), Recent advances in aphasiology (pp. 263–276). New York: Raven.
  140. Humphreys, G. W., Lamote, C., & Lloyd-Jones, T. J. (1995). An interactive activation approach to object processing: Effects of structural similarity, name frequency, and task in normality and pathology. Memory, 3, 535–586.
  141. Indefrey, P., Brown, C. M., Hellwig, F., Amunts, K., Herzog, H., Seitz, R., et al. (2001). A neural correlate of syntactic encoding during speech production. Proceedings of the National Academy of Sciences, 98, 5933–5936.
  142. Indefrey, P., & Levelt, W. J. M. (2000). The neural correlates of language production. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 845–866). Cambridge: MIT Press.
  143. Jescheniak, J. D., & Levelt, W. J. M. (1994). Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, & Cognition, 20, 824–843.
  144. Jones, E. V. (1984). Word order processing in aphasia: Effect of verb semantics. In F. C. Rose (Ed.), Advances in neurology (Vol. 42, pp. 159–181). New York: Raven.
  145. Jusczyk, P. W., Cutler, A., & Redanz, N. (1993). Preference for the predominant stress patterns of English words. Child Development, 64, 675–687.
  146. Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, 122–149.
  147. Just, M. A., Carpenter, P. A., Keller, T. A., Eddy, W. F., & Thulborn, K. R. (1996). Brain activation modulated by sentence comprehension. Science, 274, 114–116.
  148. Kanwisher, N. H., Woods, R. P., Iacoboni, M., & Mazziotta, J. C. (1997). A locus in human extrastriate cortex for visual shape analysis. Journal of Cognitive Neuroscience, 9, 133–142.
  149. Kay, J., & Ellis, A. W. (1987). A cognitive neuropsychological case study of anomia: Implications for psychological models of word retrieval. Brain, 110, 613–629.
  150. Kean, M.-L. (1979). Agrammatism: A phonological deficit? Cognition, 7, 69–73.
  151. Kempen, G., & Huijbers, P. (1983). The lexical process in sentence production and naming: Indirect election of words. Cognition, 14, 185–209.
  152. Kertesz, A., & Benson, D. E. (1970). Neologistic jargon: A clinicopathological study. Cortex, 6, 362–387.
  153. Klatt, D. H. (1989). Review of selected models of speech perception. In W. D. Marslen-Wilson (Ed.), Lexical representation and process (pp. 169–226). Cambridge: MIT Press.
  154. Kohn, S. E. (1993). Segmental disorders in aphasia. In G. Blanken, J. Dittmann, H. Grimm, J. C. Marshall, & C.-W. Wallesch (Eds.), Linguistic disorders and pathologies (pp. 197–209). Berlin: Walter de Gruyter.
  155. Kohn, S. E., & Smith, K. L. (1994). Distinctions between two phonological output deficits. Applied Psycholinguistics, 15, 75–95.
  156. Kohn, S. E., & Smith, K. L. (1995). Serial effects of phonemic planning during word production. Aphasiology, 7, 209–222.
  157. Kolk, H., & Heeschen, C. (1992). Agrammatism, paragrammatism, and the management of language. Language and Cognitive Processes, 7, 89–129.
  158. Kolk, H., & Van Grunsven, M. J. E. (1985). Agrammatism as a variable phenomenon. Cognitive Neuropsychology, 2, 347–384.
  159. Kutas, M., Federmeier, K. D., & Sereno, M. I. (1999). Current approaches to mapping language in electromagnetic space. In C. Brown & P. Hagoort (Eds.), The neurocognition of language (pp. 359–392). Oxford: Oxford University Press.
  160. Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic anomaly. Science, 207, 203–205.
  161. Lambon Ralph, M. A., Graham, K. S., & Patterson, K. (1999). Is a picture worth a thousand words? Evidence from concept definitions by patients with semantic dementia. Brain and Language, 70, 309–335.
  162. Lambon Ralph, M. A., Sage, K., & Roberts, J. (2000). Classical anomia: A neuropsychological perspective on speech production. Neuropsychologia, 38, 186–202.
  163. Lapointe, S. (1983). Some issues in the linguistic description of agrammatism. Cognition, 14, 1–40.
  164. Lapointe, S. (1985). A theory of verb form use in the speech of agrammatic aphasics. Brain and Language, 24, 100–155.
  165. Lapointe, S., & Dell, G. S. (1989). A synthesis of some recent work in sentence production. In G. N. Carlson & M. K. Tanenhaus (Eds.), Linguistic structure in language processing (pp. 107–156). Dordrecht: Kluwer Academic Publishers.
  166. Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41–104.
  167. Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge: MIT Press.
  168. Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10, 553–567.
  169. Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1–75.
  170. Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98, 122–142.
  171. Lichtheim, L. (1885). On aphasia. Brain, 7, 433–484.
  172. Linebarger, M. (1990). Neuropsychology of sentence parsing. In A. Caramazza (Ed.), Cognitive neuropsychology and neurolinguistics: Advances in models of cognitive function and impairment (pp. 55–122). Hillsdale, NJ: Erlbaum.
  173. Linebarger, M. (1995). Agrammatism as evidence about grammar. Brain and Language, 50, 52–91.
  174. Linebarger, M., Schwartz, M. F., & Saffran, E. M. (1983). Sensitivity to grammatical structure in so-called agrammatic aphasics. Cognition, 13, 641–662.
  175. Luce, P. A., Pisoni, D. B., & Goldinger, S. D. (1990). Similarity neighborhoods of spoken words. In G. T. M. Altmann (Ed.), Cognitive models of speech processing (pp. 122–147). Cambridge: MIT Press.
  176. Luria, A. R. (1966). Higher cortical functions in man. New York: Basic Books.
  177. MacKay, D. G. (1972). The structure of words and syllables: Evidence from errors in speech. Cognitive Psychology, 3, 210–227.
  178. MacKay, D. G. (1987). The organization of perception and action: A theory for language and other cognitive skills. New York: Springer.
  179. Marie, P. (1906). Révision de la question de l’aphasie: La troisième convolution frontale gauche ne joue aucun rôle spécial dans la fonction du langage. Semaine Médicale (Paris), 21, 241–247.
  180. Marshall, J. (1995). The mapping hypothesis and aphasia therapy. Aphasiology, 9(6), 517–539.
  181. Marslen-Wilson, W. D., & Tyler, L. K. (1980). The temporal structure of spoken language understanding. Cognition, 8, 1–71.
  182. Marslen-Wilson, W. D., & Welsh, A. (1978). Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology, 10, 29–63.
  183. Martin, A., Haxby, J. V., Lalonde, F. M., Wiggs, C. L., & Ungerleider, L. G. (1995). Discrete cortical regions associated with knowledge of color and knowledge of action. Science, 270, 102–105.
  184. Martin, A., Wiggs, C. L., Ungerleider, L. G., & Haxby, J. V. (1996). Neural correlates of category-specific knowledge. Nature, 379, 649–652.
  185. Martin, N., Dell, G. S., Saffran, E. M., & Schwartz, M. F. (1994). Origins of paraphasias in deep dysphasia: Testing the consequences of a decay impairment to an interactive spreading activation model of lexical retrieval. Brain and Language, 47, 609–660.
  186. Martin, N., Gagnon, D., Schwartz, M. F., Dell, G. S., & Saffran, E. M. (1996). Phonological facilitation of semantic errors in normal and aphasic speakers. Language and Cognitive Processes, 11, 257–282.
  187. Martin, N., & Saffran, E. M. (1992). A computational account of deep dysphasia: Evidence from a single case study. Brain and Language, 43, 240–274.
  188. Martin, N., Saffran, E. M., & Dell, G. S. (1996). Recovery in deep dysphasia: Evidence for a relationship between auditory-verbal STM capacity and lexical errors in repetition. Brain and Language, 52, 83–113.
  189. Martin, N., Weisberg, R. W., & Saffran, E. M. (1989). Variables influencing the occurrence of naming errors: Implications for models of lexical retrieval. Journal of Memory and Language, 28, 462–485.
  190. Martin, R., & Blossom-Stach, C. (1986). Evidence of syntactic deficits in a fluent aphasic. Brain and Language, 28, 196–234.
  191. Mazoyer, B. M., Tzourio, N., Frak, V. C., Syrota, A., Murayama, N., & Levrier, O. (1993). The cortical representation of speech. Journal of Cognitive Neuroscience, 5, 467–479.
  192. McCarthy, R. A., & Warrington, E. K. (1988). Evidence for modality-specific meaning systems in the brain. Nature, 334, 428–430.
  193. McClelland, J. L. (1979). On the time relations of mental processes: An examination of systems of processes in cascade. Psychological Review, 86, 287–330.
  194. McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1–86.
  195. McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of the effects of context on perception: Part 1. Psychological Review, 88, 375–407.
  196. McClelland, J. L., & Rumelhart, D. (1986). Parallel distributed processing (Vol. 1). Cambridge: MIT Press.
  197. Menn, L., & Obler, L. K. (1990). Cross-language data and theories of agrammatism. In L. Menn & L. K. Obler (Eds.), Agrammatic aphasia (Vol. 2, pp. 1369–1389). Amsterdam: John Benjamins.
  198. Menn, L., Reilly, K. F., Hayashi, M., Kamio, A., Fujita, I., & Sasanuma, S. (1998). The interaction of preserved pragmatics and impaired syntax in Japanese and English aphasic speech. Brain and Language, 61, 183–225.
  199. Metter, E. J. (1987). Neuroanatomy and physiology of aphasia: Evidence from positron emission tomography. Aphasiology, 1, 3–33.
  200. Metter, E. J., Hanson, W. R., Jackson, C. A., Kempler, D., van Lancker, D., Mazziotta, J. C., & Phelps, M. E. (1990). Temporoparietal cortex in aphasia: Evidence from positron emission tomography. Archives of Neurology, 47, 1235–1238.
  201. Meyer, M., Friederici, A. D., & von Cramon, D. Y. (2000). Neurocognition of auditory sentence comprehension: Event-related fMRI reveals sensitivity to syntactic violations and task demands. Cognitive Brain Research, 9, 19–33.
  202. Miceli, G. A., Mazzucchi, A., Menn, L., & Goodglass, H. (1983). Contrasting cases of Italian agrammatic aphasics without comprehension disorder. Brain and Language, 19, 56–97.
  203. Miceli, G. A., Silveri, C., Villa, G., & Caramazza, A. (1984). The basis for the agrammatic’s difficulty in producing main verbs. Cortex, 20, 207–220.
  204. Milberg, W., & Blumstein, S. E. (1981). Lexical decision and aphasia: Evidence for semantic processing. Brain and Language, 14, 371–385.
  205. Milberg, W., Blumstein, S. E., & Dworetzky, B. (1988). Phonological processing and lexical access in aphasia. Brain and Language, 34, 279–293.
  206. Miller, D., & Ellis, A. W. (1987). Speech and writing errors in “neologistic jargonaphasia”: A lexical activation hypothesis. In M. Coltheart, G. Sartori, & R. Job (Eds.), The cognitive neuropsychology of language (pp. 253–271). London: Erlbaum.
  207. Miller, J. L., & Eimas, P. D. (1995). Speech perception: From signal to word. Annual Review of Psychology, 46, 467–492.
  208. Miyake, A., Carpenter, P. A., & Just, M. A. (1994). A capacity approach to syntactic comprehension disorders: Making normal adults perform like aphasic patients. Cognitive Neuropsychology, 11, 671–717.
  209. Mohr, J. P., Pessin, M. S., Finkelstein, S., Funkenstein, H. H., Duncan, G. W., & Davis, K. (1978). Broca’s aphasia: Pathological and clinical. Neurology, 28, 311–324.
  210. Moss, H. E., Tyler, L. K., & Jennings, F. (1997). When leopards lose their spots: Knowledge of visual properties in category-specific deficits for living things. Cognitive Neuropsychology, 14, 901–950.
  211. Mummery, C. J., Patterson, K., Price, C., Ashburner, J., Frackowiak, R. S., & Hodges, J. R. (2000). A voxel-based morphometry study of semantic dementia: Relationship between temporal lobe atrophy and semantic memory. Annals of Neurology, 47, 36–45.
  212. Mummery, C. J., Patterson, K., Wise, R. J. S., Vandenberghe, R., Price, C. J., & Hodges, J. R. (1999). Disrupted temporal lobe connections in semantic dementia. Brain, 122, 61–73.
  213. Murtha, S., Chertkow, H., Beauregard, M., & Evans, A. (1999). The neural substrate of picture naming. Journal of Cognitive Neuroscience, 11, 399–423.
  214. Naatanen, R., Lehtokoski, A., Lennes, M., Cheour, M., Huotilainen, M., Iivonen, A., Vainio, M., Alku, P., Ilmoniemi, R. J., Luuk, A., Allik, J., Sinkkonen, J., & Alho, K. (1997). Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385, 432–434.
  215. Naeser, M. A., Alexander, M. P., Helm-Estabrooks, N., Levine, H. L., Laughlin, S. A., & Geschwind, N. (1982). Aphasia with predominantly subcortical lesion sites. Archives of Neurology, 39, 2–14.
  216. Nespoulous, J.-L., Dordain, M., Perron, C., Ska, B., Caplan, D., Mehler, J., & Lecours, A. R. (1988). Agrammatism in sentence production without comprehension deficits: Reduced availability of syntactic structures and/or grammatical morphemes: A case study. Brain and Language, 33, 273–295.
  217. Nickels, L., & Howard, D. (1995). Phonological errors in aphasic naming: Comprehension, monitoring, and lexicality. Cortex, 31, 209–237.
  218. Nobre, A. C., Allison, T., & McCarthy, G. (1994). Word recognition in the human inferior temporal lobe. Nature, 372, 260–264.
  219. O’Seaghdha, P. G., & Marin, J. W. (1997). Mediated semantic-phonological priming: Calling distant relatives. Journal of Memory and Language, 36, 226–252.
  220. Ojemann, G., & Mateer, C. (1979). Human language cortex: Localization of memory, syntax, and sequential motor-phoneme identification systems. Science, 205, 1401–1403.
  221. Pate, D. S., Saffran, E. M., & Martin, N. (1987). Specifying the nature of the production impairment in a conduction aphasic: A case study. Language and Cognitive Processes, 2, 43–84.
  222. Penfield, W., & Roberts, L. (1959). Speech and brain mechanisms. Princeton: Princeton University Press.
  223. Perani, D., Cappa, S. F., Bettinardi, V., Bressi, S., Gorno-Tempini, M. L., Matarrese, M., & Fazio, F. (1995). Different neural systems for the recognition of animals and man-made tools. NeuroReport, 6, 1637–1641.
  224. Peterson, R. R., & Savoy, P. (1998). Lexical selection and phonological encoding during language production: Evidence for cascaded processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 539–557.
  225. Petersen, S. E., Fox, P. T., Posner, M. I., Mintun, M. A., & Raichle, M. E. (1988). Positron emission tomographic studies of the cortical anatomy of single-word processing. Nature, 331, 585–589.
  226. Pick, A. (1913). Die agrammatischen Sprachstörungen. Berlin: Springer.
  227. Plaut, D. C. (1996). Relearning after damage in connectionist networks: Toward a theory of rehabilitation. Brain and Language, 52, 25–82.
  228. Plaut, D. C., & Shallice, T. (1993). Deep dyslexia: A case study of connectionist neuropsychology. Cognitive Neuropsychology, 10, 377–500.
  229. Praamstra, P., Hagoort, P., Maasen, B., & Crul, T. (1991). Word deafness and auditory cortical function: A case history and hypothesis. Brain, 114, 1197–1225.
  230. Pulvermuller, F. (1995). Agrammatism: Behavioral description and neurobiological explanation. Journal of Cognitive Neuroscience, 7, 165–181.
  231. Rapp, B., Benzing, L., & Caramazza, A. (1997). The autonomy of lexical orthography. Cognitive Neuropsychology, 14, 71–104.
  232. Rapp, B., & Goldrick, M. (2000). Discreteness and interactivity in spoken word production. Psychological Review, 107, 460–499.
  233. Rasmussen, T., & Milner, B. (1977). The role of early left-brain injury in determining lateralization of cerebral speech functions. In S. J. Dimond & D. A. Blizard (Eds.), Evolution and lateralization of the brain (Vol. 299 of Annals of the New York Academy of Sciences, pp. 355–369). New York: New York Academy of Sciences.
  234. Roach, A., Schwartz, M. F., Martin, N., Grewal, R. S., & Brecher, A. (1996). The Philadelphia naming test: Scoring and rationale. Clinical Aphasiology, 24, 121–133.
  235. Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.
  236. Rugg, M. D. (1999). Functional neuroimaging in cognitive neuroscience. In C. Brown & P. Hagoort (Eds.), The neurocognition of language (pp. 15–36). Oxford: Oxford University Press.
  237. Ruml, W., & Caramazza, A. (2000). An evaluation of a computational model of lexical access: Comments on Dell et al. (1997). Psychological Review, 107, 609–634.
  238. Sacchett, C., & Humphreys, G. W. (1992). Calling a squirrel a squirrel but a canoe a wigwam: A category-specific deficit for artifactual objects and body parts. Cognitive Neuropsychology, 9, 73–86.
  239. Saffran, E. M. (1982). Neuropsychological approaches to the study of language. British Journal of Psychology, 73, 317–337.
  240. Saffran, E. M., Dell, G. S., & Schwartz, M. F. (2000). Computational modeling of language disorders. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (pp. 933–948). Cambridge: MIT Press.
  241. Saffran, E. M., Marin, O. S. M., & Yeni-Komshian, G. (1976). An analysis of speech perception in word deafness. Brain and Language, 3, 209–228.
  242. Saffran, E. M., & Martin, N. (1997). Effects of structural priming on sentence production in aphasics. Language and Cognitive Processes, 12, 877–882.
  243. Saffran, E. M., & Schwartz, M. F. (1994). Of cabbages and things: Semantic memory from a neuropsychological perspective—A tutorial review. In C. Umilta & M. Moscovitch (Eds.), Attention and performance XV: Conscious and nonconscious information processing (pp. 507–536). Cambridge, MA: Bradford.
  244. Saffran, E. M., Schwartz, M. F., & Marin, O. S. M. (1980a). Evidence from aphasia: Isolating the components of a production model. In B. Butterworth (Ed.), Language production (Vol. 1, pp. 221–241). London: Academic Press.
  245. Saffran, E. M., Schwartz, M. F., & Marin, O. S. M. (1980b). The word order problem in agrammatism: I. Production. Brain and Language, 10, 249–262.
  246. Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (1990). Exploring the time-course of lexical access in production: Picture-word interference studies. Journal of Memory and Language, 29, 86–102.
  247. Schuell, H., & Jenkins, J. J. (1961). Reduction of vocabulary in aphasia. Brain, 84, 243–261.
  248. Schuell, H., Jenkins, J. J., & Carroll, J. M. (1962). Factor analysis of the Minnesota Test for differential diagnosis of aphasia. Journal of Speech and Hearing Research, 5, 439–469.
  249. Schwartz, J., & Tallal, P. (1980). Rate of acoustic change may underlie hemispheric specialization for speech perception. Science, 207, 1380–1381.
  250. Schwartz, M. F. (1987). Patterns of speech production deficit within and across aphasia syndromes: Application of a psycholinguistic model. In M. Coltheart, R. Job, & G. Sartori (Eds.), The cognitive neuropsychology of language. London: Erlbaum.
  251. Schwartz, M. F., & Brecher, A. (2000). A model-driven analysis of severity, response characteristics, and partial recovery in aphasics’ picture naming. Brain and Language, 73, 62–91.
  252. Schwartz, M. F., Linebarger, M. C., & Saffran, E. M. (1985). The status of the syntactic deficit theory of agrammatism. In M.-L. Kean (Ed.), Agrammatism. Orlando, FL: Academic Press.
  253. Schwartz, M. F., Linebarger, M. C., Saffran, E. M., & Pate, D. S. (1987). Syntactic transparency and sentence interpretation in aphasia. Language and Cognitive Processes, 2, 85–113.
  254. Schwartz, M. F., Marin, O. S. M., & Saffran, E. M. (1979). Dissociation of language function in dementia: A case study. Brain and Language, 7, 277–306.
  255. Schwartz, M. F., Wilshire, C. E., Gagnon, D., & Polansky, M. (2002). The origins of nonword phonological errors in aphasics’ picture naming. Manuscript in progress.
  256. Shallice, T. (1988). From neuropsychology to mental structure. Cambridge: Cambridge University Press.
  257. Shankweiler, D., Crain, S., Gorrell, P., & Tuller, B. (1989). Reception of language in Broca’s aphasia. Language and Cognitive Processes, 4, 1–33.
  258. Shattuck-Hufnagel, S. (1979). Speech errors as evidence for a serial ordering mechanism in speech production. In W. E. Cooper & E. C. T. Walker (Eds.), Sentence processing: Psycholinguistic studies presented to Merrill Garrett (pp. 295–342). Hillsdale, NJ: Erlbaum.
  259. Shindo, M., Kaga, K., & Tanaka, Y. (1991). Speech discrimination and lip reading in patients with word deafness or auditory agnosia. Brain and Language, 40, 153–161.
  260. Simons, J. S., & Lambon Ralph, M. A. (1999). The auditory agnosias. Neurocase, 5, 379–406.
  261. Smith, E. E., Jonides, J., Koeppe, R. A., Awh, E., Schumacher, E. H., & Minoshima, S. (1995). Spatial vs. object working memory: PET investigations. Journal of Cognitive Neuroscience, 7, 337–356.
  262. Snowden, J. S., Goulding, P. J., & Neary, D. (1989). Semantic dementia: A form of circumscribed cerebral atrophy. Behavioural Neurology, 2, 167–182.
  263. Snowden, J. S., Griffiths, H., & Neary, D. (1994). Semantic dementia: Autobiographical contribution to preservation of meaning. Cognitive Neuropsychology, 11, 265–288.
  264. Stemberger, J. P. (1985). An interactive activation model of language production. In A. Ellis (Ed.), Progress in the psychology of language (pp. 143–186). London: Erlbaum.
  265. Stewart, F., Parkin, A. J., & Hunkin, N. M. (1992). Naming impairments following recovery from herpes simplex encephalitis: Category specific? Quarterly Journal of Experimental Psychology, 44A, 261–284.
  266. Stromswold, K., Caplan, D., Alpert, N., & Rauch, S. (1996). Localization of syntactic comprehension by positron emission tomography. Brain and Language, 52, 452–473.
  267. Swaab, T., Brown, C., & Hagoort, P. (1997). Spoken sentence comprehension in aphasia: Event-related potential evidence for a lexical integration deficit. Journal of Cognitive Neuroscience, 9, 39–66.
  268. Takahashi, N., Kawamura, M., Shinotou, H., Hirayama, K., Kaga, K., & Shindo, M. (1992). Pure word deafness due to left hemisphere damage. Cortex, 28, 295–303.
  269. Thompson-Schill, S. L., D’Esposito, M., Aguirre, G. K., & Farah, M. J. (1997). Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation. Proceedings of the National Academy of Sciences, 94, 14792–14797.
  270. Tranel, D., Damasio, H., & Damasio, A. (1997). A neural basis for the retrieval of conceptual knowledge. Neuropsychologia, 35, 1319–1327.
  271. Trueswell, J. C., Tanenhaus, M. K., & Garnsey, S. M. (1994). Semantic influences on parsing: Use of thematic role information in syntactic ambiguity resolution. Journal of Memory and Language, 33, 285–318.
  272. Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory. New York: Academic Press.
  273. Tyler, L. K. (1992). Spoken language comprehension: An experimental approach to disordered and normal processing. Cambridge: MIT Press.
  274. Tyler, L. K., Ostrin, R. K., Cooke, M., & Moss, H. E. (1995). Automatic access of lexical information in Broca’s aphasics: Against the automaticity hypothesis. Brain and Language, 48, 131–162.
  275. Ullman, M. T. (2001). A neurocognitive perspective on language: The declarative/procedural model. Nature Reviews Neuroscience, 2, 717–726.
  276. Uylings, H. B. M., Malofeeva, L. J., Bogolepova, I. N., Amunts, K., & Zilles, K. (1999). Broca’s language area from a neuroanatomical and developmental perspective. In C. Brown & P. Hagoort (Eds.), The neurocognition of language (pp. 319–336). Oxford: Oxford University Press.
  277. Van Orden, G. C., Pennington, B. F., & Stone, G. O. (2001). What do double dissociations prove? Cognitive Science, 25, 111–172.
  278. Vandenberghe, R., Price, C., Wise, R., Josephs, O., & Frackowiak, R. S. J. (1996). Functional anatomy of a common semantic system for words and pictures. Nature, 383, 254–256.
  279. Vanier, M., & Caplan, D. (1990). CT-scan correlates of agrammatism. In L. Menn & L. K. Obler (Eds.), Agrammatic aphasia: A cross-language narrative source book (pp. 37–114). Amsterdam: Benjamins.
  280. Vigliocco, G., Antonini, T., & Garrett, M. F. (1997). Grammatical gender is on the tip of Italian tongues. Psychological Science, 8, 314–317.
  281. Vitevitch, M. S. (1997). The neighborhood characteristics of malapropisms. Language and Speech, 40, 211–228.
  282. Vitevitch, M. S. (2001). The influence of phonological similarity neighborhoods on speech production. Manuscript submitted for publication.
  283. Vitevitch, M. S., & Sommers, M. S. (2001). The role of phonological neighbors in the tip-of-the-tongue state. Manuscript submitted for publication.
  284. Vousden, J. I., Brown, G. D., & Harley, T. A. (2000). Serial control of phonology in speech production: A hierarchical model. Cognitive Psychology, 41, 101–175.
  285. Warren, R. M. (1970). Perceptual restoration of missing speech sounds. Science, 167, 392–393.
  286. Warrington, E. K. (1975). The selective impairment of semantic memory. Quarterly Journal of Experimental Psychology, 27, 635–657.
  287. Warrington, E. K., & McCarthy, R. A. (1983). Category-specific access dysphasia. Brain, 106, 859–878.
  288. Warrington, E. K., & McCarthy, R. A. (1987). Categories of knowledge: Further fractionations and an attempted integration. Brain, 110, 1273–1296.
  289. Warrington, E. K., & Shallice, T. (1984). Category-specific semantic impairments. Brain, 107, 829–853.
  290. Weiller, C., Isensee, C., Rijntejes, M., Huber, W. I., Muller, S., Bier, D., Dutschka, K., Woods, R. P., North, J., & Diener, H. C. (1995). Recovery from Wernicke’s aphasia: A positron emission tomographic study. Annals of Neurology, 37, 723–732.
  291. Wernicke, C. (1874). Der aphasische Symptomencomplex. Breslau: Cohn and Weigert.
  292. Willmes, K., & Poeck, K. (1993). To what extent can aphasic syndromes be localized? Brain, 116, 1527–1540.
  293. Wulfeck, B. (1988). Grammaticality judgments and sentence comprehension in aphasia. Journal of Speech and Hearing Research, 31, 72–81.
  294. Yaqub, B. A., Gascon, G. G., Al-Nosha, M., & Whitaker, H. (1988). Pure word deafness (acquired verbal auditory agnosia) in an Arabic speaking patient. Brain, 111, 457–466.
  295. Zarahn, E., Aguirre, G., & D’Esposito, M. (1997). A trial-based experimental design for fMRI. Neuroimage, 6, 122–138.
  296. Zingeser, L. B., & Berndt, R. S. (1990). Retrieval of nouns and verbs by agrammatic and anomic aphasics. Brain and Language, 39, 14–32.