Text Comprehension and Discourse Processing Research Paper


Psychology is a newcomer to discourse analysis, which has been practiced for a long time by other disciplines. Indeed, discourse analysis in the form of rhetoric was among the first disciplines studied in our culture. This tradition continues strongly to this day, but it has spawned many offshoots, both within philosophy and beyond: Formal semantics has a long tradition (e.g., Seuren, 1985); within linguistics, text linguistics became important in the 1970s (Halliday & Hasan, 1976; van Dijk, 1972); natural language processing by computers and computational linguistics became prominent in the 1980s (e.g., Jurafsky & Martin, 2000); and, at about the same time, models of how discourse is processed were developed through cooperation of linguists, computer scientists, and psychologists (e.g., W. Kintsch & van Dijk, 1978; Schank & Abelson, 1977) as a branch of the new cognitive science. Research of the latter type is the concern of this research paper.



Before we focus on psychological process models of discourse comprehension, a comment is required on the two major issues that have existed throughout the long history of discourse analysis and that are still unresolved. The first controversy has to do with a difference in viewpoint. Some discourse analysts view language essentially as a means for information transmission. A speaker or writer intends to transmit information to a listener or reader. Information is factual, propositional. Researchers in this tradition focus on story understanding, memory for factual material presented in texts, learning from texts, and the inferences involved in this process. Examples of this approach are, for instance, the psychological work of W. Kintsch and van Dijk (1978) and the linguistic work reviewed by Lyons (1977). However, information transmission is only one function of language. Social interaction is another, and a competing research tradition focuses on this aspect of discourse. Language is often used not to transmit information, but rather to establish social roles, to regulate social interactions, to amuse, and to entertain. Labov (1972) or H. H. Clark (1996) exemplify this research tradition. Although most students of language would agree that both approaches are legitimate and valuable, the obviously desirable integration of these fields of research has not yet been achieved.

The second controversy concerns the nature of language itself. Since the days of Aristotle and Plato, some have viewed language as basically orderly and logical, at least in its underlying essence, while others have claimed that language is messy and anomalous by its very nature. The former tradition has tended to develop logical and mathematical theories of language. Such theories can be both elegant and highly informative (e.g., Barwise & Perry, 1983, for semantics; Chomsky, 1965, for syntax). However, the opposing tradition has always delighted in pointing out the gap between theory and reality, as well as the seemingly boundless irrationality of language and language use. Although this conflict continues unabated today, an intermediate position has emerged in recent years that may yet alter the nature of this debate. Connectionist models of language (Elman et al., 1996; also hybrid models like W. Kintsch, 1998) are formal, with all the advantages that mathematical models provide, but they do not employ the concept of logical rules. Thus, connectionist models may be better able to account for the disorderly part of language while retaining the important advantages of a mathematical model.




The comprehension processes involved in reading a text and in listening to spoken discourse are essentially the same. Texts and conversations are very different in their properties and structure, task demands, and contextual constraints, but the comprehension processes are similar. That is, both make demands on working memory, both require relevant background knowledge, and both are constructive processes in which inferences and construction play a crucial role. We describe the features that set apart reading comprehension and comprehension of conversations, but most of what we have to say in this research paper holds for both.

In this research paper we first discuss the role of memory in text comprehension, focusing on short-term working memory and long-term working memory. Then we review studies that are concerned with what people remember from reading a text, and how they learn from reading a text. Of particular importance here is what a reader has to already know in order to be able to acquire new knowledge from a text, and the role of constructive processes in comprehension and learning. We then turn to a consideration of current models of comprehension and knowledge representations. Finally, we discuss experiments investigating the factors that influence comprehension, making comprehension easier or making it more difficult.

Memory and Text Comprehension

Working Memory

Text comprehension is a task that requires processing and integration of a sequential series of symbols; as such, memory processes—especially working memory, due to its storage and computational abilities—are strongly implicated in comprehension ability (Carpenter, Miyake, & Just, 1994; Just & Carpenter, 1987). Unlike early characterizations of working memory as a storage system used to hold a few chunks of information, working memory has come to be seen as a limited resource for which processing and storage demands compete. Working memory can be seen as a sort of attentional work space that keeps information active for short-term use while it directs cognitive resources for task performance.

It is easy to see how the demands required by text comprehension should draw heavily on working memory resources. At the same time text is decoded and processed, important ideas or current propositions must be maintained in memory and retrieved at key points in the comprehension process. Maintaining ideas or propositions from a text at the same time new text is analyzed is necessary to form inferences, develop an understanding of text coherence, recognize inconsistencies, and so on. Accordingly, researchers have come to regard working memory as a key component of comprehension processes and the possible source of individual differences in comprehension.

Daneman and Carpenter (1980; 1983) theorized that individual differences in working memory capacity could explain individual differences in reading comprehension. They argued that reading processes of poor readers make heavy demands on working memory that result in a trade-off compromising the working memory capacity for maintaining text information. As a result, poor readers are unable to make the appropriate connections between text necessary to recognize inconsistencies and, presumably, to link text and form inferences necessary for expert comprehension. To measure the functional capacity of working memory for reading, Daneman and Carpenter developed a measure called the reading-span test. This test requires readers to read aloud a series of unrelated sentences at the same time that they memorize the final word in each sentence. Sentences are presented in sets containing varied numbers of sentences, and the largest number of sentences for which a participant can recall all memorized final words in at least 60% of the sets of that size is defined as the reading span. Reading span differs among individuals—from about 2 to 5.5 for college students (Just & Carpenter, 1992)—but can also be influenced by text complexity or other demanding types of text processing (e.g., linguistic ambiguity or text distance).
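
To make the scoring procedure concrete, the following is a minimal sketch of how a reading span might be computed from trial data, assuming the 60% criterion described above. The data format and function name are illustrative only; actual administrations of the test involve additional scoring rules.

```python
# Illustrative scoring of the reading-span test, assuming the criterion
# described above: the span is the largest set size at which the reader
# recalls all sentence-final words in at least 60% of the sets of that
# size. Real administrations involve further scoring rules.

def reading_span(results, criterion=0.60):
    """results maps set size -> list of booleans, one per set of that
    size; True means every final word in the set was recalled."""
    span = 0
    for size in sorted(results):
        sets = results[size]
        if sets and sum(sets) / len(sets) >= criterion:
            span = size
    return span

# Hypothetical participant: perfect at size 2, 2 of 3 sets correct at
# size 3, only 1 of 3 at size 4 -> reading span of 3.
data = {2: [True, True, True], 3: [True, True, False], 4: [True, False, False]}
print(reading_span(data))  # -> 3
```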

Empirical evidence demonstrates that working memory capacity, as measured by the reading span, is a reliable predictor of proficiency in text processing. Daneman and Carpenter (1980, 1983) have linked reading spans to various tests of reading comprehension (including the verbal SAT) and have demonstrated that reading span can reliably predict the likelihood of a reader discovering inconsistencies in a text. In a series of experiments, Carpenter et al. (1994) demonstrated that reading span accounted for systematic differences in the way college students processed text. These authors argue that individuals with limited working memory capacity are disproportionately affected by manipulations that increase the demand on working memory resources during comprehension. The decline in performance by low-span individuals occurs regardless of whether the increase in demand is integrated into the comprehension task (for example, increasing syntactic complexity or introducing ambiguity into a text) or represents a demand external to the comprehension process (for example, a set of unrelated memory items).

Regardless of the source of memory demand, it is important to note that working memory capacity does not just affect the amount of information that can be retained during reading of a text. In fact, many of the systematic comprehension differences associated with working memory capacity reflect higher-level processes of text integration and representation. For example, Carpenter et al. (1994) found that high-span readers were more likely to keep multiple representations of a homograph active until context could be determined; they also found that high-span readers were better able than low-span readers to integrate text information separated by increasing amounts of intervening text. Similarly, Whitney, Ritchie, and Clark (1991) found that individuals with high working memory capacity were better able to maintain ambiguous interpretations of a text, whereas low-span individuals were much more likely to choose specific text interpretations earlier in their reading. Consistent with all these findings, calculation of the demands a text is likely to place on working memory has been shown to predict the actual comprehensibility of the text (Britton & Gulgoz, 1991; J. R. Miller & Kintsch, 1980).

Empirical evidence also ties working memory capacity directly to comprehension processes. Singer and Ritchot (1996) found that individuals with high reading spans were better able to verify bridging inferences about a text. Singer, Andrusiak, Reisdorf, and Black (1992) found that higher working memory capacity supported inference processing. Other studies have confirmed that working memory consistently predicts inference making and text learning (Haenggi & Perfetti, 1994; Myers, Cook, Kambe, Mason, & O’Brien, 2000). Finally, research has demonstrated that known components of working memory can be tied to specific types of inferential processes. Friedman and Miyake (2000) demonstrated that maintaining the spatial and causal aspects of a situation model—a type of cognitive representation of comprehended text that is discussed later in this research paper—could be tied to the visuospatial and verbal components, respectively, of working memory.

The implications of these studies are clear: Working memory has important and measurable ties to comprehension processes and, all else remaining equal, individuals with high working memory capacity are at a comprehension advantage. However, it should be noted that although working memory capacity has been shown to have a reliable influence on measures of inference and learning, other factors can be equally important in predicting comprehension skills. For example, domain knowledge can strongly influence the amount of learning an individual takes from a text; high domain knowledge can compensate for poor decoding skills, low working memory capacity, very demanding texts, and so on. As we discuss later in this research paper, many factors can influence the ultimate comprehension performance of an individual, and no single factor is sufficient to predict success or failure in comprehension.

Long-Term Working Memory in Discourse Comprehension

Working memory, as previously discussed, is our name for the information that is active and available in consciousness. Whereas text comprehension clearly depends upon active processing, storage, and retrieval of information, working memory is strictly limited in sheer capacity and in the duration for which items are kept active. Working memory limitations cannot explain empirical evidence that shows capable readers to be relatively insensitive to interruptions, to be resistant to interference, and to have accurate recall that far exceeds the capacity of working memory (for a summary, see W. Kintsch, 1998). Thus, working memory is clearly insufficient to manage the heavy demands of comprehension. Discourse comprehension requires ready access to a large amount of information, significantly more than laboratory measurements of the capacity of working memory indicate is available. Van Dijk and Kintsch (1983, p. 347) list the following memory requirements for discourse comprehension: information that must be available for analysis and reanalysis; graphemic and phonological information; words and phrases, often whole sentences; the propositional structure of the text, microstructure as well as macrostructure (the concepts of text microstructure and macrostructure will be discussed later in this research paper; for now, consider the macrostructure to represent the high-level gist of a text and the microstructure to represent its detailed content); the emerging situation model; lexical knowledge and general world knowledge; and goals, subgoals, and the general task context. Each of these components of the memory system involved in text comprehension could exceed the capacity of short-term working memory—but they are all required for the process of comprehension, and are demonstrably used in that process. How can these facts about memory demands in comprehension be reconciled with the strong laboratory evidence for a strictly limited working memory capacity of three or four chunks?

Psychologists have sometimes despaired in the face of this puzzle, asserting that real-life memory is totally different from memory studied in the laboratory. Laboratory results have been claimed to be unnatural, irrelevant, and hence useless (Jenkins, 1974). Recalling information read in the daily paper at breakfast or retelling the complicated plot of a novel is quite easy; however, it takes an hour of hard work to memorize a list of 100 random words in the laboratory! An individual cannot repeat more than about nine digits on a digit span test, but the experienced physician keeps in mind seemingly endless chunks of patient information, laboratory data, relevant disease knowledge, alternative diagnoses, and so on. Such information can be shown to influence the physician’s reasoning and decision processes—but how could it fit into the limited capacity working memory we have identified in laboratory research?

Ericsson and Kintsch (1995) have provided an answer to these questions, and were able to successfully reconcile everyday memory phenomena with the results of laboratory studies of memory since the days of Ebbinghaus. Their argument is based on a distinction between short-term working memory and long-term working memory. Short-term working memory is what has typically been studied in the laboratory; it plays an important role in discourse comprehension, as discussed in the previous section. Long-term working memory (LTWM) is different: It is not capacity limited, but it only functions under certain rather restrictive conditions. Nevertheless, these are the conditions under which we observe prodigious feats of memory in real life.

Long-term working memory (see also W. Kintsch, 1998; W. Kintsch, Patel, & Ericsson, 1999) is a skill experts acquire. In fact, becoming an expert in any cognitive task involves the acquisition of LTWM skills. The skill consists in the ability to access information in long-term memory via cues in short-term working memory without time-consuming and resource-demanding retrieval operations. Experts can access relevant information in their long-term memory quickly (in about 400 ms) and effortlessly. This accessible portion of their long-term memory becomes part of their working memory—their LTWM. How much information can be accessed depends on the nature and efficiency of the retrieval structures experts have formed, but there are no capacity limitations. Thus, experts retrieve task-relevant knowledge and experiences quickly and without effort, and recall what they did with ease. Examples of such expert memory are the physician making a medical diagnosis, the chess master playing blindfold chess—and all of us when we use our expertise in reading familiar materials, such as a story or the typical newspaper article.

Long-term working memory cannot be used in traditional laboratory experiments. Ebbinghaus wanted to study what he saw as pure memory unaffected by our daily experience; hence he invented the nonsense syllable. And although modern psychologists no longer use the nonsense syllable, they have followed Ebbinghaus's lead in excluding or controlling the role of experience in their experiments as carefully as possible. The types of tasks used in traditional laboratory experiments thus remove the essential component of LTWM: experience. When it comes to repeating a string of digits or memorizing a list of words, we are all novices, and we cannot use whatever LTWM skills we might possess. However, when we read an article or participate in a conversation on a familiar topic, a lifetime of experiences and a rich store of knowledge become relevant. We comprehend as experts and remember as experts. Of course, our expertise is limited to certain familiar, frequently experienced topics, or to some restricted professional domain. If we read or listen outside our domain of expertise, we immediately become aware of our inability to comprehend what we read because we cannot activate the required background knowledge. In unfamiliar domains, our recall is equally limited because we do not have the knowledge that would allow the proficient and easy recall that occurs with familiar texts. Unfamiliar domains restrict the use of experience just as in the laboratory, where experimenters carefully design their experiments in such a way as to prevent us from using whatever knowledge we might have.

For the remainder of this section, assume someone is reading a simple text in a familiar domain—a straightforward story, for example. Alternatively, one could assume that someone is listening to a story, for example a soap opera. Although soap opera stories are rarely straightforward, they (like most stories we encounter) are about human affairs (no pun intended), motivations, actions, character—things we have experienced throughout our lives. We are familiar with these concepts in the form of texts, but primarily we understand them through our actions and interactions in the social world. Thus, we are highly familiar with most stories in general, with the words and syntax used in the story, and with the schematic structure of the story itself. In other words, the reader is an expert. In reading such a text, LTWM comes into play in two ways.

First of all, the reader activates relevant knowledge automatically. The necessary concepts, frames, scripts, schemata (Schank & Abelson, 1977), and personal experiences immediately link information in the text held in working memory to the reader’s general knowledge and episodic memory. That does not mean that this knowledge enters into short-term working memory, or becomes conscious; it only means that it is available to be used, should there be any reason to use it. Text comprehension researchers have described this process as one of making inferences (W. Kintsch, 1993). This is not always an accurate description, if we mean by making an inference that some statement not directly contained in the text is derived from the text with the help of relevant background knowledge and becomes part of the mental representation of the text. That happens, or can happen, but knowledge activation does not necessarily imply an explicit inference. Activated knowledge simply becomes available for further use—if there is further use. There is a definite need for knowledge activation in many experiments, when the experimenter asks a question or presents a relevant word in a lexical decision task. In uncontrived situations the need for knowledge activation may arise spontaneously, as when a reader detects a gap in his or her understanding that can only be filled through some problem-solving activity involving that knowledge. But in the normal course of automatic reading comprehension, activated knowledge merely stands by in LTWM. For example, consider the bridging inference involved in the well-known sentence pair of Haviland and Clark (1974):

We checked the picnic supplies. The beer was warm.

Understanding this sentence does not involve the inference Picnic supplies normally include beer, in the sense that this inference statement becomes an explicit part of the mental representation of this text. Rather, picnic supplies as well as beer both make strongly associated information, such as beer is frequently a part of picnic supplies, available in LTWM. This requires a little extra processing time; 219 ms in this experiment, in comparison with a control sentence pair in which beer was explicitly mentioned in the first sentence. This knowledge activation suffices to establish the coherence between the two sentences and allows the comprehension process to proceed without the reader ever becoming conscious of a bridging problem. Note that this use of LTWM entirely depends on the availability of strong automatic retrieval links between the words of the sentence and the contents of long-term memory. Consider a different example:

The weather was calm. Connors used Kevlar sails.

No one but an expert sailor will automatically find this to be a coherent text, because there is nothing in our long-term memory that strongly links calm weather either to Connors or to Kevlar sails. We might figure out that perhaps Kevlar sails are good for calm weather—but that is not an automatic process. Rather, it is a controlled problem-solving process with significant time and resource demands. Long-term working memory functions only in those situations in which we can rely on strongly overlearned knowledge: that is, in domains where we are experts.
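
The contrast between the two sentence pairs can be caricatured as a threshold on associative strength: retrieval through a strong, overlearned link is automatic and cheap, whereas a weak link forces slow, controlled problem solving. The sketch below is ours; the strengths, threshold, and time labels are invented for illustration and are not claims about actual memory parameters.

```python
# Caricature of LTWM retrieval: a strong, overlearned association is
# retrieved automatically; a weak one forces controlled problem solving.
# All numbers and labels are invented for illustration.

ASSOCIATIVE_STRENGTH = {
    ("picnic supplies", "beer"): 0.8,        # everyday, overlearned knowledge
    ("calm weather", "Kevlar sails"): 0.05,  # expert sailing knowledge only
}

AUTOMATIC_THRESHOLD = 0.5

def bridge(cue, target):
    strength = ASSOCIATIVE_STRENGTH.get((cue, target), 0.0)
    if strength >= AUTOMATIC_THRESHOLD:
        return "automatic retrieval: fast, effortless coherence link"
    return "controlled problem solving: slow and resource demanding"

print(bridge("picnic supplies", "beer"))
print(bridge("calm weather", "Kevlar sails"))
```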

A second way in which LTWM plays a role in text comprehension is by ensuring that the mental representation of the text that already has been constructed remains readily accessible as reading continues. If we read something, it is not only necessary to link what we read with our long-term store of knowledge and experiences, but it is also necessary that we link what we read now with relevant earlier portions of the text. These portions cannot be held in short-term working memory. We know from our own experience as well as from experimental studies (e.g., Jarvella, 1971) that no more than the current sentence—if it is not too long—is held in the focus of attention during reading. We also know that we effortlessly retrieve referents and relevant propositions from earlier portions of the text when needed to construct the meaning of the current sentence. Comprehending a text implies linking its various parts effectively in such a way as to permit easy retrieval. That is to say, comprehension implies the construction of a new network in LTWM. Of course, unlike the well-established links between text and long-term knowledge, the newly generated textbase is subject to forgetting.

Thus, LTWM during text comprehension includes short-term working memory—the sentence currently in the focus of attention—plus relevant knowledge activated from long-term memory that is directly linked via strong retrieval structures to the current contents of short-term working memory. It also includes the textbase (including contextual information, such as reading goals) that has already been generated, of which the presently worked-on sentence is a continuation.
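
As a rough illustration, the components just listed can be pictured as a single structure; the field names below are ours, not van Dijk and Kintsch's.

```python
# Illustrative snapshot of LTWM during reading, following the components
# described above. Field names are ours, not the theory's.

from dataclasses import dataclass, field

@dataclass
class LongTermWorkingMemory:
    focus_sentence: str                                     # short-term WM: sentence in focus
    activated_knowledge: set = field(default_factory=set)   # reached via retrieval structures
    textbase_so_far: list = field(default_factory=list)     # text representation built so far
    task_context: list = field(default_factory=list)        # goals, reading purpose

ltwm = LongTermWorkingMemory(
    focus_sentence="The beer was warm.",
    activated_knowledge={"beer is often part of picnic supplies"},
    textbase_so_far=["CHECK[we, picnic-supplies]"],
    task_context=["understand the story"],
)
print(ltwm.activated_knowledge)
```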

Long-term working memory as previously described is incidental, an inherent by-product of the process of text comprehension. This is also the case for the physician and chess master. The chess master learns to play chess—not to memorize chess boards. It is worth noting, however, that LTWM can be intentional—as in the case of the runner who invented an encoding and retrieval system that allowed him to memorize long sequences of random digits (Ericsson, Chase, & Faloon, 1980), or the waiter who learned to use retrieval structures to memorize the orders of his customers (Ericsson & Polson, 1988). However, what we all do naturally in text comprehension is functionally equivalent to the memorial strategies employed in these cases.

Aspects of Comprehension

Previously in this research paper, multiple facets of comprehension have been alluded to, but not discussed. Comprehension is a complex process. Multiple factors influence the comprehension of individuals; these factors include characteristics of the text as well as those of the reader or comprehender. Further, the goal of comprehension—whether memory for information or true understanding of such—can be influenced by factors both internal and external to the learner.

Memory for Text

Often when people talk about learning from a text, they speak about recalling information from that text. It is not surprising that many students equate learning from a text with memorizing its content, because traditional tests of learning have focused primarily on the recall of information. Multiple-choice, fill-in-the-blank, and true-or-false components from standard educational tests typically require only surface memory for the source information. However, there is a distinction to be made between memory for a text and learning from a text (W. Kintsch, 1998). Three levels of text representation have been identified by van Dijk and Kintsch (1983): the surface level, the textbase, and the situation model. The surface level and textbase relate to memory for a text, whereas the situation model concerns learning from a text. Memory for a text reflects superficial recognition or recall of information, whereas true learning from a text, as discussed in the next section, involves integration of text material with prior knowledge.

Memory for a text can exist at several levels and typically is demonstrated by recognition or recall tasks. Being able to identify or verify exact passages, sentences, or words that appeared in a text involves surface-level knowledge of the text. This type of task involves recognition of previously read text and is the most superficial type of text processing in that it requires no understanding of the text’s meaning. One can memorize a sentence or learn to recite a poem without ever really understanding the contents (W. Kintsch, 1998). But when most individuals attempt to memorize a text, they are not really trying to faithfully encode the surface-level representation of the text. Normally they are attempting to create a textbase representation of the text.

Creation of a textbase differs from surface-level knowledge of a text in that the textbase does not necessarily represent the exact words or sentences used in the text. Instead, the textbase contains a representation of the ideas or propositions contained within a text. The information contained in the textbase can be tied directly to the information derived from the text, without any additional knowledge or inferences that the reader might contribute to such information (W. Kintsch, 1998). Thus, it is entirely possible for a textbase representation to be incomplete or incoherent. This is especially true because texts often are not completely explicit and require the reader to make inferences to connect ideas in the text. A textbase representation, then, requires readers to generate a faithful representation of the information contained in a text, but does not require them to form more than a superficial level of understanding about that information (McNamara, Kintsch, Songer, & Kintsch, 1996).
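
To make the distinction concrete, here is one highly simplified way a sentence's surface form and its textbase propositions might be represented; the notation is a toy, not the full propositional analysis of van Dijk and Kintsch (1983).

```python
# Toy contrast between the surface level (exact wording) and the
# textbase (ideas/propositions extracted from it). The propositional
# notation is a drastic simplification.

surface = "We checked the picnic supplies."

textbase = [
    ("CHECK", "we", "supplies"),   # predicate with its arguments
    ("PICNIC", "supplies"),        # modifier as its own proposition
]

# A paraphrase changes the surface form but leaves the textbase intact:
paraphrase = "The picnic supplies were checked by us."
```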

As previously noted, text memory generally is tested using recognition and recall methods. Sentence recognition tasks reveal that most individuals have surprisingly good and long-lasting memory for what they read (W. Kintsch, 1998). Interestingly, various studies have found that the recognizability of a sentence is related to its importance to the text: Major text propositions are recognized more easily than minor, detail-oriented propositions (C. I. Walker & Yekovich, 1984). Not only are the text-relevant characteristics of the target sentence important, but characteristics of the distractor sentences also influence the likelihood that a reader will incorrectly “recognize” a distractor as a sentence from the text. Distractors that are more relevant to the reader’s representation of the text tend to be confused with the actual text. Paraphrases are most likely to be mistaken as original text, followed by inferences, then topic-relevant distractors and, finally, topic-irrelevant distractors (W. Kintsch, Welsch, Schmalhofer, & Zimny, 1990). W. Kintsch et al. (1990) not only identified the pattern by which distractors are confused with original text, but they also analyzed the extent to which different text representations—surface level, textbase, or situation model (an integrated representation of text information and background knowledge)—are negatively affected by delay. Recognition tested before and after a 4-day delay demonstrated no decline in recognition memory for the situation model, a substantial decline (50% loss of strength) for the textbase, and a complete loss of surface information.

Thus, recognition memory depends not only on the strength of text representation formed during reading, but also upon the type of representation formed and the degree to which distractor sentences approach this representation. In general, recognition memory is quite good and long lasting but does not offer the learner much in the way of useful, transferable knowledge.

Another way to test text memory is through methods that focus on the recall of text. Commonly, summarization is used to assess recall of text, especially because longer texts lend themselves to reproduction of their macrostructure but not their microstructure (Bartlett, 1932). Presumably this result occurs because recall of a text progresses in a top-down, hierarchical manner through a text representation (e.g., W. Kintsch & van Dijk, 1978; Lorch & Lorch, 1985). Indeed, evidence does demonstrate that facilitating text organization produces better recall. An extensive literature on advance organizers (Corkill, 1992; see also Ausubel, 1960) suggests that use of advance organizers presented before learning may facilitate recall and organization of knowledge. Studies on expository text (e.g., Lorch & Lorch, 1995; Lorch, Lorch, & Inman, 1993) have found that text components that signaled the structure of a text produced better memory for text ideas and their organization. In a study that included writing quality as an independent variable, Moravcsik and Kintsch (1993) found that well-written, organized texts facilitated recall.
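
The top-down character of recall can be illustrated with a toy simulation in which macropropositions are recalled first and more reliably than the micropropositions beneath them; the tree and the probabilities below are invented, not parameters from W. Kintsch and van Dijk (1978).

```python
# Toy simulation of top-down recall from a hierarchical text
# representation: each level is recalled with lower probability than its
# parent, so gist-level units dominate the output. Structure and numbers
# are invented for illustration.

import random

def recall(node, p=0.9, decay=0.5, out=None):
    """Depth-first recall; details are only reachable once the
    higher-level unit that organizes them has been recalled."""
    if out is None:
        out = []
    label, children = node
    if random.random() < p:
        out.append(label)
        for child in children:
            recall(child, p * decay, decay, out)
    return out

story = ("HERO-SAVES-TOWN", [                              # macrostructure (gist)
    ("hero-hears-of-danger", [("it was a Tuesday", [])]),  # details below
    ("hero-fights-dragon", [("the sword was old", [])]),
])

random.seed(0)
print(recall(story))  # e.g. ['HERO-SAVES-TOWN', 'hero-fights-dragon']
```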

Well-written texts may offer another advantage to students beyond the ease with which text macrostructure is identified and encoded—these texts also may require less background knowledge and facilitate more complete understanding than poorly written texts. It is important to recognize that the recall of a text is only as good as the individual’s representation of the text. Thus, in cases in which an individual develops an incomplete or erroneous representation of the text, the summary of the text will reflect those problems. Especially in cases when individuals lack requisite background knowledge or when the subject matter is technical, well-written and well-organized texts may be critical to encourage complete, accurate representations of text. Again, although recall memory for a text can be quite good depending upon the quality of the textbase representation, recall memory is limited in use to tests of knowledge rather than applications of it.

Inferences

Inferences play a crucial role in text comprehension. The total information that is necessary for a true understanding of a text is rarely stated explicitly in the text. Much is left unsaid, with the expectation that a well-informed and motivated reader will fill it in. Indeed, texts that aspire to be fully explicit, like some legal documents, are very hard and boring to read. For most texts, readers must construct the meaning of a text—although this task requires sufficient clues for processing, overwhelming readers with redundant and superfluous cues is not to their advantage at all. How people infer what is not stated explicitly in a text has been an active topic of investigation among text researchers. It also has been a fairly confused issue, because researchers have not always distinguished adequately between different types of inferences.

Inferences are often directed toward linking different parts of a text. One distinction that must be made in this respect is between the cohesion and coherence of a text. Cohesion (Halliday & Hasan, 1976) refers to the linguistic signals that link sentences in a passage; that is, it is a characteristic of the linguistic surface structure of a text. Typical cohesive devices, for instance, are sentence connectives, such as but or however. Coherence (van Dijk & Kintsch, 1983) refers to linkages at the propositional level, which may or may not be signaled linguistically. For instance,

  (a) The weather was sunny all week. But on Sunday it snowed.

is both cohesive and coherent, whereas

  (b) The weather was sunny all week. On Sunday it snowed.

lacks the cohesive but, but is nevertheless coherent because of our knowledge that Sunday is a day of the week. Although linguists typically study cohesion, most of the psychological research concerns coherence. In general, explicit cohesive markers in a passage allow for faster processing but do not affect recall if coherence can be inferred without them (Sanders & Noordman, 2000).

Bridging inferences are necessary to establish coherence when there is no explicit link between two parts of a passage, as in (b). Bridging inferences have been studied extensively (Haviland & Clark, 1974; Myers et al., 2000; Revlin & Hegarty, 1999). They are necessary for true understanding, because otherwise the two parts of the passage would be unrelated in the mental representation of the text.

However, not all inferences have to do with coherence. Elaborative inferences do not link pieces of text, but rather enrich the text through the addition of information from the reader’s knowledge, experience, or imagination. Thus, elaborations link a text with the reader’s background, fulfilling a very important function, as is further discussed in the section on learning from texts.

Much of what is called inferencing has already been discussed in this research paper’s section on long-term working memory. For instance, the so-called inference in (b) is not a true inference at all, but represents automatic knowledge activation. Readers do not have to actively infer that Sunday and week are related in a certain way—they know it automatically and their long-term working memory provides them with the necessary coherence link. We are dealing here not with true inferences, but rather with automatic knowledge retrieval.

There are other types of automatic inferences, however, that are not purely a question of knowledge retrieval. For instance, readers of

  (c) Three turtles sat on a log. A fish swam under the log.

automatically infer

  (d) The turtles were above the fish. (Bransford, Barclay, & Franks, 1972)

This inference is an automatic consequence of forming an appropriate situation model, for example an image of the situation described in (c).
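
One can mimic this with a toy spatial situation model: once the scene is encoded as positions, the "inference" is not a separate reasoning step but simply falls out of the representation. The encoding below is our illustration, not a model from Bransford et al.

```python
# Toy spatial situation model for the turtle example: once positions are
# encoded, "the turtles were above the fish" can be read directly off
# the model rather than derived as an explicit inference.

HEIGHT = {"fish": 0, "log": 1, "turtles": 2}  # turtles sit on the log; fish below it

def above(x, y):
    return HEIGHT[x] > HEIGHT[y]

print(above("turtles", "fish"))  # -> True, "inferred" for free from the model
```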

Strategic inferences are a controlled process, as opposed to automatic inferences (W. Kintsch, 1993, 1998). Strategic inferences may involve knowledge retrieval, but in the absence of long-term working memory structures, so that the retrieval process is resource consuming and often quite difficult. Or they can be true inferences, not just retrieving preexisting knowledge, as in logical inferences such as modus ponens, which require special training for most people. Predicting when strategic inferences will be made is quite difficult. It depends on a host of factors such as reading goals, motivation, and background knowledge. For instance, in reading a story, readers sometimes but by no means always make forward or predictive inferences (Klin, Guzman, & Levine, 1999). Indeed, text researchers disagree strongly as to the prevalence of strategic inferences. Some minimalists (McKoon & Ratcliff, 1992) find very little evidence for such inferences, while others (Graesser, Singer, & Trabasso, 1994) disagree. The question is when such inferences are made: spontaneously, as an integral part of reading a text (like bridging inferences), or in response to special task demands such as a question or verification test afterwards. It seems clear that this is not an issue that admits of a general resolution. Rather, the answer must depend on the exact conditions of reading, because this kind of inference process is under strategic control of the reader.

Learning From Text

Learning from a text means that the reader understands the content and is able to use the information in ways that are not specific to the text. Thus, learning involves much more than storage of a text for recall. Unlike memory for a text, actual learning from the text requires integration of information into the reader’s existing knowledge and creates a flexible and powerful representation of the new information. This integrated representation of text information is called the situation model.

Development of a situation model has many benefits for learners. Individuals who have created powerful situation models are able to transfer their knowledge and apply it to new domains or situations. The situation model is not just a more flexible representation; it is also the longest lasting of the text representations. Because it integrates text information with a reader’s existing knowledge, it offers the long-term potential to be transferred to other situations and to be incorporated into other learning situations. Thus, construction of the situation model represents true learning from a text (W. Kintsch, 1994; Zwaan & Radvansky, 1998).

A variety of methods have been used to assess the strength of the situation model that an individual constructs. Ideally, the method used to assess the situation model must differentiate between a textbase representation and the situation model. Tasks that adequately assess the situation model above and beyond the textbase generally require the learner to transfer or generalize the information from a text to a new situation. Short-answer questions requiring inferences and transfer, concept-key word sorting tasks (McNamara et al., 1996; Wolfe et al., 1998), and changes in knowledge mapping before and after reading a text (Ferstl & Kintsch, 1999) all have been used to assess the strength of the situation model after learning.

Although it may seem that the situation model is a more desirable goal of reading than a textbase representation is, the purpose for which a text is being studied should be considered when comparing the effectiveness of the textbase and the situation model. Because traditional academic tests (such as multiple-choice recognition) often emphasize textbase learning, students seeking peak performance on such exams may do well to emphasize textbase learning during their study. At the least, when text memory will be assessed, students should avoid emphasizing the situation model at the expense of textbase learning. However, students who desire long-term benefits from text learning are best aided by emphasizing situation model development.

To some extent, the situation model is dependent upon construction of an accurate and complete textbase. Without this foundation, integration with background knowledge is prone to error, misconceptions, and gaps. However, just as central to the situation model is the presence of adequate and appropriate domain knowledge with which text information can be integrated. Thus, it is essential for comprehension that texts be matched appropriately to readers who have the background knowledge necessary to comprehend them. Wolfe et al. (1998) demonstrated that matching readers to texts that are suited to their levels of background knowledge can result in substantial comprehension benefits. Understanding the role of domain knowledge in comprehension is a key aspect of predicting the success of text comprehension for an individual reading a certain text.

Domain Knowledge

Clearly, domain knowledge is a very powerful variable that affects situation model development and, thus, learning from text. Because development of a situation model requires adequate prior knowledge, it is logical to assume that level of domain knowledge is important in determining the extent to which individuals will learn from a text. Empirical evidence in the experimental literature supports the idea that domain knowledge is exceedingly important in predicting comprehension (Recht & Leslie, 1988; Schneider, Körkel, & Weinert, 1989; C. H. Walker, 1987; Wolfe et al., 1998). The results overwhelmingly demonstrate that high domain knowledge improves comprehension performance, even when experiments control for factors such as IQ (W. Kintsch, 1998). To some extent, high domain knowledge can also compensate for poor reading skill. Of course, domain knowledge cannot compensate for complete lack of reading skill or deficient decoding skills. However, for individuals who have basic but low-level reading skills, high levels of domain knowledge can cancel out such disadvantages under the right circumstances (e.g., given a text that utilizes the domain of expertise). For example, Adams, Bell, and Perfetti (1995) demonstrated that domain knowledge and reading skill can trade off in order to equate reading comprehension.

Domain knowledge has been shown to impact comprehension at a deeper level than that of factors external to the individual. Moravcsik and Kintsch (1993) investigated the interactive effects of domain knowledge, text quality (good vs. poor writing and organization), and participants’ reading ability in comprehension. Results demonstrated that without appropriate domain knowledge, readers could not form appropriate inferences about the text. Although high- and low-knowledge readers generated about the same overall number of inferences, most of those created by low-knowledge readers were erroneous. In contrast, high-quality texts (with good, organized writing) facilitated recall of a text but not formation of a situation model. Thus, although good writing can help readers, it does not compensate for lack of adequate domain knowledge when learning is the goal.

Text Factors

Although text factors cannot overcome factors internal to the individual (adequate and appropriate domain knowledge), they can influence a reader’s comprehension. In order to create a situation model from text, readers must form a coherent textbase that can be integrated with prior knowledge. For low-knowledge readers, texts with a clear macrostructure (e.g., texts with embedded headings or clear topic sentences) facilitate both memory for and learning from text. Empirical evidence supports this claim. Brooks, Dansereau, Spurlin, and Holley (1983) compared the comprehension performance of individuals after reading a text containing embedded headings versus a text without these embedded headings. A comprehension test administered immediately after reading showed only small benefits for the text with headings, but a test 2 days later revealed significant benefits for readers exposed to the embellished text. In a second experiment, however, Brooks et al. found that headings were not well used by students unless accompanied by instructions on using the headings as processing aids. Thus, the extent to which students spontaneously attend to and make use of text headings may predict the headings’ effectiveness.

Other manipulations of text components also have been successful in promoting reader comprehension. Britton and Gulgoz (1991) improved comprehension of texts unfamiliar to students by identifying and repairing coherence gaps in a text (according to the method proposed by J. R. Miller & Kintsch, 1980). The effect of this manipulation is explicit presentation of text structure achieved by connecting information that normally requires bridging inferences (W. Kintsch, 1998). Thus, removing coherence gaps and making the text more fully explicit has the effect of reducing the number of inferences the reader must make, thereby facilitating comprehension. Other research has supported the conclusion that making text macrostructure clear has comprehension benefits (Beck, McKeown, Sinatra, & Loxterman, 1991; Lorch & Lorch, 1995; Lorch et al., 1993; McNamara et al., 1996). As discussed earlier, clear presentation of text macrostructure facilitates the recall of text information and the organization of text representations.
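
A minimal sketch of how such coherence gaps might be detected, in the spirit of the argument-overlap idea underlying the Miller and Kintsch (1980) model: adjacent propositions that share no argument signal a gap that the reader must bridge or the writer should repair. The propositional notation is heavily simplified and the example is ours.

```python
# Minimal sketch of coherence-gap detection via argument overlap, in the
# spirit of the model cited above: consecutive propositions that share
# no argument mark a gap. The notation is heavily simplified.

def find_gaps(propositions):
    """Each proposition is (predicate, set_of_arguments). Return the
    indices where a proposition shares no argument with its predecessor."""
    gaps = []
    for i in range(1, len(propositions)):
        _, prev_args = propositions[i - 1]
        _, cur_args = propositions[i]
        if not (prev_args & cur_args):
            gaps.append(i)
    return gaps

text = [
    ("CHECK", {"we", "picnic supplies"}),
    ("WARM", {"beer"}),   # shares no argument: a gap the reader must bridge
]
print(find_gaps(text))  # -> [1]
```

A repair in the Britton and Gulgoz spirit would rewrite the second proposition so that it shares an argument with the first, for example, "The beer in the picnic supplies was warm."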

However, some evidence suggests that when readers have ample domain knowledge, texts that do not require inferencing or active processing are not ideal for facilitating comprehension (W. Kintsch, 1998; McNamara et al., 1996). Surprisingly, high-knowledge readers actually can learn more (as indicated by situation model measures) from text with relatively low coherence (McNamara & Kintsch, 1996; McNamara et al., 1996). The interpretation of this effect is that high-knowledge readers must work harder to make sense out of a low-coherence text; this text-relevant processing results in formation of a better-developed situation model, whereas recall is not influenced. Other methods that encourage active text processing have similar benefits; these include frequent self-explanations or use of advance outlines that do not match the structure of the text (W. Kintsch, 1998).

Although active processing is a powerful determinant of text learning, it is important to remember that increasing the difficulty of text is fruitful only for the reader with adequate knowledge. Further, increasing text difficulty often is problematic and time consuming. This may explain the large number of instructional programs that have been designed to teach active strategies for comprehension. The effectiveness of such strategies is unclear; some research shows clear benefits after teaching strategies and some does not. For example, Palincsar and Brown (1984) found that training children on general comprehension processes resulted in strong and generalizable improvements in text understanding. Yet in a study with 6- to 8-year-olds, Cain (1999) found that although poor comprehenders did have poorer knowledge of metacognitive strategies for reading when compared to readers their own age, their comprehension performance was worse even when compared to that of younger readers with the same level of metacognitive ability. These mixed results probably stem from the difficulty of ensuring that children and adult readers use the strategy in the absence of continual monitoring, together with individual differences in the efficiency with which the strategy is performed.

Conversation

Conversation is an interesting case for comprehension. Clearly, understanding a speaker’s meaning during conversation is essential to the successful progression and conclusion of communication. For the most part, comprehension of oral discourse follows the same principles as text-based comprehension. However, conversation is a unique form of comprehension in several ways. First, the purposes of conversational comprehension and text comprehension usually differ. Conversations can be used to transmit information, but often serve more human, social roles. Conversations involve exchange of information and may seek to amuse, entertain, or punish. Consistent with these aspects of social interaction, conversations are also unique in the way information is added and in the frequency with which comprehension is checked: Participants repeatedly verify understanding before continuing along a conversational path (H. H. Clark & Schaefer, 1989).

Just as background knowledge facilitates comprehension of text, conversation involves what H. H. Clark and Schaefer (1989) call common ground among participants. Common ground describes the personal beliefs and knowledge that a participant brings to the conversation. However, common ground is not exactly like background knowledge, which remains stable even as readers make connections between a text and background knowledge and integrate text ideas into that knowledge. Common ground is a more flexible entity—it changes, is added to, or is destroyed and rebuilt during the course of a conversation (H. H. Clark & Schaefer, 1989). Comprehension checks called grounding (H. H. Clark, 2000; H. H. Clark & Schaefer, 1989) continually assess the state of common ground. Various techniques for grounding exist, but they all elucidate the extent to which a speaker’s communicative intent is clear to the listener. If grounding reveals a problem, a repair is initiated to reestablish common ground before the rest of the conversation can proceed.
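
The grounding cycle can be caricatured as a loop in which each contribution is checked and, if necessary, repaired before the conversation moves on. The code below is our illustration; the understood predicate simply stands in for whatever evidence of understanding a listener actually provides.

```python
# Caricature of the grounding cycle: contribute, check understanding,
# repair if needed, and only then add to common ground. The `understood`
# predicate stands in for the listener's actual evidence of understanding.

def converse(contributions, understood, max_repairs=2):
    common_ground = []
    for utterance in contributions:
        attempt = utterance
        for _ in range(max_repairs + 1):
            if understood(attempt):               # grounding check
                common_ground.append(attempt)     # accepted into common ground
                break
            attempt = "rephrased: " + attempt     # repair, then try again
    return common_ground

# Hypothetical listener who needs an unfamiliar place name rephrased.
print(converse(
    ["I live near the Cloisters.", "Take the A train uptown."],
    understood=lambda u: "Cloisters" not in u or u.startswith("rephrased"),
))
```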

Speakers do not ignore a listener’s background knowledge when contributing to a conversation, but rather attempt to modify their contributions based on their assessment of the other’s knowledge. Isaacs and Clark (1987) studied experts and novices participating in a conversation requiring knowledge of New York City. These researchers found that the participants were able to assess each other’s level of expertise and modify their conversation accordingly. In their study, experts modified their contributions to be more explicit, and during the task novices acquired specialized knowledge, which could be used subsequently. Thus, the comprehension of each utterance is not only evaluated, but the degree to which common ground must be improved for successful communication is also assessed and modified. Unlike text comprehension, this assessment allows some potential comprehension problems to be avoided before they are encountered.

According to H. H. Clark and Schaefer (1989), contributions in conversation serve not only to highlight misunderstandings for clarification, but also to offer essential evidence of successful understanding during the course of an exchange. By the process of repeatedly checking understanding, the common ground between participants in a conversation is both established and added to in the course of the conversation. However, conversation can lack explicit links between contributions and can require inferences by the other participants. H. H. Clark and Schaefer call the inferential processes of conversation bridging and accommodation. Analogous to text inferences, these conversational processes rely upon knowledge and experience: Inferential processes add to the understanding of the contribution just offered, and the interpretation created by the inference often is made explicit by a contribution from the participant making the inference.

Amazingly, participants pursue conversational goals, establish common ground, repeatedly check understanding, make inferences, and continue to advance the conversation more or less smoothly, without noticeable lapses for processing or planning. Certainly, contributions to conversations occasionally fail and repairs must be made, for example, by repetition or rephrasing. However, a surprisingly large portion of conversation is involved in demonstrating positive understanding and, considering the multiple processes involved in each exchange, conversation proceeds with remarkable ease.

Clearly, conversation benefits from the continuous efforts of participants to establish that comprehension of contributions has been successful. Conversation also is highly practiced and individuals can be considered experts in contributing to discourse. Normally, participants also benefit from an inherent interest in the conversation at hand. Interest and motivation have long been presumed to be important factors in comprehension, but the manner in which they influence conversational or text comprehension is not well understood.

Purpose and Interest

Clearly, factors internal to the comprehender can have as much or more influence on ultimate learning as do text or conversational factors that either promote or hinder comprehension. Generally, factors such as the goals or purpose of the reader and his or her interest in the text at hand have been considered quite important in understanding comprehension. However, it is difficult to specify methods by which such factors can be objectively measured. Further, it is unclear by what mechanisms purpose and interest may affect comprehension processes. It has been suggested that increased interest in a text frees up attentional resources, leading to increased processing of the text; indeed, recent research has found that individuals perform a secondary task faster when reading an interesting as opposed to a less interesting text (McDaniel, Waddill, Finstad, & Bourg, 2000). However, McDaniel et al.’s study did not find a recall benefit related to text interest, despite the general finding that increased interest results in increased recall for text material (for a review, see Alexander, Kulikowich, & Jetton, 1994). The difficulty of reconciling these results simply highlights the fact that the interactions among purpose, interest, and other variables internal to the comprehender, and their influence on comprehension, are only poorly understood at this point.

The purpose of text processing is somewhat easier to manipulate than text interest, in that researchers can specify the outcomes or products that the comprehender will be asked to produce after the reading task. Research has demonstrated that the nature of some educational tasks can promote certain types of comprehension. For example, requiring students to write arguments about information promotes construction of situation models and understanding of information (Wiley & Voss, 1999). Regardless of the type of product that readers must produce after comprehension, different purposes during learning may change or influence behaviors directly related to comprehension performance. Narvaez, van den Broek, and Ruiz (1999) found that simply manipulating whether readers had a study or entertainment purpose changed on-line reading behaviors as well as metacognitive checks on comprehension. In this study, students who read expository texts with a study purpose were more likely to repeat sections of the text, were more likely to evaluate the text during reading, and were more likely to acknowledge comprehension difficulties related to gaps in their background knowledge. However, it is interesting to note that some effects of reader purpose appear to depend upon the type of text. For example, Narvaez et al. found that strategic behaviors for comprehension were weaker for narrative as compared to expository texts.

Regardless of an individual’s purpose in reading a text, interest in the text is clearly relevant to comprehension processes. Research on this topic varies widely in the type of interest manipulated (e.g., whether texts are matched to individual interests and knowledge, or texts are manipulated to include details that appeal more generally to readers), but for the most part has demonstrated that increased interest leads to increased memory for and comprehension of texts. In a review of research manipulating both reader background knowledge and interest, Alexander et al. (1994) argued that most studies find that interest is positively related to learning from text. However, they acknowledge that stronger and more consistent effects are found when interest is predicted by a reader’s prior knowledge of and long-term interest in a topic rather than by the specific characteristics of an individual text.

This is not to argue that interest-related characteristics of an individual text are not influential in text processing. The effects of seductive details—bits of information in a text that are considered intrinsically interesting but unimportant to the major text ideas—are an interesting case. In general, studies have found that seductive details are well remembered and sometimes are recalled better than main text ideas (e.g., Alexander et al., 1994; Schraw, 1998). Although Schraw (1998) found that seductive details were remembered better than main text ideas, he also found that seductive details did not interfere with recall for global text information. Thus, enhancing a text with seductive details may increase interest and promote memory for such intrinsically interesting information, but may do little to improve overall memory for the topic at hand.

Other types of text manipulations may affect interest without adding unnecessary information to the text. Sadoski, Goetz, and Rodriguez (2000) found that the concreteness of a text was a strong predictor of interest in it, and manipulating a text to use concrete descriptions may enhance interest and promote recall. However, not all texts or concepts can be expressed in a concrete way, and doing so may compromise abstract or complex relationships in certain texts. For these types of texts, it is difficult to envision modifications that would increase interest without sacrificing the rigor of the text.

Certainly the factors previously discussed, along with other factors a reader brings to the text (e.g., emotion), are important to comprehension performance, and the influence of such factors should be included in a complete model of comprehension. We are confident that cognitive psychology will continue to explore these issues and will be able to describe the ways in which the individual interacts with a text during comprehension. The current and future challenge for research in text comprehension is to continue to uncover the individual factors and text variables that influence and support learning from texts, and to integrate such knowledge into the already complex picture of what, and how much, an individual will learn from a text.

Models of Comprehension

Schema-Based Models

Early comprehension models heavily emphasized the role of top-down processes. Comprehension was thought to involve (a) schema activation through key words or phrases in the text, followed by (b) filling the slots of the schema with relevant information from the text (Anderson & Pichert, 1978; Rumelhart & Ortony, 1977; Schank & Abelson, 1977). An extreme version of such a theory was the artificial intelligence (AI) program FRUMP (DeJong, 1979), which actually attempted to understand news reports in this way. It was never meant as a psychological theory, but it illustrates nicely both the strengths and weaknesses of a schema-based approach. FRUMP was equipped with a large number of schemas relevant to news reports (e.g., a schema for accidents). A schema could be activated by appropriate key words in the text (e.g., crash). Once activated, it served as a guide for searching the text for schema-relevant information: What sort of vehicle crashed? How many people were involved? Killed? Wounded? What were the causes of the crash?
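
To make this keyword-triggered slot filling concrete, here is a toy sketch in Python. The accident schema, trigger words, and extraction patterns below are invented for illustration; they are not FRUMP’s actual representations.

```python
# A toy illustration of FRUMP-style processing: a keyword activates a
# predefined schema, which then drives a shallow search of the text for
# slot fillers. Schema contents and patterns are invented examples.

import re

ACCIDENT_SCHEMA = {
    "trigger": {"crash", "crashed", "collision"},
    "slots": {
        "vehicle": r"\b(car|plane|train|bus)\b",
        "killed":  r"(\d+)\s+(?:people\s+)?killed",
        "injured": r"(\d+)\s+(?:people\s+)?injured",
    },
}

def frump_like(text, schema):
    words = set(re.findall(r"\w+", text.lower()))
    if not (words & schema["trigger"]):
        return None                      # schema never activated
    filled = {}
    for slot, pattern in schema["slots"].items():
        m = re.search(pattern, text.lower())
        filled[slot] = m.group(m.lastindex or 0) if m else None
    return filled

report = "A bus crashed on I-70 last night; 2 killed and 11 injured."
print(frump_like(report, ACCIDENT_SCHEMA))
# {'vehicle': 'bus', 'killed': '2', 'injured': '11'}
```

The sketch also makes the weakness visible: a report that does not match the predefined patterns simply yields empty slots, which is precisely the brittleness discussed next.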

The comprehension problem was thereby greatly simplified: One did not have to fully understand a text, but merely find certain well-specified items of information. As an AI program, however, FRUMP turned out to be fatally limited. The main difficulty was that the schema often did not fit the facts of a text. Even for something as well structured as an accident report, one needs to look for different information in stories about a car crash, a plane crash, or a skier crashing into a tree. It is simply not possible to predefine adequate schemas for all (or even most) texts. Schank (1982) realized this and modified his approach accordingly by introducing memory organization packets, building blocks from which to construct a schema. It was clear that a simple schema-based approach would not work, either in AI or as a psychological model.

Nevertheless, schemas play a major role in comprehension, and every psychological model of comprehension uses schemas in one way or another (Whitney, Budd, Bramucci, & Crane, 1995). However, schemas are no longer regarded as the sole or even the most important control structure in comprehension. Instead, prior knowledge and expectations— some in the form of schemas—are top-down influences that interact with a variety of bottom-up processes to yield what we call comprehension.

A Psychological Process Model

Comprehension has many facets and there are many ways to model it: Rhetoric and linguistics represent an ancient and important tradition, whereas artificial intelligence programs are a recent innovation. Psychological process models take yet a different approach. They build on the constraints provided by our knowledge of the perceptual and cognitive processes involved in comprehension: word perception and recognition, attention, short- and long-term memory, retrieval processes, sentence comprehension, knowledge representation and activation, and the like. Of course, psychological process models cannot neglect the constraints imposed by the text to be comprehended, and indeed, it may be the case that textual constraints dominate the comprehension process, relegating cognitive aspects to a minor role, which is the premise of purely linguistic or AI approaches. However, the recent research on psychological models of comprehension suggests otherwise.

The attempt to analyze comprehension in psychological terms began with the model of W. Kintsch and van Dijk (1978; van Dijk & Kintsch, 1983). The model is based on the assumption that the limitations of working memory force readers (or listeners) to decode one sentence at a time. Decoding consists of translating the sentence from natural language into a general and universal mental language—a propositional representation. In spite of its name, this propositional structure is not a full semantic representation of the meaning of a sentence or a text; rather, it is designed merely to capture the core idea—how people understand a sentence when they are not analyzing it in all its detail. This sort of representation is useful mainly because it allows the psychologist to count so-called idea units in the comprehension as well as the reproduction of a text. Counting words, by contrast, is not very useful with texts: In a list of random words, whether a subject recalls 12 or 15 words is a meaningful statistic, but the exact number of words someone recalls from a text is not necessarily related to either comprehension or memory. For most purposes, the number of ideas matters rather than the words expressing them. Thus, the propositional representation of the sentence John read the old book in the library is

Predicate: READ
Agent: JOHN
Object: BOOK
Modifier: OLD
Location: LIBRARY

Paraphrasing this sentence as The book, which was old, was read by John in the library does not change this propositional representation. For purposes of scoring a recall protocol, one can count either sentence as one complex proposition, or as one core proposition, one modifier, and one location, depending on the grain of the analysis that is desired.
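
For illustration, such a proposition can be rendered as a simple data structure, and idea units can then be counted at either grain. The role labels and scoring choices in the following sketch are our own informal rendering of the example, not a fixed notation of the theory.

```python
# A minimal sketch of a propositional representation for the sentence
# "John read the old book in the library." Role labels and scoring
# granularity are illustrative, not prescribed by the model.

proposition = {
    "predicate": "READ",
    "agent": "JOHN",
    "object": "BOOK",
    "modifiers": [("OLD", "BOOK")],          # OLD modifies BOOK
    "location": "LIBRARY",
}

def count_idea_units(prop, fine_grained=False):
    """Score as one complex proposition, or as core + modifier + location."""
    if not fine_grained:
        return 1
    units = 1                                 # core: READ(JOHN, BOOK)
    units += len(prop.get("modifiers", []))   # OLD(BOOK)
    units += 1 if "location" in prop else 0   # LOCATION(READ, LIBRARY)
    return units

print(count_idea_units(proposition))                      # 1
print(count_idea_units(proposition, fine_grained=True))   # 3
```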

The W. Kintsch and van Dijk model assumes that understanding a text means constructing a propositional representation of the text. This representation consists of a network of propositions. Propositions that share a common argument are linked (in the example above, the proposition would be linked to other propositions containing one or more of the arguments John, book, or library). However, propositions can be linked only if they reside in the reader’s working memory at the same time. The capacity of this working memory is limited (estimates usually range between three and five propositions). A spreading activation process among the propositions in working memory determines their activation level. As the next sentence in a text is read, working memory is cleared: The propositions from the previous processing cycle are added to long-term memory and the propositions from the current sentence(s) are added to working memory. However, to ensure continuity, the most activated proposition(s) from the last cycle is retained in a short-term buffer, so that it can be linked with the propositions of the current sentence. In this way, a connected textbase is gradually constructed as the text is processed sentence by sentence. This textbase is called the microstructure of the text. It represents the meaning of all the sentences of a text in terms of a propositional network, as an ideal reader would construct it. The links in this structure are determined jointly by the nature of the text and by the capacity limits of working memory and the short-term buffer. Furthermore, those propositions that are most strongly interlinked in this network will gain the greatest memory strength in the spreading activation process. A simplified sketch of this cyclical process is shown below.
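
This is a deliberately simplified illustration: it stands in for activation with a simple connectedness count, assumes an arbitrary buffer size, and omits the spreading activation process proper.

```python
# A simplified sketch of the Kintsch & van Dijk processing cycle.
# Propositions are (predicate, arguments) tuples; two propositions are
# linked when they share an argument. Using connectedness as a proxy
# for activation, and the buffer size, are simplifying assumptions.

BUFFER_SIZE = 2   # illustrative; the model retains the most activated item(s)

def build_textbase(sentences):
    links = set()        # edges of the propositional network
    buffer = []          # propositions carried over between cycles
    long_term = []       # the accumulating textbase
    for props in sentences:                  # one cycle per sentence
        in_memory = buffer + props
        for i, p in enumerate(in_memory):
            for q in in_memory[i + 1:]:
                if set(p[1]) & set(q[1]):    # shared argument -> link
                    links.add((p, q))
        long_term.extend(props)
        # carry over the most connected propositions from this cycle
        degree = lambda p: sum(p in edge for edge in links)
        buffer = sorted(props, key=degree, reverse=True)[:BUFFER_SIZE]
    return long_term, links

sentences = [
    [("READ", ("JOHN", "BOOK")), ("OLD", ("BOOK",))],
    [("RETURN", ("JOHN", "BOOK", "LIBRARY"))],
]
textbase, links = build_textbase(sentences)
print(len(textbase), "propositions,", len(links), "links")
```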

In addition, the W. Kintsch and van Dijk model also constructs a macrostructure representation of a text. Schemas play a role at this level: They allow the reader to identify the structurally most important propositions in a text and their interrelationships, thus providing a basis for the formation of a macrostructure. Intuitively, the macrostructure represents the gist of a text, whereas the microstructure represents all of its detailed content.

In a large number of studies, the W. Kintsch and van Dijk model has been shown to predict the data from psychological experiments with texts quite well—comprehension as well as memory (e.g., Graesser, Millis, & Zwaan, 1997; W. Kintsch, 1974; W. Kintsch & van Dijk, 1978; van Dijk & Kintsch, 1983). The model thus justified the basic premise of the psychological processing approach to text comprehension: that cognitive constraints as well as linguistic constraints must be taken into account in modeling text comprehension.

The Construction-Integration Model

The mental representation that results from reading a text is, however, only in part determined by the content and structure of the text itself—the process that the van Dijk and Kintsch (1983) model attempts to describe. The reader’s goals and prior knowledge are equally important factors. Schema theory provided the first account of how prior knowledge influences comprehension. An alternative account, which leaves room for the top-down effects of schemas but relies more heavily on bottom-up processes, has been developed by W. Kintsch (1988, 1998) within the general framework of the van Dijk and Kintsch processing model.

Consider what happens when a reader encounters a homonym in a discourse context. Almost always, only the context-appropriate meaning of the word comes to mind. However, experimental studies using both lexical decision and eye movement methods suggest that, for a very brief period of time (about 350 ms), both meanings of a homonym are activated under certain conditions (Rayner, Pacht, & Duffy, 1994; Swinney, 1979). This observation suggests that it is not a top-down process, such as a schema, that primes the context-appropriate meaning or filters out the inappropriate ones; rather, all meanings are activated, and the context then suppresses the activation of the inappropriate ones. The construction-integration model is based on this idea. It assumes that construction processes during comprehension—at the word level as well as at the syntactic and discourse levels—are context independent and unconstrained; they are inherently promiscuous. However, context quickly imposes its constraints. Constructions that are consistent with each other support each other in a spreading activation process, and inconsistent and irrelevant constructions become deactivated. According to this model, the construction process results in an incoherent mental representation, and an integration process is needed to turn this contradictory tangle of hypotheses into a coherent mental representation. This integration is essentially a process of constraint satisfaction. It works quickly enough that inappropriate initial hypotheses do not reach the level of consciousness. According to experimental results (e.g., Till, Mross, & Kintsch, 1988), it takes about 300–350 ms for word meanings to become fixated in a discourse context, and 500–700 ms for topic inferences.
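
The integration phase can be illustrated as spreading activation over a connectivity matrix in which mutually consistent constructions excite each other and incompatible ones inhibit each other. The three-node network below, with one context proposition and two senses of bank, uses invented weights; it is a minimal sketch of the constraint satisfaction idea, not the full model.

```python
# A minimal sketch of the integration phase: spreading activation over
# a hand-built connectivity matrix. Positive links join consistent
# constructions, negative links join incompatible ones, and diagonal
# self-links keep supported nodes active. All weights are invented.

import numpy as np

labels = ["context: DEPOSIT(MARY, MONEY, BANK)",
          "bank = financial institution",
          "bank = river bank"]
W = np.array([
    [1.0,  1.0,  0.0],
    [1.0,  1.0, -1.0],   # the two senses inhibit each other
    [0.0, -1.0,  1.0],
])

a = np.ones(3)                      # construction: everything starts active
for _ in range(20):                 # integration: settle to a stable pattern
    a = np.clip(W @ a, 0.0, None)   # inhibited nodes cannot go negative
    if a.max() > 0:
        a /= a.max()                # keep activation bounded

for label, act in zip(labels, a):
    print(f"{label:38s} {act:.2f}")  # the river-bank sense settles at 0
```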

Schemas play an important role in the construction-integration model, because they are likely to be activated in the construction phase of the process, just like many other knowledge structures. However, once activated, an appropriate schema will most likely become the central unit in the integration phase, attracting relevant pieces of information and thereby deactivating schema-irrelevant constructions.

W. Kintsch (1998) describes how this model can account for a wide variety of experimental findings, such as the construction of word meanings in discourse, priming in discourse, syntactic parsing, macrostructure formation, generating inferences, and the construction of situation models. The construction-integration model has also been successfully applied to how people solve mathematical word problems and, beyond the sphere of text comprehension, to action planning, problem solving, and decision making. In other words, the model aspires to be a general theory of comprehension, not just of text comprehension.

The Collaborative Activation-Based Production System Architecture

The bottom-up, spreading activation component of the construction-integration model is quite successful and has been included in most subsequent models of text comprehension. Such models can be broadly described as attempts to instantiate activation-based theories of comprehension within limitations suggested by other cognitive processes. Given the importance of working memory resources for comprehension, it is not surprising that many models have focused on constraints surrounding comprehension processes when developing simulations. Just and Carpenter (1992) developed a model of sentence comprehension that attempted to account for characteristics of comprehension based on a flexible but limited capacity system simulating the constraints of working memory. It should be noted that although the capacity constraints of the collaborative activation-based production system (CAPS) are based on working memory characteristics, they relate to theoretically based, higher-level activation limits rather than to the modality-specific buffers commonly thought to exist within working memory (e.g., Baddeley, 1986).

The CAPS architecture is a combination of a production system and an activation-based connection system that Just and Carpenter (1992) used to produce a simulation of their theory. According to the theory, activation is responsible for both the storage and the processing components of language comprehension. In CAPS, an element is activated either by being constructed from text (written or spoken), constructed by a process, or retrieved from long-term memory. Like the construction-integration model, CAPS does not neglect the top-down effects of context. In fact, CAPS assumes that activation of text propositions and background knowledge proceeds similarly to the construction-integration model. The difference in CAPS appears when the comprehension processes approach capacity limits.

Although elements with above-threshold activation are available to comprehension processes, complications occur when the amount of activation required for elements exceeds the total activation available in the system. Capacity limits in CAPS do not necessarily result in deactivation of weak elements, but rather in an overall decrease of system activation. In CAPS, activation for maintaining elements as well as for processing these elements is shared. Thus, capacity limits on activation can lead to forgetting of old elements as well as decreased processing of current elements.
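
This shared-activation idea can be sketched very simply: when total demand exceeds the available activation, every element is scaled back proportionally, so maintenance of old elements and processing of new ones degrade together. The capacity value and demands below are invented for illustration.

```python
# A sketch of a CAPS-style capacity constraint: activation demanded by
# all elements is cut back proportionally once it exceeds a fixed
# total, so storage and processing share the shortfall. Values are
# illustrative, not the model's parameters.

CAPACITY = 3.0    # total activation available (arbitrary units)

def allocate(demands):
    total = sum(demands.values())
    if total <= CAPACITY:
        return dict(demands)
    scale = CAPACITY / total              # shared, proportional cutback
    return {elem: d * scale for elem, d in demands.items()}

demands = {"old proposition A": 1.0, "old proposition B": 1.0,
           "new proposition C": 1.5, "parsing process": 1.5}
for elem, act in allocate(demands).items():
    print(f"{elem:20s} {act:.2f}")        # each gets 3/5 of its demand
```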

Just and Carpenter’s (1992) model is quite successful at modeling the comprehension differences produced by texts with differing working memory demands as read by individuals with varying working memory capacity. Interestingly, Just and Carpenter argue that their evidence suggests that activation capacity is a single resource. They assert that because increasing demand by a variety of methods (e.g., increasing text distance or ambiguity, or reducing available working memory capacity) produces consistent effects on comprehension, it is reasonable to assume that the same mechanisms underlie diverse types of comprehension processing. Clearly, cognitive processes are subject to capacity limits, and the power of this model lies in the dynamic manner in which it accounts for such limits in comprehension.

The Capacity-Constrained Construction-Integration Model

Inclusion of working memory constraints in comprehension models offers some clear benefits for explaining individual differences in comprehension. Both the construction-integration model and the CAPS architecture are quite successful in explaining some aspects of comprehension. Given that construction-integration seeks to model comprehension in general (rather than stopping with text comprehension) while CAPS provides a successful account of individual differences in text comprehension based on working memory constraints, could the two models be combined into a capacity-limited model of general comprehension? The capacity-constrained construction-integration model (CCCI; Goldman & Varma, 1995) attempts to combine the ways in which knowledge is constructed, represented, and integrated in the construction-integration (CI) model within the more flexible capacity-constrained CAPS system. Instantiating construction-integration in a working-memory-limited system has the effect of changing the way in which propositions are held over for additional processing cycles. Whereas the CI model uses a buffer of fixed size to simulate the limitations of working memory in text processing, Goldman and Varma’s (1995) CCCI model retains all propositions not exceeding capacity limitations for further processing. When capacity limits are reached (as in CAPS), new propositions may draw activation away from retained elements, which gracefully fall below threshold.
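
The difference in the carry-over rule can be sketched as follows; the capacity value, activation values, and cutoff behavior are invented illustrations rather than the model’s fitted parameters.

```python
# A sketch of a CCCI-style carry-over rule: rather than a fixed-size
# buffer, propositions are retained until total activation demand
# reaches capacity, at which point the weakest fall below threshold.

CAPACITY = 2.0

def retain(propositions):
    """propositions: list of (name, activation) pairs."""
    kept, used = [], 0.0
    for name, act in sorted(propositions, key=lambda p: -p[1]):
        if used + act > CAPACITY:
            break                 # weaker propositions gracefully drop out
        kept.append(name)
        used += act
    return kept

props = [("P1", 0.9), ("P2", 0.7), ("P3", 0.5), ("P4", 0.3)]
print(retain(props))              # ['P1', 'P2']
```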

The main strength of the CCCI model is that it reproduces the major, successful comprehension results of the construction-integration model while automatically producing stronger weights for propositions representing the main points of a text passage, without requiring initial weights to be assigned to reflect differences in text importance. Thus, providing the construction-integration model with working memory limits may help us understand how comprehension processes arrive at different representation strengths for different text elements.

The Landscape Model

The landscape model (van den Broek, Risden, Fletcher, & Thurlow, 1996) also assumes that patterns of activation operate within constraints during a cyclical process of comprehension. However, the landscape model deals more specifically with the process by which coherence is computed and represented during comprehension. In this model, activation strengths during each processing cycle are set on a 5-point scale determined by the degree to which a concept is necessary to establish coherence in the text. Accordingly, concepts that are explicitly stated in the text are assigned the highest weights, whereas inferences that are not necessary to establish coherence receive the lowest activation weights. Concepts that contribute to coherence are weighted at varying points along this continuum as a function of their degree of contribution to coherence.

The landscape model draws its name from the patterns of activations seen for text concepts across all processing cycles during comprehension. That is, an activation map of all concepts across cycles is constructed and graphically demonstrates the degree to which concepts are activated during the progression of the story, as well as the number of concepts that are concurrently activated in each cycle of comprehension. According to van den Broek et al., the topography of activation suggests the way in which comprehended text becomes encoded as a stable, coherent representation. Further, van den Broek et al. argue that the total activation of a concept across cycles predicts the importance of the concept to the story and that concepts activated together during a processing cycle will be linked in memory.
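
Assuming made-up activation values, a miniature activation map might look like the following; column totals stand in for a concept’s predicted prominence in recall, and co-activation across cycles indexes the strength of links formed in memory.

```python
# A sketch of a landscape-model activation map: rows are reading
# cycles, columns are concepts, entries are activations on the model's
# scale (the numbers here are invented for illustration).

import numpy as np

concepts = ["knight", "dragon", "cave", "princess"]
activation = np.array([
    [5, 3, 0, 0],    # cycle 1
    [3, 5, 2, 0],    # cycle 2
    [1, 3, 5, 0],    # cycle 3
    [3, 0, 1, 5],    # cycle 4
])

totals = activation.sum(axis=0)       # overall activation per concept
links = activation.T @ activation     # co-activation across cycles

for concept, total in zip(concepts, totals):
    print(f"{concept:10s} total activation: {total}")
print("knight-dragon link strength:", links[0, 1])
```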

Testing by van den Broek et al. (1996) suggests that the activation of concepts during processing cycles can predict patterns of human recall for story concepts. In their research, nearly all (94%) concepts first recalled by participants were the concepts that demonstrated greatest overall activation during the course of reading. Further, the pattern of subsequent concepts recalled was predicted by the degree to which the prior and subsequent concepts were coactivated during reading. The landscape model, then, provides a description of and a general methodology for testing the ways concepts are emphasized and linked in a text. However, the landscape model falls short of offering a theoretical rationale for the ways in which humans construct, represent, and integrate their knowledge. In general, models of comprehension reflect similar assumptions about the way in which knowledge is represented, but it is valid to question the precise nature of such representations.

Models of Knowledge Representation

One of the central problems in cognitive science is how to model human knowledge. How can we define knowledge? The word know is used in so many ways; is what we know always knowledge? Consider this list, selected from the 11 senses of know listed in WordNet (http://www.cogsci.princeton.edu/~wn/):

  1. I know who is winning the game.
  2. She knows how to knit.
  3. Galileo knew that the earth moved around the sun.
  4. Do you know my sister?
  5. I know the feeling!
  6. His greed knew no limits.
  7. I know Latin.
  8. This child knows right from wrong.

Examples 3, 4, and 7 would seem to be clear examples of knowledge, but how does one draw the line? But suppose we knew what knowledge was. What, then, is its structure; how is it organized? Semantic hierarchies, feature systems, schemas and scripts, or one huge associative net? All of these possibilities, and several more, have had their sponsors as well as their critics. But once again, suppose we had a workable model of what human knowledge structures are like. How could we then determine what the content of these structures actually is? There are two ways to do this: One can hand-code all knowledge, as is done in a dictionary or encyclopedia, except more systematically and more completely, or one can build a system that learns all it needs to know. We discuss an example of each approach; both have proven their usefulness for psychological research on discourse comprehension.

WordNet

WordNet is what a dictionary should be. Unlike most dictionaries, WordNet aspires to be a complete and exhaustive list of all word meanings or senses in the English language; it defines these meanings with a general phrase and some illustrative examples, and lists certain semantically related terms (Fellbaum, 1998; G. A. Miller, 1996). This is all done by hand coding. Each word in the language has an internal structure in WordNet, consisting of the syntactic categories of the word and, for each category, the number of different semantic senses (together with informal definitions and examples). Thus, the word bank is both a noun and a verb. For the noun, 10 senses are listed (the first two are the familiar financial institution and river bank; the 10th is a flight maneuver). The verb bank has seven senses in WordNet. Furthermore, each word (actually, each word sense) is related to other words by a number of semantic relationships that are specified in WordNet: synonymy (e.g., financial institution is a synonym of bank-1), coordinate relationship (lending institution is a coordinate term for bank-1), hyponymy (. . . is a kind of bank), holonymy (bank is part of . . .), and meronymy (parts of bank). Thus, a detailed, explicit description of the lexicon of the English language is achieved, structured by certain semantic relations.
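
This hand-coded structure can be queried programmatically, for example through the NLTK interface to WordNet. The sketch below assumes the nltk package and a one-time nltk.download("wordnet"); sense counts in current WordNet releases may differ from the 10 noun and 7 verb senses cited above, since the database has been revised.

```python
# Querying WordNet's hand-coded structure through NLTK.
# Requires: pip install nltk, then nltk.download("wordnet") once.

from nltk.corpus import wordnet as wn

noun_senses = wn.synsets("bank", pos=wn.NOUN)
verb_senses = wn.synsets("bank", pos=wn.VERB)
print(len(noun_senses), "noun senses,", len(verb_senses), "verb senses")

first = noun_senses[0]
print("definition:", first.definition())
print("hypernyms:", [s.name() for s in first.hypernyms()])  # "is a kind of"
print("synonyms: ", first.lemma_names())
```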

WordNet is a useful and widely used tool for psycholinguists and linguists. Nevertheless, it has certain limitations, some of which arise from the need for hand coding. WordNet is the reified intuition of its coders, limited by the chosen format (e.g., the semantic relations that are made explicit). But language changes, there are individual differences, and people can use words creatively in novel ways and be understood (E. V. Clark, 1997). The mental lexicon may not be static, as WordNet necessarily must be, but may evolve dynamically, and the context dependency of word meanings may be so strong as to make a listing of fixed senses illusory.

The task of hand coding a complete lexicon of the English language is certainly a daunting one; hand coding all human knowledge presents significant additional difficulties. Nevertheless, the CYC system of Lenat and Guha (1990) attempts just that. (CYC, a registered trademark of Cycorp, is a very large database in which human knowledge is formally represented in a language called CycL; the interested reader is directed to http://www.cyc.com/tech.html for more information.) CYC postulates that all human knowledge can be represented as a network of propositions. Thus, it has a local, propositional structure, as well as a global structure—the relations among propositions and the operations that these relations afford. Like WordNet, however, CYC is a static structure, always vulnerable because some piece of human knowledge has not been coded or acts in an unanticipated way in a new context.

Therefore, some authors have argued for knowledge representations that learn what they need to know and thus are capable of keeping up with the demands of an ever-changing context. One such proposal is reviewed in the following section.

Latent Semantic Analysis

Latent semantic analysis (LSA) is a machine learning procedure that constructs a high-dimensional semantic space from an input consisting of a large amount of text. LSA analyzes the pattern of co-occurrences among words in many thousands of documents, using the well-known mathematical technique of singular value decomposition. This technique allows one to extract 300–500 dimensions of meaning that are capable of representing human semantic intuitions with considerable accuracy. LSA generates a semantic space in which words as well as sentences or whole texts are represented as mathematical vectors. The angle between two vectors (as measured by their cosine) provides a useful, fully automatic measure of the semantic similarity between the words they represent. Thus, we can compute the semantic similarity between any two word pairs or any two texts.
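
A toy version of this pipeline can be built in a few lines: form a term-document matrix, reduce it by singular value decomposition, and compare the resulting word vectors by cosine. The four-document corpus and two retained dimensions below are illustrative; a realistic space uses tens of thousands of documents and 300–500 dimensions. We would expect the within-topic pair to receive the higher cosine.

```python
# A toy LSA pipeline: term-document counts -> SVD -> word vectors ->
# cosine similarity. Corpus and dimensionality are illustrative only.

import numpy as np

docs = ["the doctor treats the patient",
        "the nurse helps the doctor",
        "the pilot flies the plane",
        "the plane lands at the airport"]
vocab = sorted({word for d in docs for word in d.split()})
X = np.array([[d.split().count(word) for d in docs] for word in vocab], float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                   # keep the top k dimensions
vectors = U[:, :k] * s[:k]              # one k-dimensional vector per word

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

w = {word: vectors[i] for i, word in enumerate(vocab)}
print("doctor~nurse:", round(cosine(w["doctor"], w["nurse"]), 2))
print("doctor~plane:", round(cosine(w["doctor"], w["plane"]), 2))
```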

Randomly chosen word pairs tend to have an average cosine very near zero (M = .02, SD = .06), whereas a sample of 100 singular and plural word pairs (e.g., house, houses) have much higher, but not perfect, average cosines (M = .66, SD = .15). What is computed here is not word overlap or word co-occurrence, but something entirely new: a semantic distance in a high-dimensional space that was constructed from such data.

The distinction between measurement of word overlap and semantic content as measured by LSA is illustrated in the following example taken from Butcher and Kintsch (2001). Two students learn a text containing the following statement: The phonological loop responds to the phonetic characteristics of speech but does not evaluate speech for semantic content. In a summary, Student A writes, “The rehearsal loop that practices speech sounds does not pick up meaning in words. Rather, it just reacts whenever it hears something that sounds like language.” Student B writes, “The loop that listens to words does not understand anything about the phonetic noises that it hears. All it does is listen for noise and then responds by practicing that noise.” As human comprehenders, we can see that Student A has a better understanding of the text and has constructed a more appropriate summary of that bit of information. Using LSA to compare each student’s summary with the learned text, we find that Student A’s text has a cosine of .62 with the original text, whereas Student B’s text has a cosine of only .40 with the original text. (Only the relative values of cosines generated for equivalent types of text can be compared; cosines for word pairs and sentence pairs, for instance, are not comparable.) Note that this result is not due to overlapping words in the text and summaries; Student A repeats two words from the original sentence, but Student B repeats three words from the original sentence. Using the relative values of the cosines, LSA tells us what we have concluded by reading the texts: Student A’s summary is a closer semantic match to the original text than that of Student B. The differences between the texts are subtle but clear; although Student B is not completely confused, his summary does reflect a less thorough understanding of the original content than does Student A’s summary. For more detailed descriptions of LSA, see Landauer (1998), Landauer and Dumais (1997), and Landauer, Foltz, and Laham (1998).

Before examining the achievements of LSA, its limitations must be discussed, for LSA is by no means a complete semantic theory; rather, it provides a strong basis for building such a theory. First, LSA disregards syntax, and syntax obviously plays a role in determining the meaning of sentences. Second, LSA can learn only from written text, whereas human experience is based on perception, action, and emotion—the real world, not just words—as well. Third, LSA starts with a tabula rasa, whereas the acquisition of human knowledge is subject to epigenetic constraints that determine its very character. Surprisingly, none of these problems is fatal. Much can be achieved without syntax, and it is possible to bring syntactic information to bear within the LSA framework, at least to some extent, as we discuss later in this research paper. Words are not all of human knowledge, but language has evolved to talk about all human affairs—action, perception, emotion. Thus, words mirror the nonverbal aspects of human experience—not with complete accuracy, but enough to make LSA useful. Finally, LSA does not learn from scratch but from language. Thus its input already incorporates the epigenetic rules that structure human knowledge.

LSA makes semantic judgments that are humanlike in many ways, but it can perform correctly only when it has been trained on an appropriate textual corpus. One of the semantic spaces that has been constructed represents the knowledge of a typical American high-school graduate: It is based on a text of more than 11 million words, comprising over 90,000 different words and over 36,000 documents. It is a model of what a high-school student would know if all his or her experience were limited to reading these texts. In one respect this is not much, but in another it is a considerable achievement. It will, for instance, pass the TOEFL test of English as a foreign language: Given a rare word (like abandoned) and several alternatives (like forsake, aberration, and deviance), it will choose the correct one, because forsake has a higher cosine (.20) with the target word than the other alternatives (.09 and .09). On the other hand, it will fail an introductory psychology multiple-choice exam, because the high-school reading material does not contain enough psychology texts. If we create a new space by teaching LSA psychology with a standard introductory text, however, it will pass the test: Asked to match attention to the alternatives memory, selectivity, problem solving, and language, it will correctly choose selectivity, because the cosine between attention and selectivity is .52 and the cosines between attention and the other alternatives are only .17, .05, and .07, respectively.

LSA is a powerful tool for the simulation of psycholinguistic phenomena. Landauer and Dumais (1997) have discussed vocabulary acquisition as the construction of a semantic space, modeled by LSA; Laham (2000) investigated the emergence of natural categories from the LSA space; Foltz, Kintsch, and Landauer (1998) have used LSA to analyze textual coherence; and Butcher and Kintsch (2001) have used LSA as an analytic tool in the study of writing. LSA has also been used effectively in a number of applications that depend on an effective representation of verbal meaning. To mention just some of the practical applications: First, LSA has been used to select instructional texts that are appropriate to a student’s level of background knowledge (Wolfe et al., 1998). Second, LSA provides 6th-grade students with feedback about their writing as they summarize science or social science texts (E. Kintsch et al., 2000). And last but not least, LSA has been successfully employed for essay grading; it grades the content of certain types of essays as well and as reliably as human professionals (Landauer, Laham, & Foltz, 2000). The humanlike performance of LSA in these areas strongly suggests that the way meaning is represented in LSA is closely related to the way humans operate.

Again, LSA does a very good job of representing semantic meaning, but it does not represent all the components of language that humans may use in comprehension. For one thing, people use syntax in the construction of meaning, whereas LSA does not. However, it might be possible to combine LSA with other psychological process theories, thereby expanding the scope of an LSA-based theory of meaning. W. Kintsch (2001) has combined an LSA knowledge base with a spreading activation model of comprehension, thereby offering a solution to the problem of how word senses might be generated in a discourse context—instead of being prelisted, as in WordNet.

According to LSA, word meanings are vectors in a high-dimensional semantic space. The meaning of a two-word sentence in LSA is the centroid of the two word vectors. Thus, for The horse runs and The color runs, we compute the vectors {horse, runs} and {color, runs}. However, there is a problem, for the meaning of run in the two contexts is somewhat different; two different senses of the verb run are involved.

In the CI model of discourse comprehension (W. Kintsch, 1988, 1998), mental representations of a text are constructed via a constraint satisfaction process, computationally realized via a spreading-activation mechanism: The semantic relations among the concepts and propositions of a text are strengthened if they fit into the overall context and deactivated if they do not. This idea can be extended to the predication problem. Those aspects of the predicate (run in our example) that are appropriate for its argument are strengthened and the others are de-emphasized. This is achieved by means of a constraint satisfaction process in the manner of the CI model, in which the argument is allowed to select related relevant terms from the neighborhood of the predicate, which are then used to modify the predicate vector appropriately (W. Kintsch, 2001).
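
A much-simplified stand-in for this algorithm, reusing the toy w dictionary and cosine function from the LSA sketch above, might look like the following; the neighborhood sizes and the simple ranking step are our own assumptions in place of Kintsch’s constraint-satisfaction network.

```python
# A simplified sketch of predication: take the predicate's nearest
# neighbors in the semantic space, keep those the argument selects as
# relevant, and fold them into the predicate before forming the
# sentence centroid. Assumes `w` and `cosine` from the LSA sketch.

def predication(predicate, argument, w, k_neighbors=3, k_relevant=1):
    pred, arg = w[predicate], w[argument]
    # neighbors of the predicate, ranked by similarity to the predicate
    neighbors = sorted((v for word, v in w.items()
                        if word not in (predicate, argument)),
                       key=lambda v: -cosine(pred, v))[:k_neighbors]
    # of those, keep the ones most related to the argument
    relevant = sorted(neighbors, key=lambda v: -cosine(arg, v))[:k_relevant]
    adjusted = pred + sum(relevant)     # predicate pulled toward relevant terms
    return (adjusted + arg) / 2         # centroid = contextual sentence meaning

meaning = predication("flies", "pilot", w)
print("closer to 'plane' than to 'nurse':",
      cosine(meaning, w["plane"]) > cosine(meaning, w["nurse"]))
```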

This turns out to be a powerful algorithm. It correctly computes that The bridge collapsed is related to failure and that The runner collapsed is related to race. It differentiates appropriately between A pelican is a bird and The bird is a pelican. It also correctly computes the meaning of metaphors—for example, that My lawyer is a shark is more related to viciousness than to fish (W. Kintsch, 2000). Furthermore, it computes that The student washed the table is more related to The table is clean than The student is clean. And it mirrors many of the well-documented asymmetries and context effects in human similarity judgments (W. Kintsch, 2001).

LSA by itself models the associative foundation of meaning. Together with the spreading-activation mechanism of the CI theory, it allows us to model a broad range of additional phenomena, but we still fall short of a complete semantic theory. We need to explore other psychological process theories of human thought processes that can be combined with an LSA knowledge base to further broaden the scope of an LSA-based semantic theory. Research on LSA is still new, but one can expect that it will have an increasingly large impact on the way we think about comprehension and the way we do research on language in the coming years.

Conclusions

Overall, cognitive psychology has made great strides in understanding the factors that predict individual differences in comprehension. We have learned about factors both internal to the learner (such as background knowledge) and external to the individual (such as text organization or conversational coherence) that determine comprehension. The variables influencing comprehension performance interact in quite complex ways; as discussed earlier, readers who are knowledgeable about a subject learn better from a difficult text, whereas readers with less prior knowledge about a topic learn better from a more coherent, organized text. Thus, no single factor can be shown to be sufficient to ensure adequate comprehension by a learner, and no single prescription can be recommended for all learners in all situations.

The practical applications of comprehension research are obvious: With an adequate understanding of the variables that influence reading and listening comprehension, educators can manipulate situations to maximize learning for an individual in a particular set of circumstances. Even though cognitive psychologists understand many of the variables that influence learning, we are unfortunately still far from a complete model of comprehension. There currently is no exact recipe for creating comprehension in a learner. We know about some key ingredients of the comprehension recipe and how they contribute to successful performance, but we do not fully understand the extent to which changes in these factors exert a direct influence on comprehension and the extent to which they affect other variables in the learning situation. In addition, we have much to learn about the individual variables that are difficult to quantify (e.g., motivation, persistence, interest) but that undoubtedly are critical to a full model of comprehension. The current and future challenge for research in comprehension is to continue to uncover the input variables and individual factors that influence and support learning, and to integrate such knowledge into the already complex picture of what makes a good learner.

Bibliography:

  1. Adams, B. C., Bell, L. C., & Perfetti, C. A. (1995). A trading relationship between reading skill and domain knowledge in children’s text comprehension. Discourse Processes, 20, 307–323.
  2. Alexander, P. A., Kulikowich, J. M., & Jetton, T. L. (1994). The role of subject-matter knowledge and interest in the processing of linear and nonlinear texts. Review of Educational Research, 64, 201–252.
  3. Anderson, R. C., & Pichert, J. W. (1978). Recall of previously unrecallable material following a shift in perspective. Journal of Verbal Learning and Verbal Behavior, 17, 1–12.
  4. Ausubel, D. P. (1960). The use of advance organizers in the learning and retention of meaningful verbal material. Journal of Educational Psychology, 51, 267–272.
  5. Baddeley, A. (1986). Working memory. New York: Oxford University Press.
  6. Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge, UK: Cambridge University Press.
  7. Barwise, J., & Perry, J. (1983). Situations and attitudes. Cambridge, MA: MIT Press.
  8. Beck, I. L., McKeown, M. G., Sinatra, G. M., & Loxterman, J. A. (1991). Revising social studies texts from a text-processing perspective: Evidence of improved comprehensibility. Reading Research Quarterly, 27, 251–276.
  9. Bransford, J. D., Barclay, J. R., & Franks, J. J. (1972). Sentence memory: A constructive versus interpretive approach. Cognitive Psychology, 3, 193–209.
  10. Britton, B. K., & Gulgoz, S. (1991). Using Kintsch’s computational model to improve instructional text: Effects of repairing inference calls on recall and cognitive structures. Journal of Educational Psychology, 83, 329–345.
  11. Brooks, L. W., Dansereau, D. F., Spurlin, J. E., & Holley, C. D. (1983). Effects of headings on text processing. Journal of Educational Psychology, 75, 292–302.
  12. Butcher, K. R., & Kintsch, W. (2001). Support of content and rhetorical processes of writing: Effects on the writing process and the written product. Cognition and Instruction, 19, 277–322.
  13. Cain, K. (1999). Ways of reading: How knowledge and use of strategies are related to reading comprehension. British Journal of Developmental Psychology, 17, 293–312.
  14. Carpenter, P. A., Miyake, A., & Just, M. A. (1994). Working memory constraints on the resolution of lexical ambiguity: Maintaining multiple interpretations in neutral contexts. Journal of Memory and Language, 33, 175–202.
  15. Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
  16. Clark, E. V. (1997). Conceptual perspective and lexical choice in acquisition. Cognition, 64, 1–37.
  17. Clark, H. H. (1996). Using language. Cambridge, UK: Cambridge University Press.
  18. Clark, H. H. (2000). Conversation. In A. E. Kazdin (Ed.), Encyclopedia of psychology (pp. 292–294). New York: Oxford University Press.
  19. Clark, H. H., & Schaefer, E. R. (1989). Contributing to discourse. Cognitive Science, 13, 259–294.
  20. Corkill, A. J. (1992). Advance organizers: Facilitators of recall. Educational Psychology Review, 4, 33–67.
  21. Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19, 450–466.
  22. Daneman, M., & Carpenter, P. A. (1983). Individual differences in integrating information between and within sentences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 561–584.
  23. DeJong, G. F. (1979). Prediction and substantiation: A new approach to natural language processing. Cognitive Science, 3, 251–272.
  24. Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking innateness: A connectionist perspective on development. Cambridge, MA: MIT Press.
  25. Ericsson, K. A., Chase, W. G., & Faloon, S. (1980). Acquisition of a memory skill. Science, 208, 1181–1182.
  26. Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102, 211–245.
  27. Ericsson, K. A., & Polson, P. (1988). An experimental analysis of the mechanisms of a memory skill. Memory & Cognition, 14, 305–316.
  28. Fellbaum, C. (Ed.). (1998). WordNet: An electronic lexical database. Cambridge, UK: Cambridge University Press.
  29. Ferstl, E., & Kintsch, W. (1999). Learning from text: Structural knowledge assessment in the study of discourse comprehension. In H. van Oostendorp & S. R. Goldman (Eds.), The construction of mental representations during reading (pp. 247–278). Mahwah, NJ: Erlbaum.
  30. Foltz, P. W., Kintsch, W., & Landauer, T. K. (1998). The measurement of textual coherence with Latent Semantic Analysis. Discourse Processes, 25, 285–308.
  31. Friedman, N. P., & Miyake, A. (2000). Differential roles for visuospatial and verbal working memory in situation model construction. Journal of Experimental Psychology: General, 129, 61–83.
  32. Goldman, S. R., & Varma, S. (1995). CAPing the construction-integration model of discourse comprehension. In C. A. Weaver III, S. M. Mannes, & C. R. Fletcher (Eds.), Discourse comprehension: Essays in honor of Walter Kintsch (pp. 337–358). Hillsdale, NJ: Erlbaum.
  33. Graesser, A. C., Millis, K. K., & Zwaan, R. A. (1997). Discourse comprehension. Annual Review of Psychology, 48, 163–189.
  34. Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101, 371–395.
  35. Haenggi, D., & Perfetti, C. A. (1994). Processing components of college-level reading comprehension. Discourse Processes, 17, 83–104.
  36. Halliday, M. A. K., & Hasan, R. (1976). Cohesion in English. London: Longman.
  37. Haviland, S. E., & Clark, H. H. (1974). What’s new? Acquiring new information as a process in comprehension. Journal of Verbal Learning and Verbal Behavior, 13, 512–521.
  38. Isaacs, E. A., & Clark, H. H. (1987). References in conversation between experts and novices. Journal of Experimental Psychology: General, 116, 26–37.
  39. Jarvella, R. J. (1971). Syntactic processing of connected speech. Journal of Verbal Learning and Verbal Behavior, 10, 409–416.
  40. Jenkins, J. J. (1974). Remember that old theory of memory? Well, forget it! American Psychologist, 29, 785–795.
  41. Jurafsky, D., & Martin, J. H. (2000). Speech and language processing. Upper Saddle River, NJ: Prentice-Hall.
  42. Just, M. A., & Carpenter, P. A. (1987). The psychology of reading and language comprehension. Boston: Allyn & Bacon.
  43. Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, 122–149.
  44. Kintsch, E., Steinhart, D., Stahl, G., Matthews, C., Lamb, R., & the LSA Research Group. (2000). Developing summarization skills through the use of LSA-backed feedback. Interactive Learning Environments, 8, 87–109.
  45. Kintsch, W. (1974). The representation of meaning in memory. Hillsdale, NJ: Erlbaum.
  46. Kintsch, W. (1988). The use of knowledge in discourse processing: A construction-integration model. Psychological Review, 95, 163–182.
  47. Kintsch, W. (1993). Information accretion and reduction in text processing: Inferences. Discourse Processes, 16, 193–202.
  48. Kintsch, W. (1994). Text comprehension, memory, and learning. American Psychologist, 49, 294–303.
  49. Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York: Cambridge University Press.
  50. Kintsch, W. (2000). Metaphor comprehension: A computational theory. Psychonomic Bulletin & Review, 7, 257–266.
  51. Kintsch, W. (2001). Predication. Cognitive Science, 25, 173–202.
  52. Kintsch, W., Patel, V. L., & Ericsson, K. A. (1999). The role of longterm working memory in text comprehension. Psychologia, 42, 186–198.
  53. Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363–394.
  54. Kintsch, W., Welsch, D., Schmalhofer, F., & Zimny, S. (1990). Sentence memory: A theoretical analysis. Journal of Memory and Language, 29, 133–159.
  55. Klin, C. M., Guzman, A. E., & Levine, W. H. (1999). Prevalence and persistence of predictive inferences. Journal of Memory and Language, 40, 593–604.
  56. Labov, W. (1972). Sociolinguistic patterns. Philadelphia: University of Pennsylvania Press.
  57. Laham, R. D. (2000). Automated content assessment of text using Latent Semantic Analysis to simulate human cognition. Unpublished doctoral dissertation, University of Colorado, Boulder.
  58. Landauer, T. K. (1998). Learning and representing verbal meaning: Latent Semantic Analysis theory. Current Directions in Psychological Science, 7, 161–164.
  59. Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato’s problem: The Latent Semantic Analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104, 211–240.
  60. Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to Latent Semantic Analysis. Discourse Processes, 25, 259–284.
  61. Landauer, T. K., Laham, D., & Foltz, P. (2000). The Intelligent Essay Assessor. IEEE Intelligent Systems, 27–31.
  62. Lenat, D., & Guha, R. (1990). Building large knowledge-based systems. Reading, MA: Addison-Wesley.
  63. Lorch, R. F., & Lorch, E. P. (1985). Topic structure representation and text recall. Journal of Educational Psychology, 77, 137–148.
  64. Lorch, R. F., & Lorch, E. P. (1995). Effects of organizational signals on text-processing strategies. Journal of Educational Psychology, 87, 537–544.
  65. Lorch, R. F., Lorch, E. P., & Inman, W. E. (1993). Effects of signaling topic structure on text recall. Journal of Educational Psychology, 85, 281–290.
  66. Lyons, J. (1977). Semantics. Cambridge, UK: Cambridge University Press.
  67. McDaniel, M. A., Waddill, P. J., Finstad, K., & Bourg, T. (2000). The effects of text-based interest on attention and recall. Journal of Educational Psychology, 92, 492–502.
  68. McKoon, G., & Ratcliff, R. (1992). Inference during reading. Psychological Review, 99, 440–466.
  69. McNamara, D. S., Kintsch, E., Songer, N. B., & Kintsch, W. (1996). Are good texts always better? Text coherence, background knowledge, and levels of understanding in learning from text. Cognition and Instruction, 14, 1–43.
  70. McNamara, D. S., & Kintsch, W. (1996). Learning from text: Effect of prior knowledge and text coherence. Discourse Processes, 22, 247–288.
  71. Miller, G. A. (1996). The science of words. New York: Freeman.
  72. Miller, J. R., & Kintsch, W. (1980). Readability and recall for short passages: A theoretical analysis. Journal of Experimental Psychology: Human Learning and Memory, 6, 335–354.
  73. Moravcsik, J. E., & Kintsch, W. (1993). Writing quality, reading skills, and domain knowledge as factors in text comprehension. Canadian Journal of Experimental Psychology, 47, 360–374.
  74. Myers, J. L., Cook, A. E., Kambe, G., Mason, R., & O’Brien, E. J. (2000). Semantic and episodic effects on bridging inferences. Discourse Processes, 29, 179–199.
  75. Narvaez, D., van den Broek, P., & Ruiz, A. B. (1999). The influence of reading purpose on inference generation and comprehension in reading. Journal of Educational Psychology, 91, 488–496.
  76. Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117–175.
  77. Rayner, K., Pacht, J. M., & Duffy, S. A. (1994). Effects of prior encounter and global discourse bias on the processing of lexically ambiguous words: Evidence from eye fixations. Journal of Memory and Language, 33, 527–544.
  78. Recht, D. R., & Leslie, L. (1988). Effect of prior knowledge on good and poor readers. Journal of Educational Psychology, 80, 16–20.
  79. Revlin, R., & Hegarty, M. (1999). Resolving signals to cohesion: Two models of bridging inference. Discourse Processes, 27, 77–102.
  80. Rumelhart, D. E., & Ortony, A. (1977). The representation of knowledge in memory. In R. C. Anderson, R. J. Spiro, & W. E. Montague (Eds.), Schooling and the acquisition of knowledge (pp. 99–135). Hillsdale, NJ: Erlbaum.
  81. Sadoski, M., Goetz, E. T., & Rodriguez, M. (2000). Engaging texts: Effects of concreteness on comprehensibility, interest, and recall in four text types. Journal of Educational Psychology, 92, 85–95.
  82. Sanders, T. J. M., & Noordman, L. G. M. (2000). The role of coherence relations and their linguistic markers in text processing. Discourse Processes, 29, 37–60.
  83. Schank, R. C. (1982). Dynamic memory. Cambridge, UK: Cambridge University Press.
  84. Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Erlbaum.
  85. Schneider, W., Körkel, J., & Weinert, F. (1989). Domain-specific knowledge and memory performance: A comparison of high- and low-aptitude children. Journal of Educational Psychology, 81, 306–312.
  86. Schraw, G. (1998). Processing and recall differences among seductive details. Journal of Educational Psychology, 90, 3–12.
  87. Seuren, P. A. M. (1985). Discourse semantics. New York: Basil Blackwell.
  88. Singer, M., Andrusiak, P., Reisdorf, P., & Black, N. L. (1992). Individual differences in bridging inference processes. Memory & Cognition, 20, 539–548.
  89. Singer, M., & Ritchot, K. F. M. (1996). The role of working memory capacity and knowledge access in text inference processing. Memory & Cognition, 24, 733–743.
  90. Swinney, D. A. (1979). Lexical access during sentence comprehension: (Re)consideration of context effects. Journal of Verbal Learning and Verbal Behavior, 18, 645–659.
  91. Till, R. E., Mross, E. F., & Kintsch, W. (1988). Time course of priming for associate and inference words in a discourse context. Memory & Cognition, 16, 283–298.
  92. van den Broek, P., Risden, K., Fletcher, C. R., & Thurlow, R. (1996). A “landscape” view of reading: Fluctuating patterns of activation and the construction of a stable memory representation. In B. K. Britton & A. C. Graesser (Eds.), Models of understanding text (pp. 165–187). Hillsdale, NJ: Erlbaum.
  93. van Dijk, T. A. (1972). Some aspects of text grammars. The Hague, The Netherlands: Mouton.
      van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic Press.
  94. Walker, C. H. (1987). Relative importance of domain knowledge and overall aptitude on acquisition of domain-related knowledge. Cognition and Instruction, 4, 25–42.
  95. Walker, C. H., & Yekovich, F. R. (1984). Script-based inferences: Effects of text and knowledge variables on recognition memory. Journal of Verbal Learning and Verbal Behavior, 23, 357–370.
  96. Whitney, P., Budd, D., Bramucci, R. S., & Crane, R. S. (1995). On babies, bathwater, and schemata: A reconsideration of top-down processes in comprehension. Discourse Processes, 20, 135–166.
  97. Whitney, P., Ritchie, B. G., & Clark, M. B. (1991). Working-memory capacity and the use of elaborative inferences in text comprehension. Discourse Processes, 14, 133–145.
  98. Wiley, J., & Voss, J. F. (1999). Constructing arguments from multiple sources: Tasks that promote understanding and not just memory for text. Journal of Educational Psychology, 91, 301–311.
  99. Wolfe, M. B. W., Schreiner, M. E., Rehder, B., Laham, D., Foltz, P., Kintsch, W., & Landauer, T. K. (1998). Learning from text: Matching readers and texts by Latent Semantic Analysis. Discourse Processes, 25, 309–336.
  100. Zwaan, R. A., & Radvansky, G. A. (1998). Situation models in language comprehension and memory. Psychological Bulletin, 123, 162–185.