Reading Research Paper


Reading is a vast topic to which entire textbooks are devoted (Crowder & Wagner, 1992; Just & Carpenter, 1987; Rayner & Pollatsek, 1989). We have selected five topics that we think are central to understanding skilled reading (as opposed to understanding language comprehension in general): (a) visual word identification, (b) the role of sound coding in word identification and reading, (c) eye movements during reading, (d) word identification in context, and (e) eye movement control in reading.



Before discussing each of these five topics, we would like to place them in context by listing what we see as the central questions in the psychology of reading:

  1. How are printed words identified?
  2. How does the speech processing system interact with word identification and reading?
  3. Are printed words identified differently in isolation than in text?
  4. How does the fact that readers typically make about four to five eye movements per second affect the reading process?
  5. How does the reader go beyond the meaning of individual words? This question relates to how sentences are parsed, how the literal meaning of a sentence is constructed, how anaphoric links are established, how inferences are made, and so on.
  6. What is the end product of reading? What new mental structures are formed or retained as a result of reading? How does the skill of reading develop?
  7. How can we characterize individual differences among readers in the same culture and differences in readers across cultures?
  8. How can we characterize reading disabilities?
  9. Can we improve on so-called normal reading? Is speedreading possible?

These questions typically represent the chapters in textbooks on the psychology of reading (Crowder & Wagner, 1992; Just & Carpenter, 1987; Rayner & Pollatsek, 1989). The topics we discuss here have been studied extensively by experimental psychologists for the past 25 years. Prior to discussing word identification per se, we briefly review the primary methods that have been used to study it. In most word identification experiments, words are presented in isolation and subjects are asked to make some type of response to them. However, because one of the primary goals in studying word identification is to make inferences about how words are identified during reading, we go beyond isolated word identification in much of our discussion and consider word identification in the context of reading.




Methods Used to Study Word Identification

In this section, we focus on three methods used to examine word identification: (a) tachistoscopic presentations, (b) reaction time measures, and (c) eye movements. Although various other techniques, such as letter detection (Healy, 1976), visual search (Krueger, 1970), and Stroop interference (MacLeod, 1991), have been used to study word identification, these three methods have been by far the most widely used to study word identification and reading. More recently, investigators in cognitive neuroscience have been using brain imaging and localization techniques—especially event-related potentials (ERPs), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET)—to study which parts of the brain are activated when different types of words are processed. However, in our view, these techniques have not yet advanced our understanding of word identification per se and are thus beyond the scope of this research paper.

Perhaps the oldest paradigm used to study word identification is tachistoscopic (i.e., very brief) presentation of a word (often followed by some type of masking pattern). Although tachistoscopes per se have been largely replaced by computer presentations of words on a video monitor, we use the term tachistoscopic presentation for convenience throughout this research paper. With tachistoscopic presentations, words are presented for a very brief time period (on the order of 30–60 ms) followed by a masking pattern, and subjects either have to identify the word or make some type of forced-choice response. Accuracy is therefore the major dependent variable with tachistoscopic presentations.

The most common method used to study word identification is some type of response time measure. The three types of responses typically used are (a) naming, (b) lexical decision, and (c) categorization. With naming, subjects name a word aloud as quickly as they can; with lexical decision, they must decide whether a letter string is a word or a nonword as quickly as they can; and with categorization, they must decide whether a word belongs to a certain category (usually a semantic category). Naming responses typically take about 400–500 ms, whereas lexical decisions typically take 500–600 ms and categorizations take about 650–700 ms. Although response time is the primary dependent variable, error rates are also recorded in these studies: Naming errors are typically rare (1% or less), whereas error rates in lexical decision are typically about 5%, and error rates in categorization tasks may be as high as 10–15%.
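
To make these conventions concrete, here is a minimal sketch (in Python, with invented trial data) of how the two dependent variables might be summarized for a lexical decision experiment; by convention, mean response time is computed over correct trials only, with the error rate reported alongside it:

```python
# Minimal sketch: summarizing a response time task (here, lexical decision).
# All trial values are invented for illustration.
trials = [
    {"stimulus": "word", "is_word": True,  "response": "word",    "rt_ms": 540},
    {"stimulus": "mard", "is_word": False, "response": "nonword", "rt_ms": 610},
    {"stimulus": "dog",  "is_word": True,  "response": "nonword", "rt_ms": 575},  # an error
    {"stimulus": "nufe", "is_word": False, "response": "nonword", "rt_ms": 590},
]

def is_correct(trial):
    expected = "word" if trial["is_word"] else "nonword"
    return trial["response"] == expected

correct_rts = [t["rt_ms"] for t in trials if is_correct(t)]
error_rate = 1 - len(correct_rts) / len(trials)

print(f"mean correct RT: {sum(correct_rts) / len(correct_rts):.0f} ms")
print(f"error rate: {error_rate:.0%}")
```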

The third major technique used to study word identification (particularly in the context of reading) is eye movement monitoring: Participants read either single sentences or longer passages of text while their eye movements are recorded. One great advantage of eye tracking (i.e., eye movement monitoring), beyond the fact that participants are actually reading, is that a great deal of data is obtained: not only measures associated with a given target word, but also measures of processing time for the words preceding and following it. The three most important dependent variables for examining word identification in reading are first-fixation duration (the duration of the first fixation on a word), gaze duration (the sum of all fixations on a word prior to moving to another word), and the probability of skipping a word.
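
To make these measures concrete, the following sketch (Python; the fixation sequence is invented, and real eye-tracking analysis software handles many more cases) computes first-fixation duration, gaze duration, and first-pass skipping from a list of fixations, each coded by the index of the word it landed on and its duration in milliseconds:

```python
# Minimal sketch: first-fixation duration, gaze duration, and skipping.
# Each fixation is (word_index, duration_ms); the data are invented.
# Word 2 here is fixated only after the eyes have passed it (a regression),
# so it counts as skipped during first-pass reading.
fixations = [(0, 230), (1, 210), (1, 180), (3, 250), (2, 190), (4, 240)]
n_words = 5

first_fixation = {}  # duration of the first first-pass fixation on each word
gaze_duration = {}   # summed first-pass fixations before leaving the word

for i, (word, dur) in enumerate(fixations):
    if word not in first_fixation:
        already_passed = any(w > word for w, _ in fixations[:i])
        if not already_passed:          # first-pass reading only
            first_fixation[word] = dur
            gaze_duration[word] = dur
    elif fixations[i - 1][0] == word and word in gaze_duration:
        gaze_duration[word] += dur      # a refixation before leaving the word

skipped = [w for w in range(n_words) if w not in first_fixation]

print("first-fixation durations:", first_fixation)
print("gaze durations:", gaze_duration)
print("skipped in first pass:", skipped)
```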

Word Identification

Surprisingly, one of the problems in experimental psychology on which researchers have made little headway is understanding how objects are recognized. We still have very little understanding of how one can easily recognize a common object like a dog or a chair in spite of seeing it from varying viewpoints and distances, and in spite of the fact that different exemplars of these categories are quite different visually. Models that have tried to explain object identification, often called models of pattern recognition (Neisser, 1967; Uhr, 1963), fall into two classes.

In the first class, template models, holistic memory representations of object categories, called templates, are compared to the incoming visual input, and the template that best matches the input signals what the object is. An immediate question is what form these templates would have to take in order for this scheme to work. In one version, there is only one template per category; this assumption, however, does not work very well, because a template that matches an object well when it is seen from one viewpoint is not likely to match well when the same object is seen from a different viewpoint. In an attempt to remedy this problem, some versions of the template model posit a so-called preprocessing stage, in which the image is normalized to the template before the comparison; however, no particularly plausible normalization routines have been suggested, because it is not clear how a person could normalize an image without prior knowledge of what the object is. Another possibility is that many templates exist for each object category; however, it is not clear whether memory could store all of these object templates, nor how all of the templates would have been stored in the first place.

The other class of models is called feature models. These differ in their details, but essential to all of them is the idea that objects are defined by a set of visual features. Although this kind of formulation sounds more reasonable than the template model to most people, it may not be any better a solution to the general problem, because it is not at all clear what the defining visual features are for most real-world objects. In fact, most of the more successful artificial intelligence (AI) pattern recognition devices use some sort of template model. Their success, however, relies heavily on the fact that they are typically required to distinguish among at most a few dozen objects rather than the many thousands of objects with which humans must cope.

This rather pessimistic introduction to object identification in general might suggest that we have learned little about how words are identified; however, that is not the case. Even though visual words are clearly artificial stimuli that evolution has not programmed humans to identify, there are several ways in which the problem of identifying words is simpler than that of identifying objects in general. The first is that, with few exceptions, we do not have to deal with identifying words from various viewpoints: We almost always read text right side up. (It is quite difficult to read text from unusual angles.) Second, if we confine ourselves to recognizing printed words, we do not encounter that much variation from one exemplar to another. Most type fonts are quite similar, and those that are unusual are in fact difficult to read, indicating that they are indeed poor matches to our mental representations of the letters. Thus, the problem of understanding how printed words are identified may not be as difficult as understanding how objects are identified. One possibility is that we have several thousand templates for the words we know. Or perhaps, in alphabetic languages, all we need is a set of templates for each letter of the alphabet (more likely, two sets of templates—one for uppercase letters and one for lowercase letters).

Do We Recognize Words Through the Component Letters?

The previous discussion hints at one of the basic issues in visual word recognition: whether readers of English identify words directly through a visual template of the word, or whether they go through a process in which each letter is identified and the word as a whole is then identified through its letters (we discuss encoding of nonalphabetic languages shortly). In a clever tachistoscopic paradigm, Reicher (1969) and Wheeler (1970) presented participants with either (a) a four-letter word (e.g., word); (b) a single letter (e.g., d); or (c) a nonword that was a scrambled version of the word (e.g., orwd); see Figure 20.1. In each case, the stimulus was masked and, when the mask appeared, two test letters (e.g., a d and a k) appeared above and below the location where the critical letter (d in this case) had appeared. The task was to decide which of the two letters had been in that location. Note that either of the test letters was consistent with a word—word or work—so that participants could not be correct merely by guessing that the stimulus was a word. The exposure duration was adjusted so that overall performance was about 75% (halfway between chance and perfect).

     Fig. 20.1

Quite surprisingly, the data showed that participants were about 10% more accurate in identifying the letter when it was in a word than when it was a single letter in isolation! This finding certainly rules out the possibility that the letters in words are encoded exclusively one at a time (presumably in something like a left-to-right order) in order to enable recognition. This superiority of words over single letters may seem, at least superficially, to be striking evidence for the assertion that words (short words, at least) are encoded through something like a visual template. However, there is another possibility: that words are processed through their component letters, but the letters are encoded in parallel, and somehow their organization into words facilitates the encoding process. In fact, several lines of evidence indicate that this parallel-letter encoding model is a better explanation of the data than is the visual template model. First, the words in this experiment were all uppercase; it seems unlikely that people would have visual templates of words in uppercase, because words rarely appear in that form. Second, performance in the scrambled-word condition was about the same as it was in the single-letter condition. Thus, it appears that letters, even in nonpronounceable nonwords, are processed in parallel. Third, subsequent experiments (e.g., Baron & Thurston, 1973; Hawkins, Reicher, Rogers, & Peterson, 1976) showed that the word superiority effect extends to pseudowords (i.e., orthographically legal and pronounceable nonwords like mard): That is, letters in pseudowords are also identified more accurately than are letters in isolation. (In fact, many experiments found virtually no difference between words and pseudowords in this task.) Because it is extremely implausible that people have templates for pseudowords, readers cannot merely have visual templates of words unconnected to the component letters. Instead, it seems highly likely that all short strings of letters are processed in parallel and that for words or wordlike strings, there is mutual facilitation in the encoding process.

Although the above explanation in terms of so-called mutual facilitation may seem a bit vague, several successful and precise quantitative models of word encoding have accounted very nicely for the data in this paradigm. The two original ones were by McClelland and Rumelhart (1981) and Paap, Newsome, McDonald, and Schvaneveldt (1982). In both of these models, there are both word detectors and letter detectors. In the McClelland and Rumelhart model, there is explicit feedback from words to letters, so that if a stimulus is a word, partial detection of the letters will excite the word detector, which in turn feeds back to the letter detectors to help activate them further. In the Paap et al. model, there is no explicit feedback; instead, a decision stage effectively incorporates a similar feedback process. Moreover, both of the models successfully explain the superiority of pseudowords over isolated letters. That is, even though a pseudoword like mard has no mard detector, it has quite a bit of letter overlap with several words (e.g., card, mark, maid). Thus, its component letters will get feedback from all of these word detectors, which for the most part will succeed in activating the detectors for the component letters in mard. Although this verbal explanation might seem to indicate that the facilitation would be significantly less for pseudowords than for words because there is no direct match with a single word detector, both models in fact gave a good quantitative account of the data.
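
A toy sketch may make the feedback idea concrete. The following Python fragment is a drastic simplification: the five-word lexicon, the rate parameters, and the single update rule are invented for illustration and should not be read as the published McClelland and Rumelhart (1981) model:

```python
# Toy interactive activation: letter detectors excite word detectors,
# and word detectors feed activation back to their component letters.
LEXICON = ["word", "work", "card", "mark", "maid"]

def run_cycles(letter_evidence, n_cycles=5, up=0.2, down=0.1):
    """letter_evidence: dict position -> {letter: activation in [0, 1]}."""
    letters = {pos: dict(acts) for pos, acts in letter_evidence.items()}
    words = {w: 0.0 for w in LEXICON}
    for _ in range(n_cycles):
        # Bottom-up: each word detector is excited by its component letters.
        for w in words:
            support = sum(letters[i].get(ch, 0.0) for i, ch in enumerate(w))
            words[w] = min(1.0, words[w] + up * support / len(w))
        # Top-down: each word detector feeds back to its component letters.
        for w, act in words.items():
            for i, ch in enumerate(w):
                letters[i][ch] = min(1.0, letters[i].get(ch, 0.0) + down * act)
    return letters, words

# Degraded input: weak evidence for the letters of the pseudoword "mard".
evidence = {0: {"m": 0.5}, 1: {"a": 0.5}, 2: {"r": 0.5}, 3: {"d": 0.5}}
letters, words = run_cycles(evidence)
print({w: round(a, 2) for w, a in words.items()})
```

Although mard has no word detector, the overlapping words card, mark, and maid become partially active and feed activation back to m, a, r, and d, which is, in caricature, why letters in pseudowords are reported more accurately than letters in isolation.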

To summarize, the aforementioned experiments (and many related ones) all point to the conclusion that words (short words, at least) are processed in parallel, but through a process in which the component letters are identified and feed into the word identification process. Above, we have been vague about what letter detector means. Are the letter detectors that feed into words abstract letter detectors (i.e., case- and font-independent) or specific to the visual form that is seen? (Needless to say, if there are abstract letter detectors, they would have to be fed by case-specific letter detectors, as it is unlikely that a single template or set of features would be able to recognize a and A as the same thing.) As we have mentioned, the word superiority experiments chiefly used all uppercase letters, and it seems implausible that there would be prearranged hook-ups between the uppercase letters and the word detectors. Other experiments using a variety of techniques (e.g., Besner, Coltheart, & Davelaar, 1984; Evett & Humphreys, 1981; Rayner, McConkie, & Zola, 1980) also indicate that the hook-up is almost certainly between abstract letter detectors and the word detectors. One type of experiment had participants either identify individual words or read text that was in MiXeD cAsE, like this. Even though such text looks strange, after a little practice, people can read it almost as fast as they read normal text (Smith, Lott, & Cronnell, 1969). Among other things, this research indicates that word shape (i.e., the visual pattern of the word) plays little or no part in word identification.

These word superiority effect experiments, besides showing that letters in words are processed in parallel, suggest that word recognition is quite rapid. The exposure duration in these experiments that achieved about 75% correct recognition was typically about 30 ms, and if the duration is increased to 50 ms, word identification is virtually perfect. This does not necessarily mean, however, that word identification takes only 50 ms—it merely shows that some initial visual encoding stages are completed in something like 50 ms. After 50 ms or so, the visual information may merely be held in a short-term memory buffer without yet having been fully processed. In fact, most estimates of the time to recognize a word are significantly longer than that (Rayner & Pollatsek, 1989). As we have previously noted, it takes about 500 ms to begin to name a word out loud, but that is clearly an upper estimate because it also includes motor programming and execution time. Skilled readers read about 300 words per minute, or about 5 words per second, which suggests that one fifth of a second, or 200 ms, might not be a bad guess for how long it takes to identify a word. Of course, in connected discourse, some words are predictable and can be identified to the right of fixation in parafoveal vision, so not all words need to be fixated. On the other hand, readers have to do more than identify words to understand the meaning of text. Most data thus point to something like 150–200 ms as a ballpark estimate of the time to encode a word.

Automaticity of Word Encoding

One surprising result from the word encoding literature is that the encoding of words seems to be automatic; that is, people cannot help encoding words. The easiest demonstration of this is called the Stroop effect (Stroop, 1935; see MacLeod, 1991, for a comprehensive review). There is actually some controversy about how strongly automatic the Stroop effect is (see Besner, Stolz, & Boutilier, 1997). That is, it may not be the case that people always process a word when they are trying their best not to process it. However, it appears that even in some cases when they are trying not to process it, they still do. In the Stroop task, people see words written in colored ink (e.g., they see red in green ink), and their task is to ignore the word and name the color (in this case, they should say green). The standard finding is that when the word is a different color name, participants are slowed down considerably in their naming and make many errors compared to a control condition (e.g., something like &&&& written in colored ink). In fact, even color-neutral words (i.e., noncolor names such as desk) slow down naming times. Such findings suggest that people are simply unable to ignore the words. Moreover, these effects persist even with days of practice. The effect is not limited to naming colors; one gets similar slowing of naming times if one is to name a common object that has a name superimposed on it—for example, a picture of a cat with the word dog superimposed on the middle of the cat (Rayner & Posnansky, 1978; Rayner & Springer, 1986; Rosinski, Golinkoff, & Kukish, 1975).

Another way in which word processing appears to be automatic is that people encode the meaning of a word even when they are not aware of it. This has been demonstrated using the semantic priming paradigm (Meyer & Schvaneveldt, 1971). In this paradigm, two words, a prime and a target, are seen in rapid succession. The details of the experiments differ, but in some, participants just look at the prime and name the target. The phenomenon of semantic priming is that naming times are approximately 30 ms faster when the prime is semantically related to the target (e.g., dog–cat) than when it is not (e.g., desk–cat). The most interesting version of this paradigm occurs when the prime is presented subliminally (Balota, 1983; Carr, McCauley, Sperber, & Parmelee, 1982; Marcel, 1983). Usually this is achieved by a very brief presentation of the prime (about 10–20 ms) followed by a pattern mask and then the target. The striking finding is that a priming effect (often almost as strong as when the prime is visible) occurs even in cases where the subject cannot reliably report whether anything appeared before the pattern mask, let alone what the identity of the prime was. Thus, individuals encode the meaning of the prime even though they are unaware of having done so.

Word Encoding in Nonalphabetic Languages

So far, we have concentrated on decoding words in alphabetic languages, using experiments in English as our guide. For all the results we have described so far, there is no reason to believe that the results would come out differently in other languages. However, some other written languages use different systems of orthography. Space does not permit a full description of all of these writing systems nor what is known about decoding in them (see Rayner & Pollatsek, 1989, chapter 2, for a fuller discussion of writing systems).

Basically, there are two other systems of orthography, with some languages using hybrids of several systems. First, the Semitic languages use an alphabetic system, but one in which few of the vowels are represented, so that the reader needs to supply the missing information. In Hebrew, there is a system with points (little marks) that indicate the vowels that are used for children beginning to read; in virtually all materials read by adult readers, however, the points are omitted. The other basic system is exemplified by Chinese, which is sometimes characterized as so-called picture writing, although that term is somewhat misleading because it oversimplifies the actual orthography. In Chinese, the basic unit is the character, which does not represent a word, but a morpheme, a smaller unit of meaning, which is also a syllable. (In English, for instance, compound words such as cow/boy would be two morphemes, as would prefixed, suffixed, and inflected words such as re/view, safe/ty, and read/ing.) The characters in Chinese are, to some extent, pictographic representations of the meaning of the morpheme; in many cases, however, they have become quite schematic over time, so that a naive reader would have a hard time guessing the meaning of the morpheme merely by looking at the form of the character. In addition, characters are not unitary in that a majority are made up of two radicals, a semantic radical and a phonetic radical. The semantic radical gives some information about the meaning of the word and the phonetic radical gives some hint about the pronunciation, although it is quite unreliable. (In addition, the Chinese character system is used to represent quite widely diverging dialects.)

A hybrid system is Japanese, which uses Chinese characters (called Kanji in Japanese) to represent the roots of most content words (nouns, verbs, and adjectives), which are not usually single syllables in Japanese. This is supplemented by a system of simpler characters, called Kana, in which each Kana character represents a syllable. One Kana system is used to represent function words (prepositions, articles, conjunctions) and inflections; another Kana system is used to represent loanwords from other languages, such as baseball. Another distinctive system is the Korean writing system, Hangul. In Hangul, a character represents a syllable, but it is not arbitrary, as in Kana. Instead, the component “letters” are arranged not in a left-to-right fashion, but are superimposed within a single character. Thus, in some sense, Hangul is similar to an alphabetic language.

The obvious question for languages without alphabets is whether encoding of words in such languages is more like learning visual templates than encoding is in alphabetic languages. However, as we hope the previous discussion indicates, thinking of words as visual templates even in Chinese is an oversimplification, as a word is typically two characters, and each character typically has two component radicals. Nonetheless, the system differs from an alphabetic language in that one has to learn how each character is pronounced and what it means, as opposed to an alphabetic language in which (to some approximation) one merely has to know the system in order to pronounce a word and know what it means (up to homophony). In fact, the Chinese orthography is hard for children to learn. One indication of this is that Chinese children are typically first taught a Roman script (Pinyin), which is a phonetic representation of Chinese, in the early grades. They are taught the Chinese characters only later, and then only gradually—a few characters at a time. It thus appears that having an alphabet is indeed a benefit in reading, and that learning word templates is difficult—either because it is easier to learn approximately 50 templates for letters than several thousand templates for words, or because the alphabetic characters allow one to derive the sound of the word (or both).

Sound Coding in Word Identification and Reading

So far, we have discussed word identification as if it were a purely visual process. That is, the prior section tacitly assumed that the process of word identification involves detectors for individual letters (in alphabetic languages), which feed into a word detector, in which the word is defined as a sequence of abstract letters. (In fact, one detail that was glossed over in the discussion of the parallel word-identification models is that the positions of individual letters need to be encoded precisely; otherwise people could not tell dog from god.) However, given that alphabets are supposed to code the sounds of words, it seems plausible that the process of identifying words is not purely visual, and that it also involves accessing the sounds that the letters represent and possibly assembling them into the sound of a word. Moreover, once one thinks about accessing the sound of a word, it becomes less clear what the term word identification actually means. Is it accessing a sequence of abstract letters, accessing the sound of the word, accessing the meaning of the word, or some combination of all three? In addition, what is the causal relationship among accessing the three types of codes? One possibility is that one merely accesses the visual code—more or less like finding a dictionary entry—and then looks up the sound of the word and the meaning in the “dictionary entry.” (This must be an approximation of what happens in orthographies such as Chinese.) Another relatively simple possibility is that for alphabetic languages, the reader must first access the sound of the word and only then can access the meaning. That is, according to this view, the written symbols merely serve to access the spoken form of the language, and a word’s meaning is tied only to the spoken form. On the other hand, the relationship may be more complex. For example, the written form may start to activate both the sound codes and the meaning codes, and then the three types of codes send feedback to each other to arrive at a solution as to what the visual form, auditory form, and meaning of the word are. Few topics in reading have generated as much controversy as this question of the role of sound coding in the reading process.

As mentioned earlier, naming of words is quite rapid (within about 500 ms for most words). Given that a significant part of this time must be taken up in programming the motor response and in beginning to execute the motor act of speaking, it certainly seems plausible that accessing the sound code could be rapid enough to be part of the process of getting to the meaning of a word. But even if the sound code is accessed at least as rapidly as the meaning, it may not play any causal role. Certainly, there is no logical necessity for involving the sound codes, because the sequence of letters is sufficient to access the meaning (or meanings) of the word; in the McClelland and Rumelhart (1981) and Paap et al. (1982) models, access to the lexicon (and hence word meaning) is achieved via a direct look-up procedure that involves only the letters that make up a word. However, before examining the role of sound coding in accessing the meanings of words, let us first look at how sound codes themselves are accessed.

The Access of Sound Codes

There are three general possibilities for how we could access the pronunciation of a letter string. Many words in English have irregular pronunciations (e.g., one), such that their pronunciations cannot be derived from the spelling-to-sound rules of the language. In these cases, it appears that the only way to access the sound code is via a direct access procedure, by which the word’s spelling is matched to an entry in the lexicon. In the above example, the letters o-n-e would activate the visual word detector for one, which would in turn activate the corresponding lexical entry. After this entry is accessed, the appropriate pronunciation for the word (/wun/) could be activated. In contrast, other words have regular pronunciations (e.g., won). Such words’ pronunciations could also be accessed via the direct route, but their sound codes could also be constructed through the use of spelling-to-sound correspondence rules or by analogy to other words in the language. Finally, it is of course possible to pronounce nonwords like mard. Unless all possible pronounceable letter strings have lexical entries (which seems unlikely), nonwords’ sound codes must be constructed.
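
The logic of the two routes can be sketched as follows (Python; the three-word lexicon and the letter-to-sound rules are invented toy examples, not a serious treatment of English spelling-to-sound correspondences):

```python
# Toy sketch of the two routes to a pronunciation.
LEXICON = {"one": "/wun/", "won": "/wun/", "island": "/EYE-lund/"}

# A crude constructive route: one sound per letter (silent final e).
RULES = {"e": ""}

def direct_route(letters):
    """Whole-word look-up in the lexicon; fails for nonwords."""
    return LEXICON.get(letters)

def constructive_route(letters):
    """Assemble a pronunciation from letter-to-sound correspondences."""
    return "/" + "".join(RULES.get(ch, ch) for ch in letters) + "/"

for string in ["one", "won", "mard"]:
    print(string, "| direct:", direct_route(string),
          "| constructive:", constructive_route(string))
```

For won the two routes agree; for one the constructive route produces a regularized /on/ that conflicts with the stored /wun/ (the error pattern of surface dyslexia); and for the nonword mard only the constructive route produces any output at all (the route that fails in deep and phonemic dyslexia).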

Research on patients with acquired dyslexia, who were previously able to read normally but suffered a stroke or brain injury, has revealed two constellations of symptoms that seem to argue for the existence of both the direct and the constructive routes to a word’s pronunciation (Coltheart, Patterson, & Marshall, 1980). In one type, surface dyslexia, the patients can pronounce both real words and nonwords but they tend to regularize irregularly pronounced words (e.g., pronouncing island as iz-land). In contrast to those with surface dyslexia, individuals with deep and phonemic dyslexia can pronounce real words (whether they are regular or irregular), but they cannot pronounce nonwords. Researchers initially believed that individuals with surface dyslexia completely relied on their intact constructive route, whereas those with deep dyslexia completely relied on their direct route. However, researchers now realize that these syndromes are somewhat more complex than had been first thought, and the descriptions of them here are somewhat oversimplified. Nonetheless, they do seem to argue that the two processes (a direct look-up process and a constructive process) may be somewhat independent of each other.

Assuming that these two processes exist in normal skilled readers (who can pronounce both irregular words and nonwords correctly), how do they relate to each other? Perhaps the simplest possibility is that they operate independently of each other in a race, so to speak. Whichever process finishes first determines the pronunciation. Thus, because the direct look-up process cannot access pronunciations of nonwords, the constructive process would determine the pronunciations of nonwords. What would happen for words? Presumably, the speed of the direct look-up process would be sensitive to the frequency of the word in the language, with low-frequency words taking longer to access. However, the constructive process, which is not dependent on lexical knowledge, should be largely independent of the word’s frequency. Thus, for common (i.e., frequent) words, the pronunciation of both regular and irregular words should be determined by the direct look-up process and should take more or less the same time. For less frequent words, however, both the direct and constructive processes would be operating, because the direct access process would be slower. Thus, for irregular words, there would be conflict between the pronunciations generated by the two processes; one would therefore expect either that irregular words would be pronounced more slowly (if the conflict is resolved successfully) or that errors would occur (if the word is regularized).
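
This race logic is easy to caricature in code. In the following sketch (Python), every timing parameter is invented; the point is only the qualitative prediction that regularity should matter for low-frequency words but not for high-frequency words:

```python
# Toy race between the direct route (frequency-sensitive) and the
# constructive route (frequency-blind). All numbers are invented.
def naming_time(frequency, regular):
    direct = 450 if frequency == "high" else 600  # rare words: slower look-up
    constructive = 520                            # indifferent to frequency
    if regular or direct <= constructive:
        # Either both routes agree, or the direct route wins the race,
        # so there is no conflict to resolve.
        return min(direct, constructive)
    # Irregular word, constructive route finishes first: its regularized
    # (wrong) output conflicts with the direct route's output, and
    # resolving the conflict costs extra time.
    return direct + 60

for freq in ("high", "low"):
    for regular in (True, False):
        kind = "regular" if regular else "irregular"
        print(f"{freq}-frequency {kind}: {naming_time(freq, regular)} ms")
```

Run as written, high-frequency regular and irregular words come out identical (450 ms), whereas low-frequency irregular words incur a sizable penalty (660 ms vs. 520 ms), which is precisely the interaction described next.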

The data from many studies are consistent with such a model. A very reliable finding (Baron & Strawson, 1976; Perfetti & Hogaboam, 1975) is that regular words are pronounced (named) more quickly than are irregular words. However, the difference in naming times between regular and irregular words is a function of word frequency: For high-frequency words there is little or no difference, but there is a large difference for low-frequency words. Still, the process of naming is likely to be more complex than a simple race, as people usually make few errors in naming, even for low-frequency irregular words. Thus, it appears that the two routes somehow cooperate to produce the correct pronunciation, but when the two routes conflict in their output, there is slowing of the naming time (Carr & Pollatsek, 1985). It is worth noting, however, that few words are totally irregular. That is to say, even for quite irregular words like one and island, the constructive route would produce a pronunciation that had some overlap with the actual pronunciation.

Before leaving this section, we must note that there is considerable controversy at the moment concerning exactly how the lexicon is accessed. In the traditional dual route models that we have been discussing (e.g., Coltheart, 1978; Coltheart, Curtis, Atkins, & Haller, 1993; Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), there are two pathways to the lexicon: one from graphemic units to meaning directly, and one from graphemic units to phonological units and then to meaning (the phonological mediation pathway). A key aspect of these models is that (a) the direct pathway must be used to read exception words (e.g., one), for which an indirect phonological route would fail, and (b) the phonological route must be used to read pseudowords (e.g., nufe), which have no lexical representation. Another, more recent class of models, often termed connectionist models, takes a different approach. These models take issue with the key idea that we actually have a mental lexicon. Instead, they assume that processing a word (or pseudoword) comes from an interaction of the stimulus and a mental representation that reflects the past experience of the reader. However, this past experience is not represented in the form of a lexicon, but rather as patterns of activity that are distributed, in the sense that one’s total memory engages with a given word rather than a single lexical entry. In addition, this memory is nonrepresentational, in that the elements are relatively arbitrary features of experience rather than being things like words or letters (Harm & Seidenberg, 1999; Plaut, McClelland, Seidenberg, & Patterson, 1996; Seidenberg & McClelland, 1989). For this process to work rapidly enough for one to recognize a word in a fraction of a second, these models all assume that this contact between the current stimulus and memory occurs in parallel across all these features. For this reason, these models are often termed parallel distributed processing (PDP) models. Resonance models (Stone & Van Orden, 1994; Van Orden & Goldinger, 1994) are a similar class of models that posit a somewhat different type of internal memory structure. Because these models are complex and depend on computer simulations in which many arbitrary assumptions need to be made in order for the simulations to work, it is often hard to judge how well they account for various phenomena. Certainly, at our present state of knowledge, it is quite difficult to decide whether this nonrepresentational approach is an improvement on the more traditional representational models (see Besner, Twilley, McCann, & Seergobin, 1990; Coltheart et al., 1990; Seidenberg & McClelland, 1990). For the purposes of the present discussion, a major difference in emphasis between the models is that in the connectionist models, processes that would look like the phonological route in the more traditional models enter into the processing of regular words, and processes that would look like direct lexical look-up enter into the processing of pseudowords.

Sound Codes and the Access of Word Meanings

In the previous section we discussed how readers access a visual word’s sound codes. However, a much more important question is how readers access a visual word’s meaning (or meanings). As previously indicated, this has been a highly contentious issue on which respected researchers have stated quite differing positions. For example, Kolers (1972) claimed that processing during reading does not involve readers’ formulating articulatory representations of printed words, whereas Gibson (1971) claimed that the heart of reading is the decoding of written symbols into speech. Although we have learned a great deal about this topic, the controversy represented by this dichotomy of views continues, and researchers’ opinions on this question still differ greatly.

Some of the first attempts to resolve this issue involved the previously discussed lexical decision task. One question that was asked was whether there was a difference between regularly and irregularly spelled words, under the tacit assumption that the task reflects the speed of accessing the meaning of words (Bauer & Stanovich, 1980; Coltheart, 1978). These data unfortunately tended to be highly variable: Some studies found a regularity effect whereas others did not. Meyer, Schvaneveldt, and Ruddy (1974) utilized a somewhat different paradigm and found that the time for readers to determine whether touch was a word was slower when it was preceded by a word such as couch (which presumably primed the incorrect pronunciation) as compared to when it was preceded by an unrelated word. However, there is now considerable concern that the lexical decision task is fundamentally flawed as a measure of so-called lexical access that is related to accessing a word’s meaning. The most influential of these arguments was that this task is likely to induce artificial checking strategies before making a response (Balota & Chumbley, 1984, 1985).

A task that gets more directly at accessing a word’s meaning is the categorization task. As noted earlier, in this task, participants are given a category label (e.g., tree) and then a target word (e.g., beech, beach, or bench) and have to decide whether the target is a member of the preceding category (Van Orden, 1987; Van Orden, Johnston, & Hale, 1988; Van Orden, Pennington, & Stone, 1990). The key finding was that participants had difficulty rejecting homophones of true category exemplars (e.g., beach). Not only were they slow in rejecting these items, they typically made 10–20% more errors on them than on control items that were visually similar (e.g., bench). In fact, these errors persisted even when people were urged to be cautious and go slowly. Moreover, this effect is not restricted to word homophones: A similar, although somewhat smaller, effect was reported with pseudohomophones (e.g., brane). Moreover, in a similar semantic relatedness judgment task (i.e., deciding whether two words on the screen are semantically related), individuals are slower and make more errors on false homophone pairs such as pillow–bead (Lesch & Pollatsek, 1998). (Bead is a false homophone of pillow because bead could be a homophone of bed, analogously to head’s rhyming with bed.) These findings with pseudohomophones and false homophones indicate that such results are unlikely to be due merely to participants’ lack of knowledge of the target words’ spellings, and that assembled phonology plays a significant role in accessing a word’s meaning.

Still, in order for sound codes to play a crucial role in the access of word meaning, they must be activated relatively early in word processing. In addition, these sound codes must be activated during natural reading, and not just when words are presented in relative isolation (as they were in the aforementioned studies). To address these issues, Pollatsek, Lesch, Morris, and Rayner (1992) utilized a boundary paradigm (Rayner, 1975) to examine whether phonological codes were active before words were even fixated (and hence very early in processing). Although we discuss the boundary paradigm in more detail later in this research paper, it basically consists of presenting a parafoveal preview of a word or a letter string to the right of a boundary within a sentence. When readers’ eyes move past the boundary and toward a parafoveal target word, the preview changes. In the Pollatsek et al. study, the preview word was either identical to the target word (rains), a homophone of it (reins), or an orthographic control word that shared many letters with the target word (ruins). That is, participants often see a different word in the target word location before they fixate it, although they are virtually never aware of any changes. The key finding was that reading was faster when the preview was a homophone of the target than when it was just orthographically similar; this indicates that in reading text, sound codes are extracted from words even before they are fixated, which is quite early in the encoding process. In fact, data from a similar experiment indicate that Chinese readers also benefit from a homophone of a word in the parafovea (Pollatsek, Tan, & Rayner, 2000).
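
The display-change logic of the boundary paradigm can be sketched as follows (Python; the eye-tracker and display calls are hypothetical stand-ins passed in as functions, and a real gaze-contingent system must complete the change within the few milliseconds of the saccade itself):

```python
# Minimal sketch of the boundary paradigm's display-change logic.
def run_boundary_trial(words, target_index, preview, get_gaze_x, redraw):
    # Invisible boundary: the horizontal position where the target word
    # begins (assume a 10-pixel character grid for illustration).
    boundary_x = sum(len(w) + 1 for w in words[:target_index]) * 10
    shown = list(words)
    shown[target_index] = preview          # e.g., "reins" previewing "rains"
    redraw(" ".join(shown))
    while True:
        if get_gaze_x() >= boundary_x:     # the eyes crossed the boundary
            shown[target_index] = words[target_index]
            redraw(" ".join(shown))        # target restored mid-saccade
            return

# Toy usage: simulated gaze samples marching rightward across the screen.
samples = iter(range(0, 400, 40))
run_boundary_trial(["the", "spring", "rains", "came"], 2, "reins",
                   get_gaze_x=lambda: next(samples),
                   redraw=lambda text: print("display:", text))
```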

Some other paradigms, however, have produced less convincing evidence for the importance of sound coding in word identification. One, in fact, used a manipulation in a reading study similar to the preview study, with three conditions: correct homophone, incorrect homophone, and spelling control (e.g., “Even a cold bowl of cereal/serial/verbal . . .”). However, in this study, when a wrong word appeared (either the wrong homophone or the spelling control), it remained in the text throughout the trial. People read short passages containing these errors, and the key question was whether the wrong homophones would be less disruptive than the spelling controls because they “sounded right.” In these studies (Daneman & Reingold, 1993, 2000; Daneman, Reingold, & Davidson, 1995), there was disruption in the reading process (measured by examining the gaze duration on the target word) for both types of wrong words, but no significant difference between the wrong homophones and the spelling controls (although there was more disruption for the spelling controls slightly later in processing). This finding is consistent with a view in which sound coding plays only a backup role in word identification. On the other hand, Rayner, Pollatsek, and Binder (1998) found greater disruption for the spelling control than for the wrong homophone even on immediate measures of processing. However, even in the Rayner et al. study, the homophone effects were relatively subtle (far more so than in Van Orden’s categorization paradigm). Thus, it appears that sentence and paragraph context may interact with word processing to make errors (be they phonological or orthographic) less damaging to the reading process. Finally, we should note that at the moment there is some controversy about the exact nature of the findings in these homophone substitution studies (Jared, Levy, & Rayner, 1999) and about the use of such substitutions to study sound coding in reading (Starr & Fleming, 2001). However, for the most part, the results obtained from studies using homophone substitutions are broadly consistent with other studies of sound coding in which homophones are not used.

Summary

Although it does seem clear that phonological representations are used in the reading process, how important these sound codes are to accessing the meaning of a word remains a matter of controversy. Certainly, the categorization studies make clear that sound coding plays a large role in getting to the meaning of a word, and the parafoveal preview studies indicate that sound codes are accessed early when reading text. However, the data from the wrong-homophone studies in reading seem to indicate that the role of sound coding in accessing word meanings in reading may be a bit more modest. Whatever the resolution of this issue, most cognitive psychologists agree that phonological codes are activated in reading and play an important role by assisting short-term memory (Kleiman, 1975; Levy, 1975; Slowiaczek & Clifton, 1980).

Eye Movements in Reading

The experiments we have discussed thus far have mainly studied individuals viewing words in isolation. However, fluent reading consists of much more than simply processing single words—it also involves the integration of successive words into a meaningful context. In this section, we discuss a number of factors that influence the ease or difficulty with which we read words embedded in text. Ultimately, one could view research on reading as an attempt to formulate a list of all the variables that influence reading processes. Ideally, if we had an exhaustive list of each and every constituent factor in reading (and, of course, how each of these factors interacts with the others), we could develop a complete model of reading. Although quite a bit of work remains to be done to accomplish such an ambitious endeavor, a great deal of progress has been made. In particular, as technology has improved, researchers have developed more accurate and direct methodologies for studying the reading process. One of these innovations, which has been used extensively for the past 25 years, involves using readers’ eye movements to uncover the cognitive processes involved in reading.

Basic Facts About Eye Movements

Although it may seem as if our eyes sweep continuously across the page as we read, our eyes actually make a series of discrete jumps between different locations in the text, more or less going from left to right across a line of text (see Huey, 1908; Rayner, 1978, 1998). More specifically, typical eye movement activity during reading consists of sequences of saccades, which are rapid, discrete jumps from location to location, and fixations, during which the eyes remain relatively stable for periods that last, on average, about a quarter of a second. The reason that continual eye movements are necessary during reading is that our visual acuity is generally quite limited. Although the retina itself is capable of detecting stimuli from a relatively wide visual field (about 240° of visual angle), high-acuity vision is limited to the fovea, which consists of only the central 2° of visual angle (which at a normal reading distance encompasses approximately six to eight letters). As one gets further away from the point of fixation (toward the parafovea and eventually the periphery), visual acuity decreases dramatically, and it is much more difficult to see letters and words clearly.
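
As a rough worked example of the acuity arithmetic (the 40-cm viewing distance and the 0.2-cm letter width are assumed, illustrative values, not figures from the text):

```latex
% Width of the 2-degree foveal region at viewing distance d = 40 cm:
w = 2d\tan(1^\circ) = 2 \times 40\,\text{cm} \times 0.0175 \approx 1.4\,\text{cm}
% At an assumed 0.2 cm per printed letter:
1.4\,\text{cm} \div 0.2\,\text{cm/letter} \approx 7\ \text{letters}
```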

The purpose of a saccade is to focus a region of text onto foveal vision for more detailed analysis, because reading on the basis of only parafoveal-peripheral information is generally not possible (Rayner & Bertera, 1979; Rayner, Inhoff, Morrison, Slowiaczek, & Bertera, 1981). Saccades are relatively fast, taking only about 20–50 ms (depending on the distance covered). In addition, because their velocity can reach up to 500°/s, visual sensitivity is reduced to a blur during an eye movement, and little or no new information is obtained while the eye is in motion. Moreover, one is not aware of this blur due to saccadic suppression (Dodge, 1900; Ishida & Ikeda, 1989; Matin, 1974; Wolverton & Zola, 1983). Eye movements during reading range from less than one character space to 15–20 character spaces (although such long saccades are quite rare and typically follow regressions; see below), with the eyes typically moving forward approximately eight character spaces at a time. As words in typical English prose are on average five letters long, the eyes thus move on average a distance that is roughly equivalent to the length of one and one-half words.

Although (perhaps not surprisingly) the eyes typically move from left to right (i.e., in the direction of the text in English), about 10–15% of eye movements shift backwards in text and are termed regressions (Rayner, 1978, 1998; Rayner & Pollatsek, 1989). For the most part, regressions tend to be short, as the eyes only move a few letters. Readers often make such regressions in response to comprehension difficulty (see Rayner, 1998, for a review), but regressive eye movements may also occur when the eyes have moved a little too far forward in the text and a small backwards correction is needed in order for us to process a particular word of interest. Longer regressions do occur occasionally, and when such movements are necessary in order to correctly comprehend the text, readers are generally accurate at moving their eyes back to the location within the text that caused them difficulty (Frazier & Rayner, 1982; Kennedy & Murray, 1987).

Given the blur of visual information during the physical movement of the eyes, the input of meaningful information takes place during fixations (Ishida & Ikeda, 1989; Wolverton & Zola, 1983). As we discuss later in this research paper, readers tend to fixate on or near most words in text, and the majority of words are only fixated once (Just & Carpenter, 1980). However, some words are skipped (Ehrlich & Rayner, 1981; Gautier, O’Regan, & LaGargasson, 2000; O’Regan, 1979, 1980; Rayner & Well, 1996). Word skipping tends to be related to word length: Short words (e.g., function words like the or and) are skipped about 75% of the time, whereas longer words are rarely skipped. More specifically, as length increases, the probability of fixating a word increases (Rayner & McConkie, 1976): Two- to three-letter words are fixated around 25% of the time, but words with eight or more letters are almost always fixated (and are often fixated more than once before the eyes move to the next word). However, as we discuss later, longer content words that are highly predictable from the preceding context are also sometimes skipped.

Fixation durations are highly variable, ranging from less than 100 ms to over 500 ms, with a mean of about 250 ms (Rayner & Pollatsek, 1989). One important question is whether this variability in the time readers spend fixating on words is due only to low-level factors such as word length or whether it also reflects higher level influences. As the prior sentence suggests, it is clear that low-level variables are important. Word length in particular has been found to have a powerful influence on the amount of time a reader fixates on a word (Kliegl, Olson, & Davidson, 1982; Rayner & McConkie, 1976; Rayner, Sereno, & Raney, 1996): As word length increases, fixation times increase as well. The fact that readers tend to fixate longer words for longer periods of time is perhaps not surprising—such an effect could simply be the product of the mechanical (i.e., motor) processes involved in moving and fixating the eyes. What has been somewhat more controversial is whether eye movement measures can also be used to infer moment-to-moment cognitive processes in reading, such as the difficulty of identifying a word.

There is now a large body of evidence, however, that the time spent fixating a word is influenced by word frequency: Fixation times are longer for words of lower frequency (i.e., words seen less frequently in text) than for words of higher frequency, even when the low-frequency words are the same length as the high-frequency words (Hyönä & Olson, 1995; Inhoff & Rayner, 1986; Just & Carpenter, 1980; Kennison & Clifton, 1995; Rayner, 1977; Rayner & Duffy, 1986; Rayner & Fischer, 1996; Raney & Rayner, 1995; Rayner & Raney, 1996; Rayner et al., 1996; Sereno & Rayner, 2000; Vitu, 1991). As with words in isolation, this is presumably because the slower direct access process for words of lower frequency increases the time to identify them. Furthermore, there is a spillover effect for low-frequency words (Rayner & Duffy, 1986; Rayner, Sereno, Morris, Schmauder, & Clifton, 1989): When the currently fixated word is of low frequency, some of its processing is passed downstream in the text, leading to longer fixation times on the next word. A corollary of the spillover effect is that when words are fixated multiple times within a passage, fixation durations on these words decrease, particularly if they are of low frequency (Hyönä & Niemi, 1990; Rayner, Raney, & Pollatsek, 1995). Finally, a word’s morphology also affects fixation times. Lima (1987), for example, found that readers tend to fixate longer on prefixed words (e.g., revive) than on pseudoprefixed words (e.g., rescue). More recently, Hyönä and Pollatsek (1998) found that the frequencies of both morphemes of a compound word influenced fixation time on the word, even for compound words equated on whole-word frequency. However, the timing differed: The first morpheme influenced the duration of the initial fixation on the word, whereas the second morpheme influenced only later processing of the word. Similarly, Niswander, Pollatsek, and Rayner (2000) found that the frequency of the root morpheme of suffixed words (e.g., govern in government) affected the fixation time on the word. Thus, at least some components of words, in addition to the words themselves, influence fixation times in reading.

The Perceptual Span

A central question in reading is how much information we can extract from text during a single fixation. As mentioned earlier, the data show that our eyes move approximately once every quarter of a second during normal reading, suggesting that only a limited amount of information is typically extracted from the text on each fixation. This, coupled with the physical acuity limitations inherent in the visual system, suggests that the region of text on the page from which useful information may be extracted on each fixation is relatively small.

Although a number of different techniques have been used in attempts to measure the size of the effective visual field (or perceptual span) in reading, most of them have rather severe limitations (see Rayner, 1975, 1978, for a discussion). One method that has proven effective, however, is the moving window technique (McConkie & Rayner, 1975; Rayner, 1986; Rayner & Bertera, 1979; N. R. Underwood & McConkie, 1985). This technique involves presenting readers with a window of normal text around the fixation point on each fixation, with the information outside that window degraded in some manner. To accomplish this, readers’ eye movements are continuously monitored and recorded by a computer while they read text presented on a computer monitor, and, when the eyes move, the computer changes the text contingent on the position of the eyes. In a typical experiment, an experimenter-defined window of normal text is presented around the fixation point, while all the letters outside the window are changed to random letters. The extent of the perceptual span can be examined by manipulating the size of the window region. The logic of this technique is that if reading is normal with a window of a particular size (i.e., if people read both with normal comprehension and at their normal rate), then information outside that window is not used in the reading process.
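
Here is a minimal sketch of how each display might be constructed (Python; the window sizes match the example in Figure 20.2, but preserving spaces outside the window is just one of several variants used in the literature, and a real implementation must redraw within milliseconds of each saccade):

```python
import random
import string

# Minimal sketch of a moving-window display: letters outside a window
# around the fixated character are replaced with random letters.
def windowed_text(line, fixation_pos, left=4, right=14):
    out = []
    for i, ch in enumerate(line):
        inside = fixation_pos - left <= i <= fixation_pos + right
        out.append(ch if inside or ch == " " else
                   random.choice(string.ascii_lowercase))
    return "".join(out)

line = "the quick brown fox jumps over the lazy dog"
for fix in (4, 12, 22):        # successive (invented) fixation positions
    print(windowed_text(line, fix))
    print(" " * fix + "*")     # asterisk marks fixation, as in Fig. 20.2
```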

Figure 20.2 illustrates a typical example of the moving window technique. In this example, a hypothetical reader is presented with a window of text that consists of 4 letters to the left of fixation and 14 letters to the right of fixation (fixation points are indicated by asterisks). As can be seen in the figure, the window of normal text follows the reader’s fixation points—if the eyes make a forward saccade, the window moves forward, but if the eyes make a backward saccade (a regression), the window moves backward as well.

     Fig. 20.2. An example of the moving window technique.

Studies using this technique have consistently shown that the size of the perceptual span is relatively small. For readers of alphabetical languages such as English, French, and Dutch, the span extends from the beginning of the currently fixated word or about three to four letters to the left of fixation (McConkie & Rayner, 1976; Rayner, Well, & Pollatsek, 1980; N. R. Underwood & McConkie, 1985) to about 14–15 letters to the right of fixation (DenBuurman, Boersma, & Gerrissen, 1981; McConkie & Rayner, 1975; Rayner, 1986; Rayner & Bertera, 1979). Thus, the span is asymmetric to the right for readers of English. Interestingly, for written languages such as Hebrew (which are printed from right to left), the span is asymmetric to the left of fixation (Pollatsek, Bolozky, Well, & Rayner, 1981).

The perceptual span is influenced both by characteristics of the writing system and by characteristics of the reader. Thus, the span is considerably smaller for Japanese text (Ikeda & Saida, 1978; Osaka, 1992). For Japanese text written vertically, the effective visual field is five to six character spaces in the vertical direction of the eye movement (Osaka & Oda, 1991). More recently, Inhoff and Liu (1998) found that Chinese readers have an asymmetric perceptual span extending from one character to the left of fixation to three character spaces to the right. (Chinese is now written from left to right.) Furthermore, Rayner (1986) found that beginning readers at the end of the first grade had a smaller span, consisting of about 12 letter spaces to the right of fixation, than did skilled readers, whose perceptual span was 14–15 letter spaces to the right of fixation. Thus, it seems that the size of the perceptual span is defined not only by our physical limitations (our limited visual acuity), but also by the amount and difficulty of the information we need to process as we read. As text density increases, our perceptual span decreases, and we are able to extract information from only smaller areas of text.

Another issue regarding the perceptual span is whether readers acquire information from below the line that they are reading. Inhoff and Briihl (1991; Inhoff & Topolski, 1992) examined this issue by recording readers' eye movements as they read a line from a target passage while ignoring a distracting line of text (taken from a related passage) located directly below the target text. Initially, readers' answers to multiple-choice questions suggested that they had indeed obtained information from both attended and unattended lines. However, when readers' eye movements were examined, the data showed that they occasionally fixated the distractor text. When these extraneous fixations were removed from the analysis, there was no indication that readers obtained useful semantic information from the unattended text. Pollatsek, Raney, LaGasse, and Rayner (1993) examined the issue more directly by using a moving window technique. The line the reader was reading and all lines above it were normal, but the text below the currently fixated line was altered in a number of ways (including replacing lines of text with other text and replacing the letters below the currently fixated line with random letters). Pollatsek et al. (1993) found that text was read most easily when normal text was below the line and when there were Xs below the line. None of the other conditions differed from each other, which suggests that readers do not obtain semantic information from below the currently fixated line.

Although the perceptual span is limited, it does extend beyond the currently fixated word. Rayner, Well, Pollatsek, and Bertera (1982) presented readers with either a three-word window (consisting of the fixated word and the next two words), a two-word window (consisting of the fixated word and the next word), or a one-word window (consisting only of the currently fixated word). When reading normal, unperturbed text (the baseline), the average reading rate was about 330 words per minute (wpm), and the same average reading rate was found in the three-word condition. However, in the two-word window condition, when the amount of text available to the reader was reduced to only two words, the average reading rate fell to 300 wpm, and it slowed further to 200 wpm in the one-word window condition. So it seems that if skilled readers are allowed to see three words at a time, reading proceeds normally; if the amount of text available for processing is reduced to only the currently fixated word, they can still read reasonably fluently, but at only about two-thirds of normal speed. Hence, although readers may extract information from more than one word per fixation, the area of effective vision is no more than three words.

One potential limitation of the moving window technique is that reading would be artifactually slowed if readers could see the display changes occurring outside the window of unperturbed text and were simply distracted by them. If this were the case, one could argue that data obtained using the moving window technique are confounded: Slower reading rates in the one-word condition mentioned above could be due either to readers' limited perceptual span or to the fact that readers are simply distracted by nonsensical letters in their peripheries. In some instances this is true: When the text falling outside the window consists of all Xs, the reader is generally aware of where the normal text is and where the Xs are. In contrast, if random letters are used instead of Xs, readers are generally unaware of the display changes taking place in their peripheries, although they may be aware that they are reading more slowly and may have the impression that something is preventing them from reading normally. More tellingly, however, readers' conscious awareness of display changes is not related to reading speed, in that participants in moving window experiments actually read faster when the text outside the window consists of Xs rather than random letters. This is most likely because random letters are more likely to lead to misidentification of other letters or words, whereas Xs are not.

The Acquisition of Information to the Right of Fixation

So far we have discussed the fact that when readers are not allowed to see letters or words in the parafovea, reading rates are slowed, indicating that at least some of the information in the parafovea is necessary for fluent reading. Another important indication that readers extract information from text to the right of fixation is that we do not read every word in the text, indicating that words to the right of fixation can be partially (or fully) identified and skipped (incidentally, in cases where a word is skipped, the duration of the fixation prior to the skip tends to be inflated; Pollatsek, Rayner, & Balota, 1986). As mentioned earlier, short function words (e.g., conjunctions and articles) and words that are highly predictable or constrained by the preceding context are more likely to be skipped than are long words or words that are not constrained by preceding context. Such a pattern in skipping rates indicates that readers obtain information from both the currently fixated word and the next (parafoveal) word, but it also seems to indicate that the amount of information obtained from the right of fixation is limited (e.g., because longer words tend not to be skipped). This suggests that the major information used in the parafovea is the first few letters of the word to the right of the fixated word.

Further evidence for this conclusion comes from an additional experiment conducted by Rayner et al. (1982). In this experiment, sentences were presented to readers with either (a) a one-word window, (b) a two-word window, or (c) the fixated word visible together with partial information from the word immediately to the right of fixation (either the first one, two, or three letters; the remaining letters of the word to the right of fixation were replaced by letters that were either visually similar or visually dissimilar to the ones they replaced). The data showed that as long as the first three letters of the word to the right of fixation were normal and the others were replaced by letters that were visually similar to the letters they replaced, reading was as fast as when the entire word to the right was available. However, the other letter information is not irrelevant, because when the remainder of the word was replaced by visually dissimilar letters, reading rates were slower than when the entire word to the right was available, indicating that more information is processed than just the first three letters of the next word (see also Lima, 1987; Lima & Inhoff, 1985).
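
As a rough illustration of how such preview strings might be constructed, the sketch below keeps the first few letters of the parafoveal word and replaces the rest with visually similar or dissimilar letters. The similarity mapping here is a hypothetical stand-in for the carefully normed letter pairs used in the actual experiments:

    # Hypothetical visual-similarity mapping (illustrative only; the cited
    # experiments used normed letter pairs).
    VISUALLY_SIMILAR = {"a": "o", "e": "c", "n": "m", "h": "b", "i": "l",
                        "t": "f", "r": "n", "d": "b", "u": "v", "s": "z"}

    def make_preview(word: str, preserved: int, similar: bool) -> str:
        # Keep the first `preserved` letters; replace the remainder with
        # visually similar or visually dissimilar letters.
        def repl(ch: str) -> str:
            if similar:
                return VISUALLY_SIMILAR.get(ch, ch)
            return "x" if ch != "x" else "o"  # crude stand-in for a dissimilar letter
        return word[:preserved] + "".join(repl(c) for c in word[preserved:])

    print(make_preview("chart", preserved=3, similar=True))   # -> "chanf"
    print(make_preview("chart", preserved=3, similar=False))  # -> "chaxx"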

In addition to the extraction of partial word information from the right of fixation, word length information is also obtained from the parafovea, and this information is used in computing where to move the eyes next (Morris, Rayner, & Pollatsek, 1990; O’Regan, 1979, 1980; Pollatsek & Rayner, 1982; Rayner, 1979; Rayner, Fischer, & Pollatsek, 1998; Rayner & Morris, 1992; Rayner et al., 1996). Word length information may also be utilized by readers to determine how parafoveal information is to be used—sometimes enough parafoveal letter information can be obtained from short words that they can be identified and skipped. In contrast, partial word information extracted from a longer parafoveal word may not usually allow full identification of the word but may facilitate subsequent foveal processing when the parafoveal word is eventually fixated (Blanchard, Pollatsek, & Rayner, 1989).

Integration of Information Across Fixations

The extraction of partial word information from the parafovea suggests that it is integrated in some fashion with information obtained from the parafoveal word when it is subsequently fixated. A variety of experiments have been conducted to determine the kinds of information that are involved in this synthesis. One experimental method that has been used to investigate this issue, the boundary paradigm (Rayner, 1975), is a variation of the moving window technique discussed earlier. Similar to the moving window paradigm, text displayed on a computer screen is manipulated as a function of where the eyes are fixated, but in the boundary paradigm, only the characteristics of a specific target word in a particular location within a sentence are manipulated. For example, in the sentence The man picked up an old map from the chart in the bedroom, when readers’ eyes move past the space between the and chart, the target word chart would change to chest. (The rest of the sentence remains normal throughout the trial.) By examining how long readers fixate on a target word as a function of what was previously available in the target region prior to fixation, researchers can make inferences about the types of information that readers obtained from the target word prior to fixating upon it.
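
The display-change logic of the boundary paradigm can be sketched as follows. This is an illustrative reconstruction (not code from the cited studies), and it assumes that the current fixation position is available as a character index on the line:

    def boundary_display(sentence: str, boundary: int, preview: str,
                         target_span: tuple, fixation: int) -> str:
        # Until fixation position crosses the invisible boundary, the target
        # region shows the preview string; afterward it shows the target
        # word. The rest of the sentence is never altered.
        start, end = target_span
        shown = sentence[start:end] if fixation > boundary else preview
        return sentence[:start] + shown + sentence[end:]

    sentence = "The man picked up an old map from the chest in the bedroom."
    span = (sentence.index("chest"), sentence.index("chest") + len("chest"))
    boundary = span[0] - 1  # the space just before the target region

    print(boundary_display(sentence, boundary, "chart", span, fixation=20))       # preview shown
    print(boundary_display(sentence, boundary, "chart", span, fixation=span[0]))  # target shown

Here the sentence is stored with the word the reader will ultimately fixate (chest) in place, and the preview (chart) is displayed until the eyes cross the space before the target region, mirroring the example above.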

Two different tasks have been used to examine the integration of information across saccades: reading and word naming. In the reading studies, fixation time on the target word is the primary dependent variable. In the naming studies (Balota & Rayner, 1983; McClelland & O’Regan, 1981; Rayner, 1978; Rayner, McConkie, & Ehrlich, 1978; Rayner et al., 1980), a single word or letter string is presented in the parafovea, and when the reader makes an eye movement toward it, it is replaced by a word that is to be named as quickly as possible. The influence of the parafoveal stimulus is assessed by measuring the effect of the parafoveal stimuli on naming times. Surprisingly, in spite of the differences in procedure (text vs. single words) and dependent variables (eye movement measures vs. naming latency), virtually identical effects of the parafoveal stimulus have been found in the reading and naming studies.

Findings from the naming task indicate that if the first two or three letters of the parafoveal word are retained following the eye movement and subsequent boundary display change (i.e., if the first few letters of the to-be-fixated parafoveal word are preserved across the saccade), naming times are facilitated as compared to when these letters change across the saccade. Parafoveal processing is spatially limited, however, in that this facilitation was found when the parafoveal word was presented 3° or less from fixation, but not when the parafoveal stimulus was 5° from fixation (i.e., about 15 character spaces). Furthermore, when the parafoveal stimulus was presented 1° from fixation, naming was faster when there was no change than when only the first two or three letters were preserved across the saccade, but when the parafoveal stimulus was presented farther from fixation (2.3° or 3°), naming times were virtually identical regardless of whether only the first two to three letters or all of the letters were preserved across the saccade.

Hence, it is clear that readers can extract partial word information on one fixation to use in identifying a word on a subsequent fixation, but precisely what types of information may be carried across saccades? One possibility is that this integration is simply a function of the commonality of the visual patterns from the two fixations, such that the extraction of visual codes from the parafovea facilitates processing via an image-matching process. McConkie and Zola (1979; see also Rayner et al., 1980) tested this prediction by asking readers to read text in alternating case such that each time they moved their eyes, the text shifted from one alternated case pattern to its inverse (e.g., cHaNgE shifted to ChAnGe). Counter to the prediction that visual codes are involved in the integration of information across fixations, readers did not notice the case changes, and reading behavior did not differ from a control condition in which there were no case changes from fixation to fixation. Because changing the visual features did not disrupt reading, it appears that visual codes are not combined across saccades during reading. However, readers do extract abstract (i.e., case-independent) letter information from the parafovea (Rayner et al., 1980).

A number of other variables have been considered. One possibility is that some type of phonological (sound) code is involved in conveying information across saccades. As we discussed earlier, Pollatsek et al. (1992; see also Henderson, Dixon, Petersen, Twilley, & Ferreira, 1995) utilized both a naming task and a reading task; they found that a homophone of a target word (e.g., beach as a preview of beech) presented in the parafovea facilitated processing of the target word seen on the next fixation more than did a preview of a word that was merely visually similar to the target word (e.g., bench). However, they also found that the visual similarity of the preview to the target played a role in the facilitative effect of the preview, indicating that abstract letter codes are also preserved across saccades.

Morphemes, the smallest units of meaning, have also been examined as a possible vehicle for carrying information across saccades, but the evidence for this suggestion has thus far been negative. In another experiment, Lima (1987) used words that contained true prefixes (e.g., revive) and words that contained pseudoprefixes (e.g., rescue). If readers extract morphological information from the parafovea, then a larger preview benefit (the difference in fixation time between when a parafoveal preview of the target was available to the reader and when it was not) should be found for the prefixed words. Lima, however, found an equal benefit in the prefixed and pseudoprefixed conditions, indicating that prefixes are not involved in the integration of information across saccades. Furthermore, in a similar study, Inhoff (1989) presented readers with either the first morpheme of a true compound word, such as cow in cowboy, or the first morpheme of a pseudocompound, such as car in carpet, and found no difference in the sizes of the parafoveal preview benefits.

Finally, it has been suggested that semantic (meaning) information in the parafovea may aid in later identification of a word (G. Underwood, 1985), but studies examining this issue have generally been negative. Rayner, Balota, and Pollatsek (1986) reported a boundary experiment in which readers were shown three possible types of parafoveal previews prior to fixating on a target word. For example, prior to fixating on the target word tune, readers could have seen a parafoveal preview of either turc (orthographically similar), song (semantically related), or door (semantically unrelated). In a simple semantic priming experiment (with a naming response), semantically similar pairs (tune-song) resulted in a standard priming effect. However, when these targets were embedded in sentences, a parafoveal preview benefit was found only in the orthographically similar condition (supporting the idea that abstract letter codes are involved in integrating information from words across saccades), but there was no difference in preview benefit between the related and unrelated conditions (see also Altarriba, Kambe, Pollatsek, & Rayner, 2001). Thus, readers apparently do not extract semantic information from to-be-fixated parafoveal words.

The research we have reported here has focused on the fact that information extracted from a parafoveal word decreases the fixation time on that word when it is subsequently fixated. However, recently, a number of studies have examined whether information located in the parafovea influences the processing of the currently fixated word or, in similar terms, whether readers may process two or more words in parallel.

Murray (1998) designed a word comparison task in which readers were asked to detect a one-word difference in meaning between two sentences. Fixation times on target words were shorter when the parafoveal word was a plausible continuation of the sentence than when it was an implausible continuation. In another study, Kennedy (2000) instructed subjects to discriminate whether successively fixated words were identical or synonymous and found that fixation times on fixated (foveal) words were longer when the parafoveal word had a high frequency of occurrence than when it had a low frequency of occurrence.

It is possible, however, that the nature of attentional allocation is different in word comparison tasks than in more naturalistic reading tasks. In fact, several studies have demonstrated that the frequency of the word to the right of fixation during reading does not influence the processing of the fixated word (Carpenter & Just, 1983; Henderson & Ferreira, 1993; Rayner et al., 1998). To examine more closely whether properties of parafoveal words may affect the viewing durations of the currently fixated word during natural reading, Inhoff, Starr, and Shindler (2000) constructed sentence triplets in which readers were allowed one of three types of parafoveal preview. In the related condition, when readers fixated on a target word (e.g., traffic), they saw a related word (e.g., light) in the parafovea. In the unrelated condition, when readers fixated on the target word (e.g., traffic), they saw a semantically unrelated word (e.g., smoke) in the parafovea. Finally, in the dissimilar condition, upon fixating the target word, readers saw a series of quasi-random letters in the parafovea (e.g., govcq). Readers' fixation times on target words were shortest in the related condition (though not different from the unrelated condition) and longest in the dissimilar condition, suggesting that readers processed at least some abstract letter information from the parafoveal stimuli in parallel with the currently fixated word. However, semantic properties (i.e., meaning) of the parafoveal word had little effect on the time spent reading the target word.

Summary

The relative ease with which we read words is influenced by a number of variables, which include both low-level factors such as word length and high-level factors such as word frequency. The region of text from which readers may extract useful information on any given fixation is limited to the word being fixated and perhaps the next one or two words to the right. Moreover, the information that may be obtained to the right of fixation is generally limited to abstract letter codes (McConkie & Zola, 1979; Rayner et al., 1980) and phonological codes (Pollatsek et al., 1992), both of which may play a role in integrating information from words across saccades. Although there is no evidence that indicates that visual, morphological, or semantic information extracted from the parafovea aids later word identification, there is some controversy as to whether words may (under some circumstances and to some extent) be processed in parallel.

Word Identification in Context

Many studies measuring accuracy of identification in tachistoscopic (i.e., very brief) presentations (Tulving & Gold, 1963), naming latency (Becker, 1985; Stanovich & West, 1979, 1983), or lexical decision latency (Fischler & Bloom, 1979; Schuberth & Eimas, 1977) have also demonstrated contextual effects on word identification. These experiments typically involved having subjects read a sentence fragment like The skiers were buried alive by the sudden. . . . The subjects were then either shown the target word avalanche very briefly and asked to identify it, or the word was presented until they made a response to it (such as naming or lexical decision). The basic finding in the brief exposure experiments was that people could identify the target word at significantly briefer exposures when the context predicted it than when it was preceded by neutral context, inappropriate context, or no context. In the naming and lexical decision versions of the experiment, a highly constraining context facilitated naming or lexical decision latency relative to a neutral condition such as the frame The next word in the sentence will be. We should note that there has been some controversy over the appropriate baseline to use in these experiments, but that issue is beyond the scope of this research paper. We turn now to a discussion of context effects when readers are reading text.

In the previous section we discussed a number of variables that influence the ease or difficulty with which a word may be processed during reading. As we have pointed out, much of the variation in readers’ eye fixation times can be explained by differences in word length and word frequency. In addition, a number of variables involved in text processing at a higher level have also been found to affect the speed of word identification. For example, we have already mentioned that a parafoveal word is more likely to be skipped if it is predictable from prior sentence context (Ehrlich & Rayner, 1981; O’Regan, 1979). Moreover, such predictable words are also fixated for shorter periods of time (Balota, Pollatsek, & Rayner, 1985; Binder, Pollatsek, & Rayner, 1999; Inhoff, 1984; Rayner, Binder, Ashby, & Pollatsek, 2001; Rayner & Well, 1996; Schustack, Ehrlich, & Rayner, 1987).

Before moving on, we should clarify what we mean when we talk about predictability. In the studies we discuss in this section, predictability is generally assessed by presenting a group of readers with a sentence fragment up to, but not including, the potential target word. They are then asked to guess what the next word in the sentence might be. In most experiments, a target word is operationally defined as predictable if more than 70% of the readers are able to guess the target word based on the prior sentence context, and as unpredictable if fewer than 5% of the readers are able to guess it. We should note that during this norming process, readers generally take up to several seconds to formulate a guess, whereas during natural reading, readers fixate each word in the text for only about 250 ms. This makes it unlikely that predictability effects in normal silent reading are due to such a conscious guessing process. Moreover, most readers' introspection is that they are rarely if ever guessing what the next word will be as they read a passage of text. Hence, although we talk about predictability extensively in this section, we are certainly not claiming that the effects are due to conscious prediction. They may instead be due to an unconscious process that resembles prediction in some respects, although it would likely be quite different from conscious prediction.
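
In computational terms, this norming procedure amounts to estimating a cloze probability for each target and applying cutoffs. A minimal sketch, using the 70% and 5% criteria just described (the function names and sample data are purely illustrative):

    from collections import Counter

    def cloze_probability(guesses: list, target: str) -> float:
        # Proportion of norming participants who produced the target word
        # when given the sentence fragment up to (not including) it.
        counts = Counter(g.strip().lower() for g in guesses)
        return counts[target.lower()] / len(guesses)

    def classify(p: float) -> str:
        # Operational definition used in the studies discussed here.
        if p > 0.70:
            return "predictable"
        if p < 0.05:
            return "unpredictable"
        return "intermediate (typically not used as a target)"

    guesses = ["cake"] * 18 + ["pies", "food"]   # 20 norming participants
    p = cloze_probability(guesses, "cake")       # 0.9
    print(p, classify(p))                        # 0.9 predictable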

Although these predictability effects on skipping rates are quite clear, there is some controversy as to the nature of these effects. One possibility is that contextual influences take place relatively early on during processing and, as such, affect the ease of processing a word (i.e., lexical access). An alternative view is that contextual influences affect later stages of word processing, such as the time it takes to integrate the word into ongoing discourse structures (i.e., text integration). One stumbling block in resolving this issue is that some evidence suggests that fixation time on a word is at least in part affected by higher level text integration processing. For example, O’Brien, Shank, Myers, and Rayner (1988) constructed three different versions of a passage that contained one of three potential phrases early in the passage (e.g., stabbed her with his weapon, stabbed her with his knife, or assaulted her with his weapon). When the word knife appeared later in the passage, readers’ fixation times on knife were equivalent for stabbed her with his weapon and stabbed her with his knife, presumably because readers had inferred when reading the former phrase that the weapon was a knife (i.e., it is unlikely that someone would be stabbed with a gun). In contrast, when the earlier phrase was assaulted her with his weapon, fixation durations on the later appearance of knife were longer. That is, in this last case, the fixation duration on knife reflected not only the time to understand the literal meaning of the word, but also to infer that the previously mentioned weapon was a knife.

Thus, a major question about these effects of predictability is whether the effect occurs because the manipulation actually modulates the extraction of visual information in the initial encoding of the word, or whether the unpredictable word is simply harder to integrate into the sentence context, just as knife is harder to process in the above example if it is not clear from prior context that the murder weapon is a knife. Balota et al. (1985) examined this question by looking at the joint effects of the predictability of a target word and the availability of its visual information. Participants were given two versions of a sentence: one in which the target was highly predictable from prior sentence context and one in which it was not predictable (e.g., Since the wedding day was today, the baker rushed the wedding cake/pies to the reception). The availability of visual information was manipulated by changing the parafoveal preview. Before a reader's eyes crossed a boundary in the text (e.g., the n in wedding), the parafoveal preview letter string was either identical to the target (e.g., cake for cake and pies for pies), visually similar to the target (cahc for cake and picz for pies), identical to the alternative word (pies for cake and vice versa), or visually similar to the alternative word (picz for cake and cahc for pies). The results replicated earlier findings that predictable words are skipped more often than are unpredictable words, but more importantly, visually similar previews facilitated fixation times on predictable words more than on unpredictable words. Moreover, there was a difference in the preview benefit for cake and cahc, but no difference in the benefit for pies and picz, indicating that readers were able to extract more visual information (i.e., ending letters) from a wider region of the parafovea when the target was predictable than when it was unpredictable. The fact that predictability interacts with these visual variables indicates that at least part of the effect of predictability is on initial encoding processes. If predictability merely had an effect after the word was identified, one would have no reason to expect it to interact with these orthographic variables.

Resolution of Ambiguity

The studies we have discussed up to this point clearly show that there are powerful effects of context on word identification in reading. However, they don’t make clear what level or levels of word identification are influencing the progress of the eyes through the text. For example, virtually all the phenomena discussed so far could merely be reflecting the identification of the orthographic or phonological form of a word.

The studies we discuss in the following section have tried to understand how quickly the meaning of a word is understood and how the surrounding sentential context interacts with this process of meaning extraction. Two ways in which researchers have tried to understand these processes are (a) the resolution of lexical ambiguity and (b) the resolution of syntactic ambiguity.

There are now a large number of eye movement studies (see Binder & Rayner, 1998; Duffy, Morris, & Rayner, 1988; Kambe, Rayner, & Duffy, 2001; Rayner & Duffy, 1986; Rayner & Frazier, 1989; Rayner, Pacht, & Duffy, 1994; Sereno, Pacht, & Rayner, 1992) that have examined how lexically ambiguous words (like straw) are processed during reading. Such lexically ambiguous words potentially allow one to understand when and how the several possible meanings of a word are encoded. That is, the orthographic and phonological forms of a word like straw do not allow you to determine what the intended meaning of the word is (e.g., whether it is a drinking straw or a dried piece of grass). Clearly, for such words, there is no logical way to determine which meaning is intended if the word is seen in isolation, and the determination of the intended meaning in a sentence depends on the sentential context. As indicated previously, of greatest interest is how quickly the meaning or meanings of the word are extracted and at what point the sentential context comes in and helps to disambiguate between the two (or, more generally, several) meanings of an ambiguous word. To help think about the issues, consider two extreme possibilities. One is that all meanings of ambiguous words are always extracted, and only then does the context come in and help the reader choose which was the intended meaning (if it can). The other extreme would be that context always enters the disambiguation process early and that it blocks all but the intended meaning from being activated. As we will see in the following discussion, the truth is somewhere between these extremes.

Two key variables that experimenters have manipulated to understand the processing of lexically ambiguous words are (a) whether the information in the context prior to the ambiguous word allows one to disambiguate the meaning and (b) the relative frequencies of the two meanings. To make the findings as clear as possible, the manipulations of each of these variables are fairly extreme. In the case of the prior context, either it is neutral (i.e., it gives no information about which of the two meanings is intended) or it is strongly biasing (i.e., when people read the part of the sentence up to the target word and are asked to judge which meaning was intended, they almost always give the intended meaning). In the sentences in which the prior context does not disambiguate the meaning, however, the following context does. Thus, in all cases, the meaning of the ambiguous word should be clear by the end of the sentence. For the relative frequencies of the two meanings, experimenters either choose words that are balanced (like straw), for which the two likely meanings are about equally frequent in the language, or choose words for which one of the meanings is highly dominant (such as bank, for which the financial institution meaning is much more frequent than the slope meaning). To simplify exposition, in this discussion we assume that these ambiguous words have only two distinct meanings, although many words have several shades of meaning, such as slight differences in the slope meaning of bank.

The basic findings from this research indicate that both meaning dominance and contextual information influence the processing of such words. When there is a neutral prior context, readers look longer at balanced ambiguous words (like straw) than they do at control words matched in length and word frequency. This evidence suggests that both meanings of the ambiguous word have been accessed and that the conflict between the two meanings is causing some processing difficulty. However, when the prior context disambiguates the meaning that should be instantiated, fixation time on a balanced ambiguous word is no longer than it is on the control word. Thus, for these balanced ambiguous words, the contextual information helps readers choose the appropriate meaning quickly—apparently before they move on to the next word in the text. In contrast, for ambiguous words for which one meaning is much more dominant (i.e., much more frequent) than the other, readers look no longer at the ambiguous word than they do at the control word when the prior context is neutral. Thus, it appears in these cases that only the dominant meaning is fully accessed and that there is little or no conflict between the two meanings. However, when the following parts of the sentence make it clear that the less frequent meaning should be instantiated, fixation times on the disambiguating information are quite long and regressions back to the target word are frequent (also indicating that the reader incorrectly selected the dominant meaning and now has to reaccess the subordinate meaning). Conversely, when the prior disambiguating information instantiates the less frequent meaning of the ambiguous word, readers’ gaze durations on the ambiguous word are lengthened (relative to an unambiguous control word). Thus, in this case, it appears that the contextual information increases the level of activation for the less frequent meaning so that the two meanings are in competition (just as the two meanings of a balanced ambiguous word are in competition in a neutral context).

In sum, the data on lexically ambiguous words make clear that the meaning of a word is processed quite rapidly: The meaning of an ambiguous word, in at least some cases, is apparently determined before the saccade to the next word is programmed. Moreover, it appears that context, at least in some cases, enters into the assignment of meaning early: It can either shorten the time spent on a word (when it boosts the activation of one of two equally dominant meanings) or prolong the time spent on a word (when it boosts the activation of the subordinate meaning). For a more complete exposition of the theoretical ideas in this section (the reordered access model), see Duffy et al., 1988, and Duffy, Kambe, and Rayner, 2001.
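
The following toy computation illustrates the qualitative logic of the reordered access account; the activation formula and the numbers are our own illustrative assumptions, not the published model's equations:

    def competition(supported_freq: float, context_boost: float) -> float:
        # Toy reordered-access sketch: each meaning's activation is its
        # relative frequency, plus a boost from prior disambiguating context
        # for the contextually supported meaning; processing difficulty
        # grows as the two activations approach each other.
        m1 = supported_freq + context_boost   # contextually supported meaning
        m2 = 1.0 - supported_freq             # alternative meaning
        return max(1.0 - abs(m1 - m2), 0.0)

    # Balanced word (straw), neutral context: strong competition, long fixations
    print(competition(supported_freq=0.5, context_boost=0.0))   # 1.0
    # Balanced word, biasing prior context: competition resolved early
    print(competition(supported_freq=0.5, context_boost=0.4))   # ~0.6
    # Biased word (bank), neutral context: dominant meaning wins easily
    print(competition(supported_freq=0.85, context_boost=0.0))  # ~0.3
    # Biased word, context supports the subordinate meaning: competition returns
    print(competition(supported_freq=0.15, context_boost=0.4))  # ~0.7

The four cases reproduce, in caricature, the pattern of fixation results just described, including the lengthened gaze durations found when context instantiates the subordinate meaning.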

A second type of ambiguity that readers commonly encounter is syntactic ambiguity. For example, consider a sentence like While Mary was mending the sock fell off her lap. When one has read the sentence up to sock (i.e., While Mary was mending the sock), the function of the phrase the sock is ambiguous: It could either be the object of was mending or it could be (as it turns out to be in this sentence) the subject of a subordinate clause. How do readers deal with such ambiguities? Similar types of questions arise with this type of ambiguity as with lexical ambiguity. One obvious question is whether readers construct a syntactic representation of the sentence on line, so to speak, or whether syntactic processing lags well behind the encoding of individual words. For example, one possibility is that there is no problem with such ambiguities because they are temporary: If the reader waits until the end of the sentence before constructing a parse, then there may be no ambiguity problem. In contrast, if such ambiguities cause readers problems, then one has evidence that syntactic processing, like meaning processing, is more on line and closely linked in time to the word identification process.

The data on this issue are quite clear, as many studies have demonstrated that such temporary ambiguities do indeed cause processing difficulty; furthermore, the data indicate that these processing difficulties often occur quite early (i.e., immediately when the eyes encounter the point of ambiguity). For example, Frazier and Rayner (1982) used sentences like the previously cited example, While Mary was mending the sock fell off her lap. They found that when readers first came to the word fell, they either made very long fixations on it or regressed back to an earlier point in the sentence (where their initial parse would have gone astray). A full explanation of this phenomenon would require going into considerable detail on linguistic theories of parsing, a topic that is beyond the scope of this research paper. However, the explanation, in one sense, is similar to the lexical ambiguity situation in which one meaning is dominant; that is, in many cases one syntactic structure is dominant over the other. In this case, assigning the direct object function to the sock is highly preferred. From the data, it thus becomes clear that readers initially adopt this incorrect interpretation of the sentence (they are led up the garden path, so to speak) and only then, with some difficulty, construct the correct parse of the sentence. The phenomenon is somewhat different from lexical ambiguity because (a) the dominance of one interpretation over another is not easily modified by context manipulations, and (b) it appears that the reinterpretation needs to be constructed rather than accessed, as is the case with a different meaning of an ambiguous word (Binder, Duffy, & Rayner, 2001).

Summary

As discussed in this section, the ease or difficulty with which readers process words is affected not only by lexical factors such as word frequency and word length, but also by higher level, postlexical factors (such as those involved in text integration). It has been argued that many variables, such as word frequency, contextual constraint, semantic relationships between words, lexical ambiguity, and phonological ambiguity, influence the time it takes to access a word. However, it seems unlikely that syntactic disambiguation effects (e.g., the fact that fixation times on syntactically disambiguating words are longer than fixation times on words that are not syntactically disambiguating) are due to the relatively low-level processes involved in lexical access. One plausible framework for thinking about these effects (see Carroll & Slowiaczek, 1987; Hyönä, 1995; Pollatsek, 1993; Rayner & Morris, 1990; Reichle, Pollatsek, Fisher, & Rayner, 1998) is that lexical access is the primary engine driving the eyes forward, but that higher level (postlexical) processes may also influence fixation times when there is a problem (e.g., a syntactic ambiguity).

Models of Eye Movement Control

Earlier in this research paper we outlined some models of word identification. However, these models only take into account the processing of words in isolation and are not specifically designed to account for factors that are part and parcel of fluent reading (e.g., the integration of information across eye movements, context effects, etc.). In the past, modelers have tended to focus on one aspect of reading and to neglect others. The models of LaBerge and Samuels (1974) and Gough (1972), for example, focused on word encoding, whereas Kintsch and van Dijk's (1978) model mainly addressed the integration of text. Although such a narrow focus is perhaps not ideal for a model of reading, there is some logic behind the approach: Models that are broad in scope tend to suffer from a lack of specificity. The reader model of Just and Carpenter (1980; see also Thibadeau, Just, & Carpenter, 1982) illustrates this difficulty. It attempted to account for comprehension processes ranging from individual eye fixations to the integration of words into sentence context (e.g., clauses). Although it was a comprehensive and highly flexible model of reading, its relatively nebulous nature made it difficult for researchers to use the model to make specific predictions about the reading process.

In the past few years, however, a number of models have been proposed that are generally designed to expand upon models of word perception and specifically designed to explain and predict eye movement behavior during fluent reading. Because these models are based upon the relatively observable behavior of the eyes, they allow researchers to make specific predictions about the reading process. However, as with many issues in reading, the nature of eye movement models is a matter of controversy. Eye movement models can be separated into two general categories: oculomotor models (e.g., O'Regan, 1990, 1992), which posit that eye movements are primarily controlled by low-level mechanical (oculomotor) factors and are only indirectly related to ongoing language processing; and processing models (Morrison, 1984; Henderson & Ferreira, 1990; Just & Carpenter, 1980; Pollatsek, Reichle, & Rayner, in press; Reichle et al., 1998; Reichle, Rayner, & Pollatsek, 1999), which presume that lexical and other moment-to-moment cognitive processing are important influences on when the eyes move. Although space prohibits an extensive discussion of the pros and cons of each of these models, in this section we briefly delineate some of the more influential contributions to the field.

According to oculomotor models, the decision of where to move the eyes is determined both by visual properties of the text (e.g., word length, spaces between words) and by the limitations in visual acuity that we discussed in a previous section. Furthermore, the length of time spent actually viewing any given word is postulated to be primarily a function of where the eyes have landed within the word. That is to say, the location of our fixations within words is not random. Instead, there is a preferred viewing location: As we read, our eyes tend to land somewhere between the middle and the beginning of words (Radach & McConkie, 1998; Rayner, 1979). Vitu (1991) also found that although readers' eyes tended to land on or near this preferred viewing location, when they viewed longer words, readers often initially fixated near the beginning of the word and then made another fixation near the end of the word (Rayner & Morris, 1992).

One of the more prominent oculomotor models is the strategy-tactics model (O’Regan, 1990, 1992; Reilly & O’Regan, 1998). The model accounts for the aforementioned landing position effects by stipulating that words are most easily identified when they are fixated just to the left of the middle of the word, but that readers may adopt one of two possible reading strategies—one riskier, so to speak, than the other. According to the risky strategy, readers can just try to move their eyes so that they fixate on this optimal viewing position within each word. In addition, readers may also use a more careful strategy, so that when their eyes land on a nonoptimal location (e.g., too far toward the end of the word), they can refixate and move their eyes to the other end of the word.
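
A sketch of the careful-strategy refixation tactic just described might look as follows; the optimal-viewing-position computation and the tolerance parameter are illustrative assumptions rather than the model's actual specification. Note that, in keeping with the model, linguistic variables such as word frequency play no role in the decision:

    from typing import Optional

    def refixation_target(word_len: int, landing_pos: int,
                          tolerance: int = 1) -> Optional[int]:
        # If the initial landing position (0-indexed letter) is close to the
        # optimal viewing position (just left of the word's middle), make no
        # refixation; otherwise refixate at the opposite end of the word.
        optimal = max(0, word_len // 2 - 1)
        if abs(landing_pos - optimal) <= tolerance:
            return None                       # single fixation suffices
        return word_len - 1 if landing_pos < optimal else 0

    print(refixation_target(word_len=8, landing_pos=3))  # None (near optimal)
    print(refixation_target(word_len=8, landing_pos=7))  # 0 (refixate word beginning)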

Without going into too much detail, the strategy-tactics model makes some specific predictions about the nature of eye movements during reading. For example, it predicts that the probability of a reader's refixating a word should only be a function of low-level visual factors (such as where the eyes landed in the word) and should not be influenced by linguistic processing. However, Rayner et al. (1996) found that the probability of a refixation was higher for words of lower frequency than for words of higher frequency even when the lengths of the two words were matched. Because of this and other difficulties, many researchers believe that oculomotor models are incomplete and that, although they give good explanations of how lower level oculomotor factors influence reading, they largely ignore the influence of linguistic factors such as word frequency and word predictability.

As we discussed earlier, readers' eye movements are influenced by factors other than just word frequency (e.g., predictability, context, etc.). Given the influence of these linguistic variables, some researchers have developed models based upon the assumption that eye movements are influenced both by lexical (linguistic) factors and by moment-to-moment comprehension processes. It should be noted that these models generally do not exclude the influence of the low-level oculomotor strategies inherent in oculomotor models, but they posit that this influence is small relative to that of cognitive factors. Overall, then, processing theorists posit that the decision of when to move the eyes (fixation duration) is primarily a function of linguistic-cognitive processing, whereas the decision of where to move the eyes is primarily a function of visual factors.

Although a number of models (e.g., Morrison, 1984) have utilized such a framework, the most recent and extensive attempt to predict eye movement behavior during reading is the E-Z Reader model (Reichle & Rayner, 2001; Reichle et al., 1998; Reichle et al., 1999). Currently, E-Z Reader includes a number of variables that have been found to influence both fixation durations and fixation locations. Importantly, its computational framework has been used both to simulate and to predict eye movement behavior. Although the E-Z Reader model is complex, it essentially consists of four processes: a familiarity check, the completion of lexical access (i.e., word recognition), the programming of eye movements, and the eye movements themselves. When a reader first attends to a word (which is usually before the reader fixates it), lexical access of that word begins. However, before lexical access is complete, a rougher familiarity check is completed first. The duration of the familiarity check is a function of the word's frequency in the language, its contextual predictability, and the distance of the word from the center of the fovea. (It may represent the point at which a reasonable match is made with either the orthographic or the phonological entry in the lexicon.) After the familiarity check has been completed, an initial eye-movement program to the next word is initiated while the lexical access process continues in parallel; either may be completed first. Finally, lexical access is completed (perhaps reflecting the point at which the meaning of the word is encoded).
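
To make this sequence of stages concrete, here is a toy sketch of the E-Z Reader architecture for a single word. The constants and functional forms are illustrative assumptions, not the model's fitted parameter values:

    import math

    def familiarity_check_ms(freq_per_million: float, predictability: float) -> float:
        # Stage 1: a quick familiarity check that completes faster for
        # frequent and predictable words (illustrative log-frequency form).
        return max(50.0, 230 - 15 * math.log(freq_per_million + 1) - 90 * predictability)

    def simulate_word(freq_per_million: float, predictability: float,
                      lexical_access_ms: float = 90,
                      saccade_program_ms: float = 125) -> dict:
        # Completion of the familiarity check triggers saccade programming to
        # the next word, while the completion of lexical access (which shifts
        # attention to the next word) proceeds in parallel.
        t_check = familiarity_check_ms(freq_per_million, predictability)
        return {"familiarity_check": t_check,
                "eyes_leave_word": t_check + saccade_program_ms,
                "attention_shifts": t_check + lexical_access_ms}

    print(simulate_word(freq_per_million=5, predictability=0.1))    # rare word: longer gaze
    print(simulate_word(freq_per_million=500, predictability=0.6))  # frequent, predictable: shorter

Because attention can shift to the next word before the eyes do, the sketch also suggests why parafoveal preview arises naturally in this architecture; in the full model, rapid parafoveal processing of the next word can even cancel a pending saccade program and produce a skip.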

The model has been able to account successfully for many of the findings in the eye movement literature. However, it is admittedly incomplete, as the only cognitive processes posited to influence eye movements relate to word identification, whereas phenomena such as the syntactic ambiguity studies we briefly discussed earlier indicate that language processes of a somewhat higher order influence eye movements as well. One way to think of the E-Z Reader model is that it explains the mechanisms that drive the eyes forward in reading, and that higher order processes, such as syntactic parsing and constructing the meanings of sentences and paragraphs, lag behind this process of comprehending words and do not usually intervene in the movement of the eyes. Given that these higher order processes lag behind word identification, it would probably slow skilled reading appreciably if the eyes had to wait for their successful completion. We think a more likely scenario is that these higher order processes intervene in the normal forward movement of the eyes (driven largely by word identification, as in the E-Z Reader model) only when a problem is detected (such as an incorrect parse of the sentence in the syntactic ambiguity example discussed earlier); the so-called normal processing is then interrupted, and a signal goes out either to halt the forward movement of the eyes, to execute a regression back to the likely point of difficulty and begin computing a new syntactic or higher order discourse structure, or both.

Conclusions

For the past century, researchers have struggled to understand the complexities of the myriad cognitive processes involved in reading. In this research paper we have discussed only a few of these processes, focusing primarily on the visual processes that are responsible for word identification during reading, both in isolation and in context. Although many issues remain unresolved, a growing body of experimental data has emerged that has allowed researchers to develop a number of models and computer simulations to better explain and predict reading phenomena.

So what do we really know about reading? Many researchers would agree that words are accessed through some type of abstract letter identities (Coltheart, 1981; Rayner et al., 1980) and that letters (at least to some extent) may be processed in parallel. It is also clear that sound codes are somehow involved in word identification, but the details of this process are not well understood. We do know, for example, that words' phonological representations are activated relatively early (perhaps within 30–40 ms, and most likely even before a word is fixated). The time course of phonological processing would seem to indicate that sound codes are used to access word meaning, but studies that have attempted to examine this issue directly have been criticized for a variety of reasons. Overall, it seems likely that there are two possible routes to word meaning: a direct letter-to-meaning lookup and an indirect constructive mechanism that utilizes sound codes and the spelling-to-sound rules of a language. However, the internal workings of these two mechanisms are underspecified, and researchers are still speculating on the nature of words' sound codes (e.g., are they real or abstract?).

Although we may get the subjective impression that we are able to see many words at the same time when we read, the amount of information we can extract from text is actually quite small (though we may realize that there are multiple lines of text or that there are many wordlike objects on the page). Furthermore, the process by which we extract information from this limited amount of text is somewhat complex. We are able to extract information from more than one word in a fixation, and some information that is obtained during one fixation may be used on the next fixation. Hence, the processing of words during reading is both a function of the word being fixated as well as the next word or two within the text.

The time spent looking at a word is a function of a variety of factors including its length, frequency, sound characteristics, morphology, and predictability. However, even before a word is fixated, some information has already been extracted from it. On some occasions, a word can be fully identified and skipped. Most of the time, however, partial information is extracted and integrated with the information seen when the word is fixated. The extent to which parafoveal processing aids identification of a word on the next fixation is still under examination, but readers are at least able to integrate abstract letter information and some sound information across the two fixations. In addition, the predictability of a word within a sentence context has an effect on the speed of word identification, with predictable words processed faster than unpredictable words. The reasons for this are a matter of debate. However, effects of context on word identification are generally small, and much of the work on word perception suggests that visual information can be processed quickly even without the aid of context. Thus, predictability and other contextual factors may actually play only a limited role in word processing in reading. More specifically, as Balota et al. (1985) have shown, context primarily influences the amount of information that may be extracted from the parafovea; thus, more generally, context may become increasingly important when visual information is poor.

Bibliography:

  1. Altarriba, J., Kambe, G., Pollatsek, A., & Rayner, K. (2001). Semantic codes are not used in integrating information across eye fixations in reading: Evidence from fluent Spanish-English bilinguals. Perception & Psychophysics, 63, 875–890.
  2. Balota, D. A. (1983). Automatic semantic activation and episodic memory encoding. Journal of Verbal Learning and Verbal Behavior, 22, 88–104.
  3. Balota, D. A., & Chumbley, J. I. (1984). Are lexical decisions a good measure of lexical access? The role of word frequency in the neglected decision stage. Journal of Experimental Psychology: Human Perception and Performance, 10, 340–357.
  4. Balota, D. A., & Chumbley, J. I. (1985). The locus of word-frequency effects in the pronunciation task: Lexical access and/or production? Journal of Memory and Language, 24, 89–106.
  5. Balota, D. A., Pollatsek, A., & Rayner, K. (1985). The interaction of contextual constraints and parafoveal visual information in reading. Cognitive Psychology, 17, 364–390.
  6. Balota, D. A., & Rayner, K. (1983). Parafoveal visual information and semantic contextual constraints. Journal of Experimental Psychology: Human Perception and Performance, 9, 726–738.
  7. Baron, J., & Strawson, C. (1976). Use of orthographic and word-specific knowledge in reading words aloud. Journal of Experimental Psychology: Human Perception and Performance, 2, 386–393.
  8. Baron, J., & Thurston, I. (1973). An analysis of the word superiority effect. Cognitive Psychology, 4, 207–228.
  9. Bauer, D., & Stanovich, K. E. (1980). Lexical access and the spelling-to-sound regularity effect. Memory & Cognition, 8, 424–432.
  10. Becker, C. A. (1985). What do we really know about semantic context effects during reading? In D. Besner, T. G. Waller, & G. E. MacKinnon (Eds.), Reading research: Advances in theory and practice (pp. 125–166). New York: Academic Press.
  11. Besner, D., Coltheart, M., & Davelaar, E. (1984). Basic processes in reading: Computation of abstract letter identities. Canadian Journal of Psychology, 38, 126–134.
  12. Besner, D., Stolz, J. A., & Boutilier, C. (1997). The Stroop effect and the myth of automaticity. Psychonomic Bulletin & Review, 4, 221–225.
  13. Besner, D., Twilley, L., McCann, R. S., & Seergobin, K. (1990). On the association between connectionism and data: Are a few words necessary? Psychological Review, 97, 432–446.
  14. Binder, K. S., Duffy, S. A., & Rayner, K. (2001). The effects of thematic fit and discourse context in syntactic ambiguity resolution. Journal of Memory and Language, 44, 297–324.
  15. Binder, K. S., Pollatsek, A., & Rayner, K. (1999). Extraction of information to the left of the fixated word in reading. Journal of Experimental Psychology: Human Perception and Performance, 25, 1142–1158.
  16. Binder, K. S., & Rayner, K. (1998). Contextual strength does not modulate the subordinate bias effect: Evidence from eye fixations and self-paced reading. Psychonomic Bulletin & Review, 5, 271–276.
  17. Blanchard, H. E., Pollatsek, A., & Rayner, K. (1989). The acquisition of parafoveal word information in reading. Perception & Psychophysics, 46, 85–94.
  18. Carpenter, P. A., & Just, M. A. (1983). What your eyes do while your mind is reading. In K. Rayner (Ed.), Eye movements in reading: Perceptual and language processes (pp. 275–307). New York: Academic Press.
  19. Carr, T. H., McCauley, C., Sperber, R. D., & Parmelee, C. M. (1982). Words, pictures, and priming: On semantic activation, conscious identification, and the automaticity of information processing. Journal of Experimental Psychology: Human Perception and Performance, 8, 757–777.
  20. Carr, T. H., & Pollatsek, A. (1985). Recognizing printed words: A look at current models. In D. Besner, T. G. Waller, & G. E. MacKinnon (Eds.), Reading research: Advances in theory and practice (Vol. 5, pp. 2–82). Orlando, FL: Academic Press.
  21. Carroll, P. J., & Slowiaczek, M. L. (1987). Models and modules: Multiple pathways to the language processor. In J. L. Garfield (Ed.), Modularity in knowledge representation and natural-language understanding (pp. 221–248). Cambridge, MA: MIT Press.
  22. Coltheart, M. (1978). Lexical access in simple reading tasks. In G. Underwood (Ed.), Strategies of information processing (pp. 151–216). San Diego, CA: Academic Press.
  23. Coltheart, M. (1981). Disorders of reading and their implications for models of normal reading. Visible Language, 15, 245–286.
  24. Coltheart, M., Curtis, B., Atkins, P., & Haller, M. (1993). Models of reading aloud: Dual-route and parallel-distributed-processing approaches. Psychological Review, 100, 589–608.
  25. Coltheart, M., Patterson, K., & Marshall, J. (Eds.). (1980). Deep dyslexia. London: Routledge & Kegan Paul.
  26. Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256.
  27. Crowder, R. G., & Wagner, R. K. (1992). The psychology of reading. New York: Oxford University Press.
  28. Daneman, M., & Reingold, E. M. (1993). What eye fixations tell us about phonological recoding during reading. Canadian Journal of Experimental Psychology, 47, 153–178.
  29. Daneman, M., & Reingold, E. M. (2000). Do readers use phonological codes to activate word meanings? Evidence from eye movements. In A. Kennedy, R. Radach, D. Heller, & J. Pynte (Eds.), Reading as a perceptual process (pp. 447–474). Amsterdam: North Holland.
  30. Daneman, M., Reingold, E. M., & Davidson, M. (1995). Time course of phonological activation during reading: Evidence from eye fixations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 884–898.
  31. Den Buurman, R., Boersma, T., & Gerrissen, J. F. (1981). Eye movements and the perceptual span in reading. Reading Research Quarterly, 16, 227–235.
  32. Dodge, R. (1900). Visual perception during eye movement. Psychological Review, 7, 454–465.
  33. Duffy, S. A., Kambe, G., & Rayner, K. (2001). The effect of prior disambiguating context on the comprehension of ambiguous words: Evidence from eye fixations. In D. Gorfein (Ed.), On the consequences of meaning selection (pp. 27–43). Washington, DC: American Psychological Association.
  34. Duffy, S. A., Morris, R. K., & Rayner, K. (1988). Lexical ambiguity and fixation times in reading. Journal of Memory and Language, 27, 429–446.
  35. Ehrlich, S. F., & Rayner, K. (1981). Contextual effects on word perception and eye movements during reading. Journal of Verbal Learning and Verbal Behavior, 20, 641–655.
  36. Evett, L. J., & Humphreys, G. W. (1981). The use of abstract graphemic information in lexical access. Quarterly Journal of Experimental Psychology, 33A, 325–350.
  37. Fischler, I., & Bloom, P. (1979). Automatic and attentional processes in the effects of sentence context on word recognition. Journal of Verbal Learning and Verbal Behavior, 18, 1–20.
  38. Frazier, L., & Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14, 178–210.
  39. Gautier, V., O’Regan, J. K., & LaGargasson, J. F. (2000). ‘The skipping’ revisited in French: Programming saccades to skip the article ‘les’. Vision Research, 40, 2517–2531.
  40. Gibson, E. J. (1971). Perceptual learning and the theory of word perception. Cognitive Psychology, 2, 351–368.
  41. Gough, P. B. (1972). One second of reading. In J. F. Kavanagh & I. G. Mattingly (Eds.), Language by ear and by eye (pp. 331–358). Cambridge, MA: MIT Press.
  42. Harm, M. W., & Seidenberg, M. S. (1999). Phonology, reading acquisition, and dyslexia: Insights from connectionist models. Psychological Review, 106, 491–528.
  43. Hawkins, H. L., Reicher, G. M., Rogers, M., & Peterson, L. (1976). Flexible coding in word recognition. Journal of Experimental Psychology: Human Perception and Performance, 2, 380–385.
  44. Healy, A. F. (1976). Detection errors on the word the: Evidence for reading units larger than letters. Journal of Experimental Psychology: Human Perception and Performance, 2, 235–242.
  45. Henderson, J. M., Dixon, P., Petersen, A., Twilley, L. C., & Ferreira, F. (1995). Evidence for the use of phonological representations during transsaccadic word recognition. Journal of Experimental Psychology: Human Perception and Performance, 21, 82–97.
  46. Henderson, J. M., & Ferreira, F. (1990). Effects of foveal processing difficulty on the perceptual span in reading: Implications for attention and eye movement control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 417–429.
  47. Henderson, J. M., & Ferreira, F. (1993). Eye movement control during reading: Fixation measures reflect foveal but not parafoveal processing difficulty. Canadian Journal of Experimental Psychology, 47, 201–221.
  48. Huey, E. B. (1908). The psychology and pedagogy of reading. New York: Macmillan.
  49. Hyönä, J. (1995). Do irregular letter combinations attract readers’ attention? Evidence from fixation locations in words. Journal of Experimental Psychology: Human Perception and Performance, 21, 68–81.
  50. Hyönä, J., & Niemi, P. (1990). Eye movements in repeated reading of a text. Acta Psychologica, 73, 259–280.
  51. Hyönä, J., & Olson, R. K. (1995). Eye movement patterns among dyslexic and normal readers: Effects of word length and word frequency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1430–1440.
  52. Hyönä, J., & Pollatsek, A. (1998). Reading Finnish compound words: Eye fixations are affected by component morphemes. Journal of Experimental Psychology: Human Perception and Performance, 24, 1612–1627.
  53. Ikeda, M., & Saida, S. (1978). Span of recognition in reading. Vision Research, 18, 83–88.
  54. Inhoff, A. W. (1984). Two stages of word processing during eye fixations in the reading of prose. Journal of Verbal Learning and Verbal Behavior, 23, 612–624.
  55. Inhoff, A. W. (1989). Parafoveal processing of words and saccade computation during eye fixations in reading. Journal of Experimental Psychology: Human Perception and Performance, 15, 544–555.
  56. Inhoff, A. W., & Briihl, D. (1991). Semantic processing of unattended text during selective reading: How the eyes see it. Perception & Psychophysics, 49, 289–294.
  57. Inhoff, A. W., & Liu, W. (1998). The perceptual span and oculomotor activity during the reading of Chinese sentences. Journal of Experimental Psychology: Human Perception and Performance, 24, 20–34.
  58. Inhoff, A. W., & Rayner, K. (1986). Parafoveal word processing during eye fixations in reading: Effects of word frequency. Perception & Psychophysics, 40, 431–439.
  59. Inhoff, A. W., Starr, M. S., & Shindler, K. (2000). Is the processing of words during a fixation of text strictly serial? Perception & Psychophysics, 62, 1474–1484.
  60. Inhoff, A. W., & Topolski, R. (1992). Lack of semantic activation from unattended text during passage reading. Bulletin of the Psychonomic Society, 30, 365–366.
  61. Ishida, T., & Ikeda, M. (1989). Temporal properties of information extraction in reading studied by a text-mask replacement technique. Journal of the Optical Society of America A: Optics and Image Science, 6, 1624–1632.
  62. Jared, D., Levy, B., & Rayner, K. (1999). The role of phonology in the activation of word meanings during reading: Evidence from proofreading and eye movements. Journal of Experimental Psychology: General, 128, 219–264.
  63. Just, M. A., & Carpenter, P. A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87, 329–354.
  64. Just, M. A., & Carpenter, P. A. (1987). The psychology of reading and language comprehension. Newton, MA: Allyn & Bacon.
  65. Kambe, G., Rayner, K., & Duffy, S. A. (2001). Global context effects on processing lexically ambiguous words. Memory & Cognition, 29, 363–372.
  66. Kennedy, A. (2000). Parafoveal processing in word recognition. Quarterly Journal of Experimental Psychology, 53A, 429–455.
  67. Kennedy, A., & Murray, W. S. (1987). The components of reading time: Eye movement patterns of good and poor readers. In J. K. O’Regan & A. Levy-Schoen (Eds.), Eye movements: From physiology to cognition (pp. 509–520). Amsterdam: North Holland.
  68. Kennison, S. M., & Clifton, C. (1995). Determinants of parafoveal preview benefit in high and low working memory capacity readers: Implications for eye movement control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 68–81.
  69. Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363–394.
  70. Kleiman, G. M. (1975). Speech recoding in reading. Journal of Verbal Learning and Verbal Behavior, 14, 323–339.
  71. Kliegl, R., Olson, R. K., & Davidson, B. J. (1982). Regression analyses as a tool for studying reading processes: Comments on Just and Carpenter’s eye fixation theory. Memory & Cognition, 10, 287–296.
  72. Kolers, P. (1972). Experiments in reading. Scientific American, 227, 84–91.
  73. Krueger, L. (1970). Visual comparison in a redundant display. Cognitive Psychology, 1, 341–357.
  74. LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293–323.
  75. Lesch, M. F., & Pollatsek, A. (1998). Evidence for the use of assembled phonology in accessing the meaning of printed words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 573–592.
  76. Levy, B. A. (1975). Vocalization and suppression effects in sentence memory. Journal of Verbal Learning and Verbal Behavior, 14, 304–316.
  77. Lima, S. D. (1987). Morphological analysis in sentence reading. Journal of Memory and Language, 26, 84–99.
  78. Lima, S. D., & Inhoff, A. W. (1985). Lexical access during eye fixations in reading: Effects of word-initial letter sequences. Journal of Experimental Psychology: Human Perception and Performance, 11, 272–285.
  79. MacLeod, C. (1991). Half a century of research on the Stroop effect: An integrative review. Psychological Bulletin, 109, 163–203.
  80. Marcel, A. J. (1983). Conscious and unconscious perception: Experiments on visual masking. Cognitive Psychology, 15, 197–237.
  81. Matin, E. (1974). Saccadic suppression: A review. Psychological Bulletin, 81, 899–917.
  82. McClelland, J. L., & O’Regan, J. K. (1981). Expectations increase the benefit derived from parafoveal visual information in reading words aloud. Journal of Experimental Psychology: Human Perception and Performance, 7, 634–644.
  83. McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375–407.
  84. McConkie, G. W., & Rayner, K. (1975). The span of the effective stimulus during a fixation in reading. Perception & Psychophysics, 17, 578–586.
  85. McConkie, G. W., & Rayner, K. (1976). Asymmetry of the perceptual span in reading. Bulletin of the Psychonomic Society, 8, 365–368.
  86. McConkie, G. W., & Zola, D. (1979). Is visual information integrated across successive fixations in reading? Perception & Psychophysics, 25, 221–224.
  87. Meyer, D. E., & Schvaneveldt, R. W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90, 227–234.
  88. Meyer, D. E., Schvaneveldt, R. W., & Ruddy, M. G. (1974). Functions of graphemic and phonemic codes in visual word recognition. Memory & Cognition, 2, 309–321.
  89. Morris, R. K., Rayner, K., & Pollatsek, A. (1990). Eye movement guidance in reading: The role of parafoveal letter and space information. Journal of Experimental Psychology: Human Perception and Performance, 16, 268–281.
  90. Morrison, R. E. (1984). Manipulation of stimulus onset delay in reading: Evidence for parallel programming of saccades. Journal of Experimental Psychology: Human Perception and Performance, 10, 667–682.
  91. Murray, W. S. (1998). Parafoveal pragmatics. In G. Underwood (Ed.), Eye guidance in reading and scene perception (pp. 181–200). Oxford, UK: Elsevier.
  92. Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
  93. Niswander, E., Pollatsek, A., & Rayner, K. (2000). The processing of derived and inflected suffixed words during reading. Language and Cognitive Processes, 15, 389–420.
  94. O’Brien, E. J., Shank, D. M., Myers, J. L., & Rayner, K. (1988). Elaborative inferences during reading: Do they occur on-line? Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 410–420.
  95. O’Regan, J. K. (1979). Eye guidance in reading: Evidence for the linguistic control hypothesis. Perception & Psychophysics, 25, 501–509.
  96. O’Regan, J. K. (1980). The control of saccade size and fixation duration in reading: The limits of linguistic control. Perception & Psychophysics, 28, 112–117.
  97. O’Regan, J. K. (1990). Eye movements and reading. In E. Kowler (Ed.), Eye movements and their role in visual and cognitive processes (pp. 395–453). Amsterdam: Elsevier.
  98. O’Regan, J. K. (1992). Optimal viewing position in words and the strategy-tactics theory of eye movements in reading. In K. Rayner (Ed.), Eye movements and visual cognition: Scene perception and reading (pp. 333–354). New York: Springer-Verlag.
  99. Osaka, N. (1992). Size of saccade and fixation duration of eye movements during reading: Psychophysics of Japanese text processing. Journal of the Optical Society of America A, 9, 5–13.
  100. Osaka, N., & Oda, K. (1991). Effective visual field size necessary for vertical reading during Japanese text processing. Bulletin of the Psychonomic Society, 29, 345–347.
  101. Paap, K. R., Newsome, S. L., McDonald, J. E., & Schvaneveldt, R. W. (1982). An activation-verification model for letter and word recognition: The word superiority effect. Psychological Review, 89, 573–594.
  102. Perfetti, C. A., & Hogaboam, T. W. (1975). The relationship between single word decoding and reading comprehension skill. Journal of Educational Psychology, 67, 461–469.
  103. Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103, 56–115.
  104. Pollatsek, A. (1993). Eye movements in reading. In D. M. Willows, R. S. Kruk, & E. Corcos (Eds.), Visual processes in reading and reading disabilities (pp. 125–157). Hillsdale, NJ: Erlbaum.
  105. Pollatsek, A., Bolozky, S., Well, A. D., & Rayner, K. (1981). Asymmetries in the perceptual span for Israeli readers. Brain and Language, 14, 174–180.
  106. Pollatsek, A., Lesch, M., Morris, R. K., & Rayner, K. (1992). Phonological codes are used in integrating information across saccades in word identification and reading. Journal of Experimental Psychology: Human Perception and Performance, 18, 148–162.
  107. Pollatsek, A., Raney, G. E., LaGasse, L., & Rayner, K. (1993). The use of information below fixation in reading and in visual search. Canadian Journal of Experimental Psychology, 47, 179–200.
  108. Pollatsek, A., & Rayner, K. (1982). Eye movement control in reading: The role of word boundaries. Journal of Experimental Psychology: Human Perception and Performance, 8, 817–833.
  109. Pollatsek, A., Rayner, K., & Balota, D. A. (1986). Inferences about eye movement control from the perceptual span in reading. Perception & Psychophysics, 40, 123–130.
  110. Pollatsek, A., Reichle, E., & Rayner, K. (in press). Modeling eye movements in reading. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movements. Oxford, UK: Elsevier.
  111. Pollatsek, A., Tan, L.-H., & Rayner, K. (2000). The role of phonological codes in integrating information across saccadic eye movements in Chinese character identification. Journal of Experimental Psychology: Human Perception and Performance, 26, 607–633.
  112. Radach, R., & McConkie, G. W. (1998). Determinants of fixation positions in words during reading. In G. Underwood (Ed.), Eye guidance in reading and scene perception (pp. 77–100). Oxford, UK: Elsevier.
  113. Raney, G. E., & Rayner, K. (1995). Word frequency effects and eye movements during two readings of a text. Canadian Journal of Experimental Psychology, 49, 151–172.
  114. Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65–81.
  115. Rayner, K. (1977). Visual attention in reading: Eye movements reflect cognitive processes. Memory & Cognition, 4, 443–448.
  116. Rayner, K. (1978). Eye movements in reading and information processing. Psychological Bulletin, 85, 618–660.
  117. Rayner, K. (1979). Eye guidance in reading: Fixation locations within words. Perception, 8, 21–30.
  118. Rayner, K. (1986). Eye movements and the perceptual span in beginning and skilled readers. Journal of Experimental Child Psychology, 41, 211–236.
  119. Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422.
  120. Rayner, K., Balota, D. A., & Pollatsek, A. (1986). Against parafoveal semantic preprocessing during eye fixations in reading. Canadian Journal of Psychology, 40, 473–483.
  121. Rayner, K., & Bertera, J. H. (1979). Reading without a fovea. Science, 206, 468–469.
  122. Rayner, K., Binder, K., Ashby, J., & Pollatsek, A. (2001). Eye movement control in reading: Word predictability has little influence on initial landing positions in words. Vision Research, 41, 943–954.
  123. Rayner, K., & Duffy, S. A. (1986). Lexical complexity and fixation times in reading: Effects of word frequency, verb complexity, and lexical ambiguity. Memory & Cognition, 14, 191–201.
  124. Rayner, K., & Fischer, M. H. (1996). Mindless reading revisited: Eye movements during reading and scanning are different. Perception & Psychophysics, 58, 734–747.
  125. Rayner, K., Fischer, M. H., & Pollatsek, A. (1998). Unspaced text interferes with both word identification and eye movement control. Vision Research, 38, 1129–1144.
  126. Rayner, K., & Frazier, L. (1989). Selection mechanisms in reading lexically ambiguous words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 779–790.
  127. Rayner, K., Inhoff, A. W., Morrison, R., Slowiaczek, M. L., & Bertera, J. H. (1981). Masking of foveal and parafoveal vision during eye fixations in reading. Journal of Experimental Psychology: Human Perception and Performance, 7, 167–179.
  128. Rayner, K., & McConkie, G. W. (1976). What guides a reader’s eye movements? Vision Research, 16, 829–837.
  129. Rayner, K., McConkie, G. W., & Ehrlich, S. F. (1978). Eye movements and integrating information across fixations. Journal of Experimental Psychology: Human Perception and Performance, 4, 529–544.
  130. Rayner, K., McConkie, G. W., & Zola, D. (1980). Integrating information across eye movements. Cognitive Psychology, 12, 206–226.
  131. Rayner, K., & Morris, R. K. (1990). Do eye movements reflect higher order processes in reading? In R. Groner, G. d’Ydewalle, & R. Parnham (Eds.), From eye to mind: Information acquisition in perception, search, and reading (pp. 170–190). Amsterdam: North Holland.
  132. Rayner, K., & Morris, R. K. (1992). Eye movement control in reading: Evidence against semantic preprocessing. Journal of Experimental Psychology: Human Perception and Performance, 18, 163–172.
  133. Rayner, K., Pacht, J. M., & Duffy, S. A. (1994). Effects of prior encounter and global discourse bias on the processing of lexically ambiguous words: Evidence from eye fixations. Journal of Memory and Language, 33, 527–544.
  134. Rayner, K., & Pollatsek, A. (1989). The psychology of reading. Englewood Cliffs, NJ: Prentice-Hall.
  135. Rayner, K., Pollatsek, A., & Binder, K. S. (1998). Phonological codes and eye movements in reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 476–497.
  136. Rayner, K., & Posnansky, C. (1978). Stages of processing in word identification. Journal of Experimental Psychology: General, 107, 64–80.
  137. Rayner, K., & Raney, G. E. (1996). Eye movement control in reading and visual search: Effects of word frequency. Psychonomic Bulletin & Review, 3, 238–244.
  138. Rayner, K., Raney, G. E., & Pollatsek, A. (1995). Eye movements and discourse processing. In R. F. Lorch & E. J. O’Brien (Eds.), Sources of coherence in reading (pp. 9–36). Hillsdale, NJ: Erlbaum.
  139. Rayner, K., Sereno, S. C., Morris, R. K., Schmauder, A. R., & Clifton, C. (1989). Eye movements and on-line language comprehension processes. Language and Cognitive Processes, 4 (Special issue), 21–49.
  140. Rayner, K., Sereno, S. C., & Raney, G. E. (1996). Eye movement control in reading: A comparison of two types of models. Journal of Experimental Psychology: Human Perception and Performance, 22, 1188–1200.
  141. Rayner, K., & Springer, C. J. (1986). Graphemic and semantic similarity effects in the picture-word interference task. British Journal of Psychology, 77, 207–222.
  142. Rayner, K., & Well, A. D. (1996). Effects of contextual constraint on eye movements in reading: A further examination. Psychonomic Bulletin & Review, 3, 504–509.
  143. Rayner, K., Well, A. D., & Pollatsek, A. (1980). Asymmetry of the effective visual field in reading. Perception & Psychophysics, 27, 537–544.
  144. Rayner, K., Well, A. D., Pollatsek, A., & Bertera, J. H. (1982). The availability of useful information to the right of fixation in reading. Perception & Psychophysics, 31, 537–550.
  145. Reicher, G. M. (1969). Perceptual recognition as a function of meaningfulness of stimulus material. Journal of Experimental Psychology, 81, 275–280.
  146. Reichle, E., Pollatsek, A., Fisher, D. L., & Rayner, K. (1998). Toward a model of eye movement control in reading. Psychological Review, 105, 125–157.
  147. Reichle, E., & Rayner, K. (2001). Cognitive processing and models of reading. In G. K. Hung & K. J. Ciuffreda (Eds.), Models of the visual system (pp. 565–604). New York: Kluwer Academic/Plenum Publishers.
  148. Reichle, E., Rayner, K., & Pollatsek, A. (1999). Eye movement control in reading: Accounting for initial fixation locations and refixations within the E-Z Reader model. Vision Research, 39, 4403–4411.
  149. Reilly, R., & O’Regan, J. K. (1998). Eye movement control in reading: A simulation of some word-targeting strategies. Vision Research, 38, 303–317.
  150. Rosinski, R. R., Golinkoff, R. M., & Kukish, K. (1975). Automatic semantic processing in a picture-word interference task. Child Development, 46, 243–253.
  151. Schuberth, R. E., & Eimas, P. D. (1977). Effects of context on the classification of words and nonwords. Journal of Experimental Psychology: Human Perception and Performance, 3, 27–36.
  152. Schustack, M. W., Ehrlich, S. F., & Rayner, K. (1987). The complexity of contextual facilitation in reading: Local and global influences. Journal of Memory and Language, 26, 322–340.
  153. Seidenberg, M. S., & McClelland, J. L. (1989). A distributed, developmental model of word recognition and naming. Psychological Review, 96, 523–568.
  154. Seidenberg, M. S., & McClelland, J. L. (1990). More words but still no lexicon: Reply to Besner et al. (1990). Psychological Review, 97, 447–452.
  155. Sereno, S. C., Pacht, J. M., & Rayner, K. (1992). The effect of meaning frequency on processing lexically ambiguous words: Evidence from eye fixations. Psychological Science, 3, 296–300.
  156. Sereno, S. C., & Rayner, K. (2000). Spelling-sound regularity effects on eye fixations in reading. Perception & Psychophysics, 62, 402–409.
  157. Slowiaczek, M. L., & Clifton, C. (1980). Subvocalization and reading for meaning. Journal of Verbal Learning and Verbal Behavior, 19, 573–582.
  158. Smith, F., Lott, D., & Cronnell, B. (1969). The effect of type size and case alternation on word identification. American Journal of Psychology, 82, 248–253.
  159. Stanovich, K. E., & West, R. F. (1979). Mechanisms of sentence context effects in reading: Automatic activation and conscious attention. Memory & Cognition, 7, 77–85.
  160. Stanovich, K. E., & West, R. F. (1983). On priming by a sentence context. Journal of Experimental Psychology: General, 112, 1–36.
  161. Starr, M. S., & Fleming, K. K. (2001). A rose by any other name is not the same: The role of orthographic knowledge in homophone confusion errors. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 744–760.
  162. Stone, G. O., & Van Orden, G. C. (1994). Building a resonance framework for word recognition using design and system principles. Journal of Experimental Psychology: Human Perception and Performance, 20, 1248–1268.
  163. Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643–662.
  164. Thibadeau, R., Just, M. A., & Carpenter, P. A. (1982). A model of the time course and content of human reading. Cognitive Science, 6, 101–155.
  165. Tulving, E., & Gold, C. (1963). Stimulus information and contextual information as determinants of tachistoscopic recognition of words. Journal of Experimental Psychology, 66, 319–327.
  166. Uhr, L. (1963). “Pattern recognition” computers as models for form perception. Psychological Bulletin, 60, 40–73.
  167. Underwood, G. (1985). Eye movements during the comprehension of written language. In A. W. Ellis (Ed.), Progress in the psychology of language (Vol. 2, pp. 45–71). London: Erlbaum.
  168. Underwood, N. R., & McConkie, G. W. (1985). Perceptual span for letter distinctions during reading. Reading Research Quarterly, 20, 153–162.
  169. Van Orden, G. C. (1987). A rows is a rose: Spelling, sound, and reading. Memory & Cognition, 15, 181–198.
  170. Van Orden, G. C., & Goldinger, S. D. (1994). Interdependence of form and function in cognitive systems explains perception of printed words. Journal of Experimental Psychology: Human Perception and Performance, 20, 1269–1291.
  171. Van Orden, G. C., Johnston, J. C., & Hale, B. L. (1988). Word identification in reading proceeds from spelling to sound to meaning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 371–386.
  172. Van Orden, G. C., Pennington, B. F., & Stone, G. O. (1990). Word identification in reading and the promise of subsymbolic psycholinguistics. Psychological Review, 97, 488–522.
  173. Vitu, F. (1991). The influence of parafoveal processing and linguistic context on the optimal landing position effect. Perception & Psychophysics, 50, 58–75.
  174. Wheeler, D. D. (1970). Processes in word recognition. Cognitive Psychology, 1, 59–85.
  175. Wolverton, G. S., & Zola, D. (1983). The temporal characteristics of visual information extraction during reading. In K. Rayner (Ed.), Eye movements in reading: Perceptual and language processes (pp. 41–52). New York: Academic Press.