Computational Psycholinguistics Research Paper

Psycholinguistics is the scientific discipline that studies how people acquire a language and how they comprehend and produce it. The increasing complexity of the models of human language processing that have evolved in this discipline makes the development and evaluation of computer-implemented versions of these models ever more important for understanding the models and deriving predictions from them. Computational psycholinguistics is the branch of psycholinguistics that develops and uses computational models of language processing, both to evaluate existing models with respect to consistency and adequacy and to generate new hypotheses.

1. Cognitive Modeling And Computational Psycholinguistics

Not every computer model mimicking linguistic behavior can be declared psychologically relevant. For example, the ELIZA question-answering system (Weizenbaum 1966) imitates the role of a therapist very well, although the system's behavior is based solely on simple pattern-matching mechanisms that say nothing about the linguistic and non-linguistic processes underlying a dialogue. To count as a psycholinguistically relevant computer model, a model must simulate a specific human cognitive function. As a prerequisite for constructing such a model, formal analyses of the linguistic domain and empirical studies are required to determine the complexity of that domain and the parameter values of the cognitive function to be modeled.

This research methodology corresponds to the typical methodology in cognitive modeling. In fact, there is only one difference between cognitive modeling and computational psycholinguistics: While cognitive modeling deals with all aspects of human cognition, computational psycholinguistics is confined to architectures and mechanisms for human language processing.

1.1 Computer Models In Psycholinguistics

In essence, psycholinguistics addresses the questions of how speakers put pre-linguistic concepts into words, and then combine these words to larger units in order to produce the oral or written output, and how a listener parses the auditory or visual input into meaningful units and arrives at an understanding of the input.

Although psycholinguistics, as part of the interdisciplinary field of the cognitive sciences, shares the view that cognition should essentially be regarded as computation, the research methodology in psycholinguistics is primarily oriented toward experimental studies, with the computer serving as a tool for the selection and presentation of appropriate stimulus material, the exact measurement of reaction times, and so on. Computer modeling itself does not play a major role in model development: surveys of psycholinguistic research (e.g., Gernsbacher 1994) dedicate only a minor part to computer modeling and to the use of computer models to evaluate models of (sub)tasks of language comprehension or production. This means that, compared with approximately 100 years of psycholinguistic research and 25 years of institutionalized cognitive science, computational psycholinguistics can be regarded as a brand-new area of psycholinguistic research. Computer modeling turns out to be advantageous in model development for at least two reasons.

First, once a model reaches a certain complexity, specifying it only verbally makes it difficult to check its completeness and overall consistency. If such a model has been realized as a computer program and produces the desired input-output behavior, it is not just a specification of relations in the model domain, but also a fully specified and consistent theory of the investigated cognitive function.

Second, the implemented model may be used to generate hypotheses: the compulsion to be explicit in every detail while developing the computer model may yield predictions for conditions that have not been investigated empirically before. These predictions can then be tested by new experiments that will either support or contradict the predictions made by the program.

1.2 Techniques For Modeling Human Language Processing

In general, a model can reconstruct an empirical fact in either of two ways: a narrow one or a process-oriented one. In the narrow case, the computer model realizes an input-output relation identical to that of human beings; whether the processes underlying this mapping from informational inputs to the desired outputs correspond to those of humans is irrelevant. In the process-oriented case, the computer system not only realizes a correct input-output relation, but also accounts for the internal representations and operations that are used and performed within the human mind. Artificial Intelligence typically develops models of the first class, while computational psycholinguistics strives for models of the second class. Evaluation criteria for the resulting computer models are the same as for every model: descriptive and explanatory adequacy, simplicity, generality, and falsifiability. An important additional criterion is the possibility of matching empirical results to the system's behavior. Computer models of the same task can also be evaluated in a model-to-model comparison.

Techniques used in computational psycholinguistics range from symbolic processing mechanisms of all kinds, such as graph unification, planning, or deductive reasoning, to connectionist approaches. The advantage of symbolic mechanisms is their high level of abstraction, which makes it easier to check the adequacy of the system's behavior. One disadvantage of symbolic systems is their possible rigidity, so that exceptions to a rule require additional treatment. This lack of robustness is problematic for several aspects of human language processing, because decisions about the production or recognition of a specific linguistic form or structure often turn out to be highly context-dependent (e.g., in the modeling of typical speech errors).

Connectionist approaches, especially spreading activation, are also quite popular in computational psycholinguistics. The connectionist paradigm bypasses the robustness problem and accounts for learnability. Its disadvantage is that it does not model rule-based structural relationships, which are assumed to be an essential characteristic of many tasks in human language processing.
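
The core of such spreading-activation models can be sketched in a few lines of Python. This is a generic illustration with invented nodes, weights, and parameters, not the mechanics of any particular published model:

```python
# Minimal spreading-activation sketch (illustrative only).
# Each step propagates a fraction of every node's activation along
# weighted links, then applies decay to the previous activation.

def spread(activations, links, rate=0.5, decay=0.1, steps=1):
    """links: dict mapping a source node to a list of (target, weight)."""
    act = dict(activations)
    for _ in range(steps):
        incoming = {node: 0.0 for node in act}
        for src, targets in links.items():
            for tgt, w in targets:
                incoming[tgt] = incoming.get(tgt, 0.0) + rate * w * act.get(src, 0.0)
        act = {node: (1 - decay) * act.get(node, 0.0) + incoming.get(node, 0.0)
               for node in set(act) | set(incoming)}
    return act

# A concept node activates two word nodes with different link weights.
net = {"DOG-concept": [("dog", 1.0), ("hound", 0.4)]}
start = {"DOG-concept": 1.0, "dog": 0.0, "hound": 0.0}
result = spread(start, net, steps=1)
# The strongly linked word ends up more active than the weakly linked one.
assert result["dog"] > result["hound"]
```

Concrete models differ in the network layers, the direction of the links, and the spreading formula, but this propagate-and-decay loop is the shared computational core.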

2. Stages In Human Language Processing

Although the various proposed architectures of the human language processing system and the respective mechanisms differ considerably in detail, there is little disagreement on the basic stages involved in transforming concepts into a linguistic utterance and vice versa.

2.1 Stages In Language Production

Most psychological models of language production agree on the division of the production process into three major stages that are called, following Levelt (1989), conceptualization, formulation, and articulation. How the subcomponents, interrelations, and functions of these stages are further differentiated, however, varies widely between models.

Conceptualization for language production comprises the selection, preparation, and linearization of pre-linguistic information. These processes lead to a conceptual representation which functions as input to a formulator.

Formulation involves the conversion of the output from the conceptual level into linguistic structures. It includes the selection of appropriate lexical items and the retrieval of syntactic structures and phonetic information. It is widely accepted that the selection of lexical items is a two-step process. The first step involves the selection of lemmas, i.e., semantically appropriate items together with their syntactic features. The second step concerns lexeme retrieval, where phonological information becomes available. The relationship between the two retrieval processes is still controversial, which is reflected in the different architectural bases and directions of activation flow in the corresponding computer models of this task. The result of the formulation process is a phonetic plan. Finally, articulation can be described as the realization of that phonetic plan and its execution at the motoric level.

One fundamental difference between the various production models relates to how the flow of information between the processing stages is designed. Sequential models assume a unidirectional flow from the conceptualizer to the formulator, and from the formulator to the articulator. The enormous rate of human speech production is explained in these models by incremental processing, i.e., the processing of information in a piecemeal way: all modules work in parallel, but not on the same piece of information. Increments are handed down to subsequent modules even though the whole informational unit has not yet been completed.
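
The idea can be illustrated with a toy pipeline of Python generators, where the stage names and the `word(...)`/`utter[...]` placeholders are purely illustrative. Because generators are lazy, each increment flows all the way downstream before the next one is even produced, so all stages are busy on different pieces of information:

```python
# Toy incremental production pipeline (stage contents are placeholders).

def conceptualize(message):
    for fragment in message:      # hand down one conceptual increment at a time
        yield fragment

def formulate(fragments):
    for frag in fragments:
        yield f"word({frag})"     # stand-in for lexical/syntactic encoding

def articulate(words):
    for w in words:
        yield f"utter[{w}]"       # stand-in for motoric execution

output = list(articulate(formulate(conceptualize(["AGENT", "ACTION", "PATIENT"]))))
# "AGENT" was formulated and articulated before "ACTION" was conceptualized.
```

A batch architecture would instead complete each stage for the whole message before starting the next, which is exactly what incremental models reject.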

In contrast to sequential models, interactive models allow direct feedback from subordinate to superordinate modules. Sequential models have the advantage of being easier to check empirically and to simulate via computer programs. However, they have a completeness problem concerning the relationship between language and conceptualization: sequential models must assume intermediate representations that reflect the previous decisions and the momentary knowledge. For example, all relevant conceptual decisions must have been concluded by the point at which a conceptual fragment is handed down to the formulator. Nothing that is relevant for selection processes in the formulator can be left unspecified, since the formulator itself does not provide criteria for decisions relating to content.

Both types of information flow are often directly identified with the symbolic and connectionist processing paradigms, respectively, although there is no cogent link between a specific type of information flow and symbolic or connectionist models.

2.2 Stages In Language Comprehension

From a logical point of view, the basic stages in comprehending a linguistic utterance must be the same as in language production, although there is overwhelming evidence that production is not simply the reverse of comprehension. The visual (in the case of reading) or auditory (in the case of listening) input must be perceived, the single words must be recognized, and the syntactic structure of the respective utterance units must be determined. Based on this structure, the meaning of the utterance is determined, and during discourse comprehension this meaning is embedded into the overall discourse situation. The same issues as in production models arise in models of language comprehension with respect to information flow and feedback. Disagreements exist about the architecture of the human language comprehension system, as well as about the specific underlying processes.

However, one characteristic feature that models of language comprehension should support is incrementality. Many studies show that as soon as a word has been recognized, it is immediately attached to the preceding information (Crocker et al. 2000). Perhaps the most convincing argument for incremental processing in sentence comprehension comes from the so-called garden-path effect. People starting to read a sentence like 'The horse raced past the barn fell' (Bever 1970) interpret the initial sequence (the horse) (raced) (past the barn) as 'subject,' 'predicate,' and 'direction.' Only after the last word has been recognized do they have a feeling of being stranded (in other words, they are 'led up the garden path'), because the syntactic analysis made so far has turned out to be false. In fact, the sentence must be interpreted as 'the horse that was raced past the barn fell.'

3. Computer Models Of Human Language Processing

Although most computer models in production and comprehension simulate processes around the level of lexical items (cf. Norris 1999), there are also several models at the sentence level. Computer models of the mechanisms of sentence comprehension simulate how people obtain a particular syntactic and semantic analysis for a sentence (e.g., Crocker 1996, Crocker et al. 2000). Computer models of sentence production simulate how people construct the syntactic structure of a sentence from retrieved lexical information, while also reproducing typical speech errors that occur at the sentence level (cf. de Smedt and Kempen 1987). Computer models that go beyond the sentence level to the production or comprehension of discourse turn out to be much harder to develop, since it is very difficult to determine all discourse-related parameters in experimental studies. From a broader perspective, the phenomena to be considered at the discourse level are all related to the notion of inference; they range from the interpretation of referring expressions to listener modeling.

The computer models that are presented in this section are models of various stages in language processing. The list is far from being exhaustive: It does not include all computer-implemented models that have been developed for one processing task, nor do the models completely cover all tasks in human language processing. The models presented should give an impression of the advantages of computer modeling in psycholinguistic research.

Differences in the architectural basis and the techniques used depend on the theoretical background and the system's task. Each of the computer models presented below is characterized along the following dimensions: first, the task in human language processing that the model simulates is outlined; then the architectural basis for the processing mechanisms and the technique used for the simulation are characterized; finally, relations between the system's behavior and empirical data are outlined.

3.1 Computer Models Of Language Production

Since models of human conceptualization are at an early stage of development, elaborated computer models of conceptualization are currently not available. Issues related to how people organize their pre-linguistic knowledge in order to put it into language have been addressed primarily in Artificial Intelligence research, especially in Natural Language Generation, but the models developed in these disciplines are not intended to simulate human processing. Conceptualization thus remains underdeveloped in computational psycholinguistics.

Computer modeling starts at the interface between conceptualization and formulation, viz. the retrieval of lemmas from the lexicon, given pre-linguistic information that needs to be expressed. Models of lemma retrieval come in two forms, symbolic and connectionist; only two connectionist models, both based on spreading-activation networks, are characterized here, because they illustrate the merits of computational modeling in psycholinguistics very well.

The decompositional model of Dell and O'Seaghdha (1992) aims at the selection of appropriate lemmas so that the relevant conceptual features can be linguistically expressed. The system's architecture is non-modular and interactive; hence, direct feedback is possible. The system uses a network consisting essentially of nodes for conceptual features and nodes for lemmas. Because the links between these two layers are bidirectional, the model can also be used for language comprehension. After the conceptual features are activated, activation spreads towards the lemma nodes and back. Depending on the parameter values chosen for the spreading formula, the model makes several empirical predictions with respect to the activation of semantically related lemmas and the speed of activation. However, the model also incorrectly retrieves hyponyms, the semantically subordinate lemmas of a given lemma, because all features that activate a word will also activate the word's hyponyms. For example, if the conceptual features activate the word 'pet,' they will also activate the words 'dog,' 'cat,' etc., and these hyponyms can receive a higher activation value than the target word 'pet.'
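
A toy calculation shows why a purely decompositional, feature-based account runs into the hyponym problem. The features and link weights below are invented for illustration; they are not Dell and O'Seaghdha's actual network:

```python
# Hyponym problem in a decompositional lexicon (hypothetical features).
# A hyponym ('dog') has all the features of its hyperonym ('pet') plus more,
# so activating exactly the features of 'pet' feeds 'dog' just as strongly.

features_of = {
    "pet":  {"ANIMATE", "DOMESTIC"},
    "dog":  {"ANIMATE", "DOMESTIC", "CANINE"},   # pet's features and one more
    "bank": {"INSTITUTION"},                     # unrelated control lemma
}

def lemma_input(active_features):
    """Sum the bottom-up input each lemma receives from active feature nodes."""
    return {lemma: sum(1.0 for f in feats if f in active_features)
            for lemma, feats in features_of.items()}

# Intending 'pet', we activate exactly the features of 'pet'...
inp = lemma_input(features_of["pet"])
# ...but the hyponym 'dog' receives no less bottom-up input than the target.
assert inp["dog"] >= inp["pet"] > inp["bank"]
```

In the full interactive model, feedback loops can then push such a hyponym above the intended target, which is exactly the erroneous retrieval described above.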

The nondecompositional model of Roelofs (1992) also aims at the selection of lemmas, but in contrast to the previous model, each concept is represented by a single node rather than by conceptual features, i.e., the model assumes that concepts cannot be decomposed into conceptual primitives. In this model, activation spreads towards the lemma nodes, and the lemma selected is the one whose activation level exceeds that of the other nodes by some critical amount. The model correctly describes the activation of lemmas given some activation of concepts, and it does not suffer from the hyponym problem. Furthermore, the model makes predictions about activation speed that have been tested empirically and have turned out to be correct.

Since an evaluation of both models along all dimensions is almost impossible here, only the hyponym problem is discussed further. The absence of the hyponym problem in the nondecompositional model, together with its correct empirical predictions, is a strong argument for the assumption that concepts do not consist of conceptual primitives. Whether mental representations of conceptualized objects, events, times, etc. are built up from conceptual primitives has long been intensively discussed in linguistics and the philosophy of mind. Since the successful implementation of a cognitive function is a proof of the theory's consistency, these results argue against conceptual decomposition.

Two systems simulating the next step in language production, namely the construction of syntactic structures of the sentences to be expressed, are now presented.

The incremental parallel formulator (IPF) (de Smedt 1990) simulates how syntactic structures are constructed in a piecemeal way when conceptual increments arrive as input at specific points in time. These increments are mapped onto single words or thematic roles. The system is based on a modular architecture and uses graph unification as its operation. Graph unification, the standard operation in computational grammar formalisms, combines two informational units into a third one, provided the information is not contradictory. IPF simulates parallelism in formulation, and by means of the timing and accessibility of incoming conceptual fragments it also explains specific word-order variations.
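
Graph unification itself can be sketched as follows, with feature structures simplified to nested Python dicts. This is a generic illustration of the operation, not IPF's actual implementation:

```python
# Minimal feature-structure unification: combine two structures unless they
# assign conflicting atomic values to the same feature.

def unify(fs1, fs2):
    """Return the unification of two feature structures, or None on a clash."""
    result = dict(fs1)
    for feat, val in fs2.items():
        if feat not in result:
            result[feat] = val                    # new feature: just add it
        elif isinstance(result[feat], dict) and isinstance(val, dict):
            sub = unify(result[feat], val)        # recurse into substructures
            if sub is None:
                return None                       # clash inside a substructure
            result[feat] = sub
        elif result[feat] != val:
            return None                           # contradictory atomic values
    return result

# A noun phrase unifies with a subject slot demanding agreement features...
np = {"cat": "NP", "agr": {"num": "sg"}}
subj_slot = {"cat": "NP", "agr": {"num": "sg", "per": 3}}
assert unify(np, subj_slot) == {"cat": "NP", "agr": {"num": "sg", "per": 3}}
# ...but fails against a slot requiring plural number.
assert unify(np, {"agr": {"num": "pl"}}) is None
```

Full grammar formalisms add re-entrant (shared) structures and typed features, but the monotonic combine-or-fail behavior shown here is the essence of the operation.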

The flexible incremental generator (FIG) (Ward 1992) is a connectionist model with an interactive architecture. The task of incremental sentence construction is realized on the basis of an associative network. The input to the formulation process is conceptual information provided with an activation value. Contrary to the previous model, this input is not offered incrementally but as a whole; the sentence production process is nevertheless incremental, because sentences are uttered word by word. Activation spreading through the network results in the activation of words, and the words with the highest activation are uttered. The model accounts for speech errors, because non-intended concepts may also receive activation.

Although the architectural bases and processing strategies of the two models are completely different, both show that incremental processing is psychologically plausible. In the first model, the time span between incoming fragments corresponds directly to incrementality effects; the second model describes the incremental selection of words. However, no predictions can be derived directly from either model. The models differ in their assumptions about where incremental processing is involved, namely during both conceptualization and formulation, or during formulation only. With respect to the processing mechanisms used, both models are equally simple, because each system uses only one operation.

It has been claimed that connectionist approaches are more robust than symbolic ones. However, for incrementality effects during sentence construction, this statement is too general. The IPF model shows that a symbolic processing strategy does not necessarily impose constraints on the data to be explained.

3.2 Computer Models Of Language Comprehension

Since comprehension begins with the recognition of words, this section deals first with models of spoken word recognition. Such models must answer two main questions: first, how is the sensory input mapped onto the lexicon, and second, what are the processing units in this process?

The TRACE model (McClelland and Elman 1986) is an interactive model that simulates how a word is identified from a continuous speech signal. The difficulty of this task is that the continuous signal provides no clear cues as to where a word begins and where it ends. By means of activation spreading through a network that consists of three layers (features, phonemes, and words), the system generates competitor forms, converging on the ultimately selected word. Competition is realized by inhibitory links between candidates.
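
The effect of such inhibitory links can be sketched with a toy competition loop in Python. The candidates and parameters are invented for illustration and do not reproduce TRACE's actual three-layer dynamics:

```python
# Competition via lateral inhibition (toy values, not TRACE's parameters):
# each word candidate is suppressed in proportion to its rivals' activation,
# so a small initial advantage grows until one candidate wins outright.

def compete(activations, inhibition=0.3, steps=20):
    act = dict(activations)
    for _ in range(steps):
        new = {}
        for word, a in act.items():
            rivals = sum(v for w, v in act.items() if w != word)
            new[word] = max(0.0, a - inhibition * rivals)  # floor at zero
        act = new
    return act

# Two candidates consistent with the signal so far; the better match starts
# slightly higher and eventually drives its competitor to zero.
final = compete({"cat": 0.60, "cap": 0.55})
assert final["cat"] > 0.0 and final["cap"] == 0.0
```

This winner-take-all behavior is what lets an interactive-activation network settle on a single word without any explicit segmentation of the input.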

The SHORTLIST model (Norris 1994) is based on a modular architecture with two distinct processing stages. It also uses spreading activation, but in a strictly bottom-up way. Contrary to TRACE, it generates a restricted set of lexical candidates during the first stage; during the second stage, the best-fitting words are linked via an activation network.

Both models account for experimental data on the time course of word recognition. Assumptions about the direction and nature of activation flow lead to several differing predictions, but the main difference concerns lexical activation: while TRACE assumes that a very large number of items is activated, SHORTLIST assumes that a much smaller set of candidates is available, so that the two models explain the recognition of words beginning at different points in time differently.

The last model is a model of sentence processing. In sentence processing, one of the fundamental questions is why certain sentences receive a preferred syntactic structure and semantic interpretation.

The SAUSAGE MACHINE (Fodor and Frazier 1980) is a parsing model that assumes two stages in parsing, each with a limited working capacity. The original idea behind the model is to explain preferences in syntactic processing solely on an architectural basis, by means of the limitations of the working memories.

The SAUSAGE MACHINE is a quasi-deterministic model: one syntactic structure is generated for each sentence, and only if this analysis turns out to be wrong is a reanalysis performed. Since reanalyzing a sentence is a time-consuming process, the system tries to avoid it whenever possible. The model accounts for garden-path effects and for the difficulty of understanding multiply center-embedded sentences (like 'the house the man sold burnt down'). Furthermore, the model explains interpretation preferences by means of the limitation of working memories. However, it is now understood that the architecture of a system cannot be the only factor responsible for processing preferences; additional parsing principles must be assumed (Wanner 1980). Newer computational models of sentence processing show that explaining several phenomena in sentence processing requires an early check of partial syntactic structures against lexical and semantic knowledge (Hemforth 1993).

4. Conclusion

The outline of different computational models of human language processing shows that psycholinguistics benefits in several respects from the use of simulation models. The influence of several factors in language processing, such as the directionality of activation or information flow, the link between pre-linguistic and linguistic knowledge, the layers of information in specific processing tasks, limitations of working memory, and many others, has been tested successfully by simulation models.

The dominant use of spreading activation as a processing mechanism for stages of language processing below the sentence level is probably due to the derivation of these computational models from psycholinguistic models that were developed on the basis of statistical methods. The use of symbolic operations, by contrast, is primarily inspired by linguistic investigations. Models of human language processing that originate from linguistic research do not explain the time course of the underlying mechanisms or statistically significant effects, but rather the linguistic competence underlying the skill of speaking and comprehending. Hence, computer models aiming at various effects in sentence processing predominantly use symbolic operations to explain this competence.

This situation suggests that one future direction in computational psycholinguistics could be the development of models combining symbolic with connectionist methods to describe the interplay of various tasks in language production or comprehension.

An excellent introduction to various computer models of subtasks in human language processing is given by the contributions in Dijkstra and de Smedt (1996).


  1. Bever T G 1970 The cognitive basis for linguistic structures. In: Hayes J R (ed.) Cognition and the development of language. Wiley, New York
  2. Crocker M W 1996 Computational psycholinguistics. An inter-disciplinary approach to the study of language. Kluwer, Dordrecht, The Netherlands
  3. Crocker M W, Pickering M, Clifton C 2000 (eds.) Architectures and Mechanisms for Language Processing. Cambridge University Press, Cambridge, UK
  4. Dell G S, O’Seaghdha P G 1992 Stages of lexical access in language production. Cognition 42: 287–314
  5. de Smedt K 1990 IPF: An incremental parallel formulator. In: Dale R, Mellish C, Zock M (eds.) Current Research in Natural Language Generation. Academic Press, London, pp. 167–92
  6. de Smedt K, Kempen G 1987 Incremental sentence production, self-correction and co-ordination. In: Kempen G (ed.) Natural Language Generation. New Results in Artificial Intelligence, Psychology and Linguistics. Nijhoff (Kluwer Academic Publishers), Dordrecht, The Netherlands, pp. 365–76
  7. Dijkstra T, de Smedt K (eds.) 1996 Computational psycholinguistics. Taylor & Francis, London
  8. Fodor J D, Frazier L 1980 Is the human sentence parsing mechanism an ATN? Cognition 8: 417–59
  9. Gernsbacher M A 1994 (ed.) Handbook of psycholinguistics. Academic Press, San Diego, CA
  10. Hemforth B 1993 Kognitives Parsing. Repräsentation und Verarbeitung sprachlichen Wissens. [Cognitive parsing. The representation and processing of linguistic knowledge.] Infix, St Augustin
  11. Levelt W J M 1989 Speaking. MIT Press, Cambridge, MA
  12. McClelland J L, Elman J L 1986 The TRACE model of speech perception. Cognitive Psychology 18: 1–86
  13. Norris D 1994 SHORTLIST: A connectionist model of continuous speech recognition. Cognition 52: 189–234
  14. Norris D 1999 Computational psycholinguistics. In: Wilson R A, Keil F C (eds.) The MIT Encyclopedia of the Cognitive Sciences. MIT Press, Cambridge, MA, pp. 168–70
  15. Roelofs A 1992 A spreading-activation theory of lemma retrieval in speaking. Cognition 42: 107–42
  16. Wanner E 1980 The ATN and the sausage machine: Which one is baloney? Cognition 8: 209–25
  17. Ward N 1992 A parallel approach to syntax for generation. Artificial Intelligence 57: 183–225
  18. Weizenbaum J 1966 ELIZA: A computer program for the study of natural language communication between man and machine. Communications of the ACM 9(1): 36–45