Lexicon Research Paper


1. The Notion Of Lexicon

The lexicon is standardly viewed as a listing of all the morphemes of a language, with information indicating how each morpheme behaves in the components of grammar involving phonology, syntax, and semantics. In no small part, the shape and character of the grammar are determined by what the lexicon supplies to these other grammatical components. Nevertheless, both historically and conventionally, the lexicon has been seen as the passive module in the system of grammar.



More recently, the model of the lexicon has undergone significant revision and maturation. In particular, two trends have driven the architectural concerns of lexical researchers: (a) a tighter integration of compositional operations of syntax and semantics with the lexical information structures that bear them; and (b) a serious concern with how lexical types reflect the underlying ontological commitments of the grammar. In the process, the field has moved towards addressing more encompassing problems in linguistic theory, such as those below:

(a) How can we explain the polymorphic nature of language?




(b) How can we capture the creative use of words in novel contexts?

(c) How can semantic types predictably map to syntactic representations?

(d) What are the ‘atoms’ of lexical knowledge, if they exist at all?

In this research paper, we first review the conventional view of the lexicon and then contrast this with the theories of lexical information that have emerged since around 1990.

By all accounts, the conventional model of the lexicon is that of a database of words, ready to act in the service of more dynamic components of the grammar. This view has its origins squarely in the generative tradition (Chomsky 1955) and has been an increasingly integral part of the concept of the lexicon ever since. While the ‘Aspects’ model of selectional features restricted the relation of selection to that between lexical items, work by Jackendoff (1972) and McCawley (1968) showed that selectional restrictions must be available to computations at the level of derived semantic representation rather than at deep structure. But where did this view come from? In order to understand both the classical model of the lexicon as a database and the current models of lexically encoded grammatical information, it is necessary to appreciate the structuralist distinction between ‘syntagmatic processes’ and ‘paradigmatic systems’ in language. The lexicon has emerged as the focal point of communication between these two components, and can be seen as a hook linking the information at these two levels. One can go further still and view the elements of the lexicon not just as the building blocks for the more active components of the grammar, but as actively engaging the building principles themselves.

While syntagmatic processes refer to the influence of horizontal elements on a word or phrase, paradigmatic systems refer to vertical substitutions in a phrasal structure. Syntagmatics evolved into the theory of abstract syntax, while paradigmatics was all but abandoned in generative linguistics. In an early discussion of syntagmatic dependencies, Hjelmslev (1943) uses the term ‘selection’ explicitly in the modern sense and notes the importance of integrating paradigmatic systems with the syntagmatic processes they participate in. For Hjelmslev, there are two possible types of relations that can exist between elements in a syntagmatic process: ‘interdependence’ and ‘determination,’ the latter of which is related to the notion of selectional restriction as developed by Chomsky (1965). As Cruse (1986) notes, ‘One reason that selectional restrictions were not integrated into mechanisms of grammatical selection and description in the 1970s and 1980s is that, if they are imposed correctly, the grammar is forced to model two computations:

(a) the entailment relations between selectional restrictions as features must be modeled formally, in order to contribute to the computation of a syntactic description;

(b) the manner in which selectional features or constraints contribute to the determination of the meaning of expressions must be enriched in order to exploit these very features.’

Recently, with the convergence of several areas in linguistics (lexical semantics, computational lexicons, and type theories) several models for the determination of selection have emerged which actively integrate these central syntagmatic processes into the grammar, by making explicit reference to the paradigmatic systems which allow for grammatical constructions to be partially determined by selection. Examples of this approach are Generative Lexicon Theory (Bouillon and Busa 2001; Pustejovsky 1995), and to a certain extent, Construction Grammar (Goldberg 1995), CCG (Steedman 1997), and HPSG (Pollard and Sag 1994).

These recent theoretical developments have led to a new direction in lexical design. Rather than restricting the scope of the lexicon to that of a passive database, current frameworks have re-architected the relationship between syntactic and semantic representations and the mechanisms that generate them. These developments have helped to characterize approaches to lexical design in terms of a hierarchy of semantic expressiveness. There are at least three such classes of lexical description, defined as follows (cf. Pustejovsky 1995 for discussion):

(a) Sense Enumerative Lexicons: lexical items have a single type and meaning, and ambiguity is treated by multiple listings of words.

(b) Polymorphic Lexicons: lexical items are active objects, contributing to the determination of ‘meaning in context’ under well-defined constraints.

(c) Unrestricted Sense Lexicons: meanings of lexical items are determined mostly by context and conventional use. Few, if any, restrictions are imposed on how a word may refer.

Although there have been proponents for each class of lexical description defined here, the most promising direction seems to be a careful and formal elucidation of the polymorphic lexicons, and this will form the basis of our subsequent discussion of both the structure and content of lexical entries.

2. The Structure Of A Lexical Entry

As mentioned in Sect. 1, it is generally agreed that there are three components to a lexical item: phonological, syntactic, and semantic information. In this research paper, we mainly focus on the manner in which syntactic and semantic representations are encoded in the lexical entry.

There are two types of syntactic knowledge associated with a lexical item: its ‘category’ and its ‘subcategory.’ The former includes the traditional classifications of the major categories, such as noun, verb, adjective, adverb, and preposition, as well as the minor categories, such as conjunctions, quantifier elements, and determiners.

Knowledge of the subcategory of a lexical item is typically information that differentiates categories into distinct, distributional classes. This sort of information may be usefully separated into two types, ‘contextual features’ and ‘inherent features.’ The former are features that may be defined in terms of the contexts in which a given lexical entry may occur. Subcategorization information marks the local syntactic context for a word. It is this information that ensures that the verb ‘devour,’ for example, is always transitive in English, requiring a direct object; the lexical entry encodes this requirement with a subcategorization feature specifying that an NP appear to its right. Another type of context encoding is collocational information, where patterns that are not fully productive in the grammar can be tagged. For example, the adjective ‘heavy’ as applied to ‘drinker’ and ‘smoker’ is collocational and not freely productive in the language (Mel’cuk 1988). ‘Inherent features,’ on the other hand, are properties of lexical entries that are not easily reduced to a contextual definition, but rather refer to the ontological typing of an entity. These include such features as count/mass (e.g., ‘pebble’ vs. ‘water’), abstract, animate, human, physical, and so on.

Semantic information can also be separated into two categories: ‘base semantic typing’ and ‘selectional typing.’ While the former identifies the semantic class that a lexical item belongs to (such as entity, event, property), the latter class specifies the semantic features of arguments and adjuncts to the lexical item.
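The four kinds of information just distinguished can be made concrete in a small sketch. The following Python record is purely illustrative: the field names and feature values are assumptions of this sketch, not part of any particular lexical theory.

```python
from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    """Illustrative lexical entry carrying the four kinds of information
    distinguished in the text: category, subcategory (contextual and
    inherent features), base semantic type, and selectional typing."""
    form: str
    category: str                       # major or minor syntactic category
    subcat: tuple = ()                  # contextual features: local syntactic frame
    inherent: frozenset = frozenset()   # inherent features: ontological typing
    base_type: str = "entity"           # base semantic type (entity, event, ...)
    selects: dict = field(default_factory=dict)  # selectional typing per argument

# 'devour' is obligatorily transitive: its subcategorization frame
# requires an NP to its right, and it selects typed arguments.
devour = LexicalEntry(
    form="devour", category="V", subcat=("NP",),
    base_type="event", selects={"subject": "animate", "object": "physical"})

# 'pebble' carries inherent features and a base type rather than a frame.
pebble = LexicalEntry(
    form="pebble", category="N",
    inherent=frozenset({"count", "physical"}), base_type="entity")
```

Note the division of labor: the transitivity of ‘devour’ lives in a contextual frame, while ‘pebble’ is characterized purely by inherent features and its base type.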

2.1 Word Classes And Typing Information

There are two major approaches to classifying lexical items by their type: syntactic and semantic. (Another influential tradition for verb classification, which we will not discuss in detail here, is a more descriptive approach to word classes, where membership is defined on the basis of grammatical behavior and verbal valency alternations, such as that elaborated on and compiled in Levin (1993).)

One obvious way to organize lexical knowledge, be it syntactic or semantic, is by means of lexical inheritance mechanisms. In fact, much recent work has focused on how to provide shared data structures for syntactic and morphological knowledge (Flickinger et al. 1985). Evans and Gazdar (1990) provide a formal characterization of how to perform inferences in a language for multiple and default inheritance of linguistic knowledge. The language developed for that purpose, DATR, uses value-terminated attribute trees to encode lexical information. Briscoe et al. (1993) describe a rich system of types for allowing default mechanisms into lexical type descriptions.
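The default inheritance these systems rely on can be sketched in a few lines. This is not DATR itself, only a minimal illustration of the core idea that a value stated on a specific node overrides one inherited from a more general node; the node names and attributes are invented for the example.

```python
class Node:
    """A node in an inheritance hierarchy: locally stated attribute-value
    pairs override anything inherited from the parent (default inheritance)."""
    def __init__(self, name, parent=None, **attrs):
        self.name, self.parent, self.attrs = name, parent, attrs

    def get(self, attr):
        # Walk up the hierarchy; the nearest (most specific) value wins.
        node = self
        while node is not None:
            if attr in node.attrs:
                return node.attrs[attr]
            node = node.parent
        raise KeyError(attr)

# English verbs take '-ed' by default; individual lexemes can override.
verb = Node("verb", category="V", past_suffix="-ed")
walk = Node("walk", parent=verb)                      # regular: inherits default
sing = Node("sing", parent=verb, past_suffix=None,    # irregular: overrides
            past_form="sang")
```

Here `walk` inherits the regular past suffix from the `verb` node, while `sing` blocks it with a locally stated exception, mirroring how default inheritance keeps regular behavior stated only once.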

Type structures can express the inheritance of syntactic features (Sanfilippo 1993) as well as the relationship between more conventional taxonomic information, such as that shown in Sect. 2.2 (cf. Copestake and Briscoe 1992, Pustejovsky 1995, Pustejovsky and Boguraev 1993) and Fig. 1.

[Figure 1: a semi-lattice of semantic types]

Given a semi-lattice of types such as this, the lexical items in the language can be associated with a much richer system of differentiated semantic classes. Verbs may also be structured in hierarchical relations, as done in HPSG, LFG, and other lexical frameworks (cf. Pollard and Sag 1994, Alsina 1992, Koenig and Davis 1999).

[Figure 2: verb types structured in hierarchical relations]

2.2 Argument Structure

Once the base syntactic and semantic typing for a lexical item has been specified, its subcategorization and selectional information must be encoded in some form. There are two major techniques for representing this type of knowledge:

(a) associate ‘named roles’ with the arguments to the lexical item;

(b) associate a logical decomposition with the lexical item; meanings of arguments are determined by how the structural properties of the representation are interpreted (cf. Hale and Keyser 1993, Levin and Rappaport 1995, Pustejovsky 1995).

One influential way of encoding selectional behavior is the theory of thematic relations (cf. Gruber 1976, Jackendoff 1972). Thematic relations are typically defined as partial semantic functions of the event being denoted by the verb or noun, and behave according to a predefined calculus of role relations (e.g., Carlson 1984, Chierchia 1989, Dowty 1989). For example, semantic roles such as agent, theme, and goal can be used to partially determine the meaning of a predicate when they are associated with the grammatical arguments to a verb.

The theory of argument structure as developed by Williams (1981), Grimshaw (1990), and others can be seen as a move towards a more minimalist description of semantic differentiation in the verb’s list of parameters. The argument structure for a word can be seen as the simplest specification of its semantics, indicating the number and type of parameters associated with the lexical item as a predicate. For example, the verb ‘build’ can be represented as a predicate taking two arguments, while the verb ‘give’ takes three arguments.

(1) (a) build (x, y)

 (b) give (x, y, z)

What originally began as the simple listing of the parameters or arguments associated with a predicate has developed into a sophisticated view of the way arguments are mapped onto syntactic expressions. Williams’s (1981) distinction between ‘external’ and ‘internal’ arguments and Grimshaw’s proposal for a hierarchically structured representation (cf. Grimshaw 1990) provide us with the basic syntax for one aspect of a word’s meaning. Similar remarks hold for the argument list structure in HPSG (Pollard and Sag 1994) and LFG (Bresnan 1994).

The interaction of a structural argument list and a rich system of types, such as that presented above, provides a mechanism for semantic selection that overcomes the difficulties mentioned in Sect. 1. The most direct impact of semantic type systems on syntactic subcategorization can be seen with the analysis of a simple example.

(2) (a) The man /the rock fell.

 (b) The man /*the rock died.

Returning to the example in (2), consider how the selectional distinction for the feature [+/- animacy] is modeled. For the purpose of illustration, the arguments of a verb will be represented in a list structure, where each argument is identified as being typed with a specific value.

[Figure: the arguments of ‘die’ represented as a typed list structure]

In the sentences in (2), it is intuitively clear that rocks can’t die and men can, but it is not yet obvious how this inference is computed, given the types associated with the nouns ‘rock’ and ‘man’ respectively. What accomplishes this computation is a rule of subtyping, Θ, which allows the type associated with the noun ‘man’ (i.e., human) to also be accepted as the type animate, which is what the predicate ‘die’ requires of its argument:

(4) Θ [human ⊑ animate]: human → animate

The rule Θ applies, since the concept human is subtyped under animate in the type hierarchy. Parallel considerations rule out the noun ‘rock’ as a legitimate argument to ‘die’ since it is not subtyped under animate. Hence, one of the concerns given for how syntagmatic processes can systematically keep track of which ‘selectional features’ are entailed and which are not is partially addressed by such lattice traversal rules as the one presented here.
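The lattice-traversal rule at work here can be sketched directly. The particular type names and subtype edges below are assumptions chosen to match the example in (2), not a complete type system.

```python
# A toy semi-lattice of semantic types, written as child -> parent edges.
SUBTYPE_OF = {
    "human": "animate",
    "animal": "animate",
    "animate": "physical",
    "rock": "physical",
    "physical": "entity",
}

def is_subtype(t, required):
    """Lattice traversal: does type t sit at or under 'required'?"""
    while t is not None:
        if t == required:
            return True
        t = SUBTYPE_OF.get(t)  # climb one step toward the top of the lattice
    return False

def selects(verb_requirement, noun_type):
    """The rule Theta: accept the argument iff its type is subtyped
    under what the predicate requires."""
    return is_subtype(noun_type, verb_requirement)
```

On this sketch, `selects("animate", "human")` succeeds because human is subtyped under animate, while `selects("animate", "rock")` fails, ruling out ‘*the rock died’ while still admitting ‘the rock fell’ (where only physical is required).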

Selection can also be employed to solve the problem of polymorphism, where the same lexical item can appear in multiple syntactic contexts, as illustrated in (5) below.

(5) (a) Mary began to read the novel.

 (b) Mary began reading the novel.

 (c) Mary began the novel.

Generative Lexicon Theory handles such examples by the use of semantic selection (cf. Pustejovsky 1995) and canonical syntactic mapping rules, specifying how semantic types correspond to syntactic expressions. Here, the verb ‘begin’ is typed as taking an event description as its internal argument, begin (Ind, ξ), which can be realized in one of the three syntactic forms shown. This illustrates how a generative system of type operations can account for polymorphic behavior of selection in the syntax.
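This selection-plus-coercion behavior can be sketched with an invented toy lexicon: when the complement of ‘begin’ is already event-typed it is accepted directly, and an entity-typed NP is coerced into the event stored in its telic quale (here, the reading event conventionally associated with a novel). The dictionaries and strings below are illustrative assumptions, not the formal machinery of Generative Lexicon Theory.

```python
# Toy complement types for the three syntactic frames in (5).
LEXICON = {
    "to read the novel": "event",   # infinitival complement: an event description
    "reading the novel": "event",   # gerundive complement: an event description
    "the novel": "entity",          # bare NP complement: an entity, not an event
}

# Qualia-supplied event for an entity: the TELIC of 'novel' is reading.
TELIC = {"the novel": "read the novel"}

def begin(complement):
    """'begin' selects an event description; an entity-typed complement
    is coerced to the event stored in its telic quale."""
    if LEXICON[complement] == "event":
        return f"begin({complement})"
    if complement in TELIC:                       # type coercion via qualia
        return f"begin({TELIC[complement]})"
    raise TypeError(f"cannot coerce {complement!r} to an event")
```

All three sentences in (5) thus receive the same semantic shape, with (5c) interpreted as beginning the reading event recovered from the noun’s qualia.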

2.3 Decomposition And Event Structure

The second major approach to the specification of lexical knowledge is that taken by decompositional theories. Most of this research has focused on the lexical specification for verbs, their arguments, and their syntactic behavior. Some recent work, however, has been done on noun semantics as well (Busa 1996). In this section, we examine the motivations for lexical decomposition in linguistic theory, and the various proposals that have emerged on how to encode lexical knowledge as structured forms. We then relate this to the manner in which verbs refer to events, since this directly impacts the nature of the lexical decomposition structure.

Since Davidson (1967), events have played an increasingly important role in the determination of verb meaning. While early researchers on decompositional models (Dowty 1979, Lakoff 1965, McCawley) made no ontological commitments to events in the semantics for verbs, a new synthesis has emerged in recent years which attempts to model verb meanings as complex predicative structures with rich event structures (cf. Hale and Keyser 1993, Parsons 1990, Pustejovsky 1991, Tenny 1994). This research has developed the idea that the meaning of a verb can be analyzed into a structured representation of the event that the verb designates, and has furthermore contributed to the realization that verbs may have complex, internal event structures. Recent work has converged on the view that complex events are structured into an inner and an outer event, where the outer event is associated with causation and agency, and the inner event is associated with telicity (completion) and change of state. Under this view, a canonical accomplishment predicate as in ‘John sliced the bread,’ for example, can be represented as composed of an inner and an outer event. The inner event is the telic event in which the bread undergoes a change of state, and the outer event is the event in which John acts agentively (to do whatever is involved in the act of slicing). Since the outer event causes the inner one, it is associated lexically with causation.

Although there is a long tradition of analyzing causation as a relation between two events in the philosophical (cf. Davidson 1967) and psychological literature (cf. Miller and Johnson-Laird 1976, Schank 1973) in contemporary models of natural language semantics this idea has only recently been adopted. For example, Carter (1976), one of the earlier researchers in this area, represents the meaning of the verb ‘darken’ as follows:

(6) x CAUSE ((y BE DARK) CHANGE)

The predicate CAUSE is represented as a relation between a causer argument x and an inner expression involving a change of state in the argument y, although there is an intuition that the cause relation involves a causer and an event. Carter does not make this commitment explicitly. Levin and Rappaport (1988) follow a similar strategy, with a CAUSE predicate relating a causer argument and an inner expression involving a change of state in the argument y. The change of state is represented with the predicate BECOME:

(7) wipe the floor clean:

 [x CAUSE [y BECOME (AT) z] BY [x ‘wipe’ y]]

 [x CAUSE [floor BECOME (AT) clean] BY [x ‘wipe’ floor]]

The work of Levin and Rappaport, building on Jackendoff’s Lexical Conceptual Structures, has been influential in articulating the internal structure of verb meanings (see Levin and Rappaport 1995).

Jackendoff (1990) develops an extensive system of what he calls ‘Conceptual Representations,’ which parallel the syntactic representations of sentences of natural language. These employ a set of canonical predicates including CAUSE, GO, TO, and ON, and canonical elements including Thing, Path, and Event. These approaches represent verb meaning by decomposing the predicate into more basic predicates. This work owes an obvious debt to the innovative work within generative semantics, as illustrated by McCawley’s (1968) analysis of the verb ‘kill.’ Recent versions of lexical representations inspired by generative semantics can be seen in the Lexical Relational Structures of Hale and Keyser (1993), where syntactic tree structures are employed to capture the same elements of causation and change of state as in the representations of Carter, Levin and Rappaport, Jackendoff, and Dowty.

Pustejovsky (1988, 1991) extends the decompositional approach presented in Dowty (1979) by explicitly reifying the events and subevents in the predicative expressions. Unlike Dowty’s treatment of lexical semantics, where the decompositional calculus builds on propositional or predicative units (as previously discussed), a ‘syntax of event structure’ makes explicit reference to quantified events as part of the word meaning. Pustejovsky further introduces a tree structure to represent the temporal ordering and dominance constraints on an event and its subevents. For example, a predicate such as ‘build’ is associated with a complex event such as that shown in (8):

(8) [transition [e1: PROCESS] [e2: STATE]]

The process consists of the building activity itself, while the state represents the result of there being the object built. Grimshaw (1990) adopts this theory in her work on argument structure, where complex events such as ‘break’ are given a similar representation. In such structures, the process consists of what x does to cause the breaking, and the state is the resultant state of the broken item. The process corresponds to the outer causing event as discussed above, and the state corresponds in part to the inner change-of-state event. Both Pustejovsky and Grimshaw differ from the authors above in assuming a specific level of representation for event structure, distinct from the representation of other lexical properties. Furthermore, they follow Higginbotham (1985) in adopting an explicit reference to the event place in the verbal semantics.
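The event-structure tree exemplified by ‘build’ is straightforward to model as a recursive data type. This is an illustration of the representation only, with invented field names, not an implementation from the literature.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A node in an event-structure tree: an event sort plus its
    temporally ordered subevents."""
    sort: str              # e.g. 'transition', 'process', 'state'
    subevents: tuple = ()

# 'build' denotes a transition from a building process (e1)
# to the state of the built object existing (e2).
build = Event("transition", (
    Event("process"),   # e1: what the agent does
    Event("state"),     # e2: the resultant state
))

# Temporal ordering falls out of subevent order: e1 precedes e2.
e1, e2 = build.subevents
```

The dominance constraint is encoded by tree containment (the transition dominates both subevents), and temporal precedence by the left-to-right order of `subevents`.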

2.4 Qualia Structure

Thus far, we have focused on the lexical information associated with verb entries. All of the major categories, however, are encoded with syntactic and semantic feature structures that determine their constructional behavior and subsequent meaning at logical form. How this is accomplished, of course, varies from theory to theory.

In Generative Lexicon Theory, it is assumed that word meaning is structured on the basis of four generative factors, or ‘qualia roles,’ that capture how humans understand objects and relations in the world and provide the minimal explanation for the linguistic behavior of lexical items (these are inspired in large part by Moravcsik’s (1975, 1990) interpretation of Aristotelian aitiae):

formal: the basic category that distinguishes the object within a larger domain;

constitutive: the relation between an object and its constituent parts;

telic: the object’s purpose and function;

agentive: factors involved in the object’s origin or ‘coming into being.’

Qualia structure is at the core of the generative properties of the lexicon, since it provides a general strategy for creating new types. For example, consider the properties of nouns such as ‘rock’ and ‘chair.’ These nouns can be distinguished on the basis of semantic criteria which classify them in terms of general categories such as natural kind and artifact. Although very useful, this is not sufficient to discriminate semantic types in a way that also accounts for their grammatical behavior. A crucial distinction between ‘rock’ and ‘chair’ concerns the properties which differentiate natural kinds from artifacts: functionality plays a crucial role in the process of individuation of artifacts, but not of natural kinds. This is reflected in grammatical behavior, whereby ‘a good chair’ or ‘enjoy the chair’ are well-formed expressions reflecting the specific purpose for which an artifact is designed, but ‘good rock’ or ‘enjoy a rock’ are semantically ill-formed, since for ‘rock’ the functionality (i.e., TELIC) is undefined. Exceptions exist when new concepts are referred to, such as when the object is construed relative to a specific activity, as in ‘The climber enjoyed that rock’; ‘rock’ itself takes on a new meaning, by virtue of having telicity associated with it, and this is accomplished by integration with the semantics of the subject NP.

Although ‘chair’ and ‘rock’ are both physical objects, they differ in their mode of coming into being (i.e., AGENTIVE): artifacts are man-made, while rocks develop in nature. Similarly, a concept such as ‘food’ or ‘cookie’ has a physical manifestation or denotation, but also a functional grounding, pertaining to the relation of ‘eating.’ These apparently contradictory aspects of a category are orthogonally represented by the qualia structure for that concept, which provides a coherent structuring for the different dimensions of meaning.
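A minimal sketch can show how an undefined TELIC role accounts for the contrast between ‘enjoy the chair’ and ‘enjoy a rock.’ All role values below are invented placeholders, and the felicity check is a deliberate simplification of the theory.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Qualia:
    """The four qualia roles; None marks an undefined role."""
    formal: str
    constitutive: Optional[str] = None
    telic: Optional[str] = None
    agentive: Optional[str] = None

# Artifact: its TELIC (purpose) and AGENTIVE (mode of creation) are defined.
chair = Qualia(formal="physical_object", constitutive="legs, seat, back",
               telic="sit_in", agentive="make")

# Natural kind: no conventional purpose; it comes into being naturally.
rock = Qualia(formal="physical_object", agentive="develop_in_nature")

def enjoy_felicitous(noun_qualia):
    # 'enjoy' needs an activity to interpret; a defined TELIC supplies one.
    return noun_qualia.telic is not None
```

On this sketch, `enjoy_felicitous(chair)` holds while `enjoy_felicitous(rock)` does not; the climber example corresponds to contextually supplying a telic value that the lexical entry for ‘rock’ lacks.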

For relations, the qualia act in a similar capacity to thematic relations, but where the individual qualia are possibly associated with entire event descriptions, and not just individuals. For discussion, see Pustejovsky (1995).

3. Consequence Of Lexical Design

Considering the potential range of information that can be represented lexically forces a reconceptualization of what a lexicon is, and with it the very design of the grammar. Furthermore, our current understanding of the psychological and computational properties of language processing suggests that the resources available for lexical storage and access are considerably greater than early grammatical theorists imagined. The consequence is that much of what had originally been accomplished in syntax, because of the combinatoric properties inherent in production rules, can be handled by the lexicon itself. The various approaches to lexical encoding can be analyzed in terms of two parameters:

(a) Precompiling the information into lexical forms;

(b) Computing or generating new forms or senses during the compositional process.

Typically, idioms are presented as examples of precompiled lexical entries, but some theories have adopted this idea as fundamental for the entire compositional operation. The best example of this is Combinatory Categorial Grammar (CCG) (Steedman 1997). CCG has recently been articulated in enough detail to handle most of the major linguistic phenomena using a library of precompiled lexical types, together with the combinatoric rules of categorial syntax. If the grammar utilizes representations with such non-local dependencies, then there must be additional mechanisms for unifying these representations; these are provided in the form of function composition rules and lexical rules (Schabes et al. 1988).

Lexical rules have been invoked in HPSG as well, to explain the relationships among the various senses of lexical items, from grinding and packaging operations (such as those relating the animal and food senses of ‘chicken’ and ‘lamb’) to the relation between logically polysemous items, such as ‘book’ (information and physical object) and ‘lecture’ (information and event) (cf. Copestake and Briscoe 1992). In Generative Lexicon such relations are explicitly represented in context (cf. Johnston 1995). It is very likely, however, that language makes use of both types of devices, namely complex types such as dot objects, as well as the application of lexical rules. Regardless, both types of devices must be seriously constrained by the grammar in order not to overgenerate unwanted forms and interpretations.

Considered independently of the issue of precompiled vs. generated forms and senses, there is no question that the mental lexicon is large, containing arguably up to 400,000 lexical entries. This is based on fairly conservative estimates of speaker competence with active and passive vocabularies. For example, an average speaker lexicon might contain at least 5,000 distinct verbs, 30,000 distinct nominal forms, and over 5,000 adjectives. Combine this with an additional 10,000 compound forms and at least 300,000 distinct proper names. Obviously, the psychological (and hence computational) demands of these classes are quite distinct. There are two dimensions that can help us distinguish these classes: (a) the degree of combinatoric (functional) complexity of the lexical item; and (b) whether the lexical item is part of active or passive lexical knowledge. Most closed class items, for example, are functionally complex, as are many open class verbs and relational nouns. The majority of the open class items will also involve a fair amount of information regarding combinatoric possibilities. The class of names, however, is unique in that, although it is by far the largest class of lexical items, it is the least demanding in terms of computational resources.

In conclusion, we see that the lexicon is neither a mere listing of the morphemes of a language nor a database of items passively waiting in the service of grammatical processes. Rather, the lexicon is a dynamic and active component of the grammar, incorporating as well as dictating essential components of syntactic and semantic composition and interpretation.

Bibliography:

  1. Alsina A 1992 On the argument structure of causatives. Linguistic Inquiry 23(4): 517–55
  2. Baker M 1988 Incorporation: A Theory of Grammatical Function Changing. University of Chicago Press, Chicago
  3. Bresnan J 1994 Locative Inversion and the architecture of universal grammar. Language 70(1): 2–31
  4. Boguraev B, Briscoe E 1989 Computational Lexicography for Natural Language Processing. Longman, Harlow and London
  5. Boguraev B, Pustejovsky J 1996 Corpus Processing for Lexical Acquisition. Bradford Books MIT Press, Cambridge, MA
  6. Bouillon P, Busa F 2001 The Syntax of Word Meaning. Cambridge University Press
  7. Briscoe T, de Paiva V, Copestake A (eds.) 1993 Inheritance, Defaults, and the Lexicon. Cambridge University Press, Cambridge, UK
  8. Busa F 1996 Compositionality and the Semantics of Nominals. Ph.D Dissertation, Brandeis University
  9. Carlson G 1984 Thematic roles and their role in semantic interpretation. Linguistics 22: 259–79
  10. Chomsky N 1955 The Logical Structure of Linguistic Theory. University of Chicago Press, Chicago
  11. Chomsky N 1965 Aspects of the Theory of Syntax. MIT Press, Cambridge, MA
  12. Copestake A, Briscoe E 1992 Lexical operations in a unification-based framework. In: Pustejovsky J, Bergler S (eds.) Lexical Semantics and Knowledge Representation. Springer Verlag, New York
  13. Cruse A 1986 Lexical Semantics. Cambridge University Press
  14. Davis A R, Koenig J P 2000 Linking as constraints on word classes in a hierarchical lexicon. Language 76(1)
  15. Dowty D R 1979 Word Meaning and Montague Grammar. D. Reidel, Dordrecht, The Netherlands
  16. Dowty D R 1989 On the semantic content of the notion ‘Thematic Role’. In: Chierchia G, Partee B, Turner R (eds.) Properties, Types, and Meaning Vol. II. Dordrecht
  17. Dowty D 1991 Thematic proto-roles and argument selection. Language 67: 547–619
  18. Flickinger D, Pollard C, Wasow T 1985 Structure-sharing in lexical representation. In: Proceedings of the 23rd Annual Meeting of the ACL, Chicago, IL, pp. 262–7
  19. Goldberg A E 1995 Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago
  20. Grimshaw J 1979 Complement selection and the lexicon. Linguistic Inquiry 10: 279–326
  21. Grimshaw J 1990 Argument Structure. MIT Press, Cambridge, MA
  22. Gruber J S 1976 Lexical Structures in Syntax and Semantics. North Holland, Amsterdam
  23. Guthrie L, Pustejovsky J, Wilks Y, Slator B 1996 The role of lexicons in natural language processing. Communications of the ACM 39: 1
  24. Hale K, Keyser J 1993 On argument structure and the lexical expression of syntactic relations. In: Hale K, Keyser J (eds.) The View from Building 20. MIT Press, Cambridge, MA
  25. Jackendoff R 1990 Semantic Structures. MIT Press, Cambridge, MA
  26. Johnston R 1995 Semantic underspecification and lexical types: Capturing polysemy without lexical rules. Proceedings of ACQUILEX Workshop on Lexical Rules, August 9–11, 1995, Cambridgeshire, UK
  27. Levin B 1993 Towards a Lexical Organization of English Verbs. University of Chicago Press, Chicago
  28. Levin B, Rappaport Hovav M 1995 Unaccusativity: At the Syntax-Semantics Interface. MIT Press, Cambridge, MA
  29. Lyons J 1968 Introduction to Theoretical Linguistics. Cambridge University Press, Cambridge, UK
  30. McCawley J 1968 Lexical insertion in a transformational grammar without deep structure. Proceedings of the Chicago Linguistic Society 4
  31. Mel’cuk I A 1988 Semantic description of lexical units in an explanatory combinatorial dictionary: Basic principles and heuristic criteria. International Journal of Lexicography 1: 165–88
  32. Miller G 1990 WordNet: An on-line lexical database. International Journal of Lexicography 3: 235–312
  33. Miller G 1991 The Science of Words. Scientific American Library
  34. Pinker S 1989 Learnability and Cognition: The Acquisition of Argument Structure. MIT Press, Cambridge, MA
  35. Pollard C, Sag I 1994 Head-Driven Phrase Structure Grammar. University of Chicago Press and Stanford CSLI, Chicago
  36. Pustejovsky J 1991 The syntax of event structure. Cognition 41: 47–81
  37. Pustejovsky J 1992 Lexical semantics. In: Shapiro S (ed.) Encyclopedia of Artificial Intelligence, 2nd edn. Wiley, New York
  38. Pustejovsky J 1995 The Generative Lexicon. MIT Press, Cambridge, MA
  39. Pustejovsky J, Boguraev P 1993 Lexical knowledge representation and natural language processing. Artificial Intelligence 63: 193–223
  40. Sanfilippo A 1993 LKB encoding of lexical knowledge. In: Briscoe T, de Paiva V, Copestake A (eds.) Inheritance, Defaults, and the Lexicon, Cambridge University Press, Cambridge, UK
  41. Schabes Y, Abeille A, Joshi A 1988 Parsing strategies with lexicalized grammars. In: Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary
  42. Steedman M 1997 Surface Structure Interpretation. MIT Press, Cambridge, MA
  43. Talmy L 1985 Lexicalization patterns: Semantic structure in lexical forms. In: Shopen T (ed.) Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149
  44. Weinreich U 1972 Explorations in Semantic Theory. Mouton, The Hague, The Netherlands
  45. Williams E 1981 Argument structure and morphology. Linguistic Review 1: 81–114

 
