Generative Grammar Research Paper


Generative grammar is the label of the most influential research program in linguistics and related fields in the second half of the twentieth century. Initiated by a short book, Chomsky’s Syntactic Structures (1957), it became one of the driving forces among the disciplines jointly called the cognitive sciences. The term generative grammar refers to an explicit, formal characterization of the (largely implicit) knowledge determining the formal aspect of all kinds of language behavior. From the start the program had a strong mentalist orientation, documented, for instance, in a fundamental critique of Skinner’s Verbal Behavior (1957) by Chomsky (1959), arguing that behaviorist stimulus–response theories could in no way account for the complexities of ordinary language use. The ‘Generative Enterprise,’ as the program was called in 1982, went through a number of stages, each of which was accompanied by discussions of specific problems and consequences within the narrower domain of linguistics as well as the wider range of related fields such as ontogenetic development, psychology of language use, or biological evolution. Four stages of the Generative Enterprise can be marked off for expository purposes.



1. Transformational Structure And Levels Of Description

The foundation of all further developments is to be found in Chomsky’s monograph The Logical Structure of Linguistic Theory (1975, henceforth LSLT), which was published only 20 years after the manuscript was completed. Starting from the notion of structural analysis as developed by Bloomfield and made precise by Harris (1951), LSLT introduces a radically new perspective on the nature of linguistic structure. On the one hand, LSLT takes up Harris’s notion ‘level of linguistic structure,’ in terms of which the form of utterances, i.e., their phonetic and syntactic shape, is characterized. On the other hand, however, LSLT develops a notion of linguistic structure that differs fundamentally from the empiricist orientation of American structuralism. More specifically, Chomsky proposes a notion of grammar that provides a full and explicit account of the tacit knowledge a speaker-hearer has of the form of utterances of his or her language. Under this perspective, a grammar G is a theoretical construct, dealing with internal states of knowledge that relate only indirectly to overt behavior of the speaker. In other words, G is viewed as an empirical hypothesis about the mental structure underlying ordinary speech behavior. To make this notion precise, LSLT highlights the importance of intuitive judgments a speaker is able to make about properties of utterances, such as the grammatical well-formedness of the famous nonsense-sentence (1) as opposed to the ungrammaticality of (2), or the rather different properties of (3) and (4), in spite of their superficial similarity:

(1) colorless green ideas sleep furiously




(2) furiously sleep ideas green colorless

(3) this picture was painted by a real artist

(4) this picture was painted by a new technique

Hence G of English has to account for the properties on which intuitions like these rely. The study of the general features of G can furthermore be assumed to reveal principles of the internal organization of mental states in general and of knowledge of language in particular. This is in fact the basic motivation underlying the research program of Generative Grammar.

More technically, LSLT gives the following account of the concepts needed for this program. First of all, a linguistic level L is defined as an algebraic structure of the following sort:

(5) L = [L, R1, …, Rm, µ, Φ, φ1, …, φn], where

 L is the set of primes of L;
 R1 to Rm are sets of relations in L, such as linear ordering, inclusion, identity;
 µ is a set of so-called L-markers, generated by the primes of L;
 Φ is a mapping that maps µ onto the set of grammatical utterances;
 φ1 to φn are operations relating L to other levels.

Roughly speaking, an L-marker is the representation of the structure of an utterance on the level L. The particular levels assumed in LSLT include the phonetic level Pt, the phonemic level Pm, the morpheme level M, the word level W, the level of word classes C, the phrase structure level P, and the transformational level T. For Pm, the set µ consists of strings of phonemes, for W, the elements of µ are sequences of words, and for P, the set µ consists of phrase markers, roughly tree structures indicating the constituency of an utterance. The relation between the different levels defined by the operations φi depends on particular conditions of the various levels and is, as Chomsky argues, anything but trivial. The full structural representation of an utterance comprises the L-markers assigned to it at the various levels in question. The mapping Φ relating the levels of representation to the actual (or possible) grammatical utterances is correspondingly indirect and complex. The assumption in LSLT is that it may be sufficient to define Φ for phonetic representations, i.e., strings in Pt, provided that the relation to Pt is defined for all other levels. This accounts for the fact that the structure of an utterance cannot be arrived at by simple operations of segmentation and classification, as assumed in classical structuralism, but depends on highly indirect and complex relations.
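The notions of an L-marker and the mapping Φ can be illustrated with a small toy sketch: a P-marker is a labelled tree, and Φ maps it onto the string of words it dominates. All category labels, the `Node` class, and the example tree are invented for illustration and are not taken from LSLT.

```python
# Toy sketch of a P-marker and the mapping Phi from (5): a phrase
# marker is a labelled tree over invented primes, and phi maps it
# onto the word string it dominates.  Labels are illustrative only.

class Node:
    def __init__(self, label, children=None, word=None):
        self.label = label              # a prime of the level, e.g. 'NP'
        self.children = children or []  # daughters in the tree
        self.word = word                # filled in only at the leaves

def phi(node):
    """Map a P-marker onto the utterance it represents (a word string)."""
    if node.word is not None:
        return [node.word]
    return [w for child in node.children for w in phi(child)]

# A possible P-marker for sentence (3):
tree = Node('S', [
    Node('NP', [Node('Det', word='this'), Node('N', word='picture')]),
    Node('VP', [
        Node('Aux', word='was'),
        Node('V', word='painted'),
        Node('PP', [Node('P', word='by'),
                    Node('NP', [Node('Det', word='a'),
                                Node('A', word='real'),
                                Node('N', word='artist')])]),
    ]),
])

print(' '.join(phi(tree)))  # this picture was painted by a real artist
```

The tree, not the string, is the object of grammatical description; the string is recovered only via Φ, which is the point of the indirectness noted above.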

Second, a grammar G is a system of operations or rules that generate the sentences of a language with their structural representations. Technically, G determines the set of L-markers for each level L and the operations by which they are related to the other levels. LSLT deals with the general form of phonological rules, defining the relation between the phonemic and phonetic level, and more extensively with phrase structure rules, generating P-markers, and with grammatical transformations. The role of transformations is a matter of central interest in LSLT (such that for quite a while the term ‘transformational grammar’ was taken as largely equivalent to ‘generative grammar’); they are assumed to capture structural properties P-markers could not account for. For instance, the difference between (3) and (4), which are not distinct in terms of constituency, is explained by different transformational derivations, informally indicated in (3′) and (4′):

(3′) a real artist painted this picture
  = = TPassive → this picture was painted by a real artist

(4′) someone painted this picture by a new technique
  = = TPassive → this picture was painted by a new technique by someone
  = = TAgent-deletion → this picture was painted by a new technique

An important point of the theory developed in LSLT is the assumption that grammars must be evaluated systematically. To this effect, a simplicity metric is introduced by which a grammar is preferred if and only if it expresses more natural generalizations than its competitors. For example, a grammar that accounts systematically for the different properties of sentences like (3) and (4) is to be preferred over one that can only assign to them the same structure. This is the reason why a grammar with a transformational component comes out as simpler than one without it. One of the factors entering the evaluation is the principle of rule ordering, which recognizes the possibility that rules might have a different structural effect if they operate in a different ordering. As a simple case in point, in (4′) the Agent-deletion can operate only after the Passive Transformation has applied. (The importance of rule ordering had already been observed in the Sanskrit grammar of Panini.)
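The effect of rule ordering can be sketched in miniature for the derivation in (4′). The clause representation and its field names below are invented simplifications; the point is only that TAgent-deletion has something to delete only after TPassive has created the by-phrase.

```python
# Toy sketch of ordered transformations for derivation (4'): the
# clause representation and its field names are invented; the point
# is that TAgent-deletion can only apply after TPassive has created
# the 'by'-phrase it deletes.

def t_passive(c):
    """TPassive: promote the patient, demote the agent to a by-phrase."""
    return {'subject': c['patient'],
            'predicate': c['passive'],        # e.g. 'was painted'
            'adjunct': c.get('adjunct', ''),
            'by_phrase': 'by ' + c['agent']}

def t_agent_deletion(c):
    """TAgent-deletion: remove the by-phrase introduced by TPassive."""
    c = dict(c)
    c['by_phrase'] = ''
    return c

def spell_out(c):
    return ' '.join(p for p in
                    [c['subject'], c['predicate'], c['adjunct'], c['by_phrase']]
                    if p)

active = {'agent': 'someone', 'passive': 'was painted',
          'patient': 'this picture', 'adjunct': 'by a new technique'}

step1 = t_passive(active)
print(spell_out(step1))
# this picture was painted by a new technique by someone

step2 = t_agent_deletion(step1)
print(spell_out(step2))
# this picture was painted by a new technique
```

Reversing the order would be vacuous: before passivization there is no by-phrase for the deletion rule to target, which is exactly the ordering dependency noted above.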

The productivity of the proposed framework is demonstrated by an analysis of large sections of English with respect to phrase structure and transformations in LSLT and in a stimulating monograph on nominalization by Lees (1960). A seminal study of phonology is given in Halle (1959) for Russian and in Chomsky’s unpublished master’s thesis on the Morphophonemics of Modern Hebrew.

A feature that LSLT inherits from the transformational analysis proposed by Harris (1957) is the notion that transformations not only relate individual sentences to their more elementary basis, as shown in (3′) and (4′), but are also used to decompose complex sentences into the more elementary parts from which they are made up. Thus (6c), for instance, is composed by embedding (6a) into (6b), replacing the bracketed pronoun:

(6) (a) this picture was painted by a new technique

(b) the director suspected [it]

(c) = = TEmbedding → the director suspected [this picture to be painted by a new technique]

This assumption requires a distinction between simple transformations like TPassive and so-called generalized transformations like TEmbedding. Another distinction that must be made is that between optional and obligatory transformations. Passivization, for instance, is optional, while the fronting of relative and wh-pronouns is obligatory, as the ungrammaticality of (7b) shows:

(7) (a) I know the man who you met last night

(b) *I know the man you met who last night

The clarification of these and a wide range of other complexities concerning the level T is one of the driving forces in further developments of Generative Grammar.

Another, more general feature that the Generative Enterprise inherits from American Structuralism is the assumption that all grammatical structure, from the most concrete level Pt of phonetics to the most abstract level T of transformational structure, is a matter of linguistic form which must be embedded in a wider context of a theory of semiotics that accounts for sense, reference, and use of linguistic expressions. Although there have been several proposals to include a level of semantics into the framework of generative grammar, the main emphasis has been on the knowledge of linguistic form throughout.

2. The Standard Theory Of Generative Grammar

Immediately after the publication of Syntactic Structures (actually lecture notes excerpted from LSLT), the theory it proposed attracted interest from various sides, including mathematical linguistics and psychology. An overview of the formal properties of grammatical rules, especially of phrase structure grammars, in Chomsky (1963) established what became known as the Chomsky hierarchy of formal grammars as a branch of subrecursive function theory. Miller and Chomsky (1963) gave a first outline of the role of Generative Grammar for the study of language comprehension and production. Furthermore, Katz and Fodor (1963) made a first proposal to extend the theory of grammar by a semantic component, using so-called projection rules to integrate compositionally the meaning of lexical items according to their syntactic relations.
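The phrase structure grammars classified by the Chomsky hierarchy can be sketched as a toy context-free rewriting system. The rules and lexicon below are an invented fragment; since it contains no recursion, it generates a finite language, which can be enumerated exhaustively.

```python
# Toy phrase structure grammar (a context-free rewriting system, the
# rule type classified by the Chomsky hierarchy).  The rules and the
# lexicon are an invented fragment; without recursion the language it
# generates is finite.

GRAMMAR = {
    'S':  [['NP', 'VP']],
    'NP': [['this', 'N'], ['the', 'N']],
    'VP': [['V', 'NP']],
    'N':  [['picture'], ['artist']],
    'V':  [['painted'], ['admired']],
}

def generate(symbol):
    """Enumerate all terminal strings derivable from `symbol`."""
    if symbol not in GRAMMAR:                  # terminal symbol
        return [[symbol]]
    results = []
    for rhs in GRAMMAR[symbol]:                # each rewrite rule
        expansions = [[]]
        for sym in rhs:                        # expand left to right
            expansions = [left + right
                          for left in expansions
                          for right in generate(sym)]
        results.extend(expansions)
    return results

sentences = {' '.join(s) for s in generate('S')}
print(len(sentences))   # 32 = 4 NPs x 2 Vs x 4 NPs
```

Adding a recursive rule (e.g., an NP containing a clause) would make the generated language infinite while the grammar stays finite, which is the property the hierarchy results formalize.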

The main development, however, was due to continuous efforts to systematically constrain and strengthen the theory, increasing its explanatory power by reducing the range of possible grammars compatible with a given set of data. Such a reduction at once reduces the task assigned to the evaluation metric. Efforts to this effect led to what was later called the ‘Standard Theory’ of Generative Grammar. Based on preparatory ideas discussed, inter alia, in Katz and Postal (1964), it is formulated in Chomsky’s Aspects of the Theory of Syntax (1965). The core point is a reassessment of the nature of transformational structure. Instead of a separate level T of transformations, grammatical transformations are now construed as operations that define the relation between two levels of syntactic representation, both of which are characterized by P-markers. This modification is related to one of the most influential distinctions proposed in the Standard Theory, namely that between deep and surface structure. According to this view, each sentence is assigned a deep and a surface P-marker, with transformations mapping the former onto the latter. Thus, except for the different choice of personal or wh-pronouns, (8)(a)–(d) are just different surface realizations of the same deep structure, where the bracketed Agent phrases might furthermore optionally be deleted by TAgent-deletion, known from (4′):

(8) (a) John may expect her to meet him

(b) she may be expected (by John) to meet him

(c) who does John expect her to meet

(d) who may be expected (by John) to be met

(by her)

On this account, the semantic interpretation of linguistic expressions could apparently be determined on the basis of their deep structure, which would be an interesting step towards an integration of grammar into an account of meaning and reference. Another important point of the Standard Theory is a more systematic distinction between (a) the lexical system, i.e., the complex system of rules that introduce lexical items with their (idiosyncratic) phonetic, syntactic, and semantic properties, and (b) the grammatical rules that integrate the lexical information into the structure of complex linguistic expressions. Finally, the distinction between levels of representation on the one hand and components of grammar generating and inter-relating these representations on the other is made more perspicuous than in LSLT. The syntactic levels that appeared to be crucial are deep and surface structure, while word- and morpheme-structure do not have an independent status. The components of grammatical rules are (a) the base component, including the lexical system and the phrase structure rules, (b) the transformational component, (c) the phonological component, and, possibly, (d) the semantic component. Rules and representations are inter-related according to the schema in (9):

(9)  Base Component:
         Phrase Structure Rules
                 ↓
         Lexical Rules
                 ↓
     Deep Structure  →  Semantic Component
                 ↓
     Transformations
                 ↓
     Surface Structure  →  Phonological Component

The base component and the transformations together constitute the syntactic component. The semantic component raises problems to be taken up below, while the phonological component, extensively discussed by Chomsky and Halle (1968), is assumed to have a fairly clear organization, parallel to that of the syntactic component in relevant respects. The underlying phonemic structure is defined by the phonological features of lexical items and their arrangement in surface structure:

(10) Phonemic Structure
                 ↓
     Phonological Component: Phonological Rules
                 ↓
     Phonetic Structure

While there are still optional and obligatory transformations, generalized transformations are dispensed with, the deep structure of complex sentences being generated already in the base component. A particularly elegant feature that emerges from this reorganization is the principle of cyclic operation of the rules, which is added to the earlier assumption that rules are ordered. Cyclic operation apparently captures a deep property of natural languages: the same sequence of operations applies to increasingly larger parts of an utterance, defining its syntactic properties, its phonetic patterns, and the compositional integration of its semantic structure.
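The principle of cyclic operation can be sketched as follows: the same ordered list of rules reapplies to each successively larger domain, innermost first. The domains and the single "rule" below (which merely brackets its domain, so the output records the cycles) are invented toys, not an analysis from the literature.

```python
# Sketch of cyclic rule application: the same ordered rule sequence
# applies first to the innermost domain and then again to each larger
# domain containing it.  Domains and the single bracketing 'rule'
# are invented toys.

def apply_cycle(domain, rules, log):
    """domain: a word string, or a list of subdomains (inner cycles run first)."""
    if isinstance(domain, list):
        result = ' '.join(apply_cycle(d, rules, log) for d in domain)
    else:
        result = domain
    for rule in rules:          # ordered rules, rerun on every cycle
        result = rule(result)
        log.append(result)
    return result

bracket = lambda s: '[' + s + ']'
log = []
out = apply_cycle(['the man', ['saw', 'the dog']], [bracket], log)
print(out)  # [[the man] [[saw] [the dog]]]
```

The nesting of the brackets in the output mirrors the order of the cycles: each constituent is processed before, and again inside, the larger constituent containing it.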

An important general effect of the Standard Theory is the perspicuous account it allows for the traditional notion of linguistic universals, or Universal Grammar (UG), considered as the innate, biologically fixed structure underlying the species-specific language capacity. The range of possible rules and the architecture of their interaction, as schematized in (9) and (10), could now be construed as formal universals, the potential primitive elements entering the rules and representations as substantive universals. Under this perspective, the set of possible grammars G accessible as instantiations of UG determines the diversity of possible natural languages, including the range of variation delimiting processes of linguistic change. By the same token, UG provides the innate, formal basis and predisposition that makes language acquisition possible. These accompanying considerations of the Standard Theory led to a new, principled approach to problems of language acquisition, as well as linguistic change.

3. Towards The Principles And Parameters Theory

Shortly after the Standard Theory was formulated, empirical analyses as well as efforts to improve the structure of the theory created modifications of the model. Controversial proposals concerned the role of semantics as indicated in (9). Lakoff (1971) and others proposed a model according to which semantic representations are the underlying structure of linguistic utterances, from which surface structures are derived step by step by means of transformations and lexical rules. The insights and difficulties related to these proposals prompted a counterproposal, called the Revised Standard Theory. Based on observations discussed in Jackendoff (1972), it became obvious that meaning is not unaffected by surface structure phenomena. For example, (11) and (12) are based on the same deep structure, with passivization applying to (12) but not to (11); yet they do not in general have the same meaning:

(11) Three students in the class wrote only one paper

(12) Only one paper was written by three students in the class

(Due to what is called the scope of quantification, in (11) the restriction of having written one paper is ascribed to three students, while in (12) the restriction of being written by three students is ascribed to a single paper.) As phenomena of this sort are by no means marginal, the Revised Standard Theory recognized a more complex relation between syntax and semantics than that indicated in (9). Only certain aspects of meaning are determined by deep structure, while scope and certain other relations depend on surface structure, such that transformations—contrary to the Standard Theory—might now affect the semantic interpretation.

A technical way to handle these problems is the so-called Trace Theory proposed in Fiengo (1977). According to this proposal, a constituent that is moved by a transformation leaves behind a trace that is phonetically empty, but may play a role in semantics, as indicated in the following example, where ei is the trace left in the initial position of the co-indexed pronoun:

(13) whoi do you want to talk to ei

In many cases, the trace turns out to be similar to what logicians consider a variable bound by the operator contained in the moved constituent. In this sense, invisible elements of the surface structure account for its contribution to semantic interpretation. Other empty elements of the surface structure are ‘invisible’ pronouns, as in (14)(a) and (b), where the actor of the infinitive clause is controlled by (i.e., coreferential with) different constituents of the main clause:

(14) (a) Johni promised Mary [ei to take the train]

 (b) John recommended Maryi [ei to take the train]

The structural aspect that emerges from transformations, as for instance in (13), cannot be the Deep Structure of (9), but it also differs from Surface Structure, as it contains elements like traces and empty pronouns. This more abstract surface, called S-structure, is assumed to provide the information for phonetic as well as semantic interpretation. On this account, (9) might be replaced by the general schema (15), where LF, namely the Logical Form, represents all the information G contributes to the meaning of an expression, just as PF, that is the Phonetic Form, specifies the conditions G imposes on its pronunciation:

(15) Deep Structure
           |
     S-structure
         /   \
       PF     LF

The theory of traces and other ‘empty’ elements is not a purely technical modification, however. It is rather part of a general orientation striving for systematic restrictions on the still much too rich range of possibilities for constructing a grammar according to the Standard Theory. One of the reasons motivating this orientation has been called the logical problem of language acquisition. This problem consists in the requirement to specify the conditions under which an appropriate grammar can be identified on the basis of the restricted and partially defective evidence the learning child is normally exposed to. This problem cannot be approached seriously if UG is as unconstrained as it would be with all the complexities initially allowed for as transformations. On the other hand, it turned out that many of the formal complexities exploited, for instance, in operations like the standard passive transformation can be reduced to more elementary operations if the application and interaction of these operations is subject to more systematic constraints.

Induced by Trace theory, a number of systematic conditions have been explored which determine the distribution of empty elements of different sorts. These conditions allow transformations to be reduced ultimately to just one simple operation, called Move α, where α is any constituent. The systematic and by no means trivial conditions determining the possible choice of α, the domain and bounds of movement, and the consequences of movement (such as leaving a trace) are all expressed by general principles needed on independent grounds. One of the conditions concerns the relations lexical elements impose on their necessary or optional complements, called their Argument- or Theta-Structure. Another rather general condition entering these principles is a systematic reorganization of phrase structure rules in terms of the so-called X-bar Theory. This theory takes up an observation made already in Harris (1951) about the restricted options according to which constituents X and Y combine in order to form a complex constituent X̄. In somewhat simplified terms, properties of X̄ are determined by its head X, which furthermore selects Y as its complement or is modified by Y as a free adjunct.

These and a number of other developments led to a new version of the overall framework called the Principles-and-Parameters Theory, formulated in Chomsky’s Lectures on Government and Binding (1981). The basic idea of this theory is to replace the notion of different types of highly complex rules, from which individual grammars are made up, by a system of universal principles that constrain the effect of rather basic operations like Move α and the options of X-bar theory. The principles in question determine the relation between the positions of a moved constituent, the domain of movement, the possible candidates for movement, and the conditions that require the operation. They constitute separate subsystems, called Government, Binding, Bounding, Control, Case, and Thematic Structure. The properties distinguishing different languages are no longer seen as consequences of intricate systems of different and rather complex rules. It rather seems possible to account for the relevant phenomena by the interaction of these general principles. The obvious differences between languages are assumed to be reducible to different values of a restricted set of parameters contained in these general principles. Thus the head X of a constituent X̄ may be in initial or final position, an embedded clause may need an introductory element or not, etc. Most (or all) of these parameters seem to be related to classes of particular lexical items, so that all language-particular properties are basically related to lexical information. This leads to a rather different notion of how UG determines an individual grammar G: besides lexical items making up the language-particular dictionary, grammars of different languages might differ only by their choice of parameter values, while the principles these parameters rely on are given by UG and hence identical for all languages.
Under this perspective, language acquisition consists essentially of identifying the lexical system of a language, thereby fixing the particular values of the parameters contained in UG. Presupposing the schema of levels given in (15), the organization of a grammar G within the Principles-and-Parameters theory can be indicated roughly as follows:

(16) (a) Lexical System, fixing the idiosyncratic phonetic, semantic, and syntactic properties of lexical items
     (b) X-bar syntax, Move α
     (c) Principles of Government, Binding, Bounding, Thematic Structure, Control, and Case-Assignment
     (d) PF-component
     (e) LF-component

Among the numerous stimulating consequences of this model is a new perspective on historical and typological variation: like language acquisition, linguistic change and typological differences are not only constrained by UG, but also closely related to choice of parameter values on the basis of universal principles.

4. The Minimalist Program Of Generative Grammar

Research within this framework explored the principles listed in (16)(c) in more detail, keeping to the overall orientation of the Generative Enterprise, namely improving the explanatory power by constraining the theory as far as possible. This led in particular to the attempt to derive the principles in question from general conditions of cognitive organization plus the boundaries that are conceptually necessary to identify the domain of language. Minimal assumptions in this sense require linguistic expressions to compositionally relate patterns of the perceptual and articulatory system A-P to representations in the range of conceptually and intentionally organized experience C-I. This requires the internal or I-language determined by G to provide at least the interfaces PF and LF, relating language to A-P and C-I, respectively, as indicated in (17):

(17) signal ↔ A-P ↔ PF ↔ LF ↔ C-I ↔ environment
                    └─── G ───┘

Thus G must determine pairs <π, λ>, where π and λ are representations in PF and LF, respectively. One must assume furthermore that each pair <π, λ> must be based on a selection N of elements of the lexical inventory of the language. The research strategy of the Minimalist Program proposed in Chomsky (1995) pursues the aim to derive the properties of UG as far as possible from these minimal assumptions together with the hypothesis that UG is subject to conditions of structural economy. This requires first the lexical items entering the selection N to contain all and only the idiosyncratic specifications by means of which N participates in determining the pair <π, λ>. This includes, besides phonetic and semantic features interpreted in A-P and C-I, respectively, certain grammatical or formal features controlling the computation of <π, λ> from N. The minimal assumption to be made about this computation is an operation Merge which combines two expressions X and Y into a complex expression Z, made up from the features of X and Y, adding the formal features of the head to characterize the formal properties of Z. As a matter of fact, the operation Merge is the minimalist version of X-bar syntax, reviving, moreover, a reduced version of the generalized transformations combining two expressions of arbitrary complexity. With this background, it is an interesting, empirical fact about natural language that the derivation from N to <π, λ> cannot in general be restricted to the simplest possibility requiring only the operation of Merge, but needs the effect of what was called Move α in order to account for facts like Whoi did he talk to ei. As a matter of fact, the operation can be simplified to Move, as it is some formal feature of the moving constituent α which triggers the operation if and only if the feature cannot be eliminated otherwise. More technically, then, the computation of linguistic expressions proceeds as indicated by (18):

(18) N → Merge, Move → LF
              ↓ Spell-Out
              PF
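The operation Merge described above can be sketched as a toy function: two expressions combine, and the label and formal features of the head project to the new constituent. The feature names and the head-selection convention below are simplified assumptions, not Chomsky's exact formulation.

```python
# Toy sketch of Merge: combine expressions X and Y into Z, projecting
# the label and formal features of the head.  Feature names and the
# head-selection convention are simplified assumptions.

def leaf(word, label, **features):
    return {'label': label, 'features': features, 'parts': word}

def merge(x, y, head='left'):
    """Merge X and Y; the result Z is labelled by its head."""
    h = x if head == 'left' else y
    return {'label': h['label'],
            'features': dict(h['features']),
            'parts': (x, y)}

# Build "painted this picture" bottom-up:
this    = leaf('this', 'Det')
picture = leaf('picture', 'N')
painted = leaf('painted', 'V', selects='NP')

np = merge(this, picture, head='right')   # the noun projects
vp = merge(painted, np)                   # the verb projects

print(vp['label'], np['label'])  # V N
```

Because Merge takes expressions of arbitrary complexity as inputs, repeated application yields discrete infinity from a finite lexicon, which is why it can absorb both X-bar syntax and the earlier generalized transformations.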

A wide range of facts in different languages have been shown to follow from appropriate lexical information together with minimal assumptions about Merge and Move, plus two sorts of general conditions regulating the economy in representation and derivation of expressions of I-language. What this leads to is indicated in (19):

(19) (a) Conditions on lexical items, i.e., sets of phonetic, semantic, and formal features
     (b) Operations of the computational system of language: Merge, Move
     (c) Economy Principles:
         (i) Representational: Full Interpretation (FI)
         (ii) Derivational: Shortest derivation from N to <PF, LF>-pairs

The requirement of Full Interpretation presupposes that Merge and Move check and eliminate step by step the formal features, leaving only information that can appropriately be interpreted in terms of articulatory and conceptual conditions at the two interface levels. Derivational economy is a fairly abstract notion, the appropriate formulation of which is still under exploration. In any event, violation of these principles is now taken to be the source of ungrammaticality: only optimal derivations allowing for complete interpretation yield well-formed expressions. The control of these conditions is a crucial effect of formal features, coming primarily with special lexical items called functional categories. These include grammatical words like Determiners and Auxiliaries, but also inflectional elements like Tense, Number, or Case.

From a more general perspective, the conditions in (19) are rather general, compared with the still rather special principles in the Principles-and-Parameters model (16), much as those principles were a strong generalization compared with the rule systems of the Standard Theory in (9). The shift from (9) to (16) replaced rules of individual grammars by universal but still language-specific principles; the organization of UG indicated in (19) replaces these language-specific principles with even more general conditions of cognitive organization. Language specificity is now essentially a matter of formal features and their effect on Merge and Move, that is, the operations that compositionally relate A-P to C-I, modules of cognitive organization which are largely independent of the language capacity.

From this point of view, the species-specific language capacity might consist essentially in the availability of discrete infinity and in fact be related, as Chomsky (1982) speculates, to the computational capacity underlying arithmetic. The language capacity clearly recruits and enhances general cognitive and communicative capacities, but its evolution, which Pinker (1994) persuasively argues to be due to adaptive selection, might well be an independent step accompanying evolutionary changes in the architecture and size of the human brain.

5. General Perspectives

The intellectual force of generative grammar comes from the combination of two equally important factors. First, it provides a suggestive methodological and theoretical perspective for the study of language as part of the biological endowment of the human organism, supporting a central system of the overall cognitive capacity. Second, it shows how this perspective can be made precise, turned into an effective research program, and pursued with respect to a large domain of empirical facts not recognized before. It is indeed the wide range of phenomena—from syllable structure and inflection to quantifier scope and syntactic embedding—as well as the diversity of languages—from English, German, and Chinese to Hebrew, Malayalam, and Walbiri—that gives the Generative Enterprise its unusual place in the field. The explicit formulation of the theory, its technical means, and its cognitive orientation are the reason for its strong impact on a wide range of subdisciplines from research in language comprehension and production, acquisition and aphasia to typology, historical linguistics, computer science, poetics, theory of music, and philosophy of language.

As can be seen from the above, various offshoots developed different, though related, ideas. These include, inter alia, Categorial Grammar, Dependency Grammar, Case Grammar, Lexical Functional Grammar (LFG), Generalized Phrase Structure Grammar (GPSG), Head-driven Phrase Structure Grammar (HPSG), and Relational Grammar, to mention the more influential proposals. Another offshoot is Optimality Theory (OT), which develops certain ideas of the Minimalist Program in a different way, assuming that conditions on linguistic structure may be of different strength and can be violated.

A peculiar relationship must finally be noted between Generative Grammar and the various theories of semantics and pragmatics. Sticking to the initial notion that grammar can and must account for the form of language, but not properly for the complexities of its interpretation and use, Chomsky did not extend his theory to the domains of semantics and pragmatics. It might be noted in this respect that he always considered Logical Form a syntactic level, pertaining to the form rather than the interpretation of language. Therefore, Formal Semantics as developed by Montague (1974), Discourse Representation Theory as proposed by Kamp and Reyle (1993), Speech Act Theory as conceived by Searle (1969), and a fair number of related approaches to aspects of interpretation and use are compatible with, and in part stimulated by, Generative Grammar, but they have not been included in its proper domain. Even though this situation was occasionally met with disappointment, it could not in general reduce the attraction exerted by the Generative Enterprise.

Bibliography:

  1. Chomsky N 1957 Syntactic Structures. Mouton, 's-Gravenhage, The Netherlands
  2. Chomsky N 1959 Review of Skinner's Verbal Behavior. Language 35: 26–58
  3. Chomsky N 1963 Formal properties of grammars. In: Luce R D, Bush R R, Galanter E (eds.) Handbook of Mathematical Psychology. Wiley & Sons, New York, Vol. II, pp. 323–418
  4. Chomsky N 1965 Aspects of the Theory of Syntax. MIT Press, Cambridge, MA
  5. Chomsky N 1975 The Logical Structure of Linguistic Theory. Plenum, New York
  6. Chomsky N 1981 Lectures on Government and Binding. Foris Publications, Dordrecht, The Netherlands
  7. Chomsky N 1982 The Generative Enterprise: A Discussion with Riny Huybregts and Henk van Riemsdijk. Foris Publications, Dordrecht, The Netherlands
  8. Chomsky N 1986 Knowledge of Language: Its Nature, Origin, and Use. Praeger, New York
  9. Chomsky N 1995 The Minimalist Program. MIT Press, Cambridge, MA
  10. Chomsky N, Halle M 1968 The Sound Pattern of English. Harper and Row, New York
  11. Fiengo R 1977 On trace theory. Linguistic Inquiry 8: 35–62
  12. Halle M 1959 The Sound Pattern of Russian. Mouton, 's-Gravenhage, The Netherlands
  13. Harris Z S 1951 Methods in Structural Linguistics. University of Chicago Press, Chicago
  14. Harris Z S 1957 Co-occurrence and transformations in linguistic structure. Language 33: 293–340
  15. Jackendoff R S 1972 Semantic Interpretation in Generative Grammar. MIT Press, Cambridge, MA
  16. Kamp H, Reyle U 1993 From Discourse to Logic. Kluwer, Dordrecht, The Netherlands
  17. Katz J J, Fodor J A 1963 The structure of a semantic theory. Language 39: 170–210
  18. Katz J J, Postal P 1964 An Integrated Theory of Linguistic Descriptions. MIT Press, Cambridge, MA
  19. Lakoff G 1971 On generative semantics. In: Steinberg D, Jakobovits L (eds.) Semantics. Cambridge University Press, Cambridge, UK, pp. 232–96
  20. Lees R B 1960 The Grammar of English Nominalizations. Indiana University Press, Bloomington, IN
  21. Miller G A, Chomsky N 1963 Finitary models of language users. In: Luce R D, Bush R R, Galanter E (eds.) Handbook of Mathematical Psychology. Wiley & Sons, New York, Vol. II, pp. 419–91
  22. Montague R 1974 Formal Philosophy. Yale University Press, New Haven, CT
  23. Pinker S 1994 The Language Instinct. Harper Collins, New York
  24. Searle J R 1969 Speech Acts. Cambridge University Press, Cambridge, UK
  25. Skinner B F 1957 Verbal Behavior. Appleton-Century-Crofts, New York