Signed languages of the deaf are naturally occurring human languages. The existence of languages expressed in diﬀerent modalities (i.e., oral–aural, manual– visual) provides a unique opportunity to explore and distinguish those properties shared by all human languages from those that arise in response to the modality in which the language is expressed. Despite the diﬀerences in language form, signed languages possess formal linguistic properties found in spoken languages. Sign language acquisition follows a developmental trajectory similar to spoken languages. Memory for signs exhibits patterns of interference and forgetting that are similar to those found for speech. Early use of sign language may enhance certain aspects of nonlanguage visual perception. Neuropsychological studies show that left hemisphere regions are important in both spoken and sign language processing.
1. Linguistic Principles Of American Sign Language
1.1 Language And Deaf Culture
Sign languages are naturally occurring manual languages that arise in communities of deaf individuals. These manual communication systems are fully expressive, systematic human languages; they are neither conventionalized systems of pantomime nor manual codifications of a spoken language. Many types of deafness are heritable, and it is not unusual to find isolated communities of deaf individuals who have developed complex manual languages (Groce 1985). The term Deaf Community describes a sociolinguistic entity that plays a crucial role in a deaf person's exposure to and acceptance of sign language (Padden and Humphries 1988). American Sign Language (ASL), used by members of the Deaf community in the USA and Canada, is only one of many sign languages of the world, but it is the one that has been studied most extensively.
1.2 Structure Of American Sign Language
A sign consists of a hand configuration that travels along a movement path and is directed to or about a specific body location. Sign languages differ from one another in the inventories and compositions of handshapes, movements, and locations used to signal linguistic contrasts, just as spoken languages differ from one another in the selection of sounds used and in how those sounds are organized into words. Many sign languages incorporate a subsystem in which the orthographic symbols used in the surrounding spoken-language community are represented manually on the hands. One example is the American English manual alphabet, which is produced on one hand and allows users of ASL to represent English lexical items.
All human languages, whether spoken or signed, exhibit levels of structure that govern the composition and formation of word forms and specify how words combine into sentences. In formal linguistic analyses, these structural levels are referred to as the phonology, morphology, and syntax of the language. In this context, phonological organization refers to the patterning of the abstract formational units of a natural language (Coulter and Anderson 1993). Compared to spoken languages, in which contrastive units (for example, phonemes) are largely arranged in a linear fashion, signed languages exhibit simultaneous layering of information. For example, in a sign, a handshape will co-occur with, rather than follow sequentially, a distinct movement pattern.
ASL exhibits complex morphology. Morphological markings in ASL are expressed as dynamic movement patterns overlaid on a more basic sign form. These nested morphological forms stand in contrast to the linear suﬃxation common in spoken languages (Klima and Bellugi 1979). The prevalence of simultaneous layering of phonological content and the nested morphological devices observed across many diﬀerent sign languages likely reﬂect an inﬂuence of modality on the realization of linguistic structure. Thus the shape of human languages reﬂects the constraints and aﬀordances imposed by the articulator systems involved in transmission of the signal (i.e., oral versus manual) and the receptive mechanisms for decoding the signal (i.e., auditory versus visual).
A unique property of ASL linguistic structure is its reliance upon visuospatial mechanisms to signal linguistic contrasts and relations. One example concerns the use of facial expressions in ASL. In ASL, certain syntactic and adverbial constructions are marked by specific and obligatory facial expressions (Liddell 1980). These linguistic facial expressions differ significantly in appearance and execution from affective facial expressions. A second example concerns the use of inflectional morphology to express subject and object relationships. At the syntactic level, nominals introduced into the discourse are assigned arbitrary reference points along a horizontal plane in the signing space. Signs with pronominal function are directed toward these points, and the class of verbs that require subject/object agreement obligatorily move between these points (Lillo-Martin and Klima 1990). Thus, whereas many spoken languages represent grammatical functions through case marking or linear ordering, in ASL grammatical function is expressed through spatial mechanisms. This same system of spatial reference, when used across sentences, serves as a means of discourse cohesion (Winston 1995).
1.3 Sign Language Acquisition
Children exposed to signed languages from birth acquire these languages on a maturational timetable similar to that of children exposed to spoken languages (Meier 1991). Prelinguistic infants, whether normally hearing or deaf, engage in vocal play commonly known as babbling. Recent research has shown that prelinguistic gestural play, referred to as manual babbling, accompanies vocal babbling. There appear to be significant continuities between prelinguistic gesture and early signs in deaf children exposed to American Sign Language, though questions remain as to whether manual babbling is constrained by predominantly motoric or linguistic factors (Cheek et al. 2001, Petitto and Marentette 1991).
Between 10 and 12 months of age, children reared in signing homes begin to produce their first signs, with two-sign combinations appearing at approximately 18 months. Some research has suggested precocious early vocabulary development in signing children. However, these reports likely reflect parents' and experimenters' earlier recognition of signs compared with words, rather than underlying differences in the development of symbolic capacities between signing and speaking children.
Signing infants produce the same range of grammatical errors in signing that have been documented for spoken language, including phonological substitutions, morphological overregularizations, and anaphoric referencing confusions (Petitto 2000). For example, in the phonological domain a deaf child will often use only a subset of the handshapes found in the adult inventory, opting for a simpler set of ‘unmarked’ handshapes. This is similar to the restricted range of consonant usage common in children acquiring a spoken language.
2. Psycholinguistic Aspects Of Sign Language
Psycholinguistic studies of American Sign Language have examined how the different signaling characteristics of the language affect transmission and recognition. Signs take longer to produce than comparable word forms. The average rate in running speech is about 4 to 5 words per second, compared with 2 to 3 signs per second in fluent signing. However, despite this difference in word transmission rate, the proposition rate for speech and sign is the same, roughly one proposition every 1 to 2 seconds. Compare, for example, the English phrase 'I have already been to California' with the ASL equivalent, which can be succinctly signed using three monosyllabic signs, glossed as FINISH TOUCH CALIFORNIA.
2.1 Sign Recognition
Studies of sign recognition have examined how signs are recognized in time. Recognition appears to follow a systematic pattern in which information about the location of the sign is reliably identified first, followed by handshape information and finally the movement. Identification of the movement reliably leads to identification of the sign. Interestingly, despite the slower articulation of a sign compared to a word, sign recognition appears to be faster than word recognition. Specifically, it has been observed that proportionally less of the signal needs to be processed in order to uniquely identify a sign compared to a spoken word. For example, one study reports that only 240 msec, or 35 percent, of a sign has to be seen before the sign is identified (Emmorey and Corina 1990). In comparable studies of spoken English, Grosjean (1980) reports that 330 msec, or 83 percent, of a word has to be heard before the word can be identified. This finding is due, in part, to the simultaneous patterning of phonological information in signs compared to the more linear patterning of phonological information characteristic of spoken languages. These structural differences in turn have implications for the organization of lexical neighborhoods. Spoken languages, such as English, may have many words that share their initial phonological structure (tram, tramp, trampoline, etc.), leading to greater coactivation (and thus longer processing time) during word recognition. In contrast, ASL sign forms are often formationally quite distinct from one another, permitting quicker selection of a unique lexical entry.
2.2 Memory For Signs
Efforts to explore how the demands of sign language processing influence memory and attention have led to several significant findings. Early studies of memory for lists of signs report classic patterns of forgetting and interference, including serial position effects of primacy and recency (i.e., signs at the beginning and the end of a list are better remembered than items in the middle). Likewise, when sign lists are composed of phonologically similar sign forms, signers exhibit poorer recall (Klima and Bellugi 1979). This result is similar to what has been reported for users of spoken language, where subjects exhibit poorer memory for lists of similar-sounding words (Conrad and Hull 1964). These findings indicate that signs, like words, are encoded into short-term memory in a phonological or articulatory code rather than in a semantic code.
More recent work has assessed whether Baddeley's (1986) working memory model pertains to sign language processing. This model includes components that encode linguistic and visuospatial information, as well as a central executive that mediates between immediate and long-term memory. Given the spatial nature of the sign signal, the question of which working memory component(s) is engaged during sign language processing is particularly interesting. Wilson and Emmorey (2000) have shown word-length effects in signs: it is easier to maintain a list of short (monosyllabic) signs than long (polysyllabic) signs in working memory. They also report effects of 'articulatory suppression.' In these studies, requiring a signer to produce repetitive, sign-like hand movements while encoding and maintaining signs in memory has a detrimental effect on aspects of recall. Once again, analogous effects are known to exist for spoken languages, and these phenomena provide support for a multicomponent model of linguistic working memory that includes both an articulatory loop and a phonological store. Finally, under some circumstances deaf signers are able to utilize linguistically relevant spatial information to encode signs, suggesting the engagement of a visuospatial component of working memory.
2.3 Perception And Attention
Recent experiments with native users of signed languages have shown that experience with a visual sign language may improve or alter visual perception of nonlanguage stimuli. For example, compared to hearing nonsigners, deaf signers have been shown to possess enhanced or altered perception along several visual dimensions, such as motion processing (Bosworth and Dobkins 1999), mental rotation, and processing of facial features (Emmorey 2001). In particular, the ability to detect and attend selectively to peripheral, but not central, targets in vision is enhanced in signers (Neville 1991). Consistent with this finding is evidence for increased functional connectivity in neural areas mediating peripheral visual motion in the deaf (Bavelier et al. 2000).
Because motion processing, mental rotation, and facial processing underlie aspects of sign language comprehension, these perceptual changes have been attributed to experience with sign language. Moreover, several of these studies have reported such enhancements in hearing signers raised in deaf signing households. These findings provide further evidence that the visual perception enhancements are related to the acquisition of a visual-manual language, and are not due to compensatory mechanisms developed as a result of auditory deprivation.
3. Neural Organization Of Signed Language
3.1 Hemispheric Specialization
One of the most significant findings in neuropsychology is that the two cerebral hemispheres show complementary functional specialization, whereby the left hemisphere mediates language behaviors while the right hemisphere mediates visuospatial abilities. As noted, signed languages make significant use of visuospatial mechanisms to convey linguistic information. Thus sign languages exhibit properties for which each of the cerebral hemispheres shows specialization. Neuropsychological studies of brain-injured deaf signers and functional imaging studies of neurologically intact signers have provided insights into the neural systems underlying sign language processing.
3.2 Aphasia In Signed Language
Aphasia refers to acquired impairments in the use of language following damage to the perisylvian region of the left hemisphere. Neuropsychological case studies of deaf signers convincingly demonstrate that aphasia in signed language is also found after left hemisphere perisylvian damage (Poizner et al. 1987). Moreover, within the left hemisphere, production and comprehension impairments follow the well-established anterior versus posterior dichotomy. Studies have documented that anterior frontal damage leads to Broca-like sign aphasia. In these cases, normal fluent signing is reduced to effortful, single-sign, 'telegraphic' output with little morphological complexity (such as verb agreement); comprehension, however, is left largely intact. Wernicke-like sign aphasia, following damage to the posterior third of the perisylvian region, presents with fluent but often semantically opaque output, and comprehension also suffers. In cases of left hemisphere damage, sign language paraphasias may be observed. Paraphasias may be formationally disordered (for example, substituting an incorrect handshape) or semantically disordered (producing a sign that is semantically related to the intended form). These investigations underscore the importance of left hemisphere structures in the mediation of signed languages and illustrate that sign language breakdown is not haphazard, but rather honors linguistic boundaries (Corina 2000).
The effects of cortical damage to primary language areas that result in aphasic disturbance can be differentiated from more general impairments of movement. For example, Parkinson's disease leads to errors involving the timing, scope, and precision of general movements, including, but not limited to, those involved in speech and signing. These errors produce phonetic disruptions in signing, rather than the higher-level phonemic disruptions that are apparent in aphasic signing (Corina 1999).
Apraxia is defined as an impairment in the execution of a learned movement (e.g., saluting, using a knife and fork). Lesions of the left inferior parietal lobe result in an inability to perform and comprehend gestures (Gonzalez Rothi and Heilman 1997). Convincing dissociations of sign language impairment from well-preserved praxic abilities have been reported. In one case, a subject with marked sign language aphasia affecting both production and comprehension produced unencumbered pantomime. Moreover, both comprehension and production of pantomime were better preserved than sign language. These data indicate that language impairments following left hemisphere damage are not attributable to undifferentiated symbolic impairments, and demonstrate that ASL is not simply an elaborate pantomimic system.
3.3 Role Of The Right Hemisphere In ASL
Studies of signers with right hemisphere damage (RHD) have reported significant visuospatial disruption in nonlinguistic domains (e.g., face processing, drawing, block construction, route finding) but only minimal language disruption. Problems in the organization of discourse have been observed in RHD signers, as they have in users of spoken languages. More controversial are the influences of visuospatial impairment on the use of highly spatialized components of the language, such as complex verb agreement and the classifier system. Further work is needed to understand whether these infrequently reported impairments (both in comprehension and production) reflect core linguistic deficits or secondary effects of impaired visuospatial processing.
3.4 Functional Imaging
Recent studies using functional brain imaging have explored the neural organization of language in users of signed languages. These studies have consistently found participation of classic left hemisphere perisylvian language areas in the mediation of sign language in profoundly deaf, lifelong signers. For example, positron emission tomography studies of production in British Sign Language (McGuire et al. 1997) and Langue des Signes Quebecoise (Petitto et al. 2000) report that deaf subjects activated left inferior frontal regions similar to those that mediate speech in hearing subjects. Researchers using functional magnetic resonance imaging to investigate comprehension of ASL in deaf and hearing native users of signed language have shown significant activations in frontal opercular areas (including Broca's area and dorsolateral prefrontal cortex) as well as in posterior temporal areas (such as Wernicke's area and the angular gyrus) (Neville et al. 1998). Extensive activation in right hemisphere regions was also reported. Subsequent studies have confirmed that aspects of this right posterior activation appear to be unique to sign language processing and are present only in signers who acquired sign language from birth (Newman et al. in press).
Investigations of the psychological and neural aspects of signing reveal strong commonalities in the development and cognitive processing of signed and spoken languages, despite major differences in the surface form of these languages. Early exposure to a sign language may lead to enhancements in specific neural systems underlying visual processing. There appears to be a strong biological predisposition for left hemisphere structures to mediate language, regardless of the modality of expression.
- Baddeley A 1986 Working Memory. Oxford University Press, New York
- Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D, Liu G, Neville H 2000 Visual attention at the periphery is enhanced in congenitally deaf individuals. Journal of Neuroscience 20: RC93
- Bosworth R, Dobkins K 1999 Left-hemisphere dominance for motion processing in deaf signers. Psychological Science 10: 256–62
- Cheek A, Cormier K, Repp A, Meier R 2001 Prelinguistic gesture predicts mastery and error in the production of early signs. Language
- Corina D 1999 Neural disorders of language and movement: Evidence from American Sign Language. In: Messing L, Campbell R (eds.) Gesture, Speech and Sign. Oxford University Press, New York
- Corina D 2000 Some observations regarding paraphasia in American Sign Language. In: Emmorey K, Lane H (eds.) The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Lawrence Erlbaum Associates, Mahwah, NJ
- Coulter G, Anderson S 1993 Introduction. In: Coulter G (ed.) Phonetics and Phonology: Current Issues in ASL Phonology. Academic Press, San Diego, CA
- Conrad R, Hull A 1964 Information, acoustic confusion and memory span. British Journal of Psychology 55: 429–32
- Emmorey K 2001 Language, Cognition, and the Brain: Insights From Sign Language Research. Lawrence Erlbaum Associates, Mahwah, NJ
- Emmorey K, Corina D 1990 Lexical recognition in sign language: Eﬀects of phonetic structure and morphology. Perceptual and Motor Skills 71: 1227–52
- Gonzalez Rothi L, Heilman K 1997 Introduction to limb apraxia. In: Gonzalez Rothi L, Heilman K (eds.) Apraxia: The Neuropsychology of Action. Psychology, Hove, UK
- Grosjean F 1980 Spoken word recognition processes and the gating paradigm. Perception and Psychophysics 28: 267–83
- Groce N E 1985 Everyone Here Spoke Sign Language. Harvard University Press, Cambridge, MA
- Klima E, Bellugi U 1979 The Signs of Language. Harvard University Press, Cambridge, MA
- Liddell S 1980 American Sign Language Syntax. Mouton, The Hague, The Netherlands
- Lillo-Martin D, Klima E 1990 Pointing out diﬀerences: ASL pronouns in syntactic theory. In: Fisher S, Siple P (eds.) Theoretical Issues in Sign Language Research I: Linguistics. University of Chicago Press, Chicago
- McGuire P, Robertson D, Thacker A, David A, Frackowiak R, Frith C 1997 Neural correlates of thinking in sign language. Neuroreport 8: 695–8
- Meier R 1991 Language acquisition by deaf children. American Scientist 79: 60–70
- Neville H 1991 Neurobiology of cognitive and language processing: Eﬀects of early experience. In: Gibson K, Peterson A (eds.) Brain Maturation and Cognitive Development: Comparative and Cross-cultural Perspectives. Aldine de Gruyter Press, Hawthorne, NY
- Neville H, Bavelier D, Corina D, Rauschecker J, Karni A, Lalwani A, Braun A, Clark V, Jezzard P, Turner R 1998 Cerebral organization for language in deaf and hearing subjects: Biological constraints and eﬀects of experience. Proceedings of the National Academy of Sciences 95: 922–9
- Newman A, Corina D, Tomann A, Bavelier D, Jezzard P, Braun A, Clark V, Mitchell T, Neville H (submitted) Eﬀects of age of acquisition on cortical organization for American Sign Language: an fMRI study. Nature Neuroscience
- Padden C, Humphries T 1988 Deaf in America: Voices from a Culture. Harvard University Press, Cambridge, MA
- Petitto L 2000 The acquisition of natural signed languages: Lessons in the nature of human language and its biological foundations. In: Chamberlain C, Morford J (eds.) Language Acquisition by Eye. Lawrence Erlbaum Associates, Mahwah, NJ
- Petitto L, Marentette P 1991 Babbling in the manual mode: Evidence for the ontogeny of language. Science 251: 1493–6
- Petitto L, Zatorre R, Gauna K, Nikelski E, Dostie D, Evans A 2000 Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language. Proceedings of the National Academy of Science 97: 13961–6
- Poizner H, Klima E, Bellugi U 1987 What the Hands Reveal About the Brain. MIT Press, Cambridge, MA
- Wilson M, Emmorey K 2000 When does modality matter? Evidence from ASL on the nature of working memory. In: Emmorey K, Lane H (eds.) The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Lawrence Erlbaum Associates, Mahwah, NJ
- Winston E 1995 Spatial mapping in comparative discourse frames. In: Emmorey K, Reilly J (eds.) Language, Gesture, and Space. Lawrence Erlbaum Associates, Mahwah, NJ