Assessment of Intellectual Functioning


Oversimplifying somewhat, there are three applied approaches to assessing intelligence (Daniel 1997). Each rests on a different conceptualization of what intelligence is and why it should be measured. Each approach, the psychometric, the neuropsychological, and the dynamic, is reviewed below. In addition, several physiological indices—brain wave tracings, reaction and inspection times, nerve conduction velocity, and rate at which glucose is metabolized in the brain, for example—correlate with measures of intelligence. However, physiological indices are not reviewed because they have no applied utility. It is possible, though, that physiological indices will play a stronger assessment role as the twenty-first century progresses (Matarazzo 1992).



1. Psychometric Assessment

Psychometric tests have dominated intelligence testing for a century. The defining feature of this approach is its empirical foundation; ‘psychometric’ simply refers to the quantitative assessment of psychological states/abilities. While quantitative assessment rests on a massive measurement technology, its theoretical foundations are shallow, as reflected in its origins. The earliest tests that influenced contemporary intellectual measures directly emerged from studies by Alfred Binet and colleagues in France (Cronbach 1984). In 1904, Binet was directed to devise a means of distinguishing educable from noneducable students in the relatively new universal education system. Having investigated cranial, facial, palmar, and handwriting indices, Binet found that direct measures of complex intellectual tasks involving judgment, comprehension, and reasoning were most successful in distinguishing among pupils. Based on these pragmatic beginnings, Binet defined intelligence as the capacity to adopt and sustain a direction, make adaptations for the purpose of attaining a desired end, and monitor performance self-correctively. With little elaboration, this definition still directs the psychometric paradigm.

Typically, modern psychometric tests consist of varied subtests that tap diverse aspects of the loosely defined intelligence construct. For example, scales may include subtests that sample a broad range of knowledge (e.g., the names of objects, dates, historical and geographical facts) and subtests that require the examinee to assemble colored blocks such that their pattern resembles a prespecified design (Sattler 1992). Again, the choice of subtests is not driven by theoretical prescription. Subtests are selected because they work—in combination, they serve to rank individuals according to how much they know and how good they are at solving certain problems. The pragmatic selection of subtests is based on Binet’s conception of intelligence as a general or undifferentiated ability (g), so that, in principle, the tasks that tap g are interchangeable.




At the heart of psychometric testing lies norm referencing (Sattler 1992). Norm referenced tests are developed by administering items in a standardized manner to a representative sample of the population in question. The norm sample is considered ‘representative’ insofar as it is stratified within age groups for variables that might influence performance differentially, such as sex, geographic region, ethnic status, community size, etc. Scores are scaled such that each individual’s derived score represents a relative standing within the norm or standardization group. In this sense, psychometric testing is an empirical endeavor in its purest sense: as a comparative construct, there is little need to theorize about the exact nature of intelligence.
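
To make the norm-referencing logic concrete, here is a minimal Python sketch that converts a raw score into a standard score on the familiar IQ metric (mean 100, SD 15) by locating it within one age stratum of a norm sample. All names and numbers are hypothetical; operational tests use far larger stratified samples and smoothed norm tables rather than raw sample statistics.

```python
import statistics

def deviation_score(raw_score, norm_sample, scale_mean=100, scale_sd=15):
    """Convert a raw score to a norm-referenced standard score.

    norm_sample holds the raw scores of the examinee's age stratum in the
    standardization sample (hypothetical values in the example below).
    """
    norm_mean = statistics.mean(norm_sample)
    norm_sd = statistics.stdev(norm_sample)
    z = (raw_score - norm_mean) / norm_sd   # standing relative to age peers
    return scale_mean + scale_sd * z        # rescale to the IQ-style metric

# Hypothetical norms for one age stratum, and one examinee's raw score.
age_norms = [38, 42, 45, 47, 50, 52, 55, 58, 61, 64]
print(round(deviation_score(53, age_norms)))  # -> 103, slightly above the mean
```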

As mentioned, most modern psychometric tests include varied tasks. The original intention was to ensure that g was comprehensively surveyed. With time, however, clinicians came to exploit the multitask construction of intelligence tests to make intra-individual distinctions (Kaufman 1990). By looking at the variability among subtests or groups of subtests, assessors hypothesized about relative intellectual strengths and weaknesses. For example, a particular respondent might prove better on tests of memory than on tasks involving conceptualization. It is important to note, though, that the analysis of intra-individual differences developed after the fact; such comparisons are driven by the practicalities of what subtests are available, rather than by a detailed theory about the structure of intelligence.

The empirical base of the psychometric effort implies both weakness and strength. With respect to its limitations, attempts to interpret intra-individual differences based on a selection of subtests that were pragmatically chosen have not been validated empirically (Reschly 1997). Furthermore, the atheoretical approach to task selection has resulted in restricted and incomplete sampling of the intelligence domain (Chen and Gardner 1997). For example, musical and interpersonal abilities are neglected. Instead, there is an emphasis on skills acquired through academic learning, a prized outcome in mainstream Western societies. Therefore, critics object to the fact that psychometric tests measure little more than achievement; they assess what an examinee has learned, not the examinee’s potential to learn.

Related to this issue, and magnified by the practice of defining individual intelligence with reference to a norm group, questions have arisen about bias due to (sub)cultural, ethnic, life-experience, and motivational differences. This becomes a social issue when examinees from minority groups are compared to a norm sample whose context, values, and learning experiences are different from their own (Suzuki and Valencia 1997). Testing thereby betrays its original purpose of providing objective data on an individual’s intellectual functioning and comes, instead, to discriminate against atypical examinees.

Another difficulty with psychometric tests is that although they usually correlate highly among themselves, this is not always the case (Daniel 1997). Correlations may be influenced by what tasks are included and how they are weighted. Perhaps a greater problem lies in the fact that even where test scores do correlate highly, the same individual may earn discrepant scores on different instruments due to the fact that tests are normed on different standardization groups.
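
A toy calculation shows how different standardization groups can produce discrepant scores for the same person. Both norm groups below are invented; the point is only that a standard score expresses standing relative to whichever group the test was normed on.

```python
# Two tests normed on different standardization samples (hypothetical
# parameters). The same underlying ability earns discrepant IQ scores
# because each score expresses standing relative to a different group.
def iq(ability_z, norm_group_mean, norm_group_sd):
    relative_z = (ability_z - norm_group_mean) / norm_group_sd
    return 100 + 15 * relative_z

ability = 0.5                        # true standing, in population SD units
print(round(iq(ability, 0.0, 1.0)))  # test A, population-typical norms: 108
print(round(iq(ability, 0.2, 0.9)))  # test B, abler/narrower norm group: 105
```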

A crucial criticism of psychometric tests is that recommendations derived from these instruments have not been shown to enhance remediation for the examinees (Reschly 1997). Again, this can be attributed to the fact that the content of these scales has not been selected according to any theory of intelligence, brain functioning, or pedagogy.

In other respects, psychometric testing has met with success. Although test tasks are selected pragmatically, they cluster in remarkably similar ways across tests and studies, giving insight into the structure of intelligence. Based on statistical methods that group subtests into clusters according to underlying commonalities (factor analysis), three strata of intelligence have been identified (Carroll 1997). At the highest stratum is a general factor, g. This factor subsumes a second stratum of broad factors, including ‘fluid’ and ‘crystallized’ intelligence. (Fluid intelligence involves the ability to cope with novelty and think flexibly. Crystallized intelligence involves the storage and use of declarative knowledge such as vocabulary or information.) Subsumed under each broad factor is a set of narrow abilities, such as ‘induction’ and ‘reading comprehension.’ Knowledge of these distinct but interdependent strata can guide construction of new psychometric instrumentation.
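
As an illustration of the statistical machinery, the following sketch simulates six subtests driven by two broad factors (standing in for fluid and crystallized ability, both loading on a shared g) and recovers the two-cluster structure with exploratory factor analysis. The data-generating numbers are invented, and real three-stratum analyses use large batteries and hierarchical models; this only shows the kind of clustering that factor analysis detects.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
g = rng.normal(size=n)                        # general factor
fluid = 0.6 * g + 0.8 * rng.normal(size=n)    # broad factors subsumed by g
crystallized = 0.6 * g + 0.8 * rng.normal(size=n)

# Three subtests load on each broad factor, plus unique (error) variance.
subtests = np.column_stack(
    [fluid + 0.5 * rng.normal(size=n) for _ in range(3)]
    + [crystallized + 0.5 * rng.normal(size=n) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(subtests)
print(np.round(fa.components_, 2))  # loadings: the first three subtests
                                    # cluster on one factor, the last three
                                    # on the other (up to sign and order)
```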

Another strength of the psychometric approach derives from its emphasis on quantitative methods; psychometricians strive to ensure that their tests are reliable and valid predictors of performance (Sattler 1992). ‘Reliability’ refers to consistency of measurement; the more reliable a measure, the less error involved in estimates derived from it. Many psychometric tests boast extremely high internal reliability (the degree to which each component score of the test correlates with the full test score) and short-term ‘test-retest’ reliability (an index of stability derived by administering the test to the same group of individuals more than once). Furthermore, the long-term stability of IQ has proven impressive, with good predictions over a 20-year time-span. The validity of these tests, too, has proven strong. ‘Validity’ refers to the extent to which a test measures what it was designed to measure. Intelligence test scores correlate with amount of schooling, quality of work produced in school, occupational status, and performance in the work situation (although the strength of the latter prediction is controversial), both concurrently and predictively. To summarize, although there are serious limitations to psychometric measurement, the approach yields reliable and valid estimates of intellectual functioning. Psychometric tests are accurate classifiers and predictors when used with care in circumscribed contexts.
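
Both reliability indices mentioned above are simple to compute. The sketch below implements Cronbach's alpha for internal consistency and correlates total scores across two administrations for test-retest stability; the six-examinee, four-subtest score matrix is invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency for an (examinees x subtests) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each subtest
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical scores: six examinees x four subtests, retested a month later.
test = np.array([[10, 11, 9, 10], [14, 13, 15, 14], [8, 9, 8, 7],
                 [12, 12, 13, 11], [16, 15, 17, 16], [9, 10, 9, 9]])
retest = test + np.array([[1, 0, -1, 0], [0, 1, 0, -1], [0, 0, 1, 0],
                          [-1, 0, 0, 1], [0, -1, 1, 0], [1, 0, 0, 0]])

print(round(cronbach_alpha(test), 2))             # internal consistency
stability = np.corrcoef(test.sum(axis=1), retest.sum(axis=1))[0, 1]
print(round(float(stability), 2))                 # test-retest stability
```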

2. Neuropsychologically Based Assessment

In reaction to the pragmatics of psychometric assessment, several theory-based tests have been developed. These are exemplified by the neuropsychological approach to intellectual assessment. A viable neuropsychological approach emerged in the 1980s. This approach reflects an attempt to increase the validity of assessment by basing it on an understanding of brain function and brain–behavior relationships. The emphasis is not on what the examinee knows but on how the examinee reasons; not on observable abilities but on underlying capacities. Empirical considerations such as reliability, validity, and norm referencing are stressed, but they subserve theoretical relevance (Kaufman 2000).

Neuropsychological approaches to intelligence testing have been influenced by the theories of Alexander Luria, a Russian psychologist. Luria (1980) proposed three levels of brain functioning that mediate distinct types of learning and performance. The lowest functional unit includes attention. Attention involves selectively attending to particular stimuli while inhibiting attention to competing stimuli over time. It is prerequisite to all forms of intelligent behavior. Attention can be assessed with tasks that demand sustained concentration on one stimulus and sustained inhibition of responses to distracting stimuli. The second functional unit involves encoding and processing information. According to Luria, information can be processed simultaneously and/or successively. Simultaneous processing is holistic; it involves organizing stimuli into interrelated groups. Simultaneous processing can be assessed with tasks that require an examinee to recall the spatial locations of stimuli, identify objects based on incomplete drawings, and construct abstract designs from several identical triangles. Successive processing involves integrating stimuli serially, such that each stimulus is related to the one that precedes it in chain-like succession. Successive processing can be tested with tasks requiring, for example, that the respondent repeat numbers spoken by the examiner or repeat an ordered series of hand movements made by the examiner. Planning and monitoring (executive) functions form the third and highest cognitive level, according to Luria. This level involves the integration of attention and simultaneous and successive processing, along with acquired knowledge, so as to permit the formation and execution of plans and the ongoing verification of their efficacy. Neuropsychological tests of intelligence include subtests that assess some or all of these processes. In addition, they may include subtests that are derived from research on information processing, cerebral specialization, and cognitive psychology, and subtests that reflect other theoretical orientations—the distinction between fluid and crystallized forms of intelligence, for example. Scores are derived to reflect ability in each function assessed, as well as overall functioning. These scores can be compared to derive a profile of strengths and weaknesses which, in turn, has direct educational implications.

The neuropsychological approach to intelligence testing has limitations and strengths. A central shortcoming of these theory-based tests lies in the controversy regarding whether they actually measure the theoretical constructs they purport to measure. Factor analyses indicate that subtests do not always cluster so as to confirm the neuropsychological theory on which they are based (Kranzler and Keith 1999). Furthermore, even when the subtests do cluster as expected, their underlying meaning can be ambiguous. For example, the factors that supposedly reflect sequential and simultaneous reasoning in one neuropsychological intelligence test might just as easily reflect verbal and nonverbal reasoning skills. Resolution of issues pertaining to ‘construct validity’ (whether a test measures what it purports to measure) is crucial; the controversy pertains to whether the theory-based tests in question are actually theory-based.

What of test bias, as discussed in the context of psychometric testing? Bias is not precluded by the theoretical considerations pertinent to the neuropsychological approach. Furthermore, the neuropsychological approach relies on norming procedures, such that the performance of a given examinee is compared to the performance of a standardization sample. While attempts have been made to reduce the discrepancies between whites and minority groups (e.g., by reducing verbal requirements, limiting or modifying measures of planning, and disproportionately selecting high socioeconomic representatives of minority groups), these strategies work by sacrificing breadth of intellectual sampling and by artifactually minimizing the differences between whites and minority groups in the normative group. The issue of bias remains problematic in neuropsychological intelligence testing.

It has been argued that the theoretical underpinnings of neuropsychological tests of intelligence both restrain and augment the breadth of subtest selection (Das et al. 1994). Which perspective one takes depends on the neuropsychological test one is considering and what one considers the purpose of testing to be. It is true that such tests neglect areas of intelligence such as creativity and wit, but the purpose of these new instruments is primarily to aid in pedagogic planning. Hence, it is the perspective these tests afford on a given examinee’s strengths and weaknesses with respect to learning that is important. If these tests can successfully delineate the examinee’s ability to plan, attend, and reason successively and simultaneously, such that a successful, deficit-related teaching approach can be developed, the breadth of neuropsychological intelligence testing will be justified. This issue has been debated, however, with some investigators arguing that the adoption of a neuropsychological approach to intelligence testing has not led, and perhaps cannot lead, to improved pedagogical success (Kranzler and Keith 1999). Others have demonstrated specific gains using carefully controlled trials (Naglieri 1999). For example, students with poor planning improved dramatically in math calculation when instructed in the use of strategies; other students who were good in planning showed more modest improvements when instructed in the use of strategies. At this point, the relative novelty of the neuropsychological approach precludes conclusive comment about treatment validity. If these instruments prove successful in directing effective pedagogy, they will have addressed the major shortcoming of psychometric assessment without sacrificing its empirical strengths.

3. Dynamic Assessment

Dynamic assessment incorporates a heterogeneous set of evaluation procedures; its defining feature is that it evaluates and modifies cognitive functioning simultaneously, through mediation and intervention. The product of a dynamic assessment is not an estimate of the examinee’s intellectual standing relative to peers, but an estimate of learning potential. The notion behind the learning potential construct is that individuals with similar starting competencies (e.g., similar IQs) may respond differentially to instruction; what an individual has learned does not necessarily reflect the individual’s potential to learn (Grigorenko and Sternberg 1998).

Dynamic assessment represents a reaction against psychometric testing in particular (Haywood et al. 1992), with its purported over-reliance on previously learned material; its failure to consider factors other than ability in measuring IQ (e.g., (sub)cultural, motivational, personality, and social adequacy differences among examinees); and its irrelevance to remediation. The theoretical writings of Lev Vygotsky, a Russian psychologist, form the context of dynamic assessment. Vygotsky (1987) argued that what a child can accomplish when assisted by a mediator (the ‘zone of proximal development’) is more indicative of the child’s mental development than what the child can do alone (the ‘zone of actual development’). The zone of proximal development is operationalized in dynamic assessment by a collaborative examiner. This examiner provides ongoing feedback, titrated such that it is just beyond the examinee’s zone of actual development, until the examinee either solves the problem or gives up. Thus, testing and teaching are joined. This is facilitated by emphasizing tasks that stress fluid, rather than crystallized, reasoning skills. The Vygotskian basis of dynamic assessment is supplemented by information-processing concepts in some dynamic approaches. For example, one approach stresses the central role of working memory, the ability to hold old and new information in memory simultaneously so as to manipulate and transform it. Dynamic instrumentation often includes standardized psychometric tasks, as described above, but they are administered interactively.

Dynamic approaches vary along two inversely related continua: individuality of assessment and psychometric adequacy. At one end of these continua, the assessor works in intensive, individualized interaction, modifying tasks freely, encouraging, prompting, and mediating between child and task at will to derive qualitative (as opposed to quantitative) diagnoses and educational prescriptions (Feuerstein et al. 1979). One commentator described the approach as ‘a clinician’s dream and a psychometrician’s nightmare’ (Lidz 1997). At the other end of the continua lies a test-train-retest sequence with standardized mediation and outcome measures. Mediation is delivered through a series of predetermined hints that vary from general to specific. Each hint is offered in response to the youngster’s difficulty. Outcome is quantified as number of prompts required to bring the child to correct response and as the ability to transfer to tasks other than those on which training first occurred (Campione and Brown 1987). Some dynamic practitioners doubt the efficacy of standardized dynamic procedures, because clues are not individually modified to the child’s needs. However, the method’s standard, quantifiable format renders it amenable to psychometric study.
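
The standardized (graduated-prompt) end of the continuum lends itself to a simple computational description. The sketch below models mediation as a fixed general-to-specific hint sequence and quantifies outcome as the number of prompts required; the hints and the simulated child are purely illustrative and are not drawn from any published instrument.

```python
# Hints ordered from most general to most specific (illustrative only).
HINTS = [
    "Look at the whole pattern before answering.",
    "Compare the rows: what changes from left to right?",
    "The shape rotates; apply the same rotation here.",
]

def mediated_trial(solve_attempt):
    """Run one item; solve_attempt(hint_count) stands in for the child's
    response after hint_count hints. Returns the number of prompts used,
    or None if the child never solves the item."""
    for prompts_used in range(len(HINTS) + 1):
        if solve_attempt(prompts_used):      # attempt before/after each hint
            return prompts_used
        if prompts_used < len(HINTS):
            print("Examiner:", HINTS[prompts_used])
    return None

# A pretend child who solves the item once two hints have been given.
print(mediated_trial(lambda hints: hints >= 2))  # -> 2 prompts required
```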

The goals of dynamic assessment—to assess learning ability, to identify learning processes, and to teach the individual generalizable strategies for problem solving—remain its major strength. No other assessment tradition is so ambitious. Whether or not they succeed in this endeavor, dynamic assessment advocates have sharpened the focus of other assessment approaches, alerting them to the ultimate goal of intellectual evaluation. And in selecting response to remedial efforts as an outcome (as opposed to comparing an examinee with his or her age mates), dynamic assessors propose a powerful means of attenuating experiential bias in intelligence testing. The reasoning is as follows. Suppose there are two children: one biologically disadvantaged with mental retardation, the other of whom has experienced atypical living and educational opportunities. One might expect relatively little success in teaching the child with mental retardation. However, the educationally atypical child should make great gains, i.e., there should be a large discrepancy between this youngster’s pretest and posttest scores (or the child should require relatively few clues to reach problem solution), whether or not the posttest scores approximate the average test score of this child’s educationally advantaged age peers. The point is that pretest scores, conceptually comparable to conventional test results, tap a different aspect of intelligence than do difference scores; pretest scores correspond to developed abilities (and, implicitly, past opportunities to develop abilities), while difference scores approximate learning potential. It is by tapping into learning potential that dynamic assessments attenuate the biases of traditional assessment.
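
The contrast between developed ability and learning potential reduces to a distinction between pretest scores and gain (difference) scores, as the hypothetical figures below illustrate: both children start at the same pretest level, but only one gains substantially from mediation.

```python
# Hypothetical pre/post scores: the pretest indexes developed ability,
# while the pre-to-post gain approximates learning potential.
children = {
    "child with mental retardation": {"pre": 72, "post": 75},
    "educationally atypical child":  {"pre": 72, "post": 94},
}
for label, s in children.items():
    gain = s["post"] - s["pre"]
    print(f"{label}: pretest {s['pre']}, gain {gain}")
```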

However, research support for the claims underlying dynamic assessment is sparse, ambiguous, and often flawed. This is more the case for approaches that emphasize nonstandardized assessment than for those that stress psychometric adequacy, but weak research foundations are common to all approaches (Grigorenko and Sternberg 1998). Little validation of the ‘zone of proximal development’ exists, for example, and much of what does exist has been conducted at the level of theory, ideology, and description, rather than quantitative experiment. And although change is central to the dynamic philosophy, virtually no work addresses the reliability of change scores. Little has been published on internal consistency, and there is some evidence that change scores derived from different tasks do not correlate. Although there may be reasons to expect incongruence of change scores, these reasons need to be explored and verified so that systematic predictions can be made. Questions have also been raised about variability among examiners in administering, scoring, and interpreting the less standardized assessments, and the possibility of obtaining different results for a given child as a result of this. Few studies address the issue of retest reliability of change scores, and some studies suggest that retest reliability is weak. Factor analytic studies, which are used to demonstrate that subtests intercorrelate according to theoretical expectations, are rarely conducted and do not always confirm hypotheses. Studies have shown that mediation promotes change from pre- to post-test, but it is not always clear how much of the change is attributable to mediation and how much change occurs simply by virtue of practice. Importantly, some research shows that dynamic indices augment prediction of post-test performance beyond data derived from pretest performance (Day et al. 1997). However, other studies have not confirmed the independent value of gain indices over traditional measures; results on this aspect of dynamic testing are ambiguous. From a long-term perspective, some evidence suggests a relation between ability to benefit from mediation and economic and social independence in adulthood among individuals with low IQ. However, there have been no demonstrations that dynamic assessment actually changes school or work performance or leads to recommendations that change performance (Grigorenko and Sternberg 1998). In all, dynamic approaches to assessment represent an ambitious but unproven perspective on human intelligence.

4. Concluding Comments

The aforementioned approaches to intellectual assessment have shared and unique strengths. With time, it is reasonable to expect an amalgamation of these strengths such that, with the addition of practical biological measures of intelligence, assessment will become increasingly reliable, valid, fair, and useful. Assuming that this synthesis will reflect current theories regarding the hierarchical structure of intelligence, a reasonable expectation is as follows. In the foreseeable future, g will be measured with psychometric instrumentation. The exemplary statistical properties of psychometric tests and the diversity of tasks involved in their structure render them most suitable. It is possible that biological measures will augment or replace the psychometric approach to measuring g, although biological paradigms remain primitive. Furthermore, because biological assessment focuses exclusively on brain functioning, rather than on brain–behavior relationships, the remedial implications of this approach will remain limited for some time to come. The neuropsychological approach to intellectual assessment can be used to delineate functional areas (e.g., simultaneous vs. successive reasoning) subsumed by g and provide broad remedial recommendations accordingly. Finally, the dynamic assessment approach can be used at the level of the individual by assessment-remediators. This final level addresses issues of bias most directly. However, the complexities of genetic-environmental interactions will always dictate the need for vigilance with respect to bias. The preceding proposal does not suggest that current tests will be imported into a single paradigm holus-bolus. Nor does it suggest that such a synthesis will resolve the theoretical and empirical inconsistencies integral to each approach. The proposal suggests that the principles derived from each approach may be incorporated into an integrated assessment/remediation process, such that the shortcomings of each approach are attenuated. An integrated approach to intellectual assessment will permit successive classification, prediction, and remediation that encompass large groups (e.g., people with average g) and the smallest groups (i.e., the individual) alike.

Bibliography:

  1. Campione J C, Brown A L 1987 Linking dynamic assessment with school achievement. In: Lidz C S (ed.) Dynamic Assessment: An Interactional Approach to Evaluating Learning Potential. Guilford, New York, pp. 82–115
  2. Carroll J B 1997 The three-stratum theory of cognitive abilities. In: Flanagan D P, Genshaft J L, Harrison P L (eds.) Contemporary Intellectual Assessment: Theories, Tests, and Issues. Guilford, New York, pp. 122–30
  3. Chen J-Q, Gardner H 1997 Alternative assessment from a multiple intelligences perspective. In: Flanagan D P, Genshaft J L, Harrison P L (eds.) Contemporary Intellectual Assessment: Theories, Tests, and Issues. Guilford, New York, pp. 105–21
  4. Cronbach L J 1984 Essentials of Psychological Testing, 4th edn. Harper and Row, New York
  5. Daniel M H 1997 Intelligence testing: Status and trends. American Psychologist 52: 1038–45
  6. Das J P, Naglieri J A, Kirby J R 1994 Assessment of Cognitive Processes: The PASS Theory of Intelligence. Allyn and Bacon, Needham Heights, MA
  7. Day J D, Engelhardt J L, Maxwell S E, Bolig E E 1997 Comparison of static and dynamic assessment procedures and their relation to independent performance. Journal of Educational Psychology 89: 358–68
  8. Feuerstein R, Rand Y, Hoffman M 1979 The Dynamic Assessment of Retarded Performers: Learning Potential Assessment Device, Theory, Instruments, and Techniques. University Park Press, Baltimore
  9. Grigorenko E L, Sternberg R J 1998 Dynamic testing. Psychological Bulletin 124: 75–111
  10. Haywood H C, Tzuriel D, Vaught S 1992 Psychoeducational assessment from a transactional perspective. In: Haywood C, Tzuriel D (eds.) Interactive Assessment. Springer-Verlag, New York, pp. 38–63
  11. Kaufman A S 1990 Assessing Adolescent and Adult Intelligence. Allyn and Bacon, Toronto, ON, Canada
  12. Kaufman A S 2000 Tests of intelligence. In: Sternberg R J (ed.) Handbook of Intelligence. Cambridge University Press, Cambridge, UK, pp. 445–76
  13. Kranzler J H, Keith T Z 1999 Independent confirmatory factor analysis of the Cognitive Assessment System (CAS): What does the CAS measure? School Psychology Review 28: 117–44
  14. Lidz C S 1997 Dynamic assessment approaches. In: Flanagan D P, Genshaft J L, Harrison P L (eds.) Contemporary Intellectual Assessment: Theories, Tests, and Issues. Guilford, New York, pp. 281–96
  15. Luria A R 1980 Higher Cortical Functions in Man, 2nd edn. Basic Books, New York
  16. Matarazzo J D 1992 Psychological testing and assessment in the 21st century. American Psychologist 47: 1007–18
  17. Naglieri J A 1999 How valid is the PASS theory and CAS? School Psychology Review 28: 145–62
  18. Reschly D J 1997 Diagnostic and treatment utility of intelligence tests. In: Flanagan D P, Genshaft J L, Harrison P L (eds.) Contemporary Intellectual Assessment: Theories, Tests, and Issues. Guilford, New York, pp. 437–56
  19. Sattler J M 1992 Assessment of Children, 3rd edn. Sattler, San Diego, CA
  20. Suzuki L A, Valencia R R 1997 Race-ethnicity and measured intelligence. American Psychologist 52: 1103–14
  21. Vygotsky L S 1987 The Collected Works of L. S. Vygotsky. Plenum, New York, Vol. 1