Education and Training in Psychological Assessment Research Paper


We begin this research paper with a story about an assessment done by one of us (Handler) when he was a trainee at a Veterans Administration hospital outpatient clinic. He was asked by the chief of psychiatry to reassess a patient the psychiatrist had been seeing in classical psychoanalysis, which included heavy emphasis on dream analysis and free association, with little input from the analyst, as was the prevailing approach at the time. The patient was not making progress, despite the regimen of three sessions per week he had followed for over a year.


The patient was cooperative and appropriate in the interview and in his responses to the Wechsler Adult Intelligence Scale (WAIS) items, until the examiner came to one item of the Comprehension subtest, “What does this saying mean: ‘Strike while the iron is hot’?” The examiner was quite surprised when the patient, who up to that point had appeared to be relatively sound, answered: “Strike is to hit. Hit my wife. I should say push, and then pull the cord of the iron. Strike in baseball—one strike against you. This means you have to hit and retaliate to make up that strike against you—or if you feel you have a series of problems—if they build up, you will strike.” The first author still remembers just beginning to understand what needed to be said to the chief of psychiatry about the type of treatment this patient needed.

As the assessment continued, it became even more evident that the patient’s thinking was quite disorganized, especially on less structured tests. The classical analytic approach, without structure, eliciting already disturbed mentation, caused this man to become more thought disordered than he had been before treatment: His WAIS responses before treatment were quite sound, and his projective test responses showed only some significant anxiety and difficulty with impulse control. Although a previous assessor had recommended a more structured, supportive approach to therapy, the patient was unfortunately put in this unstructured approach that probed an unconscious that contained a great deal of turmoil and few adequate defenses.

This assessment was a significant experience in which the assessor learned the central importance of using personality assessment to identify the proper treatment modality for patients and to identify patients’ core life issues. Illuminating experiences such as this one have led us to believe that assessment should be a central and vital part of any doctoral curriculum that prepares students to do applied work. We have had many assessment experiences that have reinforced our belief in the importance of learning assessment to facilitate the treatment process and to help guide patients in constructive directions.

The approach to teaching personality assessment described in this research paper emphasizes the importance of viewing assessment as an interactive process—emphasizing the interaction of teacher and student, as well as the interaction of patient and assessor. The process highlights the use of critical thinking and continued questioning of approaches to assessment and to their possible interpretations, and it even extends to the use of such a model in the application of these activities in the assessment process with the patient. Throughout the paper we have emphasized the integration of research and clinical application.

Differences Between Testing and Assessment

Unfortunately, many people use the terms testing and assessment synonymously, but actually these terms mean quite different things. Testing refers to the process of administering, scoring, and perhaps interpreting individual test scores by applying a descriptive meaning based on normative, nomothetic data. The focus here is on the individual test itself. Assessment, on the other hand, consists of a process in which a number of tests, obtained from the use of multiple methods, are administered and the results of these tests are integrated among themselves, along with data obtained from observations, history, information from other professionals, and information from other sources—friends, relatives, legal sources, and so on. All of these data are integrated to produce, typically, an in-depth understanding of the individual, focused on the reasons the person was referred for assessment. This process is person focused or problem issue focused (Handler & Meyer, 1998). The issue is not, for example, what the person scored on the Minnesota Multiphasic Personality Inventory-2 (MMPI-2), or what the Rorschach Structural Summary yielded, but, rather, what we can say about the patient’s symptomatology, personality structure, and dynamics, and how we can answer the referral questions. Tests are typically employed in the assessment process, but much more information and much more complexity are involved in the assessment process than in the simple act of testing itself.

Many training programs teach testing but describe it as assessment. The product produced with this focus is typically a report that presents data from each test, separately, with little or no integration or interpretation. There are often no valid clear-cut conclusions one can make from interpreting tests individually, because the results of other test and nontest data often modify interpretations or conclusions concerning the meaning of specific test signs or results on individual tests. In fact, the data indicate that a clinician who uses a single method will develop an incomplete or biased understanding of the patient (Meyer et al., 2000).

Why Teach and Learn Personality Assessment?

When one considers the many advantages offered by learning personality assessment, its emphasis in many settings becomes quite obvious. Therefore, we have documented the many reasons personality assessment should be taught in doctoral training programs and highlighted as an important and respected area of study.

Learning Assessment Teaches Critical Thinking and Integrative Skills

The best reason, we believe, to highlight personality assessment courses in the doctoral training curriculum concerns the importance of teaching critical thinking skills through the process of learning to integrate various types of data. Typically, in most training programs until this point, students have amassed a great deal of information from discrete courses by reading, by attending lectures, and from discussion. However, in order to learn to do competent assessment work, students must now learn to organize and integrate information from many diverse courses. They are now asked to bring these and other skills to bear in traversing the scientist-practitioner bridge, linking nomothetic and idiographic data. These critical thinking skills, systematically applied to the huge task of data integration, provide students with a template that can be used in other areas of psychological functioning (e.g., psychotherapy, or research application).

Assessment Allows the Illumination of a Person’s Experience

Sometimes assessment data allow us to observe a person’s experience as he or she is being assessed. This issue is important because it is possible to generalize from these experiences to similar situations in psychotherapy and to the patient’s environment. For example, when a 40-year-old man first viewed Card II of the Rorschach, he produced a response that was somewhat dysphoric and poorly defined, suggesting possible problems with emotional control, because Card II is the first card containing color that the patient encounters. He made a sound that indicated his discomfort and said, “A bloody wound.” After a minute he said, “A rocket, with red flames, blasting off.” This response, in contrast to the first one, was of good form quality. These responses illuminate the man’s style of dealing with troubling emotions: He becomes angry and quickly and aggressively leaves the scene with a dramatic show of power and force. Next the patient gave the following response: “Two people, face to face, talking to each other, discussing.” One could picture the sequence of intrapsychic and interpersonal events in the series of these responses. First, it is probable that the person’s underlying depression is close to the surface and is poorly controlled. With little pressure it breaks through and causes him immediate but transitory disorganization in his thinking and in the ability to manage his emotions. He probably recovers very quickly and is quite capable, after an unfortunate release of anger and removing himself from the situation, of reestablishing an interpersonal connection. Later in therapy this man enacted just such a pattern of action in his work situation and in his relationships with family members and with the therapist, who was able to understand the pattern of behavior and could help the patient understand it.

A skilled assessor can explore and describe with empathic attunement painful conflicts as well as the ebb and flow of dynamic, perhaps conflictual forces being cautiously contained. The good assessor also attends to the facilitating and creative aspects of personality, and the harmonious interplay of intrapsychic and external forces, as the individual copes with day-to-day life issues (Handler & Meyer, 1998). It is possible to generate examples that provide moving portraits of a person’s experience, such as the woman who saw “a tattered, torn butterfly, slowly dying” on Card I of the Rorschach, or a reclusive, schizoid man whom the first author had been seeing for some time, who saw “a mushroom” on the same card. When the therapist asked, “If this mushroom could talk, what would it say?” the patient answered, “Don’t step on me. Everyone likes to step on them and break them.” This response allowed the therapist to understand this reserved and quiet man’s experience of the therapist, who quickly altered his approach and became more supportive and affiliative.

Assessment Can Illuminate Underlying Conditions

Responses to assessment stimuli allow us to look beyond a person’s pattern of self-presentation, possibly concealing underlying emotional problems. For example, a 21-year-old male did not demonstrate any overt signs of gross pathology in his initial intake interview. His Rorschach record was also unremarkable for any difficulties, until Card IX, to which he gave the following response: “The skull of a really decayed or decaying body . . . with some noxious fumes or odor coming out of it. It looks like blood and other body fluids are dripping down on the bones of the upper torso and the eyes are glowing, kind of an orange, purplish glow.” To Card X he responded, “It looks like someone crying for help, all bruised and scarred, with blood running down their face.” The student who was doing the assessment quickly changed her stance with this young man, providing him with rapid access to treatment.

Assessment Facilitates Treatment Planning

Treatment planning can focus and shorten treatment, resulting in benefits to the patient and to third-party payors. Informed treatment planning can also prevent hospitalization, and provide more efficient and effective treatment for the patient. Assessment can enhance the likelihood of a favorable treatment outcome and can serve as a guide during the course of treatment (Applebaum, 1990).

Assessment Facilitates the Therapeutic Process

The establishment of the initial relationship between the patient and the therapist is often fraught with difficulty. It is important to sensitize students to this difficult interaction because many patients drop out of treatment prematurely. One might expect that asking a new patient to participate in an assessment before beginning treatment would produce greater dropout than a simple intake interview would, because the assessment could seem to be just another bothersome hurdle the patient must jump over to receive services; recent data indicate, however, that just the opposite is true (Ackerman, Hilsenroth, Baity, & Blagys, 2000). Perhaps the assessment procedure allows clients to slide into therapy in a less personal manner, desensitizing them to the stresses of the therapy setting.

An example of an assessment approach that facilitates the initial relationship between patient and therapist is the recent research and clinical application of the Early Memories Procedure. Fowler, Hilsenroth, and Handler (1995, 1996) have provided data that illustrate the power of specific early memories to predict the patient’s transference reaction to the therapist.

The Assessment Process Itself Can Be Therapeutic

Several psychologists have recently provided data that demonstrate the therapeutic effects of the assessment process itself, when it is conducted in a facilitative manner. The work of Finn (1996; Finn & Tonsager, 1992) and Fischer (1994) indicates that such assessment typically produces therapeutic results. The first author has developed a therapeutic assessment approach, used in ongoing treatment with children and adolescents, to determine whether therapeutic assessment changes are long-lasting.

Assessment Provides Professional Identity

There are many mental health specialists who do psychotherapy (e.g., psychologists, psychiatrists, social workers, marriage and family counselors, ministers), but only psychologists are trained to do assessment. Possession of this skill allows us to be called upon by other professionals in the mental health area, as well as by school personnel, physicians, attorneys, the court, government, and even by business and industry, to provide evaluations.

Assessment Reflects Patients’ Relationship Problems

More and more attention has been placed on the need for assessment devices to evaluate couples and families. New measures have been developed, and several traditional measures have been used in unique ways, to illuminate relational patterns for therapists and couples. Measures range from pencil-and-paper tests of marital satisfaction to projective measures of relational patterns that include an analysis of a person’s interest in, feelings about, and cognitive conceptualizations of relationships, as well as measures of the quality of relationships established.

The Rorschach and several selected Wechsler verbal subtests have been used in a unique manner to illustrate the pattern and style of the interaction between or among participants. The Rorschach or the WAIS subtests are given to each person separately. The participants are then asked to retake the test together, but this time they are asked to produce an answer (on the WAIS; e.g., Handler & Sheinbein, 1987) or responses on the Rorschach (e.g., Handler, 1997) upon which they both agree. The quality of the interaction and the outcome of the collaboration are evaluated. People taking the test can get a realistic picture of their interaction and its consequences, which they often report are similar to their interactions in everyday relationships.

Personality Assessment Helps Psychologists Arrive at a Diagnosis

Assessment provides information to make a variety of diagnostic statements, including a Diagnostic and Statistical Manual (DSM) diagnosis. Whether the diagnosis includes descriptive factors, cognitive and affective factors, interaction patterns, level of ego functions, process aspects, object relations factors, or other dynamic aspects of functioning, it is an informed and comprehensive diagnosis, with or without a diagnostic label.

Assessment Is Used in Work-Related Settings

There is a huge literature on the use of personality assessment in the workplace. Many studies deal with vocational choice or preference, using personality assessment instruments (e.g., Krakowski, 1984; Muhlenkamp & Parsons, 1972; Rezler & Buckley, 1977), and there is a large literature in which personality assessment is used as an integral part of the study of individuals in work-related settings and in the selection and promotion of workers (Barrick & Mount, 1991; Tett, Jackson, & Rothstein, 1991).

Assessment Is Used in Forensic and Medical Settings

Psychologists are frequently asked to evaluate people for a wide variety of domestic, legal, or medical problems. Assessments are often used in criminal cases to determine the person’s ability to understand the charges brought against him or her, or to determine whether the person is competent to stand trial or is malingering to avoid criminal responsibility.

Assessments are also requested by physicians and insurance company representatives to determine the emotional correlates of various physical disease processes or to help differentiate between symptoms caused by medical or by emotional disorders. There is now an emphasis on the biopsychosocial approach, in which personality assessment can target emotional factors along with the physical problems that are involved in the person’s total functioning. In addition, psychoneuroimmunology, a term that focuses on complex mind-body relationships, has spawned new psychological assessment instruments. There has been a significant increase in the psychological aspects of various health-related issues (e.g., smoking cessation, medical compliance, chronic pain, recovery from surgery). Personality assessment has become an integral part of this health psychology movement (Handler & Meyer, 1998).

Assessment Procedures Are Used in Research

Assessment techniques are used to test a variety of theories or hypothesized relationships. Psychologists search among a large array of available tests for assessment tools to quantify the variables of interest to them. There are now at least three excellent journals in the United States as well as some excellent journals published abroad that are devoted to research in assessment.

Assessment Is Used to Evaluate the Effectiveness of Psychotherapy

In the future, assessment procedures will be important to ensure continuous improvement of psychotherapy through more adequate treatment planning and outcome assessment. Maruish (1999) discusses the application of test-based assessment in Continuous Quality Improvement, a movement to plan treatment and systematically measure improvement. Psychologists can play a major role in the future delivery of mental health services because their assessment instruments can quickly and economically highlight problems that require attention and can assist in selecting the most cost-effective, appropriate treatment (Maruish, 1990). Such evidence will also be necessary to convince legislators that psychotherapy services are effective. Maruish believes that our psychometrically sound measures, which are sensitive to changes in symptomatology and are administered pre- and posttreatment, can help psychology demonstrate treatment effectiveness. In addition, F. Newman (1991) described a way in which personality assessment data, initially used to determine progress or outcome, “can be related to treatment approach, costs, or reimbursement criteria, and can provide objective support for decisions regarding continuation of treatment, discharge, or referral to another type of treatment” (Maruish, 1999, p. 15).

Assessment Is Important in Risk Management

Assessment can substantially reduce many of the potential legal liabilities involved in the provision of psychological services (Bennet, Bryan, VandenBos, & Greenwood, 1990; Schutz, 1982); for example, providers might perform routine baseline assessments of their psychotherapy patients’ initial level of distress and of personality functioning (Meyer et al., 2000).

Problems of Learning Personality Assessment: The Student Speaks

The first assessment course typically focuses on teaching students to give a confusing array of tests. Advanced courses are either didactic or are taught by the use of a group process model in which hypothesis generation and data integration are learned. With this model, depression, anxiety, ambivalence, and similar words take on new meaning for students when they are faced with the task of integrating personality assessment data. These words not only define symptoms seen in patients, but they also define students’ experiences.

Early in their training, students are often amazed at the unique responses given to the most obvious test stimuli. Training in assessment is about experiencing for oneself what it is like to be with patients in a variety of situations, both fascinating and unpleasant, and what it is like to get a glimpse of someone else’s inner world. Fowler (1998) describes students’ early experience in learning assessment with the metaphor of being in a “psychic nudist colony.” With this metaphor he is referring to the realization of the students that much of what they say or do reveals to others and to themselves otherwise private features of their personality. No further description was necessary in order for the second author (Clemence) to realize that she and Fowler shared a common experience during their assessment training. However, despite the feeling that one can no longer ensure the privacy of one’s inner world, or perhaps because of this, the first few years of training in personality assessment can become an incredibly profound educational experience. If nothing else, students can learn something many of them could perhaps learn nowhere else—what it is like to feel examined and assessed from all angles, often against their will. This approach to learning certainly allows students to become more empathic and sensitive to their patients’ insecurities throughout the assessment procedure. Likewise, training in assessment has the potential to greatly enrich one’s ability to be with clients during psychotherapy. Trainees learn how to observe subtleties in behavior, how to sit through uncomfortable moments with their patients, and how to endure scrutiny by them as well.

Such learning is enhanced if students learn assessment in a safe environment, such as a group learning class, to be described later in this research paper. However, with the use of this model there is the strange sense that our interpretation of the data may also say something about ourselves and our competence in relation to our peers. Are we revealing part of our inner experience that we would prefer to keep hidden, or at least would like to have some control over revealing?

Although initially one cannot escape scrutiny, eventually there is no need to do so. With proper training, students will develop the ability to separate their personal concerns and feelings from those of their patients, which is an important step in becoming a competent clinician. Much of their ignorance melts away as they develop increased ability to be excited about their work in assessment. This then frees students to wonder about their own contributions to the assessment experience. They wonder what they are projecting onto the data that might not belong there. Fortunately, in the group learning model, students have others to help keep them in check. Hearing different views of the data helps to keep projections at a minimum and helps students recognize the many different levels at which the data can be understood. It is certainly a more enriching experience when students are allowed to learn from different perspectives than it is when one is left on one’s own to digest material taught in a lecture.

The didactic approach leaves much room for erroneous interpretation of the material once students are on their own and are trying to make sense of the techniques discussed in class. This style of learning encourages students to be more dependent on the instructor’s method of interpretation, whereas group learning fosters the interpretative abilities of individual students by giving each a chance to confirm or to disconfirm the adequacy of his or her own hypothesis-building process. This is an important step in the development of students’ personal assessment styles, which is missed in the didactic learning model. Furthermore, in the didactic learning model it is more difficult for the instructor to know whether the pace of teaching or the material being taught is appropriate for the skill level of the students, whereas the group learning model allows the instructor to set a pace matched to the students’ abilities and expectations for learning.

During my (Clemence) experience in a group learning environment, what became increasingly more important over time was the support we received from learning as a group. Some students seemed to be more comfortable consulting with peers than risking the instructor’s criticism upon revealing a lack of understanding. We also had the skills to continue our training when the instructor was not available. Someone from the group was often nearby for consultation and discussion, and this proved quite valuable during times when one of us had doubts about our approach or our responsibilities.

After several classes in personality assessment and after doing six or seven practice assessments, students typically feel they are beginning to acquire the skills necessary to complete an assessment, until their supervisor asks them to schedule a feedback session with the patient. Suddenly, newfound feelings of triumph and mastery turn again into fear and confusion, because students find it awkward and discomforting to be put in a position of having to reveal to the patient negative aspects of his or her functioning. How do new students communicate such disturbing and seemingly unsettling information to another person? How can the patient ever understand what it has taken the student 2–3 years to even begin to understand? Students fear that it will surely devastate someone to hear he or she has a thought disorder or inadequate reality testing. However, when the emphasis of assessment (as in a therapeutic assessment approach) is on the facilitation of the client’s questions about him- or herself, in addition to the referral question(s), this seemingly hopeless bind becomes much less of a problem. This approach makes the patient an active participant in the feedback process.

Problems of Teaching Personality Assessment: The Instructor Speaks

The problems encountered in teaching the initial assessment course, in which the emphasis is on learning the administration and scoring of various instruments, are different from those involved in teaching an advanced course, in which assessment of patients is the focus and the primary issue is integration of data. It must be made clear that the eventual goal is to master the integration of diverse data.

The instructor should provide information about many tests, while still giving students enough practice with each instrument. However, there may only be time to demonstrate some tests or have the student read about others. The instructor should introduce each new test by describing its relevance to an assessment battery, discussing what it offers that other tests do not offer. Instructors should resist students’ efforts to ask for cookbook interpretations. Students often ask what each variable means. The response to the question of meaning is a point where the instructor can begin shifting from a test-based approach to one in which each variable is seen in context with many others.

Learning to do assessment is inherently more difficult for students than learning to do psychotherapy, because the former activity does not allow for continued evaluation of hypotheses. In contrast, the therapeutic process allows for continued discussion, clarification, and reformulation of hypotheses, over time, with the collaboration of the patient. This problem is frightening to students, because they fear making interpretive errors in this brief contact with the patient. More than anything else they are concerned that their inexperience will cause them to harm the patient. Their task is monumental: They must master test administration while also being empathic to patient needs, and their learning curve must be rapid. At the same time they must also master test interpretation and data integration, report writing, and the feedback process.

Sometimes students feel an allegiance to the patient, and the instructor might be seen as callous because he or she does not feel this personal allegiance or identification. Students’ attitudes in this regard must be explored, in a patient, nonconfrontational manner. Otherwise, the students might struggle to maintain their allegiance with the patient and might turn against learning assessment.

Like some experienced clinicians who advocate an actuarial approach, many students resist learning assessment because it requires them to rely on intuitive processes, albeit those of disciplined intuition, and because they fear expressing their own conflicts in the process rather than explaining those of the patient. Students also find frightening their newfound responsibilities of evaluating and diagnosing the patients they see and of committing their conclusions to paper. As one former student put it, “Self-doubt, anxiety, fear, and misguided optimism are but a few defenses that cropped up during our personality assessment seminar” (Fowler, 1998, p. 34).

Typically, students avoid committing themselves to sharply crafted, specific interpretations, even though they are told by the instructor that these are only hypotheses to try out. Instead, they resort to vague Barnum statements, statements true of most human beings (e.g., “This patient typically becomes anxious when under stress”). Students also often refuse to recognize pathology, even when it is blatantly apparent in the test data, ignoring it or reinterpreting it in a much less serious manner. They feel the instructor is overpathologizing the patient. The instructor should not challenge these defenses directly but instead should explore them in a patient, supportive manner, helping to provide additional clarifying data and trying to understand the source of the resistance. There is a large body of literature concerning these resistances in learning assessment (e.g., Berg, 1984; Schafer, 1967; Sugarman, 1981, 1991). Time must also be made available outside the classroom for consultation with the instructor, as well as making use of assessment supervisors. Most of all, students who are just learning to integrate test data need a great deal of encouragement and support of their efforts. They also find it helpful when the instructor verbalizes an awareness of the difficulties involved in this type of learning.

Learning to Interview

All too often the importance of interviewing is ignored in doctoral training programs. Sometimes it is taken for granted that a student will already know how to approach a person who comes for assessment in order to obtain relevant information. In the old days this was the role of the social worker, who then passed the patient on for assessment. We prefer the system in which the person who does the assessment also does the interview before any tests are given, since the interview is part of the assessment. In this way rapport can be built, so that the actual testing session is less stressful. Just as important, however, is that the assessor will have a great deal of information and impressions that can be used as a reference in the interpretation of the other data. Test responses take on additional important meaning when seen in reference to history data.

There are many ways to teach interviewing skills. In the interviewing class taught by the first author (Handler), students first practice using role playing and psychodrama techniques. Then they conduct videotaped interviews with student volunteers, and their interviews are watched and discussed by the class. Students learn to identify latent emotions produced in the interview, to handle their anxiety in productive ways, to manage the interviewee’s anxiety, to go beyond mere chitchat with the interviewee, and to facilitate meaningful conversation. Students also learn to examine relevant life issues of the people they interview; to conceptualize these issues and describe them in a report; to ask open-ended questions rather than closed-ended questions, which can be answered with a brief “yes” or “no”; to reflect the person’s feelings; and to encourage more open discussion.

There are many types of clinical interviews one might teach, depending upon one’s theoretical orientation, but this course should be designed to focus on interviewing aspects that are probably of universal importance. Students should know that in its application the interview can be changed and modified, depending on its purpose and on the theoretical orientation of the interviewer.

The Importance of Reliability and Validity

It is essential when teaching students about the use of assessment instruments that one also teaches them the importance of sound psychometric properties for any measure used. By learning what qualities make an instrument useful and meaningful, students can be more discerning when confronted with new instruments or modifications of traditional measures. “In the absence of additional interpretive data, a raw score on any psychological test is meaningless” (Anastasi & Urbina, 1998, p. 67). This statement attests to the true importance of gathering appropriate normative data for all assessment instruments. Without a reference sample with which to compare individual scores, a single raw score tells the examiner little of scientific value. Likewise, information concerning the reliability of a measure is essential in understanding each individual score that is generated. A reliable measure allows the examiner to interpret variations in scores with greater accuracy, because differences between scores are then more likely to result from individual differences than from measurement error (Nunnally & Bernstein, 1994). Furthermore, reliability is a necessary condition for an instrument to be valid.
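The link between reliability and measurement error can be made concrete with the classical test theory formula for the standard error of measurement, SEM = SD × √(1 − r). The brief Python sketch below is only an illustration of that standard formula; the scale values used (SD = 15, reliability .90) are hypothetical and not drawn from any particular instrument.

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical test theory: SEM = SD * sqrt(1 - r).

    The SEM estimates how much observed scores are expected to vary
    around a person's true score because of measurement error alone.
    """
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical scale with SD = 15 and reliability coefficient .90
sem = standard_error_of_measurement(15.0, 0.90)

# An approximate 95% confidence band around an observed score
band = 1.96 * sem
```

As reliability approaches 1.0 the SEM shrinks toward zero, which is why score differences on a highly reliable measure can be interpreted with more confidence.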

The assessment instruments considered most useful are those that accurately measure the constructs they intend to measure, demonstrating both sensitivity, the true positive rate of identification of individuals with a particular trait or pattern, and specificity, the true negative rate of identification of individuals who do not have the personality trait being studied. In addition, the overall correct classification, the hit rate, indicates how accurately test scores classify both individuals who meet the criteria for the specific trait and those who do not. A measure can demonstrate a high degree of sensitivity but low specificity, or an inability to correctly exclude those individuals who do not meet the construct definition. When this occurs, the target variable is consistently correctly classified, but cases that do not truly fit the construct definition are also swept into the classification. As a result, many false positives will be included along with the correctly classified cases, and the precision of the measure suffers. Therefore, it is important to consider both the sensitivity and the specificity of any measure being used. One can then better understand the possible meanings of the findings.
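These rates follow directly from the four cells of a 2 × 2 classification table. The minimal Python sketch below simply encodes the definitions given above; the function name and the sample cell counts are hypothetical, chosen only to illustrate how a test can have high sensitivity yet a lower specificity.

```python
def classification_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, and overall hit rate from a
    2x2 table of true/false positives and true/false negatives."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    hit_rate = (tp + tn) / (tp + fp + tn + fn)   # overall correct classification
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "hit_rate": hit_rate,
    }

# Hypothetical validation sample of 200 people:
# 40 true positives, 10 false negatives, 130 true negatives, 20 false positives
rates = classification_rates(tp=40, fp=20, tn=130, fn=10)
```

In this made-up sample the measure identifies 80% of those who have the trait (sensitivity) but also mislabels 20 people who do not, so the hit rate alone would overstate its precision.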

Teaching an Introductory Course in Personality Assessment

Given that students have had an adequate course in psychometrics, the next typical step in training is an introductory course in assessment, in which they learn the many details of test administration, scoring, and initial interpretation. Assessment is taught quite differently in doctoral programs throughout the country. As mentioned previously, in some programs testing is actually taught, but the course is labeled assessment. In some programs this course is taught entirely as a survey course; students do little or no practice testing, scoring, or interpretation (Childs & Eyde, 2002; Durand, Blanchard, & Mindell, 1988; Hilsenroth & Handler, 1995). We believe this is a grave error, because each assessment course builds on the previous one(s). A great deal can be learned about assessment from reading textbooks and test manuals, but there is no substitute for practical experience.

Some doctoral training programs require only one assessment course in which there is actual practice with various tests. Many other programs have two courses in their curriculum but require only one, whereas other programs require two courses. In some programs only self-report measures are taught, and in others only projective measures are taught. In some programs there are optional courses available, and in others no such opportunities exist. The variability of the required and optional personality assessment courses in training programs is astounding, especially since assessment is a key area of proficiency, required by the American Psychological Association (APA) for program accreditation. In our opinion, students cannot become well grounded in assessment unless they learn interviewing skills and have taken both an introductory course focused on the administration and scoring of individual tests and an advanced course focused on the integration of assessment data and their communication to referral sources and to the person who took the tests.

Many times the required assessment courses are determined by a prevailing theoretical emphasis in the program. In these settings, assessment techniques chosen for study are limited to those instruments that are believed to fit the prevailing point of view. This is unfortunate, because students should be exposed to a wide variety of instruments and approaches to personality assessment, and because no instrument belongs to a particular theoretical approach; each test can be interpreted from a wide variety of theoretical viewpoints.

Some programs do not include the training of students in assessment as one of their missions, despite the APA requirement. Instead, they believe that the responsibility for teaching personality assessment lies with the internship site. Relegating this important area of clinical experience to the internship is a bad idea, because students learn under a great deal of pressure in these settings, pressure far greater than that of graduate school. Learning assessment in this type of pressured environment is truly a trial by fire.

Most students do not know the history of the testing and assessment movement and the relevance of assessment to clinical psychology. We recommend that this information be shared with students, along with the long list of reasons to learn assessment, which was discussed earlier in this research paper, and the reasons some psychologists eschew assessment.

The necessary emphasis on each test as a separate entity in the first course must eventually give way to a more integrated approach. In addition, although it is necessary to teach students to administer tests according to standardized instructions, they must also be introduced to the idea that in some cases it will not be possible or perhaps advisable to follow standardized instructions. They must also be helped to see that test scores derived in a nonstandardized manner are not necessarily invalid. Although they should be urged to follow the standardized procedures whenever possible, modifying instructions can sometimes help students understand the patient better.

We believe that it is important to draw students’ attention to the similarities and differences among the tests, emphasizing the details of the stimuli, the ability of different tests to tap similar factors, the style of administration, and so on. Students should be taught the relevance of the variables they are measuring and scoring for each test. Otherwise, their administration is often rote and meaningless. For example, it makes little sense to students to learn to do a Rorschach Inquiry if they are not first acquainted with the relevance of the variables scored. Therefore, conceptualization of the perceptual, communicative, and representational aspects of perceiving the inkblots, and any other stimuli, for that matter, must first be discussed. We recommend beginning with stimuli other than the test stimuli, in order to demonstrate that aspects of the stimuli to which we ask patients to respond are no different from aspects of ordinary, real-life stimuli.

In our opinion, the most important function of this first course is to discuss the reasons each test was chosen to be studied and to help students become proficient in the administration, scoring, and initial interpretation of each test. Once students have mastered test administration, the instructor should begin to emphasize the establishment of rapport with the patient, which involves knowing the directions well enough to focus on the patient rather than on one’s manual.

The introductory course usually has an assigned laboratory section, in which students practice with volunteer subjects to improve proficiency. Checkouts with volunteer subjects or with the instructor are routine. Students must be able to administer the tests smoothly and in an error-free manner and then score them properly before moving on to the next course.

In many programs students are required to administer, score, and begin to interpret several of each test they are learning. The number of practice protocols varies considerably, but it is typical to require two or three, depending on each student’s level of proficiency. In the classroom there should be discussion of the psychometric properties and the research findings for each test and a discussion of the systematic administration and scoring errors produced by students.

Students should be taught that each type of data collected in an assessment has its strengths and its weaknesses. For example, observational and history data are especially helpful in assessment, but these sources can also be quite misleading. Anyone who has done marital therapy or custody evaluations has experienced a situation in which each spouse’s story sounds quite plausible, but the husband and the wife tell opposite stories. Such are the limitations of history and observational data. People typically act differently in different situations, and they interpret their behaviors and intentions, and the behaviors and intentions of others, from their own biased vantage points. It soon becomes obvious that additional methods of understanding people are necessary in order to avoid the types of errors described above. Adding test data to the history and observational data should increase the accuracy of the assessment and can allow access to other key variables involved in knowing another person. However, test-derived data also contain sources of error, and at times they are also distorted by extratest effects or by impression management attempts, but many tests include systematic methods of determining test-taking attitude and the kind and degree of impression management attempted. Students should be taught that because no assessment method is error-free and no test, by itself, is comprehensive, it is important to use a number of assessment methods and a number of different types of tests and to aggregate and integrate them in order to answer referral questions adequately and to obtain a meaningful picture of the person assessed. This orientation leads the students directly to the advanced assessment course.

Teaching an Advanced Course in Personality Assessment

What follows is a description of an advanced course in personality assessment much like the one taught by the first author (Handler). We will present this model to the reader for consideration because it is based on data culled from work on creative reasoning processes and is supported by research. In addition, we have added the use of integration approaches based on the use of metaphor, as well as an approach with which to facilitate empathic attunement with the patient. To this experiential approach we have also added an approach that asks the interpreter to imagine interacting with the person who produced the test results.

A second important reason we have used the following description as a suggested model is that the model can be used with any test battery the instructor wishes to teach, because the approach is not test specific. We suggest that the reader attempt to use this model in communicating integrative and contextual approaches to assessment teaching, modifying and tailoring the approach to fit individual needs and style.

Nevertheless, we recognize that this approach will not be suitable in its entirety for some clinicians who teach personality assessment. However, readers should feel free to use any part or parts of this model that are consistent with their theoretical point of view and their preferred interpretive style. We believe the approach described here can be of use to those with an emphasis on intuition, as well as to those who prefer a more objective approach, because the heart of the approach to data integration is the use of convergent and divergent reasoning processes. This approach can be applicable to self-report data as well as to projective test data. Indeed, in the class described, the first author models the same approaches to the interpretation of the MMPI-2 and the Personality Assessment Inventory (PAI), for example, as to the Rorschach and the Thematic Apperception Test (TAT).

In this second course, students typically begin assessing patients. They must now focus on using their own judgment and intuitive skills to make interpretations and to integrate data. The task now, as we proceed, is the use of higher-level integrative approaches to create an accurate picture of the person they are assessing. The instructor should describe the changed focus and the difficult and complex problem of interpretation, along with the assurance that students will be able to master the process. Nevertheless, students are typically quite anxious, because interpretation places novel demands on them; for the first time they are being placed in a position of authority as experts and are being called upon to use themselves as an assessment tool. They have difficulty in the integration of experiential data and objective data, such as test scores and ratios. The complexity of the data is often overwhelming, and this pressure often leads students to search instead for cookbook answers.

With no attention to the interpretive process, students make low-level interpretations; they stay too close to the data, and therefore little meaningful integration is achieved. Hypotheses generated from this incomplete interpretive process are mere laundry lists of disconnected and often meaningless technical jargon. An approach is needed that systematically focuses on helping students develop meaningful interpretations and on the integration of these interpretations to produce a meaningful report (Handler, Fowler, & Hilsenroth, 1998).

Emphasis is now placed on the communication of the experiential and cognitive aspects involved in the process of interpretation. Students are told that the interpretive process is systematized at each step of their learning, that each step will be described in detail, and that the focus will be on the development of an experience-near picture of the person assessed. First they observe the instructor making interpretations from assessment data. In the next step the focus is on group interpretation, to be described subsequently. Next, the student does the interpretation and integration with the help of a supervisor and then writes a report free of technical jargon, responding to the referral questions. Reports are returned to the students with detailed comments about integration, style, accuracy, and about how well the referral questions were answered. The students rewrite or correct them and return them to the instructor for review.

The group interpretation focuses on protocols collected by students in their clinical setting. Only the student who did the assessment knows the referral issue, the history, and any other relevant information. The remainder of the class and the instructor are ignorant of all details. Only age and gender are supplied.

Tests typically included in many test batteries include the WAIS-III, the Symptom Checklist-90-Revised (SCL-90-R), the MMPI-2, the PAI, the Bender Gestalt, a sentence completion test, figure drawings, the Rorschach, the TAT, a variety of self-report depression and anxiety measures, and early memories. However, instructors might add or delete tests depending upon their interests and the students’ interests. Although this is much more than a full battery, these tests are included to give students wide exposure to many instruments.

The instructor describes various systematic ways in which one can interpret and integrate the data. The first two methods are derived from research in creativity. The first, divergent thinking, is derived from measures of creativity that ask a person to come up with as many ways as he or she can in which a specific object, such as a piece of string or a box, can be used. Those who find many novel uses for the object are said to be creative (Torrance, 1966, 1974; Williams, 1980). Handler and Finley (1994) found that people who scored high on tests of divergent thinking were significantly better Draw-a-Person (DAP) interpreters than those who were low on divergent thinking. (Degree of accuracy in the interpretation of the DAP protocols was determined by first generating a list of questions about three drawings, each list generated from an interview with that person’s therapist.) The participants were asked to look at each drawing and to mark each specific statement as either true or false. This approach asks students to come up with more than one interpretation for each observation or group of observations of the data.

Rather than seeking only one isolated interpretation for a specific test response, students are able to see that several interpretations might fit the data, and that although one of these might be the best choice as a hypothesis, it is also possible that several interpretations can fit the data simultaneously. This approach is especially useful in preventing students from ignoring possible alternatives and in helping them avoid the problem of confirmatory bias: ignoring data that do not fit the hypothesis and selecting data that confirm the initial hypothesis. Gradually, the students interpret larger and larger pieces of data by searching for additional possibilities, because they understand that it is premature to focus on certainty.

The second interpretive method based on creativity research is called convergent thinking. It asks how different bits of information can be brought together so that they reflect something unique and quite different from any of the pieces but are related to those pieces. Convergent thinking has been measured by the Remote Associates Test (RAT; Mednick & Mednick, 1967), in which the respondent is asked to come up with a word that is related in some way to three other presented stimulus words. For example, for the following three words: “base,” “round,” and “dance,” the correct answer is “ball.” The interpretive process concerns “seeing relationships among seemingly mutually remote ideas” (Mednick & Mednick, 1967, p. 4). This is essentially the same type of task that is required in effective assessment interpretation, in which diverse pieces of data are fitted together to create an interpretive hypothesis. Burley and Handler (1997) found that the RAT significantly differentiated good and poor DAP interpreters (determined as in the Handler and Finley study cited earlier) in groups of undergraduate students and in a group of graduate students in clinical psychology.

A helpful teaching heuristic in the interpretive process is the use of the metaphor (Hilsenroth, 1998), in which students are taught to offer an interpretive response as though it were an expression of the patient’s experience. They are asked to summarize the essential needs, wishes, expectations, major beliefs, and unresolved issues of the patient through the use of a short declarative statement, typically beginning with “I wish,” “I feel,” “I think,” “I want,” or “I am.” This “metaphor of the self” facilitates interpretation because it allows for a quick and easy way to frame the response to empathize vicariously with the patient. When this approach is combined with the cognitive approaches of divergent and convergent thinking, students generate meaningful hypotheses not only about self-experience, but also about how others might experience the patient in other settings. To facilitate this latter approach, students are asked how they would feel interacting with the patient who gave a certain response if they met the person at a party or in some other interpersonal setting (Potash, 1998).

At first students focus on individual findings, gradually branching out to include patterns of data from a series of responses, and finally integrating these interpretations across various tests. Initial attempts at interpretation are little more than observations, couched as interpretations, such as “This response is an F-”; “She drew her hands behind her back”; “He forgot to say how the person was feeling in this TAT story.” The student is surprised when the instructor states that the interpretation was merely an observation. To discourage this descriptive approach the instructor typically asks the student to tell all the things that such an observation could mean, thereby encouraging divergent thinking.

At the next level, students typically begin to shift their interpretations to a somewhat less descriptive approach, but the interpretations are still test based, rather than being psychologically relevant. Examples of this type of interpretation are “She seems to be experiencing anxiety on this card” and “The patient seems to oscillate between being too abstract and too concrete on the WAIS-III.” Again, the instructor asks the student to generate a psychologically relevant interpretation concerning the meaning of this observation in reference to the person’s life issues, or in reference to the data we have already processed.

Efforts are made to sharpen and focus interpretations. Other students are asked to help by attempting to clarify and focus a student’s overly general interpretation, and often a discussion ensues among several students to further define the original interpretation. The instructor focuses the questions to facilitate the process. The task here is to model the generation of detailed, specific hypotheses that can be validated once we have completed all the interpretation and integration of the data.

Whenever a segment of the data begins to build a picture of the person tested, students are asked to separately commit themselves to paper in class by writing a paragraph that summarizes and integrates the data available so far. The act of committing their interpretations to paper forces students to focus and to be responsible for what they write. They are impressed with each other’s work and typically find that several people have focused on additional interpretations they had not noticed.

Anyone who uses this teaching format will inevitably encounter resistance from students who have been trained to stick closely to empirical findings. Sometimes a student will feel the class is engaging in reckless and irresponsible activities, and/or that they are saying negative and harmful things about people, without evidence. It is necessary to patiently but persistently work through these defensive barriers. It is also sometimes frightening for students to experience blatant pathology so closely that it becomes necessary to back away from interpretation and, perhaps, to condemn the entire process.

The instructor should be extremely supportive and facilitative, offering hints when a student feels stuck and a helpful direction when the student cannot proceed further. The entire class becomes a protective and encouraging environment, offering suggestions, ideas for rephrasing, and a great deal of praise for effort expended and for successful interpretations. It is also important to empower students, reassuring them that they are on the correct path and that even at this early stage they are doing especially creative work. Students are also introduced to relatively new material concerning the problem of test integration. The work of Beutler and Berren (1995), Ganellen (1996), Handler et al. (1998), Meyer (1997), and Weiner (1998) has focused on different aspects of this issue.

Once the entire record is processed and a list of specific hypotheses is recorded, the student who did the assessment tells the class about the patient, including history, presenting problem(s), pattern and style of interaction, and so forth. Each hypothesis generated is classified as “correct,” “incorrect,” or “cannot say,” because of lack of information. Typically, correct responses range from 90 to 95%, with only one or two “incorrect” hypotheses and one or two “cannot say” responses.

In this advanced course students might complete three reports. They should continue to do additional supervised assessments in their program’s training clinic and, later, in their clinical placements throughout the remainder of their university training.

Improving Assessment Results Through Modification of Administration Procedures

Students learning assessment are curious about ways to improve the accuracy of their interpretations, but they nevertheless adhere strictly to standardized approaches to administration, even when, in some situations, these approaches result in a distortion of findings. They argue long, hard, and sometimes persuasively that it is wrong to modify standardized procedures, for any reason. However, we believe that at certain times changing standardized instructions will often yield data that are a more accurate measure of the individual than would occur with reliance on standardized instructions. For example, a rather suspicious man was being tested with the WAIS-R. He stated that an orange and a banana were not alike and continued in this fashion for the other pairs of items. The examiner then reassured him that there really was a way in which the pairs of items were alike and that there was no trick involved. The patient then responded correctly to almost all of the items, earning an excellent score. When we discuss this alteration in the instructions, students express concern about how the examiner would score the subtest results. The response of the instructor is that the students are placing the emphasis in the wrong area: They are more interested in the test and less in the patient. If the standardized score was reported, it would also not give an accurate measure of this patient’s intelligence or of his emotional problems. Instead, the change in instructions can be described in the report, along with a statement that says something like, “The patient’s level of suspicion interferes with his cognitive effectiveness, but with some support and assurance he can give up this stance and be more effective.”

Students are also reluctant to modify standardized instructions by merely adding additional tasks after standardized instructions are followed. For example, the first author typically recommends that students ask patients what they thought of each test they took, how they felt about it, what they liked and disliked about it, and so on. This approach helps in the interpretation of the test results by clarifying the attitude and approach the patient took to the task, which perhaps have affected the results. The first author has designed a systematic Testing of the Limits procedure, based on the method first employed by Bruno Klopfer (Klopfer, Ainsworth, Klopfer, & Holt, 1954). In this method the patient is questioned to amplify the meanings of his or her responses and to gain information about his or her expectations and attitudes about the various tests and subtests. This information helps put the responses and the scores in perspective. For example, when a patient gave the response, “A butterfly coming out of an iceberg” to Card VII of the Rorschach, he was asked, after the test had been completed, “What’s that butterfly doing coming out of that iceberg?” The patient responded, “That response sounds kind of crazy; I guess I saw a butterfly and an iceberg. I must have been nervous; they don’t actually belong together.” This patient recognized the cognitive distortion he apparently experienced and was able to explain the reason for it and correct it. Therefore, this response speaks to a less serious condition, compared with a patient who could not recognize that he or she had produced the cognitive slip. Indeed, later on, the patient could typically recognize when he had made similar cognitive misperceptions, and he was able to correct them, as he had done in the assessment.

Other suggestions include asking patients to comment on their responses or asking them to amplify these responses, such as amplifying various aspects of their figure drawings and Bender Gestalt productions, their Rorschach and TAT responses, and the critical items on self-report measures. These amplifications of test responses reduce interpretive errors by providing clarification of responses.

Teaching Students How to Construct an Assessment Battery

Important sources of information will of course come from an interview with the patient and possibly with members of his or her family. Important history data and observations from these contacts form a significant core of data, enriched, perhaps, by information derived from other case records and from referral sources. In our clinical setting patients take the SCL-90-R before the intake interview. This self-report instrument allows the interviewer to note those physical and emotional symptoms or problems the patients endorse as particularly difficult problems for them. This information is typically quite useful in structuring at least part of the interview. The construction of a comprehensive assessment battery is typically the next step.

What constitutes a comprehensive assessment battery differs from setting to setting. Certainly, adherents of the five-factor model would construct an assessment battery differently than someone whose theoretical focus is object relations. However, there are issues involved in assessment approaches that are far more important than one’s theoretical orientation. No test is necessarily tied to any one theory. Rather, it is the clinician who interprets the test who may imbue it with a particular theory.

It is difficult to describe a single test battery that would be appropriate for everyone, because referral questions vary, as do assessment settings and their requirements; physical and emotional needs, educational and intellectual levels, and cultural issues might require the use of somewhat different instruments. Nevertheless, there are a number of guiding principles used to help students construct a comprehensive assessment battery, which can and should be varied given the issues described above.

Beutler and Berren (1995) compare test selection and administration in assessment to doing research. They view each test as an “analogue environment” to be presented to the patient. In this process the clinician should ask which types of environments should be selected in each case. The instructions of each test or subtest are the clinician’s way of manipulating these analogue environments and presenting them to the patient. Responding to analogue environments is made easier or more difficult as the degree of structure changes from highly structured to ambiguous or vague. Some people do much better in a highly structured environment, and some do worse.

Assessment is typically a stressful experience because the examiner constantly asks the patient to respond in a certain manner or in a certain format, as per the test instructions. When the format is unstructured there is sometimes less stress because the patient has many options in the way in which he or she can respond. However, there are marked differences in the ways that people experience this openness. For some people a vague or open format is gratifying, and for others it is terrifying. For this reason it is helpful to inquire about the patient’s experience with each format, to determine its effect.

Beutler and Berren make another important point in reference to test selection: Some tests are measures of enduring internal qualities (traits), whereas others tap more transitory aspects of functioning (states), which differ for an individual from one situation to another. The clinician’s job is to determine which test results are measuring states and which reflect traits. When a specific test in some way resembles some aspects of the patient’s actual living environment, we can assume that his or her response will be similar to the person’s response in the real-world setting (Beutler & Berren, 1995). The assessor can often observe these responses, which we call stylistic aspects of a person’s personality.

One question to be answered is whether this approach is typical of the patient’s performance in certain settings in the environment, whether it is due to the way in which the person views this particular task (or the entire assessment), or whether it is due to one or more underlying personality problems, elicited by the test situation itself. It is in part for this reason that students are taught to carefully record verbatim exactly what the patient answers, the extratest responses (e.g., side comments, emotional expressions, etc.), and details of how each task was approached.

Important aspects of test choice are the research that supports the instrument, the ease of administration for the patient, and the ability of the test to tap specific aspects of personality functioning that other instruments do not tap. We will discuss choosing a comprehensive assessment battery next.

First, an intellectual measure should be included, even if the person’s intelligence level appears obvious, because it allows the assessor to estimate whether there is emotional interference in cognitive functioning. For this we recommend the WAIS-III or the WISC-III, although the use of various short forms is acceptable if time is an important factor. For people with language problems of one type or another, or for people whose learning opportunities have been atypical for any number of reasons (e.g., poverty, dyslexia, etc.), a nonverbal intelligence test might be substituted if an IQ measure is necessary. The Wechsler tests also offer many clues concerning personality functioning: the pattern of interaction with the examiner, the approach to the test, the patient’s attitude while taking it, the response content, the style and approach to the subtest items, and the response to success or failure. If these issues are not relevant for the particular referral questions, the examiner could certainly omit this test completely.

Additionally, one or more self-report inventories should be included, two if time permits. The MMPI-2 is an extremely well-researched instrument that can provide a great deal more information than the patient’s self-perception. Students are discouraged from using the descriptive printout and instead are asked to interpret the test using a more labor-intensive approach, examining the scores on the many supplementary scales and integrating them with other MMPI-2 data. The PAI is recommended because it yields estimates of adaptability and emotional health that are not defined merely as the absence of pathology, because it has several scales concerning treatment issues, and because it is psychometrically an extremely well-constructed scale. Other possible inventories include the Millon Clinical Multiaxial Inventory-III (MCMI-III), because it focuses on Axis II disorders, and the SCL-90-R or its abbreviated form, because it yields a comprehensive picture concerning the present physical and emotional symptoms the patient endorses. There are a host of other possible self-report measures that can be used, depending on the referral issues (e.g., the Beck Depression Inventory and the Beck Anxiety Inventory).

Several projective tests are suggested, again depending upon the referral questions and the presenting problems. It is helpful to use an array of projective tests that vary on a number of dimensions, to determine whether there are different patterns of functioning with different types of stimuli. We recommend a possible array of stimuli that range from those that are very simple and specific (e.g., the Bender Gestalt Test) to the opposite extreme, the DAP Test, because it is the only test in the battery in which there is no external guiding stimulus. Between these two extremes are the TAT, in which the stimuli are relatively clear-cut, and the Rorschach, in which the stimuli are vague and unstructured.

Although the research concerning the symbolic content in the interpretation of the Bender Gestalt Test (BG) is rather negative, the test nevertheless allows the assessor a view of the person’s stylistic approach to the rather simple task of copying the stimuli. The Rorschach is a multifaceted measure that may be used in an atheoretical manner, using the Comprehensive System (Exner, 1993), or it may be used in association with a number of theoretical approaches, including self psychology, object relations, ego psychology, and even Jungian psychology. In addition, many of the variables scored in the Exner system could very well be of interest to psychologists with a cognitive-behavioral approach. The Rorschach is a good choice as a projective instrument because it is multidimensional, tapping many areas of functioning, and because there has been a great deal of recent research that supports its validity (Baity & Hilsenroth, 1999; Ganellen, 1999; Kubeszyn et al., 2000; Meyer, 2000; Meyer, Riethmiller, Brooks, Benoit, & Handler, 2000; Meyer & Archer, 2001; Meyer & Handler, 1997; Viglione, 1999; Viglione & Hilsenroth, 2001; Weiner, 2001). There are also several well-validated Rorschach content scoring systems that were generated from research and have found application in clinical assessment as well (e.g., the Mutuality of Autonomy Scale, Urist, 1977; the Holt Primary Process Scale, Holt, 1977; the Rorschach Oral Dependency Scale, or ROD, Masling, Rabie, & Blondheim, 1967; and the Lerner Defense Scale, Lerner & Lerner, 1980).

The TAT is another instrument frequently used by psychologists that can be used with a variety of theoretical approaches. The TAT can be interpreted using content, style, and coherence variables. There are several interpretive systems for the TAT, but the systematic work of Cramer (1996) and Westen (1991a, 1991b; Westen, Lohr, Silk, Gold, & Kerber, 1990) seems most promising.

One assessment technique that might be new to some psychologists is the early memories technique, in which the assessor asks the patient for a specific early memory of mother, father, first day of school, eating or being fed, of a transitional object, and of feeling snug and warm (Fowler et al., 1995, 1996). This approach, which can also be used as part of an interview, has demonstrated utility for predicting details of the therapeutic relationship, and it correlates with a variety of other measures of object relations. The approach can be used with a wide variety of theoretical approaches, including various cognitive approaches (Bruhn, 1990, 1992).

Additional possible tests include various drawing tests (e.g., the DAP test and the Kinetic Family Drawing Test, or K-F-D). The research findings for these tests are not consistently supportive (Handler, 1996; Handler & Habenicht, 1994). However, many of the studies are not well conceived or well controlled (Handler & Habenicht, 1994; Riethmiller & Handler, 1997a, 1997b). The DAP and/or the K-F-D are nevertheless recommended for possible use for the following reasons:

  1. They are the only tests in which there is no standard stimulus to be placed before the patient. This lack of structure is an asset because it allows the examiner to observe organizing behavior in situations with no real external structure. Therefore, the DAP taps issues concerning the quality of internal structuring. Poor results are often obtained if the person tested has problems with identity or with the ability to organize self-related issues.
  2. Drawing tests are helpful if the person being assessed is not very verbal or communicative, because a minimum of talking is required in the administration.
  3. Drawing tests are quick and easy to administer.
  4. Drawings have been demonstrated to be excellent instruments to reflect changes in psychotherapy (Handler, 1996; Hartman & Fithian, 1972; Lewinsohn, 1965; Maloney & Glasser, 1982; Robins, Blatt, & Ford, 1991; Sarel, Sarel, & Berman, 1981; Yama, 1990).

Much of the research on drawing approaches is poorly conceived, focusing on single variables, taken out of context, and interpreted with a sign approach (Riethmiller & Handler, 1997a, 1997b). There is also confusion between the interpretation of distortions in the drawings that reflect pathology and those that reflect poor artistic ability. There are two ways to deal with these problems. The first is to use a control figure of equal task difficulty to identify problems due primarily to artistic ability. Handler and Reyher (1964, 1966) have developed such a control figure, the drawing of an automobile. In addition, sensitizing students to the distortions produced by people with pathology and comparing these with distortions produced by those with poor artistic ability helps students differentiate between those two situations (Handler & Riethmiller, 1998).

A sentence completion test (there are many different types) is a combination of a self-report measure and a projective test. The recommended version is the Miale-Holsopple Sentence Completion Test (Holsopple & Miale, 1954) because of the type of items employed. Patients are asked to complete a series of sentence stems in any way they wish. Most of the items are indirect, such as “Closer and closer there comes . . . ,” “A wild animal . . . ,” and “When fire starts . . . .” Sentence completion tests also provide information to be followed up in an interview.

Assessment and Cultural Diversity

No assessment education is complete without an understanding of the cultural and subcultural influences on assessment data. This is an important issue because often the effects of cultural variables may be misinterpreted as personality abnormality. Therefore, traditional tests might be inappropriate for some people, and for others adjustments in interpretation should be made by reference to cultural or subcultural norms. Students should recognize that it is unethical to use typical normative findings to evaluate members of other cultures unless data are available suggesting cross-cultural equivalence.

In many cases traditional test items are either irrelevant to the patient or have a different meaning from that intended. Often, merely translating a test into the patient’s language is not adequate because the test items or even the test format may still be inappropriate. Knowledge of various subgroups obtained from reading, consulting with colleagues, and interacting with members of the culture goes a long way to sensitize a person to the problems encountered in personality assessment with members of that subgroup. It is also important to understand the significant differences among various ethnic and cultural groups in what is considered normal or typical behavior. Cultural factors play a critical role in the expression of psychopathology; unless this context is understood, it is not possible to make an accurate assessment of the patient. The instructor should introduce examples of variations in test performance from members of different cultural groups. For example, figure drawings obtained from children in different cultures can be shown to students (Dennis, 1966). In some groups the drawings look frighteningly like those produced by retarded or by severely emotionally disturbed children.

Another problem concerning culturally competent personality assessment is the importance of determining the degree of acculturation the person being assessed has made to the prevailing mainstream culture. This analysis is necessary to determine what set of norms the assessor might use in the interpretive process. Although it is not possible to include readings about assessment issues for all available subcultures, it is possible to include research on the subgroups the student is likely to encounter in his or her training. There are a number of important resources available to assist students in doing competent multicultural assessments (e.g., Dana, 2000a, 2000b). Allen (1998) reviews personality assessment with American Indians and Alaska Natives; Lindsey (1998) reviews such work with African American clients; Okazaki (1998) reviews assessment with Asian Americans; and Cuéllar (1998) reviews cross-cultural assessment with Hispanic Americans.

Teaching Ethical Issues of Assessment

As students enter the field and become professional psychologists, they must have a clear understanding of how legal and ethical responsibilities affect their work. However, Plante (1995) found that ethics courses in graduate training programs tend to focus little on practical strategies for adhering to ethical and legal standards once students begin their professional careers.

One way to reduce the risks associated with the practice of assessment is to maintain an adequate level of competency in the services one offers (Plante, 1999). Competency generally refers to the extent to which a psychologist is appropriately trained and has obtained up-to-date knowledge in the areas in which he or she practices. This principle assumes that professional psychologists are aware of the boundaries and limitations of their competence. Determining this is not always easy, because there are no specific guidelines for measuring competence or indicating how often training should be conducted. To reduce the possibility of committing ethical violations, the psychologist should attend continuing education classes and workshops at professional conferences and local psychology organizations.

The APA (1992) publication Ethical Principles of Psychologists and Code of Conduct also asserts that psychologists who use assessment instruments must use them appropriately, based on relevant research on the administration, scoring, and interpretation of the instrument. To adhere to this principle, psychologists using assessment instruments must be aware of the data concerning reliability, validity, and standardization of the instruments. Consideration of normative data is essential when interpreting test results. There may be occasions when an instrument has not been tested with a particular group of individuals and, as a result, normative data do not exist for that population. If this is the case, use of the measure with an individual of that population is inappropriate.

Information regarding the psychometric properties of an instrument and its intended use must be provided in the test manual to be in accordance with the ethical standards of publication or distribution of an assessment instrument (Koocher & Keith-Spiegel, 1998). Anyone using the instrument should read the manual thoroughly and understand the measure’s limitations before using it. “The responsibility for establishing whether the test measures the construct or reflects the content of interest is the burden of both the developers and the publishers,” (Koocher & Keith-Spiegel, 1998, p. 147) but the person administering it is ultimately responsible for knowing this information and using it appropriately.

Assessment Approaches and Personality Theory

In the past, those with behavioral and cognitive approaches typically used self-report measures in their assessments, whereas those with psychodynamic orientations tended to rely on projective tests. Since those old days, when the two sides regularly crossed swords in the literature and in the halls of academia, we seem to have become more enlightened. We now tend to use each other’s tools, but in a more flexible manner. For example, although psychoanalytically oriented clinicians use the Rorschach, it can also be interpreted from a more cognitive and stylistic approach. In fact, Exner has been criticized by some psychodynamically oriented psychologists for having developed an atheoretical, nomothetic system.

Tests can be interpreted using any theoretical viewpoint. For example, psychodynamically oriented psychologists sometimes interpret the MMPI-2 using a psychodynamic orientation (Trimboli & Kilgore, 1983), and cognitive psychologists interpret the TAT from a variety of cognitive viewpoints (Ronan, Date, & Weisbrod, 1995; Teglasi, 1993), as well as from a motivational viewpoint (McClelland, 1987). Martin Mayman’s approach to the interpretation of the Early Memories Procedure (EMP) is from an object relations perspective, but the EMP is also used by adherents of social learning theory and cognitive psychology (e.g., Bruhn, 1990, 1992).

Many psychologists believe that the use of theory in conducting an assessment is absolutely necessary because it serves as an organizing function, a clarifying function, a predictive function, and an integrative function, helping to organize and make sense of data (Sugarman, 1991). Theory serves to “recast psychological test data as psychological constructs whose relationship is already delineated by the theory in mind” (Sugarman & Kanner, 2000). In this way the interpreter can organize data, much of it seemingly unrelated, into meaningful descriptions of personality functioning, and can make predictions about future functioning. Theory often helps students make sense of inconsistencies in the data.

Students should be helped to understand that although assessment instruments can be derived from either an atheoretical or a theoretical base, the data derived from any assessment instrument can be interpreted using almost any theory, or no theory at all. No test is necessarily wedded to any theory, but theory is often useful in providing the glue, as it were, that allows the interpreter to extend and expand the meaning of the test findings in a wide variety of ways. Students must ask themselves what can be gained by interpreting test data through the lens of theory. Some would say that what is gained is only distortion, so that the results reflect the theory and not the person. Others say it is possible to enrich the interpretations made with the aid of theory and to increase the accuracy and meaningfulness of assessment results, and that a theory-based approach often allows the assessor to make predictions with greater specificity and utility than can be made if one relies only on test signs.

Learning Through Doing: Proficiency Through Supervised Practice

Something interesting happens when a student discusses data with his or her supervisor. The supervisee often says and does things that reveal information about the nature and experience of the client being assessed, in metaphors used to describe assessment experiences, slips of the tongue when discussing a client, or an actual recreation of the dynamics present in the relationship between client and assessor in the supervisory relationship. This reenactment has come to be known as parallel process (e.g., Deering, 1994; Doehrman, 1976; Whitman & Jacobs, 1998), defined by Deering (1994) as “an unconscious process that takes place when a trainee replicates problems and symptoms of patients during supervision” with the purpose “of causing the supervisor to demonstrate how to handle the situation” (p. 1). If the supervisor and supervisee can become aware of its presence in the supervision, it can be a powerful diagnostic and experiential tool. It is important for the supervisor to note when students act in a way that is uncharacteristic of their usual behavior, often the first clue that parallel process is occurring (Sigman, 1989). Students sometimes take on aspects of their clients’ personality, especially when they identify with some facet of a patient’s experience or character style.

The supervisor should always strive to model the relationship with the supervisee after that which he or she would want the supervisee to have with the client. With this approach, the supervisor becomes an internalized model or standard for the trainee. Supervisors often serve as the template for how to behave with a client during assessment because many students have no other opportunities to observe seasoned clinicians at their work. It is also important to remember that problems in the supervisor-supervisee relationship can trickle down into the supervisee-client relationship, so issues such as power, control, competition, and inferiority may arise between the supervisee and the client as well if these emotions happen to be present in the supervision relationship. Nevertheless, given the inevitable occurrence of parallel process, going over data with the student is not sufficient supervision or training. The supervisory relationship itself should be used to facilitate growth and development of the student. There must also be a good alliance between the supervisor and the student, and a sense of confidence from both parties that each has sound judgment and good intentions toward the assessment process and the client.

It is important for the supervisor to encourage a sense of hopefulness in the student that will translate into hope for the client that this new information will be helpful. Otherwise, it is difficult for students to know or at least to believe that what they are doing is meaningful. When the characteristics of trust, confidence, collaboration, and hopefulness are not present in the supervision relationship, this should be discussed during the supervision hour. It is crucial that the relationship be examined when something impedes the ability to form a strong alliance.

Assessment Teaching in Graduate School: A Review of the Surveys

According to the recent survey literature, training in assessment continues to be emphasized in clinical training programs (Belter & Piotrowski, 1999; Piotrowski, 1999; Piotrowski & Zalewski, 1993; Watkins, 1991), although there is evidence that those in academic positions view assessment as less important than other areas of clinical training (Kinder, 1994; Retzlaff, 1992). The instruments that have consistently received the most attention during graduate training are the MMPI, the Rorschach, the Wechsler scales, and the TAT (Belter & Piotrowski, 1999; Hilsenroth & Handler, 1995; Piotrowski & Zalewski, 1993; Ritzler & Alter, 1986; Watkins, 1991). Some concern, however, has been expressed about the level of training being conducted in the area of projective assessment (Dempster, 1990; Hershey, Kopplin, & Cornell, 1991; Hilsenroth & Handler, 1995; Rossini & Moretti, 1997). Watkins (1991) found that clinical psychologists in academia generally believe that projective techniques are less important assessment approaches now than they have been in the past and that they are not grounded in empirical research (see also Watkins, Campbell, & Manus, 1990).

Academic training often emphasizes objective assessment over projective techniques. Clinical training directors surveyed by Rossini and Moretti (1997) reported that the amount of formal instruction or supervision being conducted in the use of the TAT was little to none, and Hilsenroth and Handler (1995) found that graduate students were often dissatisfied with the quality and degree of training they received in the Rorschach. Piotrowski and Zalewski (1993) surveyed directors of clinical training in APA-approved Psy.D. and Ph.D. programs and found that behavioral testing and objective personality testing were expected to increase in use in academic settings, whereas projective personality assessment was predicted to decrease according to almost one half of those surveyed. In addition, 46% of training directors answered “no” to the question, “Do you feel that the extent of projective test usage in various applied clinical settings is warranted?” (Piotrowski & Zalewski, 1993, p. 399).

It is apparent that although training in assessment remains widely emphasized, this does not mean that students are well prepared, especially in the area of projective assessment. Specific qualities and approaches to training may vary widely from program to program and may not meet the needs of applied settings and internship programs. In fact, Durand et al. (1988) found that 47% of graduate training directors felt that projective assessment was less important than in the past, whereas 65% of internship directors felt projective assessment had remained an important approach for training in assessment. Such disagreement is not rare; much of the literature reflects the discrepancy between graduate training in assessment and internship needs (Brabender, 1992; Durand et al., 1988; Garfield & Kurtz, 1973; Shemberg & Keeley, 1970; Shemberg & Leventhal, 1981; Watkins, 1991). Furthermore, given the report by Camara, Nathan, and Puente (2000), who found that the instruments most frequently used by professional psychologists are the WAIS-R/WISC-R, the MMPI-2, the Rorschach, the BG, and the TAT, it is clear that the discrepancy between training and application of assessment goes beyond that of internship needs and includes real-world needs as well.

Assessment on Internship: Report of a Survey

Clemence and Handler (2001) sought to examine the expectations that internship training directors have for students and to ascertain the specific psychological assessment methods most commonly used at internship programs in professional psychology. Questionnaires designed to elicit this information were mailed to all 563 internships listed in the 1998–1999 Association of Psychology Postdoctoral and Internship Centers Directory. Only two sites indicated that no patients are assessed, and 41% responded that testing instruments are used with the majority of their patients.

Each intern is required to administer an average of 27 full-battery or 33 partial-battery assessments per year, far exceeding the number of batteries administered by most students during their graduate training. Of those rotations that utilize a standard assessment battery (86%), over 50% include the WISC/WAIS (91%), the MMPI-2/MMPI-A (80%), the Rorschach (72%), or the TAT (56%) in their battery. These results are consistent with previous research investigating the use of assessment on internship (Garfield & Kurtz, 1973; Shemberg & Keeley, 1974). Piotrowski and Belter (1999) also found the four most commonly used assessment instruments at internship facilities to be the MMPI-2/MMPI-A (86%), the WAIS (83%), the Rorschach (80%), and the TAT (76%).

To ensure that students are fully prepared to perform in the area of assessment on their internship, training is frequently offered to bridge the gap that exists between the type and amount of training conducted in most graduate programs and that desired by internship sites. In the Clemence and Handler study, 99% of the internships surveyed reported offering training in assessment, and three approaches to training in personality assessment were most commonly endorsed by training directors: intellectual assessment (79%), interviewing (76%), and psychodynamic personality assessment (64%). These three methods seem to be the predominant training approaches used by the sites included in the survey. This finding suggests that these are important directions for training at the graduate level, as well.

Of the topics being offered in the area of assessment training, report writing is most often taught (92%); 86% of the rotations conduct training in advanced assessment, 84% in providing feedback to clients, 74% in providing feedback to referral sources, 56% in introductory assessment, and 44% in the study of a specific test. This breakdown may reflect the priorities internship training directors place on areas of assessment, or the areas in which students are less prepared upon leaving graduate school.

Piotrowski and Belter (1999) surveyed 84 APA-approved internship programs and found that 87% of their respondents required interns to participate in assessment seminars. If the demand for training is as critical as these surveys seem to indicate, it is curious that graduating students do not appear to be especially well-prepared in this area, as this and previous studies indicate (Watkins, 1991). Training in basic assessment should be the job of graduate training programs and not internship sites, whose primary function should be in providing supervised practical experience in the field.

From our findings and other surveys (Petzel & Berndt, 1980; Stedman, 1997; Watkins, 1991), it appears that internship training directors prefer students who have been properly trained in a variety of assessment approaches, including self-report, projective, and intelligence testing. Distinct differences were found between the types of assessment techniques utilized across various facilities. The WISC and WAIS were found to be routinely used at each of the various internship facilities; the MMPI-2 and MMPI-A are used regularly at all but the child facilities, where only 36% reported using these instruments routinely. The Rorschach is part of a full battery at the majority of internships surveyed, ranging from 58% for Veterans Administration hospitals to 95% for community mental health centers, and the TAT is used in full batteries primarily at private general hospitals (88%) and community mental health centers (73%).

American Psychological Association Division 12 Guidelines

The discrepancy between the real-world use of assessment and training in graduate schools is troubling and seems to be oddly encouraged by certain groups within the psychological community. For example, Division 12 of the APA (1999) set up a task force (“Assessment for the Twenty-First Century”) to examine issues concerning clinical training in psychological assessment. They defined their task as one of creating a curriculum model for graduate programs that would include proper and appropriate assessment topics for the next century.

The task force, made up of psychologists experienced in various areas of assessment, was asked to recommend class topics that should be included in this ideal curriculum. They came up with 105 topics, which they then ranked according to their beliefs about their usefulness. Rankings ranged from “essential” (“no proper clinical training program should be without appropriate coverage of this item”) to “less important” (“inessential and would not greatly improve the curriculum”; APA Division 12, 1999, p. 11). What is surprising about the final curriculum rankings, given the previously discussed research in the area of assessment in the real world, is that the curriculum seemed to be heavily weighted toward self-report assessment techniques, with only three class topics in the area of projective assessment: (a) Learning Personality Assessment: Projective—Rorschach (or related methods); (b) Learning Personality Assessment: Projective—Thematic Apperception Test; and (c) Learning Personality Assessment: Projective—Drawing Tests. What is even more striking is that these three classes were ranked extremely low in the model curriculum, with the Rorschach class ranked 95th in importance, the TAT class ranked 99th, and the projective drawings class ranked 102nd out of the possible 105 topics proposed. It is clear that the task force considers these topics largely useless and certainly inessential in the training of future psychologists. Furthermore, the low rankings then led to the omission of any training in projective techniques from the final Division 12 model syllabus. The omission of these classes leaves us with a model for training that is quite inconsistent with previously cited research concerning the importance of projective testing in applied settings and seems to ignore the needs of students and internships.
This Division 12 task force appears to have missed the mark in its attempt to create a model of training that would prepare students for the future of assessment.

The Division 12 model widens the gap between training and the use of assessment in applied settings instead of shrinking it. In fact, the model reinforces the division discussed previously between psychologists in academia and those in the field. A better approach to designing a model curriculum of assessment training for the future would be to combine topics relevant to the application of assessment in the real world with those deemed relevant by academicians. Data from research concerning the use of assessment demonstrate that a multidimensional approach is most valid and most useful in providing worthwhile diagnostic and therapeutic considerations for clinicians. This point must not be ignored because of personal preferences. The Division 12 model of assessment training demonstrates that even as late as 1999, models of training continued to be designed that ignored the importance of teaching students a balance of methods so that they would be able to proceed with multifunctional approaches to assessment.

Postgraduate Assessment Training

Although assessment practice during internship helps to develop skills, it is important to continue refining these skills, to add to them, and to keep reading the current research literature in assessment. There are many opportunities to attend workshops that focus on particular tests or on the development of particular assessment skills. For example, there is a series of workshops available at various annual meetings of professional groups devoted to assessment, taught by assessment experts. This is an excellent way to build skills and to learn about the development of new instruments. Also, workshops, often offered for continuing education credit, are available throughout the year and are listed in the APA Monitor.

Assessment and Managed Care Issues

Restrictions by managed care organizations have affected the amount of assessment clinicians are able to conduct (Piotrowski, 1999). Consistent with this assertion, Piotrowski, Belter, and Keller (1998) found that 72% of psychologists in applied settings are conducting less assessment in general and are using fewer assessment instruments, especially lengthy assessment instruments (e.g., Rorschach, MMPI, TAT, and Wechsler scales), due to restrictions by managed care organizations. Likewise, Phelps, Eisman, and Kohout (1998) found that 79% of licensed psychologists felt that managed care had a negative impact on their work, and Acklin (1996) reported that clinicians are limiting their use of traditional assessment measures and are relying on briefer, problem-focused procedures.

With the growing influence of managed care organizations (MCOs) in mental health settings, it is inevitable that reimbursement practices will eventually affect training in assessment techniques and approaches (Piotrowski, 1999). We hope this will not be the case because of the many important training functions facilitated in assessment training, mentioned earlier in this research paper. Also, since we are training for the future, we must train students for the time when managed care will not dictate assessment practice. If, as we indicated earlier, assessment serves important training functions, it should continue to be enthusiastically taught, especially for the time when managed care will be merely a curiosity in the history of assessment. However, managed care has served us well in some ways, because we have sharpened and streamlined our approach to assessment and our instruments as well. We have focused anew on issues of reliability and validity of our measures, not merely in nomothetic research, but in research that includes reference to a test’s positive predictive power, negative predictive power, sensitivity, and specificity to demonstrate the validity of our measures. Psychologists have turned more and more to assessment in other areas, such as therapeutic assessment, disability assessment, assessment in child custody, and other forensic applications. The Society for Personality Assessment has reported an increase in membership and in attendance at their annual meetings. We are optimistic that good evaluations, done in a competent manner and meaningfully communicated to the patient and referral source, will always be in great demand.

Nevertheless, an investigation concerning the impact of managed care on assessment at internship settings found that there has been a decrease in the training emphasis placed on various assessment techniques; 43% of directors reported that managed care has had an impact on their program's assessment curriculum (Piotrowski & Belter, 1999). Although approximately one third of the training directors surveyed reported a decrease in their use of projectives, the Rorschach and TAT remain 2 of the top 10 assessment instruments considered essential by internship directors of the sites surveyed. These studies indicate that MCOs are making an impact on the way assessment is being taught and conducted in clinical settings. Therefore, it is essential that psychologists educate themselves and their students in the practices of MCOs. Furthermore, psychologists should continue to provide research demonstrating the usefulness of assessment so that MCO descriptions of what is considered appropriate do not limit advancements. Empirical validation can help to guarantee psychologists reasonable options for assessment approaches so that we do not have to rely primarily on the clinical interview as the sole source of assessment and treatment planning information.

It is important to remember that MCOs do not dictate our ethical obligations, but the interests of our clients do. It is the ethical psychologist’s responsibility to persistently request compensation for assessment that can best serve the treatment needs of the client. However, even if psychologists are denied reimbursement, it does not mean they should not do assessments when they are indicated. Therefore, options for meeting both financial needs of the clinician and health care needs of the client should be considered. One solution may be the integration of assessment into the therapy process. Techniques such as the Early Memories Procedure, sentence completion tasks, brief questionnaires, and figure drawings may be incorporated into the therapy without requiring a great deal of additional contact or scoring time. Other possibilities include doing the assessment as the clinician sees fit and making financial arrangements with the client or doing a condensed battery.

The Politics and Misunderstandings in Personality Assessment

For many years there has been very active debate, and sometimes even animosity and expressions of derision, between those who preferred a more objective approach to personality assessment (read self-report and MMPI) and those who preferred a more subjective approach (read projective tests and Rorschach). This schism was fueled by researchers and teachers of assessment. Each group disparaged the other’s instruments, viewing them as irrelevant at best and essentially useless, while championing the superiority of its own instruments (e.g., Holt, 1970; Meehl, 1954, 1956).

This debate seems foolish and ill-advised to us, and it should be described in this way to students, in order to bring assessment integration practices to the forefront. These misleading attitudes have unfortunately been transmitted to graduate students by their instructors and supervisors over many years. Gradually, however, the gulf between the two seemingly opposite approaches has narrowed. Clinicians have come to use both types of tests, but there is still a great deal of misperception about each type, which interferes with productive integration of the two types of measures and impairs clinicians’ efforts to do assessment rather than testing. Perhaps in the future teachers of personality assessment will make fewer and fewer pejorative remarks about each other’s preferred instruments and will concentrate more and more on the focal issue of test integration.

Another issue is the place of assessment in the clinical psychology curriculum. For many years graduate curricula contained many courses in assessment. The number of courses has gradually been reduced, in part because the curricula have become crowded with important courses mandated by the APA, such as professional ethics, biological bases of behavior, cognitive and affective aspects of behavior, social aspects of behavior, history and systems, psychological measurement, research methodology, techniques of data analysis, individual differences, human development, and psychopathology, as well as courses in psychotherapy and in cultural and individual diversity (Committee on Accreditation, Education Directorate, & American Psychological Association, 1996). Courses have also been added because they have become important for clinical training (e.g., child therapy, marital therapy, health psychology, neuropsychology, hypnosis). Therefore, there is sometimes little room for assessment courses. To complicate matters even more, some instructors question the necessity of teaching assessment at all. Despite the published survey data, we know of programs that have no identified courses in assessment, and programs in which only one type of measure (e.g., self-report, interview, or projective measures) is taught. While most programs do have courses in assessment, the content of some courses does not prepare students to do effective assessment. Sometimes the courses offered are merely survey courses, or courses in which the student administers and scores one of each type of test. Unfortunately, with this type of inadequate training students do poor applied work and even poorer research, both of which reflect poorly on the discipline of personality assessment.

With the impact of cognitive therapy there have been radical changes in the ways in which some training programs teach assessment, seemingly without knowledge of the significant improvements in assessment research and practice that have taken place in the last 15 years or so. There seems to be a “Throw the baby out with the bathwater” approach, whereby traditional instruments are derided and replaced primarily with self-report measures. This is an important issue because it has major implications for teaching assessment in graduate school and in internship settings.

For example, Wetzler (1989) describes a hospital-based assessment approach in which a general, broadly focused assessment has been replaced with a so-called focal approach, using self-report instruments. These changes, he indicates, have come about because of shorter hospitalization stays, and because what he calls "the standard battery" (Rapaport, Gill, & Schafer, 1968) "is no longer appropriate." He believes the questions that need to be answered in this acute problem setting cannot be adequately addressed using the "traditional" assessment approach: "What was well-suited to the psychiatric community of the 1930s, 1940s, and 1950s is no longer appropriate" (p. 5). "No matter what the referral question, they administer the standard battery," he states (p. 7). He lists a number of reported dissatisfactions with "traditional assessment" procedures, which include the problem that "test findings do not respond to [the] referral questions." His solution is to replace "traditional assessment" with "focal assessment," which includes the use of observer rating scales, self-report inventories, and a number of questionnaires derived from psychological research rather than from clinical observation or theory. He describes focal tests as specialized instruments addressing specific areas of psychopathology, which have a much narrower focus and are "more concrete and descriptive, focused on surface symptoms and behavior, with clearly defined criteria for scoring, and with normative data available."

Wetzler concludes that “In light of [its] scientific foundation focal assessment is frequently more valid and therefore more effective than projective testing and/or informal interviewing” and that “focal assessment is more appropriate to the parameters of contemporary treatment than is traditional assessment” (p. 9), especially because in his setting assessment findings and clinical decisions must be made within 72 hours.

We do not agree with a number of Wetzler's conclusions; we believe the approach he described comes closer to our earlier definition of testing than it does to assessment, since only self-report measures are employed and test scores are emphasized rather than the development of integrated findings. The overemphasis on the validity of test scores does not take into account the validity of their use in a particular clinical setting without the concomitant understanding of the patient's feelings and his or her experience of being hospitalized, as well as other important issues that would make these disembodied test scores more meaningful. What is lacking is an understanding of and an appreciation for the patient's contextual world, which we emphasize in our teaching. We have no way of knowing whether the patient responded to these instruments in a meaningful manner. The reduction in personal contact with the patient and its replacement with standardized self-report instruments does not seem to us to be an improvement in the assessment process. Validity of the instrument may be only an illusion in many cases, in which patients take a test with perhaps questionable motivation and a nonfacilitative orientation.

This approach to assessment is a prototype of other similar approaches that are convenience-driven, test-driven, and technician-driven; it is a most dangerous approach, in which the role of the assessor is primarily to choose the right test, and the test scores are said to provide the appropriate answers.

Earlier in this research paper we emphasized that psychologists should be well trained in the area of psychometrics and in the limitations of tests, especially problems of reliability and validity. In testing, one relies on the confidence limits of the results, but in assessment one determines the validity of the results of the test scores by taking into account a host of variables determined from interview data, from observations of the patient during the assessment, and from the similarities and differences among the various assessment findings. In the focal approach it is doubtful whether the proper evaluation of the test scores can be accomplished. More to the point, however, is the criticism that there is actually a rigid adherence to a traditional battery. Our survey of test use in internship settings suggests otherwise; internship directors reported that a wide variety of tests are employed in assessment in their setting. We do not recommend or teach adherence to a traditional test battery, although these assessment devices are among those recommended for use, for reasons discussed in this research paper. We believe innovations in assessment should be employed to improve the validity of the assessment procedure and to improve the delivery of assessment services to those who request them. If the referral questions are not answered in an assessment, it is the fault of the assessor, who has not paid attention to the referral issue or who has not sufficiently clarified the referral issue with the person requesting the assessment.

To describe an approach we believe is more typical of assessment rather than testing, also in a hospital setting, we will review the approaches of Blais and Eby (1998), in which psychologists have even more stringent demands on them to provide focal answers, often within a day. Blais and Eby train their internship students to assist the referring physician in clarifying referral questions. After a brief discussion with the nurse in charge of the patient, a review of the patient’s chart, or both, the student selects the appropriate tests and procedures to answer the referral questions, taking into account the necessary turnaround time and both the physical and psychological limitations of the patient.

In a training case example in which the turnaround time was less than a day, Blais and Eby describe a battery that included a seven-subtest short form of the WAIS-R, the Rorschach, four TAT cards, and the PAI. The brief WAIS-R took less than 30 minutes to administer. Since the patient was described by the staff as extremely guarded, projective testing was viewed as crucial. The Rorschach and the TAT were chosen, the latter to identify the patient’s object relations and core interpersonal themes, and both tests served to determine the degree of suicidal ideation. The PAI was chosen rather than the MMPI-2 because it is significantly shorter and the patient had poor physical stamina, and because it can be scored as a short form, using only 199 of its 344 items. It also contained several treatment planning scales that could possibly provide important information relevant to a referral question about treatment.

Although the battery described for this individual patient did include the traditional tests, batteries designed for other patients might not include any of the traditional tests. In addition, these traditional tests were employed not because they were traditional but, rather, because each offered something that the other measures did not offer. Also, the manner in which they are scored is directly tied to a large body of research, including, in the case of the Rorschach, extensive normative findings and reliability and validity data. The Rorschach was scored using the Comprehensive System (Exner, 1993), which includes a well-validated suicide constellation measure along with a host of other scores of importance to the referral issue, and with the P. Lerner and H. Lerner Defense Scale (1980). The TAT was scored as well, using the Social Cognition and Object Relations Scale (SCORS) system, a research-based interpretive system that measures eight aspects of object relations (Westen, 1991a, 1991b). The data were integrated into a picture of the patient’s current psychological functioning and categorized according to thought quality, affect, defenses, and relationship to self and others, all issues directly related to the referral questions. Verbal report was given to the referring psychiatrist by telephone well before rounds the next morning, along with treatment recommendations.

The assessment approach designed by Blais and Eby is an example of a hospital-based assessment that demonstrates that traditional tests can be employed with quite rapid turnaround time and that a test battery that includes traditional tests need not be rigidly fixed. In Blais and Eby’s approach the clinicians responded flexibly and actively in the assessment process, integrating data from several different sources and responding in an efficient and rapid manner to focalized referral issues generated from several sources. In Wetzler’s approach, the response was to develop a test-focused approach rather than a person-focused approach.

Personality Assessment in the Future

In this section we describe several changes we foresee in personality assessment teaching and practice, as well as changes we would like to see.

The Assessment of Psychological Health and the Rise of Positive Psychology

Psychological assessment has typically been tied to the medical model, in which health is defined as the absence of pathology rather than as an aggregate of positive psychological traits that differentiate the psychologically healthy person from others (e.g., Adler, 1958; Erikson, 1963; Maslow, 1954; May, Angel, & Ellenberger, 1958; Rogers, 1961). Seligman and Csikszentmihalyi (2000) have suggested using the term positive psychology instead. Such variables as playfulness, the ability to self-soothe and to be soothed by others, psychological-mindedness, flexibility, and the ability to establish intimacy and to express tenderness in relationships are important variables to consider. Seligman has discussed the concept of optimism, and several of the variables discussed by the Big Five theorists, such as openness to experience (McCrae, 1996), surgency, and agreeableness (Goldberg, 1992), describe positive aspects of personality functioning. The surgency factor includes such concepts as extroversion, energy level, spontaneity, assertiveness, sociability, and adventurousness. The agreeableness factor includes interpersonal warmth, cooperativeness, unselfishness, and generosity. In the future we expect to see a number of scoring systems to measure the variables described above using traditional tests, as well as a number of new tests specially designed to tap positive psychology variables. The Journal of Personality Assessment recently published a special series, The Assessment of Psychological Health (Handler & Potash, 1999), which included a discussion of four variables that were measured using traditional tests: optimism, creativity, playfulness, and transitional relatedness. Handler and Potash (1999) suggest that in the future students should be taught to routinely measure these variables and discuss them in feedback.

Focused Measures of Important Personality Variables

There has been a major movement toward the use of instruments that focus on more detailed aspects of personality functioning, either through scoring systems devised for traditional measures or through the construction of new measures. For example, there are a very large number of MMPI and MMPI-2 scales constructed to predict various types of behaviors or to identify various problems (Graham, 2000). Some of these scales, the Harris-Lingoes and Si subscales, the Content scales, and the Supplementary scales, have now been included in the complex analysis of the MMPI-2, allowing for increased specificity in personality description, dynamics, and so on. These scales provide a way to focus interpretation when they are used in context with other data. There is an increasing press to provide such measures of specificity, supported by adequate research. We expect to see an increase in the construction and use of tests that are focused on the therapy process. For example, Fowler, Hilsenroth, and Handler (1995, 1996) found that early memories responses were related to the pattern of the relationship patients established with their therapists. The Holt Primary Process Scale, the Lerner Defense Scale, and the Mutuality of Autonomy Scale have made the transition from a research setting to clinical application. Another more complex measure, derived from scoring the TAT, is the SCORS, developed by Westen (1991a, 1991b) to measure various aspects of object relations. These scales have excellent validity and excellent clinical utility. They are used as focal estimates of object relations when such issues are a central aspect of the referral issue (e.g., Kelly, 1997). Students should be taught to use these research-based measures to generate more focused interpretations.

Recently there has been a proliferation of self-report measures designed for the evaluation of very specific personality questions. These include rapid screening instruments for the presence of specific personality problems, plus inventories that contain fewer items than the MMPI-2 and will therefore be less time-consuming. However, we are concerned that test publishers perhaps promise too much. For example, one reputable publisher, describing a reputable test in its recent catalog, announced, "In a relatively short time you will determine whether your clients have characteristics that will aid or impede their treatment program in as few as 80 items, but not more than 120 items." What concerns us is the proliferation of tests that purport to answer complex personality questions (e.g., suicidality or adaptation to psychotherapy). It is possible that hurried students, unable to take time for proper assessment, will use these tests with apparent face validity, but without data on clinically important types of validity. Complex personality questions cannot be answered with confidence with the use of a single focal instrument. A number of studies support this contention (see Meyer et al., 2000). In addition, some of these tests are quite easy to fake (e.g., the Battelle Developmental Inventory; Beebe, Finer, & Holmbeck, 1996). In class, therefore, we should teach focal instruments in conjunction with other, more complex measures.

Therapeutic Assessment

Many patients feel alienated by the traditional approach to assessment; they are often troubled by the procedures, feeling that the tasks requested of them are foolish, meaningless, and ultimately useless. These attitudes can lead to poor cooperation and uneven results. Students have more difficulty with assessment feedback than with any other aspect of assessment. An antidote for this problem, as well as a means to make assessment more meaningful and therapeutic for the person assessed, is the concept of Therapeutic Assessment (Finn, 1996; Finn & Martin, 1997; Finn & Tonsager, 1992; Fischer, 1994). Assessment questions are formulated collaboratively, with the patient, and the feedback is also done collaboratively. In this procedure a facilitative and constructive atmosphere is necessarily established, and the patient’s investment in the assessment procedure is increased. Finn indicates that practically any test or test battery can be used as a vehicle for therapeutic assessment. He has also developed a manual for the use of the MMPI-2 as a therapeutic assessment device (Finn, 1996).

The goal of the assessment in this setting is for the person being assessed to come away with answers to his or her initially posed questions and an awareness of problems that can result in personal growth. The process by which this new awareness occurs is the exploration of the patient's subjective experience in the process that develops between the assessor and the patient. These interactions are accessed through intervention by the assessor from assessment data already collected, or in an intervention using particular assessment stimuli or procedures to tap into the patient's life issues, thereby producing them in the presence of the assessor. The facilitation of the occurrence of the problem issue is explored with the person, drawing connections to outside problems and to referral issues. The assessor then names, clarifies, and amplifies these issues, exploring the factors that are necessary and sufficient to produce the problem behavior—what elicits it, what reinforces it, and what maintains it—and provides the person with a new awareness about his or her problems and perhaps their roots. This process has understandably resulted in very substantial therapeutic gains for patients assessed (e.g., Ackerman et al., 2000; Finn & Martin, 1997; Finn & Tonsager, 1992; Hanson, Claiborn, & Kerr, 1997; M. Newman & Greenway, 1997). Students seem very motivated to use these procedures. They are eager to use a method that brings assessment and psychotherapy together very effectively. Students are also more at ease in providing feedback in this manner. We believe this method should be routinely taught in assessment classes.

Assessment on the Internet

Schlosser (1991) envisioned a future in which computers would present test-takers with stimuli ranging from verbal items to moving projective stimuli, including stimuli with synthesized smells. He conceived of the use of virtual reality techniques, computer-generated simulations in which images, sounds, and tactile sensations would be produced to create a synthetic, three-dimensional representation of reality. Ten years later we find a great deal of testing (not assessment) is being done on the Internet, but we have not yet approached Schlosser’s vision. This procedure offers the psychologist a number of fascinating opportunities, but it also presents a number of professional and ethical problems (Barak & English, in press). Much research needs to be done to determine the effects of differences in the interpersonal setting with this more artificial Internet approach for various clinical populations. Just because the interaction simulates the traditional approach does not mean the experience of the assessor and the patient will be similar to that of the traditional approach. More disturbed patients would probably have more difficulty with such distance assessment compared with less impaired patients.

These issues seem modest to some psychologists, who even now offer screening tests for depression, anxiety, sexual disorders, attention-deficit disorder, and various personality disorders. Students should be made aware that such blunt feedback of test results does not meet APA ethics requirements. There is also a long list of other ethical issues in this approach that should be discussed in class, because these problems will face students in the future. Nevertheless, Internet testing promises to be a great help for people who for one reason or another cannot get to a psychologist’s office to be tested or for people in rural communities in which there are no such services available.

Research on the Interpretive Process

More research should be done to illuminate the interpretive-integrative process in personality assessment, beyond the variables of convergent and divergent thinking. One method that needs exploration is the analysis of the thinking patterns of those who are adept at synthesizing data. By this we mean the study of people who are talented in the integrative process. Emphasis should be placed on studying these experts and on the analysis of the heretofore unverbalized methods these people use to integrate data. In other words, we should attempt to focus on these often hidden processes so that the so-called magic of intuition can be described and taught in the classroom. Such studies would be directly relevant for the teaching process. The description of the teaching process in the section describing the advanced assessment course is an effort in that direction.

Expanded Conception of Intelligence

Wechsler’s definition of intelligence—“the aggregate or global capacity to act purposefully, think rationally, and to deal effectively with [the] environment” (Wechsler, 1958, p. 7)—is hardly reflected in his intelligence tests. The definition implies that being interpersonally effective and thinking clearly are important intellectual variables. However, these and other variables suggested by Wechsler’s definition are personality variables as well. Thus, it appears that personality variables and so-called intelligence variables overlap to some extent. Indeed, Daniel Goleman, in his book Emotional Intelligence (1995), highlights the importance of emotional and social factors as measures of intelligence. He describes an expanded model of what it means to be intelligent, emphasizing such variables as being able to motivate oneself and persist in the face of frustration; the ability to control impulses; the ability to delay gratification; the ability to regulate one’s moods and to keep distress from interfering with thought processes; and the ability to empathize and to hope. Other researchers in the area of intelligence have discussed similar issues. For example, Gardner (1993) and Salovey (Mayer & Salovey, 1993; Salovey & Mayer, 1989–1990) have discussed the importance of interpersonal intelligence, defined as “the ability to understand other people; what motivates them, how they work; how to work cooperatively with them” (Goleman, 1995, p. 39), and intrapersonal intelligence, defined as “the capacity to form an accurate, veridical model of oneself and to be able to use that model to operate effectively in life” (Goleman, 1995, p. 43). In a recent chapter, Mayer, Caruso, and Parker (2000) focus on four areas of emotional intelligence: perception, facilitation, understanding, and management of emotions.
Bar-On and Parker (2000) have compiled a handbook of emotional intelligence in which they also include the concepts of alexithymia and what they term practical intelligence. Nevertheless, researchers and test constructors still seem to focus on a more traditional definition of intelligence. Although clinical psychologists take these interpersonal and intrapersonal variables into account in describing personality functioning, they do not typically construct intelligence tests with them in mind. Measures of emotional intelligence are now available for adults (e.g., the Bar-On Emotional Quotient Inventory; Bar-On, 1997) and for children (e.g., the Emotional Intelligence Scale for Children; Sullivan, 1999), but they have yet to be integrated into more traditional tests measuring other intelligence factors. Their future use, however, will undoubtedly go a long way toward a more integrated view of human functioning than exists in the somewhat arbitrary split between the concepts of intelligence and personality.

References
  1. Ackerman, S., Hilsenroth, M., Baity, M., & Blagys, M. (2000). Interaction of therapeutic process and alliance during psychological assessment. Journal of Personality Assessment, 75, 82–109.
  2. Acklin, M. (1996). Personality assessment and managed care. Journal of Personality Assessment, 66, 194–201.
  3. Adler, A. (1958). New York: Capricorn.
  4. Allen, J. (1998). Personality assessment with American Indians and Alaska Natives: Instrument considerations and service delivery style. Journal of Personality Assessment, 70, 17–42.
  5. American Psychological Association (APA). (1992). Ethical principles of psychologists and code of conduct. American Psychologist, 47, 1597–1611.
  6. American Psychological Association Division 12 (Clinical) Presidential Task Force. (1999). Assessment for the twenty-first century: A model curriculum. The Clinical Psychologist, 52, 10–15.
  7. Anastasi, A., & Urbina, S. (1998). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.
  8. Applebaum, S. (1990). The relationship between assessment and psychotherapy. Journal of Personality Assessment, 54, 79–80.
  9. Baity, M., & Hilsenroth, M. (1999). The Rorschach aggression variables: A study of reliability and validity. Journal of Personality Assessment, 72(1), 93–110.
  10. Barak, A., & English, N. (in press). Prospects and limitations of psychological testing on the Internet. Journal of Technology in Human Services.
  11. Bar-On, R. (1997). The Bar-On Emotional Quotient Inventory. North Tonawanda, NY: Multi-Health Systems.
  12. Bar-On, R., & Parker, J. (Eds.). (2000). The handbook of emotional intelligence. San Francisco: Jossey-Bass.
  13. Barrick, M., & Mount, M. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
  14. Beebe, D., Finer, E., & Holmbeck, G. (1996). Low end specificity of four depression measures: Findings and suggestions for the research use of depression tests. Journal of Personality Assessment, 67, 272–284.
  15. Belter, R., & Piotrowski, C. (1999). Current status of Master’s-level training in psychological assessment. Journal of Psychological Practice, 5, 1–5.
  16. Bennett, B. E., Bryant, B. K., VandenBos, G. R., & Greenwood, A. (1990). Professional liability and risk management. Washington, DC: American Psychological Association.
  17. Berg, M. (1984). Expanding the parameters of psychological testing. Bulletin of the Menninger Clinic, 48, 10–24.
  18. Beutler, L., & Berren, M. (1995). Integrative assessment of adult personality. New York: Guilford Press.
  19. Blais, M., & Eby, M. (1998). Jumping into fire: Internship training in personality assessment. In L. Handler & M. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 485–500). Mahwah, NJ: Erlbaum.
  20. Brabender, V. (1992, March). Graduate program training models. Paper presented at the meeting of the Society for Personality Assessment, Washington, DC.
  21. Bruhn, A. (1990). Earliest childhood memories, Vol. 1: Theory and application to clinical practice. New York: Praeger.
  22. Bruhn, A. (1992). The early memories procedure: I. A projective test of autobiographical memory. Journal of Personality Assessment, 58, 1–25.
  23. Burley, T., & Handler, L. (1997). Personality factors in the accurate interpretation of projective tests: The Draw-A-Person Test. In E. Hammer (Ed.), Advances in projective test interpretation (pp. 359–380). Springfield, IL: Charles Thomas.
  24. Camara, W., Nathan, J., & Puente, A. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31, 141–154.
  25. Childs, R., & Eyde, L. (2002). Assessment training in clinical psychology doctoral programs: What should we teach? What do we teach? Journal of Personality Assessment, 78(1), 130–144.
  26. Clemence, A., & Handler, L. (2001). Psychological assessment on internship: A survey of training directors and their expectations for students. Journal of Personality Assessment, 76, 18–47.
  27. Committee on Accreditation, Education Directorate, & American Psychological Association. (1996). Guidelines and principles for accreditation of programs in professional psychology, January 1, 1996. Washington, DC: American Psychological Association.
  28. Cramer, P. (1996). Storytelling, narrative and the Thematic Apperception Test. New York: Guilford Press.
  29. Cuéllar, I. (1998). Cross-cultural clinical psychological assessment of Hispanic Americans. Journal of Personality Assessment, 70, 71–86.
  30. Dana, R. H. (2000a). Culture and methodology in personality assessment. In I. Cuéllar & F. A. Paniagua (Eds.), Handbook of multicultural mental health (pp. 97–120). San Diego, CA: Academic Press.
  31. Dana, R. H. (Ed.). (2000b). Handbook of cross-cultural and multicultural personality assessment. Mahwah, NJ: Erlbaum.
  32. Deering, C. (1994). Parallel process in the supervision of child psychotherapy. American Journal of Psychotherapy, 48, 102–108.
  33. Dempster, I. (1990). How mental health professionals view their graduate training. Journal of Training and Practice in Professional Psychology, 6(2), 59–66.
  34. Dennis, W. (1966). Group values through children’s drawings. New York: Wiley.
  35. Doehrman, M. (1976). Parallel process in supervision and psychotherapy. Bulletin of the Menninger Clinic, 40, 9–83.
  36. Durand, V., Blanchard, E., & Mindell, J. (1988). Training in projective testing: Survey of clinical training directors and internship directors. Professional Psychology: Research and Practice, 19, 236–238.
  37. Erikson, E. (1963). Childhood and society (2nd ed.). New York: Wiley.
  38. Exner, J. E., Jr. (1993). The Rorschach: A comprehensive system (3rd ed., Vol. 1). New York: Wiley.
  39. Finn, S. (1996). A manual for using the MMPI-2 as a therapeutic intervention. Minneapolis, MN: University of Minnesota Press.
  40. Finn, S., & Martin, H. (1997). Therapeutic assessment with the MMPI-2 in managed healthcare. In J. Butcher (Ed.), Objective psychological assessment in managed healthcare: A practitioner’s guide (pp. 131–152). New York: Oxford University Press.
  41. Finn, S., & Tonsager, M. (1992). The therapeutic effects of providing MMPI-2 test feedback to college students awaiting psychotherapy. Psychological Assessment, 4, 278–287.
  42. Fischer, C. (1994). Individualizing psychological assessment. Hillsdale, NJ: Erlbaum.
  43. Fowler, J. (1998). The trouble with learning personality assessment. In L. Handler & M. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 31–44). Mahwah, NJ: Erlbaum.
  44. Fowler, J., Hilsenroth, M., & Handler, L. (1995). Early memories: An exploration of theoretically derived queries and their clinical utility. Bulletin of the Menninger Clinic, 59, 79–98.
  45. Fowler, J., Hilsenroth, M., & Handler, L. (1996). A multimethod approach to assessing dependency: The early memory dependency probe. Journal of Personality Assessment, 67, 399–413.
  46. Ganellen, R. (1996). Integrating the Rorschach and the MMPI-2 in personality assessment. Mahwah, NJ: Erlbaum.
  47. Ganellen, R. (1999). Weighing the evidence for the Rorschach’s validity: A response to Wood et al. Journal of Personality Assessment, 77, 1–15.
  48. Gardner, H. (1993). Multiple intelligences: The theory in practice. New York: Basic Books.
  49. Garfield, S., & Kurtz, R. (1973). Attitudes toward training in diagnostic testing: A survey of directors of internship training. Journal of Consulting and Clinical Psychology, 40, 350–355.
  50. Goldberg, L. (1992). The development of markers for the Big-Five factor structure. Psychological Assessment, 4, 26–42.
  51. Goleman, D. (1995). Emotional intelligence. New York: Bantam.
  52. Graham, J. (2000). MMPI-2: Assessing personality and psychopathology. New York: Oxford University Press.
  53. Handler, L. (1996). The clinical use of the Draw-A-Person Test (DAP), the House-Tree-Person Test and the Kinetic Family Drawing Test. In C. Newmark (Ed.), Major psychological assessment techniques (2nd ed., pp. 206–293). Englewood Cliffs, NJ: Allyn and Bacon.
  54. Handler, L. (1997). He says, she says, they say: The Consensus Rorschach in marital therapy. In J. Meloy, C. Peterson, M. Acklin, C. Gacono, & J. Murray (Eds.), Contemporary Rorschach interpretation (pp. 499–533). Hillsdale, NJ: Erlbaum.
  55. Handler, L., & Finley, J. (1994). Convergent and divergent thinking and the interpretation of figure drawings. Unpublished manuscript.
  56. Handler, L., Fowler, J., & Hilsenroth, M. (1998). Teaching and learning issues in an advanced course in personality assessment. In L. Handler & M. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 431–452). Mahwah, NJ: Erlbaum.
  57. Handler, L., & Habenicht, D. (1994). The Kinetic Family Drawing Technique: A review of the literature. Journal of Personality Assessment, 62, 440–464.
  58. Handler, L., & Meyer, G. (1998). The importance of teaching and learning personality assessment. In L. Handler & M. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 3–30). Mahwah, NJ: Erlbaum.
  59. Handler, L., & Potash, H. (1999). The assessment of psychological health [Introduction, Special series]. Journal of Personality Assessment, 72, 181–184.
  60. Handler, L., & Reyher, J. (1964). The effects of stress in the Draw-A-Person test. Journal of Consulting and Clinical Psychology, 28, 259–264.
  61. Handler, L., & Reyher, J. (1966). Relationship between GSR and anxiety indexes on projective drawings. Journal of Consulting and Clinical Psychology, 30, 605–607.
  62. Handler, L., & Riethmiller, R. (1998). Teaching and learning the interpretation of figure drawings. In L. Handler & M. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 267–294). Mahwah, NJ: Erlbaum.
  63. Handler, L., & Sheinbein, M. (1987, March). Decision-making patterns in couples satisfied with their marriage and couples dissatisfied with their marriage. Paper presented at the Midwinter Meeting of the Society of Personality Assessment, San Francisco.
  64. Hanson, W., Claiborn, C., & Kerr, B. (1997). Differential effects of two test interpretation styles in counseling: A field study. Journal of Counseling Psychology, 44, 400–405.
  65. Hartman, W., & Fithian, M. (1972). Treatment of sexual dysfunction. Long Beach, CA: Center for Marital and Sexual Studies.
  66. Hershey, J., Kopplin, D., & Cornell, J. (1991). Doctors of Psychology: Their career experiences and attitudes toward degree and training. Professional Psychology: Research and Practice, 22, 351–356.
  67. Hilsenroth, M. (1998). Using metaphor to understand projective test data: A training heuristic. In L. Handler & M. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 391–412). Mahwah, NJ: Erlbaum.
  68. Hilsenroth, M., & Handler, L. (1995). A survey of graduate students’ experiences, interests, and attitudes about learning the Rorschach. Journal of Personality Assessment, 64, 243–257.
  69. Holsopple, J., & Miale, F. (1954). Sentence completion: A projective method for the study of personality. Springfield, IL: Charles Thomas.
  70. Holt, R. (1970). Yet another look at clinical and statistical prediction: Or, is clinical psychology worthwhile? American Psychologist, 25, 337–349.
  71. Holt, R. (1977). A method for assessing primary process manifestations and their control in Rorschach responses. In M. Rickers-Ovsiankina (Ed.), Rorschach psychology (pp. 375–420). Huntington, NY: Krieger.
  72. Kelly, F. (1997). The assessment of object relations phenomena in adolescents: TAT and Rorschach measures. Mahwah, NJ: Erlbaum.
  73. Kinder, B. (1994). Where the action is in personality assessment. Journal of Personality Assessment, 62, 585–588.
  74. Klopfer, B., Ainsworth, M., Klopfer, W., & Holt, R. (1954). Development in the Rorschach technique (Vol. 1). New York: World Book.
  75. Koocher, G., & Keith-Spiegel, P. (1998). Ethics in psychology: Professional standards and cases (2nd ed.). New York: Oxford University Press.
  76. Krakowski, A. (1984). Stress and the practice of medicine: III. Physicians compared with lawyers. Psychotherapy and Psychosomatics, 42, 143–151.
  77. Kubiszyn, T., Meyer, G., Finn, S., Eyde, L., Kay, G., Moreland, K., Dies, R., & Eisman, E. (2000). Empirical support for psychological assessment in health care settings. Professional Psychology: Research and Practice, 31(2), 119–130.
  78. Lerner, P., & Lerner, H. (1980). Rorschach assessment of primitive defenses in borderline personality structure. In J. Kwawer, H. Lerner, P. Lerner, & A. Sugarman (Eds.), Borderline phenomena and the Rorschach test (pp. 257–274). New York: International Universities Press.
  79. Lewinsohn, P. (1965). Psychological correlates of overall quality of figure drawings. Journal of Consulting Psychology, 29, 504–512.
  80. Lindsey, M. (1998). Culturally competent assessment of African American clients. Journal of Personality Assessment, 70, 43–53.
  81. Maloney, M., & Glasser, A. (1982). An evaluation of the clinical utility of the Draw-A-Person Test. Journal of Clinical Psychology, 38, 183–190.
  82. Maruish, M. (1990, Fall). Psychological assessment: What will its role be in the future? Assessment Applications, 5.
  83. Maruish, M. (1999). The use of psychological testing for treatment planning and outcome assessment (2nd ed.). Hillsdale, NJ: Erlbaum.
  84. Masling, J., Rabie, L., & Blondheim, S. (1967). Relationships of oral imagery to yielding behavior and birth order. Journal of Consulting Psychology, 32, 89–91.
  85. Maslow, A. (1954). Motivation and personality. New York: Harper and Row.
  86. May, R., Angel, M., & Ellenberger, H. (Eds.). (1958). Existence: A new dimension in psychiatry and psychology. New York: Basic Books.
  87. Mayer, J., & Salovey, P. (1993). The intelligence of emotional intelligence. Intelligence, 17, 433–442.
  88. Mayer, J., Caruso, D., & Salovey, P. (2000). Selecting a measure of emotional intelligence: The case for ability scales. In R. Bar-On & J. Parker (Eds.), The handbook of emotional intelligence (pp. 320–342). San Francisco: Jossey-Bass.
  89. McClelland, D. (1987). Human motivation. New York: Cambridge University Press.
  90. McCrae, R. (1996). Social consequences of experiential openness. Psychological Bulletin, 120, 323–337.
  91. Mednick, S., & Mednick, M. (1967). Examiner’s manual: Remote Associates Test. Boston: Houghton Mifflin.
  92. Meehl, P. (1954). Clinical versus statistical prediction. Minneapolis: University of Minnesota Press.
  93. Meehl, P. (1956). Wanted: A good cookbook. American Psychologist, 11, 263–272.
  94. Meyer, G. (1997). On the integration of personality assessment methods: The Rorschach and the MMPI-2. Journal of Personality Assessment, 68, 297–330.
  95. Meyer, G. (2000). Incremental validity of the Rorschach Prognostic Rating Scale over the MMPI Ego Strength Scale and IQ. Journal of Personality Assessment, 74(3), 356–370.
  96. Meyer, G., & Archer, R. (2001). The hard science of Rorschach research: What do we know and where do we go? Psychological Assessment, 13(4), 486–502.
  97. Meyer, G., Finn, S., Eyde, L., Kay, G., Moreland, K., Dies, R., Eisman, E., Kubiszyn, T., & Reed, J. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56, 128–165.
  98. Meyer, G., & Handler, L. (1997). The ability of the Rorschach to predict subsequent outcome: A meta-analytic analysis of the Rorschach Prognostic Rating Scale. Journal of Personality Assessment, 69(1), 1–38.
  99. Meyer, G., Riethmiller, R., Brooks, R., Benoit, W., & Handler, L. (2000). A replication of Rorschach and MMPI-2 convergent validity. Journal of Personality Assessment, 74(2), 175–215.
  100. Muhlenkamp, A., & Parsons, J. (1972). An overview of recent research publications in a nursing research periodical. Journal of Vocational Behavior, 2, 261–273.
  101. Newman, F. (1991, Summer). Using assessment data to relate patient progress to reimbursement criteria. Assessment Applications, 4–5.
  102. Newman, M., & Greenway, P. (1997). Therapeutic effects of providing MMPI-2 test feedback to clients at a university counseling service: A collaborative approach. Psychological Assessment, 9, 122–131.
  103. Nunnally, J., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
  104. Okazaki, S. (1998). Psychological assessment of Asian-Americans: Research agenda for cultural competency. Journal of Personality Assessment, 70, 54–70.
  105. Petzel, T., & Berndt, D. (1980). APA internship selection criteria: Relative importance of academic and clinical preparation. Professional Psychology, 11, 792–796.
  106. Phelps, R., Eisman, E., & Kohout, J. (1998). Psychological practice and managed care: Results of the CAPP practitioner study. Professional Psychology: Research and Practice, 29, 31–36.
  107. Piotrowski, C. (1999). Assessment practices in the era of managed care: Current status and future directions. Journal of Clinical Psychology, 55, 787–796.
  108. Piotrowski, C., & Belter, R. W. (1999). Internship training in psychological assessment: Has managed care had an impact? Assessment, 6, 381–389.
  109. Piotrowski, C., Belter, R., & Keller, J. (1998). The impact of managed care on the practice of psychological testing: Preliminary findings. Journal of Personality Assessment, 70, 441–447.
  110. Piotrowski, C., & Zalewski, C. (1993). Training in psychodiagnostic testing in APA-approved PsyD and PhD clinical psychology programs. Journal of Personality Assessment, 61, 394–405.
  111. Plante, T. (1995). Training child clinical predoctoral interns and postdoctoral fellows in ethics and professional issues: An experiential model. Professional Psychology: Research and Practice, 26, 616–619.
  112. Plante, T. (1999). Ten strategies for psychology trainees and practicing psychologists interested in avoiding ethical and legal perils. Psychotherapy, 36, 398–403.
  113. Potash, H. (1998). Assessing the social subject. In L. Handler & M. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 137–148). Mahwah, NJ: Erlbaum.
  114. Rapaport, D., Gill, M., & Schafer, R. (1968). Diagnostic psychological testing (2nd ed., R. Holt, Ed.). New York: International Universities Press.
  115. Retzlaff, P. (1992). Professional training in psychological testing: New teachers and new tests. Journal of Training and Practice in Professional Psychology, 6, 45–50.
  116. Rezler, A., & Buckley, J. (1977). A comparison of personality types among female student health professionals. Journal of Medical Education, 52, 475–477.
  117. Riethmiller, R., & Handler, L. (1997a). Problematic methods and unwarranted conclusions in DAP research: Suggestions for improved procedures [Special series]. Journal of Personality Assessment, 69, 459–475.
  118. Riethmiller, R., & Handler, L. (1997b). The great figure drawing controversy: The integration of research and practice [Special series]. Journal of Personality Assessment, 69, 488–496.
  119. Ritzler, B., & Alter, B. (1986). Rorschach teaching in APA-approved clinical graduate programs: Ten years later. Journal of Personality Assessment, 50, 44–49.
  120. Robins, C., Blatt, S., & Ford, R. (1991). Changes on human figure drawings during intensive treatment. Journal of Personality Assessment, 57, 477–497.
  121. Rogers, C. (1961). On becoming a person: A therapist’s view of psychotherapy. Boston: Houghton Mifflin.
  122. Ronan, G., Date, A., & Weisbrod, M. (1995). Personal problemsolving scoring of the TAT: Sensitivity to training. Journal of Personality Assessment, 64, 119–131.
  123. Rossini, E., & Moretti, R. (1997). Thematic Apperception Test (TAT) interpretation: Practice recommendations from a survey of clinical psychology doctoral programs accredited by the American Psychological Association. Professional Psychology: Research and Practice, 28, 393–398.
  124. Salovey, P., & Mayer, J. (1989–1990). Emotional intelligence. Imagination, Cognition, and Personality, 9, 185–211.
  125. Sarrel, P., Sarrel, L., & Berman, S. (1981). Using the Draw-A-Person (DAP) Test in sex therapy. Journal of Sex and Marital Therapy, 7, 163–183.
  126. Schafer, R. (1967). Projective testing and psychoanalysis. New York: International Universities Press.
  127. Schlosser, B. (1991). The future of psychology and technology in assessment. Social Science Computer Review, 9, 575–592.
  128. Schutz, B. (1982). Legal liability in psychotherapy: A practitioner’s guide to risk management. San Francisco: Jossey-Bass.
  129. Seligman, M., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5–14.
  130. Shemberg, K., & Keeley, S. (1970). Psychodiagnostic training in the academic setting: Past and present. Journal of Consulting and Clinical Psychology, 34, 205–211.
  131. Shemberg, K., & Keeley, S. (1974). Training practices and satisfaction with preinternship preparation. Professional Psychology, 5, 98–105.
  132. Shemberg, K., & Leventhal, D. B. (1981). Attitudes of internship directors towards preinternship training and clinical models. Professional Psychology, 12, 639–646.
  133. Sigman, S. (1989). Parallel process at case conferences. Bulletin of the Menninger Clinic, 53, 340–349.
  134. Stedman, J. (1997). What we know about predoctoral internship training: A review. Professional Psychology: Research and Practice, 28, 475–485.
  135. Sugarman, A. (1981). The diagnostic use of countertransference reactions in psychological testing. Bulletin of the Menninger Clinic, 45, 475–490.
  136. Sugarman, A. (1991). Where’s the beef? Putting personality back into personality assessment. Journal of Personality Assessment, 56, 130–144.
  137. Sugarman, A., & Kanner, K. (2000). The contribution of psychoanalytic theory to psychological testing. Psychoanalytic Psychology, 17, 1–21.
  138. Sullivan, A. (1999, July). The Emotional Intelligence Scale for Children. Dissertation Abstracts International, 60(01), 0068A.
  139. Teglasi, H. (1993). Clinical use of story telling. Needham Heights, NJ: Allyn and Bacon.
  140. Tett, R., Jackson, D., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703–742.
  141. Torrance, E. (1966). Torrance tests of creative thinking: Directions, manual and scoring guide (Verbal test booklet A). Princeton, NJ: Scholastic Testing Service.
  142. Torrance, E. (1974). Torrance tests of creative thinking: Norms— technical manual. Princeton, NJ: Scholastic Testing Service.
  143. Trimboli, F., & Kilgore, R. (1983). A psychodynamic approach to MMPI interpretation. Journal of Personality Assessment, 47, 614–625.
  144. Urist, J. (1977). The Rorschach test and the assessment of object relations. Journal of Personality Assessment, 41, 3–9.
  145. Viglione, D. (1999). A review of the recent research addressing the utility of the Rorschach. Psychological Assessment, 11(3), 251–265.
  146. Viglione, D., & Hilsenroth, M. (2001). The Rorschach: Facts, fictions, and the future. Psychological Assessment, 13, 452–471.
  147. Watkins, C. E., Jr. (1991). What have surveys taught us about the teaching and practice of psychological assessment? Journal of Personality Assessment, 56, 426–437.
  148. Watkins, C. E., Jr., Campbell, V., & Manus, M. (1990). Personality assessment training in counseling psychology programs. Journal of Personality Assessment, 55, 380–383.
  149. Wechsler, D. (1958). The measurement and appraisal of adult intelligence (4th ed.). Baltimore: Williams & Wilkins.
  150. Weiner, I. (1998). Principles of Rorschach interpretation. Mahwah, NJ: Erlbaum.
  151. Weiner, I. (2001). Advancing the science of psychological assessment: The Rorschach Inkblot Method. Psychological Assessment, 13(4), 423–432.
  152. Westen, D. (1991a). Clinical assessment of object relations using the Thematic Apperception Test. Journal of Personality Assessment, 56, 56–74.
  153. Westen, D. (1991b). Social cognition and object relations. Psychological Bulletin, 109, 429–455.
  154. Westen, D., Lohr, N., Silk, K., Gold, L., & Kerber, K. (1990). Object relations and social cognition in borderlines, major depressives, and normals: A Thematic Apperception Test analysis. Psychological Assessment, 2, 355–364.
  155. Wetzler, S. (1989). Parameters of psychological assessment. In S. Wetzler & M. Katz (Eds.), Contemporary approaches to psychological assessment (pp. 3–15). New York: Brunner/Mazel.
  156. Whitman, S., & Jacobs, E. (1998). Responsibilities of the psychotherapy supervisor. American Journal of Psychotherapy, 52, 166–175.
  157. Williams, F. (1980). Creativity assessment packet (CAP). Buffalo, NY: DOK Publishers.
  158. Yama, M. (1990). The usefulness of human figure drawings as an index of overall adjustment inferred from human figure drawings. Journal of Personality Assessment, 54, 78–86.