Psychological Assessment in Industrial/Organizational Settings Research Paper


Context of Psychological Assessments in Industrial/Organizational Settings

Psychologists have been active in the assessment of individuals in work settings for almost a century. In light of the apparent success of the applications of psychology to advertising and marketing (Baritz, 1960), it is not surprising that corporate managers were looking for ways that the field could contribute to the solution of other business problems, especially enhancing worker performance and reducing accidents. For example, Terman (1917) was asked to evaluate candidates for municipal positions in California. He used a shortened form of the Stanford-Binet and several other tests and looked for patterns against past salary and occupational level (Austin, Scherbaum, & Mahlman, 2000). Other academic psychologists, notably Walter Dill Scott and Hugo Munsterberg, were also happy to oblige.

In this regard, the approaches used and the tools and techniques developed clearly reflected prevailing thinking among researchers of the time. Psychological measurement approaches in industry evolved from procedures used by Fechner and Galton to assess individual differences (Austin et al., 2000). Spearman’s views on generalized intelligence and measurement error had an influence on techniques that ultimately became the basis of the standardized instruments popular in work applications. Similarly, if instincts were an important theoretical construct (e.g., McDougall, 1908), these became the cornerstone for advertising interventions. When the laboratory experimental method was found valuable for theory testing, it was not long before it was adapted to the assessment of job applicants for the position of street railway operators (Munsterberg, 1913). Vocational interest blanks designed for guiding students into careers were adapted to the needs of industry to select people who would fit in.

Centers of excellence involving academic faculty consulting with organizations were often encouraged as part of the academic enterprise, most notably one established by Walter Bingham at Carnegie Institute in Pittsburgh (now Carnegie Mellon University). It makes sense, then, that programs such as those at Carnegie, Purdue, and Michigan State University were located in the proximity of large-scale manufacturing enterprises. As will become clear through a reading of this research paper, the legacy of these origins can still be seen in the models and tools of contemporary practitioners (e.g., the heavy emphasis on assessment for the selection of hourly workers for manufacturing firms).

The practice of assessment in work organizations was also profoundly affected by activities and developments during the great wars fought by the United States. Many of the personnel and performance needs of the military during both the First and Second World Wars were met by contributions of psychologists recruited from the academy. The work of Otis on the (then) new idea of the multiple-choice test was found extremely valuable in solving the problem of assessing millions of men called to duty for their suitability and, once enlisted, for their assignments to specific work roles. The Army’s Alpha test, based on the work of Otis and others, was itself administered to 1,700,000 individuals. Tools and techniques for the assessment of job performance were refined or developed to meet the needs of the military relative to evaluating the impact of training and determining the readiness of officers for promotion. After the war, these innovations were diffused into the private sector, often by officers turned businessmen or by psychologists no longer employed by the government. Indeed, the creation of the Journal of Applied Psychology (1917) and the demand for practicing psychologists in industry are seen as outgrowths of the success of assessment operations in the military (Schmitt & Klimoski, 1991).

In a similar manner, conceptual and psychometric advances occurred as a result of psychology’s involvement in government and military activities during the Second World War. Over 1,700 psychologists were involved in the research, development, or implementation of assessment procedures in an effort to deal with such things as absenteeism, personnel selection, training (especially leader training), and soldier morale. Moreover, given advances in warfare technology, new problems had to be addressed in such areas as equipment design (especially the user interface), overcoming the limitations of the human body (as in high-altitude flying), and managing work teams. Technical advances in survey methods (e.g., the Likert scale) found immediate applications in the form of soldier morale surveys or studies of farmers and their intentions to plant and harvest foodstuffs critical to the war effort.

A development of particular relevance to this research paper was the creation of assessment procedures for screening candidates for unusual or dangerous assignments, including submarine warfare and espionage. The multimethod, multisource philosophy of this approach eventually became the basis for the assessment center method used widely in industry for selection and development purposes (Howard & Bray, 1988). Finally, when it came to the defining of performance itself, Flanagan’s (1954) work on the critical incident method was found invaluable. Eventually, extensions of the approach could be found in applied work on the assessment of training needs and even the measurement of service quality.

Over the years, the needs of the military and of government bureaus and agencies have continued to capture the attention of academics and practitioners, resulting in innovations of potential use to industry. This interplay has also encouraged the development of a large and varied array of measurement tools or assessment platforms. The Army General Classification Test has its analogue in any number of multiaptitude test batteries. Techniques for measuring the requirements of jobs, like Functional Job Analysis or the Position Analysis Questionnaire, became the basis for assessment platforms like the General Aptitude Test Battery (GATB) or, more recently, the Occupational Information Network (O*Net; Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1999). Scales to measure job attitudes (Smith, Kendall, & Hulin, 1969), organizational commitment (Mowday, Steers, & Porter, 1979), or work adjustment (Dawis, 1991) found wide application, once developed. Moreover, there is no shortage of standard measures for cognitive and noncognitive individual attributes (Impara & Plake, 1998).

A final illustration of the importance of cultural context on developments in industry can be found in the implementation of civil rights legislation in America in the 1960s and 1970s (and, a little later, the Americans with Disabilities Act). This provided new impetus to changes in theory, research designs, and assessment practices in work organizations. The litigation of claims under these laws has also had a profound effect on the kinds of measures found to be acceptable for use as well.

The Nature of Assessment in Industrial and Organizational Settings

This research paper is built on a broad view of assessment relative to its use in work organizations. The thrust of the paper, much like the majority of the actual practice of assessment in organizations, will be to focus on constructs that imply or allow for the inference of job-related individual differences. Moreover, although we will emphasize the activities of psychologists in industry whenever appropriate, it should be clear at the outset that the bulk of individual assessments in work settings are being conducted by others—managers, supervisors, trainers, human resource professionals—albeit often under the guidance of practicing psychologists or at least using assessment platforms that the latter designed and have implemented on behalf of the company.

Most individuals are aware of at least some of the approaches used for individual assessment by psychologists generally. For example, it is quite common to see mention in the popular press of psychologists’ use of interviews and questionnaires. Individual assessment in work organizations involves many of these same approaches, but there are some characteristic features worth stressing at the outset. Specifically, with regard to assessments in work settings, we would highlight their multiple (and at times conflicting) purposes, the types of factors measured, the approach used, and the role that assessment must play to insure business success.

Purposes

Business Necessity

For the most part, assessments in work organizations are conducted for business-related reasons. Thus, they might be performed in order to design, develop, implement, or evaluate the impact of a business policy or practice. In this regard, the firm uses assessment information (broadly defined) to index such things as the level of skill or competency (or its obverse, their deficiencies) of its employees or their level of satisfaction (because this might presage quitting). As such, the information so gathered ends up serving an operational feedback function for the firm. It can also serve to address the issue of how well the firm is conforming to its own business plans (Katz & Kahn, 1978).

Work organizations also find it important to assess individuals as part of their risk management obligation. Most conspicuous is the use of assessments for selecting new employees (trying to identify who will work hard, perform well, and not steal) or in the context of conducting performance appraisals. The latter, in turn, serve as the basis for compensation or promotion decisions (Murphy & Cleveland, 1995; Saks, Schmitt, & Klimoski, 2000). Assessments of an individual’s level of work performance can become the (only) basis for the termination of employment as well. Clearly, the firm has a business need to make valid assessments as the basis for appropriate and defensible personnel judgments.

In light of the numerous laws governing employment practices in most countries (and because the United States, at least, seems to be a litigious society), assessments of the perceptions, beliefs, and opinions of the workforce with regard to such things as the prevalence of sexual harassment (Fitzgerald, Drasgow, Hulin, Gelfand, & Magley, 1997; Glomb, Munson, Hulin, Bergman, & Drasgow, 1999) or of unlawful discrimination are often carried out as part of management’s “due diligence” obligation. Thus, assessed attitudes can be used to complement demographic data supplied by these individuals relative to their race, age, gender, or disability and used in monitoring personnel practice and to insure nondiscriminatory treatment of the workforce (e.g., Klimoski & Donahue, 1997).

Individual Necessity

Individual assessments in industry can also be performed with the goal of meeting the needs of the individual worker as well. The assessment of individual training needs, once made, can become the basis for a specific worker’s training and development experiences. Such information would guide the worker to just what programs or assignments would best remedy a particular deficiency. Such data, if gathered regularly over time, can also inform the worker of his or her progress in skill acquisition. In a related manner, assessments may be gathered to guide the worker relative to a work career. Whether done in the context of an organizationally managed career-path planning program or done by the worker on his or her initiative, such competency assessments relative to potential future jobs or different careers are ultimately in the service of the worker.

Progressive firms and many others whose workers are covered by collective bargaining agreements might use individual assessment data to help workers find suitable employment elsewhere, a need precipitated by such things as a corporate restructuring effort or downsizing or as an outcome of an acquisition or a merger. Job preferences and skills are typically evaluated as part of an outplacement program. Often, however, one is also assessed (and, if found wanting, trained) in such things as the capacity to look for different work or to do well in a job interview that might lead to new work.

Individual assessments are at the core of counseling and coaching in the workplace. These activities can be part of a larger corporate program for enhancing the capabilities of the workforce. However, usually an assessment is done because the individual worker is in difficulty. This may be manifested in a career plateau, poor job performance, excessive absenteeism, interpersonal conflict on the job, symptoms of depression, or evidence of substance abuse. In the latter cases such assessments may be part of an employee assistance program, specifically set up to help workers deal with personal issues or problems.

Research Necessity

Many work organizations and consultants to industry take an empirical approach to the design, development, and evaluation of personnel practices. In this regard, assessment data, usually with regard to an individual’s job performance, work-related attitudes, or job-relevant behavior, are obtained in order to serve as research criterion measures. Thus, in evaluating the potential validity of a selection test, data regarding the performance of individuals on the test and their later performance on the job are statistically compared. Similarly, the impact of a new recruitment program may be evaluated by assessing such things as the on-the-job performance and work attitudes of those brought into the organization under the new system and comparing these to data similarly obtained from individuals who are still being brought in under the old one. Finally, as another example, a proposed new course for training employees may need to be evaluated. Here, evidence of learning or of skill acquisition obtained from a representative sample of workers, both before and again after the program, might be contrasted with scores obtained from a group of employees serving as a comparison group who do not go through the training.

Assessments as Criterion Measures

In the course of almost one hundred years of practice, specialists conducting personnel research have concluded that good criterion measures are hard to develop. This may be due in part to the technical requirements for such measures, as outlined in the next section. However, it also may be simply a reflection that the human attributes and the performances of interest are, by their very nature, quite complex. When it comes to criterion measures, this is most clearly noted in the fact that these almost always must be treated as multidimensional.

The notion of dimensionality itself is seen most clearly in measures of job performance. In this regard, Ghiselli (1956) distinguished three types of criterion dimensionality. He uses the term static dimensionality to convey the idea that at any point in time there are multiple facets to performance. Most of us can easily argue that both quality and quantity are usually part of the construct. Studies of the managerial role, for example, have revealed that five or more dimensions are usually required to define effective performance in that complex job (Campbell, McCloy, Oppler, & Sager, 1993).

Dynamic dimensionality is the term used to capture the notion that the essence of effective performance can change over time, even for the same individual. Thus, we can imagine that the performance of a new worker might be anchored in such things as a willingness to learn, tolerance of ambiguity, and persistence. Later, after the worker has been on the job for a while, he or she would be held accountable for such things as high levels of output, occasional innovation, and even the mentoring of other, newer, employees.

Finally, Ghiselli identifies the concept of individual dimensionality. In the context of performance measures, this is used to refer to the fact that two employees can be considered equally good (or bad), but for different reasons. One worker may be good at keeping a work team focused on its task, whereas another may be quite effective because he seems to be able to manage interpersonal conflict and tension in the team so that it does not escalate to have a negative effect on team output. Similarly, two artists can be equally well regarded but for manifesting very different artistic styles.

An additional perspective on the multidimensionality of performance is offered by Borman and Motowidlo (1993). In their model, task performance is defined as “activities that contribute to the organization’s technological core either directly by implementing a part of its technological process, or indirectly by providing it with needed materials or services” (p. 72). Task performance, then, involves those activities that are formally recognized as part of a job. However, there are many other activities that are important for organizational effectiveness that do not fall within the task performance category. These include activities such as volunteering, persisting, helping, cooperating, following rules, staying with the organization, and supporting its objectives (Borman & Motowidlo, 1993). Whereas task performance affects organizational effectiveness through the technical core, contextual performance does so through organizational, social, and psychological means. Like Ghiselli’s (1956) perspective, the task and contextual performance distinction (Borman & Motowidlo, 1993; Motowidlo, Borman, & Schmit, 1997; Motowidlo & Van Scotter, 1994) shows that the constructs we are assessing will vary depending on the performance dimension of interest. The multidimensionality of many of the constructs of interest to those doing individual assessments in industry places a major burden on those seeking to do high-quality applied research. However, as will be pointed out below, it also has a profound effect on the nature of the tools and of the specific measures to be used for operational purposes as well.

Thus, assessments for purposes of applied research may not differ much in terms of the specific features of the tools themselves. For example, something as common as a work sample test may be the tool of choice to gather data for validation or for making selection decisions. However, when one is adopted as the source of criterion scores, it implies a requirement for special diligence from the organization in terms of assessment conditions, additional time or resources, and certainly high levels of skill on the part of the practitioner (Campbell, 1990).

Attributes Measured

As implied by the brief historical orientation to this research paper, the traditional focus on what to measure has been on those individual difference factors that are thought to account for worker success. These person factors are frequently thought of as inputs to the design and management of work organizations. Most often, the attributes to be assessed derive from an analysis of the worker’s job duties and include specific forms of knowledge, skills, abilities, or other attributes (KSAOs) implying work-related interests and motivation. More recently, the focus has been on competencies, the demonstrated capacity to perform job-relevant activities (Schippmann et al., 2000). Key competencies are ascertained from a careful consideration not of the job but of the role or functions expected to be performed by an employee if he or she is to contribute to business success. Thus, attributes such as speed of learning or teamwork skills might be the focus of assessments. As will be detailed later, these attributes might be the core of any personnel selection program.

Assessments of individuals in work settings may also focus on the process used by the employee to get the job done. Operationally, these are the kinds of behaviors that are necessary and must be carried out well in the work place if the worker is to be considered successful. These, too, derive from an analysis of the job and of the behaviors that distinguish effective employees from less effective ones. Process assessments are particularly common in organizational training and worker performance review programs.

For the most part, employees in work organizations are held accountable for generating products: outcomes or results. Thus, it is common for assessments to be focused on such things as the quality and quantity of performance, the frequency of accidents, and the number of product innovations proposed. The basis for such assessments might be a matter of record. Often, however, human judgment and skill are required in locating and categorizing work outcomes relative to some standard. Outcome assessments are often used as the basis for compensation and retention decisions. In the course of the year, most individuals in work organizations might be assessed against all three types of assessments.

Approaches Used for Assessment in Industry

Three features of the approach favored by many of those doing assessment work in industry are worth highlighting. The first has been noted already in that many assessment platforms are built on careful development and backed up by empirical evidence. Although it is possible that an assessment technique would be adopted or a particular practitioner hired without evidence of appropriateness for that particular organization, it is not recommended. As stressed throughout this research paper, to do so would place the firm at risk.

A second feature that is somewhat distinctive is that most assessments of individuals in work contexts are not done by psychologists. Instead, managers, supervisors, trainers, and even peers are typically involved in evaluating individuals on the factors of interest. This said, for larger firms, practicing psychologists may have had a hand in the design of assessment tools and programs (e.g., a structured interview protocol for assessing job applicants), or they may have actually trained company personnel on how to use them. However, the assessments themselves are to be done by the latter without much supervision by program designers. For smaller firms, this would be less likely, because a practicing psychologist might be retained or used on an as-needed basis (e.g., to assist in the selection of a managing partner in a law firm). Under these circumstances, it would be assumed that the psychologist would be using assessment tools that he or she has found valid in other applications.

A final distinction between assessment in industry and other psychological assessments is that quite often assessments are being done on a large number of individuals at the same time or over a short period of time. For example, when the fire and safety service of Nassau County, New York, sought to recruit and select about 1,000 new police officers, it had to arrange for 25,000 applicants to sit for the qualifying exam at one time (Schmitt, 1997). This not only has implications for the kinds of assessment tools that can be used but affects such mundane matters as choice of venue (in this case, a sports arena was needed to accommodate all applicants) and how to manage test security (Halbfinger, 1999).

The large-scale nature of assessments in industry implies the common use of aggregate data. Although the individual case will be the focus of the assessment effort, as noted earlier, very often the firm is interested in averages, trends, or establishing the existence of reliable and meaningful differences on some metric between subgroups. For example, individual assessments of skill might be made but then aggregated across cases to reveal, for example, that the average skill of new people hired has gone up as a result of the implementation of a new selection program. Similarly, the performance of individuals might be assessed to show that the mean performance level of employees under one manager is or is not better than the mean for those working under another. Thus, in contrast to other venues, individual assessments conducted in work settings are often used as a means to assess still other individuals, in this case, organizational programs or managers.
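The kind of aggregate comparison described above can be sketched in a few lines of Python. The data and the rating scale here are entirely hypothetical, and a real evaluation would also test the difference for statistical significance; the sketch simply shows how individual assessments roll up into a group-level effect size:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical skill ratings for hires under a new vs. an old selection program
new_program = [4.2, 3.9, 4.5, 4.1, 3.8, 4.4]
old_program = [3.6, 3.4, 3.9, 3.5, 3.7, 3.3]

print(f"new mean = {mean(new_program):.2f}, old mean = {mean(old_program):.2f}")
print(f"effect size d = {cohens_d(new_program, old_program):.2f}")
```

A positive d here would be read as evidence that average skill among new hires rose after the program change, exactly the aggregate use of individual assessments described above.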

Marketplace and the Business Case

Most models of organizational effectiveness make it clear that the capacity to acquire, retain, and efficiently use key resources is essential. In this regard, employees as human resources are no different. At the time that this research paper is being prepared, unemployment levels are at historical lows in the United States. Moreover, given the strength of the so-called new economy, the demand for skilled workers is intense. Added to the convergence of these two marketplace realities is the arrival of new and powerful Internet-based services that give more information than ever to current and prospective employees regarding the human resource needs and practices of various organizations. It is important to note that similar services now provide individuals with a more accurate sense of their own market value than ever. Clearly there are intense competitive pressures to recruit, select, and retain good employees. Those responsible for the design and management of platforms for individual assessment must contribute to meeting such pressures or they will not be retained.

Another marketplace demand is for efficiency. The availability of resources notwithstanding, few organizations can escape investor scrutiny with regard to their effective use of resources. When it comes to assessment programs, this implies that new approaches will be of interest and ultimately found acceptable if it can be demonstrated that (a) they address a business problem, (b) they add value over current approaches, and (c) they have utility, in the sense that the time and costs associated with assessment are substantially less than the gains realized in terms of worker behavior (e.g., quitting) or performance. In fact, the need to make a business case is a hallmark of the practice of individual assessments in work organizations. It is also at the heart of the notion of utility as described in the next section.

A third imperative facing contemporary practitioners is embedded in the notion of speed to market. All other things considered, new and useful assessment tools or programs need to be brought on line quickly to solve a business problem (e.g., meeting the human resource needs for a planned expansion or reducing high levels of turnover). Similarly, the information on those individuals assessed must be made available quickly so that decisions can be made in a timely manner. These factors may cause an organization to choose to make heavy use of external consultants for assessment work in the context of managing their human resource needs and their bottom line.

In summary, individual assessment in work settings is indeed both similar to and different from many other contexts in which such assessments take place. Although the skills and techniques involved would be familiar to most psychologists, the application of the former must be sensitive and appropriate to particular contextual realities.

Professional and Technical Considerations

As described in the overview section, professionals who conduct assessments in industrial settings do so based on the work context. A job analysis provides information on the tasks, duties, and responsibilities carried out by the job incumbents as well as the KSAOs needed to perform the job well (Saks et al., 2000). Job analysis information helps us conduct selection and promotion assessments by determining if there is a fit between the skills needed for the job and those held by the individual and if the individual has the potential to perform well on the important KSAOs. We can also use job analysis information for career management by providing the individual and the career counselor or coach with information about potential jobs or careers. The individual can then be assessed using various skill and interest inventories to determine fit. Job analysis information can also be used for classification and placement to determine which position within an organization best matches the skills of the individual. We will discuss the purpose, application, and tools for assessments in the next section. In this section, we will focus on how organizations use job analysis information to develop assessment tools and make organizational decisions.

The Role of Assessment Data for Inferences in Organizational Decisions

Guion (1998) points out that one major purpose of research on assessments in industrial/organizational settings is to evaluate how well these assessments help us in making personnel decisions. The process he describes is prescriptive and plays out especially well for selection purposes. The reader should be aware that descriptively there are several constraints, such as a small number of cases, the rapid pace at which jobs change, and the time it takes to carry out the process, that make this approach difficult. Guion therefore suggests that assessment practices should be guided by theory, but so too should practice inform theory. With that said, his approach for evaluating assessments is described below:

  1. Conduct a job and organizational analysis to identify what criterion we are interested in predicting and to provide a rational basis for specifying which applicant characteristics (predictors) are likely to predict that criterion.
  2. Choose the specific criterion or criteria that we are trying to predict. Usually, the criteria are some measure of performance (e.g., production quality or earnings) or some valued behavior associated with the job (e.g., adaptability to change).
  3. Develop the predictive hypothesis based on strong rationale and prior research.
  4. Select the methods of measurement that effectively assess the construct of interest. Guion suggests that we should not limit our assessments to any particular method but that we should look at other methods. The tendency here is to assess candidates on traits for which tests are developed rather than to assess them on characteristics not easily assessed with current testing procedures (Lawshe, 1959).
  5. Design the research to assure that findings from research samples can generalize to the population of interest, job applicants.
  6. Collect data using standardized procedures and the appropriate treatment of those being assessed.
  7. Evaluate the results to see if the predictor correlates with the criterion of interest. This evaluation procedure is often called validation.
  8. Justify the selection procedure through the assessment of both incremental validity and utility. The former refers to the degree to which the proposed selection procedure significantly predicts the criterion over and above a procedure already in place. The latter refers to the economic value of utilizing the new procedure.
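Step 7, the validation step, usually reduces to correlating predictor scores with criterion scores. A minimal sketch with hypothetical data (both the helper function and the numbers are illustrative, not taken from any study):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between predictor scores and criterion scores."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical data: selection-test scores of eight hires and their
# later supervisory performance ratings
test_scores = [62, 70, 75, 58, 80, 66, 73, 68]
performance = [3.1, 3.6, 3.9, 2.8, 4.2, 3.3, 3.7, 3.4]

r = pearson_r(test_scores, performance)
print(f"criterion-related validity r = {r:.2f}")
```

For step 8, incremental validity would be assessed by asking how much the squared multiple correlation rises when the new predictor is added to the procedures already in place, typically via hierarchical regression.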

Technical Parameters

Reliability

Most readers are aware of the various forms of reliability and how they contribute to inferences in assessments. This section will note the forms of reliability used in industrial settings.

For the most part, organizations look at internal consistency reliability more than test-retest or parallel forms. In many industrial settings, with the exception of large organizations that conduct testing with many individuals on a regular basis, it is often asserted that time constraints limit the evaluation of the latter forms of reliability.
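Internal consistency is typically indexed with Cronbach's alpha, which compares the sum of the individual item variances to the variance of the total score. A sketch with hypothetical survey responses (the data and the four-item scale are invented for illustration):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a list of per-person item-score lists."""
    k = len(item_scores[0])                      # number of items
    item_vars = [variance([p[i] for p in item_scores]) for i in range(k)]
    total_var = variance([sum(p) for p in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses of five employees to a four-item satisfaction scale
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```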

The kind of reliability sought should be appropriate to the application of the assessment. Of particular importance to industrial settings are retest reliability and interrater reliability. For example, in the context of structured interviews or assessment centers, if raters (or judges) do not agree on an individual’s score, this should serve as a warning that the assessment platform should be reviewed. Moreover, political issues may come into play if one of the raters has a significant position of power in the organization. This rater may want the person to be selected even if other raters disagree.

Validity

All test validation involves inferences about psychological constructs (Schmitt & Landy, 1993). Validity is not an attribute of the tests or test items themselves (e.g., Guion, 1980).

We are not simply interested in whether an assessment predicts performance, but whether the inferences we make with regard to these relationships are correct. Binning and Barrett (1989) lay out an approach for assessing the validity (job-relatedness of a predictor) of personnel decisions based on the many inferences we make in validation.

Guion’s (1998) simplification of Binning and Barrett’s presentation is described in Figure 14.1 to illustrate the relationships among predictor constructs, predictor measures, criterion constructs, and criterion measures.


Line 1 shows that the predictor construct (e.g., conscientiousness) is related to the criterion construct (some form of job behavior, such as productivity, or a result of that behavior). Relationship 2 is the only inference that is empirically testable: the statistical relationship between the predictor measure, a test of conscientiousness such as the Hogan Personality Inventory (HPI; R. T. Hogan & Hogan, 1995), and the criterion measure (some measured criterion of job performance, such as scores on a multisource feedback assessment). Tests of inferences 3 and 4 are used in construct validation. Relationship 3 shows whether the predictor measure (the HPI) is a valid measure of the predictor construct (conscientiousness). Relationship 4 assesses whether the criterion measure (multisource feedback scores) effectively captures the performance of interest (e.g., effective customer service). Finally, relationship 5 is the assessment of whether the predictor measure (the HPI) is related to the criterion construct of interest (customer service) in a manner consistent with its presumed relationship to the criterion measure. Relationship 5 depends on the inferences we make about our job analysis data and those we make about our predictor and construct relationships. Although the importance of establishing construct validity is now well established in psychology, achieving the goal of known construct validity in the assessments used in work contexts continues to be elusive.

Political considerations come into play in establishing validity in industrial settings. Austin, Klimoski, and Hunt (1996) point out that “validity is necessary but not sufficient for effective long-term selection systems.” They suggest that in addition to the technical considerations discussed above, procedural justice or fairness and feasibility-utility also be considered. For example, in union environments optimizing all three standards may be a better strategy than maximizing one set.

Fairness

Industrial/organizational psychologists view fairness as a technical issue (Cascio, 1993), a social justice issue (Austin et al., 1996), and a public policy issue (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999). In industrial/organizational settings, the technical issues of differential validity and differential prediction are assessed for fairness. Differential validity exists when there are differences in subgroup validity coefficients. If a measure that is valid only for one subgroup is used for all individuals regardless of group membership, then the measure may discriminate unfairly against the subgroup for whom it is invalid. Job performance and test performance must both be considered, because unfair discrimination cannot be said to exist if inferior test performance is associated with inferior job performance by the same group. Differential prediction exists when there are slope or intercept differences between minority and nonminority groups. For example, Cascio (1993) points out that a common form of differential prediction occurs when the prediction system for the nonminority group slightly overpredicts minority group performance. In this case, minorities would tend not to do as well on the job as their test scores would indicate.
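The slope and intercept comparison that defines differential prediction can be illustrated with simulated data. The sketch below (hypothetical numbers, not drawn from any study) fits a separate regression line for each subgroup; equal slopes with unequal intercepts reproduce the overprediction pattern Cascio describes:

```python
import numpy as np

def prediction_line(test, perf):
    """Slope and intercept from regressing performance on test scores."""
    slope, intercept = np.polyfit(test, perf, 1)
    return slope, intercept

rng = np.random.default_rng(1)
n = 2000
# Simulated subgroups: equal slopes but different intercepts, so a single
# line fit to group A systematically overpredicts group B's performance
test_a = rng.normal(50, 10, n)
perf_a = 0.6 * test_a + 10 + rng.normal(0, 5, n)
test_b = rng.normal(50, 10, n)
perf_b = 0.6 * test_b + 5 + rng.normal(0, 5, n)

slope_a, int_a = prediction_line(test_a, perf_a)
slope_b, int_b = prediction_line(test_b, perf_b)
print(f"slopes: {slope_a:.2f} vs {slope_b:.2f}; "
      f"intercepts: {int_a:.1f} vs {int_b:.1f}")
```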

As Austin et al. (1996) point out, fairness is also related to the social justice of how the assessment is administered. For example, they point out that perceptions of the neutrality of decision makers, the respect given to test takers, and trust in the system are important for the long-term success of assessments. In fact, Gilliland (1993) argues that procedural justice can be decomposed into three components: the formal characteristics of procedures, the nature of the explanations offered to stakeholders, and the quality of interpersonal treatment as information is communicated. These issues must be considered for the process to be accepted.

Additionally, it cannot be stressed enough that there is no consensus on what is fair. Fairness is defined in a variety of ways and is subject to several interpretations (American Educational Research Association et al., 1999). Cascio (1993) points out that personnel practices, such as testing, must be considered within the total system of personnel decisions and that each generation should consider the policy implications of testing. The critical consideration is not whether to use tests but, rather, how to use tests (Cronbach, 1984).

Feasibility/Utility

This term has special meaning in the assessment of individuals in industry. It involves analyzing the interplay between the predictive power of assessment tools and the selection ratio. In general, even a modest correlation coefficient can have utility if there is a favorable selection ratio. Assessments in this context must be evaluated against the cost and potential payoff to the organization. Utility theory does just that. It provides decision makers with information on the costs, benefits, and consequences of all assessment options. For example, through utility analysis, an organization can decide whether a structured interview or a cognitive ability test is more cost effective. This decision would also take into account the psychometric properties of the assessments. Cascio (1993) points out the importance of expressing the utility unit (criteria) in terms that the user can understand, be it dollars, number of products developed, or reduction in the number of workers needed. Because assessment in industry must concern itself with the bottom line, costs and outcomes are a critical component in evaluating assessment tools.
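The standard way of expressing this trade-off in dollar terms is the Brogden-Cronbach-Gleser utility model. The sketch below uses hypothetical figures purely for illustration; in practice, SDy (the dollar-valued standard deviation of performance) is notoriously difficult to estimate:

```python
def bcg_utility(n_hired, years, validity, sd_y, mean_z,
                cost_per_applicant, n_applicants):
    """Brogden-Cronbach-Gleser estimate of a selection procedure's dollar gain.

    validity: correlation between predictor and job performance
    sd_y:     standard deviation of job performance in dollars
    mean_z:   average standardized predictor score of those hired
              (higher when the selection ratio is more favorable)
    """
    gain = n_hired * years * validity * sd_y * mean_z
    cost = cost_per_applicant * n_applicants
    return gain - cost

# Hypothetical example: hire 10 of 100 applicants, validity .40,
# SDy of $20,000, average hired z-score of 1.5, $50 per applicant tested
print(bcg_utility(10, 1, 0.40, 20_000, 1.5, 50, 100))  # -> 115000.0
```

Note how the selection ratio enters through mean_z: selecting only the top scorers raises the average standardized score of those hired, so even a modest validity coefficient can produce a substantial gain.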

Robustness

In selecting an assessment, it is also important to assess whether its validity holds across many situations. In other words, is the relationship robust? The theory of situational specificity was based on findings that validities for similar jobs in different work environments varied significantly. With the increased emphasis on meta-analysis and validity generalization (Schmidt & Hunter, 1977; Schmidt et al., 1993), many researchers now believe that these differences were due to statistical and measurement artifacts rather than to real differences between jobs.
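The artifact corrections used in validity generalization work can be sketched numerically. The function below applies two standard corrections, disattenuation for criterion unreliability and Thorndike's Case II adjustment for direct range restriction, to an observed validity coefficient (the numbers are illustrative only):

```python
import math

def correct_validity(r_obs, criterion_rel=1.0, u=1.0):
    """Correct an observed validity for two common statistical artifacts.

    criterion_rel: reliability of the criterion measure
    u:             ratio of unrestricted to restricted predictor SD (>= 1),
                   reflecting how much selection narrowed the sample
    """
    # Thorndike Case II correction for direct range restriction
    r = r_obs * u / math.sqrt(1 + r_obs**2 * (u**2 - 1))
    # Disattenuate for unreliability in the criterion measure
    return r / math.sqrt(criterion_rel)

# A modest observed validity can mask a substantially larger operational one
print(round(correct_validity(0.25, criterion_rel=0.64, u=1.5), 2))  # -> 0.45
```

Corrections like these are why meta-analysts concluded that much of the apparent situation-to-situation variation in validities reflected artifacts rather than true differences.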

Legal Considerations

Employment laws exist to prohibit unfair discrimination in employment and provide equal employment opportunity for all. Unfair discrimination occurs when employment decisions are based on race, sex, religion, ethnicity, age, or disability rather than on job-relevant knowledge, skills, abilities, and other characteristics (U.S. Department of Labor, 1999). Employment practices that unfairly discriminate against people are called unlawful or discriminatory employment practices.

Those endeavoring to conduct individual assessments in industry must consider the laws that apply in their jurisdiction. With the increasingly global society of practitioners, they must also consider laws in other countries. In the United States, case law and professional standards and acts must be followed. The following are some standards and acts that must be considered:

  • Title VII of the Civil Rights Act of 1964 (as amended in 1972), which prohibits unfair discrimination in all terms and conditions of employment based on race, color, religion, sex, and national origin.
  • Age Discrimination in Employment Act of 1967 (ADEA), which prohibits discrimination against employees or applicants age 40 or older in all aspects of the employment process.
  • Equal Employment Opportunity Act of 1972, which strengthened the Equal Employment Opportunity Commission (EEOC), the federal agency responsible for enforcing laws prohibiting employment discrimination.
  • Uniform Guidelines on Employee Selection Procedures of 1978 (Equal Employment Opportunity Commission, Civil Service Commission, U.S. Department of Labor, & U.S. Department of Justice, 1978), which incorporate a set of principles governing the use of employee selection procedures according to applicable laws and provide a framework for employers to determine the proper use of tests and other selection procedures. A basic principle of the guidelines is that it is unlawful to use a test or selection procedure that creates adverse impact, unless justified. When there is no charge of adverse impact, the guidelines do not require that one show the job-relatedness of assessment procedures; however, they strongly encourage one to use only job-related assessment tools. Demonstrating the job-relatedness of a test is the same as establishing that the test may be validly used as desired. Demonstrating the business necessity of an assessment involves showing that its use is essential to the safe and efficient operation of the business and that there are no alternative procedures available that are substantially equally valid to achieve business results with lesser adverse impact.
  • Title I of the Civil Rights Act of 1991, which specifically requires demonstration of both job-relatedness and business necessity (as described in the previous section). The business necessity requirement is harder to satisfy than the “business purpose test” suggested earlier by the Supreme Court. The act also prohibits score adjustments, the use of different cutoff scores for different groups of test takers, or the alteration of employment-related tests based on the demographics of test takers.
  • Americans with Disabilities Act of 1990 (ADA), which requires that qualified individuals with disabilities be given equal opportunity in all aspects of employment. Employers must provide reasonable accommodation to persons with disabilities when doing so would not pose undue hardship.
  • Standards for Educational and Psychological Testing (American Educational Research Association et al., 1999) and the Principles for the Validation and Use of Personnel Selection Procedures (Society for Industrial and Organizational Psychology, 1987), which are useful guidelines for individuals developing, evaluating, and using assessments in employment, counseling, and clinical settings. Even though they are guidelines, they are consistent with applicable regulations.

Purpose, Focus, and Tools for Assessment in Industrial/Organizational Settings

This section will describe how assessments in industrial/organizational settings are used, the focus of those assessments, and the major tools used to conduct them. The reader may notice that the focus of an assessment may be similar across different assessment purposes.

For example, cognitive ability may be the focus of both a selection test and a career planning assessment. Table 14.1 provides the linkage of these components and can serve as a preview of the material to follow.


Purpose of Assessment in Industry

Selection

Selection is relevant to organizations when there are more qualified applicants than positions to be filled. The organization must decide who among those applicants can perform best on the job and should therefore be hired. That decision is based upon the prediction that the person hired will be more satisfactory than the person rejected (Cascio, 1993). The goal of selection is thus to capitalize on individual differences in order to select those persons who possess the greatest amount of particular characteristics judged important for job success. A particular assessment is chosen because it looks as though it may be a valid measure of the attributes that are important for a particular job (Landy, 1989; Saks et al., 2000). One or more predictors are selected that presumably relate to the criteria (performance on the job). These predictor constructs become the basis for an assessment test. For example, if we identified that cognitive ability is an important predictor for performance in the job of customer service representative, then we would develop a test that measures the construct of cognitive ability.

The use of assessment tools for selection varies depending on the job performance domain and the level of individual we are selecting. For example, because routine work (e.g., assembly line) is more structured than novel work (e.g., consulting) and because teamwork requires more interpersonal skills than does individually based work, selection assessments vary greatly for different jobs. Additionally, the level of the position dictates the type of assessment we would use. Selection for a chief executive officer would probably involve several interviews, whereas selection for a secretary might involve a typing test, an interpersonal skills test, and an interview.

Thus, selection in industrial settings varies depending on the context in which it is used. Regardless of this difference, the Uniform Guidelines and the Standards for Educational and Psychological Testing should always be applied.

Promotion

When we conduct an assessment of performance, we are generally determining an individual's achievement at the time of the assessment. When we consider an individual for a promotion, however, that performance can be the basis for inferring his or her potential to perform a new job. In practice, we often also try to assess directly the traits or qualities thought to be relevant to the new job.

In the context of school, an achievement test would be a final examination. At work, it might be a work sample or job knowledge test (more on these types of tests follows) or multisource feedback on the individual's performance over the past year on the job. In the context of school, an assessment of potential might be the Scholastic Aptitude Test, which gauges a person's potential to perform well in college (Anastasi & Urbina, 1996). At work, an assessment might focus on managerial potential or sales potential, using scales (e.g., J. Hogan & Hogan, 1986) that have been shown to predict the performance of managers and sales representatives, respectively. Additionally, assessment centers and multisource assessment platforms are methods for assessing an individual's potential for promotion.

One challenge faced by psychologists in developing promotion instruments is that we are often promoting individuals based on past performance; however, often a new job requires additional KSAOs. This occurs when an individual is moving from a position in the union to one in management, from being a project member to being a project manager, or into any position requiring new skills. In this situation, the assessment should focus on future potential rather than simply past performance.

Another challenge in the promotion arena is that organizations often intend to use yearly performance appraisals to determine if a candidate should be promoted. However, there is often little variance between candidates on these appraisals. Many raters provide high ratings, often to ensure workplace harmony, therefore showing little difference between candidates. Other tools, involving multiple trained raters used in conjunction with the performance appraisal, might be used to remedy this problem.

Career Planning

Career planning is the process of helping individuals clarify a purpose and a vocation, develop career plans, set goals, and outline steps for reaching those goals. A typical career plan includes identification of a career path and the skills and abilities needed to progress in that path (Brunkan, 1991). It involves assessment, planning, goal setting, and strategizing to gain the skills and abilities required to implement the plan. It can be supported by coaching and counseling from a psychologist or from managers and human resources (HR) specialists within a company. Assessments for the purpose of career planning are conducted so that individuals can have a realistic assessment of their own potential (Cascio, 1991) as well as their values, interests, and lifestyles (Brunkan, 1991). Assessments for this purpose include the Strong Vocational Interest Blank (Hansen, 1986) and Holland's Vocational Preference Inventory (Holland, Fritsche, & Powell, 1994). Additionally, the results of participation in an assessment center (described later) or a multisource feedback instrument might be shared with an employee to identify those areas on which he or she could develop further.

Career management in organizations can take place from the perspective of the firm or of the individual. From the firm's perspective, the organization is concerned with ensuring that the individual develops skills that are relevant to succeeding in the firm and adds value as he or she progresses. From the individual's perspective, the person seeks to develop skills that can be applied either inside or outside of the organization. Additionally, the size and stability of the firm will dictate the degree to which assessments for the purpose of career management can occur. We would be more likely to see career development emphasized in large government organizations than in small start-up firms.

Training is often a large part of career planning because it facilitates the transfer of skills that are necessary as the individual takes on new tasks. Assessments in this context are used to see if people are ready for training. A representative part of the training is presented to the applicant (Saks et al., 2000) to gauge whether he or she is ready for, and likely to benefit from, the training. Additionally, Noe and Schmitt (1986) found that trainee attitudes and career involvement affected both satisfaction with and benefit from training. They show that understanding trainee attitudes can help the organization develop interventions (e.g., pretraining workshops devoted to increasing involvement and job commitment of trainees) to enhance the effectiveness of the training program.

Classification

Assessments can also be used to determine how to best use staff. The results of an assessment might provide management with knowledge of the KSAOs of an individual and information on his or her interests. Classification decisions are based upon the need to make the most effective matching of people and positions. The decision maker has a specified number of available positions on one hand and a specific number of people on the other.

Depending on the context, the firm might take the perspective that the firm’s needs should be fulfilled first or, if the individual is critical to the firm, that his or her needs should dictate where he or she is placed. In the former case, the organization would place an individual in an open slot rather than in a position where he or she may perform better or wish to work. The individual is placed there simply because there is an immediate business need. The organization knows that this individual can perform well, but he or she may not be satisfied here. On the other hand, organizations may have the flexibility to put the individual in a position he or she wants so that job satisfaction can be increased. Some organizations provide so-called stretch assignments, which allow individuals to learn new skills. Although the organization is taking a risk with these assignments, the hope is that the new skills will increase job satisfaction and also add value to the firm in the future.

Employee Assistance Programs

Many organizations use assessments as part of employee assistance programs (EAPs). Often these programs are viewed as an employee benefit to provide employees with outlets for problems that may affect their work. Assessments are used to diagnose stress- or drug-related problems. The individual might be treated through the firm’s EAP or referred to a specialist.

Compensation

Organizations also assess individuals to determine their appropriate compensation. A traditional method is to measure the employee’s job performance (job-based compensation). More recently, some organizations are using skill-based pay systems, according to which individuals are compensated on explicitly defined skills deemed important for their organization. Murray and Gerhart (1998) have found that skill-based systems can show greater productivity.

Focus of Assessments in Industry

The focus of assessments in industrial settings involves a number of possible constructs. In this section, we highlight those constructs that are robust and are referenced consistently in the literature.

Cognitive Ability Tests

The literature has established that cognitive ability, and specifically general mental ability, is a suitable predictor of many types of performance in the work setting. The construct of cognitive ability is generally defined as the “hypothetical attributes of individuals that are manifest when those individuals are performing tasks that involve the active manipulation of information” (Murphy, 1996, p. 13). Many agree with the comment of Ree and Earles (1992) that “If an employer were to use only intelligence tests and select the highest scoring applicant for the job, training results would be predicted well regardless of the job, and overall performance from the employees selected would be maximized” (p. 88). The meta-analysis conducted by Schmidt and Hunter (1998) showed that scores on cognitive ability measures predict task performance for all types of jobs and that general mental ability and a work sample test had the highest multivariate validity and utility for job performance.

Additionally, the work conducted for the Army Project A has shown that general mental ability consistently provides the best prediction of task proficiency (e.g., McHenry, Hough, Toquam, Hanson, & Ashworth, 1990). The evidence for this relationship is strong. For example, the meta-analysis of Hunter and Hunter (1984) showed that the validity of general mental ability is .58 for professional-managerial jobs, .56 for high-level complex technical jobs, .51 for medium-complexity jobs, .40 for semiskilled jobs, and .23 for unskilled jobs. Despite these strong correlations, there is also evidence that general mental ability is more predictive of task performance (formally recognized as part of the job) than of contextual performance (activities such as volunteering, persisting, and cooperating).

Personality

As we expand the criterion domain (Borman & Motowidlo, 1993) to include contextual performance, we see the importance of personality constructs. Although there is controversy over just how to define personality operationally (Klimoski, 1993), it is often conceptualized as a dynamic psychological structure determining adjustment to the environment but manifest in the regularities and consistencies in the behavior of an individual over time (Snyder & Ickes, 1985).

Mount and Barrick (1995) suggest that the emergence of the five-factor structure of personality led to empirical research that found “meaningful and consistent” relationships between personality and job performance. Over the past 15 years, researchers have found a great deal of evidence to support the notion that different components of the five-factor model (FFM; also known as the “Big Five”) predict various dimensions of performance. Although the FFM is prevalent at this time, it is only one of many personality frameworks. Others include the 16 Personality Factor Questionnaire (16PF; Cattell, Cattell, & Cattell, 1993) and a nine-factor model (Hough, 1992).

The components of the FFM are agreeableness, extroversion, emotional stability, conscientiousness, and openness to experience. Research has shown that these factors predict various dimensions of job performance and therefore are useful constructs to assess in selection. McHenry et al. (1990) found that scores from ability tests provided the best prediction for job-specific and general task proficiency (i.e., task performance), whereas temperament or personality predictors showed the highest correlations with such criteria as giving extra support, supporting peers, and exhibiting personal discipline (i.e., contextual performance).

The factor of personality that has received the most attention is conscientiousness. For example, Mount and Barrick (1995) conducted a meta-analysis that explored the relationship between conscientiousness and the following performance measures: overall job performance, training proficiency, technical proficiency, employee reliability, effort, quality, administration, and interpersonal orientation. They found that although conscientiousness predicted overall performance (both task and contextual), its relationships with the specific criteria determined by motivational effort (employee reliability and effort) were stronger. Organ and Ryan (1995) conducted a meta-analysis on the predictors of organizational citizenship behaviors (OCBs). They found significant relationships between conscientiousness and the altruism component of OCBs (altruism represents the extent to which an individual gives aid to another, such as a coworker). Schmidt and Hunter’s (1998) recent meta-analysis showed a .31 correlation between conscientiousness and overall job performance. They concluded that, in addition to general mental ability and job experience, conscientiousness is the “central determining variable of job performance” (p. 272).

The FFM of personality has also been shown to be predictive of an individual’s performance in the context of working in a team (we will discuss this issue in the next section). For example, a meta-analysis conducted by Mount, Barrick, and Stewart (1998) found that conscientiousness, agreeableness, and emotional stability were positively related to overall performance in jobs involving interpersonal interactions. Emotional stability and agreeableness were strongly related to performance in jobs that involve teamwork.

Teamwork Skills

Individuals working in organizations today are increasingly finding themselves working in teams with other people who have different sets of functional expertise (Hollenbeck, LePine, & Ilgen, 1996). This change from a clearly defined set of individual roles and responsibilities to an arrangement in which the individual is required to exhibit both technical expertise and an ability to assimilate quickly into a team is due to the speed and amount of information entering and exiting an organization. No individual has the ability to effectively integrate all of this information. Thus, teams have been introduced as a solution (Amason, Thompson, Hochwarter, & Harrison, 1995).

Although an organization is interested in overall team performance, it is important to focus on the individuals’ performance within the team (individual-in-team performance) so that we know how to select and appraise them. Hollenbeck et al. (1996) point out that certain types of individuals will assimilate into teams more easily than others. It is these individuals that we would want to select for a team. In the case of individual-in-team performance, we would suggest that both a contextual and a teamwork analysis be conducted. The contextual analysis provides the framework for team staffing by looking at, first, the reason for selection, be it to fill a vacancy, staff a team, or transform an organization from individual- to team-based work, and, second, the team’s functions (as described by Katz & Kahn, 1978), be they productive/technical, related to boundary management, adaptive, maintenance-oriented, or managerial/executive. The teamwork analysis focuses on the team’s role, the team’s division of labor, and the function of the position. The results have implications for the KSAOs needed for the job (Klimoski & Zukin, 1998).

Physical Abilities

Physical abilities are important for jobs in which strength, endurance, and balance are important (Guion, 1998), such as mail carrier, power line repairer, and police officer. Fleishman and Reilly (1992) have developed a taxonomy of these abilities. Measures developed to assess these abilities have predicted work sample criteria effectively (R. T. Hogan, 1991). However, they must be used with caution because they can cause discrimination. The key here is that the level of that ability must be job relevant. Physical ability tests can only be used when they are genuine prerequisites for the job.

Job-Specific Knowledge and Skill

The O*NET system of occupational information (Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1995) suggests that skills can be categorized as basic, cross-functional, and occupational specific. Basic skills are developed over a long period of time and provide the foundation for future learning. Cross-functional skills are useful for a variety of occupations and might include such skills as problem solving and resource management. Occupational (or job-specific) skills focus on those tasks required for a specific occupation. It is not surprising that research has shown that job knowledge has a direct effect on one’s ability to do one’s job. In fact, Schmidt, Hunter, and Outerbridge’s (1986) path analysis found a direct relationship between job knowledge and performance. Cognitive ability had an indirect effect on performance through job knowledge. Findings like this suggest that under certain circumstances job knowledge may be a more direct predictor of performance than cognitive ability.

Honesty and Integrity

The purpose of honesty/integrity assessments is to avoid hiring people prone to counterproductive behaviors. Sackett, Burris, and Callahan (1989) classify the measurement of these constructs into two types of tests. The first type is overt tests, which directly assess attitudes toward theft and dishonesty. They typically have two sections. One deals with attitudes toward theft and other forms of dishonesty (beliefs about the frequency and extent of employee theft, perceived ease of theft, and punitiveness toward theft). The other deals with admissions of theft. The second type consists of personality-based tests, which are designed to predict a broad range of counterproductive behaviors such as substance abuse (Ones & Viswesvaran, 1998b; Camara & Schneider, 1994).

Interpersonal Skills

Skills related to social perceptiveness have been examined in Goleman’s (1995) work on emotional intelligence and in research on social intelligence (e.g., M. E. Ford & Tisak, 1983; Zaccaro, Gilbert, Thor, & Mumford, 1991). Goleman argues that empathy and communication skills, as well as social and leadership skills, are important for success at work (and at home). Organizations are assessing individuals on emotional intelligence for both selection and developmental purposes. Another interpersonal skill that is used in industrial settings is social intelligence, which is defined as “acting wisely in human relations” (Thorndike, 1920) and as one’s ability to “accomplish relevant objectives in specific social settings” (M. E. Ford & Tisak, 1983, p. 197). In fact, Zaccaro et al. (1991) found that social intelligence is related to sensitivity to social cues and situationally appropriate responses. Socially intelligent individuals can better manage interpersonal interactions.

Interests

Psychologists in industrial settings use interest inventories to help individuals with career development. Large organizations going through major restructuring may have new positions in their organization. Interest inventories can help individuals determine which new positions might be a fit for them (although they may still have to develop new skills to succeed in those positions). Organizations going through downsizing might use these inventories as part of their outplacement services.

Learning

Psychologists in industry also assess one’s ability to learn or the information that one has learned. The former might be assessed for the purpose of determining potential success in a training effort or on the job (and therefore for selection). As mentioned earlier in this research paper, the latter might be assessed to determine whether individuals learned from attending a training course. This information helps the organization decide whether a proposed new course should be offered to a broader audience. Tools used to assess learning range from knowledge tests to assessments of cognitive structures to behavioral demonstrations of competencies under standardized circumstances (J. K. Ford, 1997).

Training and Experience

Organizations often use training and experience information to determine if the individual, based on his or her past, has the KSAOs necessary to perform in the job of interest. This information is mostly used for selection. An applicant might describe his or her training and experience through an application form, a questionnaire, a resume, or some combination of these.

Job Performance

Job performance information is frequently used for compensation or promotion decisions, as well as to refer an individual to an EAP (if job performance warrants the need for counseling). In these situations, job performance is measured to make personnel decisions for the individual. In typical validation studies, job performance is the criterion, and measures discussed above (e.g., training and experience, personality, cognitive ability) are used as predictors.

Tools

Cognitive Ability Tests

Schmidt and Hunter (1998) conducted a meta-analysis of measures used for hiring decisions. They found that cognitive ability tests (e.g., the Wonderlic Personnel Test, 1992) are robust predictors of performance and job-related learning.

They argue that because cognitive ability is so robust, it should be treated as the primary measure for hiring decisions and that other measures should be treated as supplementary. Where the information is available, this section summarizes these measures and findings on their incremental validity in predicting performance. Additionally, we suggest that the reader refer to Table 14.2 for examples of tools linked to each type of assessment.
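The "percent increase in validity" comparisons of this kind rest on the standard two-predictor multiple correlation formula: combine cognitive ability and a supplementary measure, then compare the resulting validity against cognitive ability alone. A minimal sketch follows; the correlation values are made up for illustration and are not Schmidt and Hunter's meta-analytic estimates:

```python
import math

def multiple_R(r_gy, r_xy, r_gx):
    """Multiple correlation of criterion y on two predictors g and x,
    computed from the three pairwise correlations (standard formula)."""
    num = r_gy**2 + r_xy**2 - 2 * r_gy * r_xy * r_gx
    return math.sqrt(num / (1 - r_gx**2))

# Illustrative (hypothetical) correlations:
r_gy = 0.51   # cognitive ability with job performance
r_xy = 0.40   # supplementary measure with job performance
r_gx = 0.38   # correlation between the two predictors

R = multiple_R(r_gy, r_xy, r_gx)
pct_gain = 100 * (R - r_gy) / r_gy
print(f"Validity of the combination: {R:.2f}; gain over g alone: {pct_gain:.0f}%")
```

The gain depends heavily on the predictor intercorrelation: the more a supplementary measure overlaps with cognitive ability, the smaller its incremental contribution, which is why highly g-loaded tools (e.g., biodata, assessment centers) add little.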

Personality Assessments

Several tools are available to measure personality. Some focus on the FFM of personality (e.g., Big Five Factor Markers, Goldberg, 1992; Mini-Markers, Saucier, 1994; Hogan Personality Inventory, R. T. Hogan & Hogan, 1995), whereas others focus on a broader set of personality characteristics (e.g., 16PF; Cattell et al., 1993).

Teamwork Skills Assessments

Several industrial/organizational psychologists have investigated the knowledge, skills, and abilities (KSAs) and personality dimensions that are important for teamwork. For example, Stevens and Campion (1994) studied several teams and argued that two major categories of KSAs are important for teamwork: interpersonal KSAs and self-management KSAs. Their research has shown that teamwork KSAs (including interpersonal KSAs and self-management KSAs) predict on-the-job teamwork performance of teams in a southeastern pulp processing mill and a cardboard processing plant. Stevens and Campion’s teamwork KSAs are conflict resolution, collaborative problem solving, communication, goal setting and performance management, planning, and task coordination.

Physical Abilities Tests

In jobs such as those of police officer, fire fighter, and mail carrier, physical strength (e.g., endurance or speed) is critical to job performance. Therefore, tools have been developed to assess the various types of physical abilities. Fleishman and Reilly’s (1992) work in this area identifies nine major physical ability dimensions along with scales for the analysis of job requirements for each of these dimensions that are anchored with specific examples.

Tools to Measure Job-Specific Knowledge and Skill

Work sample tests are used to measure job-specific knowledge and skill. They are hands-on job simulations that must be performed by applicants and that assess one’s procedural knowledge base. For example, as part of a work sample test, an applicant might be required to repair a series of defective electric motors. Often used to hire skilled workers such as welders and carpenters (Schmidt & Hunter, 1998), these assessments must be used with applicants who already know the job. Schmidt and Hunter found that work sample tests show a 24% increase in validity over that of cognitive ability tests. Job knowledge tests are also used to assess job-specific knowledge and skills. Like work sample tests, these assessments cannot be used to hire or evaluate inexperienced employees. They are often constructed by the hiring organization on the basis of a job analysis. Although they can be developed internally, this is often costly and time-consuming. Tests purchased off the shelf are less expensive and have only slightly lower validity than those developed by the organization. Job knowledge tests increase validity over cognitive ability measures by 14%.

Honesty and Integrity Tests

Schmidt and Hunter (1998) found that these types of assessments show greater validity and utility than do work samples. The reliability scale of the Hogan Personnel Selection Series (J. Hogan & Hogan, 1989) is designed to measure “organizational delinquency” and includes items dealing with hostility toward authority, thrill seeking, conscientiousness, and social insensitivity. Ones, Viswesvaran, and Schmidt (1993) found that integrity tests possess impressive criterion-related validity. Both overt and personality-based integrity tests correlated with measures of broad counterproductive behaviors such as violence on the job, tardiness, and absenteeism. Reviews of the validity of these instruments show no evidence of adverse impact against women or racial minorities (Sackett et al., 1989; Sackett & Harris, 1984).

Assessments of Interpersonal Skills

Interpersonal skills are often found in the results of job analyses or competency studies. Therefore, an organization might assess interpersonal skills in an interview by asking candidates to describe how they handled past experiences involving difficult interpersonal interactions. Interpersonal skills might also be assessed in assessment centers, where candidates are placed in situations such as the leaderless group discussion and their ability to interact with others is evaluated by trained raters. Structured interviews and assessment centers are discussed in more detail later.

Employment Interviews

Employment interviews can be unstructured or structured (Huffcutt, Roth, & McDaniel, 1996). Schmidt and Hunter (1998) point out that unstructured interviews have no fixed format or set of questions to be answered, and no format for scoring responses. Structured interviews include questions determined by a careful job analysis, along with a set approach to scoring. Structured interviews have greater validity, showing a 24% increase in validity over cognitive ability alone. Although there is no one universally accepted structured interview tool, depending on how structured the interview is, it might be based on detailed protocols so that candidates are asked the same questions and assessed against the same criteria. Interviewers are trained to ensure consistency across candidates (Judge, Higgins, & Cable, 2000).

As has been implied, the structured interview is a platform for assessment and can be designed for developmental applications. One of its functions can be to get at an applicant’s or candidate’s interpersonal skills. For example, Landy (1976) found that interviews of prospective police officers were able to assess communication skills and personal stability. Arvey and Campion (1982) summarized evidence that the interview was suitable for determining sociability.

Assessment Centers

In assessment centers, the participant is observed participating in various exercises such as leaderless group discussions, supervisor/subordinate simulations, and business games. The average assessment center includes seven exercises and lasts two days (Gaugler, Rosenthal, Thornton, & Bentson, 1987). Assessment centers have substantial validity but only moderate incremental validity over cognitive ability because they correlate highly with it. Despite the lack of incremental validity, organizations use them because they provide a wealth of information useful for the individual’s development.

Interest Inventories

Tools such as Holland’s Vocational Preference Inventory (Holland et al., 1994) and Self-Directed Search (Holland, 1985), as well as the Strong Interest Inventory (Harmon, Hansen, Borgen, & Hammer, 1994), are used to help individuals going through a career change. Interest inventories are often validated against their ability to predict occupational membership criteria and job satisfaction (R. T. Hogan & Blake, 1996). There is evidence that interest inventories do predict occupational membership criteria (e.g., Cairo, 1982; Hansen, 1986). However, results are mixed with regard to whether there is a significant relationship between interests and job satisfaction (e.g., Cairo, 1982; Worthington & Dolliver, 1977). These mixed results may arise because job satisfaction is affected by many factors, such as pay, security, and supervision, and because there are individual differences in the expression of interests within a job. Regardless of their ability to predict these criteria, interest inventories have been useful in helping individuals determine next steps in their career development.

Training and Experience Inventories

Schneider and Schneider (1994) point out that experience and training rating techniques rest on two assumptions. First, they are based on the notion that a person’s past behaviors are a valid predictor of what the person is likely to do in the future. Second, as individuals gain more experience in an occupation, they become more committed to it and will be more likely to perform well in it. There are various approaches to conducting these ratings. One example is the point method, in which raters assign points to candidates based on the type and length of a particular experience. Various kinds of experience and training are differentially weighted depending on the results of the job analysis. The literature on empirical validation of point method approaches suggests that they have sufficient validity. For example, McDaniel, Schmidt, and Hunter’s (1988) meta-analysis found a corrected validity coefficient of .15 for point method–based experience and training ratings.

Questionnaires on biographical data contain questions about life experiences such as involvement in student organizations, offices held, and the like. This practice is based on behavioral consistency theory, which suggests that past performance is the best predictor of future performance. Items are chosen because they have been shown to predict some criterion of job performance. Historical data, such as attendance and accomplishments, are included in these inventories. Research indicates that biodata measures correlate substantially with cognitive ability and have little or no incremental validity over it; some psychologists even suggest that they may be indirect measures of cognitive ability. These tools are often developed in-house based on the constructs deemed important for the job. Although some biodata inventories are available for general use, most organizations develop tools suited to their specific needs. For those interested, an example tool discussed as a predictor of occupational attainment (Snell, Stokes, Sands, & McBride, 1994) is the Owens Biographical Questionnaire (Owens & Schoenfeldt, 1979).
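The point method described above amounts to a weighted, capped scoring scheme. A minimal sketch follows; the category names, point weights, and caps here are hypothetical, whereas in practice they would be derived from the job analysis:

```python
# Hypothetical point-method scheme: each experience category carries
# job-analysis-derived points per year, capped at a maximum total.
SCHEME = {
    "supervisory":      {"points_per_year": 3.0, "cap": 15.0},
    "trade_experience": {"points_per_year": 2.0, "cap": 10.0},
    "formal_training":  {"points_per_year": 1.5, "cap": 6.0},
}

def te_score(experience_years):
    """Sum capped points across the rated categories of a candidate's
    experience and training (years per category)."""
    total = 0.0
    for category, years in experience_years.items():
        rule = SCHEME[category]
        total += min(years * rule["points_per_year"], rule["cap"])
    return total

candidate = {"supervisory": 2, "trade_experience": 6, "formal_training": 1}
print(te_score(candidate))  # 3.0*2 + min(12.0, 10.0) + 1.5*1 = 17.5
```

The caps reflect the diminishing returns of experience implied by the second assumption above: beyond some point, additional years in a category add no further predicted value.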

Measures of Job Performance

Measures of job performance are often in the form of supervisory ratings or multisource assessment platforms and can be used for promotions, salary increases, reductions in force, development, and research purposes. Supervisory assessments, in the form of ratings, are the most prevalent assessments of job performance in industrial settings. The content is frequently standardized within an organization with regard to job category and level. There may also be industry standards (e.g., for police officers or insurance auditors); however, specific ratings suited to the context are also frequently used. Supervisors generally rate individuals on their personal traits and attributes related to the job, the processes by which they get the job done, and the products that result from their work. Multisource assessment platforms are based on evaluations gathered about a target participant from two or more rating sources, including self, supervisors, peers, direct reports, internal customers, external customers, and vendors or suppliers. The ratings are based on KSAOs related to the job. Results of multisource assessments can be provided purely for development (as with Personnel Decisions International’s Profiler, 1991) or shared with the supervisor as input to a personnel decision (Dalessio, 1998). The problem with the latter is that the quality of ratings is often poorer when raters know that their evaluations will be used for personnel decisions (e.g., Murphy & Cleveland, 1995).

Major Issues

By virtue of our treatment of the material covered so far in this research paper, it should be clear that the scope of individual assessment activities in work contexts is quite broad. Similarly, the individuals and professions involved in this activity are diverse. It follows, then, that what constitutes an important issue or problem is likely to be tied to stakeholder needs and perspectives. In deference to this, this section briefly examines issues clustered around major stakeholder concerns related to business necessity, social policy, and technical or professional matters.

It should also be noted that it is not uncommon for psychologists to serve as the assessment tool if they are hired by an organization to advise on a personnel action for an individual. Ryan and Sackett (1987) point out that the input for such a psychologist’s decision could come from several sources (like those discussed above), but that the final evaluation is made by this psychologist. In this regard, the consulting psychologist often develops a proprietary assessment battery, which, when used over time, provides a model on which to base his or her recommendations and diagnoses. Client firms often do not see the data on candidates but trust the consultant to administer, score, and integrate the raw material.

The Business Case

Business Strategy

As noted, to be successful, organizations need to manage human resources effectively, but the magnitude of this requirement and the options open will be strongly affected by business strategy. At the most fundamental level, the nature and quality of employees required will depend on the product or service identified by senior management to be sold and the nature of technology available. Thus, for educational institutions and consulting firms, labor costs will be a major part of the cost structure of the firm. For heavy industry (e.g., steel making), these costs will be much less.

At a more operational level, the need for and the nature of the kind of assessment work called for will depend on HR strategy. For example, by virtue of the marketing claims of the firm for its products or services, it may need to perform more complete or complex assessments of prospective personnel. Similarly, exacting performance standards or the need for innovation may lead to more diligent assessment of people for positions of responsibility and leadership.

The supply of labor and availability of the critical skills will also affect the role of assessment in HR strategy. Under current economic conditions, with unemployment in the United States around 4%, most firms are having a difficult time staffing existing positions, much less trying to open new offices. Human resource strategy may have to shift in favor of augmenting the pool (Rynes & Barber, 1990), revising standards, spending a great deal more on assessment, or finding more efficient ways to assess more people.

Finally, the nature and extent of company assessment practices have systemic implications. On the one hand, what a firm does or does not do by way of assessment of individuals conveys an image to the public. It is often taken as a manifestation of company culture, communicating a value orientation toward the managing of employees. Depending on the industry and on the activities of competing firms, this may benefit or disadvantage a company. In the Washington, D.C. metropolitan area, the shortage of information technology professionals is so acute that many firms are all but eliminating the use of assessments for selection and adopting them for classification purposes. That is, given an available candidate, the question shifts from “should we hire him or her?” to “where can we place this person?”

Speed-to-Market

Organizational success is often related to the speed with which a product gets introduced, the speed of service, or the speed of response to a new business opportunity. In order for an organization to be competitive in its speed, the HR management systems must be up to the task. This readiness means, among other things, being able to obtain and use high-quality assessment information in a timely manner.

Often this is difficult because obtaining high-quality data is too time-consuming. In this regard, there is tension relative to what has historically been called the bandwidth and fidelity features of a measurement protocol. Although quick and coarse measures may give us some insight about someone, it is often only a high-level cut. Making really good diagnoses or predictions may require more deliberate and diligent assessment.

As a case in point, few firms are willing to spend the time (or expense) required for a five-day assessment center. Instead, practitioners are expected to take no more than one day of staff or candidate time on which to base important personnel decisions.

In addition to the issue of how to assess, the issue of who should provide the information becomes salient. In particular, just what aspects of individual achievement, potential, or weakness can be supplied efficiently by such agents as the worker himself, a manager, a human resource specialist, or a psychologist? Importantly, if it comes to using an assessment professional, the issue then becomes just who is in the best position to provide the service quickly. In this regard, it may actually be that an external consulting firm on retainer is indeed better positioned to provide high-quality assessment information faster than the company’s own HR department.

The Dynamics of Change

Related to the points made above is the reality that contemporary work organizations are being exposed to forces that induce constant change. The rate of new business start-ups has never been higher in the United States. All types of new businesses (but especially e-commerce firms) are coming into being. Although many are small, all require attention to the assessment of current or prospective employees. The scale and scope of business are dynamic, with acquisitions, mergers, or consolidations occurring almost daily. As many people know, the number of airlines, auto manufacturers, and book publishers is down. This, too, implies the need for assessment for purposes of selective retention or for outplacement of personnel. As a last example, the life cycle of products, especially in the consumer electronics industry, has never been shorter. Although the skills needed to design and produce products may not change dramatically, being able to quickly reallocate talent across business units as market demand for products falls calls for valid assessments of past performance, interests, and potential to contribute to the production and sales of new product lines.

The dynamics of change are no less real for public-sector organizations, albeit for somewhat different reasons. Most notably, an examination of the current demographic makeup of the workforce in the United States reveals that many current employees will be eligible for retirement in the next five years (Crenshaw, 1999). More specifically, 35% of the 1998 U.S. federal workforce will be eligible to retire in 2006 (Walsh, 2001). Additionally, 45% of the U.S. government’s most senior executives (the senior executive service) are expected to retire in 2005 (Walsh, 2001). A similar reality is facing most school districts relative to their teaching staff. Even the U.S. military must confront new challenges to effectively recruit and retain key personnel (Suro, 2000). In all these instances, we maintain that programs for the valid and timely assessment of individuals have an important role to play.

Build It or Buy It

At one time in the United States, many large corporations had their own HR research units. These would be located in the HR function, staffed by psychologists, and given the responsibility to design, develop, and implement assessment programs. Today, most firms have elected to purchase such services. In doing so, these firms have shed some fixed expenses, arguably to the benefit of profitability. By using an outside firm, the company is presumably getting state-of-the-art methods and models for assessment. Moreover, the outside firm usually has greater insight regarding normative practices in the industry (and even competitive assessment data). On the other hand, in going outside, the company may have increased its agency costs.

The challenges facing a company that wishes to buy assessment services are numerous. For instance, it is often difficult to ascertain the professional qualifications of the numerous firms offering services. This is especially true when the consulting firm has a strong internal champion for the services of the outsider. The temptation to latch on to the latest tool or technique may also be irresistible. There is ample evidence that many management practices, including assessment programs, have a fadlike quality: They are adopted for their popularity at the moment (Abrahamson, 1996), regardless of their suitability. Finally, there is the very real possibility that the outside firm will not take a stewardship role and gather appropriate statistical evidence that the assessment program meets technical and legal requirements. Even should a consultant recommend local validation or adverse impact analysis, the extra costs, time, and effort may cause the firm to opt out of this activity.

Proper Training

As noted, most assessments in industry are carried out by individuals who are neither psychologists nor measurement specialists (e.g., psychometricians). Although it is also true that many kinds of assessments may not require a PhD, it is still imperative that the firm ensure that, whatever the assessment practice used, those responsible for gathering and interpreting data are qualified to do so.

This aspect of due diligence is increasingly difficult as a result of increased public access to tests, whether because of their frequent use or because of the reluctance of test publishers (due to financial pressures) to verify test user qualifications. It is also exacerbated by the increased use of computer-based test systems, which make it logistically easy for even a novice to offer services.

Privacy

Assessment data or results are often paid for by the firm, even when they are nominally in the service of meeting worker counseling or career needs. More to the point, they are often archived in a worker’s personnel file. Thus, there is frequently a great deal of tension regarding just how long information should be retained and, importantly, who should have access.

Increasingly, such records are available electronically as companies go to web-based HR systems. Access to such systems, although protected, can rarely be guaranteed. Moreover, there are an increasing number of instances in which third parties can demand to see such data if they are seen as material to litigation. This also implies that disgruntled employees as well may be expected to seek access to individual assessment data (e.g., performance reviews or assessment of career potential) on other employees.

Finally, certain kinds of assessment methods are viewed as more problematic from a privacy perspective. At this point in time, most applicants will accept and tolerate being assessed on ability and performance tests. They will usually see a job interview as a reasonable intrusion into their affairs. However, such approaches as drug testing, honesty testing, and polygraph testing usually create concerns on the part of applicants and employees alike. Assessments with regard to physical or mental disability, even when they are not prohibited by the ADA, are also likely to lead to perceptions of abuse of power and invasion of privacy (Linowes & Spencer, 1996).

Compensation

There are a variety of factors on which to base an individual’s compensation: job duties, title, tenure, seniority, market considerations, and performance. All of these imply careful protocols. However, given that the organization wishes to link compensation to performance, additional considerations are involved (Heneman, 1992; Lawler, 1990). Among these, it is most critical for the organizational managers in charge to have an appropriate assessment platform.

Assessments for purposes of compensation should be based on those work behaviors and outcomes that are of strategic value to the organization (Lawler, 1990). Moreover, from the worker’s point of view, variance on the measures used must reflect individual effectiveness in order to be perceived as fair. All other things being equal, such measures should have relatively small standard errors as well. Finally, because this context represents a case of high-stakes measurement, assessment devices and tools need to be robust and resistant to abuse or unwarranted influence intended to wrongfully benefit certain employees over others. Clearly, the successful use of individual assessments as the basis for compensation presents a challenge to system designers and managers.

When it comes to linking pay to assessments, two special cases are worth pointing out. The first involves recent initiatives aimed at increasing the capacity of organizations to be flexible and adaptable by ensuring that they have a well-trained and multiskilled workforce. Several organizations have attempted to accomplish this by paying individuals (at least in part) in relation to the number and nature of the skills that they possess. In skill-based pay systems, individuals qualify for wage premiums by being assessed and performing well on measures specifically set up for this purpose. Depending on the organization, this might be in one or several skill or competency areas felt to be important if the worker is to contribute to the future growth of the firm. Passing performance itself may be set at one of several levels of mastery (Ledford, Lawler, & Mohrman, 1995).

Thus, under this arrangement, the number and nature of the assessment domains to be covered, the tools (e.g., tests, work sample, portfolio), the rules and procedures for allowing workers to be assessed (e.g., they must be performing at certain levels on their current job), the cut or passing scores adopted, and the salary or wage differential associated with a particular skill or score must be worked out. Moreover, if the skills of interest are of the type that can atrophy if not used, it may also be necessary for regular and repeated assessments to be performed over time. One would need not only to qualify but to remain qualified to receive the extra pay.

A second area that presents special challenges for assessment relates to the increased use of work teams in contemporary organizations (Sundstrom, 1998). In general, the assessment of individuals as members of work teams requires new models and techniques. Depending on the purpose of the assessment (e.g., retention or developmental feedback), and especially when it serves as the basis for individual compensation, the protocols to be followed in a team environment would be very different, as noted in an earlier section. Aside from the technical complexities, however, the choice of just who is to provide the assessment data on individuals in a work team is problematic (Jones & Moffett, 1998). In particular, having team members assess one another, especially when the information is to be used for compensation decisions, requires careful measurement development work, user training, regular monitoring of practices, and clear accountabilities in order to ensure quality.

In each of the areas examined, the assessment professional, in collaboration with the organization’s owner or agent, needs not only to bring to bear the best technical knowledge and practices, but also to ensure that the business case can be made or justified in the design or choice of assessment platform.

Technical Issues

In spite of over 100 years of technical developments associated with individual assessment theory and practice, there are a number of issues that remain to be resolved when it comes to applications to work settings.

The Criterion Domain

Just what to measure continues to be an important issue facing practitioners. As noted throughout this research paper, there is certainly no shortage of options. Recent developments, however, have forced attention to just how broad or narrow the focus should be when it comes to assessing individual effectiveness.

Traditionally, individuals in work settings have been assessed relative to job performance. Here, the choices have typically included measuring both the quality and quantity of performance. However, systematic investigations into the nature of individual effectiveness (Campbell, 1990) have made a strong case for a more elaborate conceptualization of the performance domain. In particular, these studies have pointed out the need for and appropriateness of assessing what has become termed contextual performance (Borman & Motowidlo, 1993).

As discussed in the section on selection, contextual performance by workers is reflected in extra-job behaviors and accomplishments, including such things as cooperation, teamwork, loyalty, and self-development. As extra-job behaviors, their role relative to workers’ formal job duties is often indirect. However, there is evidence that their manifestation does make for a better workplace and for unit effectiveness.

Competencies

As previously noted, there is reason to believe that there is merit to the concept of competencies when it comes to assessing individuals in the workplace. Here too, however, there is no consensus on just how to incorporate this concept (Schippmann et al., 2000). For example, some would include assessing elements of a worker’s value system and needs as part of the process (e.g., Spencer & Spencer, 1993). Others would not try to get at the building blocks underlying or producing performance, but would focus instead directly on the demonstrated (and observed) capacity to perform job-relevant activities.

A second issue relates to the nature and number of competencies that exist. In particular, it is unclear whether their definitions should emphasize links to specific jobs, roles, or firms (e.g., the competencies needed to be effective as a call service representative for an insurance company) or whether they should be thought of as more generic in nature and cross-situational in relevance (e.g., “telephone service rendering”).

Noncognitive Measures

There has been a major shift in thinking about the appropriateness of noncognitive measures, because these can contribute to effective HR management programs in work organizations. This has been especially true regarding their application to personnel selection and screening.

Early reviews of the usefulness of interest and personality inventories had been quite critical (Guion & Gottier, 1965), and with good reason. Many of the inventories used had been designed for the assessment of psychopathology (e.g., the Minnesota Multiphasic Personality Inventory). Moreover, the application of a particular scale was often made without regard to a careful analysis of the requirements of the job (Day & Silverman, 1989). Finally, it was not easy to find studies that were carefully designed relative to the validity data needed to make a business case.

In contrast, greater attention to these technical issues in recent years has resulted in great advances and the increased use of personality and interest measures in personnel work. The empirical record is now convincing enough to most critics (e.g., Schmidt & Hunter, 1998) that they truly can add value over cognitive measures when it comes to predicting or selecting individuals for success on the job.

Much more research is needed, however. For instance, just where to use personality assessments is not yet well understood. Modern, work-relevant personality inventories are found to have acceptable validities, but they do not always have value over cognitive measures (Schmidt & Hunter, 1998). Moreover, the nature of the criterion domain appears to be key. Do we want to assess for success in training, early task performance, contextual performance, career success, or commitment or tenure? The value of noncognitive measures should certainly be linked to the criterion of interest. However, we do not yet have parametric research here to tell us when and where this is the case.

The conceptualization of such measures is also problematic. As noted in an earlier section, it is now quite common to use inventories based on meta-analytic research (e.g., Digman, 1990; Goldberg, 1990; Saucier, 1994) supporting the existence and importance of five key constructs. However, these so-called Big Five dimensions (conscientiousness, extraversion, openness to experience, agreeableness, and emotional stability) represent only one approach to conceptualizing and measuring the personality domain. Additional work is needed to ascertain when this set of robust factors is indeed a better way to meet noncognitive assessment needs than scales more closely focused on the personnel problem at hand (e.g., individual assessment for placement in work teams).

There is also the issue of just when it may be appropriate to use more clinically oriented noncognitive measures. As noted in an earlier section, these are often found in applications such as individual coaching or in the diagnosis of behavior or performance problems in the workplace. However, their validity and relative utility are not well understood.

The use of noncognitive measures in high-stakes settings, such as screening for employment, has also revealed a key weakness: their susceptibility to being faked (Ones, Viswesvaran, & Reiss, 1996; McFarland & Ryan, 2000). Applicants or candidates for promotion who are assessed on personality measures are likely to try to look good. Very often the measures themselves are transparent, in the sense that the applicant can infer which answers would give him or her an advantage in the scoring.

To be sure, the issue of faking (or intentional distortion) on such measures has been investigated for many years (Ones et al., 1996). However, the emphasis in the past was on response formats felt to make it difficult for someone to deliberately mislead the administrator. Formats like the forced-choice tetrad and the weighted checklist were invented for this purpose (Saks et al., 2000). Although there is evidence that these can help mitigate the problem, the approach has not produced a major breakthrough.

Instead, research is currently under way to understand faking as a phenomenon in its own right. The newer focus thus includes examining the level of awareness that test takers have of their own behavior, as investigators try to distinguish between distortion that is motivated and distortion that stems from a lack of self-insight. Similarly, the role of contextual forces is being examined to identify situations that may promote honesty. Paradoxically, there is also a question of what faking implies for estimating the future job behavior or performance of someone who does so. Some might argue that faking in a high-stakes situation is more a manifestation of adaptability than of dishonesty, and thus might actually imply a greater, not lesser, likelihood of success as a worker. Finally, there is uncertainty regarding the implications of applicant faking for personnel decisions based on data from those measures (e.g., personality) most susceptible to it. The prevalence and nonrandom distribution of faking in the applicant pool should make a difference here, but the evidence is not conclusive (Ones & Viswesvaran, 1998b). Clearly, much research remains to be done.

Technology-Mediated Testing

The previous sections of this research paper have highlighted the variety of ways that individual assessments in work organizations can be carried out. Traditionally, however, the individual of interest is assessed in physical proximity to the assessor, who is face to face with the candidate or at least in the same room. In contrast, current practice is moving toward technology-mediated interaction.

The technology involved can vary with the application. In the past, assessment processes have been mediated by such mundane technologies as the telephone and the fax machine. In the case of the former, the person to be tested might be asked to dial in to a number and be talked through a series of questions, responding on the telephone keypad according to instructions that mapped the standard dial to a scoring matrix. Despite such primitive arrangements, many organizations were able to achieve cost savings and improve the speed of decisions while still maintaining test security and standardized administration. By and large, however, the computer is currently the technology of choice.

Burke (1993) identifies and contrasts computer-based testing (CBT) and computer-adaptive testing (CAT) as two major advances in computerized psychological testing. The former builds on optical scanning technology and the capacity for computers not only to summarize responses but, using carefully developed algorithms, to provide interpretations.

As the movement toward the on-line administration of tests converges with developments in item response theory (IRT) and CAT, new technical issues arise and must be resolved (Green, Bock, Humphreys, Linn, & Reckase, 1994; Kingsbury, 1990). Burke highlights such issues as determining the best way to develop the work attribute-job performance matrices on which to base computer algorithms, establishing the construct equivalence of tests across CAT versions and in relation to conventional forms, and determining criterion-related validity. Finally, although there is some support for the belief that CAT might improve test security (Bunderson, Inouye, & Olsen, 1989), it is possible that a motivated test taker would have an easier time recalling specific test items and communicating them to future test takers.
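The core of the adaptive logic can be sketched in a few lines. The code below assumes a one-parameter (Rasch) IRT model and a deliberately crude gradient update for the ability estimate; a production CAT would use maximum-likelihood or Bayesian (EAP) scoring plus item-exposure controls, so the names and steps here are purely illustrative.

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct answer, given ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item; maximal when b == theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta, remaining):
    """Adaptive step: administer the unused item that is most
    informative at the current ability estimate."""
    return max(remaining, key=lambda b: item_information(theta, b))

def update_theta(theta, b, correct, step=0.5):
    """Crude ascent on the response log-likelihood: move the estimate
    up after a correct answer, down after an incorrect one."""
    gradient = (1.0 if correct else 0.0) - p_correct(theta, b)
    return theta + step * gradient

# One illustrative administration step with a tiny hypothetical item bank.
bank = [-2.0, -1.0, 0.0, 1.0, 2.0]
theta_hat = 0.0
b = next_item(theta_hat, bank)                    # picks the item nearest theta_hat
theta_hat = update_theta(theta_hat, b, correct=True)
```

Because item information peaks where the odds of success are even, the procedure naturally converges on items near the examinee's level, which is part of why recall of specific items by motivated test takers remains a security concern.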

Validity

The earliest practitioners tended to focus on the relevance of the test to the job (Landy, 1991), thus stressing what we would now call content validity. If one wanted to assess someone’s suitability for a manufacturing position, one looked for evidence of the knowledge, skills, or ability to perform, usually backed by criterion-related validity data. However, as the field matured, and for both theoretical and practical reasons (e.g., the need to identify a worker who does not yet have the skill but has the potential to be effective), most thoughtful practitioners have sought to ensure that assessments allow valid inferences about the individual’s status on a job-related construct (Schmitt & Landy, 1993).

As described above, the process of establishing construct validity is one of building a case that the measure is appropriate, based on a variety of types of evidence obtained from multiple cases and numerous settings. Analytic tools and approaches such as factor analysis, scaling, rational methods, multitrait-multimethod analysis, validity generalization, path analysis, and even the experimental method can be used (Schmitt & Landy, 1993). Although data from one study or analyzed by one technique may be less than conclusive, there is reassurance in replication and in finding expected or predicted patterns. Thus, the conceptual models and techniques needed to infer that an assessment tool does indeed measure what it is intended to measure are readily available.

Nevertheless, the problem of construct validity persists. For example, biodata or life history information has been found useful for predicting job outcomes, but the construct nature of such measures is often obscure or overlooked (Mael & Hirsch, 1993; Mumford, Costanza, Connelly, & Johnson, 1996). Similarly, the assessment center method has been adopted by many firms as a way to estimate the developmental needs of workers (usually couched in terms of trait-like constructs). However, the evidence for the construct validity of such applications is weak (Klimoski, 1993; Klimoski & Brickner, 1987).

The problem is likely to become worse, given the competitive pressures on companies and practitioners noted above. For example, there is great emphasis on satisfying the client. Often this means providing custom work within a short time frame, leaving few options for careful development work. Even when some scale or test development occurs, the proprietary nature of such work products often thwarts traditional ways of ascertaining the quality of a test (e.g., requesting and reviewing a technical report). Similarly, many managerial decisions about whom to hire for professional assessment work, or about which assessment approach to use, are unduly influenced by popular culture and testimonials and rarely based on rigorous evidence of validity or utility (Abrahamson, 1996).

On a different tack, a more diverse workforce, many of whose members speak English as a second language, and the tendency for firms to become multinational also present challenges. Successful cross-translation of material is difficult at best. When this must be done for assessment tools, the challenges to construct validity are considerable. Similarly, as practitioners attempt to be responsive to applicants covered under the ADA, variation in testing conditions or test format would seem to have a major, but frequently underexamined, impact on construct validity (Klimoski & Palmer, 1994).

Social Policy Issues

Fairness of Assessment Practices

Regardless of the approach used or the purpose served, the fairness of assessments in the context of work could be considered the major social policy issue facing practitioners. Indeed, much of the work on noncognitive predictors has been stimulated by the goal of finding valid measures that do not have an adverse impact on subgroups in society (Ryan, Ployhart, & Friedel, 1998).

One approach to this problem has been to investigate the bases for inferior performance of subgroups, often using the tools of cognitive psychology (Sternberg & Wagner, 1993; DeShon, Smith, Chan, & Schmitt, 1998). Here, the goal is to ascertain the state of mind or the test-taking strategy used by individuals, with the goal of assisting them in changing their approach if it is dysfunctional. In a related manner, the use of exposure to and training on problematic assessment tools has also been tried, but with limited success (Ryan et al., 1998).

A third approach has been to examine the perceived fairness of assessment devices. Here, the goal is to understand how fair the test appears in the eyes of the test taker, because perceived fairness affects his or her motivation to do well and the tendency to challenge the outcome of the assessment (Chan, Schmitt, Sacco, & DeShon, 1998; Gilliland, 1994).

Finally, it has already been noted that the ADA also presents fairness challenges. Applicants or workers covered under the ADA have the right to request an accommodation to allow them to perform to their potential in an assessment event. However, what is reasonable from the firm’s point of view may be viewed as unfair by the test taker. Ironically, even when the company and the worker can agree on disability-mitigating adjustments to the assessment platform, other workers and even supervisors may see this as an injustice, because they had to meet the standard, and frequently more strenuous, testing conditions (Klimoski & Donahue, 1997).

Poor Language Skills

At the time of this writing, unemployment in the United States is very low while immigration is very high. Add to this the observation that American schools are producing graduates who perform poorly on many carefully developed achievement-related assessments. All this implies that current and near-future members of the workforce are poorly prepared to perform well on many of the traditional assessment tools used in the work context. Language and reading skills are not at appropriate levels; in extreme cases, recent immigrants may be illiterate even in their national language. As noted above, these tendencies create problems of both valid inference and perceived social injustice. At a minimum, conventional tools need to be recalibrated in light of such a diverse pool of talent; the situation may also call for entirely new approaches to individual assessment.

Open Testing

Given the importance of doing well on assessments as a way to obtain desirable work outcomes, it is not surprising that there are tremendous pressures to make the nature of tests, test content, and assessment protocols more public. Of course, over the years there have always been arrangements for informing both the representative of the firm and the prospective test taker about what a measure is all about. More recently, however, there are increasing demands for public disclosure of such things as the exact items or exercises used, response options, and even item operating characteristics. This trend has been accelerated by the availability of social dialogue and the ease of information search on the Internet: details on items can now be made known to millions of people in a matter of seconds and at little cost. Needless to say, full disclosure would quickly compromise a test and dramatically affect its validity and utility.

Thus the challenge is to find ways of meeting the legitimate needs of stakeholders and ensuring the accountability of firms to professional practice, while still maintaining the integrity of the assessment program.

Although these issues are not exhaustive, they highlight the fact that the field continues to face new challenges. As a field of practice driven not only by developments in theory but also by trends in the business community, it must continuously respond to and affect major societal changes in the economy, immigration, public policy, culture, and business practices.

Bibliography:

  1. Abrahamson, E. (1996). Management fashion. Academy of Management Review, 21, 254–285.
  2. Amason, A. C., Thompson, K. R., Hochwarter, W. A., & Harrison, A. W. (1995). Conflict: An important dimension in successful management teams. Organizational Dynamics, 23, 20–35.
  3. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  4. Anastasi, A., & Urbina, S. (1996). Psychological testing (7th ed.). New York: Macmillan.
  5. Arvey, R. D., & Campion, J. E. (1982). Biographical trait and behavioral sampling predictors of performance in a stressful life situation. Personnel Psychology, 35, 281–321.
  6. Austin, J. T., Klimoski, R. J., & Hunt, S. T. (1996). Dilemmatics in public sector assessment: A framework for developing and evaluating selection systems. Human Performance, 3, 177–198.
  7. Austin, J. T., Scherbaum, C. A., & Mahlman, R. A. (2000). History of research methods in industrial and organizational psychology: Measurement, design analysis. In S. Rogelberg (Ed.), Handbook of research methods in industrial and organizational psychology. Malden, MA: Basil Blackwell.
  8. Baritz, L. (1960). The servants of power. Middleton, CT: Wesleyan University Press.
  9. Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478–494.
  10. Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 71–98). San Francisco: Jossey-Bass.
  11. Brunkan, R. J. (1991). Psychological assessment in career development. In C. P. Hansen & K. A. Conrad (Eds.), A handbook of psychological assessment in business. New York: Quorum.
  12. Bunderson, C. V., Inouye, D. K., & Olsen, J. B. (1989). The four generations of computerized educational measurement. In R. L. Linn (Ed.), Educational measurement (3rd ed.). New York: American Council on Education and Macmillan.
  13. Burke, M. J. (1993). Computerized psychological testing: Impacts on measuring predictor constructs and future job behavior. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 203–239). San Francisco: Jossey-Bass.
  14. Cairo, P. C. (1982). Measured interests versus expressed interests as predictors of long-term occupational membership. Journal of Vocational Behavior, 20, 343–353.
  15. Camera, W., & Schneider, D. L. (1994). Integrity tests: Facts and unresolved issues. American Psychologist, 49, 112–119.
  16. Campbell, J. P. (1990). An overview of the army selection and classification project (Project A). Personnel Psychology, 43, 231–240.
  17. Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco: Jossey-Bass.
  18. Cascio, W. F. (1991). Applied psychology in personnel management (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
  19. Cascio, W. F. (1993). Assessing the utility of selection decisions: Theoretical and practical considerations. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 71–98). San Francisco: Jossey-Bass.
  20. Cattell, R. B., Cattell, A. K., & Cattell, H. F. (1993). Sixteen Personality Factors Practical Questionnaire (5th ed.). Champaign, IL: Institute for Personality and Abilities Testing.
  21. Crenshaw, A. B. (1999, September 22). The rise of phased retirement. Washington Post, 2.
  22. Cronbach, L. J. (1984). Essentials of psychological testing (4th ed.). New York: Harper and Row.
  23. Dalessio, A. T. (1998). Using multisource feedback for employee development and personnel decisions. In J. W. Smither (Ed.), Performance appraisal: State of the art in practice (pp. 278–330). San Francisco: Jossey-Bass.
  24. Dawis, R. (1991). Vocational interests, values and preferences. In M. Dunnette & L. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., pp. 833–872). Palo Alto, CA: Consulting Psychologists Press.
  25. Day, D. V., & Silverman, S. B. (1989). Personality and job performance: Evidence of incremental validity. Personnel Psychology, 42, 25–36.
  26. DeShon, R. P., Smith, M. R., Chan, D., & Schmitt, N. (1998). Can racial differences in cognitive test performance be reduced by presenting problems in a social context? Journal of Applied Psychology, 83, 438–451.
  27. Digman, J. M. (1990). Personality structure: Emergence of the Five-Factor Model. Annual Review of Psychology, 41, 417–440.
  28. Equal Employment Opportunity Commission, Civil Service Commission, U.S. Department of Labor, & U.S. Department of Justice. (1978). Uniform guidelines on employee selection procedures. Federal Register, 43, 38290–38315.
  29. Fitzgerald, L. F., Drasgow, F., Hulin, C. L., Gelfand, M. J., & Magley, V. J. (1997). The antecedents and consequences of sexual harassment in organizations: A test of an integrated model. Journal of Applied Psychology, 82, 578–589.
  30. Flanagan, J. C. (1954). The Critical Incident Technique. Psychological Bulletin, 51, 327–358.
  31. Fleishman, E. A., & Reilly, M. E. (1992). Handbook of human abilities: Definitions, measurements and job task requirements. Palo Alto, CA: Consulting Psychologists Press.
  32. Ford, J. K. (1997). Improving training effectiveness in work organizations. Mahwah, NJ: Erlbaum.
  33. Ford, M. E., & Tisak, M. S. (1983). A further search for social intelligence. Journal of Educational Psychology, 75, 196–206.
  34. Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493–511.
  35. Ghiselli, E. E. (1956). Dimensional problems of criteria. Journal of Applied Psychology, 40, 1–4.
  36. Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694–734.
  37. Gilliland, S. W. (1994). Effects of procedural and distributive justice on reactions to selection systems. Journal of Applied Psychology, 79, 791–804.
  38. Glomb, T. M., Munson, L. J., Hulin, C. L., Bergman, M. E., & Drasgow, F. (1999). Structural equation models of sexual harassment: Longitudinal explorations and cross-sectional generalizations. Journal of Applied Psychology, 84, 14–28.
  39. Goldberg, L. R. (1990). An alternative “description of personality”: The Big Five factor structures. Journal of Personality and Social Psychology, 59, 1216–1229.
  40. Goldberg, L. R. (1992). The development of markers for the Big-Five factor structure. Psychological Assessment, 4, 26–42.
  41. Goleman, D. (1995). Emotional intelligence. New York: Bantam Books.
  42. Green, B. F., Bock, R. D., Humphreys, L. G., Linn, R. L., & Reckase, M. D. (1994). Technical guidelines for assessing computer adaptive tests. Journal of Educational Measurement, 21, 347–360.
  43. Guion, R. M. (1980). On trinitarian doctrines of validity. Professional Psychology, 11, 385–398.
  44. Guion, R. M. (1998). Assessment, measurement, and prediction for personnel decisions. Mahwah, NJ: Erlbaum.
  45. Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selection. Personnel Psychology, 18, 135–164.
  46. Halbfinger, D. M. (1999, March 21). Police and applicants have big stake in test. New York Times, I40.
  47. Hansen, J. C. (1986). Strong Vocational Interest Blank/Strong-Campbell Interest Inventory. In W. B. Walsh & S. H. Osipow (Eds.), Advances in vocational psychology, Vol. 1: The assessment of interests (pp. 1–29). Hillsdale, NJ: Erlbaum.
  48. Harmon, L. W., Hansen, J. C., Borgen, F. H., & Hammer, A. L. (1994). Strong Interest Inventory: Applications and technical guide. Palo Alto, CA: Consulting Psychologists Press.
  49. Heneman, R. L. (1992). Merit pay. Reading, MA: Addison-Wesley.
  50. Hogan, J., & Hogan, R. T. (1986). Hogan Personnel Selection Series manual. Minneapolis, MN: National Computer Systems.
  51. Hogan, J., & Hogan, R. T. (1989). How to measure employee reliability. Journal of Applied Psychology, 74, 273–279.
  52. Hogan, R. T. (1991). Personality and personality management. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 2). Palo Alto, CA: Consulting Psychologists Press.
  53. Hogan, R. T., & Blake, R. J. (1996). Vocational interests: Matching self concept with work environment. In K. R. Murphy (Ed.), Individual difference and behavior in organizations (pp. 89–144). San Francisco: Jossey-Bass.
  54. Hogan, R. T., & Hogan, J. (1995). Hogan Personality Inventory manual. Tulsa, OK: Hogan Selection Systems.
  55. Holland, J. L. (1985). Professional manual Self-Directed Search. Odessa, FL: Psychological Assessment Resources.
  56. Holland, J. L., Fritsche, B. A., & Powell, A. B. (1994). The Self-Directed Search (SDS): Technical manual. Odessa, FL: Psychological Assessment Resources.
  57. Hollenbeck, J. R., LePine, J. A., & Ilgen, D. R. (1996). Adapting to roles in decision-making teams. In K. R. Murphy (Ed.), Individual differences and behavior in organizations (pp. 300–333). San Francisco: Jossey-Bass.
  58. Hough, L. M. (1992). The “Big Five” personality variables— construct confusion: Description versus prediction. Human Performance, 5(1&2), 139–155.
  59. Howard, A., & Bray, D. W. (1988). Managerial lives in transition: Advancing age and changing times. New York: Guilford Press.
  60. Huffcutt, A. I., Roth, P. L., & McDaniel, M. A. (1996). A meta-analytic investigation of cognitive ability in employment interview evaluations: Moderating characteristics and implications for incremental validity. Journal of Applied Psychology, 81, 459–473.
  61. Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72–98.
  62. Impara, J. C., & Plake, B. S. (1998). The thirteenth mental measurements yearbook. Lincoln, NE: Buros Institute of Mental Measurements.
  63. Jones, S., & Moffett, R. G. (1998). Measurement and feedback systems for teams. In E. Sundstrom (Ed.), Supporting work team effectiveness: Best management practices for fostering high performance (pp. 157–187). San Francisco: Jossey-Bass.
  64. Judge, T. A., Higgins, C. A., & Cable, D. M. (2000). The employment interview: A review of recent research and recommendations for future research. Human Resources Management Review, 10, 383–406.
  65. Katz, D., & Kahn, R. L. (1978). The social psychology of organizations. New York: Wiley.
  66. Kingsbury, G. G. (1990). Adapting adaptive testing with the MicroCAT testing system. Educational Measurement, 9, 3–6, 29.
  67. Klimoski, R. J. (1993). Predictor constructs and their measurement. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 71–98). San Francisco: Jossey-Bass.
  68. Klimoski, R. J., & Brickner, M. (1987). Why do assessment centers work? Puzzle of assessment center validity. Personnel Psychology, 40, 243–260.
  69. Klimoski, R. J., & Donahue, L. M. (1997). HR Strategies for integrating individuals with disabilities into the workplace. Human Resources Management Review, 7(1), 109–138.
  70. Klimoski, R. J., & Palmer, S. N. (1994). Psychometric testing and reasonable accommodation for persons with disabilities. In S. M. Bruyere & J. O’Keeffe (Eds.), Implications of the Americans with Disabilities Act for psychology (pp. 37–83). Washington, DC: American Psychological Association.
  71. Klimoski, R. J., & Zukin, L. B. (1998). Selection and staffing for team effectiveness. In E. Sundstrom (Ed.), Supporting work team effectiveness (pp. 63–94). San Francisco: Jossey-Bass.
  72. Landy, F. J. (1976). The validity of the interview in police officer selection. Journal of Applied Psychology, 61, 193–198.
  73. Landy, F. J. (1989). The psychology of work behavior (4th ed.). Pacific Grove, CA: Brooks/Cole.
  74. Landy, F. J. (1991, August). Hugo Munsterberg: Victim, visionary or voyeur. Paper presented at the meeting of the Society for Industrial and Organizational Psychology, St. Louis, MO.
  75. Lawler, E. E. (1990). Strategic pay. San Francisco: Jossey-Bass.
  76. Lawshe, C. H. (1959). Of management and measurement. American Psychologist, 14, 290–294.
  77. Ledford, G. E., Lawler, E. E., & Mohrman, S. A. (1995). Reward innovations in fortune 1000 companies. Compensation and Benefits Review, 27, 76–80.
  78. Linowes, D. F., & Spencer, R. C. (1996). Privacy in the workplace in perspective. Human Resources Management Review, 6(3), 165–181.
  79. Mael, F., & Hirsch, A. (1993). Rainforest empiricism and quasirationality: Two approaches to objective biodata. Personnel Psychology, 46, 719–738.
  80. McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988). A meta-analysis of the validity of methods for rating training and experience in personnel selection. Personnel Psychology, 41, 283–314.
  81. McDougal, W. (1908). An introduction to social psychology. London: Methuen.
  82. McFarland, L. A., & Ryan, A. M. (2000). Variance in faking across noncognitive measures. Journal of Applied Psychology, 85, 812–821.
  83. McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. A., & Ashworth, S. (1990). Project A validity results: The relationship between predictors and criterion domains. Personnel Psychology, 43, 335–354.
  84. Motowidlo, S. J., Borman, W. C., & Schmit, M. J. (1997). A theory of individual differences in task and contextual performance. Human Performance, 10, 71–83.
  85. Motowidlo, S. J., & Van Scotter, J. R. (1994). Evidence that task performance should be distinguished from contextual performance. Journal of Applied Psychology, 79, 475–480.
  86. Mount, M. K., & Barrick, M. R. (1995). The Big Five personality dimensions: Implications for research and practice in human resources management. Research in Personnel and Human Resources Management, 13, 153–200.
  87. Mount, M. K., Barrick, M. R., & Stewart, G. L. (1998). Five-factor model of personality and performance in jobs involving interpersonal interactions. Human Performance, 11, 145–165.
  88. Mowday, R. T., Steers, R. M., & Porter, L. W. (1979). The measurement of organizational commitment. Journal of Vocational Behavior, 14, 224–247.
  89. Mumford, M. D., Costanza, D. P., Connelly, M. S., & Johnson, J. F. (1996). Item generation procedures and background data scales: Implications for construct and criterion-related validity. Personnel Psychology, 49, 361–398.
  90. Murphy, K. R. (1996). Individual differences and behavior in organizations: Much more than g. In K. R. Murphy (Ed.), Individual differences in behavior in organizations (pp. 3–30). San Francisco: Jossey-Bass.
  91. Murphy, K. R., & Cleveland, J. N. (1995). Understanding performance appraisal: Social, organizational, and goal-based perspectives. Thousand Oaks, CA: Sage.
  92. Murray, B., & Gerhart, B. (1998). An empirical analysis of a skillbased pay program and plant performance outcomes. Academy of Management Journal, 41, 68–78.
  93. Munsterberg, H. L. (1913). Psychology and industrial efficiency. Boston: Houghton Mifflin.
  94. Noe, R. A., & Schmitt, N. (1986). The influence of trainee attitudes on training effectiveness: Test of a model. Personnel Psychology, 39, 497–523.
  95. Ones, D. S., & Viswesvaran, C. (1998a). Gender, age and race differences in overt integrity tests: Results across four large-scale job applicant data sets. Journal of Applied Psychology, 83, 35–42.
  96. Ones, D. S., & Viswesvaran, C. (1998b). The effects of social desirability and faking on personality and integrity assessment for personnel selection. Human Performance, 11, 245–269.
  97. Ones, D. S., Viswesvaran, C., & Reiss, A. D. (1996). Role of social desirability in personality testing for personnel selection: The red herring. Journal of Applied Psychology, 81, 660–679.
  98. Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703.
  99. Organ, D., & Ryan, K. (1995). A meta-analytic review of the attitudinal and dispositional predictors of organizational citizenship behavior. Personnel Psychology, 48, 775–802.
  100. Owens, W. A., & Schoenfeldt, L. F. (1979). Toward a classification of persons. Journal of Applied Psychology, 65, 569–607.
  101. Personnel Decisions International Corporation. (1991). The profiler. Minneapolis, MN: Author.
  102. Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (1995). Development of Prototype Occupational Information Network (O*NET). (Vols. 1–2). Salt Lake City: Utah Department of Employment Security.
  103. Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (1999). An occupational information system for the 21st century: The development of the O*Net. Washington, DC: American Psychological Association.
  104. Ree, M. J., & Earles, J. A. (1992). Intelligence is the best predictor of job performance. Current Directions in Psychological Science, 1, 86–89.
  105. Ryan, A. M., Ployhart, R. E., & Friedel, L. A. (1998). Using personality testing to reduce adverse impact: A cautionary note. Journal of Applied Psychology, 83, 298–307.
  106. Ryan, A. M., & Sackett, P. R. (1987). A survey of individual assessment practices by I/O psychologists. Personnel Psychology, 40, 455–488.
  107. Rynes, S. L., & Barber, A. E. (1990). Applicant attraction strategies: An organizational perspective. Academy of Management Review, 15, 286–310.
  108. Sackett, P. R., Burris, L. R., & Callahan, C. (1989). Integrity testing for personnel selection. Personnel Psychology, 42, 491–529.
  109. Sackett, P. R., & Harris, M. M. (1984). Honesty testing for personnel selection: A review and critique. Personnel Psychology, 37, 221–245.
  110. Saks, A. M., Schmitt, N. W., & Klimoski, R. J. (2000). Research, measurement and evaluation of human resources. Scarborough, Ontario: Thompson Learning.
  111. Saucier, G. (1994). Mini-markers: A brief version of Goldberg’s unipolar Big-five markers. Journal of Personality Assessment, 63, 506–516.
  112. Schippmann, J. S., Asch, R. A., Battista, M., Carr, L., Eyde, L., Hesketh, B., Kehoe, J., Pearlman, K., Prien, E. P., & Sanchez, J. I. (2000). The practice of competency modeling. Personnel Psychology, 53, 703–740.
  113. Schmidt, F. L., & Hunter, J. E. (1977). Development of a general solution to the problem of validity generalization. Journal of Applied Psychology, 62, 529–540.
  114. Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
  115. Schmidt, F. L., Hunter, J. E., & Outerbridge, A. N. (1986). The impact of job experience and ability on job knowledge, work sample performance, and supervisory ratings of job performance. Journal of Applied Psychology, 71, 432–439.
  116. Schmidt, F. L., Law, K., Hunter, J. E., Rothstein, H. R., Pearlman, K., & McDaniel, M. (1993). Refinements in validity generalization methods: Implications for the situational specificity hypothesis. Journal of Applied Psychology, 78, 3–13.
  117. Schmitt, N. (1997, April). Panel discussion: Police selection in Nassau County. Presentation at the annual conference of the Society for Industrial and Organizational Psychology, St. Louis, MO.
  118. Schmitt, N., & Klimoski, R. J. (1991). Research methods for human resources management. Cincinnati, OH: South Western Publishers.
  119. Schmitt, N., & Landy, F. J. (1993). The concept of validity. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 275–309). San Francisco: Jossey-Bass.
  120. Schneider, B., & Schneider, J. L. (1994). Biodata: An organizational focus. In G. S. Stokes & M. D. Mumford (Eds.), Biodata handbook: Theory, research, and use of biographical information in selection and performance prediction (pp. 423–450). Palo Alto, CA: CPP Books.
  121. Smith, P. C., Kendall, L. M., & Hulin, C. L. (1969). The measurement of satisfaction in work and retirement: A strategy for the study of attitudes. Chicago: Rand-McNally.
  122. Snell, A. F., Stokes, G. S., Sands, M. M., & McBride, J. R. (1994). Adolescent life experiences as predictors of occupational attainment. Journal of Applied Psychology, 79, 131–141.
  123. Snyder, M., & Ickes, W. (1985). Personality and social behavior. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology (3rd ed., pp. 883–948). New York: Random House.
  124. Spencer, L. M., & Spencer, S. M. (1993). Competence at work: Models for superior performance. New York: Wiley.
  125. Sternberg, R. J., & Wagner, R. K. (1993). The g-ocentric view of intelligence and performance is wrong. Current Directions in Psychological Science, 2, 1–5.
  126. Stevens, M. J., & Campion, M. A. (1994). The knowledge, skill, and ability requirements for teamwork: Implications for human resource management. Journal of Management, 20, 503–530.
  127. Sundstrom, E. D. (1998). Supporting work team effectiveness: Best management practices for fostering high performance. San Francisco: Jossey-Bass.
  128. Suro, R. (2000, October 16). Captains' exodus has Army fearing for future. Washington Post, A2.
  129. Terman, L. M. (1917). A trial of mental and pedagogical tests in a civil service examination for policemen and firemen. Journal of Applied Psychology, 1, 17–29.
  130. Thorndike, E. L. (1920). Equality in difficulty of alternative intelligence examinations. Journal of Applied Psychology, 4, 283–288.
  131. United States Department of Labor. (1999). Testing and assessment: An employer's guide to good practices.
  132. Walsh, E. (2001, January 18). GAO says lax management hurts federal work force. Washington Post, A19.
  133. Wonderlic Personnel Test. (1992). Wonderlic Personnel Test user’s manual. Libertyville, IL: Author.
  134. Worthington, E. L., & Dolliver, R. H. (1977). Validity studies of the Strong vocational interest inventories. Journal of Counseling Psychology, 24, 208–216.
  135. Zaccaro, S. J., Gilbert, J. A., Thor, K. K., & Mumford, M. D. (1991). Leadership and social intelligence: Linking social perceptiveness and behavioral flexibility to leader effectiveness. Leadership Quarterly, 2, 317–342.