Curriculum Evaluation Research Paper

The term ‘curriculum evaluation’ has historically been used to refer to several different but interrelated concepts that have not been well delineated in the research literature. Some writers use ‘curriculum evaluation’ to refer to curriculum product evaluation; others employ the term in reference to curriculum program evaluation. Curriculum products such as textbooks or national curricula are evaluated prior to large-scale implementation. These evaluations are conducted using prespecified external criteria, outcome data from field trials, or both. Curriculum programs, by contrast, refer to the instruction that takes place within specific field contexts.

1. Curriculum Product Evaluation

Curriculum product evaluation centers on the evaluation of curriculum products by their developers or by those who make selection or adoption decisions. It is conducted in a variety of settings, which Tamir (1985) categorized as: (a) part of school-based curriculum development; (b) national curriculum study organizations; (c) within the framework of national curriculum development centers; and (d) development by university teams. Curriculum product evaluation is also conducted by local, state, or national textbook selection committees, which consider other instructional materials as well.

1.1 Evaluation Based On External Criteria

One type of curriculum product evaluation employs prespecified external criteria intended to characterize the appropriateness of a product. The work of Tyler and Klein (1976) offers an excellent early example of curriculum product evaluation employing prespecified criteria. Their evaluation procedure uses characteristics such as: specification of the instructional objectives on which the material is based; appropriateness of the materials given the skills, background knowledge, age, ethnicity, and socioeconomic background of the intended students; and adequacy of the teachers’ manual for classroom application and for providing an explanation of the content selection, sequence, and presentation. Later, Gall (1981) presented a listing and elaboration of 39 criteria for analyzing curriculum materials.

These evaluation procedures, particularly Tyler and Klein’s, reflect a behaviorist orientation to evaluating curriculum products. Brophy and Alleman’s work (1991), derived more heavily from research on teaching, provided a constructivist framework for the evaluation of instructional activities. The principles they proposed for evaluation include goal relevance, appropriate level of difficulty, feasibility, cost effectiveness, multiple goals, motivational value, topic currency, whole-task completion, higher order thinking, and adaptability. This constructivist view led to the formation of new curriculum evaluation standards in California and many other states.

Standards typically originate when a policy-making body invites colleges, universities, professional organizations, school districts, and other agencies to nominate distinguished professionals with expertise in a given field to serve on a panel. The standards do not require all products or programs to be alike except in their quality, which assumes different forms in different environments. Judgments as to whether standards are fulfilled are made by professionals who have been trained in their interpretations.

The use of standards for product evaluation is particularly relevant for the evaluation of textbooks. Procedures for textbook evaluation have been most thoroughly systematized in the larger states that practice state-based textbook adoption. California and other large adoption states in the United States are fine-tuning selection procedures and standards for each subject area in order to obtain a more tailored product from textbook publishers. In these procedures, criteria are provided for evaluating textbooks considered for adoption. The criteria are well specified, further elaborated, and weighted in a separate rating document. A minimum total score cut-off creates the possibility that none of the textbooks will be deemed acceptable at the first review.
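To make the arithmetic of such a rating document concrete, the following sketch (in Python) scores candidate textbooks against weighted criteria and applies a minimum total score cut-off. The criterion names, weights, sample ratings, and cut-off value are hypothetical illustrations, not those of any actual state adoption instrument.

# Hypothetical weighted-criteria rating for textbook adoption review.
# Criterion names, weights, sample ratings, and the cut-off are
# illustrative only, not drawn from any actual state rating document.

CRITERIA_WEIGHTS = {
    "alignment_with_standards": 0.40,
    "accuracy_of_content": 0.30,
    "teacher_manual_adequacy": 0.20,
    "production_quality": 0.10,
}

MINIMUM_TOTAL = 3.0  # cut-off on a 1-5 scale


def weighted_total(ratings):
    """Combine per-criterion ratings (1-5) into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[name] * score for name, score in ratings.items())


def acceptable(candidates):
    """Return the titles whose weighted totals meet the cut-off."""
    return [title for title, ratings in candidates.items()
            if weighted_total(ratings) >= MINIMUM_TOTAL]


candidates = {
    "Textbook A": {"alignment_with_standards": 4, "accuracy_of_content": 3,
                   "teacher_manual_adequacy": 2, "production_quality": 5},
    "Textbook B": {"alignment_with_standards": 2, "accuracy_of_content": 2,
                   "teacher_manual_adequacy": 3, "production_quality": 4},
}

for title, ratings in candidates.items():
    print(title, round(weighted_total(ratings), 2))
print("Acceptable at first review:", acceptable(candidates) or "none")

Because the cut-off is absolute rather than relative, the sketch reproduces the situation noted above: if no candidate reaches the minimum total, the acceptable list is empty at the first review.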

1.2 Evaluation Based On Outcome Data From Field Trials

A second type of curriculum product evaluation relies on field data for judging the adequacy of curriculum products, specifically the product’s actual impact on students. In this sense curriculum evaluation is an examination, or validation, of the impact of a newly developed product.

Curriculum product evaluation based on field data may be performed either formatively (to shape or modify the curriculum under development) or summatively (to validate the curriculum prior to its release). The types of products included are usually more limited than for curriculum product evaluation using external criteria. However, the two procedures are sometimes used concurrently (e.g., when a state evaluating a proposed textbook asks the publisher to provide descriptions of the product development process, including field-testing data).

2. Curriculum Program Evaluation

The term ‘curriculum program evaluation’ refers to a complex set of interactions between a given curriculum instructional program and its setting. Curriculum evaluation activities in this broader context examine particular curricula, instructional programs, courses of study, or national programs within their situational context. Theorists whose work is relevant to curriculum evaluation are mainly those known for their work in the evaluation of educational or social programs.

Alkin and Ellett (1985) have noted that models of evaluation are ‘prescriptive’ rather than ‘descriptive.’ A descriptive model is a ‘theory’ consisting of empirically derived statements containing generalizations that might describe, predict, or explain evaluation activities. The models in evaluation instead prescribe which evaluation activities are good or bad, right or wrong, adequate or inadequate, rational or irrational, just or unjust. These models claim that evaluation should be conducted in prescribed ways; they indicate evaluators’ obligations and responsibilities and provide advice, recommendations, and warnings about pitfalls in conducting an evaluation.

Three features define the unique characteristics of different evaluation models: (a) the collection and analysis of data (methodology); (b) the ways in which data are valued or judgments made (values); and (c) the purposes for providing evaluation information (uses). All evaluations involve some kind of methodology for description and explanation, all necessitate the valuing of data acquired within the particular context, and all evaluations are conducted with some intended use in mind. Thus, the distinction between evaluation models based on these three dimensions is not exclusionary; for example, it is not claimed that one model only represents the necessity of employing a methodology and others do not. Rather the classification system for evaluation models presented here is based on the relative emphasis within the various models. In essence, the issue is: when evaluators must make concessions, what do they most easily give up and what do they most tenaciously defend?

The methodology-oriented models may be viewed in three subsets: the measurement outcome-oriented models, the research and/or quantitative methodology-oriented models, and models that focus on naturalistic methods. The values-oriented models presented here are expertise-oriented valuing, process-oriented valuing, and valuing based on principles of social justice. ‘Uses’ will be examined in two categories: models directed toward decision-makers and models directed toward primary users.

2.1 Measurement Outcome-Oriented Evaluation

The Tylerian evaluation model has been most closely associated with the term ‘curriculum evaluation’ (see Tyler 1942). This model is one of the earliest oriented toward measurement outcomes as a governing basis for the conduct of evaluation; its ultimate purpose is to determine whether a given program (or curriculum) achieved its intended purposes. For those using this model, or more recent variations of it, the important questions concern how the objectives are to be determined, behaviorally defined, and measured. Other important considerations are short- and long-term objectives, and the use of traditional and nontraditional forms of measurement. The central evaluative focus is on discrepancies between measured outcomes and intended objectives.
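This discrepancy focus lends itself to a simple worked illustration. The sketch below (in Python) compares measured attainment against intended, behaviorally defined objectives; the objectives, mastery levels, and scores are hypothetical and are not drawn from Tyler’s own work.

# A minimal sketch of Tylerian outcome-objective discrepancy analysis.
# The objectives, intended mastery levels, and measured scores below
# are hypothetical examples, not data from any actual evaluation.

objectives = {
    # objective -> intended mastery (percent of students attaining it)
    "solves two-step word problems": 80.0,
    "interprets simple bar graphs": 90.0,
    "explains place value to thousandths": 75.0,
}

measured = {
    # objective -> measured attainment on an end-of-unit assessment
    "solves two-step word problems": 72.5,
    "interprets simple bar graphs": 91.0,
    "explains place value to thousandths": 60.0,
}

for objective, intended in objectives.items():
    observed = measured[objective]
    discrepancy = observed - intended  # negative means the objective was not met
    status = "met" if discrepancy >= 0 else "not met"
    print(f"{objective}: intended {intended}%, measured {observed}% "
          f"(discrepancy {discrepancy:+.1f} points, {status})")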

2.2 Research And/or Quantitative Methodology-Oriented Evaluation

Other methodology-oriented evaluators perceive evaluation as nearly synonymous with experimental research methods. Thus, an emphasis on the experiment or quasi-experiment as the appropriate methodology for social science research and evaluation was lauded by Campbell and Stanley (1963) and embraced by a generation of evaluation researchers. The evaluation text by Rossi et al. (1999) most closely reflects this viewpoint.

Postpositivism represents an extension of this evaluation model category. Greene and McClintock (1991, p. 14) noted that postpositivists retain ‘a preference for the quantitative methodology of experimental science. This preference includes a continued emphasis on causal explanation.’ However, they are more willing to allow the use of multiple instruments for a given phenomenon, multiple analysis procedures, and critiques from alternative value frameworks.

2.3 Naturalistic Evaluation Methods

A number of theorists have developed their views partly in reaction to what they saw as the dominance of measurement and other quantitatively oriented evaluation procedures. These writers have adopted ‘naturalistic’ points of view, which emphasize the richness that qualitative and descriptive data can bring to the evaluation process. Naturalistic evaluation methods typically rely on data from several sources, respect a variety of perspectives, allow understanding to emerge from the process, and portray multiple realities.

The most influential of the theorists associated with this category is Robert Stake (see McLaughlin 1991). His early writing provided a framework for viewing the evaluation process more broadly, and his later emphasis on the case study as an appropriate method for conducting evaluations served to expand professional perspectives at that time.

2.4 Expertise-Oriented Valuing

Values-oriented evaluation theorists have ‘values’ as their primary concern but differ in their positions concerning who should make the value judgments about programs and how such judgments should be reached. Some values-oriented theorists maintain that the evaluator should be responsible for making value judgments based primarily on his or her own personal background, knowledge, and experience. This point of view we label ‘expertise-oriented valuing.’

Educational connoisseurship and criticism are the cornerstones of Eisner’s (1993) approach. He maintained that, like the art connoisseur, the educational connoisseur is capable of deep appreciation, combining an understanding of formal qualities, socio-historical-cultural contexts, and influences. Not unlike wine connoisseurs, educational connoisseurs have attained refined perceptual capabilities acquired in part through previous relevant experience. While connoisseurship is the art of appreciation, criticism concerns itself with the art of disclosure.

Another view of expertise-oriented valuing is presented in Scriven’s writings. The expertise of Scriven’s evaluator is founded on several bases: (a) the objectivity for unbiased valuing provided by the use of the goal-free technique, the evaluator being ‘blind’ to the program’s goals (Scriven 1973); (b) the broad education of the evaluator (e.g., formal education, human development, academic disciplines) providing the basis for recognizing and defining objective values; and (c) the empirical, statistical, and logical skills, and a sense of perspective, provided by training as a philosopher of science. To these, he adds ‘a little toughening of the moral fiber’ as the final justification for his evaluator as the person having the expertise to value data.

2.5 Process-Oriented Valuing

Other approaches that focus on the valuing of evaluation data do not rely primarily on the expertise of the evaluator but instead depend on a well-defined process for performing the valuing task. One such process-oriented approach involves the employment of standards. Standards may be used for conducting evaluations of programs as well as of products. In this process, review teams visit and examine a program in terms of a set of process criteria consisting of attributes of programs presumed to be indicators of quality; that is, there is a presumption that these processes will lead to quality outcomes. Most typical of such procedures are the various types of program accreditation involving self-appraisal and site-visit teams guided by defined process standards.

The constructivist-hermeneutic model—what Guba and Lincoln (1989) called ‘fourth generation evaluation’—emphasizes another approach that focuses on negotiation as the key dynamic of evaluation. Here, the construction of meaning and the valuing of that meaning are the evaluation’s principal foci, and a central purpose of the evaluation is to create outcomes that appropriately reflect all participants’ values. A principal assumption underlying this approach is that ‘realities are not “out there” simply to be discovered,’ but are created through human interactions, and that there are many possible coexisting realities created out of those interactions.

According to Guba and Lincoln, the evaluation process is necessarily value-laden, as all participants bring with them not only expectations as to what the purpose of the evaluation might be, but personal values that are the products of each individual’s social, psychological, and physical context. The evaluation is designed to allow for the participation of competing value-systems, and is organized to produce, through the processes of negotiation, a consensus that takes into account all perspectives.

2.6 Valuing Based On Principles Of Social Justice

Sometimes valuing occurs through the application of commonly accepted external principles. The theorist Ernest House has been guided by a concern for democratizing public decision-making (House 1999). In addressing this purpose, he has recognized that evaluation, as a political tool, becomes a means for determining ‘who gets what’ (distributive justice) in a societal program. He noted that many people’s well-being depends on evaluations that provide a fair and just distribution of benefits and responsibilities. Thus, the most important role of the evaluator is applying explicit theories of justice in valuing data. A key aspect is the development of a ‘fair evaluation agreement’ in which all participants are guaranteed to have their ‘real interests’ identified and addressed. In rendering judgments on programs, the social justice evaluator must be guided by the following principles: moral equality, moral autonomy, impartiality, reciprocity, and utility.

2.7 Decision Maker-Oriented Evaluation

Unlike the previous evaluation models, which have a more general audience (e.g., the research community, stakeholders generally, consumers, government), models oriented toward decision-makers focus on specific individuals and thus emphasize use. The first of the decision-maker models, the CIPP model (developed in the early 1960s and presented in revised form in Stufflebeam 1983), focused on evaluation as a systematic procedure designed to provide information that would ‘enlighten’ decision-making. Thus, the focus of these evaluation efforts was decision-makers—usually interpreted to mean program administrators, and most typically those who contracted for the evaluation. Stufflebeam described the attributes of four different evaluation types (Context, Input, Process, and Product), each designed to ‘serve,’ that is, provide information for, a different kind of management decision.

2.8 User-Oriented Evaluation

Patton (1997) extended the concern for information provision beyond specified decision-makers and presented a model in which evaluators are obligated to identify likely ‘primary users.’ Thus, the evaluator must seek out those specific stakeholders for whom the evaluation is most likely to be of value in actually effecting change. Patton noted that this means finding strategically located people who are committed and competent and who are able and willing to use information. Patton also recognized a dynamic relationship existing between evaluators and users, a relationship in which prespecified decision concerns have been replaced by interactions, thus illuminating the more incremental nature of decision-making as it applies to educational programs. Alkin’s writing also reflects a user-oriented approach to evaluation (see McLaughlin 1991).

3. Other Models: ‘Collaborative’

Recently, various kinds of ‘collaborative’ evaluations have received substantial prominence. These include illuminative evaluation (Parlett and Hamilton 1976), participatory evaluation (Cousins and Earl 1992), empowerment evaluation (Fetterman et al. 1995), and emancipatory and critical action research (Carr and Kemmis 1988). Some have programmatic interests, some philosophic interests, and some political interests. These collaborative evaluation approaches are not categorized separately here; rather, they would be placed into the existing categories based on the intent and purpose of each evaluation type.

For example, participatory evaluation is generally concerned with participation as a means to increase the utilization of evaluation and would be included in category 2.8 above. Likewise, emancipatory and critical action research is more directly concerned with achieving ‘emancipation’ from the dictates of injustice, alienation, and unfulfillment, clearly falling into category 2.6 above, as would empowerment evaluation.

4. Current Issues In Curriculum Evaluation

We will briefly comment on several current issues in curriculum evaluation. These relate to the nature of appropriate measures and to the use of critical discourse analysis.

4.1 Measurement

One current issue relates to the reliance on standardized tests as the instrument for assessing the impact of curriculum. Although information from such tests is relatively inexpensive to collect, easy to aggregate, and somewhat reliable, standardized tests seldom match national and state goals. These tests are an inadequate measure of what is learned (they fail to sample content adequately) and are limited in assessing the depth of student understanding (they make no allowance for multiple forms of representation of students’ knowledge). Black (1998) has drawn attention to the difficulty of finding tests that meet the requirements of validity (determining that the inferences based on the test results are justified) and reliability (showing that the results are reproducible on different occasions and with different questions).
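One conventional way to probe the reliability Black describes is to correlate scores from two administrations (or two parallel forms) of the same test: a high correlation suggests reproducible results. The sketch below (in Python) computes a Pearson correlation over hypothetical score pairs; an actual reliability study would use far larger samples and more refined indices, such as internal-consistency coefficients.

# Test-retest reliability estimated as the Pearson correlation between
# scores from two administrations of the same test. The score pairs are
# hypothetical; real reliability studies use much larger samples.
from math import sqrt


def pearson(x, y):
    """Pearson correlation coefficient between paired score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)


# Hypothetical scores for the same students on two test occasions.
occasion_1 = [62, 75, 81, 54, 90, 68, 77]
occasion_2 = [65, 72, 84, 50, 93, 70, 74]

print(f"Test-retest reliability estimate: {pearson(occasion_1, occasion_2):.2f}")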

Hence, curriculum evaluation must be based on more than standardized test scores; it should also include assessment of long-term consequences and draw on evidence from a wide range of sources: reports, interviews, classroom observations, work samples, inventories, portfolios, records, exhibits, and the like. Information from such sources is also necessary for determining how the curriculum is implemented, understanding the processes of student learning, and identifying factors that facilitate or restrain implementation.

A related issue concerns the interactions that take place between curriculum evaluation at different levels. Political leaders and central administrators favor standardized testing for the quantifiable data that will guide education policy and allow them to control and monitor schools and classrooms. Professional educators, including those engaged in classroom instruction and the guidance of individual students, favor consumer-oriented evaluation that integrates assessment with learning, such as continuous assessment of students’ regular work and self-assessment. They use the results formatively as feedback and diagnostically for guidance: to reinforce learning, to encourage learners to take responsibility for their own learning, and to suggest more appropriate curriculum. The challenge for the next decade is to overcome the polarization between those who put standards and testing to the forefront in the interest of accountability (control) and those who see evaluation as a tool for the promotion of learning.

4.2 Critical Discourse Analysis

Finally, critical discourse analysis is emerging as a powerful tool for showing how curriculum works: how it operates politically, how it constructs and regulates social relations, and how it positions students, teachers, and others. Allan Luke (1995) has used critical discourse analysis to show how the curriculum of the home and preschool orients children toward family, property, social practices, and ownership; how the language of the primary school teacher constructs identities of children and guides them toward participation in different content areas; and how textbooks influence students to reproduce, naturalize, and accept particular forms of cultural logic and social identity under the guise of transmitting neutral skills. Similarly, Shirley Grundy (1994) has deepened understanding of the contrasting public curriculum policies in Australia and Canada using this analytic tool.

Essentially, evaluators using discourse analysis study and interpret language and language patterns as found in both written and oral contexts. The analysis reveals how the curriculum: (a) relates to the students’ life trajectories—the kinds of futures portended (field); (b) instills a view of authority and the degree of individual freedom, teaching students where authority lies, when and how one can speak, and what can be said, and constructing social relations (tenor); and (c) defines a given subject matter—its purposes and functions—modeling what it means to learn the subject and what counts as knowledge in the field, and defining what the students’ relations to the subject should be (mode).

Bibliography:

  1. Alkin M C, Ellett F S 1985 Evaluation models: development. In: Husen T, Postlethwaite N (eds.) The International Encyclopedia of Education, 1st edn. Pergamon, Oxford, UK
  2. Black P J 1998 Testing: Friend or Foe? The Theory and Practice of Assessment Testing. Falmer Press, London
  3. Brophy J, Alleman J 1991 Activities as instructional tools: A framework for analysis and evaluation. Educational Researcher 20(4): 9–23
  4. Campbell D T, Stanley J C 1963 Experimental and quasi-experimental designs for research on teaching. In: Gage N L (ed.) Handbook of Research on Teaching. Rand McNally, Chicago
  5. Carr W, Kemmis S 1988 Becoming Critical: Education, Knowledge and Action Research. Falmer, London
  6. Cousins J B, Earl L M 1992 The case for participatory evaluation. Educational Evaluation and Policy Analysis 14(4): 397–418
  7. Eisner E W 1993 The Educational Imagination: On the Design and Evaluation of School Programs, 3rd edn. Macmillan, New York
  8. Fetterman D A, Kaftarian S J, Wandersman A (eds.) 1995 Empowerment Evaluation. Sage, Beverly Hills, CA
  9. Gall M D 1981 Handbook for Evaluating and Selecting Curriculum Materials. Allyn and Bacon, Boston
  10. Greene J, McClintock C 1991 The evolution of evaluation methodology. Theory Into Practice 30(1): 13–21
  11. Grundy S 1994 Which way toward the year 2000? Contrasting policy discourses in two education systems. Curriculum Inquiry 24(3): 327–47
  12. Guba E G, Lincoln Y S 1989 Fourth Generation Evaluation. Sage, Newbury Park, CA
  13. House E R 1999 Values in Evaluation and Social Research. Sage Publications, Thousand Oaks, CA
  14. Luke A 1995 Text and discourse in education: an introduction to critical discourse analysis. Review of Research in Education 21: 3–48
  15. McLaughlin M W (ed.) 1991 Evaluation and Education: At Quarter Century. The National Society for the Study of Education, Chicago
  16. Parlett M, Hamilton D 1976 Evaluation as illumination: A new approach to the study of innovatory programs. In: Glass G (ed.) Evaluation Studies Review Annual. Sage, Beverly Hills, CA, Vol. 1
  17. Patton M Q 1997 Utilization-Focused Evaluation: The New Century Text, 3rd edn. Sage, Thousand Oaks, CA
  18. Rossi P H, Freeman H E, Lipsey M W 1999 Evaluation: A Systematic Approach, 6th edn. Sage, Thousand Oaks, CA
  19. Scriven M 1973 Goal free evaluation. In: House E (ed.) School Evaluation: The Politics and Process. McCutchan, Berkeley, CA
  20. Stufflebeam D L 1983 The CIPP model for program evaluation. In: Madaus G F, Scriven M, Stufflebeam D L (eds.) Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Kluwer-Nijhoff, Boston
  21. Tamir P 1985 The potential and actual roles of evaluator in curriculum development. In: Tamir P (ed.) The Role of Evaluators in Curriculum Development. Croom Helm, London
  22. Tyler L L, Klein M F 1976 Evaluating and Choosing Curriculum and Instructional Materials. Educational Resource Associates, Los Angeles, CA
  23. Tyler R W 1942 General statement on evaluation. Journal of Educational Research 35(7): 492–501