Educational Evaluation Research Paper


In goal-directed human behavior, choosing between alternatives requires assessing, or 'evaluating', the different possible courses of action. A rational choice can only be made when there is at least some factually based appraisal of each alternative's consequences. An evaluation of the past consequences of behavior ('experience') should therefore always be the basis for rational decisions, keeping the risk of unwanted consequences as low as possible. Minimizing such risks is also an ethical question whenever other persons are affected by the consequences of an action. However, educational evaluation takes place in systems so complex that most evaluation attempts meet fundamental difficulties. Dealing with these problems requires not only scientific knowledge but also practical experience. This research paper can give only a general insight into some fundamental and methodological questions.



1. Fundamental Questions Of Educational Evaluation

1.1 Definition Of Evaluation

It is difficult to give a complete definition of an empirical construct. Rossi and Freeman (1993, p. 5), however, offer a formal definition: ‘evaluation is the systematic application of social research procedures for assessing the conceptualization, design, implementation, and utility of social intervention programs.’ Put differently, evaluators ‘use social research methodologies to judge and improve the ways in which human services policies and programs are conducted, from the earliest stages of defining and designing programs through their development and implementation.’ (Rossi and Freeman 1993, p. 5) This definition is rooted in scientific research. The more one gets involved in the everyday business of evaluation, however, the less important specialized social-science methods become; often, simple questionnaires or interviews are sufficient. Generally speaking, evaluation is about the systematic appraisal of alternative options of behavior. Anything beyond that depends on the respective field of application and the emphasis chosen by the evaluators working there.

1.2 Concepts Of Evaluation

Due to the many forms of evaluation, different concepts of evaluation have been developed; for an overview see Scriven (1991).




The following forms of evaluation can be distinguished:

Microevaluation (the focus is on individual aspects of the evaluated program) vs. macroevaluation (global results are of interest);

Internal evaluation (those responsible for the scheme evaluate the program themselves) vs. external evaluation (development of the program and its evaluation are separate);

Summative evaluation (the results are reviewed after a program has been completed) vs. formative evaluation (a measure and its effects are checked continuously in order to optimize them).

All these models (as well as others) can be used as general schemata to prepare evaluation studies. However, in each individual case it needs to be considered which aspects are especially relevant. ‘Model’ in this sense means an aid for structuring and can offer a first orientation. In practice, most evaluation studies (and most of them focus on practical issues) cannot be completely described with just one model because of the complexities of real life. Therefore several dimensions and evaluation criteria are required.

1.3 Aspects And Problems Of Practical Evaluation

Instead of an ‘ideal’ definition (which would be impossible) or an ideal classification scheme, the multifaceted aspects and problems of real evaluation attempts are outlined. The following questions are discussed: What is evaluated where and with what purpose? Which direct and indirect costs arise during the evaluation? With what are the results compared?

Exemplary answers (see Table 1) are given to these questions for the Third International Mathematics and Science Study (TIMSS), in which the mathematical and scientific achievements of students from selected grades in primary and secondary schools were compared internationally.


1.3.1 What Can Be Evaluated? As evaluation serves to optimize actions and decisions, the consequences of the alternatives need to be compared with each other, i.e., evaluated. Among the alternatives in the educational field, the following are especially important:

Objectives: e.g., the consequences of different curricula for a school subject or for vocational training;

Conduct of individuals: e.g., different didactical approaches in one seminar;

Techniques and procedures: e.g., comparison of different teaching methods;

Programs: e.g., the results of a new program to reduce unemployment in young people;

Systems: e.g., international comparison of school achievement.

Different aspects are often intertwined. If, for example, techniques are to be evaluated then different individuals (e.g., teachers) use these techniques. This makes separating the effects problematic and may even raise the question of an interaction between the teachers’ personalities and the teaching methods. It might get even more complicated when some of the aspects mentioned above vary systematically, e.g., when different educational programs have different objectives. Consequently, the results have to be interpreted keeping that in mind.

1.3.2 Where Can Evaluation Take Place? Generally speaking, all educational and educational-psychological measures can be evaluated. These evaluation studies can take place at very different locations. Each confronts the evaluator with very specific problems:

Laboratories: e.g., a model school can be set up to test different forms of instruction. This makes the evaluator’s work easier, as both teachers and parents are more motivated. Nevertheless, it is questionable how far results from the laboratory situation can be transferred to real-life settings. This is a methodological problem, as internal and external validity cannot be maximized at the same time (see Scriven 1991).

Educational institutions: e.g., schools and universities. These are where most educationally relevant evaluation takes place. The psychologists working there must often communicate and cooperate well, as they depend on the support of the other people involved.

Educational systems: the evaluation of larger reform models or the comparison of whole educational systems, as in TIMSS (see Beaton et al. 1996 and Mullis et al. 1998 for a US account, Baumert et al. 1997, 2000 for a German account). Even more than with individual organizations, one has to anticipate that the objectives of the measures may encounter opposition from some of the people involved.

1.3.3 What Objectives Can An Evaluation Have? Unlike in basic research, evaluators in practice cannot choose the objectives of their evaluations themselves. Practical evaluations usually aim at improving existing ‘systems.’ The evaluator has to consider the needs of those affected by the consequences of the evaluation, and can only help the principals and those affected to clarify the objectives. The evaluator might draw attention to facts that have so far been overlooked or point out the contrasting interests of different subgroups. Nevertheless, evaluators need to remain neutral toward all parties as well as all alternatives to be evaluated (for further information see, e.g., American Psychological Association 1992).

1.3.4 What Costs Arise? Evaluations are expensive. Those that are carefully planned, carried out, and appraised require full commitment from all those involved. Beyond the direct financial outlay, ‘costs’ include:

Doubts concerning the present situation: the mere fact that an evaluation is being carried out might be seen as criticism and devaluation of present teaching efforts;

Unrest for all those involved: examining measures one has grown accustomed to, and introducing new ones, can disturb the taken-for-granted routines that are important for successful teaching;

Impairment of the personal situation: checking the effectiveness of an institution’s employees can upset their self-esteem and reputation;

Additional workload for all those involved;

Delay in time: especially if a measure could be implemented immediately without the evaluation and without the risk of a negative outcome;

Possible damage for those affected by the implemented measure, if for example a new way of teaching languages does not prove to be as effective as the old method.

These costs and disadvantages of evaluation are acceptable only if they are outweighed by the benefits. Thus evaluators need to be competent not only in their field but also in convincing others of their project. Still, it would be asking too much to expect society to finance evaluations for every important new measure, even if the outcome of the cost–benefit analysis is positive.

1.3.5 Comparing The Results. The basis for evaluations is always the comparison of different alternatives. If there are several independent alternatives then their consequences can be compared directly with each other. In this case a summative evaluation is possible.

If just one action alternative can be evaluated then the appraisal can have different criteria:

Expectation of the participants (‘the students have been satisfied with my classes’);

Former situation (‘my intervention has been a success as the climate in the class has improved’);

Personal objective (‘I have reached XXX’);

External objective (‘the desired improvement in mathematics was achieved by the new teaching method’);

Standard with group-reference (‘with this learning method results were achieved that are better than the average of the last course’).

In all these cases a functional appraisal is possible, especially for the purposes of a formative evaluation. What is missing, however, is the possibility of deciding whether this option is better than another one (and, if so, which). The alternative chosen might even have been the comparatively worst option under the circumstances. Without an alternative this cannot be decided, even if the outcome was positive (this class was better than last year’s).

Which methodological approach is chosen (the appraisal of one alternative, or a comparison of groups as in experimental designs) depends on the objectives of a study. Causality can only be demonstrated when different groups are compared. If only the cost–benefit relationship is to be assessed, focusing on one alternative may be sufficient, even if the result is less meaningful.
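The point that causal claims require a comparison group can be illustrated with a minimal sketch. The scores below are hypothetical, not data from any study mentioned here; the sketch uses a simple permutation test to ask whether the observed difference between two groups could plausibly arise by chance:

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate how often a mean difference at least as large as the
    observed one arises when group labels are shuffled at random."""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_permutations

# Hypothetical test scores: a class taught with a new method vs. the old one.
new_method = [68, 74, 71, 80, 77, 69, 75, 82]
old_method = [65, 70, 66, 72, 68, 64, 71, 69]
diff, p_value = permutation_test(new_method, old_method)
print(f"mean difference = {diff:.2f}, p = {p_value:.3f}")
```

Without the `old_method` comparison group, only the single-alternative criteria listed above (expectations, former situation, personal or external objectives) would be available, and no causal statement about the new method could be made.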

2. Science’s Contribution To Evaluation Studies

2.1 Reasons For Employing Scientifically Trained Evaluators

Evaluations of all kinds are present in work situations, e.g., a school principal judging a teacher. Such evaluations are made frequently, and many professionals believed (and some still do) that using science for evaluations is superfluous. In contrast to this stands the rising number of scientists and graduates actually working in this field. The reasons lie primarily in technical competence and objectivity.

Experts should have at least some knowledge of critical areas such as basic research as well as methodology (for planning the design and analyzing the results statistically). In addition, they should be better able to analyze the often complicated relationships found in most evaluation studies. The image of objectivity plays an important role especially in sensitive evaluation studies, e.g., evaluations of politically significant reform plans that only get financed when there is massive dissension between the political groups. Even though scientists are usually independent of political programs and ideas, they still have their own social convictions as well as preferences for certain theories. Consequently, politicians tend to assign the task to those evaluators whom they assume to be on ‘their’ side. This can create the image of evaluators who can be bought. Different evaluation studies of the same project leading to different results can make the situation even more difficult (a famous example was the discussion about the effects of the comprehensive school; see Leschinsky 1999).

2.2 Planning And Designing An Evaluation Study

A significant advantage of scientists is their familiarity with designing studies that allow a comparison of the relevant groups. For the planning and implementation of scientific evaluation studies it is furthermore important that, before the data are collected, it is clear how they will be analyzed.

The design should exclude as many disruptive factors as possible. As studies differ so much, there is no ‘ideal’ design. Unlike in pure research, it is not always helpful to stick to the original design throughout the whole evaluation: whenever a program being evaluated shows serious shortcomings, it needs to be corrected in the interest of those concerned, even if this goes against scientific principles.

Another difficulty may arise with the sample. To generalize the results, the sample needs to be representative. It is, however, also the psychologist’s duty not to deceive participants about the purpose of the study (see, e.g., American Psychological Association 1992). The truth might deter some people (‘The new method is probably only ideal for the very weak children, but we also need good pupils in the study’), which then has serious statistical consequences.

2.3 Evaluation Reports

Mostly, evaluation results only provide a basis for further discussion among those who have to make a decision. It is therefore especially important how the decision-making groups learn about the results.

Usually, scientists are not used to informing laypersons (politicians, journalists, teachers, etc.): only short summaries are easily understood, and scientific hedging (e.g., using the subjunctive and avoiding clear positions) is not accepted. The report has to be written so that laypersons can comprehend the results and the evaluators’ recommendations.

Oral reports are even more problematic, especially when controversial issues are concerned. The evaluator can easily be turned into the agent of one position. At the same time, it ought to be part of an evaluator’s professional ethics that attacking a colleague’s scientific position leaves that colleague’s personal integrity untouched.

For guidelines on scientific evaluation reports see, for example, Fink (1995).

3. Present State Of And Outlook For Evaluation Studies

In recent years there has been a renewed trend toward evaluation, and this also holds true for the educational sector. In both schools and universities, new projects have been realized. In times of recession and tight budgets, no one is prepared to pay for something whose benefit is unknown: the public and the experts want to be informed about the efficiency of these costly institutions. University evaluations have so far been mostly national; some countries, such as the USA (for an overview see House 1993) and the UK, have established ranking systems, while others, like Germany, do not. For schools, the TIMSS study has provided such evaluation at the international level.

Realistically, evaluation studies in the educational sector cannot have the same degree of meaningfulness as pure research studies. However, this is not the intention of these studies either. Evaluation has served its purpose when it has provided discussions about a ‘best’ measure or the ‘ideal’ design with a rational basis. For this objective, there is no better alternative.

Fortunately, the demand for evaluation studies is rising, and acceptance of evaluation is increasing throughout the educational sector. Nevertheless, evaluations remain limited by tight public budgets.

Bibliography:

  1. American Psychological Association 1992 Ethical principles of psychologists and code of conduct. American Psychologist 47(12): 1597–1611
  2. Baumert J, Bos W, Lehmann R (eds.) 2000 TIMSS III: Dritte Internationale Mathematik- und Naturwissenschaftsstudie. Mathematische und naturwissenschaftliche Bildung am Ende der Schullaufbahn. Leske & Budrich, Opladen, Germany
  3. Baumert J, Lehmann R, Lehrke M, Schmitz B, Clausen M, Hosenfeld I, Koller O, Neubrand J 1997 TIMSS – Mathematisch-naturwissenschaftlicher Unterricht im internationalen Vergleich: Deskriptive Befunde. Leske & Budrich, Opladen, Germany
  4. Beaton A E, Mullis I V, Martin M O, Gonzalez E J, Kelly D L, Smith T A 1996 Mathematics Achievement in the Middle School Years: IEA’s Third International Mathematics and Science Study (TIMSS). TIMSS International Study Center, Boston College, Chestnut Hill, MA
  5. Fink A 1995 Evaluation for Education and Psychology. Sage, Thousand Oaks, CA
  6. House E R 1993 Professional Evaluation—Social Impact and Political Consequences. Sage, Newbury Park, CA
  7. Leschinsky A (ed.) 1999 The Comprehensive School Experiment Revisited: Evidence from Western Europe. Lang, Frankfurt am Main, Germany
  8. Mullis I V, Martin M O, Beaton A E, Gonzalez E J, Kelly D L, Smith T A 1998 Mathematics and Science Achievement in the Final Year of Secondary School: IEA’s Third International Mathematics and Science Study (TIMSS). TIMSS International Study Center, Boston College, Chestnut Hill, MA
  9. Rossi P H, Freeman H E 1993 Evaluation—A Systematic Approach. Sage, Newbury Park, CA
  10. Schmidt W H, McKnight C C, Cogan L S, Jakerth P M, Houang R T 1999 Facing the Consequences: Using TIMSS for a Closer Look at US Mathematics and Science Education. Kluwer Academic, Boston, MA
  11. Scriven M 1991 Evaluation Thesaurus. Sage, Newbury Park, CA