Sample Randomized Experimental Design Research Paper.
1. Definition, Functions, And Rationale
In a randomized field trial, a sample of individuals or of entities, such as schools or hospitals, is randomly assigned to one of two or more interventions. These interventions may be different programs or variations on the same program, different intensities of social service, different mixes of therapies, and so on. The interventions may include a ‘control’ condition in which no services, or the services that are ordinarily available, are provided.
For example, to understand whether a new employment and training program works well, one randomly allocates some eligible individuals to the new program. All or some of the remaining eligible individuals are randomly allocated to the alternative intervention. This alternative may be a normally available training service, i.e., a control condition, or a program with lower levels of service, or a different service.
This random assignment of individuals to a new employment and training program and to an alternative program permits one to make a fair comparison of the wage rates achieved by the people who were assigned to each group. The value that is added by a new program can be estimated well because, on account of the random assignment, the people who were assigned to the conventional services and to the new program are equivalent groups. Furthermore, one can take into account the chance differences among individuals who were assigned to each group, a fact that is important in science and law (Kaye and Freedman 1994).
The logic that underlies a randomized trial is as follows. To estimate the relative effect of a particular intervention, one has to estimate the status of individuals (or entities) had they not had the intervention. At times, a precise forecast can be made about how individuals or entities would fare in the absence of the intervention. Also at times, one may be able to construct what appears to be a fair comparison group that represents the state of people (or institutions) in the absence of the particular intervention. Under these circumstances, quasi-experiments or observational studies might then be employed to generate evidence about the relative effectiveness of programs. These approaches try to make a fair comparison of the outcome of alternative interventions based on sophisticated statistical models of how people and entities behave, and on conscientious attention to competing explanations about why a difference in the effects of interventions might appear.
Often, the researcher cannot make good forecasts, construct defensible ad hoc comparison groups, or account for competing explanations for a difference among interventions in a quasi-experiment. A randomized trial may then be warranted because it assures a fair comparison: people or entities are randomly allocated into two or more equivalent groups. It also assures that competing explanations for people’s (or entities’) behaviors are taken into account, because both groups are subject to the same influences apart from the interventions.
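The allocation mechanism itself is simple enough to sketch. The Python fragment below is purely illustrative; the pool of 100 participant IDs, the 50/50 split, and the fixed seed (used so that an allocation can be audited and reproduced) are assumptions, not features of any trial described here.

```python
import random

def randomize(ids, seed=2024):
    """Randomly split eligible participant IDs into two equivalent groups.

    Illustrative sketch: the 50/50 split and the fixed seed are
    assumptions chosen for this example, not part of any actual trial.
    """
    rng = random.Random(seed)
    shuffled = list(ids)
    rng.shuffle(shuffled)          # chance alone determines group membership
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (new program, control)

treatment, control = randomize(range(100))
```

Because membership in each group is determined by chance alone, the two groups differ only randomly at baseline, which is what licenses the fair comparison described above.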
Social and behavioral scientists and statisticians use phrases other than ‘randomized field trial’ to describe this approach to generating evidence. These phrases include randomized experiment, randomized trial, randomized clinical trial (RCT) in medical research, and controlled experiment. Different phrases are used to identify trials in which entities such as schools or hospitals are randomly assigned to different interventions. These descriptors include cluster-randomized trials, group randomized experiments, macro-experiments, and place-based trials (Murray 1998, Boruch and Foley 2000).
2. Examples Of Randomized Field Trials
Good examples of randomized trials are not difficult to locate, although such trials are in the minority of studies on the effects of interventions. Boruch (1997) gives many such examples. In Switzerland, for example, randomized trials have been carried out to understand the effects of heroin therapy vs. conventional treatment of drug addicts and to estimate the effects of community service vs. imprisonment. In Germany, community rehabilitation programs in prisons have been assessed vs. conventional prison programs in randomized trials. In the US, randomized trials have been carried out to learn the effects of arresting people for misdemeanor domestic assault, new approaches to courts’ handling of certain cases, intensive police patrol strategies, and other interventions. A main outcome variable in such experiments is recidivism.
Measures of academic achievement and related outcomes have also been important in randomized trials to estimate the effects of new approaches to tutoring in England and the United States (Fitz-Gibbon 1990, Fuchs et al. 1997); reducing the size of elementary school classrooms in the United States (Mosteller 1999); radio-based mathematics education in Nicaragua (Jamison et al. 1980); cultural enrichment programs for children in the barrios in Colombia (McKay et al. 1978); and cooperative group learning in elementary grades in Australia (Gillies and Ashman 1998).
Randomized trials have also been mounted to test programs that were purported to reduce individuals’ dependence on public welfare. The United States has been the site for many studies on training and employment, housing, and welfare reform. Greenberg et al. (1997) summarized over 140 such trials, including a Canadian study of tax rates and guaranteed annual income and a Dutch experiment on intensive job assistance services for the unemployed.
Many randomized field trials have been used to test behavioral and social approaches to health care, including reducing HIV risk exposure or the risk of sexually transmitted diseases. See, for instance, reports on tests of husband-oriented approaches to enhancing the use of contraceptive devices in Ethiopia by Terefe and Larson (1993) and of AIDS education in the Philippines (Alplasca et al. 1995), among others.
3. History
Claude Bernard’s Introduction to the Study of Experimental Medicine, first published in 1865 in France, outlined a framework for controlled comparisons. Bernard, however, said nothing about the topic of randomized trials.
Scientists’ use of randomization in order to construct groups that do not differ systematically, apart from chance and the effects of interventions, is more recent. In the late 1890s, for instance, researchers in Denmark used randomized groups to understand whether a certain diphtheria serum worked (Hrobjartsson et al. 1997). During the same period, psychophysical experiments in German laboratory studies employed random assignment (see Stigler 1986). For the history of randomized experiments and controlled comparisons in health and other sectors, see www.rcpe.ac.uk/cochrane, Maynard and Chalmers (1997), and Marks (1997).
Large-scale randomized controlled trials (RCTs) in medicine appear to antedate randomized field trials (RFTs) in the social sector. Contemporary RCTs have their origins in trials during the 1940s and 1950s, undertaken by Bradford Hill and others in the United Kingdom to generate high-quality evidence on the effectiveness of streptomycin in treating tuberculosis (Marks 1997). Later, Jonas Salk, Paul Meier, and others in the United States successfully mounted randomized trials to discern the effects of vaccine on the incidence of poliomyelitis (Chalmers 1999).
Some contemporary history of randomized trials in the social sector is given in Boruch (1997) and Oakley (2000). See both books for references to the examples that follow. In criminological research, the Manhattan Bail Bond experiment of the 1960s is a classic case. It involved randomly eliminating bail for some people to understand whether bail bond requirements actually work to assure that people appear for trial. More recent and remarkable trials are described in the University of Maryland’s report to the US Congress (http://www.nijrs.org and http://preventingcrime.org).
Advances in the economic arena include the graduated work incentive trials of the 1970s, tests of employment and training programs in the 1980s, and trials on welfare programs in the 1990s. Fienberg et al. (1985) discuss these and other large-scale economic and policy experiments, and how substantive theory is important in the experiments’ designs. Many of these studies are abstracted in Greenberg et al. (1997). See the website for the Social Research and Demonstration Corporation for information on related work in Canada (http://www.policyresearch.gc.ca).
The Perry Preschool High Scope trial on preschool education, begun in the 1960s, is a precedent that is well known by people who want to understand the effects of early childhood intervention programs (Boruch 1997). Taroyan and others (2000) have mounted a similar trial in England. No reports on a large-scale randomized trial on the effects of reducing classroom size appeared until the 1990s. The Tennessee STAR trial is a remarkable exception, covered in a special issue of Educational Evaluation and Policy Analysis (2000), Mosteller (1999), and Finn and Achilles (1990).
4. Ethics And Law
The ethical standards for research on human subjects in many countries have been influenced heavily by the Nuremberg Code and the Helsinki Declaration on medical research. Elements of these codes have been adopted, at times, to assure that the rights of people are protected in social, criminological, and educational research. The codes are pertinent to randomized field trials as they are to other kinds of studies.
Such codes require, for instance, that individuals be informed that their engagement in a study is voluntary, and that they can decline to participate or, having agreed to participate, that they can withdraw. Codes regard it as essential to obtain the participant’s informed consent, based on a well-specified statement of possible risks and benefits to the participant. Moreover, an institutional review board is often required to examine the experiment’s design in order to assure that certain ethical standards are met.
In the United States, for example, the US Code of Federal Regulations (45 CFR 46) requires that most research organizations abide by this kind of standard. Canada, the United Kingdom, and the Nordic countries have similar codes. Such governmental rules can at times be found on worldwide web sites such as www.helix.nih.gov. Research journals and books also provide important information. See, for instance, Sieber (1992) and articles in two special issues of Crime and Delinquency, 2000, 46(2–3), for specific references.
Many professional associations in social research have adopted codes of conduct that enlarge on the international codes and government standards. For instance, members of the Board of the American Criminological Association have (a) reiterated the need to attend to the ethics of research on human subjects and (b) emphasized the ethical responsibility of service providers, as well as researchers, to generate good evidence through randomized trials about the effectiveness of interventions in justice research. The Board also reminded readers that it is unethical for professionals in the service delivery arena, such as the courts, to require that individuals receive treatments whose effectiveness is unknown (Crime and Delinquency, 2000, 46(3)).
One paradox of research ethics codes is that they have not applied to professionals who experiment in an uncontrolled way by ‘trying something out.’ Despite the absence of unequivocal evidence on a treatment’s efficacy, for example, a physician may use the treatment on a patient without any ethics board review and without the informed consent that is required in a randomized trial. Similarly, a teacher who tries out new ways of teaching may do so without oversight by any institutional review board.
Ethical standards are often codified in law. That is, legislatures, diets, parliaments, and other law-making bodies can influence decisions about whether randomized trials are run. In the United States, for instance, welfare laws that were enacted during the 1990s permitted various states to depart from national rules. This permission to depart from national law was in the interest of testing unconventional approaches to enhancing the education and employment status of low-income families. Such ‘waivers’ are an important vehicle for translating society’s ordinary interest in improvement into the deployment and testing of innovations.
More generally, the designer of a randomized field trial can use at least four simple strategies to tailor the trial’s design so as to conform to good standards of ethics and law. In the first place, if a purportedly effective service is in short supply, relative to the demand for the service, random assignment of people to the service can often be justified. That is, many people regard a lottery as the fairest way to allocate a scarce resource.
Second, the trial’s designer recognizes that individuals’ eligibility for a particular intervention is an important ingredient of the design. For example, some criminal offenders clearly need to be jailed. Other offenders clearly ought not be put into jail. For still others, the effectiveness of jail would not be known. The randomized study would then focus only on those for whom the benefit of jail to the individual is unpredictable. That is, one designs the trial so as to include only those for whom the intervention’s effectiveness is uncertain.
A third strategy to the ethical design of randomized field trials involves a wait-list approach. That is, equally eligible people are randomly allocated into a group that receives treatment early and others are randomized to receive treatment later. The experiment then compares the state of two groups after the first has completed treatment and before the second group has begun.
A fourth strategy recognizes that, in some cases, random assignment of individuals to treatments is unethical (or infeasible) but recognizes that the random assignment of entities to different interventions is ethical. The trial’s designer, for instance, may find that local juvenile facilities are unwilling, on ethical grounds, to assign individuals randomly to different treatments in their facility. The trialists may also find, however, that the same facilities are willing to try out interventions whose effectiveness is unknown, so long as the entire facility is assigned (randomly) to the intervention. Boruch and Foley (2000) and Murray (1998) reviewed randomized trials of this kind. Hospitals, factories, schools, geopolitical jurisdictions, police precincts, and other entities are the units of random allocation and analysis.
5. Institutional Capacity For Randomized Trials
A randomized field trial requires resources. To judge from experience in different countries, an essential resource is people, including policy makers and practitioners, who are willing to support and contribute to studying the effects of innovations. The Tennessee Legislature’s support of the State’s trial on reducing the sizes of classes in schools is a case in point. See the special issue of Educational Evaluation and Policy Analysis, 2000, 20 (Nov. 10).
Doing a randomized field trial also requires understanding of managerial issues in field research and how to handle the cultural and other political-institutional factors that must be taken into account. Organizations and teams that are capable of running the experiment are important. Over the past 20 years, organizations such as Manpower Demonstration Research Corporation, Abt Associates, and Mathematica Policy Research have emerged in the United States to conduct large-scale experiments. In Canada, the Social Research and Demonstration Corporation has done so. Other organizational arrangements have been developed in universities to conduct small- to medium-scale trials. In the United Kingdom, for instance, see Fitz-Gibbon (1990) on school-based interventions. Martin Killias and his colleagues in Switzerland and Rudiger Ortmann and his colleagues in Germany built teams to mount randomized trials in the crime prevention arena. (See the special issue of Crime and Delinquency, 2000, 46(2).)
6. Complications And Issues
When a simple randomized trial involving two interventions is well designed and deployed, the statistical analysis and inferences are usually straightforward. Trials are not always simple. Nor are all such studies implemented without problems.
Trialists know how to randomize and check the integrity of the randomization. See Boruch (1997) and Experimental Design: Overview. Nonetheless, randomization can be corrupted, especially when the trial is run in an unfamiliar context. If the randomization is subverted, it is often difficult, if not impossible, to take this into account in a satisfactory way.
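One simple integrity check is to compare a baseline covariate across arms. The sketch below computes a standardized mean difference; the covariate, the data, and the 0.1 rule of thumb are illustrative assumptions, not prescriptions from the literature cited above.

```python
import statistics

def standardized_difference(x_treat, x_control):
    """Standardized mean difference of a baseline covariate between arms.

    Under intact randomization this should be near zero apart from chance;
    a common rule of thumb (an assumption here, not from the text) treats
    |d| > 0.1 as a signal worth investigating.
    """
    pooled_sd = statistics.pstdev(list(x_treat) + list(x_control))
    return (statistics.mean(x_treat) - statistics.mean(x_control)) / pooled_sd
```

A large imbalance on a covariate measured before assignment cannot be caused by the intervention, so it flags possible subversion of the allocation itself.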
Further, trialists capitalize on contemporary statistical methods and software to design a trial that maximizes the experiment’s statistical power under different scenarios. This assures that real differences among programs can be discerned. In novel contexts, however, it may not be possible to use this technology easily because little is known about the target groups, programs, and context. More important, such power analyses help to discount small sample size and other design factors as a reason for finding no remarkable differences among programs. The trialist must usually get beyond this, addressing questions such as: did the interventions and control conditions really differ in character or operation? Is there something wrong with the theory underlying the new program, or with one’s understanding of the problems it was designed to resolve?
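For a two-arm comparison of means, a standard planning device is the normal-approximation formula n = 2((z₁₋α/₂ + z_power)/d)² per arm. The sketch below illustrates it; the conventional α = 0.05 and 80 percent power defaults are assumptions, not values taken from any trial discussed here.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a standardized effect
    (Cohen's d) in a two-sample comparison of means, via the usual
    normal-approximation formula n = 2 * ((z_{1-alpha/2} + z_power) / d)**2.
    The defaults are conventional choices, assumed for illustration.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_power = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)
```

Under this approximation a ‘medium’ effect of d = 0.5 requires roughly 63 participants per arm, and halving the detectable effect size roughly quadruples the required sample, which is why power must be considered before, not after, the trial.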
The advice to ‘analyze the units as you have randomized them’ is wise in that this approach preserves the benefits of randomization. That is, when individuals have been randomly allocated, they are then the units of analysis. When institutions have been randomly assigned to alternative programs, then these are the units of statistical analysis. In some trials, matters are more complicated. Schools may be randomly assigned to interventions; teachers within schools may or may not be randomly assigned to children within classrooms and schools. Designing these more complicated trials depends on the knowledge base in education and in statistics, including understanding of hierarchical and mixed statistical models.
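A minimal way to honor ‘analyze the units as you randomized them’ when whole entities were assigned is to collapse individual outcomes to one summary per cluster before comparing arms. The sketch below uses hypothetical schools and test scores; the data layout and values are assumptions for illustration only.

```python
import statistics

def cluster_means(outcomes_by_cluster):
    """One mean outcome per randomized cluster, so that the unit of
    analysis matches the unit of random assignment."""
    return {c: statistics.mean(v) for c, v in outcomes_by_cluster.items()}

# Hypothetical data: schools, not pupils, were randomly assigned.
treated = cluster_means({"school_A": [72, 75, 70], "school_B": [68, 71, 69]})
control = cluster_means({"school_C": [65, 66, 64], "school_D": [60, 63, 66]})
effect = statistics.mean(treated.values()) - statistics.mean(control.values())
```

Analyzing the pupil-level scores directly would overstate precision, because pupils within a school are not independent; collapsing to cluster means is the simplest (if conservative) way to respect the design, with hierarchical models as the more flexible alternative noted above.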
Accumulating and synthesizing results of randomized trials is obviously important in building the knowledge base about the effectiveness of social innovations. Advances in meta-analysis and systematic reviews can help to do this. There are complications of course. In any given area, randomized trials vary in quality and trustworthiness and so ought not be given equal weight. Despite the threats to validity of nonrandomized trials, some of these trials produce evidence that is as defensible, under certain assumptions, as the evidence in randomized trials. Learning how to put the two together depends on new statistical methods such as propensity scoring, access to original microrecords, and substantive knowledge about programs, target populations, and contexts.
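The arithmetic at the heart of such a synthesis can be as simple as inverse-variance (fixed-effect) pooling, sketched below. The effect estimates and variances are hypothetical, and a real systematic review would also weigh study quality and assess heterogeneity, as the paragraph above notes.

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooling of per-trial effect estimates.

    Each trial is weighted by 1/variance, so more precise trials count for
    more. Returns the pooled estimate and its variance. A minimal sketch of
    the synthesis step, not a full systematic-review workflow.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three hypothetical trials: the precise first trial dominates the pooled value.
estimate, variance = pooled_effect([0.30, 0.50, 0.10], [0.01, 0.04, 0.04])
```

Giving every trial the weight its precision warrants, rather than equal weight, is one concrete way of acting on the observation that trials vary in quality and trustworthiness.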
7. International Information On Randomized Field Trials
Until the early 1990s, there existed no central source of information on randomized trials undertaken by health care researchers. In 1993, the international Cochrane Collaboration (Maynard and Chalmers 1997) was created to prepare, maintain, and disseminate systematic reviews of studies on the effects of health care programs. As part of its mission to produce systematic reviews, the Collaboration evolved into a major source of information on randomized trials. The Cochrane Library is accessible through subscription on the worldwide web (www.cochrane.ac.uk).
Until the year 2000, no organization existed to provide ready access to randomized trials in education, crime and justice, social work and social welfare, and other social sectors. The international Campbell Collaboration was developed as a younger sibling to the Cochrane Collaboration. The Campbell Collaboration’s mission, in part, is to identify, maintain, and make accessible randomized field trials done in all social sectors. The collaborators use this registry resource to generate systematic reviews of the effectiveness of interventions (http://campbell.gse.upenn.edu).
Bibliography:
- Alplasca M, Siegel D, Mandel J S, Santana R, Paul J, Hudes E S, Monzon O T, Hearst N 1995 Results of a model AIDS prevention program for high school students in the Philippines. AIDS Supplement 1: 7–13
- Bernard C 1957 [1865] An Introduction to the Study of Experimental Medicine [trans. Greene H C]. Dover Publications, New York
- Boruch R F 1997 Randomized Experiments for Planning and Evaluation: A Practical Guide. Sage Publications, Thousand Oaks, CA
- Boruch R F, Foley E 2000 The honestly experimental society. In: Bickman L (ed.) The Legacy of Donald T. Campbell. Sage Publications, Thousand Oaks, CA, pp. 193–238
- Chalmers I 1999 Why transition from alternation to randomization was made: Letter to the Editor. British Medical Journal 319(20): 1372
- Fienberg S E, Singer B, Tanur J M 1985 Large scale social experimentation in the United States. In: Atkinson A C, Fienberg S E (eds.) A Celebration of Statistics: The ISI Centenary Volume. Springer-Verlag, Berlin, pp. 287–326
- Finn J D, Achilles C M 1990 Answers and questions about class size: a statewide experiment. American Educational Research Journal 27(3): 557–77
- Fitz-Gibbon C T 1990 Success and failure in peer tutoring experiments. In: Goodlad S, Hirst B (eds.) Explorations in Peer Tutoring. Basil Blackwell Ltd, Oxford, UK, pp. 26–57
- Fuchs D, Fuchs L S, Mathes P G, Simmons D C 1997 Peer-assisted learning strategies: making classrooms more responsive to diversity. American Educational Research Journal 34(1): 174–206
- Gillies R M, Ashman A F 1998 Behavior and interactions of children in cooperative groups in lower and middle elementary grades. Journal of Educational Psychology 90(4): 746–57
- Greenberg D, Shroder M, Onstott M 1997 Digest of Social Experiments. Urban Institute Press, Washington, DC
- Hrobjartsson A, Gotzsche P C, Gluud C 1997 The controlled clinical trial turns 100: Fibiger’s trial of serum treatment of diphtheria. British Medical Journal 317: 1243–5
- Kaye D H, Freedman D A 1994 Reference guide on statistics. In: Federal Judicial Center (ed.) Reference Manual on Scientific Evidence. Federal Judicial Center, Washington, DC, pp. 331–412
- Jamison D, Searle B, Suppes P 1980 Radio Mathematics in Nicaragua. Stanford University Press, Stanford, CA
- Marks H M 1997 The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990. Cambridge University Press, Cambridge UK
- Maynard A, Chalmers I (eds.) 1997 Non-random Reflections on Health Services Research: On the 25th Anniversary of Archie Cochrane’s Effectiveness and Efficiency. British Medical Journal (BMJ) Publishing Group, London
- McKay J H, McKay A, Sinisterra L, Gomez H, Lloreda P 1978 Improving cognitive ability in chronically deprived children. Science 200(4): 3270–8
- Mosteller F 1999 How does class size relate to achievement in schools?. In: Mayer S E, Peterson P E (eds.) Earning and Learning: How Schools Matter. Brookings Institution Press, Washington DC and Russell Sage Foundation, New York, pp. 117–30
- Murray D M 1998 Design and Analysis of Group Randomized Trials. Oxford University Press, New York and Oxford, UK
- Oakley A 2000 Experiments in Knowing. Polity Press, Cambridge, UK
- Sieber J 1992 Planning Ethically Responsible Research: A Guide for Students and Institutional Review Boards. Sage Publications, Thousand Oaks, CA
- Stigler S M 1986 The History of Statistics: The Measurement of Uncertainty Before 1900. Belknap Press of Harvard University Press, Cambridge, MA
- Taroyan T, Roberts I, Oakley A 2000 Randomisation and resource allocation: A missed opportunity for evaluating health care and social interventions. Journal of Medical Ethics 26: 319–22
- Terefe A, Larson C P 1993 Modern contraception use in Ethiopia: does involving husbands make a difference? American Journal of Public Health 83(1): 1567–71