Random assignment of units to treatment and control groups allows researchers to make the strongest possible causal connection between the treatment and observed outcomes. This research paper considers the way in which random assignment is implemented in educational and social field settings.
1. Widespread Acceptance Of Random Assignment
The last three decades of the twentieth century have seen widespread acceptance of random assignment as an important aspect of high-quality research studies designed to test the effectiveness of interventions. In the 1970s and 1980s, proponents of randomized designs for studying program impacts began to assemble a literature attesting to their feasibility and frequent use (Boruch 1974, Boruch et al. 1978, Boruch and Wothke 1985). By the 1990s, randomized experiments in the social sciences had become so numerous that Greenberg and Shroder (1997) prepared a 500-page compendium of recent experiments in various content areas. The US Congress now routinely specifies that random assignment be used in studies of the effectiveness of federally funded programs, and recently called for a random assignment study of Head Start, America's flagship social program for disadvantaged children (Advisory Committee on Head Start Research and Evaluation 1999).
Randomized experiments have limitations as research vehicles (see Cook and Campbell 1979 for a discussion of the strengths and weaknesses of randomized experiments), and they are not universally accepted as the best way to learn about program effectiveness. One sector of the evaluation community espouses the virtues of qualitative research in place of randomized experiments and other quantitative approaches (e.g., Guba and Lincoln 1982), while another sector is more interested in issues of generalizability than internal validity (e.g., Cronbach 1982). Cook (1999) has written about the anti-random assignment culture that exists in American schools of education. Still, there is no denying that American society relies heavily on randomized experiments as one source of information about the effectiveness of social programs (St. Pierre and Puma 2000).
2. The Purpose Of Random Assignment
As long as researchers take the time to explain the strengths and weaknesses of random assignment studies, even those who commonly question experiments can readily understand the rationale underlying random assignment. Perhaps most helpful is the argument that random assignment is a special purpose tool: it is good for only one purpose, making causal inferences, but it is far better for that purpose than any alternative. As a special purpose tool, random assignment has many limitations. For example, it does not assure group comparability over time (attrition from the evaluation sample may result in nonrandom analytic groups), it cannot ensure that subjects remain in the intervention being studied (with or without random assignment, interventions may incur high dropout rates), and it does not help with issues of generalizability (a one-site random assignment study is no more generalizable than a one-site quasi-experiment). Cook and Campbell (1979) stated it clearly: 'Though random assignment is germane to the research goal of assessing whether the treatment may have caused any observed effects, it is conceptually irrelevant to all other research goals' (p. 343).
3. Common Objections To Random Assignment
Most of the common objections to random assignment in studies of program effectiveness do not dispute its utility as a tool for understanding causal connections. Instead, objections take other forms. Some are made in an attempt to avoid participation in a research study. Take the oft-made claim that 'random assignment is unethical' or that 'random assignment means denying services to some subjects.' These are political statements, either made without understanding what random assignment means, or made in an attempt to retain control over which subjects are to be served by a given social program.
When there is excess demand for a social intervention, program operators are in the position of denying services to needy families every day as they make decisions about which families are the neediest or the most deserving. As long as a randomized experimental study of such an in-demand program does not reduce the total number of subjects served, there is no denial of services: the same aggregate number of families is served with or without an experiment. What an experiment does change is the mechanism for selecting which families receive the services; random assignment, rather than the judgment of program staff, decides who participates in the program. Thus, when program operators say that random assignment is unethical or involves denying services, they usually are saying that they do not want to give up the power to determine which families are served.
But what about denying services to families? Is this always unethical? Suppose a funding agency wants to learn about the effectiveness of one of its social programs and, in the absence of evidence of effectiveness, plans to withdraw its support. Suppose further that there is no waiting list, and hence that a randomized experiment would make it impossible to maintain aggregate service levels since some families would be assigned to a control group. In this case, the ethical problem of doing an experiment that reduces the number of families served by a social program has to be weighed against the possibly greater ethical problem of not doing an experiment and risking reduced funding for the entire program.
An underlying, and typically unvoiced, objection to random assignment is that program staff or funders simply assume the effectiveness of social programs rather than worrying about whether scarce resources are being spent in the most appropriate manner. This is understandable, since scientific skepticism seems to counteract the ability of program operators to do their best work. But again, this is a political position: that it is appropriate to assume the effectiveness of a social program instead of subjecting it to a rigorous test.
Another objection to random assignment is that assignment to a control group 'feels bad' to program staff and to the families they serve. The argument that randomization is like a lottery does not work in many social program settings. Why? Because a lottery usually has only one winner, and the losers can take solace in the fact that they are in the great majority. On the other hand, those assigned to the control group in a randomized study do not view their assignment the same way they view a lottery loss, since the odds of winning a lottery are minuscule, whereas the odds of being randomly assigned to the desired social program are usually 50:50.
Some other arguments are that randomized experiments are expensive (Levitan 1992) or that they take a long time. The response to this sort of complaint is, 'Compared to what?' A randomized experiment is expensive and time-consuming when compared with the alternative of doing no research at all, but that is rarely a realistic alternative. Many quasi-experimental alternatives are just as expensive (any design in which data are collected from a treatment group and a nonequivalent comparison group) and time-consuming (the common pre–post study) as randomized studies. Some alternative approaches are quick and inexpensive, for example, relying on data collected for other purposes. However, none of these alternatives can do the one thing that randomized studies excel at: providing strong evidence about causal connections. The problem with using alternatives to random assignment to study program effectiveness is that even if they are less expensive, require less time, and are easier to implement, they rarely are worth the time and money spent on them, because the results are not helpful for their intended purpose of understanding program effectiveness. To paraphrase the common aphorism about education, 'If you think randomized studies are expensive, try ignorance.'
4. Conditions Making Random Assignment Easier
There are several circumstances that reduce the difficulty of implementing random assignment studies. Suppose an intervention program is in short supply and there is a list of subjects waiting to participate. This often happens with Head Start (the largest federal preschool program in the US) projects in large metropolitan areas. If being placed on a waiting list means that a family is likely to miss a substantial portion of the Head Start year, then randomly assigning families from the waiting list to be in the program or in a control group might well be preferable to being on the waiting list with certainty. The absence of a waiting list does not mean that subjects are uninterested in a program. It often means that program staff do not want to spend time and energy on recruiting until they have openings. Hence, it is possible in short-supply situations for researchers to generate a waiting list by helping to recruit subjects and informing them about the planned study.
Suppose a treatment can be delayed with no ill effect on the participants. This is likely the situation in a school that wishes to test the short-term effectiveness of a one-semester school health curriculum. In this case, it would be possible to randomly assign students to take the school health course in the fall semester or in the spring semester.
Suppose a program’s proponents are skeptical and are searching for the best solution to a social problem instead of trying to promote a specific intervention. Or, suppose a demand for evidence of program effectiveness cannot be ignored, for example, a congressional mandate. In these cases, political objections to random assignment are eased because program implementers are supportive of, or at least are willing to cooperate with, the research.
Suppose that a social program has many local projects; for example, the Head Start program has about 1,600 local projects and the Even Start program (a family literacy program) has about 800 projects. An experimental evaluation of these programs could be done by selecting a relatively large number of projects and randomly assigning only a few subjects within each project. This approach minimizes the unpleasant effects of random assignment for any given project, making it more palatable to local program staff.
Assigning higher-level units is another way to ease the pain of random assignment. Suppose that there is an interest in testing three different solutions to a given social problem. One way to construct this test is to randomly assign individuals to one of three interventions implementing the three different solutions. A different approach which avoids many of the common objections to random assignment is to randomize groups of individuals. For example, a federal agency could hold a grant competition to test three different approaches to preschool education. School districts would not be allowed to receive a grant unless they indicated their willingness to implement any of the three approaches. School districts that receive grants would then be randomly assigned to implement one of the three preschool approaches.
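As a concrete sketch of group-level randomization of this kind (the district names, number of grantees, and seed below are invented purely for illustration), the grantee districts can be shuffled once and dealt into equal-sized groups:

```python
import random

# Hypothetical illustration: twelve grantee school districts are randomly
# assigned to one of three preschool approaches (all names are invented).
districts = [f"District {i:02d}" for i in range(1, 13)]
approaches = ["Approach A", "Approach B", "Approach C"]

rng = random.Random(42)  # fixed seed so the assignment can be reproduced and audited
shuffled = districts[:]
rng.shuffle(shuffled)

# Deal the shuffled districts into three equal-sized groups, one per approach.
assignment = {a: shuffled[i::3] for i, a in enumerate(approaches)}

for approach, group in assignment.items():
    print(approach, "->", ", ".join(group))
```

Because the unit of assignment is the district, no individual family is ever told it "lost" a coin flip, which is part of what makes this design more palatable in the field.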
5. Working Towards Correct Random Assignment
Random assignment is more likely to be successfully implemented if done by researchers and research organizations with prior experience in conducting experimental studies. Certain research organizations have built both the capacity and the reputation for conducting random assignment studies. Greenberg et al. (1999) report that almost half of the social experiments started in the US since 1983 have been done by three large research organizations.
In the same vein, random assignment is more likely to be successful if it is under the control of researchers rather than program implementers (Conner 1977). The latter are likely to make exceptions or may misunderstand random assignment rules, even if those rules are carefully prepared by researchers. The research team does not have to be physically present in order to control the random assignment. One approach is to have program staff be responsible for recruiting subjects, explaining the experimental alternatives, and transmitting lists of study subjects to the research team via fax, email, or other means. The research team then does the random assignment and sends back listings of research subjects, sorted into treatment and control groups. A related approach is for the research team to prepare a random assignment computer program for use by program staff who recruit subjects and enter basic data on a microcomputer which does the random assignment on the spot.
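A minimal sketch of the first workflow, in which the research team receives a roster from program staff and returns the two groups (the subject IDs and seed are hypothetical), might look like this:

```python
import random

def randomize_roster(subject_ids, seed):
    """Research-team side of the workflow: take the roster of recruited
    subjects transmitted by program staff, split it at random into
    equal-sized treatment and control groups, and return the lists to
    send back. The seed is retained by the research team so that the
    assignment can be reproduced and audited later."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": sorted(ids[:half]), "control": sorted(ids[half:])}

# Hypothetical roster transmitted by program staff.
roster = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]
groups = randomize_roster(roster, seed=7)
print(groups)
```

Keeping the shuffle (and its seed) on the research-team side is what prevents well-meaning staff from making case-by-case exceptions.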
The way in which random assignment is implemented depends on the way in which applicants are recruited for an intervention. Sometimes a large pool of applicants is recruited. In these cases, information on the entire pool can be sent to the research team at the same time, simplifying the random assignment. The process is more complicated when applicants enter a program on a rolling basis. In these cases, a random assignment algorithm has to be developed so that applicants are told their treatment/control status soon after they apply. However, simple assignment rules, such as assigning the first applicant to treatment, the second to control, the third to treatment, and so on, are easily manipulated by program staff, who often have control over the flow of applicants. More complex assignment systems are less subject to local control. For example, random assignment of families to an intervention could be done in blocks of four, where the order of treatment and control assignments is random within each block.
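The block-of-four scheme just described can be sketched in a few lines (the block count and seed here are illustrative):

```python
import random

def blocked_sequence(n_blocks, seed):
    """Generate an assignment sequence in blocks of four: each block holds
    exactly two 'treatment' and two 'control' slots in random order, so
    the groups stay balanced throughout rolling intake, but staff cannot
    predict the next assignment from the previous ones."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["treatment", "treatment", "control", "control"]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

seq = blocked_sequence(n_blocks=3, seed=11)  # assignments for 12 rolling applicants
print(seq)
```

Unlike strict alternation, a staff member who sees that the last applicant went to treatment cannot infer where the next one will go, which is the point of blocking.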
6. Preserving Randomization
Implementing random assignment is difficult. Once achieved, how can it be preserved throughout the life of an evaluation? One issue has to do with when randomization takes place, which involves tradeoffs between generalizability and internal validity. Families interested in participating in the Even Start family literacy program could be randomized as soon as they are first recruited for the program, after they have had the evaluation explained to them and have indicated their willingness to accept any of the alternatives, or after they have completed a one-month 'tryout period' during which they can try Even Start and see whether they truly are interested in the program. Early randomization draws on the largest possible pool of families (increased generalizability) but risks losing many families (decreased internal validity) who find out that they are not assigned to the desired group or who leave the program after a tryout period. Late randomization winnows the pool of families (decreased generalizability) to those who are most likely to participate in the study (increased internal validity). Gueron and Pauly (1991) and Riecken et al. (1974) give additional examples.
Whether assigned to an intervention or a control group, subjects may refuse to cooperate with research measurements once an experiment has been started. Such attrition has the potential to destroy the comparability of the intervention and control groups. Clearly, it is best to use all means possible to avoid attrition of subjects. In some circumstances (e.g., studies of welfare programs), subjects can be sanctioned if they do not participate in research; in other cases, incentives for participation can be offered. Attrition is all but unavoidable in most social experiments since families move, are ill or otherwise unavailable for measurement activities, or simply refuse to participate in the research. Methods for determining whether attrition is related to the treatment and ways of dealing with this problem are discussed by Cook and Campbell (1979).
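One simple first check on whether attrition is related to group assignment is a two-proportion z test on dropout counts; the counts below are invented for illustration, and Cook and Campbell (1979) discuss fuller diagnostics:

```python
import math

def attrition_z(n_t, lost_t, n_c, lost_c):
    """Two-proportion z statistic for the difference in attrition rates
    between a treatment group (n_t enrolled, lost_t dropped out) and a
    control group (n_c enrolled, lost_c dropped out). Roughly, |z| > 1.96
    suggests attrition differs by group at the 5% level, a warning sign
    that the groups may no longer be comparable."""
    p_t, p_c = lost_t / n_t, lost_c / n_c
    p = (lost_t + lost_c) / (n_t + n_c)                  # pooled attrition rate
    se = math.sqrt(p * (1 - p) * (1 / n_t + 1 / n_c))    # standard error of the difference
    return (p_t - p_c) / se

# Illustrative counts: 50 of 200 treatment and 30 of 200 control subjects lost.
z = attrition_z(n_t=200, lost_t=50, n_c=200, lost_c=30)
print(f"z = {z:.2f}")
```

A large z here does not fix the problem, but it tells the analyst that simple treatment-control comparisons can no longer be taken at face value.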
Are there ways to reduce the likelihood that subjects will refuse to go along with the randomization, or refuse to participate in the study? One approach is to plan the study so that subjects who agree to participate in the randomization have a better chance of getting the desired intervention than they would have if the study had not existed. This works best when there is a waiting list and when the intervention has a large number of openings relative to the number of subjects needed for the study. A second approach is to oﬀer a competing intervention to members of the control group. This could work in several diﬀerent ways. One competing intervention could be a ﬁnancial or in-kind incentive for participating in the study. Another competing intervention could be a reduced or modiﬁed version of the treatment being studied. Yet a third alternative could be to oﬀer control group subjects the opportunity to enroll in the desired intervention at a later point in time.
References

- Advisory Committee on Head Start Research and Evaluation 1999 Evaluating Head Start: A Recommended Framework for Studying the Impact of the Head Start Program. US Department of Health and Human Services, Washington, DC
- Boruch R 1974 Bibliography: Illustrative randomized field experiments for program planning and evaluation. Evaluation 2: 83–7
- Boruch R, McSweeny A J, Soderstrom E J 1978 Randomized field experiments for program planning, development and evaluation. Evaluation Quarterly 2: 655–95
- Boruch R, Wothke W 1985 Randomization and field experimentation. New Directions for Program Evaluation 28
- Conner R F 1977 Selecting a control group: An analysis of the randomization process in twelve social reform programs. Evaluation Quarterly 1: 195–244
- Cook T D 1999 Considering the major arguments against random assignment: An analysis of the intellectual culture surrounding evaluation in American schools of education. Paper presented at the Harvard Faculty Seminar on Experiments in Evaluation, Cambridge, MA
- Cook T D, Campbell D T 1979 Quasi-experimentation: Design & Analysis Issues for Field Settings. Rand McNally College Pub. Co., Chicago
- Cronbach L J 1982 Designing Evaluations of Educational and Social Programs, 1st edn. Jossey-Bass, San Francisco
- Greenberg D, Shroder M 1997 The Digest of Social Experiments, 2nd edn. The Urban Institute Press, Washington, DC
- Greenberg D, Shroder M, Onstott M 1999 The social experiment market. Journal of Economic Perspectives 13: 157–72
- Guba E G, Lincoln Y S 1982 Effective Evaluation, 1st edn. Jossey-Bass, San Francisco
- Gueron J M, Pauly E 1991 From Welfare to Work. Russell Sage Foundation, New York
- Levitan S A 1992 Evaluation of Federal Social Programs: An Uncertain Impact. George Washington University, Center for Social Policy Studies, Washington, DC
- Riecken H W, Boruch R F, Campbell D T, Coplan W, Glennan T K, Pratt J, Rees A, Williams W 1974 Social Experimentation: A Method for Planning and Evaluating Social Innovations. Academic Press, New York
- St. Pierre R G, Puma M J 2000 Toward the dream of the experimenting society. In: Bickman L (ed.) Advances in Social Research Methods: The Heritage of Donald Campbell. Sage Publications, New York