An experiment is a deliberately planned process to collect data that enable causal inferences and legitimize treatment comparisons. Random assignments of subjects to treatments (including control groups) allow statistically signiﬁcant diﬀerences among treatment outcomes to be attributed solely to the treatment diﬀerences. Without strong additional assumptions, observational studies and quasi-experiments cannot justify drawing causal inferences, because only experiments, by deﬁnition, assign the treatments randomly.
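The logic of this randomization argument can be sketched in a small simulation (illustrative only; all numbers are invented). Subjects share a hidden trait that drives both program take-up and outcomes, so a self-selected comparison overstates the treatment effect, while a randomized comparison recovers it:

```python
# Illustrative sketch: why random assignment permits causal inference.
# A hidden trait affects BOTH program take-up and the outcome, so the
# self-selected (observational) comparison is biased; the randomized one is not.
import random
import statistics

random.seed(0)
N = 20000
TRUE_EFFECT = 2.0  # the treatment truly adds 2.0 to the outcome

def outcome(trait, treated):
    # outcome = hidden trait + treatment effect (if treated) + noise
    return trait + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

subjects = [random.gauss(0, 1) for _ in range(N)]  # hidden trait per subject

# Observational study: subjects with a higher trait opt in more often.
obs_treated, obs_control = [], []
for trait in subjects:
    treated = random.random() < (0.8 if trait > 0 else 0.2)  # self-selection
    (obs_treated if treated else obs_control).append(outcome(trait, treated))

# Experiment: a coin flip assigns treatment, independent of the trait.
exp_treated, exp_control = [], []
for trait in subjects:
    treated = random.random() < 0.5
    (exp_treated if treated else exp_control).append(outcome(trait, treated))

naive_diff = statistics.mean(obs_treated) - statistics.mean(obs_control)
random_diff = statistics.mean(exp_treated) - statistics.mean(exp_control)
print(f"true {TRUE_EFFECT:.2f}  self-selected {naive_diff:.2f}  randomized {random_diff:.2f}")
```

With these settings the self-selected contrast lands well above the true effect, while the randomized contrast lands close to it; the point is the mechanism, not the particular numbers.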
Public welfare relies on experiments for many purposes, including the approval of medicines, medical procedures, and new foods, and the assessment of product reliability. Large-scale social experiments differ from most of these experiments, even from many that involve human subjects, in that the treatments are social programs, the subjects are humans, and the responses measure outcomes for these people (or families) as they make decisions under different possible programs.
Large-scale public policy experiments in the United States generally are considered to have started with the New Jersey Negative Income Tax Experiment of 1968. Orr (1999) puts the number of social experiments (not all large scale) launched per decade in the US at six in the 1960s, at 49 and 84 in the next two decades, and then at 56 for 1991–95.
These experiments were ﬁelded to gather scientiﬁc information so policy makers could know as accurately as possible the costs and beneﬁts of possible social programs. Most social experiments have been initiated and funded by agencies (national, state, or at other political levels) concerned with the welfare of large populations. While these large experiments were costly and time consuming, they found sponsors because their costs were dwarfed by the costs of actual programs.
1. Some Examples
Much has been written about social experiments. Greenberg and Shroder (1997) summarize the duration, cost, treatments, measured outcomes, sample sizes, design, funding sources, developers, evaluators, and results of over 100 of them. Orr (1999), Fienberg et al. (1985), and others in the references here, review some of the main studies and list further sources. This research paper describes several of the better-known public policy social experiments, each of which monitored human subjects for several years as they made personal or family economic decisions while eligible for beneﬁts provided by an experimental policy to which they were randomly assigned.
1.1 The New Jersey Negative Income Tax Experiment (NJ–NIT)
The NJ–NIT experiment (Fienberg et al. 1985, Greenberg and Shroder 1997, Orr 1999, Hausman and Wise 1985) was inspired by the thesis of Ross (1966), written while she was a PhD student in economics at MIT. Ross argued that only a social experiment could resolve disputes over how work patterns of the poor might change if the government were to adopt a Negative Income Tax (NIT) program to supplement their incomes. The Office of Economic Opportunity in the Department of Health, Education, and Welfare (HEW) funded the initial Income Maintenance Experiment (IME) to begin in 1968. Mathematica Policy Research, commissioned to design and field it, randomized 725 experimental households into nine treatment groups and another 632 households into a control group; welfare and near-welfare households in four New Jersey cities were enrolled for three years each. The nine experimental treatments in this response-surface design involved two main variables, one providing a range of income guarantees and the other a range of tax-rate reductions offered to families as incentives to earn additional income. The NJ–NIT objectives were to estimate, mainly via regression analyses, how work hours, rates of high school graduation, and other outcomes depended on these factors.
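A toy version of such a two-factor response-surface analysis can be sketched as follows. The guarantee levels, tax rates, response coefficients, and cell sizes are all invented for illustration; they are not NJ–NIT values, and ordinary least squares here stands in generically for the experiment's regression analyses:

```python
# Hedged sketch of a two-factor response-surface analysis: guarantee level and
# tax rate are the design variables, annual work hours the outcome, and
# ordinary least squares recovers the response surface. Invented numbers only.
import random

random.seed(1)

# Hypothetical treatment cells: a control cell plus combinations of
# (income guarantee as fraction of the poverty line, marginal tax rate).
cells = [(0.0, 0.0)] + [(g, t) for g in (0.5, 0.75, 1.0) for t in (0.3, 0.5, 0.7)]

def hours(g, t):
    # assumed "true" response: work hours fall with both guarantee and tax rate
    return 1800 - 200 * g - 300 * t + random.gauss(0, 50)

rows, y = [], []
for g, t in cells:
    for _ in range(200):  # 200 households per cell
        rows.append((1.0, g, t))
        y.append(hours(g, t))

def ols(X, y):
    """Least squares via the normal equations and Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        p = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

intercept, b_guarantee, b_tax = ols(rows, y)
print(f"intercept {intercept:.0f}, guarantee effect {b_guarantee:.0f}, tax effect {b_tax:.0f}")
```

The recovered coefficients sit near the assumed values (1800, -200, -300), showing how a spread of guarantee and tax-rate cells lets regression trace out the whole response surface rather than a single policy point.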
Difficulties encountered in the NJ–IME included significant nonstationary participant behavior, at the outset because participants needed time to become familiar with their program incentives, and toward the end because of some gaming behaviors. The NIT design deliberately correlated the random treatment assignments with family incomes in order to reduce experimental costs. That strategy complicated the analyses and risked compromising the experiment's randomization. Other design problems included differences between treatment and control participants in their initial income reporting, and the use of total income as an eligibility criterion at enrollment, with a resulting underrepresentation of working wives.
1.2 The Health Insurance Experiment (HIE)
The need for a health insurance experiment was recognized during the Nixon administration when for the ﬁrst time the government was considering possible forms of a national health insurance program. Congressional leaders held widely divergent views about possible costs of a national program, and especially about the elasticity of demand for health care. If demand were completely inelastic, previously insured individuals would continue to purchase medical services at the same rate. That would make government sponsorship more aﬀordable. While no experiment could provide all needed information, policymakers funded the HIE because it would narrow the range of disagreements, and help avoid disastrous errors in a national program.
The HIE was ﬁelded in 1974 as the income maintenance experiments wound down. The Oﬃce of Economic Opportunity, and later HEW, asked the Rand Corporation to design and conduct the HIE, ultimately detailed in ‘Free for All’ (Newhouse 1993) by its principal investigator.
The HIE was designed to assess the demand for healthcare and many other questions, including whether health benefits might derive from a national health insurance program. Nearly 3,000 families, each for three to five years, were randomized to one of 14 insurance plans, including an HMO group. The insurance plans varied along two main dimensions: the coinsurance rate a family faced, and the deductible limit beyond which the family was exempted from any further expenses for the years remaining. Anticipating that the world might differ considerably by the time the results of an eight-year experiment became available, the HIE treatments mirrored no particular proposal, but instead were chosen to provide a range of rates and benefits from which HIE results could be extrapolated reliably to future legislative proposals.
Demand in the HIE was ultimately found to respond to price. Perhaps primary among the HIE findings are the elasticity estimates that legislators still use when comparing the costs of new health insurance proposals (Newhouse 1993, Orr 1999).
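The kind of comparison such estimates support can be illustrated with the standard arc (midpoint) elasticity formula. The coinsurance rates and spending figures below are hypothetical, chosen only to show the computation; they are not HIE estimates:

```python
# Arc (midpoint) price elasticity of demand: percent change in quantity
# divided by percent change in price, each measured against the midpoint.
def arc_elasticity(q1, q2, p1, p2):
    dq = (q2 - q1) / ((q1 + q2) / 2)  # percent change in quantity
    dp = (p2 - p1) / ((p1 + p2) / 2)  # percent change in price
    return dq / dp

# Hypothetical annual medical spending at 25% vs. 95% coinsurance.
e = arc_elasticity(q1=1000.0, q2=800.0, p1=0.25, p2=0.95)
print(f"arc elasticity = {e:.2f}")  # prints "arc elasticity = -0.19"
```

Comparing demand across plans with different cost-sharing rates, as in this sketch, is precisely what lets an elasticity estimate be carried forward to price the cost-sharing provisions of future proposals.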
1.3 JOBSTART, A Demonstration
The rate of new social experiments continues to increase as the technology for doing them matures, and as new research groups and ﬁrms develop the skills to design, conduct, and analyze them. The newer experiments tend to be smaller and more focused than the ﬁrst ones, many having been designed to demonstrate the eﬀectiveness of an existing program, and to identify areas for improvement. These ‘demonstration’ experiments are simpler to ﬁeld than those, like the IME and the HIE, that are designed to provide information on a wide range of unspeciﬁed programs.
Perhaps politicians are partial to demonstrations because of their lower costs and because they focus on concrete legislation. Demonstration experiments cost less because some of the participant funding comes through an existing program's benefits, and because the focus is exclusively on one treatment and a control group. A complication is that control-group subjects, if matched to treatment subjects, will be eligible for the same program benefits and so must be allowed to access them. One way to counter this is to make the treatment an encouragement for eligible subjects to take advantage of an underused program.
JOBSTART was a demonstration experiment to evaluate the Job Training Partnership Act (JTPA) of 1982 (Greenberg and Shroder 1997, Cave et al. 1993). Sponsored jointly by the US Department of Labor and by several private foundations, JOBSTART was designed and evaluated by the Manpower Demonstration Research Corporation (MDRC) between 1985 and 1992, and administered by the local Service Delivery Areas in 13 geographically disparate US sites. Subjects were randomized into treatment and control groups, with about 1,150 subjects in each. All subjects were eligible for JTPA (a national program), being economically disadvantaged high school dropouts aged between 17 and 21 years, and all were poor readers. Treatment interventions included two summers of vocational and educational training, support services, job placement assistance, and incentive payments. Subjects were monitored for four years.
JOBSTART found that the treatment group had a dramatically higher rate of high school completion. Women who had no children before the demonstration, but who gave birth afterward, were less likely to receive AFDC payments. The opposite result was found for women who started with children. No major long-term diﬀerences were found for employment, earnings, or welfare measures.
1.4 The Housing Assistance Supply Experiment (HASE)
Economic theory predicts a supply-side price response to increased demand, so that benefits received by individuals do not necessarily rise in proportion to the dollars provided. Even so, almost all large-scale social experiments have measured demand only, mainly because a supply response is extremely difficult to produce in an experiment. HASE is a notable exception (Lowry 1983). While Housing Allowance Demand Experiments had been fielded to evaluate new housing policies (Bradbury and Downs 1981), evaluating the supply response to alternative housing programs requires a long-term saturation experiment in which all eligible members of an entire community participate in an experimental program. A supply experiment takes longer because the supply side needs adequate time to respond to new demand, generally much longer than individuals need to respond to new opportunities.
The Department of Housing and Urban Development sponsored HASE to learn how the supply side would respond if individuals and families received increased government assistance for making home improvements. HASE ran from 1972 to 1982 in Green Bay, Wisconsin and in South Bend, Indiana, selected as two small, stable communities in which a meaningful supply response might be stimulated. Long-term beneﬁts were guaranteed to all eligible families for 15 years, well beyond the length of the experiment, to give the construction community suﬃcient incentive to relocate workers and businesses into these regions.
Supply experiments are extremely ambitious, but the questions they address are crucial.
2. Alternatives To Social Experiments
Less expensive alternatives must be considered before undertaking a social experiment. These include expert (and nonexpert) opinions, theoretical models, surveys, observational studies, quasi-experiments, and, if they exist, appropriate 'natural experiments.' Because these alternatives usually are cheaper, faster, and easier than an experiment, they will be preferred whenever they can provide valid predictions. Even if they cannot produce valid predictions, their consideration is vital to designing an efficient social experiment.
These alternatives all have drawbacks. Expert opinions and theoretical models are no better than the experts or models. While surveys can sample randomly from a target population, they cannot assign treatments randomly, and surveyed subjects rarely would know how they would respond to a hypothetical future program. Nonexperimental data, whether from quasi-experiments or observational studies (both synonyms for nonrandomized studies), might be used to extrapolate from data on existing programs to predict outcomes of untried programs. Such predictions, however, are especially unreliable for major program changes, and self-selection in such data may bias results. Natural experiments might occur if a comparable country or region adopts a policy similar to one under consideration, but they usually do not exist, and if one does occur, results still must be extrapolated to another country or population.
As none of these alternatives has randomly assigned treatments, selection bias will be present and causal inferences will be suspect. Drivers who use seat belts have lower traffic fatality rates, but those rates are confounded with the same drivers being generally safer drivers. Similarly, individuals who choose health insurance plans with less generous coinsurance rates generally consume fewer health services. Is that only because some of them, knowing they are healthier, choose less generous insurance plans? If so, their behavior in a future health insurance program that includes everyone will not be predictable from their past utilization. A health insurance experiment offers an alternative to using observational data for predicting the costs of proposed national health insurance programs.
3. Risks Of Social Experiments
Being costly and time-consuming, social experiments have been used sparingly. They might not provide useful information for many years, risking their irrelevance. Human subjects can refuse to enroll, and enrollees can drop out before completion. If that happens with regularity, randomization fails, and so the experiment fails.
If large experiments become too visible, their subjects may feel that the situation is artiﬁcial and act diﬀerently.
Drastic political, social, legislative, or economic changes can occur during an experiment and invalidate it. However, a control group can be protective in such situations, to the extent that the shock aﬀects all treatments and controls equally. Orr (1999), who as a government economist helped spawn and plan several early public policy experiments, reviews such concerns about social experiments, and he discusses how experiments can protect themselves. Besides the issues mentioned here, he discusses whether and how experiments have aﬀected policy, their credibility, time considerations, communication issues, their generalizability, relevance of their results, the policy environment, and policy links.
4. Design Issues
Those who design social experiments make numerous crucial choices. They must choose treatments that span the policies likely to be considered, without picking so large a range that too little information is provided on the policies that ultimately matter. How many sites are needed? Too many sites are difficult to manage, and for a fixed budget, increased management costs must be paid for by decreased sample sizes. Too few sites restrict generalizability. Should sites be chosen randomly? Probably not if the number of sites must be kept small, because then randomization provides little basis for inference.
More subjects are better, but too many subjects make an experiment infeasibly expensive. Without strong reasons to do otherwise, it helps to keep the percentage of subjects allocated to each treatment the same in every site. Balanced samples, which match the key characteristics of the sample across sites and treatments, have various optimal properties in experimental design, and they enjoy considerable face validity (Morris and Hill 2000).
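A minimal way to hold treatment shares constant across sites is stratified randomization that deals shuffled subjects to treatments within each site. This sketch is an assumption for illustration, not the finite selection model of Morris and Hill (2000), which additionally balances subject characteristics:

```python
# Minimal sketch (an assumption, not the Finite Selection Model): stratified
# randomization that keeps each treatment's share identical within every site.
import random

def assign_by_site(subjects_by_site, treatments, seed=0):
    rng = random.Random(seed)
    assignment = {}
    for site, subjects in subjects_by_site.items():
        pool = list(subjects)
        rng.shuffle(pool)
        # deal shuffled subjects round-robin so shares stay exactly equal
        for i, subject in enumerate(pool):
            assignment[subject] = treatments[i % len(treatments)]
    return assignment

# Two hypothetical sites of 90 subjects each, three arms including a control.
sites = {"A": [f"A{i}" for i in range(90)], "B": [f"B{i}" for i in range(90)]}
plan = assign_by_site(sites, ["control", "t1", "t2"])
counts = {(s, t): sum(1 for subj, tr in plan.items()
                      if subj.startswith(s) and tr == t)
          for s in sites for t in ("control", "t1", "t2")}
print(counts)  # every (site, treatment) cell holds exactly 30 subjects
```

Because the deal is round-robin after a shuffle, every site contributes the same fraction to each arm, so site effects cannot masquerade as treatment effects in cross-site comparisons.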
Enrollment must last long enough for individuals to learn how to take advantage of experimental programs (and so to reach steady state), and perhaps long enough to discount the periods at the beginning and/or end of the experiment when subjects may act differently than they would in a continuing government program. However, overly long enrollments waste time, because early results are better for policy and because, assuming positive correlations within individuals, one individual provides less information per unit time than two independent individuals. These transitory issues may make it necessary to field a longer experiment, or at least to guarantee treatment benefits beyond the actual measurement period.
Decisions about what data to collect, about the design of questionnaires, and about how much and how frequently to interview subjects may be as crucial as other statistical design decisions. Interviewing subjects too often risks over-stimulating them ('Hawthorne effects'), so that subjects become overly aware of participating in an experiment and behave artificially. Respondent burden can cause dropouts, careless responses, and selective nonresponse. Interviewing subjects too infrequently sacrifices potentially important data. Experimenters can gain a better understanding of respondent burden by subjecting themselves to the same interviewing processes that experimental subjects face.
While large-scale social experiments encounter diﬃculties beyond those of smaller experiments, they will have greater ﬁnancial resources to deal with them. Some potential diﬃculties that concerned HIE designers included: creating Hawthorne eﬀects by frequent interviewing; activating participants by obtaining a medical screening examination at enrollment; stimulating health expenditures by having to pay participation incentives to families; and not being sure of the best time horizon (Newhouse et al. 1979). The HIE budget allowed for four (balanced) ‘subexperiments’ within the main experiment to measure these eﬀects. In one subexperiment, half of the HIE subjects were chosen at random for weekly interviews, and half for bi-weekly. Similarly, 60 percent of the HIE subjects were randomly selected for initial screening exams, and the rest not; and 70 percent were randomly assigned for three-year enrollments, and the other 30 percent for ﬁve years. This allowed testing for these eﬀects and, if found, adjusting for them.
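Crossing several independently randomized factors at fixed fractions, as in the quoted percentages, can be sketched as follows. The crossing scheme here is a plain illustration, not the HIE's actual procedure, and the subject count is invented:

```python
# Hedged sketch of crossing subexperiment factors at fixed fractions, loosely
# patterned on the percentages quoted above (50/50 interview frequency,
# 60/40 screening exam, 70/30 enrollment length). Not the HIE's actual method.
import random

def split(subjects, fraction, rng):
    """Randomly mark exactly round(fraction * n) subjects as the 'yes' arm."""
    pool = list(subjects)
    rng.shuffle(pool)
    cut = round(fraction * len(pool))
    return set(pool[:cut])

rng = random.Random(42)
subjects = list(range(1000))           # 1,000 hypothetical subjects
weekly   = split(subjects, 0.50, rng)  # vs. biweekly interviews
screened = split(subjects, 0.60, rng)  # vs. no initial screening exam
three_yr = split(subjects, 0.70, rng)  # vs. five-year enrollment

print(len(weekly), len(screened), len(three_yr))  # prints "500 600 700"
```

Because each factor is drawn independently, every combination of arms appears in (approximately) the product of the fractions, which is what lets each subexperiment effect be estimated, and adjusted for, separately.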
5. Further Reading
Orr (1999) provides an overview and ﬁrst-hand examples of the design and implementation of social experiments. Greenberg and Shroder (1997) summarize 143 completed social experiments in the United States, and 75 others that were ongoing at the time of their publication. Each summary describes the target population, policies tested, experimental design, sites, ﬁndings, sources of further information, and public access to the data. Campbell’s fundamental work in social experimentation is summarized in Campbell and Russo (1999). Boruch (1997) provides reference material on randomization in ﬁeld experiments and Neuberg (1989) discusses anomalies of social control experimentation.
- Aigner D J, Morris C (eds.) 1979 Experimental design in econometrics. Journal of Econometrics 11(1)
- Boruch R F 1975 On common contentions about randomized ﬁeld experiments. In: Boruch R F, Riecken H W (eds.) Experimental Tests of Public Policy. Westview, Boulder, CO, pp. 108–45
- Boruch R F 1997 Randomized Experiments for Planning and Evaluation: A Practical Guide. Sage, Thousand Oaks, CA
- Bradbury K, Downs A (eds.) 1981 Do Housing Allowances Work? Brookings Institution, Washington, DC
- Campbell D T, Russo M J 1999 Social Experimentation. Sage, Thousand Oaks, CA
- Cave G, Doolittle F, Bos H, Toussaint C 1993 JOBSTART: Final Report on a Program for School Dropouts. Manpower Demonstration Research Corporation, New York
- Ferber R, Hirsch W Z 1982 Social Experimentation and Economic Policy. Cambridge University Press, Cambridge, UK
- Fienberg S E, Singer B, Tanur J M 1985 Large-scale social experimentation in the United States. In: A Celebration of Statistics: The ISI Centenary Volume. Springer-Verlag, New York, pp. 287–326
- Greenberg D, Shroder M 1997 The Digest of Social Experiments, 2nd edn. Urban Institute Press, Washington, DC
- Hausman J, Wise D (eds.) 1985 Social Experimentation. University of Chicago Press, Chicago
- Lowry I S (ed.) 1983 Experimenting with Housing Allowances: The Final Report of the Housing Assistance Supply Experiment. Oelgeschlager, Gunn & Hain, Cambridge, MA
- Morris C N 1979 A ﬁnite selection model for experimental design of the health insurance study. Journal of Econometrics 11: 43–61
- Morris C N, Hill J L 2000 The Health Insurance Experiment: design using the Finite Selection Model. In: Public Policy and Statistics: Case Studies from RAND. Springer, New York
- Neuberg L G 1989 Conceptual Anomalies in Economics and Statistics: Lessons from the Social Experiment. Cambridge University Press, Cambridge, UK
- Newhouse J P 1993 Free for All? Lessons from the RAND Health Insurance Experiments. Harvard University Press, Cambridge, MA
- Newhouse J P, Marquis K H, Morris C N, Phelps C E, Rogers W H 1979 Measurement issues in the second generation of social experiments: The health insurance study. Journal of Econometrics 11: 117–29
- Orr L L 1999 Social Experiments: Evaluating Public Programs with Experimental Methods. Sage, Thousand Oaks, CA
- Ross H 1966 A Proposal for a Demonstration of New Techniques in Income Maintenance (mimeo). Data Center Archives, Institute for Research on Poverty, University of Wisconsin, Madison, WI