Experiments in Political Science Research Paper


Outline

I. Introduction

II. Theory

A. The Experimental Method

1. Treatment and Control Groups

2. Random Assignment

3. Comparing Different Interventions

4. Understanding the Placebo Effect

B. Randomized Experiments Versus Survey Data

C. Laboratory Experiments Versus Field Experiments

1. Studying Challenging Topics Using Laboratory Experiments

2. The Limitations of Lab-Based Experiments

3. Field Experiments: The Real World as Political Laboratory

D. Natural Experiments: Exploiting As-If-Random Conditions

III. Applications

A. Lab Experiments on Negative Advertising

B. Field Experiments on Voter Mobilization

C. Survey-Based Experiments on Racial Attitudes

D. Other Uses

E. Limitations and the Need for Replication

IV. Policy Implications

V. Future Directions

VI. Conclusion

I. Introduction

Experimental research experienced a resurgence in the 21st century. This resurgence was led by a group of scholars at Yale University who persuasively argued that randomized intervention into real-world settings should “occupy a central place in political science” (Green & Gerber, 2002, p. 808). Committed to the belief that the value of survey research had been overstated and the value of field experiments was underappreciated, they set out to explore and promote the “untapped potential of field experiments” (p. 808). Working through Yale’s Institution for Social and Policy Studies, Green and Gerber set up a summer workshop on field experiments, inviting social scientists across the nation (and world) to join them in this shared endeavor. Meanwhile, they trained their graduate students to conduct field experiments, inspiring a series of doctoral dissertations and academic articles using field experimentation. This research paper discusses the experimental method, compares the experimental method to survey-based research, and stresses the importance of random assignment of experimental treatments. The paper also explains the difference between laboratory experiments and field experiments, highlights the wide range of applications for experimental studies, and briefly discusses the policy implications and future directions of experimental research in political science.

II. Theory

Most research papers on this site discuss substantive, topic-based areas of political science. These subfields are driven by assumptions, or theories, about the way the political world works. In contrast, this research paper focuses on a specific method for studying political phenomena: the experimental method. This method is designed to test substantive theories about the empirical world. Experiments are based on the assumption that political scientists can investigate the political world by designing specific interventions that change political behavior or policy outcomes in measurable ways.

A. The Experimental Method

An experiment is a method used to study cause and effect. The point is to examine the relationship between two or more variables. A variable refers to a measurable attribute (e.g., age, sex, educational attainment, or partisanship) that varies over time or among individuals. Experiments involve the deliberate manipulation of one variable while trying to hold all other variables constant. By changing one variable while measuring another, the experimental method allows researchers to draw conclusions about cause and effect with far more certainty than any nonexperimental method. The variable manipulated by the researcher is called the independent variable, while the dependent variable is the outcome the researcher measures. The logic is clear: If the independent variable is the only thing that is changed, then the independent variable is responsible for any change in the dependent variable. All other variables that might affect the results are called confounding variables. By carefully assigning subjects to treatment and control groups, researchers can ensure that confounding variables are evenly distributed among participants in both groups so that the effect of the experimental treatment itself can be isolated and measured.

1. Treatment and Control Groups

To conduct an experiment, the researcher divides research subjects (sometimes called participants) into a control group and a treatment group. The control group receives no treatment, while the treatment group receives a specific intervention. Suppose a political scientist wants to investigate whether calling people and reminding them to vote will actually increase the likelihood that they will cast a vote on election day. The phone call is the independent variable. The researcher wishes to determine whether phone calls can increase voter turnout. The dependent variable is voter turnout. Voter turnout records can be obtained from the county clerk or secretary of state. Turnout is the dependent variable because the researcher’s hypothesis is that performance on this variable (level of turnout) depends on the independent variable (whether the person received a reminder phone call). To test the effectiveness of the phone calls, members of the control group do not receive a reminder call before election day, while members of the treatment group receive the reminder call. The researcher expects that people in the treatment group will, on average, be more likely to vote than people in the control group. The experiment allows this hypothesis to be tested empirically.

Confounding variables in the example given might include age, sex, partisanship, educational attainment, political interest, and past voter history. Each of these factors is correlated with voter turnout. Older people, women, strong partisans, and educated citizens with an interest in politics are more likely to vote than their counterparts. The best predictor of whether people will vote is their past behavior. People who have voted in the past are most likely to vote in the future. These factors may be more important than the reminder call in determining whether an individual will vote on election day. In a laboratory setting, researchers often match the treatment and control groups according to relevant characteristics to reduce the effects of confounding variables. An even better alternative is to randomly assign participants to the treatment or control group.

2. Random Assignment

Random assignment ensures that every subject has an equal chance of being placed in the treatment group. As a check, the researcher can compare the characteristics of the treatment and control groups to assure readers that the groups really are similar along all relevant dimensions. Rather than drawing names out of a hat or flipping a coin, political scientists in the 21st century use computers to assign subjects to a treatment or control group. Using a random number generator to assign half of the participants to a treatment group and the other half to a control group makes it very unlikely that the two groups differ systematically in their politically relevant characteristics. This technique is particularly useful when the number of participants is large. The larger the number of participants, the less likely it is that members of a treatment group share some unidentified behavior-changing characteristic that could affect their performance on the dependent variable.
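
The mechanics are simple enough to sketch in a few lines of code. The following Python fragment, offered as a minimal illustration rather than any particular study's procedure, shuffles an invented subject list with a fixed seed, splits it in half, and runs the kind of balance check described above; the subject list and its age field are placeholders, not real data.

```python
import random

# Hypothetical subject list; in a real study this would come from a
# public voter file. The id and age fields are illustrative only.
random.seed(42)  # fixed seed so the assignment is reproducible
subjects = [{"id": i, "age": random.randint(18, 90)} for i in range(1000)]

# Shuffle a copy of the list, then split it in half: the first half
# becomes the treatment group, the second half the control group.
shuffled = random.sample(subjects, k=len(subjects))
half = len(shuffled) // 2
treatment, control = shuffled[:half], shuffled[half:]

def mean_age(group):
    """Average a pretreatment characteristic for a balance check."""
    return sum(s["age"] for s in group) / len(group)

# With random assignment, average age (or any other pretreatment
# characteristic) should be nearly identical across the two groups.
print(f"treatment mean age: {mean_age(treatment):.1f}")
print(f"control mean age:   {mean_age(control):.1f}")
```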

There is an added benefit of random assignment for experiments conducted outside the laboratory. Random assignment assures that other stimuli that might affect participant behavior (e.g., candidate mailings, campaign commercials, and television stories) will reach both the treatment and control groups. This means any effects of such outside stimuli will shape the behavior of both the treatment and control groups in roughly equal (though immeasurable) ways. In contrast, the random assignment of the researcher’s intervention assures that only a representative sample of participants will receive the treatment of interest. In this way, researchers can isolate the specific influence of the intervention they are studying.

3. Comparing Different Interventions

There are many variations of this basic experimental method. The most common are comparisons of different treatments and the use of a placebo group. To test multiple treatments, a researcher simply creates additional treatment groups. The researcher in the earlier example may want to know whether phone calls or door hangers are more effective in getting people to vote on election day. The researcher might assign one third of the registered voters in a precinct to the control group, another third to Treatment Group A, and the final third to Treatment Group B. The control group would not receive any reminders. Treatment Group A would receive a reminder phone call encouraging them to vote. Treatment Group B would receive a door hanger reminding them to vote on election day. As long as the researcher uses random assignment to place subjects in one of the three groups, this study will effectively compare the relative impact of making a phone call versus leaving a written message on a prospective voter’s door. By comparing the turnout rates of subjects in each of the three groups, the researcher can determine which approach is most effective at getting people to the polls. Similarly, an experiment might compare the relative effectiveness of two different phone call scripts or two different door hangers to see which message is most effective in getting people to participate in the electoral process.
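
As a sketch of how such a three-arm design might be set up, the Python below randomly splits a hypothetical precinct into a control group and two treatment groups and compares turnout rates across them; the voter list and the simulated outcomes are stand-ins for real registration and turnout records.

```python
import random

random.seed(7)
voters = list(range(9000))  # hypothetical precinct of registered voters
random.shuffle(voters)      # random order makes the three-way split random

# Split the shuffled list into thirds: control, Treatment Group A
# (reminder phone call), and Treatment Group B (door hanger).
third = len(voters) // 3
groups = {
    "control":     voters[:third],
    "phone call":  voters[third:2 * third],
    "door hanger": voters[2 * third:],
}

# After the election, each subject's turnout would be looked up in
# official records; random placeholder outcomes stand in for that here.
voted = {v: random.random() < 0.40 for v in voters}

for name, members in groups.items():
    rate = sum(voted[v] for v in members) / len(members)
    print(f"{name:11s} turnout: {rate:.1%}")
```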

4. Understanding the Placebo Effect

Sometimes political scientists are concerned that they are measuring the effect of intervening in people’s lives, rather than the effect of a specific treatment. To address this concern, one could use a placebo group. To understand this approach, it is helpful to think about the field of medicine.

Medical studies frequently employ the use of placebo groups to disentangle the psychological effects of receiving treatment from the actual physiological effects of the treatment itself. A placebo is a phony medical intervention that leads the recipient to believe that his or her medical condition may be improved. One common placebo treatment is an inert sugar pill. Subjects in a clinical trial may be divided into three groups: a control group, a treatment group, and a placebo group. The control group receives no medicine. The treatment group receives the medicine being tested. The placebo group receives the (medically ineffective) sugar pill. Subjects do not know whether they have received the new wonder drug or the inert sugar pill. The placebo effect is well documented. People frequently report feeling better after treatment, even if they receive the placebo.

Although political scientists are less likely to use placebo groups than their colleagues in the field of medicine, some recent studies demonstrate the effective use of placebo groups in political research. One recent study used a placebo-controlled experiment targeting households with two registered voters (Nickerson, 2008). Residents who answered the door were exposed to either encouragement to get out and vote (treatment) or a recycling message (placebo). The placebo treatment was used to address the possibility that people who answer the door and talk to strangers also may have a higher propensity to vote. This could be true because they are more civic-minded or simply because they are alive, mobile, and still living in the voting precinct. The fact that the treatment group voted at higher rates than the control and placebo groups increases confidence that the get-out-the-vote treatment was effective.
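
To make the logic concrete, the short calculation below (in Python, with invented turnout rates rather than Nickerson's actual results) shows why the placebo comparison is informative: both groups answered the door, so the difference between them can be attributed to the message itself.

```python
# Invented turnout rates among residents who answered the door; these
# are placeholders, not figures from Nickerson (2008).
turnout_gotv_message = 0.48     # treatment: get-out-the-vote message
turnout_recycle_message = 0.42  # placebo: recycling message

# Both groups opened the door and spoke to a canvasser, so any higher
# propensity to vote among door-answerers is present in each group.
# The remaining difference isolates the effect of the voting message.
message_effect = turnout_gotv_message - turnout_recycle_message
print(f"estimated effect of the GOTV message: {message_effect:+.1%}")
```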

B. Randomized Experiments Versus Survey Data

The vast majority of work in political science relies on nonexperimental data. Since the early 1950s, surveys have been the mainstay of political behavior research. Earlier political scientists conducted some controlled experiments (e.g., Gosnell, 1927; Hartman, 1936–1937), but their work was seldom replicated. In the 1950s, as the principles of probability sampling and survey research became better known, political scientists sought to offer complete explanations for political phenomena. Surveys seemed ideally suited to this task, allowing researchers to take into account a wide range of demographic, economic, and social-psychological characteristics that shape political attitudes and behavior. In addition, surveys seemed better able to address big-picture questions of interest to political scientists, including topics like political culture, party identification, and support for the political system (Green & Gerber, 2002).

Surveys offer a relatively inexpensive way to study political attitudes and behavior from a nationally representative sample. A sample of only 1,000 Americans can provide a snapshot of public opinion that is highly accurate. With only 1,000 respondents, researchers can be 95% sure that they have captured public opinion with a mere ±3% margin of error. For example, if 57% of all survey respondents say that they approve of the job the president is doing in office, one can be 95% sure that the president’s true level of support among citizens is between 54% and 60%.
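
The margin of error cited here follows from the standard formula for a sample proportion, z * sqrt(p(1 - p) / n). A quick Python check using the most conservative assumption (p = 0.5) reproduces the roughly ±3% figure for a sample of 1,000.

```python
import math

# 95% margin of error for a sample proportion: z * sqrt(p * (1 - p) / n).
# p = 0.5 gives the widest (most conservative) margin.
n, p, z = 1000, 0.5, 1.96
margin = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: ±{margin:.1%}")  # about ±3.1%

# Applied to the example from the text: 57% presidential approval.
approve = 0.57
print(f"95% interval: {approve - margin:.1%} to {approve + margin:.1%}")
```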

Although survey data provide social scientists with valuable research opportunities, this approach has several inherent drawbacks. Consider the example of voter mobilization. A survey conducted using a randomly selected sample can provide a very good estimate of the percentage of people who voted, or at least the number of people nationwide who are likely to report voting, on election day. The problem is that this behavior is self-reported. By comparing survey data on voter turnout to official election records, political scientists have discovered that people overreport voting. People tell pollsters that they voted when they in fact did not. This is also true of other socially desirable behaviors. When people know that they should do something, they are more likely to report that they did it whether or not this report is accurate.

Voter turnout rates can be obtained through official election records. Political scientists wish to know not only how many people went to the polls on election day, but also why they voted while others failed to do so. To determine the most effective way to mobilize people to vote, survey researchers ask citizens to recall having been contacted by political campaigns. Unfortunately, the survey researcher must rely on respondents’ self-reports that the contacts actually occurred. If voters are more likely than nonvoters to report campaign contact when none occurred, the analysis might overestimate the effect of contact on voter turnout. On the other hand, if nonvoters are more likely than voters to incorrectly report campaign contact, the analysis will underestimate the effectiveness of campaign contact. In addition to the problem of potential reporting bias, the nature of the contact between the campaign and the voter is usually unclear. It is difficult to distinguish between face-to-face contacts and phone contacts or between single contacts and multiple contacts. It is also difficult to determine what type of group contacted the voter or what message was used. Survey questions are often not specific enough to detect these differences, and voter recollection is limited.

Even if these campaign contacts were accurately recorded, one could not be sure the contacts really increased turnout. It is possible that political campaigns targeted likely voters. Campaigns use state voter files, including voter history, to select voters to target. The correlation between contacts and turnout may simply reflect the fact that campaigns targeted likely voters. Experiments provide a more precise tool to isolate cause and effect.

Experiments isolate cause and effect by determining how a change in one variable causes change in another variable. Unlike survey researchers, experimenters know precisely what treatments each subject received. Most often, experimental research studies also allow the researcher to observe the actual outcome of the treatment. Neither the treatment nor the outcome is self-reported. In the case of a voter mobilization field experiment, for example, the researcher randomly assigns subjects to the treatment or control group and then delivers specific treatments (phone calls, face-to-face visits, etc.) to each subject, keeping careful records of who received the treatment. After election day, official voter records are examined to compare the voter turnout of the treatment group to that of the control group. Because of random assignment, the researcher knows that mobilization messengers did not target high-propensity or low-propensity voters. Because the study relies on actual records, rather than self-reports, the researcher need not worry about reporting bias. By comparing the turnout of the treatment and control groups, researchers can determine the precise effect of specific mobilization tactics.
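
A minimal analysis of such a field experiment can be sketched in a few lines. The counts below are invented; the comparison is a simple difference in turnout proportions, with a two-proportion z-test shown as one common way to ask whether the difference exceeds what chance alone would produce.

```python
import math

# Hypothetical counts from official voter records after the election.
n_treat, voted_treat = 5000, 2150   # subjects assigned reminder calls
n_ctrl, voted_ctrl = 5000, 1990     # subjects assigned no call

p_treat = voted_treat / n_treat
p_ctrl = voted_ctrl / n_ctrl
effect = p_treat - p_ctrl  # estimated effect of the mobilization tactic

# Two-proportion z-test: is the turnout difference larger than chance?
p_pool = (voted_treat + voted_ctrl) / (n_treat + n_ctrl)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl))
z = effect / se

print(f"treatment effect: {effect:+.1%} (z = {z:.2f})")
```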

C. Laboratory Experiments Versus Field Experiments

As political scientists recognized the limits of survey-based research, the late 20th century ushered in a renewed interest in experiments. Researchers began to modify surveys to include embedded experiments. Survey-based experiments were conducted by randomly assigning respondents to receive different versions of the same question to study how question content and wording shape people’s answers to questions on politically sensitive topics such as racial attitudes (Hurwitz & Peffley, 1998; Sniderman & Grob, 1996). Using split samples is one way to avoid, or measure, the reporting bias that can undermine survey-based research. Researchers also designed laboratory experiments to study such topics as media exposure (Ansolabehere & Iyengar, 1995; Iyengar & Kinder, 1987; Iyengar, Peters, & Kinder, 1982), collective action (Dawes, Orbell, Simmons, & van de Kragt, 1986), and legislative bargaining (McKelvey & Ordeshook, 1990). Using human behavior laboratories, social scientists can set up lab experiments that are similar to those conducted by their colleagues in the physical sciences. The researcher creates equivalent groups through matching or randomization and then follows one of three basic protocols: (1) administering a treatment to one group but not to the other, (2) administering the treatment to one group and a placebo treatment to the other group, or (3) administering different treatments to different groups.

1. Studying Challenging Topics Using Laboratory Experiments

Laboratory experiments can be useful in detecting prejudice, stereotyping, and other forms of bias that people might not readily admit to a person administering a survey. Such experiments are also useful at isolating specific variables that would be difficult, if not impossible, to isolate in the real world. For example, Sapiro (1981–1982) conducted an experiment on gender stereotypes. Interested in difficult-to-detect, perhaps nonconscious, forms of sexism, Sapiro showed students campaign speeches for hypothetical candidates for the U.S. House of Representatives. The text of the speech was taken from a real speech by a U.S. senator, selected because it provided little information about policy proposals, partisanship, or political ideology. The text for each candidate was identical, except that one was labeled “Speech by John Leeds” while the other version was labeled “Speech by Joan Leeds.” The students used gender stereotypes to determine how competent the candidate would be in solving specific kinds of problems. Students gave Joan higher competence ratings than John in specific policy areas, including improving our educational system, maintaining honesty and integrity in government, and dealing with health problems. None of these issues were directly mentioned in the speech. In contrast, students gave higher competence scores to John than to Joan when asked to rate the candidate’s competence in the stereotypically male domains of dealing with military issues and making decisions on farm issues. Students were also less likely to think that female candidate Joan Leeds would win the election. Because the researcher so carefully controlled the experiment, offering students identical information except candidate name, the importance of candidate gender in shaping voters’ perceptions is clearly demonstrated. Simply asking voters to answer questions about real-life male and female candidates would not prove that gender stereotypes are driving people’s perceptions or responses. Real-life candidates have different personalities, ideologies, writing styles, speech patterns, and campaign strategies, all of which shape voter perceptions of the candidates.

2. The Limitations of Lab-Based Experiments

The major question about lab-based experiments is whether they provide findings that will apply to the real world outside the laboratory. The results of laboratory experiments may not always be generalizable outside the lab. One potential problem is the requirement that all subjects participating in a lab-based experiment must provide informed consent. Informed consent means that subjects are aware that they are being studied. There is a real concern that subjects may act differently because they know they are being watched. Researchers try to limit the effects of this potential problem by obscuring the specific research question and variable of interest, while accurately highlighting potential benefits and risks of participation. For example, researchers studying the effects of negative campaign ads might embed these ads within a newscast, telling subjects only that they are looking at selective perceptions of news programs (Ansolabehere & Iyengar, 1995). Even if the exact nature of the experiment is successfully obscured, the fact that so many political scientists use undergraduate students as research subjects reduces the likelihood that the findings are applicable to a full range of people outside the lab. It is unlikely that college students are representative of the population as a whole. Recognizing this limitation, some scholars have begun taking their experimental labs on the road. For example, Iyengar selected shopping malls as a laboratory for a series of experiments designed to test the effects of negative advertisements on respondents’ knowledge, attitudes, and likelihood of voting. In this way, Iyengar was able to capitalize on the benefits of a controlled experiment while broadening his pool of subjects beyond the college campus (Ansolabehere & Iyengar, 1995).

Even if a more representative pool of participants is identified, there is still reason to doubt the generalizability of lab-based experiments. People make decisions based on a range of factors, including self-interest, rationality, and political ideology. However, human behavior is also shaped by the degree to which people believe their decisions will be scrutinized by others, the particular context in which a decision is made, and the manner in which participants are selected. If these circumstances do not reflect the real-world environment in which political decisions are made, the results may not be generalizable outside the lab. Despite these shortcomings, laboratory experiments can provide an important, even critical, first step to understanding people’s political decision-making processes. They can also produce important findings that can later be tested, when possible, outside the laboratory.

3. Field Experiments: The Real World as Political Laboratory

Social scientists who wish to test hypotheses in the real world often turn to field experiments. Unlike laboratory experiments, field experiments examine an intervention or treatment in the real world, in naturally occurring environments. To maximize the realistic nature of the experiment, social scientists often use subjects who are unaware that they are participating in an experiment. Government rules on the protection of human subjects require that subjects sign a consent form (or receive a study information sheet) unless the research design relies solely on interactions that might take place anyway in the absence of the study and on public data available without the subjects’ consent. As with laboratory experiments, researchers must also convince an institutional review board that the experiment will not in any way harm subjects and that the identity and confidentiality of all participants will be protected. Voter mobilization experiments meet all of the conditions for waiving informed consent. First, researchers are doing nothing that campaigns and political groups do not already do in an election season. Second, lists of registered voters are collected from public voter files, and voting behavior is determined using these same public files. Finally, review boards are unlikely to argue that asking somebody to vote is likely to cause harm.

D. Natural Experiments: Exploiting As-If-Random Conditions

Sometimes political scientists conduct experiments without using random assignment. These studies, called quasi-experiments or natural experiments, are conducted when real-life circumstances approximate the conditions of a randomized experiment. With quasi-experiments, researchers observe differences between groups without assigning subjects to treatment and control groups or manipulating the treatment variable. Instead, researchers take advantage of a predetermined change, such as a new law or policy, designed to alter public behavior. For example, researchers might study the effects of a new gun control law by comparing homicide rates before and after implementation (Bogus, 1992). Studies like these that measure changes in the entire population reduce the problem of an unrepresentative treatment group by eliminating the possibility that people self-selected the treatment. Another approach is to select two different cities with comparable population sizes, education levels, racial and ethnic diversity, and pre-gun-ban crime rates and compare the homicide rates and gun-related crime rates after a ban was enacted in one city but not the other (Bogus, 1992). The key to making a convincing case would be to demonstrate that the two cities are, in fact, similar with regard to all characteristics that might affect the crime and homicide rates. Ideally, they would also have identical crime and homicide rates before the ban was put into place. The goal with natural experiments is to establish that the treatment and control groups will perform as if they were randomly selected.

Unfortunately, many social and policy changes do not meet this as-if-random requirement. For example, comparing the performance of students at a new magnet school to the performance of other public school students would not provide a good measure of the success of the new school in promoting student achievement. The fact that students and parents self-select into the magnet school may lead to higher performance among magnet school children. Any differences in performance between children at the regular public school and those at the new magnet school may be due to selection bias. The magnet school may have attracted high-achieving students with involved parents. Random assignment can solve this problem of selection bias. A lottery system for admission would be desirable from a research point of view but is not always practical.

Quasi-experiments are particularly useful when it is impossible for political scientists to control the variables of interest. Although political boundaries provide a popular basis for natural experiments, many other socially occurring phenomena may present the possibility for this kind of research design. As a first step in a government-funded series of experiments, Brady and McNulty (2004) used a natural experiment to explore how the costs of voting affect turnout. They studied California’s special gubernatorial recall election of 2003, in which Arnold Schwarzenegger became governor. The elections supervisor in Los Angeles County consolidated the number of district voting precincts from 5,231 to 1,885. For some voters, the distance to their polling place was increased, while for others, it remained the same. The group that had to drive farther to get to the polls became the treatment group for a natural experiment studying how the costs of voting affect turnout. The key question is whether assignment of voters to polling places in the 2003 election was as-if-random with respect to characteristics that affect people’s likelihood of voting. Did the county elections supervisor close some polling places and not others in ways that were correlated with voters’ predisposition to vote? Brady and McNulty find some evidence for a lack of pretreatment equivalence between groups of voters who had their polling place changed (i.e., the treatment group) and those who did not. This threatens the validity of their findings. Fortunately, in this case, the pretreatment differences between the groups are small, relative to the reduction in turnout associated with increased voting costs. This indicates a strong likelihood that forcing people to drive farther to get to a polling place reduces turnout on election day. Scholars have begun to evaluate the plausibility of various natural experiments in the social sciences based on the degree to which they meet the as-if-random requirement (Dunning, 2008).
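
The kind of pretreatment equivalence check at issue can be illustrated with a toy calculation; the rates below are invented placeholders, not Brady and McNulty's estimates. The idea is to compare the groups on turnout before the polling place changes, then compare the gap that appears afterward.

```python
# Hypothetical pretreatment data: turnout in a prior election for voters
# whose polling place later moved (treatment) and those whose did not.
past_turnout_moved = 0.531       # invented placeholder rates
past_turnout_unmoved = 0.548
observed_turnout_moved = 0.498   # turnout after the consolidation
observed_turnout_unmoved = 0.545

# Pretreatment gap: how different were the groups before the change?
pre_gap = past_turnout_unmoved - past_turnout_moved
# Posttreatment gap: turnout difference after polling places moved.
post_gap = observed_turnout_unmoved - observed_turnout_moved

# If the as-if-random assumption held exactly, pre_gap would be zero; a
# pre_gap that is small relative to post_gap supports a real effect of
# increased voting costs rather than preexisting group differences.
print(f"pretreatment gap:  {pre_gap:+.1%}")
print(f"posttreatment gap: {post_gap:+.1%}")
```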

III. Applications

Students of politics are filled with questions about why and how politics works. Experiments conducted in the lab, in the field, or embedded within surveys can further our understanding of the political world. Applications are numerous, but three areas of investigation have been particularly likely to generate experimental research: negative advertising, voter mobilization, and racial attitudes.

A. Lab Experiments on Negative Advertising

Scholars have long debated the effects of negative political advertising. Conventional wisdom holds that people dislike the ads but that they work. Most of this scholarship went unnoticed by the media and political consultants until Ansolabehere and Iyengar (1995) published Going Negative: How Attack Ads Shrink and Polarize the Electorate. Based on lab experiments and observations of U.S. political campaigns, the authors argue that negative advertising depresses voter turnout and that political consultants intentionally use ads for this purpose. The authors suggest that negative ads work better for Republicans than for Democrats and better for men than for women and also that negative ads work better than positive ones. They caution that as independent voters are driven away by negativity, the voting public is reduced to its partisan extremes. A 1996 study challenged these conclusions, finding that negative ads can promote political participation, especially among uninformed voters (Wattenberg & Brians, 1996). Using survey data, the authors found that citizens who report being exposed to negative ads are more likely to vote than those who do not comment on such ads. They argue that Ansolabehere and Iyengar’s findings do not hold outside the lab. Given this high-profile dispute, several political scientists conducted a review of the literature on the topic and ultimately concluded that there is little evidence that negative advertisements are especially disliked, more effective than positive ads, or detrimental to participation in the electoral process (Lau, Sigelman, Heldman, & Babbitt, 1999). Political consultants remained convinced that negative advertising works, leading to a flurry of 21st-century experiments testing the effects of negative ads in a variety of forms and contexts.

B. Field Experiments on Voter Mobilization

Voter mobilization studies have been the subject of a variety of natural and randomized field experiments. The ability to test specific mobilization techniques, to accurately record the treatments received, and to bypass the problem of self-reporting using official voting records makes the experimental method ideal for this line of research. Scholars and practitioners of the art of campaigning have devoted significant attention to randomized field experimentation since Gerber and Green’s (2000) article reporting on the effectiveness of different voter mobilization techniques. The success of this enterprise is documented in the 2004 release (and 2008 second edition) of Green and Gerber’s (2008) instant hit, Get Out the Vote! How to Increase Voter Turnout. Targeting academic researchers and political practitioners, the book summarizes the results of dozens of voter mobilization field experiments conducted and published since the turn of the century. Scholars working in this area were also invited to publish their completed studies in a special edition of the Annals of the American Academy of Political and Social Science, titled “The Science of Voter Mobilization” (Green & Gerber, 2005). The journal featured a collection of articles by political scientists using randomized field experiments to test the effectiveness of different voter mobilization methods, messages, and canvassers in a wide range of contexts.

C. Survey-Based Experiments on Racial Attitudes

Racial attitudes are difficult to study because of most people’s reluctance to admit prejudice. Survey-based experiments have proven an unobtrusive way to measure racial attitudes and the effects of these attitudes on popular support for various government policies. Survey-based experiments challenge previous survey data that suggested whites in the South resemble the rest of the country in their racial attitudes (Kuklinski, Cobb, & Gilens, 1997). The authors of these experiments argue that previous survey results were contaminated by social desirability. By randomly assigning respondents to different forms of the question, cuing or not cuing people to think about race, political scientists can get around this problem. For example, a study of the effects of racial attitudes might ask people’s impressions of a welfare recipient described as either a white or a black woman in her early 30s with a 10-year-old child who has been on welfare for the past year. How likely is it that she will have more children to get a bigger welfare check? How likely is it that she will look for a job? Because subjects were randomly assigned to receive either the black or the white version of the (otherwise identical) question, researchers can measure the effect of racial attitudes without directly asking the respondent to compare whites with blacks (Gilens, 1999). Based on these experiments, political scientists have argued that racial attitudes dominate public perceptions of welfare, with black stereotypes predicting much of the opposition to welfare programs (Gilens, 1999). In contrast, similar experiments suggest that public opposition to affirmative action is driven less by racial prejudice than commonly believed (Kuklinski et al., 1997). Political scientists continue to use experiments to investigate the effects of racial attitudes on political identity, attitudes, and behavior.

D. Other Uses

Experimental methodology has broad application to questions about the effectiveness of a wide range of social interventions. Although political scientists initially viewed the random assignment of social interventions in real-world settings (outside medicine) as impractical, the use of field experiments is gaining popularity and encouraging collaborations among scholars from many disciplines interested in political questions. Sage Publications published a special issue of the American Behavioral Scientist titled “Field Experiments in the Political Sciences” (Green & Gerber, 2004). This work crosses the disciplines of political science, social psychology, social work, criminology, and public policy. Topics include the relationship between campaign spending and electoral victory, how to frame messages to get patients to seek preventative care, how to evaluate the effectiveness of social welfare program reforms, the difficulty of evaluating crime prevention programs, and the effectiveness of school voucher programs on academic performance.

E. Limitations and the Need for Replication

Although experiments excel at testing causal relationships, they are not without limitations. There are several common criticisms that have limited the use of randomized experiments in the discipline. Each criticism reflects valid concerns, but critics often overstate the extent of these limitations. Large experiments can require a great deal of time and money, but other forms of research are also time-, labor-, and (sometimes) capital-intensive. Moreover, many of the costs of field experiments can be covered by political organizations, agencies, and foundations that hire academics to evaluate their efforts.

Some critics correctly argue that experiments may produce contradictory findings. Whether because of sampling error or differences in experimental design, experiments may produce incompatible findings. However, this is true of studies based on a wide range of data collection methods. Clear, detailed descriptions of the experimental protocol and further experimentation and replication can help resolve these issues.

Other critics argue that experimental research frequently fails to offer a clear explanation for why a specific intervention produced a specific effect. This shortcoming can also be solved through additional experimentation. Researchers can vary the stimulus to determine which aspect of the treatment is producing the demonstrated effect. Researchers can also measure variables that are thought to affect the relationship between the intervention and the dependent variable.

In addition, critics argue that experimental results might not be applicable to the real world of politics. Replication in different contexts, including field experiments outside the lab, can boost confidence in the external validity of experimental findings. Each of these criticisms, while valid, points to the need for additional experimentation.

Finally, randomized experiments are sometimes dismissed as impractical, either because the subjects of investigation are too broad and complex or because it is thought that key political actors cannot (or will not) randomly assign groups to different types of interventions. It is true that political scientists cannot randomly assign states or countries to different forms of government, legal systems, or public policies. Similarly, researchers cannot randomly assign people or nations to different political cultures, economic circumstances, or global positions. Although experiments on these topics are likely to be limited to natural experiments and rare circumstances, the discipline has not yet tested the limits of randomized experiments. The causes of economic development, democracy, socialism, religious fundamentalism, or revolution may be too complex to be reduced to specific, and measurable, causal relationships. On the other hand, studies of basic aspects of each phenomenon would be instructive.

When it comes to narrower research questions of interest to political candidates and organizations, it is important to note that “experimentation is possible whenever decision makers face constrained resources and are indifferent between competing ways of allocating them” (Green & Gerber, 2002, p. 821). Unable to reach every voter or donor, organizations could call as many as possible using a list of names ordered using random assignment. Those they do not have time to call become the control group. They can continue their work as usual while allowing a researcher to assess the effectiveness of their efforts in ways that help future campaigns.
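
One hedged sketch of this design in Python: shuffle the contact list once, call from the top until resources run out, and treat the uncalled remainder as the control group. The list and the number of completed calls are hypothetical placeholders.

```python
import random

# Hypothetical contact list; the names are illustrative placeholders.
contact_list = [f"voter_{i}" for i in range(20000)]

random.seed(2024)
random.shuffle(contact_list)  # random order determines who gets called

# Volunteers call from the top of the shuffled list until time or money
# runs out; those reached first form the treatment group, and everyone
# the campaign never gets to becomes the control group.
calls_completed = 7500  # however many calls the campaign managed to make
treatment = contact_list[:calls_completed]
control = contact_list[calls_completed:]

print(f"treatment group: {len(treatment)}, control group: {len(control)}")
```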

IV. Policy Implications

Political scientists can work with any policymaker who has discretionary resources and an interest in causal relationships. Any policy that will be phased in can be phased in using random assignment to create treatment and control groups of individuals, blocks, or cities. The ability to test the effectiveness of specific social interventions has major advantages for policymakers. Just as the Food and Drug Administration relies on randomized experiments (drug trials) to test the safety of pharmaceutical drugs before approving them for consumer use, public policy makers can look to randomized field experiments to provide similar tests of the effectiveness of various social or legal interventions. Researchers could use experiments to assess the effectiveness of efforts to reduce prejudice, reduce crime, increase conviction rates, rehabilitate convicts, promote recycling, recruit civil servants, recruit military personnel, reduce health care costs, improve health care quality, promote political participation, and decrease dependence on social services. Education, health care, criminal justice, welfare, national security, and the environment are a few of the policy areas that might benefit from additional scientific experimentation. Experiments can promote evidence-based public policy decisions. Experiments can also help elected officials determine the best way to educate ordinary citizens about public policy and to get them more involved in the policy-making process.

V. Future Directions

For most of the 20th century, political scientists rejected the notion that politics could be studied experimentally (Lowell, 1910). Experiments remained rare until an increased interest in causality, new computer technology, and innovative scholars pushed experimentation forward (Druckman, Green, Kuklinski, & Lupia, 2006). In the future, political scientists will increasingly move laboratory experiments from the college classroom to more natural settings to include a wider range of subjects. They will also increasingly test laboratory findings out in the field. For example, the work on negative campaigning has moved into the field, with randomly selected voters or zip codes receiving negative campaign mail (e.g., Niven, 2006) or negative radio spots (Green & Vavreck, 2008). Experimental work will also become increasingly sophisticated. For example, studies of voter mobilization have gone beyond testing the relative impact of leaflets, door knocks, and phone calls to testing the effectiveness of different messages, messengers, and timing on different kinds of voters (see Green & Gerber, 2005, for a collection of research by scholars working in this field). New experimental studies are also beginning to look at more complicated social phenomena, including the importance of social pressure (Gerber, Green, & Larimer, 2008) and (online and offline) social networks in shaping voter behavior. To increase the realism and relevance of their work, scholars are also doing more to work with real political campaigns, organizations, and governmental agencies. The U.S. government has also begun funding more large-scale experiments in political science, a trend that may continue as budget constraints lead to an emphasis on evidence-based policy decisions. Finally, the increased prominence and visibility of experimentation will lead to more experimental research on topics outside the subfields of legislative politics, public opinion, and political participation.

VI. Conclusion

Experiments allow political scientists to test the relationship between cause and effect. The experimental method is one way to learn more about the political world. By randomly assigning subjects to treatment and control groups, researchers can isolate the effect of a specific intervention on subjects’ political attitudes, knowledge, or behavior. Randomized experiments can be conducted in the laboratory or in the field. Researchers also conduct so-called natural experiments by seeking out circumstances in which specific interventions affect populations selected as if at random. Although survey-based research continues to dominate the discipline, scholars are increasingly turning to experiments as a way to overcome the problem of self-reporting that can bias survey responses. Critics raise questions about the internal and external validity of experimental research. Proponents of the method argue that both concerns can be addressed through extension and replication. Building on previous research, political scientists are using experiments to answer increasingly complex questions about a wide variety of topics, including, but not limited to, the political effects of political advertising, racial attitudes, and voter mobilization campaigns. Political science has relied less heavily on experiments than have the related fields of psychology and economics. The end of the 20th century marked an increase in important laboratory-based experiments, while the 21st century witnessed a movement toward field experimentation. Experiments are now employed in work across the discipline and in interdisciplinary studies of politics.

Bibliography:

  1. Ansolabehere, S., & Iyengar, S. (1995). Going negative: How attack ads shrink and polarize the electorate. New York: Free Press.
  2. Bogus, C. T. (1992). The strong case for gun control. American Prospect, 3(10), 19-24.
  3. Brady, H. E., & McNulty, J. E. (2004, July). The costs of voting: Evidence from a natural experiment. Paper presented at the 2004 meeting of the Society for Political Methodology, Palo Alto, CA.
  4. Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
  5. Dawes, R. M., Orbell, J. M., Simmons, R. T., & van de Kragt, A. J. C. (1986). Organizing groups for collective action. American Political Science Review, 80, 1171-1185.
  6. Druckman, J. N., Green, D. P., Kuklinski, J. H., & Lupia, A. (2006). The growth and development of experimental research in political science. American Political Science Review, 100, 627-636.
  7. Dunning, T. (2008). Improving causal inference: Strengths and limitations of natural experiments. Political Research Quarterly, 61, 282-293.
  8. Eldersveld, S. J. (1956). Experimental propaganda techniques and voting behavior. American Political Science Review, 50, 154-165.
  9. Gerber, A. S., & Green, D. P. (2000). The effects of canvassing, direct mail, and telephone calls on voter turnout: A field experiment. American Political Science Review, 94, 653-663.
  10. Gerber, A. S., Green, D. P., & Larimer, C. W. (2008). Social pressure and voter turnout: Evidence from a large-scale field experiment. American Political Science Review, 102, 33-48.
  11. Gilens, M. (1999). Why Americans hate welfare: Race, media, and the politics of antipoverty policy. Chicago: University of Chicago Press.
  12. Gosnell, H. F. (1927). Getting out the vote: An experiment in the stimulation of voting. Chicago: University of Chicago Press.
  13. Green, D. P., & Gerber, A. S. (2002). Reclaiming the experimental tradition in political science. In H. Milner & I. Katznelson (Eds.), State of the discipline (Vol. 3, pp. 805-832). New York: W. W. Norton.
  14. Green, D. P., & Gerber, A. S. (Eds.). (2004). Field experiments in the political sciences [Entire issue]. American Behavioral Scientist, 47(5), 485-728.
  15. Green, D. P., & Gerber, A. S. (Eds.). (2005). The science of voter mobilization [Special issue]. Annals of the American Academy of Political and Social Science, 601(1), 1-204.
  16. Green, D. P., & Gerber, A. S. (2008). Get out the vote! How to increase voter turnout (2nd ed.). Washington, DC: Brookings Institution.
  17. Green, D. P., & Vavreck, L. (2008). Advertising and information. Unpublished manuscript.
  18. Hartman, G. W. (1936–1937). Field experiment on the comparative effectiveness of “emotional” and “rational” political leaflets in determining election results. Journal of Abnormal Psychology, 31, 99-114.
  19. Hurwitz, J., & Peffley, M. (Eds.). (1998). Perception and prejudice: Race and politics in the United States. New Haven, CT: Yale University Press.
  20. Iyengar, S., & Kinder, D. R. (1987). News that matters: Television and American opinion. Chicago: University of Chicago Press.
  21. Iyengar, S., Peters, M. D., & Kinder, D. R. (1982). Experimental demonstrations of the “not so minimal” consequences of television news programs. American Political Science Review, 76(4), 848-858.
  22. Kuklinski, J. H., Cobb, M. D., & Gilens, M. (1997). Racial attitudes and the “new South.” Journal of Politics, 59, 323-349.
  23. Kuklinski, J. H., Sniderman, P. M., Knight, K., Piazza, T., Tetlock, P. E., Lawrence, G. R., & Mellers, B. (1997). Racial prejudice and attitudes toward affirmative action. American Journal of Political Science, 41(2), 402-419.
  24. Lau, R. R., Sigelman, L., Heldman, C., & Babbitt, P. (1999). The effects of negative political advertisements: A meta-analytic assessment. American Political Science Review, 93(4), 851-876.
  25. Lowell, A. L. (1910). The physiology of politics. American Political Science Review, 4(1), 1-15.
  26. McKelvey, R. D., & Ordeshook, P. C. (1990). Information and elections: Retrospective voting and rational expectations. In J. Ferejohn & J. Kuklinski (Eds.), Information and democratic processes (pp. 281-312). Urbana: University of Illinois Press.
  27. Nickerson, D. W. (2008). Is voting contagious? Evidence from two field experiments. American Political Science Review, 102, 49-57.
  28. Niven, D. (2006). A field experiment on the effects of negative campaign mail on voter turnout in a municipal election. Political Research Quarterly, 59(2), 203-210.
  29. Sapiro, V. (1981–1982). If U.S. Senator Baker were a woman: An experimental study of candidate images. Political Psychology, 3, 61-83.
  30. Sniderman, P. M., & Grob, D. B. (1996). Innovations in experimental design in attitude surveys. Annual Review of Sociology, 22, 377-399.
  31. Wattenberg, M. P., & Brians, C. L. (1996). Negative campaign advertising: Demobilizer or mobilizer? Los Angeles: Center for Research in Society and Politics. Retrieved from the eScholarship Repository: https://escholarship.org/uc/item/7gf3q1w1