Survey Research in Political Science Research Paper


Outline

I. Introduction



II. Basic Ideas in Survey Research

A. Survey Questionnaire Design




1. Open- and Closed-Ended Questions

2. Answering Factual Recall and Attitude Questions

3. Survey Question Wording and Order

B. Random Sampling

1. Face-to-Face Interviews and Area Probability Cluster Sampling

2. Telephone Surveys in the RDD Poll

III. The Future of the RDD Poll and Cellular Phones

IV. New Web Survey Methodology

V. Blurring Probability and Nonprobability Sampling

VI. Cross-National Surveying

VII. Conclusion

I. Introduction

Survey research is a major tool for bringing facts—data—to bear on political science theories (Brady, 2000). The way in which survey researchers do so, by collecting data from the few to generalize to the many, is once again undergoing a period of profound change. In the last significant period of change, survey research shifted from a reliance on face-to-face interviewing in respondent homes during the 1960s to the cheaper and faster world of telephone surveying in the 1970s and 1980s. Today, as the 21st century reaches its second decade, this transition toward a technology-mediated experience of the survey interview continues (Dillman, Smyth, & Christian, 2009). The revolution in digital communications technology has brought about even bigger changes, from the steady replacement of landlines with cellular phones to the expansion and habitual reliance of an ever-larger number of Americans on the Internet. And although survey researchers have dealt with public skepticism of polling and refusals to participate before, both are higher today than ever. Nevertheless, survey research has always been an investigative tool shifting with the prevailing social trends (Tourangeau, 2004). As the study of survey research has become a scientific discipline of its own, survey research in political science is well prepared to meet these challenges and will adapt to do so.

This book, 21st Century Political Science: A Reference Handbook, presents an appropriate opportunity to take stock of the most important changes and sources of continuity in survey research. This research paper discusses four theses for the future. First, cellular telephone ownership has increased the number of households without a landline telephone, disrupting the traditional methodology for telephone surveys. Second, the web survey is already a major mode of survey research, and given the spread of broadband Internet access, its methodology will continue to develop. Third, these two changes have resulted in a blurring of what was once a fundamental distinction between surveys: the difference between probability and nonprobability sampling. And fourth, highlighting the place of survey research in a globalized world, cross-cultural (or cross-national) survey research will continue to open up new research opportunities. Before turning to these ideas, however, this research paper must briefly review a few critical ideas about what a survey is, how survey questionnaires are written, and how survey sampling works.

II. Basic Ideas in Survey Research

Survey research in political science encompasses a great diversity of subject matter; the most well-known application is the mass opinion survey of an entire nation’s voting-age population. Within the United States, the major political science survey is the American National Election Studies (ANES); around the world, the World Values Survey is conducted in more than 40 countries. Across Europe, there are the long-standing Eurobarometer surveys and the more recent European Social Survey. These studies are only a few among the many in political science.

Although these surveys all share a common concern for understanding the beliefs, attitudes, and values of mass democratic publics, a general definition of a scientific survey is surprisingly elusive, given the many ways in which survey research is conducted. At its core, survey research is the process of collecting data from a small part of a population to make general statements, or inferences, about characteristics of the larger population (de Leeuw, Hox, & Dillman, 2008a). These data are collected by having people answer questions to develop a set of systematic descriptions of the sample (Weisberg, 2005). The foundation of this process is built from writing a survey questionnaire and drawing a sample of individuals to interview.

A. Survey Questionnaire Design

In the early to mid-20th century, the work of writing survey questions and laying out a survey questionnaire resembled the art of a sculptor; aesthetic principles and skilled craft resulted in a work reflecting the vision of its creator and appreciated on her terms. Today, however, science has supplanted art; a robust literature testing practically all aspects of survey questionnaire design provides strong guidance for researchers (Schaeffer & Presser, 2003). Although that literature is too voluminous to summarize here, three aspects are essential: (1) differences in question structure from open- to closed-ended, (2) how event recall and attitude questions are answered, and (3) consequences of the phrasing and order of survey questions.

1. Open- and Closed-Ended Questions

The most basic distinction within survey questionnaires is between open- and closed-ended questions. Open-ended questions are conversational, such as a question the ANES (2004) has asked about parties: “I’d now like to ask you what you think are the good and bad points about the two national parties. Is there anything in particular that you like about the Democratic Party?” If the respondent says, “Yes,” then the interviewer asks, “What is that?” The interviewer records everything the respondent says. Usually, survey researchers analyze open-ended responses by categorizing phrases, counting mentions of a theme such as a party’s handling of the economy. Open-ended questions are costly for researchers to administer and analyze in terms of survey time; for these reasons, closed-ended questions are much more common in survey research. Closed-ended questions provide response options for a respondent to identify and select. There are many ways of structuring the response alternatives for closed-ended questions, but the most common is to ask a respondent to select an item from a rating scale.

Two different types of rating scales appear in Table 1. The upper half of the table displays a bipolar response scale for gauging a respondent’s ideological identification. The scale runs between two poles, from extremely liberal to extremely conservative. In answering the question, the respondent is asked to select both a direction and a strength of ideological identity. For the process to be as valid and reliable as possible, all points on the scale are labeled. Respondents who say they have not thought much about the issue are not placed on the scale. In the lower half of Table 1, respondents are given a branching scale that unfolds in two steps. Respondents first select a direction of party identification (Democrat, Republican, or Independent) and then select a strength of identification, strong or not very strong, or, if Independent, whether they lean toward either party. Although both scale types are acceptable, branching formats are preferred to bipolar response scales in which not all points are labeled (Krosnick & Berent, 1993). Telephone surveyors tend to rely on branching scales rather than verbal descriptions of bipolar response scales.

Table 1. Bipolar and Branching Scales

[Table image not reproduced: upper half, 7-point bipolar ideological self-placement scale; lower half, branching party identification items.]

NOTE: The table displays two survey questions from the ANES (2004), in two common closed-ended formats, a bipolar and a branching scale. Both are used to create 7-point response scales for strength and direction of ideological beliefs and party identification.
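
As a concrete illustration of how a branching item is turned into a single measure, the short sketch below collapses the two-step party identification responses into a 7-point scale. This is a minimal sketch, not ANES code, and the response labels are hypothetical stand-ins for however a given study codes its answers.

```python
# A minimal sketch (hypothetical response labels, not ANES code) of collapsing a
# two-step branching party identification item into a single 7-point scale.

def seven_point_party_id(direction, strength=None, lean=None):
    """Return 1 (strong Democrat) through 7 (strong Republican)."""
    if direction == "Democrat":
        return 1 if strength == "strong" else 2
    if direction == "Republican":
        return 7 if strength == "strong" else 6
    # Independents: leaners sit next to the party they lean toward.
    if lean == "Democratic":
        return 3
    if lean == "Republican":
        return 5
    return 4  # pure Independent


print(seven_point_party_id("Democrat", strength="not very strong"))  # -> 2
print(seven_point_party_id("Independent", lean="Republican"))        # -> 5
```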

2. Answering Factual Recall and Attitude Questions

Closed-ended questions may be used to gauge respondent recall of objective factual information. For example, a researcher may ask, “How many days in the past week did you watch the national network news on TV?” The response alternatives range from “1 day” to “7 days.” In answering such questions, respondents may simply attempt to recall the past week’s schedule and estimate (or recall from working memory) the correct answer. (This description simplifies a complex process described in greater detail in Tourangeau, Rips, & Rasinski, 2000.) In political surveys, such questions are usually of less interest than those assessing a respondent’s attitude or their positive or negative opinion about a political party, elected official, or policy issue. Although slightly different, the personal beliefs in Table 1 also contain an evaluative aspect and are similar to attitudes. For respondents, the process of answering attitude questions is fundamentally different and more complex.

For example, an attitude question commonly asked in U.S. surveys is presidential approval. In many contemporary surveys, respondents are asked, “Do you approve or disapprove of the way Barack Obama is handling his job as president?” When respondents answer such questions, they do not pull preexisting opinions out of memory. Instead, opinions are constructed on the fly. People draw on general values, predispositions, and fragments of prior beliefs in expressing an opinion through a process termed belief sampling (Tourangeau et al., 2000).

In the case of President Obama, respondents could have a wide range of positive and negative considerations brought to mind when asked. For example, a respondent may have recently seen a newscast describing bonuses paid to bailed-out bank executives on Wall Street, and when the respondent is later asked about Obama, this consideration comes to mind, drawing him or her toward expressing disapproval of Obama’s presidency. Generally, the balance of negative and positive considerations determines the direction of the attitude the respondent expresses. One implication of belief sampling is that researchers should not automatically include a “don’t know” response alternative in survey questions on the assumption that respondents do not have a preexisting opinion. Moreover, the concept of belief sampling is useful for explaining context effects in surveys: how question wording and differences in question order can alter survey results. For a more in-depth discussion of belief sampling and context effects, see Schwarz, Knäuper, Oyserman, and Stich (2008); Asher (2007) provides an accessible introduction with a clear application to the interpretation of poll results.

3. Survey Question Wording and Order

Clearly, question wording and order are important because of the influence of words on the sampling of considerations. Subtle changes in question wording can alter the considerations brought to mind. Asher (2007) discusses an example in which Americans were asked about their support for cuts in state services; when respondents were given the option of cutting what was termed aid to the needy, only 8% chose this option, but when the option was instead labeled public welfare, 39% chose it. The term welfare apparently primed much more negative considerations. The most general advice about writing survey questions is to be aware of how such wording changes can affect results. Beyond that, questions should be worded so that terms are defined as concretely as possible, using unbiased, simple language, while avoiding so-called double-barreled questions that refer to two subjects at once.

The order of survey questions can affect individual responses because particular questions prime certain considerations. For example, asking respondents whether the nation’s economy has gotten better, stayed about the same, or gotten worse over the past year immediately before a question on presidential approval leads respondents to evaluate the president in light of economic considerations. Because of the potential for question order to influence individual survey responses, wherever possible professional survey researchers randomize the order of questions, as in the sketch below.
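
As a rough illustration of this practice, the sketch below shuffles a questionnaire’s item order independently for each respondent so that order effects average out across the sample; the item labels are hypothetical and no particular survey software is assumed.

```python
import random

# Hypothetical item labels; each respondent gets an independently shuffled order.
QUESTIONS = ["economy_retrospective", "presidential_approval", "congressional_approval"]

def randomized_order(items, seed=None):
    rng = random.Random(seed)   # a per-respondent seed keeps the order reproducible
    order = list(items)         # copy so the master questionnaire is untouched
    rng.shuffle(order)
    return order

print(randomized_order(QUESTIONS, seed=101))
print(randomized_order(QUESTIONS, seed=102))
```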

At a practical level, when constructing a questionnaire researchers should have on hand a comprehensive reference. Authoritative, up-to-date textbooks are Fowler (2009) and Groves et al. (2004), and a slightly dated textbook is Weisberg, Krosnick, and Bowen (1996). A classic reference with practical advice is Dillman et al. (2009).

B. Random Sampling

Given a survey questionnaire and a population of individuals to study, the researcher must draw a sample from that population. Randomization is the cornerstone of probability sampling methods and is the professional standard for survey research. It is illustrated in its purest form in the simple random sample drawn via a sampling frame. A frame is a list of each and every individual within the population of interest. The sample will be drawn from the individuals on the frame; the frame should exactly mirror the population of interest, or else the sample will be subject to coverage error: the difference between the individuals appearing on the frame and those in the population. In a simple random sample, all individuals in the population have a known, nonzero, and equal chance of being selected. A random-number generator can then be used to select individuals from the sequentially listed elements of the sampling frame.
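
The following is a minimal sketch of that logic, using a fabricated frame of voter identifiers rather than any real list; in practice the frame would come from a registry or listing operation.

```python
import random

# Fabricated sampling frame: a sequential list of (hypothetical) voter identifiers.
frame = [f"voter_{i:05d}" for i in range(50_000)]

def simple_random_sample(frame, n, seed=42):
    """Draw n individuals without replacement; each has an equal, known chance."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

sample = simple_random_sample(frame, n=1_500)
print(len(sample), sample[:3])
```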

Next, a researcher would administer the survey questionnaire. With answers recorded from each member of the sample, classical statistical inference, covered in any introductory statistics textbook, can be used to estimate population characteristics within a margin of error. With approximately 1,500 interviews, a researcher can estimate a characteristic (such as party identification) with remarkable precision, typically within two to three percentage points of the true population value at the conventional 95% confidence level.
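
The arithmetic behind that claim is the standard margin-of-error formula for a proportion estimated from a simple random sample, z * sqrt(p(1 - p) / n). The short calculation below uses the most conservative assumption, p = 0.5, at the 95% confidence level.

```python
import math

def margin_of_error(p=0.5, n=1_500, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# About 2.5 percentage points with 1,500 interviews and p = 0.5.
print(round(100 * margin_of_error(), 1))
```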

In sampling, however, there is often a disjuncture between basic theory and feasible practice. Simple random samples are rarely, if ever, applied, least of all for a survey of a geographically dispersed population such as an entire region or nation. Consider two examples, the first from survey research involving face-to-face interviews of nationally representative samples of American voters and the second from random digit dial (RDD) telephone surveying, the two polling methods most frequently used in large-scale research over the past century.

1. Face-to-Face Interviews and Area Probability Cluster Sampling

During the 1950s, the time characterized by Weisberg (2005) as the period of professionalization and expansion of survey research, a large proportion of American households still did not have a telephone. Major national surveys of the day such as the Gallup Poll and the ANES were conducted in person; interviewers traveled to the homes of survey respondents. Without an accurate sampling frame of American citizens of voting age (the ANES study population), a simple random sample was (and remains) impossible. Even if it were possible, meeting face-to-face with a simple random sample of Americans spread across the United States would be prohibitively expensive. So the method for drawing samples for such interviews has relied on an alternative that does not require a national sampling frame and randomly distributed interviews: area probability cluster sampling (Weisberg et al., 1996).

Area probability cluster sampling for face-to-face surveys such as the ANES occurs in stages, resulting in a nationally representative sample of individuals interviewed in regional clusters. First, the United States is divided into mutually exclusive primary sampling units (PSUs), such as metropolitan statistical areas (big cities) or sets of rural counties, and sampling is stratified by region, with units drawn from the North, South, East, and West of the country. Second, using maps of the areas within each sampled PSU, multiple city blocks and comparable rural areas are sampled as chunks. At the third stage, a sampling frame is constructed of all the housing units within these blocks. Then, fourth, individual housing units are sampled, and from within each one, an individual household member is selected for the interview. (A common method of selection is to interview the eligible person with the most recent birthday.) The result is a nationally representative sample; what may not be possible, however, is inference to each U.S. state, since interviews in small states may occur in only one city or county. There are many other aspects to this methodology; see Fowler (2009) for a further discussion.
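
The staged logic can be made concrete with a deliberately simplified sketch. The strata, PSU names, block counts, and cluster sizes below are invented for illustration only; real designs such as the ANES select PSUs with probability proportional to their population size and use carefully constructed listings at each stage.

```python
import random

rng = random.Random(7)

# Invented strata and primary sampling units (PSUs); not an actual sampling design.
strata = {
    "Northeast": ["PSU_NE_1", "PSU_NE_2", "PSU_NE_3"],
    "South":     ["PSU_S_1", "PSU_S_2", "PSU_S_3"],
    "Midwest":   ["PSU_MW_1", "PSU_MW_2"],
    "West":      ["PSU_W_1", "PSU_W_2"],
}

def blocks_in(psu):            # stand-in for maps of blocks ("chunks") within a PSU
    return [f"{psu}_block_{b}" for b in range(20)]

def housing_units_in(block):   # stand-in for a field listing of housing units
    return [f"{block}_hu_{h}" for h in range(40)]

sampled_units = []
for region, psus in strata.items():
    for psu in rng.sample(psus, 1):                   # stage 1: PSUs within each stratum
        for block in rng.sample(blocks_in(psu), 2):   # stage 2: blocks within the PSU
            frame = housing_units_in(block)           # stage 3: frame of housing units
            sampled_units += rng.sample(frame, 5)     # stage 4: housing units to visit

print(len(sampled_units), sampled_units[:2])
```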

This methodology has remained the platinum standard for achieving high-quality data and response rates and can be applied to any geographically defined area. In developing countries, it is the primary tool for national survey research. Nevertheless, face-to-face surveys are increasingly expensive to conduct. A current rule of thumb is that in the United States, the surveys cost approximately $1,000 per interviewed respondent. Compared with approximately $5 for 15 minutes of time with a respondent on an RDD poll, national face-to-face interview polls are cost prohibitive for nearly all purposes.

2. Telephone Surveys in the RDD Poll

Surveys conducted via the landline telephone became the norm in mass survey research with the advent of RDD methodology. Beginning in the 1970s, with sufficient coverage of landline telephones across the United States, the cheaper cost of contacting respondents over the phone meant that researchers could more quickly and efficiently complete their work. (For a discussion of these developments, see the 2007 special issue of Public Opinion Quarterly on telephone surveying.) Thus began the heyday of the RDD poll, which extended through the 1980s and into the 1990s.

To make telephone sampling cost-efficient, researchers had to determine how to isolate residential telephone numbers from all possible numbers, most of which were not working or were assigned to something other than a residential household. Randomly dialing any telephone number, in the manner of a simple random sample, would be far too inefficient. The Mitofsky–Waksberg (MW) method was the first protocol to offer an efficient alternative, based on the logic of cluster sampling (Tourangeau, 2004). Residential telephone numbers have never been spread evenly across all possible combinations of 7-digit telephone numbers; the MW methodology provided a way to reach more residential telephone numbers with less dialing.

The MW method is a two-stage sampling design, beginning with the identification of sampling clusters and then continuing with the sampling of individual numbers (Brick & Tucker, 2007). Consider a telephone number with an area code (abbreviated AAA), a prefix (PPP), and four final digits (SSRR): AAA-PPP-SSRR. Because residential telephone numbers tend to be clustered together, in the MW method numbers are selected in clusters. From an area code and prefix (AAA-PPP) bank, a cluster is sampled by randomly selecting the SS portion of the suffix. Then, in the second stage, the final two digits (RR) are chosen at random, and that telephone number is dialed. If it is not a residential number, that particular telephone bank is discarded, and the researcher moves on to the next cluster. If it is a residential number, the researcher continues to dial additional numbers by randomly choosing RR digits within the cluster, perhaps conducting as many as 10 interviews within the cluster.
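
A toy sketch of this two-stage logic follows. The area code and prefix are placeholders, and is_residential() is a hypothetical stand-in for the outcome of an actual dialing attempt; it is meant only to show the discard-or-continue decision, not to reproduce the full MW protocol.

```python
import random

rng = random.Random(1)

def is_residential(number):
    # Stand-in for dialing: pretend about a quarter of numbers in a bank are residential.
    return rng.random() < 0.25

def mw_sample(area_code="614", prefix="555", max_per_cluster=10, max_attempts=200):
    ss = f"{rng.randrange(100):02d}"                     # stage 1: pick a 100-number bank (AAA-PPP-SS)
    primary = f"{area_code}-{prefix}-{ss}{rng.randrange(100):02d}"
    if not is_residential(primary):
        return []                                        # discard the bank; move to the next cluster
    numbers = [primary]
    attempts = 0
    while len(numbers) < max_per_cluster and attempts < max_attempts:
        attempts += 1                                    # stage 2: keep dialing within the bank
        candidate = f"{area_code}-{prefix}-{ss}{rng.randrange(100):02d}"
        if candidate not in numbers and is_residential(candidate):
            numbers.append(candidate)
    return numbers

print(mw_sample())
```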

The MW method is still used today (Brick & Tucker, 2007; Lepkowski et al., 2008). Of course, the method has evolved (Dillman et al., 2009), since over the past 20 years telephone service has expanded dramatically and residential lines are less densely assigned to particular banks of numbers, reducing the efficiency of RDD telephone surveys (Tucker & Lepkowski, 2008). As a result, researchers today increasingly supplement RDD methods with purchased lists of working banks that function much like a sampling frame (Tourangeau, 2004).

Thus, RDD polls have become more expensive, particularly with the widespread use of caller identification and voicemail to screen calls from pollsters. Faced with increasingly difficult-to-reach respondents, some researchers may be tempted to make the cheaper, but incorrect, decision to replace these households in the sample with additional, easier-to-reach households. Such a decision causes problems when there is a significant difference between the two sets of respondents. For example, during the 2008 Democratic presidential primary in New Hampshire, survey researchers overestimated support for Senator Obama while underestimating it for Senator Clinton; the harder-to-reach households tended to support Clinton, and their exclusion from samples skewed the results (American Association for Public Opinion Research, 2009). Yet the bigger concern for the future of telephone surveys is the growth of cellular telephones and the potential for coverage bias due to the increasing number of Americans who own cellular telephones but not landlines and are thus excluded from (not covered in) the traditional telephone methodologies.

III. The Future of the RDD Poll and Cellular Phones

Standard estimates of the proportion of American households with only cellular telephone service (no landline phone) come from the National Health Interview Survey, which began collecting data in 2003 on the implications of cellular telephone ownership for survey research. In the second half of 2008, about 20% of American households fit within this category, and the proportion of cell-only households has been growing by approximately 2 to 3 percentage points every 6 months. Americans without landline telephone service are likely to be younger and poorer, and more likely to rent than to own their homes or apartments. The largest wireless-only age-group is composed of persons 25 to 29 years old, of whom approximately 40% live in wireless-only households; among those aged 18 to 24, 33% do so (Blumberg & Luke, 2009).

Undercoverage in traditional landline RDD polls is problematic to the extent that the persons systematically excluded from the surveys differ in relevant political characteristics from those who are not. Fortunately for political science, cell-only ownership currently appears to be largely unrelated to the political judgments asked about in surveys. For instance, among the youngest cohort of Americans, those 18 to 25 years of age, there appears to be little difference between cell-only young Americans and those with a landline on most measures of political beliefs and attitudes, although those with only cell phones are significantly more likely to consume alcohol, among other behaviors that concern public health researchers (Blumberg, 2007; Keeter, Kennedy, Clark, Tompson, & Mokrzycki, 2007).

Yet despite the potential for coverage bias in RDD polls to affect election surveys, for now there appears to be little cause for immediate concern. Furthermore, any potential coverage bias in RDD surveys is corrected, in the analysis of the data, by demographic weighting. Demographic sampling weights, calculated via a technique called raking or sample balancing, are typically required in the analysis of all contemporary RDD polls. Individual cases are weighted in an analysis to count for slightly less or more than one observation, so that the sample more closely matches the known aggregate demographics of the population. For national surveys in the United States, samples are usually matched to the U.S. Census Bureau’s most recent Current Population Survey. For a technical discussion of weighting and its implications, see Biemer and Christ (2008) or Lepkowski et al. (2008).
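
To make the idea concrete, the following is a bare-bones sketch of raking with a handful of fabricated respondents and invented population targets; production weighting systems handle many more variables, convergence checks, and weight trimming.

```python
import numpy as np

# Fabricated respondents and (invented) population margins for two variables.
respondents = [
    {"age": "18-34", "sex": "F"}, {"age": "18-34", "sex": "M"},
    {"age": "35-64", "sex": "F"}, {"age": "35-64", "sex": "M"},
    {"age": "35-64", "sex": "F"}, {"age": "65+",   "sex": "F"},
    {"age": "65+",   "sex": "M"}, {"age": "65+",   "sex": "F"},
]
targets = {
    "age": {"18-34": 0.30, "35-64": 0.50, "65+": 0.20},
    "sex": {"F": 0.52, "M": 0.48},
}

weights = np.ones(len(respondents))
for _ in range(50):                                   # iterate until the margins settle
    for var, margins in targets.items():
        for category, share in margins.items():
            idx = [i for i, r in enumerate(respondents) if r[var] == category]
            current = weights[idx].sum() / weights.sum()
            if current > 0:
                weights[idx] *= share / current       # pull this margin toward its target

weights *= len(respondents) / weights.sum()           # rescale to sum to the sample size
print(np.round(weights, 2))
```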

It is unlikely that cellular telephones will be included routinely in mass surveys in the near future. Because cell phone consumers in the United States must pay for incoming calls, the federal Telephone Consumer Protection Act prohibits anyone—survey researchers included—from using autodialing technology to contact cell phone owners. (Telephone surveyors often use automated technology, similar to that used by telemarketers, to initiate interviews.) Such contact is legal only when the cell phone owner has given the researcher express prior consent or when a telephone interviewer dials the number by hand, which increases the cost of contacting cellular telephone owners. More generally, there is currently no standard professional methodology for conducting cell phone surveys; the significantly higher costs involved mean that, at least for the immediate future, cellular telephones are more likely to be excluded from RDD samples than included. Although cellular telephone surveys are out, web surveys are increasingly popular.

IV. New Web Survey Methodology

Any casual web surfer will notice the ubiquity of surveys conducted over Internet web browsers, from the one- or two-question flash polls appearing on news websites to customer satisfaction surveys and e-mail solicitations for lengthy marketing studies. Web surveys are based on a wide range of survey sampling procedures; although news flash polls may not even be representative of visitors to a particular website, other web surveys are based on highly scientific, accurate probability samples. At its core, the web provides a survey mode: an interface for conducting surveys, independent of any particular sampling methodology. Whereas older survey methodologies, such as RDD polls, imply both a sampling method (RDD) and an interview mode (telephone), web surveys imply neither (Couper & Miller, 2008). The use of web surveys for a broad range of sampling purposes has spawned a rich array of developments in web survey technology, particularly software, the varieties of which are far too numerous and diffuse to summarize effectively here. (A comprehensive resource for the technology and scholarship of web surveys is the website for Web Survey Methods: http://www.websm.org/) Two concerns are considered here: the first is a fundamental feature of web surveys and a source of new research, and the second is an implication of the unfulfilled promise of web surveys to deliver cheap yet high-quality data.

Compared with face-to-face or telephone interviews, web surveys are not interviewer assisted; the respondent interacts with the questionnaire entirely through digital communications technology. Traditional paper-and-pencil surveys, too, are not interviewer assisted, and web surveys tend to be modeled after the paper-and-pencil surveys of generations ago; the layouts offered by the more popular companies providing free online survey hosting follow this design. Some lament this state of affairs, calling for an investigation into the rich potential of web surveys to interact with the respondent through multimedia (Couper, 2007); Couper (2008) has written a comprehensive web survey design manual. Others find that the pencil-and-paper approach continues to be the most desirable model for collecting reliable and valid data (Dillman et al., 2009).

As web survey methodologies developed, researchers hoped the new mode would open up vast new ways of collecting survey data. It has not yet done so. At least for now, the administration of surveys via the web has not delivered on hopes that this mode would recapture the response rates of yesteryear or of the telephone survey enterprise in its heyday; response rates for web surveys appear to be no greater than those for other survey methods (Couper & Miller, 2008). It does appear, however, that a web survey can complement other, more traditional survey modes as a way of reducing survey costs: some respondents can answer via the web, while more expensive mail-back or face-to-face modes are used for other respondents (Rookey, Hanway, & Dillman, 2008). The use of these mixed modes of survey research is likely to become a standard part of survey research projects (de Leeuw, Hox, & Dillman, 2008b).

One of the principal benefits of web surveys is the democratizing effect of Internet technology: lower costs put survey research within reach of more researchers. Yet the familiar challenges of unequal Internet access remain (Couper & Miller, 2008). Because of the continued digital divide (the systematic differences between those with consistent, home-based access to broadband Internet and those without), challenges remain concerning respondent Internet access, representation, and generalizability from the sample to the population (Groves et al., 2004). In response to the problem of coverage bias in web surveys, a number of innovations have emerged, blurring the distinction between probability and nonprobability sampling.

V. Blurring Probability and Nonprobability Sampling

One common type of web survey is the Internet panel, in which individuals volunteer to periodically complete web surveys produced by a researcher. These opt-in studies recruit volunteers through web advertisements, asking them to join a firm’s panel and answer surveys from time to time. Although most popular in marketing and consumer research, some research firms conduct political polls in this form, such as YouGov/Polimetrix. The methodology and its applications developed shortly after the turn of the 21st century, when declining response rates for telephone surveys and hopes of reaching younger respondents at relatively low cost led to the emergence of web panels (Dillman et al., 2009).

Yet panel surveys, while consisting of responses from an unrepresentative set of Americans who complete the survey, are intended to support generalizations to a more diverse population and sometimes even to the American public as a whole. When signing up to join a panel, respondents provide some basic demographic information about themselves, such as age, gender, race, formal educational attainment, and place of residence. From the people who volunteer and join the panel, a sample of individuals is drawn to complete a given survey, selected so that these characteristics match those of the target population.

There are a number of potential sources of controversy with these surveys. From a more traditional, probability-sampling perspective, they present a few potential difficulties: (a) coverage error—individuals without Internet access cannot participate and systematically differ from those who do; (b) self-selection bias—those panel members completing the survey are self-selected into it; (c) estimation of sampling error in the absence of randomization; and (d) panel-conditioning effects from repeatedly asking respondents to participate in surveys (Dillman et al., 2009). To account for these potential sources of error, Internet panel researchers have developed a variety of technically sophisticated ways of analyzing the data to make the survey sample more representative of the population of interest.

The potential difference between sample and population can be accounted for via sample weighting, in which a particular individual’s response is weighted to count for more or less than one observation, such that the aggregated responses are representative of the population. After the survey is administered, these weighting adjustments are made via a technique called propensity scoring. Through propensity scoring, the demographic characteristics of a volunteer panel are weighted to more closely match the characteristics of a similar, traditional probability-sample survey. Typically, researchers running an Internet panel will run a parallel, small-scale probability survey for the purpose of providing these reference data. For a discussion of propensity scoring and web surveys, see Lee (2006) and Lee and Valliant (2009).
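
The sketch below illustrates the general logic with fabricated data: a reference probability sample and a volunteer web panel are pooled, a logistic model estimates each panel member's propensity to be a volunteer, and panel cases receive inverse-odds weights. This is not the procedure of any particular panel vendor; real applications use richer covariates and typically follow the propensity adjustment with calibration (Lee & Valliant, 2009).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated covariates (age in years, years of schooling) for two samples:
# a reference probability sample (label 0) and a volunteer web panel (label 1).
reference = np.column_stack([rng.normal(48, 16, 500), rng.normal(13.0, 2.5, 500)])
web_panel = np.column_stack([rng.normal(38, 12, 500), rng.normal(14.0, 2.0, 500)])

X = np.vstack([reference, web_panel])
y = np.array([0] * len(reference) + [1] * len(web_panel))

model = LogisticRegression().fit(X, y)
p = model.predict_proba(web_panel)[:, 1]      # estimated propensity of being a volunteer

weights = (1 - p) / p                          # inverse-odds weights for the panel cases
weights *= len(web_panel) / weights.sum()      # rescale so weights sum to the panel size

print(round(float(weights.mean()), 2), round(float(weights.max()), 2))
```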

Success of weighting to correct for bias depends on the similarity between the groups of persons included within the survey and those excluded. If the approximately 30% of Americans who do not have Internet access systematically differ from those who do, the weights may not successfully adjust for nonresponse. These studies depart from the traditional theory of probability-based sampling. Yet the proponents of these surveys vigorously defend their methods as producing valid inferences.

Others, though, are not convinced and propose to view these studies as offering exploratory findings or as a vehicle for carrying out experimental tests (Dillman et al., 2009). The methodology of survey-based experimentation—designing an experimental manipulation within a survey—has developed strongly over the past decade. For an in-depth review of survey-based experimentation, see Lupia (2002), Sniderman and Grob (1996), and Gaines, Kuklinski, and Quirk (2007). Others see Internet panels as a return to the nonprobability polls (quota or convenience sampling) conducted prior to the institutionalization of probability sampling among professional pollsters (Bethlehem & Stoop, 2007). Despite criticism of the methodology and the declining response rates that now affect Internet panels as much as any other mode, given the growth of Internet access and social networking websites (and web 2.0 technologies), panels will continue to grow. So although controversial, panels are likely to be increasingly popular among researchers.

There are probability-based alternatives for web surveys. Some panel providers have attempted to overcome the coverage problem of the Internet by recruiting participants through a probability-based method. For example, in the Netherlands, CentERdata recruited participants for its Longitudinal Internet Studies for the Social Sciences (LISS) panel, launched in 2007, through a probability sample of households drawn from the national population register. Similarly, the most recent ANES presidential election study, conducted in 2008, included a monthly web survey in which participants were recruited through an RDD sample. In both examples, respondents who did not already have a personal computer and Internet access were provided with them for the duration of the study. Of course, questions remain about the conditioning effects of computer access and repeated web surveys, and relying on RDD recruitment amid the rise of cellular telephones brings its own challenges. There remains much to be learned about the subject.

VI. Cross-National Surveying

An area of major concern in survey research is the development of survey research methods that cross linguistic, national, or even continental boundaries. The development of valid, reliable methodologies of this kind (called cross-cultural methods) is a major theme for the 21st century (de Leeuw et al., 2008b). Cross-national survey research has opened up many new research frontiers in the comparative study of public opinion; methodologically, survey researchers continue to investigate ways of standardizing survey administration to facilitate cross-national comparisons. Authoritative texts on the subject are Harkness, Van de Vijver, and Mohler (2003) and de Leeuw et al. (2008b).

VII. Conclusion

The challenges facing survey research for the 21st century are great, but it would be premature to begin penning the method’s obituary. Survey research has faced similar challenges in the past and been the subject of criticism that the industry would decline. When survey researchers faced severely declining response rates for face-to-face surveys in the 1960s, some questioned whether surveys would survive. Instead of declining, survey research thrived, leading to the widespread scientific study of survey questionnaires and sampling methodologies. So today, survey researchers will meet the challenges of the current era. No other research tool facilitates the study of population characteristics on the basis of a relatively small sample as well as survey research. Even challenges such as the development of cellular telephone technologies will likely prove to be surmountable. The spread of Internet access, further development of social networking technology, and the continued growth of exclusive cell phone ownership will likely be future research subjects, ensuring the place of survey research in political science.

Bibliography:

  1. American Association for Public Opinion Research, Ad Hoc Committee on the 2008 Presidential Primary Polling. (2009). An evaluation of the methodology of the 2008 pre-election primary polls. Retrieved from https://www.aapor.org/
  2. Asher, H. B. (2007). Polling and the public: What every citizen should know. Washington, DC: CQ Press.
  3. Bethlehem, J., & Stoop, I. (2007). Online panels: A paradigm theft? Paper presented at the meeting of the Association for Survey Computing, University of Southampton, UK.
  4. Biemer, P. P., & Christ, S. (2008). Weighting survey data. In E. D. de Leeuw, J. J. Hox, & D. A. Dillman (Eds.), International handbook of survey methodology (pp. 317-341). New York: Lawrence Erlbaum.
  5. Blumberg, S. J. (2007). Coverage bias in traditional telephone surveys of low-income and young adults. Public Opinion Quarterly, 71, 734-749.
  6. Blumberg, S. J., & Luke, J. V. (2009). Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December 2008. National Center for Health Statistics. Retrieved from https://www.cdc.gov/nchs/nhis/index.htm
  7. Brady, H. E. (2000). Contributions of survey research to political science. PS: Political Science and Politics, 33, 47-57.
  8. Brick, J. M., & Tucker, C. (2007). Mitofsky-Waksberg: Learning from the past. Public Opinion Quarterly, 71, 703-716.
  9. Couper, M. P. (2007). Whither the web: Web 2.0 and the changing world of web surveys. Paper presented at the meeting of the Association for Survey Computing, University of Southampton, UK.
  10. Couper, M. P. (2008). Designing effective web surveys. New York: Cambridge University Press.
  11. Couper, M. P., & Miller, P. V. (2008). Web survey methods: Introduction. Public Opinion Quarterly, 72, 831-835.
  12. de Leeuw, E. D., Hox, J. J., & Dillman, D. A. (2008a). The cornerstones of survey research. In E. D. de Leeuw, J. J. Hox, & D. A. Dillman (Eds.), International handbook of survey methodology (pp. 1-17). New York: Lawrence Erlbaum.
  13. de Leeuw, E. D., Hox, J. J., & Dillman, D. A. (2008b). International handbook of survey methodology. New York: Lawrence Erlbaum.
  14. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley.
  15. Fowler, F. J., Jr. (2009). Survey research methods. Thousand Oaks, CA: Sage.
  16. Gaines, B. J., Kuklinski, J. H., & Quirk, P. J. (2007). The logic of the survey experiment reexamined. Political Analysis, 15, 1-20.
  17. Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken, NJ: Wiley.
  18. Harkness, J. A., Van de Vijver, F. J. R., & Mohler, P. P. (2003). Cross-cultural survey methods. Hoboken, NJ: Wiley.
  19. Keeter, S., Kennedy, C., Clark, A., Tompson, T., & Mokrzycki, M. (2007). What's missing from national landline RDD surveys? The impact of the growing cell-only population. Public Opinion Quarterly, 71, 772-792.
  20. Kempf, A. M., & Remington, P. L. (2007). New challenges for telephone survey research in the twenty-first century. Annual Review of Public Health, 28, 113-126.
  21. Krosnick, J. A. (1999). Survey research. Annual Review of Psychology, 50, 537-567.
  22. Krosnick, J. A., & Berent, M. K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 37, 941-964.
  23. Lavrakas, P. J. (1993). Telephone survey methods: Sampling, selection, and supervision. Newbury Park, CA: Sage.
  24. Lee, S. (2006). Propensity score adjustment as a weighting scheme for volunteer panel web surveys. Journal of Official Statistics, 22, 329-349.
  25. Lee, S., & Valliant, R. (2009). Estimation for volunteer panel web surveys using propensity score adjustment and calibration adjustment. Sociological Methods and Research, 37, 319-343.
  26. Lepkowski, J. M., Tucker, C., Brick, J. M., de Leeuw, E. D., Japec, L., Lavrakas, P. J., et al. (Eds.). (2008). Advances in telephone survey methodology. Hoboken, NJ: Wiley.
  27. Lupia, A. (2002). New ideas in experimental political science. Political Analysis, 10, 319-324.
  28. Rookey, B. D., Hanway, S., & Dillman, D. A. (2008). Does a probability-based household panel benefit from assignment to postal response as an alternative to Internet-only? Public Opinion Quarterly, 72, 962-984.
  29. Schaeffer, N. C., & Presser, S. (2003). The science of asking questions. Annual Review of Sociology, 29, 65-88.
  30. Schwarz, N., Knäuper, B., Oyserman, D., & Stich, C. (2008). The psychology of asking questions. In E. D. de Leeuw, J. J. Hox, & D. A. Dillman (Eds.), International handbook of survey methodology (pp. 18-34). New York: Lawrence Erlbaum.
  31. Sniderman, P. M., & Grob, D. B. (1996). Innovations in experimental design in attitude surveys. Annual Review of Sociology, 22, 377-399.
  32. Tourangeau, R. (2004). Survey research and societal change. Annual Review of Psychology, 55, 775-801.
  33. Tourangeau, R., Rips, L. J., & Rasinski, K. A. (2000). The psychology of the survey response. New York: Cambridge University Press.
  34. Tucker, C., & Lepkowski, J. M. (2008). Telephone survey methods: Adapting to change. In J. M. Lepkowski, C. Tucker, J. M. Brick, E. D. de Leeuw, L. Japec, P. J. Lavrakas, et al. (Eds.), Advances in telephone survey methodology (pp. 3-28). Hoboken, NJ: Wiley.
  35. Web survey methods: http://www.websm.org/
  36. Weisberg, H. F. (2005). The total survey error approach: A guide to the new science of survey research. Chicago: University of Chicago Press.
  37. Weisberg, H., Krosnick, J. A., & Bowen, B. D. (1996). An introduction to survey research, polling, and data analysis (3rd ed.). Thousand Oaks, CA: Sage.