Cognitive Aspects Of Survey Design
'Cognitive aspects of survey design' refers to a research orientation, begun in the early 1980s, that integrates methods, techniques, and insights from the cognitive sciences into the continuing effort to render data from sample surveys of human populations more valid and reliable, and to understand systematically the threats to such validity and reliability.
1. Background And History
While a sample survey of human beings can be of good and measurable accuracy only if it is accomplished through probability sampling (see Sample Surveys: The Field; Sample Surveys: Methods), probability sampling is only the first step in assuring that a survey produces results accurate enough for use in predicting elections, gauging public opinion, measuring the incidence of a disease or the amount of acreage planted in a crop, or any of the myriad other important uses that depend on survey data. Questions must be worded not only in ways that are unbiased, but also in ways that are understandable to respondents and that convey the meaning the survey's author intended. If the questions refer to the past, they must be presented in ways that help respondents remember the facts accurately and report them truthfully. And interviewers must understand respondents' answers correctly in order to record and categorize them appropriately.
These and other nonsampling issues have concerned survey researchers for many decades (e.g., Payne 1951). Researchers were puzzled by the fact that changing the wording of a question slightly sometimes resulted in a different distribution of answers and sometimes did not; worried that the context of other questions sometimes affected answers to a particular question and sometimes did not; and concerned that respondents were sometimes able to remember things accurately and sometimes were not. In the late 1970s and early 1980s, as survey data came to be used ever more extensively for public policy purposes, concerns about the validity of those data became especially widespread.
These concerns led to a cross-disciplinary research endeavor that would bring the insights and methods of the cognitive sciences, especially cognitive psychology, to bear on the problems raised by surveys, and that would encourage researchers in the cognitive sciences to use surveys as a means of testing, broadening, and generalizing their laboratory-based conclusions. Recent work in the field has incorporated theories and viewpoints from fields not usually classified as the cognitive sciences, notably linguistics, conversational analysis, anthropology, and ethnography.
Described below are some of the practical and theoretical achievements of the movement to date. These include methodological transfers from the cognitive sciences to survey research, the broad establishment of cognitive laboratories in governmental and nongovernmental survey research centers, theoretical formulations that account for many of the mysteries generated by survey responses, and practical applications to the development and administration of on-going surveys. Some references appear at the close describing these achievements more fully and speculating on the future course of the field.
2. Methodological Transfers
Much work in the field is rooted in a classic information processing model applied to the cognitive tasks in a survey interview. The model suggests that the tasks the respondent must accomplish, in the rough order in which they must be tackled (though respondents go back and forth among them), are comprehension/interpretation, retrieval, judgment, and communication. This model has been intertwined with the earlier models of the survey interview as a social interaction (Sudman and Bradburn 1974) through the efforts of such researchers as Clark and Schober (1992) and Sudman et al. (1996), who take into account both the social communication and the individual cognitive processes that play a part in the survey interview.
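To make the staged model concrete, here is a minimal sketch in Python (the Respondent class, its memory field, and the single forward pass through the stages are hypothetical constructions for illustration, not anything proposed in the literature cited here):

```python
from dataclasses import dataclass

# Purely illustrative sketch of the four-stage response model. The stage
# names come from the model; everything else is invented for the example.

@dataclass
class Respondent:
    memory: dict  # stand-in for autobiographical memory

    def comprehend(self, question: str) -> str:
        """Comprehension/interpretation: map the question text to a topic."""
        return question.lower().strip("?")

    def retrieve(self, topic: str) -> list:
        """Retrieval: pull whatever is accessible from memory for the topic."""
        return self.memory.get(topic, [])

    def judge(self, evidence: list) -> int:
        """Judgment: integrate retrieved evidence into a candidate answer."""
        return len(evidence)  # here, simply a count of recalled events

    def communicate(self, judgment: int) -> str:
        """Communication: format the judgment as a codable response."""
        return str(judgment)

    def answer(self, question: str) -> str:
        # Real respondents cycle back and forth among the stages; this
        # sketch runs them once, in order.
        return self.communicate(self.judge(self.retrieve(self.comprehend(question))))

r = Respondent(memory={"doctor visits in the last 6 months": ["January", "March"]})
print(r.answer("Doctor visits in the last 6 months?"))  # -> "2"
```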
Guided by this model, one of the earliest achievements of the movement to study cognitive aspects of survey design was the adoption of some of the methods of the cognitive psychology laboratory for the pretesting of survey questionnaires. In particular, these methods are aimed at ensuring that questions are comprehensible to respondents and that the meanings transmitted are those intended by the investigator. They can also point to problems in retrieval and communication. Such cognitive pretesting includes 'think-aloud protocols,' in which specially recruited respondents answer the proposed questionnaire aloud to a researcher and describe, either concurrently with each question or retrospectively at the end of the questionnaire, their thought processes as they attempt to understand the question and to retrieve or construct an answer. Also used is behavioral coding, based on procedures originally developed by Cannell and coworkers (e.g., Cannell and Oksenberg 1988), which notes questions for which respondents ask for clarification or give inadequate responses. Other methods include reviews of questionnaires by cognitively trained experts, the use of focus groups, and, for questionnaires that are automated, measures of response latency. Detailed descriptions of these methods and their applicability to detecting problems at various stages of the question-answering process appear in Sudman et al. (1996), Schwarz and Sudman (1996), and Forsyth and Lessler (1991).
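As a rough illustration of how behavioral coding might be tallied in practice, the following hypothetical sketch (the behavior codes, the threshold, and the data layout are all invented for the example; actual coding schemes are more elaborate) flags questions that frequently provoke requests for clarification or inadequate answers:

```python
from collections import defaultdict

def flag_problem_questions(coded_interviews, threshold=0.15):
    """coded_interviews: one dict per interview mapping question id to the
    behavior code assigned by the coder. Returns ids of questions whose
    rate of problem behaviors exceeds the (arbitrary) threshold."""
    problems = defaultdict(int)
    totals = defaultdict(int)
    for interview in coded_interviews:
        for qid, code in interview.items():
            totals[qid] += 1
            if code in {"CLARIFY", "INADEQUATE"}:  # invented problem codes
                problems[qid] += 1
    return [qid for qid in totals if problems[qid] / totals[qid] > threshold]

interviews = [
    {"q1": "OK", "q2": "CLARIFY"},
    {"q1": "OK", "q2": "INADEQUATE"},
    {"q1": "OK", "q2": "OK"},
]
print(flag_problem_questions(interviews))  # -> ['q2']
```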
3. Interviewer Behavior
Although survey interviewing began with investigators themselves merely going out and having conversations with respondents, as the survey enterprise aspired to greater scientific respectability and as it became sufficiently large-scale so that many interviewers were needed for any project, standardized interviewing became the norm. Printed questionnaires were to be followed to the letter; interviewers were instructed to answer respondents’ requests for clarification by merely re-reading the question or by saying that the meaning of any term is ‘whatever it means to you.’ While it is probably true that skilled interviewers have always deviated from these rigid standards in the interest of maintaining respondent cooperation and getting valid data, the robot-like interviewer delivering the same ‘stimulus’ to every respondent was considered the ideal. Working with videotapes of interviews for the National Health Interview Survey and for the General Social Survey and using techniques of conversational analysis, Suchman and Jordan (1990) showed that misunderstandings frequently arose because of the exclusion of normal conversational resources from the survey interview, and that these misunderstandings resulted not only in discomfort on the part of respondents, but in data that did not meet the needs of the survey designer. Suchman and Jordan recommended a more collaborative interviewing style, in which interviewer and respondent work together to elicit the respondent’s story and then fill in the questionnaire. Note that this recommendation applies almost exclusively when the information sought is about the respondent’s activities or experiences rather than about his/her attitudes or opinions. Both theoretical and practical advances have followed on these insights.
On a theoretical level, several researchers have used variants of the 'cooperativeness' principle advanced by Grice (1975) to help explain some of the anomalies that have puzzled survey researchers over the decades. Many are understandable as respondents bringing to the survey interview the communicative assumptions and skills that serve them well in everyday life. The maxims of the cooperativeness principle require a participant in a conversation to be truthful, relevant, informative, and clear (Sudman et al. 1996, p. 63). Thus, for example, it has long been known that respondents are willing to report on their attitudes towards fictitious issues or nonexistent groups, and investigators have taken this willingness as evidence of the gullibility of respondents and perhaps of the futility of trying to measure attitudes at all. But to 'catch on' to the 'trick' nature of such a question, a respondent would have to assume that the interviewer had violated all the maxims of the cooperativeness principle. Parsimony suggests, instead, that the respondent would assume that the interviewer knew what s/he was talking about, and answer accordingly. Several puzzles about context effects, discussed below, also seem understandable in light of Grice's maxims.
On a practical level, freeing interviewers to be more conversational with respondents is not costless; when interviewers explain more, interviews take longer and are thus more costly in interviewers' wages and respondents' patience. Schober and Conrad (e.g., 1997) have embarked on a program of research demonstrating that conversational interviewing generates better data than standardized interviewing when the mappings of the respondent's experiences onto the survey's concepts are complicated, though at the cost of longer interviews. These investigators are now experimenting with interviewing styles that fall at several points along the continuum between fully standardized and fully conversational, seeking a point that balances the gains in accuracy from a more conversational style against the costs in interview length.
4. Factual Questions vs. Questions About Attitudes Or Opinions
The distinction between questions that ask respondents to report on facts, usually autobiographical, and those that ask for an attitude or an opinion has been an important one in survey research. Indeed, the explicit origins of the movement to study cognitive aspects of survey design lie in a desire to improve the accuracy of factual reports from respondents; the extension of concern to attitudinal questions came somewhat later, although such a concern was certainly prefigured by the work reported in Turner and Martin (1984). The distinction remains important for interviewer behavior, in the sense that the kinds of conversational interviewing being investigated are designed to help respondents better comprehend the intent of the question and to recall their experiences more accurately. Interviewer interventions designed for these purposes when a question aims at a factual report might well bias the respondent's answer when the aim is to elicit an attitude or opinion. The distinction is also valid in the sense that the cognitive theories about autobiographical memory that have been used to understand problems of the recall of factual information are less applicable to attitudes and opinions, and the cognitive techniques that have been developed to aid respondent recall for autobiographical questions are not appropriate for the recall of attitudes or opinions. But in important ways the distinction obscures similarities between the two types of questions, as will be discussed in the section on context effects below.
4.1 Improving The Accuracy Of Factual Reports
Very often survey questions ask for the frequency with which a respondent has accomplished a behavior (e.g., visited a doctor) or experienced an event (e.g., had an illness) during a specified time period (usually starting sometime in the past and ending with the date of the interview) called the reference period. Thus the respondent, having understood what events or experiences should be included in the report, has a twofold task: he or she must recall the events or experiences, and also determine whether those events or experiences fell inside the reference period. Theories from the cognitive sciences about how experiences are stored in, and retrieved from, autobiographical memory have been marshaled to understand both phases of the respondent's recall task and to improve the accuracy of each.
In a phenomenon called 'telescoping,' respondents are known often to move events or experiences forward in time, reporting them as falling within the reference period although in fact they occurred earlier. One of the earliest contributions to the literature on cognitive aspects of survey design was the proposal of a technique dubbed 'landmarking' to control telescoping (Loftus and Marburger 1983). Here, rather than inquiring about the last six months or the last year, a respondent is supplied with (or is asked to supply) a memorable date, and then is asked about events or experiences since that date. The related concept of bounding is also useful in controlling telescoping. In a panel survey, in which respondents are interviewed repeatedly at regular intervals, informing them of the events or experiences reported at the last interview will prevent such events or experiences from being placed in the time period between the previous interview and the current one. An analogous technique usable in a single interview is the two-time-frame procedure, in which the respondent is first asked to report events or experiences during a long reference period (e.g., the last 6 months) and then for a shorter one that is really of interest to the investigator (e.g., the last 2 months). This technique seems both to relieve the pressure on respondents to report socially desirable events or experiences and to stress the interviewer's interest in correct dating, encouraging respondents to strive for greater accuracy. A theoretical explanation of telescoping, taking into account the workings of memory for the storage of timing and the effects of rounding to culturally stereotypical values (e.g., 7 days, 10 days, 30 days), is provided by Huttenlocher et al. (1990).
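The logic of bounding against a landmark date can be sketched as follows (a hypothetical illustration: the events, dates, and function are invented, and real surveys must rely on the respondent's own dating rather than recorded dates):

```python
from datetime import date

def bounded_reports(reported_events, landmark):
    """Separate reports into those the respondent dates on or after the
    landmark and those dated earlier, which would otherwise be telescoped
    into the reference period."""
    kept = [e for e in reported_events if e["date"] >= landmark]
    telescoped = [e for e in reported_events if e["date"] < landmark]
    return kept, telescoped

events = [
    {"what": "dentist visit", "date": date(2000, 3, 10)},
    {"what": "flu", "date": date(1999, 12, 20)},  # before the landmark
]
kept, telescoped = bounded_reports(events, landmark=date(2000, 1, 1))
print(len(kept), len(telescoped))  # -> 1 1
```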
The task of counting or estimating the number of events or experiences in the reference period can also challenge the respondent. Issues of frequency, regularity, and similarity influence whether the respondent is likely to actually count the instances or to employ an estimation strategy, as well as whether counting or estimation is likely to be more accurate. In general, respondents are more likely to retrieve events or experiences from memory and count them if their number is small, and to estimate if their number is large. But regular events (e.g., going to the dentist every 6 months) and similar events (e.g., trips to the grocery store) are more likely to be estimated. And investigators have recently found that estimation (perhaps by retrieving a rate and applying that rate to the length of the reference period) is usually more accurate than the attempt to retrieve and count regular, similar events or experiences. For a full discussion of these issues, see Sudman et al. (1996).
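The two strategies can be contrasted in a minimal sketch (hypothetical throughout; the recalled episodes and the rate are invented):

```python
def count_strategy(recalled_episodes):
    """Enumerate-and-count: retrieve individual episodes and count them."""
    return len(recalled_episodes)

def estimation_strategy(rate_per_week, reference_weeks):
    """Rate-based estimation: retrieve a typical rate and apply it to the
    length of the reference period."""
    return round(rate_per_week * reference_weeks)

# A respondent asked about grocery trips in the last 8 weeks may recall only
# a couple of distinct episodes, yet hold a stable rate of about 1.5 a week.
print(count_strategy(["trip on the 3rd", "trip last Saturday"]))   # -> 2
print(estimation_strategy(rate_per_week=1.5, reference_weeks=8))   # -> 12
```

For regular, similar events the stored rate is the more reliable memory trace, which is consistent with the finding that estimation tends to be more accurate in such cases.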
4.2 Context Effects
Context effects occur when the content of the questions preceding a particular question (or even following it, in a self-administered questionnaire in which the respondent is free to return to earlier questions after reading later ones) influences how some respondents answer that question. Our understanding of the term has recently been broadened to include the effects of the response categories offered in a closed-ended question on the distribution of responses. Investigators have used the maxims of Grice's cooperativeness principle to approach a systematic understanding of these effects, which seemed mysterious for many years. Full treatments of these issues may be found in Sudman et al. (1996) and Tourangeau (1999); of necessity only some selected findings can be presented here.
A pair of puzzling context effects that have fascinated researchers for years are assimilation effects and contrast effects. An assimilation effect occurs when the content of the preceding items moves the responses to the target item in the direction of the preceding items; a contrast effect, when the movement is in the opposite direction. These effects can now be understood through the inclusion/exclusion model of the judgment process in survey responding, proposed by Schwarz and Bless (1992). The model holds that when a respondent has to make a judgment about a target stimulus, s/he must construct a cognitive representation not only of the target stimulus, but also of a standard with which to compare it. To construct these representations respondents must retrieve information from memory; since not all information in memory can be retrieved, respondents retrieve only that which is most easily accessible. And information can be accessible either because it is 'chronically' accessible or because it is temporarily accessible; it is the temporary accessibility of information that is affected by the context of the question. Positive temporarily accessible information supplied by the context and included in the representation of the target will render the judgment of the target more positive; negative context information thus included will render the judgment more negative. Both these processes produce assimilation effects. Contrast effects are created when the context suggests that information be excluded from the representation of the target, or included in the representation of the standard to which the target is to be compared.

For example, Schwarz and Bless (1992) were able to manipulate the frequency of positive evaluations of a political party by using a reference to a popular politician in a preceding question. They were able to create an assimilation effect, resulting in more positive evaluations, by encouraging respondents to include the politician in their representations of the party (by having them recall that the politician had been a member of the party). They were also able to create a contrast effect, resulting in less positive evaluations of the party, by encouraging respondents to exclude the politician from their representation of the party (by having them recall that his present position took him out of party politics). The model predicts inclusion of more information when the target is a general question, thus encouraging assimilation effects; it also predicts contrast effects when the target is narrow. Thus, a preceding question about marital happiness seems to result in higher reports of general happiness for those who are happily married, but lower reports of general happiness for those who are unhappy in their marriages. This influence of the preceding question can be eliminated by having respondents see the questions about marital happiness and general happiness as part of a single unit (either by an introduction that links the two questions or by a typographical link on a printed questionnaire); then, in obedience to the conversational maxim that speakers should present new information, respondents exclude their marital happiness from their evaluation of their general happiness.
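The model's central mechanism can be caricatured in a small simulation (a deliberately crude, hypothetical sketch: the valence numbers and the difference-of-means comparison are invented stand-ins for the actual cognitive processes):

```python
from statistics import mean

def judgment(target_info, standard_info):
    """Judge the target relative to the standard as a difference of mean
    valences; a crude stand-in for the comparison process."""
    return mean(target_info) - mean(standard_info)

chronic = [0.1, 0.2]   # chronically accessible information about a party
politician = 0.9       # highly positive, made temporarily accessible

# Assimilation: the politician is included in the target representation,
# raising the judgment of the party.
print(judgment(chronic + [politician], standard_info=[0.0]))

# Contrast: the politician is excluded from the target and instead anchors
# the comparison standard, lowering the judgment of the party.
print(judgment(chronic, standard_info=[politician]))
```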
Similar recourse to conversational maxims helps explain the influence of the response alternatives presented in closed-answer questions on the distribution of responses. For example, Schwarz et al. (1985) found that 37.5 percent of respondents reported watching TV for more than two and a half hours a day when the response scale ranged from 'up to 2½ hours' to 'more than 4½ hours,' but only 16.2 percent reported watching more than two and a half hours a day when the response scale ranged from 'up to ½ hour' to 'more than 2½ hours.' Respondents take the response scales to be conversationally informative about what typical TV watching habits are and calibrate their reports accordingly.
5. Some Achievements Of The Movement To Study Cognitive Aspects Of Surveys
5.1 The Impact Of Theories
Perhaps the most profound effect of the movement to study cognitive aspects of survey design is the introduction of theory into the research area. There has been space above to present only a few of the many such theoretical advances, but their utility should be clear. For example, consulting the maxims of cooperativeness in conversation gives us a framework for predicting and preventing response effects. In the same sense, the inclusion/exclusion model, much more fully developed than is presented here, makes predictions for what to expect from various manipulations of question ordering and content. These predictions offer guidance for the experimental advance of the research domain, and their testing systematically advances our knowledge, both of the phenomena and of the applicability of the theory.
5.2 The Cognitive Laboratories And Their Impact
Cognitive interviews in laboratory settings have become standard practice in several US government agencies (including the National Center for Health Statistics, the Bureau of Labor Statistics, and the Census Bureau), as well as in government agencies around the world, and at many academic and commercial survey organizations. Work in these laboratories supplements the usual large-scale field pretests of surveys. For illustrative purposes it is worth describing two major efforts at US government agencies.
The Current Population Survey (CPS) is a large-scale government survey, sponsored by the Bureau of Labor Statistics and carried out by the Census Bureau. Major government statistical series, including the unemployment rate, are derived from interviews with some 50,000 households monthly. In the early 1990s the CPS underwent its decennial revision, this time using the facilities of the government cognitive laboratories as well as traditional field testing of proposed innovations. In particular, this research found that the ordering and wording of the key questions regarding labor force participation caused some respondents to report themselves as out of the labor force when in fact they were either working part time or looking for work. It also found that the concept of layoff, which the CPS takes to mean temporary furlough from work with a specific date set for return, was not interpreted that way by respondents. Respondents seemed to understand 'on layoff' as a polite term for 'fired.' Appropriate revision of these questions, followed by careful field testing and by running the new and old versions in parallel for some months, resulted in estimates of the unemployment rate in which policy makers and administrators can have greater confidence. More details can be found in Norwood and Tanur (1994).
The US Decennial Census has always asked for a racial characterization of each resident. The question was originally designed to distinguish whites (mostly free) from blacks (mostly slaves) for purposes of counting for apportionment (the US Constitution originally mandated that a slave be counted as three-fifths of a man), but the uses to which answers to the race identification question were put evolved over time. Civil Rights legislation starting in the 1960s designated protected groups, and it thus became important to know the population of such groups. At the same time, individuals were becoming both more conscious of their racial self-identification and less willing to be pigeonholed into a small set of discrete categories, especially as more individuals were self-identifying with more than one racial group. The 1990 Census asked two questions in this general area: one on racial identity followed by one on Hispanic ethnic identification. Experimental work (Martin et al. 1990) indicated that the ordering of these questions mattered to respondents. For example, when the Hispanic ethnicity question appeared first, fewer people chose 'other' as their racial category. This and other research made a convincing case that the complexity of Americans' racial and ethnic self-identification needed a more complex set of choices on the Census form. A 4-year program of research resulted in the conclusion that allowing respondents to choose as many racial categories as they believe apply to them is a solution superior to the provision of a multiracial category (see Tucker et al. 1996). The 2000 Decennial Census allowed respondents to choose as many racial categories as they wished; at this writing it is not yet known what proportion of the population took advantage of that opportunity, or what effects those choices will have on analyses of the data.
6. Conclusions
The movement to study cognitive aspects of surveys has thus given us some theoretical insights and practical approaches. Ideally, we could hope for a theory of the survey designing and responding process, but we are still far from that goal, and may never attain it. But the frameworks so far provided offer clear guidance on how to proceed in the effort to reach systematic understanding. Far fuller accounts of the results of the movement to study cognitive aspects of survey design, and ideas about future directions, can be found in Sirken et al. (1999a), Sirken et al. (1999b), Sudman et al. (1996), and Tanur (1992).
Bibliography:
- Cannell C F, Oksenberg L 1988 Observation of behavior in telephone interviews. In: Groves R M, Biemer P B, Lyberg L E, Massey J T, Nicholls II W L, Waksberg J (eds.) Telephone Survey Methodology. Wiley, New York
- Clark H H, Schober M F 1992 Asking questions and influencing answers. In: Tanur J M (ed.) Questions about Questions: Inquiries into the Cognitive Bases of Surveys. Sage, New York
- Forsyth B H, Lessler J 1991 Cognitive laboratory methods: A taxonomy. In: Biemer P, Groves R M, Lyberg L E, Mathiowetz N, Sudman S (eds.) Measurement Errors in Surveys. Wiley, New York
- Grice H P 1975 Logic and conversation. In: Cole P, Morgan J L (eds.) Syntax and Semantics, Vol 3: Speech Acts. Academic Press, New York
- Hippler H J, Schwarz N, Sudman S (eds.) 1987 Social Information Processing and Survey Methodology. Springer-Verlag, New York
- Huttenlocher J, Hedges L V, Bradburn N M 1990 Reports of elapsed time: Bounding and rounding processes in estimation. Journal of Experimental Psychology: Learning, Memory, and Cognition 16: 196–213
- Loftus E F, Marburger W 1983 Since the eruption of Mt. St. Helens, did anyone beat you up? Improving the accuracy of retrospective reports with landmark events. Memory and Cognition 11: 114–20
- Martin E, Demaio T J, Campanelli P C 1990 Context effects for census measures of race and Hispanic origin. Public Opinion Quarterly 54: 551–66
- Norwood J L, Tanur J M 1994 Measuring unemployment in the nineties. Public Opinion Quarterly 58: 277–94
- Payne S L 1951 The Art of Asking Questions. Princeton University Press, Princeton, NJ
- Schober M F, Conrad F G 1997 Does conversational interviewing reduce survey measurement error? Public Opinion Quarterly 61: 576–602
- Schwarz N, Bless H 1992 Constructing reality and its alternatives: Assimilation and contrast effects in social judgment. In: Martin L L, Tesser A (eds.) The Construction of Social Judgment. Erlbaum, Hillsdale, NJ
- Schwarz N, Hippler H J, Deutsch B, Strack F 1985 Response categories: Effects on behavioral reports and comparative judgments. Public Opinion Quarterly 49: 388–95
- Schwarz N, Sudman S (eds.) 1996 Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research. Jossey-Bass, San Francisco
- Sinaiko H, Broedling L A (eds.) 1976 Perspectives on Attitude Assessment Surveys and their Alternatives. Pendleton, Champaign, IL
- Sirken M G, Herrmann D J, Schechter S, Schwarz N, Tanur J M, Tourangeau R (eds.) 1999a Cognition and Survey Research. Wiley, New York
- Sirken M G, Jabine T, Willis G, Martin E, Tucker C (eds.) 1999b A New Agenda for Interdisciplinary Survey Research Methods: Proceedings of the CASM II Seminar. US Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Hyattsville, MD
- Suchman L, Jordan B 1990 Interactional troubles in face-to-face survey interviews. Journal of the American Statistical Association 85: 232–53
- Sudman S, Bradburn N M 1974 Response Effects in Surveys: A Review and Synthesis. Aldine, Chicago
- Sudman S, Bradburn N M, Schwarz N 1996 Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. Jossey-Bass, San Francisco
- Tanur J M (ed.) 1992 Questions about Questions: Inquiries into the Cognitive Bases of Surveys. Sage, New York
- Tourangeau R 1999 Context effects on answers to attitude questions. In: Sirken M G, Herrmann D J, Schwarz N, Tanur J M, Tourangeau R (eds.) Cognition and Survey Research. Wiley, New York
- Tucker C, McKay R, Kojetin B, Harrison R, de la Puente M, Stinson L, Robison E 1996 Testing methods of collecting racial and ethnic information: Results of the Current Population Survey Supplement on Race and Ethnicity. Bureau of Labor Statistics, Statistical Notes, No. 40
- Turner C F, Martin E (eds.) 1984 Surveying Subjective Phenomena. Sage, New York