Cognitive Approaches to Questionnaires Research Paper


Respondents’ answers to standardized questions presented in self-administered questionnaires or in face-to-face and telephone interviews are a major source of social science data. These answers, however, are profoundly influenced by characteristics of the question asked, including the wording of the question, the format of the response alternatives, and the context in which the question is presented. The cognitive and communicative processes underlying these influences are increasingly well understood, and this research paper reviews their implications for question wording and questionnaire construction.

1. Respondents’ Tasks

Answering a question requires that respondents (a) interpret the question to understand what is meant and (b) retrieve relevant information from memory to form an answer. In most cases, they cannot provide their answer in their own words but (c) need to map it onto a set of response alternatives provided by the researcher. Finally, (d) respondents may wish to edit their answer before they communicate it to the interviewer, for reasons of social desirability and self-presentation. Respondents’ performance at each of these steps is context-dependent and profoundly influenced by characteristics of the questionnaire (Sudman et al. 1996, Chap. 3), including its administration mode (self-administered questionnaire, face-to-face interview, or telephone interview; Schwarz et al. 1991).

1.1 Question Comprehension

The key issue at the question comprehension stage is whether respondents’ understanding of the question matches the meaning the researcher had in mind. Not surprisingly, methodology textbooks urge researchers to write simple questions and to avoid unfamiliar or ambiguous terms (see Sudman and Bradburn 1983, for good practical advice). However, understanding the literal meaning of a question is not sufficient to provide an informative answer. When asked, ‘What have you done today?’ respondents are likely to understand the words—yet, they still need to determine what kind of activities the researcher is interested in. Should they report, for example, that they took a shower, or not? Providing an informative answer requires inferences about the questioner’s intention to determine the pragmatic meaning of the question.

To infer the pragmatic meaning, respondents draw on contextual information, including the content of adjacent questions and the nature of the response alternatives. Their use of this information is licensed by the tacit assumptions that govern the conduct of conversation in daily life, which respondents apply to the research interview (Clark and Schober 1992, Schwarz 1996). These pragmatic inferences have important implications for questionnaire construction.

1.1.1 Question Context. Respondents interpret questions in the context of the overall interview, and a term like ‘drugs’ acquires different meanings when presented in a survey pertaining to respondents’ medical history rather than to crime in the neighborhood. Moreover, they assume that adjacent questions are meaningfully related to one another, unless otherwise indicated. They may infer, for example, that an ambiguous ‘educational contribution’ refers either to fellowships that students receive or to tuition they have to pay, depending on the content of adjacent questions. Finally, they hesitate to reiterate information they have already provided, consistent with conversational norms that discourage redundancy. For example, questions pertaining either to happiness with one’s life or to life-satisfaction typically elicit highly similar answers, unless both questions are presented together. In the latter case, respondents assume that the researcher would not ask for the same thing twice and differentiate between the two questions, resulting in less similar answers. In self-administered questionnaires, question interpretation is influenced by preceding as well as following questions, whereas only preceding questions can exert an influence in face-to-face and telephone interviews (see Clark and Schober 1992, Sudman et al. 1996, Schwarz 1996 for extensive reviews).

1.1.2 Response Alternatives. Most questions are presented with a fixed set of response alternatives, which facilitates administration and data analysis. Far from being ‘neutral’ measurement devices, however, response alternatives influence question interpretation (Schwarz and Hippler 1991). First, consider the differences between open and closed response formats. When asked in an open response format ‘What have you done today?’ respondents are likely to omit activities that the researcher is obviously aware of (e.g., ‘I gave a survey interview’) or may take for granted anyway (e.g., ‘I took a shower’). Yet, if these activities were included in a closed list of response alternatives, most respondents would endorse them. At the same time, such a list would reduce reports of activities that are not represented on the list, even if an ‘other’ option were provided, which respondents rarely use. Both of these question form effects reflect that response alternatives can clarify the intended meaning of a question by specifying what the researcher is interested in. When using a closed response format, it is therefore important to ensure that the list of response alternatives covers the full range of behaviors or opinions.
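As a concrete sketch of this design implication (the item wording and the list of alternatives are illustrative, not drawn from the sources cited), a closed-format item can be represented so that its alternatives double as meaning cues, with an explicit ‘other’ option as a fallback:

```python
from dataclasses import dataclass

@dataclass
class ClosedItem:
    """A closed-format item; the listed alternatives double as meaning cues."""
    text: str
    alternatives: list[str]
    allow_other: bool = True  # rarely used by respondents, but should be offered

    def render(self) -> str:
        lines = [self.text]
        lines += [f"  [ ] {alternative}" for alternative in self.alternatives]
        if self.allow_other:
            lines.append("  [ ] other (please specify): ______")
        return "\n".join(lines)

item = ClosedItem(
    "What have you done today?",
    ["took a shower", "ate breakfast", "gave a survey interview",
     "ran errands", "watched television"],
)
print(item.render())
```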

Similarly, suppose respondents are asked how frequently they felt ‘really irritated’ recently. To infer whether ‘really irritated’ refers to major or to minor annoyances, respondents may draw on an apparently formal feature of the question, namely the numeric values of the frequency scale. When the scale presents low (high) frequency alternatives, respondents infer that the researcher is interested in rare (frequent) events. Hence, they report on major annoyances (which are relatively rare) when given a low frequency scale, but on minor annoyances when given a high frequency scale. As a result, the same question elicits reports of substantively different experiences, depending on the frequency scale used (Schwarz 1996, Chap. 5, Schwarz and Hippler 1991).

In sum, apparently formal differences in the response alternatives can profoundly influence respondents’ interpretation of the question. Far from reflecting superficial responding, such effects indicate that respondents do their best to make sense of the question asked by drawing on the full range of information available to them—including information the researcher never intended to provide. To safeguard against unintended question interpretations, researchers have developed a number of different pretesting procedures, reviewed in Sect. 3.

1.2 Recalling Or Computing A Judgment

Once respondents have determined what the researcher is interested in, they need to recall relevant information from memory. In some cases, respondents may have direct access to a previously formed relevant judgment that they can offer as an answer. In most cases, however, they will not find an appropriate answer readily stored in memory and will need to compute a judgment on the spot. The different judgmental processes pertaining to behavioral questions and opinion questions are discussed in Sect. 2 and in Attitude Measurement, respectively.

1.3 Formatting The Response

Unless the question is asked in an open response format, respondents need to format their answer to fit the response alternatives provided by the researcher. Respondents observe these question constraints and avoid answers that are not explicitly offered, as already seen in Sect. 1.1.2. For example, many more respondents answer ‘don’t know,’ or choose a middle alternative, when these options are explicitly offered than when they are not (see Schwarz and Hippler 1991, for a review). Under which conditions the addition or omission of these response options results in different substantive conclusions, and whether some respondents are more likely to be affected by the response alternatives than others, is the topic of some debate (see Krosnick and Fabrigar in press).
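As a brief sketch (the option wordings are hypothetical), the same attitude item can be fielded with or without an explicit middle alternative and ‘don’t know’ option; as noted above, respondents select these options far more often when they are explicitly offered:

```python
def build_alternatives(low: str, high: str,
                       with_middle: bool, with_dk: bool) -> list[str]:
    """Return the offered response list; middle/DK options appear only if offered."""
    alternatives = [f"strongly {low}", low, high, f"strongly {high}"]
    if with_middle:
        alternatives.insert(2, "neither / it depends")  # middle alternative
    if with_dk:
        alternatives.append("don't know")               # explicit DK option
    return alternatives

print(build_alternatives("oppose", "favor", with_middle=False, with_dk=False))
print(build_alternatives("oppose", "favor", with_middle=True, with_dk=True))
```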

Response formatting processes are particularly relevant when rating scales are used (Sudman et al. 1996, Chap. 4). Specifically, respondents use the most extreme stimuli to anchor the endpoints of the scale. Hence, a given stimulus will be rated as less extreme if presented in the context of a more extreme one than if presented in the context of a less extreme one. In addition, if the number of stimuli to be rated is large, respondents attempt to use all categories of the rating scale about equally often. Hence, two similar stimuli may receive different ratings when only a few stimuli are presented, but may receive the same rating when a large number of stimuli have to be located along the scale. Accordingly, ratings of the same object cannot be directly compared when they were collected in different contexts, rendering comparisons over time or between studies difficult. The psychometric properties of rating scales have been the topic of extensive investigation (see Krosnick and Fabrigar in press). Seven-point scales seem to be best in terms of reliability, percentage of undecided respondents, and respondents’ ability to discriminate between scale points. Moreover, scales that provide verbal labels for each scale value seem more reliable than scales with verbal endpoints only.
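The psychometric recommendation can be sketched as follows; the question and verbal labels are illustrative, but the structure, a seven-point scale with a verbal label for every point, follows the findings summarized above:

```python
# A fully labeled seven-point rating scale (labels are illustrative).
SATISFACTION_SCALE = {
    1: "completely dissatisfied",
    2: "mostly dissatisfied",
    3: "somewhat dissatisfied",
    4: "neither satisfied nor dissatisfied",
    5: "somewhat satisfied",
    6: "mostly satisfied",
    7: "completely satisfied",
}

def render_scale(question: str, scale: dict[int, str]) -> str:
    """Render a question with one verbal label per scale point."""
    points = [f"  {value} = {label}" for value, label in sorted(scale.items())]
    return "\n".join([question, *points])

print(render_scale("How satisfied are you with your leisure time?",
                   SATISFACTION_SCALE))
```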

Finally, respondents’ choice of a given response alternative is often influenced by the order in which the alternatives are presented. Under the auditory presentation conditions of telephone interviews, a given alternative is more likely to be endorsed when it is presented last rather than first (referred to as a recency effect). Under the visual presentation conditions of a self-administered questionnaire, a given alternative is more likely to be endorsed when it is presented first rather than last (referred to as a primacy effect; see Sudman et al. 1996, Chap. 6, for a discussion of the underlying processes).
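One common safeguard is sketched below, assuming a nominal (unordered) set of alternatives such as news sources; rotating the starting position across respondents is my illustration of counterbalancing, not a device discussed in the text itself:

```python
def rotated(alternatives: list[str], respondent_id: int) -> list[str]:
    """Rotate the list so each alternative leads equally often across respondents."""
    k = respondent_id % len(alternatives)
    return alternatives[k:] + alternatives[:k]

news_sources = ["newspapers", "television", "radio", "the internet"]
for rid in range(4):
    print(rid, rotated(news_sources, rid))
```

With orders counterbalanced, primacy or recency effects may still occur for each individual respondent, but they no longer bias the marginal distribution of answers across the sample.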

1.4 Editing The Response

Finally, respondents may want to edit their response before they communicate it, reflecting considerations of social desirability and self-presentation (see DeMaio 1984 for a review). Not surprisingly, the impact of self-presentation concerns is more pronounced in face-to-face interviews than in self-administered questionnaires. It is important to emphasize, however, that influences of social desirability are limited to potentially threatening questions and are typically modest in size. These influences can be reduced through techniques that ensure the confidentiality of the answer (see Sudman and Bradburn 1983 for detailed advice).

2. Asking Questions About Behavior

Most questions about respondents’ behavior are frequency questions, pertaining, for example, to how often the respondent has bought something or has missed a day at work during some specified period of time. Researchers would ideally like respondents to identify the behavior of interest; to scan the reference period; to retrieve all instances that match the target behavior; and to count these instances to determine the overall frequency of the behavior. This, however, is the route that respondents are least likely to take.

Except for rare and very important behaviors, respondents are unlikely to have detailed representations of numerous individual instances of a behavior stored in memory. Instead, the details of various instances of closely related behaviors blend into one global representation, rendering it difficult to distinguish and retrieve individual episodes. Accordingly, a ‘recall-and-count’ model does not capture how respondents answer questions about frequent behaviors or experiences. Rather, their answers are likely to be based on some fragmented recall and the application of inference rules to compute a frequency estimate. Thus, the best we can hope for under most conditions is a reasonable estimate, unless the behavior is rare and of considerable importance to respondents.

Conway (1990) provides a useful introduction to psychological research into autobiographical memory (i.e., individuals’ memory for their experiences and behaviors) and Bradburn et al. (1987), Sudman et al. (1996, Chaps. 7–9), Tourangeau et al. (2000), and the contributions in Schwarz and Sudman (1994) review the implications for questionnaire construction. Section 2.1 summarizes what researchers can do to facilitate recall and Sect. 2.2 addresses respondents’ estimation strategies.

2.1 Facilitating Recall

2.1.1 Recall Cues And Reference Periods. If researchers are interested in obtaining reports that are based on recalled episodes, they may simplify respondents’ task by providing appropriate recall cues and by restricting the recall task to a short and recent reference period. There are, however, important drawbacks to both of these strategies. Although the quality of recall will generally improve as the retrieval cues become more specific, respondents are likely to restrict their memory search to the particular cues presented to them, reflecting that the cues constrain the meaning of the question. As a result, respondents will omit instances that do not match the specific cues, resulting in under-reports if the list is not exhaustive. Moreover, using a short reference period may result in many ‘zero’ answers from respondents who rarely engage in the behavior, thus limiting later analyses to respondents with a high behavioral frequency. Finally, reference periods of different length may result in different question interpretations. Long reference periods (‘How often have you been angry last year?’) suggest that the researcher is interested in relatively rare events, whereas short reference periods (‘How often have you been angry last week?’) indicate an interest in frequent events. Hence, respondents may report on substantively different behaviors, e.g., rare and intense vs. frequent but minor episodes of anger.

Not surprisingly, different recall cues are differentially effective. The date of an event is the poorest cue, whereas cues pertaining to what happened, where it happened, and who was involved have been found to be very effective. In addition, recall will improve when respondents are given sufficient time to search memory. Recalling specific events may take up to several seconds and repeated attempts to recall may result in the retrieval of additional material, even after a considerable number of previous trials. Unfortunately, respondents are unlikely to have sufficient time to engage in repeated retrieval attempts in most research situations, and may often not be motivated to do so even if they had the time. This is particularly crucial in the context of survey research, where the available time per question is usually less than one minute.

Moreover, the direction in which respondents search memory may influence the quality of recall. Specifically, better recall is achieved when respondents begin with the most recent occurrence of a behavior and work backward in time than when they begin at the beginning of the reference period. This presumably occurs because memory for recent occurrences is richer and the recalled instances may serve as cues for recalling previous ones. Given free choice, however, respondents tend to prefer the less efficient strategy of forward recall.

Yet, even under optimal conditions, respondents will frequently be unable to recall an event or some of its critical details, even if they believed at the time it occurred that they would ‘certainly’ remember it. In general, the available evidence suggests that respondents are likely to under-report behaviors and events, which has led many researchers to assume that higher reports of mundane behaviors are likely to be more valid. Accordingly, a ‘the-more-the-better’ rule is frequently substituted for external validity checks.

2.1.2 Dating Recalled Instances. After recalling or reconstructing a specific instance of the behavior under study, respondents have to determine whether this instance occurred during the reference period. This requires that they understand the extension of the reference period and that they can accurately date the instance with regard to that period. Reference periods that are defined in terms of several weeks or months are highly susceptible to misinterpretation. For example, the term ‘during the last 12 months’ can be construed as a reference to the last calendar year, as including or excluding the current month, and so on. Similarly, anchoring the reference period with a specific date (‘Since March 1, how often …?’) is not very helpful, because respondents will usually not be able to relate an abstract date to meaningful memories.
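The ambiguity is easy to make concrete. The sketch below (the interview date is hypothetical) computes the two most natural readings of ‘during the last 12 months’, a rolling window ending on the interview date versus the last full calendar year, which cover substantially different periods:

```python
from datetime import date

today = date(2000, 3, 15)  # hypothetical interview date

# Reading 1: a rolling 12-month window ending on the interview date.
# (date.replace(year=...) fails only for Feb. 29 interview dates.)
rolling_start = today.replace(year=today.year - 1)

# Reading 2: the last full calendar year.
calendar_start = date(today.year - 1, 1, 1)
calendar_end = date(today.year - 1, 12, 31)

print("rolling window:", rolling_start, "to", today)
print("calendar year: ", calendar_start, "to", calendar_end)
```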

The most efficient way to anchor a reference period is the use of salient personal or public events, often referred to as landmarks. In addition to improving respondents’ understanding of the reference period, the use of landmarks facilitates the dating of recalled instances. Given that the calendar date of an event will usually not be among its encoded features, respondents were found to relate recalled events to other, more outstanding events in order to reconstruct the exact time and day. Accordingly, using public events, important personal memories or outstanding dates (such as New Year’s Eve) as landmarks was found to reduce dating biases (see Sudman et al. 1996, Tourangeau et al. 2000 for reviews).

Without a chance to relate a recalled event to a well-dated landmark, time dating is likely to reflect both forward and backward telescoping, that is, distant events are assumed to have happened more recently than they did, whereas recent events are assumed to be more distant than they are (Sudman et al. 1996, Chap. 8).

2.2 Estimation Strategies

In most cases, respondents need to rely on estimation strategies to arrive at an answer (Sudman et al. 1996, Chap. 9). Which strategy they use is often a function of the research instrument and hence to some extent under researchers’ control.

2.2.1 Decomposition Strategies. Many recall problems become easier when the recall task is decomposed into several subtasks. To estimate how often she has been eating out during the last three months, for example, a respondent may determine that she eats out about every weekend and had dinner at a restaurant this Wednesday, but apparently not the week before. Based on such partial recall, she may arrive at an estimate of ‘18 times during the last 3 months.’ Such estimates are likely to be accurate if the respondent’s inference rule is adequate, and if exceptions to the usual behavior are rare.
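The arithmetic of such a rate-based estimate can be sketched as follows; every quantity is an illustrative stand-in for the respondent’s partial recall and inference rule, chosen here so that the rule reproduces the ‘18’ of the example:

```python
# Rate-based decomposition estimate; all values are illustrative.
weeks_in_period = 13        # roughly three months
outings_per_weekend = 1.3   # 'about every weekend', occasionally twice
recalled_extras = 1         # this Wednesday's restaurant dinner

estimate = round(weeks_in_period * outings_per_weekend + recalled_extras)
print(f"estimated frequency over the last 3 months: {estimate}")  # -> 18
```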

In the absence of these fortunate conditions, however, decomposition strategies are likely to result in overestimates because people usually overestimate the occurrence of low frequency events and underestimate the occurrence of high frequency events. Hence, asking for estimates of a global, and hence frequent, category (e.g., ‘eating out’) is likely to elicit an underestimate, whereas asking for estimates of a narrow, and hence rare, category (e.g., ‘eating at a Mexican restaurant’) is likely to elicit an overestimate. The observation that decomposition usually results in higher estimates does therefore not necessarily reflect better recall.

2.2.2 Subjective Theories. A particularly important inference strategy is based on subjective theories of stability and change (Ross 1989). In answering retrospective questions, respondents often use their current behavior or opinion as a benchmark and invoke an implicit theory of self to assess whether their past behavior or opinion was similar to, or different from, their present behavior or opinion. The resulting estimates are correct to the extent that the subjective theory is correct. This, however, is often not the case.

In many domains, individuals assume a rather high degree of stability, resulting in retrospective reports that are too close to the current state of affairs, thus underestimating the degree of change that has occurred over time. For example, retrospective estimates of income or of tobacco, marijuana, and alcohol consumption were found to be heavily influenced by respondents’ income or consumption habits at the time of interview. On the other hand, when respondents have reason to believe in change, they will detect change, even though none has occurred. Assuming, for example, that therapy is helpful, respondents may infer that their problem behavior prior to therapy was more frequent than was actually the case. Relying on such theory-driven retrospective reports, researchers may infer therapeutic success where none has occurred (see Ross 1989 for examples).

2.2.3 Response Alternatives. In many studies, respondents are asked to report their behavioral frequencies by checking the appropriate alternative on a frequency scale of the type shown in Table 1. Assuming that the researcher constructed a meaningful scale, respondents commonly believe that the values in the middle range of the scale reflect the ‘average’ or ‘usual’ behavioral frequency, whereas the extremes of the scale correspond to the extremes of the distribution. Given this assumption, respondents can use the range of the response alternatives as a frame of reference in estimating their own behavioral frequency. This strategy results in higher estimates along scales that present high rather than low frequency response alternatives, as shown in Table 1. As may be expected, the impact of the frequency scale is most pronounced for frequent and mundane behaviors, which are poorly represented in memory (see Schwarz 1996 for a review).

[Table 1: low and high frequency response scales for reports of daily TV consumption; the original table is not reproduced here.]
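Because Table 1 is not reproduced here, the following sketch shows the kind of scale pair at issue; the category boundaries are illustrative rather than the published ones. A report of 2 hours a day falls in the upper range of the low frequency scale but in the lowest category of the high frequency scale, inviting opposite comparative inferences:

```python
# Hours-per-day category boundaries for two frequency scales (illustrative).
LOW_FREQUENCY_SCALE = [0.5, 1.0, 1.5, 2.0, 2.5]   # 'up to 1/2 h' ... 'more than 2 1/2 h'
HIGH_FREQUENCY_SCALE = [2.5, 3.0, 3.5, 4.0, 4.5]  # 'up to 2 1/2 h' ... 'more than 4 1/2 h'

def category(hours: float, boundaries: list[float]) -> int:
    """Return the 0-based index of the scale category a report falls into."""
    return sum(hours > b for b in boundaries)

print(category(2.0, LOW_FREQUENCY_SCALE))   # -> 3 (fourth of six categories)
print(category(2.0, HIGH_FREQUENCY_SCALE))  # -> 0 (lowest of six categories)
```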

In addition to influencing respondents’ behavioral reports, response alternatives may also affect subsequent comparative judgments. For example, a frequency of ‘2 h a day’ constitutes a high response on the low frequency scale, but a low response on the high frequency scale shown in Table 1. Checking this alternative, a respondent may infer that their own TV consumption is above average in the former case, but below average in the latter. Hence, respondents in the original study reported lower satisfaction with the variety of things they do in their leisure time when the low frequency scale suggested that they watch more TV than most other people.

To avoid these systematic influences of response alternatives, it is advisable to ask frequency questions in an open response format, such as, ‘How many hours a day do you watch TV? … hours per day.’ Note that such an open format needs to specify the relevant units of measurement, for example, ‘hours per day’ to avoid answers like ‘a few.’
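A minimal sketch of enforcing this advice (the validation bounds are my assumption): accept only a numeric answer expressed in the stated unit, so that vague replies trigger a follow-up probe rather than entering the data:

```python
def parse_hours_per_day(raw: str) -> float:
    """Validate an open-format answer given in hours per day."""
    value = float(raw)  # raises ValueError for replies like 'a few'
    if not 0 <= value <= 24:
        raise ValueError("hours per day must lie between 0 and 24")
    return value

print(parse_hours_per_day("2.5"))  # -> 2.5
# parse_hours_per_day("a few")     # -> ValueError, prompting a re-ask
```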

As another alternative, researchers are often tempted to use vague quantifiers, such as ‘sometimes,’ ‘frequently,’ and so on. This, however, is the worst possible choice (see Pepper 1981 for a review). Most importantly, the same expression denotes different frequencies in different content domains; for example, ‘frequently’ suffering from headaches reflects higher absolute frequencies than ‘frequently’ suffering from heart attacks. Moreover, different respondents use the same term to denote different objective frequencies of the same behavior. For example, suffering from headaches ‘occasionally’ denotes a higher frequency for respondents with a medical history of migraine than for respondents without such a history. Accordingly, a vague quantifier reflects the objective frequency relative to the respondent’s subjective standard, rendering vague quantifiers inadequate for the assessment of objective frequencies, despite their popularity.

3. Pretesting

Given the complexity of the question answering process, extensive pretesting is of paramount importance. Traditionally, questions were often assumed to ‘work’ if respondents could provide an answer. Recent psychological research, however, has facilitated the development of more sensitive pretest procedures (Sudman et al. 1996, Chap. 2, Schwarz and Sudman 1996). These procedures include the extensive use of probes and think-aloud protocols (summarily referred to as cognitive interviewing), detailed codings of interview transcripts, and the use of expert systems that alert researchers to likely problems. Drawing on these procedures, researchers can increase the odds that respondents understand the question as intended and use cognitive strategies that are likely to produce a meaningful answer.
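As one concrete illustration of cognitive interviewing, the sketch below assembles a set of generic probes of the kind used in such pretests; the probe wordings are my own paraphrases, not quotations from the sources cited:

```python
# Generic cognitive-interviewing probes (wordings are illustrative).
GENERIC_PROBES = [
    "What does the term '{term}' mean to you in this question?",
    "Can you repeat the question in your own words?",
    "How did you arrive at your answer?",
    "How sure are you of your answer?",
]

def probes_for(term: str) -> list[str]:
    """Instantiate the probe set for a potentially ambiguous term."""
    return [p.format(term=term) for p in GENERIC_PROBES]

for probe in probes_for("really irritated"):
    print("-", probe)
```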

Bibliography:

  1. Bradburn N M, Rips L J, Shevell S K 1987 Answering autobiographical questions: The impact of memory and inference on surveys. Science 236: 157–61
  2. Clark H H, Schober M F 1992 Asking questions and influencing answers. In: Tanur J M (ed.) Questions about Questions. Sage, New York, pp. 15–48
  3. Conway M A 1990 Autobiographical Memory: An Introduction. Open University Press, Milton Keynes, UK
  4. DeMaio T J 1984 Social desirability and survey measurement: A review. In: Turner C F, Martin E (eds.) Surveying Subjective Phenomena, Vol. 2. Russell Sage Foundation, New York, pp. 257–281
  5. Krosnick J A, Fabrigar L in press Handbook of Attitude Questionnaires. Oxford University Press, Oxford, UK
  6. Pepper S C 1981 Problems in the quantification of frequency expressions. In: Fiske D W (ed.) Problems with Language Imprecision (New Directions for Methodology of Social and Behavioral Science), Vol. 9. Jossey-Bass, San Francisco
  7. Ross M 1989 The relation of implicit theories to the construction of personal histories. Psychological Review 96: 341–57
  8. Schuman H, Presser S 1981 Questions and Answers in Attitude Surveys. Academic Press, New York
  9. Schwarz N 1996 Cognition and Communication: Judgmental Biases, Research Methods, and the Logic of Conversation. Erlbaum Associates, Mahwah, NJ
  10. Schwarz N, Hippler H J 1991 Response alternatives: The impact of their choice and ordering. In: Biemer P, Groves R, Mathiowetz N, Sudman S (eds.) Measurement Error in Surveys. Wiley, Chichester, UK, pp. 41–56
  11. Schwarz N, Strack F, Hippler H J, Bishop G 1991 The impact of administration mode on response effects in survey measurement. Applied Cognitive Psychology 5: 193–212
  12. Schwarz N, Sudman S (eds.) 1994 Autobiographical Memory and the Validity of Retrospective Reports. Springer Verlag, New York
  13. Schwarz N, Sudman S (eds.) 1996 Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research. Jossey-Bass, San Francisco
  14. Sudman S, Bradburn N M 1983 Asking Questions. Jossey-Bass, San Francisco
  15. Sudman S, Bradburn N M, Schwarz N 1996 Thinking about Answers: The Application of Cognitive Processes to Survey Methodology. Jossey-Bass, San Francisco
  16. Tanur J M (ed.) 1992 Questions about Questions. Sage, New York
  17. Tourangeau R, Rips L J, Rasinski K 2000 The Psychology of Survey Response. Cambridge University Press, Cambridge, UK