Triangulation Research Paper


1. Introduction

Triangulation refers to the strategy of using multiple operationalizations of constructs to help separate the construct under consideration from the irrelevancies inherent in any single operationalization. At its simplest level, triangulation refers to the use of multiple measures to capture a construct. The triangulation strategy, however, can also be applied to multiple operationalizations of treatments and manipulations and to the use of multiple theories, analyses, analysts, methodologies, and research designs, to name but a few. At a basic level, the triangulation strategy requires an acknowledgment that no single measure in the social sciences perfectly captures the construct under consideration. With only one measure of the construct, the error and biases inherent in the measure are inextricably confounded with the construct of interest. By using different measures with different irrelevancies, a researcher can bring the construct of interest into better focus. Although this triangulation logic was initially explicated in the social sciences in terms of measures (e.g., Campbell and Fiske 1959), the same logic can be extended to treatments, settings, populations, and many other aspects of the research enterprise. Although simple in concept, triangulation ‘carries with it a host of implications regarding the proper conduct of social research, the effects of imperfection or unreliability of measurement operations on the development of theory, and the manner in which our field might gradually attain the status of the more developed sciences’ (Crano 1981, p. 320).



2. History Of The Triangulation Concept

Over 40 years ago, social scientists called for triangulation of research measures as a means of more adequately capturing the essence of the construct under consideration (Campbell and Fiske 1959, Webb et al. 1966). The term ‘triangulation’ was probably borrowed from surveying, where multiple measures from different vantage points are used to locate an object in space, and explanations based on physical analogues are instructive. (See, for example, Crano’s 1981 example of locating a radio signal by using multiple directional antennae.) Social scientists suggested that social and psychological constructs, as well as physical objects, could be brought into clearer focus by using multiple measures from different vantage points with unique sources of error. The uses of triangulation quickly expanded to include triangulation of methods, sources, and theories, as well as measures (e.g., Denzin 1978, 1989).

Some researchers adopted the triangulation strategy wholeheartedly and launched large-scale, multi-investigator, multimethod, multisite, multimeasure studies of social problems (e.g., Lewis 1981). The need to contend with conflicting results that arose from different methods and measures in the same study led to extensive discussions of the use of qualitative and quantitative research methods and data, across and within the same project (e.g., Reichardt and Cook 1979). More recently, the focus of discussion has shifted to triangulating among multiple paradigms (Greene and Caracelli 1997, Greene and McClintock 1985), multiple stakeholders, multiple analysts, multiple theoretical and value frameworks, and multiple targets within a study, a concept Cook (1985) calls ‘multiplism.’ Developments in analysis techniques have also promoted the use of triangulation, either at the measurement level, as in causal modeling (Joreskog and Sorbom 1984), or at the method level, as in meta-analysis (Glass et al. 1981, Rosenthal 1991).




3. Logic Of Triangulation

Triangulation strategies can be contrasted with what Cook and Campbell (1979) termed ‘mono-operation bias.’ Whenever a construct is operationalized in only one way in a research study, the error attached to the method of operationalization, as well as any random error in the operationalization, is inextricably linked with the true essence of the construct under consideration. This relationship is captured by the simple equation X = T + e, meaning that every data point (X) is composed of part true score (T) and part error (e). (See, for example, Crano 1981 for further discussion.) With only one measure of a construct, the error caused by method bias, response sets, or investigator bias, to name but a few, cannot be disentangled from the true measure of the construct. (See Cook and Campbell 1979 and Webb 1978 for discussions of the myriad types of error.) By triangulating, or operationalizing the construct in several ways, the sources of influence on the data point can be disentangled, leading to a clearer understanding of the construct under consideration. Campbell and Fiske’s (1959) classic discussion of the multitrait-multimethod matrix lays out a strategy for such specification.
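
A minimal simulation sketch of this X = T + e logic, using invented numbers: each of several measures carries its own unshared error, so any single measure is confounded with that error, while combining measures with heterogeneous irrelevancies moves the estimate closer to the true score.

```python
import numpy as np

# Minimal sketch of the X = T + e logic (all numbers are invented).
rng = np.random.default_rng(0)

T = 50.0                                     # true score of the construct
n_measures = 5                               # operationalizations with unshared error
e = rng.normal(0, 10, size=n_measures)       # heterogeneous irrelevancies
X = T + e                                    # each observed data point: X = T + e

print("single measure:", round(X[0], 1))         # confounded with its own error
print("triangulated mean:", round(X.mean(), 1))  # unshared errors partially cancel
```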

The acknowledgment of the necessity of triangulation represented a maturing of the social sciences beyond the paradigm envy which had previously gripped the field (Crano 1981). Early social scientists had attempted to gain acceptance and legitimacy by adopting much of the research trappings of the physical or ‘hard’ sciences. Operating from the positivist view of science (Cook 1985), social scientists attempted to achieve parsimonious explanations of events and circumstances that would hold across times and settings, much as physical scientists were claiming to do. They sought the precise, well-calibrated measures, the elegant, critical experiments, and the straightforward analyses of the hard sciences. In some areas of research, this approach could be fairly successful. For example, a social scientist could operationalize ‘response time’ as ‘the amount of time that is recorded on this particular calibrated recording device between stimulus and response,’ with stimulus and response both being defined in terms of their operationalizations (e.g., ‘the onset of the light’ and ‘pushing the button,’ respectively). But as soon as the social scientists ventured beyond the laboratory and basic research, they discovered that mono-operationalization was inadequate to capture the true essence of the constructs, that simple research methods were inadequate to tap underlying relationships accurately, and that simple analysis strategies were inadequate to describe the complicated interactive patterns in the data.

For example, social scientists quickly discovered that they could not define ‘family size’ as ‘the number of people listed on the interview schedule under the heading Family’ and be confident that they had captured the true essence of the construct ‘family size.’ Beyond the problems with deciding whom to list as ‘family’ (such as deciding what to do with people who are not biologically related but may be committed to the unit and people who are biologically related but rarely present), social scientists realized that some people are not going to report accurately, either because they misunderstand the question, because they are not aware of the true situation, or because they choose to report incorrectly.

To compensate for these problems with error and bias, researchers began to use multiple operationalizations of key constructs, and many claims of triangulation in research design are based solely on the use of multiple items to tap a construct. But, as Webb (1978) and others pointed out, using multiple instances of measures that share critical components of error does not improve the research much beyond identifying some random error. To be most useful, multiple measures must have differing sources of error, including method error and errors due to response bias and self-presentation, among many others. Campbell termed this important qualification the ‘heterogeneity of irrelevancies’ (Brewer and Collins 1981).

So, for example, having the same interviewer ask the same respondent ‘how many people live in this house?’ in a dozen different ways during the same face-to-face interview does not advance the construct validity of ‘family size’ very much. A respondent who really does not know that her daughter’s boyfriend has been living upstairs will report incorrectly every time. A respondent who does not want to reveal that a cousin who is in the country illegally lives there will consistently not report that person as living in the house. An interviewer who consistently reverses threes and fives will do that every time. So repeated questioning only reduces the error associated with not understanding a particular question or with random errors made by the respondent or the interviewer. This is not to imply that reducing such error is unimportant, but triangulation can be much more powerful when used across types of measures instead of within one method of measurement.
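
A small sketch of this point, with invented numbers: a bias shared by every repetition of the question (such as one resident who is never reported) survives any amount of repeated asking, whereas the random slips average away.

```python
import numpy as np

rng = np.random.default_rng(1)

true_household_size = 6
shared_bias = -1.0                 # e.g., one resident who is never reported
n_repeats = 12                     # asking the same question a dozen different ways

# Every repetition carries the same shared bias plus its own random slip.
reports = true_household_size + shared_bias + rng.normal(0, 0.5, n_repeats)

print("mean of repeated reports:", round(reports.mean(), 2))   # close to 5, not 6
print("bias left after averaging:", round(reports.mean() - true_household_size, 2))
```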

Partly in response to the shared error inherent in verbal responses, Webb et al. (1966) suggested the use of unobtrusive measures as a means of tapping constructs without the error inherent in verbal responses. Unobtrusive measures are not without error, but their usefulness comes from the fact that many of their error sources are different from the error sources that plague verbal responses (e.g., social desirability, response set). In one famous example, researchers measured the popularity of different museum exhibits by measuring the wear and tear on floor tiles surrounding different displays (Webb et al. 1966). Another researcher measured level of activity among nursing home residents by measuring how dirty adhesive tape attached to their wheelchair wheels became. And Mosteller (1955, cited in Webb 1978) measured the usage of various sections of the International Encyclopedia of the Social Sciences by noting how worn and dirty various sections of the book were. (Note to reader: Feel free to smudge, bend, and dog-ear this page.)

4. Extensions Of Triangulation

The logic of triangulation was quickly extended to methods, treatments, theories, settings, and other aspects of the research enterprise. Denzin (1978) compared the pictures of reality obtained from different researchers using different methodologies at different times to the changing images within a kaleidoscope, sharing some common elements but greatly influenced by the circumstances of viewing. This is not to imply that there is no objective reality, but rather, even if there is an objective reality, it is filtered through the researcher’s assumptions, research choices, and the factors unique to that point in time. In much the same way that Heisenberg pointed out to physical scientists that the very act of observing or measuring a physical object can alter the reality of that object, social scientists were coming to appreciate the extent to which their own assumptions, choices, historical time frame, and even mere presence influenced the data they collected.

Denzin (1978) outlined four basic types of triangulation: data, investigator, theory, and methodological. He further divided data triangulation into the three subtypes of time, space, and person. Data triangulation across time refers both to time of day and to historical setting, which has been referred to as population changes associated with time (Campbell and Stanley 1963). Some time effects are profound and obvious. For example, attitudes toward national security in the middle of the Gulf War could be seriously affected by historical time frame. Other time effects are less obvious. For example, attitudes toward daycare among women who engage in telephone interviews during the day might be very different from attitudes of women who respond in the evening (and who presumably might have been at work during the day). Similarly, space could have obvious or nonobvious effects. That gun attitudes might vary by geographic locale is fairly obvious, but the geographic effects on altruism might be less apparent. Similarly, with person effects, bosses and employees might have different views of the workplace, but so might workers near and far from the elevator.

A strength of triangulation is that the researcher does not need to understand completely the variation inherent in the data collection methods. In the same way that random assignment distributes error randomly across conditions in a true experiment even if the researcher is unaware of the importance of the error, triangulation captures irrelevancies of the research situation and makes them somewhat tractable, even if the researcher is unaware of the importance of those irrelevancies. Of course, in the same way that true experiments can be strengthened if the researcher has identified problematic outside variables and blocked on those variables (i.e., randomized within levels of those variables), research designs can be strengthened if researchers can specify which outside factors might be problematic and can triangulate in such a way as to capture those irrelevancies. For example, if a researcher suspects that age of respondent will relate to the variables of interest, the researcher should specifically select for a variety of ages among respondents and analyze the data specifically to examine age effects, instead of just hoping that triangulating over samples will capture those irrelevancies.
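
A brief sketch of the blocking idea mentioned above, under an assumed (hypothetical) age grouping: respondents are randomized to conditions within each level of the suspected irrelevancy, so that the factor can later be analyzed explicitly rather than left to chance.

```python
import random

# Hypothetical respondents tagged with a suspected irrelevancy (age group).
respondents = [{"id": i, "age_group": "young" if i % 2 == 0 else "older"}
               for i in range(20)]

random.seed(42)
assignment = {}
for group in ("young", "older"):
    block = [r["id"] for r in respondents if r["age_group"] == group]
    random.shuffle(block)                    # randomize within the block
    half = len(block) // 2
    for rid in block[:half]:
        assignment[rid] = "treatment"
    for rid in block[half:]:
        assignment[rid] = "control"

# Each age group is now split evenly across conditions, so age effects
# can be examined directly instead of being absorbed by chance.
```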

Beyond data triangulation, Denzin (1978) also advocates investigator triangulation, whereby multiple investigators bring their unique perspectives and biases to the research project. This is an acknowledgment that people, no matter how well trained, are not robotic recorders of social events. Again, if the researcher has a priori reason to suspect that some characteristic of the investigator, such as gender, might introduce error into the observations, care can be taken to select equal numbers of male and female investigators and to analyze for differences between the observations of men and women. But the use of multiple investigators should reduce effects of irrelevant observer factors, even in the absence of the researcher’s awareness that such factors could be important. Of course, only irrelevant factors that vary among the investigators will be ameliorated.

In a similar vein, Denzin (1978) proposed that researchers approach projects from the perspectives of multiple theories, taking care to extend the scope beyond the theoretical orientation most comfortable for the researcher. He further suggested that rather than pit only two theories against each other in research, projects should be specifically designed to capture as many theoretical orientations as possible. Hypotheses derived from all of these theories could then be tested, and conclusions could be drawn about the relative strengths and weaknesses of the theories. The logic of triangulation is the same as with previous examples: by using multiple theoretical views, the researcher is better able to bring the research question into focus.

Finally, Denzin (1978) described both within-method and between-method triangulation. Within a single method, such as a telephone survey, multiple items that are intended to tap the same construct can provide much clearer measures of the construct under consideration. Multiple items can be combined via a wide assortment of scaling techniques, ranging from simple additive scales to elaborate factor analytic scales. Again, the triangulation strategy does not result in items measured without error, but, rather, it allows the error that is not shared among the items to be identified and removed. The resulting scale should reflect only the ‘true’ component of the construct measured plus any error shared by the various items. For example, items that all cause the respondent to respond in a socially desirable manner will not allow the researcher to separate out social desirability components from components that actually tap the construct under consideration. But if some of the items have a response set bias and others do not, the use of multiple measures will allow the response set bias to be identified (within limits).
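
A minimal sketch of within-method triangulation, using invented item responses: several items intended to tap one construct are standardized and averaged into a simple additive scale, with Cronbach's alpha serving as a rough index of how much variance the items share.

```python
import numpy as np

# Hypothetical responses: 6 respondents x 4 items intended to tap one construct.
items = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
], dtype=float)

# Additive scale: z-score each item, then average across items per respondent.
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
scale_scores = z.mean(axis=1)

# Cronbach's alpha: rough index of the variance the items share.
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("scale scores:", scale_scores.round(2))
print("Cronbach's alpha:", round(alpha, 2))
```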

Because within-method triangulation can leave varying amounts of error inextricably linked to the key construct, researchers are encouraged also to triangulate between methods (Brewer and Hunter 1989, Campbell and Fiske 1959, Cook 1985, Cook and Campbell 1979, Denzin 1978, Reichardt and Cook 1979, Webb et al. 1966). By combining methodologies as different as participant observation and survey research, for example, researchers can study processes under vastly differing assumptions, biases, and errors. For example, the Reactions to Crime Project (Lewis 1981) involved participant observation, content analysis, telephone surveying, and in-person interviewing, all within one large framework. The approach used by the Reactions to Crime Project was sequentially multimethod, with the participant observation being completed first, followed by the telephone survey, and finishing with the in-person interviewing. With sequential multimethod work, researchers can use results from earlier stages to inform design at later stages. Although this sequential strategy has strengths, Greene and McClintock (1985) point out that concurrent multimethod work, in which all methods are implemented at the same time, avoids contamination of one method with results from another and produces a purer triangulation.

Probably the most extensive elaboration of triangulation logic was provided by Cook (1985) who described research that involves multiple stakeholders, multiple data analyses, multiple definitions, multiple methods, multivariate causal modeling, multiple hypotheses, multiple populations, multiple settings, and multiple times. He states that such multiplism allows the researcher to approximate the ‘ultimately unknowable truth through the use of processes that critically triangulate from a variety of perspectives on what is worth knowing and what is known. Triangulation requires generating multiple instances of some entity and then testing whether the instances converge on the same theoretical meaning despite the irrelevant sources of uniqueness that any one instance may contain’ (p. 57). Like previous writers, however, Cook is quick to point out that ‘multiplism is no panacea, for we can multiply replicate the same mistake or the same parochial set of assumptions’ (p. 58). Although triangulation can be vastly expanded in scope, its strengths and weaknesses remain the same.

5. Limitations Of Triangulation

Flick (1992) and Fielding and Fielding (1986) have suggested that proponents of triangulation mistakenly believe triangulation solves all construct validity problems. To the contrary, early proponents of triangulation were sounding a warning about the inherent weaknesses of all measures, and the proposal to use multiple measures with nonoverlapping irrelevancies was an acknowledgment of the profound problems inherent in measuring. In a discussion of the distance between desired and achieved measures of social science constructs (and hence of the hypotheses the operationalized measures test), Campbell (1969, p. 52) concluded that ‘direct knowing, nonpresumptive knowing, is not to be our lot.’ Therefore, one important limitation of triangulation is that it does not produce perfect measures.

Another limitation inherent in the triangulation strategy is that time, financial resources, and respondent goodwill conspire to prevent a researcher from triangulating on all the constructs in the research program. One can easily imagine how attempting to triangulate on theoretical orientations, research designs, and measurement strategies could quickly lead to a program of research that could never produce a single result, because it could never be completed. The successful researcher must therefore choose where to invest his or her triangulation resources, and the soundness and usefulness of the research outcomes depend to a large extent on the wisdom of those choices during the research design phase.

The amount of effort a researcher chooses to devote to triangulating on a construct should reflect both the importance of that construct for the research question and the level of difficulty in achieving a good measurement. For example, if age is a crucial variable for a research question, investigators might invest extra resources in accessing birth records as well as asking for age and birthdate in a survey. If age is less crucial to the question at hand, the researchers might decide to rely on the verbal replies only. Similarly, a true experiment performed in a laboratory might be adequate for testing a hypothesis about visual processing, but researchers investigating hypotheses about reactions to stress might need to invest in multiple methods to achieve clarity about the processes under investigation.

The researchers’ choices about which constructs receive triangulation resources can have significant effects on the results of the research. For example, if a researcher believes Family Income to be a more important construct in explaining school achievement than Parental Education Level and consequently devotes more effort to triangulating on the measure of Family Income than on Parental Education Level, analyses might well reveal a stronger relationship between School Achievement and Family Income than between School Achievement and Parental Education. If the researcher had made the opposite assumption and had devoted more resources to measuring Parental Education than to Family Income, the results might well have shown the opposite effect. The utility of triangulation, therefore, is limited by the soundness of the researchers’ initial decisions about resource allocation.

6. Advances In Triangulation

The logic of triangulation is inherent in both causal modeling (Joreskog and Sorbom 1984) and meta-analysis (Rosenthal 1991). Causal modeling is an analysis technique that relies on multiple measures (manifest variables) of some underlying construct (latent variable). Researchers specify which manifest variables relate to which latent variables, and the analysis strategy pulls out common variance among the manifest variables that feed into a latent variable. Variance in a manifest variable that is not shared with the other manifest variables for that latent variable is identified as ‘error.’ Error terms can be allowed to correlate with other error terms. This is a sophisticated extension of the logic used by Campbell and Fiske (1959) in their explication of the multitrait-multimethod matrix. They pointed out that only part of what researchers are measuring is the key construct they are intending to measure. The rest of the measure includes method variance, random error, and possibly other types of error and bias. Causal modeling requires researchers to identify error explicitly in their models, which makes very clear the frailty of any one measure in isolation. In addition, causal models show clearly how error terms from similar measures correlate, increasing researchers’ awareness of such measurement effects.
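
The following sketch is not causal modeling in the LISREL sense, but a simplified illustration of the shared-variance idea it rests on: under an invented one-factor setup, the first principal component of the manifest variables' correlation matrix stands in for the common (latent-like) variance, with the remainder treated as measure-specific error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: one latent construct drives three manifest variables,
# each with its own error (the setup that causal modeling tries to recover).
n = 500
latent = rng.normal(size=n)
manifest = np.column_stack([
    0.8 * latent + rng.normal(0, 0.6, n),
    0.7 * latent + rng.normal(0, 0.7, n),
    0.9 * latent + rng.normal(0, 0.4, n),
])

# First principal component of the correlation matrix as a stand-in
# for the common variance shared by the manifest variables.
corr = np.corrcoef(manifest, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
loadings = eigvecs[:, -1]                    # loadings on the shared component

print("loadings:", np.round(loadings, 2))
print("share of variance in common:", round(eigvals[-1] / eigvals.sum(), 2))
```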

In a similar vein, meta-analysis exploits multiple studies, with multiple operationalizations, multiple investigators, multiple methods, multiple sites, multiple populations, and many other differences to arrive at a summary statistic that reports the strength of underlying relationships across studies. Instead of requiring one researcher or research team to approach a research question from multiple perspectives, theories, and methodologies, meta-analysis capitalizes on natural variations among research programs to achieve triangulation. A main benefit of meta-analysis is that methodological variation can be explicitly built into the analysis. For example, in a recent review of the research on environmental and genetic effects on aggression, Miles and Carey (1997) reported that observational measures in laboratory studies showed no genetic effects and strong environmental effects, while self-report and parental rating measures showed both genes and environment to have strong relationships with aggression. Because variance associated with particular methods and measures can be explicitly analyzed in meta-analyses, the increased understanding afforded by triangulation strategies becomes apparent. And the conflicting results from different methods, far from being a nuisance or drawback to triangulation approaches, are rich clues to contingencies that need to be brought to light if social research is ever to reveal fully how these processes operate (Cook 1985).
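
A short sketch of the pooling logic, with invented effect sizes and variances: a fixed-effect (inverse-variance weighted) average is computed overall and then within each measurement method, so that method-linked differences of the kind Miles and Carey describe become visible in the analysis.

```python
import numpy as np

# Hypothetical studies: (effect size d, variance of d, measurement method).
studies = [
    (0.10, 0.04, "observation"),
    (0.05, 0.05, "observation"),
    (0.45, 0.03, "self-report"),
    (0.50, 0.06, "self-report"),
    (0.40, 0.05, "parent-rating"),
]

def pooled_effect(subset):
    """Fixed-effect estimate: inverse-variance weighted mean of effect sizes."""
    d = np.array([s[0] for s in subset])
    w = 1.0 / np.array([s[1] for s in subset])
    return (w * d).sum() / w.sum()

print("overall effect:", round(pooled_effect(studies), 2))
for method in ("observation", "self-report", "parent-rating"):
    subset = [s for s in studies if s[2] == method]
    print(method, "effect:", round(pooled_effect(subset), 2))
```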

The importance of triangulation is also revealed by the prevalence of statistical interactions in social research. Evaluation researchers and others who study social problems rarely find that simple main effects capture the richness (some would say messiness) of the world beyond the research laboratory (Cook 1985). Evaluators no longer ask ‘Does this work?’ but, rather, ‘For whom does this work, in what settings, with what constraints?’ The prevalence of interactions pushes researchers to branch out to different populations, different time frames, and different methods, in other words, to triangulate across many factors in the research setting.

Social research has come a long way from the days when researchers sought pure measures, elegant, simple designs, and straightforward analysis strategies. The norm for social research now is multiple measures, elaborate multimethod design, and analysis strategies that clearly indicate the multiple causation and interactive patterns apparent in the social world. In no small part, this march toward increasing complexity of the research enterprise was begun by early explications of the logic of and necessity for triangulation.

Bibliography:

  1. Brewer J, Hunter A 1989 Multimethod Research: A Synthesis of Styles. Sage, Newbury Park, CA
  2. Brewer M, Collins B E 1981 Perspectives on knowing: Six themes from Donald T. Campbell. In: Brewer M, Collins B E (eds.) Scientific Inquiry and the Social Sciences. Jossey-Bass Publishers, San Francisco, CA
  3. Campbell D T 1969 A phenomenology of the other one: Corrigible, hypothetical, and critical. In: Mischel T (ed.) Human Action: Conceptual and Empirical Issues. Academic Press, New York
  4. Campbell D T, Fiske D W 1959 Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin 56: 81–105
  5. Campbell D T, Stanley J C 1963 Experimental and Quasi- experimental Designs for Research. Rand McNally, Chicago
  6. Cook T D 1985 Postpositivist critical multiplism. In: Shotland L, Mark M (eds.) Social Science and Social Policy. Sage, Beverly Hills, CA
  7. Cook T D, Campbell D T 1979 Quasi-experimentation: Design and Analysis Issues for Field Settings. Rand McNally College Publishing, Chicago
  8. Cook T D, Reichardt C S (eds.) 1979 Qualitative and Quantitative Methods in Evaluation Research. Sage, Beverly Hills, CA
  9. Crano W 1981 Triangulation and cross-cultural research. In: Brewer M, Collins B E (eds.) Scientific Inquiry and the Social Sciences. Jossey-Bass Publishers, San Francisco, CA
  10. Denzin N K 1978 Sociological Methods: A Sourcebook. McGraw Hill, New York
  11. Denzin N K 1989 The Research Act: A Theoretical Introduction to Sociological Methods, 3rd edn. McGraw Hill, New York
  12. Fielding N G, Fielding J L 1986 Linking Data. Sage, Beverly Hills, CA
  13. Flick U 1992 Triangulation revisited: Strategy of validation or alternative? Journal for the Theory of Social Behaviour 22: 175–197
  14. Glass G V, McGaw B, Smith M L 1981 Meta-analysis in Social Research. Sage, Beverly Hills, CA
  15. Greene J, Caracelli V J 1997 Defining and describing the paradigm issue in mixed-method evaluation. In: Greene J, Caracelli V J (eds.) Advances in Mixed-Method Evaluation: The Challenges and Benefits of Integrating Diverse Paradigms. Jossey-Bass Publishers, San Francisco, CA
  16. Greene J, McClintock C 1985 Triangulation in Evaluation: Design and Analysis Issues. Evaluation Review 9: 523–45
  17. Joreskog K G, Sorbom D 1984 LISREL VI, Analysis of Linear Structural Equation Systems by the Method of Maximum Likelihood: User’s Guide. International Educational Services, Chicago
  18. Lewis D 1981 Reactions to Crime. Sage, Beverly Hills, CA
  19. Miles D R, Carey G 1997 Genetic and environmental architecture of human aggression. Journal of Personality and Social Psychology 72: 207–17
  20. Mosteller F 1955 Use as evidenced by an examination of wear and tear on selected sets of ESS. In: Davis K et al. (eds.) A Study of the Need for a New Encyclopedic Treatment of the Social Sciences. Unpublished manuscript
  21. Reichardt C S, Cook T D 1979 Beyond qualitative versus quantitative methods. In: Cook T D, Reichardt C S (eds.) Qualitative and Quantitative Methods in Evaluation Research. Sage, Beverly Hills, CA
  22. Rosenthal R 1991 Meta-analytic Procedures for Social Research. Sage, Newbury Park, CA
  23. Webb E J 1978 Unconventionality, triangulation, and inference. In: Denzin N K (ed.) 1978 Sociological Methods: A Sourcebook. McGraw Hill, New York
  24. Webb E J, Campbell D T, Schwartz R D, Sechrest L 1966 Unobtrusive Measures: Nonreactive Research in the Social Sciences. Rand McNally, Chicago