1. Amos Tversky
Amos Tversky was born in Haifa, Israel, on March 16, 1937. His father was a veterinarian, his mother a member of Israel's Parliament, the Knesset. He fought in Israel's 1956, 1967, and 1973 wars, and received its highest honor for bravery. He received his BA in psychology at the Hebrew University of Jerusalem in 1961. In 1965, he received his Ph.D. from the University of Michigan's mathematical psychology program. He worked there with Clyde Coombs (his doctoral advisor), Ward Edwards, and David Krantz, among others. He returned to Jerusalem as a faculty member in 1967, moving to Stanford in 1978. A year at the Center for Advanced Study in the Behavioral Sciences provided Tversky and Kahneman with concentrated time to develop their approach to judgment under uncertainty.
Tversky died of melanoma, on June 6, 1996, in Stanford, California. At his death, he was Professor of Psychology at Stanford University, where he was also a Principal of its Center on Conﬂict and Negotiation. He held positions at Tel Aviv University, as Visiting Professor of Economics and Psychology and Permanent Fellow of the Sackler Institute of Advanced Studies. His wife, Barbara, is Professor of Psychology at Stanford, studying perception. He had three children, Oren, Tal, and Dona. He was known for his great energy, joy of life, and sense of humor.
Tversky made fundamental contributions to the understanding of human thought processes, and the mathematical foundations of the behavioral sciences. His work was distinguished by elegant formal models, tested in simple, illustrative experiments. Throughout much of his career, he worked together with Daniel Kahneman, on studies of judgment and decision making under conditions of risk and uncertainty. On these topics and others, he also collaborated with other scholars from psychology, economics, political science, law, statistics, philosophy, and mathematics. These projects, and the personal interactions surrounding them, helped to integrate these disciplines, at a time of increasing specialization. Among his many honors were the Distinguished Scientific Contribution Award of the American Psychological Association, a MacArthur Fellowship, and honorary doctorates at Yale, Chicago, Goteborg (Sweden), and the State University of New York at Buffalo. He was a member of the US National Academy of Sciences. The Academy served as one venue for pursuing his life-long commitment to encouraging peace and understanding among diverse people.
2. Axiomatic Theories Of Choice
An early paper (Tversky 1969) typiﬁes his approach. Normative theories of choice provide rules that individuals should follow, in order to make eﬀective decisions. The most prominent of these is utility theory, founded on a set of intuitively appealing axioms. It describes options in terms of a set of attributes, or features, that an individual might like or dislike about them. When buying a car, the attributes might be (price, size, styling, reliability, comfort). According to utility theory, an individual should consider all relevant attributes, then rank the options according to their attractiveness (or ‘utility’). That ordering should have the property of transitivity. Someone who prefers Car A to Car B and Car B to Car C should also prefer Car A to Car C.
Transitivity is a decision-making norm that most people would endorse. If it accurately described their behavior, they would be better oﬀ. So would scientists and policy makers. The former would ﬁnd it relatively easy to predict behavior, knowing that there is a stable overall ranking, which can be identiﬁed without examining every possible paired comparison. The latter would be able to create policy options (e.g., parks, housing, insurance schemes, transportation systems), knowing that the public’s response will be relatively predictable and consistent.
Tversky showed systematic violations of transitivity in experimental tasks similar to choices encountered in everyday life. Moreover, he went beyond merely showing that utility theory described behavior imperfectly. Such a limited demonstration could have been easily attacked as a destructive curiosity that does little to help scientists do their job of explaining and predicting behavior. However, Tversky showed that intransitivity arose from essential psychological processes, which he captured in a simple, but insightful model. These processes reflect the limits to human information processing capacity. In situations without the opportunity for trial-and-error learning, people can at best be boundedly rational, working in an orderly way on a subset of the problem (Simon 1957).
The coping process that Tversky proposed has people comparing options initially on the most important attribute (e.g., car price), then checking whether differences on the second attribute (e.g., reliability), and perhaps others, are so great as to change the choice. For example, imagine sequential comparisons between overlapping pairs of cars (A with B, B with C, C with D, and so on). The sequence is arranged so that each new car is both cheaper and less reliable than the previous one. Looking primarily at the price would make each new option more attractive, while the reductions in reliability slowly mount up. At some point, though, if the first car were compared with the last car, a buyer might decide that reliability has gotten so low that the reduction in price is insufficient compensation. In that case, the first car, which was long ago rejected, would suddenly be preferred.
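This coping process can be sketched in code. The following is a minimal illustration under stated assumptions (the cars, prices, reliabilities, and threshold are hypothetical, not Tversky's actual stimuli): options are compared on price unless the reliability difference exceeds a just-noticeable threshold, and that rule alone produces a preference cycle.

```python
# Sketch of the choice rule described above: price (the most important
# attribute) decides, unless the reliability gap is large enough to override.
# All numbers here are hypothetical illustrations.

def prefer(a, b, threshold=0.15):
    """Return the preferred of two cars, each a (price, reliability) pair."""
    price_a, rel_a = a
    price_b, rel_b = b
    # A large reliability gap decides the choice outright ...
    if abs(rel_a - rel_b) > threshold:
        return a if rel_a > rel_b else b
    # ... otherwise the cheaper car wins.
    return a if price_a < price_b else b

# Each car is slightly cheaper and slightly less reliable than the last.
A, B, C, D = (30000, 0.9), (29000, 0.8), (28000, 0.7), (27000, 0.6)

assert prefer(A, B) == B   # small reliability gap: cheaper B wins
assert prefer(B, C) == C
assert prefer(C, D) == D
assert prefer(A, D) == A   # large reliability gap: A preferred again, a cycle
```

Pairwise, each step down the sequence looks like an improvement, yet comparing the endpoints reverses the preference, which is exactly the intransitivity Tversky demonstrated.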
Such intransitivity could, in principle, turn people into ‘money pumps,’ willing to pay something to move from one option to the next, but eventually going back to the ﬁrst option and starting over again. As Tversky noted, car salespeople may exploit this cognitive weakness by adding options (and costs) to a favored car. Unlike the experimenter, though, they hope to stop just short of the point where customers balk at the increased price and revert to the stripped-down model. Hanson and Kysar (1999) discuss the implications of such exploitation for marketplace regulation.
Two themes in this research recur in Tversky’s work. One is that choices, especially novel ones, can be very sensitive to how the options are presented. That insight is central to prospect theory (see below), which also provides expression for results from a long tradition of psychological research into ‘context eﬀects.’ The second theme is that descriptively valid formal theories of human behavior should be assembled from psychologically plausible assumptions, rather than derived from normatively postulated principles.
The latter insight is expressed in many of Tversky's projects, which articulate the formal structure of particular choice situations in ways that allow the expression of plausible behavioral responses (e.g., Tversky 1967, 1972). In some cases, these theories adapted existing mathematical approaches. In other cases, they drove the mathematics. The fundamental mathematical research associated with this work is summarized in three landmark volumes (Krantz et al. 1971/1989/1990). The study of alternative choice rules for dealing with multiattribute options has become a cornerstone of conjoint measurement, a family of procedures used widely in applied research (e.g., marketing multiattribute consumer products).
3. Prospect Theory
These strands of research came together in Kahneman and Tversky’s (1979) ‘prospect theory.’ Published in a central economics journal, it oﬀered an alternative to utility theory, the core of that discipline. It claimed to achieve greater descriptive validity—but at the price of abandoning the normative aspirations of economics. It depicted individuals as thinking in an orderly, sensible way when faced with choices. However, the rules that people follow diﬀer from those prescribed by the axioms of utility theory, in ways that produce systematic violations. People are sensitive to features of choice tasks that have no representation in utility theory, and insensitive to features that are in the theory.
Superﬁcially, prospect theory resembles utility theory. ‘Prospects’ (i.e., options) are represented as sets of possible consequences. The value (rather than ‘utility’) assigned to a prospect is the sum of the values assigned to each consequence, weighted by the chances of it happening. However, the components of this evaluation diﬀer from their utility theory analogs in ways designed to make the theory more realistic psychologically (Tversky and Kahneman 1991):
(a) The evaluations are made relative to a reference point, rather than with respect to an individual’s net asset position (or entire worth). That reference point is typically the status quo. However, it could be another salient value (e.g., evaluating a raise relative to what one expected to get or to what others got, rather than relative to one’s current salary). This feature of prospect theory reﬂects people’s widely observed sensitivity to changes.
(b) People care less about a given absolute difference in gains or losses as the overall change increases. The idea that people care less about a fixed gain as they have or win more is shared with utility theory. However, the idea that people care proportionately less as losses mount up is not. Utility theory explains insurance buying (among other things) in terms of people's special aversion to large losses. Prospect theory reflects the psychological principle that the just-noticeable difference between situations is greater when the overall change is larger—whatever the direction of change.
(c) Before being combined with value judgments, the probabilities of the consequences are subjected to a ‘probability weighting function.’ It reﬂects the observation that people pay particular attention to sure things, while being relatively indiﬀerent to probability diﬀerences in the middle range (e.g., the diﬀerence between 0.35 and 0.45 matters less than that between 0.90 and 1.00).
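A small numerical sketch can make these three components concrete. The functional forms and parameter values below are not part of the 1979 formulation; they are the ones Tversky and Kahneman later estimated for the cumulative version of the theory (a power value function with loss-aversion coefficient of about 2.25, and an inverse-S-shaped weighting function), used here purely for illustration.

```python
# Illustrative prospect-theory components, with parameter values taken from
# the later cumulative version of the theory (alpha = 0.88, lambda = 2.25,
# gamma = 0.61); the numbers are for illustration only.

def v(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses (a, b)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma=0.61):
    """Probability weighting function: sure things get special weight (c)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(prospect):
    """Sum of weighted values over (outcome, probability) pairs."""
    return sum(w(p) * v(x) for x, p in prospect)

# Losses loom larger than equal-sized gains.
assert abs(v(-100)) > v(100)
# The jump from 0.90 to 1.00 matters more than the jump from 0.35 to 0.45.
assert w(1.0) - w(0.90) > w(0.45) - w(0.35)
# Risk aversion for gains: a sure 50 beats a 50/50 chance at 100.
assert prospect_value([(50, 1.0)]) > prospect_value([(100, 0.5), (0, 0.5)])
```

Shifting the reference point changes which outcomes count as gains and which as losses, which is how the theory generates the framing effects discussed next.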
As combined in the theory, these principles (along with some others) predict (among other things) framing effects. Changing the reference point can reverse choices, in ways that have no reflection in utility theory (which has no such point). For example, an emergency medical intervention that could save 200 of 600 possible lives is more attractive when described in terms of the 200 lives saved than in terms of the 400 lives lost. Prospect theory's challenge to utility theory evoked vigorous responses. One extreme involved radical methodological skepticism. Partisans attacked any aspect of prospect theory's research method that could, conceivably, have contributed artifactually to behaviors offered as evidence supporting the theory. Although some of these critiques were ad hoc claims, disconnected from the research literature, others raised fundamental questions with theoretical and methodological implications.
One such question concerns the conflicting norms of experimental economics and experimental psychology regarding the provision of financial incentives. Economists typically believe that what people say and do has little meaning, unless they are rewarded for their performance. In this view, even if they are systematic, responses to hypothetical questions could reflect no more than acquiescence to implicit task demands. In contrast, psychologists typically view all behavior as 'real,' with the attendant challenge of determining how individuals have construed their task. Even a hypothetical choice can induce high intrinsic and extrinsic motivation, as individuals try to impress themselves and those watching them. Camerer and Hogarth (1999) summarize the evidence regarding the varied impacts of monetary rewards on decision-making performance. The other extreme involves economists who accept the validity of prospect theory's results and attempt to reshape their science in its light. Some have examined its implications for the axiomatic basis of utility theory, either by refining those axioms to fit the data better or by replacing them with ones expressing different normative principles. Others have examined the expression and implications of non-rational behavior in real-world circumstances, prompting new fields like behavioral finance, which questions widely accepted efficient market theories.
Still others have sought to help people reduce such inconsistencies in their preferences. These ‘constructivist’ approaches assume that people’s inconsistencies arise because evaluating options requires an inferential process, whereby people derive a speciﬁc valuation from more general, basic values. If that process is incomplete, then people will be unduly inﬂuenced by the initial problem presentation. Reducing that risk requires a more interactive process, explicitly considering alternative perspectives—trying to balance those suggestions, so that people come to know what they want. Fischhoﬀ and Manski (1999) show the convergence among economists and psychologists concerned with these questions.
4. Judgment Under Uncertainty: Heuristics And Biases
For economists and psychologists concerned with decision making, values are a matter of personal preference. As a result, showing them to be suboptimal means demonstrating inconsistencies. With beliefs, suboptimality can be found by comparing judgments to one another (looking for inconsistency) and to an accepted external standard (looking for error). In their ﬁrst collaboration, Tversky and Kahneman (1971) demonstrated errors in the judgments of mathematical psychologists and statisticians, when estimating the statistical power of research designs. These individuals knew how to perform the relevant calculations. However, when forced to rely on intuition, they systematically exaggerated the chances of observing anticipated results—acting as though the law of large numbers applied to samples with small numbers of observations.
Tversky and Kahneman (1974) proposed that these judgments arose from relying on the representativeness heuristic. According to this rule of thumb, an event is judged likely to the extent that it captures (or 'represents') the salient features of the situation that might produce it. Conversely, reliance on representativeness should lead to neglecting factors lacking a salient role in such processes, even if they are normatively relevant. Sample size is such a factor. If a situation can produce an event, why should it matter how much one observes? A second normatively relevant factor with no role in representativeness is the relative frequency of an event in the past. Such base-rate (or a priori) information should guide predictions unless there is strongly diagnostic individuating information, showing why a specific case is different from the norm. (Medical students are sometimes told, 'When you hear hoof beats, think horses, not zebras,' at least in North America.) However, studies have found that even flimsy evidence can attract the lion's share of attention. It is natural to think about how well someone fits the image of an engineer (or how much a mammogram image looks like a tumor). It is not so natural (or easy) to combine that vision or theory with a statistical summary of how frequent engineers (or tumors) are in the population. There have been many subsequent demonstrations of underweighting base rates (including in the interpretation of mammograms), as well as vigorous debate (sometimes with data, sometimes not) regarding the conditions determining its extent (Gilovich et al. in press, Kahneman et al. 1982).
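The force of the base-rate point can be shown with a short Bayes' rule calculation. The screening numbers below are hypothetical (a 1 percent base rate, an 80 percent true-positive rate, and a 9.6 percent false-positive rate), chosen only to illustrate how a rare condition keeps the posterior low even after a positive, 'representative' test result.

```python
# Bayes' rule with hypothetical screening numbers: the base rate dominates
# the posterior, contrary to intuitions driven by representativeness.

def posterior(prior, hit_rate, false_alarm_rate):
    """P(condition | positive result) by Bayes' rule."""
    p_positive = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_positive

p = posterior(prior=0.01, hit_rate=0.80, false_alarm_rate=0.096)
assert 0.07 < p < 0.09   # roughly 8%, despite the apparently accurate test
```

Intuition anchored on the test's accuracy suggests a probability near 80 percent; the base rate pulls the correct answer below 10 percent, which is the neglect the heuristic predicts.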
Analogous reasoning led to predicting that reliance on representativeness would encourage underestimating regression to the mean and exaggerating the predictability of random sequences. The latter was demonstrated in studies showing an illusory belief in being able to identify streaks in basketball free-throw shooting. Both collegiate and professional players (and fans) interpreted patches of random sequences as reflecting transient 'hot hands' (Gilovich et al. 1985).
Tversky and Kahneman (1974) proposed the representativeness heuristic as one exemplar of a general strategy that people use when confronted with unfamiliar tasks (and no opportunity for systematic study or calculation). A second such heuristic, availability, judges the probability of an event by the ease with which examples come to mind. That rule has some validity (in many domains, things that one hasn't seen tend to be less likely than things that one has). However, it can predictably lead one astray, as when the news media disproportionately report one risk (e.g., homicide vs. asthma) (Slovic 1987). Subsequently, Tversky and Koehler (1994) formalized the relationship between evaluations of evidence and probability judgments in 'support theory.' Like its predecessors, it predicts the conditions under which simple, intuitive thought leads to good and bad judgments. A third heuristic, also with historical roots in psychology, is anchoring and adjustment. Users begin an estimation process from a salient value (the 'anchor'), then adjust from there, in response to other concerns that come to mind. Typically, that adjustment is insufficient. If the anchor is far from the correct value, then the resulting estimate will be as well.
One cross-cutting theme in Tversky's judgment research (as with that on choice) is questioning the normative standards used to evaluate people's performance. Here, too, the result has been reevaluating those standards and their domains of applicability (Shafer and Tversky 1985). Another recurrent theme is the importance of comparison processes in judgment. Representativeness asks how well an event fits the process that would have to produce it. Availability asks how readily examples of the category whose probability is being judged come to mind. In a parallel research program, Tversky (1977) created a theory of similarity judgments, based on measure-theoretic concepts and, again, simple, elegant experiments.
- Camerer C F, Hogarth R M 1999 The effects of financial incentives in experiments. Journal of Risk and Uncertainty 19: 7–42
- Fischhoff B, Manski C F (eds.) 1999 Preference elicitation. Journal of Risk and Uncertainty 19 (1–3)
- Gilovich T, Griﬃn D, Kahneman D (eds.) in press The Psychology of Judgment: Heuristics and Biases. Cambridge University Press, New York
- Gilovich T, Vallone R, Tversky A 1985 The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17: 295–314
- Hanson J D, Kysar D A 1999 Taking behavioralism seriously: Some evidence of market manipulation. Harvard Law Review 112: 1420–1572
- Kahneman D, Slovic P, Tversky A (eds.) 1982 Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, New York
- Kahneman D, Tversky A 1979 Prospect theory: An analysis of decision under risk. Econometrica 47: 263–81
- Krantz D, Luce R D, Suppes P, Tversky A 1971/1989/1990 Foundations of Measurement, Vols. 1–3. Academic Press, New York
- Shafer G, Tversky A 1985 Languages and designs for probability judgment. Cognitive Science 9: 309–39
- Simon H A 1957 Models of Man: Social and Rational. Wiley, New York
- Slovic P 1987 Perception of risk. Science 236: 280–5
- Tversky A 1967 A general theory of polynomial conjoint measurement. Journal of Mathematical Psychology 4: 1–20
- Tversky A 1969 The intransitivity of preferences. Psychological Review 76: 31–48
- Tversky A 1972 Elimination by aspects: A theory of choice. Psychological Review 79: 281–99
- Tversky A 1977 Features of similarity. Psychological Review 84: 327–52
- Tversky A, Kahneman D 1971 Belief in the law of small numbers. Psychological Bulletin 76: 105–10
- Tversky A, Kahneman D 1974 Judgment under uncertainty: Heuristics and biases. Science 185: 1124–31
- Tversky A, Kahneman D 1991 Advances in prospect theory. Journal of Risk and Uncertainty 5: 297–323
- Tversky A, Koehler D J 1994 Support theory: A nonextensional representation of subjective probability. Psychological Review 101: 547–67