Philosophy of Probability and Chance Research Paper



1. The Problem Of The Interpretation Of Probability

The problem of the interpretation of probability can be posed using a familiar example: what does a meteorologist mean when he or she says that there is a 30 percent probability of rain on a given day? To give an account of what a proposition means is to say what must be the case in order for the proposition to be true. If the meteorologist says that it will rain, this assertion is true if and only if it does rain. Whether it rains or not, however, neither outcome by itself renders the probabilistic claim true. So what must be the case in order for this claim to be true?

An adequate solution to this problem must provide some way of making sense of the numerical values that appear in probability claims (and must do so in such a way that those values are in accord with the basic principles of probability—see Probability: Formal). The claim that 30 percent of the balls in a given urn are blue has a well-understood meaning: the number of blue balls in the urn, divided by the total number of balls in the urn, is equal to 0.3. In the case of the weather report, however, no comparable formula is immediately available.

Proposed interpretations of probability tend to fall within one of two broad camps: objective (or empirical) and epistemic (or subjective) interpretations. Objective interpretations read probability claims as describing some feature of the world. Epistemic interpretations, by contrast, read probability claims as claims about the extent to which a proposition is or ought to be believed. Note that this standard terminology is somewhat misleading: some epistemic interpretations (such as the logical interpretation) take probabilities to be objective in the sense that they do not vary from person to person. The various interpretations of probability canvassed below are often presented as rivals, but it may be more appropriate to think of them as complementary. Most researchers who have grappled with interpretive issues have been of the opinion that the concept of probability has both an objective and an epistemic dimension. Accordingly, philosophers often talk of ‘objective probabilities’ and ‘epistemic probabilities.’ Objective probabilities are sometimes called ‘chances.’

Philosophers (as well as many philosophically minded statisticians) have become interested in the problem of interpretation of probability in part because of the ubiquity of probability claims in science as well as ordinary life, and in part because of the utility of probability theory in analyzing a variety of concepts of interest to philosophers, such as causation, confirmation of hypotheses by evidence, and rationality.

This research paper will focus on the philosophical, rather than the mathematical, dimensions of the problem of interpreting probability. Other surveys of this topic include Salmon (1967), Sklar (1993), and Howson (1995).

2. Determinism And Chance

In a famous thought experiment, the French physicist Pierre Simon de Laplace (1819) imagined a demon who could observe the exact position and momentum of every particle in the universe at a given time. Such a demon could plug this information into Newton’s equations of motion and predict the state of the universe at all future times. In this thought experiment, Laplace was articulating the concept of determinism. A system is deterministic if its state at a given time, together with the laws of evolution for that system, uniquely determine the state of the system at later times. For Laplace, determinism meant that all objective probabilities would have to have values of zero or one. Laplace thus thought that probabilistic claims, such as the meteorologist’s prognostication described above, reflect human ignorance about which outcome is determined to occur.

At present, there is good reason to believe that certain physical systems described by quantum mechanics behave indeterministically. For example, if a vertically polarized photon strikes a polarizer oriented at an angle θ, quantum theory entails that the photon has a probability of cos²θ of passing through. According to the orthodox interpretation of quantum theory (which is not uncontroversial), this probability is irreducible: there is no further fact about the photon that determines what it will do on this occasion. If this is correct, then the probability value cos²θ does not merely reflect human ignorance about what the photon will do: it is an objective feature of the world. (For a thorough discussion of philosophical issues involving determinism, see Earman (1986).) One important philosophical issue is whether only such irreducible probabilities are genuine chances. If so, then it would be safe to infer that none of the probability claims made within the social sciences describe genuinely objective probabilities.
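
The cos²θ rule can be made concrete with a short computation. The following Python sketch (the function name and the particular angles are illustrative, not drawn from the quantum-mechanical literature) evaluates the transmission probability for a few polarizer settings:

```python
import math

def transmission_probability(theta_degrees):
    """Probability that a vertically polarized photon passes a polarizer
    oriented at the given angle: cos^2(theta)."""
    return math.cos(math.radians(theta_degrees)) ** 2

# At 0 degrees the photon is certain to pass; at 90 degrees it is certain
# not to; at 45 degrees the chance is exactly one-half.
print(transmission_probability(0))    # 1.0
print(transmission_probability(45))   # ≈ 0.5
print(transmission_probability(90))   # ≈ 0.0
```

On the orthodox reading, the value 0.5 at 45 degrees is not a confession of ignorance but a complete description of the photon's objective chance of passing.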

The word ‘chance’ is also sometimes used as a singular noun to refer to a chance that is not equal to zero or one. For example, the thesis of determinism is sometimes expressed by saying that there is no genuine chance in the world; ‘the USA soccer team has a chance of winning the next world cup’ says that the chance of their winning is greater than zero and less than one; to ‘take a chance’ is to embark on a venture that has a chance of success that is greater than zero but less than one; and so on.

3. Epistemic Probability

Laplace claimed that probability claims do not describe objective features of the world, but the state of human knowledge (and ignorance). Epistemic interpretations of probability all share this general feature.

3.1 The Classical Interpretation

If asked the probability of rolling an odd number on a six-sided die, most people would unhesitatingly respond with the answer one-half: there are six possible outcomes of the die roll, three of which are odd. This illustrates the classical interpretation of probability, often associated with Laplace (1819). According to this interpretation, the possible outcomes of a particular trial are divided into ‘equally possible’ cases, and the probability of outcome F on a particular trial is equal to the number of such cases in which F is true, divided by the total number of possible cases. Which cases are equally possible is determined by the Principle of Indifference, which states that two cases are equally possible in the absence of any reason to prefer one outcome over the other.
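
The classical rule is simple enough to state as a computation over cases. The following Python sketch (the function name is illustrative) counts the equally possible cases in which an outcome obtains and divides by the total number of cases:

```python
from fractions import Fraction

def classical_probability(outcomes, event):
    """Classical (Laplacean) probability: the number of equally possible
    cases in which the event is true, divided by the total number of cases."""
    favourable = [o for o in outcomes if event(o)]
    return Fraction(len(favourable), len(outcomes))

die = [1, 2, 3, 4, 5, 6]  # the six equally possible cases for a die roll
print(classical_probability(die, lambda n: n % 2 == 1))  # 1/2
```

The computation presupposes that the six cases have already been identified as equally possible; as the next paragraph shows, it is precisely this presupposition that the Principle of Indifference struggles to secure.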

There are a number of difficulties with the principle of indifference. When the weather reporter claims that the probability of rain is 0.3, there seem to be at least two different types of outcome (rain and no rain) but it is hard to see what the 10 equally possible outcomes (three of which involve rain) could be. Even in much simpler cases, the principle of indifference is not precise enough to specify the equally possible cases. An urn contains both red balls and blue balls; some of the red balls are light red, and some are dark red. What is the probability that the next ball drawn is blue: are there two equally possible cases (red and blue) or three (light red, dark red, and blue)? This problem is further exacerbated when there are infinitely many possible outcomes.

3.2 Subjective Interpretations

According to the subjective interpretation of probability (whose proponents are variously called subjectivists, personalists, or Bayesians), probability measures the degrees of belief or credences of a certain type of idealized rational agent. Subjectivists regard probability theory as analogous to formal logic. Logic alone never tells an agent whether to believe a proposition, but it does require that an agent’s beliefs be consistent with one another. For example, the rules of logic forbid an agent to believe ‘it will rain today,’ ‘if it rains today, it will not snow today,’ and ‘it will snow today’ at the same time. According to subjectivists, probability theory is an extension of logic that allows an agent to have degrees of confidence in a proposition that range in value between zero and one. Probability theory alone will not tell the agent how much confidence to place in the proposition ‘it will rain today,’ but rather imposes a consistency-like constraint (usually called coherence) on the totality of the agent’s degrees of belief.

The numerical value of an agent’s degree of belief in some proposition is connected with the agent’s evaluation of bets on the truth of that proposition. For example, if an agent has a degree of belief of 0.3 for the proposition ‘it will rain today,’ that means that he or she would find a bet on rain at 3:7 odds—a bet that costs the agent three units, and buys a chance to win 10 units (for a profit of seven units) if it rains—to be fair. This connection between degrees of belief, utility, and evaluation forms the foundation of a theory of rational choice wherein an agent always acts so as to maximize expected utility.

This connection between degrees of belief and evaluation is often deployed in arguments that rational degrees of belief conform to the formal rules of probability. When an agent’s degrees of belief conform to the probability calculus, his or her overall system of evaluations will have a number of desirable features. For example, Ramsey (1926) showed that having one’s degrees of belief conform to the rules of probability is a necessary condition for having one’s overall system of preferences satisfy a set of plausible constraints. One such constraint is transitivity: if the agent prefers A to B and B to C, then the agent prefers A to C. A second class of results comprises the so-called Dutch Book theorems. (See, for example, de Finetti (1964) and Kemeny (1955).) These results show that if and only if an agent’s degrees of belief violate the rules of probability, the agent will be vulnerable to a ‘Dutch Book.’ A Dutch Book is a system of bets such that the agent finds each individual bet fair, but is guaranteed to suffer a net loss no matter what the outcome. For example, a bet on heads at 2:1 odds together with a bet of equal stakes on tails at 2:1 odds would constitute a Dutch Book, for the agent purchasing the two bets would be guaranteed to lose 2 units on one bet while winning only 1 unit on the other. (For more details, see Probability: Interpretations.)
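
The arithmetic of the coin-toss Dutch Book can be checked directly. In the following Python sketch (the bookkeeping is illustrative), an incoherent agent whose degrees of belief in heads and in tails are each 2/3 finds 2:1 odds fair on both outcomes, staking 2 units for a potential profit of 1 on each bet:

```python
def bet_payoff(stake, profit_if_win, wins):
    """Net payoff of a simple bet: gain the profit if the bet wins,
    lose the stake if it does not."""
    return profit_if_win if wins else -stake

# One bet on heads and one on tails, each at 2:1 odds (stake 2, profit 1).
net = {}
for outcome in ("heads", "tails"):
    net[outcome] = (bet_payoff(2, 1, outcome == "heads")
                    + bet_payoff(2, 1, outcome == "tails"))
print(net)  # a net loss of 1 unit however the coin lands
```

Whichever way the coin falls, one bet loses 2 units and the other wins only 1, for a guaranteed net loss of 1 unit: the signature of incoherent degrees of belief.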

The requirement that rational degrees of belief satisfy the rules of probability theory is a synchronic constraint on one’s degrees of belief: it concerns an agent’s degrees of belief at one particular time. Some subjectivists impose a further diachronic constraint on allowable methods for changing one’s degrees of belief over time. Suppose that a rational agent’s degrees of belief at some particular time are represented by the probability function P. If the agent then acquires some new piece of information I, then the agent’s new degree of belief in any proposition A is equal to the conditional probability P(A | I). This rule of updating degrees of belief by conditionalizing can also be motivated by a form of Dutch Book argument (proved by David Lewis and reported in Teller (1973)).
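
Updating by conditionalization amounts to reweighting one's prior degrees of belief by how likely the new information is under each hypothesis, then renormalizing. A minimal Python sketch (the rain example and the particular likelihood values are hypothetical, chosen only for illustration):

```python
def conditionalize(prior, likelihood):
    """Update degrees of belief over hypotheses on new evidence E:
    P(H | E) = P(E | H) P(H) / sum over H' of P(E | H') P(H')."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior degree of belief 0.3 in rain; the evidence (say, dark clouds) is
# assumed twice as likely given rain as given no rain.
prior = {"rain": 0.3, "no rain": 0.7}
likelihood = {"rain": 0.8, "no rain": 0.4}
posterior = conditionalize(prior, likelihood)
print(posterior["rain"])  # ≈ 0.4615: the evidence raises the credence in rain
```

Since the evidence is more probable given rain than given no rain, conditionalizing raises the agent's degree of belief in rain, which is exactly the Bayesian account of confirmation described in the next paragraph.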

This rule for changing degrees of belief has been championed by so-called Bayesians as a theory of confirmation. According to this theory, a hypothesis H is confirmed or supported by a piece of evidence E just in case conditionalizing on E increases the probability of H. This probabilistic approach to confirmation theory has proven to be very fruitful (see Howson and Urbach (1993)).

The various theorems invoked by subjectivists are formal results, and there is considerable debate over their interpretation. In particular, there is disagreement about how, or even whether, the Dutch Book theorems really show that coherence or updating by conditionalization are canons of rationality. Critics of subjectivism point out that even an incoherent agent would be capable of following the proofs of these theorems, and hence of foreseeing the dangers of Dutch Books; thus coherence and updating by conditionalization are not really necessary for avoiding financial loss (see Howson and Urbach (1993) for further discussion).

A second focus of criticism is the subjective nature of degrees of belief. Rational agents may differ in the degree to which they believe a proposition such as ‘it will rain today’; indeed their degrees of belief may range all the way from zero to one. Moreover, the constraints of synchronic coherence plus updating by conditionalization do not rule out the possibility that one’s degrees of belief are dogmatic in a certain respect. It is consistent with these constraints that an agent have a degree of belief of 0.99 that each toss of a given coin will land heads, and after observing a thousand tails in a row, still believe that the next toss will almost certainly be heads. Subjectivists have been able to prove many interesting results about how an agent’s degrees of belief would evolve in the face of new evidence if the agent’s original degrees of belief (or prior probabilities) were nondogmatic in various ways (de Finetti’s (1964) representation theorem is one important example), but there is nothing in subjectivism that would require the agent to have such nondogmatic prior probabilities. For many, this latitude makes subjective degrees of belief ill-suited as a foundation for theories of rational choice or rational inference.

3.3 Logical Interpretations

Logical interpretations aim to avoid the problems of subjectivist interpretations by imposing further constraints on an agent’s prior probabilities. Prior probabilities are determined by the logical structure of the space of outcomes, much as they are in the classical interpretation. Unlike the classical interpretation, however, probability assignments can reflect the bearing of evidence that discriminates among the possible outcomes. Thomas Bayes (1763) and Laplace (1819), who are commonly associated with the subjectivist and classical approaches (respectively), both endorsed a version of the logical approach to probability. Twentieth century proponents include Carnap (1950), Jeffreys (1961), and Jaynes (1973).

Logical interpretations of probability inherit many of the difficulties of the classical interpretation discussed above: logical considerations alone do not determine a unique prior probability distribution. Moreover, many of the most plausible candidates for logical probabilities—such as one in which a sequence of coin tosses is independent and identically distributed—turn out to be dogmatic in just the manner described in Sect. 3.2.

3.4 The Need For Objective Interpretations

Although few would doubt that our ordinary concept of probability has an epistemic dimension, it seems implausible that all probability claims can be interpreted epistemically. Consider the claim that a photon has a certain probability of passing through a polarizer; it would be very odd indeed if the truth of this claim (and hence of quantum mechanics) depended on the extent to which some agent(s) may believe that the photon will pass through or on the extent to which available evidence confirms this hypothesis. At best, epistemic interpretations of probability will have to be supplemented by an objective interpretation of at least some probability statements.

4. Objective Interpretations Of Probability

In practice, probabilities are often inferred from frequency data. To estimate the probability that a given coin will land heads when tossed, for example, one could toss it many times and observe the relative frequency with which the coin lands heads. Such relative frequencies are objective features of the world. Hence relative frequencies provide a natural resource for objective interpretations of probability claims.

4.1 Finite Relative Frequency Interpretation

Perhaps the simplest interpretation of a probability claim would be to understand it as a claim about an actual finite relative frequency. For example, if a coin were tossed exactly 10 times and then destroyed, resulting in ‘heads’ on six of those 10 tosses, then the probability that the coin lands heads when tossed would be 0.6. The central difficulty with this definition of probability is that it rules out unrepresentative samples by definition. For example, if a coin is fair, i.e., has a probability of 0.5 of landing heads when tossed, that should not be taken to rule out the possibility of it landing heads on six of the 10 occasions on which it is tossed. According to the finite frequency definition of probability, however, if the probability of heads is 0.5, then by definition the coin will land heads on exactly half of its tosses. Thus, while finite relative frequencies provide indispensable evidence for probabilistic claims, probabilities cannot be identified with finite relative frequencies.

4.2 Limiting Relative Frequency Interpretation

In order to avoid these problems with finite frequencies, some philosophers have suggested that probabilities be identified with frequencies in an infinite sample. If the first n tosses of a coin yield f(n) heads, then the relative frequency of heads in the first n tosses is f(n)/n. The limiting relative frequency of heads in the infinite sequence of tosses is the limit as n goes to infinity of f(n)/n. This approach to interpreting probability was pioneered by Venn (1866); leading twentieth century proponents of this interpretation (hereafter called frequentists) include Richard von Mises (1957) and Hans Reichenbach (1949).
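
The behavior of f(n)/n can be illustrated by simulation. The following Python sketch (a simulated coin with a stipulated bias, not an empirical claim about any actual coin) records the running relative frequency of heads after each toss:

```python
import random

def relative_frequencies(p_heads, n_tosses, seed=0):
    """Running relative frequency f(n)/n of heads after each of
    n_tosses simulated tosses of a coin with bias p_heads."""
    rng = random.Random(seed)
    heads = 0
    freqs = []
    for n in range(1, n_tosses + 1):
        heads += rng.random() < p_heads  # True counts as 1
        freqs.append(heads / n)
    return freqs

freqs = relative_frequencies(0.5, 100_000)
print(freqs[9], freqs[-1])  # early frequencies wander; later ones settle near 0.5
```

Of course, any such simulation is finite; the frequentist's limiting relative frequency concerns the hypothetical infinite continuation of the sequence, which is precisely where the interpretive difficulties discussed next arise.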

One important problem with limiting relative frequencies is that they seldom, if ever, exist: no coin is actually tossed an infinite number of times. Thus the limiting relative frequencies that determine the probability of an outcome are really hypothetical limiting relative frequencies: to say that the probability of heads on a toss of a certain coin is 0.5 is to say that if the coin were to be tossed an infinite number of times, the limiting relative frequency of heads would be 0.5. This is a hypothetical or counterfactual claim; by itself, it says nothing about what features of the actual world make the probabilistic claim true. The problem of interpretation of counterfactuals is itself a thorny philosophical problem.

One important issue that divides frequentists is whether it is possible to ascribe a probability value to the outcome of a single trial. One tradition, following von Mises (1957), views probability as a property of a sequence of trials, or a ‘collective.’ Any single coin toss belongs to infinitely many different sequences: the sequence of all coin tosses, the sequence of all tosses of that particular coin, the sequence of all tosses of that coin by imparting a particular force from a particular height above the floor, and so on. The limiting relative frequency of ‘heads’ may be different in each of these sequences. Thus, there is no unique probability of ‘heads’ on a given toss, but only a probability for ‘heads’ in a collective of tosses. While frequentists in this tradition deny that there is one privileged sequence of coin tosses, not just any sequence of tosses is fit to bear probability values. For example, if an infinite sequence of coin tosses were to yield an alternating sequence of ‘heads’ and ‘tails,’ the relative frequency of ‘heads’ would be 0.5. The regularity of the pattern of outcomes would nonetheless render this sequence inappropriate for characterizing a probability value. Rather, probabilities are limiting relative frequencies in a random sequence of trials. There is no universally accepted mathematical characterization of randomness, although this is currently an area of active research (see Earman 1986, Chap. VIII). For a more detailed exposition of this version of the limiting relative frequency interpretation, see Probability: Interpretations.

One feature of this approach is that it allows chances to take on values other than zero or one even if determinism is true. It may be that on any particular toss of a coin, the linear and angular momentum imparted to the coin together with its height above the floor and so on determine the outcome of the coin toss. Nonetheless, there may be a random sequence of coin tosses where the limiting relative frequency is not equal to zero or one, for the tosses comprising the collective need not be identical with respect to such determining factors.

The relegation of probabilities to properties of collectives would seem to be a drawback of this approach to probability. When deciding whether to bet on a coin toss, or to postpone a picnic, one wants to know the probability of heads on that particular toss, or the probability of rain on this particular day. Some frequentists, following Reichenbach, believe that it is possible to assign probability values to the outcomes of individual trials. This requires that a choice be made from among all of the sequences to which the trial in question belongs. For example, a particular day belongs in an infinite number of sequences of days, and the limiting relative frequency of rain need not be the same in all of them, so which sequence determines the probability of rain on the day in question? Presumably not just the sequence of all days, for then the probability of rain would not vary from day to day. A natural thought is that what is wanted is a sequence of days just like this one. No two days will be exactly alike in all respects, however; the best that can be hoped for is a sequence of days that are alike in all relevant respects. The key problem, then, is to say what makes some respects of similarity between days relevant and others irrelevant. One detailed proposal is developed in Salmon (1984, Chap. 3).

On this approach, determinism is compatible only with chances of zero or one. Suppose, for example, that on a given day, climatic conditions determine that it will rain. Then the appropriate sequence will presumably consist of days in which the same climatic conditions prevail, and hence the limiting relative frequency of rain within that sequence will be one.

4.3 Propensity Interpretation

The propensity interpretation of probability, first introduced by Karl Popper (1959), is motivated by some of the difficulties with the limiting relative frequency interpretation. According to this interpretation, a coin, in conjunction with the conditions in which it is tossed, has a certain disposition or propensity to land heads. The probability that the coin will land heads is a measure of the strength of this propensity. This propensity is a property of the particular experimental set-up, hence, it is appropriate to assign a probability to an individual trial. Moreover, this propensity provides grounds for the truth of claims about what the coin would do if tossed infinitely many times in identical (or relevantly similar) conditions.

While this is certainly a useful way of thinking about single case probabilities within a frequentist framework, critics wonder whether anything has really been added. A coin is about to be tossed: there is some property of this experimental set-up in virtue of which the coin would land heads with a certain limiting relative frequency if this experiment were repeated infinitely often; calling this property a ‘propensity’ of a certain strength does little to indicate just what this property is. Moreover, if propensities are not simply identified with hypothetical limiting relative frequencies, it is hard to see what is supposed to ground the claim that propensities have the mathematical structure of probabilities.

4.4 Best System Interpretation

The philosopher David Lewis (1986) has proposed an interpretation according to which an objective probability claim is true when it is entailed by a probabilistic law. On this view, objective probability is intimately connected with indeterminism. A probabilistic claim is a law if it belongs to the set of propositions that provides the best overall systematization of the world. In order to be the best systematization, a set of propositions must obtain the optimal balance between informativeness, simplicity, and fit with data. For example, consider the claim that a photon will pass through a polarizer with a probability of cos²θ. Although the frequency with which photons do pass through when the experiment is performed may diverge from this number, the probability value is firmly grounded in the principles of quantum mechanics, which overall provides an excellent fit with experimental data, and is a powerful and elegant theory.

Unlike the frequency interpretations, however, the truth conditions for probabilistic claims provided by the best system interpretation are not very precise. It is not clear what makes one set of propositions simpler than another, nor how simplicity is to be balanced against informativeness and goodness of fit.

4.5 Theoretical Role Interpretations

One further line of interpretation is influenced by the philosophical school of pragmatism. In this approach, objective probability claims are to be understood in terms of their role in our overall conceptual system. In particular, to understand a probability claim is to know what kinds of evidence would support or undermine that claim, and how to act accordingly if one accepts that claim as true. This approach has been developed by Mellor (1971), Skyrms (1980), and Lewis (1986).

Consider an agent whose degrees of belief conform to the rules of probability (see Sect. 3.2), and let Ch be a function that assigns numerical values to propositions. The agent takes Ch to be objective chance precisely if his or her degrees of belief are guided by beliefs about the values of Ch. For example, suppose that the agent believes Ch(‘it will rain today’) to be 0.3. If the agent understands Ch to be objective chance, then his or her degree of belief in the proposition ‘it will rain today’ will also be 0.3. This degree of belief, together with the agent’s utility function, can then be used to decide whether or not to postpone a picnic. In this way, then, the agent’s beliefs about objective chances influence his or her decisions. Moreover, since objective probability claims are now propositions about which the agent has degrees of belief, the apparatus of Bayesian confirmation theory shows how those degrees of belief will change in the face of new evidence.
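
The route from a believed chance to a decision can be sketched numerically. In the following Python illustration, the utility values for the picnic are hypothetical, chosen only to show how a credence of 0.3 in rain, set by the believed chance, combines with utilities to select an act:

```python
def expected_utility(credence_rain, utilities):
    """Expected utility of an act, given a degree of belief in rain and
    utilities for the act under rain and under no rain."""
    return (credence_rain * utilities["rain"]
            + (1 - credence_rain) * utilities["no rain"])

# The agent's credence in rain equals the believed chance, 0.3.
# Hypothetical utilities: a picnic is great if dry, miserable if wet;
# postponing is a safe middle option either way.
credence = 0.3
go = expected_utility(credence, {"rain": -10, "no rain": 10})
postpone = expected_utility(credence, {"rain": 2, "no rain": 2})
print(go, postpone)  # at credence 0.3, going has the higher expected utility
```

With these (stipulated) utilities, a believed chance of rain of 0.3 still favors going; were the believed chance 0.5, the calculation would tip the other way, which is precisely how beliefs about chance guide action on this approach.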

This approach has a number of virtues, not least of which is that it explains why objective chances have the formal properties of probability theory: in order for beliefs about chances to guide degrees of belief in the appropriate way, objective chances must have the same formal structure as rational degrees of belief. There is, however, an important drawback to this approach. In analyzing the meaning of objective probability claims in terms of their theoretical role instead of their truth conditions, this account does not answer what many have taken to be the central question of objective probability: what must be the case in the world in order for the (objective) probability of rain to be 0.3?

5. Conclusion

Interest in the philosophical aspects of probability dates back to the origins of the formal theory of probability in the seventeenth century. Epistemic and frequency interpretations of probability have been around for well over a century. In the second half of the twentieth century, interest in objective interpretations of single-case probabilities has dramatically increased the range of interpretive options available. Given the current lack of consensus, and the increasingly widespread use of probabilistic concepts in both philosophy and everyday life, it is safe to predict that interest in the problem of the interpretation of probability will continue into the twenty-first century.

Bibliography:

  1. Bayes T 1763 An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society 53: 370–418
  2. Carnap R 1950 Logical Foundations of Probability. University of Chicago Press, Chicago
  3. de Finetti B 1964 Foresight: its logical laws, its subjective sources. In: Kyburg H E, Smokler H E (eds.) Studies in Subjective Probability. Wiley, New York
  4. Earman J 1986 A Primer on Determinism. Reidel, Dordrecht, Chaps. II, VIII
  5. Howson C 1995 Theories of probability. British Journal for the Philosophy of Science 46: 1–32
  6. Howson C, Urbach P 1993 Scientific Reasoning: The Bayesian Approach, 2nd edn. Open Court, Chicago
  7. Jaynes E T 1973 The well-posed problem. Foundations of Physics 3: 477–93
  8. Jeffreys H 1961 Theory of Probability, 3rd edn. Clarendon Press, Oxford, UK
  9. Kemeny J G 1955 Fair bets and inductive probabilities. Journal of Symbolic Logic 20: 263–73
  10. Laplace P S de 1819 Essai Philosophique sur les Probabilites. Mme. Ve Courcier, Paris
  11. Lewis D 1986 Philosophical Papers. Oxford University Press, Oxford, UK, vol. II, see especially the introduction and Chap. 19
  12. Mellor D H 1971 The Matter of Chance. Cambridge University Press, Cambridge, UK
  13. Popper K 1959 The propensity interpretation of probability. British Journal for the Philosophy of Science 10: 25–42
  14. Ramsey F 1926 Truth and probability. In: Mellor D H (ed.) Philosophical Papers. Cambridge University Press, Cambridge, UK
  15. Reichenbach H 1949 The Theory of Probability. University of California Press, Berkeley, CA
  16. Salmon W C 1967 The Foundations of Scientific Inference. University of Pittsburgh Press, Pittsburgh, PA, see especially Chap. V
  17. Salmon W C 1984 Scientific Explanation and the Causal Structure of the World. Princeton University Press, Princeton, NJ, see especially Chap. 3
  18. Sklar L 1993 Physics and Chance. Cambridge University Press, Cambridge, UK, see especially Chap. 3
  19. Skyrms B 1980 Causal Necessity. Yale University Press, New Haven, CT
  20. Teller P 1973 Conditionalization and observation. Synthese 26: 218–58
  21. Venn J 1866 The Logic of Chance. Macmillan, London
  22. von Mises R 1957 Probability, Statistics, and Truth, 2nd rev. English edn. Allen and Unwin, London

