Probability Interpretations Research Paper


1. Qualitative And Quantitative Probability

1.1 Introduction

It is entirely unremarkable that the commonplace word ‘probability’ and its cognates display a wide variety of distinct meanings in ordinary language. It is used as a classificatory concept, as in ‘It is probable that it will snow this winter’ or ‘It is improbable that a fair coin will land heads 10 times in succession.’ It is used as a (qualitative) comparative relation, as in ‘It is more probable that someone will be killed by lightning than by a falling meteorite next year.’ And, of course, it is used as a quantitative concept throughout science, e.g., Mendel’s law, ‘The probability is 0.75 that the offspring of a cross between simple hybrids will show the dominant trait.’ What is worth noting, however, is that despite longstanding controversies over the legitimate interpretation of ‘probability,’ these diverse usages can nonetheless be linked easily to a few formal theories of mathematical probability.



The received formal theory of quantitative probability, due to Kolmogorov, is elegant. Mathematical probability is a (real-valued) function, P, with values between 0 and 1 (inclusive), defined over a field of events closed under the Boolean operations of union, intersection, and complementation. The sure event Ω is certain, P(Ω) = 1, and P is additive over unions of disjoint events. The qualitative formal theory of probability uses a binary relation <, also defined on an algebra of events: E < F means that event F is more probable than event E. The qualitative formal theory is presented through axioms that mimic the quantitative one.

The outline for this research paper is as follows. Sections 1.2 and 1.3 review relevant details of the received, formal (mathematical) theories of quantitative and qualitative probability, and some basic connections between them. (This section includes some technical results that the reader might bypass initially, to return to on a second reading.) Section 2 surveys several interpretations of the quantitative theory, including subjective probability (2.1), logical probability (2.2), objective probability (2.3), and an approach (2.4) here labeled the Fisher–Kyburg theory, relating to Fisher’s enigmatic fiducial probability. Additional readings are suggested in Sect. 3.




1.2 The Formal Theory Of Quantitative Probability

Begin with Kolmogorov’s (1956) mathematical theory for quantitative probability, dating from the 1930s. A measure space <Ω, F, P> is a triple comprising:

(1) a set of possibilities, Ω, whose elements, ω, are points of the space, and a field F of subsets of Ω. Elements of F are the events of the space. It is assumed that Ω ∈ F. Being a field, F is closed under finite unions, intersections, and complementation with respect to Ω. When F is infinite, usually it is required

(2) that F is a σ-field, closed under countable unions and intersections.

(3) A probability function P over F satisfying these axioms:

(3.1) 0 ≤ P(E) ≤ 1

(3.2) P(Ω) = 1

(3.3) P(E ∪ F) = P(E) + P(F), whenever E ∩ F = Ø.

(finite additivity)

(3.4) P(∪i Ei) = Σi P(Ei) whenever Ei ∩ Ej = Ø for i ≠ j. (σ-additivity)

Moreover, when P(F) > 0, a conditional probability given F, P(•|F), is well defined over F by the equation

(4) P(•|F)P(F) = P(• ∩ F).
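
Definition (4) and axioms (3.1)–(3.3) can be checked mechanically on a small finite space. What follows is a minimal sketch, assuming Python as the illustration language; the die example and every name in it (omega, weight, P_given) are illustrative, not part of Kolmogorov’s theory.

# Minimal sketch: a probability space on a finite Omega, with events as
# frozensets, illustrating axioms (3.1)-(3.3) and definition (4).
from itertools import combinations

omega = frozenset({1, 2, 3, 4, 5, 6})          # points: faces of a die
weight = {w: 1.0 / 6 for w in omega}           # equal weight per point

def events(space):
    """The field F: here, all subsets of the finite Omega."""
    return [frozenset(s) for r in range(len(space) + 1)
            for s in combinations(space, r)]

def P(event):
    """P(E) = sum of point weights in E; satisfies (3.1)-(3.3)."""
    return sum(weight[w] for w in event)

def P_given(event, cond):
    """P(E|F) from definition (4): P(E|F)P(F) = P(E & F), P(F) > 0."""
    assert P(cond) > 0
    return P(event & cond) / P(cond)

F = events(omega)
assert len(F) == 2 ** len(omega)               # F is the full power set
assert abs(P(omega) - 1.0) < 1e-12             # axiom (3.2)
E, G = frozenset({1, 2}), frozenset({5, 6})    # disjoint events
assert abs(P(E | G) - (P(E) + P(G))) < 1e-12   # axiom (3.3)
print(P_given(frozenset({2}), frozenset({2, 4, 6})))  # 1/3: 'two' given 'even'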

Kolmogorov’s mathematical theory provides a basis for each of the two nonquantitative uses of probability, illustrated above.

The classificatory concept of probability may be explicated as:

‘E is probable’ provided that P(E) > c > 0.5, for some value c, determined by the context of the assertion. Then, ‘E is improbable’ means that the complementary event Eᶜ is probable.

The (qualitative) comparative concept of probability may be explicated as:

‘E is more probable than F ’ provided that P(E ) > P(F )

1.3 The Formal Theory Of Qualitative Probability

A mathematical theory of qualitative probability is readily obtained from the quantitative one. The binary relation < over events is said to be a (qualitative) comparative probability relation if it satisfies the following three axioms, which parallel the first three axioms of Kolmogorov’s theory.

(5.1) < is an acyclic, transitive relation over F. That is, if E < F and F < G then E < G, and it is never the case that E < E. The qualitative relation that E and F are equiprobable events, denoted E ≈ F, is defined by the condition that neither E < F nor F < E. Then ≈ is transitive and reflexive, i.e., E ≈ E always obtains.

(5.2) Ø<Ω and for no event E, E<Ø.

(5.3) Provided that E ∩ (F ∪ G) = Ø, F < G if and only if (E ∪ F) < (E ∪ G).

Say that a quantitative probability P agrees with the comparative probability relation < on the condition that P(E) < P(F) if and only if E < F. deFinetti (1931) asked whether every comparative probability relation < has an agreeing quantitative probability. Kraft et al. (1959) established that the answer is no, even when Ω is a finite space. However, they also showed that for finite spaces, the following (stronger) additivity condition is both necessary and sufficient for an agreeing quantitative probability to exist.

(5.4) Let (E1, …, Em) and (F1, …, Fm) be two sequences of events from F such that, for each point ω in Ω, the same number of events from each sequence contain ω. If it is not the case that Fi < Ei (i = 1, …, m − 1), then it is not the case that Em < Fm.

Aside: It is interesting to note that the very same condition, expressed for a quantitative probability, serves as a necessary and sufficient condition for extending the domain of a quantitative probability (Horn and Tarski 1948).
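
On a finite space, the search for a quantitative probability agreeing with finitely many strict judgments E < F can be posed as a linear program. The sketch below assumes Python with numpy and scipy; it checks feasibility only for the listed strict judgments (approximating strictness by a small eps), not the full ‘if and only if’ of agreement, and the three-point space and judgments are illustrative.

# Sketch: look for point weights p with P(F) - P(E) >= eps for each
# judged strict inequality E < F, plus sum(p) = 1 and 0 <= p <= 1.
import numpy as np
from scipy.optimize import linprog

points = [0, 1, 2]                       # Omega = {0, 1, 2}
judgments = [({0}, {1}), ({1}, {2})]     # the pairs E < F: {0}<{1}, {1}<{2}

def indicator(event):
    return np.array([1.0 if w in event else 0.0 for w in points])

eps = 1e-6
A_ub = np.array([indicator(E) - indicator(F) for E, F in judgments])
b_ub = np.full(len(judgments), -eps)     # encodes P(E) - P(F) <= -eps
A_eq = np.ones((1, len(points)))         # total probability is 1
b_eq = np.array([1.0])
res = linprog(c=np.zeros(len(points)), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * len(points))
if res.success:
    print("one agreeing assignment of point weights:", res.x)
else:
    print("no quantitative probability satisfies these strict judgments")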

A different relationship between quantitative and qualitative probabilities is also worth consideration. Say that a quantitative probability P almost agrees with the comparative probability relation < on the condition that P(E) < P(F) only if E < F. That is, quantitative strict inequalities entail the corresponding qualitative (strict) inequality. However, when P almost agrees with <, it may be that P(E) = P(F) and E < F. Kraft et al. (1959) provide an example of a finite space equipped with a qualitative probability relation that admits a continuum of almost agreeing quantitative probabilities but no agreeing quantitative probability. For finite spaces, the analog to (5.4) is the following necessary and sufficient condition for a qualitative probability to admit an almost agreeing quantitative probability.

(5.5) Let (E1, …, Em) and (F1, …, Fm) be two sequences of events from F such that, for each point ω in Ω, fewer events from the first sequence contain ω than do events from the second sequence. If it is not the case that Fi < Ei (i = 1, …, m − 1), then Em < Fm.

On infinite spaces, the situation is more complex and more interesting. The following illustrates how a qualitative probability may make finer discriminations than any quantitative probability, i.e., the qualitative relation < is non-Archimedean.

Example: Let F be the set of all subsets of the positive integers {1, 2, …} = Ω. Let < be a qualitative probability subject to the following two constraints:

(a) Each integer, i.e., each point, is as probable as any other. That is, for each pair of integers i and j, {i} ≈ {j}.

(b) The qualitative probability relation respects proper subsets. That is, whenever E ⊂ F (but E ≠ F) then E < F.

(Aside: To establish the existence of a qualitative probability satisfying these constraints requires an appeal to the Axiom of Choice.) Condition (a) ensures that no countably additive probability agrees with <, since any agreeing probability must assign each point of Ω probability 0, and then countable additivity would force P(Ω) = 0, contradicting (3.2). Condition (b) ensures that no finitely additive probability agrees with < either, because, by the fact just noted, P{i, j} = P{i} + P{j} = 0 = P{i}, whereas (b) requires {i} < {i, j} and so P{i} < P{i, j}. There does exist an almost agreeing (uniform) finitely additive probability for <, with P{i} = 0 for each integer i. Hence, this qualitative probability ordering is finer than any quantitative probability can be.

2. Interpretations Of Quantitative Probability

2.1 Subjective Probability

The contemporary theory of subjective (or personal) probability has its roots in the writings of Ramsey (1926) and deFinetti (1937), from the 1920s and 1930s. Savage (1954) gives what to many is a definitive account of this line of reasoning. The subjective interpretation of the mathematical theory treats probability as a (rational) agent’s assessment of uncertainty about a truth-value bearing proposition, E. (Here, equate the proposition asserting E and the event E of the mathematical theory.) The agent’s uncertainty about E is expressed as her/his degree of belief in E, and under appropriate assumptions may be measured by the agent’s disposition to wager on E.

The so-called ‘Dutch Book’ argument offers a prudential account of why the rational agent who is willing to engage in betting (under conditions made explicit, below) will have degrees of belief that conform to the formal probability calculus. Suppose that betting is in monetary units.

Definitions:

A bet on E, at odds of r:(1 – r), with total stake S > 0 pays off to the agent with a gain of (1 – r)S in case E obtains, and with a loss of rS in case E fails to obtain.

A bet against E, at odds of r:(1 – r), with total stake S > 0 pays off to the agent with a loss of (1 – r)S in case E obtains, and with a gain of rS in case E fails to obtain.

The agent’s betting odds of r:(1 – r) for E are fair just in case the agent is indifferent between betting on and betting against E at those odds, at least for moderate size stakes.

A (finite) set of bets constitutes a Dutch Book against the agent if the agent suffers a net loss from these bets regardless of which state of affairs obtains.

The agent’s fair betting odds are coherent if no Dutch Book is possible against them.

An elementary version of the ‘Book’ argument is captured in the following result:

The Dutch Book Theorem (deFinetti and Ramsey): Suppose that the rational agent offers fair odds for each event E in the space F and for moderate size stakes is willing to accept finite combinations of bets made at her/his fair odds. Then the agent is coherent, that is, immune to having a Dutch Book made against her/him, if and only if her/his fair odds satisfy the axioms of mathematical (finitely additive) probability, (3.1)–(3.3).
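
A minimal numeric sketch of the theorem, in Python with illustrative numbers: fair odds of 0.6 on E and 0.6 on its complement sum to more than 1, violating additivity (3.3), and a bookmaker who takes the agent up on a unit-stake bet on each event secures a sure profit.

# Sketch: the agent's net payoff from betting on E and on not-E at
# incoherent odds is negative in every state of affairs (a Dutch Book).
def payoff_bet_on(r, stake, event_obtains):
    """Agent's gain from a bet on an event at odds r:(1 - r)."""
    return (1 - r) * stake if event_obtains else -r * stake

r_E, r_notE, stake = 0.6, 0.6, 1.0       # incoherent: r_E + r_notE > 1
for E_obtains in (True, False):
    net = (payoff_bet_on(r_E, stake, E_obtains)
           + payoff_bet_on(r_notE, stake, not E_obtains))
    print(f"E obtains: {E_obtains}, agent's net payoff: {net:+.2f}")
# Both cases print -0.20: a sure loss, as the theorem predicts.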

Conditional probability, too, can be made the subject of the Dutch Book criterion by introducing called-off bets, e.g., a bet on E given F, which is annulled if F fails to occur.

Definition: A called-off bet on E given F, at odds of r:(1 – r), with total stake S > 0 pays off to the agent with a gain of (1 – r)S in case E ∩ F obtains, pays off with a loss of rS in case F obtains but E fails to obtain, and results in no change (0 payoff) in case F fails to obtain.

The concept of a called-off bet against E given F is likewise defined in parallel with the respective concepts for betting against an event, as is the condition of fair called-off odds. When called-off bets are included among those to be wagered on, the following improved result ensues.

Dutch Book Theorem for conditional probability: Assume that the agent offers fair odds and fair called-off odds, and for moderate size stakes is willing to accept finite combinations of wagers at these fair odds. Then the agent is coherent if and only if the fair odds satisfy the axioms of mathematical (finitely additive) probability (3.1)–(3.3), and the definition of conditional probability, (4).

  • Remark: The reader is alerted to the sad fact that when P(F) = 0, condition (4) does not establish that P(•|F) is a probability over F, i.e., it may fail axioms (3.1)–(3.3). Hence, the Dutch Book argument for conditional probability does not establish that, given an event F of probability 0, the coherent agent’s called-off fair odds given F constitute a probability.
  • Remark: Some writers use a dynamic interpretation of the conditional probability, so that P(•|F) designates the agent’s subsequent degrees of belief upon learning that event F obtains. The Dutch Book argument for conditional probability does not underwrite this dynamic interpretation. It is part of the static constraints on a rational agent’s degrees of belief at a fixed time. The called-off bets may, however, serve as a model of the agent’s hypothetical reasoning at the fixed time.
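
To see how fairness ties called-off odds to definition (4), the following sketch (Python, with illustrative probabilities) computes the agent’s expected payoff from a called-off bet on E given F; it vanishes exactly at the odds r = P(E ∩ F)/P(F).

# Sketch: expected payoff of a called-off bet on E given F at odds r.
p_EF, p_F = 0.2, 0.5                     # illustrative P(E & F) and P(F)

def expected_called_off(r, stake=1.0):
    # gain (1 - r)S on E & F, loss rS on F without E, 0 if F fails
    return (1 - r) * stake * p_EF - r * stake * (p_F - p_EF)

print(expected_called_off(p_EF / p_F))   # 0.0 at r = P(E|F) = 0.4
print(expected_called_off(0.5))          # negative: these odds are unfair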

Shimony (1955) introduces a more restrictive standard of coherence, which he calls strict coherence. An agent’s fair odds are strictly coherent provided that there is no possibility of making a ‘book’ against her/him where the agent stands no chance of winning on net, but risks losing for at least one point of Ω. Shimony establishes that an agent’s fair odds are strictly coherent if and only if they are coherent and only the impossible event Ø receives probability 0. Despite the desirability of having one’s odds be strictly coherent, the resulting standard precludes continuous probability distributions, unfortunately.

Savage (1954) gives an elegant version of personal probability in which the agent’s probability is reduced to her/his rational preference over acts. In the briefest outline, Savage’s axioms include the assumption that the agent’s preferences reveal a qualitative probability, as follows. Let b and w be prizes and assume that the agent prefers receiving the better prize b to the worse one w. For an event E, let the wager-on-E be the act that rewards the agent with prize b if E occurs and with w if E fails. Thus, these wagers are generalizations of the bets used in deFinetti’s Dutch Book argument. Savage postulates that the agent strictly prefers the wager-on-F to the wager-on-E if and only if the qualitative probability relation E < F obtains. Savage introduces additional postulates to ensure that exactly one (finitely additive) quantitative probability agrees with this qualitative probability. That quantitative probability is offered as the agent’s personal probability over events.

2.2 Logical (Or Classical) Probability

Dating to Laplace’s (1951) eighteenth-century work, at least, is the view that quantitative probability is determined once the space <Ω, F> is fixed. In its simplest version, Laplace’s Principle of Insufficient Reason states that the points of Ω determine equiprobable states. However, this principle leads quickly to a variety of inconsistencies, which are worth noting.

For one, Laplace’s principle makes the probability of an event depend upon which partition of states is used to describe it. A die’s landing with the 1-spot up is made equiprobable with its not landing that way when Ω is just the binary partition; in which case, by Laplace’s principle, the probability is 1/2 for each outcome. However, if the state space is partitioned into six points depending upon how many spots land facing upwards, Laplace’s principle yields a probability of 1/6 for the outcome with the 1-spot showing uppermost. We may continue to refine the state space, to include the information that the sum of the visible side faces on the die (as seen from a given perspective) is either less than, or is equal to, or is greater than the face showing uppermost. Then there are 14 distinct possibilities, but only one of these has a 1-spot uppermost. Then, by Laplace’s principle, the probability of a 1-spot uppermost is 1/14.

When the state space is continuous, the problem of applying Laplace’s principle grows more serious, as a transformation to an equivalent random variable may lead to a different probability. For example, let Ω be the set of real numbers in the unit interval [0, 1], let F be the space of its Borel measurable subsets, and consider the identity variable X(ω) = ω; then Laplace’s principle yields a uniform distribution over X. However, if we transform the space so that Ω′ = [0, 1] with ω′ = ω², and define the equivalent random variable Y(ω′) = ω′, then Laplace’s principle yields a uniform distribution when applied to the equivalent random variable Y. But a uniform probability distribution over X is evidently inconsistent with a uniform probability distribution over Y, since Y = X².
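
The inconsistency is easy to exhibit numerically. A Monte Carlo sketch, assuming Python with numpy: if Laplace’s principle makes X uniform on [0, 1], the equivalent variable Y = X² cannot also be uniform.

# Sketch: Y = X**2 is not uniform when X is uniform on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)  # uniform over X, per Laplace
y = x ** 2                               # the equivalent variable Y
print((x <= 0.25).mean())                # about 0.25 under uniform X
print((y <= 0.25).mean())                # about 0.50: Y is not uniform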

In an early-twentieth-century variant of Laplace’s idea, Keynes (1921) strove to revitalize the principle by applying it to a formalized language in the fashion of Russell and Whitehead’s Principia Mathematica. Moreover, Keynes’s theory has the added subtlety that it takes qualitative probability as primitive, and that relation is only partially defined: not all propositions can be compared even by a qualitative probability relation in Keynes’s system. Following Keynes, Carnap (1950) developed quantitative probability for a logical calculus. At first, Carnap sought to determine a single, well-defined quantitative probability for the monadic predicate calculus. But soon enough his approach (Carnap and Jeffrey 1971) became one best described as subjectivist, favoring symmetry conditions for atomic propositions similar to deFinetti’s concept of exchangeability. Papers by Scott and Krauss (1966) and Gaifman and Snir (1982) continue this tradition.

A more productive alternative than appeal to a formalized language for reviving Laplace’s principle is the use of mathematical symmetries as a basis for fixing the necessary equiprobable states. Most notable is Jeffreys’s (1961) theory of Invariance, dating from the 1930s. Jeffreys’s sophisticated Bayesian treatment appeals to transformational invariants of parametric statistical models of the observed data to determine a ‘prior’ probability for the parameter. For example, a uniform ‘prior’ probability is the invariant distribution for a location parameter, e.g., for the mean µ of data from a normal N[µ, 1] distribution. Fraser (1968) uses mathematical groups to determine the necessary invariances for fixing a probability, e.g., Haar measure. Each of these theories provides a local, not a global, interpretation of probability. That is, these theories provide necessary conditions for probability defined with respect to a specific inference problem. When multiple inference problems are considered at once, e.g., when the same theoretical quantity is a location parameter for one data set and a scale parameter for a second data set, difficulties ensue. These theories then generate what must be seen as either inconsistent or non-unique solutions. Furthermore, the invariant probabilities are not always countably additive. So-called ‘improper’ priors, e.g., a uniform distribution over a real-valued parameter, correspond to a purely finitely additive probability by assigning equal probability to each unit interval for the parameter, which intervals constitute the elements of a countable partition. Dawid et al. (1973) provide an important litany of ‘paradoxes’ for these theories, involving both transformations of parameters and multiple, simultaneous inferences.

Yet another approach for reviving Laplace’s principle is found in the original work of E. T. Jaynes (1983), who uses entropy as a guide to uncertainty. His principle of Maximum Entropy dictates that, subject to a set of constraints (i.e., expectations for bounded random variables), a rational agent should fix probability over the space by choosing that probability that simultaneously satisfies the constraints while maximizing entropy. (For discrete spaces this requires choosing the probability P = {pi : i = 1, …} to maximize −Σi pi log(pi) subject to the constraints.) For example, when Ω is finite and the constraint set is vacuous, the uniform distribution over Ω maximizes entropy. Unfortunately, the Maximum Entropy principle inherits the problems of Laplace’s principle, as it is sensitive both to repartitioning of the space Ω and, in the continuous case, to equivalent transformations.
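
Despite these difficulties, the principle is straightforward to implement for a discrete space. The sketch below, assuming Python with numpy and scipy, solves an illustrative die-style problem: maximize entropy subject to the unit-sum constraint and one mean-value constraint, E[face] = 4.5; with no constraint beyond unit sum, the solution would be the uniform distribution, as noted above.

# Sketch: Maximum Entropy over six faces with a mean constraint.
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)

def neg_entropy(p):
    return float(np.sum(p * np.log(p)))  # minimizing this maximizes entropy

constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
               {"type": "eq", "fun": lambda p: np.dot(p, faces) - 4.5}]
res = minimize(neg_entropy, x0=np.full(6, 1.0 / 6), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 6, constraints=constraints)
print(np.round(res.x, 4))                # weights increase toward face 6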

2.3 Objective Probability

2.3.1 Limiting And Finite Frequency Interpretations. Since Venn’s (1876) writings in the nineteenth century, at least, it has been popular to attempt to link probability with limiting frequency in infinite sequences of repeated trials, thereby underwriting an objective interpretation. Von Mises’ (1931, 1939) work in the early twentieth century offered a mathematical program for this approach. For von Mises, probability applies to a collective, which is a suitably random sequence of repeated trials. For example, a (hypothetically infinite) sequence of repeated flips with a particular coin that can land only heads up or tails up forms a collective and supports the fact that the objective probability of heads up is 1/2 provided that (see the simulation sketch after these two conditions):

(a) the limiting frequency in the sequence of the outcome heads up is 1/2.

(b) the sequence of outcomes is random, i.e., this limiting frequency for heads up is invariant over selections of infinite subsequences.
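
Conditions (a) and (b) can be illustrated by simulation, assuming Python with numpy; the particular selection rules below are illustrative.

# Sketch: frequency of heads overall, along the place selection 'every
# 3rd trial,' and along trials preceded by two consecutive heads.
import numpy as np

rng = np.random.default_rng(1)
flips = rng.integers(0, 2, size=200_000)          # 1 = heads up
print(flips.mean())                               # overall: about 0.5
print(flips[::3].mean())                          # every 3rd trial: about 0.5
after_hh = flips[2:][(flips[:-2] == 1) & (flips[1:-1] == 1)]
print(after_hh.mean())                            # after two heads: about 0.5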

Several important mathematical problems attend this approach. As is well known, the set of events that have a limiting frequency in a given sequence may fail to form a field (let alone a σ-field). Moreover, when Ω is infinite, even when limiting frequencies do exist they may fail to be σ-additive. However, the formal aspect of von Mises’ program that has been given most scrutiny is the theory of randomness.

Evidently, for a sequence to pass muster as random in von Mises’s sense, not all infinite subsequences are relevant. That is, assuming the limiting frequencies of the binary events heads up and tails up exist and are neither 0 nor 1, there will trivially be, e.g., the infinite subsequence consisting of outcomes with only heads up, whose limiting frequency for that event is 1, not 1/2. To sidestep this reduction of the randomness concept, von Mises proposed that only certain families of subsequences are relevant to objective randomness. These subsequences are determined by combinations of place selection rules, e.g., choose every kth trial, and tests for after-effects, e.g., consider the subsequence of outcomes preceded by two consecutive ‘heads up’ outcomes. One intuition von Mises offered for these restrictions is that the random sequence should be shown to be immune to sure profits from any feasible gambling system that might be used against it. Wald (1937) showed that a formally consistent theory was possible if the tests for randomness were limited to a countable infinity of rules. Church (1940) suggested using the denumerable class of recursive functions, in an attempt to make the concept of a random sequence language invariant. However, Ville (1939) showed that collectives based on denumerably many selection rules fail to validate some of the basic ‘strong law’ theorems of Kolmogorov’s theory, e.g., the law of the iterated logarithm fails. Also, by appeal to martingale theory, Ville showed that such collectives fail von Mises’s intuitive criterion for randomness: they do not preclude successful gambling systems.

Reichenbach (1938, 1949) proposed a rival frequency theory that avoided von Mises’s randomness conditions, even permitting its application to finite sequences. In doing so, however, the resulting theory depends ever more heavily on epistemic considerations for its warranted application. For example, a sequence that alternates successively between heads up on odd numbered flips and tails up on even numbered flips has a limiting frequency of 1/2 for each outcome. But, given that fact about the order of outcomes, we would be hard pressed to find a good use for this ‘objective’ probability when making inferences about such a process.

A more interesting finite-frequency interpretation is based on Kolmogorov’s (1963, 1968, 1984) idea for interpreting randomness as informational complexity in finite sequences. A proper account of this approach is beyond the scope of this paper. For first-rate discussions see Fine (1973) and van Lambalgen (1987).

2.3.2 Chance And Propensity Interpretation. The philosophers Braithwaite (1953), Hacking (1965) and Levi (1980) propose and defend an objective interpretation of probability based on the idea that an objective chance is a theoretical, quantitative disposition for a process or kind of trial to respond when activated. That is, consider a process or kind of trial described as flipping this coin by hand. The chance of its landing heads up on such a process is taken to be an unobserved, quantitative disposition for the coin to respond when flipped. The propensity interpretation is thin, on purpose, in that chances satisfy the mathematical theory of probability by stipulation!

What is particularly novel in these approaches about making chance a theoretical concept is that probability is not reduced to observed frequencies, contrary to the frequency interpretation. Rather, the meaning of the theoretical term chance is given by rules for testing chance claims (Braithwaite) or by rules linking chance with subjective probability (Hacking and Levi). In the latter case, central to the meaning of chance is a so-called Direct Inference principle. Put simply, that principle asserts: the personal probability P for a specific outcome O on a trial of the given kind is p, given the information that the chance is p for outcome O on that kind of trial. To wit: P(coin lands heads up on a toss | the chance is p that the coin lands heads up on a toss) = p.
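
A two-stage simulation, assuming Python with numpy and illustrative chance values 0.3 and 0.7, makes the principle concrete: conditional on the information that the chance is p, the frequency of the outcome is close to p.

# Sketch: draw a chance, then an outcome governed by that chance; the
# conditional frequency of heads given each chance value recovers it.
import numpy as np

rng = np.random.default_rng(3)
chances = rng.choice([0.3, 0.7], size=100_000)
heads = rng.uniform(size=chances.size) < chances
for p in (0.3, 0.7):
    print(p, heads[chances == p].mean())          # close to p in each case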

This treatment of chance as a theoretical term, with its meaning supplied by rules for application of the concept, accords with the Pragmatic turn in philosophy as proposed by Peirce (1932) in the nineteenth century, though Peirce himself favored a frequency interpretation.

2.4 The Fisher–Kyburg Interpretation

Fisher (1956) and Kyburg (1961) introduce an original semantics for probability, focused on the single-case assertion, e.g., (*) The probability is 1/2 that the next flip of this coin lands heads up.

For this theory, probability is fundamentally epistemological, grounded on both knowledge and ignorance of class-frequencies. The statistician who asserts (*) is required to have:

(a) knowledge of frequencies of outcomes of heads in the population of flips with this coin, a population that includes the next flip.

(b) ignorance of the statistics of outcomes in a smaller statistical population of coin flips that includes the next flip.

In this sense, their theory takes instances of direct inference (based on frequency information about populations) as the definition of probability.

For both Fisher and Kyburg, this interpretation of probability underwrites an important statistical analysis, e.g., Fisher’s fiducial probability. For example, let the random variable X take values from a population with known frequencies approximated by a normal distribution, N(µ, 1), but with parameter µ unknown. Then the pivotal quantity V = (µ – X) has a known distribution in this population: its values are distributed according to a normal N(0, 1) distribution. Suppose X = 7 is observed on a particular trial. The Fisher–Kyburg theory then asserts that on this same trial, given X = 7, the statistician’s probability distribution for V is N(0, 1). This claim rests on an assertion of ignorance on the part of the statistician: the statistician knows of no smaller population that includes this trial with statistics for V different from those of the N(0, 1) population. The upshot is a fiducial N(7, 1) probability distribution for µ, given X = 7. When a random variable X has a continuous, parametric distribution F(X, θ), then F itself may be treated as the pivotal quantity, having a known uniform U[0, 1] distribution. Fiducial inference about F, given the observation that X = x, yields a probability for the unknown parameter satisfying Fisher’s enigmatic formula: P(θ | X = x) = −∂F/∂θ, evaluated at X = x. Kyburg’s theory has added generality over Fisher’s in that it uses interval-valued frequency information, leading to a theory with lower and upper probabilities for single-case assertions. Despite the considerable novelty of this approach, it is challenged severely by nuisance parameters. That is, this interpretation of probability provides only a limited account of marginal probability, as its theory of conditional probability is not well defined, at least not as required by Bayes’s theorem.
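
The normal example can be sketched by simulation, assuming Python with numpy: by the ignorance claim, the pivot V = µ − X keeps its N(0, 1) distribution after X = 7 is observed, and transferring it to µ = X + V yields the fiducial N(7, 1) distribution.

# Sketch: the fiducial distribution for mu, given X = 7, via the pivot.
import numpy as np

rng = np.random.default_rng(2)
x_obs = 7.0
v = rng.normal(0.0, 1.0, size=100_000)   # pivot V = mu - X ~ N(0, 1)
mu_fiducial = x_obs + v                  # mu = X + V at X = 7
print(mu_fiducial.mean(), mu_fiducial.std())   # about 7 and 1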

3. Additional Readings

Nagel’s (1936, 1939) essays on quantitative probability are excellent reviews, still, of philosophical issues relating to rival interpretations. In particular, one important issue not addressed here is whether objective probability requires a commitment to indeterminism in science. Nagel makes clear why there is no conflict between objective probability and deterministic laws in science. A more elementary presentation of rival interpretations of quantitative probability is found in Kyburg’s (1970) textbook.

Fine’s (1973) book gives a good overview of complexity theories and randomness, relating in particular to Kolmogorov’s theory from the 1960s.

Fishburn’s (1986) article offers a wealth of insights on both philosophical and formal questions relating qualitative and quantitative concepts of probability. The bibliography there is particularly noteworthy. Also valuable is the discussion of comparative probability given in Walley and Fine (1979).

Bibliography:

  1. Braithwaite R B 1953 Scientific Explanation. Cambridge University Press, Cambridge, UK
  2. Carnap R 1950 Logical Foundations of Probability. University of Chicago, Chicago
  3. Carnap R, Jeffrey R (eds.) 1971 Studies in Inductive Logic and Probability. University of California Press, Berkeley, CA, Vol. 1
  4. Church A 1940 On the concept of a random sequence. Bulletin of the American Mathematical Society 46: 130–5
  5. Dawid P, Stone M, Zidek J V 1973 Marginalization paradoxes in Bayesian and structural inference. Journal of the Royal Statistical Society 35: 189–233 (with discussion)
  6. deFinetti B 1931 Sul significato soggettivo della probabilità. Fundamenta Mathematicae 17: 298–329
  7. deFinetti B 1937 Foresight: Its Logical Laws, Its Subjective Sources. Trans. in Kyburg and Smokler (eds.) 1964
  8. Fine T 1973 Theories of Probability. Academic Press, New York
  9. Fishburn P C 1986 The axioms of subjective probability. Statistical Science 1: 335–58 (with discussion)
  10. Fisher R 1956 Statistical Methods and Scientific Inference. Hafner Press, New York
  11. Fraser D A S 1968 The Structure of Inference. John Wiley, New York
  12. Gaifman H, Snir M 1982 Probabilities over rich languages, testing and randomness. Journal of Symbolic Logic 47: 495–548
  13. Hacking I 1965 Logic of Statistical Inference. Cambridge University Press, Cambridge, UK
  14. Horn A, Tarski A 1948 Measures in Boolean algebras. Transactions of the American Mathematical Society 64: 467–97
  15. Jaynes E T 1983 In: Rosenkrantz R (ed.) Papers on Probability, Statistics, and Statistical Physics. D. Reidel Publishing, Dordrecht, The Netherlands
  16. Jeffreys H 1961 Theory of Probability, 3rd edn. Oxford University Press, London
  17. Keynes J M 1921 Treatise on Probability. Macmillan, London
  18. Kolmogorov A N 1956 Foundations of Probability. Chelsea Publishing, New York
  19. Kolmogorov A N 1963 On tables of random numbers. Sankhya 25: 369–76
  20. Kolmogorov A N 1968 Three approaches to the definition of the concept of ‘amount of information’. Sel. Transl. Math. Stat. and Prob 7
  21. Kolmogorov A N 1984 The logical basis for information theory and probability theory. IEEE Trans. Inf. Theory, IT 14: 662–4
  22. Kraft C, Pratt J, Seidenberg A 1959 Intuitive probability on finite sets. Ann. Math. Stat. 30: 408–19
  23. Kyburg H E 1961 Probability and the Logic of Rational Belief. Wesleyan University Press, Middletown, CT
  24. Kyburg H E 1970 Probability and Inductive Logic. Collier– Macmillan, London
  25. Kyburg H E, Smokler H E (eds.) 1964 Studies in Subjective Probability. John Wiley, New York
  26. LaPlace P S 1951 A Philosophical Essay on Probabilities. Dover, New York
  27. Levi I 1980 The Enterprise of Knowledge. MIT Press, Cambridge, MA
  28. van Lambalgen M 1987 Random Sequences. University of Amsterdam, Amsterdam
  29. Mises R von 1931 Wahrscheinlichkeitsrechnung und ihre Anwendungen in der Statistik und theoretischer Physik. Deuticke, Leipzig
  30. Mises R von 1939 Probability, Statistics, and Truth. Macmillan, New York (Translation of [1928] Wahrscheinlichkeit, Statistik, und Wahrheit. Wien.)
  31. Nagel E 1939 Principles of the Theory of Probability. University of Chicago Press, Chicago
  32. Nagel E, Margenau H, Ducasse C J, Wilks S S 1936 The meaning of probability. Journal of the American Statistical Association 31(193): 10–30
  33. Peirce C S 1932, 1936 Collected Papers. Harvard University Press, Cambridge, MA, Vols. 2 and 6
  34. Ramsey F P 1926 Truth and probability. In: Braithwaite R B (ed.) The Foundations of Mathematics 1950. Humanities Press, New York
  35. Reichenbach H 1938 Experience and Prediction. University of Chicago Press, Chicago
  36. Reichenbach H 1949 Theory of Probability. University of California Press, Berkeley, CA
  37. Savage L J 1954 The Foundations of Statistics. John Wiley, New York
  38. Scott D, Krauss P 1966 Assigning probabilities to logical formulas. In: Hintikka J, Suppes P (eds.) Aspects of Inductive Logic. North-Holland, Amsterdam
  39. Shimony A 1955 Coherence and the axioms of confirmation. Journal of Symbolic Logic 20: 644–60
  40. Venn J 1876 The Logic of Chance, 2nd edn. Macmillan, London
  41. Ville J 1939 Étude critique de la notion de collectif. Gauthier-Villars, Paris
  42. Wald A 1937 Die Widerspruchsfreiheit des Kollektivbegriffes der Wahrscheinlichkeitsrechnung. Ergebnisse eines mathematischen Kolloquiums 8: 38–72
  43. Walley P, Fine T 1979 Varieties of modal (classificatory) and comparative probability. Synthese 41: 321–74