Philosophy Of Truth Research Paper


The point of an inquiry is to get at the truth—not the whole truth about everything, but the whole truth of some matter at hand. This truism generates the central problems of epistemology: What is truth? What kinds of things are true? How can we get at the truth, or know when we have attained it? Will any old truth do, and is any old falsehood a failure?



1. Truth And Its Bearers

Truth seems to be a simple affair. That snow is white is true if … well, snow is white! That snow is hot is true if … snow is hot! We know what it would take for snow to be white. A certain kind of stuff—flaky frozen water—has to have a certain appearance. So we know what it takes for it to be true that snow is white, and since the stuff has the appearance, it is true that snow is white. One could be forgiven for thinking that there’s an end of it. The so-called redundancy theory of truth elaborates this insight (Ramsey 1927, Horwich 1990), although the same insight is also the main motivation behind the so-called correspondence theory.

But what exactly are the bearers of truth and falsity? One answer is sentences. Sentences (grammatically correct strings of letters like ‘Snow is white’) appear familiar, unproblematic, concrete. Another answer is propositions (or whatever it is that sentences mean). These appear more abstract and remote than sentences. The ‘linguistic turn’ in philosophy, a widespread presumption against abstract entities, and Tarski’s ground-breaking definition of sentential truth (Tarski 1969) have all favored sententialism.

How many sentences are there on the next line?

Snow is white. SNOW IS WHITE. Snow is white.

The answers, one and three, are both defensible. We distinguish sentence types (of which there is just one here) from sentence tokens, individual instances of a sentence type (of which there are three here). A type is not identical with any of its instances, or with a set of them. A set cannot change its members—its identity is given by its members—whereas the number of instances of a sentence type is in constant flux. A type, even a sentence type, is an abstract entity. That ‘Snow is white’ is true is a claim about the sentence type. So eschewing meanings in favor of sentences does nothing to rid the universe of abstract entities (Church 1956).

That ‘Snow is white’ is true is a fact about English, but the fact that snow is white is not. ‘Snow is white’ expresses a truth which can be expressed in other languages, suggesting language-independent truths (Frege 1984). Given continuous space–time, the trajectory of a single particle generates nondenumerably many truths. Since any learnable language contains at most denumerably many sentences, the totality of truths outstrips the totality of sentences of any learnable language. So the primary bearers of truth cannot be sentences. Without meaning, it is obscure what the truth of a sentence would consist in. ‘Snow is white’ is true in virtue of its expressing something which in fact obtains. A meaningful sentence expresses a meaning, and that, in conjunction with the state of the world, determines whether it is true or false.

2. Verifiability

To determine whether ‘Snow is white’ is true one must know what it means. And to know what it means one must know what the world would look like were it true. So meaning seems related to an ability to ascertain truth. The verificationist theory of meaning, the battle cry of the logical positivists, articulates this conviction: that a sentence is meaningful if and only if it can be verified by observational means (Schlick 1979).

The positivists took their inspiration from Einstein’s analysis of simultaneity. In his 1905 relativity paper, Einstein argued that claims about simultaneity are physically meaningless until we agree on a procedure for determining simultaneity. Einstein supplied a verification procedure for ‘A is simultaneous with B,’ thereby precipitating a fundamental revolution in the physics of space–time. The positivists co-opted the approach to effect a similar revolution in philosophy (Schlick 1979). (Einstein later repudiated his early verificationism with the now famous put-down: ‘One shouldn’t tell a good joke twice.’)

The verificationist theory serves the twin desiderata of positivism: endorsing science while disenfranchising metaphysics, religion, and (perhaps embarrassingly) ethics. Sentences reporting observations (‘There is a red patch in my visual field’) are paradigmatically meaningful. Sentences like ‘The Nothing nothings,’ ‘God is omnipotent,’ and ‘Torture is wrong’ are meaningless, since there are no possible observations which could verify them.

Verificationism dovetails nicely with an uncompromising epistemology. A true sentence must be meaningful, hence verifiable, and so knowable. The facts cannot outstrip our means of discovery. This consequence attracted devastating criticisms, mostly leveled by the rigorously self-critical positivists themselves. For example, the theory deems scientific theories themselves meaningless, since they make wildly general claims. Newton’s gravitational theory, for example, claims that any two masses anywhere in space–time attract each other. All the observational data that we will ever have is but a tiny fragment of the plethora of predictions the theory makes. Even a simple claim like ‘metals expand when heated’ goes beyond any finite amount of data. For this reason, scientific theories are unverifiable (Popper 1963).

This is a variation on Hume’s notorious problem of induction. Whenever we make a prediction about the unobserved we go beyond what has been verified. But it is precisely because it does this that science is so interesting and useful. Science at its best gives us depth—profound and contentful theories which go far beyond the actual and possible data—and depth pulls away from the positivists’ goal of certainty (Watkins 1984).

The positivists made special provisions for logical and mathematical truths, as well as philosophical claims like verificationism itself. These are not established by observation, for they are not factual claims at all. Rather, they are analytic—true in virtue of the meanings of constituent terms—like ‘Bachelors are unmarried males.’ Some can be obtained by analysis of meaning, others by deduction from established analytic truths. Deduction is justified by appeal to the meanings of logical terms such as ‘and,’ ‘not,’ ‘all.’

A first cousin of verificationism is thus the doctrine that mathematical truth and provability coincide. The refutation of this also came from the ranks of the movement. Gödel proved that any consistent axiomatization of elementary arithmetic fails to yield some arithmetic truths. Given any such axiomatization, we can construct a true arithmetic sentence not provable in that system. Strengthen that system (perhaps by adding that very sentence as a further axiom) and you have a new system generating its own unprovable truths. Mathematical truth outstrips provability (Gödel 1986).

The verificationists tried weakening their core doctrine, but the weakened accounts failed to endorse science while repudiating metaphysics. Verificationism has, somewhat surprisingly, experienced a recent revival in the guise of antirealism (Dummett 1978). This version of it is also motivated by the desire to cut content down to verifiable size, eschewing depth in favor of certainty, and it faces the same kinds of objections.

3. Truth Conditions

The standard counterexamples to verificationism are sentences with this feature: we know what it would take for them to be true, even though we have no way of verifying them. We do know what ‘Metals expand when heated’ means, although it is unverifiable. The meaning of a sentence is given by the condition which would render it true, rather than by any procedure which would verify its truth. This is the truth-conditional theory of meaning: the meaning of a sentence is its truth condition, not its verification condition.

One objection to abstract entities, like propositions or truth conditions, is that they do not have clear-cut criteria of identity. There is no criterion telling us when we are dealing with one or two or more such entities. Quine’s slogan, ‘no entity without identity,’ demands we make do without them (Quine 1981). Carnap, perhaps the most important verificationist, laid the foundations for verificationism’s successor: the possible-worlds account (Carnap 1947).

As it happens, it snowed in Boulder, CO, on 1 January 1999, but it might not have. The actual world is one complete way for the world to be, but it is not the only way it could be. It is not logically necessary that the world be the way it is. We countenance a space of different possibilities just one of which is actual. A claim divides the class of complete possibilities into two. It induces a mapping from the collection of complete possibilities (the logical space) to the two truth values. A set can be thought of as just such a mapping—it maps its members to a positive indicator (True), and nonmembers to a negative indicator (False). So a proposition is associated with a set of worlds. The possible-worlds account says there is no more or less to a truth condition than that (Lewis 1986). The account utilizes Quine’s paradigm of a clear-cut criterion of identity—the identity condition for sets. Sets are identical just in case they contain the same members. Propositions (sets of worlds) are identical if and only if they contain the same worlds. We can now tap set-theory for a unifying account of many logical phenomena.

For example, entailment is standardly understood in terms of truth preservation: P entails Q if the truth of P guarantees the truth of Q. This becomes: P is a subset of Q. Transitivity of entailment falls out of the transitivity of the subset relation. Disjunction is set-theoretic union, conjunction is intersection, and negation is complementation. A proposition is (logically) possible if it is true in some world; impossible if true in no world; necessary if true in all worlds. Modal principles—for example, that the negation of a necessary proposition is impossible—also fall out of set theory. Logical relations between sentences thus piggyback on the propositions they express. A proposition is true simpliciter if it contains the actual world, and a sentence is true if it expresses a true proposition.
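
As a minimal sketch (not from the original text), these set-theoretic reductions can be made concrete in a few lines of Python; the two atomic sentences and all names below are illustrative:

from itertools import product

# Toy logical space: a "world" is a complete assignment of truth values
# to a small stock of atomic sentences (both atoms are illustrative).
ATOMS = ("snow_is_white", "grass_is_green")
WORLDS = frozenset(product((True, False), repeat=len(ATOMS)))

def proposition(pred):
    """The set of worlds satisfying a predicate on worlds."""
    return frozenset(w for w in WORLDS if pred(w))

snow_white = proposition(lambda w: w[0])
grass_green = proposition(lambda w: w[1])

# Logical notions reduce to set theory, as in the text.
def neg(p):        return WORLDS - p        # negation = complement
def conj(p, q):    return p & q             # conjunction = intersection
def disj(p, q):    return p | q             # disjunction = union
def entails(p, q): return p <= q            # entailment = subset
def necessary(p):  return p == WORLDS       # true in all worlds
def impossible(p): return not p             # true in no world

# Sample checks: a conjunction entails its conjuncts, and the negation
# of a necessary proposition is impossible.
assert entails(conj(snow_white, grass_green), snow_white)
assert impossible(neg(disj(snow_white, neg(snow_white))))

# Propositional identity is set identity: same worlds, same proposition.
assert disj(snow_white, grass_green) == disj(grass_green, snow_white)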

Despite its explanatory power, this account cannot be the whole story. A necessary proposition is true in all worlds, but only one set of worlds contains all worlds. ‘One plus one is two’ expresses a necessary truth, as does ‘Two is two.’ Hence, on this account, these two sentences express one and the same proposition. However, the former expresses a mildly interesting truth about addition, while the latter expresses something trivial. Since what they express is different, what they express cannot be just a class of worlds. The account is too coarse-grained. While the sentential approach can make these fine-grained distinctions, it is too fine-grained. (‘Uno più uno fa due’ expresses the same truth, in Italian, as ‘One plus one is two,’ but they are distinct sentences.)

There is a range of fine-grained theories of meanings which accommodate these distinctions without resorting to sentences (Bealer 1982, Tichý 1987). These theories do not need to dispense with possible worlds, and some are built on them. Provided a proposition induces a mapping from worlds to truth values, even if it cannot be identified with such a mapping, we can reap the benefits of the possible-worlds framework.

4. Falsifiability

That metals expand when heated is false in any world in which one piece of heated metal does not expand. Since we cannot check every piece of heated metal, the proposition cannot be shown to be true. By contrast, it can be shown to be false, or falsified. The observation of one nonexpanding piece of heated metal will do the trick.

Karl Popper (1963) was one of the first to stress the fallibility of scientific theories and the asymmetry of verification and falsification. If the positivists took inspiration from Special Relativity, Popper took it from General Relativity—specifically, its prediction of the bending of light near massive bodies. He was impressed that Eddington’s eclipse experiment of 1919 could verify neither Newton’s nor Einstein’s theory, but could have refuted either, and actually refuted Newton’s. Thenceforth he took falsifiability to be the hallmark of genuine science. Pseudoscientific theories (Popper’s examples were psychoanalysis and astrology) are replete with ‘confirming instances.’ Everything that happens appears to confirm them only because they rule nothing out. Popper argued that genuine theories must forever remain conjectures. But science is still rational, because we can submit falsifiable theories to severe tests. If they fail, we weed them out. If they survive? Then we submit them to more tests, until they too fail, as they almost assuredly will.

There are three serious problems with falsificationism. First, it does not account for the apparent epistemic value of confirmations. The hardline falsificationist must maintain that the appearance is an illusion. Second, it cannot explain why it is rational to act on the unrefuted theories. Confidence born of experimental success reeks of inductivism. Third, pessimism about the enterprise of science seems obligatory. Although truth is the goal of inquiry, the best we can manage is to pronounce a refuted theory false. We return to these in succeeding sections.

Other criticisms stem from Duhem’s problem (Duhem 1954). Duhem noted that predictions can be deduced from theories only with the aid of auxiliary assumptions. One cannot deduce the path of a planet from Newton’s theory of gravitation without additional hypotheses about the masses and positions of the Sun and the planets, and the absence of other forces. It is not Newton’s theory alone, but the conjunction of the theory with these auxiliaries, which faces the tribunal of experience. If the conjunction fails it is not clear which conjunct should be blamed. Since we can always blame the auxiliaries, falsification of a theory is impossible.

Quine generalized Duhem’s point to undermine the positivists’ distinction between factual truth and analytic truth. If no sentence is ‘immune to revision’ in the face of anomalies, no sentence is true simply by virtue of its meaning. Not even the law of excluded middle is sacrosanct, and meaning itself becomes suspect (Quine 1981).

Kuhn used Duhem to undermine the rationality of scientific revolutions. Since a theory cannot be refuted by experiment, it is not irrational to stick to it, tinkering with auxiliaries to accommodate the data. ‘Normal science’ consists precisely in that. A revolution only occurs when a bunch of scientists jump irrationally from one bandwagon to another. In Kuhn’s famous phrase (which he later regretted) a revolution is a ‘paradigm shift’ (Kuhn 1962).

Feyerabend followed Kuhn, maintaining that proponents of different paradigms inhabit incommensurable world-views which determine their perception of the so-called observational ‘facts’ (Feyerabend 1993), thereby setting the stage for constructivists who maintain that science is a human construct (obviously true) with no controlling links to reality (obviously false). From constructivism it is a short leap into postmodernist quicksand, the intellectual vacuity of which has been brutally but amusingly exposed (Sokal 1996).

The more extreme lessons drawn from Duhem’s simple point may sound sexy, but they do not withstand sober scrutiny (Watkins 1984). It is just false that each time a prediction goes awry it is one’s whole global outlook that is on trial, and that one may rationally tinker with any element at all. In the nineteenth century, Newton’s theory was well confirmed by myriad observations and applications, but it faced an apparent anomaly in the orbit of Uranus. Because the theory was well confirmed, scientists looked to the auxiliary assumptions, such as the assumption that there are no unobserved planets affecting Uranus’s trajectory. It was far more probable, in the light of the total evidence, that there should be an as yet unobserved planet in the solar system than that Newton’s theory be false. It was eminently reasonable to postulate such a planet and search for it. The subsequent discovery of Neptune was a resounding success for the theory.

This rejoinder, of course, requires a positive account of probability and confirmation, one which falsificationism does not supply.

5. Confirmation

Having jettisoned verifiability, the positivists turned to the theory of probability to ground a rigorous logic of confirmation and induction, comparable to deductive logic (Carnap 1950).

The logical probability of a proposition is a measure of its area in logical space. It is a normalized measure, so the probability of the whole space (the necessary proposition) is 1; the probability of the impossible proposition, the empty set, is 0. Other propositions have a probability somewhere between 0 and 1, depending on the proportion of the space they cover. Conditional probability—the probability of P given Q, written Pr(P|Q)—is a kind of generalized entailment and is defined as a ratio of unconditional probabilities: Pr(P&Q) / Pr(Q). If Q logically entails P then Pr(P|Q) is maximal, 1. If Q is logically incompatible with P then it is minimal, 0. In between these cases, the closer Q comes to entailing P, the closer Pr(P|Q) is to 1. Evidence E confirms hypothesis H just in case conditioning on E raises the probability of H: Pr(H|E) > Pr(H). If H entails E, Pr(E) < 1 and Pr(H) > 0, then if E is verified, E confirms H.
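
A toy model makes these definitions concrete; the three-atom space, the indifference prior, and the particular H and E are assumptions chosen purely for illustration:

from itertools import product
from fractions import Fraction

# Toy logical space over three illustrative atoms, with the indifference
# prior: every world gets equal initial weight.
WORLDS = set(product((True, False), repeat=3))

def pr(p):
    """Unconditional probability: the proportion of the space p covers."""
    return Fraction(len(p), len(WORLDS))

def pr_given(p, q):
    """Conditional probability Pr(P|Q) = Pr(P & Q) / Pr(Q)."""
    return Fraction(len(p & q), len(q))

def confirms(e, h):
    """E confirms H just in case conditioning on E raises H's probability."""
    return pr_given(h, e) > pr(h)

h = {w for w in WORLDS if all(w)}   # hypothesis: all three atoms true
e = {w for w in WORLDS if w[0]}     # evidence: the first atom is true

# H entails E, Pr(E) < 1 and Pr(H) > 0, so verifying E confirms H.
assert h <= e and pr(e) < 1 and pr(h) > 0
assert confirms(e, h)
print(pr(h), "->", pr_given(h, e))  # 1/8 -> 1/4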

Where do the logical probabilities come from? Carnap searched for plausible a priori constraints on logical probability. The most obvious is given by the principle of indifference: all possible worlds have an initially equal probability. Logic does not play favorites. Unfortunately, this measure yields counterintuitive results. If the universe is infinite, and the evidence consists of a finite number of reports of the expansion of heated metals, it does nothing to raise the probability of the hypothesis that metals expand when heated. Even if the universe is finite, no amount of such data will raise our confidence that the next piece of heated metal will expand.

The search was on for a measure that yields more palatable results. However, there is one rather pressing problem. If the universe is infinite, then there will be an infinite number of mutually exclusive possible theories pretty much on a par in the absence of evidence. So they should all receive the same initial logical probability, which must therefore be zero. Now no evidence can ever raise the probability of an hypothesis if its initial probability is zero. So confirmation is impossible.

Inductive logicians turned to the task of motivating initial probability distributions which would give differential finite weight to generalizations, but the upshot was an embarrassment of riches, a plethora of ‘inductive methods’ (Hintikka and Suppes 1966). The failure to motivate a single preordained measure of logical probability is not a fatal blow to confirmation theory, however. The probabilist can allow a range of distributions, reflecting distinct subjective degrees of confidence. Bayesianism, as the position is called, places two objective constraints on rational belief (Howson and Urbach 1989). First, one’s degrees of belief must obey the probability calculus. Second, one must update degrees of belief by conditionalization on incoming evidence. Together these constraints motivate the convergence of rational opinion. Given suitable initial conditions, in the long run differences in initial distributions will be overwhelmed by incoming evidence.
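
Here is a minimal sketch of the two constraints at work, with invented hypotheses, priors, and data; it shows two very different initial distributions being overwhelmed by shared evidence:

# Two agents with different priors over three rival hypotheses about a
# coin's bias conditionalize on the same tosses. Hypotheses, priors, and
# data are all invented for illustration.
biases = [0.2, 0.5, 0.8]            # Pr(heads) under each hypothesis
belief_a = [0.90, 0.05, 0.05]       # agent A leans heavily toward 0.2
belief_b = [0.05, 0.05, 0.90]       # agent B leans heavily toward 0.8

def conditionalize(belief, heads):
    """Update degrees of belief on a single toss (True = heads)."""
    joint = [p * (b if heads else 1 - b) for p, b in zip(belief, biases)]
    total = sum(joint)
    return [j / total for j in joint]

tosses = ([True] * 4 + [False]) * 12   # 60 tosses, 80% heads

for toss in tosses:
    belief_a = conditionalize(belief_a, toss)
    belief_b = conditionalize(belief_b, toss)

# Shared evidence washes out the initial disagreement: both agents end
# up all but certain of the 0.8 hypothesis.
print([round(p, 4) for p in belief_a])
print([round(p, 4) for p in belief_b])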

The basic problem with Bayesianism, however, is that probability, like verifiability, strongly favors certainty over depth. The less content a theory has, the larger the region of the logical space it covers, and the more probable it is. The whole truth is just a tiny speck in logical space—the proposition which entails all true propositions is clearly the singleton of the actual world. Any deep theory which comes anywhere close to the truth will likewise be of very low probability. We want deep theories that are close to the truth. How can that be reconciled with the demand for evidential support?

6. Truthlikeness

Newtonian physics superseded Aristotelian physics, but it in turn was superseded by Einsteinian physics. Newton’s theory is now known to be false. Does it follow that Newton did no better at realizing the aim of physical inquiry than Aristotle? If truth is the goal, are all falsehoods alike in being total failures?

Take a simple query: ‘What is the number of the planets (Π)?’ The ancients gave the false answer Π = 5, but the answer has been steadily revised upwards to Π = 9, and let us suppose that is the truth of this matter. Π = 5, Π = 6, Π = 7, and Π = 8 are all false, but each makes some progress towards the truth over its predecessor. We have here a sequence of four falsehoods which increase in their truthlikeness or verisimilitude.

Probability and truthlikeness are easily conflated. Subjective probability is, if you like, seeming like truth. Truthlikeness is being similar to truth. However, the two concepts are completely different. Probability is an epistemic concept, truthlikeness is not. Π = 8 is closer to, more like, the truth (Π = 9) than Π = 5, completely independently of the evidence. Moreover, it remains closer to the truth even after the probability of both falls to 0 with the discovery of Pluto. So having minimal probability is compatible with great truthlikeness. Or consider the logically necessary truth. This is not only true but certain, but it is not close to the truth about Π. Having maximal probability is thus compatible with low truthlikeness. Now consider the following sequence of truths:

[Figure: a sequence of successively stronger true answers about Π, narrowing in on the truth Π = 9.]

This sequence exhibits decreasing probability, but increasing truthlikeness. So probability and truthlikeness, like certainty and depth, tend to pull in opposite directions. A proposition cannot be close to the whole truth without being deep. It cannot be probable without being shallow. The whole truth (the proposition which is true in just one world, the actual world) is undoubtedly closest to the whole truth. It is the deepest, correct proposition. But on any actual measure of subjective probability it will have probability 0 or close to 0.

Popper was the first to realize the importance of the concept of truthlikeness (Popper 1963). Popper wanted to be a realist (truth might outstrip our means of discovery), a fallibilist (any general theory is likely to be false), and an optimist (we can and do make epistemic progress). If all we can say for sure is ‘missed again!,’ a miss is as good as a mile, and the history of inquiry is a sequence of such misses, then epistemic pessimism follows. But if some false hypotheses are closer to the truth than others, then realism and fallibilism are compatible with optimism. The history of inquiry could turn out to be one of steady progress towards our goal, and there may be empirical indicators of that.

Popper also formulated the first rigorous account of the notion, in terms of the true and false consequences of theories. According to Popper, a proposition P is closer to the truth than Q if P entails more truths than Q, and no more falsehoods (or fewer falsehoods, and no fewer truths). Unfortunately, this elegant theory implies that no false theory is closer to the truth than any other. (Surprisingly, it took nearly 10 years before this defect was noticed (Miller 1974, Tichý 1974).) Let G be some truth implied by P but not by Q, and let F be any falsehood implied by P. Then F&G is a falsehood implied by P, but not by Q. (If Q implied F&G then Q would also entail the second conjunct, G—contradiction.) We cannot add truths to P without adding falsehoods (and, likewise, we cannot subtract falsehoods without subtracting truths).
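
The argument can be checked mechanically in a toy possible-worlds setting; the two-atom space, the chosen actual world, and the rival false theories P and Q below are all illustrative assumptions:

from itertools import chain, combinations, product

# Toy check of the Miller/Tichý argument over a two-atom space.
WORLDS = frozenset(product((True, False), repeat=2))
ACTUAL = (True, True)       # arbitrarily, both atoms are true

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def consequences(p):
    """Everything p entails: all supersets of p within the space."""
    return {p | frozenset(s) for s in subsets(WORLDS - p)}

def true_cons(p):  return {c for c in consequences(p) if ACTUAL in c}
def false_cons(p): return {c for c in consequences(p) if ACTUAL not in c}

# Two false theories: P says both atoms are false; Q says only that the
# first atom is false.
P = frozenset({(False, False)})
Q = frozenset({(False, False), (False, True)})

# P entails truths that Q misses, but only at the cost of also entailing
# falsehoods that Q misses: no extra truths without extra falsehoods.
assert true_cons(P) - true_cons(Q)
assert false_cons(P) - false_cons(Q)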

A different approach takes the likeness in truthlikeness seriously, meshing this with a possible-worlds approach to propositions (Tichý 1974, Hilpinen 1976, Oddie 1986).

Suppose possible worlds exhibit degrees of similarity to one another, and distance between worlds is a measure of dissimilarity. If we could find a natural and plausible way of measuring the distance between subsets of the space we would have a measure of distance between propositions. Suppose T is the whole truth about some matter (as Π = 9 is the whole truth about the number of the planets). Answers to a query, whether partial (like 7 ≤ Π ≤ 9) or complete (like Π = 7), would then be ordered by their distance from the truth (Π = 9). This provides a schema for a theory, the details of the similarity metric and of propositional distance to be filled in.
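
One minimal way to fill in the schema for the planets example (an assumption-laden sketch, not the literature's settled measure): take candidate values of Π as worlds, numeric distance as the similarity metric, and average distance from the true world as propositional distance:

# One simple likeness measure for the planets example (an illustration,
# not the only option in the literature): worlds are candidate values of
# the number of planets, and an answer's distance from the truth is the
# average distance of the worlds it allows from the true world.
TRUE_WORLD = 9                           # suppose the truth is that Π = 9

def distance_from_truth(answer):
    return sum(abs(w - TRUE_WORLD) for w in answer) / len(answer)

def truthlikeness(answer, spread=10):
    """Normalize distance into a closeness score (spread is arbitrary)."""
    return 1 - distance_from_truth(answer) / spread

# The historical sequence of complete answers improves step by step ...
for ans in ({5}, {6}, {7}, {8}, {9}):
    print(sorted(ans), truthlikeness(ans))   # 0.6, 0.7, 0.8, 0.9, 1.0

# ... and partial answers are ordered too: 7 <= Π <= 9 scores 0.9.
print(truthlikeness(set(range(7, 10))))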

There are two important criticisms of this likeness approach. First, any similarity metric is sensitive to the set of properties deemed salient (Miller 1974). If all possible attributes are salient, then distinctions of similarity collapse. So if the selection of attributes is to be objective, then some properties must ‘carve nature at the joints.’ That involves a robust version of realism about properties, but one the scientific realist might not mind embracing (Armstrong 1978).

Second, a dilemma: if we know the truth we do not need truthlikeness; and if we do not know the truth, we cannot know which theories are truthlike. Either way, the concept is a useless addition to our intellectual armory.

Both horns are weak. First, even once we have attained the truth it is interesting to trace our history of attempts to reach it, and characterize the progress. Second, even though degree of truthlikeness is typically an unknown magnitude, with a measure of probability we can estimate its value in the usual ways. Expected degree of truthlikeness is epistemically accessible. Truthlikeness would then dovetail nicely with decision theory. Suppose there are cognitive utilities, just as there are noncognitive utilities. The point of inquiry would be to maximize cognitive utility. The best cognitive state to be in is, Godlike, to embrace the whole truth—the proposition with maximal truthlikeness. Embracing propositions far from the truth is cognitively bad. So it is natural to identify cognitive utility with degree of truthlikeness. However, the injunction to maximize objective cognitive value does not provide a selection procedure since, as in the practical case, we have no direct epistemic access to states with high utility. So we must be guided by expected cognitive utility preferring those theories with higher expected truthlikeness (Niiniluoto 1987).
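
Continuing the sketch above, expected truthlikeness weights an answer's truthlikeness in each candidate world by one's credence that that world is actual; the credence distribution below is invented purely for illustration:

# Expected truthlikeness: weight an answer's truthlikeness in each
# candidate world by one's degree of belief that that world is actual.
def tl(answer, true_world, spread=10):
    avg = sum(abs(w - true_world) for w in answer) / len(answer)
    return 1 - avg / spread

credence = {8: 0.2, 9: 0.7, 10: 0.1}     # invented degrees of belief about Π

def expected_tl(answer):
    return sum(p * tl(answer, w) for w, p in credence.items())

# A bold complete answer can carry more expected truthlikeness than a
# safe, highly probable disjunction: depth beats mere probability.
print(expected_tl({8}))                  # bold, probably false: ~0.91
print(expected_tl(set(range(1, 21))))    # weak 1 <= Π <= 20: ~0.49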

7. Conclusion

This decision-theoretic approach is an interesting compromise between the positivists’ concern with truth and probability and the falsificationists’ apparently conflicting concern with depth. But as so often happens, this offspring exposes defects in both its parents. The positivists were misguided in concentrating on truth and probability. A theory can be true and have high probability and yet very low truthlikeness. The tautology is an extreme example, but trivial contingent truths can be very well confirmed and still be far from the truth. Being true, or being likely to be true, is not where the action is. Such theories may lack depth. But the falsificationists were wrong to demonize falsity and exalt falsification. A false and falsified theory can still be close to the truth, with good evidence that it is so (Newton’s theory). Falsifying a theory shows it is not the best, but the best theory is not the only good theory. A falsehood may not hit the bullseye, but it can come very close.

Coherent defensible accounts of truth, probability, and truthlikeness can be combined. The goal of inquiry is the whole truth. Actual progress occurs when truthlikeness increases. The epistemic indicator of truthlikeness is expected truthlikeness. This can be high even when a proposition is refuted, low even when it is confirmed. The conflict between the depth and certainty poles, between which epistemologists have been oscillating for 400 years, can thus be resolved.

Bibliography:

  1. Armstrong D M 1978 Universals and Scientific Realism. Cambridge University Press, Cambridge, UK
  2. Bealer G 1982 Quality and Concept. Clarendon Press, Oxford, UK
  3. Carnap R 1947 Meaning and Necessity. Chicago University Press, Chicago
  4. Carnap R 1950 Logical Foundations of Probability. Chicago University Press, Chicago
  5. Church A 1956 Propositions and sentences. In: Bochenski J M (ed.) The Problem of Universals. University of Notre Dame Press, Notre Dame, IN, pp. 1–25
  6. Duhem P 1954 The Aim and Structure of Physical Theory [trans. Wiener P]. Princeton University Press, Princeton, NJ
  7. Dummett M A E 1978 Truth and Other Enigmas. Harvard University Press, Cambridge, MA
  8. Feyerabend P K 1993 Against Method. Verso, London
  9. Frege G 1984 The thought: a logical inquiry. In: McGuinness B (ed.) Collected Papers on Mathematics, Logic and Philosophy. Blackwell, Oxford, UK, pp. 351–72
  10. Gödel K 1986 On formally undecidable propositions of Principia Mathematica and related systems I. In: Feferman S, Dawson J W, Kleene S C, Moore G H, Solovay R M, van Heijenoort J (eds.) The Collected Works of Kurt Gödel. Oxford University Press, New York, Vol. I
  11. Hilpinen R 1976 Approximate truth and truthlikeness. In: Przełęcki M, Szaniawski K, Wójcicki R (eds.) Formal Methods in the Methodology of Empirical Sciences. Reidel, Dordrecht, The Netherlands, pp. 144–95
  12. Hintikka J, Suppes P (eds.) 1966 Aspects of Inductive Logic. North-Holland, Amsterdam
  13. Horwich P 1990 Truth. Blackwell, Oxford, UK
  14. Howson C, Urbach P 1989 Scientific Reasoning: The Bayesian Approach. Open Court, La Salle, IL
  15. Kuhn T 1962 The Structure of Scientific Revolutions. University of Chicago Press, Chicago
  16. Lewis D 1986 The Plurality of Worlds. Blackwell, Oxford, UK
  17. Miller D 1974 Popper’s qualitative theory of verisimilitude. British Journal for the Philosophy of Science 25: 166–77
  18. Niiniluoto I 1987 Truthlikeness. Reidel, Dordrecht, The Netherlands
  19. Oddie G 1986 Likeness to Truth. Reidel, Dordrecht, The Netherlands
  20. Popper K R 1963 Conjectures and Refutations. Routledge, London
  21. Quine W V O 1981 Theories and Things. Harvard University Press, Cambridge, MA
  22. Ramsey F P 1927 Facts and propositions. Proceedings of the Aristotelian Society, Supplementary Volume 7: 153–76
  23. Schlick M 1979 Form and content: an introduction to philosophical thinking. In: Mulder H L, van de Velde-Schlick B (eds.) Philosophical Papers. Reidel, Dordrecht, The Netherlands, Vol. II, pp. 285–387
  24. Sokal A 1996 Transgressing the boundaries: toward a transformative hermeneutics of quantum gravity. Social Text 46/47: 217–52
  25. Tarski A 1969 The concept of truth in formalized languages. In: Woodger J (ed.) Logic, Semantics, Metamathematics. Clarendon Press, Oxford, UK, pp. 152–268
  26. Tichý P 1974 On Popper’s definitions of verisimilitude. British Journal for the Philosophy of Science 25: 155–60
  27. Tichý P 1987 The Foundations of Frege’s Logic. de Gruyter, Berlin
  28. Watkins J W N 1984 Science and Scepticism. Princeton University Press, Princeton, NJ
