This paper was originally published in the Festschrift
Vicinae Deviae: Essays in Honour of Raymond Earl Jennings,
ed. Martin Hahn, Burnaby, B.C., Simon Fraser University, 1993, pp. 125-141.
## The Obdurate Persistence of Rationalism


Rationalism is the thesis that there are some
contingencies [Note 1] which are knowable
by human beings without their having recourse
to experience. Put another way, Rationalism is the thesis that
there are some truths about the world, truths which logically
'could have been otherwise', i.e. whose supposed falsity would
not be self-contradictory, which can be known without one's
experiencing the world.
Few philosophers today subscribe to Rationalism; indeed most explicitly repudiate it. And by all rights, Rationalism ought to be defunct. It ought, here at the end of the Twentieth Century, to be no more seriously embraced than, for example, Aristotle's theory of natural place, Descartes' theory of vortices, or Skinner's theory of semantic competence. Indeed, most philosophers believe that at least one hundred years ago Rationalism pretty much joined this ever-expanding multitude of discarded theories. More particularly, most philosophers believe that Rationalism has been all but vanquished by empiricism, and that although isolated pockets of Rationalism may exist here and there, its total extinction is assured. "The scope of the a
priori is limited", it is confidently asserted, "strictly to the
noncontingent – to mathematics, to logic, and to conceptual
analysis; it cannot encompass anything contingent." And yet many
philosophers who steadfastly believe that they are themselves
diehard empiricists, on occasion, unwittingly persevere with
Rationalist theses.
The belief that Rationalism has, for all intents and purposes, vanished has lowered the vigilance of empiricists. So complacent have many empiricists become that they have lost their sensitivity to detect, and to expose, occasional rationalist arguments, both in themselves and in others. Rationalism persists, not in the declared allegiances of philosophers, but in some of their tacit presuppositions. I am neither psychologist nor sociologist. Nonetheless I believe that Rationalist tendencies are very strong in many of us. We so much want to make sense of the world. We so much crave that the world should comport with our beliefs about the world. And how very much easier this agreement is when it is a matter of the world adapting to our beliefs than it is of our beliefs adapting to the world: "Surely the world is not just a certain way; surely there are ways it just has to be."
The most refractory bastion of Rationalism has been Probability Theory. Right from the very time of its creation in the seventeenth century, Probability Theory has harbored a fraudulent alliance of the empirical and the a priori (just as there had been, until it was rooted out, in geometry). For example, Leibniz writes: "Probabilities are estimated …"

Ignoring the problematic assigning of a probability to a single event, we must applaud Leibniz, in this instance, for having got the philosophical principle exactly right. The probability of a child being a boy is determined solely and wholly by the relative frequency of male births among all births. (Modern statistics place the actual, late twentieth-century, relative frequency at 51.2% [B. Anderson, p. 31]. It is not known whether this figure fluctuates over a period of centuries.) In any event, Leibniz saw the point correctly: one cannot 'predict' the probability of a male birth a priori, any more than one can predict a priori the rate at which persons fall ill with hepatitis or the rate at which fluorescent lamps fail. Leibniz even highlights the point, we notice, by adding explicitly that the relative numbers of births, found experientially, are appropriate in this world.
And yet, in spite of Leibniz's having correctly seen the essential a posteriori character of assigning a probability to the birth of a male child, within the very same paragraph he invokes a priori reasoning in estimating the probability of another sort of event: "The art of conjecture is based on what is more or less facile, or else more or less feasible, for the latin …"

Many writers have criticized certain aspects of the reasoning at play in such explications of probability. Some critics, for example, immediately focus on the tacit assumption of the Principle of Equipossibility: that each of the possibilities is equally probable with the others. And they have gone on to point out that counting or individuating the
possibilities is problematic.[Note 2] Other writers
criticize a further underlying principle, the Principle of
Sufficient Reason: there must be some reason for two things to
differ; in the present case, there must be some reason for two
possibilities to occur at different frequencies: if there is no
such reason, then they must occur at the same frequency. (In
short, the Principle of Sufficient Reason grounds [or explains]
the Principle of Equipossibility.)
And thus we find Philip Wiener, for example, criticizing Leibniz for his reasoning: "Leibniz erred here in not considering …"

The sort of criticism Wiener levels is fairly standard, having been made by both earlier and subsequent writers. And yet it is one which has troubled some of those very writers. If the dice were, let us suppose, of two different colors, red and green, then we could enumerate the possibilities this way.

Ways (or situations) of throwing a twelve:
- Red die – six
- Green die – six
that is, there is one way, and one way alone.

Ways (or situations) of throwing an eleven:
- Red die – six
- Green die – five

or

- Red die – five
- Green die – six
that is, there are two different ways, and not, as Leibniz had it, merely one way of throwing an eleven: one die turns up five and the other turns up six.
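The two rival counts can be made vivid in a few lines of code (Python here, purely as illustration; nothing in the philosophical dispute depends on the language). Treating the dice as distinguishable, in Wiener's fashion, and enumerating ordered pairs yields two ways of throwing an eleven and one way of throwing a twelve:

```python
# Enumerate the throws of two *distinguishable* dice ("red" and "green")
# as ordered pairs -- Wiener's way of counting, not Leibniz's.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all 36 ordered pairs

ways_eleven = [o for o in outcomes if sum(o) == 11]
ways_twelve = [o for o in outcomes if sum(o) == 12]

print(ways_eleven)  # [(5, 6), (6, 5)] -- two ways
print(ways_twelve)  # [(6, 6)]         -- one way
```

Counting unordered outcomes instead, e.g. `len({tuple(sorted(o)) for o in ways_eleven})`, returns 1, which is Leibniz's answer. Nothing internal to the mathematics privileges one convention over the other.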
At this point, it would seem that the debate is poised to veer into a discussion of the metaphysical status of the Principle of the Identity of Indiscernibles.[Note 3] And, indeed, on occasion the discussion has taken just such a direction.[Note 4] But there is a quite different response which is often given at this juncture, and it is this latter response which principally concerns me. When the question seriously arises as to how we shall count (or individuate), not the probabilities concerned, but the possibilities, one often finds philosophers arguing – in defense of the claim that there are two ways to throw an eleven – by their citing the actual relative frequencies when one tries the experiment with real, as opposed to imaginary, dice: "When we throw pairs of dice a great many times, we discover experimentally that eleven turns up twice as often as twelve."

Such arguments are specious. They are not only circular, they thoroughly confuse and conflate the empirical and the a priori. They are akin to arguments which would allege to show that because no physical measurement of the ratio of the circumference and the diameter of any actual circle has ever been irrational, then π cannot be an irrational number. Most recipients of such an argument would immediately protest that the empirical cannot 'override' the a priori in this fashion; that what mathematics tells us of the nature of π cannot be subverted by physical measurement; that the π of mathematics is an irrational number, of an infinite number of decimal places, and may at best only be approximated by, or may even be considerably disparate from, a number obtained by the physical measurement of actual circles. But this same sensitivity about the 'loose fit' (if any!) between the mathematician's π and the engineer's (or physicist's) measurement of the ratio of circumferences to diameters of actual (physical) circles does not reliably carry over to a counterpart sensitivity about the difference between the mathematician's calculating of probabilities on the one hand and the statistician's or pollster's on the other. The belief has persisted since the seventeenth century, that somehow, in at least a significant number of cases, the world must comport with the mathematician's calculation of probabilities. And thus, more than 250 years after Leibniz's initial two-mindedness, we find some mathematicians persisting in it. Allan Anderson, in his textbook, From Set Through Function:
Elementary Mathematics for the Nonspecialist (1972),
presents students with this extraordinary exercise (p. 109)
[gloss added]:
State whether the probability of each of the following events is more likely to be determined on an empirical basis or on the basis of theoretical [read "a priori"] considerations: …

A thoroughgoing empiricism would demand that the answer be the same in all cases, viz. "empirical". But Anderson perpetuates the historical confusion. In his official 'answers' (p. 327), he gives "empirical" as the correct response only for questions 1 and 7; to questions 3, 5 and 9 he gives the answer "theoretical" [i.e. "a priori"].

Is mine an overzealous empiricism? "Do we really need to turn to experiment to know what is the probability of pulling a red ball from an urn containing three red and seven white balls? Can't we know a priori, just as Anderson alleges, even before, indeed without, conducting an experiment that the probability is 30%?" No, we cannot. Prior to experience it is utterly impossible to judge whether red and white balls 'mix randomly', whether our hands may have a natural proclivity (quite independent of our sight) to drift toward red balls, whether balls change colors when touched, etc. It is only by experience that we learn that certain sorts of events in this world behave as they are naively predicted, a priori, to do. That is, some of our guesses about probability-values turn out to be true. But that they are successful is sheer contingency. There is no a priori reason that they should be.

Linguistic sleights of hand are no better. Some persons try to argue, for example, that "a balanced, well-made coin must turn up heads 50% of the time; after all, this is what we mean in
saying that the coin is 'well-balanced'." One doesn't know
whether to regard such an argument as simply confused or, more
drastically, as desperate. Definitions are not going to
dictate phenomena. Let it be ceded that by definition a
'well-balanced' coin will turn up heads 50% of the time. Then
the question becomes: can one possibly know a priori that a
given, real, coin is 'well-balanced' (or, as it is sometimes
called, 'fair')? One can of course learn that it has been
carefully machined, that its substance is isotropic, that there
is precisely as much mass in the raised portion of its head as
there is in its tail, that that mass is symmetrically
distributed, etc. But none of this tells us whether the coin is
'well-balanced' in the sense of the definition just given. For
there is no contradiction in supposing that a coin having all the
physical properties just listed will, when flipped several times,
turn up heads, let us suppose, 38% of the time. Without
experience it is impossible to know how such a coin will behave,
whether it will turn up heads 10%, 27%, 50%, or 100% of the time,
indeed, for that matter, whether it will even turn up heads or
tails, since without experience, for all we know, it could turn
to dust, undergo mitosis, transmute into an apple seed, float in
midair, or simply go out of existence altogether.
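The epistemic situation with the coin can be put in a short simulation sketch (an illustration only; the 0.38 heads-frequency is the anomalous figure imagined above, installed by hand as the world's hidden parameter). The estimate tracks whatever frequency the 'world' in fact delivers; nothing in the coin's definition or machining fixes that parameter in advance:

```python
# A sketch of the epistemic situation: our only access to a real coin's
# heads-frequency is trial data. A hypothetical coin whose 'true'
# heads-probability is 0.38 (the anomalous figure imagined above) stands
# in for the world; nothing a priori rules such a coin out.
import random

random.seed(7)

def flip(p_heads=0.38):  # the world's hidden parameter -- an assumption
    return random.random() < p_heads

flips = 50_000
heads = sum(flip() for _ in range(flips))
print(heads / flips)  # the estimate tracks the world, not the definition
```

Had the hidden parameter been 0.50, or 0.10, the trials would have revealed that instead; only the trials reveal it.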
Appeals to such notions as "being well-balanced", "being fair", etc. capitalize, doubtless unwittingly, on an equivocation. For the sentence "A well-balanced (or fair) coin will turn up heads 50% of the time" can be used to express both a (definitional) necessary a priori truth as well as a (different) accidentally-true contingency knowable only a posteriori. A failure to spot the equivocation makes it seem as if a priori knowledge of the former truth is also knowledge of the latter. To be sure, a carefully machined coin behaving in the unexpected and peculiar manner latterly described (e.g. turning up heads 38% of the time) would be regarded as an anomaly. But the essential point is that such behaviour is nothing more than anomalous. In particular, such an oddity is not self-contradictory; it is nowise logically impossible. And, as Hume insisted, inasmuch as the description of a kind of event is not logically impossible, its occurrence cannot be known a priori.[Note 6]

"Even so, is it reasonable, is it rational, to posit anything but a 50% probability for heads for a well-made coin?" Well, was it reasonable, was it rational, to posit anything but an evenhandedness, i.e. an indifference between right and left (i.e. parity), for the spin of elementary particles? Expectations, even reasonable and warranted guesses, are not knowledge and are no substitute for empirical investigation. Parity, we have learned from Yang and Lee, does not hold. There is, of course, no contradiction in supposing that there is a preferred direction in the universe. But we, poor mortals, could not imagine any reason why there should be. And thus, by invoking the Principle of Sufficient Reason, we convinced ourselves in advance of experiment that there was none. But the universe fooled us. Nature, apparently, cared nothing for our metaphysical principles.
Without any 'reason' at all (or at least without any reason discovered to date), the universe does have a preferred direction, the universe perversely refuses to accord with our a priori reasoning.
While the universe does, contrary to our naive expectations, contrary to our a priori presumptions, have a preferred 'handedness', it seems to show no particular favoritism, indeed a virtually perfect indifference, towards either heads or tails in cases of flipping a coin, or towards any of the six faces in cases of tossing a die. But we should not read too much into this indifference. According to Leibniz, there is one way to 'throw an eleven' with a pair of dice: one die turns up five and the other six. According to Wiener and others there are two ways: (1) one die turns up five and the other six; or (2) one die turns up six and the other five. Who, finally, is right? How is this dispute to be resolved? Many philosophers – as we have seen – have argued that Leibniz was wrong, and that he was wrong is demonstrated, they say, by the fact that when we actually try the experiment eleven turns up twice as often as twelve, i.e. the world behaves as if the possible ways of throwing eleven were twice as many as
the possible ways of throwing a twelve.
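The empirical claim itself, that eleven turns up about twice as often as twelve, is easy to exhibit in simulation, with the caveat central to this paper: the simulation simply assumes that physical dice behave like a pseudo-random number generator, and only experience with actual dice could warrant that assumption. A minimal sketch:

```python
# Toss a simulated pair of dice many times and compare how often eleven
# and twelve turn up. The standing assumption -- exactly what cannot be
# certified a priori -- is that real, physical dice behave like this
# pseudo-random number generator.
import random

random.seed(0)
trials = 200_000
elevens = twelves = 0
for _ in range(trials):
    total = random.randint(1, 6) + random.randint(1, 6)
    if total == 11:
        elevens += 1
    elif total == 12:
        twelves += 1

print(elevens / trials)  # near 2/36, about 0.056
print(twelves / trials)  # near 1/36, about 0.028
```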
The qualification "as if" is crucial. On an a priori basis there is utterly no valid argument to favor one of these ways of counting the possibilities over any other way. (This is in fact the source of Bertrand's Paradox.) Simply put, any probability theory which would assign probabilities a priori is logically inconsistent.[Note 7] Now it so happens, that for certain a priori ways of counting possibilities, the predicted probability-values match fairly closely the observed frequencies in actual experimental data. But this 'fit' is purely fortuitous. The world just happens to be a certain way, a way which conveniently maps on to one of several different ways of counting possibilities a priori. Leibniz had the misfortune of coming up with a way of counting possibilities which did not map on to the observed relative frequencies. But his way of counting was no more or less 'mistaken' than Wiener's. Both ways are purely arbitrary. And both Leibniz and Wiener were equally mistaken in
believing that a priori calculations of probability could
ever substitute for calculations based on empirical observations
of actual phenomena.
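Bertrand's own chord problem shows how radical this underdetermination is. The sketch below (a Monte Carlo rendering for illustration, not Bertrand's presentation) selects a 'random chord' of the unit circle in three equally natural ways and asks, in each case, how often the chord exceeds √3, the side of the inscribed equilateral triangle; the three methods converge on three different answers, roughly 1/3, 1/2, and 1/4.

```python
# Bertrand's chord problem: three natural ways of choosing a chord of the
# unit circle 'at random' give three different probabilities that the
# chord exceeds sqrt(3), the side of the inscribed equilateral triangle.
import math
import random

random.seed(1)
N = 100_000
SIDE = math.sqrt(3)

def endpoints():  # chord via two uniform points on the circumference
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def radial_midpoint():  # chord via a uniform midpoint on a random radius
    r = random.uniform(0, 1)
    return 2 * math.sqrt(1 - r * r)

def area_midpoint():  # chord via a uniform midpoint inside the disk
    r = math.sqrt(random.uniform(0, 1))
    return 2 * math.sqrt(1 - r * r)

results = {}
for method in (endpoints, radial_midpoint, area_midpoint):
    p = sum(method() > SIDE for _ in range(N)) / N
    results[method.__name__] = p
    print(method.__name__, round(p, 3))  # roughly 1/3, 1/2, 1/4
```

Each method is a perfectly coherent a priori reading of "choose a chord at random"; nothing in the mathematics adjudicates among them.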
The world is such that the observed relative frequencies in cases of coin flippings, card deals, and dice tossing comport fairly well with one of several different a priori ways of enumerating possibilities. But to acknowledge this fact demands the realization that in addition the world fails to comport with other – equally arbitrary – a priori ways of enumerating possibilities. And it is a matter of sheer contingency which – of these different possible ways – the world actually is. Logically it need not have been the way it is. It is not that Leibniz got the numbers of possibilities wrong and that others have subsequently got them right. On an a priori basis there is utterly no way to adjudicate between ways of counting possibilities; still less is there any a priori way to assign probabilities to those possibilities, probabilities, that is, which mirror the actual observed relative frequencies. To believe that there is, is to succumb to Rationalism. Only empirical data reveal the probability of events; not our sense of seemliness or tidiness, and certainly not any Rationalist principles of Sufficient Reason or Equipossibility. Were it otherwise, there would be a 50% chance of your spontaneously incinerating as you read this final sentence, since there is both a possibility that you will and a possibility that you will not.

## Notes

1. i.e. contingent propositions or statements
2. Actually, to describe the difficulty as "problematic" is to understate considerably. In 1889, Joseph Louis François Bertrand (1822-1900), in his *Calcul des probabilités*, demonstrated that classical probability theory which appeals to the Principle of Equipossibility is logically inconsistent. And yet, that very theory is still being taught to hundreds, if not thousands, of students in high schools and universities as you read this sentence. One can only speculate why mathematicians persist in teaching a logically inconsistent theory. In contrast, it is difficult to imagine philosophy instructors teaching a logical calculus knowing full well that it is inconsistent. What might we say to our students: "Oh, forget about the inconsistencies; it doesn't matter if you get two contrary answers."?!
3. or, equivalently (and perhaps more perspicuously), the Discernibility of Non-identicals
4. E.g. see Rosen.
5. Be careful not to read the word "probabilities" here instead.
6. *How* do we know that there is no knowledge of this kind? Why, that is, should we subscribe to Hume's doctrine in this regard? Isn't that doctrine itself contentious and in need of support? Yes, I agree; Hume's claim does need separate, independent, support. But one cannot, particularly in a short paper, defend all of one's presuppositions. So permit me just to hint at what I think a defense might be. The answer to the question how we know the limits of a priori knowledge is itself something knowable only a posteriori, i.e. empirically. There is a priori knowledge; but what sorts of things may be known a priori, i.e. what categories of propositions may be known a priori, cannot itself be known a priori. There is no logical impossibility in supposing a person's coming to know a contingent truth without appeal to experience. It is just that there is no good evidence in this world that such a thing ever occurs. It is not logically impossible that it should occur; it just simply does not occur. Strangely, it is by experience that we come to learn that some truths, e.g. some of those of arithmetic and logic, can be known without experience.
7. Salmon has demonstrated that Carnap's twentieth-century attempt to construct a so-called theory of logical probability, in which the probabilities are assigned a priori, is not linguistically invariant, that its values vary according to the number of predicates admitted to the language, and hence – in effect – gives rise to inconsistent determinations.
## References

Anderson, Allan G., *From Set Through Function: Elementary Mathematics for the Nonspecialist*, Wadsworth, Belmont, Calif., 1972.

Anderson, Bonnie, and Judith P. Zinsser, *A History of Their Own: Women in Europe from Prehistory to the Present*, vol. 1, Harper & Row, New York, 1988.

Rosen, Robert, "The Gibbs' Paradox and the Distinguishability of Physical Systems", in *Philosophy of Science*, vol. 31, no. 3 (July 1964), pp. 232-236.

Wiener, Philip P., ed., *Leibniz Selections*, Charles Scribner's Sons, New York, 1951.