This article was published in Dialogue, vol. XXXII (1993), pp. 527-540.
I wish to thank two anonymous referees for Dialogue for their helpful comments.



Getting from P to Q: Valid Inferences and Heuristics [Note 1]

NORMAN SWARTZ Simon Fraser University


Epistemologists have known for two-and-a-half centuries that there are serious difficulties surrounding nondemonstrative inference. The best-known problem, 'the' problem of induction, was first diagnosed by Hume (1739) in the Treatise. In our own century, several more problems were added, e.g., by Hempel (1945) – the paradox of the ravens – and by Goodman (1955) – the 'new', or exacerbated, problem of induction. But an even greater blow lay ahead: within the decade after Goodman's problem appeared, Gettier (1963) was to publish his famous challenge to the traditional analysis of knowledge which, again, underscored how problematic inductive inferences are.

These much-discussed problems, having been thrust into the spotlight, have contributed to the belief that the role of demonstrative inference in the acquisition of knowledge is, in contrast, relatively free of problems: that if one comes to believe Q by having inferred it with deductive validity from a known proposition P, then Q may – without any particular concern – be regarded as being known as well.

What I will try to do here is to show that there is a problem with demonstrative inference which is every bit as grave as any of those historically associated with nondemonstrative inference. If I am right, then it may be that the perennial crisis in epistemology stems not from problems inherent in nondemonstrative inference but in the very concept of inference itself.

The concept of inference, in spite of its familiarity and commonness, remains one of the most recalcitrant in the philosopher's inventory. About inference, very little can be said which is simple and true. At the outset I would want to say that inferring is an activity that human beings (and likely many other organisms) perform. [Note 2] But even such a simple and direct claim – viz. that inferring is an activity – has been challenged.

D.G. Brown, attending to the peculiar grammar of the verb "infer", and noting that there is no (nonparadoxical) continuous tense, e.g. "I am inferring", argues that inferring is not an activity (Brown 1955b, pp. 351-5). Moreover, he argues elsewhere, inference is not an act or an action either:

For there to be action, there must be something one could decide not to do. ... When I know that p, from which it in fact follows that q, and I consider and understand the suggestion that q, either I see that it follows or I do not. ... I find that I infer or that I do not. (Brown 1955a, pp. 140-1).

The emphasis here is on the words "see" (meaning, presumably, "understand") and "find". One sees (not decides) that q follows; one finds that one has inferred, but one does not resolve or decide to infer.

Just how murky the notion of inference is may also be seen by noting that there is nothing within philosophy which might properly be called a theory of inference, let alone the 'standard' or received theory. [Note 3] And thus, after at least a century of debate, we find philosophers – like Todd and Sterelny (Sterelny 1990) – continuing to argue whether there is an inferential step between 'raw' sensation and 'conscious' perception. But how are we to settle such debates without having tried to fathom what sort of thing inferring amounts to?

Surely inferring is something that persons do. Yet Brown's worries are well-taken: inferring is not something we choose to do, nor is it something that seems to take much time (unlike [his example] arguing, or [my example] constructing a multistep proof in quantification theory). If we are to regard inferring as an activity, in some ways it is more like digesting a meal or maintaining a constant internal body temperature than it is like selecting among a variety of possible purchases by weighing up their relative merits, i.e. it is not something we choose to do. And inferring is more like believing that one has two arms than it is like reciting a poem, in that it is not something which seems to take any appreciable time. Nevertheless, in spite of the significant differences between paradigm cases of acting on the one hand and inferring on the other, we still need a term for characterizing persons' making of inferences. And so, since it is so well established, I will persist in calling inferences "acts" or "activities" (bearing in mind the stretching of these terms involved in so doing).

Allowing that inferences may, with some license, be regarded as acts, we can often in practice date such acts, saying, for example, that a person, Sylvia, yesterday at 2:03 pm, after observing that the mail box had not been emptied of its contents as scheduled some three hours earlier, had inferred that a threatened postal strike had occurred. And we may appraise such acts on logical grounds, saying for example, that Sylvia's inference was warranted by her data, more specifically, that it was inductively valid.

But logic texts, too, which usually have no concern with the activities of any specific human beings whatever, often invoke the concept of inference. The difference is that in logic texts the 'inferences' which are referred to are not acts; at least they are not the acts or activities of any particular person. If the inferences exhibited in the exercises in logic textbooks are to be regarded as acts at all, then they are 'depersonalized', idealized or abstracted acts: they are the sorts of things which might be performed by any person whatever (or, more exactly, by idealized persons – rational, adult, and intelligent).

My concern here has little to do with the kinds of inferences discussed in logic texts; my concern here is principally with those inferences which are bona fide cases of someone's (or something's) activity – not with inference in the abstract, but with inference in the flesh (literally inside the flesh).

A certain distortion, or at least a simplifying assumption, is required here. We shall pretend that acts of inference in some sense 'take' (move) one from one statement (or belief) to another. [Note 4] Such an account is incomplete. For it ignores the likelihood that we would want to broaden our analysis of inference to allow that one may make inferences not only from statements or beliefs, i.e. truth-value bearers, but from such things as concepts, open-statements, axiom-schemata, etc. I must gloss over such niceties simply in order to get on with the problem at hand: probing the role of inference in the acquisition of propositional knowledge. [Note 5]

Let's turn, then, to the role of inference in knowledge-acquisition. I will proceed dialectically, from a seemingly straightforward example through a succession of more problematic ones.

We begin by noting a need for an account of experiential knowledge which provides for the possibility of 'mixed-bag' inferences, inferences which proceed from premise-sets containing both a priori and a posteriori knowledge. For example, both intuitively and in the Kantian tradition, we will want to count the following as an instance of experiential knowledge.

[alexa] From her direct experience that there is a square in her visual field, and her a priori knowledge that all squares are regular polygons, Alexa infers – and thereby comes to know – that there is a regular polygon in her visual field.

Clearly Alexa's conclusion must be regarded as experiential knowledge: knowing that there is a regular polygon in one's visual field is quintessentially empirical knowledge, and being knowable empirically (i.e. being knowable only by experience) implies being knowable experientially. [Note 6]

To accommodate such cases, one will need a definition of "experiential knowledge" (at least roughly) along these lines:

" is knowable experientially" =df  

"It is possible to know either by direct appeal to (human) experience or by coming to believe as a result of a valid inference from known statements () one or more of which is known experientially." [Note 7]

The requirement that one's evidential base must be known is, I confess, too strong. In courts of law, for example, some of the evidence may be false, indeed the evidence produced may be inconsistent. And yet the preponderance of evidence may point to a particular verdict. Such cases suggest that inferential knowledge is possible even in cases of inconsistent data (evidence). I must let this point, too, pass. Happily, in recent years some other writers, e.g. Rescher (1976), have been exploring techniques for reasoning from inconsistent data.

There is nothing particularly novel or idiosyncratic about the offered definition. The problems I am about to review do not arise solely with the stated definition; they are present, if only indistinctly and implicitly, in virtually any such definition evolving out of similar philosophical intuitions about the nature of inferential knowledge.

At first, the proffered definition seems to work. At least it succeeds in properly categorizing the 'mixed-bag' case which prompted it. Specifically, it identifies Alexa's inferring that there is a regular polygon in her visual field as being a case of experiential knowledge. Unfortunately, not all is in order. Consider the following inference.

[bonny] Bonny comes to believe, as a result of her inferring from her experiential knowledge that it is raining, that π is a nonrepeating decimal.

One may protest that there is some sense in which this is a 'crazy' or 'bizarre' or even 'pathological' inference to make. But suppose Bonny makes it anyway. (Perhaps she is disposed to draw this particular conclusion from virtually any premise whatever, and in fact does so ten to twenty times a day, from a great variety of premises. No matter. She makes the inference, odd though it may be.) Now the trouble is that, according to the definition given of "knowable experientially", Bonny knows experientially that π is a nonrepeating decimal. This is so because (1) the inference proceeds from a statement (viz. that it is raining) known experientially, and (2) the inference is valid. It is valid because it satisfies one of the sufficient conditions for being a valid inference. In particular, the definition of "valid inference" has it that

An inference is valid if and only if it is either deductively valid or inductively valid.

The standard (semantic) definition of "deductive validity" states

An inference is deductively valid if and only if it is logically impossible for its premise-set to be true and its conclusion(s) false [i.e. ~◊(P & ~C)].
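In the truth-functional case this semantic condition is mechanical enough to check by brute force. The following sketch is my own illustration, not part of the article's formalism; since truth-functional logic cannot express "π is a nonrepeating decimal", a tautology stands in for the necessary conclusion:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Semantic test: valid iff NO assignment makes every premise
    true while making the conclusion false, i.e. ~(P & ~C) holds
    across all truth-value assignments."""
    for values in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Premise: "it is raining" (atom R).
# Conclusion: the tautology (Q or not-Q), a stand-in for any necessary truth.
raining = lambda env: env["R"]
tautology = lambda env: env["Q"] or not env["Q"]

print(valid([raining], tautology, ["R", "Q"]))  # True
```

Because the conclusion comes out true on every assignment, the test can never find a counterexample, which is precisely why Bonny's inference counts as 'valid' on the semantic definition regardless of what the premise says.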

Bonny's inference satisfies this standard (semantic) account of deductive validity. (We will look at proof-theoretic accounts of validity shortly.) Inasmuch as her conclusion is necessarily true, that conclusion cannot be false. Thus it is impossible for her inference to have a true premise and a false conclusion. Her inference is valid (in particular, deductively valid). And hence – according to the definition of "knowable experientially" – she has knowledge and that knowledge is experiential. But even worse is in store.

[cathy] Suppose Cathy infers, and comes to believe, from her experiential knowledge that the piano is out of tune the statement that every even number greater than two is the sum of two primes, i.e. she infers what is commonly known as the 'Goldbach Conjecture'.

If the Goldbach Conjecture is true, then, again – according to the definition offered of "experiential knowledge" – Cathy knows that it is true. [Note 8]

This is not to say that her inference constitutes a proof. To count as a proof within mathematics, each step of the inference must be stipulated and each step (ideally) must be accompanied by a warrant, i.e. by mentioning an accepted inference rule which sanctions that step. [Note 9]

Our concern, however, is not whether Cathy's inference is a proof. Our concern is whether Cathy knows the Goldbach Conjecture (to be true). According to our definition, she does (provided only that the Goldbach Conjecture is true).

But clearly Cathy does not know whether the Goldbach Conjecture is true. Even if it is true, she still does not know it. Inferring the Conjecture in one fell swoop from her knowledge that the piano is out of tune cannot, on any reasonable account of inferential knowledge, be regarded as a bona fide case of knowledge.

In general there is nothing, in principle, fallacious in inferences which proceed from experientially known truths (whether contingent or noncontingent) to necessary truths. Although many persons have thought that such inferences are always fallacious, their worries often stem from their confusing universality with necessity. The following argument may not constitute a mathematical proof; but as an argument conferring knowledge, by perfectly acceptable standards, it succeeds.

Statements of arithmetic, we know a priori, are necessarily true or necessarily false. If a statement of arithmetic is necessarily true (true in all possible worlds), then it is universally true (true throughout the actual world).

By examining the unions of sets of discrete physical objects, I have found that two objects plus two objects is equal to four objects. [Note 10] On the (justified) assumption that the statements of arithmetic apply to sets of physical objects (inasmuch as they apply to any and all sets whatsoever), I have come to know experientially that two plus two is equal to four. I have not learned the necessity of this latter statement experientially; but I have learned its truth in that way.

Clearly, there are other, wholly a priori, routes to the identical conclusion (see any of a great number of twentieth-century texts in the foundations of arithmetic). It is certainly not my purpose to argue that one must learn truths of arithmetic experientially, only that it is possible to learn some in this manner. Recall, too, that children in grammar school are taught arithmetic in precisely the fashion just described. The experiential route to arithmetical truths must have some positive merit. And there must be something wrong with philosophical arguments which would attempt to prove that the kind of learning which in fact occurs in elementary schools is logically impossible.

In short, there is nothing in principle barring one's knowing the truth of (not the necessity of) some necessary truths (e.g. simple ones of arithmetic) by inferring those truths from premises, one or more of which is known experientially. [Notes 11,12]

And yet, Cathy's inference to the Goldbach Conjecture from her knowledge that the piano is out of tune strikes us as not yielding knowledge even if we are prepared to allow the cogency of the latterly examined inference, viz. the one arising from observations about sets of physical objects to the conclusion that two plus two equals four.

Let's back up just a bit. Cathy's inference – from her experiential knowledge that the piano is out of tune to her conclusion that every even number greater than two is the sum of two primes – satisfies the definition for "experiential knowledge". And yet our strongest intuitions (I am assuming, of course, that your intuitions in this matter are in concert with my own) tell us that Cathy does not know her conclusion (is true). Her conclusion is not a case of experiential knowledge.

Where does the trouble arise? It arises, I would like to stress, not in any peculiarity in inferring necessary truths from contingent ones, but – again – in the very obscurity of the notion of inference itself. For notice that what is truly puzzling about Cathy's inference would have persisted even if her inference had proceeded from a set of statements known a priori. For the following inference, as well, produces the same troubling results.

[diane] Suppose Diane infers from her a priori knowledge that the diagonals of a square intersect in right angles, the statement that every even number greater than two is the sum of two primes.

Again, we would want to insist that such an inference does not confer knowledge. And yet, just like Cathy's inference, which proceeded from a premise-set known experientially, this latter inference apparently satisfies a sufficient condition for one's conclusion being known: the premise-set is known, and the inference is deductively valid.

Clearly, for an inference to be regarded as conferring knowledge it must do more than satisfy a semantic definition of correctness. Such semantic definitions attend only to the 'endpoints', as it were, of the inference, in particular to the modal (or probabilistic) features of the relation between premises and conclusion. Such definitions neglect the route by which the inferrer 'gets from' the premises to the conclusion. But how are we to make a repair? What sorts of features, beyond semantic (or probabilistic) ones, do we want to require for an inference to confer knowledge?

To be perfectly frank, I do not know the answer to this latter question. But what I can, and will, do is review a number of suggestions which do not work, and then offer some ideas as to where a solution might eventually lie.

I think the most tempting move to make is to insist that there be a causal connection between believing (or entertaining) the premises and subsequently coming to believe (or entertain) the conclusion. But the trouble with this approach is that it merely postpones, and does not solve, the problem. Ex hypothesi there is a causal connection between Bonny's knowing that it is raining and her subsequently coming to believe that π is a nonrepeating decimal; and likewise, between Cathy's knowing that the piano is out of tune and her subsequently believing that every even number greater than two is the sum of two primes. (What is being hypothesized is no more physically impossible than that the smell of chestnuts roasting may cause someone to recall a pleasant childhood Christmas.) The trouble is that in Bonny's and Cathy's cases, the causal connection is misbehaved: it ought not to have been made. But that it 'ought not to have existed' does not make an actual inference 'go away'. The judgment that an actual causal connection 'ought not to have existed' (ought not to have been made) introduces evaluative principles which are no part of a recounting of a causal sequence. These latter principles are logical or quasi-logical (methodological or epistemological) ones whose validity (or warrant) lies outside of psychology or neuro-physiology. [Note 13]

Perhaps the second most tempting move to make is to try to insist, then, on the kind of rigor displayed in logic textbooks, that is, to invoke a proof-theoretic concept of validity: "What we require for an inference to confer knowledge is that each step of the inference be sanctioned by a rule of inference."

There are a variety of ways to interpret this latter suggestion. The first is that the inference must accord with valid rules of inference; the second is that the rules of inference must be known to the inferrer.

Let's pursue the former suggestion for a moment. Which rules of inference should we suppose that the inference must accord with: rules of standard logic, or, perhaps, rules of a relevance logic? It might be supposed that the problems discussed above are unique to standard logic, being – as it were – the counterparts for inference of the well-known 'paradoxes' of (strict) implication, and are, therefore, eliminable by taking recourse to a relevance logic. But, as it turns out, it does not matter whether one is operating with a standard logic or a relevance logic. (We will examine the case of a relevance logic, but the argument is generalizable and obviously has a counterpart for the case of standard logic as well.)

In adopting a relevance logic, the number of such problematic cases may be (somewhat) reduced, but not to zero. For relevance logics have their own versions of these very problems. In a relevance logic, there will be a set of inference rules, IRL. Certain inferences will be sanctioned by these rules. Some of the 'proofs' in a relevance logic, just as in standard logic, will be very difficult, having many steps, and will try the best efforts of the finest logicians. Suppose one of the proofs is particularly renowned; we will call it "Θ". Θ begins with the premise set, {P1, ..., Pn}, and ends with the conclusion, C. Each step of the proof is sanctioned by an inference rule in IRL. When Θ is first published, the members of the College of Relevance Logicians are stunned: the proof is so difficult they marvel that any mere mortal could have constructed it.

[erica] However, Erica knows nothing of their wonderment, or even, for that matter, of the existence of Θ. Moreover, she has quite independently and coincidentally some good reason to believe {P1, ..., Pn}; and in a moment, in a split second, she infers – and comes to believe – C from {P1, ..., Pn}.

Does Erica, as a result, know C? It seems unreasonable to believe so. And yet her inference can – in principle – be shown to accord with IRL, and is thus valid (without our invoking the standard [semantic] definition of "validity"). The problem thus persists.

What about the latter suggestion? Can the difficulties be removed by insisting that the inferences are to be regarded as knowledge-conferring only if they are made in accord with inference rules known to the inferrer?

Such a suggestion has merely to pass one's lips before one sees fatal objections. Young children make inferences. And they often come to know (the truth of) statements as a result. [Note 14] If, however, we were to insist that only inferences which accord with inference rules known to the inferrer confer knowledge, then inferential knowledge would be impossible for children: they do not know any inference rules. [Note 15] It is counterproductive to insist that only inferences which accord with inference rules known to the inferrer are capable of conferring knowledge. Insisting on such a solution would, at a stroke, eliminate a great deal, if not most, of what we take ourselves to know. (Remember: the proposal that there simply be a valid rule sanctioning the inference is not a viable solution; it is in fact the very source of the problem.)

But if it is too strong a condition to demand that the inferrer himself/herself know the rule, might we relax the condition to be simply that the rule be known, by anyone at all?

This fourth suggestion is considerably weaker, and hence more plausible, but in the end no more acceptable.

Generations of mathematicians labored at arithmetic, coming to believe many truths which emerged as conclusions of proofs that were – by modern standards – of decidedly suspect rigor (again see note 9). All sorts of moves were made which nowadays seem questionable and in need of further justification. But does this mean that they did not know (the truth of) their conclusions? That verdict is too severe. In many cases, they did know. Their conclusions were not just 'lucky guesses'. Nor were they merely 'educated opinions'. [Note 16]

The same lesson may be learned outside of mathematics, and possibly with greater clarity. In our ordinary day to day living in the world, many of our inferences which seem to us perfectly compelling (and some of which are undoubtedly deductively valid) do not proceed in accord with any known inference rules. Many of the inferences which we make daily, in a flash, in a twinkling of an eye, are so complex that when we come to try to analyze them, they defy canonization; even less are they able to have their validity demonstrated by anyone's citing known inference rules with which they accord. A great many formal logic textbooks perpetrate and perpetuate a myth: that ordinary inference exhibits the logic of the examples (so carefully chosen) in the textbooks. No one who has taught informal logic – and experienced the frustration of students who crave rules but whose yearning cannot be satisfied by the instructor – can have the least doubt about this. We learn to make valid inferences, neither by learning rules nor by testing our inferences against the opinions of experts in rules – but by hard knocks and by apprenticeship.

Many of us have been beguiled by our collective success in creating logic textbooks for our students. Particularly since the 1930s, with the development of so-called 'Natural Deduction', many have come to believe that the formal arguments of these textbooks portray our psychological operations, if not quite perfectly, then at least normatively. But to the extent that we believe this, to that extent we have been playing at empirical science in a Rationalist manner, more specifically, we have been trying to do psychology in an a priori fashion.

What are human inferences really like? Having studied and taught for so long what we assume inference ought to be (viz. like the arguments of the textbooks), some of us have neglected to ask ourselves how persons actually reason from data (evidence / premises) to conclusions. How often do we ever stop to ask our students how they reasoned to a mistaken conclusion? Recently, I tried this exercise myself. In my introductory class, I asked my students to tell me briefly (1) whether they had ever before seen the question I was about to pose them; (2) what their first response was when they thought they had the answer; and (3) as best as they could, how they arrived at that answer or the reasons which prompted that answer. The question was adapted from one appearing in A.G. Latcha's How Do You Figure It?: Modern Mental Exercises in Logic and Reasoning (1970, p. 19). (Ironically, the puzzle as posed in Latcha's book is unwittingly subtly self-contradictory. [Note 17] When I posed the puzzle to my students, I repaired the question so that it was self-consistent.)

It takes twenty days for a certain tree to lose all of its leaves. On each day after the first day, the tree loses twice as many leaves as on the previous day. At some point, the tree is exactly half bare. When is that, i.e. when has the tree lost half its original number of leaves?

Eight students reported that they had seen the puzzle before. I have eliminated their responses from the tally (although not one of them got the right answer). The remaining 95 responses are categorized in Table 1.


  unclear or no answer               21
  "Too little information given"      8
  "Day 1"                             1
  "Day 4"                             3
  "Day 5"                             2
  "Day 6"                             1
  "Day 9"                             1
  "Day 10"                           11
  "Day 11"                            1
  "Day 12"                            1
  "Day 15"                            8
  "Day 19"                           37

– Table 1 –

Of the 66 students who offered an answer, 37 (i.e. 56%) offered the same (wrong) answer: "Day 19". No student, either among the ones listed or in the group whose answers were eliminated, gave the correct answer: "During the 20th day". [Note 18]  I had expected that no student would give the correct answer. But what had motivated the exercise was my wanting to examine the reasoning lying behind the wrong responses.

Typical explanations among those who gave the overwhelmingly most frequent answer ("day 19") read: "If the tree loses twice as many leaves per day, then it loses half as many each previous day. So if it loses all its leaves by day 20, it must have lost at least half the day before, day 19." Or, "If on the 20th day, all the leaves are gone, then one half of that number must have been lost the day before. Thus ½ of the leaves are lost by day 19."
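The arithmetic can be checked directly. A minimal sketch (the puzzle fixes no actual number of leaves, so I assume, purely for illustration, that one leaf is lost on the first day, making the daily losses 1, 2, 4, ..., 2^19):

```python
# Daily losses double: 1, 2, 4, ..., 2**19, so the tree sheds 2**20 - 1 leaves in all.
total = 2**20 - 1
lost = 0
for day in range(1, 21):
    lost += 2**(day - 1)       # leaves lost on this day
    if 2 * lost >= total:      # at least half the leaves are now gone
        break

print(day)                     # 20: the halfway point falls during the 20th day
print(2**19 - 1, total / 2)    # after day 19: 524287 lost, but half is 524287.5
```

After nineteen full days the cumulative loss, 2^19 − 1, still falls just short of half of 2^20 − 1; the tree crosses the halfway mark only partway through the final day, whatever the actual number of leaves.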

The reasoning is, of course, fallacious. But the wonder of it is that so many persons reason in the identical, fallacious manner. There are, I suppose, an infinite number of ways to reason fallaciously. But evidence such as this suggests that there are common patterns of inference – even in cases of fallacious reasoning.

To be sure, the evidence produced by this little lapse into empirical research is not the stuff articles are made of in professional journals in psychology or cognitive science. There is much to criticize in the design protocol, and I shall not pretend otherwise. My data are meant only to be illustrative and suggestive.

Data such as I have just reported fit well a theory that a number of cognitive scientists and researchers in artificial intelligence have been urging in recent years: that human beings reason, not so much in accord with valid inference rules of the sort identified and sanctioned by formal logicians, but rather in accord with perceived similarities with former cases and rules of thumb, so-called 'heuristics' (see e.g. Langley et al. 1987). The reasoning my students used to produce and justify their answer "day 19" is the sort of reasoning which can be used successfully in some (deceptively similar) cases. For example, that reasoning would work if applied to this sort of case: "A silo is filled with grain over a twenty-hour period. At the end of each hour after the first, the total amount of grain in the silo is twice that present at the end of the previous hour. When is the silo half filled?" The answer to this latter question is: "at the end of the 19th hour." [Note 19] I think it not hard to understand how one could see similarities between the two cases and fail to see relevant differences.
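The silo case really does behave as the students' heuristic predicts, as a parallel sketch shows (the capacity is hypothetical; only the ratios matter):

```python
full = 2**20              # hypothetical capacity of the silo
amount = full // 2**19    # grain present at the end of the first hour
for hour in range(1, 21):
    if amount >= full // 2:    # the silo is (at least) half full
        break
    amount *= 2                # the TOTAL doubles each subsequent hour

print(hour)  # 19: half full at the end of the 19th hour
```

The difference is that here the running total doubles each hour, whereas in the tree puzzle it is the increment that doubles; the halving-backwards heuristic is valid for the first structure but not the second.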

The crucial issue is this: "Suppose a person were to use in this latter case (filling the silo) much the same sort of reasoning used by the majority of persons in the previous instance (the defoliation of the tree). In this latter case, the person would arrive at the right answer. Would we, knowing nothing of her/his reasoning, think anything but that s/he knows the right answer?" I think we would be inclined to say that the person does know the answer. But if so, then knowledge can proceed from inferences drawn in accord with logically invalid heuristics. Even if we were to learn of the logically fallacious heuristic involved, inasmuch as that heuristic gives the right answer for this case, we would probably still, I think, want to say that the person does know the right answer.

If, and I want to underline that this is a big if, human reasoning does not typically proceed by taking account of valid rules of inference, but instead issues from our perceiving (real or apparent) similarities with previous cases, i.e. causally, using heuristics – rules of thumb of proven application to a variety of cases but not enjoying universal application or logical necessity – then we may, and perhaps should, want to rethink our analysis of 'inferential knowledge'.

It may well be that the conceptual ingredients needed to reconstruct our actual concept of knowledge have less to do with valid inference than we have hitherto supposed. We may need to broaden our historical, normative model of knowledge by asking ourselves both how persons actually go about acquiring knowledge and what we are prepared to recognize as being genuine cases of knowledge. Doubtless some philosophers are altogether too prone to answer this latter question a priori and with a normative fervor. It may well be that what we and, in particular, the vast majority of human beings who are not philosophers or persons trained in the niceties of formal logic, take to be knowledge includes a very great deal of that which results from inferences which proceed causally in accord with heuristics. (This is just a sketch. I need not share your heuristic in order to believe that your inference gives you knowledge. Perhaps I need only assume that you have some such heuristic.) But if anything like this is so, if, that is, the very concept (of knowledge) that we are trying to analyze allows for 'heuristic inference', then, as philosophers, we have gone pretty much as far as we can go. To give a proper analysis of the conditions under which someone can come to know Q as a result of having inferred it from P will require a conjoint effort of philosophy and cognitive psychology. Neither discipline is going to be able to produce this reconstruction on its own.

Notes

  1. This article is a revised version of an essay which originally appeared in a privately published festschrift for Raymond D. Bradley, Here & Now, edited by Raymond Jennings, Burnaby, B.C.: Simon Fraser University, 1991.

  2. I will leave aside questions whether machines, in particular computers, also perform (or could perform) acts of inference.

  3. Wittgenstein had asked "... what is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?" (§621). We can ask a parallel question about inference: "what is left over if I subtract the fact that I-come-to-believe-q-after-having-believed-p from the fact that I-infer-q-from-p?"

  4. Brown challenges this characterization as well (1955a, pp. 137-9). Again his criticism invokes the fact that inferring seems to take no time: one can stop halfway through a journey, but there is no stopping halfway through an inference. And again, I will grant Brown his insight, but persist with the standard terminology.

  5. Some philosophers, of course, eschew the theory that beliefs have objects. They would, thus, object that nothing whatsoever answers to the description I have just given, i.e. nothing which bears a truth-value is an object of belief. I will not attempt to rebut such objections here. Should one object to there being objects of belief, and object, further, to those objects being truth-valued (or at least eligible candidates for truth-valuedness), then s/he will surely want to object to most of what follows in my discussion of inference.

  6. Recall that a moment ago I suggested that construing inferences solely as relations between statements is overly narrow. As a matter of fact, the sentence just written suggests an inference proceeding, not from a statement to a statement, but from a concept to a concept, viz. from being knowable empirically to being knowable experientially.

  7. Notice that this definition is recursive: it invokes the very concept it is intended to define. Yet it is not circular, since it has as its so-called 'basis' a condition (viz. being knowable by direct appeal to [human] experience) which does not invoke the concept being defined. (An analogy may be apt: "x holds legal office" =df "x is elected directly by the people, or x is appointed by a person who holds legal office." For more on recursive definitions, see Recursive Definitions.)
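The structure of the analogy can be made concrete in code. The sketch below, in Python, is my own illustration (the function name and the appointment data are hypothetical, not the author's); it shows how the 'basis' clause keeps a recursive definition from being circular:

```python
def holds_legal_office(person, elected, appointed_by):
    """Recursive definition with a non-circular basis.

    Basis: anyone elected directly by the people holds legal office.
    Recursive clause: anyone appointed by a person who holds legal
    office also holds legal office.
    """
    if person in elected:                  # the 'basis' clause
        return True
    appointer = appointed_by.get(person)   # the recursive clause
    return appointer is not None and holds_legal_office(
        appointer, elected, appointed_by)

# A chain of appointments bottoming out in an elected official:
elected = {"governor"}
appointed_by = {"judge": "governor", "clerk": "judge"}
```

Because every chain of appointments must terminate in the basis clause, the definition is recursive without being circular.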

  8. None of the points I argue below depends on the actual example used. My using the Goldbach Conjecture is a mere convenience and is meant only to be illustrative of a particular infinite class of propositions, viz. the set of necessary truths currently unknown (Fermat's Last Theorem, etc.). If it should happen that the Goldbach Conjecture comes to be proven, one way or the other, then one need only substitute "" in place of "the Goldbach Conjecture", where "" is an arbitrary name for an unproven necessary truth. (Shortly below, again for convenience, I will drop the reminder about the proviso of Cathy's conclusion being true.)

  9. P.H. Nidditch has argued that proofs are – in fact – relatively rare within mathematics.
    [In the entire literature of mathematics], from Euclid to Bourbaki inclusive, there are scarcely any proofs in the logical sense. The number of original books or papers on mathematics in the course of the last 300 years is of the order of 10^6; in these, the number of even close approximations to really valid proofs is of the order of 10^1. Thus the chances of finding a mathematical publication that contains even one genuine proof are less than 1 in 10,000. ... Mathematics is continually asserted to be a deductive science. Yet, with the extremely rare exceptions mentioned above, there is no piece of deduction in extant mathematics. None of the so-called proofs forms a deductive chain. They are at best outlines of proofs, not themselves proofs. Many of the necessary details are omitted; but if a detail is necessary for (the validity of) a proof, then its absence involves the absence of a (valid) proof. (Nidditch 1957, pp. 1-2)

  10. Note that the evidential base pertains to logical operations on sets of actual material objects, not to physical operations – such as placing objects in spatiotemporal proximity. The results of physical operations may be, and sometimes are, vastly different from the results of logical operations.

  11. On one account, the inference just made – from experientially sampling the features of sets of actual physical objects – to a universal conclusion, would appear to be no more than inductively valid; but on the account under consideration – insofar as the conclusion is a necessary truth – that inference turns out to be deductively valid. I will let this curiosity pass.

  12. Another example may be useful:
    Efrem knows experientially that there are exactly nine planets. From this he infers (and thus comes to know) that there are fewer than fifteen or more than five planets.
    Although his conclusion is both necessary and knowable a priori (it is, one can see on careful examination, a disjunction of two subcontraries), Efrem may not know that it is necessary, and he certainly does not, when he has inferred it as just described, know it a priori. He does, however, know it (to be true): he knows it experientially.
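That the two disjuncts are subcontraries – both can be true, but they cannot both be false – can be checked mechanically. A minimal Python illustration (the function name is mine, purely for exposition):

```python
def efrems_conclusion(n):
    """True iff there are fewer than fifteen planets or more than five."""
    return n < 15 or n > 5

# 'n < 15' is false only when n >= 15, and 'n > 5' is false only when
# n <= 5; no count of planets falsifies both disjuncts at once, so the
# disjunction holds for every n (a spot check over 0..999):
assert all(efrems_conclusion(n) for n in range(1000))
```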

  13. Thirty years have passed since Gettier published his paper (1963). In that time, no one has succeeded in producing a causal theory of knowledge which successfully solves his problem. It seems unlikely, then, that a solution in terms of causal connections will solve the problem posed by Bonny's, Cathy's, and Diane's strange inferences.

  14. I, personally, happen to believe that if most of our inferences were not valid, we would soon end up dead. That most of our inferences are as a matter of fact (deductively or inductively) valid is a necessary condition for surviving in this world, which punishes with death those whose inferences are frequently fallacious. One cannot long 'get away' with inferring, e.g., that since it is safe to walk in front of a stationary bus, it is safe to walk in front of a fast-moving bus, or that since both doorways and windows are rectangular, the windows on the seventh floor of the library are as suitable for exits as are doorways. But this thesis is contentious, and is not essential to my wider program in this paper.

  15. Or, to be more precise, they may know how to use a rule, but may be totally unaware that they are following a rule. (I was in my thirties before I became consciously aware of one rule of English pronunciation I had been following most of my life: "An 'e' added to the three-letter sequence consonant-vowel-consonant will lengthen the vowel [e.g. 'mat' and 'mate'].")

  16. It is perhaps useful to remind ourselves that mathematical proofs very rarely produce unexpected or surprising results. Almost always proofs come about because a mathematician strongly believes some particular statement is true, and then s/he sets about to prove it. The belief in the truth of the statement usually provides the motive force for seeking the proof in the first place; it rarely is a causal consequence of having produced the proof.

  17. Latcha neglected to include the phrase, "after the first day", which I have inserted.

  18. The series at issue is of this sort: n, 2n, 4n, 8n, 16n, ... One can terminate this series at any point one likes, and one will find that the last term of the terminated series will always be greater than the sum of all the previous terms. Thus if one sums through all the members of the series up to, but not including, the last term, one will not reach one-half of the total sum.
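The claim about truncations of this series can be verified numerically; behind it is the identity 2^k > 2^0 + 2^1 + ... + 2^(k-1) = 2^k - 1. A small Python check, taking n = 1 purely for illustration (the function name is mine):

```python
def doubling_series(n, length):
    """The series n, 2n, 4n, 8n, 16n, ...: each term doubles the previous."""
    return [n * 2**i for i in range(length)]

# However the series is truncated, the final term exceeds the sum of
# all the earlier terms, so summing everything before the last term
# never reaches one-half of the total:
for k in range(2, 20):
    terms = doubling_series(1, k)
    assert terms[-1] > sum(terms[:-1])
    assert sum(terms[:-1]) < sum(terms) / 2
```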

  19. The relevant series looks like this: n, n, 2n, 4n, 8n, 16n, ... That is, each term n_i (for i > 1) in the series is equal to the sum of all the preceding terms. (Compare with the series described in note 18.)
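This second series can be checked the same way; in the sketch below (again with n = 1, and with names of my own choosing) each appended term is computed as the sum of everything before it:

```python
def summing_series(n, length):
    """The series n, n, 2n, 4n, 8n, ...: after the first term, each
    term equals the sum of all the preceding terms."""
    terms = [n]
    while len(terms) < length:
        terms.append(sum(terms))
    return terms

terms = summing_series(1, 7)   # [1, 1, 2, 4, 8, 16, 32]
for i in range(1, len(terms)):
    assert terms[i] == sum(terms[:i])
```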

References

Brown, D. G.
1955a. "Misconceptions of Inference." Analysis, 14 (June): 135-44.
 
________.
1955b. "The Nature of Inference." The Philosophical Review, 114, 3 (July): 351-69.
 
Gettier, Edmund.
1963. "Is Justified True Belief Knowledge?" Analysis, 23: 121-3.
 
Goodman, Nelson.
1955. Fact, Fiction and Forecast. Cambridge, MA: Harvard University Press.
 
Hempel, Carl G.
1945. "Studies in the Logic of Confirmation." Mind, 54: 1-26 and 97-121. Reprinted 1965 in Aspects of Scientific Explanation, New York: The Free Press, 3-51.
 
Hume, David.
1739. A Treatise of Human Nature. Reprinted 1888, edited by L. A. Selby-Bigge, London: Oxford University Press; reprinted 1960.
 
Langley, Pat, Herbert A. Simon, Gary L. Bradshaw, and Jan M. Zytkow.
1987. Scientific Discovery: Computational Explorations of the Creative Process. Cambridge, MA: MIT Press.
 
Latcha, Alfred G.
1970. How Do You Figure It?: Modern Mental Exercises in Logic and Reasoning. Cranbury, N.J.: A.S. Barnes and Co.
 
Nidditch, P.H.
1957. Introductory Formal Logic of Mathematics. London: University Tutorial Press.
 
Rescher, Nicholas.
1975. Plausible Reasoning. Assen, The Netherlands: Van Gorcum.
 
Sterelny, Kim.
1990. "Arguing with Don Todd." In Being and Somethingness: Essays in Honour of John Tietz, edited by Raymond E. Jennings. Burnaby, B.C.: Simon Fraser University, pp. 17-23.
 
Wittgenstein, Ludwig.
1953. Philosophical Investigations. Translated by G. E. M. Anscombe. New York: Macmillan.