HANSON’S CLASS NOTES ON CHS. 11 AND 12:  FACTS, TRUTH, AND REALITY.

Facts.  A slogan version of the “correspondence theory of truth” is that a declarative sentence token or type, or statement or proposition or belief – in brief, a ‘truth-bearer’ – is true if and only if the content it expresses corresponds to the (typically extra-linguistic, extra-mental) facts.  Appeal to facts can occur in different philosophical contexts, but often it is parasitic on this correspondence-theoretic one, as in the causal theory of knowledge, to the effect that S knows that p (where ‘p’ stands for a declarative English sentence) just in case there is an appropriate causal connection between S’s belief that p and the fact that makes it true.

But what are facts, and what is their metaphysical status?  From what has been said so far, we may glean that facts are paradigmatically constituents of the extra-linguistic, extra-mental world that determine that truths are true and that falsehoods are false.  But should we suppose that there is some one kind of worldly entity that has this role, and is it a new basic metaphysical category of entity, or should it be considered a complex, non-basic kind of entity, built up, perhaps in different possible ways,  out of more basic kinds of entities – e.g., objects, properties, and relations --  which are its constituents?

The early Wittgenstein on facts.  In his Tractatus Logico-Philosophicus, which was later accepted as his Ph.D. thesis at Cambridge, where he had worked under Bertrand Russell, Wittgenstein (hereafter ‘W’) developed a version of the correspondence theory of truth that is sometimes referred to as the ‘picture theory of language’.  One place to begin characterizing W’s view is with his notion of ‘logical space’, which for him is an abstract conceptual space in which every coherent combination of possible states of affairs may be represented.  A state of affairs is just a particular possible combination of things.  W uses “things” here in a very generic way to include objects, properties and relations, and he also sometimes uses “names” equally generically, to refer to any expression that picks out a “thing”; so names can be nouns, verbs, adjectives, prepositions, etc.  The state of affairs of Jack loving Jill has as constituents Jack, Jill, and the relation of loving.  So does the state of affairs of Jill loving Jack.  But these are different states of affairs, because their constituent elements have been combined in different ways.

 A (simple) fact is then for W an existing state of affairs, that is, an existing combination of objects, properties, and relations, such as an object having a property, or two objects bearing a certain relation to each other.  Suppose that Jack loves Jill but that Jill does not love Jack.  Then it is a fact that Jack loves Jill, but it is not a fact that Jill loves Jack, because the latter state of affairs does not exist or obtain. 

W opens his book enigmatically with the claims that the world is everything that is the case, and that the world is the totality of facts, not of things.  What does he mean by this second claim?  Perhaps just that you wouldn’t understand the structure of the world simply by knowing what objects, properties, and relations it contained.  You would have to know the particular ways in which they were combined or unified into facts.  Some interpreters have suggested that for W facts were ontologically more basic than their constituent things, the latter being regarded as abstractions from facts.

W claims that we can ‘picture’ facts to ourselves using language, where a picture presents a situation in logical space and puts it forward as a model of reality.  ‘Picture’ here is either a metaphor for an ‘atomic’ sentence or thought, or is at least to be understood generically enough to encompass these as instances of pictures in the intended sense.  In such a picture, the things in reality being modeled have, corresponding to them, representational elements of the picture.  The picture per se is itself also a fact, says W; i.e., it too is an existing state of affairs.  (So, say, the existence of a particular declarative sentence token is a fact.)  If we call an atomic sentence’s or thought’s representational content its ‘sense’, then the agreement or disagreement of its sense with reality constitutes its truth or falsity.  W elaborates this idea in terms of the notion of ‘pictorial form’.  The picture’s pictorial form – its representational structure – depicts things as related to one another in a particular way.  The picture’s pictorial form must be isomorphic with the form or structure of a fact in the world in order for it to be a true representation.  This is sometimes referred to as ‘The Picture Theory of Language’.

An important limitation of such pictures for W is that they cannot depict their own pictorial form; they can only display it, or show it.  W wants to draw deep consequences from this latter point.  It means that philosophers can never coherently describe or articulate or give an account of such form; everything which is involved in the very idea of the expressiveness of language must remain incapable of being expressed in language.  But since according to W that was all philosophers were ever interested in doing, philosophers are reduced to silence.  Wittgenstein therefore regarded his own book (though he waits until the end of the book to say so) as a bunch of incoherent nonsense that can be used only as the expedient of a ladder by which one may climb out of philosophy.  And with that Wittgenstein in fact maintained professional silence for a number of years (until he had changed his mind about this).  In his introduction to the Tractatus, Russell remarks that “What causes some hesitation [in accepting Mr. Wittgenstein’s position] is the fact that, after all, Mr. Wittgenstein manages to say a good deal about what cannot be said, suggesting to the skeptical reader that there may be some loophole [in his arguments].”

Returning to the idea of facts, correspondence with which makes statements true: this becomes harder to articulate once we get past simple statements and consider logically complex ones like negations, universal generalizations, and so on.  Wittgenstein’s view was that there are no logically complex facts, only simple ones.  The world is the sum of facts, and each fact is simple and logically independent of the others.  The meanings of the logical operators are not some ‘logical objects’ which could be among the extra-linguistic, extra-mental constituents of the world.  As Garrett points out on p. 129, a difficulty with this idea of logical independence among simple states of affairs is that some such states of affairs do seem to exclude others: a ball being red all over precludes its being green all over.  So the structure of the world seems not to be exhaustively given simply by a list of all of its simple facts.  And in fact the existence of W’s ‘simple facts’ may seriously be in doubt, because if a ball’s being red is not a simple fact, what is?  But if there are no such simple facts, there are then no simple statements that express them.

Armstrong’s Truth-maker Theory.  David Armstrong has tried to articulate a robust version of the correspondence theory of truth that owes much to W, but without the same degree of dependence on the notion of fact, and without the general requirement of structural isomorphism between a truth and what makes it true.  Every truth about the world must still have a ‘truth-maker’ in the world, something in the world in virtue of which it is true.  Sometimes the truth-maker will be a fact; other times it can just be an object.  Sometimes the logical form of the truth will mirror the logical form of the truth-maker, but other times not.

Garrett gives the example of a sentence, “Bill is bald” that we are to suppose is true.  What is its truth maker?  It cannot just be Bill; it cannot just be the property of baldness, and it cannot be just the set whose two members are Bill and the property of baldness, and that is because a principle of truth maker theory is that the existence of the truth maker must necessitate the truth of the sentence, and none of these entities necessitates the truth of “Bill is bald”.  What else then could its truth maker be, Garrett asks, besides the fact of Bill’s being bald? And of course in this case there is the sort of structural isomorphism, between the representational content of the sentence and the fact, that W required for all atomic truths and their corresponding facts.  But Armstrong wants to lay stress on the point that structural isomorphism with a truth is not a general requirement for its truth-maker.  He takes Bill’s being bald to also be a truth maker (and perhaps the only truth maker) for the logically complex sentence “Bill is bald or Sam is an octogenarian”, a sentence which clearly does not have the same form as the fact.  In general Armstrong accepts the principle that if X is a truthmaker for P, and P entails R, then X is a truthmaker for R.  Finally, Armstrong takes Obama per se to be the truth maker for the sentence “Obama exists”, and Obama is not a fact at all but an object.

Garrett notes that truth-maker theory has problems with some negative truths and with general truths.  Beginning with negative truths, the problematic ones are those not entailed by some positive truth.  The negative truth, “It is not the case that Fred is thin”, is entailed by the positive truth “Fred is fat”, and so the fact that is the truth-maker for the latter can also be the truth-maker for the former, via the entailment principle mentioned in the previous paragraph.  But it seems that some negative truths are not entailed by any positive truth.  Garrett’s example is “It is not the case that there are unicorns”.  Note that it is not obviously entailed by a positive sentence like “Unicorns are creatures of legend, folklore and fairytale.”  Perhaps some legends, folklore and fairytales have a basis in fact.  On the other hand, if we qualify the sentence as follows: “Unicorns are purely creatures of fairytale”, it is unclear that this is not a tacitly negative sentence, no different from “It is not the case that there are unicorns”.  One radical solution to this, proposed by Stephen Mumford, is simply to deny that there are any negative truths, and therefore any need for negative facts to be their truth-makers.  This seems pretty implausible.  Garrett notes that we have the intuition that if it is false that there are unicorns, then it is true that it is false that there are unicorns.  According to truth-maker theory, truths require a truth-maker, and the truth-maker for “It is true that it is false that there are unicorns” would appear to be the negative fact of “There are unicorns” lacking a truth-maker!

The problem with general truths for truth-maker theory is that some general truths are intended to be lawlike, to project beyond actual cases.  “All A’s are B’s” might not have any actual counterexamples, in which case we would be tempted to take the generalization as equivalent to the conjunction of each of its positive instances.  But this leaves out the intended law-like character of the generalization.  It could just be a lucky accident that no A’s that were not B’s ever arose.  The only way that Armstrong has suggested for addressing this is, seemingly, just arbitrarily to rule out there being any more A’s to worry about.  And of course, not only does that seem to be an ad hoc move, it also seems to be a negative fact!

Correspondence Theories of Truth.   If we take the correspondence theory of truth to be the claim that a truth-bearer is true in virtue of what it expresses corresponding to some worldly entity, then we have, in our discussion of the views of Armstrong and of the early Wittgenstein, already seen that there are a number of largish problems facing a correspondence theorist: saying how various sorts of logically complex claims are true; saying how there can be any truly logically simple claims. 

What other corrective moves are left for the correspondence theorist?  We will consider three.  The first, which we will just make brief mention of here, is to collapse the correspondence into an identity, by making it the case that what is expressed just is the very worldly entity to which, on the previous reading, it was taken to correspond.  So, what makes “Sam is bald” true, the fact of Sam’s being bald, is none other than what that sentence expresses.  This is no mere isomorphism; it is identity.  And so what the sentence expresses is no mere conceptual abstraction, but the concrete fact of Sam’s being bald.  It’s a nice idea, and Russell ran with it for a time.  The trouble is that it does not solve the problem of negative truths.

A second idea is to drop the assumption built into classical correspondence theories that truth typically involves a relation between a truth-bearer and something extra-linguistic and extra-mental in the world.  Coherence, Verificationist, and Pragmatist theories of truth pursue this path in their respective ways.  We will return to explore these soon.  The third and final idea is to drop the idea that there is any metaphysical nature to truth at all.  Instead our notion of truth is simply a logical notion, on a par with negation, conjunction, disjunction, and if-then.  The truth predicate has a distinctive syncategorematic function in communication – worth spelling out, worth preserving.  But that is all.  In recent decades there have been a number of versions of this approach, which in general we may call “Deflationist” accounts of truth.  We will begin with these.

Deflationism.  The most common way of forming the name of an English sentence is to place it in quotation marks.  Let ‘p’ be a dummy variable standing in place of a particular declarative English sentence.  Then most deflationists subscribe to a “disquotational schema” of the form:

DS:  For all p, “p” is true if and only if p. 

Notice that on the left-hand side of the biconditional the sentence is mentioned, while on the right-hand side it is used.  So, for example, we have that “Sam is bald” is true if and only if Sam is bald.  Some may think that there is a hint of the old correspondence theory here, but they would be mistaken.  Nothing ‘substantive’ or ‘ontologically robust’ is intended at all.  The DS is taken to be tantamount to a logical truth.  So it is trivial, like all logical truths.  Correspondence theorists seem to think of truth as an explanatory notion.  For instance this is implicit in the claim that all truths must have a truth-maker.  Why should that be the case?  Because truth-makers are the reason, or explanation, of a truth-bearer’s being true, and therefore an explanation of why subscribing to that truth-bearer contributes successfully, ceteris paribus, to our practical deliberations and actions.  The converse does not hold: the fact that p is not explained by “p” being true.  This asymmetry of explanation is deliberately absent from DS, of course.  One deflationist idea is that the function of the concept of truth is simply to permit us to compose generalizations of a certain sort, that would otherwise require infinite conjunction, such as “What Einstein said was true” – when, for instance, you are not sure what it was he actually said, but are nevertheless confident that it was true in the sense that it satisfies DS in the direction from left to right.  Or consider uttering a conditional like “If whatever Jones reported is true then I will eat my shirt”, where you don’t know what Jones reported but have no confidence in his ability to get it right.  Another function of the notion of truth that has been cited by deflationists is as a device of ‘semantic ascent’.  Instead of just asserting p, one might instead want to assert it in the “metalanguage”, by saying “ ‘p’ is true”.  But why is that helpful?
Usually the metalanguage for a given object language contains that object language as a proper part.  But in the old way of doing logical formalism, inspired by Alfred Tarski, there had to be a metalanguage which would include the truth predicate for the object language in question.  This was because it was thought that the only and best way to avoid paradoxes of truth like the Liar Paradox was to push the truth predicate up into the metalanguage, so that one could not talk in the object language about the truth of the sentences of that language.  DS is best thought of as a metalinguistic principle.

But of course, English and other natural languages contain their own truth predicate.  And recently there has been a drive to explore the formal modelling of natural language in a way that incorporates a truth predicate into the object language while nevertheless avoiding contradiction.  Some of this work is quite technical, but it is also exciting.  Truth may be treated as an object language predicate, or even as an object language logical operator like conjunction or negation.  Let negation be represented by a tilde: ~.  Then we have that P is true iff ~P is false.  Let truth be represented by a bold-face capital T: T.  Then P is true iff TP.  The truth table for T is trivial of course.  This idea resonates with earlier forms of deflationism about truth, like F.P. Ramsey’s “redundancy theory” of truth: that to say “ ‘P’ is true”, where P is replaced by a declarative sentence, is to say nothing more or less than “P”.  It turns out that there is much to be learned about the nature of deductive validity and about the nature of truth from working out the intricate details of this approach.  Hartry Field’s 2008 volume Saving Truth from Paradox is an excellent overview, albeit technical, of many of the details and options.
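The triviality of T’s truth table, and Ramsey’s redundancy point, can be made vivid in a few lines of Python (purely illustrative; nothing here is in the deflationist literature itself): treated as a one-place sentential operator, T is just the identity function, unlike negation.

```python
# Illustrative sketch: the truth operator T as a one-place sentential
# operator.  Its truth table is "trivial" -- it is the identity
# function -- in contrast with negation.

def T(p: bool) -> bool:
    """Truth operator: 'it is true that P'."""
    return p

def neg(p: bool) -> bool:
    """Negation: '~P'."""
    return not p

for p in (True, False):
    # Ramsey-style redundancy: asserting T(P) is just asserting P.
    assert T(p) == p
    # The principle cited above: P is true iff ~P is false.
    assert T(p) == (neg(p) is False)
```

The loop checks both rows of the truth table, so the redundancy claim holds for every classical valuation.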

Garrett flags two problems for deflationism.  The first was simply that some people have the intuition that there is more to truth than this, that somehow it is an explanatory notion.  The second is that it is unclear how any treatment of vague terms in natural language can be consistent with DS.  Suppose Bill is borderline bald.  Then it may seem that it is neither straightforwardly true that Bill is bald nor straightforwardly false.  To acknowledge this reality we could introduce the value “IND” for indeterminate, and say that it is indeterminate whether Bill is bald.  So now we have three semantic values that can characterize a sentence: true, false, and indeterminate.  Clearly, if Bill’s being bald is indeterminate, then it is not the case that “Bill is bald” is true.  So the LHS of the biconditional of DS, “ ‘Bill is bald’ is true”, is false; while the RHS, Bill’s being bald, is indeterminate.  Yet DS requires them to have the same value.  These are again technical issues, but I am happy to report that there has been a flurry of interesting work on the semantics of vagueness, yet another indication that the deflationist theory of truth is a flourishing research program.  How about the remaining options mentioned above?
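One way to make the mismatch concrete is with a small Python sketch, assuming (as one standard option, not fixed by Garrett’s text) strong-Kleene three-valued tables: for a borderline case, the relevant instance of DS itself fails to come out true.

```python
# Illustrative sketch of the vagueness problem for DS, assuming
# strong-Kleene three-valued tables.  Semantic values: 'T' true,
# 'F' false, 'I' indeterminate.

T, F, I = 'T', 'F', 'I'

def kleene_iff(a, b):
    """Strong-Kleene biconditional: indeterminate if either side is."""
    if I in (a, b):
        return I
    return T if a == b else F

def lhs(v):
    """Classical value of the claim that 'Bill is bald' is true:
    true when the sentence's value is T, otherwise false."""
    return T if v == T else F

# The DS instance is true exactly when the sentence is not borderline:
for v in (T, F, I):
    instance = kleene_iff(lhs(v), v)
    assert (instance == T) == (v != I)
```

When Bill is borderline, the LHS gets value F and the RHS gets value I, so the biconditional itself comes out indeterminate rather than true.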

Coherence Theories of Truth.  Coherence theories of truth are a kind of metaphysical theory of truth, if only by denial.  What is denied is the relevance of a mind and language independent reality to a proposition’s truth status.  Often, but not always, coherence theorists of truth are metaphysical idealists.  A proposition is true provided it coheres with some designated body of beliefs.  Of course a lot depends on which body of beliefs.  It cannot be just my beliefs, or even my society’s shared beliefs, on pain of being too subjective.  God’s beliefs then?  Or how about the beliefs we would have at the ideal limit of scientific theorizing?  The coherence theory of epistemic justification has garnered more plausibility than the coherence theory of truth, because it seems plausible to many that coherence with one’s other beliefs can be a good marker for epistemic justification – provided that one’s notion of epistemic justification allows one to have a belief adequately justified for a knowledge claim which may turn out nevertheless to be false.   If justification good enough for knowledge requires absolute certainty, as Descartes thought, then justification implies truth, such that truth does not need to be stated as a separate condition on knowledge.  Just to draw this out a bit:  such justification was thought by Descartes to imply correspondence-theoretic truth.  What Descartes had in mind by such complete justification was ruling out every possible scenario on which the proposition in question would be false.  That is a high standard of justification, that tests the coherence of a belief with all of one’s other beliefs, including beliefs about possible conditions under which the belief could be false, compatibly with one’s evidence for it.  No human cognizer can meet that standard of coherence, but that does not mean that lesser standards of coherence are not salient to epistemic warrant.  
It is just that they will not normally be thought to imply the truth of the belief, unless, of course, one just decides that truth per se is coherence.  But surely that is just too easy.

Verificationism about truth was a view popular among scientifically minded philosophers starting in the mid 20th century, including the Vienna Circle.  The intent behind the doctrine was to rid the world once and for all of metaphysical and other nonsense.  The idea was that for a sentence to be a truth-bearer it had to first be meaningful, and in order for it to be meaningful we had to know at least in principle what observations or empirical tests would tell in favor of or against the truth of the sentence.  A.J. Ayer, a verificationist, liked to make fun of neo-Hegelian idealists.  Take one of their sentences, like “The Absolute is perfect”.  What empirical observation tells for or against this claim being true?  Cannot say?  Then it is meaningless nonsense.  A more diffuse yet related conception of truth is the pragmatic conception that a sentence is true only provided we find it useful in specific ways to accept it as such.  Others may think it useful to act as if (or to educate society into thinking that) a claim is true, even if we think it isn’t.  A pragmatist intuition is that such usefulness is indeed the mark of truth, so “as-if-ism” has it wrong.  There was also a school in the foundations of mathematics known as Intuitionism, which claimed that mathematical sentences were meaningful (and thus capable of being truth-bearers) only if we were in possession of a ‘constructive’ method of proof that would give us a clear answer one way or the other.  Constructive methods of proof exclude, for instance, certain uses of reductio ad absurdum argument that are required for some of the theorems of real number theory (also excluded is double negation elimination – the inference from the double negative to the positive), and so those theorems as yet had no truth-value.  Michael Dummett was a champion of such constructivism in mathematics.
What these views about truth all have in common is that they appear to deny that there is any fact of the matter, or any reality an sich beyond our cognitive or practical capabilities of determining it.

One way of bringing some focus to this issue is in terms of the notion of ‘epistemic blindspot’.  An epistemic blindspot is, or would be, a proposition that is true but that cannot be humanly known to be true, i.e., is ‘unknowable’, by anyone at any time.  Clearly true propositions not known at one time may be known at another, and true propositions not known by one agent may be known by another.  Equally, a true proposition may not be knowable at a time, by an agent, but be knowable at another time by the same or a different agent, if the means of knowing it is available at that other time for that agent.  A true proposition counts as an epistemic blindspot only if no one can ever know it to be true.

To highlight a contrast, let me also now introduce the notion of an “epistemic omission”, by which I will mean a proposition that is true but just happens not to be known by anyone at any time.  Notice that it does not follow from the definition of an epistemic omission that an epistemic omission is a blindspot.  Just because something isn’t ever known does not mean that it could not be known.  It should be embarrassing for an antirealist should it prove to be the case that the mere existence of epistemic omissions entailed the existence of epistemic blindspots.

Enter Garrett’s sketch of a famous little formal argument by the logician Frederic Fitch that shows, on minimal assumptions about knowledge and truth, that if there are any epistemic omissions, then there are epistemic blindspots as well.  All that needs to be assumed about truth is DS, above, the disquotational schema of deflationary views of truth.  No metaphysics there.  All that has to be assumed about the nature of knowledge is that knowledge implies truth, and that it distributes over conjunction, so that if it is known that (p and q), then it is known that p and it is known that q.  For purposes of formalization one must also grant that knowledge can be represented as a monadic operator, syntactically on a par with negation.  So it has a recursive structure.  Putting the knowledge operator ‘K’ in front of a sentence letter ‘p’ creates a new sentence schema ‘Kp’, which reads that it is known by someone at some time that p.  ‘KKp’ will read that it is known by someone at some time that it is known by someone at some time that p.  Fitch also assumes the classical logic of truth functions, assumes that logical theorems are necessarily true, and assumes that to say that something is not possible is simply to say that it is necessarily not the case.

How does the argument go?  Suppose p is an epistemic omission.  Then it can be shown that the proposition q, where q = (p & ~Kp), is an epistemic blindspot, i.e., is unknowable.  Suppose not; i.e., suppose as a reductio assumption that K(p & ~Kp).  The argument then proceeds as follows.

  1.  K(p&~Kp)                           Assumption (reductio)
  2. Kp & K~Kp                           from 1, distribution of knowledge over conjunction
  3. K~Kp                                     from 2, truth functional logic
  4. ~Kp                                        from  3, knowledge implies truth
  5. Kp                                           from 2, truth functional logic
  6. Kp &~Kp                              from 4,5 truth functional logic
  7.  ~K(p&~Kp)                         from 1, 6, truth functional logic (reductio)

Now since the reductio assumption has been discharged, the conclusion depends only on truth functional logic and the two formal logical properties of the knowledge operator.  Fitch thereby regards the conclusion as a theorem of epistemic logic, and therefore as logically necessary.  So we have that it is logically necessarily not the case that the conjunctive proposition (p & ~Kp) is known.  So q is unknowable.  So q is an epistemic blindspot.
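The reductio can even be checked mechanically.  The following Python sketch (an illustration, not Fitch’s own formalism) treats p, Kp, K~Kp, and K(p & ~Kp) as four booleans, imposes only factivity and distribution as constraints, and verifies by brute force that no assignment makes K(p & ~Kp) true, while epistemic omissions remain perfectly consistent.

```python
# Brute-force check of the Fitch reductio over four boolean atoms:
# p, Kp, K~Kp, and K(p & ~Kp), abbreviated Kconj.

from itertools import product

def consistent(p, Kp, KnKp, Kconj):
    # Factivity: whatever is known is true.
    if Kp and not p:
        return False
    if KnKp and Kp:                   # K~Kp implies ~Kp
        return False
    if Kconj and not (p and not Kp):  # K(p & ~Kp) implies p & ~Kp
        return False
    # Distribution of K over conjunction: K(p & ~Kp) implies Kp and K~Kp.
    if Kconj and not (Kp and KnKp):
        return False
    return True

models = [m for m in product([True, False], repeat=4) if consistent(*m)]

# No consistent assignment makes K(p & ~Kp) true: q is a blindspot.
assert all(not Kconj for (p, Kp, KnKp, Kconj) in models)

# But an epistemic omission (p true, Kp false) is consistent:
assert any(p and not Kp for (p, Kp, KnKp, Kconj) in models)
```

The check mirrors lines 1–6 of the proof: distribution forces Kp, while factivity applied to K~Kp forces ~Kp, so no model survives.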

From a formal standpoint, Fitch’s Theorem is compatible with the view that no non-conjunctive propositions are blindspots.  For all that has been shown, it may be the case, of each true atomic proposition, and of each true proposition expressing the claim that some atomic proposition is or is not known, that some human at some time has the means to know it.  But it does pose a challenge to blanket verificationist or anti-realist denials of blindspots.  And not to impute conscious motive, but surely it does take a special kind of hubris to be so convinced that all truths are – even in principle – knowable by us mere mortals, that we would accept some epistemically loaded notion of truth just to make it so.

Michael Dummett, of course, is willing to bite the bullet, and to reject parts of classical logic, such as the belief in the law of the excluded middle, reductio ad absurdum proof, and the like.  In his 1996 Gifford Lectures, in reference to the classical belief in the law of the excluded middle, he quips: “…classical logicians reason as if they were God; they are therefore guilty of overweening presumption” (Thought and Reality, OUP (2006), p. ix).  Not that Dummett has ever been guilty of that, this very statement being a case in point!  Dummett has never explained to us how to actually, in a practical way, carry out replacing classical logic as our best all-purpose deductive tool, with his intuitionistic logic, as evidenced, for instance, by his own use of classical logic in the metalanguage in which he describes his intuitionistic logic.  Classical logicians aren’t trying to play God.  They are just trying to be practical.


HANSON’S CLASS NOTES ON CHS. 8 AND 9: TIME

We can begin from the recognition that in our ordinary pre-theoretic talk about time, there are two different ways of ordering events in time, or even moments of time per se, which in the post-McTaggart philosophical literature have come to be known as “B-series concepts” and “A-series concepts”.  The B-series concepts are “earlier than”, “later than”, and “simultaneous with”.  The A-series concepts are “past”, “present”, and “future”.  We are to understand the B-series concepts as expressing certain temporal ordering relations among events (or, derivatively, times themselves), which are taken to hold independently of any temporal reference point.  A-series concepts, on the other hand, express temporal ordering relations among events or times relative to an indexically identified temporal location: ‘now’.  Events in the past happened earlier than now, events in the future will happen later than now, and events in the present are happening now.

The question arises how A-series and B-series ways of talking about time are related, whether they give us a coherent picture of time.  There have been two broad sorts of ways of arguing for their coherence: B-theories and A-theories. 

The B-theorist says that B-series concepts are primary, and A-series concepts are to be explained in terms of them.  For instance, the B-theorist may claim that for an event to have happened in the past is simply for it to be earlier than now, where “now” is an indexical that picks out the present moment at the time of utterance or thought.  Tensed talk is thus a dispensable artifact of natural language.  For something to have happened in the past means that it happens (tenselessly) before the moment simultaneous with this utterance, where the “happens” is to be construed as untensed, an instance of the ‘tenseless present’ such as we find in logic, when, e.g., we say that “John’s being human implies that he is mammalian”, where the truth of such a claim is taken to be independent of matters of tense.

The A-theorist on the other hand insists that A-series concepts are the primary ones.  While B-series concepts do express ordering relations among events, it is their relation to A-series concepts that makes them temporal concepts.  The relation of being later than isn’t just a transitive, asymmetric, irreflexive ordering relation among events.  There could be lots of those.  The relation of being later than is not characterized simply by its formal, logical properties.  It is a temporal relation, and what is temporal about it, according to the A-theorist, is that if e1 is earlier than e2, then, if e1 is in the present, e2 will be in the future, and if e2 is in the present, e1 was in the past, etc.  Tenselessness, according to the A-theorist, is a systematically misleading feature of artificial languages like 1st order logic.  The tenseless is to be explained in terms of the tensed, not the other way around.
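The B-theorist’s reduction can be put as a toy model (illustrative Python; the numerical “dates” and event names are my own assumptions, not anything in the text): B-series relations are tenseless comparisons of dates, and the A-series predicates are then defined from those relations plus an indexical ‘now’.

```python
# Toy model of the B-theoretic reduction of A-series talk.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    time: float  # a tenseless date

# B-series relations: ordering facts holding independently of any
# temporal reference point.
def earlier_than(e1, e2): return e1.time < e2.time
def later_than(e1, e2):   return e1.time > e2.time
def simultaneous(e1, e2): return e1.time == e2.time

# The B-theorist's proposed definitions of the A-series concepts,
# relative to an indexical 'now':
def past(e, now):    return e.time < now
def present(e, now): return e.time == now
def future(e, now):  return e.time > now

caesar  = Event("crossing the Rubicon", -49)
lecture = Event("today's lecture", 2024)

assert earlier_than(caesar, lecture)   # a B-series fact
assert past(caesar, now=2024)          # 'is past' = earlier than now
assert present(lecture, now=2024)
# The A-theorist's constraint from above: if e1 is earlier than e2
# and e1 is present, then e2 is future -- the model respects this.
assert future(lecture, now=caesar.time)
```

Note that the model makes the A-series predicates definable from the B-series ones; the A-theorist’s complaint is precisely that this gets the order of explanation backwards.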

A natural next question might be: who is right, the A-theorist or the B-theorist?  We will return to that question, but first we must look at a famous argument from McTaggart, the philosopher who introduced the distinction between A-series and B-series concepts, whose conclusion is that neither A-theorist nor B-theorist can be right, because time is not real.  What follows is a ‘rational reconstruction’ of McTaggart’s argument, in premise and conclusion form.  The starred premises are those on which the argument seems to pivot.

MCTAGGART’S ARGUMENT

*(1)        The existence of time presupposes the existence of change.

*(2)        Change involves dynamic variation in the way things are.

  (3)        The B-series is a fixed and unchanging framework.

*(4)        So if the B-series is to be regarded as a temporal ordering, it must presuppose the A-series, which does involve dynamic variation in the way things are: moving from being in the future to being in the present, to being in the past.

*(5)        But the A-series ascribes contradictory properties to the same events.

 (6)         Since the B-series presupposes the A-series, it inherits the same incoherence.

 (7)         But our ways of talking about time are exhausted by A-series and B-series talk.

 (8)         Therefore, time is unreal.

Let us consider the starred premises in reverse order.  Why does McTaggart think that the A-series is contradictory?  One line of interpretation would be that McTaggart thinks that being past, being present and being future are mutually exclusive properties, with the A-series requiring an event to have all three.  This seems to admit of an easy rejoinder: those properties are never had all at once, only one at a time.  For instance, a present event had the property of being future, and will have the property of being past.  But McTaggart asks: what does that tensed talk mean?  Here is his answer, for an event e and one of its properties P:

MCTAGGART’S ANALYSIS OF TENSE

·       e is (presently) P iff e is (in the tenseless sense) P at some moment of present time t1.

·       e was P iff e is (in the tenseless sense) P at some moment of past time t2.

·       e will be P iff e is (in the tenseless sense) P at some moment of future time t3.

But if the e of these conditions is the same dated particular (and isn’t that what events are?), then t1=t2=t3, and contradictory properties have now been ascribed to this time.  Ways of trying to avoid this result seem to lead to a vicious regress, e.g., of ‘hypertimes’, each successive level in the temporal hierarchy in effect inheriting a contradiction from the previous level, and so requiring yet another level to be postulated.
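McTaggart’s analysis and the ensuing dialectic can be given a toy rendering (the event, its date, and the range of ‘nows’ are invented for illustration; this is a sketch, not an endorsement of either side).  Relative to any one choice of ‘now’, the three A-determinations are mutually exclusive; across the series of nows, all three hold of the same dated event.  The B-theorist’s escape is the relativization to a now, and that relativization is exactly what McTaggart claims is covertly tensed.

```python
# A toy rendering of McTaggart's reduction of tense.  The event e is a
# dated particular: its date is fixed at t = 5 (an invented number).
EVENT_DATE = 5

def is_past(now):    return EVENT_DATE < now
def is_present(now): return EVENT_DATE == now
def is_future(now):  return EVENT_DATE > now

# Relative to any single 'now', exactly one A-determination holds:
assert all(sum([is_past(n), is_present(n), is_future(n)]) == 1 for n in range(10))

# But across the whole series of nows, all three hold of the same dated event:
print(any(is_past(n) for n in range(10)),
      any(is_present(n) for n in range(10)),
      any(is_future(n) for n in range(10)))  # True True True
```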

Well, maybe, or maybe not.  But for now let’s look for problems with McTaggart’s argument elsewhere.  What about *(4)?  If the B-theorist rejects the claim of *(4), she surely owes us an account of what is temporal about the B-series.  And furthermore, this challenge is independent of whether or not the B-theorist accepts *(1).  So again, let’s keep moving on to *(1) and *(2).  McTaggart, of course, thinks that *(1) is true because time itself is dynamic and changing: witness the “moving now”.  But of course that is precisely what a B-theorist rejects.  One of McTaggart’s students, Bertrand Russell, suggested that we think of change simply along the lines of an object, o, having some property P at t1, and not having it at some later time t2.  So there appears to be a way of thinking about change that doesn’t require it to involve anything ‘dynamic’.  What then about the alleged dynamism of time per se?  Along B-theoretic lines, Russell could simply say that a given moment of time has a fixed position in the B-series ordering of moments, and it is we, as we dynamically pass through time, who mark our changing relation to that fixed moment by changing the tense of our language.  So it seems that the most that can be said against McTaggart’s argument for the unreality of time is that it begs the question against the B-theorist.

So let’s return to the question of what time is like, assuming for now that it is real.  Here, then, is a brief taxonomy of B-theories and A-theories.

B-Theories:

·         4-dimensionalism:  all times are equally real, and already exist, in the same way that all places do.  The present moment is representable only as a point, and as such has no physical existence.  “The past” and “the future” are taken to be extended stretches of time, and as such exist; but which stretches they are is determinable only relative to a selected temporal point.

A Theories:

·         the ‘flashlight model’: time exists as a 4th dimension; but in addition, the present has a special ontological status, and its temporal location changes.

·         the growing universe:  both the past and the present exist with equal ontological status.  The future has yet to be determined.

·         the shrinking universe: the past is spent; only the present and (diminishing) future remain, and so still exist for us.

·         presentism: only the present (presumably of some thickness) exists.

From among these would-be options, Garrett singles out just 3: 4-dimensionalism, the growing universe, and presentism, and examines their pros and cons.  The following is an embellished review of his pros and cons, starting from the question: Why reject A-theories?  At the heart of this question is the issue of why we should not think that time flows, why we should not think that time, in itself, is dynamic and changing.  Why, that is, suppose that the B-series is all there is to time per se, that in some sense we are simply among the physical processes that flow through time?

Garrett offers the following objections to any version of A-theory:

  • If time flows, we should be able to say how fast it flows, but this is hard to do in any meaningful way.
  • The Special Theory of Relativity implies that there is no unique now.  What is happening now is relative to a frame of reference.  (STR also implies that there is a multiplicity of B-series, but this apparently does not seem so problematic.)  Note: STR implies that time is not absolute, but not that it is subjective; just that it is, in an objective way, relative to an inertial frame.
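The ‘no unique now’ point can be made concrete with a minimal Lorentz-transformation sketch (units with c = 1; the two events and the velocity are invented for illustration): two events simultaneous in one inertial frame receive different time coordinates in a frame moving relative to it.

```python
import math

def lorentz_t(t, x, v):
    """Time coordinate of the event (t, x) in a frame moving at velocity v (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Two events at t = 0 in the 'rest' frame, at different places:
t1 = lorentz_t(0.0, 0.0, v=0.6)  # event at the origin
t2 = lorentz_t(0.0, 1.0, v=0.6)  # event one unit of distance away

print(t1, t2, t1 == t2)  # in the moving frame the second event is earlier: no unique now
```

Note that this is relativity to an objective, physically specifiable frame, not any kind of subjectivity, which is just the point made above.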

[Hanson adds:  It may also be thought that the A-series is perspectival in an inappropriately subjective way in that it makes essential reference to our conscious experience of time.  This is an issue to which we shall return below, when we discuss Hawking.]

Here are Garrett’s objections to Presentism in particular:

  • If only the present exists, what are the ‘truth-makers’ for tensed claims, for claims about what one remembers, and for causal claims in which the cause and effect are at different times?  [Of this assortment the last one seems the most serious.  How are we to make sense of such physical causation apart from the existence of some physical continuity linking the cause and the effect; but how can it link them when they never exist together?] 

Garrett’s objection to the Growing Universe (‘GU’) View:

  • GU implies that past people are ontologically on a par with us.  But people in 1900 experience 1900 as present, whereas we take it to be past.  But then who says that it isn’t, e.g., 4000 AD, and we are in fact living in the past?  So our ‘now’, the one we are experiencing, is in the past.  GU does not have the resources to rule out this seeming incoherence.

Objections to 4-dimensionalism:

  • We have ‘temporally biased’ attitudes that seem to be perfectly rational.  How can 4-dimensionalism account for these (cf. Garrett, pp. 101-103)? [Hanson:  How about this?  Our rationality is a product of natural selection, and there was a selective advantage to having temporally biased attitudes keyed to our passage through time – i.e., not time’s passage.  But note that even if that were the case it would not mean that it was to be spelled out the way B-theorists want to do, in terms of a semantic reduction of such talk to dated untensed propositions.  And in any case, talk of our passage through time still seems to presuppose change: we are moving through time, though ‘the now’ is not moving.]
  • We can conceive of circumstances and observational data that would make it rational to posit the existence of stretches of time in which there occurred no changes in the physical world -- we leave aside here the A-theorist’s purely temporal changes, and focus on the others.  How can a 4-dimensionalist account for the latter, given that time for him is given simply by the fixed B-series of events, i.e., the series of changes in the physical world?  The envisaged circumstances in question are given by Shoemaker’s world (see Garrett, p. 105).  As Garrett points out, to make any causal sense of Shoemaker’s world, the B-theorist might have to countenance the possibility of direct causation, not mediated by a causal chain, where the cause occurred before the period of time without change and the effect occurred after that period – i.e., action at a temporal distance.  But why not, if we are already imagining such outré things as changeless periods of the universe?  Garrett also considers invoking Kim’s view of events to make sense of Shoemaker’s world.  According to Kim, an event is simply an object’s having a property at a time.  Events per se, then, do not have any change built into them.  If we think of change as “Russell change” (see above, and Garrett, p. 93), then it has to involve two times, at the second of which an object has a different property than it had at the first.  So there will be lots of Kim events taking place throughout the period: e.g., object o having property P at time t1, at time t2, at t3, etc.; but no change.  This allows the possibility of time without change, the Kim events marking the passage of time.  But we are left with something new to explain: what causes this series of events, when normally there would have been a very different series – namely, the series that we are supposing resumes, a year later, at the moment the ‘freeze’ ends?
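Kim events and ‘Russell change’ can be sketched in miniature (the object, its single property, and the five times are invented for illustration).  A frozen period contains a Kim event at every moment but no Russell change, since the object never differs in its properties between two times:

```python
# o's properties at each of the times t = 1..5: always just P (the 'freeze').
history = {t: {"P"} for t in range(1, 6)}

def russell_change(history):
    """Russell change: the object's properties differ between two successive times."""
    times = sorted(history)
    return any(history[a] != history[b] for a, b in zip(times, times[1:]))

# One Kim event per (time, property-had-at-that-time) pair:
kim_events = [(t, p) for t in sorted(history) for p in history[t]]

print(len(kim_events), russell_change(history))   # 5 Kim events, but no change
print(russell_change({1: {"P"}, 2: set()}))       # losing P would be a Russell change
```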
Finally, Garrett suggests that the only remaining option for the B-theorist may be to embrace moments of time.  There will then be passage of time through the changeless period because of these moments, and not because of any (additional) events.  But this would seem pretty ad hoc.  What justification is there for this assumption? Up until now the B-theorist has said that the B-series is constituted by the ordered series of events, and now he says that it is by an ordered series of moments.  Why do we need the events now?  The B-theorist will have dramatically altered the metaphysics of the B-series, and the ramifications of this will need to be studied.

Time Travel and Our Subjective Experience of Time

On a certain mundane construal, the possibility, indeed the actuality of ‘time travel’ is a presupposition of 4-dimensionalism.  The construal in question is the one whereby we and indeed all physical systems are moving through time in the direction of the future.  Not much to interest Hollywood there.  There is even less to interest Hollywood in Presentism, which appears to rule out time travel – as there is no other time, besides the present, to travel to.  As for the Growing Universe View (‘GU’), travel to the future is ruled out, because it doesn’t exist.  Garrett suggests in addition that travel to the past is incoherent on GU, because you would have had to come from the future, which doesn’t exist.  That is unclear, however.  If I get into my time machine in the present and travel into the past, there is no part of my path that takes me into the future.  It is true that my current present is in the future of the past to which I am going, but of course some of the future of that past, the part I am travelling through, has already occurred and has become part of the past, and so exists according to GU.  So GU seems coherent enough, and may have some limited Hollywood appeal, but not as much as 4-dimensionalism!

The time travel of interest to us here is not the mundane actuality of time travel presupposed by 4-dimensionalism, but, e.g., moving quickly from the present into the relatively distant future, or moving from the present into the past.  Or how about my personal favorite: remaining temporally stationary, bucking our tendency to flow into the future just enough to stay at the same temporal location for a while?  (For those with physics background: try modeling that with a Minkowski space-time diagram!)

Much of Garrett’s discussion of time travel follows David Lewis’s 1976 paper, “The Paradoxes of Time Travel”.  (For those interested, I can also recommend another source: ch. 7 of Asymmetries in Time, by Paul Horwich, MIT Press, 1987.)  Lewis explains the idea of ‘time travel’ in terms of a distinction between personal time and external time.  External time is just time itself, spread out as the 4th dimension.  Personal time is the time tracked and measured on the wristwatch one is wearing.  As Garrett notes, the kind of travel into the future of interest here requires a difference in duration between personal and external time, whereas travel into the past does not: one could take an hour to travel an hour into the past.  However, the prospect of travel into the past is the most fascinating to us, perhaps because it is fraught with the threat of paradox.

It is important to note that Lewis’s notion of ‘personal time’ is completely objective, as measured by the watch one is wearing, and also by how much one’s body has aged.  It is to be distinguished from any subjective sense of time, which involves conscious awareness.  We will talk more about this below.

Travel into the future, according to the Special Theory of Relativity, would be accomplished simply by moving quickly, thus slowing down one’s aging process and any clocks that are accompanying one.  A woman on a spacecraft travelling roughly 1/8 of 1% less than the speed of light, leaving Earth and returning 20 years later, will have aged only about a year (as also indicated by the passage of time on the clock on board).  Time in STR is not, of course, absolute; it is relative to a frame of reference.  The woman travels for a year by her own clock, to get to Earth 20 years later than the beginning of the trip.  By the time, one year hence in her frame of reference, that she gets to the future, the future is actual, because 20 years have passed, relative to the frame of reference of Earth.  (For a nice presentation of the phenomenon and the data that support it, see Dennis Sciama’s “Time Paradoxes and Relativity,” in The Nature of Time, Raymond Flood and Michael Lockwood (eds.), Basil Blackwell (Oxford, 1986).)
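As a back-of-the-envelope check on these figures (a sketch in units with c = 1): ageing one year while twenty Earth-years elapse requires a Lorentz factor of 20, which works out to a speed about an eighth of one percent short of the speed of light.

```python
import math

def gamma(v):
    """Lorentz factor for speed v, in units with c = 1."""
    return 1.0 / math.sqrt(1.0 - v * v)

required_gamma = 20.0 / 1.0                   # 20 Earth-years per 1 year of personal time
v = math.sqrt(1.0 - 1.0 / required_gamma**2)  # speed giving that Lorentz factor
shortfall = (1.0 - v) * 100                   # percent of c below light speed

print(round(v, 6), round(shortfall, 3))  # 0.998749 0.125
```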

Horwich (op. cit.) defends the possibility, at least in principle, of travel to the past (though not through the local past), on the supposition that our universe might conform to the famous General Theory of Relativity cylindrical models discovered by Godel, in which one could travel to the past in a very circuitous way, by moving at all times in a locally forward temporal direction.

Unlike Godel time travel, travel into (through) the past presupposes the coherence of temporally backwards causation; but we previously encountered accounts of causation that do not rule it out.  It also requires that one not be able to change the past by travelling to it; that would be incoherent.  And one can emphatically not go back and, e.g., kill one’s father when he was a baby.  That may still leave open at least the coherence of various ‘causal loops’ of the sort described by Lewis and reported by Garrett (p. 109), even if the loops themselves do not have a cause.

Thinking about time travel emphasizes the extent to which our pre-theoretical notions of time are tied up with our sense of the subjective, conscious experience of time.  Stephen Hawking has argued that ‘the psychological arrow of time’ must be the same as the ‘thermodynamic arrow’, and that this has dramatic implications for the phenomenology of at least some types of time travel into the past: those that involve travelling ‘upstream’ against the flow of entropy.  According to Hawking, humans (somehow) living in a universe in which disorder decreased with time, in which, e.g., broken cups put themselves back together and flew up onto a table, would have a ‘backwards’ psychological arrow: they would remember events in their future, not in their past.  When the cup was broken, they would remember it (already) being up on the table; but when it was up on the table they would not remember it being broken (S. Hawking, A Brief History of Time, Bantam Books, 1988 pp. 146-147). I quote his elegant argument for this co-directionality:

It is rather difficult to talk about human memory because we don’t know how the brain works in detail.  We do, however, know all about how computer memories work.  I shall therefore discuss the psychological arrow of time for computers.  I think that it is reasonable to assume that the arrow for computers is the same as that for humans.  If it were not, one could make a killing on the stock exchange by having a computer that would remember tomorrow’s prices!  A computer memory is basically a device containing elements that can exist in either of two states.  A simple example is an abacus.  In its simplest form, this consists of a number of wires; on each wire is a bead that can be put in one of two positions.  Before an item is recorded in a computer’s memory, the memory is in a disordered state with equal probabilities for the two possible states. (The abacus beads are scattered randomly on the wires of the abacus.)  After the memory interacts with the system to be remembered, it will definitely be in one state or the other, according to the state of the system.  (Each abacus bead will be at either the left or the right of the abacus wire.)  So the memory has passed from a disordered state to an ordered one.  However, in order to make sure that the memory is in the right state, it is necessary to use a certain amount of energy (to move the beads or to power the computer, for example).  This energy is dissipated as heat, and increases the amount of disorder in the universe.  One can show that this increase in disorder is always greater than the increase in the order of the memory itself.  Thus the heat expelled by the computer’s cooling fan means that when a computer records an item in memory, the total amount of disorder in the universe still goes up.  The direction of time in which a computer remembers the past is the same as that in which disorder increases.
Our subjective sense of the direction of time, the psychological arrow of time, is therefore determined within our brain by the thermodynamic arrow of time.  Just as a computer, we must remember things in the order in which entropy increases.  This makes the second law of thermodynamics almost trivial.  Disorder increases with time because we measure time in the direction in which disorder increases.  You can’t have a safer bet than that!

Note first that Hawking does not say that observers of a reassembling cup would experience their memories as, or under the description, ‘memories of the future’; merely that what would be remembered would be, in fact, in the objective future.  It seems that if one were undergoing such a process, one would not be able, by observation alone, to recognize it as such.  Verificationists might take comfort from that, condemning the would-be possibility to meaninglessness.  Another possible reaction is the idealistic, neo-Kantian one: that what we mean by ‘past’ and ‘future’ is taken from consciousness, so that it is stipulative that the past is what we remember.  And this does have at least some intuitive purchase.  Imagine a temporally local reversing phenomenon.  Suppose that we saw the pieces of a broken coffee cup rise up and reassemble on the table from which we had just seen them fall.  Would we seriously entertain the hypothesis that we had just witnessed a ‘local’ reversal of before and after?  It can seem unlikely.

As a counterpoint both to verificationism and neo-Kantian idealism, let me then put forward for consideration a kind of representational realism about time, according to which time is real and objective, and, contra Hawking, the psychological and thermodynamic arrows could sometimes point in opposite directions after all.  Our brains are a kind of computational device, but they may have representational capacities, like consciousness, whose representational arrow can diverge from the thermodynamic arrow.  A defining characteristic, many people think, for a capacity to count as representational at all, is that it admit of the possibility of error, or misrepresentation.  Consciousness in particular may involve the capacity for misrepresenting the objective temporal direction of its own manifestations.  If one is time travelling into the objective past, one’s conscious experience of time may nevertheless be pointing towards the subjective future.

Our subjective sense of time is only in part, and at one remove, an internal representation of the objective physical temporal order of external goings-on.  In the first instance, or most immediately, it projects temporal order and duration of our subjective, internal goings-on.  Or rather it constitutes it, for in this role it is surely stipulative.  No temporal reverses are to be allowed in the subjective, stream-of-consciousness ordering of one’s own experiences.  This would amount to bad bookkeeping and thereby undermine the role of our subjective sense of time as an internal benchmark for the representation of objective temporal order.  It would amount to a form of pathological breakdown.  (This is not to preclude our misremembering the order of our conscious experiences.)  This is surely presupposed in any coherent rendering of the possibility of conscious time travel.  So our subjective sense of time is anti-symmetric.

It is ultimately only because our subjective internal goings-on include our experiences of external goings-on, that our subjective sense of time can, in the second instance, also represent their objective temporality.  We postulate objective temporality as whatever it is externally about things that constrains the temporal order of our experiences of them.  This duplicity of role is the hand that natural selection has dealt us.  It seems to have worked out rather well within internal and external limits: the internal limits imposed by our brain’s discriminatory capacities, and the external limits imposed by our local spatiotemporal environment unfiltered through the lens of theory.  We take ourselves to be justified in inferring, for the most part, a pleasing isomorphism or structural agreement between objective and subjective temporal order.  Our cosmologically ‘local’ time, at least, (i.e., time as we experience it here on Earth) seems to be an objective counterpart of our subjective temporal orderings.  No one, so far as we know, has experienced a temporal loop, or a reversing fluctuation of the objective temporal order.  Nor are those an intuitive extrapolation from our experience.   As a result, our naïve conception of cosmologically ‘global’ time has been the more or less simplest, most direct, extrapolation from our experiences of local time.  Now it may be that when the physics of global space-time is taken into account, there are ways for this nice isomorphism to break down.  Travel to the past may be possible.  But to repeat, even such a possibility may make sense only against a backdrop of anti-symmetric subjective temporal ordering.

What about travel to the distant past, then?  Is it possible without flowing against the current of entropy?  Let us try to imagine the circumstances.  Time travel into the past seems to rule out causal theories of time of the sort that declare that an event e1’s being later than an event e2 is constituted by there being a possible causal path from e2 to e1.  But this seems unacceptable as it stands anyway, to the extent that it treats time as absolute.  So consider instead a version of this relativized to inertial frames.  Could we not say that relative to such a frame e2’s being later than e1 is constituted by there being a causal path definable in that frame from e1 to e2?  The theoretical home of inertial frames is of course STR.  But we would like to appeal to them here in the context of a GTR Godelian global space-time structure.  Imagine someone climbing into a powerful rocket ship and living long enough (a million years??) to travel via a Godelian time loop to the past.  What does a clock on board say?  It surely measures the passage of some positive amount of time, at a rate depending on the speed of travel – which, recall, is at every point directed into the local future.  So the time traveller gets to what is in the distant past from her time of departure, relative to her initial inertial frame, by getting to successive future points relative to each successive inertial frame that she occupies (say that she has to accelerate and decelerate), without thereby bucking the flow of local time or of entropy at any point during her trip.

If this is coherent, a consequence seems to be that we entirely avoid having to treat the earlier-than relation as symmetrical, at least relative to the experience of the time traveller.  The traditional distinction between past, present, and future, again relativized to the traveler’s inertial frame, or perhaps to the sequence of inertial frames occupied by the time traveler, can remain intact.  And a pleasing isomorphism is preserved between the subjective temporal order of the experiences of the time traveler and the objective order of local time relative to the traveler’s inertial frames.  All this while, from a global perspective, time is symmetric, containing as it does temporal loops.

Some might take exception to this intrusion of subjective representation into what is supposed to be an account of the properties of time, a physical magnitude.  But our pre-theoretic conception of time does seem to be inextricably tied up with our subjective sense of time.  Maybe that should be taken seriously, in perhaps somewhat the same way that conscious acts of measurement are taken seriously in our account of quantum mechanical properties like position or momentum or spin. 

In any event, considerations that support the possibility of time travel also give indirect support to a 4-dimensional conception of time. 

[For a recent view of the nature of time from the bleeding edge of ‘extreme physics’, see Time Reborn, by Lee Smolin (Alfred A. Knopf Canada, 2013).  There it is argued that the General Theory of Relativity and Shape Dynamics are conflicting theoretical frameworks that are both empirically equivalent and form a ‘duality’.  Within Shape Dynamics, by contrast with GTR, time is real, fundamental, and flows throughout the universe at a constant rate.]

HANSON’S CLASS NOTES FOR CHAPTER 7: CAUSATION

We can distinguish between general causal claims, like “Smoking causes cancer”, and singular causal claims, like “Sirhan killed Kennedy”.  In the latter sentence, the causal verb does not explicitly appear, but “kill” means: to cause to die.  The sentence alludes to the event of Sirhan shooting Kennedy, which brought about the event shortly thereafter of Kennedy dying.  There has been lots of recent work done on general causal claims, much of it involving the idea that, very roughly, causes raise the probability of their effects.  I say very roughly, because of course sharply falling barometers increase the probability of a storm occurring, but are not causes of that storm.  Anyway, Garrett focuses his attention on accounts of singular causation, which he construes as a two-place relation between two events, called the cause and the effect.  (Events are another interesting ontological category, but one that we will not be covering in this course.  But to say something very partial about them, think of them here as having duration, being dated, having ‘participants’ -- animal, vegetable and/or mineral! – and characteristically involving change or the initiation of change.)

Garrett presents and discusses 3 accounts of singular causation.  Two of them are put forward as reductive accounts, accounts that try to explain causation in non-causal terms.  These are: the ‘regularity theory’ of David Hume, and the ‘counterfactual theory’ of David Lewis.  The first of these accounts, while an account of singular causation, tries to explain particular instances of singular causation by subsuming them under general law-like patterns.  The essence of causation on this account is the general pattern.  Something counts as an instance of causation only by being grouped with other, relevantly similar, instances.  The second of these reductive accounts does not involve appeal to any general conditions.  The third account is due to Elizabeth Anscombe.  It is singularist in an even more robust sense than Lewis’s: in the sense that what makes an instance of singular causation causal is inherent in the instance, and independent of what happens elsewhere and elsewhen.  And her account is non-reductive; causation is treated as a fundamental, basic notion.

HUMEAN REGULARITY THEORIES

Hume’s account of singular causal relations emerges from his empiricist account of the origin of our idea of causation, and which parts of our idea may be legitimately projected onto the world. 

According to Hume, our idea of causation includes the following 3 features:

1.       causes and effects are “contiguous” in space and in time; although often we allow talk of non-contiguous causes and effects, provided that there is a causal chain of contiguous causal relations connecting them;

2.       causes always temporally precede their effects, and this explains why the direction of causation is from the cause to the effect;

3.       there is a necessary connection between a cause and its effect.

Hume finds nothing illegitimate about the first 2 elements of our idea, but can find no empirical basis for the third.  There is no sensory impression of this necessity, so Hume tells a story: this idea of necessity is something that we mistakenly confuse with our feeling of anticipation that an event of a certain sort will follow shortly after we experience an event of another sort, when that is the pattern of succession that we have experienced enough times in the past.

Hume’s account of singular causation, his ‘regularity theory’, has influenced subsequent regularity theorists, who, however, have dropped the spatial contiguity condition in light of developments in physics.  Here is a statement of the core of such theories.

RT: A caused B if and only if (i) A preceded B and (ii) all A-type events are followed by B-type events.

This is put forward by Hume as a reductive account.  So there must not be any hidden appeals to the notion of causation in (i) and (ii).  On p. 52, however, Garrett raises legitimate concerns about whether the account is reductive, based on the possibility that the categorization of events into A-type and B-type cannot be carried out reductively.  If the relevant respects by which A-type events are grouped together make any appeal to what brings them about or what they bring about, then this amounts to an appeal to causation and the account fails to be reductive.  That might not be the end of the world, except for Hume, who needs a reductive account; as already mentioned, we will discuss a non-reductive account later.  So let’s set this issue aside, to consider other problems with the account.

As Reid and others have pointed out, the account looks too weak: both conditions appear to be satisfiable where we wouldn’t want to say that we have causation; so the conditions appear not to be sufficient for causation.  Day and night appear to satisfy the conditions, but day doesn’t cause night, or vice versa.  We would normally say that day and night have a common cause, the diurnal rotation of the earth.  It is not even clear that the notion of common cause can be explicated in Humean terms.
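The day/night counterexample can be made vivid with a toy implementation of RT over a finite event log (the log, the event typing, and the use of immediate succession as a stand-in for ‘followed by’ are simplifications for illustration): RT’s two conditions come out satisfied, even though day does not cause night.

```python
# Event types in temporal order:
log = ["day", "night", "day", "night", "day", "night"]

def rt_causes(a_type, b_type, events):
    """RT, toy version: some a_type event occurs (and so precedes its successor),
    and every a_type event is immediately followed by a b_type event."""
    followers = [events[i + 1] for i in range(len(events) - 1) if events[i] == a_type]
    return bool(followers) and all(f == b_type for f in followers)

print(rt_causes("day", "night", log))  # True -- yet day doesn't cause night
```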

The account also looks to be too strong: arguably, neither condition seems to be necessary for causation.  As for condition (i), we seem to have no trouble conceiving of simultaneous cause and effect, e.g. the transfer of energy that takes place at the instant of collision of two particles.  A large question then remains, of course, as to what the directionality of causation, from the cause to the effect, consists in.  We will see some other proposals soon.  As for condition (ii), as Garrett points out, 20th century indeterministic quantum physics has taught us how to imagine cause-effect pairs that do not fall under exceptionless generalizations.  But if an effect is not determined by its cause, how are we supposed to conceive of the causal relation?  The effect is supposed to be somehow “conditioned on” the cause, but what does that come to if this is neither necessary nor even generalizable?  What remains to give appeals to causation their explanatory force?  We will return to this.

LEWIS’S COUNTERFACTUAL THEORY

There are many different ways in which counterfactual conditionals might enter, and have entered, into accounts of singular causation.  Here we will look at David Lewis’s earliest version of the account:

CT:  A caused B if and only if, had A not happened, B would not have happened.

Notice that the possibility of simultaneous, even temporally backwards, causation is not conceptually blocked.  The directionality of causation is cashed out counterfactually: A and B may be simultaneous, for instance, but the direction of causation is from A to B because if A had not happened then B would not have happened either.  But the converse need not hold: if B had not happened A might still have happened.  If I hadn’t turned the key, my car wouldn’t have started; but if my car hadn’t started, I might still have turned the key.  Maybe the distributor was removed, or maybe I just flooded the carburetor.

Notice next that on Lewis’s account an instance of singular causation does not have to fall under a universal generalization either.  All that has to be true in the actual world is that A and B both occur.  All that has to be true in the other possible worlds is that in the most similar possible world to the actual world in which A does not occur, B does not occur either.  This clearly allows for non-deterministic causation.  I might toss a pair of dice and they turn up snake eyes.  There is only one way for that outcome to occur, and it is very unlikely if the dice and the toss are fair.  So there is no universal generalization under which the event pair falls.  Yet on Lewis’s account it will be a case of causation, because both events happened, and the most similar world to this world (the ‘nearest’ world) in which I don’t toss the dice is a world in which they don’t turn up snake eyes.  Lewis’s explication of the counterfactual account does seem like an implausible account of the explanatory force of the causal claim, however.  What explains the dice turning up snake eyes?  I tossed the dice, and in this other isolated world that is otherwise the most similar to this one except that I don’t toss the dice, snake eyes doesn’t turn up either.  But what does what goes on in some isolated world have to do with what goes on causally in this world?

There are also the problems of causal pre-emption and causal overdetermination that face counterfactual theories like Lewis’s, and which raise the question whether such theories can be reductive.  In cases of overdetermination, there are at least two active causes of the same effect.  Brutus and his pals simultaneously stabbed Caesar.  Let us suppose that each stab was sufficient all by itself to do Caesar in.  It was thus overdetermined that Caesar would die.  If Brutus hadn’t joined in, Caesar would have died anyway (many times over).  But it seems that none of those individual stabbings can have killed Caesar on Lewis’s account.  Take Brutus’s knife thrust.  In the nearest possible world where Brutus doesn’t stab Caesar, Caesar still dies.  Pre-emption is a variant on this theme.  Suppose the story went a bit differently.  Brutus was the only one who stabbed Caesar, and Caesar died.  But Brutus’s pals didn’t trust him, so they were waiting in the wings watching, so that if Brutus were faint of heart they could step in and finish it.  In the nearest possible world in which Brutus doesn’t stab Caesar, Caesar still dies.  So Brutus didn’t cause him to die according to Lewis’s account.  That seems clearly wrong.  But is there a way of ruling out causal overdetermination and pre-emption without appealing to the notion of causation?  It can seem very unlikely.  In later work Lewis adds many epicycles to his account in order to overcome this problem, but many have found them unconvincing and ad hoc.
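The pre-emption problem can be made vivid with a small possible-worlds sketch.  This is only a toy model with a stipulated similarity ordering, not Lewis’s own apparatus; the event names and function names are my own:

```python
# Toy model of Lewis's CT and the pre-emption counterexample.
# A world is a frozenset of the events that occur in it.  The similarity
# ordering over worlds is simply stipulated (nearest first); Lewis's
# actual similarity metric is far subtler.

ACTUAL = frozenset({"brutus_stabs", "caesar_dies"})

# Non-actual worlds, ordered from most to least similar to ACTUAL.  Because
# Brutus's pals are waiting in the wings, the nearest world in which Brutus
# doesn't stab is one in which they stab and Caesar still dies.
ORDERED_WORLDS = [
    frozenset({"pals_stab", "caesar_dies"}),
    frozenset(),  # nobody stabs; Caesar lives
]

def nearest(holds):
    """First (i.e., most similar) non-actual world satisfying `holds`."""
    return next(w for w in ORDERED_WORLDS if holds(w))

def ct_causes(a, b):
    """CT: A caused B iff both occur and, in the nearest world where
    A doesn't happen, B doesn't happen either."""
    if a not in ACTUAL or b not in ACTUAL:
        return False
    return b not in nearest(lambda w: a not in w)

# CT wrongly denies that Brutus's stabbing caused Caesar's death:
print(ct_causes("brutus_stabs", "caesar_dies"))  # prints False
```

The model returns False for Brutus precisely because the backup assassins make the nearest Brutus-free world one in which Caesar dies anyway.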

ANSCOMBE’S SINGULARIST THEORY

There are those who think that singular causation is too basic a notion to succumb to a reductive account.  But that does not mean that there is nothing insightful to say about it. Minimally, it just means that we should not expect a set of necessary and sufficient conditions for singular causation that in no way circle back on the notion of causation.  Here, then, is a formulation of Anscombe’s proposal, as given by Garrett, pp. 84-5:

AT:  A causes B if and only if B derives from A, where derivation is meant to encompass and be paradigmatically exemplified by an effect being generated by or produced by its cause.

Of course, one can easily think of the ideas of generation or production as themselves causal notions.  But they do have dynamic implications about the mechanisms that bring about or produce the effect, implications that cannot be captured by any merely correlational account of causation such as Hume’s, or, worse still, by an account like Lewis’s, which depends, as far as the actual world is concerned, merely on the existence of the cause and the effect.  Cases of causal overdetermination and of pre-emption are still cases of causation on AT.  Brutus’s stabbing did produce or bring about Caesar’s death on both the pre-emption and overdetermination scenarios sketched above.

How is the effect conditioned by the cause in an explanatory way on Anscombe’s account?  She distinguishes “necessitating causes” from “non-necessitating” causes.  A necessitating cause is one that is sufficient to bring about the effect and will do so unless blocked from doing so.  A non-necessitating cause is one that can fail of its effect without the intervention of anything to frustrate it, but such that nothing else is needed – it alone can bring the effect about.

Consider again the dice example.  My throwing the dice is a random process, but can bring about the effect that the dice roll snake eyes.  It can do that all by itself.  But equally (and more often than not) my throw can fail to bring about that effect without the intervention of anything to frustrate it (such as someone grabbing the dice in midair before they roll).  My throw is thus a non-necessitating cause.  Contrast that with my moving a magnet close to iron filings; that is sufficient to cause the movement of the filings towards the magnet, and will do so unless blocked by, say, a lead barrier, or by another magnet being moved close to the iron filings from the opposite direction so as to counterbalance the pull of the first magnet.  Perhaps the best way to think of such cases of necessitating causation is in terms of a deterministic physical continuity that links the cause and the effect.  In the magnet case it is a force field.  The account can be given counterfactually, but non-reductively, thus (I here borrow some elements and inspiration from Fred Dretske, Aaron Snyder, and John Mackie):

A necessitatingly causes B if (i) A and B occur and (ii) are numerically distinct; (iii) there is a productive, dynamic physical continuity linking them; (iv) if that continuity had been broken, or interrupted, or absent, but things had been as far as possible the same, then A would have occurred but not B, barring the presence of overdetermining or pre-emptive conditions; (v) otherwise, even in the absence of overdetermining and pre-emptive conditions, A is a sufficient condition, a determiner in those circumstances, for the occurrence of B.  (See, e.g., John Mackie, “Mind, Brain, and Causation,” in French, P., Uehling, Jr., T., and Wettstein, H. (eds.), Midwest Studies in Philosophy, Vol. IV, University of Minnesota Press (Minneapolis, 1979), pp. 19-29.)

A non-necessitatingly causes B if (i)–(iv) are the same as above; (v) though far from a guarantee of B’s occurrence in the circumstances, A is all that is necessary, needed, or required in the circumstances for an occurrence of B to come about; if B does occur, it was because of A.  (Compare, e.g., Fred Dretske and Aaron Snyder, “Causal Irregularity,” Philosophy of Science, Vol. 39 (1972), pp. 69-71.)

(Note that I have presented these as merely sufficient conditions.  One reason is not to foreclose the possibility of keeping an overall account consistent with the possibility of simultaneous and temporally backwards causation.  More shortly.)

If we want a single unified account of causation, then we should take the weakest, non-necessitating sufficient conditions as our account.  The stronger, necessitating conditions can then be regarded as a limiting case.

It is not clear that temporally backwards causation is ruled out; that depends on whether or not the dynamic physical continuities of (iii) can include some temporally backwards ones.  It does seem, though, that instantaneous causation is ruled out, since it is hard to imagine what goes on in an instant as constituting a continuity.

I would therefore like to suggest that we distinguish between causal processes and causal interactions.  Anscombe’s conditions can be regarded as conditions about causal processes.  Following Wesley Salmon, we might think of a causal interaction as what occurs when two (or more) causal processes spatially intersect at a time.  They will interact, and there will be an instantaneous exchange of energy of some sort at the given time and spatial point of intersection.  Putting this together with Anscombe’s account of causal processes then gives us the following non-reductive, recursive account of A causing B, where A and B are events:

  1. A causes B if Anscombe’s non-necessitating or necessitating conditions are satisfied. (causal processes)
  2. A causes B if A is the event of two (or more) causal processes intersecting at time T, and B is the mark left on these causal processes by the exchange of energies that results at T. (causal interactions)
  3. There are no other instances of A’s causing B. 

(See Wesley Salmon, Scientific Explanation and the Causal Structure of the World, Princeton University Press (Princeton, NJ, 1984).)

Note on the Coherence of Backwards Causation

Garrett emphasizes that neither of what he regards as the two most plausible accounts of causation that he has put forward rules out the possibility of backwards causation.  There is a method of argument, though, first sketched by Max Black in 1956, and later elaborated by a number of other prominent philosophers, which purports to undermine the possibility.  Suppose that we observe an event E at T1.  Assume for the sake of argument that E’s cause is alleged to be some future event C at T2 > T1.  Now suppose that we try to prevent C from occurring, i.e., we attempt to “bilk”.  The point of trying to do this seems to be to try to show that E was not caused by C.  Either we will succeed in preventing C or we will not.  But what follows?  If we succeed in preventing C, the most that seems to follow is that E was not caused by C, not that it had no future cause.  If we fail to prevent C, of course that result seems to be consistent with the possibility that E was caused by C.  Either way, the bilking attempt seems to fail of its purpose.

Stepping back from this, though, one thing to note about the bilking argument is that it is keyed to the possibility of human interventions and manipulations of the causal context.  Recent work on causation (e.g., in ‘interventionist’ or ‘manipulationist’ accounts of causation, not covered by Garrett’s overview) argues that the point of our concept of causation is precisely to tell us which processes, or variables, of a dynamic system are the ones that, were an agent in a position to intervene upon or manipulate them, would allow the agent to control the resulting outcomes of that system.  It is hard to see such a practical perspective on causation as temporally backward looking.  But at the same time that does not seem to rule out the cosmological possibility, as it were, of temporally backwards causal processes.

 

 

PHIL HANSON’S NOTES FOR PHIL 203, GARRETT, CH. 3, “EXISTENCE”

Distinguishable questions about existence:

(1)     What exists?

(2)    What sorts of things exist?

(3)    What is the concept of existence?

(4)    What is the nature (if anything) of existence?

Garrett indicates that his main interest in this chapter is (4), and that his method for answering it will be to proceed ‘logico-linguistically’.

Taxonomy of Existence Attributions

Positive, Singular: Mt. Baker exists.

Positive, Plural: Horses exist.

Negative, Singular: Santa Claus does not exist.

Negative, Plural: Unicorns don’t exist.

 

Truth-maker Principle:   When an attribution is true, there is something that makes it true.

Question:  So, what makes existence attributions true?

[SIDEBAR: PREVIOUS METHODS FOR ANSWERING THE QUESTION OF THE NATURE OF EXISTENCE

1.       Early Modern Rationalism -- Descartes: reflecting directly on our concept of existence.

 

The transparency principle: everything in our minds (including the concepts in terms of which we formulate our thoughts) is accessible to reflective consciousness. 

 

Existence is a basic concept whose content is not derived from anything external to our (conscious) minds.  (Consider, e.g., the cogito.)  The new science shows (contra Aristotle) that all of our ideas are innate-1, in the sense that, though their content is occasioned by and correlated with external stimuli, the specific form that they take in the mind is determined by the dispositions of the mind itself.  But some of our ideas are also innate-2, in the sense that their content is not occasioned by external stimuli at all, e.g., our ideas of God, the infinite, and existence.

Question: How objective is this method of reflection?

 

2.       Early Modern Empiricism – Hume:  reflecting on our memories of sensory impressions, then querying how our idea of existence could have arisen from them.

 

The transparency principle: (same as above).

 

The empiricism principle: All of our (legitimate) ideas are derived from our sensory impressions.

 

So there are no innate concepts.  Hume, Treatise, Bk I, Pt. II, sec. VI: We have no idea of existence at all separate from our ideas of things that we take to exist.  To exist is to be perceived or conceived (comp. Berkeley).

Question:  Is this method any more objective than that of the rationalist?   END OF SIDEBAR]

 

(Back to Garrett.) 

‘Surface grammar’ vs. ‘Logical grammar’ (or ‘Real grammar’)

Two views about existence claims:

1.        The Property View: existence is a property, and the surface grammar of a positive existence attribution captures the logical (‘real’) grammar of the attribution.

Compare:  ‘John is bald’, and ‘John exists’.   In both cases, the truth-maker for the claim will be the fact that the object denoted (i.e., John) has the property attributed by the predicate (in the 1st case the property of being bald; in the 2nd case, the property of existence).

2.        The Quantifier View: the logical grammar of existence claims is not their surface grammar.  “Exists” is not a predicate, and so existence is not a property.  “Exists” is a quantifier, meaning roughly “at least one”.  The truth-maker of “John exists” is simply John.  The logical grammar of “John exists” is rendered roughly as “There exists an x such that x is numerically identical to John”, or in standard logical notation: ∃x(x = John).

[SIDEBAR:  Note that in one way the Quantifier View resembles Hume: there is just the object, not the object plus its property of existence.  But this does not let Hume off the hook, given his rejection of innate ideas.  If our idea of existence is the idea of a quantifier, where does it come from?  According to Hume it would have to come from our sensory impressions.  Seems unlikely.]

But both the Quantifier and Property Views have trouble dealing with true negative existential claims, like “Santa Claus does not exist”.  If the negative existential claim is true, then it is asserting that there is nothing to which to attribute the property of existence (or, on one version of the Property View, the property of non-existence); but then what is the truth-maker for the claim supposed to be on the Property View?  Turning to the Quantifier View, is the truth-maker supposed to be simply that there is no object?  If so, this proves hard to represent quantificationally.  Either ‘Santa Claus’ is not admitted into the language of 1st order logic at all, because it does not refer to anything in the Domain of discourse, or it is admitted but then we can easily generate a contradiction.

[SIDEBAR: STANDARD LOGIC, I.E., 1ST ORDER PREDICATE LOGIC WITH IDENTITY.  It was developed by Frege, Russell, and Whitehead for the purpose of formally representing or ‘modeling’ number theory.  Later, Quine proposed that it be used to model natural language semantics and reasoning.  Garrett seems to be following Quine.  So, here are some key ideas about Standard Logic (“SL”):

A rule is deductively valid just in case any interpretation of the formal language of SL that makes its premises all true makes its conclusion true as well.

Consider the Rule of Existential Generalization (EG):

…a…
----------
∃x(…x…)

EG is a valid rule of SL.  But why?  It must be because of the way the intended interpretations of SL are structured.

An interpretation consists of a non-empty, possibly infinite set of objects, called the ‘Domain’, the set of truth values, {T,F}, and an assignment of values from the Domain to referring expressions of the language and from the set of truth values to sentences of the language.  So for instance,

Individual Constants: a, b, c, … are assigned particular (not necessarily distinct) objects in the Domain of the interpretation as fixed values for that interpretation.  Note: every constant is assigned a value.

Individual variables: x, y, z, … take individuals in the Domain as variable values.

(Monadic) Predicate letters: F, G, H, … are assigned particular subsets (possibly the empty set) of the Domain as fixed values for that interpretation.

The sentence ‘Fa’ is assigned the value T just in case the object assigned to ‘a’ by the interpretation is a member of the subset of objects assigned to ‘F’ by the interpretation.  Otherwise ‘Fa’ is assigned the value F; i.e., the Law of the Excluded Middle holds: for all sentences P, P or not P.

The sentence ‘∃xFx’ is assigned the value T just in case the subset of the Domain assigned to ‘F’ is not empty; otherwise it is assigned the value F.

Now, back to the rule EG.  Its premise is …a…, where ‘…a…’ is just some sentence admissible in the language of SL containing the individual constant ‘a’; e.g., Fa.  If we are given that Fa is true, then of course that means that the object assigned to ‘a’ is in the subset assigned to ‘F’.  And that means that that subset is non-empty.  But then ∃xFx must be true, given the interpretive rule above.  So now we can see why the rule EG is valid.  It is because the intended interpretations of SL do not allow there to be any constants that do not pick out something in the Domain.  No non-referring names like ‘Santa Claus’ are allowed!  So SL doesn’t try to solve or even model the problem in English and other natural languages of what to do about non-denoting names; it just side-steps it.  END OF SIDEBAR]
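The validity of EG can be checked by brute force on a toy fragment of SL, following the interpretation rules just given.  The domain, the helper names, and the encoding are illustrative assumptions of mine, not part of SL itself:

```python
# Brute-force check that EG (from Fa, infer ExFx) has no counterexample
# under SL's interpretation rules: every constant must denote something
# in a non-empty Domain.
from itertools import combinations

DOMAIN = {1, 2, 3}  # non-empty, as SL requires

def subsets(s):
    """All subsets of s: the possible extensions assignable to 'F'."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def true_Fa(F_ext, a_val):
    """'Fa' is T iff the object assigned to 'a' is in the set assigned to 'F'."""
    return a_val in F_ext

def true_ExFx(F_ext):
    """'ExFx' is T iff the set assigned to 'F' is non-empty."""
    return len(F_ext) > 0

# EG is valid iff NO interpretation makes the premise true and the
# conclusion false.  Crucially, 'a' ranges only over members of the
# Domain: no non-denoting assignment is allowed.
counterexamples = [
    (F_ext, a_val)
    for F_ext in subsets(DOMAIN)
    for a_val in DOMAIN
    if true_Fa(F_ext, a_val) and not true_ExFx(F_ext)
]
print(counterexamples)  # prints []: no counterexample, so EG is valid
```

The empty list falls out for exactly the reason given above: if the object assigned to ‘a’ is in the extension of ‘F’, that extension cannot be empty.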

We are now in a position to be more explicit about the problem that true negative existential claims raise for both the Property View and the Quantifier View.  We will focus here just on true negative singular existential claims.  Here are some examples of such claims:

1.        Santa Claus does not exist.

2.       The tooth fairy does not exist.

3.       A perfect performance does not exist.

4.       It does not exist (referring back to a perfect performance).

5.       This does not exist.  (Pointing, say, at a hallucination of a pink elephant that one is aware of having at the time.)

The Property View holds that existence is a property.  But it may hold either that non-existence is also a property just like existence, or not.  Let us take these in turn.

If non-existence is a property, then the truth of the negative existence claim requires that there be an object which is the bearer of the property, ‘not-E’, of non-existence.  But of course that is precisely what the claim is denying.  If the object is identified by a proper name, as in sentence 1 above, then the logical grammar of the sentence will be ‘not-Ea’, i.e., a has the property of non-existence.  EG applies to such sentences, yielding (∃x)not-Ex, which says that there exists an x that does not exist, an evident contradiction.

On the other hand, if non-existence is not a property, then according to the Property View the true negative existence claim is merely a denial of the existence claim, of the form ‘not(Ea)’.  No matter, though: EG still gives us the same contradiction.

Solution:  Reject EG and embrace non-existent objects.  So, e.g., there is a Santa Claus (and that is to whom we are referring) but he doesn’t exist.  If Fa, then something is F, but that does not imply that it exists.

[Note:  There is actually an independent reason, that Garrett does not mention, for invoking non-existent objects, if you hold the Property View; cf. Penelope Mackie’s nice entry on “Existence” in the Routledge Encyclopedia of Philosophy: doing so makes positive existence claims less trivial-seeming: we talk about lots of things, but only some of them exist.]

The Quantifier View holds that existence is not a property (or at least not a property of objects, though on a version of the Quantifier View mentioned but not endorsed by Garrett, it is a property of properties).  But consider the claim “Adam does not exist”.  Its logical grammar, according to the Quantifier View, is ~(∃x)x = a, where ‘~’ stands for negation.  A problem is that EG still yields the contradiction (∃y)~(∃x)x = y, which says roughly that there exists something y which is numerically identical to something x which does not exist!

Solution: Deny that any singular terms occur in the logical grammar of a natural language: no names, no definite descriptions, no indefinite descriptions, no pronouns, no demonstratives; at least not as singular terms.  Then the problem of true negative existentials (at least for singular ones) cannot arise.  Everything legitimately expressed (i.e., in real grammar, not surface grammar) by a singular existential claim will be expressed in terms just of predicates, quantifiers, and identity.

Indefinite descriptions are the easiest to dispose of.  Sentence (3) above becomes ~(∃x)(x is a performance and x is perfect).  Definite descriptions require a second kind of quantifier, ‘(x)’, which reads “For all x”, to capture the uniqueness expressed by “the” in definite descriptions.  Sentence (2) above becomes, via Russell’s famous theory of Definite Descriptions, ~(∃x)(x is a tooth fairy and (y)(if y is a tooth fairy then y = x)), which reads: there does not exist an x such that x is both a tooth fairy and the only tooth fairy, in that anything y that is a tooth fairy must be identical with x.  Quine was the one who proposed that proper names be replaced by predicate letters.  So sentence (1) above would become ~(∃x)(x Santa-Clausizes), where the predicate ‘Santa-Clausizes’ has as its descriptive content various properties which are taken to uniquely identify Santa Claus, i.e., in the story of Santa Claus.
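These quantificational renderings can be evaluated over a toy domain, treating the quantifiers as loops over the Domain, in the spirit of the interpretation rules in the sidebar above.  The domain and the predicate implementations are illustrative assumptions of mine:

```python
# Evaluating the Russell/Quine renderings over a toy domain in which
# there are no tooth fairies and nothing Santa-Clausizes.  No
# non-denoting name ever appears: only predicates and quantifiers.

DOMAIN = ["alice", "bob"]

def is_tooth_fairy(x):
    return False  # nothing in this domain is a tooth fairy

def santa_clausizes(x):
    return False  # nothing has the Santa-identifying properties

# 'The tooth fairy does not exist' becomes:
#   not (Ex)(x is a tooth fairy and (y)(if y is a tooth fairy then y = x))
the_tooth_fairy_exists = any(
    is_tooth_fairy(x) and all((not is_tooth_fairy(y)) or y == x for y in DOMAIN)
    for x in DOMAIN
)
print(not the_tooth_fairy_exists)  # prints True: the negative existential is true

# 'Santa Claus does not exist' becomes:  not (Ex)(x Santa-Clausizes)
print(not any(santa_clausizes(x) for x in DOMAIN))  # prints True
```

Both negative existentials come out straightforwardly true without any term that fails to denote, which is exactly the advertised payoff of the no-singular-terms strategy.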

Garrett favors this Quantifier View approach to the problem of true negative existentials over the Property View approach.  His main reason seems to be that the Property View violates ‘Ockham’s Razor’: it multiplies entities beyond what is needed to solve the problem, given that the Quantifier View works.  The Property View posits the property of existence, plus possibly also the property of non-existence, plus tons of ‘non-existing’ objects: all the merely possible ones, like the Golden Mountain, as well as all the impossible ones, like square circles.

But against the Quantifier View, one might note the following.

  1. The restriction to 1st order quantifiers – i.e., quantifiers whose variables take only objects as values – has not been motivated, just assumed.  So only objects exist, not properties, and therefore not the property of existence.  Surely that is just too easy, pending a justification for the restriction.
  2. It posits a logical grammar starkly at odds with the surface grammars of natural languages, in some cases misrepresenting the semantics of ordinary language singular terms, which seem to function often with little or no descriptive content semantically attached to them at all.
  3. Surface grammar names in particular are to be replaced with complex predicates that would uniquely identify the intended would-be object being picked out by the name.  That is a lot of predicates!  How are they to be determined?
  4. The Rule of Existential Generalization is spared as a valid rule only by having been rendered superfluous.
  5. The Quantifier View still seems to be a kind of property view.  Even Garrett, who sets aside the idea that quantifiers might express ‘higher order properties’ – properties of properties or the like, such as the property of being instantiated that a property may have – still at one point refers to existence as a ‘formal property’ (cf. p. 29, 13 lines from the bottom).  The contrast he seems to have in mind is with natural properties, had by objects, that ground their causal powers.  But what is a formal property?  What has them, and where are they?  We just have not been told.  Just invoking the special syntax of quantifiers does not help.  Recall Quine’s famous slogan: “To be is to be the value of a bound variable” (and he meant only 1st order variables).  How is that supposed to count as an answer to the question Garrett wants to answer?  How does it help us understand “the nature (if any) of existence”?  So for all we have been told, existence is a special kind of property that needs a special kind of syntax to represent it and distinguish it from other kinds of properties.  But what is its nature?

Of course, someone might still think that we are better off with the Quantifier View, given the extreme ontological excesses of the Meinongian solution to the problem of true negative existential claims on behalf of the Property View.  But Garrett has given no argument that that is the only solution available to the Property View.  And there is another kind of solution, not considered by Garrett but worth considering, that carries with it no new ontological commitments.  It is a solution that is as available to the Quantifier View as it is to the Property View.  It involves simply rejecting the classical Law of the Excluded Middle as a universal law, by holding that sentences with non-denoting names lack truth values: they are neither true nor false.  Therefore, since no negative existential statement containing a non-denoting name will be true, it will never be appropriate to apply the rule of Existential Generalization to it, and so it will never yield a contradiction.

It is beyond these notes to go into the formal details of such a solution, but logicians have been working on so-called ‘Three-Valued Logics’, and even more generally ‘Many-Valued Logics’, since the ’30s, and there is precedent for using them to avoid longstanding semantic paradoxes and contradictions.  And the fact that, if such a solution is viable, it is as available to a property conception of existence as to a quantifier conception suggests that the antagonism Garrett portrays as holding between these views is a false antagonism.
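One standard way of working out such a three-valued proposal uses the strong Kleene connectives, with a third value for ‘neither true nor false’.  Here is a minimal sketch; the encoding and the names are my own, and this is only one of several three-valued schemes:

```python
# Strong Kleene three-valued logic: True, False, and None for
# 'neither true nor false'.  Sentences containing a non-denoting
# name receive the third value, so EG never gets a true premise.

def k_not(p):
    """Kleene negation: negation of a truth-valueless sentence is truth-valueless."""
    return None if p is None else (not p)

def k_and(p, q):
    """Kleene conjunction: False dominates; otherwise None infects."""
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

DOMAIN = {"mt_baker"}  # 'santa_claus' does not denote in this domain

def exists_claim(name, domain=DOMAIN):
    """'name exists': truth-valueless if the name fails to denote."""
    if name not in domain:
        return None  # neither true nor false
    return True

# 'Santa Claus does not exist' comes out truth-valueless, not true,
# so EG cannot be applied to it and no contradiction is generated:
print(k_not(exists_claim("santa_claus")))  # prints None
print(k_not(exists_claim("mt_baker")))     # prints False
```

On this scheme the negative existential with a non-denoting name is never true, which is exactly the feature the proposed solution needs: EG is only ever applied to true premises.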

HANSON’S NOTES ON GARRETT, CH.1, ON ANSELM’S ONTOLOGICAL ARGUMENT

At the end of Ch. 3, brief mention was made by Garrett of Descartes’ version of the Ontological Argument, suggesting that it shares in the demise of the Property View of existence:

Descartes’ Argument (5th Meditation):

1.        My idea of God is the idea of a being with all the perfections.

2.       Necessary existence is a perfection.

3.       Therefore, God necessarily exists.

In the passage in the 5th Meditation, Descartes also argues that there is no less contradiction in conceiving a supremely perfect being who lacks existence, than there is in conceiving a triangle whose interior angles do not sum to 180 degrees.  Since we do conceive of a supremely perfect being, we must therefore conclude that he exists.  Garrett says that Descartes’s argument seems to presuppose that existence (or necessary existence) is one of God’s properties.   And of course Garrett has argued against the Property View of existence.

Be that as it may, it seems to me that there are other problems with (this reconstruction of) Descartes’ line of argument.  Surely no one would accept the inference from ‘By definition, God is an existent being’ to ‘God exists’.  It is patently invalid.  Perhaps the inference from the same premise to the conclusion ‘By definition, God exists’ is valid, but it is uninteresting.  Garrett seems to think that Descartes’ argument is valid but unsound, because premise 2 is false, violating as it does the Quantifier View of existence.  But surely, even assuming the Property View of existence, it is just invalid to infer, from the fact that existence is part of my idea of God, that God exists.

What is needed here is a distinction between, on the one hand, an idea that we are entertaining encoding the property of existence, and, on the other hand, our actually attributing existence to what the idea is supposed to represent.  I can have the idea of ‘smallest really existing Martian’, and that idea encodes real existence.  It is surely perfectly possible to have this idea without supposing that there really are any Martians.  What justifies Descartes’ leap from his having an idea of God, which encodes God’s necessary existence, to his belief that God actually exists?  Is it the appeal here to the idea of necessary existence, rather than just existence per se?  The idea of necessary existence is the idea of existence in all possible worlds.  If something exists necessarily then it follows that it exists in this world in particular.  But surely the distinction between encoding and attributing applies as much to the property of necessary existence as it does to existence.

Let’s turn now to Garrett’s discussion of Anselm’s famous argument.    Here is his reconstruction:

  1. God is that than which nothing greater can be conceived
  2. God either exists in the understanding alone or exists both in the understanding and in reality.
  3. If God existed in the understanding alone, a greater being could be conceived, namely, a being with all God’s qualities who exists both in the understanding and in reality.
  4. So, God cannot exist in the understanding alone.
  5. So God exists in both the understanding and reality.
  6. So God exists.

As far as I can tell, Garrett does not challenge the validity of this argument either, but rather takes issue with the truth of premises 2 and 3.  He says that Anselm “…has identified the mind’s grasping a concept with the mind’s containing the object conceived.” (p. 3).  He refers to this as a ‘fallacy of reification’.  Once we (fallaciously) have God actually existing in the mind, then it is a short step, given his definition in premise 1, to having him exist in reality as well.  Maybe.  But maybe Anselm was just using the language of “existing in the mind” as another – poetic? medieval? – way of talking about our having the concept of God in our minds as per the definition in premise 1.  Still, if not, there does still seem to be this leap, as in Descartes, from our idea of God encoding one of his properties as existence, to his existing.  That is surely also a fallacy of reification, and one that affects the validity of the argument.

Of course Hume, the idealist, could never have raised this objection to the Ontological Argument, since, as we saw, Hume thought that our idea of something existing is no different from our conceiving of it. To raise this objection one needs to believe that there is a mind-independent reality, and then the question is whether or not God is part of that.  I might have come up with the idea of a really existing kind of horse-like animal with black and white stripes.  But does it exist?  I have to look in the external world to see whether or not it does.   Hume has to go through unnatural contortions to try to make sense of that.

Perhaps a word about Meinong is also appropriate here.  He thought that there were non-existent objects, like spherical cubes.  There are spherical cubes, because that is what we are talking about when we say “spherical cubes”.  But they do not exist.  This can be seen as transposing the reasoning of Descartes and Anselm from the sphere of our ideas to the sphere of our utterances.  At least Meinong recognized that just because we talk about something does not mean that it really exists!  Too bad Descartes and Anselm thought that the mere ideas we entertain about something as existing can imply that something really does exist.

Maybe Anselm’s argument goes like this.

  1.  (Even) the Fool has the concept of that than which no greater can be conceived.
  2. (Hence) (Even) the Fool believes that that than which no greater can be conceived exists in the understanding.
  3. No one who believes that that than which no greater can be conceived exists in the understanding can reasonably believe that that than which no greater can be conceived exists only in the understanding.
  4. (Hence) (Even) the Fool cannot reasonably deny that that than which no greater can be conceived exists in reality.
  5. (Hence)  That than which no greater can be conceived exists in reality.

Putting it this way brings out its invalidity: e.g., the move from 1 to 2.  3 is also a false premise, but it doesn’t get its purchase in the argument without the invalid move from 1 to 2.  [This reconstruction of the argument may be found in the Stanford Encyclopedia of Philosophy entry on “Ontological Arguments”, by Graham Oppy.  The distinction made above between encoding vs. believing or attributing existence is also developed there; the ‘smallest really existing Martian’ example too.]

Finally, as far as I can tell, thinking about the role of the notion of existence in these versions of the Ontological Argument does not in any way help us adjudicate between the Property and Quantifier Views.  Contra Kant, for instance, the central reasons why the argument does not work hold whether or not one thinks that existence is a property.

 

 

PHIL HANSON’S NOTES FOR PHIL 203, GARRETT, CH 6: OBJECTS AND PROPERTIES

I.        INTRODUCTION. 

What are properties and objects, and what are the relations between them?  Why are the categories of ‘property’ and ‘object’ useful to us?  Garrett’s question for this chapter is the first.  I add the second just to serve notice that it is a good question, and that I briefly speak to it (in a digression) in the course of going through Garrett’s discussion.

Pretheoretically, we suppose that no two  concrete objects can fully occupy the same place at the same time, and that no concrete objects can be fully present within discrete (non-overlapping) places at the same time.  By contrast, we suppose that two properties can occur at the same place at the same time, and that a property can occur in more than one place at the same time.

According to Realism, properties are ‘universals’, repeatable entities.  Plato and Aristotle were both realists about properties in this sense.  But the devil is in the details, and the details reveal  important differences between Plato and Aristotle. 

According to Plato, universals are abstract, transcendent, eternal entities, existing outside of space and time.  They are repeatable by being multiply instantiable.  For an object to have a property is for it to instantiate or exemplify an abstract universal.  But the existence of a property does not depend on its being instantiated, since universals exist eternally whether or not they are instantiated at some time.

According to Aristotle universals are concrete, immanent, non-eternal entities.  They are repeatable by being wholly present in each of their occurrences.  For an object to have a property is for a concrete universal to inhere in it.  If no object ever has the property, then that property never exists. 

A notable 20th century realist is Bertrand Russell, who held that there are only concrete universals.  Concrete objects are just bundles of universals.  For a concrete object to have a property is simply for that property to be one of the universals of the bundle of universals that constitutes that object.

The denial of Realism is called ‘Nominalism’, the view that there are no universals, only particulars, of which concrete objects are an example (perhaps the prime example), and which are taken to be non-repeatable.  Assuming that the nominalist does not want to outright deny that there are properties, she must give an account of properties on which they are a kind of particular, or constructed out of, or with reference to, only particulars.  As we shall see, there are a number of different ways in which nominalists have tried to carry this out.

[ASIDE:  There are at least four different uses of the term ‘realism’ in philosophy, which can be distinguished by their respective contrast terms.  We have just introduced the distinction between Realism and Nominalism, which is au fond a disagreement about whether ‘repeatable’ entities exist.  Then there is realism vs. idealism, a disagreement about whether or not a mind-independent external world exists; realism vs. instrumentalism or fictionalism, a disagreement about whether or not the unobserved entities/structures/processes postulated by our best scientific theories exist; and finally, realism vs. verificationism, coherentism or pragmatism, a disagreement about what makes our claims or beliefs about the world true or false, the realist saying that it is whether or not they have  correspondence relations with appropriate states of affairs in the mind-and-language-independent  world.]

II.      GREEK REALISM

Plato and Aristotle had contrasting methods of inquiry.  Plato was an early rationalist, who thought that knowledge was acquired through rational insight and reasoning therefrom. Aristotle was an early empiricist, who thought that knowledge was acquired through observation of the natural world, experimentation, and reasoning therefrom.  Since both rational insight and reasoning may seem to involve language, it is perhaps natural that Plato should have turned to the study of language for metaphysical insights.  Plato’s universals are taken by him to be the referents of abstract nouns (like ‘courage’ or ‘goodness’).  A predicate like “is courageous” in the sentence “Socrates is courageous” is taken both to be true of each courageous particular, and to express the abstract universal courage. You cannot point to the universal courage, only to particular instances of it; but one may posit the universal by the rational insight that abstract nouns like ‘courage’ are meaningful and therefore must refer to something had in common by courageous beings.  By contrast, Aristotle’s starting point is looking  at the natural world and observing that objects have natural properties (i.e., not just properties that are an artifact of language or some whimsical grouping together of things)  in virtue of which they both resemble each other and have similar causal capacities and powers.  Aristotle posits his concrete universals as the best explanation of these resemblances and powers. 

The project of constructing an adequate semantics for natural language seems prima facie like a good one (although there are those who are skeptical about it), as does the project of explaining natural resemblances and causal capacities and powers (there have been skeptics about this project too).  But why should we suppose that the same entities – e.g., either Platonic or Aristotelean universals -- must figure in both projects?  Why should Plato and Aristotle be taken to be having a dispute about the nature of universals at all?  Perhaps it was because Plato thought that the world of concrete objects was less real – more shadowlike – than the world of universals; while Aristotle rejected entirely Plato’s domain of abstract universals.  For him only the concrete world was real.  But setting that aside, one wouldn’t expect that the meaning of all general terms, such as ‘trouble maker’ and ‘bachelor’, ought to be some natural property, or that what gives water its peculiar causal powers is some  abstract entity lying outside of the natural order.

Garrett highlights two arguments for what he refers to as “Greek Realism”. 

        The Semantic Argument

1.        The meaning of a word is the entity referred to by the word.

2.       Singular terms refer to particulars.

3.       General terms do not refer to particulars.

4.       So they must refer to universals.

This first argument seems very Platonic.  Garrett asks: Why premiss 3?  Why must the general term ‘round’ refer to the universal roundness rather than to each round thing individually?  And why premiss 1?  Why must general terms refer to anything?  (Recall that Garrett has already argued in Ch. 3 that empty names are meaningful even though they do not refer.)

Well, one Platonic response might be that the set of all and only round things cannot be the meaning of the term ‘round’, because then if all and only the round things turned out to be also all and only the red things, the general terms ‘round’ and ‘red’ would have the same meaning; but they don’t.  Plato invokes universals here.  The two general terms express distinct universals even if they are instantiated by exactly the same set of things.
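The point can be made vivid in toy, extensional terms.  Here is a minimal sketch (the ‘world’, the objects, and the predicate labels are all my hypothetical illustration, not anything in Garrett): if the meaning of a predicate were just its extension, coextensive predicates would be synonymous.

```python
# Hypothetical toy world in which all and only the round things are red.
world = [
    {"name": "ball",   "round": True,  "red": True},
    {"name": "tomato", "round": True,  "red": True},
]

extension_round = {o["name"] for o in world if o["round"]}
extension_red   = {o["name"] for o in world if o["red"]}

# The extensions coincide -- yet 'round' and 'red' do not mean the same.
assert extension_round == extension_red

# In another possible world (add a red cube), the extensions come apart,
# which is why the Platonist posits distinct universals behind the terms.
other_world = world + [{"name": "cube", "round": False, "red": True}]
ext_round2 = {o["name"] for o in other_world if o["round"]}
ext_red2   = {o["name"] for o in other_world if o["red"]}
assert ext_round2 != ext_red2
```

Plato’s distinct universals do in this picture the work that variation across possible worlds does in the sketch: they distinguish the two terms even when their actual extensions coincide.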

                The Metaphysical Argument    

1.       The essence of universals is repeatability.

2.       So appealing to universals explains the truism that different objects can have the same property.

Garrett asks: But do we really need repeatability in Aristotle’s sense (i.e., numerical identity of the universal in each of its occurrences), as opposed to qualitative identity (i.e., numerical distinctness but exact similarity), to explain the truism?  We will see shortly how Aristotle might want to respond, when considering Garrett’s further elaboration of his question.

Garrett offers four objections to ‘Greek Realism’, on pp. 68-69.  The first is that the idea of a universal is hard to understand on either the Platonic or Aristotelian reading. 

On the Platonic reading, there is the problem of seeing how instantiation is possible: how could an abstract universal existing outside of space and time be instantiated by an object existing in space and time?  Indeed.  Presumably instantiation is supposed to be a relation between an object and a universal, and therefore itself a universal.  If not, what is it?  (One author, Michael Loux, suggests that instead it is a ‘fundamental nexus’ -- but clearly that is just to give a name to the problem!)  But if it is a universal, then it seems that it itself can be multiply instantiated.  This seems to lead to an unattractive regress of instantiations: a’s being F is a’s instantiating F-ness, and a’s instantiating F-ness is a’s instantiating the instantiation-of-F-ness, and so on ad infinitum.  But then there would seem to be an infinite number of distinct instantiations involved in a’s being F (this argument is sometimes called ‘Parmenides’s Regress’).  That is not very parsimonious!
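The regress can be written out schematically (a standard way of displaying it, not notation from Garrett), with $I$ for the instantiation relation:

```latex
\begin{align*}
Fa \;&\Rightarrow\; I(a,\, F\text{-ness})\\
I(a,\, F\text{-ness}) \;&\Rightarrow\; I'(a,\, I,\, F\text{-ness})\\
I'(a,\, I,\, F\text{-ness}) \;&\Rightarrow\; I''(a,\, I',\, I,\, F\text{-ness})\\
&\;\;\vdots
\end{align*}
```

Each line treats the previous line’s instantiation relation as itself a universal that must in turn be instantiated, so no line is final.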

On the Aristotelian reading, there is the problem of understanding how the numerically same universal can be wholly present in two non-overlapping places at the same time.  Garrett does concede, though, that this worry is not decisive.  After all, we seem to have no trouble allowing that the numerically same concrete particular can be wholly present at two non-overlapping times while at the same place.  We would therefore seem to need some principled reason for ruling out the intuition about universals without ruling out the intuition about particulars.  At least, these intuitions may stand or fall together.

[SLIGHT BUT HOPEFULLY SUGGESTIVE DIGRESSION: One might think that a purpose of our pre-theoretic notion of object is precisely to allow tracking and re-identification over time.  But if a correlative purpose of our pre-theoretic notion of property is to allow locating and re-identifying over space at a given time, then why is the Aristotelian notion of concrete universal so much more difficult for us?  Perhaps something could be said from a natural selection perspective about the relative importance of object identification and reidentification to our evolutionary ancestors’ survival.  Property identification and reidentification per se may only have come with the emergence of science.]

Garrett’s second objection is directed just against the Platonic conception of universals.  It is that Plato’s universals actually seem to be abstract objects, and thus a kind of particular.  But then how is a particular supposed to be instantiated in or exemplified by other particulars?  Indeed.

Garrett’s third objection is directed against the Aristotelian conception of universals.  And here we seem to get to the heart of Garrett’s worry.  Why, he asks, cannot Aristotelian universals be rejected in favor of scattered (i.e., spatio-temporally discontinuous) particulars?  Instead of the universal greenness, wholly present in each of its instances, we could have the scattered property of greenness, taken as a particular, not a universal, each instance of which is in fact simply a proper part of greenness.  Greenness would not be wholly present in each of its instances, but a proper part of it would be.

How might Aristotle respond to this?  He wants universals to explain natural resemblances and causal powers.  Let’s consider these in turn.  Suppose that b is green and c is green.  According to Aristotle they resemble each other in this respect because they share the natural property of greenness.  On Garrett’s proposal they resemble each other because b’s greenness and c’s greenness are distinct proper parts of the same scattered natural property.  But what makes a property natural?  What distinguishes a scattered but natural property, on this conception, from an artificially gerry-mandered scattered property, e.g., the property consisting of the greenness of this leaf, the yellowness of this banana peel, and the blueness of this table-top?  Call this the property of being ‘grellue’.  Aristotle wants to claim that there is no universal of grellueness, but there is a universal of greenness, because greenness is a natural property.

We may also suppose that the greenness of this leaf causes me to have a certain characteristic color perception when I look at it in normal light.  Now the greenness of other far-away objects seems completely irrelevant to this causal process, though they are, on Garrett’s proposal, parts of the property of being green.  Yet it remains true that other green things besides this leaf would, in similar circumstances, cause me to have a similar color experience.  What is the explanation of that?  It seems odd to say that each of these envisaged distinct experiences of green is caused by a distinct proper part of the same scattered property.  For Garrett it must be that the greennesses of these other things are parts of the same scattered property of greenness as the greenness of this leaf, and that this is no gerry-mandered property.  So its parts must bear the relevant resemblance relations to each other.  But, first, what, then, is resemblance if not a universal?  In which case, how have universals been avoided?  And second, what does the greenness of this leaf bearing resemblance relations to other green things contribute to the process of the greenness of the leaf causing the experience of seeing green in me?  Aristotle, of course, thinks that it contributes nothing, which is why he invokes the universal greenness, wholly present in the leaf, as the cause of my experience of green.

Garrett’s final objection to Greek Realism concerns the status of the objects, per se, that are supposed to be the exemplifiers of universals.  He asks “Is it supposed, then, that we can conceive of the horse apart from all its properties?”, and “How can we know anything about the horse apart from knowing about its properties?”  As for the latter question, Aristotle, at least, would agree that we cannot.  It is only through our causal interactions with horses that we learn about horses, and such interactions are all grounded in the properties of the horse.  But the first question is ambiguous.  Are we being asked whether we can conceive of the horse as existing independently of its properties, or merely whether we can consider the horse independently of its properties, via an act of abstraction?  If the former, Aristotle would deny that we can conceive of such existence, any more than we can conceive of a property existing uninstantiated.  If the latter, I suspect both Aristotle and Plato would think that such mental abstraction is possible.  Russell, our 20th Century Realist, to whom we now turn, would also think it possible.  He would regard it simply as a psychological process, facilitated by grammar, for which we do have the cognitive capacity, but would insist that we shouldn’t read any ontological or metaphysical commitment into it.  But that, of course, is because he doesn’t think that any objects exist, except as bundles of properties.  But is that right?

III.     RUSSELL’S REALISM

Russell’s attempt to reduce the category of concrete particular to that of concrete universal is a bold hypothesis.   If it worked, it would represent a large ontological economy:  all we would need is universals, not particulars.  Obviously, it all hangs on what account of the bundling of universals into objects can be given.  Russell introduces as a technical term the relation of ‘compresence’, in terms of which he arrives in two steps at the definition of ‘complete complex of compresence’, and then declares that objects are simply complete complexes of compresence.

Df. of ‘compresence’: the relation that holds between any two universals that are universals of the same thing.

Df. of ‘complex of compresence’: a class of universals each member of which has the compresence relation to each other member.

Df. of ‘complete complex of compresence’: a complex of compresence to which no further universals can be added.

One problem with this is that even if we accept these definitions, and the claim that this is what objects are, they do not tell us how universals manage to bind together into relations of compresence.  The definition of compresence is actually quite telling, since it invokes the thing, the object, which is the very thing Russell’s bundle theory is supposed to be explaining away!  How, then, is the bundling supposed to work sans object?

But there are other problems.  Russell’s theory makes it impossible for there to be two qualitatively identical objects.  Since objects are just a bundle of properties, then if we are talking about the same properties, we must be talking about the same object.  But that seems very counterintuitive.  The bundle seemingly cannot include relational properties, because those would have to be identified partly in terms of their relata.  But their relata are going to typically be objects, and we don’t yet have the bundle theory to appeal to in order to identify those objects – the bundle theory is still under construction, as it were!  So the properties must all be non-relational.  Which makes the account all the more implausible since it then seems to rule out the possibility of two objects which are qualitatively identical in the sense of having all the same non-relational properties.

Furthermore, the bundle theory seems to make every attribution of a non-relational property to an object either trivially true or trivially false.  Since objects are complete complexes of compresence, to which no further universals can be added, if it is true that the object has a certain property, it can only be because that property is partly constitutive of the object: it is one of the universals which actually make it up.  And if it is false, it can only be because the property is no part of the complex that the object is.

[Here are a couple of more technical, and more speculative, difficulties.  Even if we set aside the previous worries, there are reasons to question whether Russell’s account can give us the whole story.  One philosopher, David Armstrong, has suggested that there is nothing a priori incoherent about the possibility of there being an object which is qualitatively identical to another, except for having one less (or one more) property.  If so, then Russell’s conditions could be regarded as not necessary for objecthood, though possibly at least sufficient.  A problem here is that no very compelling a posteriori example of this has been provided.  Another philosopher, Nelson Goodman, noted that if there are three non-relational properties such that they can never be, all three, compresent in the same object, but such that any pair of them can be compresent in the same object, then it is easy to imagine 3 objects which satisfy the condition of being a complex of compresence.  If we then just add to that mix enough further universals so that no more can be added, then it will turn out that we have a complete complex of compresence which is not a single object, but three objects.  So this possibility would challenge the assumption that Russell’s conditions for being an object are sufficient.

How could Russell have missed this?  Here is a speculative just-so story.  Russell looks at some nearby object and asks himself how to think of it as a bundle of universals.  He notes several properties compresent in the object -- its particular shape, size and color, for instance.  And then he thinks to himself: these are all different types of property.  Science is in the business of telling us what kinds of property are had by different kinds of concrete object.  The kinds of property had by a kind of object will be finite in number.  Clearly there is more to this object than its particular shape, size and color, but if I added to these a particular property of each of the other kinds science claims are had by this kind of object, I would arrive at a complete complex of compresence.  No other property could be added to an object of that type without either contradicting a property that it already has, or turning it into something else.  He started from his perceptions of a particular object, and he didn’t notice that his definition of a complex of compresence could be satisfied by a group of objects.]
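Goodman’s counterexample can be made concrete with a toy model (the three objects and the property labels P, Q, R are my hypothetical illustration, chosen only to satisfy his pairwise condition):

```python
from itertools import combinations

# Three objects, each having a different *pair* of the properties P, Q, R,
# so any two of the three properties are compresent in some object,
# but no object has all three.
objects = {
    "o1": {"P", "Q"},
    "o2": {"Q", "R"},
    "o3": {"P", "R"},
}

def compresent(u, v):
    # Russell's definition: u and v are universals of the same thing.
    return any(u in props and v in props for props in objects.values())

candidate = {"P", "Q", "R"}

# Every pair drawn from the candidate class is compresent, so {P, Q, R}
# counts as a 'complex of compresence' by Russell's definition...
assert all(compresent(u, v) for u, v in combinations(candidate, 2))

# ...yet no single object has all three properties: the complex
# corresponds to a trio of objects, not to one object.
assert not any(candidate <= props for props in objects.values())
```

Adding enough further universals to make the complex complete does not obviously repair this, since completeness only requires that nothing more can be added, not that a single object underlies the class.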

This seems to demonstrate the difficulty of making do without objects as a fundamental ontological category.

IV.    NOMINALISM

If Russell’s attempted reduction is a failure, and parsimony is a desideratum of our metaphysical theorizing, then maybe we should consider a reduction in the other direction: a reduction of properties to particulars.  Nominalists through the ages have tried to accomplish just this.  The question is what particulars to invoke in such a reduction.  Garrett quickly dismisses what he takes to be some non-starters.

We do not want to reduce properties to the predicates of a language, or to concepts in our mind, because that would be to suppose either that we have thought of everything, or that the expressive power of our language matches the complexity and richness of reality – two eminently doubtable suppositions.  There are other problems too.  If we think of the predicate “is red” as an expression in our language, it is quite natural to think of it as an expression type with many tokens.  As an expression type, it seems a lot like a Platonic universal: abstract and multiply instantiable.  If we are restricted to tokens, however, to avoid the abstractness, what is the principle that we use to classify tokens as of the same type?  Is it resemblance?  Then what is resemblance if not a universal (albeit a concrete universal here, since it will be the resemblance of concrete tokens, and resemblance may be regarded as an ‘internal relation’ determined entirely by the inherent properties of the relata)?  It also seems clear that predicates and concepts are a very implausible substitute for concrete universals as the ground of an object’s causal capacities and powers.

There are other challenges  not mentioned by Garrett for the predicate nominalist who wants to avoid the appeal to universals in her semantics of natural language.  A nominalistic semantics must be provided for abstract nouns like “courage” or “virtue”, one that makes sense of translational equivalents across languages.  How do we semantically interpret the English sentence “Courage is a virtue”?  It is surely not equivalent to “‘Courageous’ is a virtue predicate”; nor is that remotely what a German would be trying to say when saying in German that courage is a virtue.  There have been attempts in the literature to address these problems but it is unclear that any in the end avoid appeal to universals (such as the universal of an expression in one language having the same semantic function as an expression in another language,  and so on).

Garrett next considers the idea that a property could be identified with the set of all and only those things that have the property, but quite properly notes that this not only ignores the possibility of two different properties picking out the same set of objects, but also seems to depend on the universal of class membership.

Adding that the objects in the set of objects must resemble each other doesn’t help, because, as already noted, resemblance itself seems to be a relation, and hence a universal, especially if we have to talk not simply about resemblance but about resemblance in given respects.  Finally, this picture also seems to just rule out a priori that there could be only one object having a given property (e.g., being spherical), unless the object’s resembling itself in this respect (resemblance being a reflexive relation) is enough to ground its having the property in question.

These considerations quickly lead Garrett to what he seems to feature as the only serious nominalistic contender:

Trope Theory.

Tropes are a certain kind of particular: a trope is a property that can be had by only one object – tropes cannot be shared, repeated, or multiply instantiated.  Examples abound: not whiteness, but the-whiteness-of-my-shirt.  Your white shirt does not have the-whiteness-of-my-shirt.  Garrett notes that tropes have been called ‘abstract particulars’ but emphasizes that ‘abstract’ here does not mean outside of space and time.  I think that the best way to think of it is: abstract in the sense of identified through a mental act of abstraction from the other properties of the object whose trope it is.  Some trope theorists go on to try to reduce objects to bundles of tropes, and Garrett seems to include this as part of the trope theory that he then goes on to evaluate.

Garrett notes some evident advantages of such trope theory over all versions of realism previously considered, as well, of course, as the other versions of nominalism that he briefly sketched.  For instance, in a world in which all and only red things were also round things, the set of all red tropes shares no members with the set of all round tropes.  Also, explaining the causal capacities of something in terms of their tropes lines up nicely with our intuition that causation is ‘local’: the green leaf’s causing me to have a certain visual experience when I look at it is independent of the greenness of other objects.

Garrett also acknowledges some problems.  The three that he mentions are:

1.       The idea that a trope belongs to an object in virtue of being a part of it may not be able to account for the distinction between essential and merely accidental properties of an object. (More on this distinction in our next chapter.)

2.       We have yet to be told what binds tropes together to form an object.  Is it their spatial-temporal location? One might have thought that spatio-temporal location was an accidental matter, and external to the object per se.  Even if somehow it isn’t, we would still need to be told much more about how bundles of tropes manage to cohere as an object.

3.        Things can be predicated of tropes themselves – e.g., a red trope is a color trope, has a certain particular shade of red, etc..   How are we to understand that?  Tropes are presumably not themselves bundles of tropes.  But if not, what explains that?

I would add two more concerns:

4.        Intuitively, tropes are identified by the object to which they belong: e.g., the whiteness of my shirt.  We abstract from the object before us, by a cognitive act of selective attention, and consider just its color.  But if objects are supposed to reduce to the bundle of their tropes, then which object a trope belongs to had better not be constitutive of which trope it is, on pain of circularity.

5.       Scientific theories have generalizations, sometimes in the form of equations relating various magnitudes:  e.g. f=ma.  Here ‘m’ is the physical magnitude of mass, for instance, thought of not as a trope but as a property that can be had by many physical objects.  Or think of all the red tropes.   What are we talking about when we talk of red?   It seems trope theorists are going to have to say something like: we are talking about the set of all and only red tropes, collected together to form a set on the basis of their resemblance to each other.  This is reminiscent of the idea of a scattered property previously discussed, although this time the problem is not with explaining the causal powers of something, but instead with our penchant for making lawlike generalizations about such powers.  For this we need not only individual tropes, but also maximal sets of exactly resembling tropes.  Aristotle, by contrast, could handle both projects with just his concrete universals.  Laws of nature were relations among such universals, and causal powers of individuals were grounded in the universals that inhered in them.
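The move the trope theorist seems to need here can be sketched in miniature (the names, and the modelling of ‘exact resemblance’ as sharing a determinate shade label, are my hypothetical illustration – and note that helping oneself to such a shared label is precisely what the worry about resemblance says the trope theorist cannot take for granted):

```python
# Toy model: tropes as (object, determinate-shade) pairs.
# Talk of 'white' is reconstructed as the maximal set of exactly
# resembling white tropes.
tropes = [
    ("my_shirt", "white"),
    ("your_shirt", "white"),
    ("leaf", "green"),
    ("banana_peel", "yellow"),
]

def exactly_resemble(t1, t2):
    # Exact resemblance modelled (question-beggingly, per the worry
    # above) as sameness of determinate shade.
    return t1[1] == t2[1]

def resemblance_class(t):
    """Maximal set of tropes exactly resembling t (t included)."""
    return frozenset(u for u in tropes if exactly_resemble(t, u))

# 'Whiteness' for the trope theorist: the maximal resemblance class.
whiteness = resemblance_class(("my_shirt", "white"))
assert whiteness == {("my_shirt", "white"), ("your_shirt", "white")}
```

Lawlike generalizations would then quantify over such maximal classes, whereas Aristotle quantifies directly over the one universal wholly present in each instance.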

 

PHIL 203 CLASS NOTES (BY THE INSTRUCTOR, P. HANSON) FOR CHAPTER 4 OF GARRETT: MODALITY

Modality is the study of necessity and possibility.  In metaphysics there are two central types of modal claims that have been of enduring interest: first, claims that some properties are necessary or ‘essential’ to the identity of an individual object  or of a kind; second, claims that some propositions or statements are necessarily true.  There has been a good deal of skepticism, peaking in the mid 20th century, about both of these sorts of attribution of necessity, but this skepticism has also been met with a vigorous defense.

Sometimes the difference between attributions of essential properties and necessary truths is marked by the medieval distinction between ‘de re’ and ‘de dicto’ necessity – or necessity in the things themselves, vs necessity in what is said.  Also, in claims of de dicto necessity, the modal adverb is given ‘wide scope’ (e.g., by being placed at the front of the sentence claimed to express a necessary proposition), whereas in attributions of essential properties, the modal adverb is given ‘narrow scope’ by appearing inside the sentence, in front of the predicate which attributes the property to the object.  So we have:

Necessarily Socrates is human (de dicto) vs.  Socrates is necessarily human (de re)

Necessarily all bachelors are unmarried (de dicto) vs. All bachelors are necessarily unmarried (de re)

On the usual understandings, the de dicto claim about Socrates is false (since he might not have existed at all), whereas the de re claim is true (since it would not be Socrates if, e.g., it were a dog, a frog, or a pebble; he has to be human; although of course it still would be Socrates if he merely hadn’t gotten married, or if he had chosen a different profession, or had a snub nose due to an altercation.  Those latter are mere ‘accidental’ properties of Socrates, not essential ones like being human or being rational.)  On the usual understandings of the claims about bachelors, the de dicto claim is true (since it is a matter of stipulative definition) whereas the de re claim is false (since it is typically a purely contingent thing that someone is a bachelor—he might have been married, even though as it happens he isn’t).  So objects have natures – consisting of their essential properties – which are constitutive of their identity.  And this is independent of how they may be described.
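The scope difference can be displayed in quantified modal logic (a standard rendering, not Garrett’s own notation), with ‘B’ for being a bachelor and ‘U’ for being unmarried:

```latex
% De dicto: the box takes wide scope over the whole closed sentence -- true by definition.
\Box\,\forall x\,(Bx \rightarrow Ux)
% De re: the box takes narrow scope, inside the quantifier, attaching to the
% predicate -- false, since any given bachelor might have married.
\forall x\,(Bx \rightarrow \Box\,Ux)
```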

ESSENTIALISM

Focusing on essentialism first, then, there are four questions we should ask about it:

(1)    Is it coherent?

(2)    If the answer to (1) is yes, then is it true?

(3)    If the answer to (2) is yes, then which properties are essential to which things and kinds?

(4)    How can we tell?

Quine is famous for arguing against the very coherence of essentialism.  Here is a regimentation of his argument in the passage quoted in Garrett:

(1)    Cyclists are necessarily 2-legged, but not necessarily rational.

(2)    Mathematicians are necessarily rational but not necessarily 2-legged.

(3)    All attributions of an essential property must be relative to a background grouping, as in (1) and (2). (suppressed premiss)

(4)    Therefore, for an object, otherwise unspecified as to what groups it may belong to, it does not make sense to rate some of its attributes as necessary and others as contingent.

 

Quine would concede that an object has the properties it has independently of whether we think so or are aware of them.  What he does not agree to is the idea that a property’s being essential or accidental is something that exists independently of what we think.  Things do not have natures in that sense, according to him.  Not only do such natures not exist, but the very idea that they do is incoherent because, he thinks, the whole point of calling a property “essential” is always just relative to some prior and contingent characterization, like being a mathematician. 

 

But there are many things that seem to be wrong with Quine’s argument.  Premiss 3 appears to beg the question, and is certainly not established by premisses 1 and 2; yet it seems required to take us from (1) and (2) to (4).   Premisses 1 and 2 are also ambiguous.  Take Premiss 2.  Let us grant that mathematicians are necessarily rational as opposed to non-rational; but then in a corresponding sense they are also two-legged: that is, rationality and two-leggedness are both part of the human genome, for instance.  On the other hand, if rational is here opposed to irrational, then mathematicians can surely be irrational.  Enough.  This is surely not a compelling argument against the coherence of essentialism, and uncharacteristically sloppy for Quine. 

 

Things and kinds do seem to have constitutive natures, if only in a weak sense.  There are certain formal properties that any object will have, like being identical with itself, which seem essential.  On the other hand it seems false that all of an object’s properties are essential, although Leibniz argued for this from the Principle of the Identity of Indiscernibles.  If that principle held, then every object must differ from every other in at least one property, which would mean that the totality of an object’s properties may be essential to its numerical distinctness from other objects, and hence to its identity.  But the Identity of Indiscernibles is a controversial metaphysical principle, not a truth of logic.  But then what essential properties other than purely formal ones like self-identity might form the nature of an object, and how do we tell?  Surely it is for empirical science, not metaphysics, to determine these details.  If so, then there can be a posteriori (i.e., empirically discovered) de re necessities, like water being necessarily H2O, or the Morning Star necessarily being the Evening Star.  Note that these are empirically discovered identities.  But once discovered, it is the logic of identity and necessity that supplies the essentialism.   Here is how.

 

1.       Suppose we have discovered that x=y (e.g., that The Morning Star is the Evening Star)

 

2.       Then by Leibniz’s Law, everything true of x is true of y and vice versa:  x=y → (φx ↔ φy).

 

3.       We are given the necessity of numerical identity:  □(x=x), that is, everything is necessarily identical with itself (a trivial essential property, had by everything)

 

4.       Substituting 3 into Leibniz’s Law, we get:  x=y → [□(x=x) ↔ □(x=y)]

 

5.       From 1 and 4 by Modus Ponens we get:  [□(x=x) ↔ □(x=y)]

 

6.       From 3 and 5, after breaking up the biconditional into its constituent conditionals, and once again using Modus Ponens, we may conclude:  □(x=y).  Q.E.D.
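The derivation can be set out compactly in standard modal notation (my transcription of the six steps above):

```latex
\begin{align*}
&1.\quad x = y && \text{(empirical discovery)}\\
&2.\quad x = y \rightarrow (\varphi x \leftrightarrow \varphi y) && \text{(Leibniz's Law)}\\
&3.\quad \Box(x = x) && \text{(necessity of self-identity)}\\
&4.\quad x = y \rightarrow [\Box(x = x) \leftrightarrow \Box(x = y)] && \text{(2, taking $\varphi z$ as $\Box(x = z)$)}\\
&5.\quad \Box(x = x) \leftrightarrow \Box(x = y) && \text{(1, 4, modus ponens)}\\
&6.\quad \Box(x = y) && \text{(3, 5, biconditional elimination and modus ponens)}
\end{align*}
```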

 

The story about a posteriori but necessary predications is a little more complicated, but is in fact also derivative from identities.  Thus, if water is necessarily H2O, and the chemical properties of H2O determine that it is a clear liquid at room temperature, then water is necessarily a clear liquid at room temperature.  Or if genomic studies determine that humans are essentially rational, and it turns out that Socrates is essentially human, then it will follow that Socrates is also essentially rational. 

 

 

NECESSARY TRUTHS

 

The truths of number theory, logic, and definitional truths are among those typically taken to be necessary.  To say that they are necessary is to say that they are true “no matter what”, or “in all possible circumstances”.  Otherwise they are contingent.   But what reasons might we have for giving such a privileged modal status to a truth?  Is it that keeping the meanings of our expressions fixed, we just cannot imagine or conceive of a scenario in which they are false?  It would seem dangerous to let the issue of their modal status rest on our powers of imagination.  In the early part of the 20th century, some empirically minded philosophers known as the Logical Positivists argued for a doctrine called Conventionalism.  This doctrine requires a distinction between analytic and synthetic truths.  Analytic truths are truths whose truth is guaranteed by meaning alone.  Truths whose truth is not so guaranteed are called ‘synthetic’.  Conventionalism is then the doctrine that:

 

All and only analytic truths are necessary truths, and furthermore their being analytic is the explanation of their being necessary.

 

If analytic truths are true merely by conventions or stipulations of meaning, then, given that we are adult native speakers of the language, and have reflective access (or at least dictionary access!) to the relevant lexical information, that explains why we can be so confident in giving such privileged modal status to these truths.  Logical Positivists worked hard trying to show that the truths of logic and math were analytic in the required sense.  It was a two-stage process.  Logicists first tried to reduce mathematics to logic plus definitions; then they tried to argue that logic was analytic.  Problems emerged in both stages.  The ‘logic’ that math reduced to was problematic second-order logic – or standard logic plus set theory, where the logical status of set theory was in doubt.  But then first-order logic was hard to reduce to analytic truths in a non-circular way, because logic was required to bring about the reduction.  And Quine was aware of all this. 

 

Quine became just as skeptical of the existence of necessary truths  as he was of the existence of essential properties.  He took conventionalism to be the last best hope for any viable account of necessary truth, and proceeded to argue against it, by arguing against the very intelligibility (do you see a pattern?) of the analytic/synthetic distinction.  He argued this in a famous paper entitled “Two Dogmas of Empiricism”.   Setting aside many details, the structure of the argument was like this:

 

1.       To explain a concept/term T it suffices, or is necessary and sufficient, to give a non-circular method of finding/generating all actual (past/present/future) things to which T applies.

2.       A method is circular, relative to T, if for its appreciation one needs to understand T or some other concept/term which can only be explained in terms of T.

3.       “Analyticity” belongs to a class of terms/concepts which are interdefinable, but into which we cannot break:  meaning, synonymy, semantical rules, necessity, analyticity, explication, definition, intension, etc..

4.       Therefore we do not understand “analyticity” (except in this circular way)

 

The access to the circle of concepts/terms would have to be through the actual or potential behavior of speakers of the language, insists Quine.  But proceeding in this way does not ground our traditional notion of analyticity.  Rather, it turns out that all truths are contingent.  We might, and with good reason, be less ready to give up some putative truths than others, in the face of recalcitrant evidence.  But this is just a matter of degree.  All truths face the tribunal of empirical evidence.  All are empirical, and all are contingent.  All we could coherently mean by the ‘de dicto necessity’ of a putative truth is our reluctance to give it up, given the ensuing adjustments that would have to be made as a consequence throughout our body of beliefs.  But of course, that was not what we meant.  Quine’s argument, if accepted, would mean the death of de dicto modality as traditionally understood.

 

But one of Quine’s star pupils, Saul Kripke, demonstrated at least the coherence of both de dicto and de re necessity claims, by constructing a formal mathematical model of these notions, using a set-theoretic construction inspired by Leibniz’s talk of possible worlds, and then demonstrating that the model was consistent, sound, and complete.  De dicto necessary truths in the model were assigned truth in every possible world; so logical and mathematical truths were true in every world.   De re necessary truths were modeled as true in every possible world in which the object existed; so every possible world in which Socrates existed was a world in which he was human; every world in which water existed was a world in which it was H2O. There was nothing like the conventionalist appeal to analyticity.  And here is the rub:  Kripke knew that his teacher Quine was a set theorist, someone who respected set theory!  Note:  I am not meaning to imply here that the analytic/synthetic distinction cannot be rescued by an empirically based account of natural language; just that we do not have to do that first, in order to move forward.
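As a toy illustration (my own sketch, not Kripke’s actual construction), the two truth conditions just described can be modeled with ordinary sets: a handful of labeled worlds, each recording which propositions hold there and which objects exist there with which properties.

```python
# Toy possible-worlds model (illustrative sketch, not Kripke's formalism).
# Each world records the propositions true at it, and the objects existing
# at it together with their properties.
worlds = {
    "w1": {"props": {"2+2=4"}, "objects": {"Socrates": {"human"}}},
    "w2": {"props": {"2+2=4"}, "objects": {"Socrates": {"human", "snub-nosed"}}},
    "w3": {"props": {"2+2=4"}, "objects": {}},  # Socrates does not exist here
}

def de_dicto_necessary(prop):
    """A proposition is de dicto necessary iff it is true in every world."""
    return all(prop in w["props"] for w in worlds.values())

def de_re_necessary(obj, prop):
    """An object has a property essentially iff it has it in every world in
    which it exists; worlds where it doesn't exist don't count against it."""
    return all(prop in w["objects"][obj]
               for w in worlds.values() if obj in w["objects"])

print(de_dicto_necessary("2+2=4"))               # True: holds in every world
print(de_re_necessary("Socrates", "human"))      # True: human wherever he exists
print(de_re_necessary("Socrates", "snub-nosed")) # False: an accidental property
```

Note how the de re clause quantifies only over the worlds in which the object exists, matching the text’s gloss on Socrates’ essential humanity.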

 

MODAL REALISM

 

But it is one thing to demonstrate coherence of modality attributions, it is another to give an explanation of them.  At least conventionalism had a kind of explanation.  What explanation of modality does possible worlds modeling provide?  A minimalist version of modal realism simply affirms the truth of modal truths.  Even conventionalists were modal realists in this sense about de dicto modality, at least.  But if we reject conventionalism and still want to be a modal realist, then what?  How does possible worlds talk help?

 

This is where things begin to get really interesting in late 20th century metaphysics of modality!  Some wanted to treat possible worlds talk as just that: a convenient fiction; others wanted to take it very seriously.   If we are looking for an explanation of modality in possible worlds talk, then it seems that we have to take it at least somewhat seriously.  But there are very different ways of doing that.   Enter another star pupil of Quine, David Lewis.  Lewis and Kripke, as we shall see, represent two extremes when it comes to explaining modality using possible worlds talk. 

 

Lewis proposed not only to “reify” possible worlds (i.e., to posit them as existing), but to reify them as concrete entities exactly on a par with our world.  Here’s the rub again:  as Quine’s student, he knew that Quine loved all things concrete, that Quine accepted as little that was abstract into his ontology as he possibly could – e.g., just enough set theory to underwrite just the mathematics actually required by empirical science.  Lewis’s possible worlds were not abstract at all.  (BTW, Lewis also wrote his Ph.D. dissertation on the notion of convention, showing that it had a foundation in game theory, and showing that the game-theoretic account of convention also provided a coherent analytic/synthetic distinction.)  His idea was that you posit all these concrete possible worlds, and then you give a reductive explanation of modality in terms of their properties.  Every coherent possibility occurs in one or more possible world.  The world that we live in, that we call “the actual” world, has no special ontological status, except for us.  Denizens of other worlds will rightly regard their world as the actual world – actual for them.  There is no way to travel to one of these other worlds, though; no spatio-temporal relations between stuff in our world and stuff in those worlds.  And something concrete can only exist in one world.  

 

But then what does it mean to say that Socrates is essentially human?  It cannot mean that every world in which Socrates exists is a world in which he is human.  Socrates only exists in this world!  Lewis wants to be able to give an account of essentialism, and so he introduces ‘counterparts’.  Even though Socrates only exists in this world, he has (relevantly similar) counterparts in other worlds, and all of his counterparts in those other worlds are human.  And that is what he proposes it means for Socrates to be essentially human.
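The counterpart-theoretic reading can be sketched in the same toy style (my illustration, with hypothetical world and object names): an object exists in only one world, and it has a property essentially just in case it has the property at home and every counterpart of it in every other world has it too.

```python
# Sketch of Lewis-style counterpart-theoretic essentialism (my illustration).
# Nothing exists in more than one world; the counterpart relation links an
# object to relevantly similar objects in other worlds.
home_world_objects = {"Socrates": {"human", "philosopher"}}

other_worlds = {
    "w2": {"Socrates*": {"human", "farmer"}},       # a counterpart who chose differently
    "w3": {"Socrates**": {"human", "philosopher"}},
}

# Which otherworldly objects count as Socrates' counterparts.
counterparts = {"Socrates": [("w2", "Socrates*"), ("w3", "Socrates**")]}

def essentially(obj, prop):
    """Counterpart-theoretic reading: obj has prop essentially iff obj has it
    at home and every counterpart of obj in every other world has it too."""
    if prop not in home_world_objects[obj]:
        return False
    return all(prop in other_worlds[w][c] for (w, c) in counterparts[obj])

print(essentially("Socrates", "human"))        # True: every counterpart is human
print(essentially("Socrates", "philosopher"))  # False: the w2 counterpart is a farmer
```

The contrast with the previous sketch is the point: here the quantification runs over counterparts, never over Socrates himself in other worlds.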

 

Kripke, however, takes issue with this account, largely because of its implications for accidental properties.  Perhaps you would have gotten an A+ on a test if you had studied harder, and so you wish you had done so.  There is another possible world in which one of your counterparts did study harder and did get the A+.  But you don’t care a whit about that.  What you wish is that you, not your counterpart, had studied harder.  You believe that if you had studied harder you would have gotten an A+, not that there is some counterpart of you that did study harder and did get the A+.

 

What then is Kripke’s account of modality?  He does not attempt a reductive explanation.  I will focus here just on the issue of essentialism.  Kripke is committed to supposing that there is a concrete reality to our natures and causal capacities, in this world and no other.  The other worlds are just a mathematical fiction that we use to model our natures and capacities in this world. To say that the litmus would have turned red if it had been dipped in an acid, is not to say that its counterpart in some other world does turn red when it is so dipped.  It is to say something about the litmus in this world, that it has certain natural capacities that are part of its ‘modal nature’.  So Kripke in a way wants to return to the Aristotelian picture of essential properties.  By contrast Lewis wants to deny that, say, a given magnet has the power as part of its nature to attract any nearby iron filings, except in the sense that there are counterparts of the magnet in other worlds isolated from this world towards which some nearby iron filings do happen to move.  Kripke’s is a kind of modal realism that involves positing modal properties in this world.  It strongly contrasts with Lewis’s version of modal realism, in which there are indenumerably many isolated worlds, but all impoverished with respect to modal properties existing in those worlds per se.  Lewis’s modal properties are just a bunch of transworld similarity relations between counterparts.

 

[ADDENDUM.  On p. 49 Garrett briefly introduces us to a very clever modal version of the Ontological Argument, due to Alvin Plantinga.  The core of it is just this: (1) It is possible that a necessary being exists; therefore (2) A necessary being exists.  Plantinga, a believer, to his credit is careful not to claim that the argument is actually decisive.  It is valid in, e.g., quantified S5, one important modal logic.  I am not going to comment much except to say that one way to approach the argument critically is to think about the difference between Kripke’s and Lewis’s understanding of the formalism of modal logic.  On Lewis’s understanding, the possible worlds represented are all real and concrete and isolated from each other.  Suppose a necessary being exists in a world other than ours.  Then a counterpart of that being must exist in every other possible world including ours.  Hmmm.   Which one is God?  On Kripke’s understanding, the other possible worlds are just fictions constructed to help us talk about the modal properties existing in this world.  Suppose that the concept of a necessary being is coherent.  Is it therefore exemplified in one of those fictional worlds?  It depends what we are trying to model.  If we already think that a necessary being exists in this world, we might try to explain some features of the world in terms of that being’s activities and capacities.  So then she will also appear in all of the other worlds in our model of possible worlds.  But if we do not think that a necessary being exists in this world, what reason would we have to put her in our possible worlds model of the modal properties of this world?  Maybe we just want to model the coherence of the notion of a necessary being.  So then we place such a being in our representations of all the fictional worlds, one of which we take to represent the actual world.  
Does anyone think that this act of representation magically brings a necessary being into existence, or means that she has existed all along?]
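The formal skeleton of the argument in propositional S5 can be sketched as follows (my reconstruction, not Garrett’s or Plantinga’s exact formulation; ‘p’ abbreviates the claim that the relevant being exists):

```latex
\begin{align*}
&1.\quad \Diamond\Box p && \text{(premise: possibly, the being exists necessarily)}\\
&2.\quad \Diamond\Box p \rightarrow \Box p && \text{(characteristic S5 principle)}\\
&3.\quad \Box p && \text{(1, 2, modus ponens)}\\
&4.\quad \Box p \rightarrow p && \text{(axiom T: what is necessary is true)}\\
&5.\quad p && \text{(3, 4, modus ponens)}
\end{align*}
```

All the work is done at step 2, which is where the Kripkean and Lewisian readings of the formalism, discussed above, pull apart.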

 

HANSON’S CLASS NOTES FOR CHAPTER 5: PUZZLES OF CONSTITUTION AND IDENTITY

Background to puzzle (ii): Lumpl and Goliath.

Re.: Essentialism and non-natural kinds.  Our discussion of essentialism in Ch. 4 focused mainly on claims that natural kinds and physical objects had certain essential properties – properties partly constitutive of their identity and nature as a kind or as that very physical object.  And the physical objects were typically objects belonging to a certain natural kind (or kinds), where it was membership in the kind, and the properties that that membership encompassed, that underwrote the object’s essential properties.  Natural kinds were supposed to be the kinds that ‘carved Nature at her joints’.  Essential properties were had independently of any issue of how the object was referred to, described, or thought about.   There were mind-independent and language-independent facts that constituted the nature of water, of gold, of being human, or being Socrates in particular, facts about their essential properties.

By contrast, consider being a bachelor.  “Bachelor” is not a term introduced with the intention to carve Nature at her joints!  It is a legal distinction, with a legal definition.  Of course, a particular bachelor, John Doe, being a human, will have essential properties in virtue of that; and he will have further essential properties in virtue of being the particular human that he is – e.g., having certain biological parents, having been born a particular sex, coming from a particular sperm and egg, and so on.  These are all natural facts.  Now given that John Doe is a bachelor, it follows that he is previously unmarried.  And it might seem natural to say that being previously unmarried is an essential property of bachelors.  But what would that mean?  Merely that in a society that has the institution of marriage, there is a need to distinguish between those males who have not availed themselves of the institution and those who have, for a variety of reasons of a religious and legal nature that characterize the institution  within a given society.  Those who have not availed themselves of the institution are called bachelors.  Keep in mind the differences of custom that characterize marriage in different societies, and the differences in social standing that can (or have) characterize(d) married and unmarried males in those societies, and that some primitive societies did not have such an institution at all. 

Still, it is a necessary condition of being a bachelor, that one is a male not previously married.  If one were to indulge here in calling this a case of essentialism, this seems to be a case of essentialism that lends itself to ‘conventionalist’ explanation.  That is, we have a definition of “bachelor”, explicitly laid out in law, and it is in virtue of satisfying all of the conditions of that definition that John Doe is “essentially unmarried”.  This explanation only serves to emphasize the difference between this sort of case, construed as a case of essentialism, and the cases of essentialism central to Ch. 4.  Note also a certain irony here.  Those who espoused conventionalism about necessary truths typically rejected essentialism.  Quine argued against conventionalism by arguing against the analytic/synthetic distinction on which it was based.  But given an appropriately empirically based account of that distinction, conventionalism could be revived to give us an account not only of necessary truths, but of essential properties of artifacts qua artifacts, and of non-natural kinds.

Another kind of non-natural property is that of being an artifact of a certain sort.  Artifacts are man-made.  Certainly on one meaning of the term ‘natural object’ the contrast is with man-made objects: e.g., natural lakes vs. man-made lakes; natural vs. humanly cross-bred kinds of plants and animals; natural beauty vs. human art, and so on.  Unfortunately this contrast is often linked to the idea that man transcends the natural order.  Someone who wants to deny this and see human kind as simply another evolved manifestation of the natural order, will want to hold that in an important sense everything is natural, and therefore that humans creating artifacts and definitions and new biological kinds are simply acting according to their natures, with the end results therefore also being natural.  Artifacts and definitions would therefore be natural in this sense.   But doesn’t this attitude tend to undercut the distinction between essential and non-essential properties?  Doesn’t it even play into Quine’s hands?  (Recall his line of argument that a property is essential only relative to a background classification. And recall his examples of such classifications: being a mathematician and being a cyclist.)  I claim not, that the relevant distinction here can be one between those “essential properties” that are to be explained in something like conventionalist terms, and those essential properties that are not.    

Let’s now take a first pass at Allan Gibbard’s case of Lumpl and Goliath.  ‘Lumpl’ is the name of the lump of clay out of which a statue of Goliath has been formed.  Call the statue ‘Goliath’.  This first pass will not attend to the temporality of the examples, and how best to deal with it.

One can of course imagine a lump of clay sitting around, and then one day a sculptor coming along and shaping it into a statue.  This is the preliminary Lump and statue of David example of the text.  Once that process is completed, it may seem that the lump of clay and the statue, though distinct objects with distinct histories, come to occupy the same place at the same time – in violation of one of our pre-theoretic ideas about physical objects.  How is that possible?  Well, it can be noted that while occupying the same place at the same time, they are composed of exactly the same matter.  They are distinct merely because of their distinct histories.  The lump existed before the statue, and could exist after the destruction of the statue.  One problem with this, though, is that it would violate the “mereological principle” – a principle about the relation between wholes and their parts, which Garrett calls the “Extensionality Principle” or ‘EP’ – which claims that things that have all of the same proper parts are the same object.  Lump and David have different temporal parts.   (Note that this is the mereological analogue of the controversial Identity of Indiscernibles principle, that if objects share all of their properties, they must be one and the same.  The converse of this latter is known as Leibniz’s Law, and is not at all controversial.  Nor is the converse of EP controversial.   We will return to EP later.)

Another issue here is about the relation between the identity of a physical object and the space it occupies at a given time.  In light of the Lump and David example, one might consider weakening our pre-theoretic intuition, so to speak, by qualifying it thus:  Things of the same kind cannot occupy the same place at the same time (but if they are of different kinds it might be okay.)  Garrett refers to this as “Locke’s Principle” or ‘’LP”.  That would leave the possibility open that, under the right circumstances, distinct things of different kinds could occupy the same place at the same time.

And now for Lumpl and Goliath.  Gibbard tells the story so that there are no differences of history or of parts between Lumpl and Goliath, and so that EP and LP are both satisfied.  The bottom and top halves of the statue were sculpted separately from separate lumps of clay. The instant that the separate halves are brought together is the instant both that Lumpl comes into existence and that Goliath comes into existence.  What happens next is that having slept on it, the sculptor decides that he really dislikes Goliath, and so he uses TNT to blow up Goliath, and of course Lumpl with it, to smithereens.  So they both pass out of existence at the same time as well.   And throughout both their existences they shared exactly the same matter (in the same arrangement).  Yet it still seems easy to have the intuition that they are nevertheless distinct entities.  What then is the basis now of this intuition, since their histories and material constitution are identical?  What is the implicit criterion of identity here that would justify seeing these as nevertheless distinct? The notion of criterion of identity will receive further scrutiny below.

A suggestion might be: Lumpl and Goliath differ in their essential properties, so Leibniz’s Law suffices to distinguish them.   Goliath’s particular shape is essential to it, being a sculpture; whereas Lumpl’s particular shape is not essential to it, being a mere lump of clay.  Sculptures are defined to be the products of particular shapings of matter.  Again, as with the case of bachelors above, the essentialism here seems to be explainable in terms of the definition of what a sculpture is, whose conditions are satisfied by Goliath.  A lump of clay has some shape or other, of course --since it is extended in space -- but none in particular.

On Lewis’s account of essentialism as given in Ch. 4, to say that Goliath has its particular shape essentially is to say that every counterpart of Goliath in every possible world has the same shape.  Kripke’s account of essentialism as found in Naming and Necessity is focused mainly on objects (both concrete and abstract) and on natural kinds.   A few of his examples of objects are of artifacts, though.  For instance he talks about an oak desk, and claims that that very desk couldn’t instead have been made of ice, suggesting, perhaps, that the kind of material of which an artifact is made can be essential to it.  But here he simply appeals to his intuitions about the case, and does not try to incorporate them into a general account of essential properties of artifacts.  It could be held to be an advantage of Lewis’s framework that it lends itself to articulating a general account of essential properties, including for artifacts.  But let’s see how it goes. 

Let’s start with the question:  On what basis would we say that there can be no even slight variance in shape in a genuine counterpart of Goliath?  Let’s suppose that the identity of a sculpture is tied to its maker.  Suppose in some other world, the counterpart of the sculptor makes some ever so slight variations to the shape – detectable only by an expert using special scanning equipment.   (There will be such a world, because remember every possible way that things might be is a way that some concrete possible Lewis world is.)   The slight variation in shape doesn’t even have to reflect a difference of intent.  The sculptor could have been just momentarily distracted.   Otherwise things are the same – the otherworldly counterpart of Lumpl is an exact replica, say, in that it has the same number of molecules of the same kind of clay, from a counterpart of the same source.  So is the result a counterpart of Goliath or not?  If such a sculpture were produced by someone else in the home world it would of course be judged a forgery.  Detecting forgeries is big business in the art world.  A forgery, no matter how convincing, is not worth as much as the original.  While that does provide a motivation for requiring exact replication as to shape for counterparts of Goliath, it also seems relative to particular cultural contexts and practices.  We can imagine a society that does not care about forgeries.  Or, what if the counterpart of the sculptor in this other world does make a Goliath identical in shape and size, the only difference being that it weighs a gram less because there is a small pocket of empty space somewhere in the sculpture, the amount of clay therefore being slightly less.  Again, suppose all of this is detectable only by special equipment.  It is surely then hard to know whether or not the lump of clay, when it is formed, should be considered a counterpart of Lumpl.   Suppose not.  
In that case should the resulting Goliath still not be considered a counterpart of Goliath?  It has exactly the same size and shape and beauty, and it is not a forgery, but sculpted by the counterpart of the sculptor.  One can intuit that it should.  In which case it would not be essential to Goliath that it be formed out of Lumpl.  One’s head at this point begins to swim….

All of this must be contingent, surely, on the nuances of practice in the social and economic and legal world of art, art collection, art auctions, etc.., which we can at least imagine differing from culture to culture.    We don’t want it to be the case that counterpart theory per se dictates an answer to any of this stuff, including how exactly the shape and size and constitution of a counterpart of Goliath must conform to Goliath.  That would be the tail wagging the dog.  The dog in this case is cultural practices and institutions and legalities around sculpture identity, which of course may vary between cultures.  And note that what is then driving which other possible worlds are the relevant ones to consider here is: the particular conventional cultural practices, imagined or real, under which Goliath was produced, not the concrete status of the worlds.

 Now Lewis is a good enough philosopher to have anticipated many of these issues, so he adds an epicycle to his account of counterpart theory.  Instead of just the bare claim that counterparts are possible things in other worlds that bear an appropriate relation of resemblance to the object of which they are counterparts, he introduces a whole bunch of different counterpart relations, one for each different kind.  There is the counterpart relation for lumps of clay, and the counterpart relation for statues, etc..  The counterpart relation for statues will tell us that the statue-counterpart of a statue will have to be “statue-shaped”.  This is not required of lumps of clay by its counterpart relation. So statues are essentially statue-shaped, but lumps of clay are not.  Note that there is no general condition saying that statue-counterparts of a statue must have the very same shape.   (Note also that Goliath has the property that all of its counterparts are statue-shaped, while the property that Lumpl fails to have is not that property, but the property that all of its lump-of-clay-counterparts are statue-shaped; so Leibniz’s Law is not violated.)  And Lewis’ account still allows a lump of clay to be ‘inadvertently’ statue-shaped.  We might come across this lump of clay and marvel at it, turn it into “found art” by displaying it at a museum, etc..  Of course, at that point it is not a mere lump of clay any longer, it is a sculpture, by the finder’s exercise of his stipulative intentions. Very clever of Lewis.  

But again, note that one thing this suggests is that the modal realism of Lewis’s account is a distraction here, not a contributor.  Recall that one of his motives for proposing his modal realism was the prospect of a reductive account of modality in terms of what goes on in all of these concrete worlds.  There seems little indication that this reductive project provides any more insight into the nature of essentialist claims about artifacts and artifactual kinds than it did into the nature of essentialist claims about natural kinds and objects.  Rather, Lewis’ account of the essentialism of artifacts appears to be driven, as it should be, by the desire to accommodate the variability, the vagueness, even the indeterminacy of our socially formed identity conditions for being the same sculpture.  So why the concrete worlds, then?  Because Lewis wants a reductive, extensional account of modality, in terms just of set-theoretic properties of these concrete worlds and their denizens of concrete possibilia.  But why a reductive account?  Why not take modality as primitive, and thus additional?  Too spooky?  Spookier than all those concrete isolated possible worlds?

Second pass at Lumpl and Goliath.   We are now ready to factor in the relevance of different conceptions of how physical objects persist through time.  According to the perdurantist, physical objects are 4-dimensional worms with both temporal and spatial parts, suitably related.  An object is entirely constituted by its series of temporal parts.  An object persists through time by perduring in this sense, by having temporal parts that occupy each moment of time of its existence.  Of course there are different grains of division into temporal parts.  They might be one day in duration.  That would be a very coarse grain of division, one that potentially ignores many changes in the object.  Or the grain could be just a nanosecond.  Would a limiting case be the nondenumerable series of its instantaneous time slices?  Who is to say, and what would that mean?  Are we all constituted by our instantaneous time slices?  By contrast, according to the endurantist, ordinary objects have no temporal parts, and are “wholly present” at each moment of time at which they exist.  It is literally the same, wholly present object that existed yesterday, exists today, and will exist tomorrow (or that, in the specious present tense, exists at all three times).

Why is the distinction between endurantism and perdurantism relevant to Lumpl and Goliath?  Garrett notes that the perdurantist can in general accept both EP and LP.  For the perdurantist, if a lump and a sculpture come into being at different times (unlike Lumpl and Goliath), then of course they will not have the same temporal parts, and EP says that they will be different.  And their nonidentity when their spatio-temporal worms merge together will be supported by LP.  But for the endurantist, since Lumpl and Goliath are wholly present at each moment of their occurrence, they are not, when spatio-temporally coincident, distinguishable via EP.  So given EP they must be identical.  If they are deemed not identical, EP must be rejected by the endurantist.

Of course, Lumpl and Goliath have identical histories and physical constitution, and so would seem to encompass the same temporal and other parts.  The perdurantist cannot therefore invoke EP this time to distinguish them.  This makes very important the possibility of distinguishing them via their essential properties.  David Lewis’s account of this is motivated by his attraction to perdurantism and his desire to uphold EP.  A consequence of his account, not mentioned in the first pass above, is that he takes Lumpl and Goliath to be only contingently identical in the world where they exist.   The necessity of numerical identity is rejected.  There is another world in which counterparts of Lumpl and Goliath both come into being at the same time but where later Lumpl gets squeezed into a ball (thus destroying Goliath).  In such a world they are not identical, and so they are not necessarily identical.
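In counterpart-theoretic terms (again my schematic notation, not Lewis’s: $C_l$ and $C_s$ are the lump- and statue-counterpart relations, and $I(x,w)$ says that $x$ inhabits world $w$), Lewis’s verdict of contingent identity comes out roughly as:

```latex
\[
\mathrm{Lumpl} = \mathrm{Goliath}
\;\wedge\;
\exists w\,\exists x\,\exists y\,\bigl(
I(x,w) \wedge I(y,w) \wedge
C_l(x,\mathrm{Lumpl}) \wedge C_s(y,\mathrm{Goliath}) \wedge x \neq y
\bigr)
\]
```

The second conjunct is the counterpart-theoretic translation of $\Diamond(\mathrm{Lumpl} \neq \mathrm{Goliath})$: in the squeezing world, the lump-counterpart survives the squeezing that destroys the statue-counterpart, so the two counterparts are distinct there.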

The endurantist, on the other hand, appears to have no new moves, according to Garrett.  She thinks that Lumpl and Goliath are distinct, but since they have all the same spatial parts she must again reject EP, and so the endurantist seems to be in need of criteria of identity to distinguish them.  Where to from here?   The endurantist may counter that at least one physical object, namely herself, is not a spatio-temporal worm, because she has the modal property that she could have died at age 30, whereas a spatio-temporal worm couldn’t have a temporal extent different from its actual extent.  But Lewis can counter by agreeing that she might have died at age thirty, which is simply to say that at least one of her person-counterparts dies at age thirty.   He can also agree that none of the spatio-temporal-worm-counterparts of her spatio-temporal worm is only 30 years long.  True.  Yet it doesn’t follow from this, according to Lewis, that she is not identical with her spatio-temporal worm; at most it follows that the identity is contingent.

Re. EP.  It is worth noting that EP, or the “Extensionality Principle”, is not so intuitive a principle in abstraction from particular examples.  Surely it is not enough, in order to be identical, to have all the same parts:  one must also have all the same parts related to each other or arranged in the same way.  One can think of spatial rearrangements of parts of an object, for instance, which would constitute a change of identity of the object.  What then about rearrangements of temporal parts of the object?  Is this precluded by EP?  Apparently not.  Calling the principle “the Extensionality Principle” invokes a comparison between sets and their members and wholes and their parts.  Sets are constituted entirely by their members, in no particular order.  Yet surely wholes are not intuitively constituted by their parts, temporal or otherwise, in no particular order.
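The comparison with set theory can be made explicit.  In my own formulation (reconstructed from Garrett’s usage, not quoted from him), writing $z \prec x$ for “$z$ is a proper part of $x$”, the two principles run:

```latex
\begin{align*}
\text{Set extensionality:}\quad & x = y \;\leftrightarrow\; \forall z\,(z \in x \leftrightarrow z \in y)\\[2pt]
\text{EP:}\quad & x = y \;\leftrightarrow\; \forall z\,(z \prec x \leftrightarrow z \prec y)
\end{align*}
```

The set principle is innocent precisely because membership carries no arrangement; the worry above is that the mereological analogue on the second line is not similarly innocent, since sameness of parts leaves their arrangement open.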

This is a good spot to probe Lewis’s modal realism further.  Recall from our discussion of properties that, intuitively, two quite distinct properties, like sphericity and redness, might have the same extension – i.e., exactly the same set of things might be both red and spherical – while being distinct properties.  This makes it look hard to give an extensional account of property identity; that is, an account in terms of what sets of things have the property.  But Lewis’ modal realism actually provides a way.  Let the set of red things be the set of all and only the possibilia in all of the possible worlds, including ours of course, that are red; and let the set of spherical things be all and only the possibilia in all of the possible worlds that are spherical.  These will be different sets!  In other possible worlds there will be spheres that are not red, and red things that are not spheres.  So this is an extensionalist account of properties.  Of course, the ontological cost is rather large:  one must posit all of Lewis’ concrete possible worlds, and all of the concrete possibilia that are denizens of those worlds.  Is it worth it, just to have an extensionalist account of properties?
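Schematically, and in my notation rather than Lewis’s own, the proposal identifies a property with its trans-world extension, with $W$ the totality of concrete worlds:

```latex
\[
\mathrm{ext}(P) \;=\; \{\, x : x \text{ is a possibile in some world } w \in W \text{ and } x \text{ has } P \,\},
\qquad
P = Q \;\leftrightarrow\; \mathrm{ext}(P) = \mathrm{ext}(Q).
\]
```

Restricted to the actual world, $\mathrm{ext}(\text{red})$ and $\mathrm{ext}(\text{spherical})$ might coincide; ranging over all of the concrete worlds, they will not, and that is what rescues the identification of properties with sets.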

Note also that Lewis’ properties span different worlds:  part of (or a subset of) the property red is in this world, other parts (or subsets) in other worlds.  By contrast Lewis’ view of objects precludes them and their parts from being in more than one world.  But of course he has to say something about the persistence of objects through time.  And he does not want to be an endurantist about objects, any more than he wants to posit concrete universals.  And so he becomes a perdurantist, and therefore sees objects as constituted by their temporal segments.  So objects are not ontologically basic, for him, but their temporal slices are.

But then what about the problem with EP noted above?  How do the temporal slices of an object turn out to have the proper order?  Lewis must think that the nature of time takes care of this.  Time just sees to it!  But still, at the very least this would require an amendment to EP: if x and y are objects with the same proper parts in the same order, then x = y.  How, though, does time provide the order for temporal parts?  I submit that EP thus revised is no longer an extensional principle.  Time is of course the subject of two future chapters, and so we will be returning to this issue then.

Puzzle (iii): The Ship of Theseus.  The puzzle as presented involves 3 scenarios.

Scenario (a):  The Ship of Theseus (‘ST’) leaves port at T1.  Having found that it cannot navigate a narrow channel, it is disassembled and its parts transported over land to the other side, where it is reassembled and continues on its way.  Garrett suggests that our pre-theoretic intuitions allow an object to pass out of existence (because disassembled) and then come back into existence (because reassembled).  He says that this is unusual (really, in these days of Ikea furniture?) but not paradoxical.  So common sense’s verdict is that the ‘Disassembled and Reassembled’ ship (or D&R) that appears at T2 is the Ship of Theseus; i.e., ST = D&R.

Scenario (b):  ST leaves port at T1 as before.  During the course of a long voyage, with stops in many ports, repairs take place over time, resulting in a piecemeal (e.g., one plank at a time) replacement of all of its (spatial) parts by T2.  Call the resulting ship the “Piecemeal Replacement of Parts” ship, or ‘PRP’.

Scenario (c):  This is the same as (b) except that as each part is replaced, it is kept.  Eventually there is a complete set of parts, which is then reassembled as in (a).  So now there are two ships: PRP(c) and D&R(c).  And the question is now: which one is ST?  It cannot be both, since clearly PRP(c) is not identical with D&R(c).

What we seem to need is an authoritative criterion of identity over time for physical objects.  But what would it be?  Scenario (a) might make us think that X at T1 = Y at T2 iff Y at T2 has the numerically same spatial parts in the same configuration as those of X at T1.  By that criterion we then have that ST = D&R(c).  On the other hand, Scenario (b) might make us think that X at T1 = Y at T2 iff a temporally continuous series of temporal ‘slices’ connects X at T1 with Y at T2.  Then ST = PRP(c).  How do we choose between these competing criteria? 

Garrett at one point suggests that maybe there is no absolute answer, that which criterion is chosen can be relative to our interests.  Motivation for using the scenario (a) criterion might be a purely antiquarian interest in having the ‘original’ ST.  Motivation for using the scenario (b) criterion might be our interest in identifying a continuously functioning ship between the times T1 and T2.    Why not leave it at that, then? 

One reason is that we might just balk at there being no fact of the matter about which of PRP(c) and D&R(c) is ST.  Another, stressed by Garrett, is that it seems to lead to contingent identities.  If we say that ST = PRP(c), then it seems that we can say that if the gradually removed planks had not been replaced, then ST would have been identical to D&R(c) after all.  That is, in some other world ST = D&R(c); i.e., the original identity is contingent.  By the same token, if we say that ST = D&R(c), then if the replaced parts had not been hoarded and reassembled, in that world ST = PRP(c), so again the original identity seems contingent.

Garrett’s perdurantist, being David Lewis, has no problems with biting the bullet of contingent identity.  And he regards both ships, PRP(c) and D&R(c), as having existed from T1, and thus as having overlapping though non-identical histories (i.e., sharing some but not all temporal parts).   Garrett identifies two problems with this stance: one can no longer think of (a)-(c) as possible futures for one and the same ship at T1; and the name “Ship of Theseus”, when introduced, turns out to have no clear reference.

Garrett’s endurantist insists that identity is necessary, and so must reject both identity claims:  ST = PRP(c) and ST = D&R(c).  Garrett then further supposes that the endurantist would still want to hold that wrt scenarios (a) and (b), ST = PRP(b) and ST = D&R(a), about which he then raises still further, convoluted problems of intuition.  But why suppose this?  Why not instead suppose that the endurantist rejects the very idea that numerical identity is relative to different interests, and therefore rejects these identities as well?

Kripke on Identity and Time

It has been a feature of Garrett’s presentation of the debate that the perdurantist has the resources to deal with problems as they arise, if not always completely convincingly, whereas the endurantist is stuck in need of a workable criterion of identity for objects.  The perdurantist is portrayed as having a criterion of identity for physical objects via EP.  This seeming imbalance in portrayal is something worth redressing.  I am now going to sketch Saul Kripke’s response on behalf of the endurantist.  This is the response he developed over three lectures on Identity and Time given here at SFU in October of 1980.  As far as I have been able to determine, these lectures have not been published; but they are well known, he gave them in many places, and they have a large underground reputation; there are taped versions of them, etc.  I am reducing three lectures to the barest framework here.  One major theme of the lectures is the rejection of the demand for metaphysical criteria of identity for physical objects.  He argues in particular against any spatio-temporal continuity criterion of object identity.  Another theme is the articulation of a kind of generalization of perdurantism to the universe as a whole: instead of talking about time slices of a particular object, he talks about time-slices of the universe as a whole, which he dubs ‘holograms’.  He then argues against the ‘holographic hypothesis’ that the universe is given by its series of holograms.  He also critiques the role of puzzle cases like the Ship of Theseus and Lumpl and Goliath in shaping our conception of physical objects.

(Beginning of Kripke lectures outline)

Presuppositions:

  1.  Talk about enduring medium-sized physical objects is meaningful.
  2. Medium-sized physical objects exist and have their natures independently of our perceptions of them.
  3. Must distinguish between the metaphysical issue of what identity through time consists in, for such objects, and what our evidence for it might be.

Question:

Is it the case that in order meaningfully to talk about things of a certain sort – e.g., physical objects – there must be metaphysical ‘criteria of identity’ for things of that sort, criteria that would tell us what the identity at a time and the identity over time of a thing of that sort consists in?

Conclusion to be Argued:

Such criteria cannot be a general requirement, and in particular cannot be a requirement for medium-sized physical objects, because there are no such criteria, yet our talk of such things is meaningful.

Question:

What general form would such a criterion of identity take?

Davidson’s Metacriterion for Criteria of Identity: such criteria must be of the form

                For all x and y, if x and y are of kind K, then x=y iff____________________________   e.g.,

I.                   For all x and y, if x and y are events, then x=y iff for all events z, z causes x iff z causes y; and x causes z iff y causes z.

II.                 For all x and y, if x and y are material objects, then x=y iff for all places p and times, t, x occupies p at t iff y occupies p at t.

But besides being circular (it quantifies over events on the right-hand side), criterion I rules out a priori the possibility of a universe containing two isolated events.  Criterion II, on the other hand, seems completely trivial, and doesn’t, for instance, give us any help with the Ship of Theseus.

Perry’s Metacriterion for Criteria of Identity: such criteria must be of the form

A = B iff A R B, where the types of entities and the relation on the RHS of the biconditional are different than those on the LHS.  e.g.,

III.  A at time t1 is the same river as B at time t2 iff the river stage A observed at t1 bears the appropriate (e.g., spatio-temporal-causal continuity) relation to the river stage B observed at t2.

IV.  X at t1 is the same physical object as Y at t2 iff physical object temporal slice x observed at t1 bears the appropriate spatio-temporal-causal-continuity relation to physical object temporal slice y, observed at t2.

But then what are the criteria of identity of the more basic entities invoked on the RHS, i.e., the temporal slices?  And if these entities can be taken as basic, why shouldn’t the entities on the LHS be taken as basic?

Thesis: Some enduring physical objects must be taken as basic.

Argument (roughly):

Suppose not.  Suppose we were given all the qualitative information about the world as it exists at each instant of time, but no independent information about identities over time.  The requirement that there be criteria of identity then amounts to the hypothesis that this is all the information about the world, that any information about identities must somehow be already included or extractable.  But this would imply that certain conceivable histories of the world are ruled out a priori.  But these histories cannot be so ruled out.  Therefore this hypothesis is false, and therefore there are no metaphysical criteria of identity for physical objects.
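The shape of the argument is a simple modus tollens.  This schematization is mine, with $H$ for the hypothesis that all information about the world, identities included, is contained in or extractable from the instant-by-instant qualitative information:

```latex
\begin{align*}
& H \rightarrow \text{certain conceivable histories of the world are ruled out a priori}\\
& \neg\,(\text{those histories are ruled out a priori})\\
& \therefore\;\; \neg H
\end{align*}
```

The burden of the detailed version below is to exhibit the conceivable histories that instantiate the first premise.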

Argument (more detailed):

Preliminaries:  By ‘hologram’ let us mean a ‘purely qualitative’ description or representation of the entire world at a given instant.  “Purely qualitative” implies no labels or names, or trick predicates that function as labels or names.  The restriction to an instant is also meant to preclude any properties being mentioned which depend on earlier or later states of the world (e.g., velocity).

Holographic Hypothesis: The world is entirely given to us by its series of holograms.  The real history of the world is just its history of holographic states.  There are no facts over and above these.

                Strict Version:  no labels whatsoever.

                Attenuated version: only labeling of places allowed.

A criterion of identity conforms to the Holographic Hypothesis iff it can be given in terms just of the information contained in the series of holograms.  The following criterion can be seen as a precisification of EP: it focuses just on temporal parts and includes their temporal ordering.

The S-T Continuity Criterion: For persisting objects X and Y, and their temporal slices the x’s and the y’s, X = Y iff X and Y consist of the same spatio-temporally continuous series of temporal slices; i.e., iff the x’s = the y’s, and both are in the same order and spatio-temporally continuous. 

Now, does the S-T Continuity Criterion conform to the Strict Holographic Hypothesis (‘SHH’)?  No, for without identification of places, there is no way to determine which paths through space are the continuous ones.  SHH would also rule out as a priori meaningless the following, which certainly seems to be an intelligible possible history of the world that philosophy ought not to be able to rule out all by itself.  For some period of its history, the cosmos might have consisted of two stationary, qualitatively exactly similar metallic spheres.  But without the labeling of places in each instantaneous hologram, there is no way to distinguish the case where the spheres really are stationary from the case where they are only stationary relative to each other but both moving in parallel through space, or from the case where at some point they instantaneously change places, ‘jumping’ into each other’s positions, etc.

Then does the S-T Continuity Criterion at least conform to the Attenuated Holographic Hypothesis (‘AHH’)?  Probably (pending further clarification of the former).  But AHH implies that there is no fact of the matter as to whether or not a disk made of uniform continuous matter is rotating or not.  (i.e., labels of places won’t help with this). Even if the matter isn’t uniform, AHH implies that there could be no fact of the matter as to whether the disk is remaining still and its features rotating, or instead the disk and features rotating together; and no fact of the matter as to whether the disk and features are standing still or the disk is rotating with its features also rotating in the opposite direction at a compensating rate.  Consider also a ‘river’ of uniform continuous fluid, extending infinitely in both directions.  The AHH cannot distinguish even in principle between the fluid actually flowing in a particular direction, or standing still.  Even with all the information about which qualities occur at which places at which instants, the respective ‘movie’ – i.e., the series of holograms -- of the moving river and the still river will be indistinguishable.  If the matter isn’t continuous, but made of molecules, then the problem of spinning arises with respect to each molecule.  If the molecules are point masses, then AHH cannot distinguish between point masses A and B colliding and A veering up and B down, versus A veering down and B up.

Given all these and other inadequacies, it seems better to simply reject AHH and the S-T Continuity Criterion along with it; better to take at least some enduring objects, rather than their temporal slices, as basic.  Their identity over time is not to be explained in terms of anything else. We do not have to have solved puzzle cases like the Ship of Theseus in order to find unproblematic the endurance through time of the vast preponderance of medium-sized physical objects.

(End of Kripke Lectures outline)

So then, if the endurantist response to the Ship of Theseus is simply to reject the call for a general metaphysical criterion of physical object identity over time, where does this leave us with our puzzle cases?  One idea might be that our basic notion of a physical object has precisely the functional role of reidentifying objects over time, but that it has evolved with respect to a certain core of reidentification scenarios, those most salient to survival and flourishing.  And that biological role has not required it to have the wherewithal to help us resolve the perfectly coherent but unlikely and contrived scenarios philosophers think up.  So there needn’t be a fact of the matter of identity over time, or not, for every imaginable scenario.  Our notion of medium-sized physical object can have a degree of vagueness or indeterminacy at its edges.  If so, then exactly the wrong way to go about developing an account of our notion of identity over time would be to make addressing the exotic puzzle cases the criterion of success.