CHAPTER THIRTEEN, SECS.
1-3, JUST THE PARTS ON IDENTITY, ITS PROPERTIES, AND ITS USES IN THE
REPRESENTATION OF THE LOGICAL FORMS OF ENGLISH SENTENCES
THE KIND OF IDENTITY AT ISSUE HERE IS
NUMERICAL IDENTITY, NOT QUALITATIVE IDENTITY. IDENTICAL TWINS MIGHT BE QUALITATIVELY IDENTICAL
-- MIGHT HAVE THE SAME NON-RELATIONAL FEATURES -- BUT THEY ARE NOT NUMERICALLY
IDENTICAL -- THERE ARE TWO OF THEM, AFTER ALL.
IF X AND Y ARE NUMERICALLY IDENTICAL, THEN WE ARE TALKING ABOUT JUST ONE
INDIVIDUAL; FOR EXAMPLE, SAMUEL CLEMENS IS NUMERICALLY IDENTICAL TO MARK TWAIN;
THE MORNING STAR IS NUMERICALLY IDENTICAL TO THE EVENING STAR, THE NUMBER THREE
IS NUMERICALLY IDENTICAL TO THE SUCCESSOR OF THE NUMBER TWO, AND SO ON. HENCEFORTH ‘IDENTITY’ WILL ALWAYS MEAN NUMERICAL IDENTITY UNLESS
OTHERWISE INDICATED.
IDENTITY IS A TWO-PLACE RELATION
WHICH EACH THING BEARS TO ITSELF AND TO NOTHING ELSE!
WE COULD REPRESENT IT IN OUR PREDICATE LOGIC USING A RELATIONAL
PREDICATE CONSTANT THUS: ‘Ixy’ for ‘x is identical with y’. BUT BECAUSE IDENTITY HAS DISTINCTIVE LOGICAL
PROPERTIES, WE USE A SPECIAL NOTATION FOR IT: ‘x=y’ FOR ‘x is identical with
y’.
SO WHAT ARE THE SPECIAL PROPERTIES OF
IDENTITY? IDENTITY IS A BINARY RELATION
THAT IS (1) REFLEXIVE, (2) TRANSITIVE, (3) SYMMETRIC, AND THAT (4) SATISFIES
LEIBNIZ’S LAW. HERE ARE PREDICATE LOGIC FORMULATIONS
OF THESE PROPERTIES, WHICH ARE CONSIDERED LOGICALLY VALID TRUTHS OF PREDICATE
LOGIC WITH IDENTITY.
REFLEXIVITY: (x)x=x
TRANSITIVITY: (x)(y)(z)[(x=y · y=z) ᴝ x=z]
SYMMETRY: (x)(y)(x=y ᴝ y=x)
LEIBNIZ’S LAW: (x)(y) [x=y ᴝ (Fx ≡ Fy)]
I HOPE THAT IT IS OBVIOUS THAT
IDENTITY IS REFLEXIVE, I.E., THAT EVERYTHING IS IDENTICAL WITH ITSELF (AND
DISTINCT FROM EVERYTHING ELSE!). THE
OTHER LOGICAL PROPERTIES OF IDENTITY ARE ALSO OBVIOUS, AS LONG AS YOU REMEMBER
THAT IT IS NUMERICAL IDENTITY THAT WE ARE TALKING ABOUT, I.E., THAT IF X=Y THEN
THERE IS JUST ONE INDIVIDUAL WE ARE TALKING ABOUT. SO, FOR INSTANCE, IF WE ARE JUST TALKING ABOUT
ONE THING WHEN WE SAY THAT X=Y, LEIBNIZ’S LAW MUST HOLD, BECAUSE A THING JUST
HAS THE PROPERTIES THAT IT HAS!
[NOTE THAT THE CONVERSE OF LEIBNIZ’S
LAW IS NOT CONSIDERED A LAW OF LOGIC.
THERE IS NOTHING LOGICALLY INCOHERENT ABOUT SUPPOSING THAT THERE ARE TWO
INDIVIDUALS HAVING EXACTLY THE SAME PROPERTIES. ONE CAN CONSTRUCT A MODEL IN
WHICH THAT HOLDS, FOR INSTANCE. THOUGH THIS
WOULDN’T HOLD OF IDENTICAL TWINS, SINCE EVEN IF THEY HAVE THE SAME
NON-RELATIONAL PROPERTIES, THEY WON’T HAVE THE SAME RELATIONAL PROPERTIES;
E.G., THEY WILL OCCUPY DIFFERENT REGIONS OF SPACE, AND ONE WILL HAVE BEEN BORN
BEFORE THE OTHER.]
LEIBNIZ’S LAW IS WHAT MAKES IDENTITY
SPECIAL. THERE ARE LOTS OF OTHER
RELATIONS THAT ARE ALSO REFLEXIVE, TRANSITIVE AND SYMMETRIC. ONE THAT WE HAVE ENCOUNTERED IN THIS CLASS
IS THE RELATION OF LOGICAL EQUIVALENCE BETWEEN SENTENCES. NUMERICALLY DISTINCT SENTENCES MAY BE
LOGICALLY EQUIVALENT -- TAKE ANY TWO SENTENCES THAT ARE EQUIVALENT ACCORDING TO
ONE OF THE EQUIVALENCE RULES, SUCH AS THE RULE OF IMPLICATION. YET THE RELATION OF LOGICAL EQUIVALENCE IS
REFLEXIVE, TRANSITIVE, AND SYMMETRIC.
IDENTITY IS INVALUABLE IN THE
REPRESENTATION OF THE LOGICAL FORM OF A VARIETY OF IMPORTANT TYPES OF ENGLISH
SENTENCE.
HERE IS ONE THAT WE DEFERRED TALKING
ABOUT UNTIL NOW. HOW DO WE REPRESENT
THE CLAIM THAT SOMEONE LOVES SOMEONE ELSE (ASSUMING A DOMAIN OF
PERSONS)? HERE IS HOW:
(Ǝx)(Ǝy)(~x=y · Lxy)
WE KNOW
HOW TO REPRESENT THE CLAIM THAT THERE IS AT LEAST ONE THING HAVING THE PROPERTY
F, BUT HOW DO WE REPRESENT THE CLAIM THAT THERE ARE AT LEAST 2 THINGS HAVING
F? THUS:
(Ǝx)(Ǝy)[(Fx · Fy) · ~x=y]
HOW
ABOUT THE CLAIM THAT THERE ARE AT LEAST 3 SUCH THINGS?
(Ǝx)(Ǝy)(Ǝz){[(Fx · Fy) · Fz] · [(~x=y · ~x=z) · ~y=z]}
I HOPE
THAT YOU CAN SEE HOW TO EXTEND THIS TREATMENT TO LARGER NUMBERS.
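When the domain is finite, a claim of this pattern can be checked mechanically. Here is a sketch in Python (the function name at_least_n and the representation of a predicate's extension as a Python set are illustrative choices, not notation from this chapter); using permutations to supply only tuples of pairwise-distinct individuals plays the role of the ~x=y clauses:

```python
from itertools import permutations

def at_least_n(domain, ext, n):
    """Mirror (Ex1)...(Exn){Fx1 & ... & Fxn & all pairwise distinct}:
    permutations() yields only pairwise-distinct tuples, which does the
    work of the ~x=y conjuncts."""
    return any(all(x in ext for x in combo) for combo in permutations(domain, n))

domain = {"a", "b", "c"}
F = {"a", "b"}                     # extension of the predicate F
print(at_least_n(domain, F, 1))    # True
print(at_least_n(domain, F, 2))    # True
print(at_least_n(domain, F, 3))    # False: only two things have F
```

Extending the treatment to larger numbers needs no new code; only the argument n changes.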
WE CAN
NOW ALSO REPRESENT THE CLAIM THAT THERE IS AT MOST ONE THING HAVING THE
PROPERTY F:
(x)[Fx ᴝ (y)(Fy ᴝ y=x)]
OR AT
MOST TWO THINGS:
(x)(y){(Fx · Fy) ᴝ (z)[Fz ᴝ (z=x v z=y)]}
THIS
TREATMENT ALSO EXTENDS TO ANY NUMBER OF THINGS.
TO SAY
THAT THERE IS EXACTLY ONE THING HAVING THE PROPERTY F IS TO SAY THAT
THERE IS AT LEAST ONE THING HAVING F AND AT MOST ONE THING HAVING F, I.E.:
(Ǝx)[Fx · (y)(Fy ᴝ y=x)]
WE CAN
ALSO REPRESENT THE CLAIM THAT THERE ARE EXACTLY TWO THINGS HAVING THE
PROPERTY F:
(Ǝx)(Ǝy){[(Fx · Fy) · ~x=y] · (z)[Fz ᴝ (z=x v z=y)]}
AND SO
ON FOR LARGER NUMBERS OF THINGS.
DEFINITE
DESCRIPTIONS IN
ENGLISH ARE DESCRIPTIVE SINGULAR TERMS THAT IMPLY UNIQUENESS, LIKE “THE FIRST
PRIME MINISTER OF CANADA”, “THE AUTHOR OF WAVERLY”
OR “JOHN’S MOTHER”. SUCH UNIQUENESS IS SIGNALED IN ENGLISH TYPICALLY EITHER BY
THE USE OF THE DEFINITE ARTICLE “THE” OR BY THE USE OF THE POSSESSIVE, AS IN
THE LAST OF THE ABOVE EXAMPLES. SUPPOSE
THAT WE ARE GIVEN THE ENGLISH SENTENCE “SCOTT IS THE AUTHOR OF WAVERLY”. THIS CAN BE PARAPHRASED IN ENGLISH AS: “THERE
IS EXACTLY ONE AUTHOR OF WAVERLY,
NAMELY SCOTT”. SOME BOOKS ARE CO-AUTHORED, OF COURSE, LIKE YOUR LOGIC
TEXT. BUT NOT THE NOVEL WAVERLY.
HERE IS THIS SENTENCE’S RENDERING IN PREDICATE LOGIC.
Letting
‘s’ be a constant naming Scott, ‘w’ be a constant naming the novel Waverly, and ‘Axy’ be the relation of x
being an author of y, we have:
(Ǝx)(Ǝy){[(x=s · y=w) · Asw] · (z)(Azw ᴝ z=s)}
THIS
TREATMENT OF DEFINITE DESCRIPTIONS WAS CONCEIVED BY BERTRAND RUSSELL, WHO
INTRODUCED IT TO ADDRESS A PHILOSOPHICAL CONUNDRUM ABOUT NON-REFERRING SINGULAR
TERMS. SUPPOSE WE TAKE A DEFINITE DESCRIPTION
THAT DOES NOT PICK OUT ANYTHING, SUCH AS “THE PRESENT KING OF FRANCE”. A NON-REFERRING DEFINITE DESCRIPTION IS A
POSSIBILITY, OF COURSE. (UNLIKE
INDIVIDUAL CONSTANTS, WHICH MUST BE ASSIGNED A DETERMINATE VALUE IN THE DOMAIN
OF DISCOURSE, WHETHER OR NOT A DEFINITE DESCRIPTION PICKS OUT SOMETHING IN THE
DOMAIN OF DISCOURSE DEPENDS ON WHETHER OR NOT ANYTHING IN THE DOMAIN HAS THE
PROPERTIES MENTIONED IN THE DESCRIPTION.
PREDICATES DO NOT HAVE TO PICK OUT INDIVIDUALS – THEY CAN BE ASSIGNED
THE NULL SET AS THEIR EXTENSION.) AND THEN SUPPOSE WE ATTRIBUTE A PROPERTY TO
THE PRESENT KING OF FRANCE, AS IN THE STATEMENT “THE PRESENT KING OF FRANCE IS
BALD”. SOME PHILOSOPHERS ARGUED THAT
SINCE THERE IS NO PRESENT KING OF FRANCE, SUCH A CLAIM IS NEITHER TRUE NOR
FALSE, AND GENERALLY THAT A PROPERTY ATTRIBUTION CAN ONLY BE EITHER TRUE OR
FALSE IF THE SINGULAR TERM USED TO PICK OUT THE INDIVIDUAL TO WHICH THE
PROPERTY IS ATTRIBUTED ACTUALLY PICKS OUT AN INDIVIDUAL. IF NOT, THEN THE
ATTRIBUTION IS NEITHER TRUE NOR FALSE.
BUT THEN THE LAW OF THE EXCLUDED MIDDLE, P v ~P, WOULD NOT BE A LAW OF
LOGIC, SINCE IT WOULD HAVE COUNTEREXAMPLES. RUSSELL RESCUED THE LAW OF THE
EXCLUDED MIDDLE BY BUILDING AN EXISTENTIAL CLAIM INTO DEFINITE DESCRIPTIONS.
“THE PRESENT KING OF FRANCE IS BALD” BECOMES A COMPLEX EXISTENTIAL
GENERALIZATION
(Ǝx){[Kx · (y)(Ky ᴝ y=x)] · Bx},
WHERE
‘Kx’ IS ‘x is a present king of France’. THIS SENTENCE IS CLEARLY FALSE, NOT
TRUTH-VALUELESS, SINCE IT CLAIMS AMONG OTHER THINGS THAT THERE IS A PRESENT
KING OF FRANCE, WHEN THERE ISN’T ONE.
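Russell's analysis can be tested directly on a finite interpretation: evaluate the existential-plus-uniqueness sentence where ‘K’ has the null set as its extension. A sketch in Python (the function name and the domain members are invented placeholders, purely for illustration):

```python
def the_K_is_B(domain, K, B):
    """(Ex){[Kx & (y)(Ky > y=x)] & Bx}, evaluated by brute force."""
    return any(
        x in K                                          # Kx
        and all(y not in K or y == x for y in domain)   # uniqueness: (y)(Ky > y=x)
        and x in B                                      # Bx
        for x in domain
    )

domain = {"macron", "louis_xiv"}
K = set()               # no one is presently king of France: null extension
B = {"louis_xiv"}
print(the_K_is_B(domain, K, B))   # False: the sentence is false, not truth-valueless
```

The very first conjunct, Kx, fails for every member of the domain, so the whole sentence comes out false, exactly as Russell's rescue of the excluded middle requires.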
CHAPTER EIGHT:
PREDICATE LOGIC SEMANTICS
The semantics of predicate logic
builds on the semantics of truth-functional sentential logic. In truth-functional sentential logic an interpretation,
or ‘valuation’, consisted of an assignment of a truth-value – true or false – to
each atomic sentence, which then determined an assignment of a truth value to
each compound sentence, in accordance with the truth table interpretations
of the connectives occurring in them. The notion of interpretation in predicate
logic is more complex. An interpretation first specifies a particular domain of
discourse, which must be a (non-empty) set of individuals. Then the interpretation must assign to each
individual constant a unique fixed value in that domain. Each individual constant must have such a
value (which is why the domain must be non-empty), but different constants can
have the same fixed value (just as in a natural language the same thing can
have more than one name). Each
non-relational predicate constant will be assigned as its value a subset of
individuals in the domain. If no
individuals in the domain have the property that the predicate expresses, then
it will be assigned ‘the null set’ – the subset having no members – as its
value. By mathematical convention, the null set is a subset of every set of
individuals; so it is always a subset of any domain of discourse.
A brief digression is needed
here. It is important to recognize the
limitations of how non-relational properties are being modeled in our predicate
logic. A non-relational property is
being modeled simply as the set of things
that have the property. There clearly has to be more to being a property
than that, since we can think of examples of where distinct properties are had
by the same set of individuals. Take the
domain of plane figures. The property of
having just three sides is distinct from the property of having just three
angles, but both properties are had by the same subset of plane figures. Or take the domain of (healthy, normal)
animals. The property of having a heart
is distinct from the property of having kidneys, but both properties are had by
the same subset of members of the specified domain. Or we can coherently imagine a world in which
all and only the red things are spherical. That would not make the properties
of being red and being spherical the same property in such a world. So
there is more to being a property than just being the set of things that has it. Still, there are many logical truths and
validities about individuals and their properties that can be effectively
modeled in this way.
What about relational
predicates? Relative to a given domain,
D -- say, the domain of persons -- an interpretation will assign to a two-place
predicate -- say ‘Lxy’ for x loves y -- the set of all and only ordered pairs
of individuals in D such that the first member of the pair loves the second.
(You see, now you know what it is to love someone: it is to be the first member
of an ordered pair belonging to the set of ordered pairs assigned to the
predicate Lxy!) In general, for any n-place
relational predicate for finite number n, an interpretation will assign to it
the set of all ordered n-tuples of the given domain bearing that relation to
each other. If there are no such ordered
n-tuples in D, then it will be assigned the null set.
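Concretely, these assignments can be pictured as plain sets and sets of ordered tuples. A brief Python sketch (the particular domain and predicates are invented examples, not anything from the text):

```python
# A toy interpretation: a domain, a one-place extension, a two-place extension.
domain = {1, 2, 3}
even = {2}                          # one-place predicate: a subset of the domain
less_than = {(x, y) for x in domain for y in domain if x < y}   # ordered pairs

print(2 in even)            # True
print((1, 3) in less_than)  # True
print((3, 1) in less_than)  # False
print(set() <= domain)      # True: the null set is a subset of every domain
```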
Finally, what about truth? Truth is
predicated of sentences, so an interpretation would have to proceed by cases,
relative to the structure of the sentences.
First of all, an interpretation will
assign a truth value to each sentence constant, just as it did in our
sentential calculus.
Second, an interpretation will make an
atomic sentence like Lbc true just in case, for its domain D, the ordered pair
of individuals <b,c> is a member of the set of ordered pairs of
individuals in D assigned by the interpretation to the predicate L. Otherwise, the interpretation will assign the
value false to Lbc.
Third, a universal generalization in
variable x will be assigned the value true provided every member of D satisfies
the condition expressed by the sentence form
in variable x that constitutes the scope of the universal quantifier.
Otherwise it will be assigned the value false.
And an existential generalization in variable x will be assigned the
value true just in case at least one member of D satisfies the condition
expressed by the sentence form in variable x that constitutes the scope of the
existential quantifier, and will be assigned false otherwise.
Fourth, negations, conjunctions, and
the other truth-functional operators will obey their usual truth-table
interpretations.
[Note: this is not a completely
rigorous definition of a predicate logic interpretation. But it is more detailed than the one in the
text, and it is good enough for our purposes.]
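The case-by-case definition can be mirrored in a short program. Here is a sketch in Python (the tuple encoding of formulas, the dictionary layout of an interpretation, and all names are illustrative choices, not official notation; like the definition above, it is not fully rigorous):

```python
# Formulas as nested tuples:
#   ("atom", "L", "b", "c")            predicate applied to constants/variables
#   ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g)
#   ("all", "x", f), ("some", "x", f)

def holds(formula, interp, env=None):
    """Evaluate a formula on an interpretation, by cases on its structure."""
    env = env or {}          # current variable assignments
    op = formula[0]
    if op == "atom":
        pred, *terms = formula[1:]
        values = tuple(env.get(t, interp["constants"].get(t, t)) for t in terms)
        ext = interp["predicates"][pred]
        return (values[0] in ext) if len(values) == 1 else (values in ext)
    if op == "not":
        return not holds(formula[1], interp, env)
    if op == "and":
        return holds(formula[1], interp, env) and holds(formula[2], interp, env)
    if op == "or":
        return holds(formula[1], interp, env) or holds(formula[2], interp, env)
    if op == "imp":
        return (not holds(formula[1], interp, env)) or holds(formula[2], interp, env)
    if op == "all":     # every member of the domain must satisfy the scope
        return all(holds(formula[2], interp, {**env, formula[1]: d})
                   for d in interp["domain"])
    if op == "some":    # at least one member must satisfy the scope
        return any(holds(formula[2], interp, {**env, formula[1]: d})
                   for d in interp["domain"])
    raise ValueError(op)

interp = {
    "domain": {1, 2},
    "constants": {"b": 1, "c": 2},
    "predicates": {"L": {(1, 2), (2, 1)}},   # each loves the other, neither itself
}
print(holds(("atom", "L", "b", "c"), interp))                               # True
print(holds(("all", "x", ("some", "y", ("atom", "L", "x", "y"))), interp))  # True
print(holds(("some", "y", ("all", "x", ("atom", "L", "x", "y"))), interp))  # False
```

Note that the last two lines already display the non-equivalence of (x)(Ǝy)Lxy and (Ǝy)(x)Lxy on a two-member domain.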
Here, then, is a very important
definition that appeals to this notion of interpretation.
VALIDITY IN PREDICATE LOGIC: An
argument is valid in predicate logic if and only if there is no predicate logic
interpretation of its premises and conclusion on which all of its premises are
true and its conclusion false.
Note all of the different ways in
which a predicate logic interpretation may vary, on which the validity of an
argument depends. First, it can vary in
the assignment of truth values to sentence constants. Second, it can vary in its domain of discourse. Third, relative to a given domain of
discourse, it can vary in its assignment of values in that domain to the
individual and predicate constants. Here
are two more important definitions.
LOGICAL EQUIVALENCE IN PREDICATE
LOGIC: Two predicate logic sentences are
logically equivalent if and only if there is no predicate logic interpretation
on which they differ in truth value.
That is, no matter what the domain of
discourse, and no matter what the assignment of values to sentence, predicate,
and individual constants, logically equivalent sentences must take the same truth
value.
CONSISTENCY IN PREDICATE LOGIC: A
sentence or group of sentences in predicate logic is consistent if and only if
there is at least one predicate logic interpretation on which the sentence or
sentences are all true.
MAJOR FACT ABOUT PREDICATE
LOGIC: THERE IS NO ALGORITHM FOR
DETERMINING WHETHER OR NOT SUCH VALIDITY, EQUIVALENCE, OR CONSISTENCY
OBTAINS. THERE IS THUS NO EQUIVALENT TO THE
TRUTH-TABLE TECHNIQUES OF TRUTH-FUNCTIONAL SENTENTIAL LOGIC.
WHAT WE ARE LEFT WITH ARE SOME HANDY,
PRETTY RELIABLE, METHODS FOR DETERMINING THAT AN ARGUMENT IS INVALID, OR
THAT A PAIR OF SENTENCES ARE NON-EQUIVALENT, OR THAT A SENTENCE OR SET
OF SENTENCES IS CONSISTENT. THESE METHODS ALL INVOLVE IDENTIFYING OR
CONSTRUCTING PARTICULAR INTERPRETATIONS.
FOR INSTANCE, TO SHOW THAT AN ARGUMENT IS INVALID IT IS SUFFICIENT TO
FIND ONE INTERPRETATION MAKING ITS PREMISES TRUE BUT ITS CONCLUSION FALSE. OR TO SHOW THAT A SET OF SENTENCES IS CONSISTENT,
IT IS ENOUGH TO SHOW THAT THERE IS AN INTERPRETATION MAKING EVERY MEMBER OF THE
SET TRUE AT ONCE.
BUT THESE METHODS WILL NOT WORK FOR
DETERMINING VALIDITY, EQUIVALENCE OR INCONSISTENCY. YOU MIGHT RECALL FROM THE VERY BEGINNING OF
OUR COURSE THAT THE VALIDITY OF AN ARGUMENT IS EQUIVALENT TO THE INCONSISTENCY
OF THE SET OF SENTENCES THAT CONSISTS OF THE ARGUMENT’S PREMISES TOGETHER WITH
THE NEGATION OF ITS CONCLUSION. SO IF
THERE WERE A GENERAL TEST FOR EITHER INCONSISTENCY OF A SET OR VALIDITY OF AN
ARGUMENT, THERE WOULD BE A TEST FOR THE OTHER TOO. BUT THERE ISN’T, SO THERE ISN’T.
TESTING FOR
INVALIDITY, ETC.: TWO WAYS.
1. USING (TRUTH-FUNCTIONAL) EXPANSIONS ON SMALL DOMAINS
2. CONSTRUCTING A SIMPLE PARTIAL INTERPRETATION.
USING EXPANSIONS.
THE IDEA BEHIND EXPANSIONS IS THAT
WHEN THE DOMAIN OF DISCOURSE IS FINITE, EXISTENTIAL QUANTIFICATIONS ARE
EQUIVALENT TO LARGE DISJUNCTIONS AND UNIVERSAL QUANTIFICATIONS ARE EQUIVALENT
TO LARGE CONJUNCTIONS. IF OUR DOMAIN IS
ONLY, SAY, 2 OR 3 MEMBERS LARGE, IT IS EASY TO WRITE OUT THOSE DISJUNCTIONS AND
CONJUNCTIONS, AND THEN JUST USE TRUTH TABLE TECHNIQUES TO FIND AN
INTERPRETATION THAT SHOWS THE ARGUMENT TO BE INVALID. FOR EXAMPLE, TAKE THE
ARGUMENT:
(x)(Fx ᴝ Gx), (Ǝy)Gy /:. (Ǝz)Fz
NOW CONSIDER A DOMAIN D CONSISTING OF
JUST TWO INDIVIDUALS: D = {a,b}
HERE IS THE ARGUMENT’S EXPANSION FOR
THAT DOMAIN:
(Fa ᴝ Ga) · (Fb ᴝ Gb), Ga v Gb /:. Fa v Fb
NOW LET’S SEE WHETHER WE CAN ASSIGN
EXTENSIONS TO THE PREDICATE CONSTANTS ‘F’ AND ‘G’ THAT MAKE THE TRUTH VALUES OF
THE ATOMIC COMPONENTS RESULT IN THE TRUTH OF THE PREMISES AND THE FALSEHOOD OF
THE CONCLUSION. THAT TURNS OUT TO BE
EASY, BECAUSE TO MAKE THE CONCLUSION FALSE, BOTH DISJUNCTS MUST BE FALSE, SO
NEITHER a NOR b CAN HAVE THE PROPERTY F.
SO THEN LET’S HAVE THE INTERPRETATION ASSIGN THE NULL SET, SYMBOLIZED
THUS: ‘ᶲ’, AS THE EXTENSION OF ‘F’.
AND THEN LET’S HAVE IT ASSIGN THE SUBSET OF D WHOSE SOLE MEMBER IS a AS
THE EXTENSION OF ‘G’. SINCE WE THEN HAVE
THAT ‘Ga’ IS TRUE, THE SECOND PREMISE ‘Ga v Gb’ IS TRUE AS WELL.
WHAT ABOUT THE FIRST PREMISE? SINCE
NEITHER ‘Fa’ NOR ‘Fb’ IS TRUE, BOTH CONJUNCTS OF PREMISE ONE ARE TRUE (WHY? LOOK AT THEM!). SO WE HAVE SUCCEEDED IN FINDING A SIMPLE
PREDICATE LOGIC INTERPRETATION MAKING BOTH PREMISES TRUE AND THE CONCLUSION
FALSE. SO WE HAVE SHOWN THAT THE ARGUMENT IS INVALID.
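On a two-member domain the hunt for such an interpretation can even be automated: enumerate every possible extension for ‘F’ and ‘G’ and keep the ones that make the premises true and the conclusion false. A sketch (the helper subsets and the list representation are illustrative choices):

```python
from itertools import chain, combinations

D = ["a", "b"]

def subsets(seq):
    """All subsets of a sequence, as Python sets."""
    return [set(c) for c in
            chain.from_iterable(combinations(seq, r) for r in range(len(seq) + 1))]

# Search every assignment of extensions to 'F' and 'G' for one that makes
# (x)(Fx > Gx) and (Ey)Gy true while (Ez)Fz is false.
counterexamples = [
    (F, G)
    for F in subsets(D) for G in subsets(D)
    if all(x not in F or x in G for x in D)      # premise 1: (x)(Fx > Gx)
    and any(y in G for y in D)                   # premise 2: (Ey)Gy
    and not any(z in F for z in D)               # negated conclusion: ~(Ez)Fz
]
print(counterexamples[0])   # the interpretation found in the text: F empty, G = {a}
```

Every counterexample the search finds gives ‘F’ the null set, just as the reasoning in the text predicts.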
THIS TECHNIQUE CAN SOMETIMES BE ADAPTED TO SHOW DIRECTLY THAT
A STATEMENT IS A CONTRADICTION. CONSIDER
THE SENTENCE:
(Ǝy)~(Fy v ~Fy)
CONSIDER A DOMAIN D CONSISTING OF
{a}, AND REMEMBER THAT THERE CAN BE NO EMPTY DOMAIN IN PREDICATE LOGIC. WE THEN HAVE AS THE EXPANSION OF OUR
SENTENCE:
~(Fa v ~Fa)
THE EXPANSION JUST HAS ONE SENTENCE,
AND IT IS A TRUTH-FUNCTIONAL CONTRADICTION, NO MATTER WHAT EXTENSION IS
ASSIGNED TO ‘F’ (AND THERE ARE ONLY TWO POSSIBILITIES HERE, THE NULL SET AND
THE SET WHOSE SOLE MEMBER IS a.) AND THEREFORE NECESSARILY FALSE. BUT WE CAN ALSO SEE THAT NO MATTER HOW MANY
INDIVIDUALS ARE ADDED TO THE DOMAIN OF DISCOURSE, EACH DISJUNCT OF THE
EXPANSION OF THE SENTENCE ON THAT DOMAIN MUST ALSO BE A CONTRADICTION
(ESSENTIALLY THE SAME TRUTH-FUNCTIONAL CONTRADICTION) AND THEREFORE FALSE, AND
SO THE WHOLE EXPANSION MUST BE FALSE.
LET’S NOW USE EXPANSIONS TO SHOW THAT
THE FOLLOWING TWO SENTENCES ARE NOT LOGICALLY EQUIVALENT:
(x)(Ǝy)Lxy :: (Ǝy)(x) Lxy
AS OUR DOMAIN WE WILL AGAIN USE D =
{a,b}. WHEN DEALING WITH EMBEDDED
QUANTIFIERS LIKE THIS, ONE MUST DO THEM ONE AT A TIME, STARTING WITH THE
OUTERMOST QUANTIFIER. SO IT IS A
TWO-STAGE PROCESS. WE START WITH THE
SENTENCE ON THE LHS.
STAGE 1: ((Ǝy) Lay) · ((Ǝy) Lby)
STAGE 2: (Laa v Lab) · (Lba v Lbb)
NOW FOR
THE SENTENCE ON THE RHS.
STAGE
1: ((x)Lxa v (x)Lxb)
STAGE
2: (Laa · Lba) v (Lab · Lbb)
SO ARE
THESE STAGE 2, FINAL, EXPANSIONS LOGICALLY EQUIVALENT? OR CAN THEY DIFFER IN TRUTH VALUE? FOR EASE OF IMAGINATION, LET’S TRY TO MAKE
ONE TRUE AND THE OTHER FALSE. LET’S
SUPPOSE a AND b ARE PEOPLE AND THAT ‘Lxy’ EXPRESSES THE RELATION OF x LOVING y.
AND LET’S SUPPOSE THAT WHILE a AND b LOVE EACH OTHER, NEITHER OF THEM LOVES
THEMSELVES. SO THE EXTENSION ASSIGNED TO THE RELATIONAL PREDICATE ‘Lxy’ is:
{<a,b>, <b,a>}. THAT MAKES THE EXPANSION OF THE RHS SENTENCE FALSE,
BUT THE EXPANSION OF THE LHS SENTENCE TRUE!
SO WE HAVE JUST SHOWN THAT THOSE SENTENCES ARE NOT LOGICALLY EQUIVALENT!
FINALLY,
LET’S ILLUSTRATE HOW THIS METHOD WORKS TO SHOW THE CONSISTENCY OF A SET OF
SENTENCES. AS OUR SET LET’S TAKE:
{(x)(Fx ᴝ ~Fx), (Ǝy)(Fy v ~Fy)}
HERE ARE
THEIR EXPANSIONS IN THE ONE OBJECT DOMAIN D = {a}:
{(Fa ᴝ ~Fa), (Fa v ~Fa)}
I THINK
THAT IT IS EASY TO SEE THAT AS LONG AS OUR INTERPRETATION ASSIGNS TO ‘F’ AS ITS
EXTENSION THE NULL SET, BOTH EXPANSIONS WILL BE TRUE, AND SO THIS PROVES THAT
THE SET IS LOGICALLY CONSISTENT IN PREDICATE LOGIC.
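The one-object check is short enough to transcribe directly into truth-functional Python (the variable names are illustrative; ‘Fa’ is false because ‘F’ gets the null set as its extension):

```python
# With the null set as F's extension, 'Fa' is false; check both expansions.
Fa = False
premise_1 = (not Fa) or (not Fa)    # Fa > ~Fa, written truth-functionally
premise_2 = Fa or (not Fa)          # Fa v ~Fa
print(premise_1 and premise_2)      # True: one interpretation makes both true
```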
CONSTRUCTING
A SIMPLE INTERPRETATION.
IF YOU
FIND THE TECHNIQUE OF TRUTH-FUNCTIONAL EXPANSIONS TIRESOME (;-)), YOU SHOULD TRY A MORE INTUITIVE APPROACH THAT
I WILL NOW EXPLAIN. I WILL ILLUSTRATE IT
JUST FOR TESTING FOR INVALIDITY. SUPPOSE
THAT YOU ARE GIVEN THE FOLLOWING ARGUMENT, WHICH YOU SUSPECT IS INVALID:
(Ǝx)(Fx · Gx), (Ǝx)(Fx · Hx) /:. (Ǝx)(Gx · Hx)
TO SHOW
THAT, WE NEED TO FIND AN INTERPRETATION MAKING THE PREMISES TRUE AND THE
CONCLUSION FALSE. LET’S CHOOSE AS OUR
DOMAIN THE DOMAIN OF HUMANS. TO MAKE THE
CONCLUSION FALSE WE NEED TO THINK OF TWO MUTUALLY EXCLUSIVE PROPERTIES THAT
HUMANS CAN HAVE, PROPERTIES THAT CAN BE HAD BY HUMANS BUT NOT HAD
TOGETHER. IF WE REPRESENT THOSE BY ‘G’
AND ‘H’, THEN THE CONCLUSION WILL BE FALSE.
LET ‘Gx’ BE ‘x IS MALE’ AND ‘Hx’ be ‘x IS FEMALE’. TO MAKE THE PREMISES BOTH TRUE, WE THEN JUST
NEED TO THINK OF A PROPERTY THAT CAN BE HAD BY BOTH MALE AND FEMALE
HUMANS: LET ‘Fx’ BE ‘x HAS
PARENTS’. THEN BOTH PREMISES ARE TRUE
AND THE CONCLUSION FALSE. SO THIS
INTERPRETATION DEMONSTRATES THE INVALIDITY OF THE ARGUMENT.
LET’S DO
IT AGAIN, BUT WITH A DIFFERENT DOMAIN. LET’S CHOOSE AS OUR DOMAIN THE NATURAL
NUMBERS. AND LET’S THINK OF TWO
PROPERTIES THAT NO NATURAL NUMBER CAN HAVE BOTH OF, AND THAT WILL MAKE THE
CONCLUSION FALSE. LET ‘Gx’ BE ‘x IS
EVEN’ AND ‘Hx’ BE ‘x IS ODD’. THOSE ARE
MUTUALLY EXCLUSIVE PROPERTIES OF NATURAL NUMBERS! NOW ALL WE HAVE TO THINK OF IS A PROPERTY HAD
BY SOME EVEN NUMBERS AND SOME ODD NUMBERS….HOW ABOUT BEING DIVISIBLE BY 3? LET ‘Fx’ BE ‘x IS DIVISIBLE BY 3’. LET THE NUMERALS, ‘0’, ‘1’, ‘2’……SERVE AS
INDIVIDUAL CONSTANTS NAMING PARTICULAR NATURAL NUMBERS. SO ‘9’ AND ‘30’ BOTH NAME NATURAL
NUMBERS. 9 IS AN ODD NUMBER, AND 30
IS EVEN. AND BOTH 9 AND 30 ARE DIVISIBLE
BY 3! SO BOTH PREMISES OF THE ARGUMENT ARE TRUE IN THE DOMAIN OF NATURAL NUMBERS
UNDER THOSE INTERPRETATIONS OF THE PREDICATES.
BUT OF COURSE THE CONCLUSION IS FALSE.
SO CLEARLY THIS IS AN INVALID PREDICATE LOGIC ARGUMENT.
HERE IS
ANOTHER INVALID ARGUMENT:
(x)(Fx ᴝ (Ǝy)Gyx) /:. (Ǝy)(x)(Fx ᴝ Gyx)
LET’S
CHOOSE EARLY (PRE-CIVILIZED) HUMANS AS OUR DOMAIN OF DISCOURSE. WE WANT TO INTERPRET THE PREDICATES ‘F’ AND
‘G’ SO THAT THE PREMISE IS TRUE AND THE CONCLUSION IS FALSE. LET ‘Fx’ BE ‘x IS MARRIED’, AND ‘Gxy’ BE ‘x IS THE SPOUSE OF y’. OF COURSE MARRIAGE DID NOT EXIST AS AN
INSTITUTION BACK THEN. SO THE PREMISE IS
TRUE, BECAUSE IT WILL BE FALSE FOR ALL MEMBERS OF THE DOMAIN THAT THEY ARE MARRIED,
AND SO THE CONDITIONAL WITHIN THE SCOPE OF THE QUANTIFIER WILL HOLD BY DEFAULT. FOR ALL SUCH EARLY HUMANS IT WILL HOLD THAT
IF THEY ARE MARRIED THEN THEY HAVE A SPOUSE!
BUT THE CONCLUSION WILL BE FALSE.
IT IS FALSE THAT THERE EXISTS SOME ONE EARLY HUMAN, Y, SUCH THAT ALL MARRIED HUMANS ARE MARRIED TO Y.
OR WE
COULD LET THE DOMAIN BE ANIMALS INCLUDING HUMANS, LET ‘Fx’ BE ‘x IS FENDING OFF
AN ATTACK’ AND ‘Gxy’ BE ‘x IS ATTACKING OR PREYING ON y’. THEN THE PREMISE IS
TRUE BUT THE CONCLUSION IS FALSE: NOT EVERY ANIMAL UNDER ATTACK IS UNDER ATTACK
BY THE SAME ANIMAL!
CHAPTER SEVEN:
PREDICATE LOGIC SYMBOLIZATION
WE ARE NOW READY TO TAKE THE FIRST
STEP IN TURNING OUR TRUTH-FUNCTIONAL SENTENTIAL CALCULUS INTO THE MUCH MORE
POWERFUL LOGIC KNOWN AS: STANDARD 1ST ORDER LOGIC. OUR FIRST STEP WILL BE TO MAKE IT INTO A
PREDICATE CALCULUS: A CALCULUS THAT TAKES INTO ACCOUNT CERTAIN LOGICALLY
SALIENT ASPECTS OF THE GRAMMATICAL STRUCTURE OF SENTENCES THAT IN OUR
SENTENTIAL LOGIC WERE REGARDED AS ATOMIC: SUCH ASPECTS AS
SUBJECT/(NON-RELATIONAL) PREDICATE STRUCTURE AND (1ST ORDER) QUANTIFICATIONAL
STRUCTURE. THE SECOND STEP WILL BE TO
ADD RELATIONAL PREDICATES TO THIS MIX, AND THE THIRD AND FINAL STEP WILL BE TO
ADD THE TREATMENT OF A VERY SPECIAL RELATION: NUMERICAL IDENTITY. LET US CONSIDER SOME SIMPLE ENGLISH
ARGUMENTS:
A.
1.
All politicians are corruptible.
2.
Steve is a politician.
Therefore
3.
Steve is corruptible.
B.
1.
John is a bachelor.
2.
John is lonely.
Therefore
3.
Some bachelors are lonely.
C.
1. Someone is rich. Therefore
2. Everyone is rich.
D.
1. All humans are mammals.
2. All mammals are
warm-blooded. Therefore
3. All humans are warm-blooded.
NONE OF THESE ARGUMENTS IS VALID
BECAUSE OF ITS TRUTH-FUNCTIONAL STRUCTURE (WHY?)
BUT THREE OF THEM ARE INTUITIVELY
VALID, AND THE FOURTH, THOUGH
INTUITIVELY INVALID, IS IMPORTANTLY SO FOR REASONS OTHER THAN TRUTH
FUNCTIONAL STRUCTURE. LET’S FIGURE OUT
WHAT MAKES THEM VALID OR INVALID, AND HOW TO REPRESENT THIS, STARTING WITH A.
Premise 2 of A and the conclusion of
A are sentences that attribute a particular property to a particular
individual. The particular individual is
the same one in each case, namely Steve.
Premise 2 attributes the property of being a politician to Steve, and
the conclusion attributes to him the property of being corruptible.
Let us introduce lower case letters
from a to t as individual constants.
An individual constant is a bit like a proper name, and its role, in a
given context, is to pick out the same individual each time it is used. Let’s use ‘s’ as a constant picking out
Steve.
Then let’s introduce upper case
letters as property constants to pick out properties of
individuals. So, we could use ‘C’ as a
property constant that picks out the property of being corruptible. And we could use ‘P’ as a property constant
to pick out the property of being a politician.
We shall then represent premise 2 as
‘Ps’, read ‘Steve is a politician’; and we will represent the conclusion as
‘Cs’, read ‘Steve is corruptible’.
Premise 1 is more complicated. It says, of everything, that if it has
the property of being a politician, then it also has the property of
being corruptible. We need a way to
refer to everything. To do this we first
need something called an individual variable. We will use lower case letters starting at u
for this purpose – but by a certain tradition, the most commonly used ones are
x, y, and z. What we have to imagine
here is a domain of individuals that our conversation is about. Then an individual variable is sort of like a
pronoun, whose value, unlike that of an individual constant, can vary from one
use to the next in the same context. A
variable can take as its value any individual of the given domain of
individuals; whereas a constant will have assigned to it a fixed value with the
given domain. So instead of saying that
Steve is a politician, we might say, e.g., that
they are a politician (using ‘they’ in that somewhat awkward way
intended to be both singular and gender neutral, and supposedly better than
“he/she”). In symbols, we can represent
this as ‘Px’, read “x is a politician”.
One difference, though, between English and our new symbolic system, is
that while “They are a politician” is at least intended as a sentence of
English, “Px”, unlike “Ps”, is not intended as a sentence. It is called a sentence form.
[You might at this point be wondering
why we are using upper case letters for predicate constants, when we already
use them for sentence constants. The
answer is that we can always tell if it is a sentence constant, because it will
not have any individual constants or variables immediately following it. If it does, then it is a predicate constant.]
One way that a sentence form can be
made into a sentence is simply by replacing its individual variable with a
particular individual constant. But
there is another way, which brings us to the second thing that we need to
represent premise 1: an expression that can be used in attributing something to
everything in the domain. It is called a
universal quantifier. It
consists of an individual variable enclosed in parentheses, like this:
‘(x)’. When we place that in front of
the sentence form “Px”, we get a sentence:
“(x)Px” which says, if our domain is the domain of humans, that everyone
is a politician, or, following the symbolization a bit more closely, “For all
x, x is a politician”.
We can now represent argument A in
our new notation.
1.
(x) (Px ᴝ Cx)
2.
Ps /:.
3.
Cs
In effect, premise 1 attributes to
everything in the domain the ‘conditional property’ of: being corruptible if a
politician. Or to put it another way
that will be useful later on when we talk about the semantics of our new
calculus, premise 1 says that everything in the domain satisfies a
certain condition: the condition expressed by the truth-functionally
complex sentence form: ‘Px ᴝ Cx’.
That still does not explain why A is
valid. But before talking about that,
let’s look at argument B. The 2 premises
are already easy to represent. We will use ‘j’ as an individual constant denoting
John, and ‘B’ as a predicate constant for the property of being a bachelor, and
‘L’ for the property of being lonely.
But what about the conclusion? We
are going to need another kind of quantifier, one that says not that everything in the domain has a property
or satisfies a condition, but that something
(at least one thing) in the domain does.
We will call this an existential quantifier. It may be represented in our notation by
placing a backwards E in front of an individual variable and enclosing the
result in parentheses, thus: ‘(Ǝx)’.
We are now ready to represent argument B:
1.
Bj
2.
Lj /:.
3.
(Ǝx)(Bx · Lx)
This seems valid. We are supposing that j denotes an individual
in the domain of discourse. On the
supposition that j is a bachelor and that j is lonely, it immediately follows
by truth-functional logic (the rule of conjunction) that j is both a bachelor
and lonely. But then something in the
domain (or someone, if we are taking the domain to be just humans) is a
bachelor and lonely. What could be more
obvious? Similarly, premise 1 in A gives
us that everyone in the domain is such that if they are a politician then
they are corruptible. But then in
particular, if Steve, whom we are taking to be a member of the domain, is a
politician, then Steve is corruptible.
But premise 2 gives us that Steve is a politician. So, it follows by Modus Ponens that he is
corruptible.
Notice that the sentence form
embedded in the conclusion here is conjunctive, not conditional. The sentence “(Ǝx)(BxᴝLx)” means
something subtly different: what it says is that there exists at least one member
of the domain of discourse having the conditional property that if it is a
bachelor, then it is lonely. But that
can be true without any member of the domain of discourse being either a
bachelor or lonely. In fact, it is
trivially true provided that there are not any bachelors, since conditionals
are true if their antecedents are false!
An analogous point holds for premise
1 of argument A. Its embedded sentence
form needs to be a conditional to express the intent of the English, not a
conjunction. To say that (x) (Px · Cx)
is to say that every member of the domain is both a politician and
corruptible. But it can be true that
everyone who is a politician is corruptible but false that everyone is a
politician.
Suppose that our domain of discourse is
a finite set of individuals. Then the claim that everything in the domain
has a certain property is equivalent to a long conjunction whose conjuncts
respectively attribute the property in question to every member of the
domain. A conjunction is true only if
all of its conjuncts are true. And the
claim that something in the domain has a certain property is equivalent to a
long disjunction whose disjuncts respectively attribute the property to each member of the
domain. A disjunction is true provided that
at least one of its disjuncts is true.
This shows that if our domain of discourse were always finite, then our
two quantifiers would be dispensable in principle at least. The expressive power of the quantifiers comes
when we want to make generalizations about the members of an infinite
domain. For instance we can say that
every positive integer has a successor, but we cannot express that as a
conjunction, given that sentences must be finite in length. Or we can claim (perhaps falsely, but at
least we can coherently claim) that there is a greatest prime number using an
existential quantifier ranging over the natural numbers, but cannot express
that as a disjunction.
Nevertheless, it is sometimes useful
to represent the meaning of a quantified statement in a (usually small) finite
domain D, whose members are all designated by individual constants as a
conjunction. For instance, if D consists
of just two members, a and b, then the universal generalization that all members
of D have some property F – i.e., (x)Fx – will be the conjunction (Fa · Fb). We will call this the expansion of
(x)Fx in D. The expansion of (Ǝx)Fx in D will be the
disjunction (Fa v Fb). This will prove
to be a useful notion when in Ch. 8 we consider various techniques for checking
for invalidity of arguments.
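The expansion of a one-place generalization over named individuals is mechanical enough to write as a small string-building function. A sketch (the function name and its "all"/"some" flags are invented for illustration):

```python
def expansion(quantifier, pred, constants):
    """Write out the expansion of (x)Fx or (Ex)Fx over named individuals:
    a universal expands to a conjunction, an existential to a disjunction."""
    connective = " · " if quantifier == "all" else " v "
    return connective.join(pred + c for c in constants)

print(expansion("all", "F", ["a", "b"]))    # Fa · Fb
print(expansion("some", "F", ["a", "b"]))   # Fa v Fb
```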
Let’s look now at argument C
above. From the premise that someone is
rich, it is alleged to follow that everyone is rich. Intuitively, this seems outrageously
invalid. And it is. It may be represented thus in our new symbolization,
where we use “R” as a predicate constant for the property of being rich:
1.
(Ǝx)Rx /:.
2.
(x)Rx
But
how are we to think of validity and invalidity?
Let us say that an argument is valid in Predicate logic just in case
there is no interpretation of its premises and conclusion on which all of the
premises are true but its conclusion false.
Since an argument is invalid just in case it is not valid, an invalid
argument must be such that there is an interpretation making all of its
premises true and its conclusion false.
Such an interpretation is easy to find.
We just need to specify a domain of discourse making the premise true –
which we can do by making sure Jimmy Pattison is a member – and the conclusion
false – by making sure some very poor people – say, some homeless beggars – are
also members. If we specify the domain
as the domain of all currently living humans, that will do nicely. Then the premise will be true and the
conclusion false, and so the argument will be invalid. That is because the property of being rich
will be represented in the domain by the subset of all and only its members who
are rich, and that will be a proper subset (i.e., have fewer members than the
whole domain), since there will be other members of the domain who are not in
the subset of rich members.
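The search for a counter-interpretation can itself be mechanized when the domain is small. The following Python sketch (illustrative only; the member names are made up) tries every possible extension of 'rich' over a three-member domain and collects the ones making the premise true and the conclusion false:

```python
from itertools import product

domain = ["pattison", "beggar1", "beggar2"]

def exists_R(ext):
    """(Ǝx)Rx: at least one member of the domain is in R's extension."""
    return any(x in ext for x in domain)

def all_R(ext):
    """(x)Rx: every member of the domain is in R's extension."""
    return all(x in ext for x in domain)

# Try every possible extension of 'rich' over the domain.
counterexamples = []
for bits in product([False, True], repeat=len(domain)):
    ext = {x for x, b in zip(domain, bits) if b}
    if exists_R(ext) and not all_R(ext):
        counterexamples.append(ext)

print(len(counterexamples) > 0)  # True: the argument is invalid
```

Any one of the collected extensions — for instance, the one containing just "pattison" — is an interpretation of the kind described above.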
Just to drive the point home, one way
to think of it is this: (Ǝx)Fx is
logically equivalent (and provably so) to ~(x)~Fx (i.e., the negation of our conclusion in
C!) To say that something has the
property F is to say that not everything doesn’t have it! Similarly, (x)Fx is logically equivalent to
~(Ǝx)~Fx: to say that everything is F is to say that it is not the case
that anything isn’t F. Related equivalences that are handy to keep in
mind are ~(x)Fx and (Ǝx)~Fx; and ~(Ǝx)Fx and (x)~Fx. All of these equivalences can be understood
in terms of the DeMorgan equivalences, and the fact that a universal
generalization is a big conjunction, while an existential generalization is a
big disjunction. So, for instance, to
negate a universal generalization – to assert ~(x)Fx –
is to negate a conjunction, which is equivalent to asserting a disjunction of
negations, which is just what (Ǝx)~Fx expresses.
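In a finite domain, all four equivalences can be checked mechanically by brute force. Here is a Python sketch (my illustration) testing them against every possible extension of F over a given domain:

```python
from itertools import product

def equivalences_hold(domain):
    """Check the four quantifier-negation equivalences against every
    possible extension of F over the given finite domain."""
    for bits in product([False, True], repeat=len(domain)):
        F = dict(zip(domain, bits))
        ex, uni = any(F.values()), all(F.values())
        checks = [
            ex == (not all(not F[x] for x in domain)),   # (Ǝx)Fx  :: ~(x)~Fx
            uni == (not any(not F[x] for x in domain)),  # (x)Fx   :: ~(Ǝx)~Fx
            (not uni) == any(not F[x] for x in domain),  # ~(x)Fx  :: (Ǝx)~Fx
            (not ex) == all(not F[x] for x in domain),   # ~(Ǝx)Fx :: (x)~Fx
        ]
        if not all(checks):
            return False
    return True

print(equivalences_hold(["a", "b", "c"]))  # True
```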
Let us finally represent argument D
in our symbol system:
1. (x)(Hx ᴝ Mx)
2. (x)(Mx ᴝ Wx) /:.
3. (x)(Hx ᴝ Wx)
Why does this seem valid? Certainly if the domain of interpretation is
finite we can see that if each of the instances of the two premises is true,
then each of the instances of the conclusion will be true by truth-functional
logic – by doing a series of Hypothetical Syllogisms. [This can be generalized to the case of an
infinite domain, but only using certain mathematical techniques that are in
effect built into the intro- and elim- rules for the universal quantifier.] This conforms to a common pattern in
constructing proofs in predicate logic.
One uses elim- rules to eliminate occurrences of quantifiers, then one
does some truth-functional logic, then one uses intro- rules to put quantifiers
back in. [The rules for introducing a
universal quantifier and for eliminating an existential one are the ones that
are complicated. So if you are only
asked to eliminate some universal quantifiers and introduce some existential
ones, it can be pretty straightforward.
Eliminating existential quantifiers and then introducing universal ones
is more subtle. Alas, time will not
allow us in this course to study predicate logic proofs.]
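Although we will not study predicate-logic proofs, the finite-domain reasoning above can be checked by machine. This Python sketch (illustrative only) tries every interpretation of H, M, and W over a two-member domain and looks for one making both premises true and the conclusion false:

```python
from itertools import product

domain = [0, 1]
found_counterexample = False
# Each of H, M, W gets an arbitrary extension over the domain.
extensions = list(product([False, True], repeat=len(domain)))
for hb, mb, wb in product(extensions, repeat=3):
    H, M, W = dict(zip(domain, hb)), dict(zip(domain, mb)), dict(zip(domain, wb))
    p1 = all((not H[x]) or M[x] for x in domain)   # (x)(Hx ᴝ Mx)
    p2 = all((not M[x]) or W[x] for x in domain)   # (x)(Mx ᴝ Wx)
    c  = all((not H[x]) or W[x] for x in domain)   # (x)(Hx ᴝ Wx)
    if p1 and p2 and not c:
        found_counterexample = True

print(found_counterexample)  # False: no counter-interpretation exists here
```

No interpretation over this domain is a counterexample, just as the Hypothetical-Syllogism reasoning predicts.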
Here are some select definitions
relevant to syntactic notions that have been introduced in this chapter. But please consider all of the definitions at
the end of Chapter 7.
QUANTIFIER: a symbol
-- a new kind of logical operator -- used to state how many items (all
or some) in the universe of discourse are being referred to.
UNIVERSAL QUANTIFIER: an individual
variable enclosed in parentheses. E.g.,
‘(x)’ is a universal quantifier in variable x, and reads ‘for all x’ or ‘for
any x’.
EXISTENTIAL QUANTIFIER: an individual
variable preceded by a backwards capital E, the whole enclosed in
parentheses. Thus ‘(Ǝy)’ is an existential
quantifier in variable y, and reads ‘for some y’ or ‘there is at least one y’.
SCOPE OF A QUANTIFIER: the extent of
an expression quantified by a quantifier.
Sometimes to make the intended scope of a quantifier clear, parentheses
or brackets are required.
BOUND VARIABLE: the occurrence of an
individual variable within the scope of a quantifier in that variable.
FREE VARIABLE: the occurrence of an
unbound variable. Note: when an
individual variable is unbound, the expression in which it occurs is not a
sentence but a sentence form.
CHAPTER TEN (SECS. 1-5)
STEP TWO: ADDING RELATIONAL PREDICATES
So far, our predicates have all
expressed non-relational properties of individuals. But many properties are relational: like the
property of being loved by Sue. We could
just introduce a predicate constant for that property, say L, and attribute it to Jack, say j, thus we have:
Lj, which reads, Jack is loved by Sue.
What about Sue, though? Is she
loved by Jack? If so, and we want to
express it using a predicate constant, we will need a different one, since the
property of being loved by Jack is clearly different from the property of being
loved by Sue. A better, more efficient
way to proceed is to introduce a two-place predicate, Lxy, for expressing the
binary relation of x being loved by y.
Then to represent Jack’s being loved by Sue we may write: Ljs, and to
represent Sue’s being loved by Jack we may write Lsj. We can generalize this approach by
introducing 3-place predicates for representing
3-place relations (such as the relation of occurring between an earlier
and a later event), 4-place predicates for 4-place relations, …, all the way to
n-place predicates for n-place relations, for some arbitrarily large but finite number n. Things get
interesting when we combine this with quantifiers. The expressive power of our formal system
greatly increases. Not only does the
order in which constants or variables following a relational predicate matter,
but if it is individual variables, the order and type of the quantifiers
binding these variables also contributes another whole dimension of semantic
nuance. Take the old Dean Martin song,
“Everybody loves somebody sometime….” Of course the song does not mean that
there is one person that everyone loves at the same time; rather, it means that
everyone, at one time or another, loves somebody or other. Let Lxyz express x loving y at time z (note: not being loved by, as above; changing
from the passive verb to the active verb changes the direction of the
relation!), Px express being a person, and Tx express being a time. Then we have (x) {Px ᴝ (Ǝy)(Ǝz)[(Py · Tz)· Lxyz]} or equivalently
(x)(Ǝy)(Ǝz){Px ᴝ [(Py·Tz) · Lxyz]}.
But what if we wanted to say instead
that there is some one person that, and some one particular time at which,
everyone loves that person? Then we have
to change the order of the quantifiers in the last formula as follows:
(Ǝy)(Ǝz)(x){Px ᴝ [(Py · Tz) · Lxyz]}.
Notice that only the order of the
quantifiers has changed. Finally, what
if we wanted to say something different yet again: that there is some one
person that everyone at some time or other loves? Here it is:
(Ǝy)(x)(Ǝz){Px ᴝ [(Py · Tz) · Lxyz]}.
If you say “For all x there exists a
y…” your existence claim falls within the scope of your universal quantifier:
for each value of x there could be (though needn’t be) a different value of y
to which it bears some particular relation.
If you instead say “There exists a y such that for all x…” it is the
universal quantifier that falls within the scope of the existence claim: for some particular value of y, each value of
x bears some particular relation to it.
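The difference in quantifier order can be seen concretely in a small finite domain. In this Python sketch the people and the 'loves' table are made up for illustration; Lxy (x loves y) is encoded as membership of y in the set of those whom x loves:

```python
people = ["ann", "bob", "cat"]
loves = {
    "ann": {"bob"},   # ann loves bob
    "bob": {"cat"},   # bob loves cat
    "cat": {"ann"},   # cat loves ann
}

# (x)(Ǝy)Lxy: for each x there is some y (possibly different) whom x loves.
forall_exists = all(any(y in loves[x] for y in people) for x in people)

# (Ǝy)(x)Lxy: there is some one y whom every x loves.
exists_forall = any(all(y in loves[x] for x in people) for y in people)

print(forall_exists)  # True: everyone loves somebody or other
print(exists_forall)  # False: nobody is loved by everyone
```

On this table the first sentence is true and the second false, which shows the two orderings are genuinely different claims.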
CHAPTER 5: CONDITIONAL AND INDIRECT PROOFS
We want all of our proofs to be valid
– that is, to satisfy a truth-table test for validity. The property of a system of rules, according
to which all proofs constructed by correct use of the rules are valid, is
called the “soundness” of the system. So
far, our system of 18 rules is sound. And it will remain sound when we add the
2 further rules below.
Proving this soundness is beyond the scope of this course, however.
We also want our system of rules to
be “complete” in the sense that for every valid argument – for every argument
that satisfies a truth-table test for validity – there is a proof of it using
our system of rules. So far, our system
of rules is not complete in that sense.
But it will become complete with the addition of just 1 of the 2 rules
we are about to add. Again, proving this
is beyond the scope of this course.
Our 2 new rules are a new kind of rule. Instead of taking us
from previous lines in a proof to a new line, these new rules begin by
introducing a new line as a temporary assumption, and then taking us to another
new line given that assumption plus any previous lines, if there are any. And then the rule takes
us to a third new line and discharges the temporary assumption in the process!
We want our system of rules to enable
us to construct proofs of tautologies.
Tautologies are true no matter what, and so they are not conditional on
the truth of anything else. Their truth
does not depend on the truth of any given premises, and a proof of a tautology should reveal
this: it should be a proof from no given premises at all! All of our previous 18 rules require that we
have at least one premise to start with.
But the 2 new rules do not. So
constructing a proof of a tautology will always require the use of one of our
new rules, just to get the proof started.
We will call a sentence that has a proof from no given premises a
“theorem”. We want all and only
tautologies to be theorems in this sense,
and the soundness and completeness properties of our system of 20 rules
will guarantee this.
The first new rule is called
Conditional Proof or “CP”. It works like
this. Suppose that what you are trying
to derive is a conditional – i.e., a sentence whose main connective is the
material conditional. CP allows you to
introduce as a new line the antecedent of that conditional as a temporary
assumption. Then one tries to derive the
consequent of the conditional from the assumption of the antecedent together
with any other lines, if any, that one already has. Assuming that one succeeds in doing this, CP
then allows one to introduce the whole conditional as a new line of the proof,
and in doing so discharges the temporary assumption. That is, the conditional does not depend on
the assumption any more. It only depends
on the premises, if any, of the argument.
And if there were no premises, then the conditional will have been
proven as a theorem.
To illustrate the power of this new
rule, which in effect is a general “conditional introduction” rule, recall from
Ch. 3 that every argument has its “corresponding conditional”, in which the
conjunction of all of the premises of the argument forms the antecedent, and
the conclusion of the argument forms the consequent. An argument is valid by a truth table test
just in case its corresponding conditional is a tautology by a truth table
test. Using CP, one can convert any
proof of an argument into a proof of its corresponding conditional as a theorem. Here is how:
one adds as a new first line of the proof a line consisting of the
conjunction of all of the given premises of the argument. This new line is
being introduced as a temporary assumption as part of a CP strategy. In our bookkeeping, we will mark it as “AP”
or “assumption premise”. Then one simply
uses Simplification to get to each of the conjuncts of our AP standing
alone. Those conjuncts were premises in
the original derivation, but here they have been derived from AP so they won’t
be marked as premises any more. The
original proof of the argument then takes us to its conclusion, which is of
course the consequent of the corresponding conditional. We then use CP to infer that corresponding
conditional as a new line, in the process discharging the assumption premise,
AP. Our bookkeeping for the new line
will cite the steps that took us from the assumption of the antecedent to
the consequent, and then mention the rule, CP. And so now we have a proof of the
corresponding conditional depending on no premises at all; i.e., we will have
proven it as a theorem.
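The link CP exploits — an argument is truth-table valid exactly when its corresponding conditional is a tautology — can be checked by brute force for any particular case. Here is a Python sketch using Modus Ponens as the sample argument (my illustration, not the text's):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a ᴝ b."""
    return (not a) or b

valid = True       # the argument p, p ᴝ q /:. q passes the truth-table test
tautology = True   # its corresponding conditional (p · (p ᴝ q)) ᴝ q is all T's
for p, q in product([False, True], repeat=2):
    if p and implies(p, q) and not q:
        valid = False                          # a row defeats the argument
    if not implies(p and implies(p, q), q):
        tautology = False                      # a row defeats the conditional

print(valid and tautology)  # True: both hold, as the equivalence predicts
```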
To make graphic that the line derived using CP
does not depend on the temporary assumption that was introduced as part of the
CP strategy, the authors of our text mark the AP line to its left with an arrow
pointing to it, and then extend the arrow's line down the left side of the proof, and
then underneath the line where the consequent has been derived from AP. This line represents the scope of the
assumption. So when the conditional
derived by CP occurs underneath that line, it falls outside the scope of the
temporary assumption. See your text, and
in class and tutorials, for examples.
Our second and final new rule is
called “Indirect Proof”, or “IP”. It is
a version of an ancient proof strategy known as “reductio ad absurdum”.
Suppose that you want to derive some particular sentence S, but you
cannot see how to do so ‘directly’, e.g., from the previous lines (if any) that
you already have. What you do then is
introduce ~S as a temporary assumption, again marking it “AP”, and then you try
to derive a contradiction from that assumption.
If the negation of what you are trying to derive leads to a
contradiction, then what you are actually trying to derive must be okay, and
the rule IP allows you to enter it as a line.
Once you have reached the
contradiction, which, for purposes of the rule must be a sentence of the form p·~p, the rule IP allows
you to write as a new line the sentence S itself, at the same time discharging
the temporary assumption of its negation.
The bookkeeping to the left of S will cite the lines that took you from
the assumption of ~S (sometimes referred to as the “reduction assumption”) to
the contradiction, and then the rule IP.
The arrow graphic is used, as with CP, to indicate where the scope of the
AP assumption ends. See the text, and in
class and tutorials, for examples. If a sentence
that you want to prove to be a theorem does not have a conditional as its main
connective, and you do not see any easy way to get to it from some conditional
that you might try to prove using CP, then IP is available as a proof
strategy. It turns out that our system
of rules is complete without IP, but IP is nevertheless very convenient, and
allows some proofs to be shorter than they otherwise would be. But of course, the rule IP is not just for
proving theorems; it can be used anywhere in the context of constructing a
proof.
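The semantic fact behind IP can also be checked mechanically in particular cases: S follows from some premises just in case the premises together with ~S cannot all be true at once. A Python sketch, using Disjunctive Syllogism as the sample (illustrative only):

```python
from itertools import product

def satisfiable(sentences, num_vars):
    """True if some valuation makes every sentence in the list true."""
    return any(
        all(s(v) for s in sentences)
        for v in product([False, True], repeat=num_vars)
    )

# Disjunctive Syllogism as a sample: p v q, ~p /:. q
premises = [lambda v: v[0] or v[1], lambda v: not v[0]]
neg_conclusion = lambda v: not v[1]   # the reductio assumption, ~q

# The premises plus ~q cannot all be true together, so q follows.
print(satisfiable(premises + [neg_conclusion], 2))  # False
```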
CHAPTER 4: PROOFS
Let us call an ordered sequence of
sentence forms, the last of which is designated the conclusion and is said to
follow logically from the previous ones, an argument form. Every
substitution instance of an argument form will be an argument. Given any argument in English, its
truth-functional argument form may be determined by first identifying all of
the atomic components in all of its sentences, representing them with distinct
sentence constants, representing the truth-functional relations between these
atomic sentences as expressed by the English sentences in terms of our truth-functional
connectives, and then converting each of the resulting sentences of our
symbolism to their respective one-one logical forms. The result will be an argument form
representing the truth-functional structure of the English argument.
Now as we learned in Ch. 3, truth
table analysis provides us with a method for determining, for any argument
whose validity turns on its truth-functional structure, whether or not it is
valid. Once the truth functional
structure of an argument is determined, and represented as an argument form, the
truth table method for testing for validity (in any of the 3 variants
illustrated in ch. 3) constitutes a decision procedure, in the
mathematically precise sense that it is a purely mechanical procedure, which,
if correctly followed, is guaranteed to determine in a finite
number of steps whether or not an argument form is truth-functionally
valid. A machine can be
programmed to apply this procedure.
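Such a mechanical procedure is easy to sketch. The following Python function (my illustration; argument forms are encoded as functions of a valuation) runs the truth-table test for validity, with Disjunctive Syllogism as the sample form:

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """Truth-table test: check every valuation for a row on which all
    premises are true and the conclusion false."""
    for valuation in product([False, True], repeat=num_vars):
        if all(p(valuation) for p in premises) and not conclusion(valuation):
            return False        # counterexample row found
    return True                 # no such row: the form is valid

ds_valid = is_valid(
    premises=[lambda v: v[0] or v[1],      # p v q
              lambda v: not v[0]],         # ~p
    conclusion=lambda v: v[1],             # q
    num_vars=2,
)
print(ds_valid)  # True
```

Because the loop always terminates after 2^n rows, this really is a decision procedure in the sense just described.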
But though it is comforting to have
such a decision procedure, it turns out to be rather impractical and cumbersome
when the arguments become very complex, with lots of sentence
variables. So logicians have devised
other, simpler ways of determining the validity of an argument. One of these, which we will study, is the method
of proof. In this method, one
tries to go in a finite series of steps from the given premises of an argument
to its given conclusion, in a way such that each intermediary step is reached
from previous steps, by designated argument forms (sometimes called
‘rules of inference’) recognized as valid (or ‘truth preserving’ in the sense
that if applied correctly they could never lead us from true statements to
false ones). If one succeeds, then the
resulting sequence of sentences is called a proof of the conclusion from
the given premises, and the original argument will thereby be deemed valid.
This method of proof is not a
decision procedure in the formal mathematical sense referred to above, because
it depends on one’s ability to detect an appropriate sequenced pattern of steps
taking one from the given premises to the given conclusion via the designated
valid argument forms that one is permitted to use by the given method of
proof. (Different methods will vary in the valid forms they
allow for proof construction.) Often this can be challenging. Still, if one can see the pattern of
steps required, it is usually less work to go through those steps than would be
required to run a full truth-table test for validity; and, having constructed
the proof, one will know that the argument is valid. But if one cannot see how
to construct the proof, that will not
constitute knowledge that the argument is invalid. So it is not a decision procedure.
Of course this raises a question
about the rules or valid argument forms that one is given, by one’s method, for
constructing a proof: how do we know
that they are up to the task of supplying a proof for any valid argument,
that is, for any argument that can be independently shown to be valid by a
truth-table test? How do we know that
when we cannot see how the proof should go, the problem is with us and not with
the inadequacy of our rules?
Mathematical logicians call this question about a method or particular
system of proof the question of its completeness. We want our method of proof to be complete in
the sense that its rules of inference or designated valid argument forms for
constructing proofs will yield a proof of any valid argument form, given enough
ingenuity on our part. You will be happy
to know that the method of proof for truth-functional sentential logic in Chs.
4 and 5 is in fact complete. Not
only is it complete, it contains more valid rules of inference than needed. And more is better here, because it increases
the chances of finding a shorter proof.
Proving such completeness, though, would be a topic for more advanced logic
courses.
Our method of proof will ultimately
involve three different kinds of rules, 20 in all. We will be introduced to the 3rd kind only in
ch. 5. We study the first two kinds in
ch. 4. And we will begin today with 8
rules that express some basic, and pretty intuitive, valid argument forms. Each rule has a name. The most obvious ones can be thought of as
‘introduction’ and ‘elimination’ rules.
Let’s start with conjunction and disjunction. The introduction rule for conjunction tells
us how to get a conjunction – i.e., a sentence whose main connective is
conjunction -- as a new line in the proof we are constructing. We can do it only by conjoining two previous
lines of the proof (in any order). The
elimination rule for conjunction tells us how to infer something from a
conjunction that we already have; it tells us that we may infer either of the
conjuncts as a new, separate line.
Appropriately enough, the Intro rule for conjunction is called
“Conjunction” or “Conj” for short; the Elim rule for conjunction is called
“Simplification” or “Simp” for short.
Here is how these rules are represented:
Conjunction (Conj.): p
                     q /:. p·q

Simplification (Simp.): p·q /:. p
I hope that it is clear that these
are valid argument forms. It is
important that the rules are formulated with statement variables. Given that the forms are valid, we then know
that any substitution instance of them is also valid.
The Intro rule for disjunction is
called “Addition” or “Add” for short. It
says that given a previous line in the proof we are constructing, we may enter,
as a new line, the disjunction of that previous line with anything (in either
order). After all, if we are supposing
that a sentence is true, then we should hold that the result of disjoining
anything to that sentence will also be true.
The Elim rule for disjunction is “Disjunctive Syllogism” or “DS”. It says that if we have a disjunction as a
previous line, and have the negation of one of the disjuncts as another line,
then we may infer the other disjunct as a new line. A disjunction is true only if at least one of its disjuncts
is true. If it isn't the left
disjunct, say – because we have its negation as a previous line – then it must be the
right disjunct. Here is how these rules
are represented:
Addition (Add): p /:. p v q

Disjunctive Syllogism (DS): p v q
                            ~p /:. q
We actually have four rules for the
material conditional. The most general
Intro rule for the conditional is not introduced until ch. 5. But one rule that could be thought of as a
kind of limited Intro rule for the conditional is called “Hypothetical
Syllogism” or “HS”, and it basically expresses the transitivity property for
the material conditional. A two-place
relation R is said to be transitive just in case, whenever xRy and yRz, then xRz. Suppose
that we already have two conditionals as lines of the proof we are
constructing, such that the antecedent of the first one is the consequent of
the second one. Then we may introduce as
a new line of the proof a conditional having as its antecedent the antecedent
of the second one and as its consequent the consequent of the first one. The rule looks like this:
Hypothetical Syllogism (HS): p ᴝ q
                             q ᴝ r /:. p ᴝ r
We then have three rules that could
be regarded as Elim rules for the conditional:
Modus Ponens (MP): p ᴝ q
                   p /:. q

Modus Tollens (MT): p ᴝ q
                    ~q /:. ~p
Constructive Dilemma (CD): p v q
                           p ᴝ r
                           q ᴝ s /:. r v s
If you have any doubts about the
validity of these argument forms you should do a truth table test. (Note that CD can equally be thought of as a
disjunction introduction rule. It tells
you how to get a certain disjunction as a new line of the proof you are
constructing, given that you already have a disjunction, together with two
conditionals whose respective antecedents are the disjuncts of the disjunction
you already have, and whose respective consequents are the disjuncts of the new
disjunction you are going to introduce as a new line.)
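If you would rather let a machine run the truth-table test, here is a Python sketch (my illustration) checking two of the less obvious forms, MT and CD, on every row; the other six forms check the same way:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a ᴝ b."""
    return (not a) or b

# Modus Tollens: ((p ᴝ q) · ~q) ᴝ ~p must come out true on every row.
mt_ok = all(
    implies(implies(p, q) and not q, not p)
    for p, q in product([False, True], repeat=2)
)

# Constructive Dilemma: ((p v q) · (p ᴝ r) · (q ᴝ s)) ᴝ (r v s).
cd_ok = all(
    implies((p or q) and implies(p, r) and implies(q, s), r or s)
    for p, q, r, s in product([False, True], repeat=4)
)

print(mt_ok and cd_ok)  # True: both forms are valid
```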
All 8 of the valid argument forms
that we have introduced so far as ‘designated’ rules of inference have two
constraints on their correct use: they
may only be applied to whole sentences, and they may only be applied in one
direction. The next ten designated rules
(some of these are actually related groups
of rules) are bi-directional, and may be applied not only to whole sentences
but to parts of sentences. As a group
they are called “Valid Equivalence Forms”.
They express relations of truth-functional logical equivalence between
statement forms. The relationship of
equivalence explains both why they may be used in both directions (because if a
sentence p is logically equivalent to a sentence q, then q is logically
equivalent to p); and also why they may apply to parts of sentences. The reason
for the latter is that all we are tracking in our formal symbolic system so far
is truth-functional structure, and its bearing on the truth of a sentence. So if you replace a part of a sentence with
something that is logically equivalent to that part, then we know that it will
have to be true when and only when the original part is true. So that cannot make any difference to the
truth value of the whole sentence. It also
flows from this line of reasoning that if the part of the sentence that you are
replacing occurs more than once in the sentence, it isn’t going to matter to
the truth value of the sentence whether you replace all of its occurrences with the logically equivalent sentence or
just some.
Double Negation (DN) tells us that any sentence is logically equivalent to its double
negative.
Commutation (Comm) tells us that you can always replace a conjunction with another
conjunction that reverses the order of the conjuncts. Same for a disjunction.
Association (Assoc) tells us that a three term disjunction with grouping to the left is
equivalent to that same disjunction with grouping to the right. So also with a three term conjunction.
Contraposition (Contra) tells us that a conditional is logically equivalent to its
contrapositive.
Implication (Impl) tells us that a conditional is logically equivalent to a disjunction of
the negation of the antecedent of the conditional with the consequent (in that
order).
Tautology (Taut) tells us that any sentence is logically equivalent to a conjunction of
that sentence with itself, and to a disjunction of that sentence with itself.
Equivalence (Equiv) tells us both that a biconditional is logically equivalent to a
conjunction of a conditional with its converse, where the terms of the
conditional are the terms of the biconditional; and that a biconditional is
logically equivalent to a disjunction, one of whose disjuncts is the
conjunction of the two terms of the biconditional, the other disjunct being the
conjunction of the negations of both of those terms.
Exportation (Exp) tells us that the conjunction of
p and q materially implying r is logically equivalent to p materially implying
that q materially implies r.
DeMorgan’s Theorems (DeM) tell us, first, that the negation of a conjunction is logically
equivalent to the disjunction of the negations of its conjuncts; second, that
the negation of a disjunction is logically equivalent to the conjunction of the
negations of the disjuncts. Finally,
Distribution (Dist) is a little hard to capture in colloquial English, so I will give its
symbolic representation here. (For the
other symbolic representations, see the inside flap of the front cover of your
text.) It has two forms:
[p · (q v r)] :: [(p · q) v (p · r)]
[p v (q · r)] :: [(p v q) · (p v r)]
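These replacement rules can likewise be spot-checked row by row. A Python sketch (my illustration) verifying DeM, Impl, and Dist as genuine truth-functional equivalences:

```python
from itertools import product

all_rows_agree = True
for p, q, r in product([False, True], repeat=3):
    dem  = (not (p and q)) == ((not p) or (not q))       # DeM: ~(p·q) :: ~p v ~q
    impl = (q if p else True) == ((not p) or q)          # Impl: p ᴝ q :: ~p v q
    dist = (p and (q or r)) == ((p and q) or (p and r))  # Dist, first form
    if not (dem and impl and dist):
        all_rows_agree = False

print(all_rows_agree)  # True: each pair matches on every row
```

Because the two sides match on every valuation, replacing one with the other inside a larger sentence can never change that sentence's truth value — which is just the justification given above.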
Now, for a little logical magic. Here is an example of a proof that relies
mostly on our new replacement rules. If
you practice with these rules, using the exercises in your text, and the
answers in the back, then before too long you will acquire the ability to see
how to go about constructing a proof like this.
1. (A v B) ᴝ C          p /:. A ᴝ C
2. ~(A v B) v C         1, Impl
3. (~A · ~B) v C        2, DeM
4. C v (~A · ~B)        3, Comm
5. (C v ~A) · (C v ~B)  4, Dist
6. C v ~A               5, Simp
7. ~A v C               6, Comm
Of course, you could stare at that
for a long time and not know what to do.
Maybe the first thing to do is convince yourself that it really is
valid. It says, roughly: “Having either
one (or both) of A and B gives you C; therefore having A gives you C.” The same should hold for B, right? “Having either one (or both) of A and B gives
you C; therefore having B gives you C.” Having
obtained both of those conclusions, one could conjoin them: having A gives you
C and having B gives you C. That might
suggest to you that this conjunctive conclusion is a “distributed” version of
the premise. So maybe we can get the
conclusion we want through distribution. But as the premise stands we clearly cannot
apply distribution to it. We will need
to get rid of the conditional. That is
easily done using Impl in one direction.
But now at line 2 we have the problem of the negation, whose scope is
the disjunction that contains A as a disjunct.
If we are going to use Distribution to extract that A out, we will need
to get rid of that negation. But we can
easily do so simply by applying the appropriate De Morgan principle, in one
direction, just to that part of the line. We are almost ready to apply one of the
Distribution principles. All we have to
do is turn the disjunction at line 3 around, using Comm. Now we can apply Distribution: we, as they
say, “distribute the disjunct C over the two conjuncts ~A and ~B”, to get line 5,
whose main connective is now conjunction.
Notice that at line 5 we could infer either
conjunct by Simplification. This
corresponds to our earlier intuition that we should also be able to get B ᴝ C as a conclusion from
our premise. But of course that is not
what we want here; we want Aᴝ C as our conclusion, and so we will use Simp to infer the conjunct with
the A in it, to get line 6. Now at least
we have both A and C appearing together as the sole two atomic components of
the sentence. We are not quite there
yet, though, because A and C are not in the right order and they are not
connected by the right connective, and there is a negation symbol we don’t
want. Well, it is easy to get them in
the right order, using Comm. And then
one more step of Impl – this time in the other
direction from before – gives us exactly our conclusion. Q.E.D.!
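As a sanity check on the whole proof, one can confirm by brute force that the premise truth-functionally entails the conclusion, i.e., that no valuation makes '(A v B) ᴝ C' true and 'A ᴝ C' false. A Python sketch (my illustration):

```python
from itertools import product

def implies(x, y):
    """Material conditional: x ᴝ y."""
    return (not x) or y

# Look for a valuation making the premise true and the conclusion false.
has_counterexample = any(
    implies(a or b, c) and not implies(a, c)
    for a, b, c in product([False, True], repeat=3)
)
print(has_counterexample)  # False: the argument is valid
```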
CHAPTER 3: TRUTH TABLES
Sentence form:
An expression containing sentence variables, such that if all its
sentence variables are replaced by sentence constants, the resulting expression
stands for a particular compound sentence.
Substitution instance: A sentence obtained from a sentence
form by replacing all the sentence variables in the sentence form by sentences
(atomic or compound), making sure that every occurrence of a given sentence
variable is replaced by the same
sentence. (Note: more than one given
variable may be replaced by the same sentence, but if so, every occurrence of
each of those variables must be replaced by the same sentence.) Substitution instances of a sentence form may
have more truth-functional structure than the form of which they are an
instance, but they can never have less, and they must have the same ‘gross,
over-all’ truth-functional structure, including the same main connective.
One-one logical form of a statement: The replacement of
each statement letter of a statement by a distinct sentence variable, resulting
in a sentence form; making sure that all occurrences of the same letter are
replaced by occurrences of the same variable throughout.
Valuation:
An assignment of truth-values to the atomic statements of a compound
statement, from which the truth-value of the compound statement in which they
occur can be calculated using the truth tables of the connectives.
Truth table analysis: A method for determining the truth value of a
sentence, from knowledge of the truth values of its component sentences, or
from a given valuation of its atomic sentences.
Also, a method for determining whether a sentence is truth-functionally tautologous, contradictory, or
contingent, by considering the truth table for its one-one sentence form. Also a method
for determining the truth-functional validity or invalidity of arguments and
argument forms, and the truth functional consistency or inconsistency of sets
of sentences or sets of sentence forms.
Tautology: A statement whose one-one logical
form guarantees that it is true. Every
tautology is a substitution instance of a tautologous form. Note:
if a statement is tautologous, then every substitution instance of its one-one
form will also be tautologous.
Tautologous statement form: a statement form that has all T’s
under its main connective in its truth-table.
Contradiction: a statement whose one-one logical
form guarantees that it is false. Every
contradiction is a substitution instance of a contradictory statement form. Note:
if a statement is a contradiction, then every substitution instance of its
one-one form is also a contradiction.
Contradictory statement form: A statement form that has all F’s
under its main connective in its truth table.
Contingent statement: a statement not guaranteed by its one-one
logical form alone to be either true or false. Note: it is not the case that every
substitution instance of a contingent statement’s one-one form is
contingent. Some of them are, but others
will be tautologous and still others contradictory.
Contingent statement form: A statement form that has at least
one T and one F under its main connective in its truth-table.
Three easy
pieces.
1.
Either snow is white or it is not white.
W: Snow is white
W v ~W (statement)
p v ~p (one-one logical form)
other substitution instances of p v ~p:
V v ~V; ~B v ~~B; (SᴝW) v ~(SᴝW)

p | p v ~p
T |   T
F |   T

So: p v ~p is a tautologous statement form, and W v ~W is a tautology.
Question: is (SᴝW) v ~(SᴝW) also a tautology? Why? Let's check our intuitions with a truth-table
for its more detailed one-one logical form.

              ↓
p q | (p ᴝ q) v ~(p ᴝ q)
T T |    T    T    F
T F |    F    T    T
F T |    T    T    F
F F |    T    T    F

So: all T's under the main connective makes it a tautologous statement form.
2. John is both married and not married.
M: John is married
M · ~M (statement)
q · ~q (one-one logical form)

q | q · ~q
T |   F
F |   F

So: q · ~q is a contradictory statement form, and M · ~M is a contradiction.
3. If Sue is not a logician then she is a logician.
L: Sue is a logician
~L ᴝ L (statement)
~r ᴝ r (one-one logical form)

r | ~r ᴝ r
T |    T
F |    F

So: this is a contingent statement form, and ~L ᴝ L is a contingent statement.
Substitution instances of ~r ᴝ r:
~C ᴝ C; ~(B v ~B) ᴝ (B v ~B); ~(D · ~D) ᴝ (D · ~D)
The first of these is also contingent; the second is tautologous (why?) and the third is a contradiction (why?).
Note: In example 1 above, we were able to determine
that a sentence was a tautology, and therefore true, just by virtue of its
logical form alone. In example 2 above,
we were able to determine that a sentence was a contradiction, and therefore
false, again just by virtue of its logical form alone. So sometimes logic alone
can tell us whether or not a sentence is true or false. But in example 3 we determined that the
sentence was a truth-functional contingency, so the question of its truth or
falsity was not thereby determined.
Here, then, is a case where we need to know whether or not the atomic
sentence (in this case "Sue is a logician") is true or false. That is, we need a valuation. Once we are
given a valuation for the atomic sentence components (hopefully the valuation
with the correct information about Sue), then and only then can we determine a
truth value for 3.
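For those who like to compute: the three classifications above can be checked mechanically by brute-forcing the truth table. Here is a minimal Python sketch (mine, not part of the text; `classify` is an assumed helper name, and a form is given as a function of its variables' truth values):

```python
from itertools import product

def classify(form, num_vars):
    """Evaluate `form` on every valuation of its variables and classify it."""
    values = [form(*row) for row in product([True, False], repeat=num_vars)]
    if all(values):
        return "tautologous"
    if not any(values):
        return "contradictory"
    return "contingent"

# Example 1: p v ~p
print(classify(lambda p: p or not p, 1))           # prints "tautologous"
# Example 2: q . ~q
print(classify(lambda q: q and not q, 1))          # prints "contradictory"
# Example 3: ~r implies r, i.e., (not ~r) or r
print(classify(lambda r: (not (not r)) or r, 1))   # prints "contingent"
```

This is exactly the truth-table method: every line of the table is one valuation generated by `product`.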
SOME
MORE DEFINITIONS
logical
equivalence: a tautology whose main connective is ‘≡’;
therefore, a material equivalence whose truth can be determined by means of
logic alone.
p and q
are logically equivalent sentences
just in case p≡q is a tautology, i.e., is a logical equivalence.
logical
implication: a tautology whose main connective is ‘ᴝ’;
therefore a material implication whose truth is determined by means of logic
alone.
p
logically implies q just in case ‘pᴝq’ is a tautology,
i.e., a logical implication; and
therefore it is not possible for p
to be true and q false.
corresponding
conditional of an argument: the conditional whose antecedent is the
conjunction of the argument's premises, adding parentheses where appropriate,
and whose consequent is the argument's conclusion. An argument is valid if and only if its corresponding
conditional is a logical implication, i.e., a tautology.
counterexample
set: the set consisting of the premises of an argument together with the
denial of the argument’s conclusion. An
argument is valid if and only if its counterexample set is inconsistent (i.e.,
there is no valuation – no assignment of truth values to its atomic components
– on which all of its member sentences turn out true).
SOME
EXAMPLES OF THE ABOVE
Consider the following sentence forms:
~(p · q), ~p v ~q.
Are they logically equivalent?
If so, they should have the same truth values on the same lines of their
truth tables; alternatively, the sentence formed by making them the terms of a
material equivalence should be a logical equivalence, and so a tautology (all
Ts under its main connective). Check it
out!
Does p logically imply qᴝp?
If so, then there should be no lines of their truth tables making p true
and qᴝp false; alternatively, it does just in case the material
conditional pᴝ(qᴝp) is a tautology (all Ts under its main connective). Check it out, and also check out whether ~p
logically implies pᴝq.
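Both checks can be run mechanically. A sketch (mine, not from the text; `equivalent` and `implies` are assumed helper names) that exhausts all valuations:

```python
from itertools import product

def equivalent(f, g, num_vars):
    """Logically equivalent iff same truth value on every valuation."""
    return all(f(*row) == g(*row) for row in product([True, False], repeat=num_vars))

def implies(f, g, num_vars):
    """f logically implies g iff no valuation makes f true and g false."""
    return all((not f(*row)) or g(*row) for row in product([True, False], repeat=num_vars))

# ~(p . q) vs ~p v ~q
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))   # prints True

# Does p logically imply q > p ?
print(implies(lambda p, q: p,
              lambda p, q: (not q) or p, 2))            # prints True

# Does ~p logically imply p > q ?
print(implies(lambda p, q: not p,
              lambda p, q: (not p) or q, 2))            # prints True
```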
Is the following a valid argument?
pᴝr, ~sᴝ~r /:. pᴝs
It is valid just in case no assignment of truth
values to its sentence variables both makes all of its premises true and its conclusion
false; also, just in case its corresponding conditional is a tautology; also,
just in case its counterexample set is inconsistent. Let's check all three.
                          ↓
p r s | pᴝr  ~sᴝ~r  /:. pᴝs
T T T |  T     T         T
T T F |  T     F         F  *
T F T |  F     T         T
T F F |  F     T         F  *
F T T |  T     T         T
F T F |  T     F         T
F F T |  T     T         T
F F F |  T     T         T
The only lines where the
conclusion is false are lines 2 and 4.
But on both of those lines, one of the premises is false also. So there is no line on which the premises are
true and the conclusion false. So the argument is valid.
This is a good place to talk
about the so-called ‘shorter truth-table
test for invalidity’. The usefulness
of this method depends a lot on what the sentences or sentence forms are. To check for invalidity, look for a valuation
(corresponding to a line of the truth table) making all the premises true and
the conclusion false. If you find one,
the argument is invalid; if not, it is valid.
In our example the conclusion is a material conditional, and there is
only one way for a material conditional to be false: when its antecedent is
true and its consequent is false. So set
p as T and s as F, and then see whether or not there is a way to make the
premises both true. The first premise is
pᴝr. Since p is T, the only way
for that premise to be true is if r is also T.
But if r is T then ~r is F, and so the only way for the second premise,
~sᴝ~r, to also be true is if ~s is also F. But then s would have to be true, but then
the conclusion wouldn’t be false. So the
argument is valid. Note that this method
would be a lot more work if, e.g., the conclusion happened to be a conjunction,
since in that case there would be 3 ways for it to be false.
Corresponding conditional:

                        ↓
p r s | ((pᴝr) · (~sᴝ~r)) ᴝ (pᴝs)
T T T |    T   T    T     T    T
T T F |    T   F    F     T    F
T F T |    F   F    T     T    T
T F F |    F   F    T     T    F
F T T |    T   T    T     T    T
F T F |    T   F    F     T    T
F F T |    T   T    T     T    T
F F F |    T   T    T     T    T

It is all Ts under the main connective, so the corresponding conditional is a logical implication, a tautology. So the argument is valid.
Counterexample set:

p r s | { pᴝr ,  ~sᴝ~r ,  ~(pᴝs) }
T T T |    T       T        F
T T F |    T       F        T
T F T |    F       T        F
T F F |    F       T        T
F T T |    T       T        F
F T F |    T       F        F
F F T |    T       T        F
F F F |    T       T        F

There is no line making all of
the sentences true, so the sentences form an inconsistent set. So the premises of the argument together with
the negation of its conclusion form an inconsistent set. So whenever all the premises are true, the
negation of the conclusion must be false, and so the conclusion true. So the argument is valid.
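All three tests can be run in a few lines of code. A sketch (mine, not from the text) for the argument pᴝr, ~sᴝ~r /:. pᴝs:

```python
from itertools import product

def imp(a, b):
    return (not a) or b  # material conditional

no_bad_line = True         # test 1: no line with true premises, false conclusion
cc_is_tautology = True     # test 2: corresponding conditional all Ts
ce_set_consistent = False  # test 3: is the counterexample set satisfiable?
for p, r, s in product([True, False], repeat=3):
    prem1, prem2, concl = imp(p, r), imp(not s, not r), imp(p, s)
    if prem1 and prem2 and not concl:
        no_bad_line = False
    if not imp(prem1 and prem2, concl):
        cc_is_tautology = False
    if prem1 and prem2 and not concl:  # same search, read as a consistency test
        ce_set_consistent = True

print(no_bad_line, cc_is_tautology, not ce_set_consistent)  # prints True True True
```

Note that tests 1 and 3 inspect the very same condition on each line, which is exactly why validity and counterexample-set inconsistency coincide.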
The text also talks about a ‘shorter truth table test for consistency’. If you can find an assignment of truth values
making all of the sentences or sentence forms of the set true, then it is a
consistent set (and so not inconsistent).
Again, the usefulness of this method will vary. There is only one way to
make a conjunction or a negation true, but three ways to make a conditional
or a disjunction true.
CHAPTER 2: THE LOGIC OF TRUTH
FUNCTIONS
SOME SYNTACTICAL CONCEPTS:
Sentence connective: a term or phrase of English used to
make a larger sentence from two smaller ones, or used to form the negation of a
sentence.
Atomic sentence: a sentence that contains no
sentence connectives.
Compound sentence: a sentence that contains at least
one sentence connective.
Component sentences: the smaller sentences from which a
compound sentence has been formed by the use of sentence connectives.
SOME SEMANTICAL CONCEPTS:
Statement:
the use of a (declarative) sentence that has a definite fixed truth
value.
Truth-functional sentence connective: a sentence connective such that the
truth values of the sentences formed by its use are completely determined by
the truth values of their component sentences.
SOME ELEMENTS OF OUR FORMAL SYSTEM OF
SENTENTIAL LOGIC:
Sentence constant: a capital letter (A, B, C,…) used
to abbreviate a particular English sentence,
atomic or compound.
Statement variable: A lower case letter starting from p
(p, q, r,…) which represents no particular statement, but for which statements
can be substituted.
Truth-functional operator symbols: symbols that express particular
truth-functional sentence connectives, and which are used with sentence
constants and statement variables to form compound sentence forms whose truth
values are completely determined by the truth values of their component
sentence forms.
Five truth-functional operator
symbols:
For negation: ~ ; called “tilde”; “~A” reads “It is not the
case that A”, and is true just in case A is false; otherwise it is false.
For conjunction: · ;
called “dot”; “A·B” reads “It is
the case both that A and that B”, and is true just in case both A and B are
true; otherwise it is false.
For disjunction: v ; called “wedge”; “AvB”
reads “Either it is the case that A or it is the case that B”, and is true just
in case at least one of A and B (possibly both) is true; otherwise it is
false.
For material conditional: ᴝ ; called "horseshoe"; "AᴝB" reads "If it is the case that A
then it is the case that B", and is true unless A is true and B is false, in
which case it is false.
For material biconditional: ≡ ; called "tribar"; "A≡B" reads "It is the case that A if
and only if it is the case that B", and is true just in case A and B have the
same truth value.
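These five interpretations can be written out as Python functions (a sketch of mine, not part of the text; the function names are simply the symbol names used above):

```python
def tilde(a):        return not a            # ~A: true just in case A is false
def dot(a, b):       return a and b          # A · B: true just in case both true
def wedge(a, b):     return a or b           # A v B: true just in case at least one true
def horseshoe(a, b): return (not a) or b     # A ᴝ B: false only when A true, B false
def tribar(a, b):    return a == b           # A ≡ B: true just in case same truth value

# The one false line of the conditional's truth table:
print(horseshoe(True, False))   # prints False
print(tribar(False, False))     # prints True
```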
These
interpretations can conveniently be represented using the truth-table format, as found in your text. The truth tables for each of our 5
truth-functional connectives should be learned.
[Some non-truth-functional
connectives in English:
It is
possible that A; It is necessary that A; It is likely that A
Fred
believes that A; Fred wishes that A
That A is
caused by B; That A occurred before B; A implies B]
Syntactic Disambiguation devices:
Parentheses: '(', ')'   Brackets: '[', ']'   Braces: '{', '}'
These are
used to make clear the intended scope of
particular occurrences of operators in compound sentences or sentence forms, where the scope of an occurrence of an operator
is the component sentence(s) or sentence form(s) that it operates on.
The main connective of a sentence or
sentence form is the connective with the greatest scope.
Consider: (1) DvG·H
and (2) ~AvB
(1) is syntactically ambiguous between (a)
(DvG)·H and (b) Dv(G·H). As we will soon
have a test to determine, these are not equivalent. In (a) the main connective
is the conjunction, whereas in (b) the main connective is the
disjunction. (a) implies the truth of H,
whereas (b) does not.
(2) is ambiguous between (c) ~(AvB) and (d)
(~AvB), which are not equivalent either.
(c) implies that B is false, but (d) does not.
Note: for negation we will adopt the
convention that the scope of a negation symbol is always the shortest complete
sentence that follows it. Therefore, on
this convention, (2) as originally given is unambiguous after all, so the outer
parentheses that were used in (d) are not necessary. The parentheses in (c) are needed to distinguish
it from (d).
NOTES ON REPRESENTING
THE TRUTH-FUNCTIONAL STRUCTURE OF ENGLISH SENTENCES
It is important to note:
first, that a given truth-functional structure can typically be expressed in
English in more than one way;
second, there will often be more expressed by an English statement than just
its truth-functional structure, yet only its truth-functional structure need be
relevant to the validity or invalidity of an argument in which the statement
appears. So it is important to be able
to detect the relevant truth-functional structure, while ignoring the ‘background
noise’ of other aspects of what the statement may express;
last but not least, there can typically be more than one equivalent way to correctly
represent in our symbolic notation the truth-functional structure of a given
English statement.
Some illustrations of these points:
The negation of “John is married” can be expressed as “It is not the
case that John is married”; “John is not married”; “John isn’t married”. (What do you make of: “Would that John were
married”?)
The truth-functional structure of
each of the following is completely captured as a conjunction: "John is
married and has two kids"; "John is married, but wants a divorce"; "John is
married, yet you would never know it". (What about: "John is married because
that's what was expected of him"?)
The following typify different ways
of expressing (inclusive) disjunction
in English: "Either John wrote a haiku, or he wrote a limerick"; "I am in debt or I won the lottery". (How
about: "The final test is either this week or next"?)
Here are some material conditionals, together with their correct representation
in our symbolic notation:
“If Sue got an A then she deserved it” (‘A’,
for “Sue got an A”; ‘D’ for “Sue deserved an A”: AᴝD);
“I will spill the drink if you bump
me” (‘S’ for “I will spill the drink”; ‘B’ for “You bump me”: BᴝS);
“I will spill the drink only if you
bump me” (SᴝB);
Note the difference in direction from the previous example. It matters!
Can you think of ways of representing this truth-functional structure
other than by using the material conditional symbol?
“Sue will pass easily, provided she
keeps up” (‘P’ for “Sue will pass easily”; ‘K’ for “Sue keeps up”:KᴝP).
(What
about the following: “Sue will pass easily unless she does not keep up”;
“Unless you try you will not succeed”?
Try to think of two ways, one that uses negations one that does not.)
Some
material biconditionals in English: “The
litmus turned red if and only if the liquid was acidic”; “Jones passed the exam
just in case he got a grade of at least 50%”.
(Think of ways to represent the truth functional structure of these
sentences other than by using tribar.)
‘Not both’
e.g.,
“Sam cannot both have his cake and eat it too”
H: Sam
has his cake.
E: Sam
eats his cake.
~(H·E) or, equivalently, ~Hv~E (Why?)
Question: Does this capture the “cannot” in the English
sentence?
‘Neither……nor’
e.g.,
“He was neither willing nor able to complete the task”
W: He
was willing to complete the task.
A: He was able to complete the task.
~(WvA) or, equivalently, ~W·~A
(Why?)
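Both "(Why?)"s above are instances of De Morgan's laws, and both can be answered by checking every valuation. A small sketch of mine (not from the text):

```python
from itertools import product

# Verify both De Morgan equivalences on all four valuations.
for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))   # ~(H·E) same as ~Hv~E
    assert (not (a or b)) == ((not a) and (not b))   # ~(WvA) same as ~W·~A
print("Both equivalences hold on every valuation.")
```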
Question: Can we represent truth-functionally what is
essential to our definition of (deductively) valid argument: an argument is valid if and only if it is not
possible for all of its premises to be true and its conclusion false? Let's try:
V: The
argument is valid
P: All of its premises are true
C: Its conclusion is true
V ≡ ~(P·~C)
Does
that capture the notion of validity? If not, what is left out? (Try to think of an argument that satisfies
the truth-functional condition just articulated, but is not valid.)
JUSTIFYING THE TRUTH-TABLE INTERPRETATION OF
THE MATERIAL CONDITIONAL
   p q | pᴝq | compare: ~pvq
1. T T |  T  |     T
2. T F |  F  |     F
3. F T |  T  |     T
4. F F |  T  |     T
As you
can see, the material conditional is truth-functionally equivalent to a certain
disjunction (i.e., it has the same truth values for the same component truth
values). We could introduce the material conditional by defining it in
terms of disjunction and negation; strictly, we
don't even need it. Still, our goal here
is to design a tool that enables us to represent the truth-functional structure
of English sentences and arguments in a natural way, and English is rife with
conditionals. So as a way of justifying
our truth table for horseshoe, let’s try to systematically explain each
numbered line of it in terms of our intuitions about the validity and
invalidity of various patterns of English reasoning involving conditionals, in
terms of an interpretation of that conditional as our truth-functional material
conditional. We will consider 4
examples, one for each numbered line of the truth-table.
If the litmus turned red, the liquid was acidic.
The litmus turned red.
Therefore,
The liquid was acidic.
This
argument intuitively satisfies our definition of valid argument. It has the form: p, pᴝq /:.q. To make it invalid,
there would have to be an assignment of truth values making pᴝq and p
both true, but q false. But the only
line where p is true and q is false is line 2 of the truth table, but that line
makes pᴝq false.
Consider
next the following argument:
If 5 ≥ 3 then 5 ≥ 2.
5 ≥ 3. Therefore
It is not the case that 5 ≥ 2.

This argument does not, intuitively, satisfy our definition of valid argument: its
premises seem both true yet its conclusion seems false. It has the truth-functional form: pᴝq, p /:. ~q. In order for the
conclusion to be true, q would have to be false. The only line of the truth table on which the
premises are both true is line 1. But on
that line q is true, making ~q false. So
the conclusion is false and the argument is invalid.
Now
consider this argument:
If Jimmy Pattison wins the lottery, he is a millionaire.
Jimmy Pattison does not win the lottery. Therefore
Jimmy Pattison is not a millionaire.
This
argument is also intuitively invalid.
After all, as we know, Pattison is already a millionaire, many times
over, and lotteries had nothing to do with it.
The argument’s truth-functional structure is given by: pᴝq, ~p/:.~q.
Line 3 of our truth table is the only line where we can assure the
invalidity of this form, by assigning the value T to pᴝq. This is because the conclusion, as we know,
is false, and we are supposing that the premise that he does not win the
lottery is true, which requires that the claim that he wins the lottery be
false, as on line 3. For the argument to
be shown invalid, both premises must be true, given that the conclusion is
false. So we must assign the value T to
the other premise, i.e., the conditional.
Finally,
along the same lines, consider the following invalid argument:
If Sue is a bachelor, then Sue is male.
Sue is not a bachelor.
Therefore
Sue is male.
So the
form is: pᴝq, ~p/:.q. The truth of the first premise logically
follows directly from the definition of ‘bachelor’, of course, but that is not
a merely truth-functional consideration. And we can imagine that the other
premise is also true, because we can imagine that Sue is female (this is not
the ‘boy named Sue’ of Johnny Cash’s famous song!). That means that the claim that Sue is a
bachelor must be false. Line 4 is the only
line of the truth-table where we can represent the invalidity of this argument,
but that requires assigning the value T to the other premise, i.e., the
conditional. If we assigned the value F
to the conditional on that line, we would have no way of explaining the invalidity
of the argument in terms of its truth-functional structure involving a
conditional.
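The four argument forms just discussed can all be checked mechanically against the horseshoe truth table. A sketch of mine (the helper `valid` is an assumed name, not from the text):

```python
from itertools import product

def valid(premises, conclusion):
    """Valid iff no valuation makes all premises true and the conclusion false."""
    return not any(all(prem(a, b) for prem in premises) and not conclusion(a, b)
                   for a, b in product([True, False], repeat=2))

imp = lambda a, b: (not a) or b  # material conditional

# Line 1: p>q, p /:. q   (the litmus argument) -- valid
print(valid([lambda p, q: imp(p, q), lambda p, q: p], lambda p, q: q))          # prints True
# Line 2: p>q, p /:. ~q  (the 5 >= 3 argument) -- invalid
print(valid([lambda p, q: imp(p, q), lambda p, q: p], lambda p, q: not q))      # prints False
# Line 3: p>q, ~p /:. ~q (the lottery argument) -- invalid
print(valid([lambda p, q: imp(p, q), lambda p, q: not p], lambda p, q: not q))  # prints False
# Line 4: p>q, ~p /:. q  (the bachelor argument) -- invalid
print(valid([lambda p, q: imp(p, q), lambda p, q: not p], lambda p, q: q))      # prints False
```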
PHILOSOPHY 110:
INTRODUCTION TO LOGIC
AND REASONING
INSTRUCTOR: Phil Hanson
TAs:
Tiernan Armstrong-Ingram, Chris Spiker
**Important Announcement: no tutorials the first week of
classes. The first tutorials will be next
Friday, i.e., January 13th.
Text: Logic
and Philosophy, 11th Ed.; Hausman, Kahane, Tidman
Features of text: (relatively) error free; glossary of key terms at the end
of chapters in which they are introduced; lots of exercises with solutions to
even-numbered ones.
Website: go to www.sfu.ca/~hanson and follow the links. So far, you
will find the course description and a syllabus (including important exam and
assignment dates) plus office hrs. and coordinates for the 3 of us.
Reading Assignment for next week:
for Wed., ch. 1; for Fri., ch. 2
pp. 19-33 (i.e., the first 9 sections).
Today:
We begin to introduce central concepts from ch. 1: argument; deductively valid argument; sound argument; and consistent set of statements – if we get
that far.
Df. argument:
a series of declarative sentences (or assertions), the (typically) last one
of which ('the conclusion') is claimed
to be supported by, or asserted on the strength of, the others ('the premises').
Consider whether the following satisfy this definition.

A.
1. All ravens are black.
2. Neato!
3. Is that one over there?
No (why?)

B.
1. All ravens are black.
2. That is a raven in the tree. (And)
3. It is black.
No (why?)

C.
1. All ravens are black.
2. That is a raven in the tree. Therefore
3. It is black.
Yes.
Df. deductively valid argument (hereafter
‘valid argument’): an argument whose conclusion must be true, provided only
that all of its premises are true.
Some invalid arguments.

D.
1. This is a logic course.  T
2. You are all persons.  T  Therefore
3. SFU is on

E. ("an inductive argument")
1. I have observed many ravens.  T
2. They were all black.  T  Therefore
3. All ravens are black.  F

F. ("a strong inductive argument")
1. Lotto 649 is a fair lottery.  T
2. Jones has bought one ticket.  T
3. A million tickets were purchased.  T  Therefore
4. Jones will not win.  F
Questions: Must the conclusion of a valid argument be
true? If not, under what circumstances
must it be true?
Let’s
see. Some valid arguments:
G.
1.
Your logic instructor has purple hair. F Therefore
2.
Your
logic instructor has hair. T
H.
1.
All cats are purple. F
2.
Your
logic instructor is a cat. F Therefore
3.
Your
logic instructor is purple. F
I.
1.
7
is greater than 5. T
2.
5
is greater than 2. T Therefore
3.
7
is greater than 2. T
J.
1. 7 is greater than
5. T
2. 5 is greater than
8. F
Therefore
3. 7 is greater than
8. F
[Why is this
latter valid? Because the relation of
being greater than is transitive: i.e., because if x is greater than y and y is
greater than z, then x must be greater than z.]
So, a valid
argument's conclusion needn't be true; it can be false, provided at least one
of its premises is false.
Df. sound
argument: a valid argument, all of whose premises are true.
Question: Must the conclusion of a sound argument be
true? Yes (Why?)
The central hypothesis of deductive
logic: the validity
of an argument is not a function of the specific content of its premises and
conclusion, but is completely determined by the formal, structural pattern that it exhibits. (Thus: ‘formal logic’)
Standard First Order Logic:
the general, systematic, and complete representation, in a symbolic
system, of argument forms or patterns whose deductive validity depends solely
on their truth-functional structure,
or on their truth-functional plus
predicative and (first order) quantificational structure.
Of course,
the quasi-technical notions above (the ones in italics) still have to be
explained. That is what this course is
about. This course will be an
introduction to Standard First Order Logic, a powerful and useful tool of
critical analysis. Roughly the first
half of the course will cover the logic of truth functions. In the second half we will add predicates
(including relations and the special relation of numerical identity) and
quantifiers.
Df. consistent
set of statements (or declarative sentences): a set of statements whose
members can all be true together.
Df. inconsistent set of statements: a set of statements whose members cannot all
be true at once.
[Questions:
must all the members of a consistent set be true? Must any of them be true?]
A consistent
set: {“Hanson is bald”, “Hanson is a
bachelor”, “Hanson is happy”}
An
inconsistent set: {“Hanson is a
bachelor”, “Hanson is married”}
Let the
“negation” of a statement S be the new statement formed by adding to the
beginning of S the words “It is false that”.
Thus if S is the statement “God exists”, then the negation of S is the
statement “It is false that God exists”.
Note: a statement, S, together with its negation always form an inconsistent set; e.g.:
{"Hanson is a sexagenarian", "It is false that Hanson
is a sexagenarian"}
AN IMPORTANT RELATIONSHIP BETWEEN
VALIDITY AND CONSISTENCY:
AN ARGUMENT IS VALID IF
AND ONLY IF THE SET OF STATEMENTS WHOSE MEMBERS ARE ALL OF THE ARGUMENT'S
PREMISES PLUS THE NEGATION OF ITS CONCLUSION IS INCONSISTENT.
So if we had a way of testing a set
of statements for inconsistency, we would thereby have a way of testing an
argument for validity: namely, by forming the set whose members consisted of
the premises of the argument together with the negation of the conclusion. If that set proved inconsistent by our test,
then the argument is valid. But if the
set proved consistent by our test, then the argument would be invalid.
More inconsistent sets:
{“Your logic instructor has purple
hair”, “Your logic instructor does not have hair”}
{“All cats are purple”, “Your logic
instructor is a cat”, “Your logic instructor is not purple”}
A Valid Argument?
1. If the litmus turned red, then the
liquid is acidic.
2. The litmus turned red. Therefore,
3. The liquid is acidic.
If so, the following set must be
inconsistent:
{“If the litmus turned red, then the
liquid is acidic”, “The litmus turned red”, “It is not the case that the liquid
is acidic”}
Is it? How can we tell for sure? (Rigorous methods will be developed later in the course.)

A Valid Argument?
1. If the paper burned, then oxygen was present.
2. Oxygen was present. Therefore
3. The paper burned.
If so, then the following set must be
inconsistent:
{"If the paper burned, then oxygen
was present", "Oxygen was present", "It is not the case that the paper burned"}
Is it? Why or why not?
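The two sets above can be checked by brute force. A sketch of mine (not from the text; `consistent` is an assumed helper name), using R for "the litmus turned red" / "the paper burned" and A for "the liquid is acidic" / "oxygen was present":

```python
from itertools import product

def consistent(sentences, num_vars):
    """True iff some valuation makes every member of the set true."""
    return any(all(s(*row) for s in sentences)
               for row in product([True, False], repeat=num_vars))

# First set: {if R then A, R, not-A} -- the modus ponens counterexample set
modus_ponens_set = [lambda r, a: (not r) or a,
                    lambda r, a: r,
                    lambda r, a: not a]
print(consistent(modus_ponens_set, 2))       # prints False: inconsistent, so valid

# Second set: {if R then A, A, not-R} -- affirming the consequent
affirming_consequent = [lambda r, a: (not r) or a,
                        lambda r, a: a,
                        lambda r, a: not r]
print(consistent(affirming_consequent, 2))   # prints True: consistent, so invalid
```

The second set is satisfied by the valuation on which R is false and A is true (the paper didn't burn, yet oxygen was present), which is exactly why the second argument is invalid.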
Question: Why is Standard First Order Logic centrally concerned
with validity, but not so much with soundness?
A TALE OF TWO
ARGUMENTS
Leibniz:
If a God who is all
powerful, all knowing, and wholly good exists, then this is the best of all
possible worlds.
Such a God exists. Therefore,
This is the best of all
possible worlds.
Voltaire:
If a God who is all
powerful, all knowing, and wholly good exists, then this is the best of all
possible worlds.
It is not the case that
this is the best of all possible worlds.
Therefore,
It is not the case that
such a God exists.
Questions:
1. Which of these arguments is valid?
2. Which of these arguments is sound?
3. What can logic tell us about these
arguments?
Logic can, and does, tell us that both arguments are valid, but it cannot
tell us which, if any, is sound. It cannot, because to do that it would also
have to be able to tell us whether all the premises are true. But those premises, if true, are not true by
virtue of logic alone.
But logic does tell us something else
that is interesting about these arguments: they cannot both be sound. (This is not to rule out the possibility that
neither argument is sound.)
Why?
Well, if they are both sound, then, by the definition of “sound” all of
their respective premises and conclusions must actually be true, and therefore, trivially, it must be possible for them all to be true together. Therefore, by the definition of a consistent
set, the premises and conclusions of both arguments collected together into one
big set must form a consistent set. But
clearly that is not the case, since, e.g., that set would have as members both “This
is the best of all possible worlds” and “It is not the case that this is the
best of all possible worlds”, and clearly only one of these can be true.
(Technical side bar: how many members would the set just alluded
to have?
Answer: 5. Why? Clue: it is because
of what we mean by a set.)