COGS 300, Fall Semester 2006
Research in Cognitive Science: Reasoning
COGS 300 is an upper division course intended for Cognitive Science majors. The goal is to take students who have had COGS 100 (or the old 200), plus many of the lower division courses required from their choice of areas of concentration, and introduce them to some research topics in Cognitive Science. The content of COGS 300 changes from year to year, as the different instructors choose topics that they happen to be doing research in.
This term I will be investigating the topic of reasoning. My interests here are to see how this concept is employed in formal logic (what is correct reasoning? what is an abstract system?), in linguistic semantics and pragmatics (e.g., to go from what is said to what is meant, or to determine the truth conditions of natural language utterances), in artificial intelligence (particularly in the realm of knowledge representation), and in cognitive psychology (where I am interested in studies of how people actually reason).
There are no textbooks for this course; I will pass out readings as we go along, or refer you to some. Most will be in PDF and will therefore require Acrobat Reader to view. They will be downloaded onto your computer when you click on them from my webpage, and you can then view them with your copy of Acrobat Reader. Acrobat Reader is free and can be downloaded here.
This page will be updated as the course goes on, with links to other readings, homework assignments, and the like. You should monitor this page for information relevant to the course. One thing that is relevant is that I will be out of town on Thursday, September 28th and Thursday, October 5th. (I will be here on the Tuesday in-between). We will make up these times in some way or other.
The first topic is an introduction to some elementary parts of logic, especially those areas that have found their way into other fields, such as linguistics and psychology. One area is the classical Aristotelian syllogism. Here is a writeup on the syllogism and related concepts, including how to tell whether a syllogistic argument is valid. Note: the writeup contained some mistakes and was corrected on 8 September, and corrected again on 18 September. If you downloaded a version before 18 September, you should get the new version. (Sorry!)
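If you like to experiment, here is a small Python sketch (my own illustration, not part of the writeup) that checks syllogistic validity by brute force, on the modern Boolean reading where universal statements carry no existential import: a syllogism is valid just in case no assignment of sets to the terms S, M, and P makes both premises true and the conclusion false.

```python
from itertools import combinations

# Brute-force validity check for categorical syllogisms on the modern
# (Boolean) reading: terms denote subsets of a small universe, and an
# argument is valid iff no assignment makes the premises true and the
# conclusion false. A 4-element universe is ample for countermodels here.

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

SUBSETS = powerset(range(4))

def holds(form, x, y):
    """Truth of a categorical statement with subject x and predicate y."""
    if form == 'A':           # All x are y
        return x <= y
    if form == 'E':           # No x are y
        return not (x & y)
    if form == 'I':           # Some x are y
        return bool(x & y)
    if form == 'O':           # Some x are not y
        return bool(x - y)
    raise ValueError(form)

def valid(premise1, premise2, conclusion):
    """Statements are triples (form, subject, predicate) over 'S', 'M', 'P'."""
    for S in SUBSETS:
        for M in SUBSETS:
            for P in SUBSETS:
                ext = {'S': S, 'M': M, 'P': P}
                true = lambda s: holds(s[0], ext[s[1]], ext[s[2]])
                if true(premise1) and true(premise2) and not true(conclusion):
                    return False   # countermodel found
    return True

# Barbara (AAA, figure 1): All M are P; All S are M; so All S are P
print(valid(('A', 'M', 'P'), ('A', 'S', 'M'), ('A', 'S', 'P')))  # True
# AAA in figure 2: All P are M; All S are M; so All S are P
print(valid(('A', 'P', 'M'), ('A', 'S', 'M'), ('A', 'S', 'P')))  # False
```

Note that forms like Darapti (AAI-3), which are valid only given existential import, come out invalid on this Boolean reading; that difference between the Aristotelian and modern interpretations is worth keeping in mind when reading the writeup.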
We are going to discuss some psychological research on the way people deal with syllogisms. There's a lot of such research, but here's a good one that covers a lot of the earlier stuff. This is by Monica Bucciarelli and Philip Johnson-Laird (1999). You might notice that it is very long (57pp), but much of it has already been covered in class (about the background to the syllogism). Unfortunately, they use some non-standard terminology, including a redefinition of the four figures. Nonetheless, I think you can go through the background material pretty quickly, and get on to their results. (Remember that they are arguing for a "mental-models" story about how people solve syllogisms.)
Here are some papers that I think we could look at in this class. In particular I was thinking that they might be suitable for people to give presentations on. You should look them over, and if any seem to be in your area or on a topic you are interested in, send me an email.
Geurts, B. (2003). Reasoning with quantifiers. Cognition, 86, 223-251.
[some links between reasoning and formal/linguistic semantics]
Stenning, K., & van Lambalgen, M. (2004). A little logic goes a long way. Cognitive Science, 28, 481-530. [reasoning and linguistic semantics]
Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111, 3-32. [causal reasoning + AI + philosophy]
Osherson, D., Perani, D., Cappa, S., Schnur, T., Grassi, F., & Fazio, F. (1998). Distinct brain loci in deductive versus probabilistic reasoning. Neuropsychologia, 36, 369-376. [reasoning + neuroscience]
Rader, A. and V. Sloutsky (2001) Conjunctive Bias in Memory Representations of Logical Connectives. Memory and Cognition 29, 838-849.
Here is an interesting paper that was published in the most recent issue of the journal Cognitive Science. It tries to give an account of how people interpret individual premises of a syllogism, using Euler diagrams. (It also has an interesting Appendix about the history of diagrammatic methods in logic.) Anyone who wants to give a presentation on this paper, lecturing on the highlights of the study, should contact me.
In our study of semantic methods for evaluating arguments in propositional logic, I mentioned that we would look at semantic tableaux methods (some of you might have heard these things called "truth trees"). Using them requires understanding of two things: here is a general statement of how the underlying algorithm works; the algorithm makes reference to "decomposition rules", which are the second thing and are explained here. (In case you have seen a version of this before, you should note that what I am describing are signed tableaux, whereas you might have seen unsigned tableaux. This makes for some differences in the organization and in the rules, but as we will mention in class, the two methods are equivalent.)
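To make the algorithm statement concrete, here is a small Python sketch of a signed-tableau validity checker (my own simplified illustration; the formula encoding and the rule bookkeeping are my choices, not the handout's). It signs the premises T and the conclusion F, applies the decomposition rules, and declares the argument valid when every branch closes:

```python
# Signed-tableau validity checker for propositional logic (illustrative
# sketch). Formulas are nested tuples: ('not', p), ('and', p, q),
# ('or', p, q), ('imp', p, q); atoms are strings. A signed formula is a
# pair (sign, formula) with sign True for T and False for F.

def closed(branch):
    """A branch closes when some formula appears with both signs."""
    return any((not sign, f) in branch for sign, f in branch)

def expand(branch):
    """Return True iff every way of extending this branch closes."""
    if closed(branch):
        return True
    for signed in branch:
        sign, f = signed
        if isinstance(f, str):
            continue                      # atoms cannot be decomposed
        rest = branch - {signed}
        op = f[0]
        if op == 'not':                   # T ~p gives F p; F ~p gives T p
            return expand(rest | {(not sign, f[1])})
        if op == 'and':
            if sign:                      # T(p & q): both on the same branch
                return expand(rest | {(True, f[1]), (True, f[2])})
            return (expand(rest | {(False, f[1])}) and   # F(p & q): split
                    expand(rest | {(False, f[2])}))
        if op == 'or':
            if sign:                      # T(p v q): split
                return (expand(rest | {(True, f[1])}) and
                        expand(rest | {(True, f[2])}))
            return expand(rest | {(False, f[1]), (False, f[2])})
        if op == 'imp':
            if sign:                      # T(p -> q): split into F p | T q
                return (expand(rest | {(False, f[1])}) and
                        expand(rest | {(True, f[2])}))
            return expand(rest | {(True, f[1]), (False, f[2])})
    return False      # only atoms left and the branch is open: countermodel

def valid_argument(premises, conclusion):
    """Sign premises T and conclusion F; valid iff the tableau closes."""
    start = frozenset([(True, p) for p in premises] + [(False, conclusion)])
    return expand(start)

# Modus ponens: p, p -> q, therefore q
print(valid_argument(['p', ('imp', 'p', 'q')], 'q'))   # True
# Affirming the consequent: q, p -> q, therefore p
print(valid_argument(['q', ('imp', 'p', 'q')], 'p'))   # False
```

When the tableau fails to close, the open branch's atoms (signed T for true, F for false) describe a countermodel to the argument, which is exactly the semantic content of the method.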
A classic paper on conditionals was written by Ruth Byrne in 1989. It describes some facts about when people are willing to affirm or deny conditionals based on how easily they can envision there being more or fewer necessary or sufficient conditions. This would make a nice presentation topic for someone. (Let me know if you want to do this). Sorry this paper is so fuzzy. Byrne told me that she only had a scanned copy and it came out blurry. Because it was scanned, it is also quite a large file. You should be on a high speed connection if you try to download it.
Here are two more papers that you may wish to consider doing a presentation about: Lance Rips on "Knights and Knaves" and Leda Cosmides on "Evolutionary Explanations of the Wason Task". The paper by Cosmides won the 1988 best paper award in the biological sciences. It is very long and will be hard to summarize in its entirety, so whoever chooses this ought to pick some one point and discuss just that. (e.g., her famous examples that illustrate social interaction and rule-following).
Here is some relevant stuff about presentations:
The slides for Ken MacAllister's Oct. 19th presentation on the Osherson et al paper are here.
On Oct 24th, Calen Walshe will give a talk on ACT-R, and the theory of mind it presupposes, using this paper by Anderson et al as a resource. You might also be interested in checking out the ACT-R webpage. Also, at the end of the talk we discussed SOAR and CYC as alternative big-scale cognitive theories. The CYC information is below, in Ken MacAllister's email; but here are the links for ACT-R and for SOAR. Calen's Powerpoint slides are here.
On Oct 26th, Anton Venema will give a presentation based on Byrne's paper on suppression of inference (link above, but here it is again). His Powerpoint presentation is here. A paper by Bonnefon and Hilton tries to extend this topic to a different sort of conditional. Johnson-Laird and Byrne have a more recent and longer paper on conditionals in general.
On Oct 31st we will have a double-header on Knights and Knaves! Grant Christian will talk about Lance Rips's paper (link above, but here it is again). This will be followed by Rob Janousek discussing Johnson-Laird and Byrne's reply to it. Here are Grant's Powerpoint slides. Here are Rob's Powerpoint slides.
Speaking of knights and knaves problems, here is a brief illustration of how to solve them using the semantic tableaux method that we learned earlier in the course.
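For comparison with the tableau method, here is a brute-force Python sketch of the same kind of puzzle (the particular puzzle is a standard Smullyan-style example, not the one worked in the illustration): a knight's statements are all true and a knave's are all false, so we simply enumerate the possible assignments and keep those where each speaker's claim matches his type.

```python
from itertools import product

# Puzzle: A says "B is a knave"; B says "A and I are of the same type."
# Encode knight = True, knave = False. A statement is consistent with its
# speaker exactly when its truth value equals the speaker's type.

def solve():
    solutions = []
    for A, B in product([True, False], repeat=2):
        a_claim = (B is False)      # "B is a knave"
        b_claim = (A == B)          # "A and I are of the same type"
        if a_claim == A and b_claim == B:
            solutions.append(('knight' if A else 'knave',
                              'knight' if B else 'knave'))
    return solutions

print(solve())   # [('knight', 'knave')]
```

The tableau treatment does the same work declaratively: signing each statement with its speaker's truthfulness and closing the branches that contain a contradiction.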
On Nov 21st, Maria Michielli will give a presentation on how the concept of negation develops. She will focus on the paper by Roy Pea, "The Development of Negation". This is a scanned paper. That same day will see Rosa Gidora giving a talk about the Cosmides paper (link above, but here it is again). Maria's presentation slides are here. Adrian will be presenting on Thursday and his slides are here (note that this is an rtf file).
Classical logic, as well as almost all of its non-classical alternatives, is monotonic. This means that when you start with a set of premises and correctly draw a conclusion, you are guaranteed that the conclusion is a legitimate consequence of those premises, and adding another premise cannot take that guarantee away. The reason is that validity in classical logic is defined as: if all the premises are true, then the conclusion must be true as well, and adding another premise cannot take away the fact that the conclusion followed from the earlier premises. Non-monotonic reasoning (sometimes called default reasoning) is a type of reasoning in which this principle is denied: further new premises can make you retract a conclusion, even though you don't retract any of the previous premises. A classic example is:
Tweety is a bird
So, Tweety flies.
This conclusion is correctly drawn, but if you later discover
Tweety is an emu
then you are entitled to give up the original conclusion. The enterprise of nonmonotonic logic is to give a coherent set of principles that guide this sort of reasoning. You can find out a little about the general topic by looking at the Wikipedia entry, or much more by looking at the Stanford Encyclopedia of Philosophy entry. I will be talking about this sort of logic and presenting some of my own work on the psychology of reasoners who use this type of logic. This paper has a very long introduction about all the places one can find nonmonotonic reasoning, followed by a discussion of the role of "psychologism" in logic. But persevere and you will eventually get to some discussion of what valid arguments in nonmonotonic reasoning might look like, and to some studies about whether people actually reason that way.
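The Tweety example can be mimicked in a few lines of code (a toy sketch of my own, not drawn from any of the readings): the default "birds fly" applies unless it is blocked by more specific information, so adding a premise withdraws the conclusion without withdrawing any earlier premise.

```python
# Toy default reasoning: the default "birds fly" is blocked by membership
# in a known flightless kind. Facts are just a set of labels; nothing ever
# gets removed from the premise set, yet the conclusion can change.

def flies(facts):
    """Apply the default 'birds fly' unless a more specific fact blocks it."""
    if 'emu' in facts or 'penguin' in facts:
        return False                # exception overrides the default
    return 'bird' in facts          # otherwise the default applies

print(flies({'bird'}))              # True: Tweety flies
print(flies({'bird', 'emu'}))       # False: conclusion retracted,
                                    # though no premise was retracted
```

Notice the contrast with the monotonic case: the second call has a strict superset of the first call's premises, yet the conclusion flips, which is exactly what classical consequence forbids.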
We will be covering the topic of inference in areas of language. To give us a start at some fundamental issues, we might look at the famous Grice 1957 paper, where he distinguished natural from non-natural meaning, and defined the latter in terms of iterated intentions. Equally, we need to look at Logic and Conversation, by Grice. Although this circulated unpublished for a decade, it was first published in 1975. I could not find a complete copy of it online, so you should try to find it (it is in very many anthologies on philosophy of language, as well as in Grice's Studies in the Way of Words (1989)). I found these two pages online. Aha!! Adrian Tough scanned the article; here it is, and it is relevant to his presentation on Tuesday.
Please try to read all three papers for Tuesday: Cosmides, Pea, and Grice. (Links are above for all these papers.) The last presentation will be on Thursday. Grice's conversational maxims are central to his position, but the paper does not give the examples of inferences that he thought we could employ to generate implicatures, nor does it give his account of irony, metaphorical meaning, etc. If this is the sort of thing that interests you, there is plenty of work using the basic Gricean mechanism. It would make an interesting paper topic for this course, but be sure that you focus on the notions of reasoning and inference.
Here are two papers about pragmatic influences on understanding. One is by Ira Noveck, "Pragmatic inferences linked to logical terms", in I. A. Noveck & D. Sperber (eds.), Experimental Pragmatics, Basingstoke: Palgrave Macmillan; the other is Noveck et al., "Pragmatic factors in interpreting disjunctions".
When I was at the University of Alberta I regularly taught a course for first-year graduate students in Computing Science. The goal of the course was to get the students started on their research pathway, and to that end I taught about many different aspects of getting one's research going. One topic was how to read articles. The idea was that they would be coming across articles in Computing that might later become relevant to their research, and so they'd like to have some record of each article that they could check in order to discover whether they've read something relevant. This is a webpage I wrote that lists a bunch of questions they should answer about any article they read. I also recommended that they keep their notes on this in some sort of external memory. I would recommend that you do the same for any articles you are reading, especially if you think you might be going on to grad school. For the purposes of this course, I think it would be a very good exercise for you to answer these questions in your article summaries. I know the grader of your summaries, and I'm sure he'd look favourably on summaries that are written with these questions in mind.
In this course I also taught about writing style. One of the topics was ways of using a bibliography and citation method. Here is what I told them. I should mention that I told them about bibliographies and citations as part of a section on "academic honesty and plagiarism". The message was that anything you want to employ in your own writing but which originated in the writing of someone else needs to be acknowledged as such; citations and bibliographies are the ways to make this happen. You should follow some such method in your final paper for this course. Note that this writeup was geared to the style that the Department of Computing Science at the University of Alberta wanted. In the SFU Faculty of Arts and Social Sciences it may be more common to use the "method #2" that I discuss.
Ken MacAllister sent an email to the class, but here is its content:
Here are some links to things that were mentioned in today's class:
CYC the ontology of everything from Doug Lenat:
What does CYC know?
(Click the graphic for an abbreviated answer.)
Want to help teach CYC? Play FACTory:
The program that lets you build your own ontologies is Protégé:
Once you have an ontology constructed, use the Java Expert System Shell (JESS) to define production rules:
Ken Koedinger and Vincent Aleven over at Carnegie Mellon developed Cognitive Tutor Authoring Tools (CTAT):
I did a workshop with these guys last year at the AIED conference, so I have their workbooks and such if anyone is interested in playing with this tool.
Other examples of production-rule-based approaches to A.I. include BRUTUS, which can write fiction:
And SWALE, which can generate explanations for things using case-based reasoning:
I wrote about these in an article for Adbusters this past April on creativity and A.I.:
Finally, I need to do more digging on the generalization engine that runs on web queries, but the attached paper is related, as are these links:
DARPA Agent Markup Language (US Military Stuff)
IMKA knowledge representation:
Translingual summarizers over at Carnegie Mellon University:
The guy who gave the talk on generalizing from Google queries was at the ICALT 2006 conference, and I remember he was a Saudi fellow, but I need to look up his talk in the proceedings. The conference site is here:
Ken also attached a paper to his email. It is about using data-mining techniques to find association rules, using the notion of a flock of queries.