I can offer a perspective based on my experience with open-book exams in (my version of) BISC 102. I have given open-book mid-terms and finals since I started teaching this ~25 years ago. The armory of aids students are allowed has expanded steadily to include ever smaller and more powerful devices, the internet, Google, and now (as of the Dec 16 final) ChatGPT.
Exam performance did not improve (or worsen) over this quarter-century. I like to think that I became better at posing exam questions that require reasonable comprehension of the course material before Google can be usefully employed. Even if that isn't self-delusion, I think I could defend the notion that students are now required to have greater command of more material than before, and that I have enabled this to drift upward as their armory expanded. This speaks to the idea that we should look at ChatGPT as another teaching tool that requires us to adapt, rather than (futilely) seeking to ban its use.
I should perhaps have anticipated that students would be quick to use ChatGPT for exams, but I didn't. The TAs were alert enough to notice this during the final, and recorded the names of the students doing so. (No penalties were applied for its use, though, nor were students told to stop.) An analysis showed that students known to have used ChatGPT generally did not do better on the final than on the preceding mid-terms. In interviews with several of them after the final, they reported their assessment that ChatGPT didn't help them much, if at all.
Of course an open-book exam with a time limit is a rather different context from essay writing, and my experience with the former may not apply well to the latter. But perspectives from multiple contexts will likely be helpful as we work through this issue and decide how best to handle it.
Ron Ydenberg
Professor, Department of Biological Sciences
Director, Centre for Wildlife Ecology &
Evolutionary and Behavioural Ecology Research Group
Simon Fraser University
Burnaby, BC V5A 1S6 Canada
From: Julian Christians <julian_christians@sfu.ca>
Sent: December 28, 2022 2:37 PM
To: Steve DiPaola
Cc: James Fleming; Sam Black; academic-discussion@sfu.ca
Subject: Re: Some advice please re. ChatGPT
Hi,
Thanks everyone for chiming in, and Steve, for your offer. Steve: what I think would be useful to know is what the AI is bad at, both for designing assignments that are hard to cheat on and for knowing what our human students will need to be able to do as these AI tools become ubiquitous.
Cheers
Julian
On Dec 28, 2022, at 1:36 PM, Steve DiPaola <sdipaola@sfu.ca> wrote:
I am an SFU researcher/prof in the AI space and know and teach these systems well, so I can consult if needed. My lab builds these systems (NSERC/MITACS funded) for studies, for health and education coaches, and for the arts, and I give keynote talks (and TV interviews) about the ethical issues. So I can discuss with a group what they can do now and in the future.
-steve
For those interested in what we do at SFU with these systems: here is some work where I am chatting in real time with Picasso (video from a talk at a Cambridge conference).
We are in talks with the Van Gogh Museum for a project where visitors can talk live to Van Gogh about his life.
(And with the philosophy comment in mind:) I recently completed El Turco, a large art installation for the Japan Triennial (with artist Diemut Strebe), that showed the issues with these systems: our "Socrates" debates our GPT system in 17 dialogues. It is both right on and not at all at the same time. (See videos of the dialogues.)
Again, I can help with this effort of understanding the issues around plagiarism (as well as tools, and more so the effort to help our students understand how AI works and its implications).
Note that the visual space (AI visual generation) has equal issues for us.
steve
Steve DiPaola, PhD
Prof, Sch of Interactive Arts & Technology (SIAT)
Past Director, Cognitive Science Program
Simon Fraser University
Our book: AI and Cognitive Virtual Characters
At Simon Fraser University, we live and work on the unceded traditional territories of the Coast Salish peoples of the xʷməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and
Səl̓ílwətaɬ (Tsleil-Waututh) and in SFU Surrey, Katzie, Kwantlen, Kwikwetlem (kʷikʷəƛ̓əm), Qayqayt, Musqueam (xʷməθkʷəy̓əm), Tsawwassen, and
numerous Stó:lō Nations.
Coincidentally, those of us currently serving as departmental Academic Integrity Advisors are having a chat about this issue on our own list, not with regard to policy, but to pedagogy and evaluation. Would there be interest in broadening the discussion? JDF
James Dougal Fleming
Professor, Department of English
Simon Fraser University
Burnaby/Vancouver, British Columbia, Canada.
The truth is an offence, but not a sin.
-- Bob Marley
Hi All,
Does anyone know whether any policy guidelines have been issued by SFU re. ChatGPT and academic dishonesty? Specifically, what would SFU accept as dispositive evidence that an essay had been generated using ChatGPT or similar AI software? Obviously, it will be impossible to introduce as evidence materials that have been cut and pasted without acknowledgement, since the AI generates original text rather than copying from a source.
In this vein, I recently had a chat with an engineering student (not an SFU student!) who used ChatGPT on an assignment and received an A+. The software could not generate an A+ paper in Philosophy (of course not!). For the moment, I'm mostly concerned with suspicious C+ papers.
Thanks in advance,
Sam
Sam Black
Assoc. Prof. Philosophy, SFU
I respectfully acknowledge that SFU is on the unceded ancestral and traditional territories of the səl̓ilw̓ətaʔɬ (Tsleil-Waututh), Sḵwx̱wú7mesh Úxwumixw
(Squamish), xʷməθkʷəy̓əm (Musqueam) and kʷikʷəƛ̓əm (Kwikwetlem) Nations.