Hi Steve, Nicky and others,
Nicky, your advice is helpful, and I will use it. I wonder if Steve has advice on how to deal with plagiarism using these AI tools?
Ronda
On Dec 28, 2022, at 2:27 PM, Nicky Didicher <didicher@sfu.ca> wrote:
Thanks, Sam, for mentioning this, and I'm interested in hearing what the AIAs have been discussing, James.
I hadn't heard of ChatGPT before, but I just spent half an hour getting it to generate English papers for me. They're all about four paragraphs long and C- or D quality, but the program was also able to generate lists of appropriate quotations to use as
evidence (not always the best quotations) for both a work in public domain (a Jane Austen novel) and a work currently in copyright that shouldn't be available full-text online legally (a Rebecca Stead novel). The bot's plot summaries and critical assessments
were paraphrases of existing ones, but not close enough to be detected as plagiarism, and it can create new versions of those paraphrases instantly. This means that we can't tell classes "I've already put this essay topic through ChatGPT, so I'll recognize
if you use it."
Maybe we could suspect that a C paper's thesis and evidence were generated by ChatGPT, but I don't think we could prove it. Its idea of writing a conclusion is to repeat and rephrase its introduction, and the writing style is completely bland and repetitive,
but those are true of D or C- English papers in general. If we're lucky, the generated essay will have a big error in it (one of the Rebecca Stead ones I asked for identified the wrong character as Black and the other used evidence that didn't really make
sense for the topic), and we can ask the student questions to show whether they actually read the material they were supposed to.
I think this situation may end up being like one in which a student asks or pays someone else to write a paper for them, but the paper *isn't* obviously way above the student's writing or thinking level as demonstrated in other assignments. We can interview
the student and ask them questions about their work and hope they can't explain it, but we won't be able to prove they didn't write it themselves.
Perhaps as teachers the best we can do is make fun of ChatGPT in class when we're talking about academic dishonesty and say how we've tried it out on our essay topics and never gotten anything back worth more than a C-.
Nicky
From: James Fleming <james_fleming@sfu.ca>
Sent: December 28, 2022 12:57:38 PM
To: Sam Black; academic-discussion@sfu.ca
Subject: Re: Some advice please re. ChatGPT
Coincidentally those of us currently serving as departmental Academic Integrity Advisors are having a chat about this issue on our own list--not with regard to policy, but pedagogy and evaluation. Would there be interest in broadening the discussion? JDF
James Dougal Fleming
Professor, Department of English
Simon Fraser University
Burnaby/Vancouver,
British Columbia,
Canada.
The truth is an offence, but not a sin.
-- Bob Marley
From: Sam Black <samuel_black@sfu.ca>
Sent: December 28, 2022 12:24 PM
To: academic-discussion@sfu.ca
Subject: Some advice please re. ChatGPT
Hi All,
Does anyone know if any policy guidelines have been issued by SFU re. ChatGPT and academic dishonesty? Specifically, what would SFU accept as dispositive evidence that an essay had been generated using ChatGPT or similar AI software? Obviously, it will
be impossible to introduce as evidence materials that have been cut and pasted without acknowledgement.
In this vein, I recently had a chat with an engineering student (but not an SFU student!) who received an A+ on an assignment using ChatGPT.
The software could not generate an A+ paper in Philosophy (of course not!). For the moment, I'm mostly concerned with suspicious C+ papers.
Thanks in advance,
Sam
Sam Black
Assoc. Prof. Philosophy, SFU
I respectfully acknowledge that SFU is on the unceded ancestral and traditional territories of the səl̓ilw̓ətaʔɬ (Tsleil-Waututh), Sḵwx̱wú7mesh Úxwumixw
(Squamish), xʷməθkʷəy̓əm (Musqueam) and kʷikʷəƛ̓əm (Kwikwetlem) Nations.