Another potentially disruptive approach? Perhaps another idea would be to *generate* one or more ChatGPT essays, paragraphs, answers, etc. for a particular question or topic and get the students to identify flaws in the thinking or reasoning. If they don't know
the material, or more to the point, if they have not actually done the work of thinking about issues around the topic or approach, they will struggle. I am feeling somewhat jaded these days: in my class this past term, most of my students did not do the readings
or even the practice exercises, because they thought the extra work was not worth it, and that they should be able to manage all the expectations of the class from reading the slides from two hours of lecture (a few even said coming to class all the time was too
much work).
So perhaps we can turn this trend on its head a bit, and get them to show they are actually thinking.
Sent from my iPad
On Dec 28, 2022, at 3:10 PM, Ronda Arab <ronda_arab@sfu.ca> wrote:
Very true, Nicky. But we’re already doing that to avoid plagiarized essays from the internet. But I’m getting the impression that if students can feed AI the prompts, even if the prompts require original thinking and quoting from the source,
the bot can come up with something that might squeak through with a low grade. We may need to expect more from even the C- essays, which will mean more students getting Ds and Fs.
Ronda
On Dec 28, 2022, at 4:04 PM, Nicky Didicher <didicher@sfu.ca> wrote:
While the calculator analogy may not hold true, Behraad is right in saying we're going to have to stress new knowledge and creativity in our assignments.
Instead of asking students to collect and summarize five articles on a subject (which the bot can do in a few seconds), we'll need to ask for reasoned judgements in which students assess the articles, compare them, etc.
Instead of asking for the kinds of essays the bot can generate (or the students can already generate using Google and Wikipedia), we're going to have to make originality an essential criterion, even at the lower-division level of courses. If it can be generated
by rehashing existing knowledge, then it fails to meet our criteria.
The stronger students will relish more emphasis on knowledge creation, and the weaker ones will struggle with it. As they do already, but perhaps they will have to cope with that a little sooner.
Good in the long run, even if more work for us now in thinking through assessments that require genuine learning.
Nicky
From: Ronda Arab
Sent: December 28, 2022 2:39:21 PM
To: Behraad Bahreyni
Cc: Nicky Didicher; James Fleming; Sam Black; academic-discussion@sfu.ca
Subject: Re: Some advice please re. ChatGPT
I don’t think so. If a student can get a passing grade on an essay about “Hamlet” without having read the play, they have learned nothing that they are supposed to learn. The point isn’t what a student can claim to know about the play; it is the process
by which they used their own brain, not an AI tool, to learn what is now written in the essay that they've handed in. I'm not a mathematician and I don't know all that a calculator can do these days, but even with a calculator in my (decades ago) first-year
calculus class, I still had to learn and understand the formulas to get good grades.
Ronda
On Dec 28, 2022, at 3:30 PM, Behraad Bahreyni <bba19@sfu.ca> wrote:
Is using AI tools to generate a unique text and using it as part of your work plagiarism?
I believe our teaching will have to evolve. As of now, the only bit of creativity in responses from GPT is in the rewording of sentences, and the system does not produce new knowledge. Our assessment methods may have to prioritize creative responses
rather than the rehashing of Google/Wikipedia results. I think of this new technological tool as akin to when we allowed calculators in exam rooms, or take-home exams in the age of the internet.
Cheers
Sent from a mobile device
On Dec 28, 2022, at 2:07 PM, Ronda Arab <ronda_arab@sfu.ca> wrote:
Hi Steve, Nicky and others,
Nicky, your advice is helpful, and I will use it. I wonder if Steve has advice on how to deal with plagiarism using these AI tools?
Ronda
On Dec 28, 2022, at 2:27 PM, Nicky Didicher <didicher@sfu.ca> wrote:
Thanks, Sam, for mentioning this, and I'm interested in hearing what the AIAs have been discussing, James.
I hadn't heard of ChatGPT before, but I just spent half an hour getting it to generate English papers for me. They're all about four paragraphs long and C- or D quality, but the program was also able to generate lists of appropriate quotations to use as
evidence (not always the best quotations) for both a work in public domain (a Jane Austen novel) and a work currently in copyright that shouldn't be available full-text online legally (a Rebecca Stead novel). The bot's plot summaries and critical assessments
were paraphrases of existing ones, but not close enough to be detected as plagiarism, and it can create new versions of those paraphrases instantly. This means that we can't tell classes "I've already put this essay topic through ChatGPT, so I'll recognize
if you use it."
Maybe we could suspect that a C paper's thesis and evidence were generated by ChatGPT, but I don't think we could prove it. Its idea of writing a conclusion is to repeat and rephrase its introduction, and its writing style is completely bland and repetitive,
but those things are true of D or C- English papers in general. If we're lucky, the generated essay will have a big error in it (one of the Rebecca Stead ones I asked for identified the wrong character as Black, and another used evidence that didn't really make
sense for the topic), and we can ask the student questions to show whether they actually read the material they were supposed to.
I think this situation may end up being like one in which a student asks or pays someone else to write a paper for them, but the paper *isn't* obviously way above the student's writing or thinking level as demonstrated in other assignments. We can interview
the student and ask them questions about their work and hope they can't explain it, but we won't be able to prove they didn't write it themselves.
Perhaps as teachers the best we can do is make fun of ChatGPT in class when we're talking about academic dishonesty and say how we've tried it out on our essay topics and never gotten anything back worth more than a C-.
Nicky
From: James Fleming <james_fleming@sfu.ca>
Sent: December 28, 2022 12:57:38 PM
To: Sam Black; academic-discussion@sfu.ca
Subject: Re: Some advice please re. ChatGPT
Coincidentally those of us currently serving as departmental Academic Integrity Advisors are having a chat about this issue on our own list--not with regard to policy, but pedagogy and evaluation. Would there be interest in broadening the discussion? JDF
James Dougal Fleming
Professor, Department of English
Simon Fraser University
Burnaby/Vancouver,
British Columbia,
Canada.
The truth is an offence, but not a sin.
-- Bob Marley
From: Sam Black <samuel_black@sfu.ca>
Sent: December 28, 2022 12:24 PM
To: academic-discussion@sfu.ca
Subject: Some advice please re. ChatGPT
Hi All,
Does anyone know if any policy guidelines have been issued by SFU re. ChatGPT and academic dishonesty? Specifically, what would SFU accept as dispositive evidence that an essay had been generated using ChatGPT or similar AI software? Obviously, it will
be impossible to rely on evidence of materials that have been cut and pasted without acknowledgement.
In this vein, I recently had a chat with an engineering student (but not an SFU student!) who received an A+ on an assignment using ChatGPT.
The software could not generate an A+ paper in Philosophy (of course not!). For the moment, I'm mostly concerned with suspicious C+ papers.
Thanks in advance,
Sam
Sam Black
Assoc. Prof. Philosophy, SFU
I respectfully acknowledge that SFU is on the unceded ancestral and traditional territories of the səl̓ilw̓ətaʔɬ (Tsleil-Waututh), Sḵwx̱wú7mesh Úxwumixw
(Squamish), xʷməθkʷəy̓əm (Musqueam) and kʷikʷəƛ̓əm (Kwikwetlem) Nations.