
Re: ChatGPT



2 cents (one mine):

1. I tell my students that the point of writing is never to "make an essay [or other form of text] be there." Rather, the point is to figure something out. But that means (i) identifying something that needs to be figured out. (Which means in turn (ii) being in the world, giving a damn, and all that good kind of phenomenological stuff. In parentheses because I don't normally tell them that part.)

2. (A more elegant form of 1.) Hans-Georg Gadamer on "hermeneutical consciousness"--by which he basically means our capacity to understand: "The real power of our hermeneutical consciousness is our ability to *see what is questionable*." In German, "fragwürdig": question-worthy. To me, the rise of ChatGPT etc. merely sharpens the focus where it already was: here. JDF
From: Steve DiPaola <sdipaola@sfu.ca>
Sent: August 5, 2023 4:32:11 PM
To: Yildiz Atasoy
Cc: Ronda Arab; Gerardo Otero; Nicky Didicher; academic-discussion@sfu.ca; Andrés Cisneros-Montemayor
Subject: Re: ChatGPT
 

I have been researching and teaching AI (and making AI art) for over 20 years, and I now give talks worldwide on what AI is and its issues.

 

Sorry this is long - read at your leisure: 

 

It is just not such a black-and-white situation, and it is changing fast. Nor is it just ChatGPT: new LLM systems are coming quickly now (we have several in our lab, like many researchers, that dwarf ChatGPT). For instance, you can train on your own data (say, a folder of papers, or anything else). So if you already have a lot of writing (I have 100+ papers and 20+ past grants), you can now locally train on just YOUR papers/writing (or past grants!) and use your continued thoughts as prompts to intellectually 'journey through YOUR past work' and generate something new FROM YOUR work alone. Would you call that plagiarism? Of course, the very negative side of this local dataset training that we will see is this: someone (a student, or a company that students pay) will soon find and compile every past student paper or writing assignment from a specific class and train a very specific model that students can use, with their own prompting, on that class paper dataset - and that would be very hard to detect. (Pay for model SFU_ENGL116, load it, and prompt it.) So those are the extremes of this issue.
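For the curious: the "use only YOUR papers" idea can be approximated without any training at all, by retrieving the most relevant passages from your own writing and handing them to the model as context. Here is a minimal sketch in plain Python - the paper snippets and the `retrieve` helper are made up for illustration (real systems use vector embeddings, not keyword counts):

```python
# Minimal sketch: pick the passages from your own papers most relevant
# to a prompt, to prepend as context for an LLM. Retrieval, not
# fine-tuning; all names here are illustrative, not a real API.
from collections import Counter
import re

def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def score(passage, prompt_tokens):
    """Count how many prompt tokens appear in the passage."""
    passage_tokens = Counter(tokenize(passage))
    return sum(passage_tokens[t] for t in prompt_tokens)

def retrieve(passages, prompt, k=2):
    """Return the k passages most relevant to the prompt."""
    prompt_tokens = set(tokenize(prompt))
    ranked = sorted(passages, key=lambda p: score(p, prompt_tokens),
                    reverse=True)
    return ranked[:k]

# Hypothetical one-line summaries standing in for a folder of papers.
papers = [
    "Facial expression transfer for virtual characters using deep nets.",
    "A survey of cognitive architectures for conversational agents.",
    "Brush stroke simulation for non-photorealistic portrait rendering.",
]
context = retrieve(papers, "expressive virtual character faces")
# The retrieved passages would then be prepended to your actual prompt.
```

The point is only that "grounding the model in YOUR work" is a pipeline step anyone can build, which is why the class-paper-dataset abuse case is so plausible.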

 

I will end with a good article on this classroom issue and some specific advice.

 

- - -

So here is my take on how strong writers use GPT systems masterfully to get their inner ideas out (many are writing their grants this way!). Would you call this cheating, plagiarism, or just a very new way to write? Again, this is the way the pros are writing now - shouldn't we begin teaching it to our students? It uses this 1) intro paragraph and 2) bullet points iterative editing process.

 

Typically (I suggest you try this if you haven't - your colleagues are):

 

1) Make an ordered bullet-point list of your ideas (what YOU want to write about). They do not have to be perfect bullet points - just separate points, with, say, leading numbers in your preferred order. (Many performative folks like to grab a mic and talk these out in dictation, then order / clean them up a bit.)

2) Add them to GPT, and above them give a detailed paragraph on how you want those numbered ideas expressed (YOUR ideas on flow, style, expressiveness, formal techniques) - then hit generate. (As if you were handing it to a smart apprentice writer who needs full details.)

3) Read the results. If you're not happy (no one is on the 1st round - this is an iterative new creative process), edit the intro instructions (e.g., "be more formal") or some of the points, and regenerate (or just go to 4 first if only parts are bad). To be clear: YOU are rewriting what is in GPT in your own words, based on YOUR analysis of the results versus your goals.

4) Read again. If it is all off, redo 3; but if some parts are good, cut/paste the good parts into a word processor. Go back into GPT, remove those good/done parts, and now edit the instructions or points toward just getting the remaining parts up to your standards (easier now, as you can hone in on the issues of, say, points 2 and 5).

5) Do this until your standard/goal is met for the whole or its parts - then, in an editor, reassemble everything (the best parts from several passes) and add your hand fixes. (Do this in stages for big texts.)

6) Now, with something pretty good (80%), edit to make it fully yours and complete/polished.

7) Double dip: if you feel like it, put part or all of it back into the LLM system and ask it to do some overall clean-up, specific or general (e.g., make it more formal; use less jargon or simpler language; condense it to fit in 500 words).
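Step 2 of the workflow above is really just prompt assembly: the style paragraph goes on top, the numbered points underneath. A toy sketch (the `build_prompt` helper and the example points are hypothetical, not any real API):

```python
# Sketch of step 2: combine an instruction paragraph with ordered
# bullet points into a single prompt string for an LLM.
def build_prompt(instructions, points):
    """Prepend flow/style instructions to a numbered list of ideas."""
    numbered = "\n".join(f"{i}. {p}" for i, p in enumerate(points, start=1))
    return f"{instructions}\n\n{numbered}"

prompt = build_prompt(
    "Write a formal, flowing paragraph that develops these points in order:",
    ["LLMs are changing how drafts are produced",
     "iteration, not one-shot generation, is the core skill",
     "the author stays responsible for every claim"],
)
```

The iteration in steps 3-5 then amounts to editing `instructions` or `points` and regenerating - the human keeps authorship of both the ideas and the judgment of the output.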

 

Typically, the more you have done this process, the better you get at being this meta creative writer of your own work (a new kind of editor of your work). This is a new creative style of writing - would anyone reading this call it plagiarism? Moreover, you could do the above steps where the dataset is ONLY your past papers rather than the big ChatGPT dataset. Surely that cannot be plagiarism. And you, as a researcher writing grants, are now competing with people using these techniques right now in their grant writing.

 

Creatives have always embraced new tools early to get something in their soul out, be it moving from egg tempera to oil paint, to modern musical instruments, or to these new AI tools. (I have an upcoming Massey Lecture on this - the future of creativity and AI.)

 

 

NOW COMPARE the above technique (which you or your student could use) with this other one: grabbing the class paper assignment instructions, pasting them directly into ChatGPT, and taking whatever comes out. Obviously the two are VERY, VERY different - the second is worse than plagiarism; it is straight cheating, similar to paying someone to write your essay by giving them the assignment instructions. But the first way I elaborated might be the future of writing.

 

And that is only one smart way to use new AI tools as a creative. There are so many new systems and processes coming out. You can now make a video of yourself presenting an idea and turn it into written prose, for instance - then use the steps above on that. And so on.

 

I am open to giving a guest lecture to any class on these topics (and we are working on a service course at SFU). Just email me.

 

Hope this helps some folks.  

 

----

 

As promised here is a decent article (I do not agree with all of it) on ChatGPT and classroom work: 

 

https://co.chalkbeat.org/2023/8/4/23820783/ai-chat-gpt-teaching-writing-grading?fbclid=IwAR3tOQYqw_3FyyfBKTWvhNN6TXYXbgZgOymW0CUuc9qMDTvLUXitnRklqaM

 

There is also a Facebook group that is one of the bigger ones for AI and teaching.

 

https://www.facebook.com/groups/703007927897194

 

Ironically, most of the conversations in this teachers' FB group are about trading tricks for using LLMs like ChatGPT to make (generate) assignments, tests, and slide sets way faster/easier on the teacher side. An interesting double standard?

 

Lastly, two specific 'essay cheating and ChatGPT' ideas from me:

1) There is no accurate system (none! and surely not Turnitin) that can tell you whether a student's essay was fully or partially generated with ChatGPT. And never ask ChatGPT itself whether someone used it. Do not be fooled.

 

2) Grading tip: ChatGPT output will contain some completely wrong statements - typically as much as 10%. Researchers call this hallucination, and we have not been able to systematically remove all of it. So I suggest putting a statement in your grading policy that students must read and edit their work so that everything is factually correct, and that there will be no tolerance for grossly incorrect statements - if there are any, they will get a zero on the whole assignment. This will catch those who did not even take the responsibility to check through their own work.

 

Thanks Steve 

 

PS: ETHICS

ChatGPT is closed, non-transparent software from OpenAI: they will not say specifically how it works or open the code to researchers for inspection; they are looking at every piece of data that goes through it (huge privacy issues!!); and they charge for the Plus model with no limit on future costs and can restrict use of the free model - and do, all the time. We and others have papers in AI ethics (in many areas), and we need open AI systems for testing and bettering. There are new LLM systems that are open, and therefore we can test, update, and attempt to correct SO MANY ethical AI issues (surely not all). For instance, I am using / testing (more for research uses) Meta's (Facebook's) Llama 2. It is far more accurate, and from an ethical-AI standpoint the code is completely open and available to all - free to use, view, and change - which means we can do open, fair ethical testing on bias and so many issues we can't with the black box that is OpenAI. And (with a good computer) you can run it locally (not allowed with OpenAI's GPT). Local means that you are NOT exposing your data every time you use it (as with ChatGPT), and of course we can make better ethical variants of it, or special versions for marginalized groups. If you get a chance, try it and compare results. We are working on an educational project (with a museum) to converse with a talking / expressive Vincent Van Gogh, where, given our new cognitive tools and chaining of LLMs, we can get very honest, accurate results, and where 60% of the time the answers come from Van Gogh's own words (given we can train on the 700 letters he wrote to his brother Theo). We use these systems for arts and health too.


-  Steve DiPaola, PhD    -  -  
 - Prof: Sch of Interactive Arts & Technology (SIAT);  
 - Past Director: Cognitive Science Program; 
 - - Simon Fraser University - - -  
    research site:   ivizlab.sfu.ca
    art work site:    www.dipaola.org/art/
    our book on:     AI and Cognitive Virtual Characters
At Simon Fraser University, we live and work on the unceded traditional territories of the Coast Salish peoples of the xʷməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and Səl̓ílwətaɬ (Tsleil-Waututh) and in SFU Surrey, Katzie, Kwantlen, Kwikwetlem (kʷikʷəƛ̓əm), Qayqayt, Musqueam (xʷməθkʷəy̓əm), Tsawassen, and numerous Stó:lō Nations.


On Sat, Aug 5, 2023 at 11:32 AM Yildiz Atasoy <yatasoy@sfu.ca> wrote:
Sorry I meant to write "her" not "het". This is an algorithmic error made by my " smartphone".

Best,
Yıldız
Dr. Yıldız Atasoy
Professor, Sociology
Former Director, Centre for Sustainable Development
Associate Member, the School for International Studies
Associate Member, Department of Geography

Food, Climate Change & Migration - Public Engagement Forums
NEW BOOK: Commodification of Global Agrifood Systems and Agro-Ecology: Convergence, Divergence and Beyond in Turkey
Book Reviews: International Sociology & New Perspectives on Turkey

Mailing Address:
Simon Fraser University
Department of Sociology and Anthropology
8888 University Drive,
Burnaby, BC, Canada V5A 1S6
E-mail: yatasoy@sfu.ca


________________________________
From: Yildiz Atasoy <yatasoy@sfu.ca>
Sent: Saturday, August 5, 2023 11:26:26 AM
To: Ronda Arab; Gerardo Otero; Nicky Didicher; academic-discussion@sfu.ca
Cc: Andrés Cisneros-Montemayor
Subject: Re: ChatGPT

Hello All,

I wholeheartedly agree with Ronda, and appreciate het taking the time to write a comprehensive explanation and review of University policy.
I will also forbid the use of AI generated programs in my classes.

Kindest regards,
Yıldız
Dr. Yıldız Atasoy
Professor, Sociology
Former Director, Centre for Sustainable Development
Associate Member, the School for International Studies
Associate Member, Department of Geography

Food, Climate Change & Migration - Public Engagement Forums
NEW BOOK: Commodification of Global Agrifood Systems and Agro-Ecology: Convergence, Divergence and Beyond in Turkey
Book Reviews: International Sociology & New Perspectives on Turkey

Mailing Address:
Simon Fraser University
Department of Sociology and Anthropology
8888 University Drive,
Burnaby, BC, Canada V5A 1S6
E-mail: yatasoy@sfu.ca


________________________________
From: Ronda Arab <ronda_arab@sfu.ca>
Sent: Saturday, August 5, 2023 10:31:54 AM
To: Gerardo Otero; Nicky Didicher; academic-discussion@sfu.ca
Cc: Andrés Cisneros-Montemayor
Subject: Re: ChatGPT

Hello all,

I am perhaps an outlier here, but I have found it useful to ban all use of Chat GPT for my courses, particularly for my recent Engl 113W class, and I will continue to do so.

Our 100-level W-credit courses are designed and intended for students to learn and practice writing. And Chat GPT, although it does not spit out perfect gems of stylish, sophisticated prose, is able to produce grammatically correct, essentially accurate content if that content is available elsewhere on the internet. While I have been honing my essay topics for years to make it difficult or impossible for students to use online cheat sites, it is sometimes difficult to do that perfectly, especially when one teaches, as I often do, authors such as Shakespeare, for whom there is a lot of content online. Chat GPT is a new obstacle, though.

As far as tutors go, using a tutor or a service who changes your writing (i.e., corrects your grammar, sentence structure, punctuation, etc.) rather than simply pointing out errors and teaching you how to correct them is forbidden by SFU's Academic Honesty policy, although there appears to be a provision allowing instructors to override the policy, which I choose not to do. I've copied the provision here:


2.3.5 Unauthorized or undisclosed use of an editor, whether paid or unpaid. An editor is an individual or service, other than the instructor or supervisory committee, who manipulates, revises, corrects, or alters a student’s written or non-written work. Students must seek direction from the instructor about the type of editor and the extent of editing that is allowed in the course. Students may access authorized academic support services such as the Student Learning Commons, Centre for English Language Learning, Teaching, and Research, and WriteAway, which do not provide editing.

I had a case of a student cheating using Chat GPT this past semester in my Engl 113W class. This is a "W" class - the students are getting credit for working on their writing as well as for understanding the literature that we study. My TA input several chunks of prose from the student's essay and asked Chat GPT if it had produced them. Chat GPT said it had. I went through the paper thoroughly and found many instances for which Chat GPT confessed it had produced the text. (In some cases I had to switch a pronoun to a literary character's name or vice versa - it appears it had to be the exact text.) Now, this is not a fool-proof way of discovering whether or not the text was generated by Chat GPT, as Chat GPT will sometimes tell you it generated content that it did not generate - I tested it with a few chunks of writing from a published article of my own. So I met with the student in question. The student said that he had used Chat GPT to "proofread" his essay after he had written it himself. That was enough to give the student a 0 on the assignment, as I had explicitly forbidden, in writing, on the assignment, all use of Chat GPT. I also asked the student to send me his notes and drafts. Perhaps it was no surprise to discover that the rough work he sent me contained no sign at all of an essay written before anything was plugged into Chat GPT, which is what he claimed he had done.

Sure, Chat GPT didn't write every word of the essay. The essay required the student to write 1000-1300 words and Chat GPT generally can only spit out about 400 words at a time (in my experience with experimenting with it). So the student had to craft a series of questions to ask Chat GPT and then piece together the bits. Nevertheless, the student did not do the work of putting his own thoughts into writing, which requires cognitive functioning that I continue to believe is an important skill to learn and develop.

I suspect I will have to incorporate more in-class essays into my "W" courses for units for which there is a fair amount of online content available, as I am simply not ok with students getting credit for writing they did not do.

Best,
Ronda

Dr. Ronda Arab
Associate Professor of English
Simon Fraser University

pronouns: she/her
________________________________
From: Gerardo Otero <otero@sfu.ca>
Sent: 04 August 2023 19:28:33
To: Nicky Didicher; academic-discussion@sfu.ca
Cc: Andrés Cisneros-Montemayor
Subject: Re: ChatGPT

Thanks, Nicky. Very useful suggestions in that Google Doc, with all the range of approaches, from prohibition to totally free access without acknowledgment.

Best regards, Gerardo

From: Nicky Didicher <didicher@sfu.ca>
Date: Friday, August 4, 2023 at 5:50 PM
To: Gerardo Otero <otero@sfu.ca>, "academic-discussion@sfu.ca" <academic-discussion@sfu.ca>
Cc: Andres Cisneros-Montemayor <a_cisneros@sfu.ca>
Subject: Re: ChatGPT

Hello, Gerardo and others,

Should you wish to see a large range of different AI policy statements for many different disciplines and from many different institutions, here is a google doc curated by Lance Eaton:  https://docs.google.com/document/d/1RMVwzjc1o0Mi8Blw_-JUTcXv02b2WRH86vw7mi16W3U/edit?pli=1#heading=h.1cykjn2vg2wx

I completely agree that forbidding the use of generative AI is futile! And the main way to go for me is to include it in the examples I give for how to write the "Assistance Acknowledged" paragraph I already ask for with essays and creative projects.

I'm planning to adjust the wording of my syllabi policies depending on the course. For example, for my quantitative analysis of poetry class this Fall, I've drafted the following:

"• you are permitted to use text-generating AI such as ChatGPT, Grammarly, or Quillbot for your written assignments, provided you acknowledge it at the end of the assignment and specify what you used it for (e.g., grammar and style corrections, organization, suggestions for an effective title); note: ChatGPT writes terrible metrical poetry and isn’t good at scansion--it can find stressed syllables most of the time, but not divide lines into feet successfully; however, it’s useful for fixing grammar errors and revising for clarity"

In the instructions for their term paper, I will also note that when ChatGPT writes English essays it usually paraphrases cheater sites such as GradeSaver and Shmoop, and, when asked to use peer-reviewed sources, it fabricates evidence.

Nicky
________________________________
From: Gerardo Otero <otero@sfu.ca>
Sent: August 4, 2023 4:47:38 PM
To: academic-discussion@sfu.ca
Cc: Andrés Cisneros-Montemayor
Subject: ChatGPT

Dear Colleagues:

In September, I'll be teaching for the first time since ChatGPT became available. So I'm rather dreading how I will handle this issue, but I have no intention of forbidding it (that would be like stopping gravity). Earlier in the year, we had a very interesting conversation on this topic on this list. At that time, I wrote a brief insert for my syllabus based on ideas from other colleagues' posts. I would like to share that short text and ask you for any ideas, criticisms, or suggestions you might have. Here's the text from the section pertaining to mid-term and final essays (this is a grad course):

“You are required to insert an “acknowledgments” section in mid-term and final essays. You can say whether you began with Wikipedia and engaged with ChatGPT to do your initial research, got idea X from a peer in class, and had your mother or father proofread your paper. Bear in mind that ChatGPT can yield false responses and provide references that do not exist. You must double check anything you use from this tool, and preferably stick to our required readings to write your essays. They should provide you with more than sufficient material.”

Best regards, Gerardo
__

Gerardo Otero
Professor and Graduate Chair
School for International Studies
Simon Fraser University
7200-515 West Hastings Street
Vancouver, BC Canada V6B 5K3
Tel. Off: +1-778-782-4508
Website: http://www.sfu.ca/people/otero.html
Gerardo’s YouTube Channel<https://www.youtube.com/channel/UCA1-aXDghF89MdKhjB5vuFA>

I thankfully acknowledge that I live and work in unceded traditional territories of the Musqueam, Squamish, Tsleil-Waututh, and Kwikwetlem Nations.