Hi,
Thanks everyone for chiming in, and Steve, for your offer. Steve, what I think would be useful to know is what the AI is bad at, both in terms of designing assignments that are hard to cheat on and in terms of knowing what our human students will need
to be able to do as these AI tools become ubiquitous.
Cheers
Julian
On Dec 28, 2022, at 1:36 PM, Steve DiPaola <sdipaola@sfu.ca> wrote:
I am an SFU researcher/prof in the AI space and know and teach these systems well, so I can consult if needed.
My lab writes these systems (NSERC/MITACS-funded) for studies, for health and education coaches, as well as for the arts, and I give keynote talks (and TV interviews) about the ethical issues.
So I can discuss with a group what these systems can do now and in the future.
-steve
For those interested in what we do at SFU with them:
Video: here is some work where I am chatting in real time with Picasso.
Video: from a talk at a Cambridge conference.
We are in talks for a project where visitors can talk live to Van Gogh about his life for the Van Gogh Museum.
(and with the philosophy comment in mind)
El Turco: I recently completed a large art installation for the Japan Triennial (with artist Diemut Strebe) that showed the issues with these systems, with
our "Socrates" debating our GPT system in 17 dialogues.
It is both right on and not at all, at the same time.
El Turco (see videos of the dialogues)
Again, I can help with this effort of understanding the issues around plagiarism (as well as tools, and more so the effort to help our students understand how AI works and its implications).
Note that the visual space (AI visual generation) equally has issues for us.
steve
Steve DiPaola, PhD
Prof: School of Interactive Arts & Technology (SIAT)
Past Director: Cognitive Science Program
Simon Fraser University
our book on: AI and Cognitive Virtual Characters
At Simon Fraser University, we live and work on the unceded traditional territories of the Coast Salish peoples of the xʷməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and
Səl̓ílwətaɬ (Tsleil-Waututh) and in SFU Surrey, Katzie, Kwantlen, Kwikwetlem (kʷikʷəƛ̓əm), Qayqayt, Musqueam (xʷməθkʷəy̓əm), Tsawwassen, and
numerous Stó:lō Nations.
Coincidentally those of us currently serving as departmental Academic Integrity Advisors are having a chat about this issue on our own list--not with regard to policy, but pedagogy and evaluation. Would there be interest in broadening the discussion? JDF
James Dougal Fleming
Professor, Department of English
Simon Fraser University
Burnaby/Vancouver,
British Columbia,
Canada.
The truth is an offence, but not a sin.
-- Bob Marley
Hi All,
Does anyone know if any policy guidelines have been issued by SFU re. ChatGPT and academic dishonesty? Specifically, what would SFU accept as dispositive evidence that an essay had been generated using ChatGPT or similar AI software? Obviously, it will
be impossible to introduce as evidence materials that have been cut and pasted without acknowledgement.
In this vein, I recently had a chat with an engineering student (but not an SFU student!) who received an A+ on an assignment using ChatGPT.
The software could not generate an A+ paper in Philosophy (of course not!). For the moment, I'm mostly concerned with suspicious C+ papers.
Thanks in advance,
Sam
Sam Black
Assoc. Prof. Philosophy, SFU
I respectfully acknowledge that SFU is on the unceded ancestral and traditional territories of the səl̓ilw̓ətaʔɬ (Tsleil-Waututh), Sḵwx̱wú7mesh Úxwumixw
(Squamish), xʷməθkʷəy̓əm (Musqueam) and kʷikʷəƛ̓əm (Kwikwetlem) Nations.