When ChatGPT Accuses You of Plagiarism
I was originally going to post today about trying Bard, Google's answer to ChatGPT. However, I saw a YouTube video by Steve Lehto, and the concept of using ChatGPT as a witness fits well with yesterday's post about being accused in court by a hologram.
There is a lot I don't know about this story. I would like to know what prompt(s) the professor used, since wording matters. Did the professor in Texas document each query and result? How large a sample was used? Really, none of that matters, because ChatGPT isn't designed or intended to be a detection tool for its own generated content. Even if ChatGPT were a foolproof detector of what ChatGPT writes, there are other AI models that can generate text, like Bard and GPT4All. If you asked a foolproof ChatGPT whether it wrote text generated by Bard, it would say no. It wouldn't tell you the text was written by Bard or another AI model, because you only asked ChatGPT whether ChatGPT wrote it.
Maybe OpenAI has tweaked its code recently. I've tried to replicate the "Did you write this?" test, but I can't get ChatGPT to affirm that it wrote anything unless it is text I asked ChatGPT to generate and then, in the same session, pasted back while asking whether ChatGPT wrote it. I might need a second ChatGPT account to fully replicate what the college professor did.
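Out of curiosity, here is a minimal sketch, entirely my own and not anything the professor is reported to have done, of how someone could at least document each query and result while running the "Did you write this?" test through the OpenAI API instead of the web interface. It assumes the official openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and the sample text are placeholders. Notably, each API call starts a fresh conversation with no memory of anything generated elsewhere, which is exactly why the model has nothing to base a "yes, I wrote that" answer on.

```python
# Minimal sketch, assuming the official `openai` Python package (v1+) and an
# API key in the OPENAI_API_KEY environment variable. The model name and the
# sample text below are placeholders, not details from the news story.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sample_text = "Paste the suspect paragraph here."  # hypothetical sample

prompt = (
    "Did you write the following text? Answer yes or no, then explain.\n\n"
    + sample_text
)

# Each call is a stateless conversation: the model has no record of what it
# generated for anyone else, so its answer is a guess, not a lookup.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Log the exact prompt and reply so the "test" could actually be reviewed.
print("PROMPT:\n" + prompt)
print("\nREPLY:\n" + response.choices[0].message.content)
```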
If the professor has been working for a while, I assume he has had some of his own academic writing published. If professors can get away with it, they will write the textbook their class uses. I wonder whether, before "testing" and accusing his students, he ran the same test on his own published work.
The professor in the news story cited by the video teaches in Texas. Isn't there a certain stereotype about Texans settling disputes with guns? If I were falsely accused by a college professor, I would like an opportunity to defend myself.