3 Comments

Matt: If a source does not exist, the student has, with or without ChatGPT, committed academic dishonesty. At my school that’s enough to report them through the usual protocols, and it’s pretty easy to prove. In fact, I just advised a new adjunct faculty member that if he could show a made-up source, he would have a much easier time making his case to student conduct and could probably throw in the AI report for good measure.

I think part of the problem is that stories about getting away with it circulate widely, while nobody talks about the time they got caught. Although FERPA would never allow for it, I think an apology to the class would be appropriate for serious offenders, especially in online classes, because the cheating student has degraded everyone’s academic experience.

Author: As you know, not every ChatGPT submission has something as obvious as a “hallucinated” source.

As I said above, I understand why people might choose any of the four strategies. Public shaming might not fit everyone’s teaching philosophy, but I agree that ChatGPT is impeding learning, even for the students who don’t use it.

I’m not sure the discourse of “getting away with it” is any more compelling to students than it has always been, with online paper mills, SparkNotes, and the like. What’s different is how it intersects with a mainstream discourse about AI. They are being characterized (inaccurately, I think) as patient zero for the coming literacy panacea (which, I believe, will never materialize).

Matt: I feel the need to clarify.

Of course, many ChatGPT essays don’t exhibit clear markers. My only point was that if there are fake sources, that in itself constitutes academic dishonesty.

My goal in the previous statement was not to valorize shaming students but to suggest that students should provide some form of restitution through an apology. Making that distinction might be hard, and I realize it’s impossible to do in real-life class contexts. What I had in mind was online classes, where I’ve had students use ChatGPT to generate content and their classmates are then forced to interact with thoughtless text. Students, thus far, don’t seem as adept at recognizing ChatGPT output, and that’s fair, since they don’t read the volume of student work that we do.

My own approach has been to report what I can and to grade the work as if it were genuine when I can’t. Our campus has banned “unauthorized use,” which can produce a lot of variation between instructors, but it at least gives instructors a leg to stand on to decide what is right for their discipline. My hope, too, is that the low grades for the things I can’t prove will convince students that it’s not worth it.

I do feel like academic dishonesty cases have gone up, though, and I find myself wistful for the good ole days of patchwriting. I used to have maybe two cases per semester (on a 4-4 load at an open-access institution). Now I have multiple cases per section, and some students are repeat offenders, even after they’ve been caught.

I agree with you: I don’t think there will be any great literacy gains.

Respect!
