It’s Time to Step off the AI Panic Carousel Before We Harm our Students
We should not upload student work to AI text detectors without their consent
In Our Obsession with Cheating is Ruining our Relationship with Students, I wrote about how swiftly developers responded to concerns about students using GPT to “cheat” by building AI-powered detectors. It hasn’t even been a month since that post, but the market for these AI detectors has exploded, and many folks are rushing to upload student work to them. This misguided reaction is a product of our panic. What I haven’t yet talked about is how damaging this can be to our students, specifically uploading their work to an unreliable AI detector without their permission.
AI text detectors are not analogous to plagiarism detection software, and we need to stop treating them as such. AI detectors rely on LLMs to estimate the probability that a passage was machine-generated. Unlike plagiarism detection, there is no sentence-by-sentence comparison against another text, because LLMs don’t reproduce text; they generate it. False positives abound, and these unreliable AI detection systems are sure to further erode our relationships with our students.
Pause Before Uploading Student Work
When we upload work for a plagiarism check, we can do so because students have granted us an extremely limited license to check their work in the context of academic honesty. Students still hold the copyright to their work, and we need to respect their data rights. Uploading student work to third-party websites always requires clear consent.
Institutions have had no time to vet any of these so-called AI detectors. That is unlike plagiarism software vendors, which are contracted, go through testing, have legally binding user agreements, and face legal consequences for how they handle student data. Even that isn’t foolproof as a guard against unethical behavior.
If we upload student writing to one of these detectors, we have no idea what happens to students’ data. Will these companies store student data? Will they sell student data? Will they use student data as training material for future LLMs? As far as I’m aware, there has been no discussion about whether any of these AI detection systems is FERPA compliant.
None of these systems have been rigorously tested. GPTZero was built less than a month ago by a grad student, in a coffee shop, over winter break. The developer just released an API for educators and institutions to use. Do we really think uploading student writing to an app built in such haste is a reasonable move on our part?
As far as I can tell, all current AI detection software uses older LLMs as its main detection mechanism. If educators use one, we are choosing to embrace AI to try to catch AI. It also isn’t possible to vet such tech fully, because these are black-box systems: we don’t know how they were trained, what they were trained on, or how bias functions within them.
We’ll Get Watermarking, But That Isn’t a Solution
OpenAI will likely offer the public some type of detection mechanism in the form of watermarked outputs, but even they acknowledge this isn’t going to work in the long run. Such a watermarking system isn’t going to be universal. Each LLM would need to adopt its own watermarking scheme, and there are dozens of them. A universal agreement would require unprecedented international cooperation to have any chance at a standardized watermarking system. Do we think China and Russia would ever sign onto such an agreement? What’s more, even if we did have a standardized international watermarking system, it would be a veritable house of cards: if the cryptographic key that drives the watermarking were ever leaked to the public, the entire system would crumble.
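To make the key-leak worry concrete, here is a minimal, hypothetical sketch of how a keyed “greenlist” watermark could work in principle. It follows the general shape of published watermarking proposals rather than OpenAI’s actual mechanism, and every name and value in it (the secret key, the scoring threshold) is an assumption made purely for illustration.

```python
import hashlib
import hmac

# Hypothetical sketch only: a generic keyed "greenlist" watermark, not any vendor's real scheme.
# If SECRET_KEY leaks, anyone can both run detection and rewrite text until it passes as unmarked.
SECRET_KEY = b"shared-secret"  # assumed key for illustration

def greenlisted(prev_token: str, token: str) -> bool:
    """A token is 'green' if a keyed hash of (previous token, token) lands in the top half."""
    digest = hmac.new(SECRET_KEY, f"{prev_token}|{token}".encode(), hashlib.sha256).digest()
    return digest[0] < 128  # roughly half of all continuations count as green at each step

def watermark_score(tokens: list[str]) -> float:
    """Fraction of tokens on the greenlist: near 0.5 for ordinary text, noticeably higher if
    a generator was nudged to prefer green tokens while writing."""
    hits = sum(greenlisted(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Detection is only meaningful while the key stays secret; with the key in hand, a cheater
# could paraphrase token by token until the score drops back to chance.
print(watermark_score("students write the essay and think about learning".split()))
```

The whole design rests on one shared secret, which is exactly why a single accidental release would collapse it.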
Our Students Are Not Eager to Cheat
There’s absolutely no data to suggest students will engage in widespread cheating using LLMs like ChatGPT. That some envision a scenario where legions of students cheat in place of learning shows how far we’ve gone in adopting a gatekeeping mentality toward education. Many people are terrified that generated content is going to displace thinking, but those of us who have used the tech with our students have found that many are not eager to adopt AI and certainly aren’t rushing to let it generate entire essays.
Our panic carousel is driven by fear of students cheating, but we’re keeping it in motion by preemptively mourning a loss of learning that we believe will take place unless we can stop students from adopting AI in their writing. We have no such power to stop students from adopting any technology, yet we’re already nostalgic for the “time before AI” in our teaching, treating it as a fantasy era in which contract cheating and academic dishonesty did not exist. Education has taken rides on this carousel before, and disaster did not ensue. Perhaps Don Draper said it best in Mad Men: the carousel is the place we ache to return to, and that’s the fantasy, not the reality, we yearn for.
Mad Men - The Carousel
Thanks for that clip from ‘Mad Men’ - very powerful! How we ache for a simpler past :-) - one we can never experience fully again.
Thank you, Marc! Couldn't agree more. Need to treat students and their work with respect and keep our expectations high. We should expect our students to be honest people who want to learn (while doing what we can to prevent cheating and dealing with it properly when it does happen).
I write about kids & ethics in my article "What is a Good American?"
https://raisingamericans.substack.com/p/the-good-american