
Like so many things in our world, our well-intentioned efforts to solve one problem usher in a legion of new challenges, and AI detection is no different. We’ve seen the rush to adopt AI-powered detectors, experiments with linguistic fingerprinting, and cryptographic watermarking of AI outputs. Now an advanced form of long-term proctoring called process tracking has joined the pantheon of AI detection techniques.
The ground truth is that faculty are turning to AI detection because they are burning out. They can’t keep up with all the ways AI can impact writing or do assignments for students. Some are resorting to imperfect tools to track and detect, while others are using the physical classroom to go device-free and reclaim some time for learning without technology.
Why Process Tracking?
I have written and rewritten this post perhaps a dozen times. It’s spring break for my kids, so we made our way down to New Orleans for a few nights. My wife told me to relax and not do any work, so I did my best to stay offline. That, of course, didn’t happen like I’d planned. For this essay, I’d write a paragraph or two, some of them little more than rants, and delete them. The kids would run past. We’d go out for five or six hours and come back for a rest. I’d start again.
I’d take a sentence I’d written for this piece and save it into my notes for a presentation or future talk, copy a link to another writer’s article, and maybe even use AI to summarize some of the ideas. I’d read it after a few hours, decide I didn’t like it, and delete it.
If an editor saw my writing process, they’d think I was mad. If I were a student and a writing instructor saw how I write, with time stamps, keystroke entries, what was copied, what was deleted, and yes, what was made by AI, what would that person think of me? I feel entirely naked just describing how I write; I would be mortified if someone could see that entire process.
As faculty, we approach such questions of process tracking from positions of authority. We see the uncritical adoption of generative AI by students in their writing not simply as a threat to academic integrity, but also to their learning and the vital skills they’ll need to function in the world. So of course we’re trying to curb students’ use of AI in their assignments. But what second-order effects does adopting process tracking have on a real-life person’s writing?
The Downstream Consequences of Process Tracking
The way we write is often equated with thinking. What long-term impacts might it have on writing, and on thinking, if what we write is recorded and monitored by algorithm? What happens when our rants, asides, and tangents all become visible?
My professor is so smug, so self-righteous, assigning this 2000-word essay due at the end of the week over things we barely even talked about in class . . . oh, God! Will he be able to see that in the process tracking report? I am so, so, sorry! I didn’t mean any of what I wrote. I just . . .
All types of writing, even academic writing, have creative elements to them. I would dare say writing to report knowledge in a proctored setting was never my strongest writing. What impact would carrying that sort of proctoring into a writer’s long-term process have on the creative or stylistic chances an author might take?
I don’t want someone to see how hard it is for me to spell big words I don’t always know, so instead of typing and retyping and using a grammar tool to fix them, I’ll just stick to simple-sounding words I know so I won’t look dumb.
Process Tracking Arrives
Last fall, Grammarly launched a process tracking tool called Authorship. At the time, Grammarly marketed the product not to faculty, but to students who were frightened of being accused of using AI and had little defense against such accusations. Now Turnitin offers the option to faculty in a new paid add-on called Clarity.
The technique isn’t necessarily new, and the principle behind it makes a great deal of sense for combating academic misconduct when AI is involved. Process tracking does much of what the phrase implies: an algorithm records keystrokes, tracks the time a writer spends on a task, logs each copied and pasted source, and presents it all through a replayable interface.
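To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of the kind of event log such a tool might build. The names and structure are my own illustration, not Grammarly’s or Turnitin’s actual implementation; I’m only assuming the features the vendors describe: timestamped keystrokes, pastes with their sources, deletions, and a way to replay the whole session.

```python
from dataclasses import dataclass, field
from time import time

# Hypothetical sketch of a process-tracking event log.
# Field names are illustrative, not any vendor's real API.

@dataclass
class WritingEvent:
    timestamp: float           # when the event occurred
    kind: str                  # "keystroke", "paste", or "delete"
    payload: str               # the text involved
    source: str | None = None  # for pastes: where the text came from

@dataclass
class WritingSession:
    events: list[WritingEvent] = field(default_factory=list)

    def log(self, kind: str, payload: str, source: str | None = None) -> None:
        """Record one writing action with a timestamp."""
        self.events.append(WritingEvent(time(), kind, payload, source))

    def replay(self):
        """Yield events in chronological order, as a replayable interface would."""
        yield from sorted(self.events, key=lambda e: e.timestamp)

# A few minutes of drafting, as the tool would see it:
session = WritingSession()
session.log("keystroke", "I have written and rewritten this post...")
session.log("paste", "a quoted passage", source="https://example.com/article")
session.log("delete", "I have written and rewritten this post...")

for event in session.replay():
    print(event.kind, repr(event.payload[:30]), event.source or "")
```

Seen this way, the privacy stakes become obvious: every hesitation, deletion, and pasted source becomes a permanent, queryable record of how someone thinks.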
The thing is, generative AI is now just one of dozens of problematic technologies our students will have to contend with in the real world. AI is now part of our lived experience in online spaces. So-called ‘AI slop’ is everywhere on social media, and it creeps into the offline world through advertisements, marketing, and even artistic works. Generative AI is unavoidable. None of the so-called detection solutions works to deter AI usage in all instances. One method may work for a particular assignment, modality, or learning outcome, but fail spectacularly in others.
What's worse, these systems don’t simply fail; they often mislabel human-written text as AI-generated. While process tracking won’t do that, it also isn’t designed or deployed in a way that stops students from easily bypassing it. Simply opening a new tab and retyping an AI output by hand, or automating that retyping, will defeat it.
Why the Experts Say No to Process Tracking
Mark A. Bassett and Kane Murdoch are critical of AI detection and see issues with process tracking as well. Their excellent post covering Ten Persistent Academic Integrity Myths talks about how easy it is to bypass process tracking with new AI tools, like AI agents.
Why Some Faculty Are Drawn to Use Process Tracking
Anna Mills sees a place for process tracking, alongside many other strategies, in her essay Why I'm using AI detection after all. She writes about adopting AI detection from the perspective of someone who was once firmly opposed to using AI detectors.
Anna approaches this topic with quite a bit of nuance and sees detection as one form of mitigation. Indeed, both Murdoch and Bassett mention the Swiss cheese model Anna alludes to in her essay, an approach to academic integrity that layers multiple imperfect methods so the holes in one are covered by another. Where they differ from Anna is that Murdoch and Bassett believe such layering is only effective if institutions, not individual faculty, decide to integrate it into a robust system. Doing so requires a commitment from the institution: resources, personnel, training, and more. We aren’t seeing anything like that happening right now.
It’s been clear to me for years now that such tools may have their place in a formal academic misconduct investigation, but not in the hands of untrained faculty. Very few faculty members have training in academic misconduct investigations to begin with, and providing them with a tool without the needed training is a recipe for chaos.
What Are Our Ethical Obligations?
I was interviewed for There’s a Good Chance Your Kid Uses AI to Cheat in the Wall Street Journal, and I told them that generative AI in education is “a gigantic public experiment that no one has asked for.” If you’ve read this newsletter for some time, you also know I don’t think the challenges with AI in education are just about academic integrity.
We’re continually seeing AI sold to teachers as a time-saving solution to create assignments, offer feedback, produce faster lit reviews, and, yes, to detect AI with AI or some other technology. Where is the discourse about our ethical responsibilities in using this technology in our jobs, with, and often on, our students?
Academic integrity applies to students and faculty alike, but faculty somehow get the benefit of the doubt that their behavior is trustworthy while student behavior is anything but. I’m not saying either students or faculty are prone to cheating. We shouldn’t assume either group understands all the ethical dimensions of generative AI in education.
There’s a strong likelihood that companies will use student essays loaded into AI detectors to train future AI systems. It is also reasonable to expect that some future technology could use the very human decisions captured in process tracking to develop systems that mimic writing more organically. I don’t see people talking about that. If they did, they might pause before adopting AI detection. They might also weigh the risks and decide the current uncritical adoption of AI by students to cheat is the far greater harm.
If we turn to process tracking because we cannot get students to use AI tools responsibly or ethically, how much of a leap will it be before others demand the same of us? I could absolutely see certain state legislative bodies mandating that teachers at public institutions use similar tracking to ensure they aren’t unethically using AI to grade students or offer rote automated feedback, or to establish other controls over an educator’s labor.
Looking Beyond Process Tracking
We can't detect our way out of this problem. If we're truly concerned about students' learning rather than just detecting AI, we need to confront the uncomfortable reality that our assessment models may no longer serve us in a world where AI is increasingly unavoidable. What does authentic writing look like when AI is part of a writer’s process? How do we value that process when tools can generate polished products in mere seconds?
I'd argue that embracing this uncomfortable moment means rethinking what we ask of students and why. It means creating assignments that integrate AI thoughtfully rather than pretending we can build walls against it. It means acknowledging the messy nature of writing, with its rants, tangents, and countless drafts, as having value in itself.
Faculty are burning out not just because of AI, but because we're trying to maintain educational models that were designed for a different era. The path forward isn't more sophisticated tracking of keystrokes, but more purposeful and meaningful engagement with students about why and how they write.
If we turn to surveillance today to address AI use, what will we sacrifice tomorrow? Our students' privacy? Their willingness to take creative risks? Our own academic freedom? These questions deserve more than technological quick fixes—they demand a fundamental reconsideration of what education means in the age of AI.
AI Disclosure: The above paragraph was AI-generated. I asked Claude to give me options for closing this piece. I’d much rather choose to disclose how I used these tools and why than have technology do it for me.
"Faculty are burning out not just because of AI, but because we're trying to maintain educational models that were designed for a different era. The path forward isn't more sophisticated tracking of keystrokes, but more purposeful and meaningful engagement with students about why and how they write."
All of this. And I'll raise my hand here, too, as it is increasingly clear that a lot of the tools/systems that I've grown not just accustomed to but confident in as a teacher are not enough to meet this moment. Going back to the drawing board (especially one that doesn't really exist?) is a lot to even begin grappling with. But it's what is needed.
(Also: I very much relate to having to pause/revise/delete myriad times while interrupted by boisterous children running around the house!)
The sadly ironic part of this fantastic article was the Disclosure at the end. It left me feeling let down by what I had just read - not the whole premise of the article, which is so spot on, but the realization that I wasn’t reading the “voice” of the author when I believed I was for that paragraph. I felt I was tricked. That is what I think AI Everywhere is going to cause - a genuine mistrust of what we see and read All The Time, because we don’t want to feel tricked. Your words (and AI’s) make that so clear. But if you hadn’t included the Disclosure, I would have never known. That’s the dilemma we face now, at a massive scale.