Like so many things in our world, our well-intentioned efforts to solve one problem usher in a legion of new challenges, and AI detection is no different.
"Faculty are burning out not just because of AI, but because we're trying to maintain educational models that were designed for a different era. The path forward isn't more sophisticated tracking of keystrokes, but more purposeful and meaningful engagement with students about why and how they write."
All of this. And I'll raise my hand here, too, as it is increasingly clear that a lot of the tools/systems that I've grown not just accustomed to but confident in as a teacher are not enough to meet this moment. Going back to the drawing board (especially one that doesn't really exist?) is a lot to even begin grappling with. But it's what is needed.
(Also: I very much relate to having to pause/revise/delete myriads times while interrupted by boisterous children running around the house!)
The sadly ironic part of this fantastic article was the Disclosure at the end. It left me feeling let down by what I just read - not the whole premise of the article which is so spot on, but the realization that I wasn’t reading the “voice” of the author when I believed I was for that paragraph. I felt I was tricked. That is what I think AI Everywhere is going to cause - a genuine mistrust what we see and read All The Time because we don’t want to feel tricked. Your words (and AI’s) make that so clear. But if you hadn’t done the Disclosure, I would have never known. That’s the dilemma we face know at a massive scale.
I see what you are saying, but I think we have to find a place for these tools in writing and come to terms with our own expectations. My guess is quite a few authors are using these tools so effectively that we wouldn't notice without some type of disclosure. When my students disclose they use AI, I don't feel tricked--I use it as a moment to talk with them about their process and why they made the choices they did. I also see it as an amazing act of trust. They're making the choice to be honest and say how they've used a tool.
I agree with you that it’s a different world now and setting expectations is the key. The entire creative process has to be reimagined in a way. I mean, we have always taken inspiration from things and probably not disclosed them. I think the difference now is the ease and the proximity to the final work the amount of external influence has on the process and now that’s hard to really know what’s “my” own work. And what does that mean? It’s a fascinating discussion and I applaud you for being so open about it.
Thoughtful and thought provoking. I've always maintained that writing is messy business and, like you, I shudder at the thought of someone looking inside my head while it's happening. Sometimes like a kitchen blender, other times like a water pipe, and then occasionally the blocked dunny when even the plunger fails. This is writing.
Teaching kids to embrace this sort of chaos means teaching them that writing is not the written, that pursuing perfection in the end piece is unrealistic because the writers work is always in the making. AI tools present the written, the struggle with the blank space is anathema to the tool because it is a tokenistic model that assembled parts on the fly. We expect kids to produce work that compares in its perfection of syntax and language features to that of work that comes off the bookshelf because that's what we've been trained on. Our benchmark is not only that which is the written, but that which is the edited and read, and the critiqued and the read again and the prized and the exemplified. We shove the written at the kid and say, 'There it is, that's what it looks like, write your version of that.' And we add to that a demand to make it original, give it a distinctive voice, and use your own ideas.
I use AI tools in my teaching because it can serve up what I need in a hurry. It can code my glossary entries for a moodle in double quick time, it can extract key and salient points from texts that I can use for concept acquisition classes, it can give me alternative ways to use moidle tools for student activities. I do this because Claude is a far better reader than I. I feed it my syllabus documents, course outlines, assessment tasks and marking rubrics, and then I feed it anonomised student papers and get it to assess them against the background materials. I moderate it's outputs until we agree on a set of marks, and then I get it to process the batch. Claude always offers up useful feedback. I read several of the papers in the batch along the way, query a decision once in a while, and have the machine assess its assessments across the range.
I do this because it's a far better reader than I. Does it save time? I think not. Does it give me confidence in my reporting? Yes, without a diubt. I train it in my style as we go, so its reporting has a voice close to my own formal tone, and in the circumstances, that's okay. Can it write for me? Not in a fit.
I don't know the answer to the AI as tool if cheating. We have insisted on students working on paper by hand in summative assessments. I don't think it's ideal, because it means we're shortchanging them somewhat on digital instruction, but it is preventing the ghost in the machine popping its head out. The ease with which the machines can be used, and the mockery of useful guardrails and age restrictions, do make it difficult to concoct adequate interventions. But, I fear while we insist on judging the written and not exhorting the chaos of what writing actually is, or any other creative pursuit for that matter, we are always pitted against the machine that can present the image of perfection.
What I feel I want to see is a student showing me that they can think along a creatively critical and critically creative continuum, inscribe that thinking in a cyclicle practice in which they strive for an effect wanted, beginning with a prelibation, the satisfier of which is not known, dip into their gladbag of imagination, test a selection for fit, and apply their sensibilities they have developed from an increasing array of reading to judge whether that foretaste of the effect wanted has been satisfied. The machine can't do that, but a student pursuing purpose of writing can.
I spend a lot of time thinking about what progress tracking (and don't get me wrong. I understand why people use it) and what happens when "off-stage" time gets closer and closer t being obliterated.
If "integrity is what you do when no one is watching," what happens when we no longer have moments when no one is watching?
Thank you for your thoughtful piece; as always I appreciate your leadership and any chance to be in dialogue with you! Thanks for engaging with my writings on process tracking and detection.
My own approach is to advocate for guardrails on how process tracking and detection are done. So I tell students not to share the revision history or writing replay with Grammarly Authorship. That system puts the process report in their hands first and lets them opt out of sharing the replay. That means their early sentences are not visible to me. I would also advocate for strong data protections. With Grammarly Authorship, they are not sharing anything more than they would using Grammarly to help them correct typos.
I do think it's important to clarify that process tracking doesn't have to mean selling student data and it doesn't have to mean showing early drafts to teachers.
I am also completely in agreement that we need a swiss cheese approach at an institutional level, and this shouldn't all be left to teachers to sort out on our own. Still, I think teachers do see an opportunity to take a non-punitive approach with students before or instead of turning them in to an institutional academic integrity process, and I think process tracking and detection can be part of that non-punitive approach.
I'm my country, our universities have the budget to innovate and include AI and other advancements in their systems, but have chosen to maintain the authodox Way of filing, documentation, lecturing and assessment. It is very terrible 💯
(Full disclosure: I've built a tool for keystroke analysis focused on a narrower view of authorship; I'm generally agnostic to AI as a co-writer but very much for a new path towards individual authorship and maintaining writing as a valid means of assessment/worthwhile endeavor)
You've raised the ultimate question. "How do we value said process when tools can generate polished products in mere seconds?"
The question of authorship has been around before ChatGPT, and well before we just wanted to know whether it's human or not. Essay mills have been a defeat for any edtech product since before Turnitin.
How do we value writing when another human can do the thinking for dollars generating the same polished products? The truth is that, for any piece of writing uploaded or pasted to a modern LMS, the link to the student not clear.
Balancing friction and invasiveness with validity is the hardest thing I've worked on.
I know I'm not alone in viewing effort and fumbling and rants as the beauty and magic that separates real writing from machine generation. If recognizing that process and toil and struggle is part of a path forward that raises the value of writing to >0... I'm here for that.
Please know that I'm coming from a place of intense love for the written word and writing.
Marc, I read all your posts with interest, but please will you direct me to a list of assignments that do what you claim we as teachers should do. From all around, Im reading a lot a lot a lot of doom and gloom, and then 3/4s of the way down these posts, writers like yourself say "we should do X." Let's have more of we should do X. We are burned out because of all the things you say. But I am ready and willing to do X, so can we hear more about that?
Hi Freddie, I try to frame my current teaching around creating ethical expectations for students using AI in writing assignments. I've published most of this in the Chronicle and reposted most on my Substack: https://www.chronicle.com/author/marc-watkins. Harvard's AI Pedagogy project and TextGenEd are also both solid resources to see how faculty are responding to AI.
The thing is, it will take use all collectively years to rethink and examine our learning outcomes in the wake of the AI tools we have today, let alone those forecasted in the near future. I have no idea how to respond to multimodal AI that can see you or converse with, deal with AI agents that can take over your browser, or AI reasoning and deep research tools. We badly need a pause in AI deployments to find our feet again and think about our response. Any reforms now will have to take this all into account and we need funding and time to do that.
"Faculty are burning out not just because of AI, but because we're trying to maintain educational models that were designed for a different era. The path forward isn't more sophisticated tracking of keystrokes, but more purposeful and meaningful engagement with students about why and how they write."
All of this. And I'll raise my hand here, too, as it is increasingly clear that a lot of the tools/systems that I've grown not just accustomed to but confident in as a teacher are not enough to meet this moment. Going back to the drawing board (especially one that doesn't really exist?) is a lot to even begin grappling with. But it's what is needed.
(Also: I very much relate to having to pause/revise/delete myriad times while interrupted by boisterous children running around the house!)
The sadly ironic part of this fantastic article was the Disclosure at the end. It left me feeling let down by what I had just read - not the whole premise of the article, which is so spot on, but the realization that I wasn’t reading the “voice” of the author when I believed I was for that paragraph. I felt I was tricked. That is what I think AI Everywhere is going to cause - a genuine mistrust of what we see and read All The Time, because we don’t want to feel tricked. Your words (and AI’s) make that so clear. But if you hadn’t included the Disclosure, I would never have known. That’s the dilemma we now face at a massive scale.
I see what you are saying, but I think we have to find a place for these tools in writing and come to terms with our own expectations. My guess is quite a few authors are using these tools so effectively that we wouldn't notice without some type of disclosure. When my students disclose they use AI, I don't feel tricked--I use it as a moment to talk with them about their process and why they made the choices they did. I also see it as an amazing act of trust. They're making the choice to be honest and say how they've used a tool.
I agree with you that it’s a different world now and that setting expectations is the key. The entire creative process has to be reimagined, in a way. I mean, we have always taken inspiration from things and probably not disclosed it. I think the difference now is the ease of these tools, their proximity to the final work, and the amount of external influence they exert on the process - and now it’s hard to really know what’s “my” own work. And what does that mean? It’s a fascinating discussion and I applaud you for being so open about it.
Thoughtful and thought-provoking. I've always maintained that writing is a messy business and, like you, I shudder at the thought of someone looking inside my head while it's happening. Sometimes it's like a kitchen blender, other times like a water pipe, and occasionally like the blocked dunny when even the plunger fails. This is writing.
Teaching kids to embrace this sort of chaos means teaching them that writing is not the written, that pursuing perfection in the end piece is unrealistic because the writer's work is always in the making. AI tools present the written; the struggle with the blank space is anathema to the tool because it is a tokenistic model that assembles parts on the fly. We expect kids to produce work that compares, in its perfection of syntax and language features, to work that comes off the bookshelf, because that's what we've been trained on. Our benchmark is not only the written, but the edited and read, the critiqued and read again, the prized and the exemplified. We shove the written at the kid and say, 'There it is, that's what it looks like, write your version of that.' And we add to that a demand to make it original, give it a distinctive voice, and use your own ideas.
I use AI tools in my teaching because they can serve up what I need in a hurry. They can code my glossary entries for a Moodle in double-quick time, extract key and salient points from texts that I can use for concept acquisition classes, and give me alternative ways to use Moodle tools for student activities. I do this because Claude is a far better reader than I am. I feed it my syllabus documents, course outlines, assessment tasks and marking rubrics, then I feed it anonymised student papers and get it to assess them against the background materials. I moderate its outputs until we agree on a set of marks, and then I get it to process the batch. Claude always offers up useful feedback. I read several of the papers in the batch along the way, query a decision once in a while, and have the machine assess its assessments across the range.
I do this because it's a far better reader than I am. Does it save time? I think not. Does it give me confidence in my reporting? Yes, without a doubt. I train it in my style as we go, so its reporting has a voice close to my own formal tone, and in the circumstances, that's okay. Can it write for me? Not in a fit.
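For anyone curious what such a marking pass looks like when scripted rather than run through the chat interface, here is a minimal sketch against the Anthropic Python SDK. The model name, file names, and prompt wording are illustrative assumptions, not a record of the exact workflow described above.

```python
# Minimal sketch of a Claude-assisted marking pass (illustrative only).
# Assumes: plain-text background materials and anonymised papers on disk,
# the `anthropic` package installed, and ANTHROPIC_API_KEY set.
import pathlib

import anthropic

client = anthropic.Anthropic()

# Background materials the marker already trusts: syllabus, course outline,
# assessment task, and marking rubric.
background = "\n\n".join(
    pathlib.Path(name).read_text()
    for name in ["syllabus.txt", "course_outline.txt", "task.txt", "rubric.txt"]
)

def assess(paper_text: str) -> str:
    """Mark one anonymised paper against the rubric and return the feedback."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model choice
        max_tokens=1024,
        system=(
            "You are assisting a teacher with marking. Judge the paper only "
            "against the rubric and course materials below. For each criterion, "
            "give a mark, a brief justification, and formative feedback in a "
            "formal tone.\n\n" + background
        ),
        messages=[{"role": "user", "content": f"Student paper:\n\n{paper_text}"}],
    )
    return response.content[0].text

# Keep the human in the loop: read a sample of papers yourself, query the odd
# decision, and only batch-process once the model's marks line up with yours.
for path in sorted(pathlib.Path("anonymised_papers").glob("*.txt")):
    print(f"=== {path.name} ===")
    print(assess(path.read_text()))
```

The moderation step described above - agreeing on a set of marks before processing the batch - would sit between trying assess() on a handful of papers and running the full loop.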
I don't know the answer to AI as a tool for cheating. We have insisted on students working on paper by hand in summative assessments. I don't think that's ideal, because it means we're somewhat shortchanging them on digital instruction, but it does prevent the ghost in the machine from popping its head out. The ease with which the machines can be used, and the mockery they make of useful guardrails and age restrictions, do make it difficult to concoct adequate interventions. But I fear that while we insist on judging the written, rather than extolling the chaos of what writing actually is - or any other creative pursuit, for that matter - we are always pitted against a machine that can present the image of perfection.
What I want to see is a student showing me that they can think along a creatively critical and critically creative continuum, and inscribe that thinking in a cyclical practice in which they strive for an effect wanted: beginning with a prelibation whose satisfier is not yet known, dipping into their gladbag of imagination, testing a selection for fit, and applying the sensibilities they have developed from an increasing array of reading to judge whether that foretaste of the effect wanted has been satisfied. The machine can't do that, but a student pursuing the purpose of writing can.
This is essential reading, Marc. Really.
I spend a lot of time thinking about process tracking (and don't get me wrong, I understand why people use it) and about what happens when "off-stage" time gets closer and closer to being obliterated.
If "integrity is what you do when no one is watching," what happens when we no longer have moments when no one is watching?
Thank you for your thoughtful piece; as always I appreciate your leadership and any chance to be in dialogue with you! Thanks for engaging with my writings on process tracking and detection.
My own approach is to advocate for guardrails on how process tracking and detection are done. With Grammarly Authorship, for example, I tell students not to share the revision history or writing replay. That system puts the process report in their hands first and lets them opt out of sharing the replay, which means their early sentences are not visible to me. I would also advocate for strong data protections: with Grammarly Authorship, students are not sharing anything more than they would by using Grammarly to help them correct typos.
I do think it's important to clarify that process tracking doesn't have to mean selling student data and it doesn't have to mean showing early drafts to teachers.
I am also completely in agreement that we need a Swiss cheese approach at an institutional level, and this shouldn't all be left to teachers to sort out on our own. Still, I think teachers do see an opportunity to take a non-punitive approach with students before or instead of turning them in to an institutional academic integrity process, and I think process tracking and detection can be part of that non-punitive approach.
In my country, our universities have the budget to innovate and include AI and other advancements in their systems, but they have chosen to maintain the orthodox way of filing, documentation, lecturing and assessment. It is very terrible 💯
(Full disclosure: I've built a tool for keystroke analysis focused on a narrower view of authorship; I'm generally agnostic to AI as a co-writer but very much for a new path towards individual authorship and maintaining writing as a valid means of assessment/worthwhile endeavor)
You've raised the ultimate question. "How do we value said process when tools can generate polished products in mere seconds?"
The question of authorship was around before ChatGPT, and well before we only wanted to know whether a text is human or not. Essay mills have defeated every edtech product since before Turnitin.
How do we value writing when another human can do the thinking for dollars and generate the same polished products? The truth is that, for any piece of writing uploaded or pasted into a modern LMS, the link to the student is not clear.
Balancing friction and invasiveness with validity is the hardest thing I've worked on.
I know I'm not alone in viewing effort and fumbling and rants as the beauty and magic that separates real writing from machine generation. If recognizing that process and toil and struggle is part of a path forward that raises the value of writing to >0... I'm here for that.
Please know that I'm coming from a place of intense love for the written word and writing.
I totally understand. Phew. It's exhausting, and the burnout is real.
I feel one minute I have a handle on it, and a second later, something is different.
Pivot. Pivot. Pivot.
And students can't keep up either.
(And the need for the pause is huge but I doubt it is coming)
I do very much value your work. And read every new post voraciously.
Thank you. I will keep subscribing, and will certainly check out your suggestions.
Marc, I read all your posts with interest, but will you please direct me to a list of assignments that do what you claim we as teachers should do? From all around, I'm reading a lot, a lot, a lot of doom and gloom, and then three-quarters of the way down these posts, writers like yourself say "we should do X." Let's have more of "we should do X." We are burned out because of all the things you say. But I am ready and willing to do X, so can we hear more about that?
Hi Freddie, I try to frame my current teaching around creating ethical expectations for students using AI in writing assignments. I've published most of this in the Chronicle and reposted most on my Substack: https://www.chronicle.com/author/marc-watkins. Harvard's AI Pedagogy project and TextGenEd are also both solid resources to see how faculty are responding to AI.
The thing is, it will take us all, collectively, years to rethink and examine our learning outcomes in the wake of the AI tools we have today, let alone those forecast for the near future. I have no idea how to respond to multimodal AI that can see you and converse with you, AI agents that can take over your browser, or AI reasoning and deep research tools. We badly need a pause in AI deployments to find our feet again and think about our response. Any reforms now will have to take all of this into account, and we need funding and time to do that.