Marc - thanks for bringing this issue to the forefront, but I feel like the horse is so far out of the barn that these little deceptive techniques completely miss the fact that any student who is even remotely good at using AI is way beyond copying and pasting the prompt to get their essay done. Anyone who has played around with these tools knows how easy it is to get around the kinds of techniques teachers think are so clever - it's a ridiculous long-term solution. The issue from my perspective is how little interest there seems to be - for understandable reasons - among faculty in learning about the tools in any meaningful way. I really don't blame the students. It should be a professional obligation to learn about new technologies, and this one is the most critical to come along in recent years. It's not going away, so sticking your head in the sand simply isn't going to work.
You don't blame the students, who are legal adults, for violating their professors' AI policy, not engaging with the assignment in good faith, and turning in work under false pretenses? That's absurd.
I don't blame the students in classes where teachers don't understand AI, don't discuss AI, and have no idea how to use AI. I have not had a problem with AI in my classes because I am very clear about what is off limits and where AI may be useful for some aspects of the process. My students are not "legal adults" - they are 14-, 15-, 16- and 17-year-olds who have suddenly been given a superpower to create instant responses to questions that look plausible at first glance. The problems and issues with what's going on behind the scenes need to be explained to them. Do these AI policies simply consist of "don't use it"? If so, that is woefully naive and sets up exactly the dynamic Marc is discussing, where it's a witch hunt and everyone loses.
Is education the filling of a pail, or the lighting of a fire? As an educator for nearly 50 years my views have certainly changed over the decades. Most of my career was spent passionately trying to fill the pails in front of me. However, I've learned that teaching and learning are two very different things. Those who "teach" are right to be afraid of AI. It can indeed be used to "cheat". It's a threat, like a thief coming in to steal the dragon's hoard that has been passionately amassed over the years. Small wonder that teachers want to use all sorts of new technologies to detect and thwart that threat.
However, I am now working with educators who cause learning, but who do not teach! They are not "filling the pail" but instead are lighting the fires of learning. They aren't threatened by AI; in fact, for their learners, AI is an incredible tool, allowing for hyperpersonalization, as well as voice and choice that were unimaginable before. One of the major factors has been their openness to transitioning away from a "deficit-based" grading system in favor of an "asset-based" approach. Students are not assessed on their "performance" on assignments, but instead are assessed on clearly defined proficiency standards which exist outside of the curriculum, the syllabus, or the calendar.
Whether or not “deceptive assessment” is a good or bad idea is not the issue. The problem is deeper. AI is forcing us to look more deeply at the very notion of assessment itself. Thanks to AI we have the chance to stop looking for more and more precise ways to assess the success of instruction and can focus on truly assessing the learning that AI is making possible.
Marc, this is a great post.
IMO: One of the opportunities we have at hand is to engage students in understanding the changes to efficiency in the production process that we are all navigating in real time. This change will be completely individual and relative!
Example: It used to take hours and books (Strunk and White!) to identify all the grammatical errors in a paper. Now we just leverage Grammarly and click to accept until all the red squiggles are gone. Not all writers use Grammarly, and not all writers who do accept all of Grammarly's recommendations.
The risk that detection helps to identify is that many students have learned how to prompt (or at least how to copy-paste your prompt into the text window). The most egregious abusers--the students with a pass-through mentality (copy+paste, copy+paste, submit)--are missing out on the true efficiency that AI can afford in their learning process.
Those students who write their own prompts (still writing!) and then leverage the output know that quality outputs still take effort... It takes deep reading and often significant rework and editing. Not all students will use generative AI, and not all students who do are cheaters, but we all know that using our brains is important!
Yes to all of this - this is exactly how I've been feeling recently, and as an educational developer I've been getting frustrated with people asking how to detect AI. I always tell people that even if you plan to resist AI, you need to engage with it yourself (use it) and at least discuss it with students, if not outright let them use it and discover its shortcomings/limitations. My students who study computer science are the least likely to use AI inappropriately because they're so aware of its limitations for a social-science-type course (or at least for some particular assignments of mine). I've been wondering how many examples of "other ways of doing things" people need to see before they can imagine how to apply them to their own classes?
I think you got me off the fence about updates I want to make to my next AI Plagiarism and Cheating presentation for our faculty later this week. I've been giving some version of it for almost a year and a half now, and it just keeps getting bigger and more involved.
This is a great post. You are reflecting some of my fears that have been getting stronger for the past couple of weeks.
Couldn't all students' written answers be accepted only when they include an ending composed by an AI tool, responding to a question designed to require a lot of AI time to answer because of its complexity? If an AI tool had been used in the rest of the solution, no difference in writing style would be apparent, but the amount of space taken by this last answer would be disproportionate and obviously fake.
So what’s your solution?
My solution is to accept that AI is an absolute game changer in education (and everywhere else, for that matter). With that in mind, it's pretty futile to try to "bend" AI into the existing paradigm - don't try to use AI to improve "teaching". Instead, figure out how your students can use it most effectively (with your guidance, of course) to actually learn more deeply. Remember, you're not Smaug, sitting on an information hoard that those nasty hobbits are trying to break in and steal. Your (and their) opportunity now is tremendous. You don't need to worry about getting them to respond more correctly to your instruction; you should be using AI to open new worlds to them through the lens of your experience and knowledge. We should cultivate environments where the learners begin to surpass their teachers. That also opens up tremendous opportunities to rethink our most basic notions about assessment. Who better than their teachers and professors to help guide and encourage?
You've identified the key question to ask whenever AI is involved - is it being used to advance learning? With tools like NotebookLM and others coming down the pike, the conversation is going to shift from AI output to RAG (retrieval-augmented generation) and the ability to query your own and others' work to look for themes, connections, and points of interest. Students already have - and it will soon be ubiquitous - the ability to interact with all the great works of literature in a profound way. Those who want to engage with and explore this landscape will leave in the dust the teachers who are unwilling to acknowledge what is happening.
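For anyone unfamiliar with the term, the retrieval idea behind RAG can be sketched in a few lines. This is only a toy illustration - the passage list, function names, and word-overlap scoring are hypothetical stand-ins; real systems like NotebookLM use embeddings and vector search rather than counting shared words:

```python
# Toy retrieval sketch: score each passage by word overlap with the
# query, keep the best match, then build a grounded prompt from it.
# Real RAG pipelines replace the overlap score with embedding
# similarity over a vector index.

def tokenize(text):
    return set(text.lower().split())

def retrieve(passages, query, k=1):
    """Return the k passages sharing the most words with the query."""
    q = tokenize(query)
    return sorted(passages, key=lambda p: len(tokenize(p) & q), reverse=True)[:k]

passages = [
    "Ahab pursues the white whale across the oceans.",
    "Elizabeth Bennet misjudges Mr. Darcy at first.",
    "Gatsby stares at the green light across the bay.",
]

query = "Why does Gatsby watch the green light?"
context = retrieve(passages, query)[0]
prompt = f"Using only this passage: {context}\nAnswer: {query}"
```

The point for the classroom discussion above: the model only ever sees the retrieved passage, so the student's own reading and selection of sources still does the intellectual heavy lifting.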