Generative AI isn’t going to be a standalone tool for long. The copilot tools we currently have will soon evolve into autopilot agents that complete tasks for us. How this impacts education is anyone’s guess, but if the chaos ChatGPT has already caused in higher education is any indication, the reception isn’t going to be a warm one. Nor should it be. But how do we ethically resist generative AI? Many have turned to faulty AI detectors, and some have embraced trojan horse prompt injection to try to catch students using AI. In doing so, many faculty are sacrificing transparency and honesty on their end by adopting deceptive assessment techniques.
Where We Are Headed
OpenAI has vision, Google has Astra, Anthropic has Computer Use, and now Microsoft Copilot Vision will soon be able to see what’s on our screens. We scoff at this, dismiss it as a novelty, and do so at our peril. A few of us saw the warning signs in the months before ChatGPT was released and understood how primed the market was for software that mimicked writing. Imagine computer vision active in the LMS, guiding our students through every assignment.
We cringe at the thought of such influence: the loss of autonomy and agency, the robotic gendered voices, the political sway. But I can guarantee you all of those concerns will be ignored. Vision in education will be tied to student success. Instead of tracking graduation rates over six years, such tools will help power a full suite of software designed to get students a degree in four. That messy human challenge of learning will be optimized using technology even its developers don’t fully understand, with little regard for the consequences.
Matthew Kirschenbaum and Rita Raley foresee this in their recent essay on the path institutions are taking with the current wave of generative tools. You can read the snippet below; the whole article is reprinted in the Chronicle as “AI May Ruin The University as We Know It”:
Consider all the third-party software licensing for course management and evaluation, as well as the new entrepreneurial ventures that entice budget-strapped schools with the promise that they no longer need to worry about building and supporting a technological infrastructure. From one of the many start-up companies that offer note-taking, transcription, and digitization services, an exemplary corporate pitch: Because you lack both equipment and staff, we will remotely operate the cameras in your lecture hall so that you can meet the updated legal requirements for remote access. In exchange, we are going to store those lectures on our servers and use them to train our in-house language models — access to which we will then be able to license to you in the next funding cycle, when you are ready to add our premium transcription services to your subscription.
How we resist such a future should take shape now, starting with an honest look at how little faculty have done to engage AI critically in the classroom. I previously argued that engaging AI doesn’t mean adopting AI. I don’t see how anyone could possibly resist generative AI without critically engaging it within education. Sadly, the opposite is taking place now. Many have focused on finding ways to avoid discussing generative technologies with students, relying instead on dubious methods to try to catch them. This isn’t working now and it isn’t going to work long-term. We’re going to have to engage AI critically.
The False Promise of AI Detection
The cat-and-mouse game of AI detection is becoming increasingly toxic and frankly absurd. AI detection isn’t reliable in academic contexts, but that hasn’t stopped scores of faculty from moving to more inventive ways of catching students cheating: prompt injection techniques that insert trojan horse-style traps into assignments, hoping to snare unsuspecting students who feed the language into ChatGPT. I’ve covered some of this in the past, but recent posts across X have shown a number of faculty adopting this approach and getting millions of views.
As Sarah Eaton noted over a year ago, resorting to these types of tactics is a form of deceptive assessment and has no place in the classroom. More alarmingly, it shows just how desperate some faculty are for a solution to students using AI unethically in their classes. We should set clear guidelines for the use of AI in our courses and likewise hold students accountable when they violate those standards, but that only works when we act ethically ourselves. Otherwise, what’s the point?
When you resort to deception to try to catch students cheating, you’ve compromised the values of honesty and transparency that come implicitly attached to our profession. Perhaps that’s the real challenge here: making the implicit explicit. We’ve been forced to explain to many students why their use of generative AI tools like ChatGPT constitutes academic dishonesty. Some have leaned into this, taking the time to explore AI with students with a focus on self-discovery and advocacy, examining the impact that uncritically adopting such systems has on student voice, authenticity, transparency, and learning.
Most, though, haven’t had the time or interest to address generative AI’s impact on students because, you know, they need to teach their courses! I get it, truly, I do. No one asked for ChatGPT, and it's maddening to think how much time, energy, and resources it takes to engage these systems, but if not us, then who?
Our new AI era should push us to make visible, and to declare, what it means to act ethically with this technology. For some, that will mean no AI in student work. I’m not sure a position like that is tenable. Regardless of the actual outcome of taking such a restrictive path, you should at the very least invite students into a conversation about why you think this way, not hide behind techniques that rely on deception to catch students who transgress your moral stance on this technology.
Why Deceptive Assessment is a Bad Idea
Hiding a trojan horse within your assignment isn’t a foolproof way of catching a student using AI. Many students rely on screen readers or text-to-speech systems that will read the hidden material aloud and expose the trick. Students will also soon find that simply crafting a prompt to negate nonsensical instructions is an effective workaround. Newer advanced models from OpenAI, Anthropic, and Google may also make such techniques obsolete by ignoring the hidden instructions or recognizing them as a prompt injection attack.
Those are some technical reasons why hidden deception might fail to detect AI, but the most important reason deceptive methods won’t work is that they don’t stand up to outside scrutiny. When we accuse a student of misconduct in higher education, that student has a right to appeal both the outcome and the method the professor used to reach that judgment. What does that mean for faculty? For one, we need to be transparent within our syllabi about what assessment or surveillance methods we’re using in the classroom.
If a student appeals a professor’s attempted sanction and cites the method of deception, they’re likely going to win, and frankly, they should. Part of teaching involves integrity and transparency, and deceptive assessment is antithetical to that process. If we expect students to act responsibly in regard to AI, then we likewise must model those behaviors and values. Using hidden gotcha moments to try to catch students isn’t going to fly at most institutions.
No One In Power Wants Detection
Ultimately, AI detection schemes will fail, largely because the power structures that sell AI tools and the institutions that buy them see those tools as crucial for student success, and those decisions shape the narratives on our campuses. I’m not a fan of AI solutions for human learning, but leaving individual faculty to dictate their own AI policies is unsustainable in the long run. We all understand that institutions will one day adopt policies around AI, and while academic integrity will be part of the conversation, we are fooling ourselves if we believe we’ll see a single university explicitly ban AI.
Faculty are decent people. Students are as well. Both are in the wilderness now trying to navigate how to use or resist AI. The stakes are increasing by the day and go far beyond simple instances of students using ChatGPT to cheat or avoid learning. Institutions are purchasing access to these tools for students and faculty now. We should have a voice in shaping those decisions. Do we think they will listen or take seriously our concerns about generative AI’s impact on learning if we continue to avoid engaging it and instead hide behind deceptive assessment techniques?
Marc - thanks for bringing this issue to the forefront, but I feel like the horse is so far out of the barn that these little deceptive techniques completely miss the fact that any student who is even remotely good at using AI is way beyond copying and pasting the prompt into AI to get their essay done. Anyone who has played around with these tools knows how easy it is to get around the kind of techniques teachers think are so clever - it's a ridiculous long-term solution. The issue from my perspective is how little interest there seems to be - for understandable reasons - among faculty to learn about the tools in any kind of meaningful way. I really don't blame the students. It should be a professional obligation to learn about new technologies, and this one is the most critical to come along in recent years. It's not going away, so sticking your head in the sand simply isn't going to work.
Is education the filling of a pail, or the lighting of a fire? As an educator for nearly 50 years my views have certainly changed over the decades. Most of my career was spent passionately trying to fill the pails in front of me. However, I've learned that teaching and learning are two very different things. Those who "teach" are right to be afraid of AI. It can indeed be used to "cheat". It's a threat, like a thief coming in to steal the dragon's hoard that has been passionately amassed over the years. Small wonder that teachers want to use all sorts of new technologies to detect and thwart that threat.
However, I am now working with educators who cause learning, but who do not teach! They are not "filling the pail" but instead are lighting the fires of learning. They aren’t threatened by AI; in fact, for their learners, AI is an incredible tool, allowing for hyperpersonalization, as well as voice and choice that were unimaginable before. One of the major factors has been their openness to transitioning away from a “deficit-based” grading system in favor of an “asset-based” approach. Students are not assessed on their “performance” on assignments, but instead are assessed against clearly defined proficiency standards which exist outside of the curriculum, the syllabus, or the calendar.
Whether or not “deceptive assessment” is a good or bad idea is not the issue. The problem is deeper. AI is forcing us to look more deeply at the very notion of assessment itself. Thanks to AI we have the chance to stop looking for more and more precise ways to assess the success of instruction and can focus on truly assessing the learning that AI is making possible.