
Tomorrow, I’ll host a two-day institute for faculty grappling with generative AI. I’ll be joined by the amazing Amy Rutherford and Bob Cummings. 90+ faculty from the University of Mississippi have registered to learn practical strategies for curbing AI misuse in their classrooms and to hear about the flurry of recent updates that make confronting generative AI in teaching and learning so challenging. This is the fourth time we’ve held a university-wide event to train faculty about generative AI, and it is needed now more than ever. What concerns me going into it are the recent stories decrying students and faculty who use AI, stories steeped in an ‘us vs. them’ mentality. The AI crisis in education isn't simply that it enables cheating—it's that we're failing to develop a nuanced understanding of how this emerging technology intersects with long-standing educational challenges.
Part of that story is who is left out of these dystopian narratives. I want to begin by telling you about Drew*, who was a student of mine. Drew came to each class meeting and sat right in front of me in the first row. He was eager to learn about all things AI. He contributed to every class discussion, asked insightful questions, and challenged and engaged his peers with an amazing level of thoughtfulness. But Drew never submitted any of the assignments for the course. Not a single one. Not the assignments that invited him to use AI, nor those that required him not to.
I met with him after class, spoke with him in a private conference in my office, and issued alerts for student support services, but nothing I did helped Drew be successful in my class. Sadly, Drew’s story isn’t an outlier. In fact, it has become increasingly common. Students show up and attend class but never submit the required coursework. No one fully understands the phenomenon, and while it has certainly gotten worse since the pandemic, it isn’t exactly new.
Before I met Drew, I piloted GPT-3 in first-year writing courses in the fall of 2022 with my colleagues in the Department of Writing and Rhetoric. I joked then that now that AI was in our world, at least we’d never see another incomplete draft again. I was shocked that semester when the opposite proved true. I received the same number of incomplete drafts, some written by students, others generated with the aid of AI.
Higher education has had a student graduation problem for decades. ChatGPT isn’t going to solve that. The average undergraduate graduation rate at public universities nationwide sits at an abysmal 63%. The most many of these students will ever be able to list on a resume is “some college.” With student loan rates over 6%, students who don’t complete a degree are left paying hundreds of dollars in interest each month on debt they cannot discharge, for a degree they don’t possess. Things weren’t exactly rosy in higher education before ChatGPT arrived, and the AI panic is making us forget that.
Everyone Is Actually Not Cheating Their Way Through College
James Walsh’s Everyone Is Cheating Their Way Through College paints a grim and hyper-sensationalized portrait of an educational system on the brink of total collapse because of unchecked generative AI. It’s a world where students would rather cheat their way through school using ChatGPT for nearly everything. That many students are using the technology unethically to do just that and offload learning is undeniable; what needs to be analyzed more deeply, and talked about with nuance and understanding, are the students whose stories such one-sided narratives leave behind. A significant number of students don’t use AI at all, and students like Drew have been quietly failing out of college for decades. It’s absolute lunacy to assume a chatbot could solve that.



The recent influx of think pieces and reportage about AI in education has been a steady drip of clickbait decrying generative AI as the cause of learning loss, mass illiteracy, and an existential crisis for education. Such sweeping narratives lean into the panic and leave out students like Drew, along with the countless others now forced into self-surveillance out of fear.
People are just now waking up to the realization that tools like ChatGPT won’t only be used by students to skirt learning. These aren’t new ideas. Mike Sharples foresaw this before ChatGPT came on the scene, writing in Automated Essay Writing: An AIED Opinion: “Students employ AI to write assignments. Teachers use AI to assess and review them. Nobody learns, nobody gains.”
Kashmir Hill’s recent piece The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It discusses the ethical minefield Mike Sharples foresaw, with faculty using AI, undisclosed to students, to create course materials and offer feedback. Hill’s piece is sourced, in part, from students’ Rate My Professors comments. Sourcing data from anonymized message boards that often amplify student grievances against their professors should make any reader pause.
The two main examples Hill cites both involve contingent faculty: an adjunct instructor at Southern New Hampshire University who used AI for feedback, and an adjunct professor at Northeastern University who used AI in instructional materials. Neither disclosed the AI use to students. The Northeastern professor upset a student to the point that she asked for a refund for the class ($8,000), which the university denied.
The social media outrage that followed was sadly predictable. Hot takes poured in.
What the piece completely glossed over, and what social media cast aside, were the material conditions that likely led these contingent faculty members to lean on generative AI to do part of their job in the first place. Even the aggrieved student at SNHU interviewed in the piece noted it might be a “third job” for many of her instructors, who might have hundreds of students. Yet there’s no analysis or questioning of exactly what kind of quality feedback a faculty member under those conditions was capable of giving, or expected to give, students before generative AI. I’ll give you a hint: it involves the words copy and paste.
There also isn’t any discussion of the material conditions of the adjunct faculty member at Northeastern University, whose pay for the class likely didn’t come close to the single student’s requested $8,000 refund. None. The deep structural inequities that are so entwined in higher education are glossed over because AI has become the big bad wolf terrorizing us all. Nearly 70% of faculty who teach in higher education hold contingent appointments. That contingent faculty members resorted to using AI without disclosing it to their students, just to do part of their job, should only be shocking to those completely blind to the power dynamics present across college campuses. The AI panic became the single story—not how new technology is being used in troubling ways to cope with the material conditions of already exploited faculty labor.
The story about AI feedback is complicated. No one should be eager to offload the care and relational aspects found in human-centered feedback to an algorithm, but there are effective ways to explore AI feedback that are open about the technology and keep real people reading and responding to each other with AI in the loop. Eric Kean’s MyEssayFeedback uses generative AI alongside student peer review and asks students to compare the quality of both AI and human feedback on their own writing. A teacher oversees the whole process. Such a system, used carefully and employed intentionally, could truly help students who don’t have access to the resources or time offered by a traditional residential college education. Where’s that story?
To the NYT’s credit, their recent reporting has begun to illustrate how harmful this AI panic has become to students. Callie Holtermann’s A New Headache for Honest Students: Proving They Didn’t Use A.I. lays bare the incredible cost students bear, forcing many to use process tracking in ways unthinkable only a few years ago:
But the specter of A.I. misuse, and the imperfect systems used to root it out, may also be affecting students who are following the rules. In interviews, high school, college, and graduate students described persistent anxiety about being accused of using A.I. on work they had completed themselves — and facing potentially devastating academic consequences.
In response, many students have imposed methods of self-surveillance that they say feel more like self-preservation. Some record their screens for hours at a time as they do their schoolwork. Others make a point of composing class papers using only word processors that track their keystrokes closely enough to produce a detailed edit history.
Open Disclosure of Generative AI Must Transcend Education
What should concern us all is that one of the chief challenges around generative AI, one that transcends higher education, is getting users to disclose when they’ve used the technology. I’ve written many times about why open disclosure of generative AI matters and how faculty can model disclosure with students. That’s something Kashmir Hill’s piece did a good job addressing within higher education, but what we desperately need to discuss is what happens when no one adopts open disclosure of AI usage as a principle in broader society.
Universities aren’t the only institutions with complicated relationships with generative AI. The New York Times is suing OpenAI for scraping its stories to train the models that power ChatGPT, yet the Times, like so many other organizations, has reportedly approved the use of those very tools in its newsroom. Would readers have the same expectation students do of being told when the technology was used to develop a story? For research? For questions generated to ask in interviews? What about subjects who are interviewed by a reporter and then find their words transcribed via a generative tool, summarized, and synthesized? I’m sure Hill acted ethically in her reporting, but, again, to the larger point: what does acting ethically even mean when we’re dealing with this technology and audience expectations?
To be clear, I’m not accusing anyone at the Times or any news outlet of using the technology—what I am saying is we need to start having a much broader discussion around the concept of disclosure when generative AI is used in our world.
People beyond students and faculty are already using the technology in troubling ways, and that’s a blind spot in these stories focused solely on AI panic in education. This is precisely why our classrooms provide a crucial testing ground for developing the ethical AI norms that will shape society more broadly. When we teach students to thoughtfully consider and disclose their AI use, we're not just addressing academic integrity—we're preparing them to navigate an increasingly AI-mediated professional landscape. The student who learns to critically evaluate when and how to use (and how to avoid using) AI tools transparently in their coursework is developing ethical skills they'll need beyond higher education.
Right now the lawyer who handles your legal dispute might use AI to draft motions without your knowledge; the police officer taking your report might employ GenAI to process and summarize your statement; the doctor prescribing your medication might rely on AI diagnostic assistance; and your therapist could be feeding notes to ChatGPT to help organize their thoughts after your session. None of these professionals are currently required to disclose this or any AI assistance, yet you can be sure these things are happening now. That's the price we pay when AI developers deploy a new technology as a public experiment. Without educational institutions taking the lead in establishing disclosure norms, we risk creating a society where generative tools stand in for human communication, even sense-making, without question.
When I tell faculty that we need to teach students about generative AI, I’m often misinterpreted and folks think I mean we need to teach students how to use generative AI. Far from it. We have an incredibly short window of time to teach students how this technology impacts their world and help faculty guide students in discussing what role AI should have in the classroom. What students learn now will become the bedrock that forms any emerging social contract about how we function with generative technology in the world.
At some point, we have to stop grieving about what we’ve lost and start taking action to preserve what we value most. Academia is obsessed with policy and proceduralism to the point that many faculty don’t pause to consider if something like generative AI may transcend such things. Outside of assessments, we don’t control how our students access technology. Our institutions are not the ones who control generative AI and decide how it is deployed. We have no say in what a student buys or when they receive premium access to these tools.
What we can do is talk with students about AI as a real thing that is happening to them in their world and help them understand that the hype and doom around the technology are not the whole story. We want students to understand disclosure as an ethical baseline whenever this technology is employed by them or used on them. If you believe slapping a boilerplate policy on your syllabus banning generative AI or having a single conversation with your students is sufficient, then I challenge you to consider what’s truly at stake here.
There is no one sitting down with students to talk with them about this. No regulation is on the horizon. It will be years before we see anything like a general education curriculum around generative AI. Just ask Elon Musk’s Grok AI chatbot what it thinks about “white genocide” in South Africa.
Generative AI Is Already an Accessibility Issue
We also need to be mindful that we don’t create more harm in the process. Faculty who envision removing technology entirely from their classrooms to create spaces where AI isn’t an issue are trying to wall off the digital reality of our world, and that is certain to create undue friction for our students. Don’t get me wrong, I’m a fan of friction in learning, especially when it is used to keep some level of desirable difficulties in classrooms.
Students like Drew often have undisclosed issues that keep them from being successful in the classroom. In fact, the CDC reports that over 28% of adults in the US have a disability. Faculty who say they are mindful of accessibility and do their best to honor student accommodation requests often don’t realize the immense cost students face just to be tested and have those accommodations approved.
This means that many of our students don’t have access to the level of support needed to obtain any official accommodation. Much of the adaptive software on the market has been upgraded using a mix of adaptive AI, generative AI, or other machine learning technology. In striving to create a space separated from a technology so many of us despise, we may in fact be building an experience unduly laden with obstacles for the students who need that software most, all in the name of ‘no AI in my class.’
That’s why we need more than a single story about generative AI. These are increasingly dark times in academia. Conservative attacks on DEI, the collapse of research funding, and an increasingly isolated and struggling generation of students are already more than we can deal with. Throw AI haphazardly into the mix and it is no wonder people panic. But despair or radical shifts in how we teach aren’t going to solve any of those problems and could lead to even worse outcomes. The best way to cut through the hype is to talk to your students. Read Marcus Luther’s What My Students Had to Say About AI. Hearing what your students think about AI might just change your mind and keep you from viewing an entire generation of students as cheaters.
*Drew is not the student’s real name.
Marc - there is also the enormous disconnect between education and journalism and businesses where, essentially the mantra at some companies is "use AI" or you will be out of a job (think Shopify). You mention law (where I can assure you that AI is used for all sorts of things ... ditto with any business or organization that relies on massive amounts of repeat paperwork which is basically all of them) as well as some other use cases. Students are not stupid - the same kid who feels like they need to self-monitor their AI use at college if they don't want to get caught is going to head into a career where the opposite will occur. If something takes them too long at work they will be asked why they didn't use AI. Something is going to have to give.
Thank you - invaluable work at these UM conferences for the rest of us not there. Please post what you can that comes out of it!