
For the fourth time, I helped host a university-wide institute for faculty to explore how generative AI impacts teaching and student learning. Over 90 faculty members, from full professors to graduate students, spent two days discussing how to curb AI misuse and exploring recent AI updates. To capture just how absurd our current AI era is: during the institute and shortly afterward, Anthropic released Claude 4 and Google unveiled Veo 3, its video generation model with audio.
In many ways, it feels like AI has finally arrived at our doorstep, or at least there's a growing realization that the technology isn't going to be policed away, fizzle out, or go the way of MOOCs. I'm exhausted. I feel scattered, and this post will probably be rambling and near incoherent, but as tired as I am, there is a real sense of renewal and communion after spending time talking with folks from across campus about how our world has changed.
Teaching in the Shadow of GPT-4o
While academia's collective response to ChatGPT has been slow, we also have to measure that against what the technology was capable of at launch versus today. We held our first AI institute during the summer of 2023, and I can tell you that the capabilities of generative AI have changed dramatically since then. ChatGPT is only the interface for a multitude of generative models, and a seismic shift in usage and capabilities started in May 2024 with OpenAI's decision to include access to its premium model, GPT-4o, in the free plan. GPT-4o is a far more capable model than the free version available at ChatGPT's original launch at the end of 2022, and that gap is the primary reason we saw such chaos in our classrooms last year.
Quite suddenly, students and faculty found that ChatGPT could create coherent essays, answer more complex questions, and use a slew of multimodal features. OpenAI has continually pushed updates to the interface to the point that comparing the 2022 version of the model behind ChatGPT to the current free iteration is a major mistake; it is like mistaking a pinball machine for a PlayStation. Academia collectively dismissed the technology too quickly and failed to stay engaged as it evolved in real time.
What concerns me is what these continued improvements to AI capabilities mean for learning today and for the job prospects of future students and faculty. In The Atlantic, Derek Thompson noted that "labor conditions for recent college graduates have 'deteriorated noticeably' in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent." Anthropic's CEO Dario Amodei recently warned that "AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years." And then there is Kevin Roose writing in The New York Times: "This month, millions of young people will graduate from college and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favor of artificial intelligence." That should make us all consider what may be at stake. Any contraction in the entry-level jobs most often filled by recent graduates will put incredible stress on higher education, as our students will soon ask: why am I in college if I cannot find a job afterward?



The stakes are likewise high for faculty, and they, too, must be aware of what is on the horizon. As Scott Latham opined in the recent essay "Are You Ready for the AI University?", it is clear that AI will accelerate the automation of teaching and of the jobs that go along with it: "I can think of no plausible scenario in which there will be an equal number of faculty members in 10 years as there are today." In fairness, that's not all due to AI. Population decline, changing attitudes toward college, and young men increasingly opting out of higher education all contribute to the notion that we may have reached peak college in our nation's history. AI will exacerbate that decline, but it certainly didn't create it.
I think there's ample room to push back against rising AI tides, and I strongly disagree with Jim VandeHei's pronouncement that "You are committing career suicide if you're not aggressively experimenting with AI." I certainly advocate for experimentation and for leaning into learning about the technology, but telling faculty that they must learn AI or risk losing their jobs is a surefire way to get them to ignore what is going on or to actively resist it through bans and AI detection.
Now is the time to be proactive and reflective about who we are and what we value, not fall into despair or blindly accept techno-optimistic takes about efficiency leading to a golden age of humanity. Change is happening, yes, but let’s approach it in a critical way that leads to outcomes that help us fight to preserve what we value about human-centric teaching.
The Spring AI Institute for Teachers
I was joined by amazing faculty who led sessions at this institute.
did a marvelous job leading sessions about creating AI-aware assignments, exploring why students cheat, and showing how faculty can create assignments with more intentional purpose as a foundational pillar to help show students value and meaning. Amy Rutherford led attendees through the various challenges posed by AI detection, touching on process tracking, watermarks, and stylometry. Shelby Watson and Micheal Carelse discussed how library databases are incorporating AI and the various academic publishers' policies on AI and authorship. Bob Cummings led a session on AI agents, using his free credits with Manus to generate a 240-page book about writing pedagogy. The process took less than an hour, and while the results were more a collection of summaries than a coherent text, I think people realized how quickly this technology can automate complex tasks.

I led sessions about AI updates. I've often felt like Paul Revere with a digital sign instead of a lantern—announcing each AI leap to increasingly weary colleagues. They rightly want resolutions for dealing with AI disrupting how they've taught for years. My best advice: get your students to talk openly about how they're using AI, establish what is or is not acceptable for your class, and build all of your future assignments with awareness of what AI can do.
The Questions Many Asked
But even as we take these first steps, the conversations at our institute revealed just how many fundamental questions we're still grappling with. Those of us who can find the badly needed support to create more sweeping changes can author pathways to help faculty now, and answering any one of these questions could fill a book, if not more:
How do we talk with students about AI usage across disciplines?
Are there frameworks that show how to embrace or refuse AI in large lecture classes?
How do I do any of this in an online or hybrid class where asynchronous teaching is the only method I have to communicate with students?
How do I find time in my already packed curriculum to teach students ethical AI usage?
How do we assess AI? When does AI usage cross the line from assistance into academic misconduct?
How do I teach students research skills in this generative AI era? How do I conduct research now that many of the digital tools use generative AI?
The Bigger Questions We Should Start Asking
These practical challenges kept surfacing throughout our two days together, but underneath them lurked even more fundamental questions we must start asking ourselves:
What happens if AI continues to improve?
What won’t be automated in our digital interactions?
What’s the point of teaching students to learn something if a machine will give them a simulation of it?
These are just a few of the thorny questions folks raised, along with several others that have been in the back of my mind for quite some time now. I wish I had answers for them. I don't. As much as we extol the values of human connection and human-centric learning, it is undeniable that many of our students are increasingly turning to automation to sidestep learning, and it is likely that a number of faculty are doing the same, treating AI as efficient pedagogy and being awed by deep research, automatic feedback, and instant lesson plans.
How Should Academia Change in the Face of Generative AI?
Adopting or resisting AI feels increasingly impossible. For one, the technology has simply moved too fast to forecast what new AI skills students will need in this rapidly changing era. In the summer of 2023, many people thought we'd be training future prompt specialists. For a time, prompt engineering was one of the hottest jobs on the market. Fast forward to today and you'd be hard-pressed to find a single job posting for it. Why? Advances in chain-of-thought prompting in reasoning models and other improvements mean you no longer need the linguistic savvy once required to create effective prompts.
Last summer, every app and vendor on the market was investing in some version of retrieval-augmented generation to have AI 'talk to your data.' We thought we'd be shifting research skills toward teaching students how to fact-check generated summaries using parallel reading strategies. RAG was all the rage, and you're likely still dealing with systems that use the technique, but you won't find it in many of the frontier models or the new deep research features. They've mostly moved to a mix of direct and agentic search, which is more timely and up-to-date. You cannot easily vet the 100 or 300 search results behind a deep research report with the human eye. Parallel reading of AI-generated search results isn't going to be a research skill that survives long now that agentic AI is common.
The idea that we can train or rapidly retrain students or ourselves in specific AI skills to stay competitive in a landscape that deploys generative features and then deprecates them at a rate matching Liz Truss's tenure as Britain's prime minister simply isn't feasible. We're much better off teaching our students to think critically and putting them in situations where they need to show adaptability and resilience to change than creating AI-specific skill pathways. We used to call that part of a liberal arts education.
There are those who refuse and resist generative AI, but at what cost? Our value-based positions aren't always shared by our colleagues or our students, and there are real consequences to drawing a hard line around curbing AI misuse and trying to banish the technology entirely from our classrooms. Nothing I've seen suggests the path forward involves pursuing academic misconduct cases at scale against students who use AI. That is a recipe for burnout and for creating a hostile classroom space for students. It isn't sustainable for faculty workloads and it certainly isn't conducive to learning. The structural conditions that make students turn to a tool like ChatGPT to offload learning or cheat aren't going to disappear, and neither will the material conditions that lead faculty members to seek AI assistance to make their jobs manageable.
And yet what does it say about those of us who don't even bother entertaining the thought of resisting or refusing generative tools? Technological advancement and innovation haven't always been a singularly positive story. There's ample evidence that unguided AI adoption is actively causing harm, and even the developers of these tools hold serious doubts that they can make AI fully secure and safe. To blindly adopt such a technology without any critical understanding of the costs is a wholly different level of foolishness, and one that won't last.
Sometimes It Pays to Be Slow
So where does that leave us? Some, like the authors of the University of Sydney’s two-lane approach to assessment, envision separating education between secure and open assessments—give me a dose of AI-proof assignments along with an acknowledgment that anything outside of secure assessments could be generated via a machine. There’s certainly a logic to this, but each time I think about it, I find myself incredibly saddened by how quickly some have decided the way forward is to completely rebuild how we assess learning because of AI.
Mind you, this is a technology that is free for now and iterates rapidly in public, making it nearly impossible to replicate results from one research study to the next. Simply put, we are in a period of technical acceleration fueled by massive amounts of venture capital and very little understanding of how it impacts learning. What do we think will happen when the music stops and the money dries up?
Eventually, someone will have to pay for the cost of inference behind these advanced features. Five years from now, maybe sooner, we could see a crash or a sudden shift where the most advanced versions of these tools become ridiculously expensive or unavailable to all but a few of our students. What will we do then? If we remake education in the name of AI preparedness, we need to consider whether the changes we implement will be reversible, and I don't just mean assessments. We have no idea what AI saturation will do to our students' cognitive abilities or to our own.
This tension between resistance and adaptation isn't new for educators—we've navigated technological shifts before, though perhaps none quite this rapid. When I first began teaching some fifteen years ago, it was as an instructional assistant in a large lecture class of over 200 students. I, along with another IA, would grade two or three rounds of bluebook written exams each semester. Doing the process by hand took us two weeks, often longer, just to grade and provide short feedback to students.
I stopped giving handwritten exams years ago, in no small part because I couldn't provide the level of feedback students needed fast enough (or with legible handwriting) the way I could with a word processor. There is no going back to handwritten exams for me, nor is there for many others. Should we be similarly concerned that, the more AI automates our core skills, we'll struggle to return to the ways we worked before?
In some ways, I think our world is all the better because institutions of higher learning are slow to adapt to change. As much as I've railed against the endless layers of bureaucracy and archaic procedures that have made our AI response so challenging, there's comfort in knowing that academia is a tortoise and not a hare. Higher education has always been on the cusp of cultural transitions, and I think there's some structural friction by design that provides us with the space needed to sort out some of these questions.
Social change will never match the pace of technological innovation, but we shouldn't be lulled into complacency. Starting to think now about a future where generative AI remains part of our educational landscape is vital, and these conversations are likely to happen in our classrooms long before they make it into administrative meetings. Keeping them practical and focused on positive outcomes for our students and ourselves will continue to be one of the challenges we face.
"Adopting or resisting AI feels increasingly impossible."
This might be my favorite piece of yours as far as its acknowledgment of how stuck we sort of are—and how we are lying to ourselves if we believe either enthusiastic adoption or resistance is even possible. (Indeed, the "enthusiasm" either direction feels like self-delusion the more I step back and scrutinize it.)
For me, I'm lock-step with you in that the right path is to be intentionally slow and deliberate while stubbornly curious.
A very bad analogy for this, perhaps, would be the opening Red-Light-Green-Light game from Squid Game; those rushing ahead are destined to, well, you know if you know—but if you wait too long to move forward the results are equally grim.
Great post with lots of provocative and unanswerable questions. My one quibble is I think you take the VandeHei quote out of context - he is speaking to his finance and legal team internally. For them, it is career suicide to not be experimenting. It further underscores the disconnect between academia and industry. Who are the kids going to listen to? While it may not be career suicide if faculty don't at least get familiar with AI tools, I think it may be career irrelevance. But what this post gets at that I wish more faculty would understand is the breadth and pace of change that has upended the entire conversation. At the moment, none of this looks like it is slowing down.