The AI Wolf That Education Must Face
GPT-5 is now here and freely available to all. For many, this will be their first interaction with a reasoning model, and for all of us, it is another experiment none of us asked for but all of us have to deal with. Perhaps the greatest change is in the interface itself. You no longer need to select which model does the work for you; an internal router behind the scenes now makes that choice on your behalf, eliminating yet another point of friction. But this is already causing quite a bit of confusion, as Ethan Mollick noted on X:
We’re seeing a lot of mixed reactions as people use GPT-5. Sometimes the model uses the nano version of GPT-5, while other times it is the full model. Without transparency behind the scenes, it basically becomes a giant guessing game, letting the router roll the dice and pick which model is best based on . . . what?
And to cap this absurdity, Sam Altman's response to criticism was to roll back many of these updated features mere hours after they launched, making any attempt at understanding what has changed basically moot.
GPT-5 launched while I was midway through writing a post about the return to the semester. It is exhausting to try to weave all these things together, to even think clearly about the classroom. So here I am trying to prepare for fall classes, and OpenAI releases one of its biggest updates yet. Here we all are.
Education is approaching an hour of the wolf moment with generative AI. It has been nearly three years now since ChatGPT arrived, and no one agrees on what to do about it. Hell, quite a few people still think we shouldn’t do anything. Embrace AI and you’re a fool who lets the wolf inside; ban it and you’re a fool twice over for thinking you can keep the wolf at bay; ignore it and be devoured. The wolf isn’t going anywhere—it’s only getting bigger.
As someone with a background in the humanities, I think the best route I know to deal with an intractable problem is to actually talk about it with the people affected by it the most. For generative AI, that means our students.
From Obligation to an Invitation to Talk About Value
A new semester approaches, and once again, faculty are faced with an obligation to provide guidance to their students about a commercial product many of them have little interest in and, frankly, barely understand. I’d like to pivot at this point and try to inject some humanity into this machine-generated moment. Instead of viewing AI guidance as a point of drudgery, what if we could reframe AI policies as invitations for students to understand their own emerging responsibilities related to AI and discuss what they value about learning and how this technology might help or hinder their goals?
Faculty have had to shoulder the burden of giving their students an AI policy, one that their institutions have largely outsourced to those in the classroom. Without clear guidance, each faculty member is left to establish their own course-specific AI policy that is often at odds with colleagues teaching in different disciplines, sometimes within their own departments. You might draft an incredibly restrictive AI policy, even ban it, then shockingly find that you are the outlier in your college or department.
Worse still, the onus for absorbing this chaotic approach falls onto students. The messaging they are receiving is maddening in its contradictions for a general-purpose technology. It ranges from “use AI in my class and I will fail you and bring you up on academic misconduct charges” to “use AI in my class to help you achieve the best learning experience possible.”
A big part of the challenge with policy is that it centers on the concept that the school provides a tool or service and sets expectations around how it is used. Generative AI has steamrolled that notion. We’ve also seen Edtech vendors rush to embed generative AI into existing services. How much of a problem is this? Laurie Bridges shared Aaron Tay’s post about one such integration of an AI chatbot inside one of the largest academic databases that now censors search terms for genocide, lynching, January 6th, Covid-19, and Gaza. Institutions don’t have to purchase an AI product for it to be a problem—it is already here.
Invitation Approach
Useful for small courses, in person
Bad fit for large courses or an overwhelmed faculty member
Not all syllabus policies about AI will be the same, even for those of us who acknowledge that students will have access to AI tools. For a smaller class I’m teaching about writing and generative AI, I plan to use the following language:
For our class AI policy, I’m going to start by telling you what I won’t be using AI for and why that is meaningful to me. I won’t be using AI to answer emails, provide feedback, or grade your work. I also won’t be using AI to write letters of recommendation. The reason why I won’t be using AI for these purposes is that it can impact the relationship I have with you, and that is something I value much more than efficiency. You may feel differently, and that’s okay. Any time that I do use AI, I will be transparent about how it is used, including labelling what was generated by a machine. I invite you to craft your own statement about how you will use AI or not use it in this class, focused on what you value about your student experience, learning, and the relationship you have with your peers and me. While your own stance can take many forms and change during the course of the semester, one area I will ask you to respect is openly disclosing when you use AI with me and one another.
I want this to be an invitation for students to think about what it is really going to be like having to make decisions about using AI in the world beyond school when there’s no one there to hand them guidance about dos and don’ts with AI. I want them to think about what the consequences might be if they turn to a chatbot uncritically. Most of all, I want to know what they value about learning.
The Stop Light Approach
Useful for larger courses, introductory classes, or multiple sections
Bad fit for asynchronous courses
For larger classes, multiple sections, or introductory courses, I’ve adopted a stoplight approach to AI usage since the fall of 2022. Unlike the invitation approach, I need to set clearer expectations with students in ways that are manageable for me. I do still invite them several times during the semester to add or delete areas from the green light category or tweak language in the yellow light section. You can read more about the AI-Assisted Writing vs. AI-Generated Writing template I use on my syllabus here, or view it below:


The thing is, slapping a policy on a syllabus isn’t enough. Nor is inviting students in to edit or co-create the policy. I am one of five or six teachers they have to think about in a given semester. Students will forget your course policies, often. That’s why I carry my stoplight approach down to individual assignments and include clear labels for what is or isn’t allowed within them. You can read more about AI Assignment Assistance Guidelines or see the image below:
Does it work? Not always. No method is going to be foolproof at keeping AI from interfering with learning, but that’s not really the point. None of us controls the technology our students have access to outside of the classroom, and in no version of reality do I see my future students being asked to go through the myriad of obstacles many educators have introduced since AI’s arrival to try to validate what they know.
That’s the hard part I think we’re all grappling with—students are telling us through their use of AI that it is valuable to them, and many report that AI helps them learn. Is it actual learning, or the illusion of learning? I’m not sure anyone really knows. Either way, we need to talk with students about it. Otherwise, there is a real risk that artificial intelligence will steadily carve away at the experiences we value sharing with the human intelligence of our students, simply because we are opposed to even having a conversation with them about it.
SXSW EDU
I’d love for you to take a look at my proposed session and consider voting for it to be included at SXSW EDU this March. You can vote for it by clicking the heart button through the link or image. My session is about what it means to become AI aware in the classroom and has an awkward video.