One of the most powerful tools we have for curbing AI misuse and shaping its ethical adoption is also one we struggle to employ. Talking with students is harder now than it was before the pandemic. Talking with one another is likewise strained by increasing political tension. And yet a persuasive conversation can be a far more effective tool than any AI detector. Indeed, many of us forget as we’ve aged just how formative some of our earliest discussions with teachers were in shaping how we learn. I’m willing to wager that a conversation has more power to shape the trajectory of AI adoption among certain groups than any tech marketing.
The conversations we need to have about generative AI include our students, administration, and our colleagues. I won’t sugarcoat any of this—these conversations will be a continuum lasting years. They will be frustrating because they may not appear to do anything at first, but that’s because human beings aren’t like machines—we’re slow to change and that’s a good thing. Each interaction we have with one another shapes our thinking about genAI’s impact on teaching and learning. We should not dismiss the small conversations as meaningless. What we should do is recognize the amount of time and energy it takes to simply talk about AI.
Purpose Matters
The opening conversation we have with students about AI is crucial. Students look to us to guide them in knowing what is ethical or allowed, and they are desperate to know what that means when it comes to generative AI. Some instructors will opt to ban it, and they have good reasons to do so, but your stance on AI should not preclude a conversation about generative AI or its impact on learning.
When I discuss generative AI with my students this fall, I’m going to start by touching on why we’re here in the classroom: to learn. My philosophy on AI is that it should be used to augment a student’s learning, not offload it. I make this clear in my syllabus and include labels for assignments indicating which tools or use cases are appropriate for AI and in what context. I will tell students that for many assignments, maybe even most of them in our class, using AI isn’t an effective way to augment their learning.
Attribution and Openness
For assignments where students use AI, I want them to attribute their use of the technology, and I let them know I’m not hunting for examples of cheating or shortcuts but rather looking for moments that show what they’ve learned, with or without AI. For me, the AI problem isn’t about cheating; it’s about helping students understand that they need to demonstrate what they’ve learned. In many ways, that means asking students to reflect on how they’ve learned.
Openness is key to maintaining trust in my classroom. In the spirit of this, I plan to model transparency each time I use AI in instructional design, labeling assignments, and other materials to show when AI was used so students know what I expect from them when they adopt AI assistance. These serve as conversation starters. I want them to ask me questions about my process, and about the choices I make when I draft an assignment. I may even run some activities I’ve written through Claude or GPT-4o for feedback in front of the class and get students involved in critiquing the output.
Using Restorative Justice in Academic Misconduct Cases
I also know that many students will struggle with these ethical guidelines, and some will challenge them, pushing their use of generative AI into areas I’ve identified as academically dishonest: namely, not being open about when they’ve used the technology, or using generative AI in a way that shows me they did not learn. Instead of bringing formal charges against a student when I suspect an infraction, I am piloting the following model. It’s designed to escalate if interventions are not successful.
First Offense: Meeting
If I suspect a student misused the technology or committed good old-fashioned plagiarism, I will meet with them one-on-one and share my concerns in a non-punitive, non-accusatory way. I talk to the student about what’s going on in their life and how their classes are going, and I see if anything is causing them undue stress before talking directly about expectations. This initial meeting might put the student on notice, but my main goal is to pull them back into the ethical framework of our community by letting them know that while mistakes happen, it’s learning from them that shows growth. I plan to let a student resubmit an assignment without penalty.
Second Offense: Taking a Restorative Approach
Students don’t always learn from a conversation. If a student continues to defy the norms we’ve established in our classroom community, the meeting escalates into an issue of formal misconduct. I want to make it clear to the student that this isn’t about rule-breaking; it’s about an ethical breach of trust that harms our relationship and the greater community. I can move to bring the student up on charges or pursue a restorative approach to academic integrity to see if we can repair the damage and restore trust going forward. Restorative approaches are tricky and take a great deal of thoughtful planning. I am using UT’s Restorative Approach to Academic Integrity as a starting point. If the student doesn’t feel this approach is right for them, or if I believe the ethical breach is too severe, either of us can withdraw from the restorative pathway and use the existing academic misconduct procedures.
Final Offense: Formal Charges
If the restorative approach fails or a student continues to break ethical standards of conduct within the class, then I have little recourse but to pursue formal charges. The evidence I’ll bring isn’t an AI detection report, but emails and meeting notes that clearly show the multiple attempts I’ve taken with the student to address their conduct within the course.
Will this work? Maybe. Maybe not. The point is that I am willing to try something. I’m approaching academic integrity from a position of trust with my students while clearly outlining a road map of consequences if their behavior continually challenges the norms of our class. However, the labor involved here is immense. I know many will simply throw up their hands at the thought of this, and that’s why we need to have conversations about AI not just with our students, but with all stakeholders.
The Conversation With Administration
If upper administration wants faculty to be AI literate and work to rethink AI assessments for academic integrity, then they first need to take steps to address the material conditions impacting labor. The same goes if an admin asks educators to lean into AI to make their job easier. Someone needs to pay for that time and training. Contract hours and paid professional development opportunities in the form of grants, fellowships, course redesign stipends, curriculum updates, and learning communities must be addressed because AI isn’t like the learning challenges we’ve faced in the past. The field moves quickly, with new tools and use cases popping up all the time. If institutions want faculty to actually adapt to AI and grapple with its impact on education, then they must rethink professional development as a continuum, not as one-off experiences or items to be checked off from a list at the beginning of the semester.
That’s easier said than done and goes well beyond generative AI. Why do so few institutions of higher learning in the US have staffed offices of academic integrity? In over a decade of teaching at two R1 schools, I’ve never had formal training about what academic integrity means. The training I received as a classroom teacher was to sit around a table with other grad students and discuss teaching for two hours a week.
Educators and students alike deserve a clear say in which generative AI tools an institution adopts. That has been ridiculously hard because most of these tools arrive as system updates, with little conversation about the downstream impact of turning them on.
The Conversations We Have With Each Other
What’s your plan for the fall? How do you think we should approach admin about generative AI tools they purchase? What is working within your class? These are the most important conversations we can have with one another. Yet, how often do we have a different conversation, one where it is clear our fellow colleague is too burned out, too overwhelmed, or frankly too pissed off to do more than curse? We’re people and we’ve been put into a daunting situation—figure out AI as it deploys in real-time around us. The notion that this is something we should keep track of as part of our daily lives is as outrageous as it is untenable.
Far too many people feel adrift, and sometimes just having a chat can throw them a lifeline. Oftentimes that becomes an airing of grievances. Something we can do is give purpose and direction to those justifiable feelings by shaping the discourse. If generative AI is getting in the way of student learning, what proactive ways can we address it without putting more strain on our relationships with students? That’s a good enough question to steer the ship back on course.
The Power of Rhetoric
The problem with talking and asking questions is that people inevitably want answers. Students want to know how to use generative AI ethically in their learning and what skills they’ll need once they graduate. We don’t have those answers for them and won’t for years to come. Neither will we come to terms with the labor needed to revamp assessments or develop the resources needed to fully realize the academic integrity we hope to establish in our new AI era. Rhetoric isn’t just a means to an end; it is how people learn to argue and use their human emotions and reasoning to arrive at informed opinions. Let’s celebrate that this fall through a conversation (or a few hundred of them!).