Have an AI Policy? So Does Everyone Else
With so many approaches, we may be creating a confusing landscape for our students
My semester begins tomorrow. The anxiety and fear over students using LLMs only grew over the summer as educators tried to suss out which approach was best for our teaching and our students' learning. We all found and adopted different approaches as best we could, but we didn't pause to consider the impact so many approaches would have on our students. Some universities, like mine, offer faculty suggestions, but with no consensus on what approach to take, we may be creating a confusing landscape for teachers and students. The following are approaches to generative AI we've all encountered in education, and the editors of the forthcoming TextGenEd collection, Tim Laquintano, Carly Schnitzler, and Annette Vee, deserve full credit for being the first to articulate the moment we're in this fall.
Approach #1 Ban It
Banning is the default for most faculty. Most haven't had time to learn about the tech or figure out what skills they'd need to teach students to ensure ethical use in educational contexts. I empathize, but banning isn't going to work. You can take a moral stance and ask your students to do the same, but the trajectory of LLM deployment will soon put some version of the tech behind nearly every web-based text interface. Worse, many who ban the tech pair that ban with surveillance via unreliable AI detection software. In the last month, OpenAI shut down its AI text classifier tool, and universities like Vanderbilt and the University of Pittsburgh turned off the AI detection feature within Turnitin.
Approach #2 Embrace It
AI is inevitable, so let's stop writing altogether and generate text for every context and instance we come across! This is the most brash position, mixing heavy doses of techno-determinism and hype. We don't have data on the long-term impact generative technologies will have on skills or learning, and uncritically embracing them could erode the very skills we collectively value and try to teach our students. It isn't clear to me whether people taking this approach give a lick about data rights, how these systems work, or the impact such swift adoption will have on society.
Approach #3 Deep Dive Critical Exploration
This has been my approach and the approach of many in education: to critically engage the technology as both a tool and a process. We've adopted an exploratory approach, one where we engage the tech alongside students and teach them how these systems work, taking deep dives into the history and the data that power LLMs. Doing this requires resources and time, two things educators sorely lack at most institutions. One of my goals this past year has been to train faculty as rapidly as possible so they, too, can attain enough AI literacy to engage LLMs critically.
Approach #4 Chaos: Every Approach, All at Once
Imagine you are a first-year student signing up for five classes at a university this fall. Instead of encountering a coherent, agreed-upon policy, like those we've long had for academic honesty, you find multiple different approaches to a technology you don't understand, coming from professors who also don't fully understand it. Some ban it, others embrace it, and maybe a few critically approach it. Worse, you start hearing stories from fellow students who were falsely accused by their professors of generating content, based on unreliable AI detectors. Already there are reports of students running their own writing through detectors to test whether it gets falsely flagged. Then there are students like Jessica Zimny, who turned to self-surveillance, using screen recording software just to have evidence that her work was her own.
We had an extremely limited timeframe to collectively come up with coherent strategies in education, so we shouldn't beat ourselves up. We could never match the pace at which the tech industry deployed LLMs and turned them into a grand public experiment no one wanted.
What we need is a pause in the tech’s deployment and time to come to terms with what LLMs mean for education. But I don’t think this will happen.
Things You Can Do for Yourself and Your Students
Take a Policy Break: Instead of handing students a policy about LLMs, take a break and spend some class periods talking about what's going on. LLMs are now a part of our lives, and getting students to critically engage the world around them is one way to teach critical thinking.
You can have students explore Casey Fiesler’s AI Ethics and Policy News Crowdsourced List and assign readings from the list.
Let students take a dive into how LLMs work by having them read Pamela Mishkin's wonderful interactive essay Nothing Breaks Like AI Heart.
Expose students to types of writing where generative AI fails by assigning them to read Vauhini Vara’s fabulous essay Ghosts.
* I have assignments based on both essays you can access for free.
Learn with Your Students: Time and energy are in short supply. Many educators don’t have the weeks needed to take a deep dive into generative AI, nor do they have the professional development funds. Consider taking half an hour to an hour each day to explore something about the tech and bring your students along with you.
Ethan and Lilach Mollick have a short five-part video series called Practical AI for Instructors and Students.
Microsoft has an hour-long course called Empower educators to explore the potential of Artificial Intelligence.
I also have a professional development course called Generative AI in Education, which you can purchase, or you can apply for a scholarship or an OER Fellowship. I've already released several lessons for free: Using Generative Video and Using ChatGPT for Instructional Design.
Give Yourself and Your Students Some Grace: No one asked for generative technology to be released the way it has been. No one has 'the answer,' and we shouldn't expect to come to any sort of collective understanding or consensus about how we should or shouldn't use this tech. Your students will stumble and fail to meet your expectations, they'll ask questions you won't be able to answer, and they'll look to you for guidance when you are similarly searching for answers.
Take comfort in knowing that in the pilots I conducted this spring, many students actively chose not to adopt generative AI.
Most students are here to learn, not to offload that learning onto an algorithm.
Trust that the majority of your students don't want to cheat themselves out of an education by using generative AI to commit academic dishonesty.
Found this Helpful? Sign up for the Course!
Generative AI in Education has over twenty sections, and the first pathway is about AI literacy. Such understanding will be foundational in education as major tech companies continue to deploy and scale generative AI systems in public. I've released assignments from the course under a free CC-BY license, and I offer discounted group pricing, scholarships, and OER Fellowships for access to the course.
I'm thinking about keeping the policy wide open and just having conversations about how AI is or is not working in students' writing processes.
My experience has been that students discover the limitations themselves and start to see the value of their own writing in a new context.
Such a great article! Got my mind churning.
- What if we just kept writing samples and compared them randomly using AI over time to see if the style was consistent? Maybe train it on collegiate writing over a four-year course of study so it could understand change over time?
- Adopt a tutorial method where, once or twice per term, the student reads their essay aloud and answers questions from the instructor.
- Stop grading. If there is no good-grade pressure, then there is no advantage to plagiarizing. If it is true that what you get out depends on your effort, then why care about grades?