What’s the state of AI oversight after the election . . . this election? For one, we can kiss goodbye any chance of regulatory action to pause generative AI’s deployment as a massive public experiment. Put simply, there was little will in Washington before November 5th to take such action, and now there’s no realistic chance we’ll see any meaningful oversight of AI.
Biden’s executive order on AI likely won’t survive the first few months of Trump’s second term. Maybe not even the first day. Any regulatory hurdle to curb AI’s water and energy consumption is likewise off the table. Just this week, Meta had its recent ambitions to build a nuclear-powered data center thwarted by a rare species of bees. The Federal Energy Regulatory Commission also cut short a plan for Amazon to power a new data center with nuclear energy. That’s the power of regulation. Sadly, we’d be delusional to believe we’ll see headlines like these over the next four years.
Elon Musk is now a confidant of the incoming president, and having a capable technocratic billionaire who hates regulation in that role is a sure sign that business interests will continue to dominate. For education, this means we’re in for even more AI marketed at students and teachers. It also means we faculty will be left largely on our own to develop meaningful guardrails around AI’s impact on learning.
Telling students AI is cheating across the board or taking a firm moral stance against the use of this technology is an option many faculty want to pursue. But I think most of us know that bans aren’t going to work. Generative AI is in our world, and those positions will not be tenable as universities rush to enable generative tools for faculty and students, go on hiring blitzes for faculty with AI expertise, and make AI part of degree paths.
We Have to Engage AI If We Want To Resist It
Perhaps the best way to resist the uncritical adoption of AI by our students is to lean into the classroom time we have with them each week and allow them to discuss AI as a cultural force, so they can learn to question how this technology is altering their world. One method is taking an approach akin to Jane Rosenzweig’s themed writing course at Harvard, “To What Problem is ChatGPT the Solution,” and making AI a central topic of inquiry in a class. Here’s a brief introduction taken from her syllabus linked above:
What does it mean for a college education if ChatGPT can pass exams and write essays? What jobs will disappear as generative AI becomes more sophisticated, and what jobs will emerge? Do you want to watch a movie featuring AI-generated versions of your favorite stars, speaking lines generated by ChatGPT? Should tech companies be able to use your written work to train their AI tools? What role should the government play, if any, in regulating generative AI tools? Since ChatGPT was released in November 2022, experts and pundits have raised these questions and many others, predicting that generative AI will lead to everything from the extinction of the human race to unprecedented prosperity to a society mired in disinformation, bias, and inequality. In this course, we’ll consider a wide range of arguments by AI ethicists, scholars, writers, and practitioners as we try to make sense of what problems generative AI tools can solve–and what problems these tools may create.
Not all of us have the ability to shape curriculum or theme our courses. Another option is to create space for students to reflect on their AI usage and require them to critically evaluate what affordances, if any, using a generative tool offered them. Below is a reprint of my article “Make AI Part of the Assignment” from my generative AI advice column in The Chronicle of Higher Education. I’ve also included a template I use to help my students make their learning transparent.
Make AI Part of the Assignment
Open your favorite social-media platform and you’ll see dozens of threads from faculty members in anguish over students using ChatGPT to cheat. Generative AI tools aren’t going away, and neither is the discourse that using them is academically dishonest. But beyond that issue is another worth considering: What can our students learn about writing — and their own writing process — through the open use of generative AI in the college classroom?
In his recent essay in The New Yorker, “Why AI Isn’t Going to Make Art,” Ted Chiang aptly describes students using AI to avoid learning and the dire effect that has on their skills development: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”
Learning requires friction, resistance, and even failure. Some three decades ago, Robert A. Bjork, a psychologist at the University of California at Los Angeles, coined the term “desirable difficulty” to describe the benefits that students get from doing increasingly challenging tasks to enhance their learning. ChatGPT removes many of those desirable difficulties by offering the user a frictionless experience: Prompt AI with a question, and get an instant answer. The student’s brain offloads everything to an algorithm.
Given that reality, how can you as a faculty member respond? In my first column, “Why We Should Normalize Open Disclosure of AI Use,” I noted that students are eager for standards because they want to use the technology openly and ethically. So the first step in responding is to set those standards in your own courses, and “normalize” disclosure of AI usage.
Here I will focus on the second step: how to introduce a bit of intentional friction into your students’ use of AI and find ways for them to demonstrate their learning when using the technology. Educators including Leon Furze, Katie Conrade, and Jane Rosenzweig have all written about the need to keep friction as a feature of the college classroom and not let generative tools automate learning.
In my own courses and as director of an AI institute for instructors at my university, I’ve adopted and suggested this method: As part of the assignment, require students to critically evaluate how they used the technology and how it affected their writing process. That way, they aren’t just passively relying on AI-generated content but meaningfully assessing its role in their writing.
Using a tool like ChatGPT often obscures the most critical aspects of a student’s writing process, leaving the instructor uncertain about which skills were used. So I created a form — the AI-Assisted Learning Template — to guide students in evaluating their own AI use on a particular assignment.
On the template, I first ask students to “highlight how you used human and machine skills in your learning” in five potential categories, and offer them a range of options to characterize whether and how they used AI tools to do the work:
Idea generation and critical thinking (for example: “I generated all of my ideas independently” or “I collaborated with AI to refine and expand on initial concepts”).
Research and information (“I utilized AI-powered search tools to find relevant information” or “I used AI-summarized articles but drew my own conclusions”).
Planning and organization (“I organized and structured my assignment on my own” or “I started with an AI-generated outline and developed it with my own insights”).
Content development (“I wrote all content without AI assistance” or “I expanded on AI-generated paragraphs with my own knowledge and creativity”).
Editing and refinement (“I edited and refined my work independently” or “I critically evaluated AI-suggested rewrites and selectively implemented them”).
Then the template lays out the prompt — “AI might have helped you learn in this process, or it may have hindered it. Take some time to answer some of the questions below that speak to your experience using AI.” — and poses some questions (tied to my learning outcomes) to help students write a short reflection about their usage of this emerging technology. Among the questions I list: What tricky situations arose when you used AI? How did you chart a path through them? Did bouncing ideas off AI spark your creativity? Were there any new exciting directions it led you toward, or did you wind up preferring your own insights independent of using AI? Which of your skills got a real workout from using AI? How do you feel you’ve improved?
Giving students the opportunity to think critically and openly about their AI usage lays bare some uncomfortable truths for both students and teachers. It can lead both parties to question their assumptions and be surprised by what they find. Faculty members may discover that students actually learned something using AI; conversely, students might realize that their use of these tools meant they didn’t learn much of anything at all. At the very least, asking students to disclose how they used AI on an assignment means you, as their instructor, will spend less time reading tea leaves trying to discern whether they did.
But, you may be wondering, won’t some students just use ChatGPT to write this assessment, too? Sure. But in my experience, most undergraduates are eager for mechanisms to show how they used AI tools. They want to incorporate AI into their assignments yet make it clear they still used their own thoughts. As faculty members, our best bet is to teach ethical usage and set baseline expectations without adopting intrusive and often unreliable surveillance.
Pre-ChatGPT, several of us tested three other AI tools (Elicit, Fermat, and Wordtune) in the writing-and-rhetoric department’s courses at the University of Mississippi. We published our findings in a March 2024 article on “Generative AI in First-Year Writing.” For our study, we evaluated students’ written comments about how they had used those three tools in their class work. Among our findings:
Students did, indeed, learn when they used AI tools in their writing process. The catch: Their learning was limited to short interactions with AI in structured assignments — and not with uncritical adoption of the tools.
Students identified the benefits afforded by the technology in exploring counterarguments, shaping research questions, restructuring sentences, and getting instant feedback. However, they were also aware of its limitations: For example, many students chose not to work with large chunks of generated text because it did not sound like them, preferring their own writing instead.
They didn’t just learn how to prompt a chatbot. By being asked to critically evaluate their use of these tools, and balance the speed of the technology with this required pause for reflection, students had to reaffirm with their own words the point of why they were in the classroom — to learn.
When you require students to disclose the role of AI as a routine part of an assignment, you also open up the avenue for students to realize that the tool may not actually have helped them. In our culture, we’ve become so accustomed to viewing failure as a bad thing that young learners avoid taking risks. But requiring open disclosure sends the message that it’s OK for them to try something new, and not succeed at it.
Mind you, it has only been 22 months since the public release of ChatGPT. We’re still grappling with the implications of generative tools and what they mean for students. We often learn the most about ourselves through failure. Let’s give students that same opportunity with AI.
What’s the alternative? If professors don’t advocate for such open disclosure in our new generative era, we risk offloading the task to a new wave of AI-detection tools that surveil a student’s entire writing process. Grammarly’s new Authorship tool lets students track their own writing process, capturing every move they make in a Google Doc. Flint uses linguistic fingerprinting and stylometry to compare student writing against a baseline sample. Google will begin watermarking generated text with SynthID. All of those methods supposedly show that AI was used. But none of them require students to think critically about what they learned when using the technology.
And using a tool to track your students’ writing only adds another layer of technology to attempt to solve a technology-created problem. You’re relying on a machine to try to validate whether a human wrote something. Personally, I’m not keen to participate in surveillance capitalism.
That’s why I recommend that faculty members shift focus away from technology as a solution and invest in human capital — i.e., us. Find ways for your students to openly disclose their use of AI tools and to demonstrate what they’ve learned when using the technology.
Thank you! Many faculty members are resisting AI labeling or disclosure, often dismissing it with comments like, “Aww, come on, we’ve all been using Grammarly.” 🤦🏽♀️ But you know what will ultimately drive change? Promotion and tenure requirements, along with scholarly communication policies. I conducted an analysis earlier this year and found that most major publishers have already implemented clear AI disclosure requirements. For example, here’s the AI policy from Taylor & Francis: https://taylorandfrancis.com/our-policies/ai-policy/. (Labels are coming!) Additionally, Google announced its AI detection tool just last month (it works better for longform text than for short social media posts), meaning AI use will increasingly be enforceable in journal and book submissions. (Still doesn't mean AI detection is foolproof!)
I also wanted to mention that the Authors Guild is doing fascinating work in this area. All of this gives me hope that, while government regulation of AI may be stalled, the American story of techno-science—the way technology and science are often harnessed by capitalism to drive profit—will still be powered by the human spirit. I remain hopeful about real chances for a more ethical integration of AI, one that balances innovation with accountability and creativity. Thanks again!
Thank you for saying it out loud. My students and I, who are reading "Burning Data" from RESET by Ronald Deibert, talked about this issue "the day after."
We should be collecting data on AI use, potential learning loss, and the things it's not good for when the user is a learner still acquiring literate discourse, so we can establish a counter-narrative to the commercialized utopianism currently out there.
Christine Ross,
Rochester Institute of Technology