Google is now offering college students more than a year of free access to its Gemini Advanced suite of AI tools. With that announcement, students in the US and Canada now have free trials of premium generative AI tools from OpenAI, xAI, and Google. AI companies realized early on that the power users of their products were students, not working professionals, and they are now targeting that demographic more heavily than ever. The move is also a staggering blow to equity and access for a technology often marketed for its universal ability to uplift humanity: to get free access to the most advanced versions, you’ll need to be enrolled as an active college student. That’s hardly an equitable framework for scaling a technology through society. It is, however, a shrewd and calculated marketing move to hook users at a young age, users who are likely to pay later in their professional lives.
Free Premium AI for Some, Not All
Google made the stunning move to offer college students access to Gemini Advanced ($20 a month) for free for over a year. The offer includes not only Google’s most powerful premium AI but also 2 terabytes of extra storage.
OpenAI offers college students in the US and Canada free access to ChatGPT Plus (normally $20 a month) through the end of May.
xAI now offers college students SuperGrok (also normally $20 a month) for free for 2 months when they sign up with a .edu email.
We’d be foolish to believe that this round of free trials will be the last. A significant number of students will eventually sign up for premium access. Maybe not this round, but in future offerings I’m sure we’ll see a steady uptick as word of mouth spreads among students.
We’ve already seen scattered pockets of students who’ve adopted premium versions of GenAI, but nothing approaches this scale. Faculty will now need premium access to GenAI tools just to keep up, yet they aren’t included in these offers. Many of our campuses adopted Microsoft’s Copilot (originally called Bing Chat Enterprise). Copilot gave faculty access to GPT-4, then the most powerful model on the market, and it came with data protection. It is now hopelessly out of date: the model hasn’t been updated since October 2023. That’s a lifetime in this generative AI era.
It’s one thing to try to pit your assessments against the free version of ChatGPT. It is quite another to navigate the maze of premium AI features that some of your students may have just enabled. It’s also expensive. Really expensive. Especially if you’re trying to access the premium versions of two or three tools that each cost $20 per month.
Put bluntly, without access to premium GenAI, faculty will not be able to gauge how this technology impacts student learning. Testing your assignment directions against a free model that is less powerful than the premium models, or assuming students won’t take advantage of the higher usage limits bundled with premium access, is sure to create a false sense of what students using premium GenAI can and cannot do in the disciplines we teach.
Uneven Access Makes Teaching Exceptionally Complicated
Yet not all students even want to use AI. We need to prepare ourselves for students who don’t use AI, those who use free AI, and those who increasingly have access to premium AI features. Some will know how to use these tools effectively, others will use them poorly, and few will use them with ethical disclosure. Teaching in this dynamic is without precedent.
From a technical standpoint, it’s asking faculty to teach students with bizarrely different access to tools, features, and skill sets. It isn’t going to be a level playing field for anyone. Even faculty with pro-AI policies and access should pause and consider what the landscape will look like this fall, teaching students who missed out on or opted out of these trials alongside a significant number of peers who may have opted in.
It’s no wonder that forums like Reddit’s r/Professors are filled with post after post about faculty returning to blue books. But this rush to secure exams misses the crucial point: AI impacts the learning leading up to the exam, calling the validity of the assessment itself into question. ChatGPT and other generative tools aren’t simply cheating tools used to write essays, solve math equations, or answer test questions. We’re increasingly seeing students use various GenAI tools as learning aids.
D. Graham Burnett’s recent essay in The New Yorker, “Will the Humanities Survive Artificial Intelligence?,” takes a look at using a tool like NotebookLM to summarize a previous course he’d taught:
On a lark, I fed the entire nine-hundred-page PDF—split into three hefty chunks—to Google’s free A.I. tool, NotebookLM, just to see what it would make of a decade’s worth of recondite research. Then I asked it to produce a podcast. It churned for five minutes while I tied on an apron and started cleaning my kitchen. Then I popped in my earbuds and listened as a chirpy synthetic duo—one male, one female—dished for thirty-two minutes about my course.
What can I say? Yes, parts of their conversation were a bit, shall we say, middlebrow. Yes, they fell back on some pedestrian formulations (along the lines of “Gee, history really shows us how things have changed”). But they also dug into a fiendishly difficult essay by an analytic philosopher of mind—an exploration of “attentionalism” by the fifth-century South Asian thinker Buddhaghosa—and handled it surprisingly well, even pausing to acknowledge the tricky pronunciation of certain terms in Pali. As I rinsed a pot, I thought, A-minus.
But it wasn’t over. Before I knew it, the cheerful bots began drawing connections between Kantian theories of the sublime and “The Epic Split” ad—with genuine insight and a few well-placed jokes. I removed my earbuds. O.K. Respect, I thought. That was straight-A work.
What hit me, listening to that podcast, was a sudden clarity about what’s happening in Washington (and beyond). If I had written the code that could do that with my nine-hundred-page course packet, I might feel a dangerous sense of mastery. I might even think, Give me admin privileges on the U.S. government—I’ll clean it up. That would be hubris, of course, the Achilles kind, and it would end in ruin. But I’d still probably feel like a minor deity. I might even think I deserved admin logins for all human institutions. I suspect that such thinking explains a lot about this moment: the coder kids are feeling that rush, and not entirely without reason.
Students can now use any number of GenAI tools to record and synthesize their lectures, summarize their readings or videos, and then have the AI create notecards and practice quizzes. A student can then take any number of proctored assessments and be granted a degree based not on what they know, but on how good AI was at creating a transactional summary of what that learning experience was supposed to be. Is the assessment valid if the learning was heavily AI-assisted?
The adoption of AI in education won't happen uniformly. Just as students have different levels of access to and skills with generative AI tools, educational institutions will implement AI at varying rates and in different ways. This uneven adoption will likely create a fragmented educational landscape, where how students use AI for learning depends on which department or school they're in. Maybe even who teaches a particular class.
The Student Experience Matters
The messaging students receive about generative AI is maddening in its contradictions. Companies are now offering them free access to premium GenAI tools, many faculty are rushing to ban their use, and industry is signaling that it expects college graduates to be fully AI literate. How would you feel in this landscape? Faculty are telling students they must show their knowledge and skills without tools like ChatGPT, their future employers are screaming for them to arrive prepared to use this technology in the name of efficiency, and tech developers keep offering them more GenAI features than anyone can keep up with.
Students have to navigate these opposing messages and are largely left on their own to develop an understanding of GenAI as a practical tool and an ethical framework for using, or not using, it. How does that work when, each day, students encounter faculty who give them the green light to use AI, others who say doing so will lead to academic integrity charges and a failed class, and still others who say nothing at all about generative AI?
As Burnett astutely observes in his recent New Yorker piece:
[E]veryone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. It’s time to talk about what all this means for university life, and for the humanities in particular.
The student experience should be one of the main things we consider when we talk about generative AI in education. This technology impacts how students study, take tests, communicate, and even form and navigate relationships. If we’re not taking the student experience into account when we discuss GenAI’s impact on learning, then we’re missing the point. It’s also the reason I think we need to discuss AI with students far more than we currently do. That isn’t ever going to happen if we continue to treat this technology solely as a cheating tool. We have to talk to students about how this thing we call AI impacts their world within and beyond our classrooms.
Institutions must develop comprehensive AI policies that address both access inequities and pedagogical integration. Faculty need professional development and financial support to access the same tools their students use. Most importantly, we need to bring students into this conversation, creating collaborative frameworks where AI literacy becomes part of their educational journey rather than an unspoken advantage for some and disadvantage for others.
Summer Plans:
For the first time in 15 years, I won’t be teaching a summer session. Instead, I’ll be working with several universities and school districts on crafting AI policy and training faculty in AI awareness. If you’d like to learn more about the training I’ve been doing, you can read about it here, and feel free to drop me a line.
Upcoming Events:
I’ll be speaking at several upcoming conferences and events this summer, so please say hello if we run into one another.
EAB’s 2025 Presidential Experience Lab held at OpenAI’s New York headquarters
Perusall Exchange 2025: Why Reading Matters to the Future of Learning
Artificial Intelligence and Digital Literacy: Toward an Inclusive Practice
Interviews and Podcasts
Here are several recent interviews, articles, and podcasts where I spoke about AI.
Deepa Seetharaman, The Wall Street Journal: There’s a Good Chance Your Kid Uses AI to Cheat
Beth McMurtrie, The Chronicle: Should College Graduates Be AI Literate?
Perusall Social Learning Amplified Podcast: Reading and Writing in our Generative AI Era with Marc Watkins
Pause for Learning Podcast: Teaching and AI
Smarter Campus Podcast: From the Frontlines: Marc Watkins on AI in the Classroom and the Future
Faculty also need to be ready to work with students who refuse to use AI out of environmental concerns. If you teach a class that isn’t AI-focused but require AI for one of your assignments, have an alternative assignment ready for these students.
Way to go, Marc! So glad to see such a thoughtful critic of AI in education getting exposure in so many outlets.
My views align with yours in many ways, but one area where we diverge is the lack of uniformity or standardization in the approach to AI in higher education.
I see this plurality of approaches as a benefit of the autonomy higher ed faculty enjoy. For the most part, we have enormous freedom in our classroom practices, and I think the resulting variety benefits our students. I wish K-12 teachers were given a similar level of autonomy.
The biggest problem I see is the lack of clarity many teachers give their students on what is allowed and a general lack of dialogue between students and teachers outside the power dynamics of the classroom. I'm increasingly wary of institutional policies that don't respect the range of views and practices I hear about every day.
Since we don't really know much yet about the educational value of AI, I hope we avoid constraining experiments, including those by teachers who prohibit the use of AI in their classes and those who embrace it.