Showing Up for the Future: Why Educators Can’t Sit Out the AI Conversation
Guest post from Lew Ludwig
Anthropic recently announced a free version of their most advanced AI model for college students in the US. Within hours, OpenAI responded by making their $20-per-month ChatGPT Plus tier free for college students through the end of May, with the promise: “ChatGPT Plus is here to help you through finals.” Giving away their professional tier for free may seem foolish. Trust me—it isn’t. College students are the super users of generative AI, and OpenAI knows that this generation will ultimately decide the future of generative AI.
This is no arbitrary decision. Restricting the offer to US and Canadian college students is calculated, and it shows how little understanding there is in the tech world about equity and access to emerging technology, about the potential consequences for academic integrity at universities, or about how foolish institutions now look for having invested millions to give their students greater access to this technology. That funding could have been used to bolster research and teaching at our schools during these chaotic early days of the second Trump administration. Instead, it’s paying for a service the top AI company in the world is now giving away for free to keep college students from using any other AI tool, like Claude for Education.
Of course it’s only for a limited time—just through finals! But that might be the most alarming part. OpenAI, or any AI company, can expand or pull access to their tools at any time, for any reason. So much for OpenAI’s marketing that sells the idea of AGI as a tool that “empowers humanity.” I guess you need to be enrolled in college in the US or Canada to use AI as an “amplifier for humanity.”
We Need to Teach Students About AI
Beth McMurtrie’s excellent recent essay in the Chronicle, “Should College Graduates Be AI Literate?”, is one of the most comprehensive pieces of reporting on how challenging the current campus climate makes it just to talk with folks about generative AI. We must move beyond arguments about generative AI and start taking practical steps toward engaging a technology that has rapidly disrupted how we teach, but not why we teach.
College students are the demographic AI developers see as most influential in the use of their products, today and in the future. Now is the time to lean in and start having conversations with our students about AI. It’s clear from this recent flurry of events that students in higher education are the ones who will truly chart generative AI’s course.
Which brings me to our guest post this week. Lew Ludwig, Director of the Center for Learning and Teaching and Mathematics Professor at Denison University, has likewise witnessed how hard it is for faculty to hold conversations about AI, but believes it is crucial we have these conversations. Below is a guest post from Lew, one he developed with the aid of AI using the AI Sandwich writing technique.
Guest Post from Lew Ludwig: The Cost of Not Showing Up
In a recent piece, Marc Watkins makes a strong case: we can’t afford to sit out the cultural conversation around AI. Not anymore. As faculty, we don’t have the luxury of moral distance or technical indifference. Whether we like it or not, generative AI is influencing higher education—and if we don’t step in, someone else will.
We don’t need to bow to the hype or join the cheerleading squad. But ignoring AI on principle—believing that a refusal to engage counts as resistance—carries its own risks. It hands the reins over to the corporations and administrators who are more interested in adoption than reflection. If we really care about our students, our pedagogy, and our values, we can’t stand apart. I was reminded of this while listening to Maha Bali on the Teaching in Higher Ed podcast, where she referenced Freire and Shor’s Pedagogy for Liberation. They remind us that if we want to challenge a dominant system, we must also teach students how to understand and navigate it. Not just to critique it—but because their ability to earn a living may depend on their fluency with it. As uncomfortable as it may be, students need to learn the rules of systems we ourselves may wish to change. Refusing to teach those rules doesn’t shield them from harm; it just leaves them less equipped to face it.
The Risk of Disengagement
Let’s be honest: most of us aren’t jumping headfirst into AI. At many of our institutions, it’s not a gold rush—it’s a quiet standoff. But the group I worry most about isn’t the early adopters. It’s the faculty who’ve decided to opt out altogether.
That choice often comes from a place of care. Concerns about data privacy, climate impact, exploitative labor, and the ethics of using large language models are real—and important. But choosing not to engage at all, even on ethical grounds, doesn’t remove us from the system. It just removes our voices from the conversation.
And without those voices, we risk letting others—those with very different priorities—make the decisions that shape what AI looks like in our classrooms, on our campuses, and in our broader culture of learning.
Ignoring the Inevitable
The genie’s been out of the bottle for a while now, and it’s not going back in. Generative AI isn’t a fad. It’s not waiting for us to catch up. And the longer we ignore it, the more likely we are to miss our moment—to lose our ability to shape what comes next.
To critique something well, we have to know how it works. That’s not complicity. That’s pedagogy. Shor and Freire understood this: real transformation doesn’t come from distance. It comes from deep understanding. If we want to push back on the worst of what AI might become, we need to step close enough to see how the machine is built.
What We Lose When We Step Away
When we don’t engage, we leave students to figure it out on their own. That doesn’t mean they won’t use AI—it just means they’ll use it without guidance. And when that happens, the problems we’re worried about—academic dishonesty, shallow thinking, unearned answers—don’t disappear. They get worse.
We’re also giving up the chance to shape how these tools show up in our syllabi, our assignments, our institutional policies. We lose ground we may never get back. And we miss the opportunity to help students learn not just how to use AI, but how to question it—how to stay human inside systems that weren’t built with their humanity in mind.
Moving Forward, Without Losing Ourselves
Some faculty have carved out a clear position: AI is cheating. End of story. And I get the instinct. It feels like a line worth defending. But here’s the hard part: that stance doesn’t make the technology disappear. It just takes us out of the room where decisions are being made.
Engagement doesn’t have to mean endorsement. No one’s asking us to automate our grading or let ChatGPT write our lectures. But we do need to get our hands dirty. We need to try it, question it, teach with it—and teach about it. Not because it’s trendy, but because our students are already swimming in these waters, and they need someone to help them read the currents.
This kind of engagement is slower, messier, and far more valuable than blanket rejection or blind adoption. It looks like experimenting in low-stakes ways. Talking openly with students about what these tools can and can’t do. Showing them how to think critically, not just copy cleverly. And maybe most importantly, helping shape institutional policies that reflect not just what’s possible, but what’s responsible.
The Other Extreme: Blind Embrace
Of course, disengagement isn’t the only danger. There’s also the temptation to say yes to everything. To let the AI do more and more of the work—not because it makes learning better, but because it makes things faster or easier.
But when we over-rely on these tools, we’re not just outsourcing labor. We’re outsourcing judgment. That’s what automation bias does—it makes us trust the machine over ourselves. It lulls students into accepting outputs without question, and tempts educators to confuse novelty with effectiveness.
That kind of adoption doesn’t just risk shallow learning—it risks building a system where students become passive consumers of knowledge rather than active creators of it.
Holding the Tension: Strategic, Critical Engagement
So what do we do with all this? We hold the tension. We resist the urge to swing to either extreme. We lean into what Shor and Freire taught: liberation doesn’t come from silence or surrender. It comes from participation—critical, messy, thoughtful participation.
That might mean building assignments where AI plays a role, but so does reflection. It might mean asking students to compare their own work with AI outputs, or to examine the biases baked into these systems. It might mean joining committees, hosting workshops, and asking hard questions at faculty meetings. Whatever the path, the point is the same: we show up. We engage.
Not because AI is the future. But because we are.
Charting a Way Through
Truth is, this isn’t easy. There’s no neat framework, no checklist for ethical AI use in education that works across all disciplines, institutions, and teaching philosophies. It’s messy and uncertain—and that’s exactly why we need to stay in it.
If we step back, we lose the chance to guide our students—not just in how to use AI tools, but in how to think about them. If we jump in too fast, we risk confusing novelty with progress. Either way, we miss the opportunity to shape something better.
So maybe the goal isn’t to master AI. Maybe it’s to stay close enough to it that we can push when we need to push, question what needs questioning, and protect what really matters about teaching and learning.
We don’t need to have it all figured out. But we do need to show up—with curiosity, with skepticism, with care. Because the future of education isn’t something that will just happen to us. It’s something we help build—one thoughtful, complicated, imperfect step at a time.
AI Disclosure: This piece was created using the AI Sandwich writing technique.
Special thanks to Maha Bali and Marc Watkins for their feedback and inspiration.
Bio:
Lew Ludwig, Director of the Center for Learning and Teaching and Mathematics Professor at Denison University, focuses on integrating generative AI into higher education. He has led over 40 workshops and webinars on this topic, received the POD Innovation Award in 2021, and is the recipient of the Ohio MAA Effective Teaching Award.
I want to challenge one of the underlying assumptions contained in this post, an assumption that has been repeated as fact without critical consideration: that the current adoption of LLM-based generative AI is only the beginning of a larger trend of widespread adoption. This is absolutely not a foregone conclusion. The money being lost by OpenAI is unprecedented. OpenAI lost $5 billion in 2024 and loses money on even its paid subscription services. This technology is only offered to the public through massive amounts of speculative investment. Offering a cool tool to the public in hopes that it will someday become essential will only satisfy investors for so long. They will want to see user data first and then profits.
Regarding user data, according to OpenAI's own propaganda piece, “Building an AI-Ready Workforce”: “More than any other use case, more than any other kind of user, college-aged young adults in the US are embracing ChatGPT…” No wonder OpenAI is doubling down on providing AI to students. Adoption by students is driving their growth. By contrast, despite business leaders’ excitement about AI generally, the actual workers tasked with using it have mostly not found much help in a general information device that is frequently inaccurate and lacks privacy protections. Few jobs are general. Most work is particular. Even in areas like customer service, there have not been any of the anticipated business disruptions. So, what industry has been disrupted? A number of business indicators now suggest that AI has disrupted what I would consider the cheating industry. Companies like Chegg and Quizlet appear to be losing market share as ChatGPT gains. User trends revealed through Google Trends show ChatGPT usage patterns over the school year with almost identical peaks and valleys to Quizlet and Chegg. More than any other use, ChatGPT appears to be used as a highly effective cheating device for students.
From a business point of view, what we have are AI companies building user numbers by tempting students away from the work of studying for grades, then using these growth statistics to convince businesses that they need to keep up with the users of the future (students), then trying to convince our institutions that we need to make our students AI-ready for future jobs.
In the meantime, those students who use the tool often see their learning compromised as a result. To me, this is a situation like Big Tobacco, where the companies are trying to build business by creating unhealthy dependencies.
I agree that we need to engage with this problem rather than sit on the sidelines, but I don’t agree that the future of AI is either clear or inevitable. We should advocate to hold these companies responsible for the harm they are imposing on our education system.
I agree that educators can't sit this one out. In fact, hiding our heads in the sand is irresponsible when it comes to preparing students for the future they face. I am suggesting that there is a very different way to deal with the issue -- I'm suggesting adopting a "Proficiency" orientation. Here's a link to the Substack post I published earlier this week. I'd love to have your feedback on the concept. I'm not wedded to all the details, but I think the concept has merit as an alternative to the yes/no discussions that many seem to be having. https://twelchky.substack.com/p/ai-in-schools-a-call-for-a-new-kind?r=6jqjj