Before You Buy AI for Your Campus, Read This
Does your institution need to buy AI? Will students even use it? Just because students use generative AI tools like ChatGPT on campus does not mean the technology is inherently designed for educational purposes. One of the more striking things you notice as you begin to explore the multitude of AI tools marketed to education is how little they differ from their public offerings. Most come bundled with data protections and little else to distinguish an educational AI license from the plans students already have for free.
We’ve heard for the last three years that faculty have to rethink teaching and assessment in the wake of AI, but what is really lost in this discourse is how universities will need to rethink their very existence because of machine intelligence. We may not need to purchase AI, and we may not be able to build customized degree pathways around a technology that changes so rapidly. What, then, is the role of a university?
Struggling to Sell What You Already Gave Away for Free
In What to ask when AI vendors show up on your campus, Katie (Kathryn) Conrad gets right to the tension and growing disconnect faculty feel when AI vendors arrive on campus trying to sell products. Sometimes these pitches are institutional adoption campaigns, in the vein of CSU buying system-wide access to OpenAI’s ChatGPT, while many others attempt to sell individual subscriptions directly to students, and sometimes to faculty.
Conrad’s excellent ethical questions are ones faculty, and even students, should ask representatives of these companies when they show up. As Conrad notes in her introduction, faculty are rarely included in these institution-wide conversations about purchasing AI licenses for their campus:
As a few of my fellow critical AI colleagues have noted, even just getting reps (or administrators) to answer the question of what they mean by “AI” can be quite revealing. But in an environment in which you may be the only person asking (and I’d encourage you to get at least one other colleague to join you in asking questions, which can encourage others to speak up), you might want to pick a question that will get others thinking about the answers that are (or aren’t) provided.
I’d like to build on these questions for a different, but related, audience that may not be swayed by ethical concerns alone. Here are three practical questions I’d encourage CIOs, provosts, university presidents, and other decision makers to ask these vendors before purchasing any AI product.
1. What evidence do you have that students will stop using the free version of AI tools and start using institutional licenses if they are purchased?
Conrad notes representatives pitch campus-wide AI adoption initiatives by leaning heavily into notions of equity and access. That’s the sales pitch, and it is beginning to wear thin because students aren’t willing to ditch their free AI plans. These arguments come bundled with data protection, mentions of secure access, and HIPAA and FERPA compliance. That’s the language Edtech providers have been using for decades to sell institutions their products. The problem is that students have little interest in using the AI tools their campuses provide when they already have the free equivalent. Some of the reasons for this are clear, and they stem from how these tools are viewed culturally right now.
Students in high school and throughout higher education don’t trust that their use of institutionally provided AI tools won’t be monitored or linked back to them if they use AI inappropriately in a class. They prefer to use their own AI, and that largely remains ChatGPT.
Students don’t see the need to learn a new tool or interface that might differ from ChatGPT. Gemini, Claude, Grok, and Perplexity join countless wrapper apps that have been sold to schools and universities but see significantly less usage by students.
Often, the free version of ChatGPT is all students have used or have any interest in using. I fear many have falsely equated student use of one AI tool with an interest in learning how AI works or exploring alternatives. Having worked closely with students using AI these past three years, I can say with some confidence that very few are deeply interested in using a different AI tool than the one they already use. That’s not just in the USA. The Digital Education Council’s recent 2026 survey about AI use throughout Latin America reveals ChatGPT remains the tool of choice for students:
Arguments about equity, access, and data security don’t hold up well if students refuse to stop using the free versions of AI. Universities cannot and likely will not continue to pay expensive fees for AI licenses that students use only infrequently, when faculty assign them and specifically instruct students to use them. AI developers have so thoroughly saturated the market with free AI that most users don’t see the need to purchase a $20 per month subscription to gain more access to advanced features. Heck, ChatGPT’s free tier gives almost anyone limited access to the most advanced features on a rolling basis.
It’s a self-inflicted irony: vendors gave their AI away for free and now struggle to sell a repackaged version of it with an educational wrapper around it. The free version adopted so heavily by students has become the default for an entire generation. If developers altered how these tools functioned, making them more difficult for students to use or placing more guardrails around usage, students would all but certainly stop using them and search for alternatives that let them breeze through their coursework.
2. Why should universities purchase AI education plans if students can use agentic AI to bypass them?
A fundamental blind spot many AI developers have is not recognizing how agentic AI breaks existing systems, including the very Edtech solutions they’re trying to sell. When faculty use AI with students, it is generally as an assistant or in a way geared toward teaching students how to use AI to augment an existing skill. These techniques are iterative and require critical thinking and ethical decision-making. Agentic AI transcends all of that. A student can simply instruct an AI-enabled browser, like Perplexity’s Comet, OpenAI’s Atlas, or Gemini within Google Chrome, to complete the task for them. And, yes, this includes having an agentic browser log into another AI tool, even a university-provided one. Imagine trying to teach your students how to use a chatbot interface to help them study, only to discover they’re using an agentic browser to do the assignment for them! Anna Mills recently tested Google’s implementation of Gemini within Chrome and found that it would take an exam on her behalf. Many universities will eventually ask what the point is in purchasing any Edtech right now if it cannot be secured.
What’s really baffling is how little awareness there is on the vendors’ part. I get it: most of the AI reps that visit campuses are in sales, and they only get a few slide decks and talking points about the product. Still, it is shocking to me that so few of the vendors I’ve spoken with have a sense of the stakes or the actual capabilities of the products they’re trying to sell. If your company releases a free product, like an agentic browser, that undermines the expensive educational version you’ve been assigned to sell, I’d expect at least some level of explanation. Instead, it’s often “I didn’t know it could do that.” To me, that’s really alarming!
3. Why should our campus integrate AI tools into degree programs when many companies across sectors are using AI to eliminate junior-level positions?
Outside of the equity and access arguments, the one I hear the most is that universities need to help students learn how AI works so they are professionally ready for jobs that increasingly ask them to use this technology. Those were great arguments in 2023 and 2024, until last year, when certain sectors like coding and industries that rely heavily on entry-level work abruptly shifted to automate many of those junior positions. The shift isn’t evenly spread across industries or even within certain companies, but once a position is replaced by automation, it rarely comes back.
Our students are alarmed by this. So, too, are faculty. If the trend continues, we will witness a generation of new college graduates entering the workforce without clear pathways to careers. No, it won’t be every job, but the disruption will spread across so many white-collar careers that the end result will be nothing short of devastating. The World Economic Forum’s 2025 Future of Jobs report estimates 40% of companies will reduce staff as workers’ skills become less relevant. Entire programs of study may not survive. And those that do will likely experience dramatic and rapid changes that break how traditional college degrees are structured. If students stop seeing college as a pathway into the professional world, then enrollments will drop, possibly in far greater numbers than we can imagine.
Whether a university should adopt a technology that could contribute to such a catastrophic future for itself and its graduates is surely the existential question many have asked and will continue to ask. It adds to the ethical concerns, learning loss, and emerging mental health issues posed by AI that many in higher education and beyond have raised. Many faculty won’t be on board with mass adoption until we get a clearer sense of AI’s role in the workforce. And that likely won’t come for many years. No one knows how AI will play out in the near future, let alone 10 or 15 years down the road. For many, the safe bet is to remain skeptical.
Vendors are going to need to have responses to these questions, and campuses should be prepared to critically engage any supposed solution they provide. The core challenge is how we change our institutional thinking about AI and start treating it as more than a tech purchase.
Lead with Values
Higher education doesn’t move at the speed of Silicon Valley, nor should it. If vendors really want to sell AI, then they need to start addressing each of these questions, along with the scads more Katie Conrad raised. We hear the marketing that AI isn’t like other tools or innovations. Why, then, would vendors expect universities to treat it like a normal purchase and stay silent when there are so many practical and ethical considerations?
AI in education has been one of the most contentious and divisive topics of the past decade, so let’s do what we do best as institutions that value critical thinking and public discourse. Let’s teach our students about AI as a topic of critical inquiry! We don’t need to jump into purchasing AI to start considering our response and navigating the opportunities and challenges it poses. Institutions like Gonzaga University are making AI part of their core curriculum by putting it in conversation with their institutional values:
As students of a Catholic, Jesuit, and humanistic University, how do we educate ourselves to become women and men for a more just and humane global community?
This question is the anchor for Gonzaga’s core curriculum, a roadmap for all undergraduate students to cultivate understanding, learn what it means to be human, develop principles characterized by a well-lived life, and imagine what’s possible for their roles in the world.
As the core website underscores, “a core curriculum housed in the context of a liberal arts education in the Jesuit tradition offers the most complete environment for developing courageous individuals in any major who are ready to take on any career.”
Ann Ciasullo, director of the University Core, believes that the core is a natural fit for content about AI.
“Because a commitment to inquiry and discernment serves as the foundation of our core curriculum, our students will engage with AI in ways that are both practical and critical. They will not only explore how AI works but also analyze the implications of the content it produces. Given our mission, Gonzaga is in a unique position to frame questions about AI and digital literacy in meaningful ways.”
That aligns deeply with my own approach to teaching students how to navigate AI. But I wonder how alien that may be to some universities that view AI only as a product or as an opportunity to lure more students with the promise of credentials. We must confront some stubborn tendencies on our part that really limit the possibilities, and that’s hard. That’s cultural. And for many, the easier path is to simply buy a tool or service, pass the cost on to students, and forget about it.
What I’d love for campus leaders to consider are ways they can help shape the broader conversation, not by buying something, but instead by charging their campuses to hold a sustained conversation about what place AI has within their institution. It’s time. AI isn’t going away. Asking students, faculty, staff, and administration what role AI should play in the day-to-day operations and classrooms will be the question for most campuses for years to come.

Appreciate the shout-out and the questions here! The challenge is getting an audience with those making the decisions, as well as with those selling the systems, but these are important practical questions. Regardless of your feelings about AI use in education, I think many institutions lack accountability and audit systems.
As long as "free" and reasonably powerful versions of AI tools exist, getting students to use AI in class through a sanctioned wrapper is, at best, an illusion of control. Ironically, Marc, where I've seen it work best is actually with younger students who are more inclined to follow the rules and don't really know how to use LLMs yet. I saw this with my own 6th-grade daughter, who worked on a thoughtfully designed exercise through Flint.AI that required her to write a short "essay" through a scaffolded sequence of questions, examples, and analogies. As a teacher and father watching her in real time, observing both the questions and the final output, I can say she produced a short piece of writing that she was really proud of. When I asked if she thought it helped her, she said it did; she was engaged and productive for about 30 minutes while working on it. I honestly don't know how to feel about all this!