At the heart of conversations about buying generative AI licenses across institutions is equity—offering students equal access to the best tool or tools on the market. However, access isn’t the issue we think it is. The push for equity is just a familiar narrative repurposed to justify AI companies’ sales pitches for lucrative campus and statewide contracts. Overall, this isn’t about access, equity, or giving students tools they will need for future careers. They have that now. Often for free.
What’s really going on with campus-wide AI adoption is a mix of virtue signaling and panic purchasing. Universities aren’t paying for AI—they’re paying for the illusion of control. Institutions are buying into the idea that if they adopt AI at scale, they can manage how students use it, integrate it seamlessly into teaching and learning, and somehow future-proof education. But the reality is much messier.
AI doesn’t need schools or universities to distribute it. Students have access to the best generative tools for free or at a low cost. Likewise, most faculty don’t need institutional AI licenses to experiment with the latest generative models. And no one—neither administrators, nor policymakers, nor the AI companies themselves—can predict how this technology will evolve in the next six months, let alone five years.
Many institutions buying AI tools at scale are doing so under the faulty assumption that purchasing access gives them greater control over how their users interact with AI. It doesn’t.
Students largely don’t trust a university that hands them a generative tool and tells them it’s safe to use, because they don’t trust authority figures to monitor a technology they’ve come to see primarily as a cheating machine. That’s the price you pay when you roll out a brand-new technology with few safeguards, no guidelines, and free access.
Access to Generative AI Isn’t an Issue
Generative AI tools are often available for free or at low cost, and that’s not going to change. Sam Altman is on the record stating GPT-5 will be available for free. You can access all of Google’s top-class models for free via AI Studio. Heck, you can even turn on Google’s Multimodal Live feature and have a 10-minute session with an AI that talks with you and sees you and your computer screen, all for free.
Costs Are Going Down
Google is now offering students one-year subscriptions to its Gemini model at 50 percent off. For $9 a month, you get access to all the premium features of Google’s AI. Such a hefty discount is a sure sign of two things: the cost of running the models is going down, and the big AI companies are trying their best to convert users from free to paid tiers. My guess is they are struggling with the latter.
Users can get generative models to complete all sorts of individual tasks, but the sales pitch that bundling those tasks together with AI tools will make you a super user hasn’t struck a chord. Students use AI sporadically. Sometimes it’s to brainstorm or to help them study, but more often it’s likely to save time on an assessment. Write this essay for me, complete this discussion post, generate an outline for me, and condense this lecture for me. Save. Me. Time. That doesn’t translate into a coherent adoption strategy that justifies spending anywhere between $20 and $200 a month on models when the free versions often suffice.
We currently don’t have the resources to establish a curriculum in applied AI, nor do we have a consensus about how to teach generative AI skills ethically, in ways that preserve and enhance existing skills rather than letting them atrophy. It will take years of trial and error to integrate AI effectively in our disciplines. That’s assuming the technology will pause for a time. It won’t. Which leaves us in a constant state of trying to adapt. So why are we investing millions in greater access to tools no one has the bandwidth or resources to learn or integrate?
Deep Research Isn’t Living Up to the Hype
Deep Research is the new AI ‘it’ feature folks are talking about across socials, but is it worth paying for? Google’s paid Gemini plan was the first to launch it, then the Chinese model DeepSeek did something similar, followed by OpenAI releasing the feature under the same name. Even X’s Grok 3 has a Deep Research-like search. Now third-party tools like Perplexity have their own Deep Research tool.
The technique behind Deep Research mixes chain-of-thought prompting with internet search to essentially build a lit review. OpenAI’s flavor of Deep Research is more analytical than the rest and goes a step further to create a white-paper-length report, but that doesn’t mean it’s worth paying a premium of $200 a month to access.
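For readers who want the mechanics, here is a minimal sketch of that loop in Python. Every function in it is a hypothetical stand-in (plan_queries, web_search, synthesize) for the proprietary model and search calls these products actually make; it illustrates the pattern, not any vendor’s pipeline.

```python
# A minimal sketch of the "deep research" pattern: decompose a topic
# into queries, search, then synthesize a report. All three helpers
# are hypothetical stand-ins for proprietary model and search APIs.

def plan_queries(topic: str) -> list[str]:
    # A real system prompts the model (chain-of-thought style) to
    # break the topic into sub-questions; this stub just fans out.
    return [f"{topic} overview", f"{topic} criticisms", f"{topic} key studies"]

def web_search(query: str) -> list[str]:
    # Stand-in for a search API call returning snippets or pages.
    return [f"[snippet for: {query}]"]

def synthesize(topic: str, sources: list[str]) -> str:
    # Stand-in for prompting the model to write a cited report.
    # Note what is missing: nothing here verifies whether a source
    # is real, which is exactly the weakness reviewers flag.
    return f"Report on {topic!r}, built from {len(sources)} sources."

def deep_research(topic: str, rounds: int = 2) -> str:
    sources: list[str] = []
    for _ in range(rounds):
        # A real system would refine follow-up queries based on
        # what it has already read; this sketch just repeats.
        for query in plan_queries(topic):
            sources.extend(web_search(query))
    return synthesize(topic, sources)

if __name__ == "__main__":
    print(deep_research("generative AI in higher education"))
```

The design tell is in the final step: the loop accumulates whatever the search returns and hands it to the model to narrate, which is why the output can look like research without containing any.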
Leon Furze did the work of extensively comparing three versions of Deep Research tools on his blog. What Furze discovered in Hands on With Deep Research echoes what Derek Lowe found in his own review of OpenAI’s tool: impressive capabilities mixed with shallow, surface-level research and an inability to distinguish between real and fabricated sources. Furze aptly sums up his review of this new class of features:
The only conclusion I could arrive at is that it is an application for businesses and individuals whose job it is to produce lengthy, seemingly accurate reports that no one will actually read. Anyone whose role includes the kind of research destined to end up in a PowerPoint. It is designed to produce the appearance of research, without any actual research happening along the way.
The Real Cost of AI in Higher Ed
How much is higher education spending on AI? This seemingly straightforward question has so many layers that the true cost is nearly impossible to measure. And I don’t just mean material costs. People are burning out. Some are retiring. Others are outright quitting.
Asking career professionals to learn a new technology midway through their working lives didn’t go well for the automotive and manufacturing industries when automation arrived at scale in the 1980s. Sure, the government offered retraining programs, but that didn’t change the ground truth: one way of working was gone. A great many people never adapted to that change.
Asking someone who went to college and maybe even graduate school to learn new skills isn’t usually that much of a reach. We change careers often, using the bedrock skills we learned in universities to navigate the new demands of a job or industry. Critical thinking, social and emotional engagement, writing, close reading, active listening, etc. are all things a good college education gives you to navigate an ever-shifting economic landscape.
However, there are some hard limits when you’re talking about a technology that is being actively marketed as having actual intelligence. The generative tools we have now may serve as copilots that save us time on certain tasks, but they also cost us time to learn and relearn each time they are updated. In education, we only have so many opportunities to learn a new skill before students check out. Do we think workers in industry are any different?
I’m not saying the advent of generative tools will have the same impact on so-called white-collar professions like education that automation had on other industries. It’s still too soon to tell. What I am saying is that the notion of ‘upskilling’ or retraining human beings across industries carries more than a financial toll.
The Human Cost
Victoria Livingstone’s I Quit Teaching Because of ChatGPT is a somber read about the emotional and psychological turmoil that came from trying to figure out how to teach with a new technology. It wasn’t something she’d signed up for. None of us had.
I found myself spending many hours grading writing that I knew was generated by AI. I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as “delves into”). That is, I found myself spending more time giving feedback to AI than to my students.
I know a few colleagues who have decided to retire early because they don’t want to deal with generative AI tools like ChatGPT. “Good luck with that,” is what I usually hear instead of goodbye. That’s not what we want to hear.
Where the Money Is Going
- The tools: both AI-enabled products and AI-powered detectors
- New hires: direct and interdisciplinary positions with AI expertise
- Professional development: training existing faculty about AI
- Research resources: cloud computing and physical GPUs
- Teaching students AI literacy
In sum, it’s an unimaginably high number that few institutions can afford. Small universities, colleges, and community colleges cannot shoulder many of these expenses at all. Let’s unpack a few.
The Tools
The California State University system recently announced a partnership with OpenAI to bring ChatGPT Edu to 500,000 students and staff. This marks the largest system-wide AI deployment yet announced. At a reported $17 million, the plan amounts to each user paying less than $3 per month, roughly 86 percent off the $20-a-month ChatGPT Plus plan. Of course, OpenAI isn’t transparent about what institutions will actually pay for ChatGPT Edu. These types of deals occur behind the scenes and only come out in state budget reporting.
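The back-of-envelope math, using only the publicly reported figures and assuming a one-year term (which the per-month framing implies), is simple enough to check:

```python
# Back-of-envelope check on the reported CSU deal figures.
total_cost = 17_000_000   # reported contract value, USD
users = 500_000           # reported seat count
plus_monthly = 20         # ChatGPT Plus list price, USD per month

per_user_month = total_cost / users / 12
discount = 1 - per_user_month / plus_monthly
print(f"${per_user_month:.2f} per user per month, {discount:.0%} off Plus")
# -> $2.83 per user per month, 86% off Plus
```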
Adding to the frustration, the CSU system recently announced massive across-the-board spending cuts amounting to a staggering $375 million. How much of that loss the state expects the promise of generative AI to offset isn’t known, but education is like any other industry in one regard: if a cheaper way of producing the product appears, it gets adopted quickly. Unfortunately, that product is learning.
CSU’s partnership with OpenAI is just one example. None of the big AI companies have been transparent about what adopting institutions will pay. Some simply give users more access rather than enforcing rate limits, while others grant access in exchange for proprietary restrictions: you can only use our AI tools and no one else’s. The thing is, purchasing one AI app for an entire institution seems ridiculous considering the dizzying array of tools and use cases. Writing, reading, research, and coding come to mind. And did I forget to mention the multimodal applications for image, voice, video, music, avatars, and more? A Microsoft school isn’t going to be able to access Google’s NotebookLM, and an OpenAI school isn’t going to allow access to Anthropic’s Claude.
Hiring the People
The University of Maryland at College Park recently announced 30 interdisciplinary tenure-track lines plus an additional 10 professional-track lines in AI. This follows multi-year hiring programs at places like the University of Georgia, which has filled some 70 tenure-track lines in AI across disciplines in the past five years. Emory plans to support up to 60 hires. I imagine the major players in Texas, California, New York, and elsewhere are conducting hiring blitzes of their own.
The question is what these folks are being hired for. AI is a slippery term that encompasses a great deal. Some of these hires focus on applied AI with systems like ChatGPT and other generative tools, but many more work in machine learning and other forms of AI. Much of it feels like FOMO, higher education frightened at the prospect of being left behind.
Once again, states and institutions with resources can afford to pay, while smaller schools can barely afford new lines in existing fields, let alone something as speculative as this new wave of generative tools.
Many more schools are launching labs, institutes, and other training programs for faculty. Some of these draw on existing resources like centers for teaching excellence. Even so, the cost of training and retraining faculty just to keep up with the generative landscape is gargantuan, one most budgets will struggle to cover.
Building Research Resources
In addition to all of the above, R1s and other research-intensive universities feel the pressure to go shopping for GPUs so their researchers can build, train, and fine-tune LLMs. Computing labs, supercomputer clusters, and the personnel to run them cost a tremendous amount of money. They’re also going to become scarcer as a resource and more complicated and expensive to power.
Those who thought the pathway to powering these energy-intensive AI data centers ran through investing in futuristic small-scale nuclear reactors need a reality check. We’ve never had a nuclear plant come online on time or on budget. Not once. Meta recently had its nuclear ambitions thwarted by (checks notes) rare bees. Meanwhile, Amazon had its bid to power one of its existing data centers with nuclear energy rejected by the Federal Energy Regulatory Commission.
At this point, the thought of rapidly building cheap, small nuclear reactors is just a fantasy, one these massive companies have latched onto both as a solution to their ghastly energy consumption and as a marketing campaign to distract from usage that grows every year.
Remember, AI Adoption Is Driven by Hype, Not Strategy
So what exactly are universities purchasing when they invest in AI? More administrative oversight. Far more expensive infrastructure that only the wealthiest R1 and private universities can afford. Meanwhile, smaller universities and community colleges—already stretched thin—are left scrambling to keep pace in a race that has no finish line.
What’s so sad about all of this is that the decisions to adopt AI aren’t about improving education. They’re about optics. We don’t have the long-term research showing AI helps learning that would justify these kinds of moves. Universities simply don’t want to be seen as lagging behind. Every institution that can wants to claim it’s on the cutting edge, coherent strategy for AI integration be damned. And that’s a huge cost: untold resources funneled into AI contracts instead of actual student learning, faculty support, or meaningful research.
Generative AI doesn’t have to be sold as the future of education to be a useful tool. That sort of hype is just the tech industry’s latest hollow sales pitch, dressed in the familiar language of equity and innovation. If universities don’t start asking harder questions about AI’s real value, they’ll keep spending money they don’t have on tools their students don’t need, while real educational challenges go unresolved.
It puts me in mind of the Microsoft study, reported on by 404 Media, about the atrophying of critical thinking skills, as well as Anthropic asking job applicants not to use AI in their applications. I’ve been doing a unit on AI with my students (predominantly first-gen, working-class, and immigrant students), and they are seeing through the hype.
Perhaps the acronym AI actually means “As If”? It allows fantastic mimicry at scale, and it’s arriving in a world where mimicry is generally enough. It reminds me of what an economist once said about the Walmart business model: “Keep shopping there and you’ll probably find yourself working there.” Without a more patient and thoughtful approach to AI adoption, we’ll all live in a world where shallow pastiche is all we’ve got. I hope we can do better.