OpenAI's Education Pitch Has a Free Version Problem
OpenAI wants universities to invest in educational AI at scale, but students already live inside a free AI ecosystem that universities cannot control. That was the unresolved problem hanging over OpenAI’s 2026 Education Summit, held at the company’s San Francisco office on March 5th. The event drew 100+ university leaders, along with a few faculty, like me, who hold joint teaching and administrative roles. There was an intense focus on adoption and how to scale AI across a campus, but more interestingly, we got to see how companies like OpenAI are starting to respond to criticism about how ChatGPT impacts higher education. Representatives from the company talked about economic impact, potential learning loss from overuse, and how they plan to train students and validate what students learn with ChatGPT.
Unfortunately, there was no roadmap or vision for shifting students from the free version of ChatGPT to the educational version of the tool, and there is an undeniable tension there that cannot be ignored. Students are using the free version of ChatGPT to study and learn, but they are also using it to cheat, grow dependent on easy answers, erode their critical thinking skills, and run headlong into sycophantic responses. Before universities go all in and purchase expensive educational licenses for ChatGPT, I think OpenAI and other AI developers need to establish more meaningful guardrails in the free and personal versions of their tools; otherwise, students have little reason to leave those plans behind.
That’s no easy task. How OpenAI plans to bridge the gap between the free version of its product and the educational tool it is attempting to package and sell to universities is fraught with competing priorities. ChatGPT remains a general-purpose tool, not one purpose-built for education. Introducing some level of friction in the educational version of ChatGPT to ensure students learn isn’t going to work if students can simply turn to a free version that is optimized for efficiency.
There also wasn’t any deep conversation about how quickly the landscape has shifted, from teaching students to use AI iteratively to agents that increasingly automate tasks outright. Free, ubiquitous AI chatbots and agents have broken the traditional signals assessment relies on to show how students learn, and even companies like OpenAI are struggling to articulate what learning will look like a year from now.
Recapping a live event comes with its own challenges. I’ve attempted to organize what OpenAI addressed at the summit, but I want to lead off with the questions I had going in and those that arose throughout.
How much ChatGPT use is healthy and productive for a college student’s learning without deskilling them of the vital critical thinking and ethical decision-making skills they need?
How can universities secure ChatGPT Edu against students using free AI tools, or newer agentic AI like Perplexity’s Comet browser, to automate their use of the licensed product? I was taken aback by how little emphasis was placed on agentic AI and its implications for automating education.
Why should universities invest in AI tools if they are contributing to economic job loss or making it more challenging for our graduates to find jobs? OpenAI’s Chief Economist, Ronnie Chatterji, has an affable approach when talking about the economic challenges AI poses to college graduates, but the message was more about downplaying fears than addressing the uncertain reality many recent graduates face.
We know what massive deployments look like at major university systems, but what does deployment look like at a small university or community college without those resources?
How are institutions supposed to address the endless onslaught of AI developments that call for constant training and retraining of faculty? There’s a massive human cost to simply dealing with the free AI tools as they exist today. How should schools plan on adding to that cost by adopting educational licenses for more AI tools?
If universities are supposed to view AI as infrastructure and invest in it strategically, then where is the measurable ROI? Is it better educational outcomes for students, reduced faculty labor, increased research productivity, lower operating costs, or all of the above? Where is the evidence that scaled AI adoption is having those impacts?
How does a company like OpenAI expect universities to navigate the myriad ethical issues that arise from the adoption of its tools? We are no longer talking simply about copyright, environmental concerns, or cheating. Large AI companies like OpenAI now have to address militarization, surveillance, privacy, and the mental health impacts caused by AI.
To OpenAI’s credit, they are at least talking about some of these issues. However, it’s clear that no easy solution is on the horizon. What’s really worrisome is that all of these conversations and potential solutions trail the deployments themselves. There’s a massive gulf between the company’s freemium strategy and its education strategy, and it isn’t clear to me why universities should front the cost of AI when students are already the primary users (and products) of free AI tools.
OpenAI’s Learning Outcomes Measurement Suite
The most interesting and provocative session was James Donovan’s work on a new series of classification models that might one day gauge how students learn with AI before an assessment, during it, and whether they carry those learning gains with them afterwards. That is impressive, and arguably something we have needed since the launch of ChatGPT in 2022. It may also be a step beyond what the technology can actually do in this highly charged landscape.
The challenges of tracking how students use AI to learn are immense, and Donovan didn’t shy away from them. He held one of the more interactive Q&As I’ve seen, with questions ranging from the technical to the deeply philosophical about what learning means right now. Clearly, there are researchers at OpenAI seriously examining how ChatGPT is impacting student learning, but there are limits to what they can gauge.
A suite of classifiers would work best within a population of students that uses a single version of AI, like ChatGPT Edu. Unfortunately, there’s really no way anyone can keep an 18-year-old from switching to a different paid or free AI tool. A student might use Claude for one part of an assignment, move to ChatGPT Edu for another, then turn to Grammarly’s AI agents for a third. They might also use no AI at all, use it sporadically for school work but heavily in their personal life, or use it only for interactions outside of school work. The point is, it may be all but impossible to isolate how AI is impacting core skills because there isn’t a way to isolate its usage.
Perhaps most sobering of all, there wasn’t any mention of students using agentic AI to log into an educational AI tool and have the agent automate those interactions. We now have to face the fact that students can automate those iterative interactions with a variety of AI-enabled browsers. How does anyone measure learning when they cannot identify whether a student or an AI agent is driving the tool? Once again, AI developments are outpacing the very mechanisms research teams have created to gauge how students use them to learn.
The Number of Users of ChatGPT is Massive
To understand why the free ecosystem is so hard to displace, we have to look at the sheer gravity of OpenAI’s current user base. According to Leah Belsky, OpenAI’s GM for Education, ChatGPT’s weekly users now stand at 900 million, about 100 million more than reports indicated in April of 2025. Adding 100 million users in about a year is impressive, but consider the longer arc: OpenAI reported around 250 million weekly users in 2024, and in the space of just a few months, from the end of 2024 to the beginning of 2025, ChatGPT rose from 250 million to 800 million weekly users, largely on the strength of its image generator and its expansion into foreign markets. ChatGPT’s growth is slowing, but that doesn’t mean it has plateaued; it’s just not clear to me where the room for further growth is when the marketplace is saturated with competing AI apps.
Over 40% of those weekly users are under the age of 24, which is to say most are likely K-12 or college students. Students remain the so-called “power users” of ChatGPT, turning to it multiple times per week, and over 20% of those weekly chats are related to learning. How OpenAI, or any AI developer, defines learning isn’t transparent. I would have liked to know how much of that is “write my essay for me” versus “help me brainstorm potential topics, explore counterarguments, or get feedback on my draft.”
It’s ChatGPT for Everything
OpenAI’s solution for training students to use ChatGPT effectively and responsibly is to have them use ChatGPT. In keeping with the company’s strategy of keeping users on the chatbot interface, we were shown a demo of various certification courses that students and faculty could take directly through ChatGPT. It’s a vertical integration strategy that prioritizes the chat interface, and it sends an awkward message: the way to learn to use ChatGPT responsibly is to have the AI train you.
Many of ChatGPT’s nested features sit under the + button, and I wonder how many students actually use them or even know they exist. You need more than a surface-level understanding of the interface to grasp what these features can do, and many of the demos underscored this. We watched representatives use features that a novice ChatGPT user would find dizzying, and at points the presenters moved between features so quickly that it was difficult for me to follow.
I think it’s reasonable to ask whether the way OpenAI expects students to interact with these features places too much demand on their cognitive load. For some faculty on the wrong side of 40 (like me!), I know that is a real challenge. So is trying to take students past the surface level of ChatGPT into Deep Research, customized GPTs, Codex, or the newly announced Prism interface. Each of these will take quite a bit of time and structured learning for an 18-year-old student to use effectively, largely because they don’t yet have the skills to do so. So why rush them?
We Need More Focus on Concrete Use Cases
What I really wanted to see were concrete use cases. Many of the demoed examples were designed to be visually pleasing: here’s a heat map of data a student might vibe-code using Codex, here’s a color-coded syllabus a professor might generate. But these demos were more about the feature than real-world use cases, and they weren’t entirely accessible for students either.
Here’s one I wish I’d seen. Many educators are using AI to ensure their materials are accessible under the newly launched Title II requirements. In fact, using AI to audit a course or its assignment materials for accessibility is one of the best faculty use cases I’ve seen. I had the privilege of sitting next to Liza Long at the summit, who just wrote about how she set up agentic workflows to automate accessibility checks. Liza was on her laptop doing just that during the summit!
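To make that concrete, here is a minimal sketch of the kind of check such a workflow might automate. This is my own illustration, not Liza Long’s actual setup: it assumes course materials exported as HTML files in a local folder (the folder name below is hypothetical) and flags only a few of the easiest issues, such as missing image alt text, vague link text, and skipped heading levels. A real Title II audit covers far more of WCAG and would route ambiguous cases to an AI or human reviewer.

```python
# Rough sketch of an automated accessibility pass over exported course pages.
# Assumptions: pages live as .html files under "course_export" (hypothetical),
# and we only flag low-hanging fruit, not full WCAG conformance.
from pathlib import Path
from bs4 import BeautifulSoup  # pip install beautifulsoup4

VAGUE_LINKS = {"click here", "here", "read more", "link"}

def audit_page(path: Path) -> list[str]:
    soup = BeautifulSoup(path.read_text(encoding="utf-8"), "html.parser")
    issues = []

    # Images need descriptive alt text for screen readers.
    for img in soup.find_all("img"):
        if not (img.get("alt") or "").strip():
            issues.append(f"{path.name}: image '{img.get('src')}' has no alt text")

    # Link text should make sense out of context.
    for a in soup.find_all("a"):
        if a.get_text(strip=True).lower() in VAGUE_LINKS:
            issues.append(f"{path.name}: vague link text '{a.get_text(strip=True)}'")

    # Heading levels shouldn't skip (e.g., h2 to h4), which breaks the outline.
    levels = [int(h.name[1]) for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    for prev, cur in zip(levels, levels[1:]):
        if cur - prev > 1:
            issues.append(f"{path.name}: heading level jumps from h{prev} to h{cur}")

    return issues

if __name__ == "__main__":
    for page in Path("course_export").glob("**/*.html"):
        for issue in audit_page(page):
            print(issue)
```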
The ROI for using AI here is massive. I cannot imagine how much money universities will have to spend on training faculty to meet the new Title II requirements and on remediating content so that it is fully accessible under the newly enacted federal guidelines.
But that wasn’t something discussed. Instead, we saw more tools, more features, more deployments. I genuinely think OpenAI could have slowed down, stopped parading new features, and spent more time on concrete use cases, like using AI for accessibility, that the educational leaders in the room could take back to their campuses and tout. It was a missed opportunity that would have resonated with this particular audience.
AI Transcends Traditional Systems Thinking
Professor Anne Trefethen from the University of Oxford joined Kyle Bowman from ASU to discuss how AI adoption has played out on their campuses. What I found insightful about Trefethen was her acknowledgment of the long-term challenges AI adoption poses for a university. She noted that access shifted the conversation from provocative questions about the role faculty and the institution play in our AI era to faculty asking for more resources to help them navigate this landscape.
Many faculty are now using these tools and are wary of the impact this is having on their own skills. I’m one of them. I can acknowledge that these tools make me more productive in certain areas while also being critical of their impact on my sense of self. That’s something many people wrestle with personally, but we’ve seen little that accounts for system-wide disruption with much larger impacts. Most of the proposed solutions latch onto traditional methods of training, credentialing, or validating learning. AI now transcends much of that, so why are we still thinking this way?
A student can easily use an AI agent to take an online course, including the very courses AI developers envision a student taking to ensure they use AI responsibly.
Prism provides citations automatically while scientists draft their research, cross-checking what others have said and using AI to judge whether a discovery is novel or how it contributes to ongoing conversations, leading many to question what the point of a literature review is if AI can do it for you.
Many employers view AI automation as a workforce replacement for junior-level positions, so why should faculty teach students AI skills if some industries no longer see the need to hire those graduates?
Faculty will use various AI tools to gauge how students learn with and without AI, but unless this is done in secure environments, there’s no way to accurately tell whether the work was done by a student or a machine.
Tools like Codex or Cowork let users create things through vibe-coding, but what mechanism should we use to evaluate the output beyond... vibes? Once again, faculty are left asking what the purpose of scaffolded learning is if AI now automates much of that process.
If students, faculty, staff, and admin are all bringing their own agents to campus to automate their workflows, then what does that mean for security and privacy? There’s no realistic way universities can use policy to combat this.
A maxim for our generative era is that if a task can be automated, it will be, so what now? Purpose and agency are what I go back to again and again when I talk to students about how they use AI: I emphasize that they should be aware of how they use these tools and how these tools are used on them. AI is moving too quickly, and our existing systems both host it and allow for its presence throughout our lives. We aren’t meant to pause and question algorithmic decisions, but we can and should prioritize human reasoning above the machine.