If you teach on a college campus, you likely have access to a slew of generative AI tools and features that have been quietly embedded in applications you use each day. Generative AI is active in your learning management system, in databases used for research, in accessibility tools for students, and in at least one, if not more, major frontier LLM.
There is a narrative that’s taken root in the popular press that universities are rushing to purchase generative AI and bring it on campus, but outside of a few outliers, like the California State University system, few institutions or systems have actually purchased a specific generative AI application. And yet, across nearly every campus, there are dozens of applications that have quietly integrated generative AI during the past two years. You don’t hear that story because nearly all of these AI features came through system updates under existing contracts. AI arrived via free upgrade, not through intentional purchases. I know that isn’t a very sexy story, but it is one that should be analyzed because the risks are many and easy to ignore.
The story of how generative AI arrived on college campuses is far more complicated and troubling than the purchase of a single app, like ChatGPT. Updates to existing contracts don’t require a clearly defined decision-making process or go through shared governance. That means faculty, students, and even administrators may not have had a voice or agency in the decision to integrate AI into existing systems.
I think it’s worth spending some time addressing this because the risks of not talking about the AI already present on our campuses are very real. I do not write this in a vacuum, but as someone who just spent two days with university presidents and provosts at EAB’s Presidential Experience Lab.
AI Arrived On Campus Quietly, One Update At A Time
When Katie Conrad wrote the thoughtful A Blueprint for an AI Bill of Rights for Education in 2023, many of us believed universities would be intentionally buying AI-enabled applications for teaching, research, and operations. Thus, the idea behind an AI Bill of Rights was to ensure all stakeholders had a say in the shared governance of what AI tools their institutions purchased, with an eye on ethics, sustainability, privacy, data, and copyright.
Instead, the vendors colleges already use bypassed that process by providing AI for free. The story that most of the press and public gets wrong about generative AI on college campuses is one of cost. The vast majority of institutions cannot afford to pay for generative AI tools, nor do they have the personnel to understand what ‘AI’ means for teaching and learning, research, service, or operations. Even the vendors upgrading their applications are moving so quickly to deploy various AI tools that they struggle to articulate what the technology does and how it treats student and faculty data. It’s not uncommon to hear a vendor call their AI feature an AI Agent when in fact it is just a simple automation: 99% existing technology with a bit of new generative AI thrown in.
A Timeline of How Generative AI Arrived on Campus
2023—Microsoft releases Bing Chat Enterprise (later renaming it to Copilot, while also naming their integrated suite of tools . . . Copilot). This version is given freely to campuses with Microsoft email accounts.
2023—Blackboard integrates AI into Ultra courses. There are no opt-outs or mechanisms to switch it off. Like many of these AI features, they simply appear via an update.
2024—Canvas integrates AI and rolls it out for free to all institutions.
2024—D2L does the same, even launching ‘agents’ within the LMS.
2024—Google unveils Gemini for education and gives institutions that have Gmail accounts access to its Gemini suite. This most recently included free access to NotebookLM under existing contracts (and yes, even the 100% free student email tier gets full access to Gemini).
2024—Dozens of tools that use adaptive and assistive technology for accessibility are upgraded using generative AI to assist with reading, transcription, and vision. Again, it isn’t something universities (or students) necessarily asked for—it simply arrives.
2024—Many vendors providing database access for research to campus libraries (you guessed it) upgrade their search tools with generative AI.
2025—Most of the systems providing operations for institutions have AI features integrated within them. Multiple vendors subcontracting for operations do the same.
AI’s arrival into existing university software happened in bits and pieces and went largely unnoticed by many. After all, who spends time looking at individual system updates? Outside of big tools like Gemini or Copilot, few of these updates required IT to enable them. You simply woke up one day to an email announcing the app you’d used for years now had a slew of AI features, without discussion, warning, or any sort of training.
These updates aren’t static. They follow developments and feature launches from frontier models like ChatGPT or Gemini. You might spend time figuring out a newly launched AI feature within a university-provided system, only to find it deprecated in favor of a new one or outclassed by another. That’s what makes adopting AI completely impractical. You cannot adopt a tool or technology that changes this rapidly. The most you can ask of yourself is to try to be aware of what an AI update does and see whether you need to adapt to it.
The Evolving Risks Few Are Talking About
The lack of awareness about existing AI-enabled applications on our campuses poses a number of challenges to security. Few people are talking about it or preparing for even more advanced features that are sure to arrive along the same pathway.
Charles Bassett’s excellent S.E.C.U.R.E. A GenAI Use Framework for Staff is one of the most robust examples of the type of thinking we should model when working with the AI tools we already have on our campuses. It thoughtfully walks through a risk-observant mindset.
The challenge with any AI risk framework is that it must evolve alongside generative AI. Among the emerging use cases for many AI tools are companionship and therapy, driven by newly launched multimodal features like ChatGPT’s recently upgraded voice mode. Students, faculty, and staff are going to bring their own devices with personal AI and talk with it. While many of us are aware we should be mindful about uploading sensitive documents to AI systems, talking to a bot like it is a person and habitually revealing personal information to it is an extraordinary security risk when you deal with sensitive data. Our words are now prompts, our conversations become data, and the potential FERPA and HIPAA violations that may come from talking about someone with something are not being discussed enough.
Another emerging risk comes from multimodal AI vision, which gives an AI system access to your computer screen or camera to ‘see’ your workspace or interact with you. You may have data from a secure cloud open in one tab while working with an AI vision model on a separate project in another, and one mistaken click on the secure tab makes private information visible to the model as a prompt. The same could happen while using a vision model and absently checking an email from a student discussing a grade or health issue—both potential violations of student data protection. Let that sink in for a moment and consider the mindfulness required of anyone using a multimodal AI system.
The vendors offer most of these AI-enabled features by purchasing API access to one of the big four providers, but there’s no reason to think we won’t see many of them move away from marquee providers like Google, OpenAI, Microsoft, or Anthropic because of price. There are thousands of companies on the market today offering access to hundreds of generative models. Many of those models are open source and don’t come with the same level of security or safety as the big AI developers provide. Without transparency from vendors about which AI models they run or what contracts they have in place, institutions find themselves in increasingly perilous (and frankly powerless) situations.
The risks with the foundation models are likewise evolving. OpenAI recently released a powerful Codex coding tool, and Sam Altman included a clear warning that an AI tool that can write and deploy malicious code needs to be used with caution. OpenAI also faced a judge’s order in the New York Times lawsuit requiring it to preserve user chats rather than automatically deleting them. This is another potential security issue that could lead to a massive breach of consumer data and erode public trust in AI.
We also won’t be seeing any sort of regulation or security from the federal government. NIST’s AI Safety Consortium was rebranded by the Trump administration as the Center for AI Standards and Innovation, dropping any real sense of safety or regulatory intent.
Campus AI Risk Hiding in Plain Sight
I want to tell you that the solution lies in frameworks like S.E.C.U.R.E. or a new social contract around AI in education like Katie Conrad’s Blueprint for an AI Bill of Rights, but those seem increasingly impossible given our circumstances. AI is accelerating, and we don’t even have basic AI literacy for our faculty or professional staff, let alone for students. I’m increasingly being asked to advise people on practical pathways for navigating AI in education, and I am beginning to suspect there might not be one that doesn’t involve spending millions on endless cycles of training just to keep up.
We have to see the challenge for what it is now. We don’t need to purchase more generative AI applications and wring our hands over the consequences or what-ifs—we are already dealing with a labyrinth of generative AI applications active on our campuses, with no clear solution for mitigating the risks these tools pose. Someone has to raise this with campus leadership, IT departments, campus libraries, and faculty senates, and start asking what generative AI tools are already enabled on campus, what the risks are, and whether anyone has plans for training faculty and staff to be aware of them. The danger of AI arriving via update is that it goes unnoticed and undiscussed, and, like a lobster in a pot, we may not realize the water was boiling long before our campuses intentionally purchased a single ‘AI’ tool.
This is an excellent analysis and recognition of our current state of AI integration. I just came from giving a talk at a CSU, and my sense is that faculty didn't feel included in these decisions at all, plus they came mid-semester. It's hard to retain agency over AI choices when apps are so aggressively integrating AI without warning or options to switch it off.
Great piece, Marc. For those who are interested in reading the Blueprint for an AI Bill of Rights for Education but can’t get past the paywall, I have reproduced it here for reading: https://open.substack.com/pub/kconrad/p/a-blueprint-for-an-ai-bill-of-rights?r=97c7a&utm_medium=ios
If I could edit it now, two years later, I would likely keep the same principles and perhaps add a simple right of refusal in both sections, including a right for educators to opt out of tools of data capture. Even that, of course, is now attenuated by a number of factors, including the broad usage of “devices” (rather than appropriately narrow and/or air-gapped tools) in IEPs and accommodations. That is not to say that one could not or should not still strategically use tech tools for those who need accommodations, but the ability for students and teachers to control, for instance, those data has been profoundly undermined.
Ultimately, the document can still be useful in a conversation with administrators: ask why a given right is no longer feasible and who was responsible for making it so. The same can be said of the Bill of Rights the White House put out, on which this was modeled: within months of its release, the White House had dropped it as a talking point except when it wanted to pat itself on the back, and instead moved to bring in contracts with OpenAI. And this was the Biden administration, not the Trump administration that has clearly gone all in on a tech-first approach. Had they protected those principles from the beginning with actual regulation, we would not be where we are today. Frightening how quickly the industry has moved to undermine what at the time seemed like achievable rights.