What’s really going on with campus-wide AI adoption is a mix of virtue signaling and panic purchasing. Universities aren’t paying for AI—they’re paying for the illusion of control. Institutions are buying into the idea that if they adopt AI at scale, they can manage how students use it, integrate it seamlessly into teaching and learning, and somehow future-proof education. But the reality is much messier.
Puts me in mind of the Microsoft study reported on by 404 Media about the atrophying of critical thinking skills, as well as Anthropic asking job applicants not to use AI in their applications. I’ve been doing a unit on AI with my students (predominantly first gen, working class, and immigrant students), and they are seeing through the hype.
I totally agree, Marguerite, and thanks, Mark, it's a great piece. But on this topic of atrophying our cognitive capacities, I wrote something a few days ago. As it happens, I learned about this Microsoft paper just after I finished writing it, so I slotted that in.
But it is striking that Microsoft and Google have both highlighted this issue...
https://reclaimedsystems.substack.com/p/ai-is-degenerating-human-ecologies?r=1m8y4b
Perhaps the AI acronym might actually mean "As If"? It's allowing fantastic mimicry at scale, and it's arriving into a world where mimicry is generally enough. It reminds me of what an economist once said about the Walmart business model: "Keep shopping there and you'll probably find yourself working there." Without a more patient and thoughtful approach to AI adoption, we'll all live in a world where shallow pastiche is all we've got. I hope we can do better.
Thank you, Marc, for your excellent work — and for making it freely available here. Again and again.
I'm working with a couple of universities, running faculty learning communities about generative AI and teaching. At both places, the campus has adopted Microsoft Copilot under an institutional license. And at both places, faculty are advocating for institutional licenses for OpenAI's ChatGPT, since ChatGPT is seen as more robust than Copilot.
One issue you don't mention here is FERPA. We can't require students to use any technology that would involve the student revealing their identity to the technology vendor, unless the institution has the right kind of legal relationship with that vendor. So while it's true that students have access to all kinds of AI, a lot of faculty are uncomfortable requiring students to use AI that isn't university approved.
I think these faculty have a good point, but I'd love to hear your take on the FERPA question.
I think FERPA in higher ed, and COPPA and CIPA in K-12, were all designed under the assumption that an institution would provide users with access to these tools and that the access needed to be safe and secure. GenAI transcends at least part of the intention behind the regulations by giving free or low-cost options to virtually anyone. We found (and I'm guessing you did too) that few people used Copilot. Folks will drift toward the tool that provides the function or feature they need, and that isn't likely to be an institutional AI app.
That said, the guidance I've been offering everyone is to never disclose or upload anything personally identifying to one of these tools, and to keep all sensitive info out of them.
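To make that guidance concrete, here's a minimal sketch of what a pre-submission scrub might look like. Everything in it is hypothetical: the patterns, placeholder tokens, and sample text are illustrative only, and regexes alone will never catch all PII (names, addresses, context clues), so treat it as a starting point, not a safeguard.

```python
import re

# Hypothetical helper (illustrative only): strip common identifier shapes
# from text before pasting it into a generative AI tool. Regexes alone will
# not catch all PII -- names, addresses, and context clues slip through.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),         # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US SSN shape
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US phone shape
    (re.compile(r"\b[A-Z]\d{8}\b"), "[STUDENT_ID]"),              # hypothetical student-ID shape
]

def scrub(text: str) -> str:
    """Replace recognizable identifier patterns with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

print(scrub("Reach Jane at jane.doe@university.edu or 555-123-4567, ID A12345678."))
# -> Reach Jane at [EMAIL] or [PHONE], ID [STUDENT_ID].
#    (note that "Jane" survives -- catching names requires real PII tooling)
```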
The new wave of multimodal AI that can see you and your screen, and listen to your conversations and likely those around you, is going to challenge our concepts of privacy in ways I doubt many have thought about. FERPA needs to be retired and replaced with new regulation that actually accounts for this stuff. I sincerely doubt we'll see that happen during the next four years!
Way back when rocks hadn't yet solidified, the state in which I went to high school had a required course for graduation: Consumer Education. A lot of it was basics related to contracts, checkbooks, and tax filing. But there was room for caveat emptor and the beginnings of thinking about how consumers are often maneuvered into buying even things they do not want, let alone actually need. It wasn't much, but it is something that has, on the whole, seemingly disappeared.

Critical thinking is a skill we need more than ever in a time when we're bombarded day in and day out with messaging that may or may not be rooted in reality. This applies not only to news but to marketing. Hype. It's become a word that people associate with sales and selling. It's from hyperbole, the act of "extravagant exaggeration," as Merriam-Webster says. It's not just being sold on something. It's being sold through vastly overstating the reality. We need to be thinking about and discussing this more, not only as it relates to so-called AI but in so much more that is happening now.
If you really want to talk about "the human cost of AI," please read _Feeding the Machine_. There are tens of thousands of people like me worldwide, many with advanced degrees, who work for low wages as, effectively, "Mechanical Turks."
I read it, Valéria, last fall when it was fresh out. By the time I finished the introduction, I had conceived of an undergrad course on AI in sociological perspective. Seven weeks later, I got approval to teach "AI and Society" in fall 2025, for which I'm adopting Feeding the Machine:
https://bit.ly/CaMuGr-2024
I'm one of the humans who make A.I. run. Someday I hope I'll be able to write about my experiences.
I think this is a really insightful article, and you're spot on about the panic/FOMO response coming from higher education.
However, I would caution against such a problem-focused framing when discussing nuclear energy.
I recently posted a piece about Utilities-as-a-Service agreements that are changing the nuclear landscape for tech companies and data centers.
While your points about nuclear's shortcomings are valid to an extent, they overlook the transformative potential of Small Modular Reactors (SMRs) and innovative projects like the Surry Green Energy Center, which counter the narrative that nuclear energy is impractical for modern applications.
While traditional nuclear plants have faced delays and cost overruns, SMRs represent a paradigm shift in nuclear technology. Unlike large-scale reactors, SMRs are:
- factory-built, reducing construction time and costs
- designed for modular deployment, allowing scalability based on demand
- equipped with advanced safety features, making them more attractive for regulatory approval
For example, NuScale’s VOYGR SMR design has already received approval from the U.S. Nuclear Regulatory Commission (NRC), signaling growing confidence in this technology.
The Surry Green Energy Center directly challenges the claim that nuclear energy is unfeasible for powering AI or data centers. It plans to deploy 4-6 SMRs adjacent to the existing Surry Nuclear Power Plant to power a massive data center campus (19-30 centers). The project also utilizes green hydrogen, demonstrating the hybrid versatility of nuclear energy. While this may take 10-15 years, that's still faster than a conventional nuclear build, and the timeline could be accelerated with greater investment.
It’s also worth noting that Constellation is restarting the 835 MW Three Mile Island Unit 1 reactor, which was shut down in 2019, under a power purchase agreement with Microsoft to supply carbon-free energy for its data centers. It is currently ahead of schedule and expected to be operational by 2028 with plans to run until at least 2054.
Further signaling the future importance of SMRs, Constellation is also currently seeking an early site permit to deploy SMRs in combination with hydrogen production at its Nine Mile Point facility in New York.
If you’re interested, check out my piece:
https://strategyandsignal.substack.com/p/the-convergence-of-utilities-as-a
Great article, thank you.
One thing to note: looking at the numbers, I don't think the cost of providing AI services is getting much cheaper. All the big tech companies are still running huge year-over-year losses on AI if you count only subscription and licensing revenue against operating costs and the cost of acquiring GPUs. I think they're pushing subscriptions so hard simply to expand the user base, because they need more human minds to vacuum up to feed their insatiable plagiarism machine. The profit will come from the military contracts they're all now competing for, as well as the deals they plan to make with a new swarm of marketing and advertising companies who will monetize every neuron in every brain.
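To spell out the arithmetic behind that framing, here's a toy sketch. Every figure below is a hypothetical placeholder, not drawn from any company's actual financials; the point is only the structure of the calculation, revenue from subscriptions and licensing minus operating costs and GPU spend.

```python
# Toy model of the claim above. All figures are hypothetical placeholders,
# NOT the actual financials of any company.
subscription_revenue = 4.0e9  # annual subscription revenue, $
licensing_revenue = 1.0e9     # annual licensing-deal revenue, $
operating_costs = 7.0e9       # annual inference/serving/staff costs, $
gpu_acquisition = 5.0e9       # annualized cost of acquiring GPUs, $

net = (subscription_revenue + licensing_revenue) - (operating_costs + gpu_acquisition)
print(f"Net on AI services alone: {net / 1e9:.1f}B")  # -7.0B under these assumptions
```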
Does anyone know of examples where institutions are embracing AI well?