I've said it before, and I'll say it again. Anything that by definition encourages students to think less cannot be good for them. And save the lame arguments that everybody's doing it, so...
The concept of a liberal arts education, as far back as Ancient Greece, was about developing the mind, the whole person. It was never on-the-job training for a specific career.
It was funny and predictable this week watching the AI worshippers try to explain away the latest data showing how AI harms cognitive processes.
Marc - one thing I'm noticing in the discourse - and I can see this in the evolution of your thinking since 2023 - is a shift from the possibilities of AI in education to a focus on the limitations of its use in education. Is that mostly because of the speed of the technology, the way it's mostly being used by students, the lack of training, or something else? What's changed?

I do think this was a tipping point year - are we going to see a return to trying to ban AI in classrooms in the fall? I agree that it's unlikely to work, but it seems to me there is a collision course between the corporate and private use of AI and academia's response. What do you make of academics who are still championing the technology? Are they becoming outliers, and if so, why?

What is the response to students who say this is now a part of their daily existence - not because they asked for it, but because it was given to them - and that to pretend otherwise is simply unrealistic? Is the goal going forward to convince students not to use AI? There is a disconnect, to me, between how AI is discussed within education and how it's discussed in the wider culture, especially in the business and corporate sector.
No, bans aren't practical or even possible at this stage. I think the goal going forward should be to advocate for faculty and students to be aware when they're using AI. That concerns me the most, because the thoughtful, critical engagement with AI that I've been advising faculty to model with students is becoming equally fraught with challenges. Simply put, generative AI is now active in most software. Many vendors on campus now offer AI-assisted search, reading, transcription, writing, coding, etc., all in existing apps outside of ChatGPT. This makes it really difficult to ask students to intentionally engage with AI when it is a quiet feature built into an application they may have used for years. Faculty struggle just keeping up with all the integrations.
I'm just noticing a hostility to AI that I hadn't seen before. A visceral hatred of the technology that borders on "AI derangement." I get why, to an extent, but that does not bode well for many faculty accepting advice that they need to be "AI aware." I feel like more and more academics - or maybe it's mostly the ones who write online! - have their minds made up. I just see more of the same cat-and-mouse game continuing.
Marc - College prof concerned about writing here; have been following your substack for the past few months.
Do you have any comments, or know of any, on the recent MIT research concerning neuro-connectivity of LLM users?
https://arxiv.org/pdf/2506.08872v1
I'm not sure. Writing and text generation are two distinctly different processes, so it doesn't surprise me that one isn't as cognitively demanding as the other. But the experiment treated writing as an exercise without process (e.g., write this as an exam in one sitting). There was no revision, drafting, or feedback. When I teach writing, I'm really teaching students to embrace that process, regardless of the end product. I'd be curious to see what the implications are for the human brain when an LLM is used in parts of that process work. My guess is that if we keep the scaffolding of process work intact, it won't have much impact. But that's just speculation on my part. I also wonder what happens if students start offloading the process entirely to an LLM. Though, in my own observations, that has yet to happen.
Hi Colman, I'm not Marc, but I'm also a college writing prof, and I think this is a very good question. I've read the study you're referring to, and its findings tend to reinforce the concerns that I've had about AI in education.
That being said, I do think it's important to consider that it's only one study, it's still in preprint, and the experimental design involves a somewhat artificial situation. (Of necessity, to be able to do the brain scans involved in the study.)
I'm not at all an expert in this kind of cognitive science, but here's one point that stands out to me: the participants in the study were divided into three groups, one that was permitted to use ChatGPT and only ChatGPT to assist with an essay writing task, one that was permitted to use websites but no LLMs, and one that was permitted to use neither. But while these conditions were in place for the testing sessions (three or four sessions over three or four months), they don't appear to have been in place between testing sessions. (Unless I missed something, which is entirely possible.)
In other words, it seems conceivable that there were members of the "ChatGPT group" who never used AI outside of the testing sessions, and members of the "brain only" group who used it all the time. If that was the case, then the study may have been better designed to measure the acute effects of using or not using AI than the effects over time, which would arguably be the greater concern from an educational point of view.
Again, speaking as a non-expert here.
This is the first time I've read anything you've written, Marc. I'm looking at this from the perspective of a high school English teacher. My stance on AI for students seesaws like nothing else in my life. I'm sorry if you've covered this elsewhere, but I have a few questions. What age do you think students should start using AI to supplement their learning? I got the impression that your audience is geared more for higher education. Does your stance change at all for the high school level?
You seem opposed to banning AI in schools and cite a number of other behaviors we have tried to police with policy in the past. I'm young, so I can't speak seriously to all of the examples, but I have seen cell phone bans in high schools during my own career. I know kids still use their cell phones during the school day when they're not supposed to, but the bans certainly changed their behavior during class.
I am sympathetic toward the argument that we need to teach kids how to use AI, yet you point out my reservation wonderfully in your writing: AI is changing too fast to implement in our curricula in meaningful ways. There are some "meta" rules we can teach kids, but I think we all know how good kids are at following rules. I don't think that's a good enough safeguard to make sure they use the technology responsibly. My English department is currently championing a return to pen, pencil, and paper (which is a whole other can of worms). Why shouldn't we?
Thanks for whatever thoughts you can spare. I really appreciated your perspective.
Not Marc here, but I am a former teacher who is now a private tutor for high school and postsecondary students. In my region, no form of AI is allowed whatsoever, even for research. Does that make any sense? No. There is a return to pen and paper here in the classroom. Here are a few of my thoughts from tutoring my students over the past year.
First, completed work is not edited beyond a draft version. Why? There isn't sufficient time. Second, as work previously assigned for homework is now done in class, there is less teaching time, which means less learned knowledge in a subject.
Third, there is no preventing a smart student from reading about the topic, going home, using AI to write up something, memorizing it, and then regurgitating that with pen and paper in class. In some sense, isn't that what tests basically are?
So, how do we tackle AI in learning? Like you, my stance wavers. I feel like a split personality: I will teach a student how to use AI and then worry that I've destroyed their learning! There is a responsible way to use AI, and those who want to learn and to be successful in their lives will do it. Those who are not interested in learning won't use it responsibly. This is a quandary, to be sure.
It is a difficult time to be an educator today.
During the summer weeks while I am relaxing, the pace of AI releases will continue unabated, and so will their potential to negatively impact what I would call authentic learning.
Even while I spend more time than Marc suggests learning AI, I feel like I'm swimming as fast as I can, yet all I manage to do is stay afloat in one spot.
Generally good advice, and I'd love to know your thoughts on a 'core curriculum' for a faculty AI training program. As a librarian I particularly appreciate the reading recommendations!
I would suggest faculty limit their AI explorations to tools they actually think would be useful in their teaching and research, and evaluate the privacy and ethical issues before investing a lot of time in them. For example, I like NotebookLM because I control the sources and I know I have a fair use right to work with them; other tools access sources that are behind paywalls, and I have no such right to that content. Semantic Scholar has secured partnerships with its content providers, whereas other companies refuse to reveal where they got their content.
This has proven useful: https://sr.ithaka.org/our-work/generative-ai-product-tracker/ "Ithaka S+R’s Generative AI Product Tracker lists generative AI products that are either marketed specifically towards postsecondary faculty or students or appear to be actively in use by postsecondary faculty or students for teaching, learning, or research activities."
Your sharing is of great value.
I recently shared an article about how AI is revolutionizing homework in 2025. You can read it and share it with your friends.
https://iubfun.substack.com/p/how-ai-is-revolutionizing-homework