7 Comments
Colman Hogan

Marc - college prof concerned about writing here; I've been following your Substack for the past few months.

Do you have any comments, or know of any, on the recent MIT research concerning neuro-connectivity of LLM users?

https://arxiv.org/pdf/2506.08872v1

Marc Watkins

I'm not sure. Writing and text generation are two distinct processes, so it doesn't surprise me that one isn't as cognitively demanding as the other. But the experiment treated writing as an exercise without process (e.g., write this as an exam in one sitting). There was no revision, drafting, or feedback. When I teach writing, I'm really teaching students to embrace that process, regardless of the end product. I'd be curious to see what the implications are for the human brain when people use an LLM in parts of their process work. My guess is that if we keep the scaffolding of process work intact, it won't have much impact. But that's just speculation on my part. I also wonder what happens if students start offloading the process entirely to an LLM. Though, in my own observations, that has yet to happen.

jwr

Hi Colman, I'm not Marc, but I'm also a college writing prof, and I think this is a very good question. I've read the study you're referring to, and its findings tend to reinforce the concerns that I've had about AI in education.

That being said, I do think it's important to consider that it's only one study, it's still in preprint, and the experimental design involves a somewhat artificial situation. (Of necessity, to be able to do the brain scans involved in the study.)

I'm not at all an expert in this kind of cognitive science, but here's one point that stands out to me: the participants in the study were divided into three groups, one that was permitted to use ChatGPT and only ChatGPT to assist with an essay writing task, one that was permitted to use websites but no LLMs, and one that was permitted to use neither. But while these conditions were in place for the testing sessions (three or four sessions over three or four months), they don't appear to have been in place between testing sessions. (Unless I missed something, which is entirely possible.)

In other words, it seems conceivable that there were members of the "ChatGPT group" who never used AI outside of the testing sessions, and members of the "brain only" group who used it all the time. If that was the case, then the study may have been better designed to measure the acute effects of using or not using AI than the effects over time, which would arguably be the greater concern from an educational point of view.

Again, speaking as a non-expert here.

Will Granger

I've said it before, and I'll say it again: anything that by definition encourages students to think less cannot be good for them. And spare me the lame argument that everybody's doing it, so...

The concept of a liberal arts education, as far back as Ancient Greece, was about developing the mind, the whole person. It was never on-the-job training for a specific career.

It was funny and predictable this week watching the AI worshippers try to explain away the latest data showing how AI harms cognitive processes.

Stephen Fitzpatrick

Marc - one thing I'm noticing in the discourse - and I can see this in the evolution of your thinking since 2023 - is a shift from the possibilities of AI in education to a focus on the limitations of its use in education. Is that mostly because of the speed of the technology, the way it's mostly being used by students, the lack of training, or something else? What's changed? I do think this was a tipping point year - are we going to see a return to trying to ban AI in classrooms in the fall? I agree that it's unlikely to work, but it just seems to me there is a collision course between the corporate/private use of AI and academia's response. What do you make of academics who are still championing the technology? Are they becoming outliers, and if so, why? What is the response to students who say this is now a part of their daily existence - not because they asked for it, but because it was given to them - and that to pretend otherwise is simply unrealistic? Is the goal going forward to convince students not to use AI? To me there is a disconnect between how AI is discussed within education and how it's discussed in the wider culture, especially in the business and corporate sector.

Marc Watkins

No, bans aren't practical or even possible at this stage. I think the goal going forward should be to advocate for faculty and students to be aware of when they're using AI. That concerns me the most, because the thoughtful, critical engagement with AI that I've been advising faculty to bring to their students is becoming equally fraught with challenges. Simply put, generative AI is now active in most software. Many vendors on campus now offer AI-assisted . . . search, reading, transcription, writing, coding, etc., all in existing apps outside of ChatGPT. This makes it really difficult to ask students to intentionally engage with AI when it is a quiet feature built into an application they may have used for years. Faculty struggle just to keep up with all the integrations.

Stephen Fitzpatrick

I'm just noticing a hostility to AI that I hadn't seen before - a visceral hatred of the technology that borders on "AI derangement." I get why, to an extent, but that does not bode well for many faculty accepting advice that they need to be "AI aware." I feel like for more and more academics - or maybe it's mostly the ones who write online! - their minds are made up. I just see more of the same cat-and-mouse game continuing.
