Faculty also need to be ready to work with students who refuse to use AI out of environmental concerns. If you teach a non-AI-focused class but introduce a requirement to use AI for one of your assignments, have an alternate assignment ready for these students.
Way to go, Marc! So glad to see such a thoughtful critic of AI in education getting exposure in so many outlets.
My views align with yours in many ways, but one area where we diverge is how we see the lack of uniformity or standardization in the approach to AI in higher education.
I see this plurality of approaches as a benefit of the autonomy higher ed faculty enjoy. For the most part, we have enormous freedom in our classroom practices, and I think the resulting variety benefits our students. I wish K-12 teachers were given a similar level of autonomy.
The biggest problem I see is the lack of clarity many teachers give their students on what is allowed and a general lack of dialogue between students and teachers outside the power dynamics of the classroom. I'm increasingly wary of institutional policies that don't respect the range of views and practices I hear about every day.
Since we don't really know much yet about the educational value of AI, I hope we avoid constraining experiments, including experiments by those who prohibit the use of AI in their classes and by those who embrace it.
Thanks Rob, I think teacher choice matters a great deal in how faculty navigate generative AI. However, I'm increasingly seeing that choice invoked by faculty who want to ignore GenAI and not discuss it with students. At minimum, all faculty should have a clearly articulated policy on their syllabi. There's pushback even about that.
Autonomy in the classroom shouldn't be used to sideline conversations about generative AI. What Burnett wrote in the New Yorker really struck me:
"[E]veryone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. It’s time to talk about what all this means for university life, and for the humanities in particular."
Could not agree more. Autonomy comes with responsibility. That responsibility starts with talking with students in and out of class and extends to talking with colleagues and administrators about what this new cultural technology means for educational practices. Those practices can range widely, but they must be grounded in dialogue.
Lack of uniformity can result in unequal access, as Marc points out. It’s compounded by the refusal of many faculty to educate themselves about genAI, to all of our detriment. I am extremely frustrated by my colleagues’ head-in-the-sand attitude, and by the fact that I have to be the one who raises the alarm and then the one who educates them.
I’ve been sending this post out to a lot of people because I think it encapsulates what those of us in higher ed, including our IT departments, should be low-key freaking out about.
That Burnett piece you cite is absolutely the way to think about the humanities in all this, imo, and it’s the approach I’ve taken in my art history survey courses at a school that serves predominantly first-gen, immigrant, and working-class students in a highly multicultural region. The ‘work of being, not knowing’ was the bedrock of the SLAC education I received 40 years ago, and even though I feel like a lone voice in the vocational miasma we swim in, I do know it has resonance for students, at least mine. I know from Reddit that a lot of faculty are not so lucky in the students they teach.
There are other learning challenges we’re dealing with, but this one will exacerbate them, and yet we are not doing much.
"Is the assessment valid if the learning was heavily AI-assisted?" How is this different from hiring a tutor to get you through math - aside from being free? A lot of the methods - quizzing, flashcards - are the same ones learning science validates.
Still, I was alarmed by this development and really appreciate you sharing your thoughts. The equity issue deserves our immediate attention, even if the solutions are mostly long term or outside our influence.
AI assistance will certainly have its place in learning, but without a human-in-the-loop model to validate responses, I cringe to think what that means. You have to know the material to judge whether a response is accurate, so we're tacitly saying students don't need a human being, like a tutor or instructor, to validate the learning assistance offered by an LLM. That's the scary rabbit hole I go down with this tech. The whole concept behind flashcards and quizzes for learning is that a person synthesizes knowledge and develops strategies to transfer that information through memorization and retrieval. Each question or phrase is a choice, a decision, and I'm not keen on accepting that a machine can do that effectively, or, if it can, that we should embrace that level of influence.
As you've pointed out previously, these chatbots aren't designed for the purposes for which they are now being aggressively marketed. The folks at Dartmouth provided clinical training to Therabot, and the APA finds their progress very promising; general-purpose chatbots, by contrast, are mimicking scholarship. They don't have training in disciplinary standards to guide their responses, much less the desired outcomes of students' professors. I'm still seeing lots of responses that are poorly sourced, inaccurate, or just cobbled together to look like what the model thinks I want.
I don’t think this move is about equity at all. I think it is a very smart investment by the major AI players. The way I’m thinking about it is that if the LLMs improve their learning to some degree through prompt and response, why not encourage a ton of that activity from the demographic that represents our most fertile and intellectually curious minds? More importantly for the companies, as they try to move toward profitability while also dumping tons of money into their technology development stage, if they “hook” students with an alluring free year (or two months) of access, there is a high likelihood that the students who take advantage of this offer will subsequently become paid subscribers. Offering some free access lets them reach a wider audience within this demographic.
And, of course, the inequity between those students who can afford premium AI and those who can't is a real concern, particularly as even talking about things like "equity" is getting harder to get away with in today's political climate.
Thanks for sharing Marc!
I’m Harrison, an ex-fine-dining line cook. My stack, "The Secret Ingredient," adapts hit restaurant recipes (mostly NYC and L.A.) for easy home cooking.
check us out:
https://thesecretingredient.substack.com