"AI is now being used by students to learn the material we want to see students show us within those proctored assessments. Learning itself, not simply assessment, is now being impacted by AI."
This is the line that has me ruminating most right now: thinking about how this impacts skills-driven learning compared to content-driven learning; how to continue to center the process over the product (and make sure the AI "learning" isn't skipping past the process to the detriment of the learner); and how the skills centered in our courses may need to evolve in the years ahead, given these technological capabilities.
Yes to all of this. It's unfortunate that most teachers are having conversations about AI as if it's still 2023, compounded by the fact that those who opt out are largely unaware of AI's current capabilities, which are significantly more advanced. If the conversation moves to safer and more familiar pedagogical ground -- the skills and habits that enable the productive use of any tool, AI included -- that may be the better focus. But using AI to learn the material itself is very real. I have had multiple conversations with teachers and students in the past few weeks, and the key takeaway is that poor professors, lousy teachers, and bad communicators are extremely vulnerable. Kids are simply going around them and using AI to learn the content. I'm not sure that's the kids' fault.
The parts about the ever-changing palette of tools, capabilities, and interfaces ring a loud bell here, as I try to prepare a fall course offering that allows full-but-responsible student use of AI. Heck, even at this late date, I have literally no idea what toolset will be available. It will be a wild ride.
I'd love to see more concrete examples -- especially across various domains and fields -- of where AI is used to foster genuine learning. I think we need lots of use cases like that, and it can be hard to translate a few examples to all the different fields a professor might be teaching in.
If I have 100+ students, it's not trivially easy to design situations that invoke deep learning for students who have AI access, and to pitch them at just the right scaffolded level, when I'm teaching in Department X or Department Y -- departments that feel so different from the few cool examples I've run into where AI use cultivates genuine learning.
I know that solid learning requires effortful, hard System 2 engagement of the brain, so I'm most interested in ways people have integrated AI to actually cause that kind of engagement. An LLM wrapper that uses system prompts and post-training to interact socially as a study-group buddy -- quizzing, tutoring, and discussing course ideas and content together? Could be cool. Using AI to solve a real-world problem like the arcade cabinet? That applies to some courses but not many others. I just need more concrete examples -- do you know of a resource like that?
Yes! I keep being told that I need to rethink education, often in stirring or catchy language (the quote about things being grown is a good example), but I have yet to encounter any actual, specific examples that I can take into my (history) classroom. I would love some practical, real-world templates. The fact that I'm not really seeing any makes me wonder whether the AI advocates have any idea of what this might look like beyond the flowery rhetoric.
I'm finding AI extremely helpful for language learning. For context, I'm in my 40s, a life-long language learner (I speak four languages that are non-native to me at a fairly high level), and last fall, I started learning Spanish. I'm mostly learning independently, though I did end up hiring an online tutor for speaking practice. And, as I said, I've been using AI.
For example, AI will break down sentences for me, and it'll explain what this or that word (say, a pronoun) is doing in a place where I wouldn't expect it. With my previous languages, I would have had to either accept not understanding what's going on in a sentence, or I would have had to wait until the next lesson in order to ask the teacher about it. Now I can get a quick answer as soon as I need it. It also means that I get to spend pretty much 100% of the time with my (human) teacher on speaking practice. Also, AI has enabled me to read content that would have been over my head otherwise. This may have some drawbacks, though. From a strict language learning perspective, I probably would be better off still reading graded readers. Ah, but graded readers tend to be kind of silly (story-wise), and it's simply a lot more rewarding (and motivating) to read an actual novel. (Speaking of which: the first novel that I read in Spanish was recommended to me by AI, as was the novel - my second - that I'm reading now!) Additionally, AI has been very helpful for proofreading my writing.
I wonder whether anything I'm doing would count as cheating in a college classroom. I'm not sure. Perhaps all the editing that AI does for me would constitute cheating. Of course, since I'm not being graded on anything, it makes no sense to speak of cheating! If I ever want a credential for my Spanish, I'm welcome to take one of those certified exams, which obviously don't allow the use of AI. In the meanwhile, I study as I see fit, and that includes pretty significant AI help.
You are right to talk about the extraordinary challenges. I'd add that some teachers have teenagers or young adult kids, and they fear those children will never find jobs, or will sink deeper into their phones and disengage from the real world even more, exacerbating mental health and financial difficulties. Not only are these teachers trying to redesign their classes, they are trying to do so around a huge source of their anxiety. It's like telling someone with a fear of heights to teach their class from a different high-rise each week.
This is a vital and deeply insightful piece! It cuts through the noise to the core human challenges AI poses to education. Your assertion that "Some Things Need to Be Grown, Not Graded, and Definitely Not Automated" resonates not just among educators but across all industries. It reveals a profound challenge for traditional systems struggling to keep up with AI's relentless, evolving nature (a point vividly echoed by Harold and Stephen).
Irena's practical example of AI for language learning is truly fascinating. It beautifully illustrates how, when technology is put in service of human needs, it can profoundly empower individual learning journeys and open new paths for personal growth.
Brian and George's calls for more tangible examples beyond grand rhetoric are crucial; they point directly to what truly matters:
Curiosity, Adaptability, Ethical Reasoning, and the capacity to collaborate and create meaningfully.
THAT's the ultimate signal.
This is precisely the profound human spirit and unique creativity that no machine can truly emulate, and the space where we must collectively build the new paths that only human imagination can forge.
Appreciative of this post!
Thanks for this piece!