Time is one of the luxuries every teacher searches for within their classrooms. In higher education, most faculty don’t just plan around teaching a concept—they shape their lecture or classroom activities around a clock. When I teach in person, I get to see my students two and a half hours a week. Hybrid halves that time. Online obliterates it. That doesn’t account for student illness, class cancellations for meetings, conferences, or other university bureaucracy. Most faculty thus pack their classes full of material, often leaving a few moments for students to ask questions before they pack up their laptops and trudge to the next course, where the process repeats itself three, four, sometimes five times a day. One question that consistently emerges from talking with faculty about generative AI is how they’re supposed to magically deal with it given the time constraints of teaching.
Two Emerging AI Frameworks
I don’t think anyone agrees on what terminology we should be using to discuss generative AI in education, but I’ll take a stab at defining the two frameworks I see emerging more broadly.
Critical AI Literacy:
focuses on understanding and analyzing AI systems broadly (not just generative AI). An AI literacy approach emphasizes how the technology works with the goal of demystifying AI, while acknowledging and evaluating AI’s broader societal implications and consequences. I understand what this technology is, how it works, and what it means for our world.
AI Fluency:
focuses on practical, responsible AI usage. Fluency advocates for users to understand their role and obligations and emphasizes safety and accountability when they use generative AI. I can use this technology responsibly and effectively in my work and take ownership of the end result.
And it is in this landscape that faculty are increasingly expected to (1) adopt AI fluency frameworks and integrate generative tools within their discipline, or (2) practice some form of AI refusal by critically examining the cultural and societal impact of AI, while limiting the technology’s impact on student learning. Neither framework addresses the time needed to implement it, and that must change, because we’re entering a period where our culture and education will call for both. I believe most students should be critically analyzing AI and practicing responsible usage at different points throughout their college experience. Neither means adopting AI or using it as a principled means for learning.
Few Talk About Time in AI Frameworks
You may be faced with a university initiative or vision statement to integrate AI across the curriculum, or have an impassioned colleague take a stance about not using the technology based on strong moral and ethical objections. Both positions will require a considerable amount of material resources to implement; otherwise, a vision statement is just hype, and an opposition statement is just virtue signaling. I realize that might offend folks on both sides of this increasingly polarized AI-divided landscape. That’s not my intention. Rather, I’d ask each position a specific question: How much time should an educator spend on AI fluency or critical AI literacy?
That’s not an easy question to answer. Like most tough questions, it tends to spawn a series of thorny subquestions that branch off into their own worlds: What assessment practices should I change? What assignments should I drop from my crowded curriculum to accommodate either? How am I supposed to find the time or resources to simply keep up to date about AI to understand its potential impact on my teaching?
Time dictates the discourse around generative AI in education. So much so that many folks I’ve spoken with have told me what is being asked of them in AI fluency and critical AI literacy frameworks is so far beyond what they deem to be reasonable or actionable on their part. I also believe that most people who understand generative AI in education see elements of critical AI literacy and AI fluency as something students will need to use in their future lives. I don’t think this is a centrist position. To me, it acknowledges our moment.
As Annette Vee shared in “On AI Aware Courses”:
To ask these brilliant teachers to integrate AI into their classes is ridiculous. I don’t think they should have to spend weeks on AI literacy exercises or encourage students to explore AI writing techniques. But their world of texts, writing, and meaning is impacted by AI, just as mine is.
I don’t think that all of our courses need to engage with AI. But all courses need to consider AI’s now ubiquitous presence in our writing and reading environments. Consideration of AI could include active refusal, minimal engagement, or embracing AI. With a mix of these approaches, the curriculum as a whole can support AI literacy—along with a host of other established learning outcomes like critical reading and thinking.
I think of this as AI Aware curriculum.
For me, being AI aware means valuing both critical AI literacy and AI fluency. After all, we will need students who can think critically about AI and use AI responsibly.
Putting AI Awareness into Practice
With this in mind, I think most faculty will only be able to devote about 15 minutes to AI literacy or AI fluency each week. Fifteen minutes is just 10% of a 150-minute weekly course schedule, but even that is asking a lot of faculty. Starting at no more than 10% also puts some boundaries around your discipline-specific teaching.
You can actually accomplish quite a bit in 15-minute chunks of time with your students without it becoming overwhelming.
Activities for Cultivating AI Literacy
For critical AI literacy, you might assign a reading from Casey Fiesler’s AI Ethics and Policy news list and ask your students to reflect on and analyze AI’s impact on society. Five minutes of reading, followed by a ten-minute group or class discussion, can help cultivate more inquiry and reflection about generative AI.
Harvard’s AI Pedagogy project has many activities created by faculty designed to help students develop AI literacy. Maha Bali’s Demystifying AI Hype is a solid example that asks students to analyze marketing claims around generative AI. This can be integrated into many in-person classes for a short assignment.
To approach embodied teaching practices that give students space to work on practical skills without AI influence, consider the approach Jenny Lederer and Jennifer Trainor discuss in their recent guest post for the Important Work. Scaffolding in-class writing over short periods can help build autonomy and give students agency, limiting the temptation to offload learning to AI. Doing so in spans of 5-15 minutes can create a sense of community in many types of in-person classes.
You can also approach active reading through good old-fashioned annotation practices or use social annotation tools for online or hybrid courses to limit generative AI’s impact on deep reading skills and give learners a chance to analyze short passages critically. Turning an assignment into one that involves annotating a text doesn’t have to be time-consuming, but we should all be mindful that not everyone reads at the same pace or annotates the same way.
Creating discussion spaces in large lecture courses or fully online and hybrid courses is really challenging in our AI moment. Many students turn to ChatGPT to generate text, and many traditional LMS discussion boards are ineffective. Some instructors are using video discussion boards in Padlet and VoiceThread to replace text-based boards, but these tools do incur a cost. My university uses Panopto as its video client, so you might see if your institution provides access to a video or voice tool to augment or replace some or all text-based discussion boards. I limit my students’ video recordings to no more than 3 minutes and ask them to reply to two other students, so the time commitment stays manageable.
Activities for Developing AI Fluency
Part of fluency is understanding your responsibility as a user of generative AI, so you can have students use an LLM to generate a short activity in class, then show them how to openly disclose their AI usage and even make what they learned during that process transparent.
Mike Caulfield’s SIFT Toolbox for using AI to help fact-check claims found on social media is a great resource to explore with students, helping them understand how AI both contributes to online misinformation and can be used to navigate that landscape. You can preload Mike’s prompts into Claude or Gemini, though you may need a professional plan for the best results. Students can explore their own topics in short activities.
TextGenEd and its subsequent editions are filled with creative teaching experiments with AI that easily translate into classroom assignments. You can also explore the assignment library in Harvard’s AI Pedagogy Project for many thoughtful engagements with generative AI in teaching and learning.
All the major companies have some sort of “here’s how to use our AI tool” course. Anthropic’s AI Fluency Framework is likely the most up to date. I like that the course is released under a Creative Commons license, making it permissible to adapt, usable with other LLMs beyond Claude, and filled with activities that have specific time commitments. The weak areas I’d invest time in developing further are Discernment and Diligence.
The thing is, I see tremendous value in doing both sets of activities at different moments with my students so they can develop the awareness necessary to navigate AI-saturated spaces. I also couldn’t imagine teaching AI fluency without critical AI literacy, or vice versa. Some will disagree, to be sure.
Assessments Take Time to Change
I would really like to see more people in the AI education space move beyond broad divisions about generative AI in education and focus instead on actualized experiences that take classroom time into account. Without more emphasis on the clock, I fear we’re just talking about AI instead of bringing practical and meaningful experiences to our teaching.
I’m not sure we know how to grade assignments when generative AI is used. That’s going to be an ongoing challenge, one that requires far more energy (and time!) than any of us want to imagine. I fear emphasizing one framework of just analyzing AI vs. using AI will further entrench staunch advocates in their respective positions, when what we need are faculty engaged with both.
Assessment practices will have to change, and maybe this could be a start that doesn’t overwhelm faculty. We are seeing some assessment redesign frameworks, like the University of Sydney’s Two Lane Approach to securing assessments, begin to mature. But each time I look into the open Canvas site of the Two Lane Approach, I’m struck by how daunting assessment reform will be with generative AI. I also wonder if other extremely important forms of assessment reform, like alternative grading practices, addressing the collapse of DEI in higher education in the US, and the transactional model of education, may be left out of the much-needed assessment reform conversation. We all only have so much that we can process, and making small inroads isn’t trivial at this stage; it may lead to more profound and lasting changes.
Time will always be the ultimate constraint in education, regardless of technology. Until we design AI education frameworks that respect this reality, we’ll continue to see even the most well-intentioned initiatives fail, leaving educators feeling adrift. Fifteen minutes a week isn’t revolutionary, but I think it is possible for most of us. Let’s stop asking educators to choose between being AI adopters or AI resisters. Instead, let’s give them permission to be AI aware within the boundaries of their actual teaching lives.
One of the concerns I have right now is that even if you magically waved a wand and every teacher and professor out there committed to address Critical AI Literacy or AI Fluency in their classroom, two things would happen: [a] there would be overlap in which many students were encountering roughly the same thing in different classes and [b] there would also be contradiction, with different classrooms pushing students in different directions around AI norms—a contradiction made worse, perhaps, by the investment teachers are making ad hoc in "what is right" for their classroom right now that is difficult to walk back.
I don't know what the solution is, particularly given the fast-evolving nature of AI, but I'm quite skeptical that everyone charting their own path for their courses is a sustainable, equitable solution for students.
Agreed. As an adjunct, I'm not required to attend regular PD sessions...which is baffling in any era. I've pushed myself to attend AI-related PD sessions and to follow folks like you, but I also would like to be fairly compensated for the energy I invest in developing new materials related to AI. At some universities and community colleges, adjuncts occupy a large share of assigned positions and are not required or paid to attend meetings or participate in these kinds of discussions. How this can be considered professional is beyond me... Thanks for sharing your thoughts here.