The mood around AI in education has taken a markedly darker tone of late. Faculty are increasingly exhausted, not just angry but agonizing over students who choose to offload their learning to tools like ChatGPT. Many of us have advocated for a path of information literacy, teaching students about this technology under the umbrella of AI literacy.
The MLA-CCCC Joint AI Task Force published its third working paper, “Building a Culture for Generative AI Literacy in College Language, Literature, and Writing,” about teaching all stakeholders about AI. These reports represent tireless and often thankless work. They also don’t make people happy. That’s not the point. As the authors state in the opening:
There is the risk in documents such as this, however, that through the very act of creating them, we are foreclosing alternatives by acquiescing rather than resisting. After all, we did not ask for or create these tools, so why should we make a good faith effort to engage with them now? But the technology is here, and while most educators were not its architects we believe it is in our collective professional interest to offer students, colleagues, and administrators balanced and informed perspectives on the risks and harms as well as the potential benefits.
A number of folks are upset at building a culture around AI literacy in first-year composition courses, and I’m not quite sure why. Engaging with AI doesn’t equal adoption. We can’t hide from AI, though I’d argue that’s the path many are attempting to take. The text-based generative tools students use in apps like ChatGPT pale in comparison to the multimodal generative AI that can mimic your voice, image, likeness, and mannerisms. Just because you aren’t interacting with that form of AI when marking a paper or giving feedback doesn’t mean your students won’t encounter it elsewhere. AI literacy isn’t advocating for a specific position about AI—far from it. Rather, it aims to inform users and equip them with knowledge while cutting through the hype around tools that are often marketed to our students as magic.
Why We Need To Teach AI Literacy
“Ever wish you could be in two places at once?” So begins the marketing email from HeyGen, a company that creates AI avatars.
Now you can – with HeyGen's Interactive Avatar. Our latest update lets your AI avatar join one or multiple Zoom meetings, simultaneously, 24/7. Your avatar won't just look and sound like you, it'll think, talk, and make decisions, just like you. Armed with whatever knowledge or persona you give them, the Interactive Avatar is perfect for online coaching, customer support, sales calls, interviews, and more. It can take on repetitive meetings with ease, freeing you up for what really matters.
No, the avatar won’t “think” or “make decisions just like you.” And I cringe to think how people on the other end of those endless Zoom meetings will greet a synthetic version of you. Part of AI literacy is teaching students how to spot this BS. OpenAI’s Education Forum revealed that the majority of its global users are students. Does anyone really think we’re going to make it through this without doing the hard work of talking about AI with our students?
I get the absurdity of our moment. We’re seeing mega-corporations try to mask their massive AI energy consumption by investing in small nuclear reactors just to keep their climate pledges. OpenAI might claim they’ve achieved true artificial intelligence (of course they haven’t) just to get out of their contract with Microsoft. These and countless other reasons are why we must talk about AI with students so they understand how AI is shaping our culture. You can absolutely have those conversations without adopting a generative tool, but that does require engaging in discourse about AI. A wonderful resource to start with is Casey Fiesler’s AI Ethics and Policy News. Assign students readings that speak to their interests and invite them to explore just how deeply AI is impacting our society.
Advocacy Is The Only AI Policy We Have
The notion that we should either ban or embrace AI in education is a false dichotomy, one far too many people have fallen into. We should all be very, very skeptical about these tools and how our students use them. But at the end of the day, our students in higher education are adults, and they’re going to have to come to terms with using, or avoiding, generative AI in their private lives and their work. We have an opportunity here to shape that. And I don’t believe that will happen by pasting boilerplate language on syllabi trying to explain why AI is bad or telling students they’re free to use ChatGPT to their heart’s content.
This fall, I’ve asked my students to adopt open disclosure if they use an AI tool, reflect on what it offers or hinders their learning, and use restorative practices to try and help them understand that misusing generative AI isn’t about rule-breaking; it impacts the ethical framework of trust and accountability we’re trying to establish as a class. I don’t offer this framework as a perfect solution, but I’ve made the choice not to despair about AI and instead use this as a moment to help my students explore the world around them.
I’ve included the first article I wrote about open AI disclosure for The Chronicle of Higher Education below:
Why We Should Normalize Open Disclosure of AI Use
The start of another fall semester approaches, and wary eyes turn once again to course policies about the use of generative AI. For a lot of faculty members, the last two years have been marked by increasing frustration at the lack of clear guidance from their institutions about AI use in the classroom. Many colleges have opted against setting an official AI policy, leaving it to each instructor to decide how to integrate — or resist — these tools in their teaching.
From a student’s perspective, enrolling in four or five courses could mean encountering an equal number of different stances on AI use in coursework. Let’s pause for a moment and take the issue out of the realm of syllabus-policy jargon and focus instead on a very simple question:
Should students — and faculty members and administrators, for that matter — be open about using generative AI in higher education?
Since ChatGPT was released, we’ve searched for a lodestar to help us deal with the impact of generative AI on teaching. I don’t think that’s going to come from a hodgepodge of institutional and personal policies that vary from one college to the next and even from one classroom to another. Many discussions on this topic flounder because we lack clear standards for AI use. Students, meanwhile, are eager to learn the standards so they can use the technology ethically.
We must start somewhere, and I think we should begin by (a) requiring people to openly disclose their use of these tools, and (b) providing them with a consistent means of showing it. In short, we should normalize disclosing work that has been produced with the aid of AI.
Calling for open disclosure and a standardized label doesn’t mean faculty members couldn’t still ban the use of AI tools in their classrooms. In my own classroom, there are plenty of areas in which I make clear to my students that using generative AI will be unhelpful to their learning and could cross into academic misconduct.
Rather, open disclosure becomes a bedrock principle, a point zero, for a student, teacher, or administrator who uses a generative AI tool.
It’s crucial to establish clear expectations now because this technology is moving beyond models of language. Very soon, tools like ChatGPT will have multimodal features that can mimic human speech and vision. That might seem like science fiction, but OpenAI’s demo of its new GPT-4o voice and vision features means it will soon be a reality in our classrooms.
The latest AI models mimic human interaction in ways that make text generation feel like an 8-bit video game. Generative tools like Hume.ai’s Empathic Voice Interface can detect subtle emotional shifts in your voice and predict if you are sad, happy, anxious, or even sarcastic. As scary as that sounds, it pales in comparison to HeyGen’s AI avatars that let users upload digital replicas of their voices, mannerisms, and bodies.
Multimodal AI presents new challenges and opportunities that we haven’t begun to explore, and that’s more reason to normalize the expectation that all of us openly acknowledge when we use this technology in our work.
The majority of faculty members will soon have generative tools built into their college’s learning-management system, with little guidance about how to use them. Blackboard’s AI Design Assistant has been available in Ultra courses for the past year, and Canvas will soon roll out AI features.
If we expect students to be open about when they use AI, then we should be open when we use it, too. Some professors already use AI tools in instructional design — for example, to draft the initial wording of a syllabus policy or the instructions for an assignment. Labeling such usage where students will see it is an opportunity to model the type of ethical behavior we expect from them. It also provides them with a framework that openly acknowledges how the technology was employed.
What, exactly, would such disclosure labels look like? Here are two examples a user could place at the beginning of a document or project:
A template: “AI Usage Disclosure: This document was created with assistance from AI tools. The content has been reviewed and edited by a human. For more information on the extent and nature of AI usage, please contact the author.”
Or with more specifics: “AI Usage Disclosure: This document [include title] was created with assistance from [specify the AI tool]. The content can be viewed here [add link] and has been reviewed and edited by [author’s full name]. For more information on the extent and nature of AI usage, please contact the author.”
Creating a label is simple. Getting everyone to agree to actually use it — to openly acknowledge that a paper or project was produced with an AI tool — will be far more challenging.
For starters, we must view the technology as more than a cheating tool. That’s a hard ask for many faculty members. Students use AI because it saves them time and offers the potential of a frictionless educational experience. Social media abounds with influencer profiles hawking generative tools aimed at students with promises to let AI study for them, listen during lectures, and even read for them.
Most students aren’t aware of what generative AI is beyond ChatGPT. And it is increasingly hard to have frank and honest discussions with them about this emerging technology if we frame the conversation solely in terms of academic misconduct. As faculty members, we want our students to examine generative AI with a more critical eye — to question the reliability, value, and efficacy of its outputs. But to do that, we have to move beyond searching their papers for evidence of AI misuse and instead look for evidence of learning with this technology. That happens only if we normalize the practice of AI disclosure.
Professional societies — such as the Modern Language Association and the American Psychological Association, among others — have released guidance for scholars about how to properly cite the use of generative AI in faculty work. But I’m not advocating for treating the tool as a source.
Rather, I’m asking every higher-ed institution to consider normalizing AI disclosure as a means of curbing the uncritical adoption of AI and restoring the trust between professors and students. Unreliable AI detection has led to false accusations, with little recourse for the accused students to prove their words were indeed their own and not from an algorithm.
We cannot continue to guess if the words we read come from a student or a bot. Likewise, students should never have to guess if an assignment we hand out was generated in ChatGPT or written by us. It’s time we reclaim this trust through advocacy — not opaque surveillance. It’s time to make clear that everyone on the campus is expected to openly disclose when they’ve used generative AI in something they have written, designed, or created.
Teaching is all about trust, which is difficult to restore once it has been lost. Based on prior experience, many faculty members will question whether they can trust their students to openly disclose their use of AI. And yet our students will have to place similar trust in us that we will not punish them for disclosing their AI usage, even when many of them have been wrongly accused of misusing AI in the past.
Open disclosure is a reset, an opportunity to start over. It is a means for us to reclaim some agency amid the dizzying pace of AI deployments by creating a standard of conduct. If we ridicule students for using generative AI openly, grading them differently, questioning their intelligence, or betraying other biases, we risk students hiding their use of AI. Instead, we should be advocating that they show us what they learned from using it. Let’s embrace this opportunity to redefine trust, transparency, and learning in the age of AI.
[1] Love this mindset around normalizing disclosure rather than living in the false dichotomy you name. It is reasonable and it is achievable—and it is also a fair standard to hold educators to (which is my primary concern right now, much more than student usage).
[2] I do think students are in an incredibly precarious position at the moment, though, in having to navigate different expectations and consequences around AI from classroom to classroom (or sometimes within a given classroom). I know educators are doing their best, but the consequences are severe and trust-destroying not just in individual classrooms, but more broadly.
[3] I agree that the need is there for these conversations in our classrooms—but I don't have a ton of faith that most of us (raises hand) have the support and knowledge necessary to facilitate those conversations, especially with a landscape in education and beyond on this topic that continues to move. I have very little faith in ad hoc conversations happening in a way that substantively moves the needle in a positive direction—this needs to be institutional and collaborative and normed, and that's beyond any of our individual classrooms, right?
Once again, Marc is very observant! AI is here, and while we didn't design this tech, we're living in this world. We have a responsibility to our students to help them discern and reflect on AI's role in their lives and writing. This description of approaching AI among students mirrors mine, and I really like the way you put it:
"This fall, I’ve asked my students to adopt open disclosure if they use an AI tool, reflect on what it offers or hinders their learning, and use restorative practices to try and help them understand that misusing generative AI isn’t about rule-breaking; it impacts the ethical framework of trust and accountability we’re trying to establish as a class."