10 Comments

[1] Love this mindset around normalizing disclosure rather than living in the false dichotomy you name. It is reasonable and it is achievable—and it is also a fair standard to hold educators to (which is my primary concern right now, much more than student usage).

[2] I do think students are in an incredibly precarious position at the moment, though, in having to navigate different expectations and consequences around AI from classroom to classroom (or sometimes within a given classroom). I know educators are doing their best, but the consequences are severe and trust-destroying not just in individual classrooms, but more broadly.

[3] I agree that the need is there for these conversations in our classrooms, but I don't have a ton of faith that most of us (raises hand) have the support and knowledge necessary to facilitate them, especially when the landscape around this topic, in education and beyond, keeps shifting. I have very little faith in ad hoc conversations substantively moving the needle in a positive direction; this needs to be institutional and collaborative and normed, and that's beyond any of our individual classrooms, right?

author

I think we're still in the early days of AI in education, and that is going to require us to push these ad hoc conversations up to administrators and bring them into the conversation so that we get the support we need.


Very well said. We can't really wait out the threat of AI; it is only going to get worse from here.


Once again, Marc is very observant! AI is here, and while we didn't design this tech, we're living in this world. We have a responsibility to our students to help them discern and reflect on AI's role in their lives and writing. Your description of how to approach AI with students mirrors my own, and I really like the way you put it:

"This fall, I’ve asked my students to adopt open disclosure if they use an AI tool, reflect on what it offers or hinders their learning, and use restorative practices to try and help them understand that misusing generative AI isn’t about rule-breaking; it impacts the ethical framework of trust and accountability we’re trying to establish as a class."

Oct 20 · Liked by Marc Watkins

Well spoken. As someone who strongly rejects AI and is concerned about it, I nonetheless see awareness as essential if we are to survive.

Oct 20 · edited Oct 20 · Liked by Marc Watkins

I agree entirely with your characterization of the problem. I am not so sure about formal disclosure as the answer, especially given the shame and embarrassment tied to perceptions of AI as a cheating machine. That said, I don't know of any solutions that fit the scale of the problem. I'll be interested to see how disclosure works out in practice.

author

Part of normalizing disclosure has to be getting past that shame and embarrassment. As faculty, we need to move away from shaming students for using AI and toward getting them to critically explore whether the way they're using it is helpful to their learning.

Oct 20 · Liked by Marc Watkins

As always, Marc, you are at the forefront in identifying the current state of affairs, at least in higher ed. The situation, based on my anecdotal evidence, is even worse in K-12. Administrators and teachers are overwhelmed, and most are sticking their heads in the sand. I stand by my initial observation from Sarah Eaton's 6 Tenets of Post-Plagiarism (link below): "trying to determine where human writing ends and AI writing begins will be pointless and futile." But she also emphasizes the importance of attribution.

This is not an issue that will go away unless the entire AI edifice collapses under its expense and the models stop improving, which is unlikely. What we have now is already enough to make educators' lives difficult. The problem is that most students are still using it surreptitiously, and many more have learned to use it behind the scenes to develop thesis statements, brainstorm, and do important and more difficult critical thinking instantly. Some of this could be useful and embraced, or at least taught by teachers, but most don't have the experience or interest in using the models. Right now it's the status quo and regression to the mean.

The next round of angst will come when and if new models represent a significant jump in quality and capability, which I am skeptical of, especially as the labs keep releasing these incremental tools. I am certainly curious when GPT-5 is made available and what it actually does better. I still find LLMs useful for a wide variety of tasks, but I use them less often and go much deeper when I do.

https://drsaraheaton.com/2023/02/25/6-tenets-of-postplagiarism-writing-in-the-age-of-artificial-intelligence/


I was with you, unless you mean that the students themselves are doing that more important and difficult critical thinking. Outsourcing it to the bot? Yes. But just as with GPS and other technologies, the main result of the outsourcing is that the human doesn't do the thinking anymore (the same thing happened to me after I attached Tiles to my valuables).


More conversations like this are essential, if for no other reason than to bring the whole tangled web of issues into open discussion. I also appreciate the idea of using a template to make an author's use of AI in writing more transparent.

What I wonder is, where do we draw the line when it comes to labeling AI use? If the majority of AI’s involvement in a piece is limited to editing for grammar, spelling, and improving the flow of thought, should that still require a label? Should readers be informed even when AI’s role is minimal or purely editorial?

My two cents:

When I teach or coach others on AI, I often remind the class that AI rarely gives you a final answer but instead leads to insights that help you shape and write new ideas on your own.
