14 Comments
Roger Thompson:

Love the questions used to engage with the framework. Made me wonder if they could be made into a kind of workflow.

Very interesting post.

Stephen Fitzpatrick:

My first reaction is that many students are already using AI independently of their teachers to get feedback on their work. Many in education call that cheating, but I think we all know the reality is much messier. Teachers incorporating AI feedback on student work directly into their own process is an entirely different issue.

From my experience, a huge part of the problem continues to be that the people responsible for policy-making (at least in K-12 — I'm not sure about higher ed) are so far behind the curve on what AI can or cannot actually do well that they have no strong basis for making an effective decision. Just as many students have jumped ahead of their teachers in the absence of clear and useful AI guidelines, early-adopter educators have pushed the envelope within schools to the point where it's very hard to pull back from what's already being done.

Frameworks are helpful, I suppose, but who is going to enforce these rules? Administrators? Department heads? I've seen a number of podcast and Zoom sessions with employers and graduate program leaders who are basically saying AI skills are now a must for anyone graduating into the job market. How does that complicate the equation?

Marc Watkins:

Excellent questions! We're in the Wild West in terms of how AI is being used. I'm not sure how this would play out through admin or even HR. That's why I think we need to start somewhere, with a basic framework we can use to develop consensus.

Maha Bali:

I love that you brought this up, and I agree it is an important topic that is not discussed critically enough. I do want to share something I have observed over the past few years: bringing up "free of bias" and "fair" as conditions for using AI is useless if the people reading such guidelines are not themselves deeply conscious of bias and equity. I believe most people who are genuinely aware of unconscious bias and the various dimensions and levels of oppression and inequity know there is almost no way generative AI, as it exists today, can avoid these. The issue is that people who are less conscious of things like implicit bias and systemic inequality are also not going to notice them when they appear in AI output. So I don't know how these guidelines can help, you know?

Marguerite Mayhall:

A colleague of mine at another university and I are arguing this very thing, and we are taking the issue to our university senates. This post will be very helpful, thanks!

Annette Vee:

Thank you for this conversation, Marc! I think that a framework for evaluating faculty use of AI is really needed. Are you thinking that this is something an instructor would run through as they consider how to integrate AI into their workflow? Or something a dept or academic unit would discuss in order to reach consensus about uses of AI?

Marc Watkins:

Thanks, Annette! I view it as extremely flexible: something individual faculty can use, or units and departments. It's really focused on guidance, and it can be used or adapted with ease.

Mark A. Bassett:

Thanks for mentioning the SECURE framework. It is a starting point, as you note; great to see your framework too.

Mark A. Bassett:

I’d also note that neither of our frameworks is designed to be enforceable.

Marc Watkins:

Yeah, I don't think either of us is arguing for enforcement; I don't know how that would even play out. Guided and intentional engagement, instead of haphazard shoot-from-the-hip hot takes, is what I'm going for.

Stephen Badalamente:

In a discussion with faculty today about AI ethics, the question raised was "what about sustainability?" It's difficult to assess, but it is clear to me that the use of AI comes with a cost, even if it is "free."

DrP:

My colleague and I are promoting the L.O.V.E. framework to guide educators in using generative AI: L for Logic, O for Originality, V for Verifiability, E for Ethics.

Marguerite Mayhall:

Not sure if this is the best place to put this, but a colleague and I are looking for papers that address this very thing. Call for papers: https://www.pdcnet.org/inquiryct/Calls-for-Submissions

Kelly:

Your thinking on this is really interesting. I don't agree with your positioning of academic freedom; I will address that in a post of my own shortly. Inviting collegial conversation!