Discussion about this post

Anita Sundaram Coleman

Thank you! Many faculty members are resisting AI labeling or disclosure, often dismissing it with comments like, “Aww, come on, we’ve all been using Grammarly.” 🤦🏽‍♀️ But you know what will ultimately drive change? Promotion and tenure requirements, along with scholarly communication policies. I conducted an analysis earlier this year and found that most major publishers have already implemented clear AI disclosure requirements. For example, here’s the AI policy from Taylor & Francis: https://taylorandfrancis.com/our-policies/ai-policy/. (Labels are coming!) Additionally, Google announced their AI detection tool just last month (it works better for long-form text than for short social media posts), meaning AI use will increasingly be enforceable in journal and book submissions. (That still doesn't mean AI detection is foolproof!)

I also wanted to mention that the Authors Guild is doing fascinating work in this area. All of this gives me hope that, while government regulation of AI may be stalled, the American story of techno-science (the way technology and science are often harnessed by capitalism to drive profit) will still be powered by the human spirit. I remain hopeful about real chances for a more ethical integration of AI, one that balances innovation with accountability and creativity. Thanks again!

Christine Ross

Thank you for saying it out loud. My students and I, who are reading "Burning Data" from RESET by Ronald Deibert, talked about this issue "The Day After."

We should be collecting data on AI use, on potential learning loss, and on the things AI is not good for when the user is a learner still acquiring literate discourse, so we can establish a counter-narrative to the commercialized utopianism currently out there.

Christine Ross,

Rochester Institute of Technology

