Several universities tried AI detection and stopped, yet many more will cling to the unreliable and ultimately doomed detection cat-and-mouse chase, because it's easier to chase one's own tail than to confront the daunting questions genAI poses for our students and ourselves.
Lots of interesting observations in here. One issue Marc identifies is the increasing pushback from the anti-AI crowd. I'd like a more detailed analysis of those arguments. I've seen scattered criticisms from a variety of sources, but if anyone has a link to a thorough and sober critique by educators who have advanced significant philosophical and pedagogical arguments against adopting AI in higher education, I'd love to see it. The AI hype can sometimes seem overwhelming and can underestimate the skepticism that still remains.

Much of the very reasonable current criticism in the corporate context revolves around the fact that genAI is just not good enough (even the paid models) to justify a major investment in AI tools. That tracks with my experience with some of the platforms that promote abilities that just don't work yet. For example, my experience with uploading lengthy PDFs is middling at best - most of the tools just don't "read" the document with any degree of fidelity or detail that makes it worth the effort.

The tricky part is the assumption that AI will continue to "get better" on an almost exponential scale. What if that turns out not to be true and we enter a lengthy plateau period? The best AI currently is still very impressive, but those of us who have been using it consistently since November 2022 will likely concur that there has been a lull. I want to see another significant leap forward before I am back on the "AI will change everything" train. The frequency of poor and inaccurate AI output will legitimately support skeptics' arguments until those issues substantially improve.
We reject AI detectors for being too inaccurate, but are the LLMs any more accurate in their results?
I am glad to see how much attention you give to the equity issues. They often get mentioned but then get shoved onto the back burner - or off the burners completely.
This discussion also reminds me of the early days of online learning (really, even through the 2000s), when an online degree wasn't seen as legitimate--it still, in some pockets and for some institutions, doesn't hold the same weight or is seen as less-than...
This post really hits at the heart of the complex challenges and opportunities that generative AI brings into the academic landscape. Your balanced take on the potential for AI to either deskill learners or empower them with future-ready tools was particularly thought-provoking - the notion that access to advanced AI models could become a new sort of digital divide among students is a powerful reminder of the need for thoughtful, equitable integration of technology in education.
I was also drawn to the point about the push and pull between adopting AI tools and the resistance rooted in traditional learning methodologies. It's fascinating to see how institutions like ASU are navigating these waters, setting precedents that will likely inform broader educational policies in the future.
Thanks for the shout-out for the AIDL institute, Marc!