Several universities tried AI detection and stopped, yet many more will cling to the unreliable, ultimately futile cat-and-mouse chase of detection because it's easier to chase one's own tail than to confront the daunting questions genAI poses for our students and ourselves. For starters, how exactly do we come to a consensus about genAI's place in teaching and learning?
What Will the Students of Tomorrow Need?
Already, dozens of third-party developers are launching generative edtech tools for K-12. Khan Academy has its GPT-4-powered Khanmigo, while Microsoft is slowly unveiling a suite of generative tools, like Reading Coach. In five years, the students who enter higher education will likely arrive with some experience using these systems in their studies. And, no, none of these tools will run on free models.
ChatGPT's free GPT-3.5 is an extremely nerfed system compared to the paid models, raising questions of equity and access. Its outputs are nearly always generic and lackluster next to GPT-4's. Likewise, Google's newly launched Gemini Advanced runs on the Ultra model, while the free Gemini uses Pro, roughly ChatGPT 3.5's equivalent. Do we want students using lackluster, underpowered language models in their studies instead of top-of-the-line systems, or, perhaps more pointedly, do we want them using these systems at all? What about environmental costs? Should we invest in more sustainable open models? Will we have opt-outs for faculty who don't want to use generative AI in their teaching? I guarantee each of these questions deserves at least a book-length response.
The State of Discourse: To Generate or Not Generate
Arizona State University's partnership with OpenAI to give students access to GPT-4-powered personalized learning bots and interfaces received a lot of well-earned criticism. It feels too early, too chaotic, and too fraught with risk to deploy generative tools without first establishing shared governance over how faculty could, should, or should not use these features with students. We're still in the early phase of establishing guidelines and principles for how educators might use these features, as outlined in the CCCC/MLA Joint Task Force on AI and Writing's thoughtful initial guidance on generative AI in scholarly and creative publications.
What most concerns me is that the technology isn't 'settled' yet. We're not going to see a static landscape where generative tools, use cases, and even the underlying technology stabilize for more than a few months before newer, more capable ones arise. And all of this assumes we will be dealing with generative AI and not artificial general intelligence. AGI is true artificial intelligence: the stuff of sci-fi dreams and nightmares.
If we follow the logic of the effective accelerationism movement, today's copilot-style assistive systems are but a brief stopping point on the road to true artificial reasoning. Most in education, and indeed in society, aren't preparing for that, nor could they even if they tried. I'm convinced we're nowhere near the kind of capabilities the e/acc crowd portends, but that isn't going to slow down their efforts to build it.
But let's hope AGI is nowhere on the horizon. What are we supposed to do in the next five years? Do we give students access to the most capable models to help them learn and, hopefully, future-proof their careers by teaching them to use these tools? ASU thinks so, and likely so do many other institutions. This will inevitably create a patchwork of adopters whose access to AI literacy, training, and the most advanced models gives them a decisive edge over peers who lack it. That, of course, assumes this is the correct path forward. Many will disagree and see giving students access to the most advanced models as a recipe for deskilling learners at a scale we've never encountered in education. So the biggest question reveals itself: how do we navigate the tension between these two emerging camps?
The AI Culture Wars
People will use genAI. We're going to have to come to terms with that. It in no way means we have to advocate for full adoption, but we must make room for those who do adopt it and set clear expectations. One piece of guidance from the CCCC/MLA Joint Task Force that I applaud comes at the end of their recommendations:
When evaluating work that has used AI in its writing process, it is important that reviewers assess the work on its own merits, regardless of what they may know about the use of AI. Prejudice and backlash against authors who use these technologies are unacceptable, especially when they are following the guidelines described above, largely because it would discourage transparency about the use of AI in the writing process.
Will people heed this advice, or will they allow their biases against AI to deepen the divisions we already see online between the pro- and anti-AI camps?
The cultural battle emerging over AI in education, and indeed the broader world, seems to fall between those two camps, and I don't think people with established skills will ever adopt generative AI as part of their daily practice. Instead, I think we're going to see lines drawn in the sand. After all, the folks who have those skills worked to build them, often spending years, and going into debt, to earn mastery in their fields. I fully expect many of them to view the use of generative AI as a form of cheating.
Early testing shows that those with underdeveloped or emerging skills, rather than those who have already achieved mastery, are the most likely to benefit from adopting generative AI in their jobs. This suggests that adoption could help unprepared, unmotivated, and struggling students the most, while their higher-performing peers would gain the least from it. What's lost in this is that we want as many students as possible to develop mastery of the skills they need for their studies and future careers, not to use generative AI as a crutch to help them pass.
Our society is built on social hierarchies in which access to higher education is often viewed as the vehicle for moving up the social and economic ladder. Shockingly, the pandemic years swelled average net worth to absurd levels. The average American in their 50s now has a net worth of over one million dollars (the median is still far lower, at around $300,000). That's partly because of surging home prices and broader stock ownership, but how much of it also has to do with having a college degree?
ASU likely sees diving into generative AI as a way to ensure that more of its students graduate with a degree, granting them access to a middle-class lifestyle. However, I doubt those who followed a traditional path will view students who use generative AI the same way they view those who learned as they did.
I said this last year, and I think it rings truer today: the mark of future mobility will not be having access to a college education. Rather, it will be whether you could afford to attend an institution where a human being taught you or had to attend one where you learned from an algorithm.
Navigating the Divide
Establishing transparency and accountability in the use of generative AI is critical to building trust among those with advanced degrees and skills who may harbor biases against AI integration. Policies that mandate clear disclosure of when and how AI tools are used in professional work, research, and decision-making can significantly ease concerns about ethical use and potential misuse.
We're seeing such approaches in academic research. Many guidelines now require authors to specify when and how AI has contributed to their work. Doing so helps maintain the integrity of the scholarly record and reassures peers that AI is being used to augment rather than replace human intellect.
It's trickier in corporate settings, where AI's role in product development, data analysis, and customer service is often opaque. A culture of openness can clarify where genAI adds value and ensure that stakeholders understand the human oversight involved in critical decisions. Accountability measures, such as audit trails for AI-generated content or decisions, further build trust by making it possible to trace outcomes back to their AI or human origins, fostering a culture of responsibility and ethical AI use. By setting and enforcing standards for transparency and accountability, organizations and institutions can demonstrate a commitment to ethical AI practices and foster a more accepting, informed attitude toward the technology among professionals.
Expanding Access to AI Literacy
Many in education are laying the groundwork for sustained AI literacy. The kind of engaged, in-person training I helped create for educators at the Mississippi AI Institute is being adopted by other institutions, which are expanding on, and I hope vastly improving, the model.
The University of Kansas will host a five-day institute for Kansas City-area educators. The institute, AI and Digital Literacy: Toward an Inclusive and Empowering Teaching Practice, will take place this summer and will support participating faculty with a stipend.
The University of Kentucky's Center for the Enhancement of Learning and Teaching will hold a year-long Teaching Innovation Institute that offers participants fellowships. This extends the model beyond a single in-person event, sustaining relationships and building AI literacy as the landscape of AI continues to evolve.
I'm sure others will follow, so please share them with me so I can spread the word. Training faculty in AI literacy offers the best path toward answering some of the thorny questions we continually encounter about AI and learning. I firmly believe this conversation should be teacher-led, not handed down from the administration, when it comes to deciding which skills we need to protect and preserve and which new competencies we should help our students explore.
Lots of interesting observations in here. One issue Marc identifies is the increasing pushback from the anti-AI crowd. I'd like a more detailed analysis of those arguments. I've seen scattered criticisms from a variety of sources, but if anyone has a link to a thorough and sober critique by educators who have advanced significant philosophical and pedagogical arguments against adopting AI in higher education, I'd love to see it. The AI hype can sometimes seem overwhelming and underestimate the skepticism that still remains. Much of the very reasonable current criticism in the corporate context revolves around the fact that genAI is just not good enough (even the paid models) to justify a major investment in AI tools. That tracks with what I've experienced with some of the platforms that promote abilities that just don't work yet. For example, my experience with uploading lengthy PDFs is middling at best; most of the tools just don't "read" the document with any degree of fidelity or detail that makes it worth the effort. The tricky part is the assumption that AI will continue to "get better" on an almost exponential scale. What if that turns out not to be true and we enter a lengthy plateau period? The best AI currently is still very impressive, but those of us who have been using it consistently since November of 2022 will likely concur that there has been a lull. I want to see another significant leap forward before I am back on the "AI will change everything" train. The frequency of poor and inaccurate AI output will legitimately support skeptics' arguments until those issues substantially improve.
We reject AI detectors for being too inaccurate, but are the LLMs any more accurate in their results?
I am glad to see how much attention you give to the equity issues. They often get mentioned but then get shoved onto the back burner, or off the burners completely.