Publication Announcement: Thresholds in Education Special Issue on GenAI
Volume 48, Issue 1: Generative AI’s Impact on Education: Writing, Collaboration, & Critical Assessment
For the past year, I’ve been working with my colleague Stephen Monroe on editing a special issue for Thresholds in Education about generative AI’s impact on education. The first of three issues in this special volume is now available and may be accessed freely. Copied below is our introduction. We are so grateful to have the opportunity to elevate the voices of educators who are confronting GenAI in our classrooms!
Pedagogical Crossroads: Higher Education in the Age of Generative AI
By Stephen Monroe and Marc Watkins
The rise of Generative Artificial Intelligence (GenAI) is changing higher education. Our teaching and learning environments are radically different today than they were two years ago, before the launch of OpenAI’s ChatGPT and other Large Language Models (LLMs). The shifts have been dizzying, as the rapid expansion of GenAI tools has outpaced traditional academic rhythms and disrupted the general cycle of pedagogical adoption. For some faculty, this new age of GenAI has thus far been an age of pedagogical anxiety. Educators are worried. For other faculty, it has become an age of pedagogical creativity. Educators are invigorated. For many others, GenAI is both worrying and interesting. They are waiting to engage. The following three-part volume seeks to inform faculty in higher education by both confronting worrying realities and exploring inspiring opportunities.
Why Worry?
Faculty have good reasons to be anxious—we have all been involved in a giant public experiment with generative AI that no one asked for. We read alarming headlines about AI; we know, with or without proof, that some of our students are surreptitiously using AI; we are swamped in our daily lives with AI commercials; we hear conflicting opinions and approaches from colleagues. We notice silence (or incomplete attention) from our institutional leaders. Our classroom environments are changing rapidly, and yet we do not control the agents of that change. Educators are understandably off-balance and uncertain about best practices for the present and future.
Each conversation about AI in education should start by addressing the material conditions faculty face that impede their ability to engage AI. Most higher education institutions have not yet prioritized a reckoning with GenAI. Faculty at those schools have been left to their own devices, without adequate training or support on this emergent issue. It is likely that the average level of AI literacy among college instructors is lower than that among college students. This inversion is troubling and speaks to the issue of capacity and support. Some of our colleagues are highly engaged and informed, of course, but even for them it is difficult to monitor the rapid advancements flowing incessantly from Silicon Valley. GenAI models change weekly, and educators are busy people, with neither the time nor the resources to adjust their pedagogies each and every Monday morning. To regroup—and to regain our pedagogical footing—we need stable tools, practices, and institutional policies, but such stability is unavailable, at least for now.
In other words, educators across higher education are anxious about GenAI for very good reasons. While large technology companies care first and foremost about advancing toward profitability, educators care first and foremost about student learning. Thanks to decades of research and practice, we know a lot about how students learn—without AI. We know very little, as of yet, about how students learn—with AI.
For two years after the launch of ChatGPT, OpenAI remained largely silent about its tool’s impact on education. Releasing ChatGPT for free, without guidance or clear use cases, has had a profound impact. OpenAI’s “A Student’s Guide to Writing with ChatGPT” finally appeared in October of 2024—two years too late. Also unfortunate is that OpenAI’s guide frames ChatGPT mostly as a timesaver (“delegate citation grunt work to ChatGPT”) or as a synthetic stand-in for human wisdom (“compare your ideas against history’s greatest thinkers”). We view such perfunctory advice as little more than marketing. Students and faculty need more substantial guidance.
The following three-part volume of Thresholds gives voice to faculty who are grappling with GenAI’s early impact on higher education. We hear reflections and results from college and university educators about both the opportunities and challenges posed by the powerful new technology of GenAI. Contributors come from many disciplines, and their insights are practical and profound. They offer points of stability and studied optimism during this anxious era.
Terms & Focus
Before proceeding, a note on terminology: while artificial intelligence is a broad field within computer science, engineering, and robotics, one with roots in the 1950s, we are focused in this Thresholds series on Generative Artificial Intelligence (GenAI). We will use the specific term “GenAI” and the umbrella term “AI” more or less interchangeably, but we will always be referring to chatbots like ChatGPT, Claude, and Gemini. These are the nascent and disruptive technologies of our time.
Exploratory Openness to GenAI
In building this volume, our philosophy has been openness and pragmatism. We seek to avoid both the hype and anti-hype surrounding this technology. GenAI is here to stay, in all likelihood, and so we must engage it. Contributors to this volume are not AI advocates. Neither are they AI naysayers. Instead, they are practical investigators intent on confronting a new reality in which GenAI is widespread and expanding.
Thoughtful frameworks have emerged that discuss both AI literacy and AI refusal in writing studies. The MLA-CCCC Joint Task Force on Writing and AI released its third working paper this fall. “Building a Culture of Generative AI Literacy in College Language, Literature, and Writing” is a call for greater and deeper engagement with AI across institutions. In dissent, another group of insightful scholars released “Refusing GenAI in Writing Studies.” We realize that some scholars, particularly in the humanities, are advocating for AI resistance, and we support that exploratory work, even if we agree more with those calling for engagement. Why? Resisting AI encourages healthy and critical explorations of the ecological costs of data centers, myriad educational disruptions, the proliferation of disinformation, privacy concerns, the reification of societal biases, the dehumanization of our classrooms, etc. (U.S. Department of Education). It is important to surface these concerns, particularly because technology companies often seek to minimize them.
Both faculty GenAI resisters and adopters will need support from their institutions. The pace of AI development and deployment is not normal, nor should faculty feel pressured to keep up with these new tools as part of their normal jobs. Institutions must adapt by offering faculty more professional development opportunities—course buyouts, fellowships, structured communities of practice around AI and teaching, and in-depth institutes that bring both AI skeptics and AI early adopters together.
Many critics worry that GenAI will make the world worse, not better. Linguist Emily Bender writes that “AI research development and sales involves dehumanization” on a large scale. Creative writing professor Melanie Dusseau believes that employing GenAI in education is tantamount to dismantling “the imaginative practice of human writing [and] is abhorrent and unethical…there is no path to ethically teach AI skills.” We respect (and sympathize with) such oppositional perspectives on GenAI. However, the following three-part volume does not amplify resistance. At this early stage, we believe in experimentation and engagement. Our students are already living and working in a world occupied by GenAI. It is our obligation to begin working through the myriad consequences and to help them navigate the new reality.
We also believe that open disclosure of GenAI usage should be fundamental for both faculty and students. The text-only generation of early large language models like ChatGPT has given way to large multimodal models (LMMs) that can generate synthetic voices, videos, podcasts, diagrams, and even avatars. These new multimodal systems pose additional challenges and opportunities within education, and when they are deployed in our classrooms, we should all be upfront about their usage.
As scholarly debates rage about GenAI, we are unconvinced by one particular camp: the AI competency deniers. Meta’s Yann LeCun argues, “we’re never going to reach anything close to human-level intelligence by just training on text. It’s just not going to happen.” Other naysayers focus on hallucinations and biases as evidence that GenAI is fundamentally flawed. Gary Marcus writes, “We have no concrete reason, other than sheer technoptimism, for thinking…any given [GenAI] system will be honest, harmless, or helpful, rather than sycophantic, dishonest, toxic or biased.” Indeed, while some of the most creative and invigorating writing about AI comes from AI skeptics, their predictions about impending limitations have thus far been proven wrong. Even if AI progress stops today, the current capabilities of these systems are quite extraordinary, consequential, and difficult to deny.
Indeed, today’s major GenAI tools are remarkably competent when compared to previous systems from only a few years ago. They have now surpassed non-expert human performance in many areas central to education, as noted in Stanford’s most recent “Artificial Intelligence Index Report.” This thorough, technical, and rigorous annual report documents GenAI’s evolving capabilities on many different tests and benchmarks. The research team concludes that “AI systems routinely exceed human performance on standard benchmarks. Progress accelerated in 2023. New state-of-the-art systems like GPT-4, Gemini, and Claude 3 are impressively multimodal: they can generate fluent text in dozens of languages, process audio, and even explain memes.”
More specific to education, a published paper from OpenAI shows GPT-4 scored in the 93rd percentile on the SAT (Reading and Writing), the 99th percentile on the GRE Verbal exam, and the 85th–100th percentile on the AP exams in History, Biology, Environmental Science, Macroeconomics, Microeconomics, Psychology, Statistics, US Government, and US History. There are also some remaining weaknesses, such as a lower score on the AP English Language and Composition exam (14th–44th percentile), but the overall results are impressive and disconcerting. If GPT-4 were an applicant to our universities, it would receive scholarships, accolades, and lots of recruiting mail. Furthermore, we recognize that technology firms like OpenAI and Google are constantly addressing current limitations and shortcomings—and that GenAI tools often outperform even the expectations of their creators. As Julian Togelius has written, “the last few years have seen amazing results in training very large neural networks on enormous amounts of data scraped from the internet. The resulting networks are sometimes shockingly capable, performing tasks they were not explicitly trained for.” As Togelius points out, the major LLMs have only been trained to predict text, but they can also translate, summarize, philosophize, argue, and more. Indeed, one of the scariest truths about GenAI is that even the developers do not fully understand how or why GenAI is achieving such remarkable results. Frightening but also fascinating.
Why Be Optimistic?
Higher education is prepared—better than any other sector—to reckon with GenAI. Given our disciplinary breadth and varied research methods, faculty in higher education can disentangle the complicated and emergent issue of applied GenAI. How will GenAI affect our society? How might the current shift compare to previous shifts like the Industrial Revolution? What will GenAI mean to the arts? To human psychology? To science and medicine? Thanks to the rise of this new technology, scholars and students are suddenly faced with thousands of new and invigorating questions, not just about AI—but about humanity.
This humanistic work begins in our classrooms. While GenAI poses real pedagogical challenges to academic integrity, it also presents many pedagogical opportunities. As faculty build AI literacy, they will naturally rethink teaching strategies and classroom practices. They may find new ways to use technology to activate their students’ curiosity. With funding and institutional support, we hope faculty will shift from worrying about the misuse of GenAI to exploring many of the deeper questions posed by the technology and its impact on learning.
This move toward pedagogical creativity is already occurring, as evidenced by the seven essays in the following issue. First, Ruth Li shares a collaborative classroom activity, one in which students read and critique AI-generated writing. When students annotate ChatGPT-generated essays, according to Li, they co-construct knowledge while building AI literacy. Next, Katt Blackwell-Starnes shows the promise of using GenAI in the first-year writing classroom. Blackwell-Starnes leaned into the technology, partially in frustration with its pervasive but unacknowledged presence, and she found students eager to talk about GenAI and to explore more open and ethical usage. Similarly, Holly Ryan highlights the need for writing educators to talk with their students about GenAI and to give them low-stakes opportunities for exploration and experimentation. Ryan’s students engaged ChatGPT and, upon reflection, began to challenge current definitions of collaboration and authorship.
The remaining essays forward useful frameworks for teaching and learning with GenAI and encourage college educators to use GenAI for exploration and student growth. David Nelson and his co-authors provide actionable suggestions for faculty with varying perspectives on technology. They advocate for open communication with students—and for flexible classroom policies that encourage co-creation and transparency. Meghan Velez, Zackery Reed, Darryl Chamberlain, and Cihan Aydiner present a study of ChatGPT as a feedback and assessment tool in college writing courses. They find potential in their experiments, but urge caution as LLMs can oversimplify assessment criteria and ignore social-rhetorical dynamics. Another rhetorician, Guy Krueger, reports on his classroom experiments with GenAI, finding that students can and do make ethical choices and are often reluctant to surrender too much of their own authorial voices to AI. Krueger concludes that many student writers, when using AI, are comfortable with a new kind of co-agency. Finally, Daire Uanachain and Lila Aouad argue that GenAI is shifting our educational paradigm. They advocate for fully integrating GenAI into our learning environments, as a proactive way of teaching ethics and research skills.
The scholars represented in this issue—and in the two issues to follow—give us good reasons to be optimistic about the future of higher education. They forward thoughtful (and sometimes provocative) concepts and share pragmatic, data-driven ideas for teaching and learning. Certainly, GenAI will challenge our institutions, but faculty across higher education are adept learners and savvy human beings. We seek to make the world better through our teaching, research, and service. GenAI can disrupt, but GenAI can also assist.
Individual Manuscripts:
Critiquing ChatGPT Compositions: Collaborative Annotation as an Approach to Enhancing Students’ Metalinguistic Awareness of AI-Generated Writing
Ruth Li

“I prefer my own writing”: Engaging First-year Writers’ Agency with Generative AI
Katt Blackwell-Starnes

Can AI Be a Co-Author?: How Generative AI Challenges the Boundaries of Authorship in a General Education Writing Class
Holly Ryan, Daniel Abramov, Samantha Acker, & Sydney Elkins

Collaborative Intelligence: Towards Practical, Critical and Cooperative Teaching & Learning with AI
David B. Nelson, Anaelle Emma Gackiere, Samantha Elizabeth LeGrand, & Daniel A. Guberman

Black Boxes Revisited: Understanding GenAI Responses to Student Writing Across the Curriculum
Meghan Velez, Zackery Reed, Darryl Chamberlain, & Cihan Aydiner

Rhetorical Choices & Voice: Generative AI in the First-Year Composition Classroom
Guy J. Krueger

Generative AI in Education: Rethinking Learning, Assessment, & Student Agency for the AI Era
Daire Maria Ni Uanachain & Lila Ibrahim Aouad
Comments

Thanks, Marc. This is a must-read for educators interested in how GenAI is impacting education right now. As always, I am interested in how this will filter down to considerations at the high school level. Do you know of anyone doing equivalent work in 9-12?
This simply isn’t true: “Indeed, while some of the most creative and invigorating writing about AI comes from AI skeptics, their predictions about impending limitations have thus far been proven wrong.”
Ten years ago, I made a prediction about the ability of computers to automate parts of human communication, including the limits of that capacity. And nothing has proven my initial prediction wrong.
If you are interested in my point of view, I would be happy to speak with you. I must warn you that I have a gargantuan (and, given the circumstances, justifiable) chip on my shoulder regarding this issue. I think there is a massive blind spot here. And I think it is very consequential to the future of writing pedagogy, the humanities, AI, the whole thing.