Anthropic’s Claude 3 is here, and with it comes much-hyped rhetoric of true artificial intelligence, which in turn leads to the predictable pattern of early adopters seeing ghosts of AGI in every output, followed by legions of skeptics poking holes in these flimsy observations. We should pause and remember that the people at Anthropic and OpenAI see current transformer-based systems as mere stepping stones on the road to true artificial intelligence, and that playing into either boosting or bashing creates more engagement for a technology that hasn’t slowed down long enough for people to actually adopt it as part of their daily practice.
OpenAI’s response to Elon Musk’s bizarre lawsuit reveals that the true intention of these companies is to monetize AI assistants so that they fuel hype and criticism, which in turn create the momentum and capital to fund research. ChatGPT’s and Claude’s basic purpose is to serve as a societal klaxon, a noisemaker, a vehicle to drive interest that funds research into the most speculative science-fiction fantasy in human history.
Many of us (including me!) have added to this dynamic each time we’ve posted some example of an output we found astounding or taken the time to note the critical lack of coherence, ethical challenges, or cryptic implications this technology poses for the future. Claude 3 is not an example of AGI. It’s simply an improved version of a thing most people still haven’t found a use for in their daily lives, and speculation drives another version, which begets another, then another, to fuel more research. It’s like living in a world where a new iPhone comes out each spring, but everyone still uses landlines.
People Haven’t Abandoned Writing For Text Generation
There’s this weird divide between certain developers building foundational models and, I guess you’d call them ‘normies,’ who don’t see a future where they log onto a device and merrily offload their labor, their skills, and even their thinking. The rationalist argument goes that if a machine can do it better, then what’s the point of a human’s effort? I think this is a badly misplaced idea, one that sweeps away humanity’s messiness in favor of an overly streamlined view of human nature.
Recently, Scott Aaronson, one of OpenAI’s safety researchers, penned a sprawling post about generative AI’s impact on society and included the following bit about the future of pedagogy:
But as I talked to my colleagues about watermarking, I was surprised that they often objected to it on a completely different ground, one that had nothing to do with how well it can work. They said: look, if we all know students are going to rely on AI in their jobs, why shouldn’t they be allowed to rely on it in their assignments? Should we still force students to learn to do things if AI can now do them just as well?
And there are many good pedagogical answers you can give: we still teach kids spelling and handwriting and arithmetic, right? Because, y’know, we haven’t yet figured out how to instill higher-level conceptual understanding without all that lower-level stuff as a scaffold for it.
But I already think about this in terms of my own kids. My 11-year-old daughter Lily enjoys writing fantasy stories. Now, GPT can also churn out short stories, maybe even technically “better” short stories, about such topics as tween girls who find themselves recruited by wizards to magical boarding schools that are not Hogwarts and totally have nothing to do with Hogwarts. But here’s a question: from this point on, will Lily’s stories ever surpass the best AI-written stories? When will the curves cross? Or will AI just continue to stay ahead?
Now, maybe Aaronson didn’t mean to abruptly end this section on that note, but the worldview it suggests is incredibly bleak to me. The point isn’t whether Lily’s writing will surpass that of a machine or whether the skills we teach students now will ever help them outthink a supercomputer; the point is that each human being should have the opportunity to think, to dream, to learn, to exist outside the dichotomy of man versus machine. Lily’s writing and her ideas have a dignity that no machine could ever possess, and it genuinely concerns me that certain people envision one day instilling such values in synthetic systems.
When I write the words on this Substack, I do so understanding they won’t be the finest words ever written and that there is someone smarter, more articulate, and a better writer than I am. I write because I want people to know what I think. I read others because I’m interested in hearing what they think. I argue, sometimes in ridiculous and illogical ways, with people because I’m human and not a robot. I’m not at all eager to give that up, and neither are my students.
Even OpenAI’s own Greg Brockman thinks writing, not text generation, has immense value. Writing isn’t going anywhere.
Writing is Labor and We Have a History of Offloading It
So why the concern? In higher education in the US and elsewhere, there are faculty members who struggle to balance teaching with research and service. Many on the tenure track face the ‘publish or perish’ pressure of producing articles and research to keep their jobs. Oftentimes, students get shortchanged in this arrangement. Faculty either elect to give their students exams that don’t involve much beyond rote memorization of material, or they offload the work of more authentic assessments, the kind involving writing and the critical application of what students have learned, to a graduate teaching assistant.
Now, generative AI has the potential to replace that TA’s labor in supporting a tenure-line professor’s teaching by swapping human feedback for its synthetic equivalent. The precedent of offloading labor is already established, and it can now be done effectively for a fraction of what it would cost to fund a graduate assistant’s stipend.
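To make the economics of that offloading concrete, here is a minimal sketch of what synthetic feedback could look like in practice. It assumes the OpenAI Python SDK and an API key; the model name, prompt wording, and rubric are illustrative placeholders, not anyone’s actual workflow or a recommendation.

```python
# Minimal sketch: generating rubric-based feedback on a student draft.
# Assumes the OpenAI Python SDK (>=1.0) is installed and OPENAI_API_KEY is set;
# the model name and rubric below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def synthetic_feedback(draft: str, rubric: str) -> str:
    """Return brief, rubric-aligned feedback on a student draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model could be swapped in
        messages=[
            {"role": "system",
             "content": "You are a writing TA. Give brief, rubric-aligned feedback on the draft."},
            {"role": "user",
             "content": f"Rubric:\n{rubric}\n\nDraft:\n{draft}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(synthetic_feedback(
        "The industrial revolution changed everything about how people worked...",
        "Thesis clarity; use of evidence; organization.",
    ))
```

The point of the sketch isn’t that this feedback would be any good; it’s that the marginal cost of producing it is pennies per essay rather than a graduate stipend, which is exactly why the established precedent of offloading matters.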
I’m sure many will greet this notion with disbelief and question whether it’s even a possibility in academia, or argue that it will be a mediocre replacement for human engagement. Those are all valid arguments; however, they miss the forest for the trees: many faculty in higher education already set the conditions for devaluing student learning by delegating the task of feedback to other students.
Houghton Mifflin Harcourt’s recent purchase of Writable for AI grading and feedback in K-12 settings should be all the evidence one needs to see how quickly a task can be automated.
Why Teaching Matters
I’ve always been ‘support faculty,’ a term that frames my labor to the university, and my non-tenure-track status, as teaching students skills, such as writing, so that they do not pose a burden to upper-division faculty. My service as a faculty member who teaches students is to support my tenured colleagues by ensuring their future pupils are prepared enough to write that the professor does not have to pause their choreographed lecture to reteach what they consider basic writing skills.
Certainly, this isn’t all professors. Many do take the time to meet with their students and respond to their work, but let’s not pretend the practice of offloading teaching isn’t pervasive in higher education. I don’t have the energy to talk about the often brutal labor conditions many non-tenure-track and contingent faculty experience. Suffice it to say, most of the faculty teaching students in higher education today have little prospect of attaining full-time status, let alone coveted tenure-track positions.
The Role of First-Year Writing Programs
First-year writing programs are an outlier in higher education in the US and, indeed, most of the rest of the world, but being an odd duck can actually help when nearly every other discipline is searching for a path to deal with generative AI. Indeed, FYW programs are the front lines of navigating the balance required to help students develop the traditional skills needed to write along with new competencies for our AI era.
The stigma associated with FYW is that it is a remedial course, and this remains one of the most ridiculous preconceptions about writing. Many teachers of writing are no strangers to teaching within disruption. Some of the more thoughtful writing courses emerge from non-tenure-track teachers of writing asking their students to engage with cultural and technological shifts in society as they happen, to critically examine the world around them. There is no set curriculum for this. When I teach an FYW course, the majority of the readings aren’t listed on the syllabus because many haven’t been written yet. I read and assign material as it arises in our world.
The skills I model for my students go beyond rudimentary writing and research, and many scoff at them because they aren’t strictly academic. I want my students to be curious about their world, so I often ask them to critically engage with arguments from people they cannot stand or to read closely about ideas alien to them. Perhaps more than writing, reading closely and critically is the skill I try to instill in my students. Not reading for pleasure, but reading with intention. The siloing effect of social media means many people don’t bother reading outside of their digitized zones.
Writing will remain. Students will learn how to use text generation alongside it, but hitting a generate button will never be a substitute for learning. We’re about to be on the receiving end of AI-powered edtech that will force us to examine questions about our labor and our teaching. It’s time we have a conversation about what it means if we start using these systems to offload our work, not simply about our students using text generation to offload learning.
Excellent post!
I think there is fascination with offloading the work of teaching (and writing) to technology, whether AI or edtech/Web 1.0 and 2.0 tools, because there is a fear that perhaps technology can give students more of what they need than teachers can. Writing is hard. Teaching is hard! As a teacher, how can I compete with machine learning: a gamified computer program helping students master a particular concept? If I have a classroom of 30 students, all with different learning gaps, needs, and strengths, there is an allure to the idea that I can plug them into the computer and after 15 minutes they will re-emerge with more/better understanding.
What people don’t realize is that while machine learning may be more adaptive, teaching and learning are a human experience requiring human-to-human interaction. I may be able to learn something from a computer, but I won’t remember it nearly as well as a conversation I had with another person. I won’t be able to make connections to other aspects of my life. Learning is an innately social act, and AI will always fail to replicate the social humaneness of learning.
As the abilities of genAI models have slightly plateaued in recent months, it may give us a chance to have more of these kinds of philosophical conversations about the overall role AI will play, not just with regard to writing, but also with respect to other important educational areas like scientific research (see the NY Times this morning - https://www.nytimes.com/2024/03/10/science/ai-learning-biology.html).

But to Marc's point, the observation by Scott Aaronson about his daughter's writing is just incredibly sad, and anyone who has spent significant time with AI text output knows that it is not even close to exceeding the kind of professional writing celebrated by authors, reviewers, and readers. GenAI writing at the moment, from a teaching standpoint, is useful for a variety of tasks, including idea generation, outlining, and demonstrating structure and clarity, but originality and style are not its strong suits. In some fields where the writing is often turgid, dense, and tedious (law, for example), genAI will likely have a significant impact, especially with its ability to scan and summarize large volumes of information, such as case law. But for more creative and open-ended writing projects, genAI is not good enough and may never actually be "good enough" to surpass the best human writing.

My focus as a teacher at the moment is trying to harness what genAI writing programs can do "right now" and not get caught up speculating about the future. That is more than enough of a challenge for most teachers in the current AI hype climate.