First, Thank You!
As the year comes to a close, I want to pause before my final blog post of the year to give thanks. This blog has helped me wrestle with some often challenging issues about generative AI in education and much broader trends in society. During the past year, I’ve been able to write nearly 40 blog posts, and it’s been incredibly gratifying to interact with so many people throughout education.
How bizarre to think that the thing that brought me back to writing was text generation. Thirteen years ago, I graduated with an MFA in creative writing with an emphasis in fiction. I wrote daily for years and published over a dozen stories, even earning recognition when one of those stories was reprinted in the Pushcart Prize anthology. But like so many other things that fall by the wayside as we age, I stopped writing. Work, two wonderful children, and a pandemic all meant that time slipped past.
I doubt I would have returned to writing if I hadn’t learned generative AI existed, if I hadn’t sensed how threatening and fascinating it was. In some ways, it was like waking up to find an alien and trying to learn what it is and how its creation impacts our world. That mix of fascination and sheer terror energized me and helped me explore the implications.
I’m committed to keeping this blog free, and I thank those of you who have paid to subscribe. Your paid subscriptions have helped me pursue this project and mean so much to me! I hope to continue this pace into the coming year.
Tools Vs. Agents
This will be the final post from me for the year before I go on a much-needed break. Before I go, though, I want to lay out the divide I see emerging in academia and in broader society over generative AI. Two distinct camps are forming: those who view generative AI as tools, like ChatGPT, Bard, and Bing’s chatbot, and those who see generative AI as a step toward intelligent agents akin to second brains or workforce replacements. Education is still grappling with introducing generative AI tools into teaching and learning; we’re not prepared or equipped to approach the notional possibility of semi-autonomous agents being placed in teaching roles. But we’re going to have to start considering the implications of autonomous agents and what they mean for our labor.
Tools We Can Handle, Agents Are A Different Story
Predicting where industry plans to push AI next is like trying to forecast tomorrow’s weather when the only tool you have is a rusted weathervane. So take anything I’m about to lay out with a healthy dose of skepticism. We need more skeptics; they make the world worth living in!
All the pieces are falling into place for semi-autonomous agents to be introduced into K-12 and higher education over the next few years. Here are the breadcrumbs: voice, vision, image, video, and interface design are all emergent fields moving well beyond text generation. Combine them, and you get many of the abilities we’d expect to see in a semi-autonomous agent. Here’s what it looks like now in the wild:
Generative Avatars
Channel1.AI is a forthcoming news service that is completely generated. Each of the anchors and reporters is a synthetic avatar. Users can program the avatars to deliver the news the way they want it, including asking for political slants on information. If this were implemented in education, this kind of choose-your-own-educator model would have some very dark implications for displacing faculty labor with generative AI.
Interfaces are Evolving
Most people associate chatbot interfaces with generative AI, but as I’ve written previously, chatbot interfaces stink, and they aren’t the only way for users to interact with generative AI. Buried in Google’s Gemini announcement blitz was a video showing just how far interface design is shifting. A user asks the LLM a question, and the model goes through a multi-step process of designing a bespoke interface to shape its output! We’ve gone from predictive text to predictive design, all based on the user experience. The implications for learning are hard to articulate. We’re exiting familiar terrain into something far more personalized and nuanced than a simple chatbot interface. I don’t even know how a teacher would ask a student to cite their AI usage when the tool creates the interface for them before offering the answer.
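To make the pattern concrete, here is a toy sketch of what such a "generative interface" loop might look like. This is emphatically not Google’s pipeline: the call_llm helper is a hypothetical stand-in for whatever chat-completion API you use, and the widget schema is invented purely for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.

    Wire this to whatever model provider you use; it just needs
    to return the model's raw text response.
    """
    raise NotImplementedError

# The model is asked for an answer AND a layout spec in one pass.
UI_PROMPT = (
    "Answer the user's question, then design a small interface for the "
    "answer. Respond with JSON only, shaped like: "
    '{"widgets": [{"type": "heading|paragraph|slider|chart", "content": "..."}]}\n'
    "Question: "
)

def generate_interface(question: str) -> list[dict]:
    """Ask the model for an answer plus a bespoke layout spec."""
    raw = call_llm(UI_PROMPT + question)
    return json.loads(raw)["widgets"]

def render(widgets: list[dict]) -> None:
    # A real client would map widget types onto actual UI components;
    # printing stands in for rendering here.
    for widget in widgets:
        print(f"[{widget['type'].upper()}] {widget.get('content', '')}")
```

Even in this toy version, the answer a student sees is inseparable from a layout the model invented on the fly, which is what makes citing AI usage so slippery.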
Open Models are Improving
You can now download open LLMs that complete tasks on par with GPT-3.5 and run them locally on your own device, for free. Mistral’s models and Meta’s Llama series of open language models are sending shockwaves through the developer world. Not only are the models vast improvements over previous open-source versions, but they are small enough, cheap enough, and malleable enough for novices to use with their own data. The implication for education is that every teacher, learner, and district will be able to fine-tune a model on their own data and tailor it to their own context. That sounds great, but the duality of open-sourced models means fewer safety features and fewer guardrails.
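For the curious, here is a minimal sketch of what "running an open model locally" looks like in practice. It assumes a Python environment with the transformers, torch, and accelerate packages installed, and uses the openly licensed Mistral-7B-Instruct checkpoint; quantized runtimes like llama.cpp or Ollama are popular alternatives for less powerful hardware.

```python
# Assumes: pip install transformers torch accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.1"  # openly licensed weights

# The first run downloads the weights; after that, everything is local.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Mistral's instruct models expect [INST] ... [/INST] formatting.
prompt = "[INST] Explain photosynthesis to a ninth grader. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are downloaded, nothing leaves the device, which is exactly why district-level customization becomes plausible, and exactly why the guardrails question matters.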
Where This Is Leading Us
If you’ve stayed with me this far, I think you can start to imagine what programmable generative avatars, predictive interfaces, and cheap open models mean. Education is going to be presented with some combination of the above, first to augment, then possibly to replace, some portion of an educator’s labor. What was unthinkable to many, including myself, nearly a year ago is now very much a possibility we will need to contend with.
A few weeks ago I wrote about how the Alpha school in Austin moved to replace its teachers with generative AI, relegating educators to coaches who guide learners through their lessons. What I said then and echo now is this: teaching is not a problem for AI to solve. And so we arrive at yet another inflection point. Generative AI stands poised to either uplift or undermine education as we know it. Will industry harness these tools to augment teachers and enrich learning? Or will it allow impersonal agents to displace human mentors in the belief that humanity can be digitally replicated?
I have little doubt corporate interests will push the latter route. Their sights are fixed on efficiency and cost savings, not nurturing young hearts and minds. And yes, such semi-autonomous agents grow impressively more capable by the day. This is why we must focus on developing the AI literacy needed to advocate for standards, for teaching is an intrinsically human act, one of passion and creativity, empathy and wisdom, born of personal experience. We cannot outsource the vital relationships at the core of transformative education. And so AI literacy, understanding exactly what this technology can and cannot do, will become a perennial challenge.
I leave you with a choice and a plea. Administrators, developers, thought leaders, and unions must all carefully evaluate AI integration, elevating student well-being over convenience and resisting any move to replace educators’ labor. Teachers should see chatbots as assistants, not replacements. And all of us must recall that education, at its best, teaches what it means to be human. For once we dehumanize the learning process, we have lost far more than jobs.
I agree with all of this. There are efforts to promote "AI literacy," but most (all?) of them frame it as "how to get the most out of this cool new thing!" rather than "let's think critically about whether and how to use this tool at all." I'm working on starting an organization devoted to the latter; let me know if you'd be interested in chatting more about this.
The problem, as I see it from the vantage point of a classroom teacher, is that administrators are so overwhelmed with post-pandemic issues that AI literacy is nowhere near the top of their priorities. The majority of decision makers who oversee curriculum and distribute funds are themselves already far behind the curve. Consequently, schools will be at the mercy of edtech companies as the driving force pitching AI "solutions" to problems they may not even know they have. On the other hand, schools are so notoriously poor at using technology to innovate their teaching practices (as opposed to simply replicating traditional teaching methods with technology, a big difference) that hopefully most will think twice before allocating scarce resources to unproven AI products. But I do think that as AI continues to improve and it becomes more difficult for schools to insulate themselves from its implications, educators will need to emphatically and consistently remind technocrats to evaluate what is gained and lost by outsourcing teaching and learning to AI. Like workers in every other field upended by disruptive technology, teachers must persuasively make the case, to parents, politicians, policy makers, and yes, students, of their value. I doubt teachers will be the exception. The temptation to do more with less and cut costs has almost always been the ultimate arbiter of decision-making.