George Saunders is one of my favorite authors. I read his story collection CivilWarLand in Bad Decline when I was in college and it made me want to write short stories. It was funny, weird, and like nothing I’d read before. When I applied to MFA programs to study fiction writing, it would have been a dream come true to work with the likes of George Saunders, to have him read my work, tear it apart, add suggestions, and just be within his orbit. I never got that opportunity. But in our newly minted AI era, what if instead of the flesh-and-blood Saunders, I had access to his digital replica? A SaundersGPT, if you will, powered by machine learning and trained on the author’s oeuvre. Always available, endlessly patient, able to dispense writing advice and critique 24/7 to countless students across the globe. Some in the tech world, like OpenAI co-founder Andrej Karpathy, believe this "Teacher + AI symbiosis" is the future of education, an "ideal experience for learning something new."
AI-Powered Teaching Assistants
This fall, we’re going to see some version of this vision play out at multiple universities across the US. Some will deploy customized chatbots built on OpenAI’s Custom GPTs, while others, like Morehouse College, plan to integrate AI-powered avatars as embedded teaching assistants. Morehouse’s own media announcement frames AI teaching assistants as a necessity:
The Morehouse professor Dr. Muhsinah Morris says every professor will have an AI assistant in three to five years. Morris says the technology has taken off in the last 24 months faster than it has in the last 24 years. Meanwhile, baby boomers are leaving the workforce amid national teacher shortages and burnout.
While I'm as eager as anyone to democratize access to great teaching and mentorship, I don’t want to conflate the illusion of intimacy with the real deal. AI tutors may be able to mimic certain attributes of human beings—their knowledge, verbal tics, and even their most quotable lines. But can generative AI really forge the kind of authentic relationships that make mentorship so transformative in education? Can it inspire us, challenge us, see us in all our messy humanity, and reflect that back to us as we grow? Or is it simply novelty, entertainment masquerading as profound innovation?
A World Where Everyone Can Talk To An Expert
Andrej Karpathy announced that he was pivoting into education with a new company called Eureka Labs. In his announcement on X, Karpathy envisions a world where people learn from AI-powered replicas:
We are Eureka Labs and we are building a new kind of school that is AI native. How can we approach an ideal experience for learning something new? For example, in the case of physics one could imagine working through very high quality course materials together with Feynman, who is there to guide you every step of the way.
Unfortunately, subject matter experts who are deeply passionate, great at teaching, infinitely patient and fluent in all of the world's languages are also very scarce and cannot personally tutor all 8 billion of us on demand. However, with recent progress in generative AI, this learning experience feels tractable. The teacher still designs the course materials, but they are supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students through them. This Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. If we are successful, it will be easy for anyone to learn anything, expanding education in both reach (a large number of people learning something) and extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).
I may be an outlier here, but I wouldn’t describe my “ideal experience for learning something new” as engaging with the AI-powered shadow of human genius. My apologies to those who want to hear about how I’d love to work with SaundersGPT. Like most of us, I think I’d prefer the real thing.
When We Mistake Performance For Authenticity
What’s concerning about Karpathy’s announcement and Morehouse’s rationale for employing AI teaching assistants is that both envision a world where 8 billion people yearn for a compute-powered illusion of working with the best teachers humanity can produce. When someone makes a bot that mimics a human being, they are creating a performance in lieu of reality. AI tutors that mimic real-life humans are not educational in and of themselves, but a brand of novelty akin to Elvis impersonators and shopping mall Santas. People embrace these experiences because of the spectacle. The value lies in suspending our disbelief for a few moments. If we learn anything from the process, it is likely the reminder that the performance is just a shadow of the real thing.
Some believe that using AI in this manner is an equity engine for those without the same access to resources. Wouldn’t humanity be better served if everyone had access to the smartest, most capable, most compassionate educators? They may have a point, but their focus is on the performance, not on flesh-and-blood authenticity. Should we really be investing billions of dollars into systems that promise digitized compassion, when we could instead invest that money in the badly needed human capital to build those relationships in real life?
As much as educators may scoff at this notion, they do so at a critical moment in the deployment of generative AI in education. Men like Karpathy are serious people with serious resources who can and will make an impact on how generative systems are adopted into education. That’s all the more reason for us to question the underlying assumptions and ask whether this “Teacher + AI symbiosis” might cause more problems than it proposes to solve.
I wouldn’t want a George Saunders bot to tutor me, and I very much doubt anyone else truly would, because I wouldn’t learn anything new about myself or about Saunders. Outside of the entertainment value, the promise of having AI serve as an always-patient, ever-attentive mentor, guide, and tutor is just plain alien. I want someone who gets cranky, goes on tangents, loses focus, and has to be brought back around. I want the messiness of human relationships that causes us to work harder and longer and push ourselves to limits we didn’t think were possible. I just don’t see that happening with a machine. If I loaded an early work of fiction into SaundersGPT, I’d get a pastiche response from a digitized persona. It would praise my work because someone prompted it to, criticize it because a developer fine-tuned the system to focus on the most generic feedback, and ultimately reveal little about my work.
Relationships mold us, drive us, and ultimately shape who we are. Novelty may have a purpose in education and I can certainly see how some could use AI this way for inspiration, but we shouldn’t let the promise of scaling predictive computing erode those human connections between real teachers and real students.
Karpathy’s rhetoric reveals a worldview where scale triumphs over the quiet human moments of personalized discovery between human beings. The tech may not be ready to deploy this vision throughout the world, but the rhetoric is already shaping the landscape for its arrival. We, too, must be prepared to take an active role in shaping the discourse. Part of that means pointing out absurd and ridiculous claims while not losing sight of what matters most in our classrooms.
I’ll leave you with some classic Saunders. In January, he was asked in an interview about AI mimicking people, and he said the following:
it can serve up a good imitation of other people who have done it, but until AI can walk down a dusty street, feeling a certain way because of the overhanging trees, and then realise it’s late for a job interview and show up with dust on its pants, and slightly out-of-breath – I’m not sure what it can teach me.
The issue, though, is this: can we continue to be good enough readers to recognise the difference? To feel that human spark and, also, to feel its absence, in a piece of writing. My fear is that reading too much AI-generated work will dull our sensors.
Yes, all about frame of mind. I would hate to live in the world where this wasn’t a possibility.