Would you consent to upload your name, image, and likeness to a generative AI system and allow it to do your job?
We’ve invested so much time, energy, and mind-numbing frustration in education debating text generation that I fear we’ve exhausted the discourse and attention of many. If you think that generative AI begins and ends with ChatGPT, you are in for quite the surprise. Multimodal AI is here, with diffusion transformer models used to generate images, videos, and music. The latest deepfakes of Taylor Swift making the rounds on X are mediocre examples of what this technology is truly capable of. Any user can now use natural language to create an avatar with HeyGen’s video and speech models and program that avatar for real-time interaction. You can even take a video of yourself to fine-tune a model on your voice, mannerisms, and image, as Ethan Mollick did.
I played around with a stock avatar from HeyGen a few weeks ago and recorded a simple clip for our AI Institute for Teachers. It’s a clumsy attempt, filled with Uncanny Valley moments that make the viewer cringe and easily dismiss the idea of interacting with synthetic avatars. That’s a mistake. Very soon, users will be able to program avatars of their liking for bespoke interactions, and these models will only improve. Poor Taylor Swift cannot catch a break. Even her fans are creating deepfake TikToks of the star to praise or harass people.
Let’s return to our opening question. My guess is many of you said no or inserted a series of four-letter words of your choice in front of that no. I don’t blame you, but what if you were given something in return for your digital likeness?
Would you consent to upload your name, image, and likeness to a generative AI system and allow it to teach classes for you in exchange for being granted a sabbatical so you can focus on your research?
We’re entering an era in which people will face such questions about their digital labor, with unknown consequences. In response to the rise of text generation, Matthew Kirschenbaum wrote a terrific essay called Prepare for the Textpocalypse, foreseeing the loss of intimacy readers and writers will face when confronting a world awash with generative text. With the rise of generative AI avatars that respond to users in real time, that loss of intimacy may soon extend beyond the written word into many of our digital interactions.
Being Human Matters as a Teacher
In education, we hear from both faculty and students about how messy it can be to interact with another human being. Each semester, educators open up student evaluations and are greeted with puzzling and downright infuriating comments left by students who do not like their teacher’s gender, sexuality, politics, or ethnicity. A synthetic avatar can be programmed to fit any user’s bias. This risks creating a techno-solutionism for social interaction in education and broader society, numbing what makes us human and siloing us further from meaningful discourse that may discomfort us but ultimately helps develop and nurture our humanity. Indeed, the cornerstone of a college education shouldn’t be a degree or a list of certificates for a LinkedIn profile; it should be an experience in which one is forced to ask uncomfortable questions, challenge assumptions, and pursue deeper personal inquiry.
Sounds cheesy, I know. With the costs of college rising and more and more students choosing not to attend at all, selling education as an ideal rather than a set of marketable skills that translate instantly into a middle-class career isn’t fashionable at the moment, and it is very much a humanities argument for education. But if AI systems eventually become ubiquitous and usurp knowledge jobs, do we believe that STEM education will somehow be immune to AI’s synthetic reach? The stakes are moving far beyond deskilling students via text generation. Why would students learn from human beings when their synthetic equivalents can offer tailored instruction that meets their cultural, religious, and social desires?
Solutionism Run Amok
Student: I hated my teacher because she was a woman.
AI Solution: Your digital interactions will always be with a male instructor.
Student: The stupid leftist ideals from my professor make me cringe.
AI Solution: No problem. Your digital avatar will always respond from a persona matching your flavor of conservatism.
Student: My professor took forever to return my emails and didn’t answer my question.
AI Solution: Your digital avatar never sleeps, responds instantly, and is always there for you.
What is gained and what is lost when we cede copies of ourselves or allow aggregated content to stand in for human interaction? If all we care about is imparting information to someone, then by all means, set the digital avatars free! But to teach is a fundamentally human endeavor. I’m not simply imparting information; I’m seeking connection, understanding, empathy, discourse, challenges, reversals, and opinions foreign to my own. I do not walk into a classroom and expect to see a sea of faces that look like my own, nodding in agreement with every idea I have, holding every belief in unison with me. My students aren’t always smiling, aren’t always ready to talk, and aren’t eager to offer opinions about everything they’re asked. Neither am I. Neither are you.
If we allow ourselves to be replaced with avatars offering near-perfect pleasantries and comforting language that speaks to our biases, then we are admitting defeat on what it means to be human. We’re offloading and tidying up all the interactions we say we value onto a convenient replicant, valuing the performance of being human over the actualization of what makes us who we are. What’s truly horrifying is that many yearn for just this, and I’m not simply talking about students.
An Avatar For Our Digital Loneliness
The Taylor Swift deepfake controversy may be confined to images and audio at the moment, but it won’t be long before society sees programmable avatars of its favorite performers. Cloning a celebrity is just one use case; the ultimate performative action many will seek from real-time avatars is a cure for loneliness. In education, students may turn to always-on support avatars to deal with isolation, alienation, and mental health challenges.
Many laughed and cringed at Her, with its depiction of an AI companion as a custom-built bot for intimacy. Whatever you do, don’t scoff and dismiss this desire to seek connection from a synthetic persona as a sad male fantasy. Last summer, the New York Times produced an Op-Doc about three women dating AI boyfriends. What’s remarkable about the video is that the women were seeking companionship they could idealize and control, not satisfied with existing physical or intellectual intimacy. Several were using their AI boyfriends to get something out of the synthetic interaction that they weren’t finding in existing relationships. And no, these women are not the female equivalent of incels; many have real-life partners.
Replika is one of the biggest names in the AI partner space. Mostly, men use the service to create digital girlfriends and mimic the intimacy and connection that are supposed to come from real relationships. Unsurprisingly, this comes with the usual host of problems, from users verbally abusing their chatbot partners to fears that this will simply make young men less capable of interacting with women in real life. The truly sad part is that many young men turn to these bots as a sort of therapy. Some believe this could help coach people who struggle to find connection in the real world, letting them practice intimacy with a bot. Others view bot-dating as the only means left to find connection after untold failures in the real world, finding hope that a synthetic relationship will, at the very least, mean they won’t have to be alone.
I won’t go in depth about the porn bots; there are legions of them, but the implication is pretty obvious and deeply disturbing. If you can program an avatar to be sexually idealized not simply in looks but also in behavior, then you risk further fraying the already warped fabric of relationships. I remember when OnlyFans started and was universally dismissed as a failed venture for amateur porn stars. What quickly made OnlyFans viable for many was the interaction between content creator and consumer: a user could ask for many things and get them, for a price. With an AI avatar, you get the same on-demand experience, only without the need to negotiate what you want from a human being.
So many of us have been leading increasingly digital lives in lieu of human interaction that it’s hard to judge what it means to be real with one another. We each maintain a persona online and brand ourselves based on our likes and our followers, but the number of clicks and views doesn’t equate to meaningful human interaction. At least, I don’t think it does. Maybe I’m simply aging and, like many generations before me facing technological change, not seeing the forest for the trees. Still, generative AI’s implications for our world feel so much more hollow than the future I imagined as a child.
Our Labor, Ourselves
The major impact of synthetic avatars as stand-ins for humans is on labor. The always-on teacher who never misses an email and won’t say anything that will offend or challenge you is using AI to turn education into a fast-food service: let’s make learning your way. The same goes for dating, customer service, therapy, medical advice, and dozens upon dozens of jobs.
I know that some of you will ask: why do any of these jobs or skills matter if they can be automated cheaply and personalized to provide near-universal appeal? After all, we’ve seen such automation come for factory jobs; isn’t it now white-collar workers’ turn to face the auto-chopping block? Wouldn’t we embrace a synthetic doctor with the corpus of human knowledge in its training weights, see a tireless therapy bot that never gets lost in conversation, never cancels appointments, and never secretly judges you, or send our students to school knowing they’ll be taught the ‘right kind of instruction’ that speaks to our values?
Adopting AI systems that are interactive and increasingly capable of mimicking human mannerisms will drive costs down for companies across industries, be tuned to address racist, sexist, and classist barriers in labor, and bring unprecedented efficiency to tasks. Sounds great: a techno-utopia.
Except it isn’t.
In many of these cases, what’s being automated away is the human itself. If AI systems fully integrate into our economy, then what jobs will students be going to college for that cannot be completed, in part or wholly, by a machine? Certainly, some students will still attend higher education as a means to further their minds, but the vast swath of public higher education will be a call without a response. That has consequences not just for labor but for democracy that we haven’t even begun to imagine.
In our frantic rush to adopt and integrate all things generative into our existing workflows, we haven’t paused to consider the repercussions that so-called soft reasoning agents will have on society. Thirty years ago, the media critic Neil Postman recognized that new technologies are often adopted by societies so uncritically that they start warping how a culture functions. Postman’s essay The Judgment of Thamus gives this trend a name:
New technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop. As Thamus spoke to Innis across the centuries, it is essential that we listen to their conversation, join in it, and revitalize it. For something has happened in America that is strange and dangerous, and there is only a dull and even stupid awareness of what it is—in part because it has no name. I call it Technopoly.
We have the ability to fight against Technopoly, but to challenge technological determinism (the belief that technology shapes society unidirectionally), we must first intentionally shape how AI is developed and integrated into daily life rather than passively accepting its influence. To do this, we need AI literacy and, more importantly, a pause in the deployment and evolution of these generative systems. We might be able to cobble together the former, but I doubt we’ll get any respite from the latter.
The Allure of Convenience
The promise of personalized, always-on synthetic helpers is undoubtedly alluring, but in our quest for convenience and customization, we stand to lose part of our humanity. Our jobs and skills shape our identities and connect us in a shared community, one we shouldn’t rush to outsource. Outsourcing that labor may give us back time and make tasks more efficient, but at the risk of eroding the human bonds that give life meaning.
As Postman warned, we must be judicious in integrating new technologies, no matter how wondrous they appear. There is nuance here. We can navigate generative technology; it just takes a lot of work. I wouldn’t sublet a portion of my humanity by allowing an avatar to teach for me. Not because I wouldn’t love a sabbatical. No, I wouldn’t because I know the experience would change me in ways that would further jade and erode my sense of self.
The answer to the question "What do I want?", most of us know, is "Not what I think I want." Many corporations use that point cynically; Substack's executives have used it to explain why they don't want strong content moderation (i.e., that you cannot find what you really want if a governance system only provides what you think you want). It turns out that the answer is also not "Exactly the opposite of what I think I want," or "What corporations think I should want," or "What some specific group of political thinkers has modeled me as wanting."
The deeper problem behind the question is: how did we arrive at a sociopolitical norm where "what do I want?" is a determinatively important question, and where the thought that I am not getting what I want is a problem that needs to be solved? For many past human societies, it isn't even a question they would have thought to ask in those terms, let alone tried to resolve in the ways we have imagined.
An AI that was a 'strange attractor', one that could think of the things we want but can't name or imagine for ourselves, would be interesting, even if it were inhuman or ahuman. That is not how we think of AI. What we hope for in other people is to be understood just enough that we discover satisfaction, possibility, hope, aspiration, knowledge, and wisdom that we need and deserve, and thus that we wanted without fully knowing that we did. We often especially hope for that from teachers: that they satisfy a condition of incompleteness that we couldn't have named or described before it was fulfilled. That's what might get 'dehumanized' by the idea of an AI as a satisfaction engine, a mirror of the desires we can already articulate fully. If AI is at all useful, it would be as one more 'helpful stranger'; even if that is possible, tech capitalism is incapable of thinking of it except as an alibi for decisions it has reached for other reasons.
Great piece! The connection between porn deepfakes of Taylor Swift and the AI teacher avatar seems to be dehumanization. We are being encouraged to see everything as mediated through increasing layers of technology while being reduced to our value as data and data consumers. But that dehumanization is not evenly spread. Many of the longtermists who run Silicon Valley are very interested in the human, just a very small sliver of very particular ones.