A few months ago, Michael Spencer asked me to write a piece about the new wave of multimodal AIs and how they might impact education. He published it on his great newsletter AI Supremacy this week, and it turned out to be pretty timely: OpenAI announced it was rolling out the advanced voice features of its GPT-4o model to paying subscribers, and an AI company revealed a wearable dubbed Friend. I’ll let you watch the trailer. Sadly, this isn’t a parody. Just how lonely have we become?
I’ve included the post Michael published below:
Synthetic Relationships Will Alter Real Ones
ChatGPT came out 20 months ago, and we’ve gone from Large Language Models (LLMs) that mimic human language to Large Multimodal Models (LMMs) that mimic human skills like speech, vision, and even complex interactions. The dominant view of “AI as tools” won’t last much longer in this new multimodal era, especially since OpenAI is pitching its GPT-4o demo like the voice chatbot from the film Her.
In education, the empathetic relationship between teacher and student will be increasingly challenged by new AI features that give users the illusion of an emotional response from a predictive algorithm. A few months ago I wrote about Hume.ai’s EVI and asked, Do We Need Emotionally Intelligent AI? While some students will undoubtedly learn from EVI and GPT-4o, we need to be extremely cautious about the impact synthetic relationships have on real human skills and mindful of how rapidly the uncritical adoption of this new technology can erode the relationships that are crucial for learning.
Left unchecked, there’s no telling how quickly or how deeply AI will destabilize education. After all, a degree is no different from currency: without the backing and trust of an institution, it is simply a piece of paper. If we accept synthetic relationships as stand-ins for real ones, how much of a leap will it be to see massive corporations take on the role of educational providers? People trust institutions as knowledge brokers, molding the next generation of citizens and vetting students on the merits of their skills and intelligence. An AI that talks to you is also an AI that surveils you, tests you, and can quite easily certify what you have and haven’t learned.
It increasingly looks like generative AI won’t become intelligent enough to achieve true AGI, but human beings will still put their trust in these black-box systems and may one day be willing to cede autonomy and critical decision-making to an algorithm. To those who scoff at this, and I imagine there are many, know that I was very much among your ranks. Then I started thinking about how much of my life is already mediated by algorithms and machine learning. How many of us are lost without GPS guiding us, mobile food orders, and all things digital commerce, or our wearable smart devices tracking our diets, our heart rates, and even our ovulation cycles?
We have collectively ceded so much of our autonomy to unseen forces that most of us only give an annoyed thumb scroll through the terms and conditions for every app that we download, never bothering to read just how much of our privacy we give up each time we use a new feature on a digital device.
Why Agentic AI Is Different
Education has gone through the MOOC craze and survived the influx of tech that promised students personalized learning, and many fail to see AI as any different: simply a passing fad. But a synthetic voice that talks with you, empathizes with you, and even creepily flirts with you is not something to ignore. We know now how much of an impact social media has had on the attention spans of maturing minds. Are we really going to wait for evidence that robotic relationships may harm just as much as they help?
If students divulge some of the most intimate details of their lives to these robot partners, mega-corporations will have access to a kind of personal dialogue that is highly regulated in professional human relationships involving therapists or counselors. Most people don’t think about that, and it worries me how easy it is to slip into personal territory when talking with a machine.
Teachers shape students’ lives because they lean into human relationships. What kind of future are we entering when a machine may begin to slowly creep into and occupy that space? Science fiction is about to become hard truth, and we simply haven’t had time to discuss this or the downstream consequences for society.
AI’s impact on human behavior is going to be hard to judge, and that alone should give us pause before integrating agentic systems with students. Peter Greene’s post AI Proves Adept At Bad Writing Assessment poses one of the more provocative questions about what happens to human skills when we know a computer, not a human being, is assessing our work:
There are bigger questions here, really big ones, like what happens to a student's writing process when they know that their "audience" is computer software? What does it mean when we undo the fundamental function of writing, which is to communicate our thoughts and feelings to other human beings? If your piece of writing is not going to have a human audience, what's the point? Practice? No, because if you practice stringing words together for a computer, you aren't practicing writing, you're practicing some other kind of performative nonsense.
When you take AI outside of the grading dynamic and move it into the alien space of teaching and tutoring, what happens to a student’s learning? We have no idea how our habits, our moods, or the very essence of communication will change once we stop talking to each other and start freely conversing with a machine.
How you interact with and view AI matters. Your philosophy toward the technology and the choices you make to use or not use it are among the most powerful ways you can exercise agency in this new automated era. My advice is to adopt the stance of a curious skeptic when it comes to AI.
What It Means to Be a Curious Skeptic
What matters most about being a curious skeptic is modeling that behavior for students. Generative AI is new to all of us, so taking the time to explore it can be a powerful learning experience. So, too, can involving students in that process. We may not get to decide whether these tools exist, but we most certainly can decide how we approach challenges in our world.
A curious skeptic isn’t siloed into a pro or anti faction when approaching a new technology like AI. They look at the technology critically, which means they are cautious and intentional in their interactions with it.
They view AI doomerism and boosterism as two sides of the same ill-fated marketing coin. Both visions try to sell users a version of the future in which human beings are obsolete. There’s plenty to push back on and be critical of in how the technology is marketed.
Being curious means exploring the limits of AI. Testing systems, trying out use cases, and thinking about ways the technology can be improved are all hallmarks of this trait. But being curious about the technology does not mean you’ve adopted it. We need nuance at all levels, and far too many people are playing armchair critic without testing generative systems to their full extent.
We Need Authentic Relationships
The age of Large Multimodal Models is upon us, and true agentic systems may not be far off. We cannot sleepwalk into this new era. The unchecked adoption of generative systems in education that create synthetic relationships mimicking human connection threatens the very foundation of what makes us human. Our ability to form authentic bonds, to communicate deeply, to be vulnerable: these are the cornerstones of growth. Ceding human experience to programmed algorithms will fundamentally alter the definition of our relationships.
The path we take now will define the world we live in. I hope to see more curious skepticism and fewer isolated viewpoints that only echo within our perspective silos. We need more people involved and talking about the most basic needs of education, and that isn’t teaching kids to pass a test. Real human connections matter even more in the wake of the pandemic than they did before. Lean into those relationships. Rediscover them. Otherwise, they might be automated away.
Some Recent Interviews and Podcasts
July was another busy month of interviews and podcasts, and I was really happy to sit down for quite a few of them to talk about the need for transparency and openness when educators use AI in instructional design.
Jeff Young from EdSurge: Should Educators Put Disclosures on Teaching Materials When They Use AI?
Stephanie Verkoeyen from AI Dialogues: The need for transparency and ongoing dialogue
Marcus Luther from The Broken Copier: How Educators Should Be Thinking About AI
The human approach to reality is not through AI. In fact, AI will take one’s knowledge further from it and is also likely to make it harder to get there. A fully human approach to reality is impossible to achieve, but our science is the best way we can approach the matter. When it comes to less scientific things, such as our own relationships, the degree of reality is likely to be closer, because we actually take part in those relationships. If and when AI finally manages to become a deductive method for investigating new topics and problems, we might stand a chance of really getting something for the amount of hope and investment put into this new computerized sport. But as an engineer with an inquisitive mind and a self-nurtured logical way of thinking, I can still manage to think about what things really consist of and how they actually work (particularly in my pet subject of macroeconomics), and that is still a better way than asking a machine to do it for me!
Thank you, Marc, for sharing the very human feelings one might experience under growing pressure to ‘normalise’ new patterns of AI intrusion into the intimate web of human relationships. The Covid pandemic itself left visible scars on the younger generation’s socializing, collaborating, and human-to-human communication skills. I concur with you about the need to adopt a critical, that is, skeptical, approach to introducing LLMs into education: educating students to be and stay human no matter what promises tech corporations give them.