Agency and AI have become a focal point in education around the current batch of generative tools, but the focus on personal use and autonomy (e.g., to generate or not to generate) profoundly misunderstands the shocking impact autonomous agents could have on learning. Calling something AI slaps a label on a slew of machine learning technologies that most of us barely understand or frankly have little interest in exploring. That needs to change, because education is set to be one of the biggest consumers of this vague new technology, and I'm not talking about generative AI like ChatGPT.
Let's introduce some terms:
Generative AI
You know it through transformer-based tools like ChatGPT. It generates text, images, video, sound, etc. But it isn't the only 'AI' we're going to focus on.
Predictive AI
It uses machine learning to predict behaviors and even events. In education, this can mean predicting how well a student will perform on an assessment or whether they are at risk of failing a course.
Strategic Reasoning AI
Think about the most advanced video games you play that seem to adapt to your moves and respond in kind. Strategic reasoning is an area of AI that allows autonomous agents to plan, communicate with one another, and analyze decisions.
While each of these types of machine learning exists on its own, they can also be lashed together to create some pretty dynamic systems that can influence human behavior in ways we haven't imagined. One example is Meta's Cicero, whose developers trained several types of AI systems to beat human players at the game of Diplomacy. Cicero used strategic reasoning to plan its moves and a language model to persuade, empathize with, manipulate, and ultimately betray the human players it had aligned with. Yes, AI can do that.
The Predictive AI We Have
If you are in higher education, you may have a student success unit at your institution that tracks how well your students are predicted to perform in a university setting. Many of these units use predictive AI or other machine learning to create a profile of a student and weigh a number of factors to determine if they will succeed or struggle. Some of those factors include first-generation status, socio-economic background, high school performance, race, gender, and disabilities. Much of this is calculated upon entry. New factors come into play once the student actually begins taking classes.
My university began using a system to track first-year students, and it is shockingly accurate in predicting whether a student will struggle. Before I entered grades in the LMS, I received a report asking me to check in on a number of seemingly random students. Sure enough, the factors it uses identify nearly 90% of the students who end up struggling. I'm left to fill in the details, such as who is missing class, not submitting homework, etc.
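To make the mechanics less abstract, here is a minimal sketch in Python of how such an entry-profile risk model might work. The feature names, training data, and cutoff are all hypothetical, not the workings of any real student success platform; scikit-learn's logistic regression simply stands in for whatever model these vendors actually use.

```python
# Hypothetical sketch of an entry-profile risk model like the ones student
# success units use. Features, data, and threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [first_gen, low_income, hs_gpa, has_disability]
# (race and gender often appear in these profiles too, which is part of the bias concern)
X_train = np.array([
    [1, 1, 2.4, 0],
    [0, 0, 3.8, 0],
    [1, 0, 3.1, 1],
    [0, 1, 2.9, 0],
    [0, 0, 3.5, 0],
    [1, 1, 2.1, 1],
])
y_train = np.array([1, 0, 0, 1, 0, 1])  # 1 = struggled in the first year

model = LogisticRegression().fit(X_train, y_train)

# A new first-year student's entry profile, scored before they attend a single class
new_student = np.array([[1, 1, 2.6, 0]])
risk = model.predict_proba(new_student)[0, 1]

if risk > 0.6:  # arbitrary cutoff for the "please check in on this student" report
    print(f"Flag for instructor check-in (predicted risk: {risk:.0%})")
```

The point is that the profile is built from demographic and prior-performance data, then the instructor is asked to confirm what the model suspects.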
Predictive AI is one of the more promising use cases in education, with one substantial caveat. The human-in-the-loop method of confirmation isn't always timely for intervention. Oftentimes, it is the third or fourth week in the semester before the check-in arrives, then it takes me a few days or up to a week to respond, and then a few more weeks for the unit to set up a meeting with the student and get them on track. This, of course, is independent of me actively trying to set up a meeting with the students, but if they aren't coming to class or answering emails, there's little I can do.
Education with Agentic AI
Now, imagine there isn't a time lag of more than a few hours between a student missing a class and an intervention taking place. Remember that strategic reasoning AI? Well, in this imagined scenario, a programmer has gamified education, setting the condition for 'winning' as a student graduating within four years with a 3.5 GPA and giving the agent every tool available to make that happen. The AI agent can use predictive AI to track student progress and generative AI to produce text and even a voice to empathize, encourage, persuade, and even manipulate students into reaching those goals.
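Here is a crude sketch, again in Python, of what that kind of agent loop might look like. Every function, name, and threshold in it is a stand-in invented for illustration, not a real product or API; the point is only to show how a predictive signal, a strategic 'persona' choice, and a generated message could be wired together toward a win condition.

```python
# Hypothetical agent loop: a predictive model flags a student, strategic logic
# picks a persona and escalation level, and a generative model drafts the outreach.
from dataclasses import dataclass

@dataclass
class StudentSignal:
    missed_classes: int
    logged_into_lms_today: bool
    predicted_risk: float  # from the predictive model, 0-1

WIN_CONDITION = {"graduate_years": 4, "target_gpa": 3.5}

def choose_persona(signal: StudentSignal, prior_contacts: int) -> str:
    # "Strategic reasoning": escalate the tone if earlier outreach was ignored.
    if prior_contacts == 0:
        return "warm, concerned advisor"
    if not signal.logged_into_lms_today and signal.predicted_risk > 0.7:
        return "stern, matter-of-fact administrator"
    return "neutral reminder"

def draft_message(persona: str, signal: StudentSignal) -> str:
    # Stand-in for a generative model call (e.g., an LLM prompted with the persona).
    return (f"[{persona}] You missed {signal.missed_classes} class(es) this week. "
            f"Let's talk before this affects your {WIN_CONDITION['target_gpa']} GPA goal.")

signal = StudentSignal(missed_classes=2, logged_into_lms_today=False, predicted_risk=0.82)
for attempt in range(2):
    persona = choose_persona(signal, prior_contacts=attempt)
    print(draft_message(persona, signal))
```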
Take this scenario. A student misses a class and, within twenty minutes, receives a series of texts and even a voicemail from a very concerned and empathetic-sounding voice wanting to know what's going on. Of course, the texts are entirely generated and the voice is synthetic, but the student likely doesn't know this. To them, this isn't something as easy to miss or brush off as an email. It sounds like someone who cares is talking to them.
But let's say that isn't enough. By that evening, the student still hasn't logged into their email or checked the LMS. The AI's strategic reasoning layer communicates with the predictive AI, comparing the student's pattern of behavior against that of students who succeed or fail versus students who are ill. The AI tracks the student's movements on campus, monitors their social media usage, and deduces the student isn't ill and is blowing off class.
The AI agent resumes communication with the student. But this time, the strategic AI adopts a different persona: not the kind and empathetic one used for the initial contact, but a stern, matter-of-fact one. The student's phone buzzes with alerts about scholarships being lost, teachers being notified, and so on. The AI anticipates the excuses the student will make and presents evidence from its tracking of the student's behavior to show they are not sick.
Bewildered, the student responds to the messages and is directed to a real person's office for a meeting the next morning. The intervention that would have taken weeks now happens in hours. The labor and time cost of having a professor and a staff member coordinate meetings with what may be dozens of students is largely eliminated. Sounds great, right? But how did the AI know a student missed that class or assignment, or was falling behind, in the first place?
We've Normalized Tracking and Surveillance
The above scenario is only possible if we agree to give up personal privacy and autonomy to automated systems under the vague promise that they are here to help us. You probably cringed, as I did, thinking about it, wondering why anyone would ever agree to this. But the truth is pretty simple: people want a guarantee that an expensive college education will lead to a middle-class existence. Students want an undergraduate degree in four years, not six. They want assurances of good grades, too.
We already give up our autonomy and place our trust in automated systems every day: the GPS that guides us along the highway, the image-recognition software that reads our signatures when we deposit a check through mobile banking, the smart devices that track our steps, weight, heartbeat, even a woman's menstrual cycle. The trust we place in such systems runs through so many of our daily interactions that most people are oblivious to it.
The surveillance and tracking are already so invasive that GM and LexisNexis are being sued because vehicle data showing drivers' habits behind the wheel was sold to insurance companies without the drivers' consent and used to automatically raise premiums or deny coverage based on how often a person braked too hard, sped, or drove erratically. All of it was monitored by the vehicles' onboard sensors.
The argument is that using predictive software will create safer drivers by forcing them to change their habits to avoid paying higher insurance premiums. On the surface, it isn't a bad argument. Neither is tracking students or using AI agents to keep them on track to graduate. Of course, this comes at the cost of human autonomy, judgment, and free will (whatever that means now).
Do we want to cede control of our lives, our decision making, to autonomous systems? Do we even care?
Will GenAI Lead to AGI?
It's worth keeping in mind that the people building current generative AI and other AI systems are convinced these are the logical step toward Artificial General Intelligence (AGI) and believe it will usher in an untold utopia for humanity, eliminating wars, disease, poverty, and more. That is one side of the accelerationist coin. The other side is AGI destroying humanity, straight out of a sci-fi scenario.
The 1970 film Colossus: The Forbin Project shows both sides. In it, an American AI and a Soviet AI merge to form a world-dominating intelligence that takes control of humanity and offers it a choice: peace or death. To put it another way, give up your autonomy or your existence.
Education Should Empower Individuals, Not Subjugate Them To Opaque Algorithms
While predictive and strategic reasoning AI could revolutionize student support and interventions, we must carefully consider the ethical implications. Ceding autonomy and privacy to algorithms raises critical questions about individual rights, consent, and the role of human judgment. ChatGPT's generative text looks quaint in comparison.
We must prioritize transparency and accountability in AI systems, ensuring they are free from bias and aligned with societal values. We must encourage a culture of cautious skepticism around the integration of this technology into our lives. That won't be easy for most people, given that generative AI is marketed precisely as a way to offload thinking. Human oversight and the ability to appeal automated decisions should be non-negotiable.
What's so interesting about the scenario Marc describes is that it sounds like some sci-fi story, but we already have the technology to accomplish this, provided there is the political will to implement it (a big if). Courts have been using AI to predict recidivism for a while - well before the breakout of ChatGPT - with mixed results and major examples of bias.
(https://www.lawyersalliance.com.au/opinion/predicting-recidivism-the-questionable-role-of-algorithms-in-the-criminal-justice-system)
Whether or not those issues can be fixed is anyone's guess, but applying generative, predictive, and strategic AI to students is a fascinating example of the gap between how utopian tech entrepreneurs think about the future and how those of us in education, while recognizing the potential of such a system, could immediately pick apart the problems with tracking students in this fashion. But what jumped out at me most in this post is Marc's reminder of how much personal information we already give up to algorithms through our phones, online shopping, internet browsing, and every other way we live digitally. Most of us know it but never really think about it. All those terms-of-service contracts basically mean these companies can use our information in ways most of us would never imagine. And all of this underscores what is, to me, one of the major themes of Marc's work: the gap between how educators (and really, the average citizen) understand the implications of generative AI and what is just around the corner is a huge problem in almost every respect.
Very powerful piece. Well written—the pace and vivid scenarios are riveting. Man, I hope it doesn’t roll that way, but everything you say seems plausible to me. Wow.