As the semester winds down, I can tell you our students are not all right, and neither are the faculty.
More than a quarter of students are chronically absent, an absurdly high rate. As the NYT reports: "The pandemic changed families' lives and the culture of education: 'Our relationship with school became optional.'"
Students increasingly report that college has worsened their mental health.
The pressure to maintain a high GPA is causing students to lose sight of what college is all about.
And, of course, there are countless stories about students using AI to cheat.
Each crisis feeds off the others, creating a running narrative that education is broken. Sadly, that's nothing new. My great fear is AI marketing itself as the solution to "fix" all of these issues, not with simplistic tools like ChatGPT but with far more complex AI agents. The announcements of the last two weeks should set off alarm bells.
I feel like I'm standing on the shore looking at an approaching storm, trying to rouse people to notice the impending danger, only to have them stare as if I'm crazy. After all, those are just storm clouds in the distance—why worry? It's not even raining.
AI Grading Is Here
Texas has moved to replace human raters on its STAAR exam with AI. This isn't the first time automated tools have been used for grading, but the devil is in the details. Texas recently revamped the STAAR test to make the answers more open-ended, which in turn makes grading far more labor-intensive for humans. The state believes it will save $15-$20 million each year by replacing human raters with AI; the remaining human raters will be tasked with checking the AI's work.
The takeaway: another largely human process has been outsourced. How long until this shift moves beyond standardized testing and into our classrooms? I echo Mike Sharples' concern below: what do we risk when we offload teaching and assessment?
OpenAI Removes Sign-On Barriers to Access ChatGPT
If you thought it was easy for your students to access the free version of ChatGPT before, you're in for a nasty surprise. OpenAI has removed the requirement to create an account to use ChatGPT. Any user can now point a browser at the app and get instant access; the feature is rolling out now. What does it mean when there are no age restrictions on access to generative AI? The move also lets OpenAI scrape all user interactions with its chatbot, which will almost certainly be used to train future models.
Meta Unveils Llama 3
Were you excited to hear that Meta rolled out free access to its Llama 3 models by prompting you to use them once you logged into Facebook or Instagram? Me neither. But this is how they've pushed AI into their existing apps. As annoying as Meta's marketing is, the bigger story is the continued speculation that Zuckerberg will commit to open-sourcing Llama 3's largest model. Open-source models are crucial for researchers trying to understand how LLMs function, but they also pose risks very different from those of proprietary models like ChatGPT. A GPT-4-level AI open-sourced so that anyone can adapt it is exciting and terrifying all at once. For every solid educational use case, and there are many, I can imagine dozens of nefarious and unethical ones.
The Wearable Pendant That Records Your Every Conversation
About a year ago, Rewind AI announced plans for a pendant that would use AI to automatically record every conversation you had and transcribe it using a mix of generative language and voice models. What was the Rewind pendant is now launching as the Limitless Pendant, promising to keep all of your conversations safe and secure. I hope that by now people are treating hyped claims like these with far more caution. I cannot imagine anyone consenting to have me record our daily conversations. Students, I imagine, would use it to record lectures, at least at first. But I'm sure the most popular use case will be recording their friends' and partners' conversations. We're not going to have privacy anymore, even when we're offline.
Google Warns About the Risks Posed by AI Agents
I've previously written about the risks AI agents pose here and here. I think it's safe to say most people are unaware of the implications. Google published a 250+ page report detailing a myriad of ethical challenges. AI agents that can control tools, conduct long-term planning, and reason strategically are not something we've dealt with before. The Axios summary of the report aptly captures the implications:
They could "radically alter the nature of work, education and creative pursuits as well as how we communicate, coordinate and negotiate with one another, ultimately influencing who we want to be and to become," DeepMind researchers write in their paper.
Measuring the Persuasiveness of Language Models
Anthropic continues to put out startling research about the capabilities of LLMs. It turns out that their latest model, Claude 3, is about as persuasive as a human being in some tests. There are, of course, limitations and quite a few caveats in their report, but I hope this doesn't fly under folks' radar. I know many of my friends don't want to believe AI is capable of the very human skill of persuading real flesh-and-blood humans, but please pay attention to this and don't dismiss it. What the report doesn't address is AI's ability to scale that persuasion and, through AI agents, sustain a human-like level of it with users over days, weeks, or even longer.
We're Accelerating Toward a Reckoning
When you connect the dots from the last few days alone (Texas green-lighting AI grading, OpenAI removing barriers to ChatGPT access, Meta open-sourcing powerful language models, wearables offering constant transcription, Google warning about unaligned AI agents), it's pretty clear where we're headed. We're sprinting toward integrating these systems into the core fabric of our lives: education, work, privacy, and even human autonomy.
Left unchecked, AI agents that can persuade, brainstorm, and act with general intelligence could rewrite the fundamental principles of how we learn, create, and make decisions as individuals. And we’re ignoring this. I suppose ignoring reality is one of those annoying traits that make us who we are. In the end, we're just human. But we owe it to ourselves and future generations to hash out these questions now while we can still steer the trajectory.
If you go back and reread Marc's initial posts from more than a year ago, it's clear that his early phase of guarded optimism and amazement at what these genAI tools can do, and the possibilities they offered for education, has been, if not supplanted, at least significantly tempered by the pace of change and the range of models and abilities coming at us faster than even the most engaged AI watchers can handle. And I completely share his observation that the vast majority of those in education, whether K-12 or higher ed, are mostly oblivious, either deliberately so or simply from not having the bandwidth to deal with it.

I've been listening to Ezra Klein's series on AI, and his most recent guest, Anthropic CEO Dario Amodei, quite casually talked about a rate of scaling and model improvement over the coming months and short-term years (1-3, 2-5) that would demonstrate higher and higher capabilities at an exponential rate. I realize that predictions about genAI abilities are all over the map, but if he is even half right, and nothing seems to indicate things will slow down anytime soon, the power of the AI models and platforms that will continue to surge into our daily lives in the near term will be truly staggering. I think the fears of cheating are really going to be beside the point (though of course they will be the main way most educators continue to encounter AI), but I am curious, fascinated, and a little terrified to see what the landscape will look like 12-18 months from now.
We can justifiably expect the next generation of students, as AI users, to become managers of this system of ideas and their use, rather than their education still being about what things are and do without any means of deriving them computationally. If this is a bad thing, it can only be because of how much electronic brainwashing our minds have sadly been subjected to and absorbed.