TL;DR note: I recently released a course about Generative AI in Education, and the first pathway is about teaching folks AI literacy. That understanding will be foundational in education as major tech companies continue to deploy and scale generative AI systems in public. I’ve released the assignments from the course under a free CC BY license and offer discounted group pricing, scholarships, and OER Fellowships for access to the course.
Anthropic’s Claude is Here
It’s starting to get crowded in the world of generative AI models. Anthropic released the public beta of its language model Claude, joining OpenAI, Google, Microsoft, Meta, and a slew of other competitors too long to list. It’s a chat interface, unsurprisingly, but its standout feature is the 100,000-token context window, which translates to around 75,000 words. Users can upload five documents at a time and have the AI analyze and synthesize information from a mix of formats. That’s impressive and terrifying.
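For what it’s worth, the 75,000-word figure follows from a common rule of thumb that one token is roughly three-quarters of an English word. The snippet below is just that back-of-the-envelope arithmetic, not Anthropic’s actual tokenizer:

```python
# Back-of-the-envelope conversion: 1 token is about 0.75 English words.
# This is a rough heuristic, not Anthropic's actual tokenizer.
context_tokens = 100_000
words_per_token = 0.75
approx_words = int(context_tokens * words_per_token)
print(f"{context_tokens:,} tokens is roughly {approx_words:,} words")
```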
The testing notes show that Claude is capable of performing tasks nearly on par with GPT-4, and unlike OpenAI’s model, Anthropic has released free access to Claude. You can read up on the company’s ethos, which touts safety above profits. I can’t quite see the logic of releasing yet another language model with the thin promise that it’s much safer than competitors. It’s like putting someone in a Be Kind shirt before tossing them into a mosh pit.
What 100,000 Tokens Mean for Education
Educators have been laser-focused on academic honesty and what generative AI means for students, but students aren't the only users we should worry about. With a 100,000-token, five-file limit (for now), I can upload a set of student papers, a rubric, and sample essays showing what makes a paper an A, B, or C, then instruct the model to provide helpful, structured feedback, all in a single shot! I can automate my grading workload, just as students can offload the process of learning when they generate text instead of writing it. I think I've used Mike Sharples' quote from his AIED essay a dozen times to illustrate what's at stake in education, but let me shout it one more time for those in the back:
"Students employ AI to write assignments. Teachers use AI to assess and review them (Lu & Cutumisu, 2021; Lagakis & Demetriadis, 2021). Nobody learns, nobody gains."
I know what you’re thinking because the same thought traveled through my skull: this is not something I’d willingly do. The problem is that Anthropic, just like OpenAI, is selling access to the tool through third-party plugins and APIs. It won’t be long before learning management systems upgrade to it or something like it, and this will simply be another feature in an application or site we use daily. The implications for education are profound.
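To make that concrete, here is a minimal sketch of what such an integration could look like, using Anthropic’s Python SDK. Treat every specific in it as an assumption for illustration: the file names, rubric, prompt framing, and model name are all hypothetical, not a documented grading workflow.

```python
# A minimal sketch of automated essay feedback via Anthropic's Python SDK.
# Assumptions for illustration: the file names, rubric, and model choice are
# hypothetical; the SDK reads ANTHROPIC_API_KEY from the environment.
from pathlib import Path

import anthropic

# Hypothetical local files: a rubric, graded sample essays, and a student essay.
rubric = Path("rubric.txt").read_text()
samples = "\n\n".join(
    Path(name).read_text()
    for name in ["sample_a.txt", "sample_b.txt", "sample_c.txt"]
)
essay = Path("student_essay.txt").read_text()

# With a 100,000-token context window, all of this fits in a single request.
prompt = (
    "You are assisting a teacher. Using the rubric and the graded sample "
    "essays below, give helpful, structured feedback on the student essay.\n\n"
    f"RUBRIC:\n{rubric}\n\n"
    f"SAMPLE ESSAYS (A, B, C):\n{samples}\n\n"
    f"STUDENT ESSAY:\n{essay}"
)

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-haiku-20240307",  # placeholder; any long-context model
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)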
Will we have enough sense to know when a generative AI system is running in the background or displayed as just another feature?
What boundaries do we need to put on our usage of generative AI in teaching and learning?
How would you feel if you were a student and a teacher used the technology on your coursework without your knowledge or consent?
What biases in the training data impact these black-box systems?
What level of accountability are we offloading to automated systems that cannot explain how they arrived at any decision?