A little over a year ago, I stood in front of a group of fewer than 20 people, showed them OpenAI’s Playground, and used GPT-3 to produce a few simple examples. That was my first presentation on generative AI. Since then I’ve given close to 70 talks and presentations. A few weeks ago I gained access to Google’s NotebookLM, an interface that lets you upload up to 10 documents of 50,000 words each and use Google’s AI to summarize, explore, and synthesize up to 500,000 words. In the space of a year, we’ve gone from context windows of a few thousand words to half a million. I struggle to grasp the use cases or implications, but here is where we are, and generative AI isn’t going anywhere. As we near the one-year anniversary of ChatGPT’s release, let’s take a moment to consider just how generative AI is already impacting education.
Instructional Design:
There is massive interest from most ed tech companies to develop and deploy interfaces that ease content creation for teachers. Below are just a few players that made recent announcements:
Reading:
Students can use AI to assist with reading in many instances. This has tremendous potential to help students with disabilities, non-native speakers, and those who struggle with focus. I’ve deployed some of these tools in my pilots, and students responded favorably. But these interfaces potentially threaten the close reading skills we attempt to cultivate in our students by rendering complex writing into simple text. Worse, they can strip away nuance and authorial intent by eliminating stylistic choices. We need a discourse around generative reading assistance! We’re about to see this tech arrive in K-12 in a big way with Microsoft’s Reading & Progress Coach.
Research:
Google and Microsoft both dove into generative AI with Search Generative Experience and Bing Chat, but even before the launch of ChatGPT, there were dozens of research tools, from the approachable and general, like Perplexity, to the niche scientific database engine being developed by Elicit, to the just plain awe-inducing interfaces Nomic uses to create explorable universes of data. Newer tools, like Anthropic’s Claude 2 with its 75,000-word context window and Google’s NotebookLM with its 500,000-word window, pose unknown opportunities and challenges for using generative AI with your own data.
Personalized Tutoring:
OpenAI’s new personalized interface allows users to build their own bot for their unique needs. For education, you can build a personalized tutor, or upload an open textbook you are using and deploy a smart assistant to help guide and nurture your students through the specific content in your course. Perhaps more alarmingly, you can build a version of yourself by uploading your lectures, blog posts, and even critical writing. John Warner did a recent experiment using his book columns, and while the public version isn’t quite there yet, OpenAI’s API will give developers access to 300-page context windows to fine-tune a bot on your data.
What’s Missing?
Notice no mention of any of the main LLMs on the list outside of the GPTs? Trust me, their omission is purposeful. I’ve piloted generative AI applications with students for over a year and a half now—starting months before OpenAI released ChatGPT. What students have shown me consistently is that chatbots make for poor interfaces: they generate chunks of text and offer writers little in the way of affordances or flexibility. We’re never going to see a mass migration of users to such limited interfaces, because writing is a technology with one of the most humanistic interfaces we have, and its use is tied to habits and years of cultural practice ingrained in education. This doesn’t mean our students won’t find uses for LLMs outside of chatbots. Most of the interfaces above let users engage generative AI beyond chat.
Developers Thinking Beyond Chatbots
Here are some amazing developers you should read who are exploring more human interfaces:
Maggie Appleton—Ought
Appleton is one of the head designers behind Elicit. Her ethos on gathering knowledge in public spaces is a pretty amazing take on digital gardening. Think of a living online portfolio of ideas that take shape in public. Her thoughts on chatbots form some of the most coherent critical frameworks for how developers can use LLMs with more humanistic frameworks.
Linus Lee—Notion
If there is a Rick Rubin of the UI/UX space, it just might be Linus Lee. He’s one of the most well-known and oft-cited designers whose experiments with LLMs and unique interfaces are fascinating and informed by some deeply human questions about how people interact with machines.
Amelia Wattenberger—Adept
A few developers working with generative AI approach writing not as a problem in need of a technological solution. Wattenberger is one of them. Her experiments with her PenPal app reveal a designer tinkering with what it means to use LLMs to support writers, not replace them. Precious few people have taken such a deep dive into how generative AI can move beyond text generation to provide users with supportive scaffolds and intelligent summaries.
Max Drake—Fermat
The development team at Fermat has created one of the coolest user interfaces on the market. Imagine an endless spatial canvas of smart cards, some generating text or images, others reading your writing and reacting to it. Drake developed some of the most fascinating reaction tools I’ve seen. His Implicit Reaction tool offers writers live counterarguments, automatic summaries, and a keyword generator. I’ve tested Fermat with students and they liked it, but there’s nothing else like it on the market, and students may be more accustomed to linear interfaces than spatial ones.
Nathan Baschez—Lex
What happens when the co-founder of Every, one of the larger long-form writing publications, decides to build an interface using generative AI? You get Lex, and trust me, Lex is very, very good. It’s built like a traditional word processor, and that familiar user interface makes the tool incredibly easy to use, even when engaging the underlying LLM to generate text, get feedback, or brainstorm. Of all the tools I’ve tested, Lex is the one I can most see writers adopting as part of their daily practice.
Charting an Uncertain Course
Many thoughtful educators are exploring how generative AI impacts students. Jane Rosenzweig, the director of Harvard’s Writing Center, has been asking her students to engage critically with several types of generative AI systems this semester:
Jane’s points are solidly grounded in how humans use writing. She joins others, like John Warner, Maha Bali, Lance Eaton, and Anna Mills, who are trying to guide educators in understanding how this new technology is impacting the way we teach and how students might learn, or become deskilled, because of it. My one concern is that we tend to thoughtlessly adopt new things at staggering rates, and education has only limited moments when we can reach students, ask them to pause, and consider what these new things mean. Neil Postman articulated this troubling trend some 30 years ago in his book Technopoly: The Surrender of Culture to Technology. His description of Technopoly sounds an awful lot like how quickly we’ve rushed into generative AI:
AI Literacy is our Lodestar
As educators, we stand at a precipice: generative AI has barreled into view, bearing tremendous potential to aid learning but also risks that should give all of us pause. How do we embrace its possibilities while upholding the very educational values that make us human, when the impulse is to uncritically adopt technology?
We need a lodestar to navigate these uncharted waters, and AI literacy may be it. Rather than surrendering to technological momentum, we must chart a course guided by humanistic principles. While generative AI offers a compelling new compass, the true north of education should remain empowering human potential. This demands a pedagogy-first approach, where technology serves learning rather than driving it. Achieving that calls for a change in our culture and our behaviors around technology—no simple feat. AI literacy requires cultivating the digital literacy and critical faculties students need to evaluate generative AI wisely, while safeguarding the intellectual values and equitable ideals that academia holds dear, even as these tools evolve.
Generative AI is no magic solution; we’ve all seen our share of digital snake oil sold to higher education. But if we stay true to the ethical frameworks that have long oriented education, we may be able to help shape how this new technology aids learners without displacing the vital human role of teaching.
Marc - Thanks for this. Have you considered creating or helping to facilitate an online community of K-12 educators who are deeply immersed in the potential for generative AI in schools? Your post also deals almost exclusively with LLMs. I've seen teachers experiment with Midjourney and OpenAI's DALL-E 3 to incorporate images into their classrooms and express interest in many of the other AI platforms you covered in your online course. Perhaps some kind of AI summit or other space to bring together folks who have been following your work might be of interest to people to get together and share what they've experienced. Many thanks for all your updates.
Yes to "thoughtless adoption" as a concern. I think we need to understand why it happens. I see three major causes: 1) administrative leaders need to show they are aligned with 'innovation', especially in institutions that are in more precarious financial states--it becomes part of a branding strategy; 2) mid-ranking administrators and managers need to show "proof of life"--they have to demonstrate continuously that they aren't just keeping the lights on but are producing new efficiencies, innovations, etc.; 3) ed tech is relentlessly patrolling around secondary education and higher education like a tiger prowls around a pig in a cage: we are one of the few pinatas that "disruptors" haven't fully broken into smithereens yet, there's a lot of candy inside, and after the substantive failure of fully online education in the pandemic (they got what they'd been loudly demanding, and it turns out almost nobody likes it and it wasn't effective), AI is their desperate hope for some new whacks at the pinata. Against that, pointing out that generative AI isn't any more of a magic fix than fully online education or MOOCs or anything else was is as unhearable as earlier cautions were.