OpenAI's New Multimodal Feature is Not Something to Fear
Faculty Shouldn't Panic With Each New Update
Many faculty moved to in-person, proctored exams using blue books in response to generative AI, seeing the technology as a threat to learning and academic honesty. ChatGPT Plus's recent multimodal features further complicate that decision, because a paper exam can now be fed into the AI through a simple photograph. Will students turn to this en masse? I doubt it. But we're sure to see the predictable panic set in when paper exams turn out not to be a secure moat against generative AI.
There’s a lot of hype around OpenAI’s newly released multimodal features. The company touted them as game-changing, but like many initial offerings, they appear rushed and imperfect. My guess is that OpenAI is worried about Google’s impending announcement of Gemini, its direct competitor to GPT-4. The technology performs well at analyzing some images, but it is nowhere near 100% accurate.
Faculty who turned to in-person exams did so to preserve how they teach, because they recognized generative AI’s implications as a threat; however, there’s little doubt in my mind that this approach is merely a stopgap as we grasp for a lodestar in the increasingly chaotic landscape of AI. Just this week, Microsoft released Copilot for Windows 11, putting generative AI directly into the operating system used by nearly 70% of computer users worldwide. Most faculty haven’t had time to consider the implications, let alone process the news of OpenAI’s new multimodal functionality.
I empathize with faculty who elected to move to in-person proctoring. Not all of us can keep up with technological changes that roll in like storms, and we haven’t had the time or training to deal with any of this. Every week or so I still have conversations with faculty who ask what ChatGPT is and how to use it. As shocking as that may sound, I have similar conversations with students.
A newly released feature isn’t a call to panic—it’s an invitation to explore
Remember, students aren’t eager to offload their writing or learning completely to an algorithm. Adopting a sense of play toward something new allows exploration without the pressure of norms stifling discovery. Teaching your students ethical use cases for this new multimodal feature is meaningful to them and helps chart a course toward academic integrity.
Examples:
Have students browse the Creative Commons image library, select images or research posters, and run tests using the new multimodal features (for one way to script such tests, see the sketch after this list). What did the AI get correct? What nuance did it find or miss? Try dense images that a human would struggle to analyze.
Have students take screenshots of your assignment sheets and ask GPT to rephrase the assignment for a different audience. Then have them analyze and rate the output. Was it helpful? Did it contain hallucinated material?
Teach students about copyright and ethics by discussing what they can legally upload into the AI. This is especially important for pictures of written text, but it applies to artistic works too. Ask students how they would feel if someone uploaded a picture of them to an AI without their consent. Is that a violation of boundaries? Would they object?
Another potential use case: create a confusingly worded assignment, have students take a picture of it, and ask GPT to clarify the instructions. You can also ask students to rewrite the output at different reading levels and discuss how that might help younger students or those with reading disabilities.
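For faculty comfortable with a bit of scripting, exercises like these can also be automated rather than run one image at a time in the chat interface. Below is a minimal sketch using the OpenAI Python SDK, assuming access to a vision-capable model; the model name, prompt, and file name are illustrative placeholders, not recommendations.

```python
# Minimal sketch: send a local image to a vision-capable OpenAI model
# and print its description. Assumes the `openai` Python SDK (v1+) is
# installed and OPENAI_API_KEY is set in the environment.
import base64
from openai import OpenAI

client = OpenAI()

def describe_image(path: str,
                   prompt: str = "Describe this image. Note any text you can read.") -> str:
    # Encode the local image as a base64 data URL; the API accepts
    # either a public URL or an embedded data URL for image inputs.
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{data}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical file name; swap in any image a student selected.
    print(describe_image("research_poster.jpg"))
```

Students could run the same image through several different prompts and compare the results: what does the model get right, what does it miss, and what does it invent?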
We Have Hard Work Ahead
Teaching is going to have to change to adapt to generative technologies. We cannot ignore or ban their use in our classes; those positions are increasingly untenable. Instead, teaching students to engage intentionally and ethically with generative technology appears to be the best path we can chart. Tim Laquintano, Carly Schnitzler, and Annette Vee edited the newly launched TextGenEd, a terrific open-access collection of over 30 educators exploring generative AI tools in structured assignments with their students. I was happy to contribute my assignment to the collection, which will be an ongoing project. Exploring critical approaches to engaging students with technology will be crucial for education. If we continue down the surveillance path, we risk adopting increasingly intrusive means of preserving what we value in education, at great cost, all while industry adopts generative technology, creating knowledge and experience gaps for new employees who need AI aptitude.
We’re going to have to start asking which aspects of our teaching and our students’ learning we need to preserve from automation, and which aspects of generative AI can be used to support both. Related to this conversation is what impact offloading certain tasks will have on existing skills, and what new skills we need to teach ourselves and our students. If we don’t approach generative AI adoption carefully, we risk being rolled by the technology and allowing big tech to chip away at education through the uncritical adoption of technology that automates it.
Mississippi AI Institute NL;MR
We’ve launched the Mississippi AI Institute blog, Not Long; Must Read, where we’ll be releasing short posts about how University of Mississippi faculty are using generative AI in their teaching. Posts from Emily Pitts-Donahoe and Stephen Monroe will soon have more company. Watch this space!